How an empty S3 bucket can make your AWS bill explode

Imagine you create an empty, private AWS S3 bucket in a region of your preference. What will your AWS bill be the next morning?

A few weeks ago, I began working on the PoC of a document indexing system for my client. I created a single S3 bucket in the eu-west-1 region and uploaded some files there for testing. Two days later, I checked my AWS billing page, primarily to make sure that what I was doing was well within the free-tier limits. Apparently, it wasn’t. My bill was over $1,300, with the billing console showing nearly 100,000,000 S3 PUT requests executed within just one day!

[Screenshot: AWS billing console showing nearly 100,000,000 S3 PUT requests billed in a single day]

Where were these requests coming from?​

By default, AWS doesn’t log requests executed against your S3 buckets. However, such logs can be enabled using AWS CloudTrail data events or S3 server access logging. After enabling CloudTrail, I immediately observed thousands of write requests originating from multiple AWS accounts, as well as from sources entirely outside of AWS.

But why would some third parties bombard my S3 bucket with unauthorised requests?​

Was it some kind of DDoS-like attack against my account? Against AWS? As it turns out, one of the popular open-source tools had a default configuration to store their backups in S3. And, as a placeholder for a bucket name, they used… the same name that I used for my bucket. This meant that every deployment of this tool with default configuration values attempted to store its backups in my S3 bucket!

Note: I can’t disclose the name of the tool I’m referring to, as that would put the impacted companies at risk of data leak (as explained further).
So, a horde of misconfigured systems is attempting to store their data in my private S3 bucket. But why should I be the one paying for this mistake? Here’s why:

S3 charges you for unauthorized incoming requests​

This was confirmed in my exchange with AWS support. As they wrote:

Yes, S3 charges for unauthorized requests (4xx) as well[1]. That’s expected behavior.
So, if I were to open my terminal now and type:

aws s3 cp ./file.txt s3://your-bucket-name/random_key

I would receive an AccessDenied error, but you would be the one to pay for that request. And I don’t even need an AWS account to do so.

Another question was bugging me: why was over half of my bill coming from the us-east-1 region, where I didn’t have a single bucket? The answer is that S3 requests sent without a specified region go to the default region, us-east-1, which responds with a redirect to the bucket’s actual region. And the bucket’s owner pays extra for that redirected request.

The security aspect​

We now understand why my S3 bucket was bombarded with millions of requests and why I ended up with a huge S3 bill. At that point, I had one more idea I wanted to explore. If all those misconfigured systems were attempting to back up their data into my S3 bucket, why not just let them do so? I opened my bucket for public writes and collected over 10GB of data within less than 30 seconds. Of course, I can’t disclose whose data it was. But it left me amazed at how an innocent configuration oversight could lead to a dangerous data leak!

What did I learn from all this?​

Lesson 1: Anyone who knows the name of any of your S3 buckets can ramp up your AWS bill as they like.​

Other than deleting the bucket, there’s nothing you can do to prevent it. You can’t protect your bucket with services like CloudFront or WAF when it’s being accessed directly through the S3 API. Standard S3 PUT requests are priced at just $0.005 per 1,000 requests, but a single machine can easily execute thousands of such requests per second.
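To get a feel for how quickly this adds up, here is a rough back-of-envelope sketch using the $0.005-per-1,000-PUTs price quoted above (request prices vary by region and change over time, so the numbers are illustrative only):

```python
# Rough cost model for unauthorized S3 PUT requests billed to the
# bucket owner. The price is the standard PUT rate quoted above;
# actual prices vary by region, so this is illustrative only.
PUT_PRICE_PER_1000 = 0.005  # USD per 1,000 PUT requests

def put_request_cost(num_requests: int) -> float:
    """Dollars billed to the bucket owner for `num_requests` PUTs."""
    return num_requests / 1000 * PUT_PRICE_PER_1000

# One machine sending 1,000 requests/second, around the clock:
requests_per_day = 1_000 * 60 * 60 * 24   # 86,400,000 requests/day
print(f"${put_request_cost(requests_per_day):,.2f} per day")  # ≈ $432/day
```

At that rate, the nearly 100,000,000 PUTs mentioned above work out to roughly $500 in PUT charges alone, before cross-region redirects and other request classes are counted.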

Lesson 2: Adding a random suffix to your bucket names can enhance security.​

This practice reduces vulnerability to misconfigured systems or intentional attacks. At least avoid using short and common names for your S3 buckets.
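Generating such a suffix is easy to automate. A minimal sketch using Python’s standard-library `secrets` module — the prefix and suffix length here are just illustrative choices:

```python
import secrets

def unguessable_bucket_name(prefix: str, suffix_bytes: int = 8) -> str:
    """Build a bucket name with a random hex suffix, so it can neither
    be guessed by an attacker nor collide with a default value shipped
    in some tool's example configuration."""
    name = f"{prefix}-{secrets.token_hex(suffix_bytes)}"
    # S3 bucket names must be 3-63 characters and lowercase.
    if len(name) > 63 or name != name.lower():
        raise ValueError("bucket name must be lowercase and at most 63 chars")
    return name

print(unguessable_bucket_name("acme-doc-index"))
# e.g. acme-doc-index-1f9c2d4e8a7b3c5d
```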

Lesson 3: When executing a lot of requests to S3, make sure to explicitly specify the AWS region.​

This way you will avoid additional costs of S3 API redirects.
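To illustrate where the redirect comes from: a request sent to the global endpoint lands in us-east-1 first and gets bounced to the bucket’s real region. A small sketch of the two virtual-hosted-style endpoint forms (the bucket and region names are made up):

```python
from typing import Optional

def s3_endpoint(bucket: str, region: Optional[str] = None) -> str:
    """Virtual-hosted-style S3 endpoint for a bucket. Without an
    explicit region, clients hit the global endpoint, which resolves
    to us-east-1 and answers with a redirect to the bucket's actual
    region -- an extra request the bucket owner also pays for."""
    if region is None:
        return f"https://{bucket}.s3.amazonaws.com"       # global: one extra hop
    return f"https://{bucket}.s3.{region}.amazonaws.com"  # regional: no redirect

print(s3_endpoint("my-bucket", "eu-west-1"))
# https://my-bucket.s3.eu-west-1.amazonaws.com
```

With the AWS CLI the same pinning is done with `--region eu-west-1` (or the `AWS_DEFAULT_REGION` environment variable); with boto3, pass `region_name` when creating the client.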

Aftermath:​

  1. I reported my findings to the maintainers of the vulnerable open-source tool. They quickly fixed the default configuration, although they can’t fix the existing deployments.
  2. I notified the AWS security team. I suggested that they restrict the unfortunate S3 bucket name to protect their customers from unexpected charges, and to protect the impacted companies from data leaks. But they were unwilling to address misconfigurations of third-party products.
  3. I reported the issue to two companies whose data I found in my bucket. They did not respond to my emails, possibly dismissing them as spam.
  4. AWS was kind enough to cancel my S3 bill. However, they emphasized that this was done as an exception.
Thank you for taking the time to read my post. I hope it will help you steer clear of unexpected AWS charges!

Source
Archive
 
I have a feeling this is one of those industry publications that normies aren't supposed to see, kinda like Soldier of Fortune.
 
Imagine you create an empty, private AWS S3 bucket in a region of your preference. What will your AWS bill be the next morning?

Seriously though, it sounds like this guy lucked out they refunded him. I can't believe this data whatever service didn't know this was going on. They probably make a ton of money from this error.
 
I have a feeling this is one of those industry publications that normies aren't supposed to see, kinda like Soldier of Fortune.
Yeah, I was on the fence about even posting this here, since it's essentially a guide to DDoS a target and make them pay for the luxury of being attacked. I ultimately decided to share it because likely the only way Amazon will take this seriously is if they get news that this exploit is being shared on unsavory parts of the internet - like the notorious hate New Zealand agriculture website.
 
First thing, it's up to you to secure your shit and set up the permissions for who can do what. But that being said: if all it takes is a random outsider theoretically typing in every possible string to see what's out there (or writing a program to do it for them), Amazon hasn't secured their shit either.
 
That's what they get when they outsource literally everything.
If not for chinks in their code (pun intended, just look at how I fucking code), they have curry flavoured glitches like these.

Deserved.
 
This is absolute insanity on the part of AWS. While there are other things you can do to abuse AWS customers (like sucking down shit loads of bandwidth to rape them on egress charges), this is particularly insidious given how little investment it takes for the attacker and there is literally no defense once the bucket's existence has been revealed.

AWS nickel and diming is abhorrent and I have no idea why customers put up with this shit.
 
AWS nickel and diming is abhorrent and I have no idea why customers put up with this shit.
Because the "customer" is a CFO looking to shitcan long-term financial viability for lower expense numbers THIS QUARTER.

That initial decision results in hiring choices that result in a couple teams of "AWS Experts" rather than people who know how to maintain in-house infrastructure, and so they're trapped.
 
Seriously though, it sounds like this guy lucked out they refunded him. I can't believe this data whatever service didn't know this was going on. They probably make a ton of money from this error.
Interestingly Amazon seems very accommodating when it comes to billing issues.

One of my devs oopsie doodled while setting something up and I ended up getting tagged for several thousand. At first we did not realize what had happened and once the AWS rep and I figured it out it was definitely that dev's fault and I fully expected to just eat the charge but they refunded me within a few days.

The rep also pointed me at some resources to prevent issues like this from happening again.

I really would prefer not to use AWS stuff but I kind of have to given my time and general company resources.
 
Seriously though, it sounds like this guy lucked out they refunded him. I can't believe this data whatever service didn't know this was going on. They probably make a ton of money from this error.
Nah, cloud providers know that inexperienced devs and admins leave shit running or misconfigured all the time. They're pretty willing to write off a genuine mistake as long as you don't keep doing it. They'll give you hundreds of $ in credits on new accounts too. There's plenty of competition in the space, and being a dick over a couple thousand $ of mistakes has the potential to lose out on hundreds of thousands in future revenue.
 
Seriously though, it sounds like this guy lucked out they refunded him. I can't believe this data whatever service didn't know this was going on. They probably make a ton of money from this error.
This is a well-known dirty trick of the SaaS companies, of which Amazon stands on top of the list.

They gladly refund anyone who notices, because they know so many companies don't even look at their AWS bills after a certain point, and Amazon does NOT want to set up anything like billing alerts or "shut down after this point" ever, because they like money.
 
I'm not an IT person, but I swear every time I hear anything about cloud services, I get the feeling that cloud is the biggest scam in the world.
 
Some retard takes in here. But also kinda lulzy you decided to post this here instead of anyplace else, I wonder if the press will spin this into anything.

Amazon does NOT want to setup anything like billing alerts
It has billing alerts.

That's what they get when they outsource literally everything.
If not for chinks in their code (pun intended, just look at how I fucking code), they have curry flavoured glitches like these.

Deserved.
Not a bug.

First thing, it's up to you to secure your shit and setup the permissions for who can do what.
The issue here is that OP is billed for "permission denied" errors.
 
can somebody please explain this in English?
Imagine Google/Microsoft/Yahoo charged you for email. But it’s okay! Because the first 1,000,000 emails are free, and after that it’s $0.005 per thousand emails. Their target customers are big corporations who send out 1 million+ emails per day. You’re realistically never going to leave the free tier, and even if you do, the cost will be negligible.

Now imagine your worst enemy is able to find out your email address. He then begins sending you 100,000 phantom emails per second. You never receive the emails. You don’t know they’re coming in. The service doesn’t even bother to check that the emails are coming from a legitimate email address. He’s just pinging you with emails over and over, costing you $0.50 per second in request charges. You don’t know what is happening until the end of the month, when you receive a $30k+ bill from your email provider.

AWS S3 is a service that holds files. A significant chunk of the internet is powered by S3 buckets. I believe in recent MATI episodes Null mentioned that he uses an S3 bucket to hold attachments for the site. Let’s say Null named his bucket something predictable like “kiwifarms-bucket” or “kiwifarms-mati” (bucket names have to be lowercase), or the bucket name is publicly stored somewhere in the site code - trannies could use their collective autism to slam him with an absolutely massive AWS bill. Which AWS may or may not choose to waive.

Edit: the reason this matters is that it’s a silent, remarkably easy attack to pull off. Someone with next to no know-how could set up an effective attack in maybe 10 minutes, so long as they know the victim’s bucket name. Hackers with a large botnet could likely cause millions in damages.
 