S3 Buckets Part 2: The Risks of Misconfigured S3 Buckets, and What You Can Do About Them

In the first part of this series, we provided an overview of AWS's cloud storage service, S3. We discussed the three components of an S3 object (the content, the identifier, and the metadata), as well as how objects within a bucket are accessed and how AWS evaluates their public status, including the risks involved. If you missed part one, you can check it out here.

In the second installment of this two-part series, we will discuss the specific risks of misconfigured S3 buckets and dive into our own research, which has uncovered a cross-account attack path you may not be aware of.

Let’s get started.

Risks Involving Misconfigured Buckets

There are two main risks to consider with misconfigured S3 buckets: public read access and public write access. First, a misconfigured bucket that allows public read access can lead to a data breach. Second, a misconfigured bucket allowing public write access can be used to serve or control malware, damage a website hosted in S3, store any amount of data at your expense, and even encrypt your files for the purposes of demanding a ransom.
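To make the two categories concrete, here is a sketch of a bucket policy (hypothetical bucket name, not taken from any real incident) containing one statement of each kind – `PublicRead` exposes data to anyone, while `PublicWrite` lets anyone upload or overwrite objects:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    },
    {
      "Sid": "PublicWrite",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```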

Booz Allen Hamilton is a leading U.S. government contractor, known for a data breach that involved misconfigured buckets. Booz Allen Hamilton left sensitive data publicly accessible on AWS S3, exposing 60,000 files related to the Department of Defense. In addition, the data in the bucket was stored without encryption. The company claimed that the data itself was not classified, but it included credentials to sensitive government systems, credentials belonging to a senior engineer at Booz Allen Hamilton, and more.

Another example is Verizon, an American wireless network operator. This company suffered from two data breaches only a few months apart, exposing more than 6 million customer accounts. Both breaches were caused by S3 misconfigurations.


Researching into the Current Status of AWS Buckets

To understand the scope of the issue, we started by searching a few AWS environments for their public buckets and their “objects can be public” buckets. (See part one for a list of definitions.)

The results were quite interesting. On average, almost 4% of a company's buckets are public, and around 42% are evaluated as “objects can be public”.

This means that almost 50% of a company’s buckets could potentially be misconfigured!

To describe our research in more detail, here is what we tried to accomplish.

We wanted to get a complete picture of the possible outcomes of using the Block Public Access settings. We simulated every combination, tried to access the bucket and its objects, and documented each result.
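To sketch what that enumeration looks like (hypothetical helper name; the actual research ran each combination against live buckets and objects), the four Block Public Access flags yield sixteen combinations to test:

```python
from itertools import product

# The four S3 Block Public Access flags (settable per bucket or per account).
FLAGS = [
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
]

def all_combinations():
    """Yield every on/off combination of the four flags as a dict."""
    for values in product([True, False], repeat=len(FLAGS)):
        yield dict(zip(FLAGS, values))

# 2^4 = 16 configurations to apply, probe, and document.
print(len(list(all_combinations())))  # 16
```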

The most interesting cases were those where AWS evaluated the bucket status as “objects can be public”. As mentioned in part one, that is not surprising, because this is the most confusing status.

Because AWS's evaluation happens at the bucket level and does not take the objects' ACLs into consideration, companies are opened up to risk; understanding this gap is the key to knowing where they might be vulnerable.

Our New Python Tool

As you can see from the statistics, the “objects can be public” status is very common. This left us wanting to provide a solution that helps in dealing with the issue.

We have therefore created an open-source Python tool that you can use to get a better grasp of your buckets and objects. It takes the public evaluation a step further and tells you which buckets and objects are actually publicly accessible – no “can be public” about it. You can find the tool here: Free S3 buckets scanner

Our Unexpected Find! New Cross-Account Attack on S3 Buckets

While performing the research, we found ourselves analyzing a lot of bucket policy examples so that we could fully understand which ones allow public access.

It was then that we came across an interesting case, where the policy grants the AWS Config and CloudTrail services access to a bucket. These two services use S3 buckets to store their output, and when setting them up, the user must choose whether to create a new bucket, use an existing one from their own account, or use an existing one from another account.

CloudTrail Example

Config Example

Rather than specifying several different accounts as a list, many users ease the process by granting access to a general path instead. A general path can be:

  • arn:aws:s3:::{bucket-name}/*
  • arn:aws:s3:::{bucket-name}/AWSLogs/*
  • arn:aws:s3:::{bucket-name}/AWSLogs/*/Config/*
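For example, a CloudTrail-style bucket policy using the second pattern might look like the following (hypothetical bucket name); note the wildcard where a specific account ID would normally appear in the path:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSCloudTrailWrite",
      "Effect": "Allow",
      "Principal": { "Service": "cloudtrail.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-logging-bucket/AWSLogs/*",
      "Condition": {
        "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }
      }
    }
  ]
}
```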

This case is common when using AWS Organizations and Control Tower, which create a dedicated account for logging and auditing. When you have only a few accounts, a bucket policy with a separate resource entry per account is workable. However, when you have 100 or 1,000 accounts, wildcards are commonly used, and that can open you up to risk.

The tricky thing in this specific case is that since the principal is an AWS service, the source account is evaluated from the resource path, as opposed to normal bucket policies, where the principal itself identifies the account.

Each of the general resource path patterns above enables any AWS account to define your bucket as its Config bucket and, by doing so, use it to store data at your expense. It does not allow them to read objects already stored in the bucket, only to write new ones – making it the second category of risk we discussed above.

This configuration opens up your bucket for unauthorized writes from any AWS account.
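To illustrate how little effort this takes from the attacker's side (hypothetical bucket and file names), an attacker in any other account could point their own AWS Config delivery channel at your bucket with `aws configservice put-delivery-channel --delivery-channel file://channel.json`, where `channel.json` is simply:

```json
{
  "name": "default",
  "s3BucketName": "victim-logging-bucket"
}
```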

Note: In our research we tested AWS CloudTrail and Config, but this issue may be valid for other AWS services that use S3 buckets to store their data by default. In other cases, the permission may also be a read permission, which could lead to a serious data breach.

Tips for Secured S3 Buckets

  1. Try our tool! Our Python tool will show you all the buckets that are publicly accessible. We’d love to hear what you find!

  2. Keep alert. Continuously assess the bucket states in your account – you can use AWS Config rules or AWS “Access Analyzer for S3” for that. Note that “Access Analyzer for S3” does not cover the two main cases we discussed: the cross-account attack, and a bucket with “objects can be public” status that has publicly accessible objects.

  3. Think smart. To deny public access, it is preferable to use the Block Public Access settings rather than a deny policy.

  4. Avoid wildcards. Your buckets and objects are your responsibility. Make your policies as specific as you can.
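As a starting point for tips 2 and 4, here is a minimal sketch (standard library only, hypothetical helper names, and no claim to match our tool's implementation) of a check that flags service-principal statements whose resource path replaces the account ID with a wildcard:

```python
import json

def risky_resource(resource: str) -> bool:
    """True if the S3 resource path lets any account write into the bucket,
    i.e. the account-ID segment under AWSLogs/ is replaced by a wildcard."""
    parts = resource.split(":::", 1)[-1].split("/", 1)
    if len(parts) < 2:
        return False  # bucket-only ARN, no object path
    segments = parts[1].split("/")
    if segments == ["*"]:
        return True  # whole bucket open: arn:aws:s3:::bucket/*
    if "AWSLogs" in segments:
        i = segments.index("AWSLogs")
        nxt = segments[i + 1] if i + 1 < len(segments) else ""
        return not nxt.isdigit()  # safe only if pinned to an account ID
    return False

def flag_service_statements(policy_json: str):
    """Return the Sids of statements granting an AWS service principal
    access to a risky (wildcarded) resource path."""
    statements = json.loads(policy_json).get("Statement", [])
    if isinstance(statements, dict):
        statements = [statements]
    flagged = []
    for stmt in statements:
        principal = stmt.get("Principal", {})
        if isinstance(principal, dict) and "Service" in principal:
            resources = stmt.get("Resource", [])
            if isinstance(resources, str):
                resources = [resources]
            if any(risky_resource(r) for r in resources):
                flagged.append(stmt.get("Sid", "<no Sid>"))
    return flagged

bad_policy = """{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AWSCloudTrailWrite",
    "Effect": "Allow",
    "Principal": {"Service": "cloudtrail.amazonaws.com"},
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::example-bucket/AWSLogs/*"
  }]
}"""
print(flag_service_statements(bad_policy))  # ['AWSCloudTrailWrite']
```

A path pinned to a specific account, such as `arn:aws:s3:::example-bucket/AWSLogs/123456789012/*`, is not flagged; only the wildcard patterns discussed above are.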

For more advice on securing your cloud environment, get in touch to schedule a demo of the Lightspin platform.

-----------------------------------

About Lightspin

Lightspin’s contextual cloud security protects cloud and Kubernetes environments from build to runtime and simplifies cloud security for security and DevOps teams. Using patent-pending advanced graph-based technology, Lightspin empowers cloud and security teams to eliminate risks and maximize productivity by proactively and automatically detecting all security risks, smartly prioritizing the most critical issues, and easily fixing them.

For more information, visit: https://www.lightspin.io/