Amazon Simple Storage Service User Guide
API Version 2006-03-01
Amazon's trademarks and trade dress may not be used in connection with any product or service that is not
Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or
discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may
or may not be affiliated with, connected to, or sponsored by Amazon.
Table of Contents
What is Amazon S3?
    How do I...?
    Advantages of using Amazon S3
    Amazon S3 concepts
        Buckets
        Objects
        Keys
        Regions
        Amazon S3 data consistency model
    Amazon S3 features
        Storage classes
        Bucket policies
        AWS identity and access management
        Access control lists
        Versioning
        Operations
    Amazon S3 application programming interfaces (API)
        The REST interface
        The SOAP interface
    Paying for Amazon S3
    Related services
Getting started
    Setting up
        Sign up for AWS
        Create an IAM user
        Sign in as an IAM user
    Step 1: Create a bucket
    Step 2: Upload an object
    Step 3: Download an object
    Step 4: Copy an object
    Step 5: Delete the objects and bucket
        Emptying your bucket
        Deleting an object
        Deleting your bucket
    Where do I go from here?
        Common use scenarios
        Considerations
        Advanced features
        Changing the console language
        Access control
        Development resources
Working with buckets
    Buckets overview
        About permissions
        Managing public access to buckets
        Bucket configuration
    Naming rules
        Example bucket names
    Creating a bucket
    Viewing bucket properties
    Accessing a bucket
        Virtual-hosted–style access
        Path-style access
What is Amazon S3?
Welcome to the new Amazon S3 User Guide! The Amazon S3 User Guide combines information and
instructions from the three retired guides: Amazon S3 Developer Guide, Amazon S3 Console User Guide,
and Amazon S3 Getting Started Guide.
Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of
data, at any time, from anywhere on the web.
This guide describes how you send requests to create buckets, store and retrieve your objects, and
manage permissions on your resources. The guide also describes access control and the authentication
process. Access control defines who can access objects and buckets within Amazon S3, and the type of
access (for example, READ and WRITE). The authentication process verifies the identity of a user who is
trying to access Amazon Web Services (AWS).
Topics
• How do I...? (p. 1)
• Advantages of using Amazon S3 (p. 1)
• Amazon S3 concepts (p. 2)
• Amazon S3 features (p. 5)
• Amazon S3 application programming interfaces (API) (p. 7)
• Paying for Amazon S3 (p. 8)
• Related services (p. 8)
How do I...?
Information and relevant sections
• How do I work with access points? See Managing data access with Amazon S3 access points (p. 418).
• How do I manage access to my resources? See Identity and access management in Amazon S3 (p. 209).
Advantages of using Amazon S3
Following are some of the advantages of using Amazon S3:
• Creating buckets – Create and name a bucket that stores data. Buckets are the fundamental
containers in Amazon S3 for data storage.
• Storing data – Store an infinite amount of data in a bucket. Upload as many objects as you like into
an Amazon S3 bucket. Each object can contain up to 5 TB of data. Each object is stored and retrieved
using a unique developer-assigned key.
• Downloading data – Download your data or enable others to do so. Download your data anytime you
like, or allow others to do the same.
• Permissions – Grant or deny access to others who want to upload data into or download data from
your Amazon S3 bucket. Grant upload and download permissions to three types of users. Authentication
mechanisms can help keep data secure from unauthorized access.
• Standard interfaces – Use standards-based REST and SOAP interfaces designed to work with any
internet-development toolkit.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or
the AWS SDKs.
Amazon S3 concepts
This section describes key concepts and terminology you need to understand to use Amazon S3
effectively. They are presented in the order that you will most likely encounter them.
Topics
• Buckets (p. 2)
• Objects (p. 3)
• Keys (p. 3)
• Regions (p. 3)
• Amazon S3 data consistency model (p. 3)
Buckets
A bucket is a container for objects stored in Amazon S3. Every object is contained in a bucket. For
example, if the object named photos/puppy.jpg is stored in the awsexamplebucket1 bucket in the
US West (Oregon) Region, then it is addressable using the URL
https://awsexamplebucket1.s3.us-west-2.amazonaws.com/photos/puppy.jpg.
You can configure buckets so that they are created in a specific AWS Region. For more information, see
Accessing a Bucket (p. 33). You can also configure a bucket so that every time an object is added to it,
Amazon S3 generates a unique version ID and assigns it to the object. For more information, see Using
Versioning (p. 453).
For more information about buckets, see Buckets overview (p. 24).
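As an illustration of this addressing scheme, the following sketch uses the AWS SDK for Python (Boto3)
to upload a hypothetical photos/puppy.jpg object to a hypothetical awsexamplebucket1 bucket and then
prints the virtual-hosted-style URL described above. The bucket name, key, and Region are placeholders,
not real resources.

```python
import boto3

# Hypothetical bucket, key, and Region used for illustration only.
bucket = "awsexamplebucket1"
key = "photos/puppy.jpg"
region = "us-west-2"

s3 = boto3.client("s3", region_name=region)

# Upload a local file as the object photos/puppy.jpg.
s3.upload_file("puppy.jpg", bucket, key)

# The object is then addressable with a virtual-hosted-style URL.
print(f"https://{bucket}.s3.{region}.amazonaws.com/{key}")
```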
Objects
Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata.
The data portion is opaque to Amazon S3. The metadata is a set of name-value pairs that describe
the object. These include some default metadata, such as the date last modified, and standard HTTP
metadata, such as Content-Type. You can also specify custom metadata at the time the object is
stored.
An object is uniquely identified within a bucket by a key (name) and a version ID. For more information,
see Keys (p. 3) and Using Versioning (p. 453).
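To make the metadata concept concrete, here is a minimal Boto3 sketch that stores an object with a
standard Content-Type header and custom, user-defined metadata. The bucket and key names are
placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Store an object with standard HTTP metadata (Content-Type)
# and custom name-value metadata (returned later as x-amz-meta-* headers).
s3.put_object(
    Bucket="awsexamplebucket1",          # placeholder bucket
    Key="docs/report.html",              # placeholder key
    Body=b"<html><body>Hello</body></html>",
    ContentType="text/html",
    Metadata={"project": "alpha", "reviewed": "true"},
)

# Reading the object back returns both the data and the metadata.
response = s3.get_object(Bucket="awsexamplebucket1", Key="docs/report.html")
print(response["ContentType"], response["Metadata"])
```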
Keys
A key is the unique identifier for an object within a bucket. Every object in a bucket has exactly
one key. The combination of a bucket, key, and version ID uniquely identifies each object. So you
can think of Amazon S3 as a basic data map between "bucket + key + version" and the object
itself. Every object in Amazon S3 can be uniquely addressed through the combination of the web
service endpoint, bucket name, key, and optionally, a version. For example, in the URL
https://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl, "doc" is the name of the bucket and
"2006-03-01/AmazonS3.wsdl" is the key.
For more information about object keys, see Creating object key names (p. 58).
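The following sketch shows how the bucket + key + version ID combination identifies a specific object.
It assumes a hypothetical bucket that already has versioning enabled; all names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

bucket = "awsexamplebucket1"          # placeholder, assumed to have versioning enabled
key = "2006-03-01/AmazonS3.wsdl"      # placeholder key

# Each PUT to the same key creates a new version when versioning is enabled.
v1 = s3.put_object(Bucket=bucket, Key=key, Body=b"first revision")["VersionId"]
v2 = s3.put_object(Bucket=bucket, Key=key, Body=b"second revision")["VersionId"]

# Without a version ID, the key refers to the latest version.
latest = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

# With a version ID, the same key addresses an older, specific version.
original = s3.get_object(Bucket=bucket, Key=key, VersionId=v1)["Body"].read()
```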
Regions
You can choose the geographical AWS Region where Amazon S3 will store the buckets that you create.
You might choose a Region to optimize latency, minimize costs, or address regulatory requirements.
Objects stored in a Region never leave the Region unless you explicitly transfer them to another Region.
For example, objects stored in the Europe (Ireland) Region never leave it.
Note
You can only access Amazon S3 and its features in AWS Regions that are enabled for your
account.
For a list of Amazon S3 Regions and endpoints, see Regions and Endpoints in the AWS General Reference.
Amazon S3 data consistency model
Amazon S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in your
Amazon S3 bucket in all AWS Regions.
Updates to a single key are atomic. For example, if you PUT to an existing key from one thread and
perform a GET on the same key from a second thread concurrently, you will get either the old data or the
new data, but never partial or corrupt data.
Amazon S3 achieves high availability by replicating data across multiple servers within AWS data centers.
If a PUT request is successful, your data is safely stored. Any read (GET or LIST) that is initiated following
the receipt of a successful PUT response will return the data written by the PUT. Here are examples of
this behavior:
• A process writes a new object to Amazon S3 and immediately lists keys within its bucket. The new
object will appear in the list.
• A process replaces an existing object and immediately tries to read it. Amazon S3 will return the new
data.
• A process deletes an existing object and immediately tries to read it. Amazon S3 will not return any
data as the object has been deleted.
• A process deletes an existing object and immediately lists keys within its bucket. The object will not
appear in the listing.
Note
• Amazon S3 does not support object locking for concurrent writers. If two PUT requests are
simultaneously made to the same key, the request with the latest timestamp wins. If this is an
issue, you will need to build an object-locking mechanism into your application.
• Updates are key-based. There is no way to make atomic updates across keys. For example,
you cannot make the update of one key dependent on the update of another key unless you
design this functionality into your application.
Bucket configurations have an eventual consistency model. Specifically:
• If you delete a bucket and immediately list all buckets, the deleted bucket might still appear in the list.
• If you enable versioning on a bucket for the first time, it might take a short amount of time for the
change to be fully propagated. We recommend that you wait for 15 minutes after enabling versioning
before issuing write operations (PUT or DELETE) on objects in the bucket.
Concurrent applications
This section provides examples of behavior to be expected from Amazon S3 when multiple clients are
writing to the same items.
In the first example, both W1 (write 1) and W2 (write 2) complete before the start of R1 (read 1) and R2 (read
2). Because Amazon S3 is strongly consistent, R1 and R2 both return color = ruby.
In the next example, W2 does not complete before the start of R1. Therefore, R1 might return color =
ruby or color = garnet. However, since W1 and W2 finish before the start of R2, R2 returns color =
garnet.
In the last example, W2 begins before W1 has received an acknowledgement. Therefore, these writes are
considered concurrent. Amazon S3 internally uses last-writer-wins semantics to determine which write
takes precedence. However, the order in which Amazon S3 receives the requests and the order in which
applications receive acknowledgements cannot be predicted due to factors such as network latency.
For example, W2 might be initiated by an Amazon EC2 instance in the same region while W1 might be
initiated by a host that is further away. The best way to determine the final value is to perform a read
after both writes have been acknowledged.
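A short sketch of the read-after-write behavior described above, using Boto3 against a hypothetical
bucket. After the PUT returns successfully, both the GET and the listing reflect the write.

```python
import boto3

s3 = boto3.client("s3")
bucket = "awsexamplebucket1"   # placeholder bucket

# W1: write color = garnet to a key.
s3.put_object(Bucket=bucket, Key="color", Body=b"garnet")

# R1: a read issued after the successful PUT returns the new data.
assert s3.get_object(Bucket=bucket, Key="color")["Body"].read() == b"garnet"

# A listing issued after the successful PUT includes the new key.
keys = [o["Key"] for o in s3.list_objects_v2(Bucket=bucket).get("Contents", [])]
assert "color" in keys
```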
Amazon S3 features
This section describes important Amazon S3 features.
Topics
• Storage classes (p. 6)
• Bucket policies (p. 6)
• AWS identity and access management (p. 7)
• Access control lists (p. 7)
• Versioning (p. 7)
• Operations (p. 7)
Storage classes
Amazon S3 offers a range of storage classes designed for different use cases. These include Amazon S3
STANDARD for general-purpose storage of frequently accessed data, Amazon S3 STANDARD_IA for long-
lived, but less frequently accessed data, and S3 Glacier for long-term archive.
For more information, see Using Amazon S3 storage classes (p. 496).
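You choose a storage class per object when you write it. The sketch below, assuming a hypothetical
bucket and key, stores one object directly in STANDARD_IA by passing the StorageClass parameter.

```python
import boto3

s3 = boto3.client("s3")

# Store a long-lived, infrequently accessed object directly in STANDARD_IA.
s3.put_object(
    Bucket="awsexamplebucket1",        # placeholder bucket
    Key="archive/2020-report.pdf",     # placeholder key
    Body=b"...report contents...",
    StorageClass="STANDARD_IA",
)
```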
Bucket policies
Bucket policies provide centralized access control to buckets and objects based on a variety of conditions,
including Amazon S3 operations, requesters, resources, and aspects of the request (for example, IP
address). The policies are expressed in the access policy language and enable centralized management of
permissions. The permissions attached to a bucket apply to all of the bucket's objects that are owned by
the bucket owner account.
Both individuals and companies can use bucket policies. When companies register with Amazon S3,
they create an account. Thereafter, the company becomes synonymous with the account. Accounts are
financially responsible for the AWS resources that they (and their employees) create. Accounts have
the power to grant bucket policy permissions and assign employees permissions based on a variety of
conditions. For example, an account could create a policy that gives a user write access:
• To a particular S3 bucket
• From an account's corporate network
• During business hours
An account can grant one user limited read and write access, but allow another to create and delete
buckets also. An account could allow several field offices to store their daily reports in a single bucket. It
could allow each office to write only to a certain set of names (for example, "Nevada/*" or "Utah/*") and
only from the office's IP address range.
Unlike access control lists (described later), which can add (grant) permissions only on individual objects,
policies can either add or deny permissions across all (or a subset) of objects within a bucket. With one
request, an account can set the permissions of any number of objects in a bucket. An account can use
wildcards (similar to regular expression operators) on Amazon Resource Names (ARNs) and other values.
The account could then control access to groups of objects that begin with a common prefix or end with
a given extension, such as .html.
Only the bucket owner is allowed to associate a policy with a bucket. Policies (written in the access policy
language) allow or deny requests based on the following:
• Amazon S3 bucket operations (such as PUT ?acl), and object operations (such as PUT Object, or
GET Object)
• Requester
• Conditions specified in the policy
An account can control access based on specific Amazon S3 operations, such as GetObject,
GetObjectVersion, DeleteObject, or DeleteBucket.
The conditions can be such things as IP addresses, IP address ranges in CIDR notation, dates, user agents,
HTTP referrer, and transports (HTTP and HTTPS).
For more information, see Bucket policies and user policies (p. 226).
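As an illustration of the kind of policy described above, the following Boto3 sketch attaches a bucket
policy that lets a hypothetical IAM user write only to keys under the Nevada/ prefix and only from a
hypothetical corporate IP range. The account ID, user name, bucket name, and CIDR block are all
placeholders.

```python
import json
import boto3

bucket = "awsexamplebucket1"  # placeholder bucket

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "FieldOfficeUploads",
            "Effect": "Allow",
            # Placeholder account ID and user name.
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/nevada-office"},
            "Action": "s3:PutObject",
            # Writes are allowed only under the Nevada/ prefix...
            "Resource": f"arn:aws:s3:::{bucket}/Nevada/*",
            # ...and only from a placeholder corporate network range.
            "Condition": {"IpAddress": {"aws:SourceIp": "192.0.2.0/24"}},
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```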
AWS identity and access management
You can use AWS Identity and Access Management (IAM) to manage access to your Amazon S3 resources.
For example, you can use IAM with Amazon S3 to control the type of access a user or group of users has
to specific parts of an Amazon S3 bucket your AWS account owns.
Versioning
You can use versioning to keep multiple versions of an object in the same bucket. For more information,
see Using versioning in S3 buckets (p. 453).
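Enabling versioning is a single bucket-level call. A minimal sketch, assuming a hypothetical bucket
that you own:

```python
import boto3

s3 = boto3.client("s3")

# Turn on versioning so that every overwrite or delete keeps prior versions.
s3.put_bucket_versioning(
    Bucket="awsexamplebucket1",   # placeholder bucket
    VersioningConfiguration={"Status": "Enabled"},
)
```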
Operations
Following are the most common operations that you'll run through the API.
Common operations
• Create a bucket – Create and name your own bucket in which to store your objects.
• Write an object – Store data by creating or overwriting an object. When you write an object, you
specify a unique key in the namespace of your bucket. This is also a good time to specify any access
control you want on the object.
• Read an object – Read data back. You can download the data via HTTP or BitTorrent.
• Delete an object – Delete some of your data.
• List keys – List the keys contained in one of your buckets. You can filter the key list based on a prefix.
These operations and all other functionality are described in detail throughout this guide.
Amazon S3 application programming interfaces (API)
Amazon S3 provides a REST and a SOAP interface. They are similar, but there are some differences. For
example, in the REST interface, metadata is returned in HTTP headers. Because we only support HTTP
requests of up to 4 KB (not including the body), the amount of metadata you can supply is restricted.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.
You can use any toolkit that supports HTTP to use the REST API. You can even use a browser to fetch
objects, as long as they are anonymously readable.
The REST API uses the standard HTTP headers and status codes, so that standard browsers and toolkits
work as expected. In some areas, we have added functionality to HTTP (for example, we added headers
to support access control). In these cases, we have done our best to add the new functionality in a way
that matched the style of standard HTTP usage.
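Because the REST API is plain HTTP, the standard library of most languages is enough to fetch an
anonymously readable object. The sketch below uses Python's urllib against a hypothetical public
object; no SDK or request signing is involved.

```python
from urllib.request import urlopen

# Hypothetical, anonymously readable object.
url = "https://awsexamplebucket1.s3.us-west-2.amazonaws.com/public/readme.txt"

with urlopen(url) as response:
    # Standard HTTP status codes and headers, as described above.
    print(response.status, response.headers["Content-Type"])
    print(response.read().decode("utf-8"))
```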
The SOAP API provides a SOAP 1.1 interface using document literal encoding. The most common way to
use SOAP is to download the WSDL (see https://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl),
use a SOAP toolkit such as Apache Axis or Microsoft .NET to create bindings, and then write code that
uses the bindings to call Amazon S3.
Paying for Amazon S3
Amazon S3 charges you only for what you actually use, with no hidden fees and no overage charges.
This gives developers a variable-cost service that can grow with their business while enjoying the cost
advantages of the AWS infrastructure.
Before storing anything in Amazon S3, you must register with the service and provide a payment method
that is charged at the end of each month. There are no setup fees to begin using the service. At the end
of the month, your payment method is automatically charged for that month's usage.
For information about paying for Amazon S3 storage, see Amazon S3 Pricing.
Related services
After you load your data into Amazon S3, you can use it with other AWS services. The following are the
services you might use most frequently:
• Amazon Elastic Compute Cloud (Amazon EC2) – This service provides virtual compute resources in
the cloud. For more information, see the Amazon EC2 product details page.
• Amazon EMR – This service enables businesses, researchers, data analysts, and developers to easily
and cost-effectively process vast amounts of data. It uses a hosted Hadoop framework running on the
web-scale infrastructure of Amazon EC2 and Amazon S3. For more information, see the Amazon EMR
product details page.
• AWS Snowball – This service accelerates transferring large amounts of data into and out of AWS using
physical storage devices, bypassing the internet. Each AWS Snowball device type can transport data
at faster-than-internet speeds. This transport is done by shipping the data in the devices through a
regional carrier. For more information, see the AWS Snowball product details page.
Getting started
To store an object in Amazon S3, you create a bucket and then upload the object to the bucket. When
the object is in the bucket, you can open it, download it, and move it. When you no longer need an object
or a bucket, you can clean up your resources.
With Amazon S3, you pay only for what you use. For more information about Amazon S3 features and
pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started with Amazon S3 for
free. For more information, see AWS Free Tier.
Prerequisites
Before you begin, confirm that you've completed the steps in Prerequisite: Setting up Amazon
S3 (p. 10).
Topics
• Prerequisite: Setting up Amazon S3 (p. 10)
• Step 1: Create your first S3 bucket (p. 12)
• Step 2: Upload an object to your bucket (p. 13)
• Step 3: Download an object (p. 14)
• Step 4: Copy your object to a folder (p. 14)
• Step 5: Delete your objects and bucket (p. 15)
• Where do I go from here? (p. 16)
Prerequisite: Setting up Amazon S3
With Amazon S3, you pay only for what you use. For more information about Amazon S3 features and
pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started with Amazon S3 for
free. For more information, see AWS Free Tier.
When you sign up for AWS and set up Amazon S3, you can optionally change the display language in the
AWS Management Console. For more information, see Changing the language of the AWS Management
Console? (p. 18).
Topics
• Sign up for AWS (p. 11)
• Create an IAM user (p. 11)
• Sign in as an IAM user (p. 12)
Sign up for AWS
If you do not have an AWS account, complete the following steps to create one.
1. Open https://portal.aws.amazon.com/billing/signup.
2. Follow the online instructions.
Part of the sign-up procedure involves receiving a phone call and entering a verification code on the
phone keypad.
AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view
your current account activity and manage your account by going to https://aws.amazon.com/ and
choosing My Account.
Create an IAM user
If you signed up for AWS but have not created an IAM user for yourself, follow these steps.
To create an administrator user for yourself and add the user to an administrators group
(console)
1. Sign in to the IAM console as the account owner by choosing Root user and entering your AWS
account email address. On the next page, enter your password.
Note
We strongly recommend that you adhere to the best practice of using the Administrator
IAM user below and securely lock away the root user credentials. Sign in as the root user
only to perform a few account and service management tasks.
2. In the navigation pane, choose Users and then choose Add user.
3. For User name, enter Administrator.
4. Select the check box next to AWS Management Console access. Then select Custom password, and
then enter your new password in the text box.
5. (Optional) By default, AWS requires the new user to create a new password when first signing in. You
can clear the check box next to User must create a new password at next sign-in to allow the new
user to reset their password after they sign in.
6. Choose Next: Permissions.
7. Under Set permissions, choose Add user to group.
8. Choose Create group.
9. In the Create group dialog box, for Group name enter Administrators.
10. Choose Filter policies, and then select AWS managed - job function to filter the table contents.
11. In the policy list, select the check box for AdministratorAccess. Then choose Create group.
Note
You must activate IAM user and role access to Billing before you can use the
AdministratorAccess permissions to access the AWS Billing and Cost Management
console. To do this, follow the instructions in step 1 of the tutorial about delegating access
to the billing console.
12. Back in the list of groups, select the check box for your new group. Choose Refresh if necessary to
see the group in the list.
13. Choose Next: Tags.
14. (Optional) Add metadata to the user by attaching tags as key-value pairs. For more information
about using tags in IAM, see Tagging IAM entities in the IAM User Guide.
15. Choose Next: Review to see the list of group memberships to be added to the new user. When you
are ready to proceed, choose Create user.
You can use this same process to create more groups and users and to give your users access to your AWS
account resources. To learn about using policies that restrict user permissions to specific AWS resources,
see Access management and Example policies.
Sign in as an IAM user
Before you sign in as an IAM user, you can verify the sign-in link for IAM users in the IAM console. On the
IAM Dashboard, under IAM users sign-in link, you can see the sign-in link for your AWS account. The URL
for your sign-in link contains your AWS account ID without dashes (‐).
If you don't want the URL for your sign-in link to contain your AWS account ID, you can create an account
alias. For more information, see Creating, deleting, and listing an AWS account alias in the IAM User
Guide.
Your sign-in link includes your AWS account ID (without dashes) or your AWS account alias:
https://aws_account_id_or_alias.signin.aws.amazon.com/console
3. Enter the IAM user name and password that you just created.
When you're signed in, the navigation bar displays "your_user_name @ your_aws_account_id".
Step 1: Create your first S3 bucket
To create a bucket using the AWS Command Line Interface, see create-bucket in the AWS CLI Command
Reference.
To create a bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at
https://console.aws.amazon.com/s3/.
2. Choose Create bucket.
3. In Bucket name, enter a DNS-compliant name for your bucket.
After you create the bucket, you can't change its name. For information about naming buckets, see
Bucket naming rules (p. 27).
Important
Avoid including sensitive information, such as account numbers, in the bucket name. The
bucket name is visible in the URLs that point to the objects in the bucket.
4. In Region, choose the AWS Region where you want the bucket to reside.
Choose a Region that is close to you geographically to minimize latency and costs and to address
regulatory requirements. Objects stored in a Region never leave that Region unless you explicitly
transfer them to another Region. For a list of Amazon S3 AWS Regions, see AWS Service Endpoints
in the Amazon Web Services General Reference.
5. In Bucket settings for Block Public Access, keep the values set to the defaults.
By default, Amazon S3 blocks all public access to your buckets. We recommend that you keep
all Block Public Access settings enabled. For more information about blocking public access, see
Blocking public access to your Amazon S3 storage (p. 408).
6. Choose Create bucket.
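If you prefer to script this procedure instead of using the console, the following sketch, using the AWS
SDK for Python (Boto3), creates a bucket in a chosen Region and applies the same Block Public Access
defaults. The bucket name and Region are placeholders; bucket names must be globally unique.

```python
import boto3

bucket = "doc-example-bucket-1234"   # placeholder; must be globally unique
region = "us-west-2"                 # placeholder Region

s3 = boto3.client("s3", region_name=region)

# Create the bucket in the chosen Region.
# (For us-east-1, omit CreateBucketConfiguration.)
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": region},
)

# Keep all four Block Public Access settings enabled, matching the console defaults.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```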
Next step
To add an object to your bucket, see Step 2: Upload an object to your bucket (p. 13).
Next step
To copy and paste your object within Amazon S3, see Step 4: Copy your object to a folder (p. 14).
To navigate into a folder and choose a subfolder as your destination, choose the folder name.
c. Choose Choose destination.
The path to your destination folder appears in the Destination box. In Destination, you can
alternatively enter your destination path, for example, s3://bucket-name/folder-name/.
7. In the bottom right, choose Copy.
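The console copy above corresponds to a single CopyObject call. A minimal Boto3 sketch, with
placeholder bucket, object, and folder names:

```python
import boto3

s3 = boto3.client("s3")
bucket = "doc-example-bucket-1234"   # placeholder bucket

# Copy an existing object into a "folder", which in Amazon S3 is simply a key prefix.
s3.copy_object(
    Bucket=bucket,
    Key="favorite-pics/puppy.jpg",                       # destination key (folder + name)
    CopySource={"Bucket": bucket, "Key": "puppy.jpg"},   # source object
)
```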
Next step
To delete an object and a bucket in Amazon S3, see Step 5: Delete your objects and bucket (p. 15).
Step 5: Delete your objects and bucket
Before you delete your bucket, empty the bucket or delete the objects in the bucket. After you delete
your objects and bucket, they are no longer available.
If you want to continue to use the same bucket name, we recommend that you delete the objects or
empty the bucket, but don't delete the bucket. After you delete a bucket, the name becomes available
to reuse. However, another AWS account might create a bucket with the same name before you have a
chance to reuse it.
Topics
• Emptying your bucket (p. 15)
• Deleting an object (p. 16)
• Deleting your bucket (p. 16)
To empty a bucket
1. In the Buckets list, select the bucket that you want to empty, and then choose Empty.
2. To confirm that you want to empty the bucket and delete all the objects in it, in Empty bucket,
enter the name of the bucket.
Important
Emptying the bucket cannot be undone. Objects added to the bucket while the empty
bucket action is in progress will be deleted.
3. To empty the bucket and delete all the objects in it, choose Empty.
An Empty bucket: Status page opens that you can use to review a summary of failed and successful
object deletions.
4. To return to your bucket list, choose Exit.
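Emptying a bucket can also be scripted. The sketch below uses the Boto3 resource interface to delete
every object (and, for versioned buckets, every object version) in a placeholder bucket; like the console
action, this cannot be undone.

```python
import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("doc-example-bucket-1234")   # placeholder bucket

# Delete all current objects in the bucket.
bucket.objects.all().delete()

# If versioning was ever enabled, also delete all versions and delete markers.
bucket.object_versions.all().delete()

# With the bucket empty, it can now be deleted.
bucket.delete()
```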
Deleting an object
If you want to choose which objects you delete without emptying all the objects from your bucket, you
can delete an object.
1. In the Buckets list, choose the name of the bucket that you want to delete an object from.
2. Select the check box to the left of the names of the objects that you want to delete.
3. Choose Actions and choose Delete from the list of options that appears.
Where do I go from here?
The following topics explain various ways in which you can gain a deeper understanding of Amazon S3
so that you can implement it in your applications.
Topics
• Common use scenarios (p. 17)
• Considerations going forward (p. 17)
• Advanced Amazon S3 features (p. 18)
• Changing the language of the AWS Management Console? (p. 18)
• Access control best practices (p. 19)
• Development resources (p. 23)
• Backup and storage – Provide data backup and storage services for others.
• Application hosting – Provide services that deploy, install, and manage web applications.
• Media hosting – Build a redundant, scalable, and highly available infrastructure that hosts video,
photo, or music uploads and downloads.
• Software delivery – Host your software applications that customers can download.
Topics
• AWS account and security credentials (p. 17)
• Security (p. 17)
• AWS integration (p. 17)
• Pricing (p. 18)
If you're an account owner or administrator and want to know more about IAM, see the product
description at https://aws.amazon.com/iam or the technical documentation in the IAM User Guide.
Security
Amazon S3 provides authentication mechanisms to secure data stored in Amazon S3 against
unauthorized access. Unless you specify otherwise, only the AWS account owner can access data
uploaded to Amazon S3. For more information about how to manage access to buckets and objects, go
to Identity and access management in Amazon S3 (p. 209).
You can also encrypt your data before uploading it to Amazon S3.
AWS integration
You can use Amazon S3 alone or in concert with one or more other Amazon products. The following are
the most common products used with Amazon S3:
• Amazon EC2
• Amazon EMR
• Amazon SQS
• Amazon CloudFront
Pricing
Learn the pricing structure for storing and transferring data on Amazon S3. For more information, see
Amazon S3 pricing.
Advanced Amazon S3 features
The following links point to more advanced Amazon S3 features and functionality:
• Using Requester Pays buckets for storage transfers and usage (p. 51) – Learn how to configure a
bucket so that a customer pays for the downloads they make.
• Publishing content using Amazon S3 and BitTorrent (p. 155) – Use BitTorrent, which is an open,
peer-to-peer protocol for distributing files.
• Using versioning in S3 buckets (p. 453) – Learn about Amazon S3 versioning capabilities.
• Hosting a static website using Amazon S3 (p. 857) – Learn how to host a static website on Amazon S3.
• Managing your storage lifecycle (p. 501) – Learn how to manage the lifecycle of objects in your bucket.
Lifecycle management includes expiring objects and archiving objects (transitioning objects to the
S3 Glacier storage class).
Changing the language of the AWS Management Console
1. Sign in to the AWS Management Console and open the Amazon S3 console at
https://console.aws.amazon.com/s3/.
2. On the left side of the bottom navigation bar, choose the language menu.
3. From the language menu, choose the language that you want.
This will change the language for the entire AWS Management Console.
Access control best practices
Topics
• Creating a new bucket (p. 19)
• Storing and sharing data (p. 20)
• Sharing resources (p. 21)
• Protecting data (p. 21)
S3 Block Public Access provides four settings to help you avoid inadvertently exposing your S3 resources.
You can apply these settings in any combination to individual access points, buckets, or entire AWS
accounts. If you apply a setting to an account, it applies to all buckets and access points that are owned
by that account. By default, the Block all public access setting is applied to new buckets created in the
Amazon S3 console.
If the S3 Block Public Access settings are too restrictive, you can use AWS Identity and Access
Management (IAM) identities to grant access to specific users rather than disabling all Block Public Access
settings. Using Block Public Access with IAM identities helps ensure that any operation that is blocked by
a Block Public Access setting is rejected unless the requesting user has been given specific permission.
For more information, see Block public access settings (p. 409).
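Turning on all four settings for an entire account is a single call to the S3 Control API. A sketch using
Boto3, with a placeholder account ID:

```python
import boto3

s3control = boto3.client("s3control")

# Apply all four Block Public Access settings at the account level.
# They then apply to every bucket and access point the account owns.
s3control.put_public_access_block(
    AccountId="111122223333",   # placeholder account ID
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```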
When setting up accounts for new team members who require S3 access, use IAM users and roles to
ensure least privileges. You can also implement a form of IAM multi-factor authentication (MFA) to
support a strong identity foundation. Using IAM identities, you can grant unique permissions to users
and specify what resources they can access and what actions they can take. IAM identities provide
increased capabilities, including the ability to require users to enter login credentials before accessing
shared resources and apply permission hierarchies to different objects within a single bucket.
For more information, see Example 1: Bucket owner granting its users bucket permissions (p. 360).
Bucket policies
With bucket policies, you can personalize bucket access to help ensure that only those users you have
approved can access resources and perform actions within them. In addition to bucket policies, you
should use bucket-level Block Public Access settings to further limit public access to your data.
For more information, see Bucket policies and user policies (p. 226).
When creating policies, avoid the use of wildcards in the Principal element because it effectively
allows anyone to access your Amazon S3 resources. It's better to explicitly list users or groups that are
allowed to access the bucket. Rather than including a wildcard for their actions, grant them specific
permissions when applicable.
To further maintain the practice of least privileges, Deny statements in the Effect element should be
as broad as possible and Allow statements should be as narrow as possible. Deny effects paired with the
"s3:*" action are another good way to implement opt-in best practices for the users included in policy
condition statements.
For more information about specifying conditions for when a policy is in effect, see Amazon S3 condition
keys (p. 232).
When adding users in a corporate setting, you can use a virtual private cloud (VPC) endpoint to allow any
users in your virtual network to access your Amazon S3 resources. VPC endpoints enable developers to
provide specific access and permissions to groups of users based on the network the user is connected to.
Rather than adding each user to an IAM role or group, you can use VPC endpoints to deny bucket access
if the request doesn’t originate from the specified endpoint.
For more information, see Controlling access from VPC endpoints with bucket policies (p. 321).
If you use the Amazon S3 console to manage buckets and objects, you should implement S3 Versioning
and S3 Object Lock. These features help prevent accidental changes to critical data and enable you to
roll back unintended actions. This capability is particularly useful when there are multiple users with full
write and execute permissions accessing the Amazon S3 console.
For information about S3 Versioning, see Using versioning in S3 buckets (p. 453). For information about
Object Lock, see Using S3 Object Lock (p. 488).
To manage your objects so that they are stored cost effectively throughout their lifecycle, you can pair
lifecycle policies with object versioning. Lifecycle policies define actions that you want S3 to take during
an object's lifetime. For example, you can create a lifecycle policy that will transition objects to another
storage class, archive them, or delete them after a specified period of time. You can define a lifecycle
policy for all objects or a subset of objects in the bucket by using a shared prefix or tag.
For more information, see Managing your storage lifecycle (p. 501).
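For example, a lifecycle configuration along these lines transitions objects under a placeholder prefix
to the S3 Glacier storage class after 90 days and expires them after 10 years. A Boto3 sketch:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="doc-example-bucket-1234",   # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-reports",
                "Status": "Enabled",
                # Apply the rule only to objects under a placeholder prefix.
                "Filter": {"Prefix": "reports/"},
                # Move objects to the S3 Glacier storage class after 90 days...
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                # ...and delete them 10 years after creation.
                "Expiration": {"Days": 3650},
            }
        ]
    },
)
```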
When creating buckets that are accessed by different office locations, you should consider implementing
S3 Cross-Region Replication. Cross-Region Replication helps ensure that all users have access to the
resources they need and increases operational efficiency. Cross-Region Replication offers increased
availability by copying objects across S3 buckets in different AWS Regions. However, the use of this tool
increases storage costs.
When configuring a bucket to be used as a publicly accessed static website, you need to disable all Block
Public Access settings. It is important to only provide s3:GetObject actions and not ListObject or
PutObject permissions when writing the bucket policy for your static website. This helps ensure that
users cannot view all the objects in your bucket or add their own content.
For more information, see Setting permissions for website access (p. 867).
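A bucket policy for a public static website therefore typically grants anonymous s3:GetObject on objects
only, and nothing else. A Boto3 sketch with a placeholder bucket name:

```python
import json
import boto3

bucket = "doc-example-website-bucket"   # placeholder bucket

# Grant read-only access to individual objects; do not grant list or write actions.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```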
Amazon CloudFront provides the capabilities required to set up a secure static website. Amazon S3
static websites only support HTTP endpoints. CloudFront uses the durable storage of Amazon S3 while
providing additional security headers like HTTPS. HTTPS adds security by encrypting a normal HTTP
request and protecting against common cyber attacks.
For more information, see Getting started with a secure static website in the Amazon CloudFront
Developer Guide.
Sharing resources
There are several different ways that you can share resources with a specific group of users. You can
use the following tools to share a set of documents or other resources to a single group of users,
department, or an office. Although they can all be used to accomplish the same goal, some tools might
pair better than others with your existing settings.
User policies
You can share resources with a limited group of people using IAM groups and user policies. When
creating a new IAM user, you are prompted to create and add them to a group. However, you can create
and add users to groups at any point. If the individuals you intend to share these resources with are
already set up within IAM, you can add them to a common group and share the bucket with their group
within the user policy. You can also use IAM user policies to share individual objects within a bucket.
For more information, see Allowing an IAM user access to one of your buckets (p. 349).
As a general rule, we recommend that you use S3 bucket policies or IAM policies for access control.
Amazon S3 access control lists (ACLs) are a legacy access control mechanism that predates IAM. If
you already use S3 ACLs and you find them sufficient, there is no need to change. However, certain
access control scenarios require the use of ACLs. For example, when a bucket owner wants to grant
permission to objects, but not all objects are owned by the bucket owner, the object owner must first
grant permission to the bucket owner. This is done using an object ACL.
For more information, see Example 3: Bucket owner granting its users permissions to objects it does not
own (p. 369).
Prefixes
When trying to share specific resources from a bucket, you can replicate folder-level permissions using
prefixes. The Amazon S3 console supports the folder concept as a means of grouping objects by using a
shared name prefix for objects. You can then specify a prefix within the conditions of an IAM user's policy
to grant them explicit permission to access the resources associated with that prefix.
For more information, see Organizing objects in the Amazon S3 console using folders (p. 141).
Tagging
If you use object tagging to categorize storage, you can share objects that have been tagged with a
specific value with specified users. Resource tagging allows you to control access to objects based on the
tags associated with the resource that a user is trying to access. To do this, use the ResourceTag/key-
name condition within an IAM user policy to allow access to the tagged resources.
For more information, see Controlling access to AWS resources using resource tags in the IAM User Guide.
Protecting data
Use the following tools to help protect data in transit and at rest, both of which are crucial in
maintaining the integrity and accessibility of your data.
Object encryption
Amazon S3 offers several object encryption options that protect data in transit and at rest. Server-side
encryption encrypts your object before saving it on disks in its data centers and then decrypts it when
you download the objects. As long as you authenticate your request and you have access permissions,
there is no difference in the way you access encrypted or unencrypted objects. When setting up server-
side encryption, you have three mutually exclusive options: server-side encryption with Amazon S3
managed keys (SSE-S3), server-side encryption with AWS KMS keys (SSE-KMS), and server-side
encryption with customer-provided keys (SSE-C).
For more information, see Protecting data using server-side encryption (p. 157).
Client-side encryption is the act of encrypting data before sending it to Amazon S3. For more
information, see Protecting data using client-side encryption (p. 198).
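As a sketch of the server-side options, the following Boto3 calls set SSE-S3 as a bucket's default
encryption and, separately, request SSE-KMS for a single upload. The bucket, key, and KMS key ID are
placeholders.

```python
import boto3

s3 = boto3.client("s3")
bucket = "doc-example-bucket-1234"   # placeholder bucket

# Default encryption: encrypt every new object with Amazon S3 managed keys (SSE-S3).
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Per-object override: encrypt one upload with an AWS KMS key (SSE-KMS) instead.
s3.put_object(
    Bucket=bucket,
    Key="confidential/plan.txt",                         # placeholder key
    Body=b"...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="1234abcd-12ab-34cd-56ef-1234567890ab",  # placeholder KMS key ID
)
```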
Signing methods
Signature Version 4 is the process of adding authentication information to AWS requests sent by HTTP.
For security, most requests to AWS must be signed with an access key, which consists of an access key ID
and secret access key. These two keys are commonly referred to as your security credentials.
For more information, see Authenticating Requests (AWS Signature Version 4) and Signature Version 4
signing process.
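The AWS SDKs sign requests with Signature Version 4 for you. The sketch below pins the signature
version explicitly and generates a presigned URL, which carries the SigV4 authentication information in
its query string; the bucket and key are placeholders.

```python
import boto3
from botocore.config import Config

# Explicitly use Signature Version 4 when creating the client.
s3 = boto3.client("s3", config=Config(signature_version="s3v4"))

# A presigned URL embeds the SigV4 signing information in its query string,
# so the holder can GET the object for one hour without AWS credentials.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "doc-example-bucket-1234", "Key": "reports/q1.pdf"},  # placeholders
    ExpiresIn=3600,
)
print(url)
```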
Monitoring is an important part of maintaining the reliability, availability, and performance of your
Amazon S3 solutions so that you can more easily debug a multi-point failure if one occurs. Logging can
provide insight into any errors users are receiving, and when and what requests are made. AWS provides
several tools for monitoring your Amazon S3 resources:
• Amazon CloudWatch
• AWS CloudTrail
• Amazon S3 Access Logs
• AWS Trusted Advisor
For more information, see Logging and monitoring in Amazon S3 (p. 442).
Amazon S3 is integrated with AWS CloudTrail, a service that provides a record of actions taken by a
user, a role, or an AWS service in Amazon S3. This feature can be paired with Amazon GuardDuty, which
monitors threats against your Amazon S3 resources by analyzing CloudTrail management events and
CloudTrail S3 data events. These data sources monitor different kinds of activity. For example, S3-related
CloudTrail management events include operations that list or configure S3 buckets. GuardDuty analyzes
S3 data events from all of your S3 buckets and monitors them for malicious and suspicious activity.
For more information, see Amazon S3 protection in Amazon GuardDuty in the Amazon GuardDuty User
Guide.
Development resources
To help you build applications using the language of your choice, we provide the following resources:
• Sample Code and Libraries – The AWS Developer Center has sample code and libraries written
especially for Amazon S3.
You can use these code samples as a means of understanding how to implement the Amazon S3 API.
For more information, see the AWS Developer Center.
• Tutorials – Our Resource Center offers more Amazon S3 tutorials.
These tutorials provide a hands-on approach for learning Amazon S3 functionality. For more
information, see Articles & Tutorials.
• Customer Forum – We recommend that you review the Amazon S3 forum to get an idea of what other
users are doing and to benefit from the questions they ask.
The forum can help you understand what you can and can't do with Amazon S3. The forum also serves
as a place for you to ask questions that other users or AWS representatives might answer. You can use
the forum to report issues with the service or the API. For more information, see Discussion Forums.
Working with buckets
To store an object in Amazon S3, you create a bucket and then upload the object to the bucket. When the
object is in the bucket, you can open it, download it, and move it. When you no longer need an object or
a bucket, you can clean up your resources.
Note
With Amazon S3, you pay only for what you use. For more information about Amazon S3
features and pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started
with Amazon S3 for free. For more information, see AWS Free Tier.
The topics in this section provide an overview of working with buckets in Amazon S3. They include
information about naming, creating, accessing, and deleting buckets.
Topics
• Buckets overview (p. 24)
• Bucket naming rules (p. 27)
• Creating a bucket (p. 28)
• Viewing the properties for an S3 bucket (p. 33)
• Accessing a bucket (p. 33)
• Emptying a bucket (p. 35)
• Deleting a bucket (p. 37)
• Setting default server-side encryption behavior for Amazon S3 buckets (p. 39)
• Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration (p. 43)
• Using Requester Pays buckets for storage transfers and usage (p. 51)
• Bucket restrictions and limitations (p. 54)
Buckets overview
To upload your data (photos, videos, documents, etc.) to Amazon S3, you must first create an S3 bucket
in one of the AWS Regions. You can then upload any number of objects to the bucket.
In terms of implementation, buckets and objects are AWS resources, and Amazon S3 provides APIs for
you to manage them. For example, you can create a bucket and upload objects using the Amazon S3 API.
You can also use the Amazon S3 console to perform these operations. The console uses the Amazon S3
APIs to send requests to Amazon S3.
This section describes how to work with buckets. For information about working with objects, see
Amazon S3 objects overview (p. 56).
An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts.
This means that after a bucket is created, the name of that bucket cannot be used by another AWS
account in any AWS Region until the bucket is deleted. You should not depend on specific bucket naming
conventions for availability or security verification purposes. For bucket naming guidelines, see Bucket
naming rules (p. 27).
Amazon S3 creates buckets in a Region that you specify. To optimize latency, minimize costs, or address
regulatory requirements, choose any AWS Region that is geographically close to you. For example, if
you reside in Europe, you might find it advantageous to create buckets in the Europe (Ireland) or Europe
(Frankfurt) Regions. For a list of Amazon S3 Regions, see Regions and Endpoints in the AWS General
Reference.
Note
Objects that belong to a bucket that you create in a specific AWS Region never leave that
Region, unless you explicitly transfer them to another Region. For example, objects that are
stored in the Europe (Ireland) Region never leave it.
Topics
• About permissions (p. 25)
• Managing public access to buckets (p. 25)
• Bucket configuration options (p. 26)
About permissions
You can use your AWS account root user credentials to create a bucket and perform any other Amazon
S3 operation. However, we recommend that you do not use the root user credentials of your AWS
account to make requests, such as to create a bucket. Instead, create an AWS Identity and Access
Management (IAM) user, and grant that user full access (users by default have no permissions).
These users are referred to as administrators. You can use the administrator user credentials, instead
of the root user credentials of your account, to interact with AWS and perform tasks, such as create a
bucket, create users, and grant them permissions.
For more information, see AWS account root user credentials and IAM user credentials in the AWS
General Reference and Security best practices in IAM in the IAM User Guide.
The AWS account that creates a resource owns that resource. For example, if you create an IAM user
in your AWS account and grant the user permission to create a bucket, the user can create a bucket.
But the user does not own the bucket; the AWS account that the user belongs to owns the bucket. The
user needs additional permission from the resource owner to perform any other bucket operations. For
more information about managing permissions for your Amazon S3 resources, see Identity and access
management in Amazon S3 (p. 209).
Managing public access to buckets
To help ensure that all of your Amazon S3 buckets and objects have their public access blocked, we
recommend that you turn on all four settings for Block Public Access for your account. These settings
block all public access for all current and future buckets.
Before applying these settings, verify that your applications will work correctly without public access. If
you require some level of public access to your buckets or objects—for example, to host a static website
as described at Hosting a static website using Amazon S3 (p. 857)—you can customize the individual
settings to suit your storage use cases. For more information, see Blocking public access to your Amazon
S3 storage (p. 408).
Bucket configuration options
Amazon S3 supports subresources for you to store and manage the bucket configuration information.
These are referred to as subresources because they exist in the context of a specific bucket or object. The
following subresources enable you to manage bucket-specific configurations.
• cors (cross-origin resource sharing) – You can configure your bucket to allow cross-origin requests.
For more information, see Using cross-origin resource sharing (CORS) (p. 397).
• event notification – You can enable your bucket to send you notifications of specified bucket events.
• lifecycle – You can define lifecycle rules for objects in your bucket that have a well-defined
lifecycle. For example, you can define a rule to archive objects one year after creation, or delete an
object 10 years after creation. For more information, see Managing your storage lifecycle (p. 501).
• location – When you create a bucket, you specify the AWS Region where you want Amazon S3 to create
the bucket. Amazon S3 stores this information in the location subresource and provides an API for
you to retrieve this information.
• logging – Logging enables you to track requests for access to your bucket. Each access log record
provides details about a single access request, such as the requester, bucket name, request time,
request action, response status, and error code, if any. Access log information can be useful in
security and access audits. It can also help you learn about your customer base and understand your
Amazon S3 bill.
• object locking – To use S3 Object Lock, you must enable it for a bucket. You can also optionally
configure a default retention mode and period that applies to new objects that are placed in the
bucket.
• policy and ACL (access control list) – All your resources (such as buckets and objects) are private by
default. Amazon S3 supports both bucket policy and access control list (ACL) options for you to grant
and manage bucket-level permissions. Amazon S3 stores the permission information in the policy and
acl subresources.
• requestPayment – By default, the AWS account that creates the bucket (the bucket owner) pays for
downloads from the bucket. Using this subresource, the bucket owner can specify that the person
requesting the download will be charged for the download. Amazon S3 provides an API for you to
manage this subresource. For more information, see Using Requester Pays buckets for storage
transfers and usage (p. 51).
• tagging – You can add cost allocation tags to your bucket to categorize and track your AWS costs.
Amazon S3 provides the tagging subresource to store and manage tags on a bucket. Using tags you
apply to your bucket, AWS generates a cost allocation report with usage and costs aggregated by your
tags. For more information, see Billing and usage reporting for S3 buckets (p. 619).
• transfer acceleration – Transfer Acceleration enables fast, easy, and secure transfers of files over
long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of the
globally distributed edge locations of Amazon CloudFront. For more information, see Configuring fast,
secure file transfers using Amazon S3 Transfer Acceleration (p. 43).
• website – You can configure your bucket for static website hosting. Amazon S3 stores this
configuration by creating a website subresource. For more information, see Hosting a static website
using Amazon S3 (p. 857).
Bucket naming rules
For best compatibility, we recommend that you avoid using dots (.) in bucket names, except for buckets
that are used only for static website hosting. If you include dots in a bucket's name, you can't use virtual-
host-style addressing over HTTPS, unless you perform your own certificate validation. This is because the
security certificates used for virtual hosting of buckets don't work for buckets with dots in their names.
This limitation doesn't affect buckets used for static website hosting, because static website hosting is
only available over HTTP. For more information about virtual-host-style addressing, see Virtual hosting
of buckets (p. 935). For more information about static website hosting, see Hosting a static website
using Amazon S3 (p. 857).
Note
Before March 1, 2018, buckets created in the US East (N. Virginia) Region could have names
that were up to 255 characters long and included uppercase letters and underscores. Beginning
March 1, 2018, new buckets in US East (N. Virginia) must conform to the same rules applied in
all other Regions.
The following example bucket names are valid and follow the recommended naming guidelines:
• docexamplebucket1
• log-delivery-march-2020
• my-hosted-content
The following example bucket names are valid but not recommended for uses other than static website
hosting:
• docexamplewebsite.com
• www.docexamplewebsite.com
• my.example.s3.bucket
Creating a bucket
To upload your data to Amazon S3, you must first create an Amazon S3 bucket in one of the AWS
Regions. When you create a bucket, you must choose a bucket name and Region. You can optionally
choose other storage management options for the bucket. After you create a bucket, you cannot change
the bucket name or Region. For information about naming buckets, see Bucket naming rules (p. 27).
The AWS account that creates the bucket owns it. You can upload any number of objects to the bucket.
By default, you can create up to 100 buckets in each of your AWS accounts. If you need more buckets,
you can increase your account bucket limit to a maximum of 1,000 buckets by submitting a service limit
increase. To learn how to submit a bucket limit increase, see AWS service quotas in the AWS General
Reference. You can store any number of objects in a bucket.
You can use the Amazon S3 console, Amazon S3 APIs, AWS CLI, or AWS SDKs to create a bucket.
To create a bucket using the Amazon S3 console

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Choose Create bucket.
3. In Bucket name, enter a DNS-compliant name for your bucket.
After you create the bucket, you can't change its name. For information about naming buckets, see Bucket naming rules (p. 27).
Important
Avoid including sensitive information, such as account numbers, in the bucket name. The bucket name is visible in the URLs that point to the objects in the bucket.
4. In Region, choose the AWS Region where you want the bucket to reside.
Choose a Region close to you to minimize latency and costs and address regulatory requirements.
Objects stored in a Region never leave that Region unless you explicitly transfer them to another
Region. For a list of Amazon S3 AWS Regions, see AWS service endpoints in the Amazon Web Services
General Reference.
5. In Bucket settings for Block Public Access, choose the Block Public Access settings that you want to
apply to the bucket.
We recommend that you keep all settings enabled unless you know that you need to turn off one
or more of them for your use case, such as to host a public website. Block Public Access settings
that you enable for the bucket are also enabled for all access points that you create on the bucket.
For more information about blocking public access, see Blocking public access to your Amazon S3
storage (p. 408).
6. (Optional) If you want to enable S3 Object Lock, do the following:
For more information about the S3 Object Lock feature, see Using S3 Object Lock (p. 488).
7. Choose Create bucket.
To create a client to access a dual-stack endpoint, you must specify an AWS Region. For more
information, see Dual-stack endpoints (p. 904). For a list of available AWS Regions, see Regions and
endpoints in the AWS General Reference.
When you create a client, the Region maps to the Region-specific endpoint. The client uses this endpoint
to communicate with Amazon S3: s3.<region>.amazonaws.com. If your Region launched after March
20, 2019, your client and bucket must be in the same Region. However, you can use a client in the US
East (N. Virginia) Region to create a bucket in any Region that launched before March 20, 2019. For more
information, see Legacy Endpoints (p. 939).
• Create a client by explicitly specifying an AWS Region — In the example, the client uses the s3.us-
west-2.amazonaws.com endpoint to communicate with Amazon S3. You can specify any AWS
Region. For a list of AWS Regions, see Regions and endpoints in the AWS General Reference.
• Send a create bucket request by specifying only a bucket name — The client sends a request to
Amazon S3 to create the bucket in the Region where you created a client.
• Retrieve information about the location of the bucket — Amazon S3 stores bucket location
information in the location subresource that is associated with the bucket.
Java
This example shows how to create an Amazon S3 bucket using the AWS SDK for Java. For
instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CreateBucketRequest;
import com.amazonaws.services.s3.model.GetBucketLocationRequest;
import java.io.IOException;
public class CreateBucket {
    public static void main(String[] args) throws IOException {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(clientRegion)
                    .build();

            if (!s3Client.doesBucketExistV2(bucketName)) {
                // Because the CreateBucketRequest object doesn't specify a Region, the
                // bucket is created in the Region specified in the client.
                s3Client.createBucket(new CreateBucketRequest(bucketName));

                // Verify that the bucket was created by retrieving it and checking its location.
                String bucketLocation = s3Client.getBucketLocation(new GetBucketLocationRequest(bucketName));
                System.out.println("Bucket location: " + bucketLocation);
            }
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
.NET
For information about how to create and test a working sample, see Running the Amazon S3 .NET
Code Examples (p. 951).
Example
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
    class CreateBucketTest
    {
        private const string bucketName = "*** bucket name ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;

        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            CreateBucketAsync().Wait();
        }

        static async Task CreateBucketAsync()
        {
            try
            {
                // Create the bucket if it doesn't already exist.
                if (!(await AmazonS3Util.DoesS3BucketExistAsync(s3Client, bucketName)))
                {
                    var putBucketRequest = new PutBucketRequest
                    {
                        BucketName = bucketName,
                        UseClientRegion = true
                    };

                    PutBucketResponse putBucketResponse = await s3Client.PutBucketAsync(putBucketRequest);
                }

                // Retrieve the bucket location.
                string bucketLocation = await FindBucketLocationAsync(s3Client);
                Console.WriteLine("Bucket location: {0}", bucketLocation);
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message:'{0}' when creating a bucket", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown error encountered on server. Message:'{0}' when creating a bucket", e.Message);
            }
        }

        static async Task<string> FindBucketLocationAsync(IAmazonS3 client)
        {
            string bucketLocation;
            var request = new GetBucketLocationRequest()
            {
                BucketName = bucketName
            };
            GetBucketLocationResponse response = await client.GetBucketLocationAsync(request);
            bucketLocation = response.Location.ToString();
            return bucketLocation;
        }
    }
}
Ruby
For information about how to create and test a working sample, see Using the AWS SDK for Ruby -
Version 3 (p. 953).
Example
require 'aws-sdk-s3'
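# The rest of this listing is a minimal sketch rather than the guide's original
# sample: it creates a bucket with the AWS SDK for Ruby v3. The Region and
# bucket name below are placeholder values.
s3_client = Aws::S3::Client.new(region: 'us-west-2')
s3_client.create_bucket(
  bucket: 'doc-example-bucket',
  create_bucket_configuration: { location_constraint: 'us-west-2' }
)
puts 'Created bucket doc-example-bucket'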
For information about the AWS CLI, see What is the AWS Command Line Interface? in the AWS Command
Line Interface User Guide.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to view the properties for.
3. Choose Properties.
4. On the Properties page, you can configure the following properties for the bucket.
• Bucket Versioning – Keep multiple versions of an object in one bucket by using versioning. By
default, versioning is disabled for a new bucket. For information about enabling versioning, see
Enabling versioning on buckets (p. 457).
• Tags – With AWS cost allocation, you can use bucket tags to annotate billing for your use of a
bucket. A tag is a key-value pair that represents a label that you assign to a bucket. To add tags,
choose Tags, and then choose Add tag. For more information, see Using cost allocation S3 bucket
tags (p. 618).
• Default encryption – Enabling default encryption provides you with automatic server-side
encryption. Amazon S3 encrypts an object before saving it to a disk and decrypts the object when
you download it. For more information, see Setting default server-side encryption behavior for
Amazon S3 buckets (p. 39).
• Server access logging – Get detailed records for the requests that are made to your bucket with
server access logging. By default, Amazon S3 doesn't collect server access logs. For information
about enabling server access logging, see Enabling Amazon S3 server access logging (p. 753)
• AWS CloudTrail data events – Use CloudTrail to log data events. By default, trails don't log data
events. Additional charges apply for data events. For more information, see Logging Data Events
for Trails in the AWS CloudTrail User Guide.
• Event notifications – Enable certain Amazon S3 bucket events to send notification messages to a
destination whenever the events occur. To enable events, choose Create event notification, and
then specify the settings you want to use. For more information, see Enabling and configuring
event notifications using the Amazon S3 console (p. 792).
• Transfer acceleration – Enable fast, easy, and secure transfers of files over long distances between
your client and an S3 bucket. For information about enabling transfer acceleration, see Enabling
and using S3 Transfer Acceleration (p. 46).
• Object Lock – Use S3 Object Lock to prevent an object from being deleted or overwritten for a
fixed amount of time or indefinitely. For more information, see Using S3 Object Lock (p. 488).
• Requester Pays – Enable Requester Pays if you want the requester (instead of the bucket owner)
to pay for requests and data transfers. For more information, see Using Requester Pays buckets for
storage transfers and usage (p. 51).
• Static website hosting – You can host a static website on Amazon S3. To enable static website
hosting, choose Static website hosting, and then specify the settings you want to use. For more
information, see Hosting a static website using Amazon S3 (p. 857).
Accessing a bucket
You can access your bucket using the Amazon S3 console. Using the console UI, you can perform almost
all bucket operations without having to write any code.
If you access a bucket programmatically, Amazon S3 supports RESTful architecture in which your buckets
and objects are resources, each with a resource URI that uniquely identifies the resource.
Amazon S3 supports both virtual-hosted–style and path-style URLs to access a bucket. Because
buckets can be accessed using path-style and virtual-hosted–style URLs, we recommend that you
create buckets with DNS-compliant bucket names. For more information, see Bucket restrictions and
limitations (p. 54).
Note
Virtual-hosted-style and path-style requests use the S3 dot Region endpoint structure
(s3.Region), for example, https://my-bucket.s3.us-west-2.amazonaws.com. However,
some older Amazon S3 Regions also support S3 dash Region endpoints s3-Region, for
example, https://my-bucket.s3-us-west-2.amazonaws.com. If your bucket is in one of
these Regions, you might see s3-Region endpoints in your server access logs or AWS CloudTrail
logs. We recommend that you do not use this endpoint structure in your requests.
Virtual-hosted–style access
In a virtual-hosted–style request, the bucket name is part of the domain name in the URL.
https://bucket-name.s3.Region.amazonaws.com/key name
In this example, my-bucket is the bucket name, US West (Oregon) is the Region, and puppy.png is the
key name:
https://my-bucket.s3.us-west-2.amazonaws.com/puppy.png
For more information about virtual hosted style access, see Virtual Hosted-Style Requests (p. 936).
Path-style access
In Amazon S3, path-style URLs use the following format.
https://s3.Region.amazonaws.com/bucket-name/key name
For example, if you create a bucket named mybucket in the US West (Oregon) Region, and you want to
access the puppy.jpg object in that bucket, you can use the following path-style URL:
https://s3.us-west-2.amazonaws.com/mybucket/puppy.jpg
S3 Access Points only support virtual-host-style addressing. To address a bucket through an access point,
use the following format.
https://AccessPointName-AccountId.s3-accesspoint.region.amazonaws.com.
Note
• If your access point name includes dash (-) characters, include the dashes in the URL and
insert another dash before the account ID. For example, to use an access point named
finance-docs owned by account 123456789012 in Region us-west-2, the appropriate
URL would be https://finance-docs-123456789012.s3-accesspoint.us-
west-2.amazonaws.com.
• S3 Access Points don't support access by HTTP, only secure access by HTTPS.
Accessing a bucket using S3://

Some AWS services require specifying an Amazon S3 bucket using the following format:
S3://bucket-name/key-name
For example, the following URI uses the sample bucket described in the earlier path-style section:
S3://mybucket/puppy.jpg
Emptying a bucket
You can empty a bucket's contents using the Amazon S3 console, AWS SDKs, or AWS Command Line
Interface (AWS CLI). When you empty a bucket, you delete all the content, but you keep the bucket.
You can also specify a lifecycle configuration on a bucket to expire objects so that Amazon S3 can delete
them. However, there are limitations on this method based on the number of objects in your bucket and
the bucket's versioning status.
To empty an S3 bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, select the option next to the name of the bucket that you want to empty,
and then choose Empty.
3. On the Empty bucket page, confirm that you want to empty the bucket by entering the bucket
name into the text field, and then choose Empty.
4. (Optional) Monitor the progress of the bucket emptying process on the Empty bucket: Status page.
The following rm command removes objects that have the key name prefix doc, for example, doc/doc1
and doc/doc2.
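For example (bucket-name is a placeholder):

$ aws s3 rm s3://bucket-name/doc --recursive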
Use the following command to remove all objects without specifying a prefix.
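For example:

$ aws s3 rm s3://bucket-name --recursive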
For more information, see Using high-level S3 commands with the AWS CLI in the AWS Command Line
Interface User Guide.
Note
You can't remove objects from a bucket that has versioning enabled. Amazon S3 adds a delete
marker when you delete an object, which is what this command does. For more information
about S3 Bucket Versioning, see Using versioning in S3 buckets (p. 453).
For an example of how to empty a bucket using AWS SDK for Java, see Deleting a bucket (p. 37). The
code deletes all objects, regardless of whether the bucket has versioning enabled, and then it deletes the
bucket. To just empty the bucket, make sure that you remove the statement that deletes the bucket.
For more information about using other AWS SDKs, see Tools for Amazon Web Services.
If your bucket has versioning enabled, you can also configure the rule to expire noncurrent objects. To
fully empty the contents of a versioning-enabled bucket, you must configure an expiration policy on
both current and noncurrent objects in the bucket.
For more information, see Managing your storage lifecycle (p. 501) and Expiring objects (p. 507).
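As an illustration, a lifecycle configuration along the following lines expires current versions one day after creation and permanently removes noncurrent versions one day after they become noncurrent. The rule ID, file name, and one-day periods are arbitrary example values, not required settings:

{
  "Rules": [
    {
      "ID": "empty-bucket-rule",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 1 },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 1 }
    }
  ]
}

You could apply such a configuration with the AWS CLI, for example:

$ aws s3api put-bucket-lifecycle-configuration --bucket bucket-name --lifecycle-configuration file://lifecycle.json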
Deleting a bucket
You can delete an empty Amazon S3 bucket, and when you're using the AWS Management Console, you
can delete a bucket that contains objects. If you delete a bucket that contains objects, all the objects in
the bucket are permanently deleted.
When you delete a bucket that has S3 Bucket Versioning enabled, all versions of all the objects in the
bucket are permanently deleted. For more information about versioning, see Working with objects in a
versioning-enabled bucket (p. 462).
Before deleting a bucket, consider the following:

• Bucket names are unique. If you delete a bucket, another AWS user can use the name.
• When you delete a bucket that contains objects, all the objects in the bucket are permanently deleted,
including objects that transitioned to the S3 Glacier storage class.
• If the bucket hosts a static website, and you created and configured an Amazon Route 53 hosted zone
as described in Configuring a static website using a custom domain registered with Route 53 (p. 884),
you must clean up the Route 53 hosted zone settings that are related to the bucket. For more
information, see Step 2: Delete the Route 53 hosted zone (p. 898).
• If the bucket receives log data from Elastic Load Balancing (ELB): We recommend that you stop the
delivery of ELB logs to the bucket before deleting it. After you delete the bucket, if another user
creates a bucket using the same name, your log data could potentially be delivered to that bucket. For
information about ELB access logs, see Access logs in the User Guide for Classic Load Balancers and
Access logs in the User Guide for Application Load Balancers.
Important
Bucket names are unique. If you delete a bucket, another AWS user can use the name. If you
want to continue to use the same bucket name, don't delete the bucket. We recommend that
you empty the bucket and keep it.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, select the option next to the name of the bucket that you want to delete, and
then choose Delete at the top of the page.
3. On the Delete bucket page, confirm that you want to delete the bucket by entering the bucket
name into the text field, and then choose Delete bucket.
Note
If the bucket contains any objects, empty the bucket before deleting it by selecting the
empty bucket configuration link in the This bucket is not empty error alert and following
the instructions on the Empty bucket page. Then return to the Delete bucket page and
delete the bucket.
Java
The following Java example deletes a bucket that contains objects. The example deletes all objects,
and then it deletes the bucket. The example works for buckets with or without versioning enabled.
Note
For buckets without versioning enabled, you can delete all objects directly and then delete
the bucket. For buckets with versioning enabled, you must delete all object versions before
deleting the bucket.
For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;
import java.util.Iterator;
public class DeleteBucket {
    public static void main(String[] args) {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(clientRegion)
                    .build();

            // Delete all objects from the bucket. This is sufficient for unversioned buckets.
            ObjectListing objectListing = s3Client.listObjects(bucketName);
            while (true) {
                for (S3ObjectSummary summary : objectListing.getObjectSummaries()) {
                    s3Client.deleteObject(bucketName, summary.getKey());
                }
                if (objectListing.isTruncated()) {
                    objectListing = s3Client.listNextBatchOfObjects(objectListing);
                } else {
                    break;
                }
            }

            // For versioned buckets, also delete all object versions and delete markers.
            VersionListing versionList = s3Client.listVersions(new ListVersionsRequest().withBucketName(bucketName));
            while (true) {
                for (S3VersionSummary vs : versionList.getVersionSummaries()) {
                    s3Client.deleteVersion(bucketName, vs.getKey(), vs.getVersionId());
                }
                if (versionList.isTruncated()) {
                    versionList = s3Client.listNextBatchOfVersions(versionList);
                } else {
                    break;
                }
            }

            // After all objects and object versions are deleted, delete the bucket.
            s3Client.deleteBucket(bucketName);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client couldn't
            // parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
If your bucket does not have versioning enabled, you can use the rb (remove bucket) AWS CLI command
with the --force parameter to delete the bucket and all the objects in it. This command deletes all
objects first and then deletes the bucket.
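For example (bucket-name is a placeholder):

$ aws s3 rb s3://bucket-name --force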
For more information, see Using High-Level S3 Commands with the AWS Command Line Interface in the
AWS Command Line Interface User Guide.
Setting default server-side encryption behavior for Amazon S3 buckets

You can set default encryption on a bucket so that all new objects are encrypted when they are stored in the bucket. Objects are encrypted using server-side encryption with either Amazon S3-managed keys (SSE-S3) or AWS Key Management Service (AWS KMS) customer master keys (CMKs), known as SSE-KMS.

When you configure your bucket to use default encryption with SSE-KMS, you can also enable S3 Bucket Keys to decrease request traffic from Amazon S3 to AWS KMS and reduce the cost of encryption. For more information, see Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys (p. 166).
When you use server-side encryption, Amazon S3 encrypts an object before saving it to disk and
decrypts it when you download the objects. For more information about protecting data using
server-side encryption and encryption key management, see Protecting data using server-side
encryption (p. 157).
For more information about permissions required for default encryption, see PutBucketEncryption in the
Amazon Simple Storage Service API Reference.
To set up default encryption on a bucket, you can use the Amazon S3 console, AWS CLI, AWS SDKs, or
the REST API. For more information, see the section called “Enabling default encryption” (p. 41).
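For illustration, default encryption with Amazon S3-managed keys (SSE-S3) could be enabled from the AWS CLI roughly as follows; the bucket name is a placeholder:

$ aws s3api put-bucket-encryption \
    --bucket bucket-name \
    --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'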
To encrypt your existing Amazon S3 objects with a single request, you can use Amazon S3 Batch
Operations. You provide S3 Batch Operations with a list of objects to operate on, and Batch Operations
calls the respective API to perform the specified operation. You can use the copy operation to copy the
existing unencrypted objects and write the new encrypted objects to the same bucket. A single Batch
Operations job can perform the specified operation on billions of objects containing exabytes of data.
For more information, see Performing S3 Batch Operations (p. 662).
You can also encrypt existing objects using the Copy Object API. For more information, see the AWS
Storage Blog post Encrypting existing Amazon S3 objects with the AWS CLI.
Note
Amazon S3 buckets with default bucket encryption using SSE-KMS cannot be used as
destination buckets for the section called “Logging server access” (p. 751). Only SSE-S3
default encryption is supported for server access log destination buckets.
• The AWS managed CMK (aws/s3) is used when a CMK Amazon Resource Name (ARN) or alias is not
provided at request time, nor via the bucket's default encryption configuration.
• If you're uploading or accessing S3 objects using AWS Identity and Access Management (IAM)
principals that are in the same AWS account as your CMK, you can use the AWS managed CMK (aws/
s3).
• Use a customer managed CMK if you want to grant cross-account access to your S3 objects. You can
configure the policy of a customer managed CMK to allow access from another account.
• If specifying your own CMK, you should use a fully qualified CMK key ARN. When using a CMK alias,
be aware that AWS KMS will resolve the key within the requester’s account. This can result in data
encrypted with a CMK that belongs to the requester, and not the bucket administrator.
• You must specify a key that you (the requester) have been granted Encrypt permission to. For
more information, see Allows key users to use a CMK for cryptographic operations in the AWS Key
Management Service Developer Guide.
For more information about when to use customer managed CMKs and the AWS managed CMK, see Should I use an AWS KMS-managed key or a custom AWS KMS key to encrypt my objects on Amazon S3?
• If objects in the source bucket are not encrypted, the replica objects in the destination bucket are
encrypted using the default encryption settings of the destination bucket. This results in the ETag of
the source object being different from the ETag of the replica object. You must update applications
that use the ETag to accommodate for this difference.
• If objects in the source bucket are encrypted using SSE-S3 or SSE-KMS, the replica objects in the
destination bucket use the same encryption as the source object encryption. The default encryption
settings of the destination bucket are not used.
For more information about using default encryption with SSE-KMS, see Replicating encrypted
objects (p. 599).
When you configure your bucket to use S3 Bucket Keys for SSE-KMS on new objects, AWS KMS generates
a bucket-level key that is used to create a unique data key for objects in the bucket. This bucket key is
used for a time-limited period within Amazon S3, reducing the need for Amazon S3 to make requests to
AWS KMS to complete encryption operations.
For more information about using an S3 Bucket Key, see Reducing the cost of SSE-KMS with Amazon S3
Bucket Keys (p. 166).
When you configure default encryption using AWS KMS, you can also configure S3 Bucket Key. For more
information, see Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys (p. 166).
Default encryption works with all existing and new Amazon S3 buckets. Without default encryption, to
encrypt all objects stored in a bucket, you must include encryption information with every object storage
request. You must also set up an Amazon S3 bucket policy to reject storage requests that don't include
encryption information.
There are no additional charges for using default encryption for S3 buckets. Requests to configure the
default encryption feature incur standard Amazon S3 request charges. For information about pricing,
see Amazon S3 pricing. For SSE-KMS CMK storage, AWS KMS charges apply and are listed at AWS KMS
pricing.
You can enable Amazon S3 default encryption for an S3 bucket using the Amazon S3 console, the AWS
SDKs, the Amazon S3 REST API, and the AWS Command Line Interface (AWS CLI).
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want.
3. Choose Properties.
4. Under Default encryption, choose Edit.
5. To enable or disable server-side encryption, choose Enable or Disable.
6. To enable server-side encryption using an Amazon S3-managed key, under Encryption key type,
choose Amazon S3 key (SSE-S3).
For more information about using Amazon S3 server-side encryption to encrypt your data, see
Protecting data using server-side encryption with Amazon S3-managed encryption keys (SSE-
S3) (p. 174).
7. To enable server-side encryption using an AWS KMS CMK, follow these steps:
a. Under Encryption key type, choose AWS Key Management Service key (SSE-KMS).
Important
If you use the AWS KMS option for your default encryption configuration, you are
subject to the RPS (requests per second) limits of AWS KMS. For more information
about AWS KMS quotas and how to request a quota increase, see Quotas.
b. Under AWS KMS key choose one of the following:
Important
You can only use KMS CMKs that are enabled in the same AWS Region as the bucket.
When you choose Choose from your KMS master keys, the S3 console only lists 100
KMS CMKs per Region. If you have more than 100 CMKs in the same Region, you can
only see the first 100 CMKs in the S3 console. To use a KMS CMK that is not listed in the
console, choose Custom KMS ARN, and enter the KMS CMK ARN.
When you use an AWS KMS CMK for server-side encryption in Amazon S3, you
must choose a symmetric CMK. Amazon S3 only supports symmetric CMKs and not
asymmetric CMKs. For more information, see Using symmetric and asymmetric keys in
the AWS Key Management Service Developer Guide.
For more information about creating an AWS KMS CMK, see Creating keys in the AWS Key
Management Service Developer Guide. For more information about using AWS KMS with
Amazon S3, see Protecting Data Using Server-Side Encryption with CMKs Stored in AWS Key
Management Service (SSE-KMS) (p. 158).
8. To use S3 Bucket Keys, under Bucket Key, choose Enable.
When you configure your bucket to use default encryption with SSE-KMS, you can also enable S3
Bucket Key. S3 Bucket Keys decrease request traffic from Amazon S3 to AWS KMS and lower the
cost of encryption. For more information, see Reducing the cost of SSE-KMS with Amazon S3 Bucket
Keys (p. 166).
9. Choose Save changes.
For more information, see PutBucketEncryption in the Amazon Simple Storage Service API Reference.
After you enable default encryption for a bucket, the following encryption behavior applies:
• There is no change to the encryption of the objects that existed in the bucket before default
encryption was enabled.
• When you upload objects after enabling default encryption:
• If your PUT request headers don't include encryption information, Amazon S3 uses the bucket’s
default encryption settings to encrypt the objects.
• If your PUT request headers include encryption information, Amazon S3 uses the encryption
information from the PUT request to encrypt objects before storing them in Amazon S3.
• If you use the SSE-KMS option for your default encryption configuration, you are subject to the RPS
(requests per second) limits of AWS KMS. For more information about AWS KMS limits and how to
request a limit increase, see AWS KMS limits.
You can also manage default encryption programmatically by using the following API operations:
• PutBucketEncryption
• GetBucketEncryption
• DeleteBucketEncryption
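For example, you can confirm a bucket's current default encryption configuration from the AWS CLI with a call such as the following (bucket-name is a placeholder):

$ aws s3api get-bucket-encryption --bucket bucket-name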
You can also create Amazon CloudWatch Events with S3 bucket-level operations as the event type.
For more information about CloudTrail events, see Enable logging for objects in a bucket using the
console (p. 744).
You can use CloudTrail logs for object-level Amazon S3 actions to track PUT and POST requests to
Amazon S3. You can use these actions to verify whether default encryption is being used to encrypt
objects when incoming PUT requests don't have encryption headers.
When Amazon S3 encrypts an object using the default encryption settings, the log includes
the following field as the name/value pair: "SSEApplied":"Default_SSE_S3" or
"SSEApplied":"Default_SSE_KMS".
When Amazon S3 encrypts an object using the PUT encryption headers, the log includes one of the
following fields as the name/value pair: "SSEApplied":"SSE_S3", "SSEApplied":"SSE_KMS", or
"SSEApplied":"SSE_C".
For multipart uploads, this information is included in the InitiateMultipartUpload API requests. For
more information about using CloudTrail and CloudWatch, see Monitoring Amazon S3 (p. 732).
Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration

Amazon S3 Transfer Acceleration is a bucket-level feature that enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of the globally distributed edge locations in Amazon CloudFront. As the data arrives at an edge location, the data is routed to Amazon S3 over an optimized network path.
When you use Transfer Acceleration, additional data transfer charges might apply. For more information
about pricing, see Amazon S3 pricing.
You might want to use Transfer Acceleration on a bucket for various reasons:
• Your customers upload to a centralized bucket from all over the world.
• You transfer gigabytes to terabytes of data on a regular basis across continents.
• You can't use all of your available bandwidth over the internet when uploading to Amazon S3.
For more information about when to use Transfer Acceleration, see Amazon S3 FAQs.
The following are requirements for using Transfer Acceleration on an S3 bucket:
• Transfer Acceleration is only supported on virtual-hosted style requests. For more information about
virtual-hosted style requests, see Making requests using the REST API (p. 933).
• The name of the bucket used for Transfer Acceleration must be DNS-compliant and must not contain
periods (".").
• Transfer Acceleration must be enabled on the bucket. For more information, see Enabling and using S3
Transfer Acceleration (p. 46).
After you enable Transfer Acceleration on a bucket, it might take up to 20 minutes before the data
transfer speed to the bucket increases.
Note
Transfer Acceleration is currently not supported for buckets located in the following Regions:
• Africa (Cape Town) (af-south-1)
• Asia Pacific (Hong Kong) (ap-east-1)
• Asia Pacific (Osaka-Local) (ap-northeast-3)
• Europe (Stockholm) (eu-north-1)
• Europe (Milan) (eu-south-1)
• Middle East (Bahrain) (me-south-1)
• To access the bucket that is enabled for Transfer Acceleration, you must use the endpoint
bucketname.s3-accelerate.amazonaws.com. Or, use the dual-stack endpoint bucketname.s3-
accelerate.dualstack.amazonaws.com to connect to the enabled bucket over IPv6.
• You must be the bucket owner to set the transfer acceleration state. The bucket owner can
assign permissions to other users to allow them to set the acceleration state on a bucket. The
s3:PutAccelerateConfiguration permission permits users to enable or disable Transfer
Acceleration on a bucket. The s3:GetAccelerateConfiguration permission permits users to
return the Transfer Acceleration state of a bucket, which is either Enabled or Suspended. For more
information about these permissions, see Example — Bucket subresource operations (p. 231) and
Identity and access management in Amazon S3 (p. 209).
The following sections describe how to get started and use Amazon S3 Transfer Acceleration for
transferring data.
To get started using Amazon S3 Transfer Acceleration, perform the following steps:

1. Enable Transfer Acceleration on a bucket

You can enable Transfer Acceleration on a bucket in any of the following ways:
• Use the Amazon S3 console.
• Use the REST API PUT Bucket accelerate operation.
• Use the AWS CLI and AWS SDKs. For more information, see Developing with Amazon S3 using the
AWS SDKs, and explorers (p. 943).
For more information, see Enabling and using S3 Transfer Acceleration (p. 46).
Note
For your bucket to work with transfer acceleration, the bucket name must conform to DNS
naming requirements and must not contain periods (".").
2. Transfer data to and from the acceleration-enabled bucket
Amazon S3 dual-stack endpoints support requests to S3 buckets over IPv6 and IPv4. The Transfer
Acceleration dual-stack endpoint only uses the virtual hosted-style type of endpoint name. For
more information, see Getting started making requests over IPv6 (p. 902) and Using Amazon S3
dual-stack endpoints (p. 904).
Note
You can continue to use the regular endpoint in addition to the accelerate endpoints.
You can point your Amazon S3 PUT object and GET object requests to the s3-accelerate
endpoint domain name after you enable Transfer Acceleration. For example, suppose that you
currently have a REST API application using PUT Object that uses the hostname mybucket.s3.us-
east-1.amazonaws.com in the PUT request. To accelerate the PUT, you change the hostname in
your request to mybucket.s3-accelerate.amazonaws.com. To go back to using the standard
upload speed, change the name back to mybucket.s3.us-east-1.amazonaws.com.
After Transfer Acceleration is enabled, it can take up to 20 minutes for you to realize the performance
benefit. However, the accelerate endpoint is available as soon as you enable Transfer Acceleration.
You can use the accelerate endpoint in the AWS CLI, AWS SDKs, and other tools that transfer data
to and from Amazon S3. If you are using the AWS SDKs, some of the supported languages use
an accelerate endpoint client configuration flag so you don't need to explicitly set the endpoint
for Transfer Acceleration to bucketname.s3-accelerate.amazonaws.com. For examples of
how to use an accelerate endpoint client configuration flag, see Enabling and using S3 Transfer
Acceleration (p. 46).
You can use all Amazon S3 operations through the transfer acceleration endpoints except for the following:

• ListBuckets
• CreateBucket
• DeleteBucket

Also, Amazon S3 Transfer Acceleration does not support cross-Region copies using PUT Object - Copy.
This section provides examples of how to enable Amazon S3 Transfer Acceleration on a bucket and use
the acceleration endpoint for the enabled bucket.
For more information about Transfer Acceleration requirements, see Configuring fast, secure file
transfers using Amazon S3 Transfer Acceleration (p. 43).
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to enable transfer acceleration for.
3. Choose Properties.
4. Under Transfer acceleration, choose Edit.
5. Choose Enable, and choose Save changes.
1. After Amazon S3 enables transfer acceleration for your bucket, view the Properties tab for the
bucket.
2. Under Transfer acceleration, Accelerated endpoint displays the transfer acceleration endpoint for
your bucket. Use this endpoint to access accelerated data transfers to and from your bucket.
The following example sets Status=Enabled to enable Transfer Acceleration on a bucket. You use
Status=Suspended to suspend Transfer Acceleration.
Example
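A sketch of the corresponding AWS CLI call (bucketname is a placeholder):

$ aws s3api put-bucket-accelerate-configuration --bucket bucketname --accelerate-configuration Status=Enabled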
All requests are sent using the virtual style of bucket addressing: my-bucket.s3-
accelerate.amazonaws.com. Any ListBuckets, CreateBucket, and DeleteBucket requests are
not sent to the accelerate endpoint because the endpoint doesn't support those operations.
For more information about use_accelerate_endpoint, see AWS CLI S3 Configuration in the AWS CLI
Command Reference.
Example
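A sketch of setting use_accelerate_endpoint to true for the default profile, so that all s3 and s3api commands use the accelerate endpoint:

$ aws configure set default.s3.use_accelerate_endpoint true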
If you want to use the accelerate endpoint for some AWS CLI commands but not others, you can use
either one of the following two methods:
• Use the accelerate endpoint per command by setting the --endpoint-url parameter to https://
s3-accelerate.amazonaws.com or http://s3-accelerate.amazonaws.com for any s3 or
s3api command.
• Set up separate profiles in your AWS Config file. For example, create one profile that sets
use_accelerate_endpoint to true and a profile that does not set use_accelerate_endpoint.
When you run a command, specify which profile you want to use, depending upon whether you want
to use the accelerate endpoint.
Example
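A sketch of the profile-based approach in the AWS Config file; the profile names and Region are placeholders:

[profile accelerate]
region = us-west-2
s3 =
  use_accelerate_endpoint = true

[profile noaccelerate]
region = us-west-2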
The following example uploads a file to a bucket enabled for Transfer Acceleration by using the --
endpoint-url parameter to specify the accelerate endpoint.
Example
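A sketch of such an upload; the local file name and bucket name are placeholders:

$ aws s3 cp file.txt s3://bucket-name/keyname --region us-west-2 --endpoint-url https://s3-accelerate.amazonaws.com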
Java
Example
The following example shows how to use an accelerate endpoint to upload an object to Amazon S3.
The example does the following:
• Creates an AmazonS3Client that is configured to use accelerate endpoints. All buckets that the
client accesses must have Transfer Acceleration enabled.
• Enables Transfer Acceleration on a specified bucket. This step is necessary only if the bucket you
specify doesn't already have Transfer Acceleration enabled.
• Verifies that transfer acceleration is enabled for the specified bucket.
• Uploads a new object to the specified bucket using the bucket's accelerate endpoint.
For more information about using Transfer Acceleration, see Getting started with Amazon S3
Transfer Acceleration (p. 45). For instructions on creating and testing a working sample, see
Testing the Amazon S3 Java Code Examples (p. 950).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketAccelerateConfiguration;
import com.amazonaws.services.s3.model.BucketAccelerateStatus;
import com.amazonaws.services.s3.model.GetBucketAccelerateConfigurationRequest;
import com.amazonaws.services.s3.model.SetBucketAccelerateConfigurationRequest;
try {
    // Create an Amazon S3 client that is configured to use the accelerate endpoint.
    AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
            .withRegion(clientRegion)
            .withCredentials(new ProfileCredentialsProvider())
            .enableAccelerateMode()
            .build();

    // Enable Transfer Acceleration on the specified bucket.
    s3Client.setBucketAccelerateConfiguration(
            new SetBucketAccelerateConfigurationRequest(bucketName,
                    new BucketAccelerateConfiguration(BucketAccelerateStatus.Enabled)));
.NET
The following example shows how to use the AWS SDK for .NET to enable Transfer Acceleration on
a bucket. For instructions on how to create and test a working sample, see Running the Amazon
S3 .NET Code Examples (p. 951).
Example
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class TransferAccelerationTest
{
private const string bucketName = "*** bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
public static void Main()
{
s3Client = new AmazonS3Client(bucketRegion);
EnableAccelerationAsync().Wait();
}
When you upload an object to a bucket that has Transfer Acceleration enabled, you specify the accelerate endpoint when you create the client.
JavaScript
For an example of enabling Transfer Acceleration by using the AWS SDK for JavaScript, see Calling
the putBucketAccelerateConfiguration operation in the AWS SDK for JavaScript API Reference.
Python (Boto)
For an example of enabling Transfer Acceleration by using the SDK for Python, see
put_bucket_accelerate_configuration in the AWS SDK for Python (Boto3) API Reference.
Other
For information about using other AWS SDKs, see Sample Code and Libraries.
You can access the Speed Comparison tool using either of the following methods:
• Copy the following URL into your browser window, replacing region with the AWS Region that you
are using (for example, us-west-2) and yourBucketName with the name of the bucket that you
want to evaluate:
https://s3-accelerate-speedtest.s3-accelerate.amazonaws.com/en/accelerate-
speed-comparsion.html?region=region&origBucketName=yourBucketName
For a list of the Regions supported by Amazon S3, see Amazon S3 endpoints and quotas in the AWS
General Reference.
• Use the Amazon S3 console.
Using Requester Pays buckets for storage transfers and usage

In general, bucket owners pay for all Amazon S3 storage and data transfer costs that are associated with their bucket. However, you can configure a bucket to be a Requester Pays bucket. With Requester Pays buckets, the requester instead of the bucket owner pays the cost of the request and the data download from the bucket. The bucket owner always pays the cost of storing the data.

Typically, you configure buckets to be Requester Pays buckets when you want to share data but not
incur charges associated with others accessing the data. For example, you might use Requester Pays
buckets when making available large datasets, such as zip code directories, reference data, geospatial
information, or web crawling data.
Important
If you enable Requester Pays on a bucket, anonymous access to that bucket is not allowed.
You must authenticate all requests involving Requester Pays buckets. The request authentication enables
Amazon S3 to identify and charge the requester for their use of the Requester Pays bucket.
When the requester assumes an AWS Identity and Access Management (IAM) role before making their
request, the account to which the role belongs is charged for the request. For more information about
IAM roles, see IAM roles in the IAM User Guide.
After you configure a bucket to be a Requester Pays bucket, requesters must include x-amz-request-
payer in their requests either in the header, for POST, GET and HEAD requests, or as a parameter in
a REST request to show that they understand that they will be charged for the request and the data
download.
Requester Pays buckets do not support the following:

• Anonymous requests
• BitTorrent
• SOAP requests
• Using a Requester Pays bucket as the target bucket for end-user logging, or vice versa. However, you
can turn on end-user logging on a Requester Pays bucket where the target bucket is not a Requester
Pays bucket.
A request to a Requester Pays bucket fails, and Amazon S3 returns an error, in the following cases:

• The requester doesn't include the parameter x-amz-request-payer in the header (GET, HEAD, or
POST) or as a parameter (REST) in the request (HTTP code 403).
• Request authentication fails (HTTP code 403).
• The request is anonymous (HTTP code 403).
• The request is a SOAP request.
Topics
• Configuring Requester Pays on a bucket (p. 52)
• Retrieving the requestPayment configuration using the REST API (p. 53)
• Downloading objects in Requester Pays buckets (p. 54)
This section provides examples of how to configure Requester Pays on an Amazon S3 bucket using the
console and the REST API.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to enable Requester Pays for.
3. Choose Properties.
4. Under Requester pays, choose Edit.
5. Choose Enable, and choose Save changes.
Amazon S3 enables Requester Pays for your bucket and displays your Bucket overview. Under
Requester pays, you see Enabled.
To revert a Requester Pays bucket to a regular bucket, you use the value BucketOwner. Typically, you
would use BucketOwner when uploading data to the Amazon S3 bucket, and then you would set the
value to Requester before publishing the objects in the bucket.
To set requestPayment
• Use a PUT request to set the Payer value to Requester on a specified bucket.
<RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Payer>Requester</Payer>
</RequestPaymentConfiguration>
If the request succeeds, Amazon S3 returns a response similar to the following:

HTTP/1.1 200 OK
x-amz-id-2: [id]
x-amz-request-id: [request_id]
Date: Wed, 01 Mar 2009 12:00:00 GMT
Content-Length: 0
Connection: close
Server: AmazonS3
x-amz-request-charged:requester
You can set Requester Pays only at the bucket level. You can't set Requester Pays for specific objects
within the bucket.
You can configure a bucket to be BucketOwner or Requester at any time. However, there might be a
few minutes before the new configuration value takes effect.
Note
Bucket owners who give out presigned URLs should consider carefully before configuring a
bucket to be Requester Pays, especially if the URL has a long lifetime. The bucket owner is
charged each time the requester uses a presigned URL that uses the bucket owner's credentials.
• Use a GET request to obtain the requestPayment resource, as shown in the following request.
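A sketch of such a request; the bucket name, date, and signature are placeholders:

GET ?requestPayment HTTP/1.1
Host: [BucketName].s3.amazonaws.com
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS [Signature]

If the request succeeds, Amazon S3 returns a response similar to the following, with the payment configuration in the body: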
HTTP/1.1 200 OK
x-amz-id-2: [id]
x-amz-request-id: [request_id]
Date: Wed, 01 Mar 2009 12:00:00 GMT
Content-Type: [type]
Content-Length: [length]
Connection: close
Server: AmazonS3
<?xml version="1.0" encoding="UTF-8"?>
<RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Payer>Requester</Payer>
</RequestPaymentConfiguration>
To download objects from a Requester Pays bucket, requests must include one of the following:

• For GET, HEAD, and POST requests, include x-amz-request-payer : requester in the header
• For signed URLs, include x-amz-request-payer=requester in the request
If the request succeeds and the requester is charged, the response includes the header x-amz-request-
charged:requester. If x-amz-request-payer is not in the request, Amazon S3 returns a 403 error
and charges the bucket owner for the request.
Note
Bucket owners do not need to add x-amz-request-payer to their requests.
Ensure that you have included x-amz-request-payer and its value in your signature
calculation. For more information, see Constructing the CanonicalizedAmzHeaders
Element (p. 972).
• Use a GET request to download an object from a Requester Pays bucket, as shown in the following
request.
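A sketch of such a request; the object key, bucket name, date, and signature are placeholders:

GET /[ObjectKey] HTTP/1.1
Host: [BucketName].s3.amazonaws.com
x-amz-request-payer: requester
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS [Signature]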
If the GET request succeeds and the requester is charged, the response includes x-amz-request-
charged:requester.
Amazon S3 can return an Access Denied error for requests that try to get objects from a Requester
Pays bucket. For more information, see Error Responses in the Amazon Simple Storage Service API
Reference.
When you create a bucket, you choose its name and the AWS Region to create it in. After you create a
bucket, you can't change its name or Region.
When naming a bucket, choose a name that is relevant to you or your business. Avoid using names
associated with others. For example, you should avoid using AWS or Amazon in your bucket name.
By default, you can create up to 100 buckets in each of your AWS accounts. If you need additional
buckets, you can increase your account bucket limit to a maximum of 1,000 buckets by submitting a
service limit increase. There is no difference in performance whether you use many buckets or just a few.
For information about how to increase your bucket limit, see AWS service quotas in the AWS General
Reference.
If a bucket is empty, you can delete it. After a bucket is deleted, the name becomes available for reuse.
However, after you delete the bucket, you might not be able to reuse the name for various reasons.
For example, when you delete the bucket and the name becomes available for reuse, another AWS
account might create a bucket with that name. In addition, some time might pass before you can reuse
the name of a deleted bucket. If you want to use the same bucket name, we recommend that you don't
delete the bucket.
There is no limit to the number of objects that you can store in a bucket. You can store all of your objects
in a single bucket, or you can organize them across several buckets. However, you can't create a bucket
from within another bucket.
Bucket operations
The high availability engineering of Amazon S3 is focused on get, put, list, and delete operations. Because
bucket operations work against a centralized, global resource space, it is not appropriate to create or
delete buckets on the high availability code path of your application. It's better to create or delete
buckets in a separate initialization or setup routine that you run less often.
If your application automatically creates buckets, choose a bucket naming scheme that is unlikely to
cause naming conflicts. Ensure that your application logic will choose a different bucket name if a bucket
name is already taken.
For more information about bucket naming, see Bucket naming rules (p. 27).
To store an object in Amazon S3, you create a bucket and then upload the object to a bucket. When the
object is in the bucket, you can open it, download it, and move it. When you no longer need an object or
a bucket, you can clean up these resources.
With Amazon S3, you pay only for what you use. For more information about Amazon S3 features and
pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started with Amazon S3 for
free. For more information, see AWS Free Tier.
Topics
• Amazon S3 objects overview (p. 56)
• Creating object key names (p. 58)
• Working with object metadata (p. 60)
• Uploading objects (p. 65)
• Uploading and copying objects using multipart upload (p. 72)
• Copying objects (p. 102)
• Downloading an object (p. 109)
• Deleting Amazon S3 objects (p. 115)
• Organizing, listing, and working with your objects (p. 135)
• Accessing an object using a presigned URL (p. 144)
• Retrieving Amazon S3 objects using BitTorrent (p. 153)
Key
The name that you assign to an object. You use the object key to retrieve the object. For more
information, see Working with object metadata (p. 60).
Version ID
Within a bucket, a key and version ID uniquely identify an object. The version ID is a string that
Amazon S3 generates when you add an object to a bucket. For more information, see Using
versioning in S3 buckets (p. 453).
Value
An object value can be any sequence of bytes. Objects can range in size from zero to 5 TB. For more
information, see Uploading objects (p. 65).
Metadata
A set of name-value pairs with which you can store information regarding the object. You can assign
metadata, referred to as user-defined metadata, to your objects in Amazon S3. Amazon S3 also
assigns system-metadata to these objects, which it uses for managing objects. For more information,
see Working with object metadata (p. 60).
Subresources
Amazon S3 uses the subresource mechanism to store object-specific additional information. Because
subresources are subordinates to objects, they are always associated with some other entity such as
an object or a bucket. For more information, see Object subresources (p. 57).
Access control information
You can control access to the objects you store in Amazon S3. Amazon S3 supports both the
resource-based access control, such as an access control list (ACL) and bucket policies, and user-
based access control. For more information, see Identity and access management in Amazon
S3 (p. 209).
Your Amazon S3 resources (for example, buckets and objects) are private by default. You must
explicitly grant permission for others to access these resources. For more information about sharing
objects, see Accessing an object using a presigned URL (p. 144).
Object subresources
Amazon S3 defines a set of subresources associated with buckets and objects. Subresources are
subordinates to objects. This means that subresources don't exist on their own. They are always
associated with some other entity, such as an object or a bucket.
The following table lists the subresources associated with Amazon S3 objects.
Subresource Description
acl Contains a list of grants identifying the grantees and the permissions granted. When
you create an object, the acl identifies the object owner as having full control over the
object. You can retrieve an object ACL or replace it with an updated list of grants. Any
update to an ACL requires you to replace the existing ACL. For more information about
ACLs, see Managing access with ACLs (p. 383).
torrent Amazon S3 supports the BitTorrent protocol. Amazon S3 uses the torrent subresource
to return the torrent file associated with the specific object. To retrieve a torrent file,
you specify the torrent subresource in your GET request. Amazon S3 creates a torrent
file and returns it. You can only retrieve the torrent subresource; you cannot create,
update, or delete the torrent subresource. For more information, see Retrieving
Amazon S3 objects using BitTorrent (p. 153).
Note
Amazon S3 does not support the BitTorrent protocol in AWS Regions launched
after May 30, 2016.
When you create an object, you specify the key name, which uniquely identifies the object in the bucket.
For example, on the Amazon S3 console, when you highlight a bucket, a list of objects in your bucket
appears. These names are the object keys. The name for a key is a sequence of Unicode characters whose
UTF-8 encoding is at most 1,024 bytes long.
The Amazon S3 data model is a flat structure: You create a bucket, and the bucket stores objects. There
is no hierarchy of subbuckets or subfolders. However, you can infer logical hierarchy using key name
prefixes and delimiters as the Amazon S3 console does. The Amazon S3 console supports a concept of
folders. For more information about how to edit metadata from the Amazon S3 console, see Editing
object metadata in the Amazon S3 console (p. 63).
Suppose that your bucket (admin-created) has four objects with the following object keys:
Development/Projects.xls
Finance/statement1.pdf
Private/taxdocument.pdf
s3-dg.pdf
The console uses the key name prefixes (Development/, Finance/, and Private/) and delimiter ('/')
to present a folder structure. The s3-dg.pdf key does not have a prefix, so its object appears directly at
the root level of the bucket. If you open the Development/ folder, you see the Projects.xls object
in it.
• Amazon S3 supports buckets and objects, and there is no hierarchy. However, by using prefixes and
delimiters in an object key name, the Amazon S3 console and the AWS SDKs can infer hierarchy and
introduce the concept of folders.
• The Amazon S3 console implements folder object creation by creating a zero-byte object with the
folder prefix and delimiter value as the key. These folder objects don't appear in the console. Otherwise
they behave like any other objects and can be viewed and manipulated through the REST API, AWS CLI,
and AWS SDKs.
Safe characters
The following character sets are generally safe for use in key names:

• Alphanumeric characters: 0-9, a-z, A-Z
• Special characters: exclamation point (!), hyphen (-), underscore (_), period (.), asterisk (*), single quote ('), open parenthesis ("("), close parenthesis (")")

The following are examples of valid object key names:

• 4my-organization
• my.great_photos-2014/jan/myvacation.jpg
• videos/2014/birthday/video1.wmv
Important
If an object key name ends with a single period (.), or two periods (..), you can’t download the
object using the Amazon S3 console. To download an object with a key name ending with “.” or
“..”, you must use the AWS Command Line Interface (AWS CLI), AWS SDKs, or REST API.
Characters that might require special handling

The following characters in a key name might require additional code handling and likely need to be URL encoded or referenced as HEX:

• Ampersand ("&")
• Dollar ("$")
• ASCII character ranges 00–1F hex (0–31 decimal) and 7F (127 decimal)
• 'At' symbol ("@")
• Equals ("=")
• Semicolon (";")
• Colon (":")
• Plus ("+")
• Space – Significant sequences of spaces might be lost in some uses (especially multiple spaces)
• Comma (",")
• Question mark ("?")
Characters to avoid
Avoid the following characters in a key name because of significant special handling for consistency
across all applications.
• Backslash ("\")
• Left curly brace ("{")
• Non-printable ASCII characters (128–255 decimal characters)
• Caret ("^")
• Right curly brace ("}")
• Percent character ("%")
• Grave accent / back tick ("`")
• Right square bracket ("]")
• Quotation marks
• 'Greater Than' symbol (">")
• Left square bracket ("[")
• Tilde ("~")
• 'Less Than' symbol ("<")
• 'Pound' character ("#")
• Vertical bar / pipe ("|")
XML related object key constraints
When you use XML requests, you must replace the following special characters with their entity equivalents:
• ' as &apos;
• ” as &quot;
• & as &amp;
• < as &lt;
• > as &gt;
• \r as &#13; or &#x0D;
• \n as &#10; or &#x0A;
Example
The following example illustrates the use of an XML entity code as a substitution for a carriage return.
This DeleteObjects request deletes an object with the key parameter: /some/prefix/objectwith
\rcarriagereturn (where the \r is the carriage return).
<Delete xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Object>
<Key>/some/prefix/objectwith&#13;carriagereturn</Key>
</Object>
</Delete>
There are two kinds of metadata in Amazon S3: system-defined metadata and user-defined metadata. The
sections below provide more information about system-defined and user-defined metadata. For more
information about editing metadata using the Amazon S3 console, see Editing object metadata in the
Amazon S3 console (p. 63).
1. Metadata such as object creation date is system controlled, where only Amazon S3 can modify the
value.
2. Other system metadata, such as the storage class configured for the object and whether the object
has server-side encryption enabled, are examples of system metadata whose values you control.
If your bucket is configured as a website, sometimes you might want to redirect a page request to
another page or an external URL. In this case, a webpage is an object in your bucket. Amazon S3 stores
the page redirect value as system metadata whose value you control.
When you create objects, you can configure values of these system metadata items or update the
values when you need to. For more information about storage classes, see Using Amazon S3 storage
classes (p. 496).
For more information about server-side encryption, see Protecting data using encryption (p. 157).
Note
The PUT request header is limited to 8 KB in size. Within the PUT request header, the system-
defined metadata is limited to 2 KB in size. The size of system-defined metadata is measured by
taking the sum of the number of bytes in the US-ASCII encoding of each key and value.
The following table provides a list of system-defined metadata and whether you can update it.
Name: x-amz-storage-class
Description: Storage class used for storing the object. For more information, see Using Amazon S3 storage classes (p. 496).
Can user modify the value: Yes
When metadata is retrieved through the REST API, Amazon S3 combines headers that have the same
name (ignoring case) into a comma-delimited list. If some metadata contains unprintable characters,
it is not returned. Instead, the x-amz-missing-meta header is returned with a value of the number
of unprintable metadata entries. The HeadObject action retrieves metadata from an object without
returning the object itself. This operation is useful if you're only interested in an object's metadata.
To use HEAD, you must have READ access to the object. For more information, see HeadObject in the
Amazon Simple Storage Service API Reference.
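As a minimal sketch of the HEAD behavior described above, the following AWS SDK for Java snippet retrieves only an object's metadata by using getObjectMetadata, which sends a HEAD request. The bucket and key names are placeholders; credentials and a default Region are assumed to be configured.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectMetadataRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
public class HeadObjectExample {
    public static void main(String[] args) {
        // Placeholder bucket and key names.
        String bucketName = "amzn-s3-demo-bucket";
        String keyName = "backup/sample1.jpg";
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard().build();
        // getObjectMetadata sends a HEAD request; the object data itself is not downloaded.
        ObjectMetadata metadata = s3Client.getObjectMetadata(
                new GetObjectMetadataRequest(bucketName, keyName));
        System.out.println("Content-Type: " + metadata.getContentType());
        System.out.println("Content-Length: " + metadata.getContentLength());
        // User-defined (x-amz-meta-*) metadata, returned with the prefix removed.
        metadata.getUserMetadata().forEach((key, value) ->
                System.out.println(key + " = " + value));
    }
}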
User-defined metadata is a set of key-value pairs. Amazon S3 stores user-defined metadata keys in
lowercase.
To avoid issues around the presentation of these metadata values, use US-ASCII characters when using
REST and UTF-8 characters when using SOAP or browser-based uploads via POST.
When using non US-ASCII characters in your metadata values, the provided Unicode string is examined
for non US-ASCII characters. If the string contains only US-ASCII characters, it is presented as is. If the
string contains non US-ASCII characters, it is first character-encoded using UTF-8 and then encoded into
US-ASCII.
Note
The PUT request header is limited to 8 KB in size. Within the PUT request header, the user-
defined metadata is limited to 2 KB in size. The size of user-defined metadata is measured by
taking the sum of the number of bytes in the UTF-8 encoding of each key and value.
For information about adding metadata to your object after it's been uploaded, see Editing object metadata
in the Amazon S3 console (p. 63).
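The following is a minimal sketch, using the AWS SDK for Java, of attaching user-defined metadata when uploading an object. The bucket name, key name, file path, and metadata values are placeholders; credentials and a default Region are assumed to be configured.
import java.io.File;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
public class PutObjectWithUserMetadata {
    public static void main(String[] args) {
        // Placeholder names; replace with your own bucket, key, and file path.
        String bucketName = "amzn-s3-demo-bucket";
        String keyName = "docs/report.pdf";
        File file = new File("report.pdf");
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard().build();
        ObjectMetadata metadata = new ObjectMetadata();
        // Stored and returned as the x-amz-meta-project header; Amazon S3 lowercases the key.
        metadata.addUserMetadata("project", "alpha");
        PutObjectRequest request = new PutObjectRequest(bucketName, keyName, file);
        request.setMetadata(metadata);
        s3Client.putObject(request);
    }
}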
You can also set some metadata when you upload the object and later edit it as your needs change. For
example, you might have a set of objects that you initially store in the STANDARD storage class. Over
time, you might no longer need this data to be highly available. So you change the storage class to
GLACIER by editing the value of the x-amz-storage-class key from STANDARD to GLACIER.
Note
Consider the following issues when you are editing object metadata in Amazon S3:
• This action creates a copy of the object with updated settings and the last-modified date. If S3
Versioning is enabled, a new version of the object is created, and the existing object becomes
an older version. The IAM role that changes the property also becomes the owner of the new
object (or object version).
• Editing metadata updates values for existing key names.
• Objects that are encrypted with customer-provided encryption keys (SSE-C) cannot be copied
using the console. You must use the AWS CLI, AWS SDK, or the Amazon S3 REST API.
Warning
When editing metadata of folders, wait for the Edit metadata operation to finish before
adding new objects to the folder. Otherwise, new objects might also be edited.
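If you make this kind of change programmatically instead of in the console, it is still a copy operation. The following AWS SDK for Java sketch copies an object onto itself while setting the x-amz-storage-class value to GLACIER; the bucket and key names are placeholders, and credentials and a default Region are assumed to be configured.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.StorageClass;
public class ChangeStorageClassByCopy {
    public static void main(String[] args) {
        // Placeholder bucket and key names; the object is copied onto itself.
        String bucketName = "amzn-s3-demo-bucket";
        String keyName = "logs/2014/archive.log";
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard().build();
        // As described in the note above, this creates a new copy of the object
        // (or a new version in a versioning-enabled bucket) with the new storage class.
        CopyObjectRequest copyRequest =
                new CopyObjectRequest(bucketName, keyName, bucketName, keyName)
                        .withStorageClass(StorageClass.Glacier);
        s3Client.copyObject(copyRequest);
    }
}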
The following topics describe how to edit metadata of an object using the Amazon S3 console.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Navigate to your Amazon S3 bucket or folder, and select the check box to the left of the names of
the objects with metadata you want to edit.
3. On the Actions menu, choose Edit actions, and choose Edit metadata.
4. Review the objects listed, and choose Add metadata.
5. For metadata Type, select System-defined.
6. Specify a unique Key and the metadata Value.
7. To edit additional metadata, choose Add metadata. You can also choose Remove to remove a set of
type-key-values.
8. When you are done, choose Edit metadata and Amazon S3 edits the metadata of the specified
objects.
User-defined metadata can be as large as 2 KB total. To calculate the total size of user-defined metadata,
sum the number of bytes in the UTF-8 encoding for each key and value. Both keys and their values must
conform to US-ASCII standards. For more information, see User-defined object metadata (p. 62).
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that contains the objects that you want to add
metadata to.
Uploading objects
When you upload a file to Amazon S3, it is stored as an S3 object. Objects consist of the file data and
metadata that describes the object. You can have an unlimited number of objects in a bucket. Before
you can upload files to an Amazon S3 bucket, you need write permissions for the bucket. For more
information about access permissions, see Identity and access management in Amazon S3 (p. 209).
You can upload any file type—images, backups, data, movies, etc.—into an S3 bucket. The maximum size
of a file that you can upload by using the Amazon S3 console is 160 GB. To upload a file larger than 160
GB, use the AWS CLI, AWS SDK, or Amazon S3 REST API.
If you upload an object with a key name that already exists in a versioning-enabled bucket, Amazon
S3 creates another version of the object instead of replacing the existing object. For more information
about versioning, see Using the S3 console (p. 458).
Depending on the size of the data you are uploading, Amazon S3 offers the following options:
• Upload an object in a single operation using the AWS SDKs, REST API, or AWS CLI—With a single
PUT operation, you can upload a single object up to 5 GB in size.
• Upload a single object using the Amazon S3 Console—With the Amazon S3 Console, you can upload
a single object up to 160 GB in size.
• Upload an object in parts using the AWS SDKs, REST API, or AWS CLI—Using the multipart upload
API, you can upload a single large object, up to 5 TB in size.
The multipart upload API is designed to improve the upload experience for larger objects. You can
upload an object in parts. These object parts can be uploaded independently, in any order, and in
parallel. You can use a multipart upload for objects from 5 MB to 5 TB in size. For more information,
see Uploading and copying objects using multipart upload (p. 72).
When uploading an object, you can optionally request that Amazon S3 encrypt it before saving
it to disk, and decrypt it when you download it. For more information, see Protecting data using
encryption (p. 157).
When you upload an object, the object key name is the file name and any optional prefixes. In the
Amazon S3 console, you can create folders to organize your objects. In Amazon S3, folders are
represented as prefixes that appear in the object key name. If you upload an individual object to a folder
in the Amazon S3 console, the folder name is included in the object key name.
For example, if you upload an object named sample1.jpg to a folder named backup, the key name is
backup/sample1.jpg. However, the object is displayed in the console as sample1.jpg in the backup
folder. For more information about key names, see Working with object metadata (p. 60).
Note
If you rename an object or change any of the properties in the S3 console, for example Storage
Class, Encryption, Metadata, a new object is created to replace the old one. If S3 Versioning
is enabled, a new version of the object is created, and the existing object becomes an older
version. The role that changes the property also becomes the owner of the new object (or object version).
When you upload a folder, Amazon S3 uploads all of the files and subfolders from the specified folder
to your bucket. It then assigns an object key name that is a combination of the uploaded file name
and the folder name. For example, if you upload a folder named /images that contains two files,
sample1.jpg and sample2.jpg, Amazon S3 uploads the files and then assigns the corresponding key
names, images/sample1.jpg and images/sample2.jpg. The key names include the folder name
as a prefix. The Amazon S3 console displays only the part of the key name that follows the last “/”. For
example, within an images folder the images/sample1.jpg and images/sample2.jpg objects are
displayed as sample1.jpg and sample2.jpg.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to upload your folders or files to.
3. Choose Upload.
4. In the Upload window, do one of the following:
Amazon S3 uploads your objects and folders. When the upload completes, you can see a success
message on the Upload: status page.
7. To configure additional object properties before uploading, see To configure additional object
properties (p. 66).
For more information about storage classes, see Using Amazon S3 storage classes (p. 496).
3. To update the encryption settings for your objects, under Server-side encryption settings, do the
following.
For more information, see Protecting data using server-side encryption with Amazon S3-
managed encryption keys (SSE-S3) (p. 174).
c. To encrypt the uploaded files using the AWS Key Management Service (AWS KMS), choose AWS
Key Management Service key (SSE-KMS). Then choose an option for AWS KMS key.
For more information about creating a customer managed AWS KMS CMK, see Creating Keys
in the AWS Key Management Service Developer Guide. For more information about protecting
data with AWS KMS, see Protecting Data Using Server-Side Encryption with CMKs Stored in
AWS Key Management Service (SSE-KMS) (p. 158).
• Enter KMS master key ARN - Specify the AWS KMS key ARN for a customer managed CMK,
and enter the Amazon Resource Name (ARN).
You can use the KMS master key ARN to give an external account the ability to use an object
that is protected by an AWS KMS CMK. To do this, choose Enter KMS master key ARN, and
enter the Amazon Resource Name (ARN) for the external account. Administrators of an
external account that have usage permissions to an object protected by your AWS KMS CMK
can further restrict access by creating a resource-level IAM policy.
Note
To encrypt objects in a bucket, you can use only CMKs that are available in the same
AWS Region as the bucket.
4. To change access control list permissions, under Access control list (ACL), edit permissions.
For information about object access permissions, see Using the S3 console to set ACL permissions for
an object (p. 390). You can grant read access to your objects to the general public (everyone in the
world) for all of the files that you're uploading. We recommend that you do not change the default
setting for public read access. Granting public read access is applicable to a small subset of use cases
such as when buckets are used for websites. You can always make changes to object permissions
after you upload the object.
5. To add tags to all of the objects that you are uploading, choose Add tag. Type a tag name in the Key
field. Type a value for the tag.
Object tagging gives you a way to categorize storage. Each tag is a key-value pair. Tag keys and tag
values are case sensitive. You can have up to 10 tags per object. A tag key can be up to 128 Unicode
characters in length and tag values can be up to 255 Unicode characters in length. For more
information about object tags, see Categorizing your storage using tags (p. 609).
6. To add metadata, choose Add metadata.
For system-defined metadata, you can select common HTTP headers, such as Content-Type
and Content-Disposition. For a list of system-defined metadata and information about whether
you can add the value, see System-defined object metadata (p. 61). Any metadata starting
with prefix x-amz-meta- is treated as user-defined metadata. User-defined metadata is stored
with the object and is returned when you download the object. Both the keys and their values
must conform to US-ASCII standards. User-defined metadata can be as large as 2 KB. For
more information about system defined and user defined metadata, see Working with object
metadata (p. 60).
b. For Key, choose a key.
c. Type a value for the key.
7. To upload your objects, choose Upload.
Amazon S3 uploads your object. When the upload completes, you can see a success message on the
Upload: status page.
8. Choose Exit.
Java
The following example creates two objects. The first object has a text string as data, and the second
object is a file. The example creates the first object by specifying the bucket name, object key, and
text data directly in a call to AmazonS3Client.putObject(). The example creates the second
object by using a PutObjectRequest that specifies the bucket name, object key, and file path. The
PutObjectRequest also specifies the ContentType header and title metadata.
For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import java.io.File;
import java.io.IOException;
try {
// This code expects that you have AWS credentials set up per:
// https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-credentials.html
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.build();
.NET
The following C# code example creates two objects with two PutObjectRequest requests:
• The first PutObjectRequest request saves a text string as sample object data. It also specifies
the bucket and object key names.
• The second PutObjectRequest request uploads a file by specifying the file name. This request
also specifies the ContentType header and optional object metadata (a title).
For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 951).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class UploadObjectTest
{
private const string bucketName = "*** bucket name ***";
// For simplicity the example creates two objects from the same file.
// You specify key names for these objects.
private const string keyName1 = "*** key name for first object created ***";
private const string keyName2 = "*** key name for second object created ***";
private const string filePath = @"*** file path ***";
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.EUWest1;
putRequest2.Metadata.Add("x-amz-meta-title", "someTitle");
PutObjectResponse response2 = await client.PutObjectAsync(putRequest2);
}
catch (AmazonS3Exception e)
{
Console.WriteLine(
"Error encountered ***. Message:'{0}' when writing an object"
, e.Message);
}
catch (Exception e)
{
Console.WriteLine(
PHP
This topic guides you through using classes from the AWS SDK for PHP to upload an object of
up to 5 GB in size. For larger files, you must use multipart upload API. For more information, see
Uploading and copying objects using multipart upload (p. 72).
This topic assumes that you are already following the instructions for Using the AWS SDK for PHP
and Running PHP Examples (p. 952) and have the AWS SDK for PHP properly installed.
The following PHP example creates an object in a specified bucket by uploading data using the
putObject() method. For information about running the PHP examples in this guide, see Running
PHP Examples (p. 952).
require 'vendor/autoload.php';
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;
try {
// Upload data.
$result = $s3->putObject([
'Bucket' => $bucket,
'Key' => $keyname,
'Body' => 'Hello, world!',
'ACL' => 'public-read'
]);
Ruby
The AWS SDK for Ruby - Version 3 has two ways of uploading an object to Amazon S3. The first
uses a managed file uploader, which makes it easy to upload files of any size from disk. To use the
managed file uploader method:
Example
require 'aws-sdk-s3'
The second way that AWS SDK for Ruby - Version 3 can upload an object uses the #put method of
Aws::S3::Object. This is useful if the object is a string or an I/O object that is not a file on disk. To
use this method:
Example
require 'aws-sdk-s3'
# './my-file.txt'
# )
def object_uploaded?(s3_resource, bucket_name, object_key, file_path)
object = s3_resource.bucket(bucket_name).object(object_key)
File.open(file_path, 'rb') do |file|
object.put(body: file)
end
return true
rescue StandardError => e
puts "Error uploading object: #{e.message}"
return false
end
• If you're uploading large objects over a stable high-bandwidth network, use multipart upload to
maximize the use of your available bandwidth by uploading object parts in parallel for multi-threaded
performance.
• If you're uploading over a spotty network, use multipart upload to increase resiliency to network errors
by avoiding upload restarts. When using multipart upload, you need to retry uploading only parts that
are interrupted during the upload. You don't need to restart uploading your object from the beginning.
You can list all of your in-progress multipart uploads or get a list of the parts that you have uploaded for
a specific multipart upload. Each of these operations is explained in this section.
When you send a request to initiate a multipart upload, Amazon S3 returns a response with an upload
ID, which is a unique identifier for your multipart upload. You must include this upload ID whenever you
upload parts, list the parts, complete an upload, or stop an upload. If you want to provide any metadata
describing the object being uploaded, you must provide it in the request to initiate multipart upload.
Parts upload
When uploading a part, in addition to the upload ID, you must specify a part number. You can choose
any part number between 1 and 10,000. A part number uniquely identifies a part and its position in the
object you are uploading. The part number that you choose doesn’t need to be in a consecutive sequence
(for example, it can be 1, 5, and 14). If you upload a new part using the same part number as a previously
uploaded part, the previously uploaded part is overwritten.
Whenever you upload a part, Amazon S3 returns an ETag header in its response. For each part upload,
you must record the part number and the ETag value. You must include these values in the subsequent
request to complete the multipart upload.
Note
After you initiate a multipart upload and upload one or more parts, you must either complete
or stop the multipart upload in order to stop getting charged for storage of the uploaded parts.
Only after you either complete or stop a multipart upload will Amazon S3 free up the parts
storage and stop charging you for the parts storage.
When you complete a multipart upload, Amazon S3 creates an object by concatenating the parts in
ascending order based on the part number. If any object metadata was provided in the initiate multipart
upload request, Amazon S3 associates that metadata with the object. After a successful complete
request, the parts no longer exist.
Your complete multipart upload request must include the upload ID and a list of both part numbers
and corresponding ETag values. The Amazon S3 response includes an ETag that uniquely identifies the
combined object data. This ETag is not necessarily an MD5 hash of the object data.
You can optionally stop the multipart upload. After stopping a multipart upload, you cannot upload any
part using that upload ID again. All storage from any part of the canceled multipart upload is then freed.
If any part uploads were in-progress, they can still succeed or fail even after you stop. To free all storage
consumed by all parts, you must stop a multipart upload only after all part uploads have completed.
You can list the parts of a specific multipart upload or all in-progress multipart uploads. The list parts
operation returns the parts information that you have uploaded for a specific multipart upload. For each
list parts request, Amazon S3 returns the parts information for the specified multipart upload, up to a
maximum of 1,000 parts. If there are more than 1,000 parts in the multipart upload, you must send a
series of list part requests to retrieve all the parts. Note that the returned list of parts doesn't include
parts that haven't completed uploading. Using the list multipart uploads operation, you can obtain a list
of multipart uploads in progress.
An in-progress multipart upload is an upload that you have initiated, but have not yet completed or
stopped. Each request returns at most 1,000 multipart uploads. If there are more than 1,000 multipart
uploads in progress, you need to send additional requests to retrieve the remaining multipart uploads.
Only use the returned listing for verification. You should not use the result of this listing when sending
a complete multipart upload request. Instead, maintain your own list of the part numbers you specified
when uploading parts and the corresponding ETag values that Amazon S3 returns.
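The following is a minimal AWS SDK for Java sketch of the initiate, upload part, and complete sequence described above, including the list of part numbers and ETag values that you maintain yourself. The bucket name, key name, and file are placeholders; credentials, a default Region, and the error handling shown in the full examples later in this section are omitted for brevity.
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.UploadPartRequest;
public class MultipartUploadFlow {
    public static void main(String[] args) {
        // Placeholder names; replace with your own bucket, key, and file.
        String bucketName = "amzn-s3-demo-bucket";
        String keyName = "backups/large-file.bin";
        File file = new File("large-file.bin");
        long partSize = 5L * 1024 * 1024; // 5 MB minimum part size (except the last part)
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard().build();
        // 1. Initiate the multipart upload and save the upload ID.
        InitiateMultipartUploadResult initResult = s3Client.initiateMultipartUpload(
                new InitiateMultipartUploadRequest(bucketName, keyName));
        String uploadId = initResult.getUploadId();
        // 2. Upload parts, recording the part number and ETag of each one.
        List<PartETag> partETags = new ArrayList<PartETag>();
        long filePosition = 0;
        for (int partNumber = 1; filePosition < file.length(); partNumber++) {
            long size = Math.min(partSize, file.length() - filePosition);
            UploadPartRequest uploadRequest = new UploadPartRequest()
                    .withBucketName(bucketName)
                    .withKey(keyName)
                    .withUploadId(uploadId)
                    .withPartNumber(partNumber)
                    .withFileOffset(filePosition)
                    .withFile(file)
                    .withPartSize(size);
            partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());
            filePosition += size;
        }
        // 3. Complete the upload by passing the upload ID and the recorded part ETags.
        s3Client.completeMultipartUpload(
                new CompleteMultipartUploadRequest(bucketName, keyName, uploadId, partETags));
    }
}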
These libraries provide a high-level abstraction that makes uploading multipart objects easy. However, if
your application requires, you can use the REST API directly. The following sections in the Amazon Simple
Storage Service API Reference describe the REST API for multipart upload.
The following sections in the AWS Command Line Interface describe the operations for multipart upload.
The following list describes the required permissions for each multipart upload action.
Create Multipart Upload: You must be allowed to perform the s3:PutObject action on an object to create multipart upload. The bucket owner can allow other principals to perform the s3:PutObject action.
Initiate Multipart Upload: You must be allowed to perform the s3:PutObject action on an object to initiate multipart upload. The bucket owner can allow other principals to perform the s3:PutObject action.
Initiator: Container element that identifies who initiated the multipart upload. If the initiator is an AWS account, this element provides the same information as the Owner element. If the initiator is an IAM User, this element provides the user ARN and display name.
Upload Part: You must be allowed to perform the s3:PutObject action on an object to upload a part. The bucket owner must allow the initiator to perform the s3:PutObject action on an object in order for the initiator to upload a part for that object.
Upload Part (Copy): You must be allowed to perform the s3:PutObject action on an object to upload a part. Because you are uploading a part from an existing object, you must be allowed s3:GetObject on the source object. For the initiator to upload a part for an object, the owner of the bucket must allow the initiator to perform the s3:PutObject action on the object.
Complete Multipart Upload: You must be allowed to perform the s3:PutObject action on an object to complete a multipart upload.
Stop Multipart Upload: You must be allowed to perform the s3:AbortMultipartUpload action to stop a multipart upload. By default, the bucket owner and the initiator of the multipart upload are allowed to perform this action. If the initiator is an IAM user, that user's AWS account is also allowed to stop that multipart upload. In addition to these defaults, the bucket owner can allow other principals to perform the s3:AbortMultipartUpload action on an object. The bucket owner can deny any principal the ability to perform the s3:AbortMultipartUpload action.
List Parts: By default, the bucket owner has permission to list parts for any multipart upload to the bucket. The initiator of the multipart upload has the permission to list parts of the specific multipart upload. If the multipart upload initiator is an IAM user, the AWS account controlling that IAM user also has permission to list parts of that upload. In addition to these defaults, the bucket owner can allow other principals to perform the s3:ListMultipartUploadParts action on an object. The bucket owner can also deny any principal the ability to perform the s3:ListMultipartUploadParts action.
List Multipart Uploads: In addition to the default, the bucket owner can allow other principals to perform the s3:ListBucketMultipartUploads action on the bucket.
AWS KMS Encrypt and Decrypt related permissions: To perform a multipart upload with encryption using an AWS Key Management Service (AWS KMS) customer master key (CMK), the requester must have permission to the kms:Decrypt and kms:GenerateDataKey* actions on the key. These permissions are required because Amazon S3 must decrypt and read data from the encrypted file parts before it completes the multipart upload. For more information, see Uploading a large file to Amazon S3 with encryption using an AWS KMS CMK in the AWS Knowledge Center. If your IAM user or role is in the same AWS account as the AWS KMS CMK, then you must have these permissions on the key policy. If your IAM user or role belongs to a different account than the CMK, then you must have the permissions on both the key policy and your IAM user or role.
For information on the relationship between ACL permissions and permissions in access policies, see
Mapping of ACL permissions and access policy permissions (p. 386). For information on IAM users, go to
Working with Users and Groups.
Topics
• Configuring a bucket lifecycle policy to abort incomplete multipart uploads (p. 77)
• Uploading an object using multipart upload (p. 78)
• Uploading a directory using the high-level .NET TransferUtility class (p. 88)
Amazon S3 supports a bucket lifecycle rule that you can use to direct Amazon S3 to stop multipart
uploads that don't complete within a specified number of days after being initiated. When a multipart
upload is not completed within the timeframe, it becomes eligible for an abort operation and Amazon S3
stops the multipart upload (and deletes the parts associated with the multipart upload).
The following is an example lifecycle configuration that specifies a rule with the
AbortIncompleteMultipartUpload action.
<LifecycleConfiguration>
<Rule>
<ID>sample-rule</ID>
<Prefix></Prefix>
<Status>Enabled</Status>
<AbortIncompleteMultipartUpload>
<DaysAfterInitiation>7</DaysAfterInitiation>
</AbortIncompleteMultipartUpload>
</Rule>
</LifecycleConfiguration>
In the example, the rule does not specify a value for the Prefix element (object key name prefix).
Therefore, it applies to all objects in the bucket for which you initiated multipart uploads. Any multipart
uploads that were initiated and did not complete within seven days become eligible for an abort
operation. The abort action has no effect on completed multipart uploads.
For more information about the bucket lifecycle configuration, see Managing your storage
lifecycle (p. 501).
Note
If the multipart upload is completed within the number of days specified in the rule, the
AbortIncompleteMultipartUpload lifecycle action does not apply (that is, Amazon S3 does
not take any action). Also, this action does not apply to objects. No objects are deleted by this
lifecycle action.
1. Set up the AWS CLI. For instructions, see Developing with Amazon S3 using the AWS CLI (p. 942).
2. Save the following example lifecycle configuration in a file (lifecycle.json). The example
configuration specifies an empty prefix and therefore it applies to all objects in the bucket. You can
specify a prefix to restrict the policy to a subset of objects.
{
"Rules": [
{
"ID": "Test Rule",
"Status": "Enabled",
"Filter": {
"Prefix": ""
},
"AbortIncompleteMultipartUpload": {
"DaysAfterInitiation": 7
}
}
]
}
3. Run the put-bucket-lifecycle-configuration CLI command to set the lifecycle configuration on your bucket. An example command follows this procedure.
4. To verify, retrieve the lifecycle configuration using the get-bucket-lifecycle CLI command.
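For example, assuming your bucket is named amzn-s3-demo-bucket (a placeholder name), the commands for steps 3 and 4 might look like the following.
aws s3api put-bucket-lifecycle-configuration --bucket amzn-s3-demo-bucket --lifecycle-configuration file://lifecycle.json
aws s3api get-bucket-lifecycle --bucket amzn-s3-demo-bucket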
You can upload data from a file or a stream. You can also set advanced options, such as the part size
you want to use for the multipart upload, or the number of concurrent threads you want to use when
uploading the parts. You can also set optional object properties, the storage class, or the access control
list (ACL). You use the PutObjectRequest and the TransferManagerConfiguration classes to set
these advanced options.
When possible, TransferManager tries to use multiple threads to upload multiple parts of a single
upload at once. When dealing with large content sizes and high bandwidth, this can increase throughput
significantly.
In addition to file-upload functionality, the TransferManager class enables you to stop an in-progress
multipart upload. An upload is considered to be in progress after you initiate it and until you complete or
stop it. The TransferManager stops all in-progress multipart uploads on a specified bucket that were
initiated before a specified date and time.
If you need to pause and resume multipart uploads, vary part sizes during the upload, or do not know
the size of the data in advance, use the low-level PHP API. For more information about multipart
uploads, including additional functionality offered by the low-level API methods, see Using the AWS
SDKs (low-level API) (p. 82).
Java
The following example loads an object using the high-level multipart upload Java API (the
TransferManager class). For instructions on creating and testing a working sample, see Testing the
Amazon S3 Java Code Examples (p. 950).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;
import java.io.File;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();
TransferManager tm = TransferManagerBuilder.standard()
.withS3Client(s3Client)
.build();
.NET
To upload a file to an S3 bucket, use the TransferUtility class. When uploading data from a file,
you must provide the object's key name. If you don't, the API uses the file name for the key name.
When uploading data from a stream, you must provide the object's key name.
To set advanced upload options—such as the part size, the number of threads when
uploading the parts concurrently, metadata, the storage class, or ACL—use the
TransferUtilityUploadRequest class.
The following C# example uploads a file to an Amazon S3 bucket in multiple parts. It shows how to
use various TransferUtility.Upload overloads to upload a file. Each successive call to upload
replaces the previous upload. For information about the example's compatibility with a specific
version of the AWS SDK for .NET and instructions for creating and testing a working sample, see
Running the Amazon S3 .NET Code Examples (p. 951).
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class UploadFileMPUHighLevelAPITest
{
private const string bucketName = "*** provide bucket name ***";
private const string keyName = "*** provide a name for the uploaded object ***";
private const string filePath = "*** provide the full path name of the file to upload ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
// Option 1. Upload a file. The file name is used as the object key name.
await fileTransferUtility.UploadAsync(filePath, bucketName);
Console.WriteLine("Upload 1 completed");
await fileTransferUtility.UploadAsync(fileTransferUtilityRequest);
Console.WriteLine("Upload 4 completed");
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
}
}
PHP
The following PHP example uploads a file to an Amazon S3 bucket. The example demonstrates how
to set parameters for the MultipartUploader object.
For information about running the PHP examples in this guide, see Running PHP
Examples (p. 952).
require 'vendor/autoload.php';
use Aws\Common\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;
]);
Java
The following example shows how to use the low-level Java classes to upload a file. It performs the
following steps:
Example
For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();
// Create a list of ETag objects. You retrieve ETags for each object part uploaded,
// then, after each individual part has been uploaded, pass the list of ETags to
// the request to complete the upload.
List<PartETag> partETags = new ArrayList<PartETag>();
// Upload the part and add the response's ETag to our list.
UploadPartResult uploadResult = s3Client.uploadPart(uploadRequest);
partETags.add(uploadResult.getPartETag());
filePosition += partSize;
}
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}
.NET
The following C# example shows how to use the low-level AWS SDK for .NET multipart upload API
to upload a file to an S3 bucket. For information about Amazon S3 multipart uploads, see Uploading
and copying objects using multipart upload (p. 72).
Note
When you use the AWS SDK for .NET API to upload large objects, a timeout might occur
while data is being written to the request stream. You can set an explicit timeout using the
UploadPartRequest.
The following C# example uploads a file to an S3 bucket using the low-level multipart upload API.
For information about the example's compatibility with a specific version of the AWS SDK for .NET
and instructions for creating and testing a working sample, see Running the Amazon S3 .NET Code
Examples (p. 951).
using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class UploadFileMPULowLevelAPITest
{
private const string bucketName = "*** provide bucket name ***";
private const string keyName = "*** provide a name for the uploaded object ***";
private const string filePath = "*** provide the full path name of the file to upload ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
BucketName = bucketName,
Key = keyName
};
// Upload parts.
long contentLength = new FileInfo(filePath).Length;
long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB
try
{
Console.WriteLine("Uploading parts");
long filePosition = 0;
for (int i = 1; filePosition < contentLength; i++)
{
UploadPartRequest uploadRequest = new UploadPartRequest
{
BucketName = bucketName,
Key = keyName,
UploadId = initResponse.UploadId,
PartNumber = i,
PartSize = partSize,
FilePosition = filePosition,
FilePath = filePath
};
filePosition += partSize;
}
UploadId = initResponse.UploadId
};
await s3Client.AbortMultipartUploadAsync(abortMPURequest);
}
}
public static void UploadPartProgressEventCallback(object sender,
StreamTransferProgressArgs e)
{
// Process event.
Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
}
}
}
PHP
This topic shows how to use the low-level uploadPart method from version 3 of the AWS SDK for
PHP to upload a file in multiple parts. It assumes that you are already following the instructions for
Using the AWS SDK for PHP and Running PHP Examples (p. 952) and have the AWS SDK for PHP
properly installed.
The following PHP example uploads a file to an Amazon S3 bucket using the low-level PHP API
multipart upload. For information about running the PHP examples in this guide, see Running PHP
Examples (p. 952).
require 'vendor/autoload.php';
use Aws\S3\S3Client;
$result = $s3->createMultipartUpload([
'Bucket' => $bucket,
'Key' => $keyname,
'StorageClass' => 'REDUCED_REDUNDANCY',
'Metadata' => [
'param1' => 'value 1',
'param2' => 'value 2',
'param3' => 'value 3'
]
]);
$uploadId = $result['UploadId'];
Alternatively, you can use the following multipart upload client operations directly:
For more information, see Using the AWS SDK for Ruby - Version 3 (p. 953).
You can also use the REST API to make your own REST requests, or you can use one of the AWS SDKs. For
more information about the REST API, see Using the REST API (p. 87). For more information about the
SDKs, see Uploading an object using multipart upload (p. 78).
To select files in the specified directory based on filtering criteria, specify filtering expressions. For
example, to upload only the .pdf files from a directory, specify the "*.pdf" filter expression.
When uploading files from a directory, you don't specify the key names for the resulting objects. Amazon
S3 constructs the key names using the original file path. For example, assume that you have a directory
called c:\myfolder with the following structure:
Example
C:\myfolder
\a.txt
\b.pdf
\media\
An.mp3
When you upload this directory, Amazon S3 uses the following key names:
Example
a.txt
b.pdf
media/An.mp3
Example
The following C# example uploads a directory to an Amazon S3 bucket. It shows how to use various
TransferUtility.UploadDirectory overloads to upload the directory. Each successive call to
upload replaces the previous upload. For instructions on how to create and test a working sample, see
Running the Amazon S3 .NET Code Examples (p. 951).
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class UploadDirMPUHighLevelAPITest
{
private const string existingBucketName = "*** bucket name ***";
private const string directoryPath = @"*** directory path ***";
// The example uploads only .txt files.
private const string wildCard = "*.txt";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
static void Main()
{
s3Client = new AmazonS3Client(bucketRegion);
UploadDirAsync().Wait();
}
// 1. Upload a directory.
await directoryTransferUtility.UploadDirectoryAsync(directoryPath,
existingBucketName);
Console.WriteLine("Upload statement 1 completed");
await directoryTransferUtility.UploadDirectoryAsync(request);
Console.WriteLine("Upload statement 3 completed");
}
catch (AmazonS3Exception e)
{
Console.WriteLine(
"Error encountered ***. Message:'{0}' when writing an object",
e.Message);
}
catch (Exception e)
{
Console.WriteLine(
"Unknown encountered on server. Message:'{0}' when writing an object",
e.Message);
}
}
}
}
The following tasks guide you through using the low-level Java classes to list all in-progress
multipart uploads on a bucket.
Example
ListMultipartUploadsRequest allMultpartUploadsRequest =
new ListMultipartUploadsRequest(existingBucketName);
MultipartUploadListing multipartUploadListing =
s3Client.listMultipartUploads(allMultpartUploadsRequest);
.NET
To list all of the in-progress multipart uploads on a specific bucket, use the AWS SDK
for .NET low-level multipart upload API's ListMultipartUploadsRequest class.
The AmazonS3Client.ListMultipartUploads method returns an instance of the
ListMultipartUploadsResponse class that provides information about the in-progress
multipart uploads.
An in-progress multipart upload is a multipart upload that has been initiated using the initiate
multipart upload request, but has not yet been completed or stopped. For more information about
Amazon S3 multipart uploads, see Uploading and copying objects using multipart upload (p. 72).
The following C# example shows how to use the AWS SDK for .NET to list all in-progress multipart
uploads on a bucket. For information about the example's compatibility with a specific version of
the AWS SDK for .NET and instructions on how to create and test a working sample, see Running the
Amazon S3 .NET Code Examples (p. 951).
PHP
This topic shows how to use the low-level API classes from version 3 of the AWS SDK for PHP to
list all in-progress multipart uploads on a bucket. It assumes that you are already following the
instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 952) and have the AWS
SDK for PHP properly installed.
The following PHP example demonstrates listing all in-progress multipart uploads on a bucket.
require 'vendor/autoload.php';
use Aws\S3\S3Client;
Java
Example
The following Java code uploads a file and uses the ProgressListener to track the upload
progress. For instructions on how to create and test a working sample, see Testing the Amazon S3
Java Code Examples (p. 950).
import java.io.File;
import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.event.ProgressEvent;
import com.amazonaws.event.ProgressListener;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;
// You can ask the upload for its progress, or you can
// add a ProgressListener to your request to receive notifications
// when bytes are transferred.
request.setGeneralProgressListener(new ProgressListener() {
@Override
public void progressChanged(ProgressEvent progressEvent) {
System.out.println("Transferred bytes: " +
progressEvent.getBytesTransferred());
}
});
try {
// You can block and wait for the upload to finish
upload.waitForCompletion();
} catch (AmazonClientException amazonClientException) {
System.out.println("Unable to upload file, upload aborted.");
amazonClientException.printStackTrace();
}
}
}
.NET
The following C# example uploads a file to an S3 bucket using the TransferUtility class, and
tracks the progress of the upload. For information about the example's compatibility with a specific
version of the AWS SDK for .NET and instructions for creating and testing a working sample, see
Running the Amazon S3 .NET Code Examples (p. 951).
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class TrackMPUUsingHighLevelAPITest
{
private const string bucketName = "*** provide the bucket name ***";
private const string keyName = "*** provide the name for the uploaded object ***";
private const string filePath = "*** provide the full path name of the file to upload ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
new TransferUtilityUploadRequest
{
BucketName = bucketName,
FilePath = filePath,
Key = keyName
};
uploadRequest.UploadProgressEvent +=
new EventHandler<UploadProgressArgs>
(uploadRequest_UploadPartProgressEvent);
await fileTransferUtility.UploadAsync(uploadRequest);
Console.WriteLine("Upload completed");
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
You are billed for all storage associated with uploaded parts. For more information, see Multipart upload
and pricing (p. 74). So it's important that you either complete the multipart upload to have the object
created or stop the multipart upload to remove any uploaded parts.
You can stop an in-progress multipart upload in Amazon S3 using the AWS Command Line Interface
(AWS CLI), REST API, or AWS SDKs. You can also stop an incomplete multipart upload using a bucket
lifecycle policy.
The following tasks guide you through using the high-level Java classes to stop multipart uploads.
The following Java code stops all multipart uploads in progress that were initiated on a specific
bucket over a week ago. For instructions on how to create and test a working sample, see Testing the
Amazon S3 Java Code Examples (p. 950).
import java.util.Date;
import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.transfer.TransferManager;
try {
tm.abortMultipartUploads(existingBucketName, oneWeekAgo);
} catch (AmazonClientException amazonClientException) {
System.out.println("Unable to upload file, upload was aborted.");
amazonClientException.printStackTrace();
}
}
}
Note
You can also stop a specific multipart upload. For more information, see Using the AWS
SDKs (low-level API) (p. 96).
.NET
The following C# example stops all in-progress multipart uploads that were initiated on a specific
bucket over a week ago. For information about the example's compatibility with a specific version of
the AWS SDK for .NET and instructions on creating and testing a working sample, see Running the
Amazon S3 .NET Code Examples (p. 951).
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class AbortMPUUsingHighLevelAPITest
{
private const string bucketName = "*** provide bucket name ***";
Note
You can also stop a specific multipart upload. For more information, see Using the AWS
SDKs (low-level API) (p. 96).
To stop a multipart upload, you provide the upload ID, and the bucket and key names that are used in
the upload. After you have stopped a multipart upload, you can't use the upload ID to upload additional
parts. For more information about Amazon S3 multipart uploads, see Uploading and copying objects
using multipart upload (p. 72).
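As a minimal sketch of the stop operation itself, the following AWS SDK for Java snippet aborts a single multipart upload. The bucket name, key name, and upload ID are placeholders; use the upload ID that Amazon S3 returned when you initiated the upload. Credentials and a default Region are assumed to be configured.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.AbortMultipartUploadRequest;
public class StopMultipartUpload {
    public static void main(String[] args) {
        // Placeholders; use the upload ID that Amazon S3 returned when you initiated the upload.
        String bucketName = "amzn-s3-demo-bucket";
        String keyName = "backups/large-file.bin";
        String uploadId = "EXAMPLE-UPLOAD-ID";
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard().build();
        // After this call succeeds, the upload ID can no longer be used to upload parts,
        // and storage for the uploaded parts is freed.
        s3Client.abortMultipartUpload(
                new AbortMultipartUploadRequest(bucketName, keyName, uploadId));
    }
}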
Java
Example
InitiateMultipartUploadRequest initRequest =
new InitiateMultipartUploadRequest(existingBucketName, keyName);
InitiateMultipartUploadResult initResponse =
s3Client.initiateMultipartUpload(initRequest);
Note
Instead of a specific multipart upload, you can stop all your multipart uploads initiated
before a specific time that are still in progress. This clean-up operation is useful to stop old
multipart uploads that you initiated but did not complete or stop. For more information,
see Using the AWS SDKs (high-level API) (p. 94).
.NET
The following C# example shows how to stop a multipart upload. For a complete C# sample that
includes the following code, see Using the AWS SDKs (low-level API) (p. 82).
You can also abort all in-progress multipart uploads that were initiated prior to a specific time. This
clean-up operation is useful for stopping multipart uploads that you initiated but didn't complete or stop.
For more information, see Using the AWS SDKs (high-level API) (p. 94).
PHP
This example shows how to use a class from version 3 of the AWS SDK for PHP to abort a multipart
upload that is in progress. It assumes that you are already following the instructions for Using the
AWS SDK for PHP and Running PHP Examples (p. 952) and have the AWS SDK for PHP properly
installed. The example uses the abortMultipartUpload() method.
For information about running the PHP examples in this guide, see Running PHP
Examples (p. 952).
require 'vendor/autoload.php';
use Aws\S3\S3Client;
Java
Example
The following example shows how to use the Amazon S3 low-level Java API to perform a multipart
copy. For instructions on creating and testing a working sample, see Testing the Amazon S3 Java
Code Examples (p. 950).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();
// Get the object size to track the end of the copy operation.
GetObjectMetadataRequest metadataRequest = new
GetObjectMetadataRequest(sourceBucketName, sourceObjectKey);
ObjectMetadata metadataResult =
s3Client.getObjectMetadata(metadataRequest);
long objectSize = metadataResult.getContentLength();
// Complete the upload request to concatenate all uploaded parts and make
// the copied object available.
CompleteMultipartUploadRequest completeRequest = new
CompleteMultipartUploadRequest(
destBucketName,
destObjectKey,
initResult.getUploadId(),
getETags(copyResponses));
s3Client.completeMultipartUpload(completeRequest);
System.out.println("Multipart copy complete.");
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
.NET
The following C# example shows how to use the AWS SDK for .NET to copy an Amazon S3 object
that is larger than 5 GB from one source location to another, such as from one bucket to another. To
copy objects that are smaller than 5 GB, use the single-operation copy procedure described in Using
the AWS SDKs (p. 105). For more information about Amazon S3 multipart uploads, see Uploading
and copying objects using multipart upload (p. 72).
This example shows how to copy an Amazon S3 object that is larger than 5 GB from one S3
bucket to another using the AWS SDK for .NET multipart upload API. For information about SDK
compatibility and instructions for creating and testing a working sample, see Running the Amazon
S3 .NET Code Examples (p. 951).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class CopyObjectUsingMPUapiTest
{
private const string sourceBucket = "*** provide the name of the bucket with source object ***";
private const string targetBucket = "*** provide the name of the bucket to copy the object to ***";
private const string sourceObjectKey = "*** provide the name of object to copy ***";
private const string targetObjectKey = "*** provide the name of the object copy ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
try
{
// Get the size of the object.
GetObjectMetadataRequest metadataRequest = new GetObjectMetadataRequest
{
BucketName = sourceBucket,
Key = sourceObjectKey
};
GetObjectMetadataResponse metadataResponse =
await s3Client.GetObjectMetadataAsync(metadataRequest);
long objectSize = metadataResponse.ContentLength; // Length in bytes.
long bytePosition = 0;
for (int i = 1; bytePosition < objectSize; i++)
{
CopyPartRequest copyRequest = new CopyPartRequest
{
DestinationBucket = targetBucket,
DestinationKey = targetObjectKey,
SourceBucket = sourceBucket,
SourceKey = sourceObjectKey,
UploadId = uploadId,
FirstByte = bytePosition,
LastByte = bytePosition + partSize - 1 >= objectSize ?
objectSize - 1 : bytePosition + partSize - 1,
PartNumber = i
};
copyResponses.Add(await s3Client.CopyPartAsync(copyRequest));
bytePosition += partSize;
}
You can use these APIs to make your own REST requests, or you can use one of the SDKs we provide. For
more information about using Multipart Upload with the AWS CLI, see Using the AWS CLI (p. 88). For
more information about the SDKs, see API support for multipart upload (p. 74).
Item: Part size
Specification: 5 MB to 5 GB. There is no size limit on the last part of your multipart upload.
Copying objects
The copy operation creates a copy of an object that is already stored in Amazon S3. You can create
a copy of your object up to 5 GB in a single atomic operation. However, for copying an object that is
greater than 5 GB, you must use the multipart upload API. Using the copy operation, you can:
Each Amazon S3 object has metadata. It is a set of name-value pairs. You can set object metadata at
the time you upload it. After you upload the object, you cannot modify object metadata. The only way
to modify object metadata is to make a copy of the object and set the metadata. In the copy operation
you set the same object as the source and target.
Each object has metadata. Some of it is system metadata, and the rest is user-defined. You control some
of the system metadata, such as the storage class to use for the object and whether to configure server-side
encryption. When you copy an object, user-controlled system metadata and user-defined metadata are
also copied. Amazon S3 resets the system-controlled metadata. For example, when you copy an object,
Amazon S3 resets the creation date of the copied object. You don't need to set any of these values in
your copy request.
When copying an object, you might decide to update some of the metadata values. For example, if
your source object is configured to use standard storage, you might choose to use reduced redundancy
storage for the object copy. You might also decide to alter some of the user-defined metadata values
present on the source object. Note that if you choose to update any of the object's user-configurable
metadata (system or user-defined) during the copy, then you must explicitly specify all of the user-
configurable metadata present on the source object in your request, even if you are changing only one
of the metadata values.
For more information about the object metadata, see Working with object metadata (p. 60).
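The following is a minimal AWS SDK for Java sketch of a copy that replaces metadata. Because metadata is being changed, the request specifies all of the user-configurable metadata that the copy should carry, not only the values being changed. The bucket names, key names, and metadata values are placeholders; credentials and a default Region are assumed to be configured.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
public class CopyObjectReplacingMetadata {
    public static void main(String[] args) {
        // Placeholder names for illustration only.
        String sourceBucket = "amzn-s3-demo-bucket";
        String sourceKey = "reports/q1.pdf";
        String destBucket = "amzn-s3-demo-bucket";
        String destKey = "reports/q1-copy.pdf";
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard().build();
        // Because metadata is being changed, specify all of the user-configurable
        // metadata that the copy should carry, not only the values being changed.
        ObjectMetadata newMetadata = new ObjectMetadata();
        newMetadata.setContentType("application/pdf");
        newMetadata.addUserMetadata("department", "finance");
        CopyObjectRequest copyRequest =
                new CopyObjectRequest(sourceBucket, sourceKey, destBucket, destKey);
        copyRequest.setNewObjectMetadata(newMetadata);
        s3Client.copyObject(copyRequest);
    }
}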
Note
When copying objects, you can request Amazon S3 to save the target object encrypted with an AWS
Key Management Service (AWS KMS) customer master key (CMK), an Amazon S3-managed encryption
key, or a customer-provided encryption key. Accordingly, you must specify encryption information in
your request. If the copy source is an object that is stored in Amazon S3 using server-side encryption
with a customer-provided key, you must provide encryption information in your request so
Amazon S3 can decrypt the object for copying. For more information, see Protecting data using
encryption (p. 157).
To copy more than one Amazon S3 object with a single request, you can use Amazon S3 batch
operations. You provide S3 Batch Operations with a list of objects to operate on. S3 Batch Operations
calls the respective API to perform the specified operation. A single Batch Operations job can perform
the specified operation on billions of objects containing exabytes of data.
The S3 Batch Operations feature tracks progress, sends notifications, and stores a detailed completion
report of all actions, providing a fully managed, auditable, serverless experience. You can use S3
Batch Operations through the AWS Management Console, AWS CLI, AWS SDKs, or REST API. For more
information, see the section called “Batch Ops basics” (p. 662).
To copy an object, use the examples below.
To copy an object
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Navigate to the Amazon S3 bucket or folder that contains the objects that you want to copy.
3. Select the check box to the left of the names of the objects that you want to copy.
4. Choose Actions and choose Copy from the list of options that appears.
To move objects
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Navigate to the Amazon S3 bucket or folder that contains the objects that you want to move.
3. Select the check box to the left of the names of the objects that you want to move.
4. Choose Actions and choose Move from the list of options that appears.
Note
• This action creates a copy of all specified objects with updated settings, updates the last-
modified date in the specified location, and adds a delete marker to the original object.
• When moving folders, wait for the move action to finish before making additional changes in
the folders.
• Objects encrypted with customer-provided encryption keys (SSE-C) cannot be copied using
the S3 console. To copy objects encrypted with SSE-C, use the AWS CLI, AWS SDK, or the
Amazon S3 REST API.
• This action updates metadata for bucket versioning, encryption, Object Lock features, and
archived objects.
Java
Example
The following example copies an object in Amazon S3 using the AWS SDK for Java. For instructions
on creating and testing a working sample, see Testing the Amazon S3 Java Code Examples (p. 950).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import java.io.IOException;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();
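// The remainder of the sample is sketched here; the bucket and key names are
// placeholders for values defined earlier in the full sample.
CopyObjectRequest copyObjRequest = new CopyObjectRequest(
        sourceBucketName, sourceKey, destinationBucketName, destinationKey);
s3Client.copyObject(copyObjRequest);
} catch (AmazonServiceException e) {
    // The call was transmitted successfully, but Amazon S3 couldn't process it.
    e.printStackTrace();
} catch (SdkClientException e) {
    // Amazon S3 couldn't be contacted, or the client couldn't parse the response.
    e.printStackTrace();
}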
.NET
The following C# example uses the high-level AWS SDK for .NET to copy objects that are as large
as 5 GB in a single operation. For objects that are larger than 5 GB, use the multipart upload copy
example described in Copying an object using multipart upload (p. 98).
This example makes a copy of an object that is a maximum of 5 GB. For information about the
example's compatibility with a specific version of the AWS SDK for .NET and instructions on how to
create and test a working sample, see Running the Amazon S3 .NET Code Examples (p. 951).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class CopyObjectTest
{
private const string sourceBucket = "*** provide the name of the bucket with
source object ***";
private const string destinationBucket = "*** provide the name of the bucket to
copy the object to ***";
private const string objectKey = "*** provide the name of object to copy ***";
private const string destObjectKey = "*** provide the destination object key
name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
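// A minimal sketch of the copy call that the declarations above support
// (the Main method and console output of the full sample are omitted here).
private static async Task CopyingObjectAsync()
{
    try
    {
        CopyObjectRequest request = new CopyObjectRequest
        {
            SourceBucket = sourceBucket,
            SourceKey = objectKey,
            DestinationBucket = destinationBucket,
            DestinationKey = destObjectKey
        };
        CopyObjectResponse response = await s3Client.CopyObjectAsync(request);
    }
    catch (AmazonS3Exception e)
    {
        Console.WriteLine("Error encountered on server. Message:'{0}' when copying an object", e.Message);
    }
}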
PHP
This topic guides you through using classes from version 3 of the AWS SDK for PHP to copy a single
object and multiple objects within Amazon S3, from one bucket to another or within the same
bucket.
This topic assumes that you are already following the instructions for Using the AWS SDK for PHP
and Running PHP Examples (p. 952) and have the AWS SDK for PHP properly installed.
The following PHP example illustrates the use of the copyObject() method to copy a single object
within Amazon S3, and the use of a batch of calls to CopyObject (through the getCommand() method)
to make multiple copies of an object.
Copying objects
2. To make multiple copies of an object, you run a batch of calls to the Amazon S3 client getCommand() method, which is inherited from the Aws\AwsClientInterface class. You provide the CopyObject command as the first argument and an array containing the source bucket, source key name, target bucket, and target key name as the second argument.
require 'vendor/autoload.php';
use Aws\S3\S3Client;
// Copy an object.
$s3->copyObject([
'Bucket' => $targetBucket,
'Key' => "{$sourceKeyname}-copy",
'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
]);
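// Sketch of the batch-copy step whose closing braces appear below.
// The $targetKeyname variable and the loop count are placeholders; the full
// sample handles each result inside the foreach loop.
$batch = array();
for ($i = 1; $i <= 3; $i++) {
    $batch[] = $s3->getCommand('CopyObject', [
        'Bucket'     => $targetBucket,
        'Key'        => "{$targetKeyname}-{$i}",
        'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
    ]);
}
try {
    $results = \Aws\CommandPool::batch($s3, $batch);
    foreach ($results as $result) {
        if ($result instanceof \Aws\ResultInterface) {
            // One copy succeeded.
        }
        if ($result instanceof \Aws\Exception\AwsException) {
            // One copy failed; inspect $result for details.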
}
}
} catch (\Exception $e) {
// General error handling here
}
Ruby
The following tasks guide you through using the Ruby classes to copy an object in Amazon S3 from
one bucket to another or within the same bucket.
Copying objects
1. Use the Amazon S3 modularized gem for version 3 of the AWS SDK for Ruby, require 'aws-sdk-s3', and provide your AWS credentials. For more information about how to provide your credentials, see Making requests using AWS account or IAM user credentials (p. 909).
2. Provide the request information, such as source bucket name, source key name, destination bucket name, and destination key.
The following Ruby code example demonstrates the preceding tasks using the #copy_object
method to copy an object from one bucket to another.
require 'aws-sdk-s3'
This example copies the flotsam object from the pacific bucket to the jetsam object of the
atlantic bucket, preserving its metadata.
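A minimal sketch of that copy with the modularized SDK might look like the following. The Region is a
placeholder, credentials are assumed to be configured for the SDK, and metadata is copied by default.
require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-west-2')

# Copy pacific/flotsam to atlantic/jetsam, preserving its metadata.
s3.copy_object(
  bucket: 'atlantic',
  copy_source: 'pacific/flotsam',
  key: 'jetsam'
)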
PUT\r\n
\r\n
\r\n
Wed, 20 Feb 2008 22:12:21 +0000\r\n
x-amz-copy-source:/pacific/flotsam\r\n
/atlantic/jetsam
Amazon S3 returns the following response that specifies the ETag of the object and when it was last
modified.
HTTP/1.1 200 OK
x-amz-id-2: Vyaxt7qEbzv34BnSu5hctyyNSlHTYZFMWK4FtzO+iX8JQNyaLdTshL0KxatbaOZt
x-amz-request-id: 6B13C3C5B34AF333
Date: Wed, 20 Feb 2008 22:13:01 +0000
Content-Type: application/xml
Transfer-Encoding: chunked
Connection: close
Server: AmazonS3
<?xml version="1.0" encoding="UTF-8"?>
<CopyObjectResult>
<LastModified>2008-02-20T22:13:01</LastModified>
<ETag>"7e9c608af58950deeb370c98608ed097"</ETag>
</CopyObjectResult>
Downloading an object
This section explains how to download objects from an S3 bucket.
Data transfer fees apply when you download objects. For information about Amazon S3 features and
pricing, see Amazon S3.
Important
If an object key name consists of a single period (.), or two periods (..), you can’t download the
object using the Amazon S3 console. To download an object with a key name of “.” or “..”, you
must use the AWS CLI, AWS SDKs, or REST API. For more information about naming objects, see
Object key naming guidelines (p. 58).
You can download a single object per request using the Amazon S3 console. To download multiple
objects, use the AWS CLI, AWS SDKs, or REST API.
When you download an object programmatically, its metadata is returned in the response headers. There
are times when you want to override certain response header values returned in a GET response. For
example, you might override the Content-Disposition response header value in your GET request.
The REST GET Object API (see GET Object) allows you to specify query string parameters in your GET
request to override these values. The AWS SDKs for Java, .NET, and PHP also provide necessary objects
you can use to specify values for these response headers in your GET request.
When retrieving objects that are stored encrypted using server-side encryption, you must provide
appropriate request headers. For more information, see Protecting data using encryption (p. 157).
Data transfer fees apply when you download objects. For information about Amazon S3 features and
pricing, see Amazon S3.
Important
If an object key name consists of a single period (.), or two periods (..), you can’t download the
object using the Amazon S3 console. To download an object with a key name of “.” or “..”, you
must use the AWS CLI, AWS SDKs, or REST API. For more information about naming objects, see
Object key naming guidelines (p. 58).
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to download an object from.
3. You can download an object from an S3 bucket in any of the following ways:
When you download an object through the AWS SDK for Java, Amazon S3 returns all of the object's
metadata and an input stream from which to read the object's contents.
• Execute the AmazonS3Client.getObject() method, providing the bucket name and object key
in the request.
• Execute one of the S3Object instance methods to process the input stream.
Note
Your network connection remains open until you read all of the data or close the input
stream. We recommend that you read the content of the stream as quickly as possible.
• Instead of reading the entire object, you can read only a portion of the object data by specifying
the byte range that you want in the request.
• You can optionally override the response header values by using a ResponseHeaderOverrides
object and setting the corresponding request property. For example, you can use this feature to
indicate that the object should be downloaded into a file with a different file name than the object
key name.
The following example retrieves an object from an Amazon S3 bucket in three ways: first as a
complete object, then as a range of bytes from the object, and finally as a complete object with
overridden response header values. For more information about getting objects from Amazon S3, see GET
Object. For instructions on creating and testing a working sample, see Testing the Amazon S3 Java
Code Examples (p. 950).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ResponseHeaderOverrides;
import com.amazonaws.services.s3.model.S3Object;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
displayTextInputStream(objectPortion.getObjectContent());
.NET
When you download an object, you get all of the object's metadata and a stream from which to read
the contents. You should read the content of the stream as quickly as possible because the data is
streamed directly from Amazon S3 and your network connection will remain open until you read all
the data or close the input stream. You do the following to get an object:
• Execute the getObject method by providing bucket name and object key in the request.
• Execute one of the GetObjectResponse methods to process the stream.
• Instead of reading the entire object, you can read only a portion of the object data by specifying
the byte range in the request, as shown in the following C# example:
Example
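A minimal sketch of such a ranged GET follows; bucketName, keyName, and the client variable are
placeholders matching the declarations used elsewhere in this section.
GetObjectRequest request = new GetObjectRequest
{
    BucketName = bucketName,
    Key = keyName,
    // Retrieve only the first 11 bytes of the object.
    ByteRange = new ByteRange(0, 10)
};
using (GetObjectResponse response = await client.GetObjectAsync(request))
{
    // Read the partial content from response.ResponseStream here.
}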
• When retrieving an object, you can optionally override the response header values (see
Downloading an object (p. 109)) by using the ResponseHeaderOverrides object and setting
the corresponding request property. The following C# code example shows how to do this. For
example, you can use this feature to indicate that the object should be downloaded into a file with
a different file name than the object key name.
Example
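A minimal sketch of the override setup that the line below completes; the header values, bucketName,
and keyName are placeholders.
ResponseHeaderOverrides responseHeaders = new ResponseHeaderOverrides();
responseHeaders.CacheControl = "No-cache";
responseHeaders.ContentDisposition = "attachment; filename=testing.txt";

GetObjectRequest request = new GetObjectRequest
{
    BucketName = bucketName,
    Key = keyName
};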
request.ResponseHeaderOverrides = responseHeaders;
Example
The following C# code example retrieves an object from an Amazon S3 bucket. From the response,
the example reads the object data using the GetObjectResponse.ResponseStream property.
The example also shows how you can use the GetObjectResponse.Metadata collection to read
object metadata. If the object you retrieve has the x-amz-meta-title metadata, the code prints
the metadata value.
For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 951).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.IO;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class GetObjectTest
{
private const string bucketName = "*** bucket name ***";
private const string keyName = "*** object key ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;
{
client = new AmazonS3Client(bucketRegion);
ReadObjectDataAsync().Wait();
}
PHP
This topic explains how to use a class from the AWS SDK for PHP to retrieve an Amazon S3 object.
You can retrieve an entire object or a byte range from the object. We assume that you are already
following the instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 952) and
have the AWS SDK for PHP properly installed.
When retrieving an object, you can optionally override the response header values by
adding the response keys, ResponseContentType, ResponseContentLanguage,
ResponseContentDisposition, ResponseCacheControl, and ResponseExpires, to the
getObject() method, as shown in the following PHP code example:
Example
$result = $s3->getObject([
    'Bucket' => $bucket,
    'Key' => $keyname,
    // Example override; any of the Response* keys listed above can be added here.
    'ResponseContentType' => 'text/plain',
]);
For more information about retrieving objects, see Downloading an object (p. 109).
The following PHP example retrieves an object and displays the content of the object in the browser.
The example shows how to use the getObject() method. For information about running the PHP
examples in this guide, see Running PHP Examples (p. 952).
require 'vendor/autoload.php';
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;
try {
// Get the object.
$result = $s3->getObject([
'Bucket' => $bucket,
'Key' => $keyname
]);
For more information about the request and response format, see Get Object.
For information about Amazon S3 features and pricing, see Amazon S3 pricing.
• Delete a single object — Amazon S3 provides the DELETE API that you can use to delete one object in
a single HTTP request.
• Delete multiple objects — Amazon S3 provides the Multi-Object Delete API that you can use to delete
up to 1,000 objects in a single HTTP request.
When deleting objects from a bucket that is not version-enabled, you provide only the object key name.
However, when deleting objects from a version-enabled bucket, you can optionally provide the version ID
of the object to delete a specific version of the object.
• Specify a non-versioned delete request — Specify only the object's key, and not the version ID. In
this case, Amazon S3 creates a delete marker and returns its version ID in the response. This makes
your object disappear from the bucket. For information about object versioning and the delete marker
concept, see Using versioning in S3 buckets (p. 453).
• Specify a versioned delete request — Specify both the key and also a version ID. In this case the
following two outcomes are possible:
• If the version ID maps to a specific object version, Amazon S3 deletes the specific version of the
object.
• If the version ID maps to the delete marker of that object, Amazon S3 deletes the delete marker.
This makes the object reappear in your bucket.
• If you have an MFA-enabled bucket, and you make a non-versioned delete request (you are not
deleting a versioned object), and you don't provide an MFA token, the delete succeeds.
• If you have a Multi-Object Delete request specifying only non-versioned objects to delete from an
MFA-enabled bucket, and you don't provide an MFA token, the deletions succeed.
For information about MFA delete, see Configuring MFA delete (p. 460).
Topics
• Deleting a single object (p. 117)
• Deleting multiple objects (p. 123)
Because all objects in your S3 bucket incur storage costs, you should delete objects that you no longer
need. For example, if you are collecting log files, it's a good idea to delete them when they're no longer
needed. You can set up a lifecycle rule to automatically delete objects such as log files. For more
information, see the section called “Setting lifecycle configuration” (p. 507).
For information about Amazon S3 features and pricing, see Amazon S3 pricing.
To delete an object
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to delete an object from.
3. Choose the name of the object that you want to delete.
4. To delete the current version of the object, choose Latest version, and choose the trash can icon.
5. To delete a previous version of the object, choose Latest version, and choose the trash can icon
beside the version that you want to delete.
If you have S3 Versioning enabled on the bucket, you have the following options:
For more information about S3 Versioning, see Using versioning in S3 buckets (p. 453).
Java
For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;
import com.amazonaws.services.s3.model.DeleteObjectsResult;
import java.io.IOException;
import java.util.ArrayList;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.build();
1. Adds a sample object to the bucket. Amazon S3 returns the version ID of the newly added object.
The example uses this version ID in the delete request.
2. Deletes the object version by specifying both the object key name and a version ID. If there are no
other versions of that object, Amazon S3 deletes the object entirely. Otherwise, Amazon S3 only
deletes the specified version.
Note
You can get the version IDs of an object by sending a ListVersions request.
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
import com.amazonaws.services.s3.model.DeleteVersionRequest;
import com.amazonaws.services.s3.model.PutObjectResult;
import java.io.IOException;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();
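// A sketch of the steps described above (bucketName and keyName are
// placeholders; the full sample also enables versioning on the bucket).
// 1. Add a sample object and capture the version ID that Amazon S3 returns.
PutObjectResult putResult = s3Client.putObject(bucketName, keyName,
        "Sample content for a versioned delete.");
String versionId = putResult.getVersionId();

// 2. Delete that specific version of the object.
s3Client.deleteVersion(new DeleteVersionRequest(bucketName, keyName, versionId));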
.NET
The following examples show how to delete an object from both versioned and non-versioned
buckets. For more information about S3 Versioning, see Using versioning in S3 buckets (p. 453).
The following C# example deletes an object from a non-versioned bucket. The example assumes that
the objects don't have version IDs, so you don't specify version IDs. You specify only the object key.
For information about how to create and test a working sample, see Running the Amazon S3 .NET
Code Examples (p. 951).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class DeleteObjectNonVersionedBucketTest
{
private const string bucketName = "*** bucket name ***";
private const string keyName = "*** object key ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;
Console.WriteLine("Deleting an object");
await client.DeleteObjectAsync(deleteObjectRequest);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
deleting an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
deleting an object", e.Message);
}
}
}
}
The following C# example deletes an object from a versioned bucket. It deletes a specific version of
the object by specifying the object key name and version ID.
1. Enables S3 Versioning on a bucket that you specify (if S3 Versioning is already enabled, this has
no effect).
2. Adds a sample object to the bucket. In response, Amazon S3 returns the version ID of the newly
added object. The example uses this version ID in the delete request.
3. Deletes the sample object by specifying both the object key name and a version ID.
Note
You can also get the version ID of an object by sending a ListVersions request.
For information about how to create and test a working sample, see Running the Amazon S3 .NET
Code Examples (p. 951).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class DeleteObjectVersion
{
private const string bucketName = "*** versioning-enabled bucket name ***";
private const string keyName = "*** Object Key Name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;
PHP
This example shows how to use classes from version 3 of the AWS SDK for PHP to delete an object
from a non-versioned bucket. For information about deleting an object from a versioned bucket, see
Using the REST API (p. 123).
This example assumes that you are already following the instructions for Using the AWS SDK for
PHP and Running PHP Examples (p. 952) and have the AWS SDK for PHP properly installed. For
information about running the PHP examples in this guide, see Running PHP Examples (p. 952).
The following PHP example deletes an object from a bucket. Because this example shows how to
delete objects from non-versioned buckets, it provides only the bucket name and object key (not a
version ID) in the delete request.
<?php
require 'vendor/autoload.php';
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;
$result = $s3->deleteObject([
'Bucket' => $bucket,
'Key' => $keyname
]);
if ($result['DeleteMarker'])
{
echo $keyname . ' was deleted or does not exist.' . PHP_EOL;
} else {
exit('Error: ' . $keyname . ' was not deleted.' . PHP_EOL);
}
}
catch (S3Exception $e) {
exit('Error: ' . $e->getAwsErrorMessage() . PHP_EOL);
}
$result = $s3->getObject([
'Bucket' => $bucket,
'Key' => $keyname
]);
To delete one object per request, use the DELETE API. For more information, see DELETE Object. For
more information about using the CLI to delete an object, see delete-object.
For information about Amazon S3 features and pricing, see Amazon S3 pricing.
You can use the Amazon S3 console or the Multi-Object Delete API to delete multiple objects
simultaneously from an S3 bucket.
To delete objects
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Navigate to the Amazon S3 bucket or folder that contains the objects that you want to delete.
3. Select the check box to the left of the names of the objects that you want to delete.
4. Choose Actions and choose Delete from the list of options that appears.
5. Enter delete if asked to confirm that you want to delete these objects.
6. Choose Delete objects in the bottom right and Amazon S3 deletes the specified objects.
Warning
To learn more about object deletion, see Deleting Amazon S3 objects (p. 115).
Java
The AWS SDK for Java provides the AmazonS3Client.deleteObjects() method for deleting
multiple objects. For each object that you want to delete, you specify the key name. If the bucket is
versioning-enabled, you have the following options:
• Specify only the object's key name. Amazon S3 adds a delete marker to the object.
• Specify both the object's key name and a version ID to be deleted. Amazon S3 deletes the
specified version of the object.
Example
The following example uses the Multi-Object Delete API to delete objects from a bucket that
is not version-enabled. The example uploads sample objects to the bucket and then uses the
AmazonS3Client.deleteObjects() method to delete the objects in a single request. In the
DeleteObjectsRequest, the example specifies only the object key names because the objects do
not have version IDs.
For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;
import com.amazonaws.services.s3.model.DeleteObjectsResult;
import java.io.IOException;
import java.util.ArrayList;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.build();
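// A sketch of the multi-object delete described above (bucketName and the
// key names are placeholders; the upload of the sample objects is omitted).
ArrayList<KeyVersion> keys = new ArrayList<KeyVersion>();
keys.add(new KeyVersion("sample-object-1"));
keys.add(new KeyVersion("sample-object-2"));

DeleteObjectsRequest multiObjectDeleteRequest = new DeleteObjectsRequest(bucketName)
        .withKeys(keys)
        .withQuiet(false);
DeleteObjectsResult delObjRes = s3Client.deleteObjects(multiObjectDeleteRequest);
System.out.println(delObjRes.getDeletedObjects().size() + " objects successfully deleted.");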
Example
The following example uses the Multi-Object Delete API to delete objects from a version-enabled
bucket. It does the following:
1. Creates sample objects and then deletes them, specifying the key name and version ID for each
object to delete. The operation deletes only the specified object versions.
2. Creates sample objects and then deletes them by specifying only the key names. Because the
example doesn't specify version IDs, the operation adds a delete marker to each object, without
deleting any specific object versions. After the delete markers are added, these objects will not
appear in the AWS Management Console.
3. Removes the delete markers by specifying the object keys and version IDs of the delete markers.
The operation deletes the delete markers, which results in the objects reappearing in the AWS
Management Console.
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;
import com.amazonaws.services.s3.model.DeleteObjectsResult;
import com.amazonaws.services.s3.model.DeleteObjectsResult.DeletedObject;
import com.amazonaws.services.s3.model.PutObjectResult;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
try {
S3_CLIENT = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();
// Delete the delete markers, leaving the objects intact in the bucket.
.NET
The AWS SDK for .NET provides a convenient method for deleting multiple objects:
DeleteObjects. For each object that you want to delete, you specify the key name and the version
of the object. If the bucket is not versioning-enabled, you specify null for the version ID. If an
exception occurs, review the DeleteObjectsException response to determine which objects were
not deleted and why.
The following C# example uses the multi-object delete API to delete objects from a bucket that
is not version-enabled. The example uploads the sample objects to the bucket, and then uses the
DeleteObjects method to delete the objects in a single request. In the DeleteObjectsRequest,
the example specifies only the object key names because the version IDs are null.
For information about creating and testing a working sample, see Running the Amazon S3 .NET
Code Examples (p. 951).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class DeleteMultipleObjectsNonVersionedBucketTest
{
private const string bucketName = "*** bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
The following C# example uses the multi-object delete API to delete objects from a version-enabled
bucket. The example performs the following actions:
1. Creates sample objects and deletes them by specifying the key name and version ID for each
object. The operation deletes specific versions of the objects.
2. Creates sample objects and deletes them by specifying only the key names. Because the example
doesn't specify version IDs, the operation only adds delete markers. It doesn't delete any specific
versions of the objects. After deletion, these objects don't appear in the Amazon S3 console.
3. Deletes the delete markers by specifying the object keys and version IDs of the delete markers.
When the operation deletes the delete markers, the objects reappear in the console.
For information about creating and testing a working sample, see Running the Amazon S3 .NET
Code Examples (p. 951).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class DeleteMultipleObjVersionedBucketTest
{
private const string bucketName = "*** versioning-enabled bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
// Delete objects using only keys. Amazon S3 creates a delete marker and
// returns its version ID in the response.
List<DeletedObject> deletedObjects = await
NonVersionedDeleteAsync(keysAndVersions2);
return deletedObjects;
}
catch (DeleteObjectsException e)
{
PrintDeletionReport(e);
throw; // Some deletes failed. Investigate before continuing.
}
// This response contains the DeletedObjects list which we use to delete
the delete markers.
return response.DeletedObjects;
}
// Now, delete the delete marker to bring your objects back to the bucket.
try
{
Console.WriteLine("Removing the delete markers .....");
var deleteObjectResponse = await
s3Client.DeleteObjectsAsync(multiObjectDeleteRequest);
Console.WriteLine("Successfully deleted all the {0} delete markers",
deleteObjectResponse.DeletedObjects.Count);
}
catch (DeleteObjectsException e)
{
PrintDeletionReport(e);
}
}
};
};
keys.Add(keyVersion);
}
return keys;
}
}
}
PHP
These examples show how to use classes from version 3 of the AWS SDK for PHP to delete multiple
objects from versioned and non-versioned Amazon S3 buckets. For more information about
versioning, see Using versioning in S3 buckets (p. 453).
The examples assume that you are already following the instructions for Using the AWS SDK for PHP
and Running PHP Examples (p. 952) and have the AWS SDK for PHP properly installed.
For information about running the PHP examples in this guide, see Running PHP
Examples (p. 952).
<?php
require 'vendor/autoload.php';
use Aws\S3\S3Client;
]
]);
}
The following PHP example uses the deleteObjects() method to delete multiple objects from a
version-enabled bucket.
For information about running the PHP examples in this guide, see Running PHP
Examples (p. 952).
<?php
require 'vendor/autoload.php';
use Aws\S3\S3Client;
// 3. List the objects versions and get the keys and version IDs.
$versions = $s3->listObjectVersions(['Bucket' => $bucket]);
]);
if (isset($result['Deleted']))
{
$deleted = true;
if (isset($result['Errors']))
{
$errors = true;
if ($deleted)
{
echo $deletedResults;
}
if ($errors)
{
echo $errorResults;
}
For more information, see Delete Multiple Objects in the Amazon Simple Storage Service API Reference.
In the Amazon S3 console, prefixes are called folders. You can view all your objects and folders in the
S3 console by navigating to a bucket. You can also view information about each object, including object
properties.
For more information about listing and organizing your data in Amazon S3, see the following topics.
Topics
• Organizing objects using prefixes (p. 136)
The prefix limits the results to only those keys that begin with the specified prefix. The delimiter causes a
list operation to roll up all the keys that share a common prefix into a single summary list result.
The purpose of the prefix and delimiter parameters is to help you organize and then browse your keys
hierarchically. To do this, first pick a delimiter for your bucket, such as slash (/), that doesn't occur in any
of your anticipated key names. Next, construct your key names by concatenating all containing levels of
the hierarchy, separating each level with the delimiter.
For example, if you were storing information about cities, you might naturally organize them by
continent, then by country, then by province or state. Because these names don't usually contain
punctuation, you might use slash (/) as the delimiter. The following examples use a slash (/) delimiter.
• Europe/France/Nouvelle-Aquitaine/Bordeaux
• North America/Canada/Quebec/Montreal
• North America/USA/Washington/Bellevue
• North America/USA/Washington/Seattle
If you stored data for every city in the world in this manner, it would become awkward to manage
a flat key namespace. By using Prefix and Delimiter with the list operation, you can use the
hierarchy you've created to list your data. For example, to list all the states in USA, set Delimiter='/'
and Prefix='North America/USA/'. To list all the provinces in Canada for which you have data, set
Delimiter='/' and Prefix='North America/Canada/'.
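A minimal sketch of such a request with the AWS SDK for Java follows; the bucket name is a placeholder,
and s3Client is assumed to be an already built AmazonS3 client. The common prefixes returned
correspond to the next level of the hierarchy under the requested prefix.
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;

ListObjectsV2Request listRequest = new ListObjectsV2Request()
        .withBucketName("example-bucket")
        .withPrefix("North America/USA/")
        .withDelimiter("/");
ListObjectsV2Result listResult = s3Client.listObjectsV2(listRequest);
for (String commonPrefix : listResult.getCommonPrefixes()) {
    System.out.println(commonPrefix); // For example, North America/USA/Washington/
}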
sample.jpg
photos/2006/January/sample.jpg
photos/2006/February/sample2.jpg
photos/2006/February/sample3.jpg
photos/2006/February/sample4.jpg
The sample bucket has only the sample.jpg object at the root level. To list only the root level objects
in the bucket, you send a GET request on the bucket with "/" delimiter character. In response, Amazon S3
returns the sample.jpg object key because it does not contain the "/" delimiter character. All other keys
contain the delimiter character. Amazon S3 groups these keys and returns a single CommonPrefixes
element with prefix value photos/ that is a substring from the beginning of these keys to the first
occurrence of the specified delimiter.
Example
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Name>ExampleBucket</Name>
<Prefix></Prefix>
<Marker></Marker>
<MaxKeys>1000</MaxKeys>
<Delimiter>/</Delimiter>
<IsTruncated>false</IsTruncated>
<Contents>
<Key>sample.jpg</Key>
<LastModified>2011-07-24T19:39:30.000Z</LastModified>
<ETag>"d1a7fb5eab1c16cb4f7cf341cf188c3d"</ETag>
<Size>6</Size>
<Owner>
<ID>75cc57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID>
<DisplayName>displayname</DisplayName>
</Owner>
<StorageClass>STANDARD</StorageClass>
</Contents>
<CommonPrefixes>
<Prefix>photos/</Prefix>
</CommonPrefixes>
</ListBucketResult>
For more information about listing object keys programmatically, see Listing object keys
programmatically (p. 137).
Amazon S3 exposes a list operation that lets you enumerate the keys contained in a bucket. Keys are
selected for listing by bucket and prefix. For example, consider a bucket named "dictionary" that
contains a key for every English word. You might make a call to list all the keys in that bucket that start
with the letter "q". List results are always returned in UTF-8 binary order.
Both the SOAP and REST list operations return an XML document that contains the names of matching
keys and information about the object identified by each key.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.
Groups of keys that share a prefix terminated by a special delimiter can be rolled up by that common
prefix for the purposes of listing. This enables applications to organize and browse their keys
hierarchically, much like how you would organize your files into directories in a file system.
For example, to extend the dictionary bucket to contain more than just English words, you might form
keys by prefixing each word with its language and a delimiter, such as "French/logical". Using this
naming scheme and the hierarchical listing feature, you could retrieve a list of only French words. You
could also browse the top-level list of available languages without having to iterate through all the
lexicographically intervening keys. For more information about this aspect of listing, see Organizing
objects using prefixes (p. 136).
REST API
If your application requires it, you can send REST requests directly. You can send a GET request to return
some or all of the objects in a bucket or you can use selection criteria to return a subset of the objects
in a bucket. For more information, see GET Bucket (List Objects) Version 2 in the Amazon Simple Storage
Service API Reference.
List performance is not substantially affected by the total number of keys in your bucket. It's also not
affected by the presence or absence of the prefix, marker, maxkeys, or delimiter arguments.
As buckets can contain a virtually unlimited number of keys, the complete results of a list query can
be extremely large. To manage large result sets, the Amazon S3 API supports pagination to split them
into multiple responses. Each list keys response returns a page of up to 1,000 keys along with an
indicator that shows whether the response is truncated. You send a series of list keys requests until you
have received all the keys. AWS SDK wrapper libraries provide the same pagination.
Java
The following example lists the object keys in a bucket. The example uses pagination to retrieve
a set of object keys. If there are more keys to return after the first page, Amazon S3 includes a
continuation token in the response. The example uses the continuation token in the subsequent
request to fetch the next set of object keys.
For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import java.io.IOException;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();
System.out.println("Listing objects");
do {
result = s3Client.listObjectsV2(req);
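// A sketch of the rest of the loop: the full sample prints each key and
// continues until the listing is no longer truncated (req and result are
// the request and result objects used above).
for (S3ObjectSummary objectSummary : result.getObjectSummaries()) {
    System.out.printf(" - %s (size: %d)\n", objectSummary.getKey(), objectSummary.getSize());
}
// If there are more keys, pass the continuation token to the next request.
String token = result.getNextContinuationToken();
req.setContinuationToken(token);
} while (result.isTruncated());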
.NET
The following C# example lists the object keys for a bucket. In the example, pagination is used to
retrieve a set of object keys. If there are more keys to return, Amazon S3 includes a continuation
token in the response. The code uses the continuation token in the subsequent request to fetch the
next set of object keys.
For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 951).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class ListObjectsTest
{
private const string bucketName = "*** bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
MaxKeys = 10
};
ListObjectsV2Response response;
do
{
response = await client.ListObjectsV2Async(request);
PHP
This example guides you through using classes from version 3 of the AWS SDK for PHP to list the
object keys contained in an Amazon S3 bucket.
This example assumes that you are already following the instructions for Using the AWS SDK for
PHP and Running PHP Examples (p. 952) and have the AWS SDK for PHP properly installed.
To list the object keys contained in a bucket using the AWS SDK for PHP, you first must list the
objects contained in the bucket and then extract the key from each of the listed objects. When
listing objects in a bucket you have the option of using the low-level Aws\S3\S3Client::listObjects()
method or the high-level Aws\ResultPaginator class.
The low-level listObjects() method maps to the underlying Amazon S3 REST API. Each
listObjects() request returns a page of up to 1,000 objects. If you have more than 1,000 objects
in the bucket, your response will be truncated and you must send another listObjects() request
to retrieve the next set of 1,000 objects.
You can use the high-level ListObjects paginator to make it easier to list the objects contained
in a bucket. To use the ListObjects paginator to create an object list, run the Amazon S3 client
getPaginator() method (inherited from the Aws\AwsClientInterface class) with the ListObjects
command as the first argument and an array of command parameters, such as the bucket name, as the
second argument.
When used as a ListObjects paginator, the getPaginator() method returns all the objects
contained in the specified bucket. There is no 1,000 object limit, so you don't need to worry whether
the response is truncated.
The following tasks guide you through using the PHP Amazon S3 client methods to list the objects
contained in a bucket from which you can list the object keys.
The following PHP example demonstrates how to list the keys from a specified bucket. It shows
how to use the high-level getIterator() method to list the objects in a bucket and then extract
the key from each of the objects in the list. It also shows how to use the low-level listObjects()
method to list the objects in a bucket and then extract the key from each of the objects in the
list returned. For information about running the PHP examples in this guide, see Running PHP
Examples (p. 952).
require 'vendor/autoload.php';
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;
objects. The console does this by using a shared name prefix for objects (that is, objects have names that
begin with a common string). Object names are also referred to as key names.
For example, you can create a folder on the console named photos and store an object named
myphoto.jpg in it. The object is then stored with the key name photos/myphoto.jpg, where
photos/ is the prefix.
You can have folders within folders, but not buckets within buckets. You can upload and copy objects
directly into a folder. Folders can be created, deleted, and made public, but they cannot be renamed.
Objects can be copied from one folder to another.
Important
The Amazon S3 console treats all objects that have a forward slash ("/") character as the last
(trailing) character in the key name as a folder, for example examplekeyname/. You can't
upload an object that has a key name with a trailing "/" character using the Amazon S3 console.
However, you can upload objects that are named with a trailing "/" with the Amazon S3 API by
using the AWS CLI, AWS SDKs, or REST API.
An object that is named with a trailing "/" appears as a folder in the Amazon S3 console. The
Amazon S3 console does not display the content and metadata for such an object. When you
use the console to copy an object named with a trailing "/", a new folder is created in the
destination location, but the object's data and metadata are not copied.
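For example, a minimal sketch of creating such a zero-byte "folder" object through the AWS SDK for
Java might look like the following; the bucket and folder names are placeholders, and a default client
is assumed.
import java.io.ByteArrayInputStream;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;

AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

// A zero-byte object whose key ends in "/" appears as a folder in the console.
ObjectMetadata folderMetadata = new ObjectMetadata();
folderMetadata.setContentLength(0);
s3Client.putObject("example-bucket", "examplekeyname/",
        new ByteArrayInputStream(new byte[0]), folderMetadata);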
Topics
• Creating a folder (p. 142)
• Making folders public (p. 143)
• Deleting folders (p. 143)
Creating a folder
This section describes how to use the Amazon S3 console to create a folder.
Important
If your bucket policy prevents uploading objects to this bucket without encryption, tags,
metadata, or access control list (ACL) grantees, you will not be able to create a folder using
this configuration. Instead, upload an empty folder and specify these settings in the upload
configuration.
To create a folder
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to create a folder in.
3. Choose Create folder.
4. Enter a name for the folder (for example, favorite-pics). Then choose Create folder.
In the Amazon S3 console, you can make a folder public. You can also make a folder public by creating a
bucket policy that limits access by prefix. For more information, see Identity and access management in
Amazon S3 (p. 209).
Warning
After you make a folder public in the Amazon S3 console, you can't make it private again.
Instead, you must set permissions on each individual object in the public folder so that the
objects have no public access. For more information, see Configuring ACLs (p. 389).
Deleting folders
This section explains how to use the Amazon S3 console to delete folders from an S3 bucket.
For information about Amazon S3 features and pricing, see Amazon S3.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to delete folders from.
3. In the Objects list, select the check box next to the folders and objects that you want to delete.
4. Choose Delete.
5. On the Delete objects page, verify that the names of the folders you selected for deletion are listed.
6. In the Delete objects box, enter delete, and choose Delete objects.
Warning
This action deletes all specified objects. When deleting folders, wait for the delete action to
finish before adding new objects to the folder. Otherwise, new objects might be deleted as well.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that contains the object.
3. In the Objects list, choose the name of the object for which you want an overview.
• To download an object version, select the check box next to the version ID, choose Actions, and
then choose Download.
• To delete an object version, select the check box next to the version ID, and choose Delete.
Important
You can undelete an object only if it was deleted as the latest (current) version. You can't
undelete a previous version of an object that was deleted.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that contains the object.
3. In the Objects list, choose the name of the object you want to view properties for.
The Object overview for your object opens. You can scroll down to view the object properties.
4. On the Object overview page, you can configure the following properties for the object.
Note
If you change the Storage Class, Encryption, or Metadata properties, a new object is
created to replace the old one. If S3 Versioning is enabled, a new version of the object
is created, and the existing object becomes an older version. The role that changes the
property also becomes the owner of the new object (or object version).
a. Storage class – Each object in Amazon S3 has a storage class associated with it. The storage
class that you choose to use depends on how frequently you access the object. The default
storage class for S3 objects is STANDARD. You choose which storage class to use when you
upload an object. For more information about storage classes, see Using Amazon S3 storage
classes (p. 496).
To change the storage class after you upload an object, choose Storage class. Choose the
storage class that you want, and then choose Save.
b. Server-side encryption settings – You can use server-side encryption to encrypt your S3
objects. For more information, see Specifying server-side encryption with AWS KMS (SSE-
KMS) (p. 161) or Specifying Amazon S3 encryption (p. 175).
c. Metadata – Each object in Amazon S3 has a set of name-value pairs that represents its
metadata. For information about adding metadata to an S3 object, see Editing object metadata
in the Amazon S3 console (p. 63).
d. Tags – You categorize storage by adding tags to an S3 object. For more information, see
Categorizing your storage using tags (p. 609).
e. Object Lock legal hold and retention – You can prevent an object from being deleted. For
more information, see Using S3 Object Lock (p. 488).
object, you can upload the object only if the creator of the presigned URL has the necessary permissions
to upload that object.
All objects and buckets are private by default. Presigned URLs are useful if you want your user or
customer to be able to upload a specific object to your bucket without requiring them to have AWS
security credentials or permissions.
When you create a presigned URL, you must provide your security credentials and then specify a bucket
name, an object key, an HTTP method (PUT for uploading objects), and an expiration date and time. The
presigned URLs are valid only for the specified duration. That is, you must start the action before the
expiration date and time. If the action consists of multiple steps, such as a multipart upload, all steps
must be started before the expiration. Otherwise, you will receive an error when Amazon S3 attempts to
start a step with an expired URL.
You can use the presigned URL multiple times, up to the expiration date and time.
Note
Anyone with valid security credentials can create a presigned URL. However, for you to
successfully upload an object, the presigned URL must be created by someone who has
permission to perform the operation that the presigned URL is based upon.
You can generate a presigned URL programmatically using the AWS SDKs for Java, .NET, Ruby, PHP,
Node.js, and Python.
If you are using Microsoft Visual Studio, you can also use AWS Explorer to generate a presigned
object URL without writing any code. Anyone who receives a valid presigned URL can then
programmatically upload an object. For more information, see Using Amazon S3 from AWS Explorer. For
instructions on how to install AWS Explorer, see Developing with Amazon S3 using the AWS SDKs, and
explorers (p. 943).
In essence, presigned URLs are bearer tokens that grant access to anyone who possesses them. As
such, we recommend that you protect them appropriately.
If you want to restrict the use of presigned URLs and all S3 access to particular network paths, you
can write AWS Identity and Access Management (IAM) policies that require a particular network path.
These policies can be set on the IAM principal that makes the call, the Amazon S3 bucket, or both. A
network-path restriction on the principal requires the user of those credentials to make requests from
the specified network. A restriction on the bucket limits access to that bucket only to requests originating
from the specified network. Realize that these restrictions also apply outside of the presigned URL
scenario.
The IAM global condition that you use depends on the type of endpoint. If you are using the public
endpoint for Amazon S3, use aws:SourceIp. If you are using a VPC endpoint to Amazon S3, use
aws:SourceVpc or aws:SourceVpce.
The following IAM policy statement requires the principal to access AWS from only the specified network
range. With this policy statement in place, all access is required to originate from that range. This
includes the case of someone using a presigned URL for S3.
{
    "Sid": "NetworkRestrictionForIAMPrincipal",
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
        "NotIpAddressIfExists": { "aws:SourceIp": "IP-address" },
        "BoolIfExists": { "aws:ViaAWSService": "false" }
    }
}
Topics
• Generating a presigned object URL (p. 146)
• Uploading objects using presigned URLs (p. 149)
When you create a presigned URL for your object, you must provide your security credentials, specify a
bucket name, an object key, specify the HTTP method (GET to download the object) and expiration date
and time. The presigned URLs are valid only for the specified duration.
Anyone who receives the presigned URL can then access the object. For example, if you have a video
in your bucket and both the bucket and the object are private, you can share the video with others by
generating a presigned URL.
Note
• Anyone with valid security credentials can create a presigned URL. However, in order to
successfully access an object, the presigned URL must be created by someone who has
permission to perform the operation that the presigned URL is based upon.
• The credentials that you can use to create a presigned URL include:
• IAM instance profile: Valid up to 6 hours
• AWS Security Token Service: Valid up to 36 hours when signed with permanent credentials,
such as the credentials of the AWS account root user or an IAM user
• IAM user: Valid up to 7 days when using AWS Signature Version 4
To create a presigned URL that's valid for up to 7 days, first designate IAM user credentials
(the access key and secret access key) to the SDK that you're using. Then, generate a
presigned URL using AWS Signature Version 4.
• If you created a presigned URL using a temporary token, then the URL expires when the token
expires, even if the URL was created with a later expiration time.
• Since presigned URLs grant access to your Amazon S3 buckets to whoever has the URL, we
recommend that you protect them appropriately. For more details about protecting presigned
URLs, see Uploading objects using presigned URLs (p. 149).
You can generate a presigned URL programmatically using the REST API, the AWS Command Line
Interface, and the AWS SDK for Java, .NET, Ruby, PHP, Node.js, Python, and Go.
For instructions about how to install the AWS Explorer, see Developing with Amazon S3 using the AWS
SDKs, and explorers (p. 943).
Java
Example
The following example generates a presigned URL that you can give to others so that they can
retrieve an object from an S3 bucket. For more information, see Accessing an object using a
presigned URL (p. 144).
For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.HttpMethod;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import java.io.IOException;
import java.net.URL;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();
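// A sketch of the elided steps: set an expiration and generate the presigned
// GET URL (bucketName and objectKey are placeholders defined in the full sample).
java.util.Date expiration = new java.util.Date();
expiration.setTime(expiration.getTime() + 1000 * 60 * 60); // Valid for one hour.

GeneratePresignedUrlRequest generatePresignedUrlRequest =
        new GeneratePresignedUrlRequest(bucketName, objectKey)
                .withMethod(HttpMethod.GET)
                .withExpiration(expiration);
URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);
System.out.println("Pre-signed URL: " + url.toString());
} catch (SdkClientException e) {
    // Amazon S3 couldn't be contacted, or the client couldn't parse the response.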
e.printStackTrace();
}
}
}
.NET
Example
The following example generates a presigned URL that you can give to others so that they can
retrieve an object. For more information, see Accessing an object using a presigned URL (p. 144).
For instructions about how to create and test a working sample, see Running the Amazon S3 .NET
Code Examples (p. 951).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
namespace Amazon.DocSamples.S3
{
class GenPresignedURLTest
{
private const string bucketName = "*** bucket name ***";
private const string objectKey = "*** object key ***";
// Specify how long the presigned URL lasts, in hours
private const double timeoutDuration = 12;
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
Go
You can use the AWS SDK for Go to upload an object. You can send a PUT request to upload data in a single
operation. For more information, see Generate a Pre-Signed URL for an Amazon S3 PUT Operation
with a Specific Payload in the AWS SDK for Go Developer Guide.
The following examples show how to upload objects using presigned URLs.
Java
• Specify the HTTP PUT verb when creating the GeneratePresignedUrlRequest and
HttpURLConnection objects.
• Interact with the HttpURLConnection object in some way after finishing the upload. The
following example accomplishes this by using the HttpURLConnection object to check the HTTP
response code.
Example
This example generates a presigned URL and uses it to upload sample data as an object. For
instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.HttpMethod;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import com.amazonaws.services.s3.model.S3Object;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();
// Create the connection and use it to upload the new object using the presigned URL.
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("PUT");
OutputStreamWriter out = new OutputStreamWriter(connection.getOutputStream());
out.write("This text uploaded as an object via presigned URL.");
out.close();
// Check the HTTP response code. To complete the upload and make the object available,
// you must interact with the connection object in some way.
connection.getResponseCode();
System.out.println("HTTP response code: " + connection.getResponseCode());
.NET
The following C# example shows how to use the AWS SDK for .NET to upload an object to an S3
bucket using a presigned URL.
This example generates a presigned URL for a specific object and uses it to upload a file. For
information about the example's compatibility with a specific version of the AWS SDK for .NET and
instructions about how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 951).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.IO;
using System.Net;
namespace Amazon.DocSamples.S3
{
class UploadObjectUsingPresignedURLTest
{
private const string bucketName = "*** provide bucket name ***";
private const string objectKey = "*** provide the name for the uploaded object ***";
private const string filePath = "*** provide the full path name of the file to upload ***";
// Specify how long the presigned URL lasts, in hours
private const double timeoutDuration = 12;
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
Ruby
The following tasks guide you through using a Ruby script to upload an object using a presigned URL
for SDK for Ruby - Version 3.
1. Create an instance of the Aws::S3::Resource class.
2. Provide a bucket name and an object key by calling the #bucket[] and the #object[] methods of your Aws::S3::Resource class instance.
3. Generate a presigned URL by creating an instance of the URI class, and use it to parse the .presigned_url method of your Aws::S3::Resource class instance. You must specify :put as an argument to .presigned_url, and you must specify PUT to Net::HTTP::Session#send_request if you want to upload an object.
The upload creates an object or replaces any existing object with the same key that is specified in the presigned URL.
The following Ruby code example demonstrates the preceding tasks for SDK for Ruby - Version 3.
Example
require 'aws-sdk-s3'
require 'net/http'
if http_client.nil?
Net::HTTP.start(url.host) do |http|
http.send_request(
'PUT',
url.request_uri,
object_content,
'content-type' => ''
)
end
else
http_client.start(url.host) do |http|
http.send_request(
'PUT',
url.request_uri,
object_content,
'content-type' => ''
)
end
end
content = object.get.body
puts "The presigned URL for the object '#{object_key}' in the bucket " \
"'#{bucket_name}' is:\n\n"
puts url
puts "\nUsing this presigned URL to get the content that " \
"was just uploaded to this object, the object\'s content is:\n\n"
puts content.read
return true
rescue StandardError => e
puts "Error uploading to presigned URL: #{e.message}"
return false
end
PHP
Please see Upload an object using a presigned URL (AWS SDK for PHP).
Amazon S3 supports the BitTorrent protocol so that developers can save costs when distributing content
at high scale. Amazon S3 is useful for simple, reliable storage of any data. The default distribution
mechanism for Amazon S3 data is via client/server download. In client/server distribution, the entire
object is transferred point-to-point from Amazon S3 to every authorized user who requests that object.
While client/server delivery is appropriate for a wide variety of use cases, it is not optimal for everybody.
Specifically, the costs of client/server distribution increase linearly as the number of users downloading
objects increases. This can make it expensive to distribute popular objects.
BitTorrent addresses this problem by recruiting the very clients that are downloading the object as
distributors themselves: Each client downloads some pieces of the object from Amazon S3 and some
from other clients, while simultaneously uploading pieces of the same object to other interested "peers."
The benefit for publishers is that for large, popular files the amount of data actually supplied by Amazon
S3 can be substantially lower than what it would have been serving the same clients via client/server
download. Less data transferred means lower costs for the publisher of the object.
Note
• Amazon S3 does not support the BitTorrent protocol in AWS Regions launched after May 30,
2016.
• You can only get a torrent file for objects that are less than 5 GB in size.
Topics
• How you are charged for BitTorrent delivery (p. 154)
• Using BitTorrent to retrieve objects stored in Amazon S3 (p. 154)
• Publishing content using Amazon S3 and BitTorrent (p. 155)
The data transfer savings achieved from use of BitTorrent can vary widely depending on how popular
your object is. Less popular objects require heavier use of the "seeder" to serve clients, and thus the
difference between BitTorrent distribution costs and client/server distribution costs might be small for
such objects. In particular, if only one client is ever downloading a particular object at a time, the cost of
BitTorrent delivery will be the same as direct download.
The starting point for a BitTorrent download is a .torrent file. This small file describes for BitTorrent
clients both the data to be downloaded and where to get started finding that data. A .torrent file is a
small fraction of the size of the actual object to be downloaded. Once you feed your BitTorrent client
application an Amazon S3 generated .torrent file, it should start downloading immediately from Amazon
S3 and from any "peer" BitTorrent clients.
Retrieving a .torrent file for any publicly available object is easy. Simply add a "?torrent" query string
parameter at the end of the REST GET request for the object. No authentication is required. Once you
have a BitTorrent client installed, downloading an object using BitTorrent download might be as easy as
opening this URL in your web browser.
There is no mechanism to fetch the .torrent for an Amazon S3 object using the SOAP API.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.
Example
This example retrieves the Torrent file for the "Nelson" object in the "quotes" bucket.
Sample Request
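The request is an ordinary REST GET with the torrent subresource appended; the Date value shown is illustrative and mirrors the sample response, and no Authorization header is needed because no authentication is required:

GET /quotes/Nelson?torrent HTTP/1.0
Date: Wed, 25 Nov 2009 12:00:00 GMT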
Sample Response
HTTP/1.1 200 OK
x-amz-request-id: 7CD745EBB7AB5ED9
Date: Wed, 25 Nov 2009 12:00:00 GMT
Content-Disposition: attachment; filename=Nelson.torrent;
Content-Type: application/x-bittorrent
Content-Length: 537
Server: AmazonS3
You can direct your clients to your BitTorrent accessible objects by giving them the .torrent file directly or
by publishing a link to the ?torrent URL of your object, as described by GetObjectTorrent in the Amazon
Simple Storage Service API Reference. One important thing to note is that the .torrent file describing an
Amazon S3 object is generated on demand the first time it is requested (via the REST ?torrent resource).
Generating the .torrent for an object takes time proportional to the size of that object. For large objects,
this time can be significant. Therefore, before publishing a ?torrent link, we suggest making the first
request for it yourself. Amazon S3 might take several minutes to respond to this first request, as it
generates the .torrent file. Unless you update the object in question, subsequent requests for the .torrent
will be fast. Following this procedure before distributing a ?torrent link will ensure a smooth BitTorrent
downloading experience for your customers.
To stop distributing a file using BitTorrent, simply remove anonymous access to it. This can be
accomplished by either deleting the file from Amazon S3, or modifying your access control policy to
prohibit anonymous reads. After doing so, Amazon S3 will no longer act as a "seeder" in the BitTorrent
network for your file, and will no longer serve the .torrent file via the ?torrent REST API. However, after
a .torrent for your file is published, this action might not stop public downloads of your object that
happen exclusively using the BitTorrent peer to peer network.
Amazon S3 Security
Cloud security at AWS is the highest priority. As an AWS customer, you benefit from a data center
and network architecture that are built to meet the requirements of the most security-sensitive
organizations.
Security is a shared responsibility between AWS and you. The shared responsibility model describes this
as security of the cloud and security in the cloud:
• Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services
in the AWS Cloud. AWS also provides you with services that you can use securely. The effectiveness
of our security is regularly tested and verified by third-party auditors as part of the AWS compliance
programs. To learn about the compliance programs that apply to Amazon S3, see AWS Services in
Scope by Compliance Program.
• Security in the cloud – Your responsibility is determined by the AWS service that you use. You are also
responsible for other factors including the sensitivity of your data, your organization’s requirements,
and applicable laws and regulations.
This documentation will help you understand how to apply the shared responsibility model when using
Amazon S3. The following topics show you how to configure Amazon S3 to meet your security and
compliance objectives. You'll also learn how to use other AWS services that can help you monitor and
secure your Amazon S3 resources.
Topics
• Data protection in Amazon S3 (p. 156)
• Protecting data using encryption (p. 157)
• Internetwork traffic privacy (p. 202)
• AWS PrivateLink for Amazon S3 (p. 202)
• Identity and access management in Amazon S3 (p. 209)
• Logging and monitoring in Amazon S3 (p. 442)
• Compliance Validation for Amazon S3 (p. 443)
• Resilience in Amazon S3 (p. 444)
• Infrastructure security in Amazon S3 (p. 446)
• Configuration and vulnerability analysis in Amazon S3 (p. 447)
• Security Best Practices for Amazon S3 (p. 448)
Amazon S3 further protects your data using versioning. You can use versioning to preserve, retrieve, and
restore every version of every object that is stored in your Amazon S3 bucket. With versioning, you can
easily recover from both unintended user actions and application failures. By default, requests retrieve
the most recently written version. You can retrieve older versions of an object by specifying a version of
the object in a request.
For data protection purposes, we recommend that you protect AWS account credentials and set up
individual user accounts with AWS Identity and Access Management (IAM), so that each user is given only
the permissions necessary to fulfill their job duties.
If you require FIPS 140-2 validated cryptographic modules when accessing AWS through a command line
interface or an API, use a FIPS endpoint. For more information about the available FIPS endpoints, see
Federal Information Processing Standard (FIPS) 140-2.
The following security best practices also address data protection in Amazon S3:
• Server-Side Encryption – Request Amazon S3 to encrypt your object before saving it on disks in its
data centers and then decrypt it when you download the objects.
• Client-Side Encryption – Encrypt data client-side and upload the encrypted data to Amazon S3. In this
case, you manage the encryption process, the encryption keys, and related tools.
For more information about server-side encryption and client-side encryption, review the topics listed
below.
You have three mutually exclusive options, depending on how you choose to manage the encryption
keys.
Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
When you use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), each object is encrypted
with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it regularly
rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit
Advanced Encryption Standard (AES-256), to encrypt your data. For more information, see Protecting
data using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) (p. 174).
Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service
(SSE-KMS)
Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-
KMS) is similar to SSE-S3, but with some additional benefits and charges for using this service. There
are separate permissions for the use of a CMK that provides added protection against unauthorized
access of your objects in Amazon S3. SSE-KMS also provides you with an audit trail that shows when your
CMK was used and by whom. Additionally, you can create and manage customer managed CMKs or use
AWS managed CMKs that are unique to you, your service, and your Region. For more information, see
Protecting Data Using Server-Side Encryption with CMKs Stored in AWS Key Management Service (SSE-
KMS) (p. 158).
Server-Side Encryption with Customer-Provided Keys (SSE-C)
With Server-Side Encryption with Customer-Provided Keys (SSE-C), you manage the encryption keys
and Amazon S3 manages the encryption, as it writes to disks, and decryption, when you access your
objects. For more information, see Protecting data using server-side encryption with customer-provided
encryption keys (SSE-C) (p. 185).
If you use CMKs, you use AWS KMS via the AWS Management Console or AWS KMS APIs to centrally
create CMKs, define the policies that control how CMKs can be used, and audit their usage to prove that
they are being used correctly. You can use these CMKs to protect your data in Amazon S3 buckets. When
you use SSE-KMS encryption with an S3 bucket, the AWS KMS CMK must be in the same Region as the
bucket.
There are additional charges for using AWS KMS CMKs. For more information, see AWS KMS concepts -
Customer master keys (CMKs) in the AWS Key Management Service Developer Guide and AWS KMS pricing.
Important
You need the kms:Decrypt permission when you upload or download an Amazon S3
object encrypted with an AWS KMS CMK. This is in addition to the kms:ReEncrypt,
kms:GenerateDataKey, and kms:DescribeKey permissions. For more information, see
Failure to upload a large file to Amazon S3 with encryption using an AWS KMS key.
If you don't specify a customer managed CMK, Amazon S3 automatically creates an AWS managed
CMK in your AWS account the first time that you add an object encrypted with SSE-KMS to a bucket. By
default, Amazon S3 uses this CMK for SSE-KMS.
If you want to use a customer managed CMK for SSE-KMS, create the CMK before you configure SSE-
KMS. Then, when you configure SSE-KMS for your bucket, specify the existing customer managed CMK.
Creating your own customer managed CMK gives you more flexibility and control. For example, you
can create, rotate, and disable customer managed CMKs. You can also define access controls and
audit the customer managed CMKs that you use to protect your data. For more information about
customer managed and AWS managed CMKs, see AWS KMS concepts in the AWS Key Management
Service Developer Guide.
Important
When you use an AWS KMS CMK for server-side encryption in Amazon S3, you must choose a
symmetric CMK. Amazon S3 only supports symmetric CMKs and not asymmetric CMKs. For more
information, see Using Symmetric and Asymmetric Keys in the AWS Key Management Service
Developer Guide.
When you configure your bucket to use S3 Bucket Keys for SSE-KMS on new objects, AWS KMS generates
a bucket-level key that is used to create unique data keys for objects in the bucket. This bucket key is
used for a time-limited period within Amazon S3, further reducing the need for Amazon S3 to make
requests to AWS KMS to complete encryption operations. For more information about using S3 Bucket
Keys, see Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys (p. 166).
SSE-KMS highlights
The highlights of SSE-KMS are as follows:
• You can choose a customer managed CMK that you create and manage, or you can choose an AWS
managed CMK that Amazon S3 creates in your AWS account and manages for you. Like a customer
managed CMK, your AWS managed CMK is unique to your AWS account and Region. Only Amazon S3
has permission to use this CMK on your behalf. Amazon S3 only supports symmetric CMKs.
• You can create, rotate, and disable auditable customer managed CMKs from the AWS KMS console.
• The ETag in the response is not the MD5 of the object data.
• The data keys used to encrypt your data are also encrypted and stored alongside the data that they
protect.
• The security controls in AWS KMS can help you meet encryption-related compliance requirements.
For example, the following bucket policy denies the upload object (s3:PutObject) permission to everyone
if the request does not include the x-amz-server-side-encryption header requesting server-side
encryption with SSE-KMS.
{
"Version":"2012-10-17",
"Id":"PutObjectPolicy",
"Statement":[{
"Sid":"DenyUnEncryptedObjectUploads",
"Effect":"Deny",
"Principal":"*",
"Action":"s3:PutObject",
"Resource":"arn:aws:s3:::awsexamplebucket1/*",
"Condition":{
"StringNotEquals":{
"s3:x-amz-server-side-encryption":"aws:kms"
}
}
}
]
}
To require that a particular AWS KMS CMK be used to encrypt the objects in a bucket, you can use the
s3:x-amz-server-side-encryption-aws-kms-key-id condition key. To specify the AWS KMS
CMK, you must use a key Amazon Resource Name (ARN) that is in the "arn:aws:kms:region:acct-
id:key/key-id" format.
Note
When you upload an object, you can specify the AWS KMS CMK using the x-amz-server-
side-encryption-aws-kms-key-id header. If the header is not present in the request,
Amazon S3 assumes the AWS managed CMK. Regardless, the AWS KMS key ID that Amazon S3
uses for object encryption must match the AWS KMS key ID in the policy, otherwise Amazon S3
denies the request.
For a complete list of Amazon S3‐specific condition keys and more information about specifying
condition keys, see Amazon S3 condition keys (p. 232).
Encryption context
Amazon S3 supports an encryption context with the x-amz-server-side-encryption-context
header. An encryption context is an optional set of key-value pairs that can contain additional contextual
information about the data.
For information about the encryption context in Amazon S3, see Encryption context (p. 160). For
general information about encryption context, see AWS Key Management Service Concepts - Encryption
Context in the AWS Key Management Service Developer Guide.
The encryption context can be any value that you want, provided that the header adheres to the Base64-
encoded JSON format. However, because the encryption context is not encrypted and because it is
logged if AWS CloudTrail logging is turned on, the encryption context should not include sensitive
information. We further recommend that your context describe the data being encrypted or decrypted
so that you can better understand the CloudTrail events produced by AWS KMS.
In Amazon S3, the object or bucket Amazon Resource Name (ARN) is commonly used as an encryption
context. If you use SSE-KMS without enabling an S3 Bucket Key, you use the object ARN as your
encryption context, for example, arn:aws:s3:::object_ARN. However, if you use SSE-KMS
and enable an S3 Bucket Key, you use the bucket ARN for your encryption context, for example,
arn:aws:s3:::bucket_ARN. For more information about S3 Bucket Keys, see Reducing the cost of
SSE-KMS with Amazon S3 Bucket Keys (p. 166).
If the key aws:s3:arn is not already in the encryption context, Amazon S3 can append a predefined
key of aws:s3:arn to the encryption context that you provide. Amazon S3 appends this predefined key
when it processes your requests. If you use SSE-KMS without an S3 Bucket Key, the value is equal to the
object ARN. If you use SSE-KMS with an S3 Bucket Key enabled, the value is equal to the bucket ARN.
You can use this predefined key to track relevant requests in CloudTrail. So you can always see which
Amazon S3 ARN was used with which encryption key. You can use CloudTrail logs to ensure that the
encryption context is not identical between different Amazon S3 objects and buckets, which provides
additional security. Your full encryption context will be validated to have the value equal to the object or
bucket ARN.
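For example, in the AWS KMS CloudTrail event for a request made without an S3 Bucket Key, the recorded encryption context looks similar to the following; the bucket and object names are placeholders:

"encryptionContext": {
    "aws:s3:arn": "arn:aws:s3:::awsexamplebucket1/file.txt"
}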
Topics
• Specifying server-side encryption with AWS KMS (SSE-KMS) (p. 161)
• Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys (p. 166)
You can specify SSE-KMS using the S3 console, REST APIs, AWS SDKs, and AWS CLI. For more
information, see the topics below.
This topic describes how to set or change the type of encryption for an object using the Amazon S3 console.
Note
If you change an object's encryption, a new object is created to replace the old one. If S3
Versioning is enabled, a new version of the object is created, and the existing object becomes an
older version. The role that changes the property also becomes the owner of the new object (or object version).
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that contains the object.
3. In the Objects list, choose the name of the object that you want to add or change encryption for.
The Object overview opens, displaying the properties for your object.
4. Under Server-side encryption settings, choose Edit.
Important
You can only use KMS CMKs that are enabled in the same AWS Region as the bucket. When
you choose Choose from your KMS master keys, the S3 console only lists 100 KMS CMKs
per Region. If you have more than 100 CMKs in the same Region, you can only see the first
100 CMKs in the S3 console. To use a KMS CMK that is not listed in the console, choose
Custom KMS ARN, and enter the KMS CMK ARN.
When you use an AWS KMS CMK for server-side encryption in Amazon S3, you must choose
a CMK that is enabled in the same Region as your bucket. Additionally, Amazon S3 only
supports symmetric CMKs and not asymmetric CMKs. For more information, see Using
Symmetric and Asymmetric Keys in the AWS Key Management Service Developer Guide.
For more information about creating an AWS KMS CMK, see Creating Keys in the AWS Key
Management Service Developer Guide. For more information about using AWS KMS with Amazon
S3, see Protecting Data Using Server-Side Encryption with CMKs Stored in AWS Key Management
Service (SSE-KMS) (p. 158).
8. Choose Save changes.
Note
This action applies encryption to all specified objects. When encrypting folders, wait for the save
operation to finish before adding new objects to the folder.
If you specify the x-amz-server-side-encryption header with a value of aws:kms, you can also use
the following request headers:
• x-amz-server-side-encryption-aws-kms-key-id
• x-amz-server-side-encryption-context
• x-amz-server-side-encryption-bucket-key-enabled
Topics
• Amazon S3 REST APIs that support SSE-KMS (p. 162)
• Encryption context (x-amz-server-side-encryption-context) (p. 163)
• AWS KMS key ID (x-amz-server-side-encryption-aws-kms-key-id) (p. 163)
• S3 Bucket Keys (x-amz-server-side-encryption-bucket-key-enabled) (p. 164)
• PUT Object — When you upload data using the PUT API, you can specify these request headers.
• PUT Object - Copy— When you copy an object, you have both a source object and a target object.
When you pass SSE-KMS headers with the COPY operation, they are applied only to the target object.
When copying an existing object, regardless of whether the source object is encrypted or not, the
destination object is not encrypted unless you explicitly request server-side encryption.
• POST Object— When you use a POST operation to upload an object, instead of the request headers,
you provide the same information in the form fields.
• Initiate Multipart Upload— When you upload large objects using the multipart upload API, you can
specify these headers. You specify these headers in the initiate request.
The response headers of the following REST APIs return the x-amz-server-side-encryption header
when an object is stored using server-side encryption.
• PUT Object
• PUT Object - Copy
• POST Object
• Initiate Multipart Upload
• Upload Part
• Upload Part - Copy
• Complete Multipart Upload
• Get Object
• Head Object
Important
• All GET and PUT requests for an object protected by AWS KMS will fail if you don't make them
using Secure Sockets Layer (SSL) or Signature Version 4.
• If your object uses SSE-KMS, don't send encryption request headers for GET requests and
HEAD requests, or you’ll get an HTTP 400 BadRequest error.
In Amazon S3, the object or bucket Amazon Resource Name (ARN) is commonly used as an encryption
context. If you use SSE-KMS without enabling an S3 Bucket Key, you use the object ARN as your
encryption context, for example, arn:aws:s3:::object_ARN. However, if you use SSE-KMS
and enable an S3 Bucket Key, you use the bucket ARN for your encryption context, for example,
arn:aws:s3:::bucket_ARN.
For information about the encryption context in Amazon S3, see Encryption context (p. 160). For
general information about encryption context, see AWS Key Management Service Concepts - Encryption
Context in the AWS Key Management Service Developer Guide.
Amazon S3 only supports symmetric CMKs and not asymmetric CMKs. For more information, see Using
Symmetric and Asymmetric Keys in the AWS Key Management Service Developer Guide.
When using AWS SDKs, you can request Amazon S3 to use AWS Key Management Service (AWS KMS)
customer master keys (CMKs). This section provides examples of using the AWS SDKs for Java and .NET.
For information about other SDKs, go to Sample Code and Libraries.
Important
When you use an AWS KMS CMK for server-side encryption in Amazon S3, you must choose a
symmetric CMK. Amazon S3 only supports symmetric CMKs and not asymmetric CMKs. For more
information, see Using Symmetric and Asymmetric Keys in the AWS Key Management Service
Developer Guide.
Copy operation
When copying objects, you add the same request properties (ServerSideEncryptionMethod and
ServerSideEncryptionKeyManagementServiceKeyId) to request Amazon S3 to use an AWS KMS
CMK. For more information about copying objects, see Copying objects (p. 102).
Put operation
Java
When uploading an object using the AWS SDK for Java, you can request Amazon S3 to use an AWS
KMS CMK by adding the SSEAwsKeyManagementParams property as shown in the following
request.
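A minimal sketch of that request follows. The bucketName, keyName, file, and s3Client variables are placeholders assumed to be set up as in the other Java examples in this guide; passing an empty SSEAwsKeyManagementParams object requests SSE-KMS with the AWS managed CMK.

// Upload an object and request server-side encryption with the AWS managed CMK (aws/s3).
PutObjectRequest putRequest = new PutObjectRequest(bucketName, keyName, file)
        .withSSEAwsKeyManagementParams(new SSEAwsKeyManagementParams());
s3Client.putObject(putRequest);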
In this case, Amazon S3 uses the AWS managed CMK (see Using Server-Side Encryption with CMKs
Stored in AWS KMS (p. 158)). You can optionally create a symmetric customer managed CMK and
specify that in the request.
For more information about creating customer managed CMKs, see Programming the AWS KMS API
in the AWS Key Management Service Developer Guide.
For working code examples of uploading an object, see the following topics. You will need to update
those code examples and provide encryption information as shown in the preceding code fragment.
• For uploading an object in a single operation, see Uploading objects (p. 65).
.NET
When uploading an object using the AWS SDK for .NET, you can request Amazon S3 to use an AWS
KMS CMK by adding the ServerSideEncryptionMethod property as shown in the following
request.
In this case, Amazon S3 uses the AWS managed CMK. For more information, see Protecting
Data Using Server-Side Encryption with CMKs Stored in AWS Key Management Service (SSE-
KMS) (p. 158). You can optionally create your own symmetric customer managed CMK and specify
that in the request.
For more information about creating customer managed CMKs, see Programming the AWS KMS API
in the AWS Key Management Service Developer Guide.
For working code examples of uploading an object, see the following topics. You will need to update
these code examples and provide encryption information as shown in the preceding code fragment.
• For uploading an object in a single operation, see Uploading objects (p. 65).
• For multipart upload see the following topics:
• Using high-level multipart upload API, see Uploading an object using multipart upload (p. 78).
• Using low-level multipart upload API, see Uploading an object using multipart upload (p. 78).
Presigned URLs
Java
When creating a presigned URL for an object encrypted using an AWS KMS CMK, you must explicitly
specify Signature Version 4.
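With older versions of the AWS SDK for Java, one way to do this is to set the SigV4 system property (from com.amazonaws.SDKGlobalConfiguration) before building the client; current SDK versions sign Amazon S3 requests with Signature Version 4 by default, so this line may not be needed:

// Force the SDK to sign Amazon S3 requests with Signature Version 4.
System.setProperty(SDKGlobalConfiguration.ENABLE_S3_SIGV4_SYSTEM_PROPERTY, "true");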
For a code example, see Accessing an object using a presigned URL (p. 144).
.NET
When creating a presigned URL for an object encrypted using an AWS KMS CMK, you must explicitly
specify Signature Version 4.
AWSConfigs.S3Config.UseSignatureVersion4 = true;
For a code example, see Accessing an object using a presigned URL (p. 144).
When you configure your bucket to use an S3 Bucket Key for SSE-KMS on new objects, AWS KMS
generates a bucket-level key that is used to create unique data keys for objects in the bucket. This S3
Bucket Key is used for a time-limited period within Amazon S3, reducing the need for Amazon S3 to
make requests to AWS KMS to complete encryption operations. This reduces traffic from S3 to AWS KMS,
allowing you to access AWS KMS-encrypted objects in S3 at a fraction of the previous cost.
Amazon S3 will only share an S3 Bucket Key for objects encrypted by the same AWS KMS customer
master key (CMK).
specific objects in a bucket with an individual per-object KMS key using the REST API, AWS SDK, or AWS
CLI. You can also view S3 Bucket Key settings.
Before you configure your bucket to use an S3 Bucket Key, review Changes to note before enabling an S3
Bucket Key (p. 167).
When you create a new bucket, you can configure your bucket to use an S3 Bucket Key for SSE-KMS on
new objects. You can also configure an existing bucket to use an S3 Bucket Key for SSE-KMS on new
objects by updating your bucket properties.
For more information, see Configuring your bucket to use an S3 Bucket Key with SSE-KMS for new
objects (p. 169).
REST API, AWS CLI, and AWS SDK support for S3 Bucket Keys
You can use the REST API, AWS CLI, or AWS SDK to configure your bucket to use an S3 Bucket Key for
SSE-KMS on new objects. You can also enable an S3 Bucket Key at the object level.
• Configuring an S3 Bucket Key at the object level using the REST API, AWS SDKs, or AWS CLI (p. 171)
• Configuring your bucket to use an S3 Bucket Key with SSE-KMS for new objects (p. 169)
• PutBucketEncryption
• ServerSideEncryptionRule accepts the BucketKeyEnabled parameter for enabling and
disabling an S3 Bucket Key.
• GetBucketEncryption
• ServerSideEncryptionRule returns the settings for BucketKeyEnabled.
• PutObject, CopyObject, CreateMultipartUpload, and PostObject
• x-amz-server-side-encryption-bucket-key-enabled request header enables or disables an
S3 Bucket Key at the object level.
• HeadObject, GetObject, UploadPartCopy, UploadPart, and CompleteMultipartUpload
• x-amz-server-side-encryption-bucket-key-enabled response header indicates if an S3
Bucket Key is enabled or disabled for an object.
Before you enable an S3 Bucket Key, please note the following related changes:
When you enable an S3 Bucket Key, the AWS KMS key policy for the CMK must include the
kms:Decrypt permission for the calling principal. If the calling principal is in a different account than
the AWS KMS CMK, you must also include kms:Decrypt permission in the IAM policy. The call to
kms:Decrypt verifies the integrity of the S3 Bucket Key before using it.
You only need to include kms:Decrypt permissions in the key policy if you use a customer managed
AWS KMS CMK. If you enable an S3 Bucket Key for server-side encryption using an AWS managed CMK
(aws/s3), your AWS KMS key policy already includes kms:Decrypt permissions.
If your existing IAM policies or AWS KMS key policies use your object Amazon Resource Name (ARN) as
the encryption context to refine or limit access to your AWS KMS CMKs, these policies won’t work with
an S3 Bucket Key. S3 Bucket Keys use the bucket ARN as encryption context. Before you enable an S3
Bucket Key, update your IAM policies or AWS KMS key policies to use your bucket ARN as encryption
context.
For more information about encryption context and S3 Bucket Keys, see Encryption context (x-amz-
server-side-encryption-context) (p. 163).
After you enable an S3 Bucket Key, your AWS KMS CloudTrail events log your bucket ARN instead of your
object ARN. Additionally, you see fewer KMS CloudTrail events for SSE-KMS objects in your logs. Because
key material is time-limited in Amazon S3, fewer requests are made to AWS KMS.
You can use S3 Bucket Keys with Same-Region Replication (SRR) and Cross-Region Replication (CRR).
When Amazon S3 replicates an encrypted object, it generally preserves the encryption settings of
the replica object in the destination bucket. However, if the source object is not encrypted and your
destination bucket uses default encryption or an S3 Bucket Key, Amazon S3 encrypts the object with the
destination bucket’s configuration.
Important
To use replication with an S3 Bucket Key, the AWS KMS key policy for the CMK used to encrypt
the object replica must include kms:Decrypt permissions for the calling principal. The call to
kms:Decrypt verifies the integrity of the S3 Bucket Key before using it. For more information,
see Using an S3 Bucket Key with replication (p. 168). For more information about SSE-KMS and
S3 Bucket Key, see Amazon S3 Bucket Keys and replication (p. 601).
The following examples illustrate how an S3 Bucket Key works with replication. For more information,
see Replicating objects created with server-side encryption (SSE) using AWS KMS CMKs (p. 599).
Example 1 – Source object uses S3 Bucket Keys, destination bucket uses default encryption
If your source object uses an S3 Bucket Key but your destination bucket uses default encryption with
SSE-KMS, the replica object maintains its S3 Bucket Key encryption settings in the destination bucket.
The destination bucket still uses default encryption with SSE-KMS.
Example 2 – Source object is not encrypted, destination bucket uses an S3 Bucket Key with SSE-KMS
If your source object is not encrypted and the destination bucket uses an S3 Bucket Key with SSE-KMS,
the replica object is encrypted with an S3 Bucket Key using SSE-KMS in the destination bucket. This
results in the ETag of the source object being different from the ETag of the replica object. You must
update applications that use the ETag to accommodate for this difference.
For more information about enabling and working with S3 Bucket Keys, see the following sections:
• Configuring your bucket to use an S3 Bucket Key with SSE-KMS for new objects (p. 169)
• Configuring an S3 Bucket Key at the object level using the REST API, AWS SDKs, or AWS CLI (p. 171)
• Viewing settings for an S3 Bucket Key (p. 172)
Configuring your bucket to use an S3 Bucket Key with SSE-KMS for new objects
When you configure server-side encryption using SSE-KMS, you can configure your bucket to use an S3
Bucket Key for SSE-KMS on new objects. S3 Bucket Keys decrease the request traffic from Amazon S3 to
AWS Key Management Service (AWS KMS) and reduce the cost of SSE-KMS. For more information, see
Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys (p. 166).
You can configure your bucket to use an S3 Bucket Key for SSE-KMS on new objects by using the Amazon
S3 console, REST API, AWS SDK, AWS CLI, or AWS CloudFormation. If you want to enable or disable an S3
Bucket Key for existing objects, you can use a COPY operation. For more information, see Configuring an
S3 Bucket Key at the object level using the REST API, AWS SDKs, or AWS CLI (p. 171).
When an S3 Bucket Key is enabled for the source or destination bucket, the encryption context
will be the bucket Amazon Resource Name (ARN) and not the object ARN, for example,
arn:aws:s3:::bucket_ARN. You need to update your IAM policies to use the bucket ARN for
the encryption context. For more information, see Granting additional permissions for the IAM role
(p. 600).
Prerequisite:
Before you configure your bucket to use an S3 Bucket Key, review Changes to note before enabling an S3
Bucket Key (p. 167).
In the S3 console, you can enable or disable an S3 Bucket Key for a new or existing bucket. Objects in
the S3 console inherit their S3 Bucket Key setting from the bucket configuration. When you enable an S3
Bucket Key for your bucket, new objects that you upload to the bucket use an S3 Bucket Key for server-
side encryption using AWS KMS.
Uploading, copying, or modifying objects in buckets that have an S3 Bucket Key enabled
If you upload, modify, or copy an object in a bucket that has an S3 Bucket Key enabled, the S3 Bucket
Key settings for that object might be updated to align with bucket configuration.
If an object already has an S3 Bucket Key enabled, the S3 Bucket Key settings for that object don't
change when you copy or modify the object. However, if you modify or copy an object that doesn’t have
an S3 Bucket Key enabled, and the destination bucket has an S3 Bucket Key configuration, the object
inherits the destination bucket's S3 Bucket Key settings. For example, if your source object doesn't have
an S3 Bucket Key enabled but the destination bucket has S3 Bucket Key enabled, an S3 Bucket Key will
be enabled for the object.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Choose Create bucket.
Amazon S3 creates your bucket with an S3 Bucket Key enabled. New objects that you upload to the
bucket will use an S3 Bucket Key. To disable an S3 Bucket Key, follow the previous steps, and choose
disable.
Amazon S3 enables an S3 Bucket Key for new objects added to your bucket. Existing objects don't
use the S3 Bucket Key. To disable an S3 Bucket Key, follow the previous steps, and choose Disable.
You can use PutBucketEncryption to enable or disable an S3 Bucket Key for your bucket. To configure
an S3 Bucket Key with PutBucketEncryption, specify the ServerSideEncryptionRule, which includes
default encryption with server-side encryption using AWS KMS customer master keys (CMKs). You can
also optionally use a customer managed CMK by specifying the KMS key ID for the CMK.
The following example enables default bucket encryption with SSE-KMS and an S3 Bucket Key using the
AWS SDK for Java.
Java
ServerSideEncryptionByDefault serverSideEncryptionByDefault = new ServerSideEncryptionByDefault()
        .withSSEAlgorithm(SSEAlgorithm.KMS);
ServerSideEncryptionRule rule = new ServerSideEncryptionRule()
        .withApplyServerSideEncryptionByDefault(serverSideEncryptionByDefault)
        .withBucketKeyEnabled(true);
ServerSideEncryptionConfiguration serverSideEncryptionConfiguration =
        new ServerSideEncryptionConfiguration().withRules(Collections.singleton(rule));
// Apply the configuration to the bucket. The s3Client and bucketName variables are
// placeholders, assumed to be set up as in the other Java examples in this guide.
s3Client.setBucketEncryption(new SetBucketEncryptionRequest()
        .withBucketName(bucketName)
        .withServerSideEncryptionConfiguration(serverSideEncryptionConfiguration));
The following example enables default bucket encryption with SSE-KMS and an S3 Bucket Key using the
AWS CLI.
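A sketch of such a command follows; the bucket name and the KMS key ARN are placeholders:

aws s3api put-bucket-encryption --bucket <bucket name> --server-side-encryption-configuration '{
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "<KMS key ARN>"
            },
            "BucketKeyEnabled": true
        }
    ]
}'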
For more information about configuring an S3 Bucket Key using AWS CloudFormation, see
AWS::S3::Bucket ServerSideEncryptionRule in the AWS CloudFormation User Guide.
Configuring an S3 Bucket Key at the object level using the REST API, AWS SDKs, or AWS CLI
When you perform a PUT or COPY operation using the REST API, AWS SDKs, or AWS CLI, you can enable
or disable an S3 Bucket Key at the object level. S3 Bucket Keys reduce the cost of server-side encryption
using AWS Key Management Service (AWS KMS) (SSE-KMS) by decreasing request traffic from Amazon
S3 to AWS KMS. For more information, see Reducing the cost of SSE-KMS with Amazon S3 Bucket
Keys (p. 166).
When you configure an S3 Bucket Key for an object using a PUT or COPY operation, Amazon S3 only
updates the settings for that object. The S3 Bucket Key settings for the destination bucket do not
change. If you don't specify an S3 Bucket Key for your object, Amazon S3 applies the S3 Bucket Key
settings for the destination bucket to the object.
Prerequisite:
Before you configure your object to use an S3 Bucket Key, review Changes to note before enabling an S3
Bucket Key (p. 167).
Topics
• Using the REST API (p. 172)
Java
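A sketch of enabling an S3 Bucket Key for a single object with the AWS SDK for Java follows. The bucketName, keyName, file, and s3Client variables are placeholders, and the setBucketKeyEnabled setter assumes an SDK version that supports S3 Bucket Keys.

// Upload one object with SSE-KMS, enabling an S3 Bucket Key for this object only.
PutObjectRequest putRequest = new PutObjectRequest(bucketName, keyName, file);
putRequest.setSSEAwsKeyManagementParams(new SSEAwsKeyManagementParams());
putRequest.setBucketKeyEnabled(true);
s3Client.putObject(putRequest);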
AWS CLI
aws s3api put-object --bucket <bucket name> --key <object key name> --server-side-encryption aws:kms --bucket-key-enabled --body <filepath>
S3 Bucket Keys decrease request traffic from Amazon S3 to AWS KMS and reduce the cost of server-side
encryption using AWS Key Management Service (SSE-KMS). For more information, see Reducing the cost
of SSE-KMS with Amazon S3 Bucket Keys (p. 166).
To view S3 Bucket Key settings for a bucket or an object that has inherited S3 Bucket Key settings from
the bucket configuration, you need permission to perform the s3:GetEncryptionConfiguration
action. For more information, see GetBucketEncryption in the Amazon Simple Storage Service API
Reference.
In the S3 console, you can view the S3 Bucket Key settings for your bucket or object. S3 Bucket Key
settings are inherited from the bucket configuration unless the source object already has an S3 Bucket
Key configured.
Objects and folders in the same bucket can have different S3 Bucket Key settings. For example, if you
upload an object using the REST API and enable an S3 Bucket Key for the object, the object retains its S3
Bucket Key setting in the destination bucket, even if S3 Bucket Key is disabled in the destination bucket.
As another example, if you enable an S3 Bucket Key for an existing bucket, objects that are already in the
bucket do not use an S3 Bucket Key. However, new objects have an S3 Bucket Key enabled.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the bucket that you want to enable an S3 Bucket Key for.
3. Choose Properties.
4. In the Default encryption section, under Bucket Key, you see the S3 Bucket Key setting for your
bucket.
If you can’t see the S3 Bucket Key setting, you might not have permission to perform the
s3:GetEncryptionConfiguration action. For more information, see GetBucketEncryption in the
Amazon Simple Storage Service API Reference.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the bucket that you want to enable an S3 Bucket Key for.
3. In the Objects list, choose your object name.
4. On the Details tab, under Server-side encryption settings, choose Edit.
Under Bucket Key, you see the S3 Bucket Key setting for your object but you cannot edit it.
To return encryption information for a bucket, including settings for an S3 Bucket Key, use the
GetBucketEncryption operation. S3 Bucket Key settings are returned in the response body in the
ServerSideEncryptionConfiguration with the BucketKeyEnabled setting. For more information,
see GetBucketEncryption in the Amazon S3 API Reference.
To return the S3 Bucket Key status for an object, use the HeadObject operation. HeadObject returns
the x-amz-server-side-encryption-bucket-key-enabled response header to show whether an
S3 Bucket Key is enabled or disabled for the object. For more information, see HeadObject in the Amazon
S3 API Reference.
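For example, with the AWS CLI (bucket and key names are placeholders), the following commands return the bucket-level and object-level settings, respectively:

aws s3api get-bucket-encryption --bucket <bucket name>
aws s3api head-object --bucket <bucket name> --key <object key name>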
• PutObject
• PostObject
• CopyObject
• CreateMultipartUpload
• UploadPartCopy
• UploadPart
• CompleteMultipartUpload
• GetObject
There are no new charges for using server-side encryption with Amazon S3-managed keys (SSE-
S3). However, requests to configure and use SSE-S3 incur standard Amazon S3 request charges. For
information about pricing, see Amazon S3 pricing.
If you need server-side encryption for all of the objects that are stored in a bucket, use a bucket policy.
For example, the following bucket policy denies permissions to upload an object unless the request
includes the x-amz-server-side-encryption header to request server-side encryption:
{
"Version": "2012-10-17",
"Id": "PutObjectPolicy",
"Statement": [
{
"Sid": "DenyIncorrectEncryptionHeader",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::awsexamplebucket1/*",
"Condition": {
"StringNotEquals": {
"s3:x-amz-server-side-encryption": "AES256"
}
}
},
{
"Sid": "DenyUnencryptedObjectUploads",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::awsexamplebucket1/*",
"Condition": {
"Null": {
"s3:x-amz-server-side-encryption": "true"
}
}
}
]
}
Note
• Server-side encryption encrypts only the object data, not object metadata.
• PUT operations—Specify the request header when uploading data using the PUT API. For more
information, see PUT Object.
• Initiate Multipart Upload—Specify the header in the initiate request when uploading large objects
using the multipart upload API. For more information, see Initiate Multipart Upload.
• COPY operations—When you copy an object, you have both a source object and a target object. For
more information, see PUT Object - Copy.
Note
When using a POST operation to upload an object, instead of providing the request header, you
provide the same information in the form fields. For more information, see POST Object.
The AWS SDKs also provide wrapper APIs that you can use to request server-side encryption. You can
also use the AWS Management Console to upload objects and request server-side encryption.
Topics
• Specifying Amazon S3 encryption (p. 175)
You can specify SSE-S3 using the S3 console, REST APIs, AWS SDKs, and AWS CLI. For more information,
see the topics below.
This topic describes how to set or change the type of encryption for an object using the AWS Management
Console. When you copy an object using the console, it copies the object as is. That means if the
source is encrypted, the target object is also encrypted. The console also allows you to add or change
encryption for an object.
Note
If you change an object's encryption, a new object is created to replace the old one. If S3
Versioning is enabled, a new version of the object is created, and the existing object becomes an
older version. The role that changes the property also becomes the owner of the new object (or object version).
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that contains the object.
3. In the Objects list, choose the name of the object that you want to add or change encryption for.
The Object overview opens, displaying the properties for your object.
4. Under Server-side encryption settings, choose Edit.
For more information about using Amazon S3 server-side encryption to encrypt your data, see
Protecting data using server-side encryption with Amazon S3-managed encryption keys (SSE-
S3) (p. 174).
7. Choose Save changes.
Note
This action applies encryption to all specified objects. When encrypting folders, wait for the save
operation to finish before adding new objects to the folder.
At the time of object creation—that is, when you are uploading a new object or making a copy of an
existing object—you can specify if you want Amazon S3 to encrypt your data by adding the x-amz-
server-side-encryption header to the request. Set the value of the header to the encryption
algorithm AES256 that Amazon S3 supports. Amazon S3 confirms that your object is stored using
server-side encryption by returning the response header x-amz-server-side-encryption.
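For example, a PUT request that stores an object with SSE-S3 includes the header as shown in the following sketch; the bucket, key, and date values are illustrative, and the Authorization header is omitted:

PUT /example-object HTTP/1.1
Host: awsexamplebucket1.s3.amazonaws.com
Date: Wed, 25 Nov 2009 12:00:00 GMT
x-amz-server-side-encryption: AES256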
The following REST upload APIs accept the x-amz-server-side-encryption request header.
• PUT Object
• PUT Object - Copy
• POST Object
• Initiate Multipart Upload
When uploading large objects using the multipart upload API, you can specify server-side encryption
by adding the x-amz-server-side-encryption header to the Initiate Multipart Upload request.
When you are copying an existing object, regardless of whether the source object is encrypted or not, the
destination object is not encrypted unless you explicitly request server-side encryption.
The response headers of the following REST APIs return the x-amz-server-side-encryption header
when an object is stored using server-side encryption.
• PUT Object
• PUT Object - Copy
• POST Object
• Initiate Multipart Upload
• Upload Part
• Upload Part - Copy
Note
Encryption request headers should not be sent for GET requests and HEAD requests if your
object uses SSE-S3 or you’ll get an HTTP 400 BadRequest error.
When using AWS SDKs, you can request Amazon S3 to use Amazon S3-managed encryption keys. This
section provides examples of using the AWS SDKs in multiple languages. For information about other
SDKs, go to Sample Code and Libraries.
Java
When you use the AWS SDK for Java to upload an object, you can use server-side encryption
to encrypt it. To request server-side encryption, use the ObjectMetadata property of the
PutObjectRequest to set the x-amz-server-side-encryption request header. When you call
the putObject() method of the AmazonS3Client, Amazon S3 encrypts and saves the data.
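A brief sketch of that pattern follows; bucketName, keyName, and s3Client are placeholders set up as in the surrounding examples.

// Request SSE-S3 by setting the encryption algorithm on the object metadata.
byte[] contentBytes = "Example object content".getBytes();
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(contentBytes.length);
metadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
s3Client.putObject(new PutObjectRequest(bucketName, keyName,
        new ByteArrayInputStream(contentBytes), metadata));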
You can also request server-side encryption when uploading objects with the multipart upload API:
• When using the high-level multipart upload API, you use the TransferManager methods to
apply server-side encryption to objects as you upload them. You can use any of the upload
methods that take ObjectMetadata as a parameter. For more information, see Uploading an
object using multipart upload (p. 78).
• When using the low-level multipart upload API, you specify server-side encryption when
you initiate the multipart upload. You add the ObjectMetadata property by calling the
InitiateMultipartUploadRequest.setObjectMetadata() method. For more information,
see Using the AWS SDKs (low-level API) (p. 82).
You can't directly change the encryption state of an object (encrypting an unencrypted object or
decrypting an encrypted object). To change an object's encryption state, you make a copy of the
object, specifying the desired encryption state for the copy, and then delete the original object.
Amazon S3 encrypts the copied object only if you explicitly request server-side encryption. To
request encryption of the copied object through the Java API, use the ObjectMetadata property to
specify server-side encryption in the CopyObjectRequest.
Example
The following example shows how to set server-side encryption using the AWS SDK for Java. It shows how to perform the following tasks:
• Upload a new object using server-side encryption.
• Change the encryption state of an existing object by making an encrypted copy of it and deleting the original.
For more information about server-side encryption, see Using the REST API (p. 176). For
instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.internal.SSEResultBase;
import com.amazonaws.services.s3.model.*;
import java.io.ByteArrayInputStream;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();
// Make a copy of the object and use server-side encryption when storing the copy.
CopyObjectRequest request = new CopyObjectRequest(bucketName,
sourceKey,
bucketName,
destKey);
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
request.setNewObjectMetadata(objectMetadata);
// Perform the copy operation and display the copy's encryption status.
CopyObjectResult response = s3Client.copyObject(request);
System.out.println("Object \"" + destKey + "\" uploaded with SSE.");
printEncryptionStatus(response);
// Delete the original, unencrypted object, leaving only the encrypted copy in Amazon S3.
s3Client.deleteObject(bucketName, sourceKey);
System.out.println("Unencrypted object \"" + sourceKey + "\" deleted.");
}
.NET
When you upload an object, you can direct Amazon S3 to encrypt it. To change the encryption state
of an existing object, you make a copy of the object and delete the source object. By default, the
copy operation encrypts the target only if you explicitly request server-side encryption of the target
object. To specify server-side encryption in the CopyObjectRequest, add the following:
ServerSideEncryptionMethod = ServerSideEncryptionMethod.AES256
For a working sample of how to copy an object, see Using the AWS SDKs (p. 105).
The following example uploads an object. In the request, the example directs Amazon S3 to encrypt
the object. The example then retrieves object metadata and verifies the encryption method that
was used. For information about creating and testing a working sample, see Running the Amazon
S3 .NET Code Examples (p. 951).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class SpecifyServerSideEncryptionTest
{
private const string bucketName = "*** bucket name ***";
private const string keyName = "*** key name for object created ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;
PHP
This topic shows how to use classes from version 3 of the AWS SDK for PHP to add server-side
encryption to objects that you upload to Amazon Simple Storage Service (Amazon S3). It assumes
that you are already following the instructions for Using the AWS SDK for PHP and Running PHP
Examples (p. 952) and have the AWS SDK for PHP properly installed.
require 'vendor/autoload.php';
use Aws\S3\S3Client;
When you upload large objects using the multipart upload API, you can specify server-side
encryption for the objects that you are uploading, as follows:
• When using the low-level multipart upload API, specify server-side encryption when you
call the Aws\S3\S3Client::createMultipartUpload() method. To add the x-amz-server-
side-encryption request header to your request, specify the array parameter's
ServerSideEncryption key with the value AES256. For more information about the low-level
multipart upload API, see Using the AWS SDKs (low-level API) (p. 82).
• When using the high-level multipart upload API, specify server-side encryption using the
ServerSideEncryption parameter of the CreateMultipartUpload method. For an example
of using the setOption() method with the high-level multipart upload API, see Uploading an
object using multipart upload (p. 78).
To determine the encryption state of an existing object, retrieve the object metadata by calling the
Aws\S3\S3Client::headObject() method as shown in the following PHP code example.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// $s3Client is an initialized Aws\S3\S3Client.
$result = $s3Client->headObject([
    'Bucket' => $bucket,
    'Key'    => $key,
]);
// $result['ServerSideEncryption'] is set (for example, to AES256) if the object
// is stored using server-side encryption.
To change the encryption state of an existing object, make a copy of the object using the Aws
\S3\S3Client::copyObject() method and delete the source object. By default, copyObject() does
not encrypt the target unless you explicitly request server-side encryption of the destination object
using the ServerSideEncryption parameter with the value AES256. The following PHP code
example makes a copy of an object and adds server-side encryption to the copied object.
require 'vendor/autoload.php';
use Aws\S3\S3Client;
Ruby
When using the AWS SDK for Ruby to upload an object, you can specify that the object be
stored encrypted at rest with server-side encryption (SSE). When you read the object back, it is
automatically decrypted.
The following AWS SDK for Ruby – Version 3 example demonstrates how to specify that a file
uploaded to Amazon S3 be encrypted at rest.
require 'aws-sdk-s3'
# Uploads a file to an Amazon S3 bucket and then encrypts the file server-side
# by using the 256-bit Advanced Encryption Standard (AES-256) block cipher.
#
# Prerequisites:
#
# - An Amazon S3 bucket.
#
# @param s3_client [Aws::S3::Client] An initialized Amazon S3 client.
For an example that shows how to upload an object without SSE, see Uploading objects (p. 65).
The following code example demonstrates how to determine the encryption state of an existing
object.
require 'aws-sdk-s3'
end
If server-side encryption is not used for the object that is stored in Amazon S3, the method returns
nil.
To change the encryption state of an existing object, make a copy of the object and delete the
source object. By default, the copy methods do not encrypt the target unless you explicitly request
server-side encryption. You can request the encryption of the target object by specifying the
server_side_encryption value in the options hash argument as shown in the following Ruby
code example. The code example demonstrates how to copy an object and encrypt the copy.
require 'aws-sdk-s3'
For examples of setting up encryption using AWS CloudFormation, see Create a bucket with default
encryption and Create a bucket using AWS KMS server-side encryption with an S3 Bucket Key in the AWS
CloudFormation User Guide.
For a sample of how to copy an object without encryption, see Copying objects (p. 102).
When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256
encryption to your data and removes the encryption key from memory. When you retrieve an object,
you must provide the same encryption key as part of your request. Amazon S3 first verifies that the
encryption key you provided matches and then decrypts the object before returning the object data to
you.
There are no new charges for using server-side encryption with customer-provided encryption keys
(SSE-C). However, requests to configure and use SSE-C incur standard Amazon S3 request charges. For
information about pricing, see Amazon S3 pricing.
Important
Amazon S3 does not store the encryption key you provide. Instead, it stores a randomly salted
HMAC value of the encryption key to validate future requests. The salted HMAC value cannot
be used to derive the value of the encryption key or to decrypt the contents of the encrypted
object. That means if you lose the encryption key, you lose the object.
SSE-C overview
This section provides an overview of SSE-C:
x-amz-server-side-encryption-customer-algorithm
    Use this header to specify the encryption algorithm. The header value must be "AES256".

x-amz-server-side-encryption-customer-key
    Use this header to provide the 256-bit, base64-encoded encryption key for Amazon S3 to use to
    encrypt or decrypt your data.

x-amz-server-side-encryption-customer-key-MD5
    Use this header to provide the base64-encoded 128-bit MD5 digest of the encryption key according
    to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the
    encryption key was transmitted without error.
You can use AWS SDK wrapper libraries to add these headers to your request. If you need to, you can
make the Amazon S3 REST API calls directly in your application.
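If you construct the request headers yourself, the following minimal sketch (not part of the SDK
examples in this guide; the class name is illustrative) shows one way to derive the values that these
headers expect by using standard Java cryptography utilities.

import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class SseCHeaderValues {
    public static void main(String[] args) throws Exception {
        // Generate a 256-bit AES key to use as the customer-provided encryption key.
        KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
        keyGenerator.init(256, new SecureRandom());
        SecretKey key = keyGenerator.generateKey();

        // x-amz-server-side-encryption-customer-key: the base64-encoded key.
        String base64Key = Base64.getEncoder().encodeToString(key.getEncoded());

        // x-amz-server-side-encryption-customer-key-MD5: the base64-encoded
        // 128-bit MD5 digest of the raw (unencoded) key bytes.
        byte[] md5 = MessageDigest.getInstance("MD5").digest(key.getEncoded());
        String base64KeyMd5 = Base64.getEncoder().encodeToString(md5);

        // x-amz-server-side-encryption-customer-algorithm is always the literal string "AES256".
        System.out.println("key: " + base64Key + ", key-MD5: " + base64KeyMd5);
    }
}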
Note
You cannot use the Amazon S3 console to upload an object and request SSE-C. You also cannot
use the console to update (for example, change the storage class or add metadata) an existing
object stored using SSE-C.
• When creating a presigned URL, you must specify the algorithm using the x-amz-server-side-
encryption-customer-algorithm in the signature calculation.
• When using the presigned URL to upload a new object, retrieve an existing object, or retrieve only
object metadata, you must provide all the encryption headers in your client application.
Note
For non-SSE-C objects, you can generate a presigned URL and directly paste that into a
browser, for example to access the data.
However, this is not true for SSE-C objects because in addition to the presigned URL, you also
need to include HTTP headers that are specific to SSE-C objects. Therefore, you can use the
presigned URL for SSE-C objects only programmatically.
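As an illustration of that point, the following minimal sketch (not taken from the SDK examples in this
guide) retrieves an SSE-C object through a presigned URL by adding the required headers with standard
Java HTTP classes. The presignedUrl, base64Key, and base64KeyMd5 values are assumed inputs, and the
URL is assumed to have been generated with the algorithm included in the signature calculation.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class GetSseCObjectWithPresignedUrl {
    // Downloads an SSE-C object through a presigned URL. The same key material that
    // was used to store the object must be supplied again at request time.
    public static InputStream download(URL presignedUrl, String base64Key, String base64KeyMd5)
            throws Exception {
        HttpURLConnection connection = (HttpURLConnection) presignedUrl.openConnection();
        connection.setRequestMethod("GET");
        connection.setRequestProperty("x-amz-server-side-encryption-customer-algorithm", "AES256");
        connection.setRequestProperty("x-amz-server-side-encryption-customer-key", base64Key);
        connection.setRequestProperty("x-amz-server-side-encryption-customer-key-MD5", base64KeyMd5);
        return connection.getInputStream();
    }
}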
• GET operation — When retrieving objects using the GET API (see GET Object), you can specify the
request headers. Torrents are not supported for objects encrypted using SSE-C.
• HEAD operation — To retrieve object metadata using the HEAD API (see HEAD Object), you can
specify these request headers.
• PUT operation — When uploading data using the PUT Object API (see PUT Object), you can specify
these request headers.
• Multipart Upload — When uploading large objects using the multipart upload API, you can specify
these headers. You specify these headers in the initiate request (see Initiate Multipart Upload) and
each subsequent part upload request (see Upload Part or
). For each part upload request, the encryption information must be the same as what you provided in
the initiate multipart upload request.
• POST operation — When using a POST operation to upload an object (see POST Object), instead of
the request headers, you provide the same information in the form fields.
• Copy operation — When you copy an object (see PUT Object - Copy), you have both a source object
and a target object:
• If you want the target object encrypted using server-side encryption with AWS managed keys, you
must provide the x-amz-server-side-encryption request header.
• If you want the target object encrypted using SSE-C, you must provide encryption information using
the three headers described in the preceding table.
• If the source object is encrypted using SSE-C, you must provide encryption key information using the
following headers so that Amazon S3 can decrypt the object for copying.
x-amz-copy-source-server-side-encryption-customer-algorithm
    Include this header to specify the algorithm Amazon S3 should use to decrypt the source object.
    This value must be AES256.

x-amz-copy-source-server-side-encryption-customer-key
    Include this header to provide the base64-encoded encryption key for Amazon S3 to use to decrypt
    the source object. This encryption key must be the one that you provided Amazon S3 when you
    created the source object. Otherwise, Amazon S3 cannot decrypt the object.

x-amz-copy-source-server-side-encryption-customer-key-MD5
    Include this header to provide the base64-encoded 128-bit MD5 digest of the encryption key
    according to RFC 1321.
Using the AWS SDKs to specify SSE-C for PUT, GET, Head, and Copy operations
The following examples show how to request server-side encryption with customer-provided keys (SSE-
C) for objects. The examples perform the following operations. Each operation shows how to specify
SSE-C-related headers in the request:
• Copy object—Makes a copy of the previously-uploaded object. Because the source object is stored
using SSE-C, you must provide its encryption information in your copy request. By default, Amazon
S3 encrypts the copy of the object only if you explicitly request it. This example directs Amazon S3 to
store an encrypted copy of the object.
Java
Note
This example shows how to upload an object in a single operation. When using the
Multipart Upload API to upload large objects, you provide encryption information in the
same way shown in this example. For examples of multipart uploads that use the AWS SDK
for Java, see Uploading an object using multipart upload (p. 78).
To add the required encryption information, you include an SSECustomerKey in your request. For
more information about the SSECustomerKey class, see the REST API section.
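As a minimal sketch (assuming an initialized AmazonS3 client and illustrative bucket, key, and file
names), an SSECustomerKey can be attached to upload and download requests as follows; the SDK
derives the three SSE-C request headers from it.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.SSECustomerKey;
import javax.crypto.KeyGenerator;
import java.io.File;
import java.security.SecureRandom;

public class SseCKeySketch {
    public static void putAndGet(AmazonS3 s3Client, String bucketName, String keyName, File file)
            throws Exception {
        // Generate a 256-bit key and wrap it in an SSECustomerKey.
        KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
        keyGenerator.init(256, new SecureRandom());
        SSECustomerKey sseKey = new SSECustomerKey(keyGenerator.generateKey());

        // Upload the object with SSE-C.
        s3Client.putObject(new PutObjectRequest(bucketName, keyName, file)
                .withSSECustomerKey(sseKey));

        // The same key must be provided again to read the object back.
        S3Object object = s3Client.getObject(new GetObjectRequest(bucketName, keyName)
                .withSSECustomerKey(sseKey));
        object.close();
    }
}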
For information about SSE-C, see Protecting data using server-side encryption with customer-
provided encryption keys (SSE-C) (p. 185). For instructions on creating and testing a working
sample, see Testing the Amazon S3 Java Code Examples (p. 950).
Example
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;
import javax.crypto.KeyGenerator;
import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
try {
S3_CLIENT = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();
// Upload an object.
uploadObject(bucketName, keyName, new File(uploadFileName));
// Copy the object into a new object that also uses SSE-C.
copyObject(bucketName, keyName, targetKeyName);
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
S3_CLIENT.copyObject(copyRequest);
System.out.println("Object copied");
}
.NET
Note
For examples of uploading large objects using the multipart upload API, see Uploading an
object using multipart upload (p. 78) and Using the AWS SDKs (low-level API) (p. 82).
For information about SSE-C, see Protecting data using server-side encryption with customer-
provided encryption keys (SSE-C) (p. 185). For information about creating and testing a working
sample, see Running the Amazon S3 .NET Code Examples (p. 951).
Example
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.IO;
using System.Security.Cryptography;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class SSEClientEncryptionKeyObjectOperationsTest
{
private const string bucketName = "*** bucket name ***";
private const string keyName = "*** key name for new object created ***";
private const string copyTargetKeyName = "*** key name for object copy ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;
                // 3. Get object metadata and verify that the object uses AES-256 encryption.
                await GetObjectMetadataAsync(base64Key);
                // 4. Copy both the source and target objects using server-side encryption with
                //    a customer-provided encryption key.
await CopyObjectAsync(aesEncryption, base64Key);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered ***. Message:'{0}' when writing an
object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
            if (getResponse.ServerSideEncryptionCustomerMethod == ServerSideEncryptionCustomerMethod.AES256)
                Console.WriteLine("Object encryption method is AES256, same as we set");
            else
                Console.WriteLine("Error...Object encryption method is not the same as AES256 we set");
            // Assert.AreEqual(putObjectRequest.ContentBody, content);
            // Assert.AreEqual(ServerSideEncryptionCustomerMethod.AES256, getResponse.ServerSideEncryptionCustomerMethod);
}
}
private static async Task GetObjectMetadataAsync(string base64Key)
{
GetObjectMetadataRequest getObjectMetadataRequest = new
GetObjectMetadataRequest
{
BucketName = bucketName,
Key = keyName,
The example in the preceding section shows how to request server-side encryption with a customer-
provided key (SSE-C) in the PUT, GET, Head, and Copy operations. This section describes other Amazon
S3 APIs that support SSE-C.
Java
To upload large objects, you can use the multipart upload API (see Uploading and copying objects using
multipart upload (p. 72)). You can use either high-level or low-level APIs to upload large objects.
These APIs support encryption-related headers in the request.
• When using the high-level TransferManager API, you provide the encryption-specific headers in
the PutObjectRequest (see Uploading an object using multipart upload (p. 78)).
• When using the low-level API, you provide encryption-related information in the
InitiateMultipartUploadRequest, followed by identical encryption information in each
UploadPartRequest. You do not need to provide any encryption-specific headers in your
CompleteMultipartUploadRequest. For examples, see Using the AWS SDKs (low-level API) (p. 82).
The following example uses TransferManager to create objects and shows how to provide SSE-C
related information. The example does the following:
Example
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.SSECustomerKey;
import com.amazonaws.services.s3.transfer.Copy;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;
import javax.crypto.KeyGenerator;
import java.io.File;
import java.security.SecureRandom;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();
TransferManager tm = TransferManagerBuilder.standard()
.withS3Client(s3Client)
.build();
// Copy the object and store the copy using SSE-C with a new key.
CopyObjectRequest copyObjectRequest = new CopyObjectRequest(bucketName,
keyName, bucketName, targetKeyName);
SSECustomerKey sseTargetObjectEncryptionKey = new
SSECustomerKey(keyGenerator.generateKey());
copyObjectRequest.setSourceSSECustomerKey(sseCustomerEncryptionKey);
copyObjectRequest.setDestinationSSECustomerKey(sseTargetObjectEncryptionKey);
.NET
To upload large objects, you can use the multipart upload API (see Uploading and copying objects using
multipart upload (p. 72)). The AWS SDK for .NET provides both high-level and low-level APIs to upload
large objects. These APIs support encryption-related headers in the request.
• When using the high-level TransferUtility API, you provide the encryption-specific headers in
the TransferUtilityUploadRequest as shown. For code examples, see Uploading an object
using multipart upload (p. 78).
ServerSideEncryptionCustomerProvidedKey = base64Key,
};
• When using the low-level API, you provide encryption-related information in the initiate multipart
upload request, followed by identical encryption information in the subsequent upload part
requests. You do not need to provide any encryption-specific headers in your complete multipart
upload request. For examples, see Using the AWS SDKs (low-level API) (p. 82).
The following is a low-level multipart upload example that makes a copy of an existing large
object. In the example, the object to be copied is stored in Amazon S3 using SSE-C, and you want
to save the target object also using SSE-C. In the example, you do the following:
• Initiate a multipart upload request by providing an encryption key and related information.
• Provide source and target object encryption keys and related information in the
CopyPartRequest.
• Obtain the size of the source object to be copied by retrieving the object metadata.
• Upload the objects in 5 MB parts.
Example
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class SSECLowLevelMPUcopyObjectTest
{
private const string existingBucketName = "*** bucket name ***";
private const string sourceKeyName = "*** source object key name ***";
private const string targetKeyName = "*** key name for the target object
***";
private const string filePath = @"*** file path ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
static void Main()
{
s3Client = new AmazonS3Client(bucketRegion);
CopyObjClientEncryptionKeyAsync().Wait();
}
// 1. Initialize.
InitiateMultipartUploadRequest initiateRequest = new
InitiateMultipartUploadRequest
{
BucketName = existingBucketName,
Key = targetKeyName,
ServerSideEncryptionCustomerMethod =
ServerSideEncryptionCustomerMethod.AES256,
ServerSideEncryptionCustomerProvidedKey = base64Key,
};
InitiateMultipartUploadResponse initResponse =
await s3Client.InitiateMultipartUploadAsync(initiateRequest);
// 2. Upload Parts.
long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB
long firstByte = 0;
long lastByte = partSize;
try
{
                // First find the source object size. Because the object is stored encrypted with a
                // customer-provided key, you need to provide the encryption information in your request.
                GetObjectMetadataRequest getObjectMetadataRequest = new GetObjectMetadataRequest()
                {
                    BucketName = existingBucketName,
                    Key = sourceKeyName,
                    ServerSideEncryptionCustomerMethod = ServerSideEncryptionCustomerMethod.AES256,
                    ServerSideEncryptionCustomerProvidedKey = base64Key // *** source object encryption key ***
                };
long filePosition = 0;
                for (int i = 1; filePosition < getObjectMetadataResponse.ContentLength; i++)
                {
                    CopyPartRequest copyPartRequest = new CopyPartRequest
                    {
                        UploadId = initResponse.UploadId,
                        // Source.
                        SourceBucket = existingBucketName,
                        SourceKey = sourceKeyName,
                        // The source object is stored using SSE-C. Provide its encryption information.
                        CopySourceServerSideEncryptionCustomerMethod = ServerSideEncryptionCustomerMethod.AES256,
                        CopySourceServerSideEncryptionCustomerProvidedKey = base64Key, // *** source object encryption key ***
                        FirstByte = firstByte,
                        // If the last part is smaller than the normal part size, use the remaining size.
                        LastByte = lastByte > getObjectMetadataResponse.ContentLength ?
                            getObjectMetadataResponse.ContentLength - 1 : lastByte,
                        // Target.
                        DestinationBucket = existingBucketName,
                        DestinationKey = targetKeyName,
                        PartNumber = i,
// Step 3: complete.
CompleteMultipartUploadRequest completeRequest = new
CompleteMultipartUploadRequest
{
BucketName = existingBucketName,
Key = targetKeyName,
UploadId = initResponse.UploadId,
};
completeRequest.AddPartETags(uploadResponses);
CompleteMultipartUploadResponse completeUploadResponse =
await s3Client.CompleteMultipartUploadAsync(completeRequest);
}
catch (Exception exception)
{
Console.WriteLine("Exception occurred: {0}", exception.Message);
AbortMultipartUploadRequest abortMPURequest = new
AbortMultipartUploadRequest
{
BucketName = existingBucketName,
Key = targetKeyName,
UploadId = initResponse.UploadId
};
s3Client.AbortMultipartUpload(abortMPURequest);
}
}
private static async Task CreateSampleObjUsingClientEncryptionKeyAsync(string
base64Key, IAmazonS3 s3Client)
{
// List to store upload part responses.
List<UploadPartResponse> uploadResponses = new
List<UploadPartResponse>();
// 1. Initialize.
InitiateMultipartUploadRequest initiateRequest = new
InitiateMultipartUploadRequest
{
BucketName = existingBucketName,
Key = sourceKeyName,
ServerSideEncryptionCustomerMethod =
ServerSideEncryptionCustomerMethod.AES256,
ServerSideEncryptionCustomerProvidedKey = base64Key
};
InitiateMultipartUploadResponse initResponse =
await s3Client.InitiateMultipartUploadAsync(initiateRequest);
// 2. Upload Parts.
long contentLength = new FileInfo(filePath).Length;
long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB
try
{
long filePosition = 0;
filePosition += partSize;
}
// Step 3: complete.
CompleteMultipartUploadRequest completeRequest = new
CompleteMultipartUploadRequest
{
BucketName = existingBucketName,
Key = sourceKeyName,
UploadId = initResponse.UploadId,
//PartETags = new List<PartETag>(uploadResponses)
};
completeRequest.AddPartETags(uploadResponses);
CompleteMultipartUploadResponse completeUploadResponse =
await s3Client.CompleteMultipartUploadAsync(completeRequest);
}
catch (Exception exception)
{
Console.WriteLine("Exception occurred: {0}", exception.Message);
AbortMultipartUploadRequest abortMPURequest = new
AbortMultipartUploadRequest
{
BucketName = existingBucketName,
Key = sourceKeyName,
UploadId = initResponse.UploadId
};
await s3Client.AbortMultipartUploadAsync(abortMPURequest);
}
}
}
}
• Use a customer master key (CMK) stored in AWS Key Management Service (AWS KMS).
• Use a master key that you store within your application.
• When uploading an object — Using the CMK ID, the client first sends a request to AWS KMS for a
new symmetric key that it can use to encrypt your object data. AWS KMS returns two versions of a
randomly generated data key:
• A plaintext version of the data key that the client uses to encrypt the object data.
• A cipher blob of the same data key that the client uploads to Amazon S3 as object metadata.
Note
The client obtains a unique data key for each object that it uploads.
• When downloading an object — The client downloads the encrypted object from Amazon S3 along
with the cipher blob version of the data key stored as object metadata. The client then sends the
cipher blob to AWS KMS to get the plaintext version of the data key so that it can decrypt the object
data.
For more information about AWS KMS, see What is AWS Key Management Service? in the AWS Key
Management Service Developer Guide.
Example
The following code example demonstrates how to upload an object to Amazon S3 using AWS KMS with
the AWS SDK for Java. The example uses an AWS managed CMK to encrypt data on the client side before
uploading it to Amazon S3. If you already have a CMK, you can use that by specifying the value of the
keyId variable in the example code. If you don't have a CMK, or you need another one, you can generate
one through the Java API. The example code automatically generates a CMK to use.
For instructions on creating and testing a working example, see Testing the Amazon S3 Java Code
Examples (p. 950).
// --
// specify an Amazon KMS customer master key (CMK) ID
String keyId = createKeyResult.getKeyMetadata().getKeyId();
s3Encryption.shutdown();
kmsClient.shutdown();
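The preceding snippet is abbreviated. The following sketch, which assumes the AmazonS3EncryptionClientV2
API from version 1 of the AWS SDK for Java and uses an example KMS key ID and placeholder bucket and
key names, shows roughly how such a client might be constructed and used. It is an illustration, not the
complete example referenced above.

import com.amazonaws.regions.Regions;
import com.amazonaws.services.kms.AWSKMS;
import com.amazonaws.services.kms.AWSKMSClientBuilder;
import com.amazonaws.services.s3.AmazonS3EncryptionClientV2Builder;
import com.amazonaws.services.s3.AmazonS3EncryptionV2;
import com.amazonaws.services.s3.model.CryptoConfigurationV2;
import com.amazonaws.services.s3.model.CryptoMode;
import com.amazonaws.services.s3.model.KMSEncryptionMaterialsProvider;

public class KmsClientSideEncryptionSketch {
    public static void main(String[] args) {
        Regions region = Regions.US_EAST_1;                     // example Region
        String keyId = "1234abcd-12ab-34cd-56ef-1234567890ab";  // example KMS key (CMK) ID

        AWSKMS kmsClient = AWSKMSClientBuilder.standard().withRegion(region).build();

        // The encryption client requests a data key from AWS KMS for each object, encrypts
        // the object locally, and stores the encrypted data key as object metadata.
        AmazonS3EncryptionV2 s3Encryption = AmazonS3EncryptionClientV2Builder.standard()
                .withRegion(region)
                .withKmsClient(kmsClient)
                .withCryptoConfiguration(new CryptoConfigurationV2()
                        .withCryptoMode(CryptoMode.StrictAuthenticatedEncryption))
                .withEncryptionMaterialsProvider(new KMSEncryptionMaterialsProvider(keyId))
                .build();

        s3Encryption.putObject("DOC-EXAMPLE-BUCKET1", "my-object-key", "object content");
        // On download, the client sends the stored cipher blob to AWS KMS to get the
        // plaintext data key, then decrypts the object data locally.
        System.out.println(s3Encryption.getObjectAsString("DOC-EXAMPLE-BUCKET1", "my-object-key"));

        s3Encryption.shutdown();
        kmsClient.shutdown();
    }
}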
• When uploading an object — You provide a client-side master key to the Amazon S3 encryption
client. The client uses the master key only to encrypt the data encryption key that it generates
randomly.
The client-side master key that you provide can be either a symmetric key or a public/private key pair.
The following code examples show how to use each type of key.
For more information, see Client-Side Data Encryption with the AWS SDK for Java and Amazon S3.
Note
If you get a cipher-encryption error message when you use the encryption API for the first time,
your version of the JDK might have a Java Cryptography Extension (JCE) jurisdiction policy file
that limits the maximum key length for encryption and decryption transformations to 128 bits.
The AWS SDK requires a maximum key length of 256 bits.
Example
For instructions on creating and testing a working example, see Testing the Amazon S3 Java Code
Examples (p. 950).
// --
// generate a symmetric encryption key for testing
KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
keyGenerator.init(256, new SecureRandom());
SecretKey secretKey = keyGenerator.generateKey();
Example
For instructions on creating and testing a working example, see Testing the Amazon S3 Java Code
Examples (p. 950).
// --
// generate an asymmetric key pair for testing
KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("RSA");
keyPairGenerator.initialize(2048, new SecureRandom()); // example key size
KeyPair keyPair = keyPairGenerator.generateKeyPair();
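As a rough sketch (assuming the AmazonS3EncryptionClientV2Builder API from version 1 of the AWS SDK
for Java, a freshly generated symmetric test key, and placeholder bucket and key names), a client-side
master key can be supplied to the encryption client as follows.

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3EncryptionClientV2Builder;
import com.amazonaws.services.s3.AmazonS3EncryptionV2;
import com.amazonaws.services.s3.model.CryptoConfigurationV2;
import com.amazonaws.services.s3.model.EncryptionMaterials;
import com.amazonaws.services.s3.model.StaticEncryptionMaterialsProvider;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.SecureRandom;

public class ClientSideSymmetricKeySketch {
    public static void main(String[] args) throws Exception {
        // Generate a 256-bit client-side master key for testing. In production, persist
        // this key securely; without it, encrypted objects cannot be decrypted.
        KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
        keyGenerator.init(256, new SecureRandom());
        SecretKey secretKey = keyGenerator.generateKey();

        // The client uses the master key only to encrypt the randomly generated
        // per-object data key.
        AmazonS3EncryptionV2 s3Encryption = AmazonS3EncryptionClientV2Builder.standard()
                .withRegion(Regions.US_EAST_1) // example Region
                .withCryptoConfiguration(new CryptoConfigurationV2())
                .withEncryptionMaterialsProvider(
                        new StaticEncryptionMaterialsProvider(new EncryptionMaterials(secretKey)))
                .build();

        s3Encryption.putObject("DOC-EXAMPLE-BUCKET1", "my-object-key", "object content");
        System.out.println(s3Encryption.getObjectAsString("DOC-EXAMPLE-BUCKET1", "my-object-key"));
        s3Encryption.shutdown();
    }
}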
• An AWS Site-to-Site VPN connection. For more information, see What is AWS Site-to-Site VPN?
• An AWS Direct Connect connection. For more information, see What is AWS Direct Connect?
• An AWS PrivateLink connection. For more information, see AWS PrivateLink for Amazon S3 (p. 202).
Access to Amazon S3 via the network is through AWS published APIs. Clients must support Transport
Layer Security (TLS) 1.0. We recommend TLS 1.2 or above. Clients must also support cipher suites
with Perfect Forward Secrecy (PFS), such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Diffie-
Hellman Ephemeral (ECDHE). Most modern systems such as Java 7 and later support these modes.
Additionally, you must sign requests using an access key ID and a secret access key that are associated
with an IAM principal, or you can use the AWS Security Token Service (STS) to generate temporary
security credentials to sign requests.
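For example, a client might obtain temporary credentials from AWS STS and use them to sign Amazon S3
requests. The following sketch (assuming version 1 of the AWS SDK for Java and an example Region) is
illustrative only.

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
import com.amazonaws.services.securitytoken.model.Credentials;
import com.amazonaws.services.securitytoken.model.GetSessionTokenRequest;

public class TemporaryCredentialsSketch {
    public static void main(String[] args) {
        // Request temporary security credentials from AWS STS.
        AWSSecurityTokenService sts = AWSSecurityTokenServiceClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .build();
        Credentials credentials = sts.getSessionToken(
                new GetSessionTokenRequest().withDurationSeconds(3600)).getCredentials();

        // Sign Amazon S3 requests with the temporary credentials. The client uses TLS by default.
        BasicSessionCredentials sessionCredentials = new BasicSessionCredentials(
                credentials.getAccessKeyId(),
                credentials.getSecretAccessKey(),
                credentials.getSessionToken());
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .withCredentials(new AWSStaticCredentialsProvider(sessionCredentials))
                .build();
        s3.listBuckets().forEach(bucket -> System.out.println(bucket.getName()));
    }
}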
Interface endpoints are represented by one or more elastic network interfaces (ENIs) that are assigned
private IP addresses from subnets in your VPC. Requests that are made to interface endpoints for
Amazon S3 are automatically routed to Amazon S3 on the Amazon network. You can also access
interface endpoints in your VPC from on-premises applications through AWS Direct Connect or AWS
Virtual Private Network (AWS VPN). For more information about how to connect your VPC with your on-
premises network, see the AWS Direct Connect User Guide and the AWS Site-to-Site VPN User Guide.
For general information about interface endpoints, see Interface VPC endpoints (AWS PrivateLink).
Topics
• Types of VPC endpoints for Amazon S3 (p. 203)
• Accessing Amazon S3 interface endpoints (p. 203)
• Accessing buckets and S3 access points from S3 interface endpoints (p. 204)
• Updating an on-premises DNS configuration (p. 206)
• Creating a VPC endpoint policy for Amazon S3 (p. 207)
Does not allow access from on premises Allows access from on premises
Does not allow access from another AWS Region Allows access from another AWS Region
For more information about gateway endpoints, see Gateway VPC endpoints in the Amazon VPC User
Guide.
When you create an interface endpoint, Amazon S3 generates two types of endpoint-specific, S3 DNS
names: regional and zonal.
• Regional DNS names include a unique VPC endpoint ID, a service identifier, the AWS
Region, and vpce.amazonaws.com in their names. For example, for VPC endpoint ID
vpce-1a2b3c4d, the DNS name generated might be similar to vpce-1a2b3c4d-5e6f.s3.us-
east-1.vpce.amazonaws.com.
• Zonal DNS names include the same information plus the Availability Zone, for example,
vpce-1a2b3c4d-5e6f-us-east-1a.s3.us-east-1.vpce.amazonaws.com.
Endpoint-specific S3 DNS names can be resolved from the S3 public DNS domain.
Note
Amazon S3 interface endpoints do not support the private DNS feature of interface endpoints.
For more information about Private DNS for interface endpoints, see the Amazon VPC User
Guide.
The following image shows the VPC console Details tab, where you can find the DNS name of a VPC
endpoint. In this example, the VPC endpoint ID (vpce-id) is vpce-0e25b8cdd720f900e and the DNS
name is vpce-0e25b8cdd720f900e-argc85vg.s3.us-east-1.vpce.amazonaws.com.
For more about how to view your endpoint-specific DNS names, see Viewing endpoint service private
DNS name configuration in the Amazon VPC User Guide.
Example: Using the endpoint URL to list objects from an access point
import boto3

session = boto3.session.Session()

s3_client = session.client(
    service_name='s3',
    endpoint_url='https://bucket.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com'
)
ap_client = session.client(
    service_name='s3',
    endpoint_url='https://accesspoint.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com'
)
control_client = session.client(
    service_name='s3control',
    endpoint_url='https://control.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com'
)

# List objects through the access point by using its ARN.
response = ap_client.list_objects_v2(
    Bucket='arn:aws:s3:us-east-1:123456789012:accesspoint/prod'
)
// bucket client
final AmazonS3 s3 = AmazonS3ClientBuilder.standard().withEndpointConfiguration(
new AwsClientBuilder.EndpointConfiguration(
"https://bucket.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com",
Regions.DEFAULT_REGION.getName()
)
).build();
List<Bucket> buckets = s3.listBuckets();
// accesspoint client
final AmazonS3 s3accesspoint = AmazonS3ClientBuilder.standard().withEndpointConfiguration(
        new AwsClientBuilder.EndpointConfiguration(
                "https://accesspoint.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com",
                Regions.DEFAULT_REGION.getName()
        )
).build();
ObjectListing objects = s3accesspoint.listObjects("arn:aws:s3:us-east-1:123456789012:accesspoint/prod");
// control client
final AWSS3Control s3control = AWSS3ControlClient.builder().withEndpointConfiguration(
new AwsClientBuilder.EndpointConfiguration(
"https://control.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com",
Regions.DEFAULT_REGION.getName()
)
).build();
final ListJobsResult jobs = s3control.listJobs(new ListJobsRequest());
// bucket client
Region region = Region.US_EAST_1;
s3Client = S3Client.builder().region(region)
        .endpointOverride(URI.create("https://bucket.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com"))
        .build();

// accesspoint client
Region region = Region.US_EAST_1;
s3Client = S3Client.builder().region(region)
        .endpointOverride(URI.create("https://accesspoint.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com"))
        .build();

// control client
Region region = Region.US_EAST_1;
s3ControlClient = S3ControlClient.builder().region(region)
        .endpointOverride(URI.create("https://control.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com"))
        .build();
• Your on-premises network uses AWS Direct Connect or AWS VPN to connect to VPC A.
• Your applications on-premises and in VPC A use endpoint-specific DNS names to access Amazon S3
through the S3 interface endpoint.
• On-premises applications send data to the interface endpoint in the VPC through AWS Direct Connect
(or AWS VPN). AWS PrivateLink moves the data from the interface endpoint to Amazon S3 over the
Amazon network.
• In-VPC applications also send traffic to the interface endpoint. AWS PrivateLink moves the data from
the interface endpoint to Amazon S3 over the Amazon network.
• On-premises applications use endpoint-specific DNS names to send data to the interface endpoint
within the VPC through AWS Direct Connect (or AWS VPN). AWS PrivateLink moves the data from the
interface endpoint to Amazon S3 over the Amazon network.
• Using default regional Amazon S3 names, in-VPC applications send data to the gateway endpoint that
connects to Amazon S3 over the Amazon network.
For more information about gateway endpoints, see Gateway VPC endpoints in the Amazon VPC User
Guide.
• The AWS Identity and Access Management (IAM) principal that can perform actions
• The actions that can be performed
• The resources on which actions can be performed
You can also use Amazon S3 bucket policies to restrict access to specific buckets from a specific VPC
endpoint using the aws:sourceVpce condition in your bucket policy. The following examples show
policies that restrict access to a bucket or to an endpoint.
Important
• When applying the Amazon S3 bucket policies for VPC endpoints described in this section,
you might block your access to the bucket without intending to do so. Bucket permissions
that are intended to specifically limit bucket access to connections originating from your VPC
endpoint can block all connections to the bucket. For information about how to fix this issue,
see My bucket policy has the wrong VPC or VPC endpoint ID. How can I fix the policy so that I
can access the bucket? in the AWS Support Knowledge Center.
• Before using the following example policy, replace the VPC endpoint ID with an appropriate
value for your use case. Otherwise, you won't be able to access your bucket.
• This policy disables console access to the specified bucket, because console requests don't
originate from the specified VPC endpoint.
You can create an endpoint policy that restricts access to specific Amazon S3 buckets only. This is useful
if you have other AWS services in your VPC that use buckets. The following endpoint policy restricts access
to DOC-EXAMPLE-BUCKET1 only. Replace DOC-EXAMPLE-BUCKET1 with the name of your bucket.
{
"Version": "2012-10-17",
"Id": "Policy1415115909151",
"Statement": [
{ "Sid": "Access-to-specific-bucket-only",
"Principal": "*",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET1",
"arn:aws:s3:::DOC-EXAMPLE-BUCKET1/*"]
}
]
}
You can create a policy that restricts access only to the S3 buckets in a specific AWS account. Use this
to prevent clients within your VPC from accessing buckets that you do not own. The following example
creates a policy that restricts access to resources owned by a single AWS account ID, 111122223333.
{
"Statement": [
{
"Sid": "Access-to-bucket-in-specific-account-only",
"Principal": "",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Deny",
"Resource": "arn:aws:s3:::",
"Condition": {
"StringNotEquals": {
"s3:ResourceAccount": "111122223333"
}
}
}
]
}
The following Amazon S3 bucket policy allows access to a specific bucket, DOC-EXAMPLE-BUCKET2,
from endpoint vpce-1a2b3c4d only. The policy denies all access to the bucket if the specified endpoint
is not being used. The aws:sourceVpce condition is used to specify the endpoint and does not require
an Amazon Resource Name (ARN) for the VPC endpoint resource, only the endpoint ID. Replace DOC-
EXAMPLE-BUCKET2 and vpce-1a2b3c4d with a real bucket name and endpoint.
{
"Version": "2012-10-17",
"Id": "Policy1415115909152",
"Statement": [
{ "Sid": "Access-to-specific-VPCE-only",
"Principal": "*",
"Action": "s3:*",
"Effect": "Deny",
"Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET2",
"arn:aws:s3:::DOC-EXAMPLE-BUCKET2/*"],
"Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-1a2b3c4d"}}
}
]
}
For more policy examples, see Endpoints for Amazon S3 in the Amazon VPC User Guide.
For more information about VPC connectivity, see Network-to-Amazon VPC connectivity options in the
AWS whitepaper: Amazon Virtual Private Cloud Connectivity Options.
Amazon S3 offers access policy options broadly categorized as resource-based policies and user policies.
Access policies that you attach to your resources (buckets and objects) are referred to as resource-based
policies. For example, bucket policies and access control lists (ACLs) are resource-based policies. You can
also attach access policies to users in your account. These are called user policies. You can choose to use
resource-based policies, user policies, or some combination of these to manage permissions to your
Amazon S3 resources. The following sections provide general guidelines for managing permissions in
Amazon S3.
For more information about managing access to your Amazon S3 objects and buckets, see the topics
below.
Topics
• Overview of managing access (p. 210)
• Access policy guidelines (p. 215)
• How Amazon S3 authorizes a request (p. 219)
• Bucket policies and user policies (p. 226)
• Managing access with ACLs (p. 383)
• Using cross-origin resource sharing (CORS) (p. 397)
• Blocking public access to your Amazon S3 storage (p. 408)
• Managing data access with Amazon S3 access points (p. 418)
• Reviewing bucket access using Access Analyzer for S3 (p. 432)
• Controlling ownership of uploaded objects using S3 Object Ownership (p. 436)
• Verifying bucket ownership with bucket owner condition (p. 438)
Topics
• Amazon S3 resources: buckets and objects (p. 210)
• Amazon S3 bucket and object ownership (p. 211)
• Resource operations (p. 212)
• Managing access to resources (p. 212)
• Which access control method should I use? (p. 215)
• lifecycle – Stores lifecycle configuration information. For more information, see Managing your
storage lifecycle (p. 501).
• website – Stores website configuration information if you configure your bucket for website hosting.
For information, see Hosting a static website using Amazon S3 (p. 857).
• versioning – Stores versioning configuration. For more information, see PUT Bucket versioning in
the Amazon Simple Storage Service API Reference.
• policy and acl (access control list) – Store access permission information for the bucket.
• cors (cross-origin resource sharing) – Supports configuring your bucket to allow cross-origin requests.
For more information, see Using cross-origin resource sharing (CORS) (p. 397).
• object ownership – Enables the bucket owner to take ownership of new objects in the bucket,
regardless of who uploads them. For more information, see Controlling ownership of uploaded objects
using S3 Object Ownership (p. 436).
• logging – Enables you to request Amazon S3 to save bucket access logs.
• acl – Stores a list of access permissions on the object. For more information, see Managing access with
ACLs (p. 383).
• restore – Supports temporarily restoring an archived object. For more information, see POST Object
restore in the Amazon Simple Storage Service API Reference.
An object in the S3 Glacier storage class is an archived object. To access the object, you must first
initiate a restore request, which restores a copy of the archived object. In the request, you specify
the number of days that you want the restored copy to exist. For more information about archiving
objects, see Managing your storage lifecycle (p. 501).
• The AWS account that you use to create buckets and upload objects owns those resources.
• If you upload an object using AWS Identity and Access Management (IAM) user or role credentials, the
AWS account that the user or role belongs to owns the object.
• A bucket owner can grant cross-account permissions to another AWS account (or users in another
account) to upload objects. In this case, the AWS account that uploads objects owns those objects. The
bucket owner does not have permissions on the objects that other accounts own, with the following
exceptions:
• The bucket owner pays the bills. The bucket owner can deny access to any objects, or delete any
objects in the bucket, regardless of who owns them.
• The bucket owner can archive any objects or restore archived objects regardless of who owns them.
Archival refers to the storage class used to store the objects. For more information, see Managing
your storage lifecycle (p. 501).
A bucket owner can allow unauthenticated requests. For example, unauthenticated PUT Object
requests are allowed when a bucket has a public bucket policy, or when a bucket ACL grants WRITE or
FULL_CONTROL access to the All Users group or the anonymous user specifically. For more information
about public bucket policies and public access control lists (ACLs), see The meaning of "public" (p. 411).
All unauthenticated requests are made by the anonymous user. This user is represented in ACLs by
the specific canonical user ID 65a011a29cdf8ec533ec3d1ccaae921c. If an object is uploaded to a
bucket through an unauthenticated request, the anonymous user owns the object. The default object
ACL grants FULL_CONTROL to the anonymous user as the object's owner. Therefore, Amazon S3 allows
unauthenticated requests to retrieve the object or modify its ACL.
To prevent objects from being modified by the anonymous user, we recommend that you do not
implement bucket policies that allow anonymous public writes to your bucket or use ACLs that allow
the anonymous user write access to your bucket. You can enforce this recommended behavior by using
Amazon S3 Block Public Access.
For more information about blocking public access, see Blocking public access to your Amazon S3
storage (p. 408). For more information about ACLs, see Managing access with ACLs (p. 383).
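As a sketch of that recommendation (assuming version 1 of the AWS SDK for Java, an existing bucket name,
and appropriate permissions), all four S3 Block Public Access settings can be turned on for a bucket as
follows.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.PublicAccessBlockConfiguration;
import com.amazonaws.services.s3.model.SetPublicAccessBlockRequest;

public class BlockPublicAccessSketch {
    public static void main(String[] args) {
        AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

        // Turn on all four Block Public Access settings for the bucket so that
        // public ACLs and public bucket policies are ignored or rejected.
        s3Client.setPublicAccessBlock(new SetPublicAccessBlockRequest()
                .withBucketName("DOC-EXAMPLE-BUCKET1")
                .withPublicAccessBlockConfiguration(new PublicAccessBlockConfiguration()
                        .withBlockPublicAcls(true)
                        .withIgnorePublicAcls(true)
                        .withBlockPublicPolicy(true)
                        .withRestrictPublicBuckets(true)));
    }
}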
Important
We recommend that you don't use the AWS account root user credentials to make authenticated
requests. Instead, create an IAM user and grant that user full access. We refer to these users as
administrator users. You can use the administrator user credentials, instead of AWS account root
user credentials, to interact with AWS and perform tasks, such as create a bucket, create users,
and grant permissions. For more information, see AWS account root user credentials and IAM
user credentials in the AWS General Reference and Security best practices in IAM in the IAM User
Guide.
Resource operations
Amazon S3 provides a set of operations to work with the Amazon S3 resources. For a list of available
operations, see Actions defined by Amazon S3 (p. 243).
• Resource-based policies – Bucket policies and access control lists (ACLs) are resource-based because
you attach them to your Amazon S3 resources.
• ACL – Each bucket and object has an ACL associated with it. An ACL is a list of grants identifying
grantee and permission granted. You use ACLs to grant basic read/write permissions to other AWS
accounts. ACLs use an Amazon S3–specific XML schema.
The following is an example bucket ACL. The grant in the ACL shows a bucket owner as having full
control permission.
<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Owner>
<ID>*** Owner-Canonical-User-ID ***</ID>
<DisplayName>owner-display-name</DisplayName>
</Owner>
<AccessControlList>
<Grant>
<Grantee xmlns:xsi="http://www.w3.org/2