Amazon Simple Storage Service
User Guide
API Version 2006-03-01
Amazon's trademarks and trade dress may not be used in connection with any product or service that is not
Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or
discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may
or may not be affiliated with, connected to, or sponsored by Amazon.
Table of Contents
What is Amazon S3? ........................................................................................................................... 1
How do I...? ............................................................................................................................... 1
Advantages of using Amazon S3 .................................................................................................. 2
Amazon S3 concepts .................................................................................................................. 2
Buckets ............................................................................................................................. 2
Objects ............................................................................................................................. 3
Keys ................................................................................................................................. 3
Regions ............................................................................................................................. 3
Amazon S3 data consistency model ...................................................................................... 3
Amazon S3 features ................................................................................................................... 5
Storage classes .................................................................................................................. 6
Bucket policies ................................................................................................................... 6
AWS Identity and Access Management .................................................................................. 7
Access control lists ............................................................................................................. 7
Versioning ......................................................................................................................... 7
Operations ........................................................................................................................ 7
Amazon S3 application programming interfaces (API) ..................................................................... 7
The REST interface ............................................................................................................. 8
The SOAP interface ............................................................................................................ 8
Paying for Amazon S3 ................................................................................................................ 8
Related services ......................................................................................................................... 8
Getting started ................................................................................................................................ 10
Setting up ............................................................................................................................... 10
Sign up for AWS .............................................................................................................. 11
Create an IAM user ........................................................................................................... 11
Sign in as an IAM user ...................................................................................................... 12
Step 1: Create a bucket ............................................................................................................. 12
Step 2: Upload an object .......................................................................................................... 13
Step 3: Download an object ...................................................................................................... 14
Using the S3 console ........................................................................................................ 14
Step 4: Copy an object ............................................................................................................. 15
Step 5: Delete the objects and bucket ........................................................................................ 15
Deleting an object ............................................................................................................ 16
Emptying your bucket ....................................................................................................... 16
Deleting your bucket ........................................................................................................ 16
Where do I go from here? ......................................................................................................... 17
Common use scenarios ...................................................................................................... 17
Considerations ................................................................................................................. 17
Advanced features ............................................................................................................ 18
Access control .................................................................................................................. 19
Development resources ..................................................................................................... 23
Working with buckets ....................................................................................................................... 24
Buckets overview ...................................................................................................................... 24
About permissions ............................................................................................................ 25
Managing public access to buckets ..................................................................................... 25
Bucket configuration ......................................................................................................... 26
Naming rules ........................................................................................................................... 27
Example bucket names ..................................................................................................... 28
Creating a bucket ..................................................................................................................... 28
Viewing bucket properties ......................................................................................................... 33
Accessing a bucket ................................................................................................................... 33
Virtual-hosted–style access ................................................................................................ 34
Path-style access .............................................................................................................. 34
Accessing an S3 bucket over IPv6 ....................................................................................... 34
Traffic between service and on-premises clients and applications .......................................... 265
Traffic between AWS resources in the same Region ............................................................. 266
AWS PrivateLink for Amazon S3 ............................................................................................... 266
Types of VPC endpoints .................................................................................................. 266
Restrictions and limitations of AWS PrivateLink for Amazon S3 ............................................. 267
Accessing Amazon S3 interface endpoints .......................................................................... 267
Accessing buckets and S3 access points from S3 interface endpoints ..................................... 267
Updating an on-premises DNS configuration ...................................................................... 271
Creating a VPC endpoint policy ........................................................................................ 272
Identity and access management .............................................................................................. 274
Overview ....................................................................................................................... 275
Access policy guidelines ................................................................................................... 280
Request authorization ..................................................................................................... 284
Bucket policies and user policies ....................................................................................... 291
AWS managed policies .................................................................................................... 458
Managing access with ACLs .............................................................................................. 460
Using CORS ................................................................................................................... 477
Blocking public access ..................................................................................................... 488
Reviewing bucket access .................................................................................................. 497
Controlling object ownership ........................................................................................... 502
Verifying bucket ownership .............................................................................................. 504
Logging and monitoring .......................................................................................................... 508
Compliance Validation ............................................................................................................. 509
Resilience .............................................................................................................................. 510
Backup encryption .......................................................................................................... 511
Infrastructure security ............................................................................................................. 512
Configuration and vulnerability analysis .................................................................................... 513
Security Best Practices ............................................................................................................ 514
Amazon S3 Preventative Security Best Practices ................................................................. 514
Amazon S3 Monitoring and Auditing Best Practices ............................................................ 516
Managing storage ........................................................................................................................... 519
Using S3 Versioning ................................................................................................................ 519
Unversioned, versioning-enabled, and versioning-suspended buckets ..................................... 519
Using S3 Versioning with S3 Lifecycle ............................................................................... 520
S3 Versioning ................................................................................................................. 520
Enabling versioning on buckets ........................................................................................ 523
Configuring MFA delete ................................................................................................... 528
Working with versioning-enabled objects ........................................................................... 529
Working with versioning-suspended objects ....................................................................... 546
Working with archived objects ......................................................................................... 549
Using Object Lock .................................................................................................................. 559
S3 Object Lock ............................................................................................................... 559
Configuring Object Lock on the console ............................................................................ 563
Managing Object Lock .................................................................................................... 564
Managing storage classes ........................................................................................................ 567
Frequently accessed objects ............................................................................................. 567
Automatically optimizing data with changing or unknown access patterns ............................. 567
Infrequently accessed objects ........................................................................................... 568
Archiving objects ............................................................................................................ 569
Amazon S3 on Outposts .................................................................................................. 569
Comparing storage classes ............................................................................................... 570
Setting the storage class of an object ............................................................................... 570
Amazon S3 Intelligent-Tiering .................................................................................................. 571
How S3 Intelligent-Tiering works ..................................................................................... 571
Using S3 Intelligent-Tiering ............................................................................................ 572
Managing S3 Intelligent-Tiering ...................................................................................... 576
Managing lifecycle .................................................................................................................. 578
What is Amazon S3?
Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of
data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable,
reliable, fast, and inexpensive data storage infrastructure that Amazon uses to run its own global
network of websites. The service aims to maximize benefits of scale and to pass those benefits on to
developers.
This introduction to Amazon Simple Storage Service (Amazon S3) provides a detailed summary of this
web service. After reading this section, you should have a good idea of what it offers and how it can fit in
with your business.
This guide describes how you send requests to create buckets, store and retrieve your objects, and
manage permissions on your resources. The guide also describes access control and the authentication
process. Access control defines who can access objects and buckets within Amazon S3, and the type of
access (for example, READ and WRITE). The authentication process verifies the identity of a user who is
trying to access Amazon Web Services (AWS).
Topics
• How do I...? (p. 1)
• Advantages of using Amazon S3 (p. 2)
• Amazon S3 concepts (p. 2)
• Amazon S3 features (p. 5)
• Amazon S3 application programming interfaces (API) (p. 7)
• Paying for Amazon S3 (p. 8)
• Related services (p. 8)
How do I...?
Information and relevant sections:
• How do I work with access points? See Managing data access with Amazon S3 access points (p. 184).
• How do I manage access to my resources? See Identity and access management in Amazon S3 (p. 274).
Advantages of using Amazon S3
• Creating buckets – Create and name a bucket that stores data. Buckets are the fundamental
containers in Amazon S3 for data storage.
• Storing data – Store an infinite amount of data in a bucket. Upload as many objects as you like into
an Amazon S3 bucket. Each object can contain up to 5 TB of data. Each object is stored and retrieved
using a unique developer-assigned key.
• Downloading data – Download your data or enable others to do so. Download your data anytime you
like, or allow others to do the same.
• Permissions – Grant or deny access to others who want to upload or download data into your
Amazon S3 bucket. Grant upload and download permissions to three types of users. Authentication
mechanisms can help keep data secure from unauthorized access.
• Standard interfaces – Use standards-based REST and SOAP interfaces designed to work with any
internet-development toolkit.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or
the AWS SDKs.
Amazon S3 concepts
This section describes key concepts and terminology you need to understand to use Amazon S3
effectively. They are presented in the order that you will most likely encounter them.
Topics
• Buckets (p. 2)
• Objects (p. 3)
• Keys (p. 3)
• Regions (p. 3)
• Amazon S3 data consistency model (p. 3)
Buckets
A bucket is a container for objects stored in Amazon S3. Every object is contained in a bucket. For
example, if the object named photos/puppy.jpg is stored in the awsexamplebucket1 bucket in the
US West (Oregon) Region, then it is addressable using the URL https://awsexamplebucket1.s3.us-
west-2.amazonaws.com/photos/puppy.jpg.
You can configure buckets so that they are created in a specific AWS Region. For more information, see
Accessing a Bucket (p. 33). You can also configure a bucket so that every time an object is added to it,
Amazon S3 generates a unique version ID and assigns it to the object. For more information, see Using
Versioning (p. 519).
For more information about buckets, see Buckets overview (p. 24).
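If you work with Amazon S3 programmatically, the same addressing applies. The following minimal sketch uses the AWS SDK for Python (Boto3) with the example bucket, key, and Region from above; it assumes the bucket and object already exist and that your AWS credentials are configured.

import boto3

bucket = "awsexamplebucket1"   # example bucket from this section
key = "photos/puppy.jpg"       # example object key
region = "us-west-2"           # US West (Oregon)

# Virtual-hosted-style URL for the object, as described above.
url = f"https://{bucket}.s3.{region}.amazonaws.com/{key}"
print(url)

# Retrieving the same object through the Amazon S3 API.
s3 = boto3.client("s3", region_name=region)
response = s3.get_object(Bucket=bucket, Key=key)
data = response["Body"].read()
print(f"Downloaded {len(data)} bytes")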
Objects
Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata.
The data portion is opaque to Amazon S3. The metadata is a set of name-value pairs that describe
the object. These include some default metadata, such as the date last modified, and standard HTTP
metadata, such as Content-Type. You can also specify custom metadata at the time the object is
stored.
An object is uniquely identified within a bucket by a key (name) and a version ID. For more information,
see Keys (p. 3) and Using Versioning (p. 519).
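As a rough illustration, the following Boto3 sketch stores an object with standard HTTP metadata (Content-Type) and custom metadata, then reads the metadata back without downloading the object data. The bucket name, key, and metadata values are placeholders.

import boto3

s3 = boto3.client("s3")

# Store an object with standard HTTP metadata (Content-Type) and
# custom, user-defined metadata (stored as x-amz-meta-* headers).
s3.put_object(
    Bucket="awsexamplebucket1",
    Key="notes/readme.txt",
    Body=b"hello from Amazon S3",
    ContentType="text/plain",
    Metadata={"department": "marketing", "project": "launch"},
)

# Read the metadata back without retrieving the object data itself.
head = s3.head_object(Bucket="awsexamplebucket1", Key="notes/readme.txt")
print(head["ContentType"], head["Metadata"])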
Keys
A key is the unique identifier for an object within a bucket. Every object in a bucket has exactly
one key. The combination of a bucket, key, and version ID uniquely identify each object. So you
can think of Amazon S3 as a basic data map between "bucket + key + version" and the object
itself. Every object in Amazon S3 can be uniquely addressed through the combination of the web
service endpoint, bucket name, key, and optionally, a version. For example, in the URL https://
doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl, "doc" is the name of the bucket and
"2006-03-01/AmazonS3.wsdl" is the key.
For more information about object keys, see Creating object key names (p. 58).
Regions
You can choose the geographical AWS Region where Amazon S3 will store the buckets that you create.
You might choose a Region to optimize latency, minimize costs, or address regulatory requirements.
Objects stored in a Region never leave the Region unless you explicitly transfer them to another Region.
For example, objects stored in the Europe (Ireland) Region never leave it.
Note
You can only access Amazon S3 and its features in AWS Regions that are enabled for your
account.
For a list of Amazon S3 Regions and endpoints, see Regions and Endpoints in the AWS General Reference.
Amazon S3 data consistency model
Updates to a single key are atomic. For example, if you PUT to an existing key from one thread and
perform a GET on the same key from a second thread concurrently, you will get either the old data or the
new data, but never partial or corrupt data.
Amazon S3 achieves high availability by replicating data across multiple servers within AWS data centers.
If a PUT request is successful, your data is safely stored. Any read (GET or LIST) that is initiated following
the receipt of a successful PUT response will return the data written by the PUT. Here are examples of
this behavior:
• A process writes a new object to Amazon S3 and immediately lists keys within its bucket. The new
object will appear in the list.
• A process replaces an existing object and immediately tries to read it. Amazon S3 will return the new
data.
• A process deletes an existing object and immediately tries to read it. Amazon S3 will not return any
data as the object has been deleted.
• A process deletes an existing object and immediately lists keys within its bucket. The object will not
appear in the listing.
Note
• Amazon S3 does not support object locking for concurrent writers. If two PUT requests are
simultaneously made to the same key, the request with the latest timestamp wins. If this is an
issue, you will need to build an object-locking mechanism into your application.
• Updates are key-based. There is no way to make atomic updates across keys. For example,
you cannot make the update of one key dependent on the update of another key unless you
design this functionality into your application.
• If you delete a bucket and immediately list all buckets, the deleted bucket might still appear in the list.
• If you enable versioning on a bucket for the first time, it might take a short amount of time for the
change to be fully propagated. We recommend that you wait for 15 minutes after enabling versioning
before issuing write operations (PUT or DELETE) on objects in the bucket.
Concurrent applications
This section provides examples of behavior to be expected from Amazon S3 when multiple clients are
writing to the same items.
In the first example, both W1 (write 1) and W2 (write 2) complete before the start of R1 (read 1) and R2
(read 2). Because Amazon S3 is strongly consistent, R1 and R2 both return color = ruby.
In the next example, W2 does not complete before the start of R1. Therefore, R1 might return color =
ruby or color = garnet. However, since W1 and W2 finish before the start of R2, R2 returns color =
garnet.
In the last example, W2 begins before W1 has received an acknowledgement. Therefore, these writes are
considered concurrent. Amazon S3 internally uses last-writer-wins semantics to determine which write
takes precedence. However, the order in which Amazon S3 receives the requests and the order in which
applications receive acknowledgements cannot be predicted due to factors such as network latency.
For example, W2 might be initiated by an Amazon EC2 instance in the same region while W1 might be
initiated by a host that is further away. The best way to determine the final value is to perform a read
after both writes have been acknowledged.
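The following short Boto3 sketch mirrors the first example above: two writes to the same key complete before the read, so the read returns the value written by the last acknowledged PUT. The bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")
bucket = "awsexamplebucket1"  # placeholder bucket name

# W1 and W2 both complete (the calls return) before the read starts.
s3.put_object(Bucket=bucket, Key="color", Body=b"garnet")  # W1
s3.put_object(Bucket=bucket, Key="color", Body=b"ruby")    # W2

# Because Amazon S3 is strongly consistent, the read returns the data
# written by the last successfully acknowledged PUT.
value = s3.get_object(Bucket=bucket, Key="color")["Body"].read()
print(value.decode())  # "ruby"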
Amazon S3 features
This section describes important Amazon S3 features.
Topics
• Storage classes (p. 6)
• Bucket policies (p. 6)
• AWS Identity and Access Management (p. 7)
• Access control lists (p. 7)
• Versioning (p. 7)
• Operations (p. 7)
Storage classes
Amazon S3 offers a range of storage classes designed for different use cases. These include Amazon S3
STANDARD for general-purpose storage of frequently accessed data, Amazon S3 STANDARD_IA for long-
lived, but less frequently accessed data, and S3 Glacier for long-term archive.
For more information, see Using Amazon S3 storage classes (p. 567).
Bucket policies
Bucket policies provide centralized access control to buckets and objects based on a variety of conditions,
including Amazon S3 operations, requesters, resources, and aspects of the request (for example, IP
address). The policies are expressed in the access policy language and enable centralized management of
permissions. The permissions attached to a bucket apply to all of the bucket's objects that are owned by
the bucket owner account.
Both individuals and companies can use bucket policies. When companies register with Amazon S3,
they create an account. Thereafter, the company becomes synonymous with the account. Accounts are
financially responsible for the AWS resources that they (and their employees) create. Accounts have
the power to grant bucket policy permissions and assign employees permissions based on a variety of
conditions. For example, an account could create a policy that gives a user write access:
• To a particular S3 bucket
• From an account's corporate network
• During business hours
An account can grant one user limited read and write access, but allow another to create and delete
buckets also. An account could allow several field offices to store their daily reports in a single bucket. It
could allow each office to write only to a certain set of names (for example, "Nevada/*" or "Utah/*") and
only from the office's IP address range.
Unlike access control lists (described later), which can add (grant) permissions only on individual objects,
policies can either add or deny permissions across all (or a subset) of objects within a bucket. With one
request, an account can set the permissions of any number of objects in a bucket. An account can use
wildcards (similar to regular expression operators) on Amazon Resource Names (ARNs) and other values.
The account could then control access to groups of objects that begin with a common prefix or end with
a given extension, such as .html.
Only the bucket owner is allowed to associate a policy with a bucket. Policies (written in the access policy
language) allow or deny requests based on the following:
• Amazon S3 bucket operations (such as PUT ?acl), and object operations (such as PUT Object, or
GET Object)
• Requester
• Conditions specified in the policy
An account can control access based on specific Amazon S3 operations, such as GetObject,
GetObjectVersion, DeleteObject, or DeleteBucket.
The conditions can be such things as IP addresses, IP address ranges in CIDR notation, dates, user agents,
HTTP referrer, and transports (HTTP and HTTPS).
For more information, see Bucket policies and user policies (p. 291).
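As an illustration of these elements, the following Boto3 sketch attaches a bucket policy that lets one user upload objects under the "Nevada/" prefix, but only from a corporate network range. The account ID, user name, bucket name, and CIDR range are placeholders.

import json
import boto3

s3 = boto3.client("s3")

# Placeholder account ID, user name, bucket name, and corporate CIDR range.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUploadsFromCorporateNetwork",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/Dave"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::awsexamplebucket1/Nevada/*",
            "Condition": {"IpAddress": {"aws:SourceIp": "192.0.2.0/24"}},
        }
    ],
}

# Only the bucket owner can associate a policy with the bucket.
s3.put_bucket_policy(Bucket="awsexamplebucket1", Policy=json.dumps(policy))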
AWS Identity and Access Management
You can use AWS Identity and Access Management (IAM) to manage access to your Amazon S3 resources.
For example, you can use IAM with Amazon S3 to control the type of access a user or group of users has
to specific parts of an Amazon S3 bucket that your AWS account owns.
Versioning
You can use versioning to keep multiple versions of an object in the same bucket. For more information,
see Using versioning in S3 buckets (p. 519).
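A minimal Boto3 sketch for enabling S3 Versioning on a bucket follows; the bucket and key names are placeholders.

import boto3

s3 = boto3.client("s3")

# Turn on S3 Versioning for the bucket. Once enabled, every new write to
# an existing key creates a new object version instead of overwriting it.
s3.put_bucket_versioning(
    Bucket="awsexamplebucket1",
    VersioningConfiguration={"Status": "Enabled"},
)

# Each stored version is identified by its version ID.
response = s3.put_object(Bucket="awsexamplebucket1", Key="notes.txt", Body=b"v1")
print(response.get("VersionId"))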
Operations
Following are the most common operations that you'll run through the API.
Common operations
• Create a bucket – Create and name your own bucket in which to store your objects.
• Write an object – Store data by creating or overwriting an object. When you write an object, you
specify a unique key in the namespace of your bucket. This is also a good time to specify any access
control you want on the object.
• Read an object – Read data back. You can download the data via HTTP.
• Delete an object – Delete some of your data.
• List keys – List the keys contained in one of your buckets. You can filter the key list based on a prefix.
These operations and all other functionality are described in detail throughout this guide.
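As one example, the following Boto3 sketch performs the last operation in the list above: it lists the keys in a bucket, filtered by a prefix. The bucket name and prefix are placeholders.

import boto3

s3 = boto3.client("s3")

# List the keys in a bucket, filtered by a prefix. A paginator handles
# buckets that contain more than 1,000 matching keys.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="awsexamplebucket1", Prefix="photos/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])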
Amazon S3 application programming interfaces (API)
Amazon S3 provides a REST and a SOAP interface. They are similar, but there are some differences. For
example, in the REST interface, metadata is returned in HTTP headers. Because we only support HTTP
requests of up to 4 KB (not including the body), the amount of metadata you can supply is restricted.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.
You can use any toolkit that supports HTTP to use the REST API. You can even use a browser to fetch
objects, as long as they are anonymously readable.
The REST API uses the standard HTTP headers and status codes, so that standard browsers and toolkits
work as expected. In some areas, we have added functionality to HTTP (for example, we added headers
to support access control). In these cases, we have done our best to add the new functionality in a way
that matched the style of standard HTTP usage.
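As a small illustration of using a standard HTTP toolkit against the REST API, the following sketch fetches an object with Python's built-in urllib. It reuses the example bucket and key from earlier and assumes the object is anonymously readable.

import urllib.request

# Fetch an anonymously readable object with nothing but a standard HTTP
# client. Object metadata is returned in standard HTTP headers.
url = "https://awsexamplebucket1.s3.us-west-2.amazonaws.com/photos/puppy.jpg"

with urllib.request.urlopen(url) as response:
    body = response.read()
    print(response.status, response.headers.get("Content-Type"), len(body))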
The SOAP API provides a SOAP 1.1 interface using document literal encoding. The most common way to
use SOAP is to download the WSDL (see https://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl),
use a SOAP toolkit such as Apache Axis or Microsoft .NET to create bindings, and then write code that
uses the bindings to call Amazon S3.
Paying for Amazon S3
Amazon S3 charges you only for what you actually use, with no hidden fees and no overage charges.
This gives developers a variable-cost service that can grow with their business while enjoying the cost
advantages of the AWS infrastructure.
Before storing anything in Amazon S3, you must register with the service and provide a payment method
that is charged at the end of each month. There are no setup fees to begin using the service. At the end
of the month, your payment method is automatically charged for that month's usage.
For information about paying for Amazon S3 storage, see Amazon S3 Pricing.
Related services
After you load your data into Amazon S3, you can use it with other AWS services. The following are the
services you might use most frequently:
• Amazon Elastic Compute Cloud (Amazon EC2) – This service provides virtual compute resources in
the cloud. For more information, see the Amazon EC2 product details page.
• Amazon EMR – This service enables businesses, researchers, data analysts, and developers to easily
and cost-effectively process vast amounts of data. It uses a hosted Hadoop framework running on the
web-scale infrastructure of Amazon EC2 and Amazon S3. For more information, see the Amazon EMR
product details page.
• AWS Snowball – This service accelerates transferring large amounts of data into and out of AWS using
physical storage devices, bypassing the internet. Each AWS Snowball device type can transport data
at faster-than-internet speeds. This transport is done by shipping the data in the devices through a
regional carrier. For more information, see the AWS Snowball product details page.
Getting started with Amazon S3
To store an object in Amazon S3, you create a bucket and then upload the object to the bucket. When
the object is in the bucket, you can open it, download it, and move it. When you no longer need an object
or a bucket, you can clean up your resources.
With Amazon S3, you pay only for what you use. For more information about Amazon S3 features and
pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started with Amazon S3 for
free. For more information, see AWS Free Tier.
Prerequisites
Before you begin, confirm that you've completed the steps in Prerequisite: Setting up Amazon
S3 (p. 10).
Topics
• Prerequisite: Setting up Amazon S3 (p. 10)
• Step 1: Create your first S3 bucket (p. 12)
• Step 2: Upload an object to your bucket (p. 13)
• Step 3: Download an object (p. 14)
• Step 4: Copy your object to a folder (p. 15)
• Step 5: Delete your objects and bucket (p. 15)
• Where do I go from here? (p. 17)
Prerequisite: Setting up Amazon S3
With Amazon S3, you pay only for what you use. For more information about Amazon S3 features and
pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started with Amazon S3 for
free. For more information, see AWS Free Tier.
When you sign up for AWS and set up Amazon S3, you can optionally change the display language in the
AWS Management Console. For more information, see Changing the language of the AWS Management
Console in the AWS Management Console Getting Started Guide.
Topics
• Sign up for AWS (p. 11)
• Create an IAM user (p. 11)
• Sign in as an IAM user (p. 12)
Sign up for AWS
1. Open https://portal.aws.amazon.com/billing/signup.
2. Follow the online instructions.
Part of the sign-up procedure involves receiving a phone call and entering a verification code on the
phone keypad.
AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view
your current account activity and manage your account by going to https://aws.amazon.com/ and
choosing My Account.
Create an IAM user
If you signed up for AWS but have not created an IAM user for yourself, follow these steps.
To create an administrator user for yourself and add the user to an administrators group
(console)
1. Sign in to the IAM console as the account owner by choosing Root user and entering your AWS
account email address. On the next page, enter your password.
Note
We strongly recommend that you adhere to the best practice of using the Administrator
IAM user that follows and securely lock away the root user credentials. Sign in as the root
user only to perform a few account and service management tasks.
2. In the navigation pane, choose Users and then choose Add user.
3. For User name, enter Administrator.
4. Select the check box next to AWS Management Console access. Then select Custom password, and
then enter your new password in the text box.
5. (Optional) By default, AWS requires the new user to create a new password when first signing in. You
can clear the check box next to User must create a new password at next sign-in to allow the new
user to reset their password after they sign in.
6. Choose Next: Permissions.
7. Under Set permissions, choose Add user to group.
8. Choose Create group.
9. In the Create group dialog box, for Group name enter Administrators.
10. Choose Filter policies, and then select AWS managed - job function to filter the table contents.
11. In the policy list, select the check box for AdministratorAccess. Then choose Create group.
Note
You must activate IAM user and role access to Billing before you can use the
AdministratorAccess permissions to access the AWS Billing and Cost Management
console. To do this, follow the instructions in step 1 of the tutorial about delegating access
to the billing console.
12. Back in the list of groups, select the check box for your new group. Choose Refresh if necessary to
see the group in the list.
13. Choose Next: Tags.
14. (Optional) Add metadata to the user by attaching tags as key-value pairs. For more information
about using tags in IAM, see Tagging IAM entities in the IAM User Guide.
15. Choose Next: Review to see the list of group memberships to be added to the new user. When you
are ready to proceed, choose Create user.
You can use this same process to create more groups and users and to give your users access to your AWS
account resources. To learn about using policies that restrict user permissions to specific AWS resources,
see Access management and Example policies.
Sign in as an IAM user
Before you sign in as an IAM user, you can verify the sign-in link for IAM users in the IAM console. On the
IAM Dashboard, under IAM users sign-in link, you can see the sign-in link for your AWS account. The URL
for your sign-in link contains your AWS account ID without dashes (‐).
If you don't want the URL for your sign-in link to contain your AWS account ID, you can create an account
alias. For more information, see Creating, deleting, and listing an AWS account alias in the IAM User
Guide.
Your sign-in link includes your AWS account ID (without dashes) or your AWS account alias:
https://aws_account_id_or_alias.signin.aws.amazon.com/console
3. Enter the IAM user name and password that you just created.
When you're signed in, the navigation bar displays "your_user_name @ your_aws_account_id".
Step 1: Create your first S3 bucket
Charges that you incur from following the examples in this guide are minimal (less than $1). For more
information about storage charges, see Amazon S3 pricing.
To create a bucket using the AWS Command Line Interface, see create-bucket in the AWS CLI Command
Reference.
To create a bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Choose Create bucket.
3. In Bucket name, enter a name for your bucket.
After you create the bucket, you can't change its name. For information about naming buckets, see
Bucket naming rules (p. 27).
Important
Avoid including sensitive information, such as account numbers, in the bucket name. The
bucket name is visible in the URLs that point to the objects in the bucket.
4. In Region, choose the AWS Region where you want the bucket to reside.
Choose a Region that is close to you geographically to minimize latency and costs and to address
regulatory requirements. Objects stored in a Region never leave that Region unless you explicitly
transfer them to another Region. For a list of Amazon S3 AWS Regions, see AWS Service Endpoints
in the Amazon Web Services General Reference.
5. Keep the remaining settings set to the defaults. For more information on additional bucket settings,
see Creating a bucket (p. 28).
6. Choose Create bucket.
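If you prefer to script this step, the following Boto3 sketch creates a bucket in a specific Region and then verifies that it exists. The bucket name is a placeholder and must be globally unique.

import boto3

region = "us-west-2"
s3 = boto3.client("s3", region_name=region)

# Create the bucket in a specific Region. Buckets outside us-east-1
# require a LocationConstraint.
s3.create_bucket(
    Bucket="awsexamplebucket1",
    CreateBucketConfiguration={"LocationConstraint": region},
)

# Confirm that the bucket exists and that you have permission to access it.
s3.head_bucket(Bucket="awsexamplebucket1")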
Next step
To add an object to your bucket, see Step 2: Upload an object to your bucket (p. 13).
Step 3: Download an object
Using the S3 console
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to download an object from.
3. You can download an object from an S3 bucket in any of the following ways:
• On the Overview page, select the object and from the Actions menu choose Download or
Download as if you want to download the object to a specific folder.
• Choose the object that you want to download and then from the Object actions menu choose
Download or Download as if you want to download the object to a specific folder.
• If you want to download a specific version of the object, choose the name of the object that you
want to download. Choose the Versions tab and then from the Actions menu choose Download
or Download as if you want to download the object to a specific folder.
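Outside the console, a download can be scripted. The following Boto3 sketch downloads the current version of an object to a local file and, on a versioning-enabled bucket, a specific version; the names and the version ID are placeholders.

import boto3

s3 = boto3.client("s3")
bucket = "awsexamplebucket1"   # placeholder names
key = "photos/puppy.jpg"

# Download the current version of the object to a local file.
s3.download_file(bucket, key, "puppy.jpg")

# Download a specific version by passing its version ID
# (requires a versioning-enabled bucket; the ID shown is a placeholder).
response = s3.get_object(Bucket=bucket, Key=key, VersionId="EXAMPLEVERSIONID")
with open("puppy-old.jpg", "wb") as f:
    f.write(response["Body"].read())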
Next step
To copy and paste your object within Amazon S3, see Step 4: Copy your object to a folder (p. 15).
Step 4: Copy your object to a folder
To navigate into a folder and choose a subfolder as your destination, choose the folder name.
c. Choose Choose destination.
The path to your destination folder appears in the Destination box. In Destination, you can
alternatively enter your destination path, for example, s3://bucket-name/folder-name/.
7. In the bottom right, choose Copy.
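The same copy can be performed programmatically. The following Boto3 sketch copies an object into a folder (a shared key name prefix) in the same bucket; the names are placeholders.

import boto3

s3 = boto3.client("s3")

# Copy an object into a "folder" (a shared key name prefix) in the
# same bucket.
s3.copy_object(
    Bucket="awsexamplebucket1",
    Key="favorite-pics/puppy.jpg",
    CopySource={"Bucket": "awsexamplebucket1", "Key": "photos/puppy.jpg"},
)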
Next step
To delete an object and a bucket in Amazon S3, see Step 5: Delete your objects and bucket (p. 15).
Step 5: Delete your objects and bucket
Before you delete your bucket, empty the bucket or delete the objects in the bucket. After you delete
your objects and bucket, they are no longer available.
If you want to continue to use the same bucket name, we recommend that you delete the objects or
empty the bucket, but don't delete the bucket. After you delete a bucket, the name becomes available
to reuse. However, another AWS account might create a bucket with the same name before you have a
chance to reuse it.
Topics
• Deleting an object (p. 16)
• Emptying your bucket (p. 16)
• Deleting your bucket (p. 16)
Deleting an object
If you want to choose which objects you delete without emptying all the objects from your bucket, you
can delete an object.
1. In the Buckets list, choose the name of the bucket that you want to delete an object from.
2. Select the check box to the left of the names of the objects that you want to delete.
3. Choose Actions and choose Delete from the list of options that appears.
Emptying your bucket
To empty a bucket
1. In the Buckets list, select the bucket that you want to empty, and then choose Empty.
2. To confirm that you want to empty the bucket and delete all the objects in it, in Empty bucket, type
permanently delete.
Important
Emptying the bucket cannot be undone. Objects added to the bucket while the empty
bucket action is in progress will be deleted.
3. To empty the bucket and delete all the objects in it, choose Empty.
An Empty bucket: Status page opens that you can use to review a summary of failed and successful
object deletions.
4. To return to your bucket list, choose Exit.
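For scripted cleanup, the following Boto3 sketch deletes a single object, empties the bucket (including object versions), and then deletes the bucket; the bucket name is a placeholder.

import boto3

bucket_name = "awsexamplebucket1"  # placeholder name
s3 = boto3.client("s3")

# Delete a single object.
s3.delete_object(Bucket=bucket_name, Key="photos/puppy.jpg")

# Empty the bucket: delete every remaining object and, on a
# versioning-enabled bucket, every object version.
bucket = boto3.resource("s3").Bucket(bucket_name)
bucket.objects.all().delete()
bucket.object_versions.all().delete()

# A bucket can be deleted only after it is empty.
s3.delete_bucket(Bucket=bucket_name)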
Where do I go from here?
The following topics explain various ways in which you can gain a deeper understanding of Amazon S3
so that you can implement it in your applications.
Topics
• Common use scenarios (p. 17)
• Considerations going forward (p. 17)
• Advanced Amazon S3 features (p. 18)
• Access control best practices (p. 19)
• Development resources (p. 23)
Common use scenarios
• Backup and storage – Provide data backup and storage services for others.
• Application hosting – Provide services that deploy, install, and manage web applications.
• Media hosting – Build a redundant, scalable, and highly available infrastructure that hosts video,
photo, or music uploads and downloads.
• Software delivery – Host your software applications that customers can download.
Considerations going forward
Topics
• AWS account and security credentials (p. 17)
• Security (p. 18)
• AWS integration (p. 18)
• Pricing (p. 18)
AWS account and security credentials
If you're an account owner or administrator and want to know more about IAM, see the product
description at https://aws.amazon.com/iam or the technical documentation in the IAM User Guide.
Security
Amazon S3 provides authentication mechanisms to secure data stored in Amazon S3 against
unauthorized access. Unless you specify otherwise, only the AWS account owner can access data
uploaded to Amazon S3. For more information about how to manage access to buckets and objects, go
to Identity and access management in Amazon S3 (p. 274).
You can also encrypt your data before uploading it to Amazon S3.
AWS integration
You can use Amazon S3 alone or in concert with one or more other Amazon products. The following are
the most common products used with Amazon S3:
• Amazon EC2
• Amazon EMR
• Amazon SQS
• Amazon CloudFront
Pricing
Learn the pricing structure for storing and transferring data on Amazon S3. For more information, see
Amazon S3 pricing.
Advanced Amazon S3 features
• Using Requester Pays buckets for storage transfers and usage (p. 52) – Learn how to configure a
bucket so that a customer pays for the downloads they make.
Access control best practices
Topics
• Creating a new bucket (p. 19)
• Storing and sharing data (p. 20)
• Sharing resources (p. 21)
• Protecting data (p. 21)
S3 Block Public Access provides four settings to help you avoid inadvertently exposing your S3 resources.
You can apply these settings in any combination to individual access points, buckets, or entire AWS
accounts. If you apply a setting to an account, it applies to all buckets and access points that are owned
by that account. By default, the Block all public access setting is applied to new buckets created in the
Amazon S3 console.
If the S3 Block Public Access settings are too restrictive, you can use AWS Identity and Access
Management (IAM) identities to grant access to specific users rather than disabling all Block Public Access
settings. Using Block Public Access with IAM identities helps ensure that any operation that is blocked by
a Block Public Access setting is rejected unless the requesting user has been given specific permission.
For more information, see Block public access settings (p. 489).
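As a sketch of applying these settings programmatically with Boto3 (the bucket name is a placeholder; account-level settings use the separate S3 Control API):

import boto3

s3 = boto3.client("s3")

# Turn on all four Block Public Access settings for a bucket.
s3.put_public_access_block(
    Bucket="awsexamplebucket1",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)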
When setting up accounts for new team members who require S3 access, use IAM users and roles to
ensure least privileges. You can also implement a form of IAM multi-factor authentication (MFA) to
support a strong identity foundation. Using IAM identities, you can grant unique permissions to users
and specify what resources they can access and what actions they can take. IAM identities provide
increased capabilities, including the ability to require users to enter login credentials before accessing
shared resources and apply permission hierarchies to different objects within a single bucket.
For more information, see Example 1: Bucket owner granting its users bucket permissions (p. 434).
Bucket policies
With bucket policies, you can personalize bucket access to help ensure that only those users you have
approved can access resources and perform actions within them. In addition to bucket policies, you
should use bucket-level Block Public Access settings to further limit public access to your data.
When creating policies, avoid the use of wildcards in the Principal element because it effectively
allows anyone to access your Amazon S3 resources. It's better to explicitly list users or groups that are
allowed to access the bucket. Rather than including a wildcard for their actions, grant them specific
permissions when applicable.
To further maintain the practice of least privileges, Deny statements in the Effect element should be
as broad as possible and Allow statements should be as narrow as possible. Deny effects paired with the
"s3:*" action are another good way to implement opt-in best practices for the users included in policy
condition statements.
For more information about specifying conditions for when a policy is in effect, see Amazon S3 condition
key examples (p. 300).
When adding users in a corporate setting, you can use a virtual private cloud (VPC) endpoint to allow any
users in your virtual network to access your Amazon S3 resources. VPC endpoints enable developers to
provide specific access and permissions to groups of users based on the network the user is connected to.
Rather than adding each user to an IAM role or group, you can use VPC endpoints to deny bucket access
if the request doesn’t originate from the specified endpoint.
For more information, see Controlling access from VPC endpoints with bucket policies (p. 398).
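The following Boto3 sketch shows a bucket policy of this kind: it denies all S3 actions unless the request arrives through a specific VPC endpoint. The bucket name and endpoint ID are placeholders. Use Deny statements like this carefully, because they also block requests, including your own console and administrative access, that don't come through the endpoint.

import json
import boto3

s3 = boto3.client("s3")

# Deny all S3 actions on the bucket unless the request arrives through
# a specific VPC endpoint (placeholder bucket name and endpoint ID).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessUnlessFromVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::awsexamplebucket1",
                "arn:aws:s3:::awsexamplebucket1/*",
            ],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-1a2b3c4d"}},
        }
    ],
}

s3.put_bucket_policy(Bucket="awsexamplebucket1", Policy=json.dumps(policy))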
If you use the Amazon S3 console to manage buckets and objects, you should implement S3 Versioning
and S3 Object Lock. These features help prevent accidental changes to critical data and enable you to
roll back unintended actions. This capability is particularly useful when there are multiple users with full
write and execute permissions accessing the Amazon S3 console.
For information about S3 Versioning, see Using versioning in S3 buckets (p. 519). For information about
Object Lock, see Using S3 Object Lock (p. 559).
To manage your objects so that they are stored cost effectively throughout their lifecycle, you can pair
lifecycle policies with object versioning. Lifecycle policies define actions that you want S3 to take during
an object's lifetime. For example, you can create a lifecycle policy that will transition objects to another
storage class, archive them, or delete them after a specified period of time. You can define a lifecycle
policy for all objects or a subset of objects in the bucket by using a shared prefix or tag.
For more information, see Managing your storage lifecycle (p. 578).
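As a rough sketch, the following Boto3 example applies a lifecycle rule to a prefix that transitions objects to colder storage classes and eventually expires them; the bucket name, prefix, and time periods are placeholders.

import boto3

s3 = boto3.client("s3")

# Lifecycle rule for objects under a shared prefix: transition to
# S3 Standard-IA after 30 days, to S3 Glacier after 90 days, and
# expire after 365 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="awsexamplebucket1",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-daily-reports",
                "Filter": {"Prefix": "reports/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)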
When creating buckets that are accessed by different office locations, you should consider implementing
S3 Cross-Region Replication. Cross-Region Replication helps ensure that all users have access to the
resources they need and increases operational efficiency. Cross-Region Replication offers increased
availability by copying objects across S3 buckets in different AWS Regions. However, the use of this tool
increases storage costs.
When configuring a bucket to be used as a publicly accessed static website, you need to disable all Block
Public Access settings. It is important to provide only s3:GetObject permissions, and not s3:ListBucket or
s3:PutObject permissions, when writing the bucket policy for your static website. This helps ensure that
users cannot view all the objects in your bucket or add their own content.
For more information, see Setting permissions for website access (p. 954).
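The following Boto3 sketch applies a policy of that shape: anonymous visitors can read objects but cannot list the bucket or upload content. The bucket name is a placeholder, and the bucket's Block Public Access settings must permit public policies for the call to succeed.

import json
import boto3

s3 = boto3.client("s3")

# Bucket policy for a public static website: anonymous visitors may read
# objects (s3:GetObject) but cannot list the bucket or upload content.
website_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::awsexamplebucket1/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="awsexamplebucket1", Policy=json.dumps(website_policy))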
Amazon CloudFront provides the capabilities required to set up a secure static website. Amazon S3
static websites only support HTTP endpoints. CloudFront uses the durable storage of Amazon S3 while
providing additional security headers like HTTPS. HTTPS adds security by encrypting a normal HTTP
request and protecting against common cyber attacks.
For more information, see Getting started with a secure static website in the Amazon CloudFront
Developer Guide.
Sharing resources
There are several different ways that you can share resources with a specific group of users. You can
use the following tools to share a set of documents or other resources to a single group of users,
department, or an office. Although they can all be used to accomplish the same goal, some tools might
pair better than others with your existing settings.
User policies
You can share resources with a limited group of people using IAM groups and user policies. When
creating a new IAM user, you are prompted to create and add them to a group. However, you can create
and add users to groups at any point. If the individuals you intend to share these resources with are
already set up within IAM, you can add them to a common group and share the bucket with their group
within the user policy. You can also use IAM user policies to share individual objects within a bucket.
For more information, see Allowing an IAM user access to one of your buckets (p. 425).
As a general rule, we recommend that you use S3 bucket policies or IAM policies for access control.
Amazon S3 access control lists (ACLs) are a legacy access control mechanism that predates IAM. If
you already use S3 ACLs and you find them sufficient, there is no need to change. However, certain
access control scenarios require the use of ACLs. For example, when a bucket owner wants to grant
permission to objects, but not all objects are owned by the bucket owner, the object owner must first
grant permission to the bucket owner. This is done using an object ACL.
For more information, see Example 3: Bucket owner granting permissions to objects it does not
own (p. 443).
Prefixes
When trying to share specific resources from a bucket, you can replicate folder-level permissions using
prefixes. The Amazon S3 console supports the folder concept as a means of grouping objects by using a
shared name prefix for objects. You can then specify a prefix within the conditions of an IAM user's policy
to grant them explicit permission to access the resources associated with that prefix.
For more information, see Organizing objects in the Amazon S3 console using folders (p. 147).
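As an illustration, the following Boto3 sketch attaches an inline IAM user policy that limits a user to the "Nevada/" prefix of a bucket; the user name, policy name, and bucket name are placeholders.

import json
import boto3

iam = boto3.client("iam")

# Give an IAM user access only to the "Nevada/" folder (prefix) of a bucket.
prefix_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::awsexamplebucket1",
            "Condition": {"StringLike": {"s3:prefix": ["Nevada/*"]}},
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::awsexamplebucket1/Nevada/*",
        },
    ],
}

iam.put_user_policy(
    UserName="nevada-office-user",
    PolicyName="NevadaPrefixAccess",
    PolicyDocument=json.dumps(prefix_policy),
)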
Tagging
If you use object tagging to categorize storage, you can share objects that have been tagged with a
specific value with specified users. Resource tagging allows you to control access to objects based on the
tags associated with the resource that a user is trying to access. To do this, use the ResourceTag/key-
name condition within an IAM user policy to allow access to the tagged resources.
For more information, see Controlling access to AWS resources using resource tags in the IAM User Guide.
Protecting data
Use the following tools to help protect data in transit and at rest, both of which are crucial in
maintaining the integrity and accessibility of your data.
Object encryption
Amazon S3 offers several object encryption options that protect data in transit and at rest. Server-side
encryption encrypts your object before saving it on disks in its data centers and then decrypts it when
you download the objects. As long as you authenticate your request and you have access permissions,
there is no difference in the way you access encrypted or unencrypted objects. When setting up server-
side encryption, you have three mutually exclusive options:
• Server-side encryption with Amazon S3 managed keys (SSE-S3)
• Server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS)
• Server-side encryption with customer-provided keys (SSE-C)
For more information, see Protecting data using server-side encryption (p. 219).
Client-side encryption is the act of encrypting data before sending it to Amazon S3. For more
information, see Protecting data using client-side encryption (p. 261).
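As a sketch of requesting server-side encryption at upload time with Boto3 (the bucket, keys, and KMS key ID are placeholders):

import boto3

s3 = boto3.client("s3")

# Upload an object with SSE-S3 (Amazon S3 managed keys). Amazon S3
# encrypts the object before writing it to disk and decrypts it on download.
s3.put_object(
    Bucket="awsexamplebucket1",
    Key="reports/2021-01.csv",
    Body=b"example,data\n",
    ServerSideEncryption="AES256",
)

# Or use SSE-KMS by referencing a KMS key (placeholder key ID).
s3.put_object(
    Bucket="awsexamplebucket1",
    Key="reports/2021-02.csv",
    Body=b"example,data\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
)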
Signing methods
Signature Version 4 is the process of adding authentication information to AWS requests sent by HTTP.
For security, most requests to AWS must be signed with an access key, which consists of an access key ID
and secret access key. These two keys are commonly referred to as your security credentials.
For more information, see Authenticating Requests (AWS Signature Version 4) and Signature Version 4
signing process.
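The SDKs handle Signature Version 4 signing for you. The following Boto3 sketch forces SigV4 explicitly and generates a presigned URL, which embeds the signature so that the holder can download the object for a limited time; the bucket and key names are placeholders.

import boto3
from botocore.client import Config

# Boto3 signs every request with your access key ID and secret access key.
s3 = boto3.client("s3", config=Config(signature_version="s3v4"))

# A presigned URL carries the Signature Version 4 signature in the URL
# itself, so anyone holding the URL can fetch the object until it expires.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "awsexamplebucket1", "Key": "photos/puppy.jpg"},
    ExpiresIn=3600,  # valid for one hour
)
print(url)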
Monitoring is an important part of maintaining the reliability, availability, and performance of your
Amazon S3 solutions so that you can more easily debug a multi-point failure if one occurs. Logging can
provide insight into any errors users are receiving, and when and what requests are made. AWS provides
several tools for monitoring your Amazon S3 resources:
• Amazon CloudWatch
• AWS CloudTrail
• Amazon S3 Access Logs
• AWS Trusted Advisor
For more information, see Logging and monitoring in Amazon S3 (p. 508).
Amazon S3 is integrated with AWS CloudTrail, a service that provides a record of actions taken by a
user, a role, or an AWS service in Amazon S3. This feature can be paired with Amazon GuardDuty, which
monitors threats against your Amazon S3 resources by analyzing CloudTrail management events and
CloudTrail S3 data events. These data sources monitor different kinds of activity. For example, S3-related
CloudTrail management events include operations that list or configure S3 buckets. GuardDuty analyzes
S3 data events from all of your S3 buckets and monitors them for malicious and suspicious activity.
For more information, see Amazon S3 protection in Amazon GuardDuty in the Amazon GuardDuty User
Guide.
Development resources
To help you build applications using the language of your choice, we provide the following resources:
• Sample Code and Libraries – The AWS Developer Center has sample code and libraries written
especially for Amazon S3.
You can use these code samples as a means of understanding how to implement the Amazon S3 API.
For more information, see the AWS Developer Center.
• Tutorials – Our Resource Center offers more Amazon S3 tutorials.
These tutorials provide a hands-on approach for learning Amazon S3 functionality. For more
information, see Articles & Tutorials.
• Customer Forum – We recommend that you review the Amazon S3 forum to get an idea of what other
users are doing and to benefit from the questions they ask.
The forum can help you understand what you can and can't do with Amazon S3. The forum also serves
as a place for you to ask questions that other users or AWS representatives might answer. You can use
the forum to report issues with the service or the API. For more information, see Discussion Forums.
Working with buckets
The topics in this section provide an overview of working with buckets in Amazon S3. They include
information about naming, creating, accessing, and deleting buckets. For more information about
viewing or listing objects in a bucket, see Organizing, listing, and working with your objects (p. 141).
Topics
• Buckets overview (p. 24)
• Bucket naming rules (p. 27)
• Creating a bucket (p. 28)
• Viewing the properties for an S3 bucket (p. 33)
• Accessing a bucket (p. 33)
• Emptying a bucket (p. 35)
• Deleting a bucket (p. 37)
• Setting default server-side encryption behavior for Amazon S3 buckets (p. 40)
• Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration (p. 44)
• Using Requester Pays buckets for storage transfers and usage (p. 52)
• Bucket restrictions and limitations (p. 55)
Buckets overview
To upload your data (photos, videos, documents, etc.) to Amazon S3, you must first create an S3 bucket
in one of the AWS Regions. You can then upload any number of objects to the bucket.
In terms of implementation, buckets and objects are AWS resources, and Amazon S3 provides APIs for
you to manage them. For example, you can create a bucket and upload objects using the Amazon S3 API.
You can also use the Amazon S3 console to perform these operations. The console uses the Amazon S3
APIs to send requests to Amazon S3.
This section describes how to work with buckets. For information about working with objects, see
Amazon S3 objects overview (p. 57).
An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts.
This means that after a bucket is created, the name of that bucket cannot be used by another AWS
account in any AWS Region until the bucket is deleted. You should not depend on specific bucket naming
conventions for availability or security verification purposes. For bucket naming guidelines, see Bucket
naming rules (p. 27).
Amazon S3 creates buckets in a Region that you specify. To optimize latency, minimize costs, or address
regulatory requirements, choose any AWS Region that is geographically close to you. For example, if
you reside in Europe, you might find it advantageous to create buckets in the Europe (Ireland) or Europe
(Frankfurt) Regions. For a list of Amazon S3 Regions, see Regions and Endpoints in the AWS General
Reference.
Note
Objects that belong to a bucket that you create in a specific AWS Region never leave that
Region, unless you explicitly transfer them to another Region. For example, objects that are
stored in the Europe (Ireland) Region never leave it.
Topics
• About permissions (p. 25)
• Managing public access to buckets (p. 25)
• Bucket configuration options (p. 26)
About permissions
You can use your AWS account root user credentials to create a bucket and perform any other Amazon
S3 operation. However, we recommend that you do not use the root user credentials of your AWS
account to make requests, such as to create a bucket. Instead, create an AWS Identity and Access
Management (IAM) user, and grant that user full access (users by default have no permissions).
These users are referred to as administrators. You can use the administrator user credentials, instead
of the root user credentials of your account, to interact with AWS and perform tasks, such as create a
bucket, create users, and grant them permissions.
For more information, see AWS account root user credentials and IAM user credentials in the AWS
General Reference and Security best practices in IAM in the IAM User Guide.
The AWS account that creates a resource owns that resource. For example, if you create an IAM user
in your AWS account and grant the user permission to create a bucket, the user can create a bucket.
But the user does not own the bucket; the AWS account that the user belongs to owns the bucket. The
user needs additional permission from the resource owner to perform any other bucket operations. For
more information about managing permissions for your Amazon S3 resources, see Identity and access
management in Amazon S3 (p. 274).
Managing public access to buckets
To help ensure that all of your Amazon S3 buckets and objects have their public access blocked, we
recommend that you turn on all four settings for Block Public Access for your account. These settings
block all public access for all current and future buckets.
Before applying these settings, verify that your applications will work correctly without public access. If
you require some level of public access to your buckets or objects—for example, to host a static website
as described at Hosting a static website using Amazon S3 (p. 944)—you can customize the individual
settings to suit your storage use cases. For more information, see Blocking public access to your Amazon
S3 storage (p. 488).
Bucket configuration options
Amazon S3 supports various options for you to configure your bucket, and it stores this configuration information in subresources. They are referred to as subresources because they exist in the context of a specific bucket or object. The following list describes the subresources that enable you to manage bucket-specific configurations.
• cors (cross-origin resource sharing) – You can configure your bucket to allow cross-origin requests. For more information, see Using cross-origin resource sharing (CORS) (p. 477).
• event notification – You can enable your bucket to send you notifications of specified bucket events.
• lifecycle – You can define lifecycle rules for objects in your bucket that have a well-defined lifecycle. For example, you can define a rule to archive objects one year after creation, or delete an object 10 years after creation. For more information, see Managing your storage lifecycle (p. 578).
• location – When you create a bucket, you specify the AWS Region where you want Amazon S3 to create the bucket. Amazon S3 stores this information in the location subresource and provides an API for you to retrieve this information.
• logging – Logging enables you to track requests for access to your bucket. Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and error code, if any. Access log information can be useful in security and access audits. It can also help you learn about your customer base and understand your Amazon S3 bill.
• object locking – To use S3 Object Lock, you must enable it for a bucket. You can also optionally configure a default retention mode and period that applies to new objects that are placed in the bucket.
• policy and ACL (access control list) – All your resources (such as buckets and objects) are private by default. Amazon S3 supports both bucket policy and access control list (ACL) options for you to grant and manage bucket-level permissions. Amazon S3 stores the permission information in the policy and acl subresources.
• requestPayment – By default, the AWS account that creates the bucket (the bucket owner) pays for downloads from the bucket. Using this subresource, the bucket owner can specify that the person requesting the download will be charged for the download. Amazon S3 provides an API for you to manage this subresource. For more information, see Using Requester Pays buckets for storage transfers and usage (p. 52).
• tagging – You can add cost allocation tags to your bucket to categorize and track your AWS costs. Amazon S3 provides the tagging subresource to store and manage tags on a bucket. Using tags you apply to your bucket, AWS generates a cost allocation report with usage and costs aggregated by your tags. For more information, see Billing and usage reporting for S3 buckets (p. 696).
• transfer acceleration – Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of the globally distributed edge locations of Amazon CloudFront. For more information, see Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration (p. 44).
• website – You can configure your bucket for static website hosting. Amazon S3 stores this configuration by creating a website subresource. For more information, see Hosting a static website using Amazon S3 (p. 944).
Bucket naming rules
• Buckets used with Amazon S3 Transfer Acceleration can't have dots (.) in their names. For more
information about Transfer Acceleration, see Configuring fast, secure file transfers using Amazon S3
Transfer Acceleration (p. 44).
For best compatibility, we recommend that you avoid using dots (.) in bucket names, except for buckets
that are used only for static website hosting. If you include dots in a bucket's name, you can't use virtual-
host-style addressing over HTTPS, unless you perform your own certificate validation. This is because the
security certificates used for virtual hosting of buckets don't work for buckets with dots in their names.
This limitation doesn't affect buckets used for static website hosting, because static website hosting is
only available over HTTP. For more information about virtual-host-style addressing, see Virtual hosting
of buckets (p. 1022). For more information about static website hosting, see Hosting a static website
using Amazon S3 (p. 944).
Note
Before March 1, 2018, buckets created in the US East (N. Virginia) Region could have names
that were up to 255 characters long and included uppercase letters and underscores. Beginning
March 1, 2018, new buckets in US East (N. Virginia) must conform to the same rules applied in
all other Regions.
The following example bucket names are valid and follow the recommended naming guidelines:
• docexamplebucket1
• log-delivery-march-2020
• my-hosted-content
The following example bucket names are valid but not recommended for uses other than static website
hosting:
• docexamplewebsite.com
• www.docexamplewebsite.com
• my.example.s3.bucket
Creating a bucket
To upload your data to Amazon S3, you must first create an Amazon S3 bucket in one of the AWS
Regions. When you create a bucket, you must choose a bucket name and Region. You can optionally
choose other storage management options for the bucket. After you create a bucket, you cannot change
the bucket name or Region. For information about naming buckets, see Bucket naming rules (p. 27).
The AWS account that creates the bucket owns it. You can upload any number of objects to the bucket.
By default, you can create up to 100 buckets in each of your AWS accounts. If you need more buckets,
you can increase your account bucket limit to a maximum of 1,000 buckets by submitting a service limit
increase. To learn how to submit a bucket limit increase, see AWS service quotas in the AWS General
Reference. You can store any number of objects in a bucket.
You can use the Amazon S3 console, Amazon S3 APIs, AWS CLI, or AWS SDKs to create a bucket.
After you create the bucket, you can't change its name. For information about naming buckets, see
Bucket naming rules (p. 27).
Important
Avoid including sensitive information, such as account numbers, in the bucket name. The
bucket name is visible in the URLs that point to the objects in the bucket.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Choose Create bucket. The Create bucket wizard opens.
3. In Bucket name, enter a DNS-compliant name for your bucket.
4. In Region, choose the AWS Region where you want the bucket to reside.
Choose a Region close to you to minimize latency and costs and address regulatory requirements.
Objects stored in a Region never leave that Region unless you explicitly transfer them to another
Region. For a list of Amazon S3 AWS Regions, see AWS service endpoints in the Amazon Web Services
General Reference.
5. In Bucket settings for Block Public Access, choose the Block Public Access settings that you want to
apply to the bucket.
We recommend that you keep all settings enabled unless you know that you need to turn off one
or more of them for your use case, such as to host a public website. Block Public Access settings
that you enable for the bucket are also enabled for all access points that you create on the bucket.
For more information about blocking public access, see Blocking public access to your Amazon S3
storage (p. 488).
6. (Optional) If you want to enable S3 Object Lock, do the following:
For more information about the S3 Object Lock feature, see Using S3 Object Lock (p. 559).
Note
To create an Object Lock enabled bucket, you must have the following permissions:
s3:CreateBucket, s3:PutBucketVersioning, and s3:PutBucketObjectLockConfiguration.
To create a client to access a dual-stack endpoint, you must specify an AWS Region. For more
information, see Dual-stack endpoints (p. 991). For a list of available AWS Regions, see Regions and
endpoints in the AWS General Reference.
When you create a client, the Region maps to the Region-specific endpoint. The client uses this endpoint
to communicate with Amazon S3: s3.<region>.amazonaws.com. If your Region launched after March
20, 2019, your client and bucket must be in the same Region. However, you can use a client in the US
East (N. Virginia) Region to create a bucket in any Region that launched before March 20, 2019. For more
information, see Legacy Endpoints (p. 1026).
• Create a client by explicitly specifying an AWS Region — In the example, the client uses the s3.us-
west-2.amazonaws.com endpoint to communicate with Amazon S3. You can specify any AWS
Region. For a list of AWS Regions, see Regions and endpoints in the AWS General Reference.
• Send a create bucket request by specifying only a bucket name — The client sends a request to
Amazon S3 to create the bucket in the Region where you created a client.
• Retrieve information about the location of the bucket — Amazon S3 stores bucket location
information in the location subresource that is associated with the bucket.
Java
This example shows how to create an Amazon S3 bucket using the AWS SDK for Java. For
instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 1038).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CreateBucketRequest;
import com.amazonaws.services.s3.model.GetBucketLocationRequest;
import java.io.IOException;
public class CreateBucket {
    public static void main(String[] args) throws IOException {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(clientRegion)
                    .build();
            if (!s3Client.doesBucketExistV2(bucketName)) {
                // Because the CreateBucketRequest object doesn't specify a region, the
                // bucket is created in the region specified in the client.
                s3Client.createBucket(new CreateBucketRequest(bucketName));
                // Verify that the bucket was created by retrieving it and checking its location.
                String bucketLocation = s3Client.getBucketLocation(new GetBucketLocationRequest(bucketName));
                System.out.println("Bucket location: " + bucketLocation);
            }
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
.NET
For information about how to create and test a working sample, see Running the Amazon S3 .NET
Code Examples (p. 1039).
Example
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class CreateBucketTest
{
private const string bucketName = "*** bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            CreateBucketAsync().Wait();
        }
        static async Task CreateBucketAsync()
        {
            try
            {
                var putBucketRequest = new PutBucketRequest
                {
                    BucketName = bucketName,
                    UseClientRegion = true
                };
                PutBucketResponse putBucketResponse = await s3Client.PutBucketAsync(putBucketRequest);
                Console.WriteLine("Created bucket {0}", bucketName);
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error creating bucket: '{0}'", e.Message);
            }
        }
    }
}
Ruby
For information about how to create and test a working sample, see Using the AWS SDK for Ruby -
Version 3 (p. 1040).
Example
require 'aws-sdk-s3'
For information about the AWS CLI, see What is the AWS Command Line Interface? in the AWS Command
Line Interface User Guide.
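For example, assuming that my-bucket is a placeholder for your bucket name, you can create a bucket in the US West (Oregon) Region with the create-bucket command. Regions other than US East (N. Virginia) require the LocationConstraint setting.

aws s3api create-bucket --bucket my-bucket --region us-west-2 --create-bucket-configuration LocationConstraint=us-west-2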
Viewing the properties for an S3 bucket
To view the properties for an S3 bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to view the properties for.
3. Choose Properties.
4. On the Properties page, you can configure the following properties for the bucket.
• Bucket Versioning – Keep multiple versions of an object in one bucket by using versioning. By
default, versioning is disabled for a new bucket. For information about enabling versioning, see
Enabling versioning on buckets (p. 523).
• Tags – With AWS cost allocation, you can use bucket tags to annotate billing for your use of a
bucket. A tag is a key-value pair that represents a label that you assign to a bucket. To add tags,
choose Tags, and then choose Add tag. For more information, see Using cost allocation S3 bucket
tags (p. 695).
• Default encryption – Enabling default encryption provides you with automatic server-side
encryption. Amazon S3 encrypts an object before saving it to a disk and decrypts the object when
you download it. For more information, see Setting default server-side encryption behavior for
Amazon S3 buckets (p. 40).
• Server access logging – Get detailed records for the requests that are made to your bucket with
server access logging. By default, Amazon S3 doesn't collect server access logs. For information
about enabling server access logging, see Enabling Amazon S3 server access logging (p. 835).
• AWS CloudTrail data events – Use CloudTrail to log data events. By default, trails don't log data
events. Additional charges apply for data events. For more information, see Logging Data Events
for Trails in the AWS CloudTrail User Guide.
• Event notifications – Enable certain Amazon S3 bucket events to send notification messages to a
destination whenever the events occur. To enable events, choose Create event notification, and
then specify the settings you want to use. For more information, see Enabling and configuring
event notifications using the Amazon S3 console (p. 875).
• Transfer acceleration – Enable fast, easy, and secure transfers of files over long distances between
your client and an S3 bucket. For information about enabling transfer acceleration, see Enabling
and using S3 Transfer Acceleration (p. 47).
• Object Lock – Use S3 Object Lock to prevent an object from being deleted or overwritten for a
fixed amount of time or indefinitely. For more information, see Using S3 Object Lock (p. 559).
• Requester Pays – Enable Requester Pays if you want the requester (instead of the bucket owner)
to pay for requests and data transfers. For more information, see Using Requester Pays buckets for
storage transfers and usage (p. 52).
• Static website hosting – You can host a static website on Amazon S3. To enable static website
hosting, choose Static website hosting, and then specify the settings you want to use. For more
information, see Hosting a static website using Amazon S3 (p. 944).
Accessing a bucket
You can access your bucket using the Amazon S3 console. Using the console UI, you can perform almost
all bucket operations without having to write any code.
If you access a bucket programmatically, Amazon S3 supports RESTful architecture in which your buckets
and objects are resources, each with a resource URI that uniquely identifies the resource.
Amazon S3 supports both virtual-hosted–style and path-style URLs to access a bucket. Because
buckets can be accessed using path-style and virtual-hosted–style URLs, we recommend that you
create buckets with DNS-compliant bucket names. For more information, see Bucket restrictions and
limitations (p. 55).
Note
Virtual-hosted-style and path-style requests use the S3 dot Region endpoint structure
(s3.Region), for example, https://my-bucket.s3.us-west-2.amazonaws.com. However,
some older Amazon S3 Regions also support S3 dash Region endpoints s3-Region, for
example, https://my-bucket.s3-us-west-2.amazonaws.com. If your bucket is in one of
these Regions, you might see s3-Region endpoints in your server access logs or AWS CloudTrail
logs. We recommend that you do not use this endpoint structure in your requests.
Virtual-hosted–style access
In a virtual-hosted–style request, the bucket name is part of the domain name in the URL.
https://bucket-name.s3.Region.amazonaws.com/key name
In this example, my-bucket is the bucket name, US West (Oregon) is the Region, and puppy.png is the
key name:
https://my-bucket.s3.us-west-2.amazonaws.com/puppy.png
For more information about virtual hosted style access, see Virtual Hosted-Style Requests (p. 1023).
Path-style access
In Amazon S3, path-style URLs use the following format.
https://s3.Region.amazonaws.com/bucket-name/key name
For example, if you create a bucket named mybucket in the US West (Oregon) Region, and you want to
access the puppy.jpg object in that bucket, you can use the following path-style URL:
https://s3.us-west-2.amazonaws.com/mybucket/puppy.jpg
S3 access points only support virtual-host-style addressing. To address a bucket through an access point,
use the following format.
https://AccessPointName-AccountId.s3-accesspoint.region.amazonaws.com.
Note
• If your access point name includes dash (-) characters, include the dashes in the URL and
insert another dash before the account ID. For example, to use an access point named
finance-docs owned by account 123456789012 in Region us-west-2, the appropriate
URL would be https://finance-docs-123456789012.s3-accesspoint.us-
west-2.amazonaws.com.
• S3 access points don't support access by HTTP, only secure access by HTTPS.
Some AWS services require specifying an Amazon S3 bucket using the S3:// format, as follows:
S3://bucket-name/key-name
For example, the following uses the sample bucket described in the earlier path-style section:
S3://mybucket/puppy.jpg
Emptying a bucket
You can empty a bucket's contents using the Amazon S3 console, AWS SDKs, or AWS Command Line
Interface (AWS CLI). When you empty a bucket, you delete all the objects, but you keep the bucket.
After you empty a bucket, the action cannot be undone. When you empty a bucket that has S3 Bucket Versioning
enabled or suspended, all versions of all the objects in the bucket are deleted. For more information, see
Working with objects in a versioning-enabled bucket (p. 529).
You can also specify a lifecycle configuration on a bucket to expire objects so that Amazon S3 can delete
them. For more information, see Setting lifecycle configuration on a bucket (p. 584).
Troubleshooting
Objects added to the bucket while the empty bucket action is in progress might be deleted. To prevent
new objects from being added to a bucket while the empty bucket action is in progress, you might need
to stop your AWS CloudTrail trails from logging events to the bucket. For more information, see Turning
off logging for a trail in the AWS CloudTrail User Guide.
As an alternative to stopping CloudTrail trails from delivering events to the bucket, you can add a deny s3:PutObject statement to your bucket policy. If you want to store new objects in the bucket later, remove the deny s3:PutObject statement from your bucket policy. For more information, see Example — Object operations (p. 295) and IAM JSON policy elements: Effect in the IAM User Guide.
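For illustration, the following sketch adds such a deny statement with the put-bucket-policy command. The bucket name my-bucket and the statement ID DenyNewObjects are placeholders, and because put-bucket-policy replaces the entire bucket policy, merge the statement into your existing policy if you already have one.

aws s3api put-bucket-policy --bucket my-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyNewObjects",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}'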
To empty an S3 bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, select the option next to the name of the bucket that you want to empty,
and then choose Empty.
3. On the Empty bucket page, confirm that you want to empty the bucket by entering the bucket
name into the text field, and then choose Empty.
4. Monitor the progress of the bucket emptying process on the Empty bucket: Status page.
The following rm command removes objects that have the key name prefix doc, for example, doc/doc1
and doc/doc2.
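A typical form of this command, with bucket-name as a placeholder for your bucket, is the following:

aws s3 rm s3://bucket-name/doc --recursive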
Use the following command to remove all objects without specifying a prefix.
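For example, again with bucket-name as a placeholder:

aws s3 rm s3://bucket-name --recursive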
For more information, see Using high-level S3 commands with the AWS CLI in the AWS Command Line
Interface User Guide.
Note
If the bucket has versioning enabled, this command does not permanently remove objects. Instead, Amazon S3 adds a delete marker for each object that you delete. For more information about S3 Bucket Versioning, see Using versioning in S3 buckets (p. 519).
For an example of how to empty a bucket using AWS SDK for Java, see Deleting a bucket (p. 37). The
code deletes all objects, regardless of whether the bucket has versioning enabled, and then it deletes the
bucket. To just empty the bucket, make sure that you remove the statement that deletes the bucket.
For more information about using other AWS SDKs, see Tools for Amazon Web Services.
You can add lifecycle configuration rules to expire all objects or a subset of objects that have a specific
key name prefix. For example, to remove all objects in a bucket, you can set a lifecycle rule to expire
objects one day after creation.
Amazon S3 supports a bucket lifecycle rule that you can use to stop multipart uploads that don't
complete within a specified number of days after being initiated. We recommend that you configure this
lifecycle rule to minimize your storage costs. For more information, see Configuring a bucket lifecycle
policy to abort incomplete multipart uploads (p. 79).
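As a hedged sketch of both suggestions, a single lifecycle rule can expire current objects one day after creation and abort incomplete multipart uploads; my-bucket and the rule ID empty-bucket are placeholders:

aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
    --lifecycle-configuration '{"Rules":[{"ID":"empty-bucket","Status":"Enabled","Filter":{"Prefix":""},"Expiration":{"Days":1},"AbortIncompleteMultipartUpload":{"DaysAfterInitiation":1}}]}'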
For more information about using a lifecycle configuration to empty a bucket, see Setting lifecycle
configuration on a bucket (p. 584) and Expiring objects (p. 584).
Deleting a bucket
You can delete an empty Amazon S3 bucket. Before deleting a bucket, consider the following:
• Bucket names are unique. If you delete a bucket, another AWS user can use the name.
• If the bucket hosts a static website, and you created and configured an Amazon Route 53 hosted
zone as described in Configuring a static website using a custom domain registered with Route
53 (p. 971), you must clean up the Route 53 hosted zone settings that are related to the bucket. For
more information, see Step 2: Delete the Route 53 hosted zone (p. 985).
• If the bucket receives log data from Elastic Load Balancing (ELB), we recommend that you stop the
delivery of ELB logs to the bucket before deleting it. After you delete the bucket, if another user
creates a bucket using the same name, your log data could potentially be delivered to that bucket. For
information about ELB access logs, see Access logs in the User Guide for Classic Load Balancers and
Access logs in the User Guide for Application Load Balancers.
Troubleshooting
• s3:DeleteBucket permissions – If you cannot delete a bucket, work with your IAM administrator to
confirm that you have s3:DeleteBucket permissions in your IAM user policy.
• s3:DeleteBucket deny statement – If you have s3:DeleteBucket permissions in your IAM policy and
you cannot delete a bucket, the bucket policy might include a deny statement for s3:DeleteBucket.
Buckets created by AWS Elastic Beanstalk have a policy containing this statement by default. Before you can
delete the bucket, you must delete this statement or the bucket policy.
Important
Bucket names are unique. If you delete a bucket, another AWS user can use the name. If you
want to continue to use the same bucket name, don't delete the bucket. We recommend that
you empty the bucket and keep it.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, select the option next to the name of the bucket that you want to delete, and
then choose Delete at the top of the page.
3. On the Delete bucket page, confirm that you want to delete the bucket by entering the bucket
name into the text field, and then choose Delete bucket.
Note
If the bucket contains any objects, empty the bucket before deleting it by selecting the
empty bucket configuration link in the This bucket is not empty error alert and following
the instructions on the Empty bucket page. Then return to the Delete bucket page and
delete the bucket.
Java
The following Java example deletes a bucket that contains objects. The example deletes all objects,
and then it deletes the bucket. The example works for buckets with or without versioning enabled.
Note
For buckets without versioning enabled, you can delete all objects directly and then delete
the bucket. For buckets with versioning enabled, you must delete all object versions before
deleting the bucket.
For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 1038).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;
import java.util.Iterator;
public class DeleteBucket {
    public static void main(String[] args) {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(clientRegion)
                    .build();

            // Delete all objects from the bucket. This is sufficient for unversioned
            // buckets; for versioned buckets it adds delete markers, and the
            // individual versions are removed in the next loop.
            ObjectListing objectListing = s3Client.listObjects(bucketName);
            while (true) {
                for (S3ObjectSummary summary : objectListing.getObjectSummaries()) {
                    s3Client.deleteObject(bucketName, summary.getKey());
                }
                if (objectListing.isTruncated()) {
                    objectListing = s3Client.listNextBatchOfObjects(objectListing);
                } else {
                    break;
                }
            }

            // Delete all object versions (required for versioned buckets).
            VersionListing versionList = s3Client.listVersions(
                    new ListVersionsRequest().withBucketName(bucketName));
            while (true) {
                Iterator<S3VersionSummary> versionIter = versionList.getVersionSummaries().iterator();
                while (versionIter.hasNext()) {
                    S3VersionSummary vs = versionIter.next();
                    s3Client.deleteVersion(bucketName, vs.getKey(), vs.getVersionId());
                }
                if (versionList.isTruncated()) {
                    versionList = s3Client.listNextBatchOfVersions(versionList);
                } else {
                    break;
                }
            }

            // After all objects and object versions are deleted, delete the bucket.
            s3Client.deleteBucket(bucketName);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client couldn't
            // parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
If your bucket does not have versioning enabled, you can use the rb (remove bucket) AWS CLI command
with the --force parameter to delete the bucket and all the objects in it. This command deletes all
objects first and then deletes the bucket.
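For example, with bucket-name as a placeholder:

aws s3 rb s3://bucket-name --force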
For more information, see Using High-Level S3 Commands with the AWS Command Line Interface in the
AWS Command Line Interface User Guide.
Setting default server-side encryption behavior for Amazon S3 buckets
You can set default encryption on a bucket so that all new objects are encrypted when they are stored in the bucket, using server-side encryption with either Amazon S3-managed keys (SSE-S3) or AWS KMS keys (SSE-KMS).
When you configure your bucket to use default encryption with SSE-KMS, you can also enable S3 Bucket
Keys to decrease request traffic from Amazon S3 to AWS Key Management Service (AWS KMS) and
reduce the cost of encryption. For more information, see Reducing the cost of SSE-KMS with Amazon S3
Bucket Keys (p. 228).
When you use server-side encryption, Amazon S3 encrypts an object before saving it to disk and
decrypts it when you download the objects. For more information about protecting data using
server-side encryption and encryption key management, see Protecting data using server-side
encryption (p. 219).
For more information about permissions required for default encryption, see PutBucketEncryption in the
Amazon Simple Storage Service API Reference.
To set up default encryption on a bucket, you can use the Amazon S3 console, AWS CLI, AWS SDKs, or
the REST API. For more information, see the section called “Enabling default encryption” (p. 41).
To encrypt your existing Amazon S3 objects, you can use Amazon S3 Batch Operations. You provide
S3 Batch Operations with a list of objects to operate on, and Batch Operations calls the respective API
to perform the specified operation. You can use the Batch Operations Copy operation to copy existing
unencrypted objects and write them back to the same bucket as encrypted objects. A single Batch
Operations job can perform the specified operation on billions of objects. For more information, see
Performing large-scale batch operations on Amazon S3 objects (p. 738) and the AWS Storage Blog post
Encrypting objects with Amazon S3 Batch Operations.
You can also encrypt existing objects using the Copy Object API. For more information, see the AWS
Storage Blog post Encrypting existing Amazon S3 objects with the AWS CLI.
Note
Amazon S3 buckets with default bucket encryption using SSE-KMS cannot be used as
destination buckets for the section called “Logging server access” (p. 833). Only SSE-S3
default encryption is supported for server access log destination buckets.
When you configure default encryption using AWS KMS, note the following:
• The AWS managed key (aws/s3) is used when an AWS KMS key Amazon Resource Name (ARN) or alias is not provided at request time or through the bucket's default encryption configuration.
• If you're uploading or accessing S3 objects using AWS Identity and Access Management (IAM)
principals that are in the same AWS account as your KMS key, you can use the AWS managed key (aws/
s3).
• Use a customer managed key if you want to grant cross-account access to your S3 objects. You can
configure the policy of a customer managed key to allow access from another account.
• If you specify your own KMS key, use a fully qualified KMS key ARN. If you use a KMS key alias instead, be aware that AWS KMS resolves the alias within the requester's account. This can result in data being encrypted with a KMS key that belongs to the requester rather than to the bucket administrator.
• You must specify a KMS key for which you (the requester) have been granted the Encrypt permission. For more information, see Allows key users to use a KMS key for cryptographic operations in the AWS Key Management Service Developer Guide.
For more information about when to use customer managed keys and the AWS managed KMS keys, see Should I use an AWS managed key or a customer managed key to encrypt my objects on Amazon S3?
When you enable default encryption for a replication destination bucket, the following encryption behavior applies:
• If objects in the source bucket are not encrypted, the replica objects in the destination bucket are
encrypted using the default encryption settings of the destination bucket. This results in the ETag of
the source object being different from the ETag of the replica object. You must update applications
that use the ETag to accommodate for this difference.
• If objects in the source bucket are encrypted using SSE-S3 or SSE-KMS, the replica objects in the
destination bucket use the same encryption as the source object encryption. The default encryption
settings of the destination bucket are not used.
For more information about using default encryption with SSE-KMS, see Replicating encrypted
objects (p. 675).
When you configure your bucket to use S3 Bucket Keys for SSE-KMS on new objects, AWS KMS generates
a bucket-level key that is used to create a unique data key for objects in the bucket. This bucket key is
used for a time-limited period within Amazon S3, reducing the need for Amazon S3 to make requests to
AWS KMS to complete encryption operations.
For more information about using an S3 Bucket Key, see Using Amazon S3 Bucket Keys (p. 228).
When you configure default encryption using AWS KMS, you can also configure S3 Bucket Key. For more
information, see Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys (p. 228).
Default encryption works with all existing and new Amazon S3 buckets. Without default encryption, to
encrypt all objects stored in a bucket, you must include encryption information with every object storage
request. You must also set up an Amazon S3 bucket policy to reject storage requests that don't include
encryption information.
There are no additional charges for using default encryption for S3 buckets. Requests to configure the
default encryption feature incur standard Amazon S3 request charges. For information about pricing, see
Amazon S3 pricing. For SSE-KMS KMS key storage, AWS KMS charges apply and are listed at AWS KMS
pricing.
After you enable default encryption for a bucket, the following encryption behavior applies:
• There is no change to the encryption of the objects that existed in the bucket before default
encryption was enabled.
• When you upload objects after enabling default encryption:
• If your PUT request headers don't include encryption information, Amazon S3 uses the bucket’s
default encryption settings to encrypt the objects.
• If your PUT request headers include encryption information, Amazon S3 uses the encryption
information from the PUT request to encrypt objects before storing them in Amazon S3.
• If you use the SSE-KMS option for your default encryption configuration, you are subject to the RPS
(requests per second) limits of AWS KMS. For more information about AWS KMS limits and how to
request a limit increase, see AWS KMS limits.
To configure default encryption on an S3 bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want.
3. Choose Properties.
4. Under Default encryption, choose Edit.
5. To enable or disable server-side encryption, choose Enable or Disable.
6. To enable server-side encryption using an Amazon S3-managed key, under Encryption key type,
choose Amazon S3 key (SSE-S3).
For more information about using Amazon S3 server-side encryption to encrypt your data, see
Protecting data using server-side encryption with Amazon S3-managed encryption keys (SSE-
S3) (p. 237).
7. To enable server-side encryption using an AWS KMS key, follow these steps:
a. Under Encryption key type, choose AWS Key Management Service key (SSE-KMS).
Important
If you use the AWS KMS option for your default encryption configuration, you are
subject to the RPS (requests per second) limits of AWS KMS. For more information
about AWS KMS quotas and how to request a quota increase, see Quotas.
b. Under AWS KMS key choose one of the following:
Important
You can only use KMS keys that are enabled in the same AWS Region as the bucket.
When you choose Choose from your KMS keys, the S3 console only lists 100 KMS
keys per Region. If you have more than 100 KMS keys in the same Region, you can only
see the first 100 KMS keys in the S3 console. To use a KMS key that is not listed in the
console, choose Custom KMS ARN, and enter the KMS key ARN.
When you use an AWS KMS key for server-side encryption in Amazon S3, you must
choose a symmetric KMS key. Amazon S3 only supports symmetric KMS keys and not
asymmetric KMS keys. For more information, see Using symmetric and asymmetric keys
in the AWS Key Management Service Developer Guide.
For more information about creating an AWS KMS key, see Creating keys in the AWS Key
Management Service Developer Guide. For more information about using AWS KMS with Amazon
S3, see Protecting Data Using Server-Side Encryption with KMS keys Stored in AWS Key
Management Service (SSE-KMS) (p. 220).
8. To use S3 Bucket Keys, under Bucket Key, choose Enable.
When you configure your bucket to use default encryption with SSE-KMS, you can also enable S3
Bucket Key. S3 Bucket Keys decrease request traffic from Amazon S3 to AWS KMS and lower the
cost of encryption. For more information, see Reducing the cost of SSE-KMS with Amazon S3 Bucket
Keys (p. 228).
9. Choose Save changes.
For more information about default encryption, see Setting default server-side encryption behavior
for Amazon S3 buckets (p. 40). For more information about using the AWS CLI to configure default
encryption, see put-bucket-encryption.
This example configures default bucket encryption with Amazon S3-managed encryption.
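A typical form of this configuration, with my-bucket as a placeholder for your bucket name, is the following:

aws s3api put-bucket-encryption --bucket my-bucket \
    --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'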
This example configures default bucket encryption with SSE-KMS using an S3 Bucket Key.
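A typical form, with my-bucket and KMS-Key-ARN as placeholders for your bucket name and key ARN:

aws s3api put-bucket-encryption --bucket my-bucket \
    --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms","KMSMasterKeyID":"KMS-Key-ARN"},"BucketKeyEnabled":true}]}'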
For more information, see PutBucketEncryption in the Amazon Simple Storage Service API Reference.
• PutBucketEncryption
• GetBucketEncryption
• DeleteBucketEncryption
You can also create Amazon CloudWatch Events with S3 bucket-level operations as the event type.
For more information about CloudTrail events, see Enable logging for objects in a bucket using the
console (p. 826).
You can use CloudTrail logs for object-level Amazon S3 actions to track PUT and POST requests to
Amazon S3. You can use these actions to verify whether default encryption is being used to encrypt
objects when incoming PUT requests don't have encryption headers.
When Amazon S3 encrypts an object using the default encryption settings, the log includes
the following field as the name/value pair: "SSEApplied":"Default_SSE_S3" or
"SSEApplied":"Default_SSE_KMS".
When Amazon S3 encrypts an object using the PUT encryption headers, the log includes one of the
following fields as the name/value pair: "SSEApplied":"SSE_S3", "SSEApplied":"SSE_KMS", or
"SSEApplied":"SSE_C".
For multipart uploads, this information is included in the InitiateMultipartUpload API requests. For
more information about using CloudTrail and CloudWatch, see Monitoring Amazon S3 (p. 814).
Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration
Amazon S3 Transfer Acceleration is a bucket-level feature that enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. When you use Transfer Acceleration, additional data transfer charges might apply. For more information
about pricing, see Amazon S3 pricing.
You might want to use Transfer Acceleration on a bucket for various reasons, including the following:
• Your customers upload to a centralized bucket from all over the world.
• You transfer gigabytes to terabytes of data on a regular basis across continents.
• You can't use all of your available bandwidth over the internet when uploading to Amazon S3.
For more information about when to use Transfer Acceleration, see Amazon S3 FAQs.
The following requirements apply when you use Transfer Acceleration on an S3 bucket:
• Transfer Acceleration is only supported on virtual-hosted style requests. For more information about
virtual-hosted style requests, see Making requests using the REST API (p. 1020).
• The name of the bucket used for Transfer Acceleration must be DNS-compliant and must not contain
periods (".").
• Transfer Acceleration must be enabled on the bucket. For more information, see Enabling and using S3
Transfer Acceleration (p. 47).
After you enable Transfer Acceleration on a bucket, it might take up to 20 minutes before the data
transfer speed to the bucket increases.
Note
Transfer Acceleration is currently not supported for buckets located in the following Regions:
• Africa (Cape Town) (af-south-1)
• Asia Pacific (Hong Kong) (ap-east-1)
• Asia Pacific (Osaka) (ap-northeast-3)
• Europe (Stockholm) (eu-north-1)
• Europe (Milan) (eu-south-1)
• Middle East (Bahrain) (me-south-1)
• To access the bucket that is enabled for Transfer Acceleration, you must use the endpoint
bucketname.s3-accelerate.amazonaws.com. Or, use the dual-stack endpoint bucketname.s3-
accelerate.dualstack.amazonaws.com to connect to the enabled bucket over IPv6.
• You must be the bucket owner to set the transfer acceleration state. The bucket owner can
assign permissions to other users to allow them to set the acceleration state on a bucket. The
s3:PutAccelerateConfiguration permission permits users to enable or disable Transfer
Acceleration on a bucket. The s3:GetAccelerateConfiguration permission permits users to
return the Transfer Acceleration state of a bucket, which is either Enabled or Suspended. For more
information about these permissions, see Example — Bucket subresource operations (p. 296) and
Identity and access management in Amazon S3 (p. 274).
The following sections describe how to get started and use Amazon S3 Transfer Acceleration for
transferring data.
Topics
• Getting started with Amazon S3 Transfer Acceleration (p. 45)
• Enabling and using S3 Transfer Acceleration (p. 47)
• Using the Amazon S3 Transfer Acceleration Speed Comparison tool (p. 51)
Getting started with Amazon S3 Transfer Acceleration
Transfer Acceleration takes advantage of the globally distributed edge locations in Amazon CloudFront. As the data arrives at an edge location, the data is routed to Amazon S3 over an optimized network path.
To get started using Amazon S3 Transfer Acceleration, perform the following steps:
1. Enable Transfer Acceleration on a bucket
You can enable Transfer Acceleration on a bucket in any of the following ways:
• Use the Amazon S3 console.
• Use the REST API PUT Bucket accelerate operation.
• Use the AWS CLI and AWS SDKs. For more information, see Developing with Amazon S3 using the
AWS SDKs, and explorers (p. 1030).
For more information, see Enabling and using S3 Transfer Acceleration (p. 47).
Note
For your bucket to work with transfer acceleration, the bucket name must conform to DNS
naming requirements and must not contain periods (".").
2. Transfer data to and from the acceleration-enabled bucket
Amazon S3 dual-stack endpoints support requests to S3 buckets over IPv6 and IPv4. The Transfer
Acceleration dual-stack endpoint only uses the virtual hosted-style type of endpoint name. For
more information, see Getting started making requests over IPv6 (p. 989) and Using Amazon S3
dual-stack endpoints (p. 991).
Note
You can continue to use the regular endpoint in addition to the accelerate endpoints.
You can point your Amazon S3 PUT object and GET object requests to the s3-accelerate
endpoint domain name after you enable Transfer Acceleration. For example, suppose that you
currently have a REST API application using PUT Object that uses the hostname mybucket.s3.us-
east-1.amazonaws.com in the PUT request. To accelerate the PUT, you change the hostname in
your request to mybucket.s3-accelerate.amazonaws.com. To go back to using the standard
upload speed, change the name back to mybucket.s3.us-east-1.amazonaws.com.
After Transfer Acceleration is enabled, it can take up to 20 minutes for you to realize the performance
benefit. However, the accelerate endpoint is available as soon as you enable Transfer Acceleration.
You can use the accelerate endpoint in the AWS CLI, AWS SDKs, and other tools that transfer data
to and from Amazon S3. If you are using the AWS SDKs, some of the supported languages use
an accelerate endpoint client configuration flag so you don't need to explicitly set the endpoint
for Transfer Acceleration to bucketname.s3-accelerate.amazonaws.com. For examples of
how to use an accelerate endpoint client configuration flag, see Enabling and using S3 Transfer
Acceleration (p. 47).
You can use all Amazon S3 operations through the transfer acceleration endpoints except for the following:
• ListBuckets
• CreateBucket
• DeleteBucket
Also, Amazon S3 Transfer Acceleration does not support cross-Region copies using PUT Object - Copy.
Enabling and using S3 Transfer Acceleration
This section provides examples of how to enable Amazon S3 Transfer Acceleration on a bucket and use
the acceleration endpoint for the enabled bucket.
For more information about Transfer Acceleration requirements, see Configuring fast, secure file
transfers using Amazon S3 Transfer Acceleration (p. 44).
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to enable transfer acceleration for.
3. Choose Properties.
4. Under Transfer acceleration, choose Edit.
5. Choose Enable, and choose Save changes.
1. After Amazon S3 enables transfer acceleration for your bucket, view the Properties tab for the
bucket.
2. Under Transfer acceleration, Accelerated endpoint displays the transfer acceleration endpoint for
your bucket. Use this endpoint to access accelerated data transfers to and from your bucket.
The following example sets Status=Enabled to enable Transfer Acceleration on a bucket. You use
Status=Suspended to suspend Transfer Acceleration.
Example
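A typical form of this command, with bucketname as a placeholder for your bucket, is the following:

aws s3api put-bucket-accelerate-configuration --bucket bucketname --accelerate-configuration Status=Enabled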
When the AWS CLI configuration value use_accelerate_endpoint is set to true, all requests are sent using the virtual style of bucket addressing: my-bucket.s3-accelerate.amazonaws.com. Any ListBuckets, CreateBucket, and DeleteBucket requests are not sent to the accelerate endpoint because the endpoint doesn't support those operations.
For more information about use_accelerate_endpoint, see AWS CLI S3 Configuration in the AWS CLI
Command Reference.
Example
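The following command, as a minimal sketch, sets use_accelerate_endpoint to true in the default profile:

aws configure set default.s3.use_accelerate_endpoint true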
If you want to use the accelerate endpoint for some AWS CLI commands but not others, you can use
either one of the following two methods:
• Use the accelerate endpoint for any s3 or s3api command by setting the --endpoint-url parameter
to https://s3-accelerate.amazonaws.com.
• Set up separate profiles in your AWS Config file. For example, create one profile that sets
use_accelerate_endpoint to true and a profile that does not set use_accelerate_endpoint.
When you run a command, specify which profile you want to use, depending upon whether you want to use the accelerate endpoint, as shown in the following sketch.
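The following sketch of the profile-based method uses accelerate as an assumed profile name, with file.txt, bucketname, and keyname as placeholders:

aws configure set s3.use_accelerate_endpoint true --profile accelerate
aws s3 cp file.txt s3://bucketname/keyname --profile accelerate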
Example
The following example uploads a file to a bucket enabled for Transfer Acceleration by using the --
endpoint-url parameter to specify the accelerate endpoint.
Example
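A typical form of this command, with file.txt, bucketname, keyname, and the Region as placeholders:

aws s3 cp file.txt s3://bucketname/keyname --region us-west-2 --endpoint-url https://s3-accelerate.amazonaws.com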
If you are using the AWS SDKs, some of the supported languages use an accelerate endpoint client configuration flag so you don't need to explicitly set the endpoint for Transfer Acceleration to bucketname.s3-accelerate.amazonaws.com.
Java
Example
The following example shows how to use an accelerate endpoint to upload an object to Amazon S3.
The example does the following:
• Creates an AmazonS3Client that is configured to use accelerate endpoints. All buckets that the
client accesses must have Transfer Acceleration enabled.
• Enables Transfer Acceleration on a specified bucket. This step is necessary only if the bucket you
specify doesn't already have Transfer Acceleration enabled.
• Verifies that transfer acceleration is enabled for the specified bucket.
• Uploads a new object to the specified bucket using the bucket's accelerate endpoint.
For more information about using Transfer Acceleration, see Getting started with Amazon S3
Transfer Acceleration (p. 45). For instructions on creating and testing a working sample, see
Testing the Amazon S3 Java Code Examples (p. 1038).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketAccelerateConfiguration;
import com.amazonaws.services.s3.model.BucketAccelerateStatus;
import com.amazonaws.services.s3.model.GetBucketAccelerateConfigurationRequest;
import com.amazonaws.services.s3.model.SetBucketAccelerateConfigurationRequest;
public class TransferAcceleration {
    public static void main(String[] args) {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Key name ***";
        try {
            // Create an Amazon S3 client that is configured to use the accelerate endpoint.
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(clientRegion)
                    .withCredentials(new ProfileCredentialsProvider())
                    .enableAccelerateMode()
                    .build();
            // Enable Transfer Acceleration on the specified bucket.
            s3Client.setBucketAccelerateConfiguration(
                    new SetBucketAccelerateConfigurationRequest(bucketName,
                            new BucketAccelerateConfiguration(BucketAccelerateStatus.Enabled)));
            // Verify that Transfer Acceleration is enabled for the bucket.
            String accelerateStatus = s3Client.getBucketAccelerateConfiguration(
                    new GetBucketAccelerateConfigurationRequest(bucketName)).getStatus();
            System.out.println("Bucket accelerate status: " + accelerateStatus);
            // Upload a new object through the bucket's accelerate endpoint.
            s3Client.putObject(bucketName, keyName, "Test object for transfer acceleration");
        } catch (AmazonServiceException e) {
            // Amazon S3 couldn't process the request and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted, or the client couldn't parse the response.
            e.printStackTrace();
        }
    }
}
.NET
The following example shows how to use the AWS SDK for .NET to enable Transfer Acceleration on
a bucket. For instructions on how to create and test a working sample, see Running the Amazon
S3 .NET Code Examples (p. 1039).
Example
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class TransferAccelerationTest
{
private const string bucketName = "*** bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            EnableAccelerationAsync().Wait();
        }
        static async Task EnableAccelerationAsync()
        {
            try
            {
                var putRequest = new PutBucketAccelerateConfigurationRequest
                {
                    BucketName = bucketName,
                    AccelerateConfiguration = new AccelerateConfiguration { Status = BucketAccelerateStatus.Enabled }
                };
                await s3Client.PutBucketAccelerateConfigurationAsync(putRequest);
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error occurred. Message:'{0}'", e.Message);
            }
        }
    }
}
When you upload an object to a bucket that has Transfer Acceleration enabled, you specify whether to use the acceleration endpoint when you create the client.
Javascript
For an example of enabling Transfer Acceleration by using the AWS SDK for JavaScript, see Calling
the putBucketAccelerateConfiguration operation in the AWS SDK for JavaScript API Reference.
Python (Boto)
For an example of enabling Transfer Acceleration by using the SDK for Python, see
put_bucket_accelerate_configuration in the AWS SDK for Python (Boto3) API Reference.
Other
For information about using other AWS SDKs, see Sample Code and Libraries.
Using the Amazon S3 Transfer Acceleration Speed Comparison tool
You can access the Speed Comparison tool using either of the following methods:
• Copy the following URL into your browser window, replacing region with the AWS Region that you
are using (for example, us-west-2) and yourBucketName with the name of the bucket that you
want to evaluate:
https://s3-accelerate-speedtest.s3-accelerate.amazonaws.com/en/accelerate-
speed-comparsion.html?region=region&origBucketName=yourBucketName
For a list of the Regions supported by Amazon S3, see Amazon S3 endpoints and quotas in the AWS
General Reference.
Using Requester Pays buckets for storage transfers and usage
With Requester Pays buckets, the requester instead of the bucket owner pays the cost of the request and the data download from the bucket. The bucket owner always pays the cost of storing data.
Typically, you configure buckets to be Requester Pays buckets when you want to share data but not
incur charges associated with others accessing the data. For example, you might use Requester Pays
buckets when making available large datasets, such as zip code directories, reference data, geospatial
information, or web crawling data.
Important
If you enable Requester Pays on a bucket, anonymous access to that bucket is not allowed.
You must authenticate all requests involving Requester Pays buckets. The request authentication enables
Amazon S3 to identify and charge the requester for their use of the Requester Pays bucket.
When the requester assumes an AWS Identity and Access Management (IAM) role before making their
request, the account to which the role belongs is charged for the request. For more information about
IAM roles, see IAM roles in the IAM User Guide.
After you configure a bucket to be a Requester Pays bucket, requesters must include x-amz-request-
payer in their requests either in the header, for POST, GET and HEAD requests, or as a parameter in
a REST request to show that they understand that they will be charged for the request and the data
download.
Requester Pays buckets do not support the following:
• Anonymous requests
• SOAP requests
• Using a Requester Pays bucket as the target bucket for end-user logging, or vice versa. However, you
can turn on end-user logging on a Requester Pays bucket where the target bucket is not a Requester
Pays bucket.
A request to a Requester Pays bucket fails in the following cases:
• The requester doesn't include the parameter x-amz-request-payer in the header (GET, HEAD, or
POST) or as a parameter (REST) in the request (HTTP code 403).
• Request authentication fails (HTTP code 403).
• The request is anonymous (HTTP code 403).
• The request is a SOAP request.
This section provides examples of how to configure Requester Pays on an Amazon S3 bucket using the
console and the REST API.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to enable Requester Pays for.
3. Choose Properties.
4. Under Requester pays, choose Edit.
5. Choose Enable, and choose Save changes.
Amazon S3 enables Requester Pays for your bucket and displays your Bucket overview. Under
Requester pays, you see Enabled.
To revert a Requester Pays bucket to a regular bucket, you use the value BucketOwner. Typically, you
would use BucketOwner when uploading data to the Amazon S3 bucket, and then you would set the
value to Requester before publishing the objects in the bucket.
To set requestPayment
• Use a PUT request to set the Payer value to Requester on a specified bucket.
<RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Payer>Requester</Payer>
</RequestPaymentConfiguration>
HTTP/1.1 200 OK
x-amz-id-2: [id]
x-amz-request-id: [request_id]
Date: Wed, 01 Mar 2009 12:00:00 GMT
Content-Length: 0
Connection: close
Server: AmazonS3
x-amz-request-charged:requester
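The same configuration can be applied with the AWS CLI put-bucket-request-payment command, where my-bucket is a placeholder for your bucket name:

aws s3api put-bucket-request-payment --bucket my-bucket --request-payment-configuration '{"Payer":"Requester"}'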
You can set Requester Pays only at the bucket level. You can't set Requester Pays for specific objects
within the bucket.
You can configure a bucket to be BucketOwner or Requester at any time. However, there might be a
few minutes before the new configuration value takes effect.
Note
Bucket owners who give out presigned URLs should consider carefully before configuring a
bucket to be Requester Pays, especially if the URL has a long lifetime. The bucket owner is
charged each time the requester uses a presigned URL that uses the bucket owner's credentials.
• Use a GET request to obtain the requestPayment resource, as shown in the following request.
HTTP/1.1 200 OK
x-amz-id-2: [id]
x-amz-request-id: [request_id]
Date: Wed, 01 Mar 2009 12:00:00 GMT
Content-Type: [type]
Content-Length: [length]
Connection: close
Server: AmazonS3
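With the AWS CLI, the equivalent call is get-bucket-request-payment, where my-bucket is a placeholder for your bucket name:

aws s3api get-bucket-request-payment --bucket my-bucket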
To download objects from a Requester Pays bucket, include the x-amz-request-payer parameter as follows:
• For GET, HEAD, and POST requests, include x-amz-request-payer : requester in the header
• For signed URLs, include x-amz-request-payer=requester in the request
If the request succeeds and the requester is charged, the response includes the header x-amz-request-
charged:requester. If x-amz-request-payer is not in the request, Amazon S3 returns a 403 error
and charges the bucket owner for the request.
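As an illustration, the AWS CLI adds the x-amz-request-payer header for you when you pass the --request-payer option; the bucket name, key, and output file shown here are placeholders:

aws s3api get-object --bucket DOC-EXAMPLE-BUCKET --key puppy.jpg --request-payer requester puppy.jpg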
Note
Bucket owners do not need to add x-amz-request-payer to their requests.
Ensure that you have included x-amz-request-payer and its value in your signature
calculation. For more information, see Constructing the CanonicalizedAmzHeaders
Element (p. 1059).
• Use a GET request to download an object from a Requester Pays bucket, as shown in the following
request.
If the GET request succeeds and the requester is charged, the response includes x-amz-request-
charged:requester.
Amazon S3 can return an Access Denied error for requests that try to get objects from a Requester
Pays bucket. For more information, see Error Responses in the Amazon Simple Storage Service API
Reference.
Bucket restrictions and limitations
When you create a bucket, you choose its name and the AWS Region to create it in. After you create a
bucket, you can't change its name or Region.
When naming a bucket, choose a name that is relevant to you or your business. Avoid using names
associated with others. For example, you should avoid using AWS or Amazon in your bucket name.
By default, you can create up to 100 buckets in each of your AWS accounts. If you need additional
buckets, you can increase your account bucket limit to a maximum of 1,000 buckets by submitting a
service limit increase. There is no difference in performance whether you use many buckets or just a few.
For information about how to increase your bucket limit, see AWS service quotas in the AWS General
Reference.
If a bucket is empty, you can delete it. After a bucket is deleted, the name becomes available for reuse.
However, after you delete the bucket, you might not be able to reuse the name for various reasons.
For example, when you delete the bucket and the name becomes available for reuse, another AWS
account might create a bucket with that name. In addition, some time might pass before you can reuse
the name of a deleted bucket. If you want to use the same bucket name, we recommend that you don't
delete the bucket.
For more information about bucket names, see Bucket naming rules (p. 27)
There is no limit to the number of objects that you can store in a bucket. You can store all of your objects
in a single bucket, or you can organize them across several buckets. However, you can't create a bucket
from within another bucket.
Bucket operations
The high availability engineering of Amazon S3 is focused on get, put, list, and delete operations. Because
bucket operations work against a centralized, global resource space, it is not appropriate to create or
delete buckets on the high availability code path of your application. It's better to create or delete
buckets in a separate initialization or setup routine that you run less often.
If your application automatically creates buckets, choose a bucket naming scheme that is unlikely to
cause naming conflicts. Ensure that your application logic will choose a different bucket name if a bucket
name is already taken.
For more information about bucket naming, see Bucket naming rules (p. 27).
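As a hedged illustration of such a scheme, the following AWS SDK for Python (Boto3) sketch appends a random suffix to a base name and retries when the name is already taken. The base name and Region are assumptions.

import uuid

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", region_name="us-west-2")

def create_unique_bucket(base_name, attempts=3):
    """Try a few randomized names until one is accepted."""
    for _ in range(attempts):
        name = f"{base_name}-{uuid.uuid4().hex[:8]}"
        try:
            s3.create_bucket(
                Bucket=name,
                CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
            )
            return name
        except ClientError as error:
            code = error.response["Error"]["Code"]
            if code not in ("BucketAlreadyExists", "BucketAlreadyOwnedByYou"):
                raise
    raise RuntimeError("Could not find an available bucket name")

bucket_name = create_unique_bucket("example-app-logs")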
To store an object in Amazon S3, you create a bucket and then upload the object to a bucket. When the
object is in the bucket, you can open it, download it, and move it. When you no longer need an object or
a bucket, you can clean up these resources.
With Amazon S3, you pay only for what you use. For more information about Amazon S3 features and
pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started with Amazon S3 for
free. For more information, see AWS Free Tier.
Topics
• Amazon S3 objects overview (p. 57)
• Creating object key names (p. 58)
• Working with object metadata (p. 61)
• Uploading objects (p. 66)
• Uploading and copying objects using multipart upload (p. 74)
• Copying objects (p. 108)
• Downloading an object (p. 115)
• Deleting Amazon S3 objects (p. 121)
• Organizing, listing, and working with your objects (p. 141)
• Using presigned URLs (p. 150)
• Transforming objects with S3 Object Lambda (p. 161)
Key
The name that you assign to an object. You use the object key to retrieve the object. For more
information, see Working with object metadata (p. 61).
Version ID
Within a bucket, a key and version ID uniquely identify an object. The version ID is a string that
Amazon S3 generates when you add an object to a bucket. For more information, see Using
versioning in S3 buckets (p. 519).
Value
An object value can be any sequence of bytes. Objects can range in size from zero to 5 TB. For more
information, see Uploading objects (p. 66).
Metadata
A set of name-value pairs with which you can store information regarding the object. You can assign
metadata, referred to as user-defined metadata, to your objects in Amazon S3. Amazon S3 also
assigns system-metadata to these objects, which it uses for managing objects. For more information,
see Working with object metadata (p. 61).
Subresources
Amazon S3 uses the subresource mechanism to store object-specific additional information. Because
subresources are subordinates to objects, they are always associated with some other entity such as
an object or a bucket. For more information, see Object subresources (p. 58).
Access control information
You can control access to the objects you store in Amazon S3. Amazon S3 supports both the
resource-based access control, such as an access control list (ACL) and bucket policies, and user-
based access control. For more information, see Identity and access management in Amazon
S3 (p. 274).
Your Amazon S3 resources (for example, buckets and objects) are private by default. You must
explicitly grant permission for others to access these resources. For more information about sharing
objects, see Sharing an object with a presigned URL (p. 151).
Object subresources
Amazon S3 defines a set of subresources associated with buckets and objects. Subresources are
subordinates to objects. This means that subresources don't exist on their own. They are always
associated with some other entity, such as an object or a bucket.
The following table lists the subresources associated with Amazon S3 objects.
Subresource: acl
Description: Contains a list of grants identifying the grantees and the permissions granted. When you create an object, the acl identifies the object owner as having full control over the object. You can retrieve an object ACL or replace it with an updated list of grants. Any update to an ACL requires you to replace the existing ACL. For more information about ACLs, see Access control list (ACL) overview (p. 460).
When you create an object, you specify the key name, which uniquely identifies the object in the bucket.
For example, on the Amazon S3 console, when you highlight a bucket, a list of objects in your bucket
appears. These names are the object keys. The name for a key is a sequence of Unicode characters whose
UTF-8 encoding is at most 1,024 bytes long.
The Amazon S3 data model is a flat structure: You create a bucket, and the bucket stores objects. There
is no hierarchy of subbuckets or subfolders. However, you can infer logical hierarchy using key name
prefixes and delimiters as the Amazon S3 console does. The Amazon S3 console supports a concept of
folders. For more information about how to edit metadata from the Amazon S3 console, see Editing
object metadata in the Amazon S3 console (p. 64).
Suppose that your bucket (admin-created) has four objects with the following object keys:
Development/Projects.xls
Finance/statement1.pdf
Private/taxdocument.pdf
s3-dg.pdf
The console uses the key name prefixes (Development/, Finance/, and Private/) and delimiter ('/')
to present a folder structure. The s3-dg.pdf key does not have a prefix, so its object appears directly at
the root level of the bucket. If you open the Development/ folder, you see the Projects.xls object
in it.
• Amazon S3 supports buckets and objects, and there is no hierarchy. However, by using prefixes and
delimiters in an object key name, the Amazon S3 console and the AWS SDKs can infer hierarchy and
introduce the concept of folders.
• The Amazon S3 console implements folder object creation by creating a zero-byte object with the
folder prefix and delimiter value as the key. These folder objects don't appear in the console. Otherwise
they behave like any other objects and can be viewed and manipulated through the REST API, AWS CLI,
and AWS SDKs.
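For example, a minimal AWS SDK for Python (Boto3) sketch that lists the example bucket above with a delimiter returns the folder-like prefixes separately from the root-level keys. The bucket name comes from the earlier example and is a placeholder.

import boto3

s3 = boto3.client("s3")

# With Delimiter="/", keys that share a prefix are rolled up into
# CommonPrefixes, which is how the console infers its folder view.
response = s3.list_objects_v2(Bucket="admin-created", Delimiter="/")

for prefix in response.get("CommonPrefixes", []):
    print("Folder:", prefix["Prefix"])   # Development/, Finance/, Private/
for obj in response.get("Contents", []):
    print("Object:", obj["Key"])         # s3-dg.pdf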
Safe characters
The following character sets are generally safe for use in key names: alphanumeric characters (0-9, a-z, A-Z) and the special characters forward slash (/), exclamation point (!), hyphen (-), underscore (_), period (.), asterisk (*), single quote ('), open parenthesis ((), and close parenthesis ()). The following are examples of valid object key names:
• 4my-organization
• my.great_photos-2014/jan/myvacation.jpg
• videos/2014/birthday/video1.wmv
Note
If you use the Amazon S3 console to download an object whose key name ends with one or more
periods ("."), the trailing periods are removed from the key name of the downloaded object. To
download an object with the trailing periods retained, you must use the AWS Command Line
Interface (AWS CLI), AWS SDKs, or REST API.
In addition, be aware of the following prefix limitations:
• Objects with a prefix of "./" must be uploaded or downloaded with the AWS Command Line
Interface (AWS CLI), AWS SDKs, or REST API. You cannot use the Amazon S3 console.
• Objects with a prefix of "../" cannot be uploaded using the AWS Command Line Interface (AWS
CLI) or Amazon S3 console.
Characters that might require special handling
The following characters in a key name might require additional code handling and likely need to be URL encoded or referenced as HEX:
• Ampersand ("&")
• Dollar ("$")
• ASCII character ranges 00–1F hex (0–31 decimal) and 7F (127 decimal)
• 'At' symbol ("@")
• Equals ("=")
• Semicolon (";")
• Colon (":")
• Plus ("+")
• Space – Significant sequences of spaces might be lost in some uses (especially multiple spaces)
• Comma (",")
• Question mark ("?")
Characters to avoid
Avoid the following characters in a key name because of significant special handling for consistency
across all applications.
• Backslash ("\")
• Left curly brace ("{")
• Non-printable ASCII characters (128–255 decimal characters)
• Caret ("^")
• Right curly brace ("}")
• Percent character ("%")
• Grave accent / back tick ("`")
• Right square bracket ("]")
• Quotation marks
• 'Greater Than' symbol (">")
• Left square bracket ("[")
• Tilde ("~")
• 'Less Than' symbol ("<")
• 'Pound' character ("#")
• Vertical bar / pipe ("|")
XML related object key constraints
When you use XML requests, you must replace the following special characters with their equivalent XML entity codes when they appear within XML tags:
• ' as &apos;
• " as &quot;
• & as &amp;
• < as &lt;
• > as &gt;
• \r as &#13; or &#x0D;
• \n as &#10; or &#x0A;
Example
The following example illustrates the use of an XML entity code as a substitution for a carriage return.
This DeleteObjects request deletes an object with the key parameter: /some/prefix/objectwith
\rcarriagereturn (where the \r is the carriage return).
<Delete xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Object>
<Key>/some/prefix/objectwith&#13;carriagereturn</Key>
</Object>
</Delete>
When you create an object, you specify the key name (also referred to as the object key), which uniquely
identifies the object in an Amazon S3 bucket. For more information, see Creating object key
names (p. 58).
There are two kinds of metadata in Amazon S3: system-defined metadata and user-defined metadata. The
sections below provide more information about system-defined and user-defined metadata. For more
information about editing metadata using the Amazon S3 console, see Editing object metadata in the
Amazon S3 console (p. 64).
1. Metadata such as object creation date is system controlled, where only Amazon S3 can modify the
value.
2. Other system metadata, such as the storage class configured for the object and whether the object
has server-side encryption enabled, are examples of system metadata whose values you control.
If your bucket is configured as a website, sometimes you might want to redirect a page request to
another page or an external URL. In this case, a webpage is an object in your bucket. Amazon S3 stores
the page redirect value as system metadata whose value you control.
When you create objects, you can configure values of these system metadata items or update the
values when you need to. For more information about storage classes, see Using Amazon S3 storage
classes (p. 567).
For more information about server-side encryption, see Protecting data using encryption (p. 219).
Note
The PUT request header is limited to 8 KB in size. Within the PUT request header, the system-
defined metadata is limited to 2 KB in size. The size of system-defined metadata is measured by
taking the sum of the number of bytes in the US-ASCII encoding of each key and value.
The following table provides a list of system-defined metadata and whether you can update it.
Name: x-amz-storage-class
Description: The storage class used for storing the object. For more information, see Using Amazon S3 storage classes (p. 567).
Can the user modify the value? Yes
When metadata is retrieved through the REST API, Amazon S3 combines headers that have the same
name (ignoring case) into a comma-delimited list. If some metadata contains unprintable characters,
it is not returned. Instead, the x-amz-missing-meta header is returned with a value of the number
of unprintable metadata entries. The HeadObject action retrieves metadata from an object without
returning the object itself. This operation is useful if you're only interested in an object's metadata.
To use HEAD, you must have READ access to the object. For more information, see HeadObject in the
Amazon Simple Storage Service API Reference.
User-defined metadata is a set of key-value pairs. Amazon S3 stores user-defined metadata keys in
lowercase.
To avoid issues around the presentation of these metadata values, you should conform to using US-ASCII
characters when using REST and UTF-8 when using SOAP or browser-based uploads via POST.
When using non US-ASCII characters in your metadata values, the provided Unicode string is examined
for non US-ASCII characters. If the string contains only US-ASCII characters, it is presented as is. If the
string contains non US-ASCII characters, it is first character-encoded using UTF-8 and then encoded into
US-ASCII.
Note
The PUT request header is limited to 8 KB in size. Within the PUT request header, the user-
defined metadata is limited to 2 KB in size. The size of user-defined metadata is measured by
taking the sum of the number of bytes in the UTF-8 encoding of each key and value.
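The following AWS SDK for Python (Boto3) sketch stores and then reads back user-defined metadata; the bucket name and object key are placeholders. Boto3 adds the x-amz-meta- prefix on the wire, so you pass only the bare key names.

import boto3

s3 = boto3.client("s3")

# Each Metadata entry is sent as an x-amz-meta-* header; keys come back lowercased.
s3.put_object(
    Bucket="amzn-s3-demo-bucket",
    Key="example-object.txt",
    Body=b"Hello, world!",
    Metadata={"title": "someTitle", "author": "jane-doe"},
)

head = s3.head_object(Bucket="amzn-s3-demo-bucket", Key="example-object.txt")
print(head["Metadata"])   # {'title': 'someTitle', 'author': 'jane-doe'}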
You can change the metadata of an object after it has been uploaded by creating a copy of the object
with the modified metadata and either replacing the old object or creating a new version. For more
information, see Editing object metadata in the Amazon S3 console (p. 64).
You can also set some metadata when you upload the object and later edit it as your needs change. For
example, you might have a set of objects that you initially store in the STANDARD storage class. Over
time, you might no longer need this data to be highly available. So you change the storage class to
GLACIER by editing the value of the x-amz-storage-class key from STANDARD to GLACIER.
Note
Consider the following issues when you are editing object metadata in Amazon S3:
• This action creates a copy of the object with updated settings and the last-modified date. If S3
Versioning is enabled, a new version of the object is created, and the existing object becomes
an older version. If S3 Versioning is not enabled, a new copy of the object replaces the original
object. The IAM role that changes the property also becomes the owner of the new object (or
object version).
• Editing metadata updates values for existing key names.
• Objects that are encrypted with customer-provided encryption keys (SSE-C) cannot be copied
using the console. You must use the AWS CLI, AWS SDK, or the Amazon S3 REST API.
Warning
When editing metadata of folders, wait for the Edit metadata operation to finish before
adding new objects to the folder. Otherwise, new objects might also be edited.
The following topics describe how to edit metadata of an object using the Amazon S3 console.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Navigate to your Amazon S3 bucket or folder, and select the check box to the left of the names of
the objects with metadata you want to edit.
3. On the Actions menu, choose Edit actions, and choose Edit metadata.
4. Review the objects listed, and choose Add metadata.
5. For metadata Type, select System-defined.
6. Specify a unique Key and the metadata Value.
7. To edit additional metadata, choose Add metadata. You can also choose Remove to remove a set of
type-key-values.
8. When you are done, choose Edit metadata and Amazon S3 edits the metadata of the specified
objects.
User-defined metadata can be as large as 2 KB total. To calculate the total size of user-defined metadata,
sum the number of bytes in the UTF-8 encoding for each key and value. Both keys and their values must
conform to US-ASCII standards. For more information, see User-defined object metadata (p. 63).
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that contains the objects that you want to add
metadata to.
Uploading objects
When you upload a file to Amazon S3, it is stored as an S3 object. Objects consist of the file data and
metadata that describes the object. You can have an unlimited number of objects in a bucket. Before
you can upload files to an Amazon S3 bucket, you need write permissions for the bucket. For more
information about access permissions, see Identity and access management in Amazon S3 (p. 274).
You can upload any file type—images, backups, data, movies, etc.—into an S3 bucket. The maximum size
of a file that you can upload by using the Amazon S3 console is 160 GB. To upload a file larger than 160
GB, use the AWS CLI, AWS SDK, or Amazon S3 REST API.
If you upload an object with a key name that already exists in a versioning-enabled bucket, Amazon
S3 creates another version of the object instead of replacing the existing object. For more information
about versioning, see Using the S3 console (p. 524).
Depending on the size of the data you are uploading, Amazon S3 offers the following options:
• Upload an object in a single operation using the AWS SDKs, REST API, or AWS CLI—With a single
PUT operation, you can upload a single object up to 5 GB in size.
• Upload a single object using the Amazon S3 Console—With the Amazon S3 Console, you can upload
a single object up to 160 GB in size.
• Upload an object in parts using the AWS SDKs, REST API, or AWS CLI—Using the multipart upload
API, you can upload a single large object, up to 5 TB in size.
The multipart upload API is designed to improve the upload experience for larger objects. You can
upload an object in parts. These object parts can be uploaded independently, in any order, and in
parallel. You can use a multipart upload for objects from 5 MB to 5 TB in size. For more information,
see Uploading and copying objects using multipart upload (p. 74).
When uploading an object, you can optionally request that Amazon S3 encrypt it before saving
it to disk, and decrypt it when you download it. For more information, see Protecting data using
encryption (p. 219).
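The following AWS SDK for Python (Boto3) sketch shows both approaches: a single PUT for small data and a managed upload_file call, which switches to multipart upload automatically for larger files. The bucket name, keys, file path, and the optional SSE-S3 setting are assumptions.

import boto3

s3 = boto3.client("s3")

# Single PUT: suitable for objects up to 5 GB.
s3.put_object(
    Bucket="amzn-s3-demo-bucket",
    Key="notes/hello.txt",
    Body=b"Hello, Amazon S3!",
    ServerSideEncryption="AES256",   # optional encryption at rest (SSE-S3)
)

# Managed transfer: uses multipart upload automatically when the file
# exceeds the multipart threshold.
s3.upload_file(
    Filename="/tmp/large-backup.bin",
    Bucket="amzn-s3-demo-bucket",
    Key="backups/large-backup.bin",
    ExtraArgs={"ServerSideEncryption": "AES256"},
)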
When you upload an object, the object key name is the file name and any optional prefixes. In the
Amazon S3 console, you can create folders to organize your objects. In Amazon S3, folders are
represented as prefixes that appear in the object key name. If you upload an individual object to a folder
in the Amazon S3 console, the folder name is included in the object key name.
For example, if you upload an object named sample1.jpg to a folder named backup, the key name is
backup/sample1.jpg. However, the object is displayed in the console as sample1.jpg in the backup
folder. For more information about key names, see Working with object metadata (p. 61).
Note
If you rename an object or change any of the properties in the S3 console, for example Storage
Class, Encryption, Metadata, a new object is created to replace the old one. If S3 Versioning
is enabled, a new version of the object is created, and the existing object becomes an older
version. The role that changes the property also becomes the owner of the new object (or object
version).
When you upload a folder, Amazon S3 uploads all of the files and subfolders from the specified folder
to your bucket. It then assigns an object key name that is a combination of the uploaded file name
and the folder name. For example, if you upload a folder named /images that contains two files,
sample1.jpg and sample2.jpg, Amazon S3 uploads the files and then assigns the corresponding key
names, images/sample1.jpg and images/sample2.jpg. The key names include the folder name
as a prefix. The Amazon S3 console displays only the part of the key name that follows the last “/”. For
example, within an images folder the images/sample1.jpg and images/sample2.jpg objects are
displayed as sample1.jpg and sample2.jpg.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to upload your folders or files to.
3. Choose Upload.
4. In the Upload window, do one of the following:
Amazon S3 uploads your objects and folders. When the upload completes, you can see a success
message on the Upload: status page.
7. To configure additional object properties before uploading, see To configure additional object
properties (p. 67).
For more information about storage classes, see Using Amazon S3 storage classes (p. 567).
3. To update the encryption settings for your objects, under Server-side encryption settings, do the
following.
For more information, see Protecting data using server-side encryption with Amazon S3-
managed encryption keys (SSE-S3) (p. 237).
c. To encrypt the uploaded files using the AWS Key Management Service (AWS KMS), choose AWS
Key Management Service key (SSE-KMS). Then choose an option for AWS KMS key.
For more information about creating a customer managed key, see Creating Keys in the AWS
Key Management Service Developer Guide. For more information about protecting data with
AWS KMS, see Protecting Data Using Server-Side Encryption with KMS keys Stored in AWS
Key Management Service (SSE-KMS) (p. 220).
• Enter KMS root key ARN - Specify the AWS KMS key ARN for a customer managed key, and
enter the Amazon Resource Name (ARN).
You can use the KMS root key ARN to give an external account the ability to use an object
that is protected by an AWS KMS key. To do this, choose Enter KMS root key ARN, and enter
the Amazon Resource Name (ARN) for the external account. Administrators of an external
account that have usage permissions to an object protected by your KMS key can further
restrict access by creating a resource-level IAM policy.
Note
To encrypt objects in a bucket, you can use only AWS KMS keys that are available in the
same AWS Region as the bucket.
4. To change access control list permissions, under Access control list (ACL), edit permissions.
For information about object access permissions, see Using the S3 console to set ACL permissions for
an object (p. 470). You can grant read access to your objects to the general public (everyone in the
world) for all of the files that you're uploading. We recommend that you do not change the default
setting for public read access. Granting public read access is applicable to a small subset of use cases
such as when buckets are used for websites. You can always make changes to object permissions
after you upload the object.
5. To add tags to all of the objects that you are uploading, choose Add tag. Type a tag name in the Key
field. Type a value for the tag.
Object tagging gives you a way to categorize storage. Each tag is a key-value pair. Key and tag
values are case sensitive. You can have up to 10 tags per object. A tag key can be up to 128 Unicode
characters in length and tag values can be up to 255 Unicode characters in length. For more
information about object tags, see Categorizing your storage using tags (p. 685).
6. To add metadata, choose Add metadata.
For system-defined metadata, you can select common HTTP headers, such as Content-Type
and Content-Disposition. For a list of system-defined metadata and information about whether
you can add the value, see System-defined object metadata (p. 62). Any metadata starting
with prefix x-amz-meta- is treated as user-defined metadata. User-defined metadata is stored
with the object and is returned when you download the object. Both the keys and their values
must conform to US-ASCII standards. User-defined metadata can be as large as 2 KB. For
more information about system defined and user defined metadata, see Working with object
metadata (p. 61).
b. For Key, choose a key.
c. Type a value for the key.
7. To upload your objects, choose Upload.
Amazon S3 uploads your object. When the upload completes, you can see a success message on the
Upload: status page.
8. Choose Exit.
.NET
The following C# code example creates two objects with two PutObjectRequest requests:
• The first PutObjectRequest request saves a text string as sample object data. It also specifies
the bucket and object key names.
• The second PutObjectRequest request uploads a file by specifying the file name. This request
also specifies the ContentType header and optional object metadata (a title).
For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 1039).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
    class UploadObjectTest
    {
        private const string bucketName = "*** bucket name ***";
        // For simplicity the example creates two objects from the same file.
        // You specify key names for these objects.
        private const string keyName1 = "*** key name for first object created ***";
        private const string keyName2 = "*** key name for second object created ***";
        private const string filePath = @"*** file path ***";
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.EUWest1;
        private static IAmazonS3 client;

        public static void Main()
        {
            client = new AmazonS3Client(bucketRegion);
            WritingAnObjectAsync().Wait();
        }

        static async Task WritingAnObjectAsync()
        {
            try
            {
                // 1. Put object: specify only the bucket and key names.
                //    The object data is provided as a text string.
                var putRequest1 = new PutObjectRequest
                {
                    BucketName = bucketName,
                    Key = keyName1,
                    ContentBody = "sample text"
                };
                PutObjectResponse response1 = await client.PutObjectAsync(putRequest1);

                // 2. Put object: upload a file, set the ContentType,
                //    and add optional object metadata (a title).
                var putRequest2 = new PutObjectRequest
                {
                    BucketName = bucketName,
                    Key = keyName2,
                    FilePath = filePath,
                    ContentType = "text/plain"
                };
                putRequest2.Metadata.Add("x-amz-meta-title", "someTitle");
                PutObjectResponse response2 = await client.PutObjectAsync(putRequest2);
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine(
                    "Error encountered ***. Message:'{0}' when writing an object", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine(
                    "Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
            }
        }
    }
}
Java
The following example creates two objects. The first object has a text string as data, and the second
object is a file. The example creates the first object by specifying the bucket name, object key, and
text data directly in a call to AmazonS3Client.putObject(). The example creates the second
object by using a PutObjectRequest that specifies the bucket name, object key, and file path. The
PutObjectRequest also specifies the ContentType header and title metadata.
For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 1038).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import java.io.File;
import java.io.IOException;
try {
//This code expects that you have AWS credentials set up per:
// https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-
credentials.html
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.build();
JavaScript
The following example uploads an existing file to an Amazon S3 bucket in a specific Region.
const file = "OBJECT_PATH_AND_NAME"; // Path to and name of object. For example '../
myFiles/index.js'.
const fileStream = fs.createReadStream(file);
PHP
This topic guides you through using classes from the AWS SDK for PHP to upload an object of
up to 5 GB in size. For larger files, you must use multipart upload API. For more information, see
Uploading and copying objects using multipart upload (p. 74).
This topic assumes that you are already following the instructions for Using the AWS SDK for PHP
and Running PHP Examples (p. 1039) and have the AWS SDK for PHP properly installed.
The following PHP example creates an object in a specified bucket by uploading data using the
putObject() method. For information about running the PHP examples in this guide, see Running
PHP Examples (p. 1040).
require 'vendor/autoload.php';
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;
try {
// Upload data.
$result = $s3->putObject([
'Bucket' => $bucket,
'Key' => $keyname,
'Body' => 'Hello, world!',
'ACL' => 'public-read'
]);
Ruby
The AWS SDK for Ruby - Version 3 has two ways of uploading an object to Amazon S3. The first
uses a managed file uploader, which makes it easy to upload files of any size from disk. To use the
managed file uploader method:
Example
require 'aws-sdk-s3'
# './my-file.txt'
# )
def object_uploaded?(s3_resource, bucket_name, object_key, file_path)
object = s3_resource.bucket(bucket_name).object(object_key)
object.upload_file(file_path)
return true
rescue StandardError => e
puts "Error uploading object: #{e.message}"
return false
end
The second way that AWS SDK for Ruby - Version 3 can upload an object uses the #put method of
Aws::S3::Object. This is useful if the object is a string or an I/O object that is not a file on disk. To
use this method:
Example
require 'aws-sdk-s3'
• If you're uploading large objects over a stable high-bandwidth network, use multipart upload to
maximize the use of your available bandwidth by uploading object parts in parallel for multi-threaded
performance.
• If you're uploading over a spotty network, use multipart upload to increase resiliency to network errors
by avoiding upload restarts. When using multipart upload, you need to retry uploading only parts that
are interrupted during the upload. You don't need to restart uploading your object from the beginning.
You can list all of your in-progress multipart uploads or get a list of the parts that you have uploaded for
a specific multipart upload. Each of these operations is explained in this section.
When you send a request to initiate a multipart upload, Amazon S3 returns a response with an upload
ID, which is a unique identifier for your multipart upload. You must include this upload ID whenever you
upload parts, list the parts, complete an upload, or stop an upload. If you want to provide any metadata
describing the object being uploaded, you must provide it in the request to initiate multipart upload.
Parts upload
When uploading a part, in addition to the upload ID, you must specify a part number. You can choose
any part number between 1 and 10,000. A part number uniquely identifies a part and its position in the
object you are uploading. The part number that you choose doesn’t need to be in a consecutive sequence
(for example, it can be 1, 5, and 14). If you upload a new part using the same part number as a previously
uploaded part, the previously uploaded part is overwritten.
Whenever you upload a part, Amazon S3 returns an ETag header in its response. For each part upload,
you must record the part number and the ETag value. You must include these values in the subsequent
request to complete the multipart upload.
Note
After you initiate a multipart upload and upload one or more parts, you must either complete
or stop the multipart upload in order to stop getting charged for storage of the uploaded parts.
Only after you either complete or stop a multipart upload will Amazon S3 free up the parts
storage and stop charging you for the parts storage.
When you complete a multipart upload, Amazon S3 creates an object by concatenating the parts in
ascending order based on the part number. If any object metadata was provided in the initiate multipart
upload request, Amazon S3 associates that metadata with the object. After a successful complete
request, the parts no longer exist.
Your complete multipart upload request must include the upload ID and a list of both part numbers
and corresponding ETag values. The Amazon S3 response includes an ETag that uniquely identifies the
combined object data. This ETag is not necessarily an MD5 hash of the object data.
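To make the flow concrete, the following AWS SDK for Python (Boto3) sketch initiates an upload, uploads two parts while recording each part number and ETag, and then completes the upload. The bucket name, key, and part data are placeholders; in practice every part except the last must be at least 5 MB.

import boto3

s3 = boto3.client("s3")
bucket, key = "amzn-s3-demo-bucket", "big-object.bin"

# 1. Initiate: Amazon S3 returns the upload ID used by every later call.
upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

# 2. Upload parts, recording each part number and the returned ETag.
parts = []
for part_number, data in enumerate(
        [b"a" * 5 * 1024 * 1024, b"final part"], start=1):
    result = s3.upload_part(
        Bucket=bucket, Key=key, UploadId=upload_id,
        PartNumber=part_number, Body=data,
    )
    parts.append({"PartNumber": part_number, "ETag": result["ETag"]})

# 3. Complete: pass back the recorded part numbers and ETags.
s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=upload_id,
    MultipartUpload={"Parts": parts},
)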
You can optionally stop the multipart upload. After stopping a multipart upload, you cannot upload any
part using that upload ID again. All storage from any part of the canceled multipart upload is then freed.
If any part uploads were in-progress, they can still succeed or fail even after you stop. To free all storage
consumed by all parts, you must stop a multipart upload only after all part uploads have completed.
You can list the parts of a specific multipart upload or all in-progress multipart uploads. The list parts
operation returns the parts information that you have uploaded for a specific multipart upload. For each
list parts request, Amazon S3 returns the parts information for the specified multipart upload, up to a
maximum of 1,000 parts. If there are more than 1,000 parts in the multipart upload, you must send a
series of list part requests to retrieve all the parts. Note that the returned list of parts doesn't include
parts that haven't completed uploading. Using the list multipart uploads operation, you can obtain a list
of multipart uploads in progress.
An in-progress multipart upload is an upload that you have initiated, but have not yet completed or
stopped. Each request returns at most 1,000 multipart uploads. If there are more than 1,000 multipart
uploads in progress, you need to send additional requests to retrieve the remaining multipart uploads.
Only use the returned listing for verification. You should not use the result of this listing when sending
a complete multipart upload request. Instead, maintain your own list of the part numbers you specified
when uploading parts and the corresponding ETag values that Amazon S3 returns.
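The following AWS SDK for Python (Boto3) sketch shows both listings; the bucket name is a placeholder, and as noted above the results should be used for verification only.

import boto3

s3 = boto3.client("s3")
bucket = "amzn-s3-demo-bucket"

# List in-progress multipart uploads (up to 1,000 per request).
for upload in s3.list_multipart_uploads(Bucket=bucket).get("Uploads", []):
    print(upload["Key"], upload["UploadId"], upload["Initiated"])

    # List the parts uploaded so far for this multipart upload.
    parts = s3.list_parts(
        Bucket=bucket, Key=upload["Key"], UploadId=upload["UploadId"]
    ).get("Parts", [])
    for part in parts:
        print("  part", part["PartNumber"], part["ETag"], part["Size"])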
Note
It is possible for some other request received between the time you initiated a multipart upload
and completed it to take precedence. For example, if another operation deletes a key after you
initiate a multipart upload with that key, but before you complete it, the complete multipart
upload response might indicate a successful object creation without you ever seeing the object.
Create Multipart Upload
You must be allowed to perform the s3:PutObject action on an object to create a multipart upload.
The bucket owner can allow other principals to perform the s3:PutObject action.

Initiate Multipart Upload
You must be allowed to perform the s3:PutObject action on an object to initiate a multipart upload.
The bucket owner can allow other principals to perform the s3:PutObject action.

Initiator
Container element that identifies who initiated the multipart upload. If the initiator is an AWS account, this element provides the same information as the Owner element. If the initiator is an IAM user, this element provides the user ARN and display name.

Upload Part
You must be allowed to perform the s3:PutObject action on an object to upload a part.
The bucket owner must allow the initiator to perform the s3:PutObject action on an object in order for the initiator to upload a part for that object.

Upload Part (Copy)
You must be allowed to perform the s3:PutObject action on an object to upload a part. Because you are uploading a part from an existing object, you must be allowed s3:GetObject on the source object.
For the initiator to upload a part for an object, the owner of the bucket must allow the initiator to perform the s3:PutObject action on the object.

Complete Multipart Upload
You must be allowed to perform the s3:PutObject action on an object to complete a multipart upload.
The bucket owner must allow the initiator to perform the s3:PutObject action on an object in order for the initiator to complete a multipart upload for that object.

Stop Multipart Upload
You must be allowed to perform the s3:AbortMultipartUpload action to stop a multipart upload.
By default, the bucket owner and the initiator of the multipart upload are allowed to perform this action. If the initiator is an IAM user, that user's AWS account is also allowed to stop that multipart upload.
In addition to these defaults, the bucket owner can allow other principals to perform the s3:AbortMultipartUpload action on an object. The bucket owner can deny any principal the ability to perform the s3:AbortMultipartUpload action.

List Parts
By default, the bucket owner has permission to list parts for any multipart upload to the bucket. The initiator of the multipart upload has the permission to list parts of the specific multipart upload. If the multipart upload initiator is an IAM user, the AWS account controlling that IAM user also has permission to list parts of that upload.
In addition to these defaults, the bucket owner can allow other principals to perform the s3:ListMultipartUploadParts action on an object. The bucket owner can also deny any principal the ability to perform the s3:ListMultipartUploadParts action.

List Multipart Uploads
In addition to the default, the bucket owner can allow other principals to perform the s3:ListBucketMultipartUploads action on the bucket.

AWS KMS Encrypt and Decrypt related permissions
To perform a multipart upload with encryption using an AWS Key Management Service (AWS KMS) KMS key, the requester must have permission to the kms:Decrypt and kms:GenerateDataKey* actions on the key. These permissions are required because Amazon S3 must decrypt and read data from the encrypted file parts before it completes the multipart upload.
For more information, see Uploading a large file to Amazon S3 with encryption using an AWS KMS key in the AWS Knowledge Center.
If your IAM user or role is in the same AWS account as the KMS key, then you must have these permissions on the key policy. If your IAM user or role belongs to a different account than the KMS key, then you must have the permissions on both the key policy and your IAM user or role.
For information on the relationship between ACL permissions and permissions in access policies, see
Mapping of ACL permissions and access policy permissions (p. 463). For information on IAM users, go to
Working with Users and Groups.
Topics
• Configuring a bucket lifecycle policy to abort incomplete multipart uploads (p. 79)
• Uploading an object using multipart upload (p. 80)
• Uploading a directory using the high-level .NET TransferUtility class (p. 93)
• Listing multipart uploads (p. 95)
• Tracking a multipart upload (p. 97)
• Aborting a multipart upload (p. 99)
• Copying an object using multipart upload (p. 103)
• Amazon S3 multipart upload limits (p. 107)
Amazon S3 supports a bucket lifecycle rule that you can use to direct Amazon S3 to stop multipart
uploads that don't complete within a specified number of days after being initiated. When a multipart
upload is not completed within the timeframe, it becomes eligible for an abort operation and Amazon S3
stops the multipart upload (and deletes the parts associated with the multipart upload).
The following is an example lifecycle configuration that specifies a rule with the
AbortIncompleteMultipartUpload action.
<LifecycleConfiguration>
<Rule>
<ID>sample-rule</ID>
<Prefix></Prefix>
<Status>Enabled</Status>
<AbortIncompleteMultipartUpload>
<DaysAfterInitiation>7</DaysAfterInitiation>
</AbortIncompleteMultipartUpload>
</Rule>
</LifecycleConfiguration>
In the example, the rule does not specify a value for the Prefix element (object key name prefix).
Therefore, it applies to all objects in the bucket for which you initiated multipart uploads. Any multipart
uploads that were initiated and did not complete within seven days become eligible for an abort
operation. The abort action has no effect on completed multipart uploads.
For more information about the bucket lifecycle configuration, see Managing your storage
lifecycle (p. 578).
Note
If the multipart upload is completed within the number of days specified in the rule, the
AbortIncompleteMultipartUpload lifecycle action does not apply (that is, Amazon S3 does
not take any action). Also, this action does not apply to objects. No objects are deleted by this
lifecycle action.
1. Set up the AWS CLI. For instructions, see Developing with Amazon S3 using the AWS CLI (p. 1029).
2. Save the following example lifecycle configuration in a file (lifecycle.json). The example
configuration specifies empty prefix and therefore it applies to all objects in the bucket. You can
specify a prefix to restrict the policy to a subset of objects.
{
"Rules": [
{
"ID": "Test Rule",
"Status": "Enabled",
"Filter": {
"Prefix": ""
},
"AbortIncompleteMultipartUpload": {
"DaysAfterInitiation": 7
}
}
]
}
3. Run the following CLI command to set lifecycle configuration on your bucket.
4. To verify, retrieve the lifecycle configuration using the get-bucket-lifecycle CLI command.
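If you prefer an AWS SDK over the CLI, the following AWS SDK for Python (Boto3) sketch applies the same rule shown in lifecycle.json and reads it back for verification. The bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")
bucket = "amzn-s3-demo-bucket"

# Apply the same rule defined in lifecycle.json.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "Test Rule",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)

# Verify by reading the configuration back.
print(s3.get_bucket_lifecycle_configuration(Bucket=bucket)["Rules"])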
You can upload data from a file or a stream. You can also set advanced options, such as the part size
you want to use for the multipart upload, or the number of concurrent threads you want to use when
uploading the parts. You can also set optional object properties, the storage class, or the access control
list (ACL). You use the PutObjectRequest and the TransferManagerConfiguration classes to set
these advanced options.
When possible, TransferManager tries to use multiple threads to upload multiple parts of a single
upload at once. When dealing with large content sizes and high bandwidth, this can increase throughput
significantly.
In addition to file-upload functionality, the TransferManager class enables you to stop an in-progress
multipart upload. An upload is considered to be in progress after you initiate it and until you complete or
stop it. The TransferManager stops all in-progress multipart uploads on a specified bucket that were
initiated before a specified date and time.
If you need to pause and resume multipart uploads, vary part sizes during the upload, or do not know
the size of the data in advance, use the low-level API. For more information about multipart
uploads, including additional functionality offered by the low-level API methods, see Using the AWS
SDKs (low-level API) (p. 87).
Java
The following example uploads an object using the high-level multipart upload Java API (the
TransferManager class). For instructions on creating and testing a working sample, see Testing the
Amazon S3 Java Code Examples (p. 1038).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;
import java.io.File;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();
TransferManager tm = TransferManagerBuilder.standard()
.withS3Client(s3Client)
.build();
.NET
To upload a file to an S3 bucket, use the TransferUtility class. When uploading data from a file,
you must provide the object's key name. If you don't, the API uses the file name for the key name.
When uploading data from a stream, you must provide the object's key name.
To set advanced upload options—such as the part size, the number of threads when
uploading the parts concurrently, metadata, the storage class, or ACL—use the
TransferUtilityUploadRequest class.
The following C# example uploads a file to an Amazon S3 bucket in multiple parts. It shows how to
use various TransferUtility.Upload overloads to upload a file. Each successive call to upload
replaces the previous upload. For information about the example's compatibility with a specific
version of the AWS SDK for .NET and instructions for creating and testing a working sample, see
Running the Amazon S3 .NET Code Examples (p. 1039).
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class UploadFileMPUHighLevelAPITest
{
private const string bucketName = "*** provide bucket name ***";
private const string keyName = "*** provide a name for the uploaded object
***";
private const string filePath = "*** provide the full path name of the file to
upload ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
// Option 1. Upload a file. The file name is used as the object key
name.
await fileTransferUtility.UploadAsync(filePath, bucketName);
Console.WriteLine("Upload 1 completed");
BucketName = bucketName,
FilePath = filePath,
StorageClass = S3StorageClass.StandardInfrequentAccess,
PartSize = 6291456, // 6 MB.
Key = keyName,
CannedACL = S3CannedACL.PublicRead
};
fileTransferUtilityRequest.Metadata.Add("param1", "Value1");
fileTransferUtilityRequest.Metadata.Add("param2", "Value2");
await fileTransferUtility.UploadAsync(fileTransferUtilityRequest);
Console.WriteLine("Upload 4 completed");
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
}
}
PHP
The following PHP example uploads a file to an Amazon S3 bucket. The example demonstrates how
to set parameters for the MultipartUploader object.
For information about running the PHP examples in this guide, see Running PHP
Examples (p. 1040).
require 'vendor/autoload.php';
use Aws\Common\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;
Python
The following example uploads an object using the high-level multipart upload Python API (Boto3
managed transfers configured with the TransferConfig class).
"""
Use Boto 3 managed file transfers to manage multipart uploads to and downloads
from an Amazon S3 bucket.
When the file to transfer is larger than the specified threshold, the transfer
manager automatically uses multipart uploads or downloads. This demonstration
shows how to use several of the available transfer manager settings and reports
thread usage and time to transfer.
"""
import sys
import threading
import boto3
from boto3.s3.transfer import TransferConfig
MB = 1024 * 1024
s3 = boto3.resource('s3')
class TransferCallback:
"""
Handle callbacks from the transfer manager.
target = self._target_size * MB
sys.stdout.write(
f"\r{self._total_transferred} of {target} transferred "
The multipart chunk size controls the size of the chunks of data that are
sent in the request. A smaller chunk size typically results in the transfer
manager using more threads for the upload.
The metadata is a set of key-value pairs that are stored with the object
in Amazon S3.
"""
transfer_callback = TransferCallback(file_size_mb)
Setting a multipart threshold larger than the size of the file results
in the transfer manager sending the file as a standard upload instead of
a multipart upload.
"""
transfer_callback = TransferCallback(file_size_mb)
config = TransferConfig(multipart_threshold=file_size_mb * 2 * MB)
s3.Bucket(bucket_name).upload_file(
local_file_path,
object_key,
Config=config,
Callback=transfer_callback)
return transfer_callback.thread_info
"""
Upload a file from a local folder to an Amazon S3 bucket, adding server-side
encryption with customer-provided encryption keys to the object.
Setting a multipart threshold larger than the size of the file results
in the transfer manager sending the file as a standard download instead
of a multipart download.
"""
transfer_callback = TransferCallback(file_size_mb)
config = TransferConfig(multipart_threshold=file_size_mb * 2 * MB)
s3.Bucket(bucket_name).Object(object_key).download_file(
download_file_path,
Config=config,
Callback=transfer_callback)
return transfer_callback.thread_info
if sse_key:
extra_args = {
'SSECustomerAlgorithm': 'AES256',
'SSECustomerKey': sse_key}
else:
extra_args = None
s3.Bucket(bucket_name).Object(object_key).download_file(
download_file_path,
ExtraArgs=extra_args,
Callback=transfer_callback)
return transfer_callback.thread_info
Java
The following example shows how to use the low-level Java classes to upload a file. It performs the
following steps:
Example
For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 1038).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();
// Create a list of ETag objects. You retrieve ETags for each object part
uploaded,
// then, after each individual part has been uploaded, pass the list of
ETags to
// the request to complete the upload.
List<PartETag> partETags = new ArrayList<PartETag>();
// Upload the part and add the response's ETag to our list.
UploadPartResult uploadResult = s3Client.uploadPart(uploadRequest);
partETags.add(uploadResult.getPartETag());
filePosition += partSize;
.NET
The following C# example shows how to use the low-level AWS SDK for .NET multipart upload API
to upload a file to an S3 bucket. For information about Amazon S3 multipart uploads, see Uploading
and copying objects using multipart upload (p. 74).
Note
When you use the AWS SDK for .NET API to upload large objects, a timeout might occur
while data is being written to the request stream. You can set an explicit timeout using the
UploadPartRequest.
The following C# example uploads a file to an S3 bucket using the low-level multipart upload API.
For information about the example's compatibility with a specific version of the AWS SDK for .NET
and instructions for creating and testing a working sample, see Running the Amazon S3 .NET Code
Examples (p. 1039).
using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class UploadFileMPULowLevelAPITest
{
private const string bucketName = "*** provide bucket name ***";
private const string keyName = "*** provide a name for the uploaded object
***";
private const string filePath = "*** provide the full path name of the file to
upload ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
// Upload parts.
long contentLength = new FileInfo(filePath).Length;
long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB
try
{
Console.WriteLine("Uploading parts");
long filePosition = 0;
for (int i = 1; filePosition < contentLength; i++)
{
UploadPartRequest uploadRequest = new UploadPartRequest
{
BucketName = bucketName,
Key = keyName,
UploadId = initResponse.UploadId,
PartNumber = i,
PartSize = partSize,
FilePosition = filePosition,
FilePath = filePath
};
filePosition += partSize;
}
{
Console.WriteLine("An AmazonS3Exception was thrown: { 0}",
exception.Message);
PHP
This topic shows how to use the low-level uploadPart method from version 3 of the AWS SDK for
PHP to upload a file in multiple parts. It assumes that you are already following the instructions for
Using the AWS SDK for PHP and Running PHP Examples (p. 1039) and have the AWS SDK for PHP
properly installed.
The following PHP example uploads a file to an Amazon S3 bucket using the low-level PHP API
multipart upload. For information about running the PHP examples in this guide, see Running PHP
Examples (p. 1040).
require 'vendor/autoload.php';
use Aws\S3\S3Client;
$result = $s3->createMultipartUpload([
'Bucket' => $bucket,
'Key' => $keyname,
'StorageClass' => 'REDUCED_REDUNDANCY',
'Metadata' => [
'param1' => 'value 1',
'param2' => 'value 2',
'param3' => 'value 3'
]
]);
$uploadId = $result['UploadId'];
while (!feof($file)) {
$result = $s3->uploadPart([
'Bucket' => $bucket,
'Key' => $keyname,
'UploadId' => $uploadId,
'PartNumber' => $partNumber,
'Body' => fread($file, 5 * 1024 * 1024),
]);
$parts['Parts'][$partNumber] = [
'PartNumber' => $partNumber,
'ETag' => $result['ETag'],
];
$partNumber++;
Alternatively, you can use the following multipart upload client operations directly:
For more information, see Using the AWS SDK for Ruby - Version 3 (p. 1040).
You can also use the REST API to make your own REST requests, or you can use one of the AWS SDKs. For
more information about the REST API, see Using the REST API (p. 93). For more information about the
SDKs, see Uploading an object using multipart upload (p. 80).
To select files in the specified directory based on filtering criteria, specify filtering expressions. For
example, to upload only the .pdf files from a directory, specify the "*.pdf" filter expression.
When uploading files from a directory, you don't specify the key names for the resulting objects. Amazon
S3 constructs the key names using the original file path. For example, assume that you have a directory
called c:\myfolder with the following structure:
Example
C:\myfolder
\a.txt
\b.pdf
\media\
An.mp3
When you upload this directory, Amazon S3 uses the following key names:
Example
a.txt
b.pdf
media/An.mp3
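The AWS SDK for Python (Boto3) has no built-in directory upload helper, but you can reproduce the same key naming with a short walk over the directory tree. The following is a minimal sketch; the directory path and bucket name are placeholders.

import os

import boto3

s3 = boto3.client("s3")

def upload_directory(directory, bucket):
    """Upload every file under directory, using the relative path as the key."""
    for root, _dirs, files in os.walk(directory):
        for name in files:
            full_path = os.path.join(root, name)
            # Build a key such as "media/An.mp3" from the relative path.
            key = os.path.relpath(full_path, directory).replace(os.sep, "/")
            s3.upload_file(full_path, bucket, key)

upload_directory("/home/user/myfolder", "amzn-s3-demo-bucket")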
Example
The following C# example uploads a directory to an Amazon S3 bucket. It shows how to use various
TransferUtility.UploadDirectory overloads to upload the directory. Each successive call to
upload replaces the previous upload. For instructions on how to create and test a working sample, see
Running the Amazon S3 .NET Code Examples (p. 1039).
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class UploadDirMPUHighLevelAPITest
{
private const string existingBucketName = "*** bucket name ***";
private const string directoryPath = @"*** directory path ***";
// The example uploads only .txt files.
private const string wildCard = "*.txt";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
static void Main()
{
s3Client = new AmazonS3Client(bucketRegion);
UploadDirAsync().Wait();
}
// 1. Upload a directory.
await directoryTransferUtility.UploadDirectoryAsync(directoryPath,
existingBucketName);
Console.WriteLine("Upload statement 1 completed");
Directory = directoryPath,
SearchOption = SearchOption.AllDirectories,
SearchPattern = wildCard
};
await directoryTransferUtility.UploadDirectoryAsync(request);
Console.WriteLine("Upload statement 3 completed");
}
catch (AmazonS3Exception e)
{
Console.WriteLine(
"Error encountered ***. Message:'{0}' when writing an object",
e.Message);
}
catch (Exception e)
{
Console.WriteLine(
"Unknown encountered on server. Message:'{0}' when writing an object",
e.Message);
}
}
}
}
The following tasks guide you through using the low-level Java classes to list all in-progress
multipart uploads on a bucket.
Example
ListMultipartUploadsRequest allMultipartUploadsRequest =
new ListMultipartUploadsRequest(existingBucketName);
MultipartUploadListing multipartUploadListing =
s3Client.listMultipartUploads(allMultipartUploadsRequest);
.NET
To list all of the in-progress multipart uploads on a specific bucket, use the AWS SDK
for .NET low-level multipart upload API's ListMultipartUploadsRequest class.
An in-progress multipart upload is a multipart upload that has been initiated using the initiate
multipart upload request, but has not yet been completed or stopped. For more information about
Amazon S3 multipart uploads, see Uploading and copying objects using multipart upload (p. 74).
The following C# example shows how to use the AWS SDK for .NET to list all in-progress multipart
uploads on a bucket. For information about the example's compatibility with a specific version of
the AWS SDK for .NET and instructions on how to create and test a working sample, see Running the
Amazon S3 .NET Code Examples (p. 1039).
PHP
This topic shows how to use the low-level API classes from version 3 of the AWS SDK for PHP to
list all in-progress multipart uploads on a bucket. It assumes that you are already following the
instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 1039) and have the AWS
SDK for PHP properly installed.
The following PHP example demonstrates listing all in-progress multipart uploads on a bucket.
require 'vendor/autoload.php';
use Aws\S3\S3Client;
Java
Example
The following Java code uploads a file and uses the ProgressListener to track the upload
progress. For instructions on how to create and test a working sample, see Testing the Amazon S3
Java Code Examples (p. 1038).
import java.io.File;
import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.event.ProgressEvent;
import com.amazonaws.event.ProgressListener;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;
// You can ask the upload for its progress, or you can
// add a ProgressListener to your request to receive notifications
// when bytes are transferred.
request.setGeneralProgressListener(new ProgressListener() {
@Override
public void progressChanged(ProgressEvent progressEvent) {
System.out.println("Transferred bytes: " +
progressEvent.getBytesTransferred());
}
});
try {
// You can block and wait for the upload to finish
upload.waitForCompletion();
} catch (AmazonClientException amazonClientException) {
System.out.println("Unable to upload file, upload aborted.");
amazonClientException.printStackTrace();
}
}
}
.NET
The following C# example uploads a file to an S3 bucket using the TransferUtility class, and
tracks the progress of the upload. For information about the example's compatibility with a specific
version of the AWS SDK for .NET and instructions for creating and testing a working sample, see
Running the Amazon S3 .NET Code Examples (p. 1039).
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class TrackMPUUsingHighLevelAPITest
{
private const string bucketName = "*** provide the bucket name ***";
private const string keyName = "*** provide the name for the uploaded object
***";
private const string filePath = " *** provide the full path name of the file to
upload **";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
{
var fileTransferUtility = new TransferUtility(s3Client);
uploadRequest.UploadProgressEvent +=
new EventHandler<UploadProgressArgs>
(uploadRequest_UploadPartProgressEvent);
await fileTransferUtility.UploadAsync(uploadRequest);
Console.WriteLine("Upload completed");
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
You are billed for all storage associated with uploaded parts. For more information, see Multipart upload
and pricing (p. 76). So it's important that you either complete the multipart upload to have the object
created or stop the multipart upload to remove any uploaded parts.
You can stop an in-progress multipart upload in Amazon S3 using the AWS Command Line Interface
(AWS CLI), REST API, or AWS SDKs. You can also stop an incomplete multipart upload using a bucket
lifecycle policy.
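As a rough illustration of the lifecycle approach, the following hedged Java sketch configures a rule that stops multipart uploads that are still incomplete a chosen number of days after initiation. The rule ID, bucket name, and seven-day threshold are placeholders, not values taken from this guide.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.AbortIncompleteMultipartUpload;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.lifecycle.LifecycleFilter;
import com.amazonaws.services.s3.model.lifecycle.LifecyclePrefixPredicate;

AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

// Hypothetical rule: stop multipart uploads that are still incomplete 7 days after initiation.
BucketLifecycleConfiguration.Rule abortRule = new BucketLifecycleConfiguration.Rule()
        .withId("abort-incomplete-mpu")                                     // placeholder rule ID
        .withFilter(new LifecycleFilter(new LifecyclePrefixPredicate("")))  // apply to all objects
        .withAbortIncompleteMultipartUpload(
                new AbortIncompleteMultipartUpload().withDaysAfterInitiation(7))
        .withStatus(BucketLifecycleConfiguration.ENABLED);

s3Client.setBucketLifecycleConfiguration("examplebucket",                   // placeholder bucket name
        new BucketLifecycleConfiguration().withRules(abortRule));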
The following tasks guide you through using the high-level Java classes to stop multipart uploads.
The following Java code stops all multipart uploads in progress that were initiated on a specific
bucket over a week ago. For instructions on how to create and test a working sample, see Testing the
Amazon S3 Java Code Examples (p. 1038).
import java.util.Date;
import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
// existingBucketName is assumed to be defined elsewhere in the full sample.
TransferManager tm = TransferManagerBuilder.standard().build();
Date oneWeekAgo = new Date(System.currentTimeMillis() - 7 * 24 * 60 * 60 * 1000L);
try {
tm.abortMultipartUploads(existingBucketName, oneWeekAgo);
} catch (AmazonClientException amazonClientException) {
System.out.println("Unable to upload file, upload was aborted.");
amazonClientException.printStackTrace();
}
}
}
Note
You can also stop a specific multipart upload. For more information, see Using the AWS
SDKs (low-level API) (p. 101).
.NET
The following C# example stops all in-progress multipart uploads that were initiated on a specific
bucket over a week ago. For information about the example's compatibility with a specific version of
the AWS SDK for .NET and instructions on creating and testing a working sample, see Running the
Amazon S3 .NET Code Examples (p. 1039).
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class AbortMPUUsingHighLevelAPITest
{
private const string bucketName = "*** provide bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
Note
You can also stop a specific multipart upload. For more information, see Using the AWS
SDKs (low-level API) (p. 101).
To stop a multipart upload, you provide the upload ID, and the bucket and key names that are used in
the upload. After you have stopped a multipart upload, you can't use the upload ID to upload additional
parts. For more information about Amazon S3 multipart uploads, see Uploading and copying objects
using multipart upload (p. 74).
Java
Example
InitiateMultipartUploadRequest initRequest =
new InitiateMultipartUploadRequest(existingBucketName, keyName);
InitiateMultipartUploadResult initResponse =
s3Client.initiateMultipartUpload(initRequest);
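The initiate request above is where the upload ID comes from. A hedged sketch of the corresponding abort call, reusing the same s3Client, existingBucketName, and keyName that the snippet assumes, might look like this.
import com.amazonaws.services.s3.model.AbortMultipartUploadRequest;

// Stop the upload identified by the upload ID returned from the initiate request.
// After this call, the upload ID can no longer be used to upload additional parts.
s3Client.abortMultipartUpload(new AbortMultipartUploadRequest(
        existingBucketName, keyName, initResponse.getUploadId()));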
Note
Instead of a specific multipart upload, you can stop all your multipart uploads initiated
before a specific time that are still in progress. This clean-up operation is useful to stop old
multipart uploads that you initiated but did not complete or stop. For more information,
see Using the AWS SDKs (high-level API) (p. 100).
.NET
The following C# example shows how to stop a multipart upload. For a complete C# sample that
includes the following code, see Using the AWS SDKs (low-level API) (p. 87).
You can also abort all in-progress multipart uploads that were initiated prior to a specific time. This
clean-up operation is useful for aborting multipart uploads that didn't complete or were aborted.
For more information, see Using the AWS SDKs (high-level API) (p. 100).
PHP
This example shows how to use a class from version 3 of the AWS SDK for PHP to abort a multipart
upload that is in progress. It assumes that you are already following the instructions for Using the
AWS SDK for PHP and Running PHP Examples (p. 1039) and have the AWS SDK for PHP properly
installed. The example uses the abortMultipartUpload() method.
For information about running the PHP examples in this guide, see Running PHP
Examples (p. 1040).
require 'vendor/autoload.php';
use Aws\S3\S3Client;
Java
Example
The following example shows how to use the Amazon S3 low-level Java API to perform a multipart
copy. For instructions on creating and testing a working sample, see Testing the Amazon S3 Java
Code Examples (p. 1038).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();
// Get the object size to track the end of the copy operation.
GetObjectMetadataRequest metadataRequest = new
GetObjectMetadataRequest(sourceBucketName, sourceObjectKey);
ObjectMetadata metadataResult =
s3Client.getObjectMetadata(metadataRequest);
long objectSize = metadataResult.getContentLength();
// Complete the upload request to concatenate all uploaded parts and make
the copied object available.
CompleteMultipartUploadRequest completeRequest = new
CompleteMultipartUploadRequest(
destBucketName,
destObjectKey,
initResult.getUploadId(),
getETags(copyResponses));
s3Client.completeMultipartUpload(completeRequest);
System.out.println("Multipart copy complete.");
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
.NET
The following C# example shows how to use the AWS SDK for .NET to copy an Amazon S3 object
that is larger than 5 GB from one source location to another, such as from one bucket to another. To
copy objects that are smaller than 5 GB, use the single-operation copy procedure described in Using
the AWS SDKs (p. 110). For more information about Amazon S3 multipart uploads, see Uploading
and copying objects using multipart upload (p. 74).
This example shows how to copy an Amazon S3 object that is larger than 5 GB from one S3
bucket to another using the AWS SDK for .NET multipart upload API. For information about SDK
compatibility and instructions for creating and testing a working sample, see Running the Amazon
S3 .NET Code Examples (p. 1039).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class CopyObjectUsingMPUapiTest
{
private const string sourceBucket = "*** provide the name of the bucket with
source object ***";
private const string targetBucket = "*** provide the name of the bucket to copy
the object to ***";
private const string sourceObjectKey = "*** provide the name of object to copy
***";
private const string targetObjectKey = "*** provide the name of the object copy
***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
{
s3Client = new AmazonS3Client(bucketRegion);
Console.WriteLine("Copying an object");
MPUCopyObjectAsync().Wait();
}
private static async Task MPUCopyObjectAsync()
{
// Create a list to store the upload part responses.
List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();
List<CopyPartResponse> copyResponses = new List<CopyPartResponse>();
try
{
// Get the size of the object.
GetObjectMetadataRequest metadataRequest = new GetObjectMetadataRequest
{
BucketName = sourceBucket,
Key = sourceObjectKey
};
GetObjectMetadataResponse metadataResponse =
await s3Client.GetObjectMetadataAsync(metadataRequest);
long objectSize = metadataResponse.ContentLength; // Length in bytes.
long bytePosition = 0;
for (int i = 1; bytePosition < objectSize; i++)
{
CopyPartRequest copyRequest = new CopyPartRequest
{
DestinationBucket = targetBucket,
DestinationKey = targetObjectKey,
SourceBucket = sourceBucket,
SourceKey = sourceObjectKey,
UploadId = uploadId,
FirstByte = bytePosition,
LastByte = bytePosition + partSize - 1 >= objectSize ?
objectSize - 1 : bytePosition + partSize - 1,
PartNumber = i
};
copyResponses.Add(await s3Client.CopyPartAsync(copyRequest));
bytePosition += partSize;
}
{
BucketName = targetBucket,
Key = targetObjectKey,
UploadId = initResponse.UploadId
};
completeRequest.AddPartETags(copyResponses);
You can use these APIs to make your own REST requests, or you can use one of the SDKs we provide. For
more information about using Multipart Upload with the AWS CLI, see Using the AWS CLI (p. 93). For
more information about the SDKs, see AWS SDK support for multipart upload (p. 76).
Item          Specification
Part size     5 MB to 5 GB. There is no minimum size limit on the last part of your
              multipart upload.
Copying objects
The copy operation creates a copy of an object that is already stored in Amazon S3.
You can create a copy of an object up to 5 GB in size in a single atomic operation. However, to copy an
object that is larger than 5 GB, you must use the multipart upload API.
Each Amazon S3 object has metadata, a set of name-value pairs that you can set at the time you upload
the object. After you upload the object, you cannot modify its metadata in place. The only way to modify
object metadata is to make a copy of the object and set the metadata on the copy. In this copy operation,
you specify the same object as both the source and the target.
Some object metadata is system metadata, and some is user-defined. You control some of the system
metadata, such as the storage class to use for the object and whether to apply server-side encryption.
When you copy an object, user-controlled system metadata and user-defined metadata are also copied.
Amazon S3 resets the system-controlled metadata; for example, it resets the creation date of the copied
object. You don't need to set any of these values in your copy request.
When copying an object, you might decide to update some of the metadata values. For example, if
your source object is configured to use standard storage, you might choose to use reduced redundancy
storage for the object copy. You might also decide to alter some of the user-defined metadata values
present on the source object. Note that if you choose to update any of the object's user-configurable
metadata (system or user-defined) during the copy, you must explicitly specify all of the user-
configurable metadata present on the source object in your request, even if you are changing only
one of the metadata values.
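To make the metadata-replacement point concrete, the following hedged Java sketch (AWS SDK for Java) copies an object onto itself while replacing its metadata. The bucket name, key, and metadata values are placeholders for illustration only.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;

AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

// Placeholder names for illustration only.
String bucketName = "examplebucket";
String key = "example-object";

// Build the full replacement metadata set. Because the copy replaces metadata,
// every user-configurable value you want to keep must be specified here,
// not just the one you are changing.
ObjectMetadata newMetadata = new ObjectMetadata();
newMetadata.setContentType("text/plain");
newMetadata.addUserMetadata("project", "example");

// Copy the object onto itself with the new metadata.
s3Client.copyObject(new CopyObjectRequest(bucketName, key, bucketName, key)
        .withNewObjectMetadata(newMetadata));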
For more information about the object metadata, see Working with object metadata (p. 61).
Note
When copying objects, you can request Amazon S3 to save the target object encrypted with an AWS
KMS key, an Amazon S3-managed encryption key, or a customer-provided encryption key. Accordingly,
you must specify encryption information in your request. If the copy source is an object that is stored in
Amazon S3 using server-side encryption with a customer-provided key, you must provide encryption
information in your request so that Amazon S3 can decrypt the object for copying. For more information, see
Protecting data using encryption (p. 219).
To copy more than one Amazon S3 object with a single request, you can use Amazon S3 batch
operations. You provide S3 Batch Operations with a list of objects to operate on. S3 Batch Operations
calls the respective API to perform the specified operation. A single Batch Operations job can perform
the specified operation on billions of objects containing exabytes of data.
The S3 Batch Operations feature tracks progress, sends notifications, and stores a detailed completion
report of all actions, providing a fully managed, auditable, serverless experience. You can use S3
Batch Operations through the AWS Management Console, AWS CLI, AWS SDKs, or REST API. For more
information, see the section called “Batch Operations basics” (p. 738).
To copy an object, use the examples below.
To copy an object
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Navigate to the Amazon S3 bucket or folder that contains the objects that you want to copy.
3. Select the check box to the left of the names of the objects that you want to copy.
4. Choose Actions and choose Copy from the list of options that appears.
To move objects
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Navigate to the Amazon S3 bucket or folder that contains the objects that you want to move.
3. Select the check box to the left of the names of the objects that you want to move.
4. Choose Actions and choose Move from the list of options that appears.
Note
• This action creates a copy of all specified objects with updated settings, updates the last-
modified date in the specified location, and adds a delete marker to the original object.
• When moving folders, wait for the move action to finish before making additional changes in
the folders.
• Objects encrypted with customer-provided encryption keys (SSE-C) cannot be copied using
the S3 console. To copy objects encrypted with SSE-C, use the AWS CLI, AWS SDK, or the
Amazon S3 REST API.
• This action updates metadata for bucket versioning, encryption, Object Lock features, and
archived objects.
Java
Example
The following example copies an object in Amazon S3 using the AWS SDK for Java. For instructions
on creating and testing a working sample, see Testing the Amazon S3 Java Code Examples (p. 1038).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import java.io.IOException;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();
.NET
The following C# example uses the high-level AWS SDK for .NET to copy objects that are as large
as 5 GB in a single operation. For objects that are larger than 5 GB, use the multipart upload copy
example described in Copying an object using multipart upload (p. 103).
This example makes a copy of an object that is a maximum of 5 GB. For information about the
example's compatibility with a specific version of the AWS SDK for .NET and instructions on how to
create and test a working sample, see Running the Amazon S3 .NET Code Examples (p. 1039).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class CopyObjectTest
{
private const string sourceBucket = "*** provide the name of the bucket with
source object ***";
private const string destinationBucket = "*** provide the name of the bucket to
copy the object to ***";
private const string objectKey = "*** provide the name of object to copy ***";
private const string destObjectKey = "*** provide the destination object key
name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
try
{
CopyObjectRequest request = new CopyObjectRequest
{
SourceBucket = sourceBucket,
SourceKey = objectKey,
DestinationBucket = destinationBucket,
DestinationKey = destObjectKey
};
CopyObjectResponse response = await s3Client.CopyObjectAsync(request);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
}
}
PHP
This topic guides you through using classes from version 3 of the AWS SDK for PHP to copy a single
object and multiple objects within Amazon S3, from one bucket to another or within the same
bucket.
This topic assumes that you are already following the instructions for Using the AWS SDK for PHP
and Running PHP Examples (p. 1039) and have the AWS SDK for PHP properly installed.
The following PHP example illustrates the use of the copyObject() method to copy a single object
within Amazon S3, and the use of a batch of calls to CopyObject, via the getCommand() method, to
make multiple copies of an object.
Copying objects
2. To make multiple copies of an object, you run a batch of calls to the Amazon S3 client
getCommand() method, which is inherited from the Aws\CommandInterface class. You provide
the CopyObject command as the first argument and an array containing the source bucket,
source key name, target bucket, and target key name as the second argument.
require 'vendor/autoload.php';
use Aws\S3\S3Client;
]);
// Copy an object.
$s3->copyObject([
'Bucket' => $targetBucket,
'Key' => "{$sourceKeyname}-copy",
'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
]);
Ruby
The following tasks guide you through using the Ruby classes to copy an object in Amazon S3 from
one bucket to another or within the same bucket.
Copying objects
1. Use the Amazon S3 modularized gem for version 3 of the AWS SDK for Ruby, require
'aws-sdk-s3', and provide your AWS credentials. For more information about how to provide
your credentials, see Making requests using AWS account or IAM user credentials (p. 996).
2. Provide the request information, such as the source bucket name, source key name,
destination bucket name, and destination key.
The following Ruby code example demonstrates the preceding tasks using the #copy_object
method to copy an object from one bucket to another.
require 'aws-sdk-s3'
This example copies the flotsam object from the pacific bucket to the jetsam object of the
atlantic bucket, preserving its metadata.
PUT\r\n
\r\n
\r\n
Wed, 20 Feb 2008 22:12:21 +0000\r\n
x-amz-copy-source:/pacific/flotsam\r\n
/atlantic/jetsam
Amazon S3 returns the following response that specifies the ETag of the object and when it was last
modified.
HTTP/1.1 200 OK
x-amz-id-2: Vyaxt7qEbzv34BnSu5hctyyNSlHTYZFMWK4FtzO+iX8JQNyaLdTshL0KxatbaOZt
x-amz-request-id: 6B13C3C5B34AF333
Date: Wed, 20 Feb 2008 22:13:01 +0000
Content-Type: application/xml
Transfer-Encoding: chunked
Connection: close
Server: AmazonS3
<?xml version="1.0" encoding="UTF-8"?>
<CopyObjectResult>
<LastModified>2008-02-20T22:13:01</LastModified>
<ETag>"7e9c608af58950deeb370c98608ed097"</ETag>
</CopyObjectResult>
Downloading an object
This section explains how to download objects from an S3 bucket.
Data transfer fees apply when you download objects. For information about Amazon S3 features and
pricing, see Amazon S3 pricing.
You can download a single object per request using the Amazon S3 console. To download multiple
objects, use the AWS CLI, AWS SDKs, or REST API.
When you download an object programmatically, its metadata is returned in the response headers. There
are times when you want to override certain response header values returned in a GET response. For
example, you might override the Content-Disposition response header value in your GET request.
The REST GET Object API (see GET Object) allows you to specify query string parameters in your GET
request to override these values. The AWS SDKs for Java, .NET, and PHP also provide necessary objects
you can use to specify values for these response headers in your GET request.
When retrieving objects that are stored encrypted using server-side encryption, you must provide
appropriate request headers. For more information, see Protecting data using encryption (p. 219).
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to download an object from.
3. You can download an object from an S3 bucket in any of the following ways:
• On the Overview page, select the object and from the Actions menu choose Download or
Download as if you want to download the object to a specific folder.
• Choose the object that you want to download and then from the Object actions menu choose
Download or Download as if you want to download the object to a specific folder.
• If you want to download a specific version of the object, choose the name of the object that you
want to download. Choose the Versions tab and then from the Actions menu choose Download
or Download as if you want to download the object to a specific folder.
When you download an object through the AWS SDK for Java, Amazon S3 returns all of the object's
metadata and an input stream from which to read the object's contents.
• Execute the AmazonS3Client.getObject() method, providing the bucket name and object key
in the request.
• Execute one of the S3Object instance methods to process the input stream.
Note
Your network connection remains open until you read all of the data or close the input
stream. We recommend that you read the content of the stream as quickly as possible.
• Instead of reading the entire object, you can read only a portion of the object data by specifying
the byte range that you want in the request.
• You can optionally override the response header values by using a ResponseHeaderOverrides
object and setting the corresponding request property. For example, you can use this feature to
indicate that the object should be downloaded into a file with a different file name than the object
key name.
The following example retrieves an object from an Amazon S3 bucket in three ways: first as a
complete object, then as a range of bytes from the object, and then as a complete object with overridden
response header values. For more information about getting objects from Amazon S3, see GET
Object. For instructions on creating and testing a working sample, see Testing the Amazon S3 Java
Code Examples (p. 1038).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ResponseHeaderOverrides;
import com.amazonaws.services.s3.model.S3Object;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
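The import list above is only the preamble of the full sample. As a hedged sketch of what the body of such an example typically does, the three retrieval variants might look like the following; the bucket name, key, and Region are placeholders.
// Sketch only: the bucket name, key, and Region below are placeholders.
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new ProfileCredentialsProvider())
        .withRegion(Regions.US_WEST_2)
        .build();

// 1. Get the complete object.
S3Object fullObject = s3Client.getObject(new GetObjectRequest("examplebucket", "example-key"));

// 2. Get only a byte range from the object (bytes 0-9 in this sketch).
S3Object rangeObject = s3Client.getObject(
        new GetObjectRequest("examplebucket", "example-key").withRange(0, 9));

// 3. Get the object while overriding some response headers.
ResponseHeaderOverrides headerOverrides = new ResponseHeaderOverrides()
        .withCacheControl("No-cache")
        .withContentDisposition("attachment; filename=example.txt");
S3Object headerOverrideObject = s3Client.getObject(
        new GetObjectRequest("examplebucket", "example-key")
                .withResponseHeaders(headerOverrides));

// Read each stream promptly and close it so that the connection is released.
// (rangeObject and headerOverrideObject should be read and closed the same way.)
try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(fullObject.getObjectContent()))) {
    System.out.println("First line: " + reader.readLine());
}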
.NET
When you download an object, you get all of the object's metadata and a stream from which to read
the contents. You should read the content of the stream as quickly as possible because the data is
streamed directly from Amazon S3 and your network connection will remain open until you read all
the data or close the input stream. You do the following to get an object:
• Execute the getObject method by providing bucket name and object key in the request.
• Execute one of the GetObjectResponse methods to process the stream.
• Instead of reading the entire object, you can read only a portion of the object data by specifying
the byte range in the request, as shown in the following C# example:
Example
• When retrieving an object, you can optionally override the response header values (see
Downloading an object (p. 115)) by using the ResponseHeaderOverrides object and setting
the corresponding request property. The following C# code example shows how to do this. For
example, you can use this feature to indicate that the object should be downloaded into a file with
a different file name than the object key name.
Example
request.ResponseHeaderOverrides = responseHeaders;
Example
The following C# code example retrieves an object from an Amazon S3 bucket. From the response,
the example reads the object data using the GetObjectResponse.ResponseStream property.
The example also shows how you can use the GetObjectResponse.Metadata collection to read
object metadata. If the object you retrieve has the x-amz-meta-title metadata, the code prints
the metadata value.
For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 1039).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.IO;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class GetObjectTest
{
private const string bucketName = "*** bucket name ***";
private const string keyName = "*** object key ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;
}
}
}
}
PHP
This topic explains how to use a class from the AWS SDK for PHP to retrieve an Amazon S3 object.
You can retrieve an entire object or a byte range from the object. We assume that you are already
following the instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 1039) and
have the AWS SDK for PHP properly installed.
When retrieving an object, you can optionally override the response header values by
adding the response keys, ResponseContentType, ResponseContentLanguage,
ResponseContentDisposition, ResponseCacheControl, and ResponseExpires, to the
getObject() method, as shown in the following PHP code example:
Example
$result = $s3->getObject([
'Bucket' => $bucket,
'Key' => $keyname,
'ResponseContentType' => 'text/plain',
'ResponseContentLanguage' => 'en-US',
'ResponseContentDisposition' => 'attachment; filename=testing.txt',
'ResponseCacheControl' => 'No-cache',
'ResponseExpires' => gmdate(DATE_RFC2822, time() + 3600),
]);
For more information about retrieving objects, see Downloading an object (p. 115).
The following PHP example retrieves an object and displays the content of the object in the browser.
The example shows how to use the getObject() method. For information about running the PHP
examples in this guide, see Running PHP Examples (p. 1040).
require 'vendor/autoload.php';
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;
try {
// Get the object.
$result = $s3->getObject([
'Bucket' => $bucket,
'Key' => $keyname
]);
For more information about the request and response format, see Get Object.
For information about Amazon S3 features and pricing, see Amazon S3 pricing.
• Delete a single object — Amazon S3 provides the DELETE API that you can use to delete one object in
a single HTTP request.
• Delete multiple objects — Amazon S3 provides the Multi-Object Delete API that you can use to delete
up to 1,000 objects in a single HTTP request.
When deleting objects from a bucket that is not version-enabled, you provide only the object key name.
However, when deleting objects from a version-enabled bucket, you can optionally provide the version ID
of the object to delete a specific version of the object.
• Specify a non-versioned delete request — Specify only the object's key, and not the version ID. In
this case, Amazon S3 creates a delete marker and returns its version ID in the response. This makes
your object disappear from the bucket. For information about object versioning and the delete marker
concept, see Using versioning in S3 buckets (p. 519).
• Specify a versioned delete request — Specify both the key and a version ID, as shown in the sketch
after this list. In this case, the following two outcomes are possible:
• If the version ID maps to a specific object version, Amazon S3 deletes the specific version of the
object.
• If the version ID maps to the delete marker of that object, Amazon S3 deletes the delete marker.
This makes the object reappear in your bucket.
• If you have an MFA-enabled bucket, and you make a non-versioned delete request (you are not
deleting a versioned object), and you don't provide an MFA token, the delete succeeds.
• If you have a Multi-Object Delete request specifying only non-versioned objects to delete from an
MFA-enabled bucket, and you don't provide an MFA token, the deletions succeed.
For information about MFA delete, see Configuring MFA delete (p. 528).
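To illustrate the two request types described in the preceding list, here is a hedged Java sketch; the bucket name, key, and version ID are placeholders.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectRequest;
import com.amazonaws.services.s3.model.DeleteVersionRequest;

AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

// Non-versioned delete request: only the key is specified. On a versioning-enabled
// bucket this creates a delete marker rather than removing any data.
s3Client.deleteObject(new DeleteObjectRequest("examplebucket", "example-key"));

// Versioned delete request: the key and a version ID are specified. Amazon S3
// permanently removes that specific version, or removes the delete marker if the
// version ID identifies one.
s3Client.deleteVersion(new DeleteVersionRequest(
        "examplebucket", "example-key", "example-version-id"));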
Topics
• Deleting a single object (p. 122)
• Deleting multiple objects (p. 129)
Because all objects in your S3 bucket incur storage costs, you should delete objects that you no longer
need. For example, if you are collecting log files, it's a good idea to delete them when they're no longer
needed. You can set up a lifecycle rule to automatically delete objects such as log files. For more
information, see the section called “Setting lifecycle configuration” (p. 584).
For information about Amazon S3 features and pricing, see Amazon S3 pricing.
To delete an object
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to delete an object from.
3. To delete an object in a versioning-enabled bucket, you have the following options, depending on
whether you are viewing the bucket with versioning:
• Off, Amazon S3 creates a delete marker. To delete the object, select the object, choose Delete, and
confirm your choice by typing delete in the text field.
• On, Amazon S3 permanently deletes the object version. Select the object version that you want to
delete, choose Delete, and confirm your choice by typing permanently delete in the text field.
For more information about S3 Versioning, see Using versioning in S3 buckets (p. 519).
Java
The following example assumes that the bucket is not versioning-enabled and the object doesn't
have any version IDs. In the delete request, you specify only the object key and not a version ID.
For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 1038).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectRequest;
import java.io.IOException;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();
The following example deletes an object from a versioned bucket. The example deletes a specific
object version by specifying the object key name and version ID.
1. Adds a sample object to the bucket. Amazon S3 returns the version ID of the newly added object.
The example uses this version ID in the delete request.
2. Deletes the object version by specifying both the object key name and a version ID. If there are no
other versions of that object, Amazon S3 deletes the object entirely. Otherwise, Amazon S3 only
deletes the specified version.
Note
You can get the version IDs of an object by sending a ListVersions request.
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
import com.amazonaws.services.s3.model.DeleteVersionRequest;
import com.amazonaws.services.s3.model.PutObjectResult;
import java.io.IOException;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();
.NET
The following examples show how to delete an object from both versioned and non-versioned
buckets. For more information about S3 Versioning, see Using versioning in S3 buckets (p. 519).
The following C# example deletes an object from a non-versioned bucket. The example assumes that
the objects don't have version IDs, so you don't specify version IDs. You specify only the object key.
For information about how to create and test a working sample, see Running the Amazon S3 .NET
Code Examples (p. 1039).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class DeleteObjectNonVersionedBucketTest
{
private const string bucketName = "*** bucket name ***";
private const string keyName = "*** object key ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;
Console.WriteLine("Deleting an object");
await client.DeleteObjectAsync(deleteObjectRequest);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
deleting an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
deleting an object", e.Message);
}
}
}
}
The following C# example deletes an object from a versioned bucket. It deletes a specific version of
the object by specifying the object key name and version ID.
1. Enables S3 Versioning on a bucket that you specify (if S3 Versioning is already enabled, this has
no effect).
2. Adds a sample object to the bucket. In response, Amazon S3 returns the version ID of the newly
added object. The example uses this version ID in the delete request.
3. Deletes the sample object by specifying both the object key name and a version ID.
Note
You can also get the version ID of an object by sending a ListVersions request.
For information about how to create and test a working sample, see Running the Amazon S3 .NET
Code Examples (p. 1039).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class DeleteObjectVersion
{
private const string bucketName = "*** versioning-enabled bucket name ***";
private const string keyName = "*** Object Key Name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;
PHP
This example shows how to use classes from version 3 of the AWS SDK for PHP to delete an object
from a non-versioned bucket. For information about deleting an object from a versioned bucket, see
Using the REST API (p. 129).
This example assumes that you are already following the instructions for Using the AWS SDK for
PHP and Running PHP Examples (p. 1039) and have the AWS SDK for PHP properly installed. For
information about running the PHP examples in this guide, see Running PHP Examples (p. 1040).
The following PHP example deletes an object from a bucket. Because this example shows how to
delete objects from non-versioned buckets, it provides only the bucket name and object key (not a
version ID) in the delete request.
<?php
require 'vendor/autoload.php';
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;
$result = $s3->deleteObject([
'Bucket' => $bucket,
'Key' => $keyname
]);
if ($result['DeleteMarker'])
{
echo $keyname . ' was deleted or does not exist.' . PHP_EOL;
} else {
exit('Error: ' . $keyname . ' was not deleted.' . PHP_EOL);
}
}
catch (S3Exception $e) {
exit('Error: ' . $e->getAwsErrorMessage() . PHP_EOL);
}
$result = $s3->getObject([
'Bucket' => $bucket,
'Key' => $keyname
]);
JavaScript
This example shows how to use version 3 of the AWS SDK for JavaScript to delete an object. For
more information about the AWS SDK for JavaScript, see Using the AWS SDK for JavaScript (p. 1042).
};
run();
To delete one object per request, use the DELETE API. For more information, see DELETE Object. For
more information about using the CLI to delete an object, see delete-object.
For information about Amazon S3 features and pricing, see Amazon S3 pricing.
You can use the Amazon S3 console or the Multi-Object Delete API to delete multiple objects
simultaneously from an S3 bucket.
To delete objects
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Navigate to the Amazon S3 bucket or folder that contains the objects that you want to delete.
3. Select the check box to the left of the names of the objects that you want to delete.
4. Choose Actions and choose Delete from the list of options that appears.
Warning
To learn more about object deletion, see Deleting Amazon S3 objects (p. 121).
Java
The AWS SDK for Java provides the AmazonS3Client.deleteObjects() method for deleting
multiple objects. For each object that you want to delete, you specify the key name. If the bucket is
versioning-enabled, you have the following options:
• Specify only the object's key name. Amazon S3 adds a delete marker to the object.
• Specify both the object's key name and a version ID to be deleted. Amazon S3 deletes the
specified version of the object.
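As a hedged sketch of those two options, a single DeleteObjectsRequest can mix key-only entries and key/version pairs; the bucket name, keys, and version ID below are placeholders.
import java.util.ArrayList;
import java.util.List;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;
import com.amazonaws.services.s3.model.DeleteObjectsResult;

AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

List<KeyVersion> keys = new ArrayList<>();
// Key only: on a versioning-enabled bucket this adds a delete marker.
keys.add(new KeyVersion("example-key-1"));
// Key plus version ID: Amazon S3 deletes that specific object version.
keys.add(new KeyVersion("example-key-2", "example-version-id"));

DeleteObjectsRequest deleteRequest = new DeleteObjectsRequest("examplebucket").withKeys(keys);
DeleteObjectsResult deleteResult = s3Client.deleteObjects(deleteRequest);
System.out.println(deleteResult.getDeletedObjects().size() + " objects deleted");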
Example
The following example uses the Multi-Object Delete API to delete objects from a bucket that
is not version-enabled. The example uploads sample objects to the bucket and then uses the
AmazonS3Client.deleteObjects() method to delete the objects in a single request. In the
DeleteObjectsRequest, the example specifies only the object key names because the objects do
not have version IDs.
For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 1038).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;
import com.amazonaws.services.s3.model.DeleteObjectsResult;
import java.io.IOException;
import java.util.ArrayList;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.build();
Example
The following example uses the Multi-Object Delete API to delete objects from a version-enabled
bucket. It does the following:
1. Creates sample objects and then deletes them, specifying the key name and version ID for each
object to delete. The operation deletes only the specified object versions.
2. Creates sample objects and then deletes them by specifying only the key names. Because the
example doesn't specify version IDs, the operation adds a delete marker to each object, without
deleting any specific object versions. After the delete markers are added, these objects will not
appear in the AWS Management Console.
3. Removes the delete markers by specifying the object keys and version IDs of the delete markers.
The operation deletes the delete markers, which results in the objects reappearing in the AWS
Management Console.
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;
import com.amazonaws.services.s3.model.DeleteObjectsResult;
import com.amazonaws.services.s3.model.DeleteObjectsResult.DeletedObject;
import com.amazonaws.services.s3.model.PutObjectResult;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
try {
S3_CLIENT = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();
.withQuiet(false);
// Delete the delete markers, leaving the objects intact in the bucket.
DeleteObjectsResult delObjRes = S3_CLIENT.deleteObjects(deleteRequest);
int successfulDeletes = delObjRes.getDeletedObjects().size();
System.out.println(successfulDeletes + " delete markers successfully deleted");
}
}
.NET
The AWS SDK for .NET provides a convenient method for deleting multiple objects:
DeleteObjects. For each object that you want to delete, you specify the key name and the version
of the object. If the bucket is not versioning-enabled, you specify null for the version ID. If an
exception occurs, review the DeleteObjectsException response to determine which objects were
not deleted and why.
The following C# example uses the multi-object delete API to delete objects from a bucket that
is not version-enabled. The example uploads the sample objects to the bucket, and then uses the
DeleteObjects method to delete the objects in a single request. In the DeleteObjectsRequest,
the example specifies only the object key names because the version IDs are null.
For information about creating and testing a working sample, see Running the Amazon S3 .NET
Code Examples (p. 1039).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class DeleteMultipleObjectsNonVersionedBucketTest
{
private const string bucketName = "*** versioning-enabled bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
1. Creates sample objects and deletes them by specifying the key name and version ID for each
object. The operation deletes specific versions of the objects.
2. Creates sample objects and deletes them by specifying only the key names. Because the example
doesn't specify version IDs, the operation only adds delete markers. It doesn't delete any specific
versions of the objects. After deletion, these objects don't appear in the Amazon S3 console.
3. Deletes the delete markers by specifying the object keys and version IDs of the delete markers.
When the operation deletes the delete markers, the objects reappear in the console.
For information about creating and testing a working sample, see Running the Amazon S3 .NET
Code Examples (p. 1039).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class DeleteMultipleObjVersionedBucketTest
{
private const string bucketName = "*** versioning-enabled bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
// Delete objects using only keys. Amazon S3 creates a delete marker and
// returns its version ID in the response.
List<DeletedObject> deletedObjects = await
NonVersionedDeleteAsync(keysAndVersions2);
return deletedObjects;
}
}
}
// Now, delete the delete marker to bring your objects back to the bucket.
try
{
Console.WriteLine("Removing the delete markers .....");
var deleteObjectResponse = await
s3Client.DeleteObjectsAsync(multiObjectDeleteRequest);
Console.WriteLine("Successfully deleted all the {0} delete markers",
deleteObjectResponse.DeletedObjects.Count);
}
catch (DeleteObjectsException e)
{
PrintDeletionReport(e);
}
}
};
keys.Add(keyVersion);
}
return keys;
}
}
}
PHP
These examples show how to use classes from version 3 of the AWS SDK for PHP to delete multiple
objects from versioned and non-versioned Amazon S3 buckets. For more information about
versioning, see Using versioning in S3 buckets (p. 519).
The examples assume that you are already following the instructions for Using the AWS SDK for PHP
and Running PHP Examples (p. 1039) and have the AWS SDK for PHP properly installed.
The following PHP example uses the deleteObjects() method to delete multiple objects from a
bucket that is not version-enabled.
For information about running the PHP examples in this guide, see Running PHP
Examples (p. 1040).
<?php
require 'vendor/autoload.php';
use Aws\S3\S3Client;
The following PHP example uses the deleteObjects() method to delete multiple objects from a
version-enabled bucket.
For information about running the PHP examples in this guide, see Running PHP
Examples (p. 1040).
<?php
require 'vendor/autoload.php';
use Aws\S3\S3Client;
// 3. List the objects versions and get the keys and version IDs.
$versions = $s3->listObjectVersions(['Bucket' => $bucket]);
if (isset($result['Deleted']))
{
$deleted = true;
if (isset($result['Errors']))
{
$errors = true;
if ($deleted)
{
echo $deletedResults;
}
if ($errors)
{
echo $errorResults;
}
For more information, see Delete Multiple Objects in the Amazon Simple Storage Service API Reference.
In the Amazon S3 console, prefixes are called folders. You can view all your objects and folders in the
S3 console by navigating to a bucket. You can also view information about each object, including object
properties.
For more information about listing and organizing your data in Amazon S3, see the following topics.
Topics
• Organizing objects using prefixes (p. 141)
• Listing object keys programmatically (p. 143)
• Organizing objects in the Amazon S3 console using folders (p. 147)
• Viewing an object overview in the Amazon S3 console (p. 149)
• Viewing object properties in the Amazon S3 console (p. 150)
The prefix limits the results to only those keys that begin with the specified prefix. The delimiter causes a
list operation to roll up all the keys that share a common prefix into a single summary list result.
The purpose of the prefix and delimiter parameters is to help you organize and then browse your keys
hierarchically. To do this, first pick a delimiter for your bucket, such as slash (/), that doesn't occur in any
of your anticipated key names. Next, construct your key names by concatenating all containing levels of
the hierarchy, separating each level with the delimiter.
For example, if you were storing information about cities, you might naturally organize them by
continent, then by country, then by province or state. Because these names don't usually contain
punctuation, you might use slash (/) as the delimiter. The following examples use a slash (/) delimiter.
• Europe/France/Nouvelle-Aquitaine/Bordeaux
• North America/Canada/Quebec/Montreal
• North America/USA/Washington/Bellevue
• North America/USA/Washington/Seattle
If you stored data for every city in the world in this manner, it would become awkward to manage
a flat key namespace. By using Prefix and Delimiter with the list operation, you can use the
hierarchy you've created to list your data. For example, to list all the states in USA, set Delimiter='/'
and Prefix='North America/USA/'. To list all the provinces in Canada for which you have data, set
Delimiter='/' and Prefix='North America/Canada/'.
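A hedged Java sketch of this kind of hierarchical listing (the bucket name is a placeholder) uses ListObjectsV2 with a prefix and a delimiter and reads the rolled-up common prefixes.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;

AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

// List the "states" under North America/USA/ by rolling keys up at the next "/".
ListObjectsV2Request listRequest = new ListObjectsV2Request()
        .withBucketName("examplebucket")          // placeholder bucket name
        .withPrefix("North America/USA/")
        .withDelimiter("/");
ListObjectsV2Result listResult = s3Client.listObjectsV2(listRequest);

// Each common prefix corresponds to one level of the hierarchy,
// for example "North America/USA/Washington/".
for (String commonPrefix : listResult.getCommonPrefixes()) {
    System.out.println(commonPrefix);
}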
sample.jpg
photos/2006/January/sample.jpg
photos/2006/February/sample2.jpg
photos/2006/February/sample3.jpg
photos/2006/February/sample4.jpg
The sample bucket has only the sample.jpg object at the root level. To list only the root level objects
in the bucket, you send a GET request on the bucket with "/" delimiter character. In response, Amazon S3
returns the sample.jpg object key because it does not contain the "/" delimiter character. All other keys
contain the delimiter character. Amazon S3 groups these keys and returns a single CommonPrefixes
element with prefix value photos/ that is a substring from the beginning of these keys to the first
occurrence of the specified delimiter.
Example
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Name>ExampleBucket</Name>
<Prefix></Prefix>
<Marker></Marker>
<MaxKeys>1000</MaxKeys>
<Delimiter>/</Delimiter>
<IsTruncated>false</IsTruncated>
<Contents>
<Key>sample.jpg</Key>
<LastModified>2011-07-24T19:39:30.000Z</LastModified>
<ETag>"d1a7fb5eab1c16cb4f7cf341cf188c3d"</ETag>
<Size>6</Size>
<Owner>
<ID>75cc57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID>
<DisplayName>displayname</DisplayName>
</Owner>
<StorageClass>STANDARD</StorageClass>
</Contents>
<CommonPrefixes>
<Prefix>photos/</Prefix>
</CommonPrefixes>
</ListBucketResult>
For more information about listing object keys programmatically, see Listing object keys
programmatically (p. 143).
Amazon S3 exposes a list operation that lets you enumerate the keys contained in a bucket. Keys are
selected for listing by bucket and prefix. For example, consider a bucket named "dictionary" that
contains a key for every English word. You might make a call to list all the keys in that bucket that start
with the letter "q". List results are always returned in UTF-8 binary order.
Both the SOAP and REST list operations return an XML document that contains the names of matching
keys and information about the object identified by each key.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.
Groups of keys that share a prefix terminated by a special delimiter can be rolled up by that common
prefix for the purposes of listing. This enables applications to organize and browse their keys
hierarchically, much like how you would organize your files into directories in a file system.
For example, to extend the dictionary bucket to contain more than just English words, you might form
keys by prefixing each word with its language and a delimiter, such as "French/logical". Using this
naming scheme and the hierarchical listing feature, you could retrieve a list of only French words. You
could also browse the top-level list of available languages without having to iterate through all the
lexicographically intervening keys. For more information about this aspect of listing, see Organizing
objects using prefixes (p. 141).
REST API
If your application requires it, you can send REST requests directly. You can send a GET request to return
some or all of the objects in a bucket or you can use selection criteria to return a subset of the objects
in a bucket. For more information, see GET Bucket (List Objects) Version 2 in the Amazon Simple Storage
Service API Reference.
List performance is not substantially affected by the total number of keys in your bucket. It's also not
affected by the presence or absence of the prefix, marker, maxkeys, or delimiter arguments.
As buckets can contain a virtually unlimited number of keys, the complete results of a list query can
be extremely large. To manage large result sets, the Amazon S3 API supports pagination to split them
into multiple responses. Each list keys response returns a page of up to 1,000 keys and an indicator
that shows whether the response is truncated. You send a series of list keys requests until you have
received all the keys. AWS SDK wrapper libraries provide the same pagination.
Java
The following example lists the object keys in a bucket. The example uses pagination to retrieve
a set of object keys. If there are more keys to return after the first page, Amazon S3 includes a
continuation token in the response. The example uses the continuation token in the subsequent
request to fetch the next set of object keys.
For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 1038).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import java.io.IOException;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();
System.out.println("Listing objects");
// bucketName is assumed to be defined elsewhere in the full sample.
ListObjectsV2Request req = new ListObjectsV2Request().withBucketName(bucketName);
ListObjectsV2Result result;
do {
    result = s3Client.listObjectsV2(req);
    for (S3ObjectSummary objectSummary : result.getObjectSummaries()) {
        System.out.printf(" - %s (size: %d)%n", objectSummary.getKey(), objectSummary.getSize());
    }
    // Continue from where the previous page stopped.
    String token = result.getNextContinuationToken();
    req.setContinuationToken(token);
} while (result.isTruncated());
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}
.NET
The following C# example lists the object keys for a bucket. In the example, pagination is used to
retrieve a set of object keys. If there are more keys to return, Amazon S3 includes a continuation
token in the response. The code uses the continuation token in the subsequent request to fetch the
next set of object keys.
For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 1039).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;
namespace Amazon.DocSamples.S3
{
class ListObjectsTest
{
private const string bucketName = "*** bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
PHP
This example guides you through using classes from version 3 of the AWS SDK for PHP to list the
object keys contained in an Amazon S3 bucket.
This example assumes that you are already following the instructions for Using the AWS SDK for
PHP and Running PHP Examples (p. 1039) and have the AWS SDK for PHP properly installed.
To list the object keys contained in a bucket using the AWS SDK for PHP, you first must list the
objects contained in the bucket and then extract the key from each of the listed objects. When
listing objects in a bucket you have the option of using the low-level Aws\S3\S3Client::listObjects()
method or the high-level Aws\ResultPaginator class.
The low-level listObjects() method maps to the underlying Amazon S3 REST API. Each
listObjects() request returns a page of up to 1,000 objects. If you have more than 1,000 objects
in the bucket, your response will be truncated and you must send another listObjects() request
to retrieve the next set of 1,000 objects.
You can use the high-level ListObjects paginator to make it easier to list the objects contained
in a bucket. To use the ListObjects paginator to create an object list, run the Amazon S3
client getPaginator() method (inherited from the Aws\AwsClientInterface class) with the
ListObjects command as the first argument and an array to contain the returned objects from the
specified bucket as the second argument.
When used as a ListObjects paginator, the getPaginator() method returns all the objects
contained in the specified bucket. There is no 1,000 object limit, so you don't need to worry whether
the response is truncated.
The following PHP example demonstrates how to list the keys from a specified bucket. It shows
how to use the high-level getIterator() method to list the objects in a bucket and then extract
the key from each object in the list. It also shows how to use the low-level listObjects()
method to do the same. For information about running the PHP examples in this guide, see Running PHP
Examples (p. 1040).
require 'vendor/autoload.php';
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;
For example, you can create a folder on the console named photos and store an object named
myphoto.jpg in it. The object is then stored with the key name photos/myphoto.jpg, where
photos/ is the prefix.
You can have folders within folders, but not buckets within buckets. You can upload and copy objects
directly into a folder. Folders can be created, deleted, and made public, but they cannot be renamed.
Objects can be copied from one folder to another.
Important
The Amazon S3 console treats all objects that have a forward slash ("/") character as the last
(trailing) character in the key name as a folder, for example examplekeyname/. You can't
upload an object that has a key name with a trailing "/" character using the Amazon S3 console.
However, you can upload objects that are named with a trailing "/" with the Amazon S3 API by
using the AWS CLI, AWS SDKs, or REST API.
An object that is named with a trailing "/" appears as a folder in the Amazon S3 console. The
Amazon S3 console does not display the content and metadata for such an object. When you
use the console to copy an object named with a trailing "/", a new folder is created in the
destination location, but the object's data and metadata are not copied.
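The same layout can be produced programmatically. The following is a minimal sketch using the SDK for
Python (Boto3); the bucket name is a placeholder, and it assumes that you have permission to write to
the bucket. It creates a zero-byte object with a trailing "/" (which the console displays as a folder),
stores an object under that prefix, and then lists the keys that share the prefix.
import boto3

s3 = boto3.client('s3')
bucket = 'DOC-EXAMPLE-BUCKET'  # placeholder bucket name

# A zero-byte object whose key ends in "/" appears as a folder in the console.
s3.put_object(Bucket=bucket, Key='photos/')

# Store an object "inside" the folder by using the folder name as a key prefix.
s3.put_object(Bucket=bucket, Key='photos/myphoto.jpg', Body=b'example content')

# List only the keys that share the photos/ prefix.
response = s3.list_objects_v2(Bucket=bucket, Prefix='photos/')
for obj in response.get('Contents', []):
    print(obj['Key'])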
Topics
• Creating a folder (p. 148)
• Making folders public (p. 148)
• Deleting folders (p. 149)
Creating a folder
This section describes how to use the Amazon S3 console to create a folder.
Important
If your bucket policy prevents uploading objects to this bucket without encryption, tags,
metadata, or access control list (ACL) grantees, you will not be able to create a folder using
this configuration. Instead, upload an empty folder and specify these settings in the upload
configuration.
To create a folder
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to create a folder in.
3. Choose Create folder.
4. Enter a name for the folder (for example, favorite-pics). Then choose Create folder.
In the Amazon S3 console, you can make a folder public. You can also make a folder public by creating a
bucket policy that limits access by prefix. For more information, see Identity and access management in
Amazon S3 (p. 274).
Warning
After you make a folder public in the Amazon S3 console, you can't make it private again.
Instead, you must set permissions on each individual object in the public folder so that the
objects have no public access. For more information, see Configuring ACLs (p. 467).
Deleting folders
This section explains how to use the Amazon S3 console to delete folders from an S3 bucket.
For information about Amazon S3 features and pricing, see Amazon S3.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to delete folders from.
3. In the Objects list, select the check box next to the folders and objects that you want to delete.
4. Choose Delete.
5. On the Delete objects page, verify that the names of the folders you selected for deletion are listed.
6. In the Delete objects box, enter delete, and choose Delete objects.
Warning
This action deletes all specified objects. When deleting folders, wait for the delete action to
finish before adding new objects to the folder. Otherwise, new objects might be deleted as well.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that contains the object.
3. In the Objects list, choose the name of the object for which you want an overview.
• To download an object version, select the check box next to the version ID, choose Actions, and
then choose Download.
• To delete an object version, select the check box next to the version ID, and choose Delete.
Important
You can undelete an object only if it was deleted as the latest (current) version. You can't
undelete a previous version of an object that was deleted.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that contains the object.
3. In the Objects list, choose the name of the object you want to view properties for.
The Object overview for your object opens. You can scroll down to view the object properties.
4. On the Object overview page, you can configure the following properties for the object.
Note
If you change the Storage Class, Encryption, or Metadata properties, a new object is
created to replace the old one. If S3 Versioning is enabled, a new version of the object
is created, and the existing object becomes an older version. The role that changes the
property also becomes the owner of the new object (or object version).
a. Storage class – Each object in Amazon S3 has a storage class associated with it. The storage
class that you choose to use depends on how frequently you access the object. The default
storage class for S3 objects is STANDARD. You choose which storage class to use when you
upload an object. For more information about storage classes, see Using Amazon S3 storage
classes (p. 567).
To change the storage class after you upload an object, choose Storage class. Choose the
storage class that you want, and then choose Save.
b. Server-side encryption settings – You can use server-side encryption to encrypt your S3
objects. For more information, see Specifying server-side encryption with AWS KMS (SSE-
KMS) (p. 223) or Specifying Amazon S3 encryption (p. 238).
c. Metadata – Each object in Amazon S3 has a set of name-value pairs that represents its
metadata. For information about adding metadata to an S3 object, see Editing object metadata
in the Amazon S3 console (p. 64).
d. Tags – You can categorize storage by adding tags to an S3 object. For more information, see
Categorizing your storage using tags (p. 685).
e. Object lock legal hold and retention – You can prevent an object from being deleted. For
more information, see Using S3 Object Lock (p. 559).
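As the note above explains, changing these properties rewrites the object. Outside the console, the
equivalent operation is a copy-in-place. The following is a minimal sketch using the SDK for Python
(Boto3); the bucket and key names are placeholders. It changes an object's storage class by copying the
object over itself.
import boto3

s3 = boto3.client('s3')
bucket = 'DOC-EXAMPLE-BUCKET'   # placeholder bucket name
key = 'photos/myphoto.jpg'      # placeholder object key

# Copying the object over itself with a new storage class creates a new object.
# If S3 Versioning is enabled, the existing object becomes an older version.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={'Bucket': bucket, 'Key': key},
    StorageClass='STANDARD_IA',
    MetadataDirective='COPY',  # keep the existing metadata unchanged
)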
In essence, presigned URLs are bearer tokens that grant access to anyone who possesses them. As
such, we recommend that you protect them appropriately.
If you want to restrict the use of presigned URLs and all S3 access to particular network paths, you
can write AWS Identity and Access Management (IAM) policies that require a particular network path.
These policies can be set on the IAM principal that makes the call, the Amazon S3 bucket, or both. A
network-path restriction on the principal requires the user of those credentials to make requests from
the specified network. A restriction on the bucket limits access to that bucket only to requests originating
from the specified network. Realize that these restrictions also apply outside of the presigned URL
scenario.
The IAM global condition that you use depends on the type of endpoint. If you are using the public
endpoint for Amazon S3, use aws:SourceIp. If you are using a VPC endpoint to Amazon S3, use
aws:SourceVpc or aws:SourceVpce.
The following IAM policy statement requires the principal to access AWS from only the specified network
range. With this policy statement in place, all access is required to originate from that range. This
includes the case of someone using a presigned URL for S3.
{
"Sid": "NetworkRestrictionForIAMPrincipal",
"Effect": "Deny",
"Action": "",
"Resource": "",
"Condition": {
"NotIpAddressIfExists": { "aws:SourceIp": "IP-address" },
"BoolIfExists": { "aws:ViaAWSService": "false" }
}
}
For more information about using a presigned URL to share or upload objects, see the topics below.
Topics
• Sharing an object with a presigned URL (p. 151)
• Uploading objects using presigned URLs (p. 155)
When you create a presigned URL for your object, you must provide your security credentials and
then specify a bucket name, an object key, an HTTP method (GET to download the object), and an
expiration date and time. The presigned URLs are valid only for the specified duration.
Anyone who receives the presigned URL can then access the object. For example, if you have a video
in your bucket and both the bucket and the object are private, you can share the video with others by
generating a presigned URL.
Note
• Anyone with valid security credentials can create a presigned URL. However, in order to
successfully access an object, the presigned URL must be created by someone who has
permission to perform the operation that the presigned URL is based upon.
• The credentials that you can use to create a presigned URL include:
• IAM instance profile: Valid up to 6 hours
• AWS Security Token Service: Valid up to 36 hours when signed with permanent credentials,
such as the credentials of the AWS account root user or an IAM user
• IAM user: Valid up to 7 days when using AWS Signature Version 4
To create a presigned URL that's valid for up to 7 days, first designate IAM user credentials
(the access key and secret access key) to the SDK that you're using. Then, generate a
presigned URL using AWS Signature Version 4.
• If you created a presigned URL using a temporary token, then the URL expires when the token
expires, even if the URL was created with a later expiration time.
• Since presigned URLs grant access to your Amazon S3 buckets to whoever has the URL, we
recommend that you protect them appropriately. For more details about protecting presigned
URLs, see Limiting presigned URL capabilities (p. 150).
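As a rough illustration of the last point in the preceding note, the following SDK for Python (Boto3)
sketch uses long-term IAM user credentials (assumed to be configured for the default profile) and
Signature Version 4 to create a presigned GET URL that is valid for the 7-day maximum. The bucket and
key names are placeholders.
import boto3
from botocore.config import Config

# Signature Version 4 is required for presigned URLs that are valid for up to 7 days.
s3 = boto3.client('s3', config=Config(signature_version='s3v4'))

url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={'Bucket': 'DOC-EXAMPLE-BUCKET', 'Key': 'OBJECT_KEY'},
    ExpiresIn=7 * 24 * 3600,  # seven days, the maximum allowed duration
)
print(url)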
If you are using Visual Studio, you can generate a presigned URL for an object without writing any
code by using AWS Explorer for Visual Studio. Anyone with this URL can download the object. For more
information, go to Using Amazon S3 from AWS Explorer.
For instructions about how to install the AWS Explorer, see Developing with Amazon S3 using the AWS
SDKs, and explorers (p. 1030).
The following examples generate a presigned URL that you can give to others so that they can retrieve
an object. For more information, see Sharing an object with a presigned URL (p. 151).
.NET
Example
The following example generates a presigned URL that you can give to others so that they can
retrieve an object. For more information, see Sharing an object with a presigned URL (p. 151).
For instructions about how to create and test a working sample, see Running the Amazon S3 .NET
Code Examples (p. 1039).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
namespace Amazon.DocSamples.S3
{
class GenPresignedURLTest
{
private const string bucketName = "*** bucket name ***";
private const string objectKey = "*** object key ***";
// Specify how long the presigned URL lasts, in hours
private const double timeoutDuration = 12;
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
Go
You can use SDK for Go to upload an object. You can send a PUT request to upload data in a single
operation. For more information, see Generate a Pre-Signed URL for an Amazon S3 PUT Operation
with a Specific Payload in the AWS SDK for Go Developer Guide.
Java
Example
The following example generates a presigned URL that you can give to others so that they can
retrieve an object from an S3 bucket. For more information, see Sharing an object with a presigned
URL (p. 151).
For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 1038).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.HttpMethod;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import java.io.IOException;
import java.net.URL;
import java.time.Instant;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();
PHP
For more information about using AWS SDK for PHP Version 3 to generate a presigned URL, see
Amazon S3 pre-signed URL with AWS SDK for PHP Version 3 in the AWS SDK for PHP Developer
Guide.
Python
Generate a presigned URL to share an object by using the SDK for Python (Boto3). For example, use
a Boto3 client and the generate_presigned_url function to generate a presigned URL that GETs
an object.
import boto3
url = boto3.client('s3').generate_presigned_url(
ClientMethod='get_object',
Params={'Bucket': 'BUCKET_NAME', 'Key': 'OBJECT_KEY'},
ExpiresIn=3600)
For a complete example that shows how to generate presigned URLs and how to use the Requests
package to upload and download objects, see the Python presigned URL example on GitHub. For more
information about using SDK for Python (Boto3) to generate a presigned URL, see Python in the
AWS SDK for Python (Boto3) API Reference.
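As a small, hedged illustration of the Requests usage mentioned above, the following sketch downloads
the object by sending a plain HTTP GET to a presigned URL; the caller needs no AWS credentials of its
own. The URL and file name are placeholders.
import requests

url = 'PRESIGNED_GET_URL'  # a URL produced by generate_presigned_url, as shown above

response = requests.get(url)
if response.status_code == 200:
    with open('downloaded-object', 'wb') as f:
        f.write(response.content)
else:
    print(f"Download failed: {response.status_code} {response.text}")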
All objects and buckets by default are private. The presigned URLs are useful if you want your user/
customer to be able to upload a specific object to your bucket, but you don't require them to have AWS
security credentials or permissions.
When you create a presigned URL, you must provide your security credentials and then specify a bucket
name, an object key, an HTTP method (PUT for uploading objects), and an expiration date and time. The
presigned URLs are valid only for the specified duration. That is, you must start the action before the
expiration date and time. If the action consists of multiple steps, such as a multipart upload, all steps
must be started before the expiration, otherwise you will receive an error when Amazon S3 attempts to
start a step with an expired URL.
You can use the presigned URL multiple times, up to the expiration date and time.
Since presigned URLs grant access to your Amazon S3 buckets to whoever has the URL, we recommend
that you protect them appropriately. For more details about protecting presigned URLs, see Limiting
presigned URL capabilities (p. 150).
Anyone with valid security credentials can create a presigned URL. However, for you to successfully
upload an object, the presigned URL must be created by someone who has permission to perform the
operation that the presigned URL is based upon.
You can generate a presigned URL programmatically using the REST API or the AWS SDKs for .NET,
Java, Ruby, JavaScript, PHP, and Python.
If you are using Microsoft Visual Studio, you can also use AWS Explorer to generate a presigned
object URL without writing any code. Anyone who receives a valid presigned URL can then
programmatically upload an object. For more information, see Using Amazon S3 from AWS Explorer. For
instructions on how to install AWS Explorer, see Developing with Amazon S3 using the AWS SDKs, and
explorers (p. 1030).
You can use the AWS SDK to generate a presigned URL that you, or anyone you give the URL, can use
to upload an object to Amazon S3. When you use the URL to upload an object, Amazon S3 creates the
object in the specified bucket. If an object with the same key that is specified in the presigned URL
already exists in the bucket, Amazon S3 replaces the existing object with the uploaded object.
Examples
The following examples show how to upload objects using presigned URLs.
.NET
The following C# example shows how to use the AWS SDK for .NET to upload an object to an S3
bucket using a presigned URL.
This example generates a presigned URL for a specific object and uses it to upload a file. For
information about the example's compatibility with a specific version of the AWS SDK for .NET and
instructions about how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 1039).
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.IO;
using System.Net;
namespace Amazon.DocSamples.S3
{
class UploadObjectUsingPresignedURLTest
{
private const string bucketName = "*** provide bucket name ***";
private const string objectKey = "*** provide the name for the uploaded object
***";
private const string filePath = "*** provide the full path name of the file
to upload ***";
// Specify how long the presigned URL lasts, in hours
private const double timeoutDuration = 12;
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
return url;
}
}
}
Java
To upload an object by using a presigned URL with the AWS SDK for Java, you do the following:
• Specify the HTTP PUT verb when creating the GeneratePresignedUrlRequest and
HttpURLConnection objects.
• Interact with the HttpURLConnection object in some way after finishing the upload. The
following example accomplishes this by using the HttpURLConnection object to check the HTTP
response code.
Example
This example generates a presigned URL and uses it to upload sample data as an object. For
instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 1038).
import com.amazonaws.AmazonServiceException;
import com.amazonaws.HttpMethod;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import com.amazonaws.services.s3.model.S3Object;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;
try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();
// Create the connection and use it to upload the new object using the pre-
signed URL.
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("PUT");
OutputStreamWriter out = new
OutputStreamWriter(connection.getOutputStream());
out.write("This text uploaded as an object via presigned URL.");
out.close();
// Check the HTTP response code. To complete the upload and make the object
available,
// you must interact with the connection object in some way.
connection.getResponseCode();
System.out.println("HTTP response code: " + connection.getResponseCode());
JavaScript
Example
For an AWS SDK for JavaScript example on using the presigned URL to upload objects, see Create a
presigned URL to upload objects to an Amazon S3 bucket.
Example
The following AWS SDK for JavaScript example uses a presigned URL to delete an object:
// Import the required AWS SDK clients and commands for Node.js
import {
CreateBucketCommand,
DeleteObjectCommand,
PutObjectCommand,
DeleteBucketCommand }
from "@aws-sdk/client-s3";
import { s3Client } from "./libs/s3Client.js"; // Helper function that creates Amazon
S3 service client module.
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import fetch from "node-fetch";
// Set parameters
// Create random names for the Amazon Simple Storage Service (Amazon S3) bucket and key
Python
Generate a presigned URL to upload an object by using the SDK for Python (Boto3). For example,
use a Boto3 client and the generate_presigned_url function to generate a presigned URL that
PUTs an object.
import boto3
url = boto3.client('s3').generate_presigned_url(
ClientMethod='put_object',
Params={'Bucket': 'BUCKET_NAME', 'Key': 'OBJECT_KEY'},
ExpiresIn=3600)
For a complete example that shows how to generate presigned URLs and how to use the Requests
package to upload and download objects, see the Python presigned URL example on GitHub. For
more information about using SDK for Python (Boto3) to generate a presigned URL, see Python in
the AWS SDK for Python (Boto) API Reference.
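As a hedged companion to the Boto3 snippet above, the following sketch uses the Requests package to
send a local file to a presigned PUT URL. The upload creates the object, or replaces an existing object,
under the key that was specified when the URL was generated. The URL and file path are placeholders.
import requests

url = 'PRESIGNED_PUT_URL'     # a URL produced by generate_presigned_url for put_object
file_path = 'local-file.txt'  # placeholder path of the file to upload

with open(file_path, 'rb') as f:
    response = requests.put(url, data=f)

# A 200 response means that Amazon S3 accepted the upload.
print(response.status_code)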
Ruby
The following tasks guide you through using a Ruby script to upload an object using a presigned URL
for SDK for Ruby - Version 3.
• Provide a bucket name and an object key by calling the #bucket[] and the #object[] methods of
your Aws::S3::Resource class instance.
• Generate a presigned URL by creating an instance of the URI class, and use it to parse
the .presigned_url method of your Aws::S3::Resource class instance. You must specify :put as an
argument to .presigned_url, and you must specify PUT to Net::HTTP::Session#send_request if you
want to upload an object.
The upload creates an object or replaces any existing object with the same key that is
specified in the presigned URL.
The following Ruby code example demonstrates the preceding tasks for SDK for Ruby - Version 3.
Example
require 'aws-sdk-s3'
require 'net/http'
def object_uploaded_to_presigned_url?(
s3_resource,
bucket_name,
object_key,
object_content,
http_client = nil
)
object = s3_resource.bucket(bucket_name).object(object_key)
url = URI.parse(object.presigned_url(:put))
if http_client.nil?
Net::HTTP.start(url.host) do |http|
http.send_request(
'PUT',
url.request_uri,
object_content,
'content-type' => ''
)
end
else
http_client.start(url.host) do |http|
http.send_request(
'PUT',
url.request_uri,
object_content,
'content-type' => ''
)
end
end
content = object.get.body
puts "The presigned URL for the object '#{object_key}' in the bucket " \
"'#{bucket_name}' is:\n\n"
puts url
puts "\nUsing this presigned URL to get the content that " \
"was just uploaded to this object, the object\'s content is:\n\n"
puts content.read
return true
rescue StandardError => e
puts "Error uploading to presigned URL: #{e.message}"
return false
end
S3 Object Lambda uses AWS Lambda functions to automatically process the output of a standard S3 GET
request. AWS Lambda is a serverless compute service that runs customer-defined code without requiring
management of underlying compute resources. You can author and execute your own custom Lambda
functions, tailoring data transformation to your specific use cases. You configure a Lambda function
and attach it to an S3 Object Lambda access point, and S3 automatically calls your function. Any data
retrieved through an S3 GET request to the S3 Object Lambda access point is then returned to the
application as a transformed result. All other requests are processed as normal.
The topics in this section describe how to work with Object Lambda access points.
Topics
• Creating Object Lambda Access Points (p. 162)
• Configuring IAM policies for Object Lambda access points (p. 166)
• Writing and debugging Lambda functions for S3 Object Lambda Access Points (p. 168)
• Using AWS built Lambda functions (p. 179)
• Best practices and guidelines for S3 Object Lambda (p. 181)
• Security considerations for S3 Object Lambda access points (p. 182)
To create an Object Lambda access point, you need the following resources:
• An IAM policy
• An Amazon S3 bucket
• A standard S3 access point
• An AWS Lambda function
The following sections describe how to create an Object Lambda access point using the AWS
Management Console and AWS CLI.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the navigation pane on the left side of the console, choose Object Lambda access points.
3. On the Object Lambda access points page, choose Create Object Lambda access point.
4. For Object Lambda access point name, enter the name you want to use for the access point.
As with standard access points, there are rules for naming. For more information, see Rules for
naming Amazon S3 access points (p. 189).
5. For Supporting access point, enter or browse to the standard access point that you want to use. The
access point must be in the same AWS Region as the objects you want to transform.
6. For Invoke Lambda function, you can choose to use a prebuilt function or enter the Amazon
Resource Name (ARN) of an AWS Lambda function in your AWS account.
For more information about prebuilt functions, see Using AWS built Lambda functions (p. 179).
7. (Optional) For Range and part number, enable this option if you want to process GET
requests that contain range and part number headers. Selecting this option confirms that your Lambda
function is able to recognize and process these requests. For more information about range headers
and part numbers, see Working with Range and partNumber headers (p. 177).
8. (Optional) Under Payload, add JSON text to provide your Lambda function with additional
information. A payload is optional JSON that you can provide to your Lambda function as input. You
can configure payloads with different parameters for different Object Lambda access points that
invoke the same Lambda function, thereby extending the flexibility of your Lambda function.
9. (Optional) For Request metrics, choose enable or disable to add Amazon S3 monitoring to your
Object Lambda access point. Request metrics are billed at the standard CloudWatch rate.
10. (Optional) Under Object Lambda access point policy, set a resource policy. This resource policy
grants GetObject permission for the specified Object Lambda access point.
11. Choose Create Object Lambda access point.
The following example creates an Object Lambda access point named my-object-lambda-ap for the
bucket DOC-EXAMPLE-BUCKET1 in account 111122223333. This example assumes that a standard
access point named example-ap has already been created. For information about creating a standard
access point, see the section called “Creating access points” (p. 189).
This example uses the AWS prebuilt function compress. For example AWS Lambda functions, see the
section called “Using AWS built functions” (p. 179).
1. Create a bucket. In this example we will use DOC-EXAMPLE-BUCKET1. For information about
creating buckets, see the section called “Creating a bucket” (p. 28).
2. Create a standard access point and attach it to your bucket. In this example we will use example-
ap. For information about creating standard access points, see the section called “Creating access
points” (p. 189)
3. Create a Lambda function in your account that you would like to use to transform your S3 object.
See Using Lambda with the AWS CLI in the AWS Lambda Developer Guide. You can also use an AWS
prebuilt Lambda function.
4. Create a JSON configuration file named my-olap-configuration.json. In this configuration
file, provide the supporting access point and the Lambda function ARN that you created in the previous steps.
Example
{
"SupportingAccessPoint" : "arn:aws:s3:us-east-1:111122223333:accesspoint/example-
ap",
"TransformationConfigurations": [{
"Actions" : ["GetObject"],
"ContentTransformation" : {
"AwsLambda": {
"FunctionPayload" : "{\"compressionType\":\"gzip\"}",
"FunctionArn" : "arn:aws:lambda:us-east-1:111122223333:function/
compress"
}
}
}]
}
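As a rough equivalent of the CLI step that consumes this configuration, the following SDK for Python
(Boto3) sketch passes the same settings to the S3 Control API. The account ID, access point names, and
function ARN are the example values used in this walkthrough.
import boto3

s3control = boto3.client('s3control')

s3control.create_access_point_for_object_lambda(
    AccountId='111122223333',
    Name='my-object-lambda-ap',
    Configuration={
        'SupportingAccessPoint': 'arn:aws:s3:us-east-1:111122223333:accesspoint/example-ap',
        'TransformationConfigurations': [{
            'Actions': ['GetObject'],
            'ContentTransformation': {
                'AwsLambda': {
                    'FunctionArn': 'arn:aws:lambda:us-east-1:111122223333:function/compress',
                    'FunctionPayload': '{"compressionType":"gzip"}',
                },
            },
        }],
    },
)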
This resource policy grants GetObject permission for account 444455556666 to the specified
Object Lambda access point.
Example
{
  "Version": "2008-10-17",
  "Statement": [{
    "Sid": "Grant account 444455556666 GetObject access",
    "Effect": "Allow",
    "Action": "s3-object-lambda:GetObject",
    "Principal": {
      "AWS": "arn:aws:iam::444455556666:root"
    },
    "Resource": "arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/my-object-lambda-ap"
  }]
}
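To attach a resource policy such as the preceding one, you can use the S3 Control API. The following is
a minimal SDK for Python (Boto3) sketch, assuming that the policy is saved locally in a file named
my-olap-policy.json and using the example account ID and access point name from this walkthrough.
import boto3

s3control = boto3.client('s3control')

# Read the resource policy shown above from a local file (assumed file name).
with open('my-olap-policy.json') as f:
    policy = f.read()

s3control.put_access_point_policy_for_object_lambda(
    AccountId='111122223333',
    Name='my-object-lambda-ap',
    Policy=policy,
)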
A payload is optional JSON that you can provide to your AWS Lambda function as input. You can
configure payloads with different parameters for different Object Lambda access points that invoke
the same Lambda function, thereby extending the flexibility of your Lambda function.
The following Object Lambda access point configuration shows a payload with two parameters.
{
"SupportingAccessPoint": "AccessPointArn",
"CloudWatchMetricsEnabled": false,
"TransformationConfigurations": [{
"Actions": ["GetObject"],
"ContentTransformation": {
"AwsLambda": {
"FunctionArn": "FunctionArn",
"FunctionPayload": "{\"res-x\": \"100\",\"res-y\": \"100\"}"
}
}
}]
}
The following Object Lambda access point configuration shows a payload with one parameter and
range and part number enabled.
{
"SupportingAccessPoint":"AccessPointArn",
"CloudWatchMetricsEnabled": false,
"AllowedFeatures": ["GetObject-Range", "GetObject-PartNumber"],
"TransformationConfigurations": [{
"Actions": ["GetObject"],
"ContentTransformation": {
"AwsLambda": {
"FunctionArn":"FunctionArn",
"FunctionPayload": "{\"compression-amount\": \"5\"}"
}
}
}]
}
Important
When you use Object Lambda access points, the payload should not contain any confidential
information.
For more information about configuring Object Lambda access points using AWS CloudFormation, see
AWS::S3ObjectLambda::AccessPoint in the AWS CloudFormation User Guide.
For more information about configuring Object Lambda access points using the AWS CDK, see
AWS::S3ObjectLambda Construct Library in the AWS Cloud Development Kit (CDK) API Reference.
In the case of a single AWS account, the following four resources must have permissions granted to work
with Object Lambda access points:
• An Amazon S3 bucket with the following ARN: arn:aws:s3:::DOC-EXAMPLE-BUCKET1
The bucket has its permissions delegated to your access point, as in the following example. For more
information, see Delegating access control to access points (p. 185).
{
"Version": "2012-10-17",
"Statement" : [
{
"Effect": "Allow",
"Principal" : { "AWS":"account-ARN"},
"Action" : "*",
"Resource" : [ "DOC-EXAMPLE-BUCKET1", "DOC-EXAMPLE-BUCKET1/*"],
"Condition": {
"StringEquals" : { "s3:DataAccessPointAccount" : "Bucket owner's account
ID" }
}
}]
}
• An Amazon S3 standard access point on this bucket with the following ARN:
arn:aws:s3:us-east-1:111122223333:accesspoint/my-access-point
• An Object Lambda access point with the following ARN:
arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/my-object-lambda-
ap
• An AWS Lambda function with the following ARN:
arn:aws:lambda:us-east-1:111122223333:function/MyObjectLambdaFunction
Note
If using a Lambda function from your account you must include the function version in your
policy statement. For example, arn:aws:lambda:us-east-1:111122223333:function/
MyObjectLambdaFunction:$LATEST
The following IAM policy grants a user permission to interact with these resources.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowLambdaInvocation",
"Action": [
"lambda:InvokeFunction"
],
"Effect": "Allow",
"Resource": "arn:aws:lambda:us-east-1:111122223333:function/MyObjectLambdaFunction:
$LATEST",
"Condition": {
"ForAnyValue:StringEquals": {
"aws:CalledVia": [
"s3-object-lambda.amazonaws.com"
]
}
}
},
{
"Sid": "AllowStandardAccessPointAccess",
"Action": [
"s3: Get*",
"s3: List*"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:us-east-1:111122223333:accesspoint/my-access-point/*",
"Condition": {
"ForAnyValue:StringEquals": {
"aws:CalledVia": [
"s3-object-lambda.amazonaws.com"
]
}
}
},
{
"Sid": "AllowObjectLambdaAccess",
"Action": [
"s3-object-lambda:Get*",
"s3-object-lambda:List*"
],
"Effect": "Allow",
"Resource": "arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/my-object-
lambda-ap"
}
]
}
For detailed instructions, see Creating a role for an AWS service (console) in the IAM User Guide.
Add the following statement to the execution role that is used by the Lambda function.
{
"Sid": "AllowObjectLambdaAccess",
"Action": ["s3-object-lambda:WriteGetObjectResponse"],
"Effect": "Allow",
"Resource": "*"
}
For more information about execution roles, see Lambda execution role in the AWS Lambda Developer
Guide.
Topics
• Working with WriteGetObjectResponse (p. 168)
• Debugging S3 Object Lambda (p. 176)
• Working with Range and partNumber headers (p. 177)
• Event context format and usage (p. 178)
The WriteGetObjectResponse API gives your Lambda function extensive control over the status code,
response headers, and response body, based on the context of your application. The following section
shows examples of using WriteGetObjectResponse.
Example 1:
You can use WriteGetObjectResponse to respond with a 403 Forbidden based on the content of the
request, as the following examples do when a required header is missing.
Java
package com.amazon.s3.objectlambda;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.events.S3ObjectLambdaEvent;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.WriteGetObjectResponseRequest;
import java.io.ByteArrayInputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
// Prepare the presigned URL for use and make the request to S3.
HttpClient httpClient = HttpClient.newBuilder().build();
var presignedResponse = httpClient.send(
HttpRequest.newBuilder(new URI(event.inputS3Url())).GET().build(),
HttpResponse.BodyHandlers.ofInputStream());
s3Client.writeGetObjectResponse(new WriteGetObjectResponseRequest()
.withRequestRoute(event.outputRoute())
.withRequestToken(event.outputToken())
.withInputStream(presignedResponse.body()));
}
}
Python
import boto3
import requests
"""
Retrieve the operation context object from event. This has info to where the
WriteGetObjectResponse request
should be delivered and a presigned URL in `inputS3Url` where we can download the
requested object from.
The `userRequest` object has information related to the user which made this
`GetObject` request to S3OL.
"""
get_context = event["getObjectContext"]
user_request_headers = event["userRequest"]["headers"]
route = get_context["outputRoute"]
token = get_context["outputToken"]
s3_url = get_context["inputS3Url"]
# Check for the presence of a `CustomHeader` header and deny or allow based on that
header
is_token_present = "SuperSecretToken" in user_request_headers
if is_token_present:
# If the user presented our custom `SuperSecretToken` header we send the
requested object back to the user.
response = requests.get(s3_url)
s3.write_get_object_response(RequestRoute=route, RequestToken=token,
Body=response.content)
else:
# If the token is not present we send an error back to the user.
s3.write_get_object_response(RequestRoute=route, RequestToken=token,
StatusCode=403,
ErrorCode="NoSuperSecretTokenFound", ErrorMessage="The request was not secret
enough.")
NodeJS
const { S3 } = require('aws-sdk');
const axios = require('axios').default;
// Retrieve the operation context object from event. This has info to where the
WriteGetObjectResponse request
// should be delivered and a presigned URL in `inputS3Url` where we can download
the requested object from.
// The `userRequest` object has information related to the user which made this
`GetObject` request to S3OL.
const { userRequest, getObjectContext } = event;
const { outputRoute, outputToken, inputS3Url } = getObjectContext;
// Check for the presence of a `CustomHeader` header and deny or allow based on
that header
const isTokenPresent = Object
.keys(userRequest.headers)
.includes("SuperSecretToken");
if (!isTokenPresent) {
// If the token is not present we send an error back to the user. Notice the
`await` infront of the request as
// we want to wait for this request to finish sending before moving on.
await s3.writeGetObjectResponse({
RequestRoute: outputRoute,
RequestToken: outputToken,
StatusCode: 403,
ErrorCode: "NoSuperSecretTokenFound",
ErrorMessage: "The request was not secret enough.",
}).promise();
} else {
// If the user presented our custom `SuperSecretToken` header we send the
requested object back to the user.
// Again notice the presence of `await`.
const presignedResponse = await axios.get(inputS3Url);
await s3.writeGetObjectResponse({
RequestRoute: outputRoute,
RequestToken: outputToken,
Body: presignedResponse.data,
}).promise();
}
Example 2:
When performing an image transformation, you may find that you need all the bytes of the source
object before you can start processing them. Consequently, your WriteGetObjectResponse will return the
whole object to the requesting application in one go.
Java
package com.amazon.s3.objectlambda;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.events.S3ObjectLambdaEvent;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.WriteGetObjectResponseRequest;
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.awt.Image;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
// Prepare the presigned URL for use and make the request to S3.
var presignedResponse = httpClient.send(
HttpRequest.newBuilder(new URI(event.inputS3Url())).GET().build(),
HttpResponse.BodyHandlers.ofInputStream());
// The entire image is loaded into memory here so that we can resize it.
// Once the resizing is completed, we write the bytes into the body
// of the WriteGetObjectResponse.
var originalImage = ImageIO.read(presignedResponse.body());
var resizingImage = originalImage.getScaledInstance(WIDTH, HEIGHT,
Image.SCALE_DEFAULT);
var resizedImage = new BufferedImage(WIDTH, HEIGHT,
BufferedImage.TYPE_INT_RGB);
resizedImage.createGraphics().drawImage(resizingImage, 0, 0, WIDTH, HEIGHT,
null);
Python
import boto3
import requests
import io
from PIL import Image
"""
In this case we're resizing `.png` images which are stored in S3 and are accessible
via the presigned url
`inputS3Url`.
"""
image_request = requests.get(s3_url)
image = Image.open(io.BytesIO(image_request.content))
image.thumbnail((256,256), Image.ANTIALIAS)
transformed = io.BytesIO()
image.save(transformed, "png")
NodeJS
const { S3 } = require('aws-sdk');
const axios = require('axios').default;
const sharp = require('sharp');
// Retrieve the operation context object from event. This has info to where the
WriteGetObjectResponse request
// should be delivered and a presigned URL in `inputS3Url` where we can download
the requested object from
const { getObjectContext } = event;
const { outputRoute, outputToken, inputS3Url } = getObjectContext;
// In this case we're resizing `.png` images which are stored in S3 and are
accessible via the presigned url
// `inputS3Url`.
const { data } = await axios.get(inputS3Url, { responseType: 'arraybuffer' });
Example 3:
The following examples stream a transformed version of the object back to the caller as the bytes are
processed (in this case, by compressing them), instead of buffering the entire object first.
Java
package com.amazon.s3.objectlambda;
import com.amazonaws.services.lambda.runtime.events.S3ObjectLambdaEvent;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.WriteGetObjectResponseRequest;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
// We're consuming the incoming response body from the presigned request,
// applying our transformation on that data and emitting the transformed bytes
// into the body of the WriteGetObjectResponse request as soon as they're
ready.
// This example compresses the data from S3, but any processing pertinent
// to your application can be performed here.
var bodyStream = new GZIPCompressingInputStream(presignedResponse.body());
Python
import boto3
import requests
import zlib
from botocore.config import Config
"""
A helper class to work with content iterators. Takes an interator and compresses the
bytes that come from it. It
implements `read` and `__iter__` so the SDK can stream the response
"""
class Compress:
def __init__(self, content_iter):
self.content = content_iter
self.compressed_obj = zlib.compressobj()
def __iter__(self):
while True:
data = next(self.content)
chunk = self.compressed_obj.compress(data)
if not chunk:
break
yield chunk
yield self.compressed_obj.flush()
"""
Retrieve the operation context object from event. This has info to where the
WriteGetObjectResponse request
should be delivered and a presigned URL in `inputS3Url` where we can download the
requested object from.
The `userRequest` object has information related to the user which made this
`GetObject` request to S3OL.
"""
get_context = event["getObjectContext"]
route = get_context["outputRoute"]
token = get_context["outputToken"]
s3_url = get_context["inputS3Url"]
NodeJS
const { S3 } = require('aws-sdk');
const axios = require('axios').default;
const zlib = require('zlib');
// Retrieve the operation context object from event. This has info to where the
WriteGetObjectResponse request
// should be delivered and a presigned URL in `inputS3Url` where we can download
the requested object from
const { getObjectContext } = event;
const { outputRoute, outputToken, inputS3Url } = getObjectContext;
Note
While S3 Object Lambda allows up to 60 seconds to send a complete response to the caller
via WriteGetObjectResponse, the actual amount of time available may be less. For example, your
Lambda function timeout might be less than 60 seconds, or the caller might have more stringent
timeouts.
The WriteGetObjectResponse call must be made for the original caller to receive a non-500 response.
If the Lambda function returns, exceptionally or otherwise, before the WriteGetObjectResponse API
is called, the original caller receives a 500 response. Exceptions thrown while the response is being
written result in truncated responses to the caller. If the Lambda function receives a 200 response
from the WriteGetObjectResponse API call, then the complete response has been sent to the original
caller. The Lambda function's own response, exceptional or not, is ignored by S3 Object Lambda.
When calling this API, S3 requires the route and request token from the event context. For more
information, see Event context format and usage (p. 178).
These parameters are required to connect the WriteGetObjectResponse call with the original caller. While
it is always appropriate to retry 500 responses, note that the request token is a single use token
and subsequent attempts to use it may result in 400 Bad Request responses. While the call to
WriteGetObjectResponse with the route and request tokens does not need to be made from the invoked
Lambda, it does need to be made by an identity in the same account and must be completed before the
Lambda finishes execution.
S3 Object Lambda errors use the same format as the standard S3 errors. For information about S3
Object Lambda errors, see S3 Object Lambda Error Code List in the Amazon Simple Storage Service API
Reference.
For more information about general Lambda function debugging, see Monitoring and troubleshooting
Lambda applications in the AWS Lambda Developer Guide.
For information about standard Amazon S3 errors, see Error Responses in the Amazon Simple Storage
Service API Reference.
You can enable request metrics in CloudWatch for your Object Lambda access points. These metrics can
be used to monitor the operational performance of your access point.
CloudTrail Data Events can be enabled to get more granular logging about requests made to your Object
Lambda access points. For more information, see Logging data events for trails in the AWS CloudTrail
User Guide.
When it receives a GET request, S3 Object Lambda invokes your specified Lambda function first. Therefore,
if your GET request contains range or part number parameters, you must ensure that your Lambda
function is equipped to recognize and manage these parameters. Because several entities can be
connected in such a setup (the requesting client and services such as Lambda and S3), it is advised that all
involved entities interpret the requested range (or partNumber) in a uniform manner. This ensures that
the ranges the application expects match the ranges your Lambda function is processing.
When building a function to handle requests with range headers, test all combinations of response sizes,
original object sizes, and request range sizes that your application plans to use.
By default, S3 Object Lambda access points will respond with a 501 to any GetObject request that
contains a range or part number parameter, either in the headers or query parameters. You can confirm
that your Lambda function is prepared to handle range or part requests by updating your Object Lambda
access point configuration through the AWS Management Console or the AWS CLI.
You can retrieve the Range header from the GET request and add it to the presigned URL that Lambda
can use to retrieve the requested range from S3, as shown in the sketch below.
Range requests to S3 can be made using headers or query parameters. If the original request used the
Range header it can be found in the event context at userRequest.headers.Range. If the original
request used a query parameter then it will be present in userRequest.url as ‘Range’. In both cases,
the presigned URL that is provided will not contain the specified range, and the range header should be
added to it in order to retrieve the requested range from S3.
Part requests to S3 are made using query parameters. If the original request included a part number it
can be found in the query parameters in userRequest.url as ‘partNumber’. The presigned URL that is
provided will not contain the specified partNumber.
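The following is a minimal sketch of this pattern in Python; Boto3 and the Requests package are
assumed, along with the event context format described later in this section. The handler forwards the
caller's Range header on the presigned request and writes the fetched bytes back through
WriteGetObjectResponse.
import boto3
import requests

s3 = boto3.client('s3')

def lambda_handler(event, context):
    ctx = event['getObjectContext']
    user_headers = event['userRequest']['headers']

    # If the original GetObject request carried a Range header, forward it on the
    # presigned request so that only the requested bytes are fetched from Amazon S3.
    fetch_headers = {}
    if 'Range' in user_headers:
        fetch_headers['Range'] = user_headers['Range']

    original = requests.get(ctx['inputS3Url'], headers=fetch_headers)

    s3.write_get_object_response(
        RequestRoute=ctx['outputRoute'],
        RequestToken=ctx['outputToken'],
        Body=original.content,
    )
    return {'statusCode': 200}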
{
"xAmzRequestId": "requestId",
"getObjectContext": {
"inputS3Url": "https://my-s3-ap-111122223333.s3-accesspoint.us-
east-1.amazonaws.com/example?X-Amz-Security-Token=<snip>",
"outputRoute": "io-use1-001",
"outputToken": "OutputToken"
},
"configuration": {
"accessPointArn": "arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/
example-object-lambda-ap",
"supportingAccessPointArn": "arn:aws:s3:us-east-1:111122223333:accesspoint/example-
ap",
"payload": "{}"
},
"userRequest": {
"url": "https://object-lambda-111122223333.s3-object-lambda.us-
east-1.amazonaws.com/example",
"headers": {
"Host": "object-lambda-111122223333.s3-object-lambda.us-east-1.amazonaws.com",
"Accept-Encoding": "identity",
"X-Amz-Content-SHA256": "e3b0c44298fc1example"
}
},
"userIdentity": {
"type": "AssumedRole",
"principalId": "principalId",
"arn": "arn:aws:sts::111122223333:assumed-role/Admin/example",
"accountId": "111122223333",
"accessKeyId": "accessKeyId",
"sessionContext": {
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "Wed Mar 10 23:41:52 UTC 2021"
},
"sessionIssuer": {
"type": "Role",
"principalId": "principalId",
"arn": "arn:aws:iam::111122223333:role/Admin",
"accountId": "111122223333",
"userName": "Admin"
}
}
},
"protocolVersion": "1.00"
}
• xAmzRequestId ‐ The Amazon S3 request ID for this request. We recommend that you log this value
to help with debugging.
• getObjectContext ‐ The input and output details for connections to Amazon S3 and S3 Object
Lambda.
• inputS3Url ‐ A presigned URL that can be used to fetch the original object from Amazon S3. The
URL is signed using the original caller’s identity, and their permissions will apply when the URL is
used. If there are signed headers in the URL, the Lambda function must include these in the call to
Amazon S3, except for the Host.
• outputRoute ‐ A routing token that is added to the S3 Object Lambda URL when the Lambda
function calls WriteGetObjectResponse.
• outputToken ‐ An opaque token used by S3 Object Lambda to match the
WriteGetObjectResponse call with the original caller.
• configuration ‐ Configuration information about the S3 Object Lambda access point.
• accessPointArn ‐ The Amazon Resource Name (ARN) of the S3 Object Lambda access point that
received this request.
• supportingAccessPointArn ‐ The ARN of the supporting access point that is specified in the S3
Object Lambda access point configuration.
• payload ‐ Custom data that is applied to the S3 Object Lambda access point configuration. S3
Object Lambda treats this as an opaque string, so it might need to be decoded before use.
• userRequest ‐ Information about the original call to S3 Object Lambda.
• url ‐ The decoded URL of the request as received by S3 Object Lambda, excluding any
authorization-related query parameters.
• headers ‐ A map of string to strings containing the HTTP headers and their values from the original
call, excluding any authorization-related headers. If the same header appears multiple times, their
values are combined into a comma-delimited list. The case of the original headers is retained in this
map.
• userIdentity ‐ Details about the identity that made the call to S3 Object Lambda. For more
information, see Logging data events for trails in the AWS CloudTrail User Guide.
• type ‐ The type of identity.
• accountId ‐ The AWS account to which the identity belongs.
• userName ‐ The friendly name of the identity that made the call.
• principalId ‐ The unique identifier for the identity that made the call.
• arn ‐ The ARN of the principal that made the call. The last section of the ARN contains the user or
role that made the call.
• sessionContext ‐ If the request was made with temporary security credentials, this element
provides information about the session that was created for those credentials.
• invokedBy ‐ The name of the AWS service that made the request, such as Amazon EC2 Auto Scaling
or AWS Elastic Beanstalk.
• sessionIssuer ‐ If the request was made with temporary security credentials, this element
provides information about how the credentials were obtained.
• protocolVersion ‐ The version ID of the context provided. The format of this field is {Major
Version}.{Minor Version}. The minor version numbers are always two-digit numbers. Any
removal or change to the semantics of a field will necessitate a major version bump and will require
active opt-in. Amazon S3 can add new fields at any time, at which point you might experience a minor
version bump. Due to the nature of software rollouts, it is possible that you might see multiple minor
versions in use at once.
For more information about how to deploy serverless applications from the AWS Serverless Application
Repository, see Deploying Applications in the AWS Serverless Application Repository Developer Guide.
To get started, simply deploy the following Lambda function in your account and add the ARN in your
Object Lambda access point configuration.
ARN:
arn:aws:serverlessrepo:us-east-1:839782855223:applications/
ComprehendPiiAccessControlS3ObjectLambda
You can view this function on the AWS Management Console by using the following SAR link:
ComprehendPiiAccessControlS3ObjectLambda.
To get started, simply deploy the following Lambda function in your account and add the ARN in your
Object Lambda access point configuration.
ARN:
arn:aws:serverlessrepo:us-east-1:839782855223:applications/
ComprehendPiiRedactionS3ObjectLambda
You can view this function on the AWS Management Console by using the following SAR link:
ComprehendPiiRedactionS3ObjectLambda.
Example 3: Decompression
The Lambda function S3ObjectLambdaDecompression is equipped to decompress objects stored in S3 in
one of six compressed file formats: bzip2, gzip, snappy, zlib, zstandard, and ZIP. To get started,
simply deploy the following Lambda function in your account and add the ARN in your Object Lambda
access point configuration.
ARN:
arn:aws:serverlessrepo:eu-west-1:123065155563:applications/S3ObjectLambdaDecompression
You can view this function on the AWS Management Console by using the following SAR link:
S3ObjectLambdaDecompression.
Topics
• Working with S3 Object Lambda (p. 181)
• AWS Services used in connection with S3 Object Lambda (p. 181)
• Working with Range and partNumber GET headers (p. 181)
• Working with AWS CLI and SDKs (p. 182)
S3 Object Lambda allows up to 60 seconds to stream a complete response to its caller. Your function
is also subject to Lambda default quotas. For more information, see Lambda quotas in the AWS
Lambda Developer Guide. Using S3 Object Lambda invokes your specified Lambda function and you are
responsible for ensuring that any data overwritten or deleted from S3 by your specified Lambda function
or application is intended and correct.
You can only use S3 Object Lambda to perform operations on objects. You can't use S3 Object Lambda to
perform other Amazon S3 operations, such as modifying or deleting buckets. For a complete list of S3
operations that support access points, see Access point compatibility with AWS services (p. 198).
In addition to this list, S3 Object Lambda access points do not support POST Object, Copy (as the source),
or Select Object Content.
When it receives a GET request, S3 Object Lambda invokes your specified Lambda function first. Therefore,
if your GET request contains range or part number parameters, you must ensure that your Lambda
function is equipped to recognize and manage these parameters. Because several entities can be
connected in such a setup (the requesting client and services such as Lambda and S3), it is advised that all
involved entities interpret the requested range (or partNumber) in a uniform manner. This ensures that
the ranges the application expects match the ranges your Lambda function is processing.
When building a function to handle requests with range headers, test all combinations of response sizes,
original object sizes, and request range sizes that your application plans to use.
By default, S3 Object Lambda access points will respond with a 501 to any GetObject request that
contains a range or part number parameter, either in the headers or query parameters. You can confirm
that your Lambda function is prepared to handle range or part requests by updating your Object Lambda
access point configuration through the AWS Management Console or the AWS CLI.
To mitigate this risk we recommend that the Lambda execution role be carefully scoped to the smallest
set of privileges possible. Additionally, the Lambda should make its S3 accesses via the provided pre-
signed URL whenever possible.
Encryption behavior
Because Object Lambda access points use both Amazon S3 and AWS Lambda, there are differences in
encryption behavior. For more information about default S3 encryption behavior, see Setting default
server-side encryption behavior for Amazon S3 buckets (p. 40).
• When you use S3 server-side encryption with Object Lambda access points, the object is decrypted
before being sent to AWS Lambda, where it is processed unencrypted all the way to the original caller
(in the case of a GET).
• To prevent the key from being logged, S3 rejects GET requests for objects encrypted with server-side
encryption using customer-provided keys (SSE-C). The Lambda function can still retrieve these objects,
provided that it has access to the customer-provided key.
• When you use S3 client-side encryption with Object Lambda access points, make sure that Lambda has
access to the key so that it can decrypt and re-encrypt the object.
When a request is made through an Object Lambda access point, S3 either invokes Lambda on your
behalf or delegates the request to the supporting access point, depending upon the S3 Object Lambda
configuration. When Lambda is invoked for GetObject, S3 generates a presigned URL to your object on
your behalf through the supporting access point. Your Lambda function receives this URL as input when
invoked.
You may set your Lambda function to use this URL to retrieve the original object, instead of invoking
S3 directly. This model allows you to apply better security boundaries to your objects. You can limit
direct object access through S3 buckets or S3 access points to a limited set of IAM roles or users. This
also protects your Lambda functions from being subject to the Confused Deputy problem, where a
misconfigured function with different permissions than your GetObject invoker could allow or deny
access to objects when it should not.
Without these permissions, requests to invoke Lambda or delegate to S3 will fail as a 403 Forbidden
error. All access must be made by authenticated principals. If you require public access, Lambda@Edge
can be used as a possible alternative. For more information, see Customizing at the edge with
Lambda@Edge in the Amazon CloudFront Developer Guide.
For more information about standard access points, see Managing data access with Amazon S3 access
points (p. 184).
For information about working with buckets, see Buckets overview (p. 24). For information about
working with objects, see Amazon S3 objects overview (p. 57).
• You can only use access points to perform operations on objects. You can't use access points
to perform other Amazon S3 operations, such as modifying or deleting buckets. For a
complete list of S3 operations that support access points, see Access point compatibility with
AWS services (p. 198).
• Access points work with some, but not all, AWS services and features. For example, you can't
configure Cross-Region Replication to operate through an access point. For a complete list of
AWS services that are compatible with S3 access points, see Access point compatibility with
AWS services (p. 198).
This section explains how to work with Amazon S3 access points. For information about working with
buckets, see Buckets overview (p. 24). For information about working with objects, see Amazon S3
objects overview (p. 57).
Topics
• Configuring IAM policies for using access points (p. 184)
• Creating access points (p. 189)
• Using access points (p. 193)
• Access points restrictions and limitations (p. 200)
Condition keys
S3 access points introduce three new condition keys that can be used in IAM policies to control access to
your resources:
s3:DataAccessPointArn
This is a string that you can use to match on an access point ARN. The following example matches all
access points for AWS account 123456789012 in Region us-west-2:
"Condition" : {
"StringLike": {
"s3:DataAccessPointArn": "arn:aws:s3:us-west-2:123456789012:accesspoint/*"
}
}
s3:DataAccessPointAccount
This is a string operator that you can use to match on the account ID of the owner of an access point.
The following example matches all access points owned by AWS account 123456789012.
"Condition" : {
"StringEquals": {
"s3:DataAccessPointAccount": "123456789012"
}
}
s3:AccessPointNetworkOrigin
This is a string operator that you can use to match on the network origin, either Internet or VPC.
The following example matches only access points with a VPC origin.
"Condition" : {
"StringEquals": {
"s3:AccessPointNetworkOrigin": "VPC"
}
}
For more information about using condition keys with Amazon S3, see Actions, resources, and condition
keys for Amazon S3 (p. 310).
The following example bucket policy delegates access control for the bucket to access points owned by
the bucket owner's account. The bucket name and account ID are example values used elsewhere in this section.
{
 "Version": "2012-10-17",
 "Statement": [{
 "Effect": "Allow",
 "Principal": {"AWS": "*"},
 "Action": "*",
 "Resource": ["arn:aws:s3:::awsexamplebucket1", "arn:aws:s3:::awsexamplebucket1/*"],
 "Condition": {"StringEquals": {"s3:DataAccessPointAccount": "123456789012"}}
 }]
}
To grant the access defined in an access point policy, you can do either of the following:
1. (Recommended) Delegate access control from the bucket to the access point as described in
Delegating access control to access points (p. 185).
2. Add the same permissions contained in the access point policy to the underlying bucket's
policy. The first access point policy example demonstrates how to modify the underlying
bucket policy to allow the necessary access.
The following access point policy grants IAM user Alice in account 123456789012 permissions to
GET and PUT objects with the prefix Alice/ through access point my-access-point in account
123456789012.
{
"Version":"2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:user/Alice"
},
"Action": ["s3:GetObject", "s3:PutObject"],
"Resource": "arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point/object/
Alice/*"
}]
}
Note
For the access point policy to effectively grant access to Alice, the underlying bucket must also
allow the same access to Alice. You can delegate access control from the bucket to the access
point as described in Delegating access control to access points (p. 185). Or, you can add the
following policy to the underlying bucket to grant the necessary permissions to Alice. Note that
the Resource entry differs between the access point and bucket policies.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:user/Alice"
},
"Action": ["s3:GetObject", "s3:PutObject"],
"Resource": "arn:aws:s3:::awsexamplebucket1/Alice/*"
}]
}
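To attach an access point policy like the preceding one, you can use the AWS CLI. The following is a
minimal sketch; the policy is assumed to be saved locally as ap-policy.json, and the account ID and
access point name are the example values used above.

# Attach the policy document to the access point
aws s3control put-access-point-policy \
    --account-id 123456789012 \
    --name my-access-point \
    --policy file://ap-policy.json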
The following access point policy grants IAM user Bob in account 123456789012 permissions to GET
objects through access point my-access-point in account 123456789012 that have the tag key data
set with a value of finance.
{
"Version":"2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Principal" : {
"AWS": "arn:aws:iam::123456789012:user/Bob"
},
"Action":"s3:GetObject",
"Resource" : "arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point/object/
*",
"Condition" : {
"StringEquals": {
"s3:ExistingObjectTag/data": "finance"
}
}
}]
}
The following access point policy allows IAM user Charles in account 123456789012 permission
to view the objects contained in the bucket underlying access point my-access-point in account
123456789012.
{
"Version":"2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:user/Charles"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point"
}]
}
The following service control policy requires all new access points to be created with a VPC network
origin. With this policy in place, users in your organization can't create new access points that are
accessible from the internet.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Principal": "*",
"Action": "s3:CreateAccessPoint",
"Resource": "*",
"Condition": {
"StringNotEquals": {
"s3:AccessPointNetworkOrigin": "VPC"
}
}
}]
}
The following bucket policy limits access to all S3 object operations for bucket examplebucket to
access points with a VPC network origin.
Important
Before using a statement like this example, make sure you don't need to use features that aren't
supported by access points, such as Cross-Region Replication.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Principal": "*",
"Action": [
"s3:AbortMultipartUpload",
"s3:BypassGovernanceRetention",
"s3:DeleteObject",
"s3:DeleteObjectTagging",
"s3:DeleteObjectVersion",
"s3:DeleteObjectVersionTagging",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectLegalHold",
"s3:GetObjectRetention",
"s3:GetObjectTagging",
"s3:GetObjectVersion",
"s3:GetObjectVersionAcl",
"s3:GetObjectVersionTagging",
"s3:ListMultipartUploadParts",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:PutObjectLegalHold",
"s3:PutObjectRetention",
"s3:PutObjectTagging",
"s3:PutObjectVersionAcl",
"s3:PutObjectVersionTagging",
"s3:RestoreObject"
],
"Resource": "arn:aws:s3:::examplebucket/*",
"Condition": {
"StringNotEquals": {
"s3:AccessPointNetworkOrigin": "VPC"
}
}
}
]
}
By default, you can create up to 1,000 access points per Region for each of your AWS accounts. If you
need more than 1,000 access points for a single account in a single Region, you can request a service
quota increase. For more information about service quotas and requesting an increase, see AWS Service
Quotas in the AWS General Reference.
Note
Because you might want to publicize your access point name in order to allow users to use the
access point, we recommend that you avoid including sensitive information in the access point
name.
Topics
• Creating an access point (p. 189)
• Creating access points restricted to a virtual private cloud (p. 190)
• Managing public access to access points (p. 192)
The following examples demonstrate how to create an access point with the AWS CLI and the S3 console.
For more information about how to create access points using the REST API, see CreateAccessPoint in the
Amazon Simple Storage Service API Reference.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the navigation pane on the left side of the console, choose Access points.
3. On the access points page, choose Create access point.
4. In the Access point name field, enter your desired name for the access point. For more information
about naming access points, see Rules for naming Amazon S3 access points (p. 189).
5. In the Bucket name field, enter the name of a bucket in your account to which you want to attach
the access point, for example DOC-EXAMPLE-BUCKET1. Optionally, you can choose Browse S3 to
browse and search buckets in your account. If you choose Browse S3, select the desired bucket and
choose Choose path to populate the Bucket name field with that bucket's name.
6. (Optional) Choose View to view the contents of the specified bucket in a new browser window.
7. Select a Network origin. If you choose Virtual private cloud (VPC), enter the VPC ID that you want
to use with the access point.
For more information about network origins for access points, see Creating access points restricted
to a virtual private cloud (p. 190).
8. Under Block Public Access settings for this Access Point, select the block public access settings
that you want to apply to the access point. All block public access settings are enabled by default for
new access points, and we recommend that you leave all settings enabled unless you know you have
a specific need to disable any of them. Amazon S3 currently doesn't support changing an access
point's block public access settings after the access point has been created.
For more information about using Amazon S3 Block Public Access with access points, see Managing
public access to access points (p. 192).
9. (Optional) Under Access Point policy - optional, specify the access point policy. For more
information about specifying an access point policy, see Access point policy examples (p. 186).
10. Choose Create access point.
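To create an access point with the AWS CLI instead, the following is a minimal sketch; the account ID,
access point name, and bucket name are placeholders.

# Create an access point attached to a bucket that you own
aws s3control create-access-point \
    --account-id 123456789012 \
    --name my-access-point \
    --bucket DOC-EXAMPLE-BUCKET1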
An access point that's only accessible from a specified VPC has a network origin of VPC, and Amazon S3
rejects any request made to the access point that doesn't originate from that VPC.
Important
You can only specify an access point's network origin when you create the access point. After
you create the access point, you can't change its network origin.
To restrict an access point to VPC-only access, you include the VpcConfiguration parameter with the
request to create the access point. In the VpcConfiguration parameter, you specify the ID of the VPC
that you want to be able to use the access point. Amazon S3 then rejects requests made through the access
point unless they originate from that VPC.
You can retrieve an access point's network origin using the AWS CLI, AWS SDKs, or REST APIs. If an access
point has a VPC configuration specified, its network origin is VPC. Otherwise, the access point's network
origin is Internet.
Example: Create an access point restricted to VPC access
The following example creates an access point named example-vpc-ap for bucket example-bucket
in account 123456789012 that allows access only from VPC vpc-1a2b3c. The example then verifies
that the new access point has a network origin of VPC.
AWS CLI
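A minimal sketch of the CLI calls for this example follows; the names and IDs are the example values
above, and the second command retrieves the access point to verify its network origin.

# Create the access point with a VPC configuration
aws s3control create-access-point \
    --account-id 123456789012 \
    --name example-vpc-ap \
    --bucket example-bucket \
    --vpc-configuration VpcId=vpc-1a2b3c

# Retrieve the access point configuration to confirm "NetworkOrigin": "VPC"
aws s3control get-access-point \
    --account-id 123456789012 \
    --name example-vpc-ap

The get-access-point call returns output similar to the following.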
{
"Name": "example-vpc-ap",
"Bucket": "example-bucket",
"NetworkOrigin": "VPC",
"VpcConfiguration": {
"VpcId": "vpc-1a2b3c"
},
"PublicAccessBlockConfiguration": {
"BlockPublicAcls": true,
"IgnorePublicAcls": true,
"BlockPublicPolicy": true,
"RestrictPublicBuckets": true
},
"CreationDate": "2019-11-27T00:00:00Z"
}
To use an access point with a VPC, you must modify the access policy for your VPC endpoint. VPC
endpoints allow traffic to flow from your VPC to Amazon S3. They have access-control policies that
control how resources within the VPC are allowed to interact with S3. Requests from your VPC to S3 only
succeed through an access point if the VPC endpoint policy grants access to both the access point and
the underlying bucket.
The following example policy statement configures a VPC endpoint to allow calls to GetObject for a
bucket named awsexamplebucket1 and an access point named example-vpc-ap.
{
"Version": "2012-10-17",
"Statement": [
{
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::awsexamplebucket1/*",
"arn:aws:s3:us-west-2:123456789012:accesspoint/example-vpc-ap/object/*"
]
}]
}
Note
The "Resource" declaration in this example uses an Amazon Resource Name (ARN) to
specify the access point. For more information about access point ARNs, see Using access
points (p. 193).
For more information about VPC endpoint policies, see Using Endpoint Policies for Amazon S3 in the
VPC User Guide.
For more information about the S3 Block Public Access feature, see Blocking public access to your
Amazon S3 storage (p. 488).
Important
• All block public access settings are enabled by default for access points. You must explicitly
disable any settings that you don't want to apply to an access point.
• Amazon S3 currently doesn't support changing an access point's block public access settings
after the access point has been created.
Example: Create an access point with custom Block Public Access settings
This example creates an access point named example-ap for bucket example-bucket in account
123456789012 with non-default Block Public Access settings. The example then retrieves the new
access point's configuration to verify its Block Public Access settings.
AWS CLI
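A minimal sketch of the CLI calls for this example follows; the names are the example values above, and
the Block Public Access values are passed at creation time because they can't be changed afterward.

# Create an access point with non-default Block Public Access settings
aws s3control create-access-point \
    --account-id 123456789012 \
    --name example-ap \
    --bucket example-bucket \
    --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Retrieve the configuration to verify the settings
aws s3control get-access-point --account-id 123456789012 --name example-ap

The get-access-point call returns output similar to the following.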
{
"Name": "example-ap",
"Bucket": "example-bucket",
"NetworkOrigin": "Internet",
"PublicAccessBlockConfiguration": {
"BlockPublicAcls": false,
"IgnorePublicAcls": false,
"BlockPublicPolicy": true,
"RestrictPublicBuckets": true
},
"CreationDate": "2019-11-27T00:00:00Z"
}
Access points have Amazon Resource Names (ARNs). Access point ARNs are similar to bucket ARNs, but
they are explicitly typed and encode the access point's Region and the AWS account ID of the access
point's owner. For more information about ARNs, see Amazon Resource Names (ARNs) in the AWS
General Reference.
ARNs for objects accessed through an access point use the format arn:aws:s3:region:account-
id:accesspoint/access-point-name/object/resource. For example, the ARN
arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point/object/my-image.jpg refers to the object
with the key my-image.jpg, accessed through the access point my-access-point owned by account
123456789012 in the us-west-2 Region.
Topics
• Monitoring and logging access points (p. 193)
• Using Amazon S3 access points with the Amazon S3 console (p. 194)
• Using a bucket-style alias for your access point (p. 196)
• Using access points with compatible Amazon S3 operations (p. 197)
CloudTrail log entries for requests made through access points will include the access point ARN in the
resources section of the log.
"resources": [
{
"type": "AWS::S3::Object",
"ARN": "arn:aws:s3:::DOC-EXAMPLE-BUCKET1/my-image.jpg"
},
{
"accountId": "123456789012",
"type": "AWS::S3::Bucket",
"ARN": "arn:aws:s3:::DOC-EXAMPLE-BUCKET1"
},
{
"accountId": "DOC-EXAMPLE-BUCKET1",
"type": "AWS::S3::AccessPoint",
"ARN": "arn:aws:s3:us-west-2:DOC-EXAMPLE-BUCKET1:accesspoint/my-bucket-ap"
}
]
For more information about S3 Server Access Logs, see Logging requests using server access
logging (p. 833). For more information about AWS CloudTrail, see What is AWS CloudTrail? in the AWS
CloudTrail User Guide.
Note
S3 access points aren't currently compatible with Amazon CloudWatch metrics.
Topics
• Listing access points for your account (p. 194)
• Listing access points for a bucket (p. 195)
• Viewing configuration details for an access point (p. 195)
• Using an access point (p. 195)
• Viewing block public access settings for an access point (p. 195)
• Editing an access point policy (p. 195)
• Deleting an access point (p. 196)
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the navigation pane on the left side of the console, choose Buckets.
3. On the Buckets page, select the name of the bucket whose access points you want to list.
4. On the bucket detail page, choose the access points tab.
5. Choose the name of the access point you want to manage or use.
• The console view always shows all objects in the bucket. Using an access point as
described in this procedure restricts the operations you can perform on those objects, but
not whether you can see that they exist in the bucket.
• The S3 Management Console doesn't support using virtual private cloud (VPC) access
points to access bucket resources. To access bucket resources from a VPC access point, use
the AWS CLI, AWS SDKs, or Amazon S3 REST APIs.
2. Choose Permissions.
3. Under access point policy, choose Edit.
4. Enter the access point policy in the text field. The console automatically displays the Amazon
Resource Name (ARN) for the access point, which you can use in the policy.
The following shows an example ARN and access point alias for an access point named my-access-
point.
• ARN — arn:aws:s3:region:account-id:accesspoint/my-access-point
• Access point alias — my-access-point-hrzrlukc5m36ft7okagglf3gmwluquse1b-s3alias
For more information about ARNs, see Amazon Resource Names (ARNs) in the AWS General Reference.
When you create an access point, Amazon S3 automatically generates an access point alias name, as
shown in the following example.
"AccessPointArn":
"arn:aws:s3:region:111122223333:accesspoint/my-access-point",
"Alias": "my-access-point-aqfqprnstn7aefdfbarligizwgyfouse1a-s3alias"
}
You can use this access point alias name instead of an Amazon S3 bucket name in any data plane
operation. For a list of these operations, see Access point compatibility with AWS services (p. 198).
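For example, the following is a minimal sketch of retrieving an object by using an access point alias in
place of a bucket name; the alias and object key are example values.

# Use the access point alias wherever a bucket name is accepted in data plane calls
aws s3api get-object \
    --bucket my-access-point-aqfqprnstn7aefdfbarligizwgyfouse1a-s3alias \
    --key my-image.jpg my-image.jpg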
Topics
• Access point compatibility with AWS services (p. 198)
• Access point compatibility with S3 operations (p. 198)
• Request an object through an access point (p. 198)
• Upload an object through an access point alias (p. 199)
• Delete an object through an access point (p. 199)
• List objects through an access point alias (p. 199)
• Add a tag set to an object through an access point (p. 199)
• Grant access permissions through an access point using an ACL (p. 199)
S3 operations
• AbortMultipartUpload
• CompleteMultipartUpload
• CopyObject (same-region copies only)
• CreateMultipartUpload
• DeleteObject
• DeleteObjectTagging
• GetBucketLocation
• GetObject
• GetObjectAcl
• GetObjectLegalHold
• GetObjectRetention
• GetObjectTagging
• HeadBucket
• HeadObject
• ListMultipartUploads
• ListObjects
• ListObjectsV2
• ListParts
• Presign
• PutObject
• PutObjectLegalHold
• PutObjectRetention
• PutObjectAcl
• PutObjectTagging
• RestoreObject
• UploadPart
• UploadPartCopy (same-region copies only)
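The following are minimal AWS CLI sketches for the operations listed in the topics above. The access
point ARN, alias, object key, and tag values are placeholders based on examples elsewhere in this section.

# Request an object through an access point (by ARN)
aws s3api get-object --bucket arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point --key my-image.jpg my-image.jpg

# Upload an object through an access point alias
aws s3api put-object --bucket my-access-point-hrzrlukc5m36ft7okagglf3gmwluquse1b-s3alias --key my-image.jpg --body my-image.jpg

# Delete an object through an access point
aws s3api delete-object --bucket arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point --key my-image.jpg

# List objects through an access point alias
aws s3api list-objects-v2 --bucket my-access-point-hrzrlukc5m36ft7okagglf3gmwluquse1b-s3alias

# Add a tag set to an object through an access point
aws s3api put-object-tagging --bucket arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point --key my-image.jpg --tagging 'TagSet=[{Key=data,Value=finance}]'

# Grant access permissions through an access point by using a canned ACL
aws s3api put-object-acl --bucket arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point --key my-image.jpg --acl private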
• You can only create access points for buckets that you own.
• Each access point is associated with exactly one bucket, which you must specify when you create the
access point. After you create an access point, you can't associate it with a different bucket. However,
you can delete an access point and then create another one with the same name associated with a
different bucket.
• Access point names must meet certain conditions. For more information about naming access points,
see Rules for naming Amazon S3 access points (p. 189).
• After you create an access point, you can't change its virtual private cloud (VPC) configuration.
• Access point policies are limited to 20 KB in size.
• You can create a maximum of 1,000 access points per AWS account per Region. If you need more than
1,000 access points for a single account in a single Region, you can request a service quota increase.
For more information about service quotas and requesting an increase, see AWS Service Quotas in the
AWS General Reference.
• You can't use an access point as a destination for S3 Replication. For more information about
replication, see Replicating objects (p. 623).
• You can only address access points using virtual-host-style URLs. For more information about virtual-
host-style addressing, see Accessing a bucket (p. 33).
• APIs that control access point functionality (for example, CreateAccessPoint and
GetAccessPointPolicy) don't support cross-account calls.
• You must use AWS Signature Version 4 when making requests to an access point using the REST APIs.
For more information about authenticating requests, see Authenticating Requests (AWS Signature
Version 4) in the Amazon Simple Storage Service API Reference.
• Access points only support access over HTTPS.
• Access points don't support anonymous access.
When you create a Multi-Region Access Point, you specify a set of Regions where you want to store data
to be served through that Multi-Region Access Point. You can use S3 Cross-Region Replication (CRR)
to synchronize data among buckets in those Regions. You can then request or write data through the
Multi-Region Access Point global endpoint. Amazon S3 serves the request from the replicated data set in
the available Region with the lowest latency, over the AWS global network. Multi-Region Access Points
are also compatible with applications running in Amazon virtual private clouds (VPCs), including those
using AWS PrivateLink for Amazon S3 (p. 266).
The following is a graphical representation of a Multi-Region Access Point and how it routes requests to
buckets.
Topics
• Creating Multi-Region Access Points (p. 202)
• Making requests using a Multi-Region Access Point (p. 208)
• Managing Multi-Region Access Points (p. 213)
• Monitoring and logging requests made through a Multi-Region Access Point to underlying
resources (p. 214)
• Multi-Region Access Point restrictions and limitations (p. 216)
When you use the API, the request to create a Multi-Region Access Point is asynchronous. When you
submit a request to create a Multi-Region Access Point, Amazon S3 synchronously authorizes the
request. It then immediately returns a token that you can use to track the progress of the creation
request. For more information about tracking asynchronous requests to create and manage Multi-Region
Access Points, see Managing Multi-Region Access Points (p. 213).
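For example, the following is a minimal sketch of checking the status of a creation request with the
AWS CLI; the account ID is a placeholder, and the request token ARN is the value returned by the
create call.

# Check the status of an asynchronous Multi-Region Access Point request
aws s3control describe-multi-region-access-point-operation \
    --account-id 123456789012 \
    --request-token-arn <request-token-arn-returned-by-create>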
After you create the Multi-Region Access Point, you can create an access control policy for it. Each Multi-
Region Access Point can have an associated policy. A Multi-Region Access Point policy is a resource-
based policy that allows you to limit the use of the Multi-Region Access Point by resource, user, or other
conditions.
Note
For an application or user to be able to access an object through a Multi-Region Access Point,
both the access policy for the Multi-Region Access Point and the access policy for the underlying
buckets that contain the object must permit the request. When the two policies are different,
the more restrictive policy takes precedence.
Using a bucket with a Multi-Region Access Point does not change the bucket's behavior when the
bucket is accessed through the existing bucket name or an Amazon Resource Name (ARN). All existing
operations against the bucket continue to work as before. Restrictions that you include in a Multi-Region
Access Point policy apply only to requests that are made through the Multi-Region Access Point.
You can update the policy for a Multi-Region Access Point after creating it, but you can't delete the
policy. The closest possible approximation to deleting a policy is to update the Multi-Region Access Point
policy to deny all permissions.
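For example, the following is a minimal sketch of updating a Multi-Region Access Point policy with the
AWS CLI; the account ID is a placeholder, and policy-details.json is assumed to contain the Multi-Region
Access Point name and the policy document.

# Submit an updated Multi-Region Access Point policy (this is also an asynchronous request)
aws s3control put-multi-region-access-point-policy \
    --account-id 123456789012 \
    --details file://policy-details.json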
Topics
• Rules for naming Amazon S3 Multi-Region Access Points (p. 203)
• Rules for choosing buckets for Amazon S3 Multi-Region Access Points (p. 204)
• Blocking public access with Amazon S3 Multi-Region Access Points (p. 204)
• Creating Amazon S3 Multi-Region Access Points (p. 205)
• Configuring a Multi-Region Access Point for use with AWS PrivateLink (p. 206)
You use this name when invoking Multi-Region Access Point management operations, such as
GetMultiRegionAccessPoint and PutMultiRegionAccessPointPolicy. The name is not
used to send requests to the Multi-Region Access Point, and it doesn’t need to be exposed to clients who
make requests using the Multi-Region Access Point.
When Amazon S3 creates a Multi-Region Access Point, it automatically assigns an alias to it. This alias is
a unique alphanumeric string that ends in .mrap. The alias is used to construct the hostname and the
Amazon Resource Name (ARN) for a Multi-Region Access Point. The fully qualified name is also based on
the alias for the Multi-Region Access Point.
You can’t determine the name of a Multi-Region Access Point from its alias, so you can disclose an
alias without risk of exposing the name, purpose, or owner of the Multi-Region Access Point. Amazon
S3 selects the alias for each new Multi-Region Access Point, and the alias can’t be changed. For more
information about addressing a Multi-Region Access Point, see Making requests using a Multi-Region
Access Point (p. 208).
Multi-Region Access Point aliases are unique throughout time and aren’t based on the name or
configuration of a Multi-Region Access Point. If you create a Multi-Region Access Point, and then delete
it and create another one with the same name and configuration, the second Multi-Region Access Point
will have a different alias than the first. New Multi-Region Access Points can never have the same alias as
a previous Multi-Region Access Point.
• You can specify the buckets that are associated with a Multi-Region Access Point only at the
time that you create it. After it is created, you can’t add, modify, or remove buckets from the
Multi-Region Access Point configuration. To change the buckets, you must delete the entire
Multi-Region Access Point and create a new one.
• You can't delete a bucket that is part of a Multi-Region Access Point. If you want to delete a
bucket attached to a Multi-Region Access Point, delete the Multi-Region Access Point first.
• The AWS account that owns the Multi-Region Access Point must also own the associated
buckets. For more information about using permissions with Multi-Region Access Points, see
Multi-Region Access Point permissions (p. 210).
• Not all Regions support Multi-Region Access Points. To see the list of supported Regions, see
Multi-Region Access Point restrictions and limitations (p. 216).
You can create replication rules to synchronize data between buckets. These rules enable you to
automatically copy data from source buckets to destination buckets. Having buckets connected to a
Multi-Region Access Point does not affect how replication works. Configuring replication with Multi-
Region Access Points is described in a later section.
When you make a request to a Multi-Region Access Point, the Multi-Region Access Point doesn't consider
which bucket can fulfill the request. This is why replication is recommended: without it, one of the
buckets behind the Multi-Region Access Point might have the necessary data, but there is no guarantee
that it will receive the request. For more information, see Configuring bucket replication for use with
Multi-Region Access Points (p. 212).
When Amazon S3 authorizes a request, it applies the most restrictive combination of these settings. If
the Block Public Access settings for any of these resources (the Multi-Region Access Point, the underlying
bucket, or the owner account) block access for the requested action or resource, Amazon S3 rejects the
request.
We recommend that you enable all Block Public Access settings unless you have a specific need to
disable any of them. By default, all Block Public Access settings are enabled for a Multi-Region Access
Point. Be aware that if Block Public Access is enabled, the Multi-Region Access Point will not be able to
accept internet-based requests.
Important
Amazon S3 doesn’t currently support changing the Block Public Access settings for a Multi-
Region Access Point after it has been created.
For more information about Amazon S3 Block Public Access, see Blocking public access to your Amazon
S3 storage (p. 488).
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the navigation pane, choose Multi-Region Access Points.
3. In the Multi-Region Access Point name field, supply a name for the Multi-Region Access Point.
4. To select the buckets that will be associated with this Multi-Region Access Point, choose Add buckets.
To create a new bucket, choose Create bucket. After creating the bucket, choose Add buckets to add
the bucket to the Multi-Region Access Point.
For more information about creating buckets, see Creating a bucket (p. 28).
5. Under Block Public Access settings for this Multi-Region Access Point, select the Block Public
Access settings that you want to apply to the Multi-Region Access Point. By default, all Block Public
Access settings are enabled for new Multi-Region Access Points. We recommend that you leave all
settings enabled unless you know that you have a specific need to disable any of them.
Note
Amazon S3 currently doesn't support changing a Multi-Region Access Point's Block Public
Access settings after the Multi-Region Access Point has been created.
6. Choose Create Multi-Region Access Point.
The following example creates a Multi-Region Access Point backed by two buckets using the AWS CLI. The
Multi-Region Access Point name and account ID are placeholder values.

aws s3control create-multi-region-access-point \
    --account-id 123456789012 \
    --details '{
        "Name": "example-multi-region-access-point",
        "PublicAccessBlock": {
            "BlockPublicAcls": true,
            "IgnorePublicAcls": true,
            "BlockPublicPolicy": true,
            "RestrictPublicBuckets": true
        },
        "Regions": [
            { "Bucket": "DOC-EXAMPLE-BUCKET1" },
            { "Bucket": "DOC-EXAMPLE-BUCKET2" }
        ]
    }'
Topics
• Configuring a Multi-Region Access Point for use with AWS PrivateLink (p. 206)
To make requests to a Multi-Region Access Point via interface endpoints, follow these steps to configure
the VPC and the Multi-Region Access Point.
1. Create or have an appropriate VPC endpoint that can connect to Multi-Region Access Points, as
sketched in the example after these steps. For more information about creating VPC endpoints, see
Interface VPC endpoints in the VPC User Guide.
Important
Make sure to create a com.amazonaws.s3-global.accesspoint endpoint. Other endpoint
types cannot access Multi-Region Access Points.
After this VPC endpoint is created, all Multi-Region Access Point requests in the VPC route through
this endpoint if you have private DNS enabled for the endpoint. This is enabled by default.
2. If the Multi-Region Access Point policy does not support connections from VPC endpoints, you will
need to update it.
3. Verify that the individual bucket policies will allow access to the users of the Multi-Region Access
Point.
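For step 1, the following is a minimal AWS CLI sketch of creating the interface endpoint; the VPC,
subnet, and security group IDs are placeholders.

# Create an interface endpoint for the Multi-Region Access Point global service name
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-1a2b3c \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.s3-global.accesspoint \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0 \
    --private-dns-enabled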
Remember that Multi-Region Access Points work by routing requests to buckets, not by fulfilling requests
themselves. This matters because the originator of the request must have permissions both for the
Multi-Region Access Point and for the individual buckets in the Multi-Region Access Point. Otherwise,
the request might be routed to a bucket where the originator doesn't have permissions to fulfill the
request. A Multi-Region Access Point and its buckets must be owned by the same AWS account. However,
VPCs from different accounts can use a Multi-Region Access Point if the permissions are configured
correctly.
Because of this, the VPC endpoint policy must allow access both to the Multi-Region Access Point
and to each underlying bucket that you want to fulfill requests. For example, suppose that you have a
Multi-Region Access Point with the alias mfzwi23gnjvgw.mrap that is backed by the buckets
doc-examplebucket1 and doc-examplebucket2. The following VPC endpoint policy grants GetObject access to
the Multi-Region Access Point and to both buckets.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Read-buckets-and-MRAP-VPCE-policy",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::doc-examplebucket1/*",
"arn:aws:s3:::doc-examplebucket2/*",
"arn:aws:s3::123456789012:accesspoint/mfzwi23gnjvgw.mrap/object/*"
]
}]
}
As mentioned previously, you also must make sure that the Multi-Region Access Point policy is
configured to support access through a VPC endpoint. You don't need to specify the VPC endpoint that
is requesting access. The following sample policy grants access to any requester trying to use the
Multi-Region Access Point for GetObject requests.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Open-read-MRAP-policy"
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3::123456789012:accesspoint/mfzwi23gnjvgw.mrap/object/*",
}]
}
The individual buckets also each need a policy that supports access from requests submitted through the
VPC endpoint. The following example policy grants read access to anonymous users, which includes
requests made through the VPC endpoint.
{
"Version":"2012-10-17",
"Statement": [
{
"Sid": "Public-read",
"Effect":"Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::doc-examplebucket1",
"arn:aws:s3:::doc-examplebucket2/*"]
}]
}
For more information about editing a VPC endpoint policy, see Control access to services with VPC
endpoints in the VPC User Guide.