
Amazon Simple Storage Service

User Guide
API Version 2006-03-01

Amazon Simple Storage Service: User Guide


Copyright © 2021 Amazon Web Services, Inc. and/or its affiliates. All rights reserved.

Amazon's trademarks and trade dress may not be used in connection with any product or service that is not
Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or
discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may
or may not be affiliated with, connected to, or sponsored by Amazon.

Table of Contents
What is Amazon S3? ........................................................................................................................... 1
How do I...? ............................................................................................................................... 1
Advantages of using Amazon S3 .................................................................................................. 1
Amazon S3 concepts .................................................................................................................. 2
Buckets ............................................................................................................................. 2
Objects ............................................................................................................................. 3
Keys ................................................................................................................................. 3
Regions ............................................................................................................................. 3
Amazon S3 data consistency model ...................................................................................... 3
Amazon S3 features ................................................................................................................... 5
Storage classes .................................................................................................................. 6
Bucket policies ................................................................................................................... 6
AWS identity and access management .................................................................................. 7
Access control lists ............................................................................................................. 7
Versioning ......................................................................................................................... 7
Operations ........................................................................................................................ 7
Amazon S3 application programming interfaces (API) ..................................................................... 7
The REST interface ............................................................................................................. 8
The SOAP interface ............................................................................................................ 8
Paying for Amazon S3 ................................................................................................................ 8
Related services ......................................................................................................................... 8
Getting started ................................................................................................................................ 10
Setting up ............................................................................................................................... 10
Sign up for AWS .............................................................................................................. 11
Create an IAM user ........................................................................................................... 11
Sign in as an IAM user ...................................................................................................... 12
Step 1: Create a bucket ............................................................................................................. 12
Step 2: Upload an object .......................................................................................................... 13
Step 3: Download an object ...................................................................................................... 14
Step 4: Copy an object ............................................................................................................. 14
Step 5: Delete the objects and bucket ........................................................................................ 15
Emptying your bucket ....................................................................................................... 15
Deleting an object ............................................................................................................ 16
Deleting your bucket ........................................................................................................ 16
Where do I go from here? ......................................................................................................... 16
Common use scenarios ...................................................................................................... 17
Considerations ................................................................................................................. 17
Advanced features ............................................................................................................ 18
Changing the console language ......................................................................................... 18
Access control .................................................................................................................. 19
Development resources ..................................................................................................... 23
Working with buckets ....................................................................................................................... 24
Buckets overview ...................................................................................................................... 24
About permissions ............................................................................................................ 25
Managing public access to buckets ..................................................................................... 25
Bucket configuration ......................................................................................................... 26
Naming rules ........................................................................................................................... 27
Example bucket names ..................................................................................................... 28
Creating a bucket ..................................................................................................................... 28
Viewing bucket properties ......................................................................................................... 33
Accessing a bucket ................................................................................................................... 33
Virtual-hosted–style access ................................................................................................ 34
Path-style access .............................................................................................................. 34

Accessing an S3 bucket over IPv6 ....................................................................................... 34


Accessing a bucket through S3 Access Points ....................................................................... 35
Accessing a bucket using S3:// ........................................................................................... 35
Emptying a bucket ................................................................................................................... 35
Deleting a bucket ..................................................................................................................... 37
Setting default bucket encryption .............................................................................................. 39
Using encryption for cross-account operations ..................................................................... 40
Using default encryption with replication ............................................................................ 40
Using Amazon S3 Bucket Keys with default encryption ......................................................... 41
Enabling default encryption ............................................................................................... 41
Monitoring default encryption ........................................................................................... 43
Configuring Transfer Acceleration ............................................................................................... 43
Why use Transfer Acceleration? .......................................................................................... 44
Requirements for using Transfer Acceleration ....................................................................... 44
Getting Started ................................................................................................................ 45
Enabling Transfer Acceleration ........................................................................................... 46
Speed Comparison tool ..................................................................................................... 50
Using Requester Pays ................................................................................................................ 51
How Requester Pays charges work ...................................................................................... 51
Configuring Requester Pays ............................................................................................... 52
Retrieving the requestPayment configuration ....................................................................... 53
Downloading objects in Requester Pays buckets ................................................................... 54
Restrictions and limitations ....................................................................................................... 54
Working with objects ........................................................................................................................ 56
Objects ................................................................................................................................... 56
Subresources .................................................................................................................... 57
Creating object keys ................................................................................................................. 58
Object key naming guidelines ............................................................................................ 58
Working with metadata ............................................................................................................. 60
System-defined object metadata ........................................................................................ 61
User-defined object metadata ............................................................................................ 62
Editing object metadata .................................................................................................... 63
Uploading objects .................................................................................................................... 65
Using multipart upload ............................................................................................................. 72
Multipart upload process ................................................................................................... 73
Concurrent multipart upload operations .............................................................................. 74
Multipart upload and pricing ............................................................................................. 74
API support for multipart upload ....................................................................................... 74
Multipart upload API and permissions ................................................................................. 75
Configuring a lifecycle policy ............................................................................................. 77
Uploading an object using multipart upload ........................................................................ 78
Uploading a directory ....................................................................................................... 88
Listing multipart uploads .................................................................................................. 90
Tracking a multipart upload .............................................................................................. 92
Aborting a multipart upload .............................................................................................. 94
Copying an object ............................................................................................................ 98
Multipart upload limits .................................................................................................... 102
Copying objects ...................................................................................................................... 102
To copy an object ........................................................................................................... 103
Downloading an object ........................................................................................................... 109
Deleting objects ..................................................................................................................... 115
Programmatically deleting objects from a version-enabled bucket ........................................ 116
Deleting objects from an MFA-enabled bucket .................................................................... 116
Deleting a single object ................................................................................................... 117
Deleting multiple objects ................................................................................................. 123
Organizing and listing objects .................................................................................................. 135
Using prefixes ................................................................................................................ 136

Listing objects ................................................................................................................ 137


Using folders ................................................................................................................. 141
Viewing an object overview ............................................................................................. 143
Viewing object properties ................................................................................................ 144
Using a presigned URL ............................................................................................................ 144
Limiting presigned URL capabilities ................................................................................... 145
Generating a presigned URL ............................................................................................ 146
Uploading objects using presigned URLs ............................................................................ 149
Using BitTorrent ..................................................................................................................... 153
How you are charged for BitTorrent delivery ...................................................................... 154
Using BitTorrent to retrieve objects stored in Amazon S3 ..................................................... 154
Publishing content using Amazon S3 and BitTorrent ........................................................... 155
Security ......................................................................................................................................... 156
Data protection ...................................................................................................................... 156
Data encryption ..................................................................................................................... 157
Server-side encryption .................................................................................................... 157
Using client-side encryption ............................................................................................. 198
Internetwork privacy ............................................................................................................... 202
Traffic between service and on-premises clients and applications .......................................... 202
Traffic between AWS resources in the same Region ............................................................. 202
AWS PrivateLink for Amazon S3 ............................................................................................... 202
Types of VPC endpoints .................................................................................................. 203
Accessing Amazon S3 interface endpoints .......................................................................... 203
Accessing buckets and S3 access points from S3 interface endpoints ..................................... 204
Updating an on-premises DNS configuration ...................................................................... 206
Creating a VPC endpoint policy ........................................................................................ 207
Identity and access management .............................................................................................. 209
Overview ....................................................................................................................... 210
Access policy guidelines ................................................................................................... 215
Request authorization ..................................................................................................... 219
Bucket policies and user policies ....................................................................................... 226
Managing access with ACLs .............................................................................................. 383
Using CORS ................................................................................................................... 397
Blocking public access ..................................................................................................... 408
Using access points ......................................................................................................... 418
Reviewing bucket access .................................................................................................. 432
Controlling object ownership ........................................................................................... 436
Verifying bucket ownership .............................................................................................. 438
Logging and monitoring .......................................................................................................... 442
Compliance Validation ............................................................................................................. 443
Resilience .............................................................................................................................. 444
Backup encryption .......................................................................................................... 445
Infrastructure security ............................................................................................................. 446
Configuration and vulnerability analysis .................................................................................... 447
Security Best Practices ............................................................................................................ 448
Amazon S3 Preventative Security Best Practices ................................................................. 448
Amazon S3 Monitoring and Auditing Best Practices ............................................................ 450
Managing storage ........................................................................................................................... 453
Using S3 Versioning ................................................................................................................ 453
Unversioned, versioning-enabled, and versioning-suspended buckets ..................................... 453
Using S3 Versioning with S3 Lifecycle ............................................................................... 454
How S3 Versioning works ................................................................................................ 454
Enabling versioning on buckets ........................................................................................ 457
Configuring MFA delete ................................................................................................... 460
Working with versioning-enabled objects ........................................................................... 462
Working with versioning-suspended objects ....................................................................... 477
Working with archived objects ......................................................................................... 479

Using Object Lock .................................................................................................................. 488


Overview ....................................................................................................................... 489
Configuring Object Lock .................................................................................................. 492
Managing object locks ..................................................................................................... 493
Managing storage classes ........................................................................................................ 496
Frequently accessed objects ............................................................................................. 497
Automatically optimizing data with changing or unknown access patterns ............................. 497
Infrequently accessed objects ........................................................................................... 498
Archiving objects ............................................................................................................ 498
Amazon S3 on Outposts .................................................................................................. 499
Comparing storage classes ............................................................................................... 499
Setting the storage class of an object ............................................................................... 500
Managing lifecycle .................................................................................................................. 501
Managing object lifecycle ................................................................................................ 501
Creating a lifecycle configuration ...................................................................................... 501
Transitioning objects ....................................................................................................... 502
Expiring objects .............................................................................................................. 507
Setting lifecycle configuration .......................................................................................... 507
Using other bucket configurations .................................................................................... 517
Lifecycle configuration elements ...................................................................................... 519
Examples of lifecycle configuration ................................................................................... 525
Managing inventory ................................................................................................................ 535
Amazon S3 inventory buckets .......................................................................................... 535
Inventory lists ................................................................................................................ 536
Configuring Amazon S3 inventory .................................................................................... 537
Setting up notifications for inventory completion ............................................................... 541
Locating your inventory .................................................................................................. 541
Querying inventory with Athena ....................................................................................... 544
Replicating objects .................................................................................................................. 545
Why use replication ........................................................................................................ 546
When to use Cross-Region Replication .............................................................................. 547
When to use Same-Region Replication .............................................................................. 547
Requirements for replication ............................................................................................ 547
What's replicated? ........................................................................................................... 548
Setting up replication ..................................................................................................... 550
Configuring replication .................................................................................................... 564
Additional configurations ................................................................................................. 590
Getting replication status ................................................................................................ 604
Troubleshooting ............................................................................................................. 606
Additional considerations ................................................................................................. 607
Using object tags ................................................................................................................... 609
API operations related to object tagging ........................................................................... 610
Additional configurations ................................................................................................. 611
Access control ................................................................................................................ 612
Managing object tags ...................................................................................................... 615
Using cost allocation tags ........................................................................................................ 618
More Info ...................................................................................................................... 619
Billing and usage reporting .............................................................................................. 619
Using Amazon S3 Select .......................................................................................................... 634
Requirements and limits .................................................................................................. 634
Constructing a request .................................................................................................... 635
Errors ............................................................................................................................ 635
Selecting content ........................................................................................................... 636
SQL Reference ............................................................................................................... 638
Performing Batch Operations ................................................................................................... 662
Batch Ops basics ............................................................................................................ 662
Granting permissions ...................................................................................................... 663

Creating a job ................................................................................................................ 669


Operations ..................................................................................................................... 676
Managing jobs ................................................................................................................ 688
Using tags ..................................................................................................................... 695
Managing S3 Object Lock ................................................................................................ 705
Copying objects across AWS accounts ............................................................................... 721
Tracking a Batch Operations job ....................................................................................... 726
Completion reports ......................................................................................................... 729
Monitoring Amazon S3 .................................................................................................................... 732
Monitoring tools ..................................................................................................................... 732
Automated tools ............................................................................................................ 732
Manual tools .................................................................................................................. 733
Logging options ..................................................................................................................... 733
Logging with CloudTrail .......................................................................................................... 735
Using CloudTrail logs with Amazon S3 server access logs and CloudWatch Logs ...................... 735
CloudTrail tracking with Amazon S3 SOAP API calls ............................................................ 736
CloudTrail events ............................................................................................................ 736
Example log files ............................................................................................................ 740
Enabling CloudTrail ......................................................................................................... 744
Identifying S3 requests ................................................................................................... 745
Logging server access ............................................................................................................. 751
How do I enable log delivery? .......................................................................................... 751
Log object key format ..................................................................................................... 752
How are logs delivered? .................................................................................................. 752
Best effort server log delivery .......................................................................................... 752
Bucket logging status changes take effect over time ........................................................... 753
Enabling server access logging ......................................................................................... 753
Log format .................................................................................................................... 759
Deleting log files ............................................................................................................ 768
Identifying S3 requests ................................................................................................... 768
Monitoring metrics with CloudWatch ........................................................................................ 772
Metrics and dimensions ................................................................................................... 773
Accessing CloudWatch metrics .......................................................................................... 779
CloudWatch metrics configurations ................................................................................... 780
Amazon S3 Event Notifications ................................................................................................ 785
Overview ....................................................................................................................... 785
Notification types and destinations ................................................................................... 787
Granting permissions ...................................................................................................... 789
Enabling event notifications ............................................................................................. 792
Walkthrough: Configuring SNS or SQS .............................................................................. 794
Configuring notifications using object key name filtering ..................................................... 800
Event message structure ................................................................................................. 804
Using analytics and insights ............................................................................................................. 809
Storage Class Analysis ............................................................................................................. 809
How to set up storage class analysis ................................................................................. 809
Storage class analysis ...................................................................................................... 810
How can I export storage class analysis data? .................................................................... 811
Configuring storage class analysis ..................................................................................... 812
S3 Storage Lens ..................................................................................................................... 814
Understanding S3 Storage Lens ....................................................................................... 815
Working with Organizations ............................................................................................. 819
Setting permissions ........................................................................................................ 821
Viewing storage metrics .................................................................................................. 823
Metrics glossary ............................................................................................................. 828
Examples and walk-through ............................................................................................. 832
Tracing requests using X-Ray ................................................................................................... 855
How X-Ray works with Amazon S3 ................................................................................... 855

Available Regions ........................................................................................................... 856


Hosting a static website .................................................................................................................. 857
Website endpoints .................................................................................................................. 857
Website endpoint examples ............................................................................................. 858
Adding a DNS CNAME ..................................................................................................... 858
Using a custom domain with Route 53 .............................................................................. 859
Key differences between a website endpoint and a REST API endpoint ................................... 859
Enabling website hosting ......................................................................................................... 859
Configuring an index document ............................................................................................... 863
Index document and folders ............................................................................................ 863
Configure an index document .......................................................................................... 864
Configuring a custom error document ....................................................................................... 865
Amazon S3 HTTP response codes ..................................................................................... 865
Configuring a custom error document ............................................................................... 866
Setting permissions for website access ...................................................................................... 867
Step 1: Edit S3 Block Public Access settings ....................................................................... 868
Step 2: Add a bucket policy ............................................................................................. 869
Logging web traffic ................................................................................................................. 870
Configuring a redirect ............................................................................................................. 871
Setting an object redirect using the Amazon S3 console ...................................................... 871
Setting an object redirect using the REST API .................................................................... 872
Redirecting requests for a bucket's website endpoint to another host .................................... 873
Configuring advanced conditional redirects ........................................................................ 873
Example walkthroughs ............................................................................................................ 878
Configuring a static website ............................................................................................. 878
Configuring a static website using a custom domain ........................................................... 884
Developing with Amazon S3 ............................................................................................................ 900
Making requests ..................................................................................................................... 900
About access keys ........................................................................................................... 900
Request endpoints .......................................................................................................... 902
Making requests over IPv6 ............................................................................................... 902
Making requests using the AWS SDKs ............................................................................... 909
Making requests using the REST API ................................................................................. 933
Using the AWS CLI .................................................................................................................. 942
Using the AWS SDKs ............................................................................................................... 943
Specifying the Signature Version in Request Authentication ................................................. 944
Using the AWS SDK for Java ............................................................................................ 949
Using the AWS SDK for .NET ............................................................................................ 950
Using the AWS SDK for PHP and Running PHP Examples ..................................................... 952
Using the AWS SDK for Ruby - Version 3 ........................................................................... 953
Using the AWS SDK for Python (Boto) ............................................................................... 954
Using the AWS Mobile SDKs for iOS and Android ............................................................... 954
Using the AWS Amplify JavaScript Library ......................................................................... 954
Using the AWS SDK for JavaScript .................................................................................... 955
Using the REST API ................................................................................................................ 955
Request routing .............................................................................................................. 955
Error handling ........................................................................................................................ 959
The REST error response ................................................................................................. 960
The SOAP error response ................................................................................................. 961
Amazon S3 error best practices ........................................................................................ 961
Reference .............................................................................................................................. 962
Appendix a: Using the SOAP API ...................................................................................... 963
Appendix b: Authenticating requests (AWS signature version 2) ............................................ 965
Optimizing Amazon S3 performance ................................................................................................. 994
Performance Guidelines ........................................................................................................... 994
Measure Performance ..................................................................................................... 995
Scale Horizontally ........................................................................................................... 995

Use Byte-Range Fetches .................................................................................................. 995


Retry Requests ............................................................................................................... 995
Combine Amazon S3 and Amazon EC2 in the Same Region .................................................. 996
Use Transfer Acceleration to Minimize Latency ................................................................... 996
Use the Latest AWS SDKs ................................................................................................ 996
Performance Design Patterns ................................................................................................... 996
Caching Frequently Accessed Content ............................................................................... 997
Timeouts and Retries for Latency-Sensitive Apps ................................................................ 997
Horizontal Scaling and Request Parallelization ................................................................... 998
Accelerating Geographically Disparate Data Transfers ......................................................... 999
Using S3 on Outposts ................................................................................................................... 1000
Getting started with Amazon S3 on Outposts .......................................................................... 1000
Order an Outpost ......................................................................................................... 1001
Setting up S3 on Outposts ............................................................................................ 1001
Restrictions and limitations .................................................................................................... 1001
Specifications ............................................................................................................... 1002
Data consistency model ................................................................................................. 1002
Supported API operations .............................................................................................. 1002
Unsupported Amazon S3 features ................................................................................... 1003
Network restrictions ...................................................................................................... 1004
Using IAM with S3 on Outposts .............................................................................................. 1004
ARNs for S3 on Outposts ............................................................................................... 1005
Working with S3 on Outposts ................................................................................................ 1005
Access using ARNs ........................................................................................................ 1006
Accessing S3 on Outposts using VPC ............................................................................... 1007
Monitoring S3 on Outposts ............................................................................................ 1008
Examples ............................................................................................................................. 1009
S3 on Outposts examples using the AWS CLI ................................................................... 1009
S3 on Outposts examples using the SDK for Java ............................................................. 1014
Troubleshooting ............................................................................................................................ 1034
Troubleshooting Amazon S3 by Symptom ................................................................................ 1034
Significant Increases in HTTP 503 Responses to Requests to Buckets with Versioning Enabled .. 1034
Unexpected Behavior When Accessing Buckets Set with CORS ............................................ 1035
Getting Amazon S3 Request IDs for AWS Support ..................................................................... 1035
Using HTTP to Obtain Request IDs .................................................................................. 1035
Using a Web Browser to Obtain Request IDs .................................................................... 1036
Using AWS SDKs to Obtain Request IDs ........................................................................... 1036
Using the AWS CLI to Obtain Request IDs ........................................................................ 1037
Related Topics ...................................................................................................................... 1037
Document History ......................................................................................................................... 1039
Earlier Updates ..................................................................................................................... 1045
AWS glossary ............................................................................................................................... 1059


Welcome to the new Amazon S3 User Guide! The Amazon S3 User Guide combines information and
instructions from the three retired guides: Amazon S3 Developer Guide, Amazon S3 Console User Guide,
and Amazon S3 Getting Started Guide.


What is Amazon S3?


This introduction to Amazon Simple Storage Service (Amazon S3) provides a detailed summary of this
web service. After reading this section, you should have a good idea of what it offers and how it can fit in
with your business.

Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of
data, at any time, from anywhere on the web.

This guide describes how you send requests to create buckets, store and retrieve your objects, and
manage permissions on your resources. The guide also describes access control and the authentication
process. Access control defines who can access objects and buckets within Amazon S3, and the type of
access (for example, READ and WRITE). The authentication process verifies the identity of a user who is
trying to access Amazon Web Services (AWS).

Topics
• How do I...? (p. 1)
• Advantages of using Amazon S3 (p. 1)
• Amazon S3 concepts (p. 2)
• Amazon S3 features (p. 5)
• Amazon S3 application programming interfaces (API) (p. 7)
• Paying for Amazon S3 (p. 8)
• Related services (p. 8)

How do I...?
Information                                  Relevant sections

General product overview and pricing        Amazon S3

How do I work with buckets?                  Buckets overview (p. 24)

How do I work with access points?            Managing data access with Amazon S3 access points (p. 418)

How do I work with objects?                  Amazon S3 objects overview (p. 56)

How do I make requests?                      Making requests (p. 900)

How do I manage access to my resources?      Identity and access management in Amazon S3 (p. 209)

Advantages of using Amazon S3


Amazon S3 is intentionally built with a minimal feature set that focuses on simplicity and robustness.
Following are some of the advantages of using Amazon S3:


• Creating buckets – Create and name a bucket that stores data. Buckets are the fundamental
containers in Amazon S3 for data storage.
• Storing data – Store an infinite amount of data in a bucket. Upload as many objects as you like into
an Amazon S3 bucket. Each object can contain up to 5 TB of data. Each object is stored and retrieved
using a unique developer-assigned key.
• Downloading data – Download your data or enable others to do so. Download your data anytime you
like, or allow others to do the same.
• Permissions – Grant or deny access to others who want to upload or download data into your
Amazon S3 bucket. Grant upload and download permissions to three types of users. Authentication
mechanisms can help keep data secure from unauthorized access.
• Standard interfaces – Use standards-based REST and SOAP interfaces designed to work with any
internet-development toolkit.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or
the AWS SDKs.
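
A minimal sketch of the create-store-download workflow described in the preceding list, using the AWS SDK for Python (Boto3); the bucket name, key, and file names are placeholders, and AWS credentials and a default Region are assumed to be configured:

import boto3

s3 = boto3.client("s3")

# Create a bucket (bucket names are globally unique; this one is a placeholder).
s3.create_bucket(Bucket="awsexamplebucket1")

# Store an object under a developer-assigned key.
s3.upload_file("puppy.jpg", "awsexamplebucket1", "photos/puppy.jpg")

# Download the object again.
s3.download_file("awsexamplebucket1", "photos/puppy.jpg", "puppy-copy.jpg")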

Amazon S3 concepts
This section describes key concepts and terminology you need to understand to use Amazon S3
effectively. They are presented in the order that you will most likely encounter them.

Topics
• Buckets (p. 2)
• Objects (p. 3)
• Keys (p. 3)
• Regions (p. 3)
• Amazon S3 data consistency model (p. 3)

Buckets
A bucket is a container for objects stored in Amazon S3. Every object is contained in a bucket. For
example, if the object named photos/puppy.jpg is stored in the awsexamplebucket1 bucket in the
US West (Oregon) Region, then it is addressable using the URL
https://awsexamplebucket1.s3.us-west-2.amazonaws.com/photos/puppy.jpg.
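
Continuing this example, the following sketch retrieves the same object with the AWS SDK for Python (Boto3); it assumes the example bucket and object above exist in your account:

import boto3

# Region, bucket, and key from the URL example above.
s3 = boto3.client("s3", region_name="us-west-2")

response = s3.get_object(Bucket="awsexamplebucket1", Key="photos/puppy.jpg")
print(response["ContentType"], response["ContentLength"])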

Buckets serve several purposes:

• They organize the Amazon S3 namespace at the highest level.


• They identify the account responsible for storage and data transfer charges.
• They play a role in access control.
• They serve as the unit of aggregation for usage reporting.

You can configure buckets so that they are created in a specific AWS Region. For more information, see
Accessing a Bucket (p. 33). You can also configure a bucket so that every time an object is added to it,
Amazon S3 generates a unique version ID and assigns it to the object. For more information, see Using
Versioning (p. 453).
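
As a rough illustration of both configurations, the following Boto3 sketch creates a bucket in a specific AWS Region and then enables versioning on it; the bucket name and Region are placeholders:

import boto3

s3 = boto3.client("s3", region_name="us-west-2")

# Create the bucket in a specific AWS Region.
s3.create_bucket(
    Bucket="awsexamplebucket1",
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)

# Have Amazon S3 assign a unique version ID to every object added to the bucket.
s3.put_bucket_versioning(
    Bucket="awsexamplebucket1",
    VersioningConfiguration={"Status": "Enabled"},
)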

For more information about buckets, see Buckets overview (p. 24).


Objects
Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata.
The data portion is opaque to Amazon S3. The metadata is a set of name-value pairs that describe
the object. These include some default metadata, such as the date last modified, and standard HTTP
metadata, such as Content-Type. You can also specify custom metadata at the time the object is
stored.
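
For example, the following Boto3 sketch stores an object with standard HTTP metadata (Content-Type) and a custom name-value pair, then reads the metadata back; the bucket, key, and metadata values are placeholders:

import boto3

s3 = boto3.client("s3")

# Store an object with system metadata (Content-Type) and user-defined metadata.
s3.put_object(
    Bucket="awsexamplebucket1",
    Key="photos/puppy.jpg",
    Body=b"...image bytes...",
    ContentType="image/jpeg",
    Metadata={"camera": "example-model"},
)

# Retrieve the metadata without downloading the object data.
head = s3.head_object(Bucket="awsexamplebucket1", Key="photos/puppy.jpg")
print(head["ContentType"], head["Metadata"])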

An object is uniquely identified within a bucket by a key (name) and a version ID. For more information,
see Keys (p. 3) and Using Versioning (p. 453).

Keys
A key is the unique identifier for an object within a bucket. Every object in a bucket has exactly
one key. The combination of a bucket, key, and version ID uniquely identify each object. So you
can think of Amazon S3 as a basic data map between "bucket + key + version" and the object
itself. Every object in Amazon S3 can be uniquely addressed through the combination of the web
service endpoint, bucket name, key, and optionally, a version. For example, in the URL
https://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl, "doc" is the name of the bucket and
"2006-03-01/AmazonS3.wsdl" is the key.

For more information about object keys, see Creating object key names (p. 58).
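
To make the "bucket + key + version" mapping concrete, the following Boto3 sketch addresses the current object and one specific version of it; the bucket, key, and version ID are placeholders, and versioning is assumed to be enabled on the bucket:

import boto3

s3 = boto3.client("s3")

# Bucket + key addresses the current version of the object.
current = s3.get_object(Bucket="doc", Key="2006-03-01/AmazonS3.wsdl")

# Adding VersionId pins the request to one exact version.
specific = s3.get_object(
    Bucket="doc",
    Key="2006-03-01/AmazonS3.wsdl",
    VersionId="example-version-id",
)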

Regions
You can choose the geographical AWS Region where Amazon S3 will store the buckets that you create.
You might choose a Region to optimize latency, minimize costs, or address regulatory requirements.
Objects stored in a Region never leave the Region unless you explicitly transfer them to another Region.
For example, objects stored in the Europe (Ireland) Region never leave it.
Note
You can only access Amazon S3 and its features in AWS Regions that are enabled for your
account.

For a list of Amazon S3 Regions and endpoints, see Regions and Endpoints in the AWS General Reference.

Amazon S3 data consistency model


Amazon S3 provides strong read-after-write consistency for PUTs and DELETEs of objects in your
Amazon S3 bucket in all AWS Regions. This applies both to writes of new objects and to PUTs that
overwrite existing objects, as well as to DELETEs. In addition, read operations on Amazon S3 Select, Amazon
S3 Access Control Lists, Amazon S3 Object Tags, and object metadata (e.g. HEAD object) are strongly
consistent.

Updates to a single key are atomic. For example, if you PUT to an existing key from one thread and
perform a GET on the same key from a second thread concurrently, you will get either the old data or the
new data, but never partial or corrupt data.

Amazon S3 achieves high availability by replicating data across multiple servers within AWS data centers.
If a PUT request is successful, your data is safely stored. Any read (GET or LIST) that is initiated following
the receipt of a successful PUT response will return the data written by the PUT. Here are examples of
this behavior:

• A process writes a new object to Amazon S3 and immediately lists keys within its bucket. The new
object will appear in the list.

• A process replaces an existing object and immediately tries to read it. Amazon S3 will return the new
data.
• A process deletes an existing object and immediately tries to read it. Amazon S3 will not return any
data as the object has been deleted.
• A process deletes an existing object and immediately lists keys within its bucket. The object will not
appear in the listing.
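
The following sketch illustrates this read-after-write behavior using the AWS SDK for Python (Boto3), which is one option among the SDKs and the REST API; the bucket and key names are placeholders.

import boto3

s3 = boto3.client("s3")

# Overwrite an existing key with new data.
s3.put_object(Bucket="awsexamplebucket1", Key="colors/gem.txt", Body=b"garnet")

# A GET issued after the successful PUT response returns the data just written.
response = s3.get_object(Bucket="awsexamplebucket1", Key="colors/gem.txt")
print(response["Body"].read())  # b'garnet'

# A LIST issued after the PUT also reflects the new object.
listing = s3.list_objects_v2(Bucket="awsexamplebucket1", Prefix="colors/")
print([obj["Key"] for obj in listing.get("Contents", [])])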

Note

• Amazon S3 does not support object locking for concurrent writers. If two PUT requests are
simultaneously made to the same key, the request with the latest timestamp wins. If this is an
issue, you will need to build an object-locking mechanism into your application.
• Updates are key-based. There is no way to make atomic updates across keys. For example,
you cannot make the update of one key dependent on the update of another key unless you
design this functionality into your application.

Bucket configurations have an eventual consistency model. Specifically:

• If you delete a bucket and immediately list all buckets, the deleted bucket might still appear in the list.
• If you enable versioning on a bucket for the first time, it might take a short amount of time for the
change to be fully propagated. We recommend that you wait for 15 minutes after enabling versioning
before issuing write operations (PUT or DELETE) on objects in the bucket.

Concurrent applications
This section provides examples of behavior to be expected from Amazon S3 when multiple clients are
writing to the same items.

In this example, both W1 (write 1) and W2 (write 2) complete before the start of R1 (read 1) and R2 (read
2). Because Amazon S3 is strongly consistent, R1 and R2 both return color = ruby.

In the next example, W2 does not complete before the start of R1. Therefore, R1 might return color =
ruby or color = garnet. However, since W1 and W2 finish before the start of R2, R2 returns color =
garnet.

In the last example, W2 begins before W1 has received an acknowledgement. Therefore, these writes are
considered concurrent. Amazon S3 internally uses last-writer-wins semantics to determine which write
takes precedence. However, the order in which Amazon S3 receives the requests and the order in which
applications receive acknowledgements cannot be predicted due to factors such as network latency.
For example, W2 might be initiated by an Amazon EC2 instance in the same region while W1 might be
initiated by a host that is further away. The best way to determine the final value is to perform a read
after both writes have been acknowledged.

Amazon S3 features
This section describes important Amazon S3 features.

Topics
• Storage classes (p. 6)
• Bucket policies (p. 6)
• AWS identity and access management (p. 7)
• Access control lists (p. 7)
• Versioning (p. 7)
• Operations (p. 7)

Storage classes
Amazon S3 offers a range of storage classes designed for different use cases. These include Amazon S3
STANDARD for general-purpose storage of frequently accessed data, Amazon S3 STANDARD_IA for long-
lived, but less frequently accessed data, and S3 Glacier for long-term archive.

For more information, see Using Amazon S3 storage classes (p. 496).
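
As a brief sketch (not taken from this guide), you can choose the storage class when you write an object with the AWS SDK for Python (Boto3); the bucket and key names here are placeholders.

import boto3

s3 = boto3.client("s3")

# Store an infrequently accessed object directly in the STANDARD_IA storage class.
s3.put_object(
    Bucket="awsexamplebucket1",
    Key="reports/2020-summary.csv",
    Body=b"...",
    StorageClass="STANDARD_IA",
)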

Bucket policies
Bucket policies provide centralized access control to buckets and objects based on a variety of conditions,
including Amazon S3 operations, requesters, resources, and aspects of the request (for example, IP
address). The policies are expressed in the access policy language and enable centralized management of
permissions. The permissions attached to a bucket apply to all of the bucket's objects that are owned by
the bucket owner account.

Both individuals and companies can use bucket policies. When companies register with Amazon S3,
they create an account. Thereafter, the company becomes synonymous with the account. Accounts are
financially responsible for the AWS resources that they (and their employees) create. Accounts have
the power to grant bucket policy permissions and assign employees permissions based on a variety of
conditions. For example, an account could create a policy that gives a user write access:

• To a particular S3 bucket
• From an account's corporate network
• During business hours

An account can grant one user limited read and write access, but allow another to create and delete
buckets as well. An account could allow several field offices to store their daily reports in a single bucket. It
could allow each office to write only to a certain set of names (for example, "Nevada/*" or "Utah/*") and
only from the office's IP address range.

Unlike access control lists (described later), which can add (grant) permissions only on individual objects,
policies can either add or deny permissions across all (or a subset) of objects within a bucket. With one
request, an account can set the permissions of any number of objects in a bucket. An account can use
wildcards (similar to regular expression operators) on Amazon Resource Names (ARNs) and other values.
The account could then control access to groups of objects that begin with a common prefix or end with
a given extension, such as .html.

Only the bucket owner is allowed to associate a policy with a bucket. Policies (written in the access policy
language) allow or deny requests based on the following:

• Amazon S3 bucket operations (such as PUT ?acl), and object operations (such as PUT Object, or
GET Object)
• Requester
• Conditions specified in the policy

An account can control access based on specific Amazon S3 operations, such as GetObject,
GetObjectVersion, DeleteObject, or DeleteBucket.

The conditions can be such things as IP addresses, IP address ranges in CIDR notation, dates, user agents,
HTTP referrer, and transports (HTTP and HTTPS).
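
For illustration only, the following sketch shows what such a policy might look like when attached with the AWS SDK for Python (Boto3). It allows one IAM user to put objects under the "Nevada/" prefix, but only from a corporate address range. The bucket name, account ID, user name, and IP range are placeholders.

import json
import boto3

s3 = boto3.client("s3")

# Hypothetical policy: allow one IAM user to write under "Nevada/*",
# but only when the request comes from the office's IP address range.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "NevadaOfficeUploads",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/nevada-reporter"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::awsexamplebucket1/Nevada/*",
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        }
    ],
}

# Only the bucket owner can attach a bucket policy.
s3.put_bucket_policy(Bucket="awsexamplebucket1", Policy=json.dumps(policy))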

For more information, see Bucket policies and user policies (p. 226).

AWS identity and access management


You can use AWS Identity and Access Management (IAM) to manage access to your Amazon S3 resources.

For example, you can use IAM with Amazon S3 to control the type of access a user or group of users has
to specific parts of an Amazon S3 bucket your AWS account owns.

For more information about IAM, see the following:

• AWS Identity and Access Management (IAM)


• Getting started
• IAM User Guide

Access control lists


You can control access to each of your buckets and objects using an access control list (ACL). For more
information, see Managing access with ACLs (p. 383).

Versioning
You can use versioning to keep multiple versions of an object in the same bucket. For more information,
see Using versioning in S3 buckets (p. 453).
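
A minimal sketch of enabling versioning on an existing bucket with the AWS SDK for Python (Boto3), using a placeholder bucket name:

import boto3

s3 = boto3.client("s3")

# Turn on versioning; afterward, every PUT of an existing key creates a new
# version instead of overwriting the object in place.
s3.put_bucket_versioning(
    Bucket="awsexamplebucket1",
    VersioningConfiguration={"Status": "Enabled"},
)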

Operations
Following are the most common operations that you'll run through the API.

Common operations

• Create a bucket – Create and name your own bucket in which to store your objects.
• Write an object – Store data by creating or overwriting an object. When you write an object, you
specify a unique key in the namespace of your bucket. This is also a good time to specify any access
control you want on the object.
• Read an object – Read data back. You can download the data via HTTP or BitTorrent.
• Delete an object – Delete some of your data.
• List keys – List the keys contained in one of your buckets. You can filter the key list based on a prefix.

These operations and all other functionality are described in detail throughout this guide.
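
The sketch below walks through these common operations with the AWS SDK for Python (Boto3); any of the SDKs or the REST API can be used instead, and the bucket and key names are placeholders.

import boto3

s3 = boto3.client("s3")

# Create a bucket (outside us-east-1, the Region must be given as a LocationConstraint).
s3.create_bucket(
    Bucket="awsexamplebucket1",
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)

# Write an object under a key of your choosing.
s3.put_object(Bucket="awsexamplebucket1", Key="photos/puppy.jpg", Body=b"...")

# Read the object back.
body = s3.get_object(Bucket="awsexamplebucket1", Key="photos/puppy.jpg")["Body"].read()

# List keys, optionally filtered by a prefix.
keys = s3.list_objects_v2(Bucket="awsexamplebucket1", Prefix="photos/")

# Delete the object.
s3.delete_object(Bucket="awsexamplebucket1", Key="photos/puppy.jpg")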

Amazon S3 application programming interfaces (API)

The Amazon S3 architecture is designed to be programming language-neutral, using AWS supported
interfaces to store and retrieve objects.

Amazon S3 provides a REST and a SOAP interface. They are similar, but there are some differences. For
example, in the REST interface, metadata is returned in HTTP headers. Because we only support HTTP
requests of up to 4 KB (not including the body), the amount of metadata you can supply is restricted.

Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.

The REST interface


The REST API is an HTTP interface to Amazon S3. Using REST, you use standard HTTP requests to create,
fetch, and delete buckets and objects.

You can use any toolkit that supports HTTP to use the REST API. You can even use a browser to fetch
objects, as long as they are anonymously readable.

The REST API uses the standard HTTP headers and status codes, so that standard browsers and toolkits
work as expected. In some areas, we have added functionality to HTTP (for example, we added headers
to support access control). In these cases, we have done our best to add the new functionality in a way
that matched the style of standard HTTP usage.
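
For example, assuming an object that is anonymously readable (the URL below is a placeholder), an ordinary HTTP client such as Python's standard library can fetch it without any S3-specific tooling:

import urllib.request

# Fetch a publicly (anonymously) readable object with a plain HTTP GET.
url = "https://awsexamplebucket1.s3.us-west-2.amazonaws.com/photos/puppy.jpg"
with urllib.request.urlopen(url) as response:
    data = response.read()
    print(response.headers["Content-Type"], len(data))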

The SOAP interface


Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.

The SOAP API provides a SOAP 1.1 interface using document literal encoding. The most common way to
use SOAP is to download the WSDL (see https://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl),
use a SOAP toolkit such as Apache Axis or Microsoft .NET to create bindings, and then write code that
uses the bindings to call Amazon S3.

Paying for Amazon S3


Pricing for Amazon S3 is designed so that you don't have to plan for the storage requirements of your
application. Most storage providers force you to purchase a predetermined amount of storage and
network transfer capacity: If you exceed that capacity, your service is shut off or you are charged high
overage fees. If you do not exceed that capacity, you pay as though you used it all.

Amazon S3 charges you only for what you actually use, with no hidden fees and no overage charges.
This gives developers a variable-cost service that can grow with their business while enjoying the cost
advantages of the AWS infrastructure.

Before storing anything in Amazon S3, you must register with the service and provide a payment method
that is charged at the end of each month. There are no setup fees to begin using the service. At the end
of the month, your payment method is automatically charged for that month's usage.

For information about paying for Amazon S3 storage, see Amazon S3 Pricing.

Related services
After you load your data into Amazon S3, you can use it with other AWS services. The following are the
services you might use most frequently:

• Amazon Elastic Compute Cloud (Amazon EC2) – This service provides virtual compute resources in
the cloud. For more information, see the Amazon EC2 product details page.

• Amazon EMR – This service enables businesses, researchers, data analysts, and developers to easily
and cost-effectively process vast amounts of data. It uses a hosted Hadoop framework running on the
web-scale infrastructure of Amazon EC2 and Amazon S3. For more information, see the Amazon EMR
product details page.
• AWS Snowball – This service accelerates transferring large amounts of data into and out of AWS using
physical storage devices, bypassing the internet. Each AWS Snowball device type can transport data
at faster-than-internet speeds. This transport is done by shipping the data in the devices through a
regional carrier. For more information, see the AWS Snowball product details page.

Getting started with Amazon S3


You can get started with Amazon S3 by working with buckets and objects. A bucket is a container for
objects. An object is a file and any metadata that describes that file.

To store an object in Amazon S3, you create a bucket and then upload the object to the bucket. When
the object is in the bucket, you can open it, download it, and move it. When you no longer need an object
or a bucket, you can clean up your resources.

With Amazon S3, you pay only for what you use. For more information about Amazon S3 features and
pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started with Amazon S3 for
free. For more information, see AWS Free Tier.

Prerequisites

Before you begin, confirm that you've completed the steps in Prerequisite: Setting up Amazon
S3 (p. 10).

Topics
• Prerequisite: Setting up Amazon S3 (p. 10)
• Step 1: Create your first S3 bucket (p. 12)
• Step 2: Upload an object to your bucket (p. 13)
• Step 3: Download an object (p. 14)
• Step 4: Copy your object to a folder (p. 14)
• Step 5: Delete your objects and bucket (p. 15)
• Where do I go from here? (p. 16)

Prerequisite: Setting up Amazon S3


When you sign up for AWS, your AWS account is automatically signed up for all services in AWS,
including Amazon S3. You are charged only for the services that you use.

With Amazon S3, you pay only for what you use. For more information about Amazon S3 features and
pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started with Amazon S3 for
free. For more information, see AWS Free Tier.

To set up Amazon S3, use the steps in the following sections.

When you sign up for AWS and set up Amazon S3, you can optionally change the display language in the
AWS Management Console. For more information, see Changing the language of the AWS Management
Console? (p. 18).

Topics
• Sign up for AWS (p. 11)
• Create an IAM user (p. 11)
• Sign in as an IAM user (p. 12)

Sign up for AWS


If you do not have an AWS account, complete the following steps to create one.

To sign up for an AWS account

1. Open https://portal.aws.amazon.com/billing/signup.
2. Follow the online instructions.

Part of the sign-up procedure involves receiving a phone call and entering a verification code on the
phone keypad.

AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view
your current account activity and manage your account by going to https://aws.amazon.com/ and
choosing My Account.

Create an IAM user


When you first create an Amazon Web Services (AWS) account, you begin with a single sign-in identity.
That identity has complete access to all AWS services and resources in the account. This identity is called
the AWS account root user. When you sign in, enter the email address and password that you used to
create the account.
Important
We strongly recommend that you do not use the root user for your everyday tasks, even the
administrative ones. Instead, adhere to the best practice of using the root user only to create
your first IAM user. Then securely lock away the root user credentials and use them to perform
only a few account and service management tasks. To view the tasks that require you to sign in
as the root user, see AWS Tasks That Require Root User.

If you signed up for AWS but have not created an IAM user for yourself, follow these steps.

To create an administrator user for yourself and add the user to an administrators group
(console)

1. Sign in to the IAM console as the account owner by choosing Root user and entering your AWS
account email address. On the next page, enter your password.
Note
We strongly recommend that you adhere to the best practice of using the Administrator
IAM user below and securely lock away the root user credentials. Sign in as the root user
only to perform a few account and service management tasks.
2. In the navigation pane, choose Users and then choose Add user.
3. For User name, enter Administrator.
4. Select the check box next to AWS Management Console access. Then select Custom password, and
then enter your new password in the text box.
5. (Optional) By default, AWS requires the new user to create a new password when first signing in. You
can clear the check box next to User must create a new password at next sign-in to allow the new
user to reset their password after they sign in.
6. Choose Next: Permissions.
7. Under Set permissions, choose Add user to group.
8. Choose Create group.
9. In the Create group dialog box, for Group name enter Administrators.
10. Choose Filter policies, and then select AWS managed - job function to filter the table contents.

11. In the policy list, select the check box for AdministratorAccess. Then choose Create group.
Note
You must activate IAM user and role access to Billing before you can use the
AdministratorAccess permissions to access the AWS Billing and Cost Management
console. To do this, follow the instructions in step 1 of the tutorial about delegating access
to the billing console.
12. Back in the list of groups, select the check box for your new group. Choose Refresh if necessary to
see the group in the list.
13. Choose Next: Tags.
14. (Optional) Add metadata to the user by attaching tags as key-value pairs. For more information
about using tags in IAM, see Tagging IAM entities in the IAM User Guide.
15. Choose Next: Review to see the list of group memberships to be added to the new user. When you
are ready to proceed, choose Create user.

You can use this same process to create more groups and users and to give your users access to your AWS
account resources. To learn about using policies that restrict user permissions to specific AWS resources,
see Access management and Example policies.

Sign in as an IAM user


After you create an IAM user, you can sign in to AWS with your IAM user name and password.

Before you sign in as an IAM user, you can verify the sign-in link for IAM users in the IAM console. On the
IAM Dashboard, under IAM users sign-in link, you can see the sign-in link for your AWS account. The URL
for your sign-in link contains your AWS account ID without dashes.

If you don't want the URL for your sign-in link to contain your AWS account ID, you can create an account
alias. For more information, see Creating, deleting, and listing an AWS account alias in the IAM User
Guide.

To sign in as an IAM user

1. Sign out of the AWS Management Console.


2. Enter your sign-in link.

Your sign-in link includes your AWS account ID (without dashes) or your AWS account alias:

https://aws_account_id_or_alias.signin.aws.amazon.com/console

3. Enter the IAM user name and password that you just created.

When you're signed in, the navigation bar displays "your_user_name @ your_aws_account_id".

Step 1: Create your first S3 bucket


After you sign up for AWS, you're ready to create a bucket in Amazon S3 using the AWS Management
Console. Every object in Amazon S3 is stored in a bucket. Before you can store data in Amazon S3, you
must create a bucket.
Note
You are not charged for creating a bucket. You are charged only for storing objects in the
bucket and for transferring objects in and out of the bucket. The charges that you incur through
following the examples in this guide are minimal (less than $1). For more information about
storage charges, see Amazon S3 pricing.

To create a bucket using the AWS Command Line Interface, see create-bucket in the AWS CLI Command
Reference.
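
You can also create the bucket programmatically. The following sketch uses the AWS SDK for Python (Boto3); the bucket name and Region are placeholders that you would replace with your own values.

import boto3

s3 = boto3.client("s3", region_name="us-west-2")

# Create the bucket in the same Region as the client; for Regions other than
# us-east-1, the Region must be repeated as a LocationConstraint.
s3.create_bucket(
    Bucket="awsexamplebucket1",
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)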

To create a bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Choose Create bucket.

The Create bucket page opens.


3. In Bucket name, enter a DNS-compliant name for your bucket.

The bucket name must:

• Be unique across all of Amazon S3.
• Be between 3 and 63 characters long.
• Not contain uppercase characters.
• Start with a lowercase letter or number.

After you create the bucket, you can't change its name. For information about naming buckets, see
Bucket naming rules (p. 27).
Important
Avoid including sensitive information, such as account numbers, in the bucket name. The
bucket name is visible in the URLs that point to the objects in the bucket.
4. In Region, choose the AWS Region where you want the bucket to reside.

Choose a Region that is close to you geographically to minimize latency and costs and to address
regulatory requirements. Objects stored in a Region never leave that Region unless you explicitly
transfer them to another Region. For a list of Amazon S3 AWS Regions, see AWS Service Endpoints
in the Amazon Web Services General Reference.
5. In Bucket settings for Block Public Access, keep the values set to the defaults.

By default, Amazon S3 blocks all public access to your buckets. We recommend that you keep
all Block Public Access settings enabled. For more information about blocking public access, see
Blocking public access to your Amazon S3 storage (p. 408).
6. Choose Create bucket.

You've created a bucket in Amazon S3.

Next step

To add an object to your bucket, see Step 2: Upload an object to your bucket (p. 13).

Step 2: Upload an object to your bucket


After creating a bucket in Amazon S3, you're ready to upload an object to the bucket. An object can be
any kind of file: a text file, a photo, a video, and so on.

To upload an object to a bucket

1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.


2. In the Buckets list, choose the name of the bucket that you want to upload your object to.

3. On the Objects tab for your bucket, choose Upload.


4. Under Files and folders, choose Add files.
5. Choose a file to upload, and then choose Open.
6. Choose Upload.

You've successfully uploaded an object to your bucket.
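
If you prefer to upload programmatically, a minimal sketch with the AWS SDK for Python (Boto3) looks like the following; the file, bucket, and key names are placeholders.

import boto3

s3 = boto3.client("s3")

# Upload a local file; the key becomes the object's name within the bucket.
s3.upload_file("puppy.jpg", "awsexamplebucket1", "photos/puppy.jpg")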

Next step

To view your object, see Step 3: Download an object (p. 14).

Step 3: Download an object


After you upload an object to a bucket, you can view information about your object and download the
object to your local computer.

To download an object from a bucket

1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.


2. In the Buckets list, choose the name of the bucket that you created.
3. In the Objects list, choose the name of the object that you uploaded.

The object overview opens.


4. On the Details tab, review information about your object.
5. To download the object to your computer, choose Object actions and choose Download.

You've successfully downloaded your object.
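
The same step can be scripted. A minimal sketch with the AWS SDK for Python (Boto3), using placeholder names:

import boto3

s3 = boto3.client("s3")

# Inspect the object's metadata, then save the object to a local file.
info = s3.head_object(Bucket="awsexamplebucket1", Key="photos/puppy.jpg")
print(info["ContentLength"], info["ContentType"])

s3.download_file("awsexamplebucket1", "photos/puppy.jpg", "puppy-copy.jpg")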

Next step

To copy and paste your object within Amazon S3, see Step 4: Copy your object to a folder (p. 14).

Step 4: Copy your object to a folder


You've already added an object to a bucket and downloaded the object. Now, you create a folder and
copy the object into that folder.

To copy an object to a folder

1. In the Buckets list, choose your bucket name.


2. Choose Create folder and configure a new folder:

a. Enter a folder name (for example, favorite-pics).


b. For the folder encryption setting, choose None.
c. Choose Save.
3. Navigate to the Amazon S3 bucket or folder that contains the objects that you want to copy.
4. Select the check box to the left of the names of the objects that you want to copy.
5. Choose Actions and choose Copy from the list of options that appears.

Alternatively, choose Copy from the options in the upper right.


6. Choose the destination folder:

a. Choose Browse S3.


b. Choose the option button to the left of the folder name.

To navigate into a folder and choose a subfolder as your destination, choose the folder name.
c. Choose Choose destination.

The path to your destination folder appears in the Destination box. In Destination, you can
alternatively enter your destination path, for example, s3://bucket-name/folder-name/.
7. In the bottom right, choose Copy.

Amazon S3 copies your objects to the destination folder.
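
Because a folder in Amazon S3 is simply a shared key name prefix, the equivalent programmatic step is a copy to a key that begins with the folder name. A sketch with the AWS SDK for Python (Boto3), using placeholder names:

import boto3

s3 = boto3.client("s3")

# Copy an existing object into a "folder" by writing it under a prefixed key.
s3.copy_object(
    Bucket="awsexamplebucket1",
    Key="favorite-pics/puppy.jpg",
    CopySource={"Bucket": "awsexamplebucket1", "Key": "photos/puppy.jpg"},
)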

Next step

To delete an object and a bucket in Amazon S3, see Step 5: Delete your objects and bucket (p. 15).

Step 5: Delete your objects and bucket


When you no longer need an object or a bucket, we recommend that you delete them to prevent further
charges. If you completed this getting started walkthrough as a learning exercise, and you don't plan to
use your bucket or objects, we recommend that you delete your bucket and objects so that charges no
longer accrue.

Before you delete your bucket, empty the bucket or delete the objects in the bucket. After you delete
your objects and bucket, they are no longer available.

If you want to continue to use the same bucket name, we recommend that you delete the objects or
empty the bucket, but don't delete the bucket. After you delete a bucket, the name becomes available
to reuse. However, another AWS account might create a bucket with the same name before you have a
chance to reuse it.

Topics
• Emptying your bucket (p. 15)
• Deleting an object (p. 16)
• Deleting your bucket (p. 16)

Emptying your bucket


If you plan to delete your bucket, you must first empty your bucket, which deletes all the objects in the
bucket.

To empty a bucket

1. In the Buckets list, select the bucket that you want to empty, and then choose Empty.
2. To confirm that you want to empty the bucket and delete all the objects in it, in Empty bucket,
enter the name of the bucket.

Important
Emptying the bucket cannot be undone. Objects added to the bucket while the empty
bucket action is in progress will be deleted.
3. To empty the bucket and delete all the objects in it, choose Empty.

An Empty bucket: Status page opens that you can use to review a summary of failed and successful
object deletions.
4. To return to your bucket list, choose Exit.

Deleting an object
If you want to choose which objects you delete without emptying all the objects from your bucket, you
can delete an object.

1. In the Buckets list, choose the name of the bucket that you want to delete an object from.
2. Select the check box to the left of the names of the objects that you want to delete.
3. Choose Actions and choose Delete from the list of options that appears.

Alternatively, choose Delete from the options in the upper right.


4. Enter delete if asked to confirm that you want to delete these objects.
5. Choose Delete objects in the bottom right and Amazon S3 deletes the specified objects.

Deleting your bucket


After you empty your bucket or delete all the objects from your bucket, you can delete your bucket.

1. To delete a bucket, in the Buckets list, select the bucket.


2. Choose Delete.
3. To confirm deletion, in Delete bucket, enter the name of the bucket.
Important
Deleting a bucket cannot be undone. Bucket names are unique. If you delete your bucket,
another AWS user can use the name. If you want to continue to use the same bucket name,
don't delete your bucket. Instead, empty and keep the bucket.
4. To delete your bucket, choose Delete bucket.
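
To script the cleanup instead, a sketch with the AWS SDK for Python (Boto3) is shown below; the bucket name is a placeholder, and object versions are deleted so that a bucket with versioning enabled is emptied completely.

import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("awsexamplebucket1")

# Empty the bucket first; delete all versions in case versioning was ever enabled.
bucket.object_versions.all().delete()

# A bucket can be deleted only after it is empty.
bucket.delete()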

Where do I go from here?


In the preceding examples, you learned how to perform some basic Amazon S3 tasks.

The following topics explain various ways in which you can gain a deeper understanding of Amazon S3
so that you can implement it in your applications.

Topics
• Common use scenarios (p. 17)
• Considerations going forward (p. 17)
• Advanced Amazon S3 features (p. 18)
• Changing the language of the AWS Management Console? (p. 18)
• Access control best practices (p. 19)
• Development resources (p. 23)

Common use scenarios


The AWS Solutions site lists many of the ways you can use Amazon S3. The following list summarizes
some of those ways.

• Backup and storage – Provide data backup and storage services for others.
• Application hosting – Provide services that deploy, install, and manage web applications.
• Media hosting – Build a redundant, scalable, and highly available infrastructure that hosts video,
photo, or music uploads and downloads.
• Software delivery – Host your software applications that customers can download.

For more information, see AWS Solutions.

Considerations going forward


This section introduces you to topics you should consider before launching your own Amazon S3 product.

Topics
• AWS account and security credentials (p. 17)
• Security (p. 17)
• AWS integration (p. 17)
• Pricing (p. 18)

AWS account and security credentials


When you signed up for the service, you created an AWS account using an email address and password.
Those are your AWS account root user credentials. As a best practice, you should not use your root
user credentials to access AWS. Nor should you give your credentials to anyone else. Instead, create
individual users for those who need access to your AWS account. First, create an AWS Identity and Access
Management (IAM) administrator user for yourself and use it for your daily work. For details, see Creating
your first IAM admin user and group in the IAM User Guide. Then create additional IAM users for other
people. For details, see Creating your first IAM delegated user and group in the IAM User Guide.

If you're an account owner or administrator and want to know more about IAM, see the product
description at https://aws.amazon.com/iam or the technical documentation in the IAM User Guide.

Security
Amazon S3 provides authentication mechanisms to secure data stored in Amazon S3 against
unauthorized access. Unless you specify otherwise, only the AWS account owner can access data
uploaded to Amazon S3. For more information about how to manage access to buckets and objects, go
to Identity and access management in Amazon S3 (p. 209).

You can also encrypt your data before uploading it to Amazon S3.

AWS integration
You can use Amazon S3 alone or in concert with one or more other Amazon products. The following are
the most common products used with Amazon S3:

• Amazon EC2
• Amazon EMR

• Amazon SQS
• Amazon CloudFront

Pricing
Learn the pricing structure for storing and transferring data on Amazon S3. For more information, see
Amazon S3 pricing.

Advanced Amazon S3 features


The examples in this guide show how to accomplish the basic tasks of creating a bucket, uploading and
downloading data to and from it, and moving and deleting the data. The following table summarizes
some of the most common advanced functionality offered by Amazon S3. Note that some advanced
functionality is not available in the AWS Management Console and requires that you use the Amazon S3
API.

Link – Functionality

• Using Requester Pays buckets for storage transfers and usage (p. 51) – Learn how to configure a bucket so that a customer pays for the downloads they make.

• Publishing content using Amazon S3 and BitTorrent (p. 155) – Use BitTorrent, which is an open, peer-to-peer protocol for distributing files.

• Using versioning in S3 buckets (p. 453) – Learn about Amazon S3 versioning capabilities.

• Hosting a static website using Amazon S3 (p. 857) – Learn how to host a static website on Amazon S3.

• Managing your storage lifecycle (p. 501) – Learn how to manage the lifecycle of objects in your bucket. Lifecycle management includes expiring objects and archiving objects (transitioning objects to the S3 Glacier storage class).

Changing the language of the AWS Management Console?

You can change the display language of the AWS Management Console. Several languages are
supported.

To change the console language

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. On the left side of the bottom navigation bar, choose the language menu.
3. From the language menu, choose the language that you want.

This will change the language for the entire AWS Management Console.

Access control best practices


Amazon S3 provides a variety of security features and tools. The following scenarios should serve as a
guide to what tools and settings you might want to use when performing certain tasks or operating in
specific environments. Proper application of these tools can help maintain the integrity of your data and
help ensure that your resources are accessible to the intended users.

Topics
• Creating a new bucket (p. 19)
• Storing and sharing data (p. 20)
• Sharing resources (p. 21)
• Protecting data (p. 21)

Creating a new bucket


When creating a new bucket, you should apply the following tools and settings to help ensure that your
Amazon S3 resources are protected. 

Block Public Access

S3 Block Public Access provides four settings to help you avoid inadvertently exposing your S3 resources.
You can apply these settings in any combination to individual access points, buckets, or entire AWS
accounts. If you apply a setting to an account, it applies to all buckets and access points that are owned
by that account. By default, the Block all public access setting is applied to new buckets created in the
Amazon S3 console. 

For more information, see The meaning of "public" (p. 411).

If the S3 Block Public Access settings are too restrictive, you can use AWS Identity and Access
Management (IAM) identities to grant access to specific users rather than disabling all Block Public Access
settings. Using Block Public Access with IAM identities helps ensure that any operation that is blocked by
a Block Public Access setting is rejected unless the requesting user has been given specific permission.

For more information, see Block public access settings (p. 409).
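
As a rough sketch of applying these settings programmatically with the AWS SDK for Python (Boto3) (the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")

# Turn on all four Block Public Access settings for a single bucket.
s3.put_public_access_block(
    Bucket="awsexamplebucket1",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)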

Grant access with IAM identities

When setting up accounts for new team members who require S3 access, use IAM users and roles to
ensure least privileges. You can also implement a form of IAM multi-factor authentication (MFA) to
support a strong identity foundation. Using IAM identities, you can grant unique permissions to users
and specify what resources they can access and what actions they can take. IAM identities provide
increased capabilities, including the ability to require users to enter login credentials before accessing
shared resources and apply permission hierarchies to different objects within a single bucket.

For more information, see Example 1: Bucket owner granting its users bucket permissions (p. 360).

Bucket policies

With bucket policies, you can personalize bucket access to help ensure that only those users you have
approved can access resources and perform actions within them. In addition to bucket policies, you
should use bucket-level Block Public Access settings to further limit public access to your data.

For more information, see Bucket policies and user policies (p. 226).

When creating policies, avoid the use of wildcards in the Principal element because it effectively
allows anyone to access your Amazon S3 resources. It's better to explicitly list users or groups that are

allowed to access the bucket. Rather than including a wildcard for their actions, grant them specific
permissions when applicable.

To further maintain the practice of least privileges, Deny statements in the Effect element should be
as broad as possible and Allow statements should be as narrow as possible. Deny effects paired with the
"s3:*" action are another good way to implement opt-in best practices for the users included in policy
condition statements.

For more information about specifying conditions for when a policy is in effect, see Amazon S3 condition
keys (p. 232).

Buckets in a VPC setting

When adding users in a corporate setting, you can use a virtual private cloud (VPC) endpoint to allow any
users in your virtual network to access your Amazon S3 resources. VPC endpoints enable developers to
provide specific access and permissions to groups of users based on the network the user is connected to.
Rather than adding each user to an IAM role or group, you can use VPC endpoints to deny bucket access
if the request doesn’t originate from the specified endpoint.

For more information, see Controlling access from VPC endpoints with bucket policies (p. 321).

Storing and sharing data


Use the following tools and best practices to store and share your Amazon S3 data.

Versioning and Object Lock for data integrity

If you use the Amazon S3 console to manage buckets and objects, you should implement S3 Versioning
and S3 Object Lock. These features help prevent accidental changes to critical data and enable you to
roll back unintended actions. This capability is particularly useful when there are multiple users with full
write and execute permissions accessing the Amazon S3 console.

For information about S3 Versioning, see Using versioning in S3 buckets (p. 453). For information about
Object Lock, see Using S3 Object Lock (p. 488).

Object lifecycle management for cost efficiency

To manage your objects so that they are stored cost effectively throughout their lifecycle, you can pair
lifecycle policies with object versioning. Lifecycle policies define actions that you want S3 to take during
an object's lifetime. For example, you can create a lifecycle policy that will transition objects to another
storage class, archive them, or delete them after a specified period of time. You can define a lifecycle
policy for all objects or a subset of objects in the bucket by using a shared prefix or tag.

For more information, see Managing your storage lifecycle (p. 501).
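
A sketch of such a policy, applied with the AWS SDK for Python (Boto3); the bucket name, prefix, and time periods are placeholders chosen for illustration.

import boto3

s3 = boto3.client("s3")

# Transition objects under the "logs/" prefix to S3 Glacier after 30 days
# and expire (delete) them after 365 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="awsexamplebucket1",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)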

Cross-Region Replication for multiple office locations

When creating buckets that are accessed by different office locations, you should consider implementing
S3 Cross-Region Replication. Cross-Region Replication helps ensure that all users have access to the
resources they need and increases operational efficiency. Cross-Region Replication offers increased
availability by copying objects across S3 buckets in different AWS Regions. However, the use of this tool
increases storage costs.

For more information, see Replicating objects (p. 545).

Permissions for secure static website hosting

When configuring a bucket to be used as a publicly accessed static website, you need to disable all Block
Public Access settings. It is important to only provide s3:GetObject actions and not ListObject or

PutObject permissions when writing the bucket policy for your static website. This helps ensure that
users cannot view all the objects in your bucket or add their own content.

For more information, see Setting permissions for website access (p. 867).

Amazon CloudFront provides the capabilities required to set up a secure static website. Amazon S3
static websites only support HTTP endpoints. CloudFront uses the durable storage of Amazon S3 while
providing additional security headers like HTTPS. HTTPS adds security by encrypting a normal HTTP
request and protecting against common cyber attacks.

For more information, see Getting started with a secure static website in the Amazon CloudFront
Developer Guide.

Sharing resources
There are several different ways that you can share resources with a specific group of users. You can
use the following tools to share a set of documents or other resources to a single group of users,
department, or an office. Although they can all be used to accomplish the same goal, some tools might
pair better than others with your existing settings.

User policies

You can share resources with a limited group of people using IAM groups and user policies. When
creating a new IAM user, you are prompted to create and add them to a group. However, you can create
and add users to groups at any point. If the individuals you intend to share these resources with are
already set up within IAM, you can add them to a common group and share the bucket with their group
within the user policy. You can also use IAM user policies to share individual objects within a bucket.

For more information, see Allowing an IAM user access to one of your buckets (p. 349).

Access control lists

As a general rule, we recommend that you use S3 bucket policies or IAM policies for access control.
Amazon S3 access control lists (ACLs) are a legacy access control mechanism that predates IAM. If
you already use S3 ACLs and you find them sufficient, there is no need to change. However, certain
access control scenarios require the use of ACLs. For example, when a bucket owner wants to grant
permission to objects, but not all objects are owned by the bucket owner, the object owner must first
grant permission to the bucket owner. This is done using an object ACL.

For more information, see Example 3: Bucket owner granting its users permissions to objects it does not
own (p. 369).

Prefixes

When trying to share specific resources from a bucket, you can replicate folder-level permissions using
prefixes. The Amazon S3 console supports the folder concept as a means of grouping objects by using a
shared name prefix for objects. You can then specify a prefix within the conditions of an IAM user's policy
to grant them explicit permission to access the resources associated with that prefix. 

For more information, see Organizing objects in the Amazon S3 console using folders (p. 141).

Tagging

If you use object tagging to categorize storage, you can share objects that have been tagged with a
specific value with specified users. Resource tagging allows you to control access to objects based on the
tags associated with the resource that a user is trying to access. To do this, use the ResourceTag/key-
name condition within an IAM user policy to allow access to the tagged resources.

For more information, see Controlling access to AWS resources using resource tags in the IAM User Guide.

Protecting data
Use the following tools to help protect data in transit and at rest, both of which are crucial in
maintaining the integrity and accessibility of your data.

Object encryption

Amazon S3 offers several object encryption options that protect data in transit and at rest. Server-side
encryption encrypts your object before saving it on disks in its data centers and then decrypts it when
you download the objects. As long as you authenticate your request and you have access permissions,
there is no difference in the way you access encrypted or unencrypted objects. When setting up server-
side encryption, you have three mutually exclusive options:

• Amazon S3 managed keys (SSE-S3)


• Customer master keys (CMK) stored in AWS Key Management Service (SSE-KMS)
• Customer-provided keys (SSE-C)

For more information, see Protecting data using server-side encryption (p. 157).
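
A minimal sketch of requesting server-side encryption on upload with the AWS SDK for Python (Boto3); the bucket, keys, and KMS key ID are placeholders.

import boto3

s3 = boto3.client("s3")

# SSE-S3: Amazon S3 manages the encryption keys.
s3.put_object(
    Bucket="awsexamplebucket1",
    Key="private/report.pdf",
    Body=b"...",
    ServerSideEncryption="AES256",
)

# SSE-KMS: encrypt with a customer master key (CMK) stored in AWS KMS.
s3.put_object(
    Bucket="awsexamplebucket1",
    Key="private/report-kms.pdf",
    Body=b"...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
)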

Client-side encryption is the act of encrypting data before sending it to Amazon S3. For more
information, see Protecting data using client-side encryption (p. 198).

Signing methods

Signature Version 4 is the process of adding authentication information to AWS requests sent by HTTP.
For security, most requests to AWS must be signed with an access key, which consists of an access key ID
and secret access key. These two keys are commonly referred to as your security credentials.

For more information, see Authenticating Requests (AWS Signature Version 4) and Signature Version 4
signing process.
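
The SDKs sign requests with your security credentials automatically. One place where the signature is visible is a presigned URL, which embeds the signed authentication information so that the holder can make that one request without separate credentials. A sketch with the AWS SDK for Python (Boto3), requesting Signature Version 4 explicitly and using placeholder names:

import boto3
from botocore.client import Config

# Request Signature Version 4 so the presigned URL is signed with SigV4.
s3 = boto3.client("s3", config=Config(signature_version="s3v4"))

# Generate a presigned URL that allows a GET of one object for the next hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "awsexamplebucket1", "Key": "private/report.pdf"},
    ExpiresIn=3600,
)
print(url)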

Logging and monitoring

Monitoring is an important part of maintaining the reliability, availability, and performance of your
Amazon S3 solutions so that you can more easily debug a multi-point failure if one occurs. Logging can
provide insight into any errors users are receiving, and when and what requests are made. AWS provides
several tools for monitoring your Amazon S3 resources:

• Amazon CloudWatch
• AWS CloudTrail
• Amazon S3 Access Logs
• AWS Trusted Advisor

For more information, see Logging and monitoring in Amazon S3 (p. 442).

Amazon S3 is integrated with AWS CloudTrail, a service that provides a record of actions taken by a
user, a role, or an AWS service in Amazon S3. This feature can be paired with Amazon GuardDuty, which
monitors threats against your Amazon S3 resources by analyzing CloudTrail management events and
CloudTrail S3 data events. These data sources monitor different kinds of activity. For example, S3 related
CloudTrail management events include operations that list or configure S3 projects. GuardDuty analyzes
S3 data events from all of your S3 buckets and monitors them for malicious and suspicious activity.

For more information, see Amazon S3 protection in Amazon GuardDuty in the Amazon GuardDuty User
Guide.

Development resources
To help you build applications using the language of your choice, we provide the following resources:

• Sample Code and Libraries – The AWS Developer Center has sample code and libraries written
especially for Amazon S3.

You can use these code samples as a means of understanding how to implement the Amazon S3 API.
For more information, see the AWS Developer Center.
• Tutorials – Our Resource Center offers more Amazon S3 tutorials.

These tutorials provide a hands-on approach for learning Amazon S3 functionality. For more
information, see Articles & Tutorials.
• Customer Forum – We recommend that you review the Amazon S3 forum to get an idea of what other
users are doing and to benefit from the questions they ask.

The forum can help you understand what you can and can't do with Amazon S3. The forum also serves
as a place for you to ask questions that other users or AWS representatives might answer. You can use
the forum to report issues with the service or the API. For more information, see Discussion Forums.

Creating, configuring, and working with Amazon S3 buckets

To store your data in Amazon S3, you work with resources known as buckets and objects. A bucket is a
container for objects. An object is a file and any metadata that describes that file.

To store an object in Amazon S3, you create a bucket and then upload the object to a bucket. When the
object is in the bucket, you can open it, download it, and move it. When you no longer need an object or
a bucket, you can clean up your resources.
Note
With Amazon S3, you pay only for what you use. For more information about Amazon S3
features and pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started
with Amazon S3 for free. For more information, see AWS Free Tier.

The topics in this section provide an overview of working with buckets in Amazon S3. They include
information about naming, creating, accessing, and deleting buckets.

Topics
• Buckets overview (p. 24)
• Bucket naming rules (p. 27)
• Creating a bucket (p. 28)
• Viewing the properties for an S3 bucket (p. 33)
• Accessing a bucket (p. 33)
• Emptying a bucket (p. 35)
• Deleting a bucket (p. 37)
• Setting default server-side encryption behavior for Amazon S3 buckets (p. 39)
• Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration (p. 43)
• Using Requester Pays buckets for storage transfers and usage (p. 51)
• Bucket restrictions and limitations (p. 54)

Buckets overview
To upload your data (photos, videos, documents, etc.) to Amazon S3, you must first create an S3 bucket
in one of the AWS Regions. You can then upload any number of objects to the bucket.

In terms of implementation, buckets and objects are AWS resources, and Amazon S3 provides APIs for
you to manage them. For example, you can create a bucket and upload objects using the Amazon S3 API.
You can also use the Amazon S3 console to perform these operations. The console uses the Amazon S3
APIs to send requests to Amazon S3.

This section describes how to work with buckets. For information about working with objects, see
Amazon S3 objects overview (p. 56).

An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts.
This means that after a bucket is created, the name of that bucket cannot be used by another AWS
account in any AWS Region until the bucket is deleted. You should not depend on specific bucket naming
conventions for availability or security verification purposes. For bucket naming guidelines, see Bucket
naming rules (p. 27).

Amazon S3 creates buckets in a Region that you specify. To optimize latency, minimize costs, or address
regulatory requirements, choose any AWS Region that is geographically close to you. For example, if
you reside in Europe, you might find it advantageous to create buckets in the Europe (Ireland) or Europe
(Frankfurt) Regions. For a list of Amazon S3 Regions, see Regions and Endpoints in the AWS General
Reference.
Note
Objects that belong to a bucket that you create in a specific AWS Region never leave that
Region, unless you explicitly transfer them to another Region. For example, objects that are
stored in the Europe (Ireland) Region never leave it.

Topics
• About permissions (p. 25)
• Managing public access to buckets (p. 25)
• Bucket configuration options (p. 26)

About permissions
You can use your AWS account root user credentials to create a bucket and perform any other Amazon
S3 operation. However, we recommend that you do not use the root user credentials of your AWS
account to make requests, such as to create a bucket. Instead, create an AWS Identity and Access
Management (IAM) user, and grant that user full access (users by default have no permissions).

These users are referred to as administrators. You can use the administrator user credentials, instead
of the root user credentials of your account, to interact with AWS and perform tasks, such as create a
bucket, create users, and grant them permissions.

For more information, see AWS account root user credentials and IAM user credentials in the AWS
General Reference and Security best practices in IAM in the IAM User Guide.

The AWS account that creates a resource owns that resource. For example, if you create an IAM user
in your AWS account and grant the user permission to create a bucket, the user can create a bucket.
But the user does not own the bucket; the AWS account that the user belongs to owns the bucket. The
user needs additional permission from the resource owner to perform any other bucket operations. For
more information about managing permissions for your Amazon S3 resources, see Identity and access
management in Amazon S3 (p. 209).

Managing public access to buckets


Public access is granted to buckets and objects through access control lists (ACLs), bucket policies, or
both. To help you manage public access to Amazon S3 resources, Amazon S3 provides settings to block
public access. Amazon S3 Block Public Access settings can override ACLs and bucket policies so that you
can enforce uniform limits on public access to these resources. You can apply Block Public Access settings
to individual buckets or to all buckets in your account.

To help ensure that all of your Amazon S3 buckets and objects have their public access blocked, we
recommend that you turn on all four settings for Block Public Access for your account. These settings
block all public access for all current and future buckets.

Before applying these settings, verify that your applications will work correctly without public access. If
you require some level of public access to your buckets or objects—for example, to host a static website
as described at Hosting a static website using Amazon S3 (p. 857)—you can customize the individual
settings to suit your storage use cases. For more information, see Blocking public access to your Amazon
S3 storage (p. 408).

Bucket configuration options


Amazon S3 supports various options for you to configure your bucket. For example, you can configure
your bucket for website hosting, add a configuration to manage the lifecycle of objects in the bucket,
and configure the bucket to log all access to the bucket. Amazon S3 supports subresources for you to
store and manage the bucket configuration information. You can use the Amazon S3 API to create and
manage these subresources. However, you can also use the console or the AWS SDKs.
Note
There are also object-level configurations. For example, you can configure object-level
permissions by configuring an access control list (ACL) specific to that object.

These are referred to as subresources because they exist in the context of a specific bucket or object. The
following subresources enable you to manage bucket-specific configurations.

• cors (cross-origin resource sharing) – You can configure your bucket to allow cross-origin requests.
For more information, see Using cross-origin resource sharing (CORS) (p. 397).

• event notification – You can enable your bucket to send you notifications of specified bucket events.
For more information, see Amazon S3 Event Notifications (p. 785).

• lifecycle – You can define lifecycle rules for objects in your bucket that have a well-defined lifecycle.
For example, you can define a rule to archive objects one year after creation, or delete an object
10 years after creation. For more information, see Managing your storage lifecycle (p. 501).

• location – When you create a bucket, you specify the AWS Region where you want Amazon S3 to
create the bucket. Amazon S3 stores this information in the location subresource and provides an
API for you to retrieve this information.

• logging – Logging enables you to track requests for access to your bucket. Each access log record
provides details about a single access request, such as the requester, bucket name, request time,
request action, response status, and error code, if any. Access log information can be useful in
security and access audits. It can also help you learn about your customer base and understand
your Amazon S3 bill. For more information, see Logging requests using server access logging (p. 751).

• object locking – To use S3 Object Lock, you must enable it for a bucket. You can also optionally
configure a default retention mode and period that applies to new objects that are placed in the
bucket. For more information, see Bucket configuration (p. 491).

• policy and ACL (access control list) – All your resources (such as buckets and objects) are private by
default. Amazon S3 supports both bucket policy and access control list (ACL) options for you to
grant and manage bucket-level permissions. Amazon S3 stores the permission information in the
policy and acl subresources. For more information, see Identity and access management in Amazon
S3 (p. 209).

• replication – Replication is the automatic, asynchronous copying of objects across buckets in
different or the same AWS Regions. For more information, see Replicating objects (p. 545).

• requestPayment – By default, the AWS account that creates the bucket (the bucket owner) pays for
downloads from the bucket. Using this subresource, the bucket owner can specify that the person
requesting the download will be charged for the download. Amazon S3 provides an API for you to
manage this subresource. For more information, see Using Requester Pays buckets for storage
transfers and usage (p. 51).

• tagging – You can add cost allocation tags to your bucket to categorize and track your AWS costs.
Amazon S3 provides the tagging subresource to store and manage tags on a bucket. Using tags you
apply to your bucket, AWS generates a cost allocation report with usage and costs aggregated by
your tags. For more information, see Billing and usage reporting for S3 buckets (p. 619).

• transfer acceleration – Transfer Acceleration enables fast, easy, and secure transfers of files over
long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of the
globally distributed edge locations of Amazon CloudFront. For more information, see Configuring
fast, secure file transfers using Amazon S3 Transfer Acceleration (p. 43).

• versioning – Versioning helps you recover objects from accidental overwrites and deletes. We
recommend versioning as a best practice. For more information, see Using versioning in S3
buckets (p. 453).

• website – You can configure your bucket for static website hosting. Amazon S3 stores this
configuration by creating a website subresource. For more information, see Hosting a static website
using Amazon S3 (p. 857).
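
Each subresource in this list maps to its own set of API operations. As a rough sketch (the bucket name
is a placeholder, and some of these commands return an error if the corresponding configuration has
never been set), you can read a few of them back with the AWS CLI:

$ aws s3api get-bucket-location --bucket DOC-EXAMPLE-BUCKET
$ aws s3api get-bucket-versioning --bucket DOC-EXAMPLE-BUCKET
$ aws s3api get-bucket-tagging --bucket DOC-EXAMPLE-BUCKET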

Bucket naming rules


The following rules apply for naming buckets in Amazon S3:

• Bucket names must be between 3 and 63 characters long.


• Bucket names can consist only of lowercase letters, numbers, dots (.), and hyphens (-).
• Bucket names must begin and end with a letter or number.
• Bucket names must not be formatted as an IP address (for example, 192.168.5.4).
• Bucket names must be unique within a partition. A partition is a grouping of Regions. AWS currently
has three partitions: aws (Standard Regions), aws-cn (China Regions), and aws-us-gov (AWS
GovCloud [US] Regions).
• Buckets used with Amazon S3 Transfer Acceleration can't have dots (.) in their names. For more
information about Transfer Acceleration, see Configuring fast, secure file transfers using Amazon S3
Transfer Acceleration (p. 43).


For best compatibility, we recommend that you avoid using dots (.) in bucket names, except for buckets
that are used only for static website hosting. If you include dots in a bucket's name, you can't use virtual-
host-style addressing over HTTPS, unless you perform your own certificate validation. This is because the
security certificates used for virtual hosting of buckets don't work for buckets with dots in their names.

This limitation doesn't affect buckets used for static website hosting, because static website hosting is
only available over HTTP. For more information about virtual-host-style addressing, see Virtual hosting
of buckets (p. 935). For more information about static website hosting, see Hosting a static website
using Amazon S3 (p. 857).
Note
Before March 1, 2018, buckets created in the US East (N. Virginia) Region could have names
that were up to 255 characters long and included uppercase letters and underscores. Beginning
March 1, 2018, new buckets in US East (N. Virginia) must conform to the same rules applied in
all other Regions.

Example bucket names


The following example bucket names are valid and follow the recommended naming guidelines:

• docexamplebucket1
• log-delivery-march-2020
• my-hosted-content

The following example bucket names are valid but not recommended for uses other than static website
hosting:

• docexamplewebsite.com
• www.docexamplewebsite.com
• my.example.s3.bucket

The following example bucket names are not valid:

• doc_example_bucket (contains underscores)


• DocExampleBucket (contains uppercase letters)
• doc-example-bucket- (ends with a hyphen)

Creating a bucket
To upload your data to Amazon S3, you must first create an Amazon S3 bucket in one of the AWS
Regions. When you create a bucket, you must choose a bucket name and Region. You can optionally
choose other storage management options for the bucket. After you create a bucket, you cannot change
the bucket name or Region. For information about naming buckets, see Bucket naming rules (p. 27).

The AWS account that creates the bucket owns it. You can store any number of objects in a bucket.
By default, you can create up to 100 buckets in each of your AWS accounts. If you need more buckets,
you can increase your account bucket limit to a maximum of 1,000 buckets by submitting a service limit
increase. To learn how to submit a bucket limit increase, see AWS service quotas in the AWS General
Reference.

You can use the Amazon S3 console, Amazon S3 APIs, AWS CLI, or AWS SDKs to create a bucket.


Using the S3 console


1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Choose Create bucket.

The Create bucket wizard opens.


3. In Bucket name, enter a DNS-compliant name for your bucket.

The bucket name must:

• Be unique across all of Amazon S3.


• Be between 3 and 63 characters long.
• Not contain uppercase characters.
• Start with a lowercase letter or number.

After you create the bucket, you can't change its name. For information about naming buckets, see
Bucket naming rules (p. 27).
Important
Avoid including sensitive information, such as account numbers, in the bucket name. The
bucket name is visible in the URLs that point to the objects in the bucket.
4. In Region, choose the AWS Region where you want the bucket to reside.

Choose a Region close to you to minimize latency and costs and address regulatory requirements.
Objects stored in a Region never leave that Region unless you explicitly transfer them to another
Region. For a list of Amazon S3 AWS Regions, see AWS service endpoints in the Amazon Web Services
General Reference.
5. In Bucket settings for Block Public Access, choose the Block Public Access settings that you want to
apply to the bucket.

We recommend that you keep all settings enabled unless you know that you need to turn off one
or more of them for your use case, such as to host a public website. Block Public Access settings
that you enable for the bucket are also enabled for all access points that you create on the bucket.
For more information about blocking public access, see Blocking public access to your Amazon S3
storage (p. 408).
6. (Optional) If you want to enable S3 Object Lock, do the following:

a. Choose Advanced settings, and read the message that appears.


Important
You can only enable S3 Object Lock for a bucket when you create it. If you enable
Object Lock for the bucket, you can't disable it later. Enabling Object Lock also enables
versioning for the bucket. After you enable Object Lock for the bucket, you must
configure the Object Lock settings before any objects in the bucket will be protected.
For more information about configuring protection for objects, see Using S3 Object
Lock (p. 488).
b. If you want to enable Object Lock, enter enable in the text box and choose Confirm.

For more information about the S3 Object Lock feature, see Using S3 Object Lock (p. 488).
7. Choose Create bucket.


Using the AWS SDKs


When you use the AWS SDKs to create a bucket, you must create a client and then use the client to send
a request to create a bucket. As a best practice, you should create your client and bucket in the same
AWS Region. If you don't specify a Region when you create a client or a bucket, Amazon S3 uses the
default Region US East (N. Virginia).

To create a client to access a dual-stack endpoint, you must specify an AWS Region. For more
information, see Dual-stack endpoints (p. 904). For a list of available AWS Regions, see Regions and
endpoints in the AWS General Reference.

When you create a client, the Region maps to the Region-specific endpoint. The client uses this endpoint
to communicate with Amazon S3: s3.<region>.amazonaws.com. If your Region launched after March
20, 2019, your client and bucket must be in the same Region. However, you can use a client in the US
East (N. Virginia) Region to create a bucket in any Region that launched before March 20, 2019. For more
information, see Legacy Endpoints (p. 939).

These AWS SDK code examples perform the following tasks:

• Create a client by explicitly specifying an AWS Region — In the example, the client uses the s3.us-
west-2.amazonaws.com endpoint to communicate with Amazon S3. You can specify any AWS
Region. For a list of AWS Regions, see Regions and endpoints in the AWS General Reference.
• Send a create bucket request by specifying only a bucket name — The client sends a request to
Amazon S3 to create the bucket in the Region where you created a client.
• Retrieve information about the location of the bucket — Amazon S3 stores bucket location
information in the location subresource that is associated with the bucket.

Java

This example shows how to create an Amazon S3 bucket using the AWS SDK for Java. For
instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CreateBucketRequest;
import com.amazonaws.services.s3.model.GetBucketLocationRequest;

import java.io.IOException;

public class CreateBucket2 {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

if (!s3Client.doesBucketExistV2(bucketName)) {
// Because the CreateBucketRequest object doesn't specify a region, the
// bucket is created in the region specified in the client.
s3Client.createBucket(new CreateBucketRequest(bucketName));

// Verify that the bucket was created by retrieving it and checking its location.
String bucketLocation = s3Client.getBucketLocation(new GetBucketLocationRequest(bucketName));
System.out.println("Bucket location: " + bucketLocation);
}
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it and returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

.NET

For information about how to create and test a working sample, see Running the Amazon S3 .NET
Code Examples (p. 951).

Example

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class CreateBucketTest
{
private const string bucketName = "*** bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
public static void Main()
{
s3Client = new AmazonS3Client(bucketRegion);
CreateBucketAsync().Wait();
}

static async Task CreateBucketAsync()


{
try
{
if (!(await AmazonS3Util.DoesS3BucketExistAsync(s3Client, bucketName)))
{
var putBucketRequest = new PutBucketRequest
{
BucketName = bucketName,
UseClientRegion = true
};

PutBucketResponse putBucketResponse = await


s3Client.PutBucketAsync(putBucketRequest);


}
// Retrieve the bucket location.
string bucketLocation = await FindBucketLocationAsync(s3Client);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
static async Task<string> FindBucketLocationAsync(IAmazonS3 client)
{
string bucketLocation;
var request = new GetBucketLocationRequest()
{
BucketName = bucketName
};
GetBucketLocationResponse response = await
client.GetBucketLocationAsync(request);
bucketLocation = response.Location.ToString();
return bucketLocation;
}
}
}

Ruby

For information about how to create and test a working sample, see Using the AWS SDK for Ruby -
Version 3 (p. 953).

Example

require 'aws-sdk-s3'

# Creates a bucket in Amazon S3.


#
# @param s3_client [Aws::S3::Client] An initialized Amazon S3 client.
# @param bucket_name [String] The bucket's name.
# @return [Boolean] true if the bucket was created; otherwise, false.
# @example
# s3_client = Aws::S3::Client.new(region: 'us-east-1')
# exit 1 unless bucket_created?(s3_client, 'doc-example-bucket')
def bucket_created?(s3_client, bucket_name)
  s3_client.create_bucket(bucket: bucket_name)
  true
rescue StandardError => e
  puts "Error while creating the bucket named '#{bucket_name}': #{e.message}"
  false
end

Using the AWS CLI


You can also use the AWS Command Line Interface (AWS CLI) to create an S3 bucket. For more
information, see create-bucket in the AWS CLI Command Reference.

For information about the AWS CLI, see What is the AWS Command Line Interface? in the AWS Command
Line Interface User Guide.
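
As a minimal sketch (the bucket name and Region are placeholders), the following commands create a
bucket in US West (Oregon) and then read back its location subresource. In Regions other than US East
(N. Virginia), you must include a LocationConstraint that matches the Region you specify:

$ aws s3api create-bucket --bucket DOC-EXAMPLE-BUCKET --region us-west-2 \
    --create-bucket-configuration LocationConstraint=us-west-2

$ aws s3api get-bucket-location --bucket DOC-EXAMPLE-BUCKET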


Viewing the properties for an S3 bucket


You can view and configure the properties for an Amazon S3 bucket, including settings for versioning,
tags, default encryption, logging, notifications, and more.

To view the properties for an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to view the properties for.
3. Choose Properties.
4. On the Properties page, you can configure the following properties for the bucket. (A brief AWS CLI
sketch for checking several of these settings follows this procedure.)

• Bucket Versioning – Keep multiple versions of an object in one bucket by using versioning. By
default, versioning is disabled for a new bucket. For information about enabling versioning, see
Enabling versioning on buckets (p. 457).
• Tags – With AWS cost allocation, you can use bucket tags to annotate billing for your use of a
bucket. A tag is a key-value pair that represents a label that you assign to a bucket. To add tags,
choose Tags, and then choose Add tag. For more information, see Using cost allocation S3 bucket
tags (p. 618).
• Default encryption – Enabling default encryption provides you with automatic server-side
encryption. Amazon S3 encrypts an object before saving it to a disk and decrypts the object when
you download it. For more information, see Setting default server-side encryption behavior for
Amazon S3 buckets (p. 39).
• Server access logging – Get detailed records for the requests that are made to your bucket with
server access logging. By default, Amazon S3 doesn't collect server access logs. For information
about enabling server access logging, see Enabling Amazon S3 server access logging (p. 753)
• AWS CloudTrail data events – Use CloudTrail to log data events. By default, trails don't log data
events. Additional charges apply for data events. For more information, see Logging Data Events
for Trails in the AWS CloudTrail User Guide.
• Event notifications – Enable certain Amazon S3 bucket events to send notification messages to a
destination whenever the events occur. To enable events, choose Create event notification, and
then specify the settings you want to use. For more information, see Enabling and configuring
event notifications using the Amazon S3 console (p. 792).
• Transfer acceleration – Enable fast, easy, and secure transfers of files over long distances between
your client and an S3 bucket. For information about enabling transfer acceleration, see Enabling
and using S3 Transfer Acceleration (p. 46).
• Object Lock – Use S3 Object Lock to prevent an object from being deleted or overwritten for a
fixed amount of time or indefinitely. For more information, see Using S3 Object Lock (p. 488).
• Requester Pays – Enable Requester Pays if you want the requester (instead of the bucket owner)
to pay for requests and data transfers. For more information, see Using Requester Pays buckets for
storage transfers and usage (p. 51).
• Static website hosting – You can host a static website on Amazon S3. To enable static website
hosting, choose Static website hosting, and then specify the settings you want to use. For more
information, see Hosting a static website using Amazon S3 (p. 857).
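
Most of these settings can also be checked outside the console. As a rough sketch (the bucket name is a
placeholder, and several of these commands return an error rather than an empty result if the feature
has never been configured), you can read them back with the AWS CLI:

$ aws s3api get-bucket-versioning --bucket DOC-EXAMPLE-BUCKET
$ aws s3api get-bucket-encryption --bucket DOC-EXAMPLE-BUCKET
$ aws s3api get-bucket-logging --bucket DOC-EXAMPLE-BUCKET
$ aws s3api get-bucket-request-payment --bucket DOC-EXAMPLE-BUCKET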

Accessing a bucket
You can access your bucket using the Amazon S3 console. Using the console UI, you can perform almost
all bucket operations without having to write any code.


If you access a bucket programmatically, Amazon S3 supports RESTful architecture in which your buckets
and objects are resources, each with a resource URI that uniquely identifies the resource.

Amazon S3 supports both virtual-hosted–style and path-style URLs to access a bucket. Because
buckets can be accessed using path-style and virtual-hosted–style URLs, we recommend that you
create buckets with DNS-compliant bucket names. For more information, see Bucket restrictions and
limitations (p. 54).
Note
Virtual-hosted-style and path-style requests use the S3 dot Region endpoint structure
(s3.Region), for example, https://my-bucket.s3.us-west-2.amazonaws.com. However,
some older Amazon S3 Regions also support S3 dash Region endpoints s3-Region, for
example, https://my-bucket.s3-us-west-2.amazonaws.com. If your bucket is in one of
these Regions, you might see s3-Region endpoints in your server access logs or AWS CloudTrail
logs. We recommend that you do not use this endpoint structure in your requests.

Virtual-hosted–style access
In a virtual-hosted–style request, the bucket name is part of the domain name in the URL.

Amazon S3 virtual-hosted-style URLs use the following format.

https://bucket-name.s3.Region.amazonaws.com/key name

In this example, my-bucket is the bucket name, US West (Oregon) is the Region, and puppy.png is the
key name:

https://my-bucket.s3.us-west-2.amazonaws.com/puppy.png

For more information about virtual hosted style access, see Virtual Hosted-Style Requests (p. 936).

Path-style access
In Amazon S3, path-style URLs use the following format.

https://s3.Region.amazonaws.com/bucket-name/key name

For example, if you create a bucket named mybucket in the US West (Oregon) Region, and you want to
access the puppy.jpg object in that bucket, you can use the following path-style URL:

https://s3.us-west-2.amazonaws.com/mybucket/puppy.jpg

For more information, see Path-Style Requests (p. 935).


Important
Update (September 23, 2020) – We have decided to delay the deprecation of path-style URLs to
ensure that customers have the time that they need to transition to virtual hosted-style URLs.
For more information, see Amazon S3 Path Deprecation Plan – The Rest of the Story in the AWS
News Blog.

Accessing an S3 bucket over IPv6


Amazon S3 has a set of dual-stack endpoints, which support requests to S3 buckets over both Internet
Protocol version 6 (IPv6) and IPv4. For more information, see Making requests over IPv6 (p. 902).
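
As a hedged illustration (the bucket name and Region are placeholders), you can point individual AWS CLI
commands at a dual-stack endpoint, or make it the default for a profile:

# Use the dual-stack endpoint for a single command
$ aws s3 ls s3://DOC-EXAMPLE-BUCKET --endpoint-url https://s3.dualstack.us-west-2.amazonaws.com

# Or enable it as the default for the profile
$ aws configure set default.s3.use_dualstack_endpoint true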


Accessing a bucket through S3 Access Points


In addition to accessing a bucket directly, you can access a bucket through an access point. For more
information about the S3 Access Points feature, see Managing data access with Amazon S3 access points
(p. 418).

S3 Access Points only support virtual-host-style addressing. To address a bucket through an access point,
use the following format.

https://AccessPointName-AccountId.s3-accesspoint.region.amazonaws.com.

Note

• If your access point name includes dash (-) characters, include the dashes in the URL and
insert another dash before the account ID. For example, to use an access point named
finance-docs owned by account 123456789012 in Region us-west-2, the appropriate
URL would be https://finance-docs-123456789012.s3-accesspoint.us-
west-2.amazonaws.com.
• S3 Access Points don't support access by HTTP, only secure access by HTTPS.
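
If you use the AWS CLI or the AWS SDKs, you typically don't construct these URLs yourself; recent
versions accept the access point ARN wherever a bucket name is expected. As a rough sketch (the
Region, account ID, access point name, and key shown are placeholders):

$ aws s3api get-object \
    --bucket arn:aws:s3:us-west-2:123456789012:accesspoint/finance-docs \
    --key example.txt example.txt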

Accessing a bucket using S3://


Some AWS services require specifying an Amazon S3 bucket using S3://bucket. The following example
shows the correct format. Be aware that when using this format, the bucket name does not include the
AWS Region.

S3://bucket-name/key-name

For example, the following uses the sample bucket described in the earlier path-style section.

S3://mybucket/puppy.jpg

Emptying a bucket
You can empty a bucket's contents using the Amazon S3 console, AWS SDKs, or AWS Command Line
Interface (AWS CLI). When you empty a bucket, you delete all the content, but you keep the bucket.

You can also specify a lifecycle configuration on a bucket to expire objects so that Amazon S3 can delete
them. However, there are limitations on this method based on the number of objects in your bucket and
the bucket's versioning status.

Using the S3 console


You can use the Amazon S3 console to empty a bucket, which deletes all of the objects in the bucket
without deleting the bucket. When you empty a bucket that has S3 Bucket Versioning enabled, all
versions of all the objects in the bucket are deleted. For more information, see Working with objects in a
versioning-enabled bucket (p. 462).

To empty an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.


2. In the Bucket name list, select the option next to the name of the bucket that you want to empty,
and then choose Empty.
3. On the Empty bucket page, confirm that you want to empty the bucket by entering the bucket
name into the text field, and then choose Empty.
4. (Optional) Monitor the progress of the bucket emptying process on the Empty bucket: Status page.

Using the AWS CLI


You can empty a bucket using the AWS CLI only if the bucket does not have Bucket Versioning enabled.
If versioning is not enabled, you can use the rm (remove) AWS CLI command with the --recursive
parameter to empty the bucket (or remove a subset of objects with a specific key name prefix).

The following rm command removes objects that have the key name prefix doc, for example, doc/doc1
and doc/doc2.

$ aws s3 rm s3://bucket-name/doc --recursive

Use the following command to remove all objects without specifying a prefix.

$ aws s3 rm s3://bucket-name --recursive

For more information, see Using high-level S3 commands with the AWS CLI in the AWS Command Line
Interface User Guide.
Note
You can't remove objects from a bucket that has versioning enabled. Amazon S3 adds a delete
marker when you delete an object, which is what this command does. For more information
about S3 Bucket Versioning, see Using versioning in S3 buckets (p. 453).

Using the AWS SDKs


You can use the AWS SDKs to empty a bucket or remove a subset of objects that have a specific key
name prefix.

For an example of how to empty a bucket using AWS SDK for Java, see Deleting a bucket (p. 37). The
code deletes all objects, regardless of whether the bucket has versioning enabled, and then it deletes the
bucket. To just empty the bucket, make sure that you remove the statement that deletes the bucket.

For more information about using other AWS SDKs, see Tools for Amazon Web Services.

Using a lifecycle configuration


You can configure lifecycle on your bucket to expire objects and request that Amazon S3 delete expired
objects. You can add lifecycle configuration rules to expire all or a subset of objects that have a specific
key name prefix. For example, to remove all objects in a bucket, you can set a lifecycle rule to expire
objects one day after creation.

If your bucket has versioning enabled, you can also configure the rule to expire noncurrent objects. To
fully empty the contents of a versioning-enabled bucket, you must configure an expiration policy on
both current and noncurrent objects in the bucket.

For more information, see Managing your storage lifecycle (p. 501) and Expiring objects (p. 507).
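
As a hedged sketch of such a configuration (the bucket name is a placeholder, and you should adjust the
filter and time periods to your data), the following AWS CLI command applies a rule that expires current
objects one day after creation, removes noncurrent versions, and aborts incomplete multipart uploads:

$ aws s3api put-bucket-lifecycle-configuration --bucket DOC-EXAMPLE-BUCKET \
    --lifecycle-configuration '{
        "Rules": [
            {
                "ID": "empty-bucket",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Expiration": {"Days": 1},
                "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1}
            }
        ]
    }'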


Deleting a bucket
You can delete an empty Amazon S3 bucket, and when you're using the AWS Management Console, you
can delete a bucket that contains objects. If you delete a bucket that contains objects, all the objects in
the bucket are permanently deleted.

When you delete a bucket that has S3 Bucket Versioning enabled, all versions of all the objects in the
bucket are permanently deleted. For more information about versioning, see Working with objects in a
versioning-enabled bucket (p. 462).

Before deleting a bucket, consider the following:

• Bucket names are unique. If you delete a bucket, another AWS user can use the name.
• When you delete a bucket that contains objects, all the objects in the bucket are permanently deleted,
including objects that transitioned to the S3 Glacier storage class.
• If the bucket hosts a static website, and you created and configured an Amazon Route 53 hosted zone
as described in Configuring a static website using a custom domain registered with Route 53 (p. 884),
you must clean up the Route 53 hosted zone settings that are related to the bucket. For more
information, see Step 2: Delete the Route 53 hosted zone (p. 898).
• If the bucket receives log data from Elastic Load Balancing (ELB): We recommend that you stop the
delivery of ELB logs to the bucket before deleting it. After you delete the bucket, if another user
creates a bucket using the same name, your log data could potentially be delivered to that bucket. For
information about ELB access logs, see Access logs in the User Guide for Classic Load Balancers and
Access logs in the User Guide for Application Load Balancers.

Important
Bucket names are unique. If you delete a bucket, another AWS user can use the name. If you
want to continue to use the same bucket name, don't delete the bucket. We recommend that
you empty the bucket and keep it.

Using the S3 console


To delete an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, select the option next to the name of the bucket that you want to delete, and
then choose Delete at the top of the page.
3. On the Delete bucket page, confirm that you want to delete the bucket by entering the bucket
name into the text field, and then choose Delete bucket.
Note
If the bucket contains any objects, empty the bucket before deleting it by selecting the
empty bucket configuration link in the This bucket is not empty error alert and following
the instructions on the Empty bucket page. Then return to the Delete bucket page and
delete the bucket.

Using the AWS SDK for Java


The following example shows you how to delete a bucket using the AWS SDK for Java. First, the code
deletes objects in the bucket and then it deletes the bucket. For information about other AWS SDKs, see
Tools for Amazon Web Services.


Java

The following Java example deletes a bucket that contains objects. The example deletes all objects,
and then it deletes the bucket. The example works for buckets with or without versioning enabled.
Note
For buckets without versioning enabled, you can delete all objects directly and then delete
the bucket. For buckets with versioning enabled, you must delete all object versions before
deleting the bucket.

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

import java.util.Iterator;

public class DeleteBucket2 {

public static void main(String[] args) {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Delete all objects from the bucket. This is sufficient
// for unversioned buckets. For versioned buckets, when you attempt to delete objects,
// Amazon S3 inserts delete markers for all objects, but doesn't delete the object versions.
// To delete objects from versioned buckets, delete all of the object versions before
// deleting the bucket (see below for an example).
ObjectListing objectListing = s3Client.listObjects(bucketName);
while (true) {
Iterator<S3ObjectSummary> objIter =
objectListing.getObjectSummaries().iterator();
while (objIter.hasNext()) {
s3Client.deleteObject(bucketName, objIter.next().getKey());
}

// If the bucket contains many objects, the listObjects() call
// might not return all of the objects in the first listing. Check to
// see whether the listing was truncated. If so, retrieve the next page
// of objects and delete them.
if (objectListing.isTruncated()) {
objectListing = s3Client.listNextBatchOfObjects(objectListing);
} else {
break;
}
}

// Delete all object versions (required for versioned buckets).


VersionListing versionList = s3Client.listVersions(new


ListVersionsRequest().withBucketName(bucketName));
while (true) {
Iterator<S3VersionSummary> versionIter =
versionList.getVersionSummaries().iterator();
while (versionIter.hasNext()) {
S3VersionSummary vs = versionIter.next();
s3Client.deleteVersion(bucketName, vs.getKey(), vs.getVersionId());
}

if (versionList.isTruncated()) {
versionList = s3Client.listNextBatchOfVersions(versionList);
} else {
break;
}
}

// After all objects and object versions are deleted, delete the bucket.
s3Client.deleteBucket(bucketName);
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client couldn't
// parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Using the AWS CLI


You can delete a bucket that contains objects with the AWS CLI if it doesn't have versioning enabled.
When you delete a bucket that contains objects, all the objects in the bucket are permanently deleted,
including objects that are transitioned to the S3 Glacier storage class.

If your bucket does not have versioning enabled, you can use the rb (remove bucket) AWS CLI command
with the --force parameter to delete the bucket and all the objects in it. This command deletes all
objects first and then deletes the bucket.

$ aws s3 rb s3://bucket-name --force

For more information, see Using High-Level S3 Commands with the AWS Command Line Interface in the
AWS Command Line Interface User Guide.

Setting default server-side encryption behavior for Amazon S3 buckets

With Amazon S3 default encryption, you can set the default encryption behavior for an S3 bucket so
that all new objects are encrypted when they are stored in the bucket. The objects are encrypted using
server-side encryption with either Amazon S3-managed keys (SSE-S3) or customer master keys (CMKs)
stored in AWS Key Management Service (AWS KMS) (SSE-KMS).

When you configure your bucket to use default encryption with SSE-KMS, you can also enable S3 Bucket
Keys to decrease request traffic from Amazon S3 to AWS Key Management Service (AWS KMS) and
reduce the cost of encryption. For more information, see Reducing the cost of SSE-KMS with Amazon S3
Bucket Keys (p. 166).

When you use server-side encryption, Amazon S3 encrypts an object before saving it to disk and
decrypts it when you download the objects. For more information about protecting data using
server-side encryption and encryption key management, see Protecting data using server-side
encryption (p. 157).

For more information about permissions required for default encryption, see PutBucketEncryption in the
Amazon Simple Storage Service API Reference.

To set up default encryption on a bucket, you can use the Amazon S3 console, AWS CLI, AWS SDKs, or
the REST API. For more information, see the section called “Enabling default encryption” (p. 41).

Encrypting existing objects

To encrypt your existing Amazon S3 objects with a single request, you can use Amazon S3 Batch
Operations. You provide S3 Batch Operations with a list of objects to operate on, and Batch Operations
calls the respective API to perform the specified operation. You can use the copy operation to copy the
existing unencrypted objects and write the new encrypted objects to the same bucket. A single Batch
Operations job can perform the specified operation on billions of objects containing exabytes of data.
For more information, see Performing S3 Batch Operations (p. 662).

You can also encrypt existing objects using the Copy Object API. For more information, see the AWS
Storage Blog post Encrypting existing Amazon S3 objects with the AWS CLI.
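
As a rough sketch of the copy-in-place approach (the bucket name is a placeholder, and you should test
on a small prefix first), the following AWS CLI command copies each object onto itself and requests
SSE-S3 encryption for the new copies. Be aware that the copy changes each object's ETag and last-modified
time, and creates new versions in a versioning-enabled bucket:

$ aws s3 cp s3://DOC-EXAMPLE-BUCKET/ s3://DOC-EXAMPLE-BUCKET/ --recursive --sse AES256
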
Note
Amazon S3 buckets with default bucket encryption using SSE-KMS cannot be used as
destination buckets for the section called “Logging server access” (p. 751). Only SSE-S3
default encryption is supported for server access log destination buckets.

Using encryption for cross-account operations


Be aware of the following when using encryption for cross-account operations:

• The AWS managed CMK (aws/s3) is used when a CMK Amazon Resource Name (ARN) or alias is not
provided at request time, nor via the bucket's default encryption configuration.
• If you're uploading or accessing S3 objects using AWS Identity and Access Management (IAM)
principals that are in the same AWS account as your CMK, you can use the AWS managed CMK (aws/
s3).
• Use a customer managed CMK if you want to grant cross-account access to your S3 objects. You can
configure the policy of a customer managed CMK to allow access from another account.
• If specifying your own CMK, you should use a fully qualified CMK key ARN. When using a CMK alias,
be aware that AWS KMS will resolve the key within the requester’s account. This can result in data
encrypted with a CMK that belongs to the requester, and not the bucket administrator.
• You must specify a key that you (the requester) have been granted Encrypt permission to. For
more information, see Allows key users to use a CMK for cryptographic operations in the AWS Key
Management Service Developer Guide.

For more information about when to use customer managed CMKs and the AWS managed CMK, see
Should I use an AWS KMS-managed key or a custom AWS KMS key to encrypt my objects on
Amazon S3?

Using default encryption with replication


When you enable default encryption for a replication destination bucket, the following encryption
behavior applies:


• If objects in the source bucket are not encrypted, the replica objects in the destination bucket are
encrypted using the default encryption settings of the destination bucket. This results in the ETag of
the source object being different from the ETag of the replica object. You must update applications
that use the ETag to accommodate for this difference.
• If objects in the source bucket are encrypted using SSE-S3 or SSE-KMS, the replica objects in the
destination bucket use the same encryption as the source object encryption. The default encryption
settings of the destination bucket are not used.

For more information about using default encryption with SSE-KMS, see Replicating encrypted
objects (p. 599).

Using Amazon S3 Bucket Keys with default encryption

When you configure your bucket to use default encryption for SSE-KMS on new objects, you can also
configure S3 Bucket Keys. S3 Bucket Keys decrease the number of transactions from Amazon S3 to AWS
KMS to reduce the cost of server-side encryption using AWS Key Management Service (SSE-KMS).

When you configure your bucket to use S3 Bucket Keys for SSE-KMS on new objects, AWS KMS generates
a bucket-level key that is used to create a unique data key for objects in the bucket. This bucket key is
used for a time-limited period within Amazon S3, reducing the need for Amazon S3 to make requests to
AWS KMS to complete encryption operations.

For more information about using an S3 Bucket Key, see Reducing the cost of SSE-KMS with Amazon S3
Bucket Keys (p. 166).

Enabling Amazon S3 default bucket encryption


You can set the default encryption behavior on an Amazon S3 bucket so that all objects are encrypted
when they are stored in the bucket. The objects are encrypted using server-side encryption with either
Amazon S3-managed keys (SSE-S3) or AWS Key Management Service (AWS KMS) customer master keys
(CMKs).

When you configure default encryption using AWS KMS, you can also configure S3 Bucket Key. For more
information, see Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys (p. 166).

Default encryption works with all existing and new Amazon S3 buckets. Without default encryption, to
encrypt all objects stored in a bucket, you must include encryption information with every object storage
request. You must also set up an Amazon S3 bucket policy to reject storage requests that don't include
encryption information.

There are no additional charges for using default encryption for S3 buckets. Requests to configure the
default encryption feature incur standard Amazon S3 request charges. For information about pricing,
see Amazon S3 pricing. For SSE-KMS CMK storage, AWS KMS charges apply and are listed at AWS KMS
pricing.

You can enable Amazon S3 default encryption for an S3 bucket using the Amazon S3 console, the AWS
SDKs, the Amazon S3 REST API, and the AWS Command Line Interface (AWS CLI).

Using the console


To enable default encryption on an Amazon S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.


2. In the Buckets list, choose the name of the bucket that you want.
3. Choose Properties.
4. Under Default encryption, choose Edit.
5. To enable or disable server-side encryption, choose Enable or Disable.
6. To enable server-side encryption using an Amazon S3-managed key, under Encryption key type,
choose Amazon S3 key (SSE-S3).

For more information about using Amazon S3 server-side encryption to encrypt your data, see
Protecting data using server-side encryption with Amazon S3-managed encryption keys (SSE-
S3) (p. 174).
7. To enable server-side encryption using an AWS KMS CMK, follow these steps:

a. Under Encryption key type, choose AWS Key Management Service key (SSE-KMS).
Important
If you use the AWS KMS option for your default encryption configuration, you are
subject to the RPS (requests per second) limits of AWS KMS. For more information
about AWS KMS quotas and how to request a quota increase, see Quotas.
b. Under AWS KMS key choose one of the following:

• AWS managed key (aws/s3)


• Choose from your KMS master keys, and choose your KMS master key.
• Enter KMS master key ARN, and enter your AWS KMS key ARN.

Important
You can only use KMS CMKs that are enabled in the same AWS Region as the bucket.
When you choose Choose from your KMS master keys, the S3 console only lists 100
KMS CMKs per Region. If you have more than 100 CMKs in the same Region, you can
only see the first 100 CMKs in the S3 console. To use a KMS CMK that is not listed in the
console, choose Custom KMS ARN, and enter the KMS CMK ARN.
When you use an AWS KMS CMK for server-side encryption in Amazon S3, you
must choose a symmetric CMK. Amazon S3 only supports symmetric CMKs and not
asymmetric CMKs. For more information, see Using symmetric and asymmetric keys in
the AWS Key Management Service Developer Guide.

For more information about creating an AWS KMS CMK, see Creating keys in the AWS Key
Management Service Developer Guide. For more information about using AWS KMS with
Amazon S3, see Protecting Data Using Server-Side Encryption with CMKs Stored in AWS Key
Management Service (SSE-KMS) (p. 158).
8. To use S3 Bucket Keys, under Bucket Key, choose Enable.

When you configure your bucket to use default encryption with SSE-KMS, you can also enable S3
Bucket Key. S3 Bucket Keys decrease request traffic from Amazon S3 to AWS KMS and lower the
cost of encryption. For more information, see Reducing the cost of SSE-KMS with Amazon S3 Bucket
Keys (p. 166).
9. Choose Save changes.

Using the API


Use the REST API PUT Bucket encryption operation to enable default encryption and to set the type of
server-side encryption to use—SSE-S3 or SSE-KMS.

For more information, see PutBucketEncryption in the Amazon Simple Storage Service API Reference.
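
If you script this with the AWS CLI, the equivalent command is put-bucket-encryption. The following is a
minimal sketch (the bucket name and KMS key ARN are placeholders) that makes SSE-KMS the default
encryption for new objects and enables an S3 Bucket Key:

$ aws s3api put-bucket-encryption --bucket DOC-EXAMPLE-BUCKET \
    --server-side-encryption-configuration '{
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID"
                },
                "BucketKeyEnabled": true
            }
        ]
    }'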

After you enable default encryption for a bucket, the following encryption behavior applies:


• There is no change to the encryption of the objects that existed in the bucket before default
encryption was enabled.
• When you upload objects after enabling default encryption:
• If your PUT request headers don't include encryption information, Amazon S3 uses the bucket’s
default encryption settings to encrypt the objects.
• If your PUT request headers include encryption information, Amazon S3 uses the encryption
information from the PUT request to encrypt objects before storing them in Amazon S3.
• If you use the SSE-KMS option for your default encryption configuration, you are subject to the RPS
(requests per second) limits of AWS KMS. For more information about AWS KMS limits and how to
request a limit increase, see AWS KMS limits.

Using the AWS SDKs


For information about enabling default encryption using the AWS SDKs, see Developing with Amazon S3
using the AWS SDKs, and explorers (p. 943).

Monitoring default encryption with CloudTrail and CloudWatch

You can track default encryption configuration requests for Amazon S3 buckets using AWS CloudTrail
events. The following API event names are used in CloudTrail logs:

• PutBucketEncryption
• GetBucketEncryption
• DeleteBucketEncryption

You can also create Amazon CloudWatch Events with S3 bucket-level operations as the event type.
For more information about CloudTrail events, see Enable logging for objects in a bucket using the
console (p. 744).

You can use CloudTrail logs for object-level Amazon S3 actions to track PUT and POST requests to
Amazon S3. You can use these actions to verify whether default encryption is being used to encrypt
objects when incoming PUT requests don't have encryption headers.

When Amazon S3 encrypts an object using the default encryption settings, the log includes
the following field as the name/value pair: "SSEApplied":"Default_SSE_S3" or
"SSEApplied":"Default_SSE_KMS".

When Amazon S3 encrypts an object using the PUT encryption headers, the log includes one of the
following fields as the name/value pair: "SSEApplied":"SSE_S3", "SSEApplied":"SSE_KMS or
"SSEApplied":"SSE_C".

For multipart uploads, this information is included in the InitiateMultipartUpload API requests. For
more information about using CloudTrail and CloudWatch, see Monitoring Amazon S3 (p. 732).
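
As a hedged illustration, if CloudTrail management events are enabled for your account, you can search
recent default encryption configuration changes with the AWS CLI; the event name to filter on is one of
the API names listed above:

$ aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=PutBucketEncryption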

Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration

Amazon S3 Transfer Acceleration is a bucket-level feature that enables fast, easy, and secure transfers
of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage
of the globally distributed edge locations in Amazon CloudFront. As the data arrives at an edge location,
the data is routed to Amazon S3 over an optimized network path.

When you use Transfer Acceleration, additional data transfer charges might apply. For more information
about pricing, see Amazon S3 pricing.

Why use Transfer Acceleration?


You might want to use Transfer Acceleration on a bucket for various reasons:

• Your customers upload to a centralized bucket from all over the world.
• You transfer gigabytes to terabytes of data on a regular basis across continents.
• You can't use all of your available bandwidth over the internet when uploading to Amazon S3.

For more information about when to use Transfer Acceleration, see Amazon S3 FAQs.

Requirements for using Transfer Acceleration


The following are required when you are using Transfer Acceleration on an S3 bucket:

• Transfer Acceleration is only supported on virtual-hosted style requests. For more information about
virtual-hosted style requests, see Making requests using the REST API (p. 933).
• The name of the bucket used for Transfer Acceleration must be DNS-compliant and must not contain
periods (".").
• Transfer Acceleration must be enabled on the bucket. For more information, see Enabling and using S3
Transfer Acceleration (p. 46).

After you enable Transfer Acceleration on a bucket, it might take up to 20 minutes before the data
transfer speed to the bucket increases.
Note
Transfer Acceleration is currently not supported for buckets located in the following Regions:
• Africa (Cape Town) (af-south-1)
• Asia Pacific (Hong Kong) (ap-east-1)
• Asia Pacific (Osaka-Local) (ap-northeast-3)
• Europe (Stockholm) (eu-north-1)
• Europe (Milan) (eu-south-1)
• Middle East (Bahrain) (me-south-1)
• To access the bucket that is enabled for Transfer Acceleration, you must use the endpoint
bucketname.s3-accelerate.amazonaws.com. Or, use the dual-stack endpoint bucketname.s3-
accelerate.dualstack.amazonaws.com to connect to the enabled bucket over IPv6.
• You must be the bucket owner to set the transfer acceleration state. The bucket owner can
assign permissions to other users to allow them to set the acceleration state on a bucket. The
s3:PutAccelerateConfiguration permission permits users to enable or disable Transfer
Acceleration on a bucket. The s3:GetAccelerateConfiguration permission permits users to
return the Transfer Acceleration state of a bucket, which is either Enabled or Suspended. For more
information about these permissions, see Example — Bucket subresource operations (p. 231) and
Identity and access management in Amazon S3 (p. 209).

The following sections describe how to get started and use Amazon S3 Transfer Acceleration for
transferring data.

Topics


• Getting started with Amazon S3 Transfer Acceleration (p. 45)


• Enabling and using S3 Transfer Acceleration (p. 46)
• Using the Amazon S3 Transfer Acceleration Speed Comparison tool (p. 50)

Getting started with Amazon S3 Transfer Acceleration

You can use Amazon S3 Transfer Acceleration for fast, easy, and secure transfers of files over long
distances between your client and an S3 bucket. Transfer Acceleration uses the globally distributed edge
locations in Amazon CloudFront. As the data arrives at an edge location, data is routed to Amazon S3
over an optimized network path.

To get started using Amazon S3 Transfer Acceleration, perform the following steps:

1. Enable Transfer Acceleration on a bucket

You can enable Transfer Acceleration on a bucket any of the following ways:
• Use the Amazon S3 console.
• Use the REST API PUT Bucket accelerate operation.
• Use the AWS CLI and AWS SDKs. For more information, see Developing with Amazon S3 using the
AWS SDKs, and explorers (p. 943).

For more information, see Enabling and using S3 Transfer Acceleration (p. 46).
Note
For your bucket to work with transfer acceleration, the bucket name must conform to DNS
naming requirements and must not contain periods (".").
2. Transfer data to and from the acceleration-enabled bucket

Use one of the following s3-accelerate endpoint domain names:


• To access an acceleration-enabled bucket, use bucketname.s3-accelerate.amazonaws.com.
• To access an acceleration-enabled bucket over IPv6, use bucketname.s3-
accelerate.dualstack.amazonaws.com.

Amazon S3 dual-stack endpoints support requests to S3 buckets over IPv6 and IPv4. The Transfer
Acceleration dual-stack endpoint only uses the virtual hosted-style type of endpoint name. For
more information, see Getting started making requests over IPv6 (p. 902) and Using Amazon S3
dual-stack endpoints (p. 904).
Note
You can continue to use the regular endpoint in addition to the accelerate endpoints.

You can point your Amazon S3 PUT object and GET object requests to the s3-accelerate
endpoint domain name after you enable Transfer Acceleration. For example, suppose that you
currently have a REST API application using PUT Object that uses the hostname mybucket.s3.us-
east-1.amazonaws.com in the PUT request. To accelerate the PUT, you change the hostname in
your request to mybucket.s3-accelerate.amazonaws.com. To go back to using the standard
upload speed, change the name back to mybucket.s3.us-east-1.amazonaws.com.

After Transfer Acceleration is enabled, it can take up to 20 minutes for you to realize the performance
benefit. However, the accelerate endpoint is available as soon as you enable Transfer Acceleration.

You can use the accelerate endpoint in the AWS CLI, AWS SDKs, and other tools that transfer data
to and from Amazon S3. If you are using the AWS SDKs, some of the supported languages use
an accelerate endpoint client configuration flag so you don't need to explicitly set the endpoint
for Transfer Acceleration to bucketname.s3-accelerate.amazonaws.com. For examples of
how to use an accelerate endpoint client configuration flag, see Enabling and using S3 Transfer
Acceleration (p. 46).

You can use all Amazon S3 operations through the transfer acceleration endpoints except for the
following:

• GET Service (list buckets)


• PUT Bucket (create bucket)
• DELETE Bucket

Also, Amazon S3 Transfer Acceleration does not support cross-Region copies using PUT Object - Copy.

Enabling and using S3 Transfer Acceleration


You can use Amazon S3 Transfer Acceleration to transfer files quickly and securely over long distances
between your client and an S3 bucket. You can enable Transfer Acceleration using the S3 console, the
AWS Command Line Interface (AWS CLI), or the AWS SDKs.

This section provides examples of how to enable Amazon S3 Transfer Acceleration on a bucket and use
the acceleration endpoint for the enabled bucket.

For more information about Transfer Acceleration requirements, see Configuring fast, secure file
transfers using Amazon S3 Transfer Acceleration (p. 43).

Using the S3 console


Note
If you want to compare accelerated and non-accelerated upload speeds, open the Amazon S3
Transfer Acceleration Speed Comparison tool.
The Speed Comparison tool uses multipart upload to transfer a file from your browser to various
AWS Regions with and without Amazon S3 transfer acceleration. You can compare the upload
speed for direct uploads and transfer accelerated uploads by Region.

To enable transfer acceleration for an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to enable transfer acceleration for.
3. Choose Properties.
4. Under Transfer acceleration, choose Edit.
5. Choose Enable, and choose Save changes.

To access accelerated data transfers

1. After Amazon S3 enables transfer acceleration for your bucket, view the Properties tab for the
bucket.
2. Under Transfer acceleration, Accelerated endpoint displays the transfer acceleration endpoint for
your bucket. Use this endpoint to access accelerated data transfers to and from your bucket.

If you suspend transfer acceleration, the accelerate endpoint no longer works.


Using the AWS CLI


The following are examples of AWS CLI commands used for Transfer Acceleration. For instructions on
setting up the AWS CLI, see Developing with Amazon S3 using the AWS CLI (p. 942).

Enabling Transfer Acceleration on a bucket


Use the AWS CLI put-bucket-accelerate-configuration command to enable or suspend Transfer
Acceleration on a bucket.

The following example sets Status=Enabled to enable Transfer Acceleration on a bucket. You use
Status=Suspended to suspend Transfer Acceleration.

Example

$ aws s3api put-bucket-accelerate-configuration --bucket bucketname --accelerate-configuration Status=Enabled
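
To confirm the change, you can read the acceleration state back with the corresponding get command; it
returns Enabled or Suspended:

$ aws s3api get-bucket-accelerate-configuration --bucket bucketname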

Using Transfer Acceleration


You can direct all Amazon S3 requests made by s3 and s3api AWS CLI commands to the
accelerate endpoint: s3-accelerate.amazonaws.com. To do this, set the configuration value
use_accelerate_endpoint to true in a profile in your AWS Config file. Transfer Acceleration must be
enabled on your bucket to use the accelerate endpoint.

All requests are sent using the virtual style of bucket addressing: my-bucket.s3-
accelerate.amazonaws.com. Any ListBuckets, CreateBucket, and DeleteBucket requests are
not sent to the accelerate endpoint because the endpoint doesn't support those operations.

For more information about use_accelerate_endpoint, see AWS CLI S3 Configuration in the AWS CLI
Command Reference.

The following example sets use_accelerate_endpoint to true in the default profile.

Example

$ aws configure set default.s3.use_accelerate_endpoint true

If you want to use the accelerate endpoint for some AWS CLI commands but not others, you can use
either one of the following two methods:

• Use the accelerate endpoint per command by setting the --endpoint-url parameter to https://s3-accelerate.amazonaws.com or http://s3-accelerate.amazonaws.com for any s3 or s3api command.
• Set up separate profiles in your AWS Config file. For example, create one profile that sets
use_accelerate_endpoint to true and a profile that does not set use_accelerate_endpoint.
When you run a command, specify which profile you want to use, depending upon whether you want
to use the accelerate endpoint, as shown in the example following this list.
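
For example, the following AWS CLI sketch shows the separate-profile approach. The profile name transferaccel is only an illustration, and the bucket and key names are placeholders.

Example

$ aws configure set profile.transferaccel.s3.use_accelerate_endpoint true
$ aws s3 cp file.txt s3://bucketname/keyname --profile transferaccel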

Uploading an object to a bucket enabled for Transfer Acceleration


The following example uploads a file to a bucket enabled for Transfer Acceleration by using the default
profile that has been configured to use the accelerate endpoint.

Example

$ aws s3 cp file.txt s3://bucketname/keyname --region region


The following example uploads a file to a bucket enabled for Transfer Acceleration by using the --endpoint-url parameter to specify the accelerate endpoint.

Example

$ aws configure set s3.addressing_style virtual
$ aws s3 cp file.txt s3://bucketname/keyname --region region --endpoint-url http://s3-accelerate.amazonaws.com

Using the AWS SDKs


The following are examples of using Transfer Acceleration to upload objects to Amazon S3 using
the AWS SDKs. Some of the languages that the AWS SDKs support (for example, Java and .NET) provide an accelerate endpoint client configuration flag, so you don't need to explicitly set the endpoint for Transfer Acceleration to bucketname.s3-accelerate.amazonaws.com.

Java

Example

The following example shows how to use an accelerate endpoint to upload an object to Amazon S3.
The example does the following:

• Creates an AmazonS3Client that is configured to use accelerate endpoints. All buckets that the
client accesses must have Transfer Acceleration enabled.
• Enables Transfer Acceleration on a specified bucket. This step is necessary only if the bucket you
specify doesn't already have Transfer Acceleration enabled.
• Verifies that transfer acceleration is enabled for the specified bucket.
• Uploads a new object to the specified bucket using the bucket's accelerate endpoint.

For more information about using Transfer Acceleration, see Getting started with Amazon S3
Transfer Acceleration (p. 45). For instructions on creating and testing a working sample, see
Testing the Amazon S3 Java Code Examples (p. 950).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketAccelerateConfiguration;
import com.amazonaws.services.s3.model.BucketAccelerateStatus;
import com.amazonaws.services.s3.model.GetBucketAccelerateConfigurationRequest;
import com.amazonaws.services.s3.model.SetBucketAccelerateConfigurationRequest;

public class TransferAcceleration {


public static void main(String[] args) {
Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String keyName = "*** Key name ***";

try {
// Create an Amazon S3 client that is configured to use the accelerate endpoint.
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.enableAccelerateMode()
.build();

// Enable Transfer Acceleration for the specified bucket.
s3Client.setBucketAccelerateConfiguration(
new SetBucketAccelerateConfigurationRequest(bucketName,
new BucketAccelerateConfiguration(
BucketAccelerateStatus.Enabled)));

// Verify that transfer acceleration is enabled for the bucket.
String accelerateStatus = s3Client.getBucketAccelerateConfiguration(
new GetBucketAccelerateConfigurationRequest(bucketName))
.getStatus();
System.out.println("Bucket accelerate status: " + accelerateStatus);

// Upload a new object using the accelerate endpoint.
s3Client.putObject(bucketName, keyName, "Test object for transfer acceleration");
System.out.println("Object \"" + keyName + "\" uploaded with transfer acceleration.");
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

.NET

The following example shows how to use the AWS SDK for .NET to enable Transfer Acceleration on
a bucket. For instructions on how to create and test a working sample, see Running the Amazon
S3 .NET Code Examples (p. 951).

Example

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class TransferAccelerationTest
{
private const string bucketName = "*** bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
public static void Main()
{
s3Client = new AmazonS3Client(bucketRegion);
EnableAccelerationAsync().Wait();
}

static async Task EnableAccelerationAsync()


{
try
{

var putRequest = new PutBucketAccelerateConfigurationRequest
{
BucketName = bucketName,
AccelerateConfiguration = new AccelerateConfiguration
{
Status = BucketAccelerateStatus.Enabled
}
};
await s3Client.PutBucketAccelerateConfigurationAsync(putRequest);

var getRequest = new GetBucketAccelerateConfigurationRequest


{
BucketName = bucketName
};
var response = await
s3Client.GetBucketAccelerateConfigurationAsync(getRequest);

Console.WriteLine("Acceleration state = '{0}' ", response.Status);


}
catch (AmazonS3Exception amazonS3Exception)
{
Console.WriteLine(
"Error occurred. Message:'{0}' when setting transfer acceleration",
amazonS3Exception.Message);
}
}
}
}

When you upload an object to a bucket that has Transfer Acceleration enabled, you specify that the client use the accelerate endpoint at the time that you create the client, as shown in the following example.

var client = new AmazonS3Client(new AmazonS3Config
{
    RegionEndpoint = TestRegionEndpoint,
    UseAccelerateEndpoint = true
});

JavaScript

For an example of enabling Transfer Acceleration by using the AWS SDK for JavaScript, see Calling
the putBucketAccelerateConfiguration operation in the AWS SDK for JavaScript API Reference.
Python (Boto)

For an example of enabling Transfer Acceleration by using the SDK for Python, see
put_bucket_accelerate_configuration in the AWS SDK for Python (Boto3) API Reference.
Other

For information about using other AWS SDKs, see Sample Code and Libraries.

Using the Amazon S3 Transfer Acceleration Speed Comparison tool
You can use the Amazon S3 Transfer Acceleration Speed Comparison tool to compare accelerated and
non-accelerated upload speeds across Amazon S3 Regions. The Speed Comparison tool uses multipart
uploads to transfer a file from your browser to various Amazon S3 Regions with and without using
Transfer Acceleration.


You can access the Speed Comparison tool using either of the following methods:

• Copy the following URL into your browser window, replacing region with the AWS Region that you
are using (for example, us-west-2) and yourBucketName with the name of the bucket that you
want to evaluate:

https://s3-accelerate-speedtest.s3-accelerate.amazonaws.com/en/accelerate-speed-comparsion.html?region=region&origBucketName=yourBucketName

For a list of the Regions supported by Amazon S3, see Amazon S3 endpoints and quotas in the AWS
General Reference.
• Use the Amazon S3 console.

Using Requester Pays buckets for storage transfers and usage
In general, bucket owners pay for all Amazon S3 storage and data transfer costs that are associated with
their bucket. However, you can configure a bucket to be a Requester Pays bucket. With Requester Pays
buckets, the requester instead of the bucket owner pays the cost of the request and the data download
from the bucket. The bucket owner always pays the cost of storing data.

Typically, you configure buckets to be Requester Pays buckets when you want to share data but not
incur charges associated with others accessing the data. For example, you might use Requester Pays
buckets when making available large datasets, such as zip code directories, reference data, geospatial
information, or web crawling data.
Important
If you enable Requester Pays on a bucket, anonymous access to that bucket is not allowed.

You must authenticate all requests involving Requester Pays buckets. The request authentication enables
Amazon S3 to identify and charge the requester for their use of the Requester Pays bucket.

When the requester assumes an AWS Identity and Access Management (IAM) role before making their
request, the account to which the role belongs is charged for the request. For more information about
IAM roles, see IAM roles in the IAM User Guide.

After you configure a bucket to be a Requester Pays bucket, requesters must include x-amz-request-payer in their requests either in the header, for POST, GET and HEAD requests, or as a parameter in
a REST request to show that they understand that they will be charged for the request and the data
download.

Requester Pays buckets do not support the following:

• Anonymous requests
• BitTorrent
• SOAP requests
• Using a Requester Pays bucket as the target bucket for end-user logging, or vice versa. However, you
can turn on end-user logging on a Requester Pays bucket where the target bucket is not a Requester
Pays bucket.

How Requester Pays charges work


The charge for successful Requester Pays requests is straightforward: The requester pays for the data
transfer and the request, and the bucket owner pays for the data storage. However, the bucket owner is
charged for the request under the following conditions:


• The requester doesn't include the parameter x-amz-request-payer in the header (GET, HEAD, or
POST) or as a parameter (REST) in the request (HTTP code 403).
• Request authentication fails (HTTP code 403).
• The request is anonymous (HTTP code 403).
• The request is a SOAP request.

For more information about Requester Pays, see the following topics.

Topics
• Configuring Requester Pays on a bucket (p. 52)
• Retrieving the requestPayment configuration using the REST API (p. 53)
• Downloading objects in Requester Pays buckets (p. 54)

Configuring Requester Pays on a bucket


You can configure an Amazon S3 bucket to be a Requester Pays bucket so that the requester pays the
cost of the request and data download instead of the bucket owner.

This section provides examples of how to configure Requester Pays on an Amazon S3 bucket using the
console and the REST API.

Using the S3 console


To enable Requester Pays for an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to enable Requester Pays for.
3. Choose Properties.
4. Under Requester pays, choose Edit.
5. Choose Enable, and choose Save changes.

Amazon S3 enables Requester Pays for your bucket and displays your Bucket overview. Under
Requester pays, you see Enabled.

Using the REST API


Only the bucket owner can set the RequestPaymentConfiguration.payer configuration value of a
bucket to BucketOwner (the default) or Requester. Setting the requestPayment resource is optional.
By default, the bucket is not a Requester Pays bucket.

To revert a Requester Pays bucket to a regular bucket, you use the value BucketOwner. Typically, you
would use BucketOwner when uploading data to the Amazon S3 bucket, and then you would set the
value to Requester before publishing the objects in the bucket.

To set requestPayment

• Use a PUT request to set the Payer value to Requester on a specified bucket.

PUT ?requestPayment HTTP/1.1
Host: [BucketName].s3.amazonaws.com
Content-Length: 173
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS [Signature]

<RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Payer>Requester</Payer>
</RequestPaymentConfiguration>

If the request succeeds, Amazon S3 returns a response similar to the following.

HTTP/1.1 200 OK
x-amz-id-2: [id]
x-amz-request-id: [request_id]
Date: Wed, 01 Mar 2009 12:00:00 GMT
Content-Length: 0
Connection: close
Server: AmazonS3
x-amz-request-charged:requester

You can set Requester Pays only at the bucket level. You can't set Requester Pays for specific objects
within the bucket.

You can configure a bucket to be BucketOwner or Requester at any time. However, it might take a few minutes before the new configuration value takes effect.
Note
Bucket owners who give out presigned URLs should consider carefully before configuring a
bucket to be Requester Pays, especially if the URL has a long lifetime. The bucket owner is
charged each time the requester uses a presigned URL that uses the bucket owner's credentials.
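
If you manage this configuration with the AWS CLI instead of calling the REST API directly, the put-bucket-request-payment command is the equivalent operation. The following sketch uses a placeholder bucket name; run the same command with Payer=BucketOwner to revert the bucket to a regular bucket.

Example

$ aws s3api put-bucket-request-payment --bucket bucketname --request-payment-configuration Payer=Requester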

Retrieving the requestPayment configuration using the REST API
You can determine the Payer value that is set on a bucket by requesting the resource requestPayment.

To return the requestPayment resource

• Use a GET request to obtain the requestPayment resource, as shown in the following request.

GET ?requestPayment HTTP/1.1
Host: [BucketName].s3.amazonaws.com
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS [Signature]

If the request succeeds, Amazon S3 returns a response similar to the following.

HTTP/1.1 200 OK
x-amz-id-2: [id]
x-amz-request-id: [request_id]
Date: Wed, 01 Mar 2009 12:00:00 GMT
Content-Type: [type]
Content-Length: [length]
Connection: close
Server: AmazonS3

<?xml version="1.0" encoding="UTF-8"?>
<RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Payer>Requester</Payer>
</RequestPaymentConfiguration>

This response shows that the payer value is set to Requester.
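
The AWS CLI exposes the same information through the get-bucket-request-payment command. The following sketch uses a placeholder bucket name and returns output similar to the following.

Example

$ aws s3api get-bucket-request-payment --bucket bucketname
{
    "Payer": "Requester"
}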

Downloading objects in Requester Pays buckets


Because requesters are charged for downloading data from Requester Pays buckets, the requests must
contain a special parameter, x-amz-request-payer, which confirms that the requester knows that
they will be charged for the download. To access objects in Requester Pays buckets, requests must
include one of the following.

• For GET, HEAD, and POST requests, include x-amz-request-payer : requester in the header
• For signed URLs, include x-amz-request-payer=requester in the request

If the request succeeds and the requester is charged, the response includes the header x-amz-request-charged:requester. If x-amz-request-payer is not in the request, Amazon S3 returns a 403 error
and charges the bucket owner for the request.
Note
Bucket owners do not need to add x-amz-request-payer to their requests.
Ensure that you have included x-amz-request-payer and its value in your signature
calculation. For more information, see Constructing the CanonicalizedAmzHeaders
Element (p. 972).

Using the REST API


To download objects from a Requester Pays bucket

• Use a GET request to download an object from a Requester Pays bucket, as shown in the following
request.

GET /[destinationObject] HTTP/1.1
Host: [BucketName].s3.amazonaws.com
x-amz-request-payer : requester
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS [Signature]

If the GET request succeeds and the requester is charged, the response includes x-amz-request-charged:requester.

Amazon S3 can return an Access Denied error for requests that try to get objects from a Requester
Pays bucket. For more information, see Error Responses in the Amazon Simple Storage Service API
Reference.

Using the AWS CLI


To download objects from a Requester Pays bucket using the AWS CLI, you specify --request-payer
requester as part of your get-object request. For more information, see get-object in the AWS CLI
Reference.
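
For example, the following sketch downloads an object from a Requester Pays bucket. The bucket, key, and output file names are placeholders. If the request succeeds, the output typically includes "RequestCharged": "requester".

Example

$ aws s3api get-object --bucket bucketname --key keyname --request-payer requester keyname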

Bucket restrictions and limitations


An Amazon S3 bucket is owned by the AWS account that created it. Bucket ownership is not transferable
to another account.


When you create a bucket, you choose its name and the AWS Region to create it in. After you create a
bucket, you can't change its name or Region.

When naming a bucket, choose a name that is relevant to you or your business. Avoid using names
associated with others. For example, you should avoid using AWS or Amazon in your bucket name.

By default, you can create up to 100 buckets in each of your AWS accounts. If you need additional
buckets, you can increase your account bucket limit to a maximum of 1,000 buckets by submitting a
service limit increase. There is no difference in performance whether you use many buckets or just a few.

For information about how to increase your bucket limit, see AWS service quotas in the AWS General
Reference.

Reusing bucket names

If a bucket is empty, you can delete it. After a bucket is deleted, the name becomes available for reuse.
However, after you delete the bucket, you might not be able to reuse the name for various reasons.

For example, when you delete the bucket and the name becomes available for reuse, another AWS
account might create a bucket with that name. In addition, some time might pass before you can reuse
the name of a deleted bucket. If you want to use the same bucket name, we recommend that you don't
delete the bucket.

Objects and buckets

There is no limit to the number of objects that you can store in a bucket. You can store all of your objects
in a single bucket, or you can organize them across several buckets. However, you can't create a bucket
from within another bucket.

Bucket operations

The high availability engineering of Amazon S3 is focused on get, put, list, and delete operations. Because
bucket operations work against a centralized, global resource space, it is not appropriate to create or
delete buckets on the high availability code path of your application. It's better to create or delete
buckets in a separate initialization or setup routine that you run less often.

Bucket naming and automatically created buckets

If your application automatically creates buckets, choose a bucket naming scheme that is unlikely to
cause naming conflicts. Ensure that your application logic will choose a different bucket name if a bucket
name is already taken.

For more information about bucket naming, see Bucket naming rules (p. 27).


Uploading, downloading, and working with objects in Amazon S3
To store your data in Amazon S3, you work with resources known as buckets and objects. A bucket is a
container for objects. An object is a file and any metadata that describes that file.

To store an object in Amazon S3, you create a bucket and then upload the object to a bucket. When the
object is in the bucket, you can open it, download it, and move it. When you no longer need an object or
a bucket, you can clean up these resources.
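
If you work from the command line, the following AWS CLI sketch walks through that basic lifecycle. The bucket name doc-example-bucket and the file names are placeholders, not values used elsewhere in this guide.

$ aws s3 mb s3://doc-example-bucket
$ aws s3 cp sample.txt s3://doc-example-bucket/sample.txt
$ aws s3 ls s3://doc-example-bucket
$ aws s3 cp s3://doc-example-bucket/sample.txt downloaded-sample.txt
$ aws s3 rm s3://doc-example-bucket/sample.txt
$ aws s3 rb s3://doc-example-bucket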

With Amazon S3, you pay only for what you use. For more information about Amazon S3 features and
pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started with Amazon S3 for
free. For more information, see AWS Free Tier.

Topics
• Amazon S3 objects overview (p. 56)
• Creating object key names (p. 58)
• Working with object metadata (p. 60)
• Uploading objects (p. 65)
• Uploading and copying objects using multipart upload (p. 72)
• Copying objects (p. 102)
• Downloading an object (p. 109)
• Deleting Amazon S3 objects (p. 115)
• Organizing, listing, and working with your objects (p. 135)
• Accessing an object using a presigned URL (p. 144)
• Retrieving Amazon S3 objects using BitTorrent (p. 153)

Amazon S3 objects overview


Amazon S3 is an object store that uses unique key-values to store as many objects as you want. You store
these objects in one or more buckets, and each object can be up to 5 TB in size. An object consists of the
following:

Key

The name that you assign to an object. You use the object key to retrieve the object. For more
information, see Working with object metadata (p. 60).
Version ID

Within a bucket, a key and version ID uniquely identify an object. The version ID is a string that
Amazon S3 generates when you add an object to a bucket. For more information, see Using
versioning in S3 buckets (p. 453).


Value

The content that you are storing.

An object value can be any sequence of bytes. Objects can range in size from zero to 5 TB. For more
information, see Uploading objects (p. 65).
Metadata

A set of name-value pairs with which you can store information regarding the object. You can assign
metadata, referred to as user-defined metadata, to your objects in Amazon S3. Amazon S3 also
assigns system-metadata to these objects, which it uses for managing objects. For more information,
see Working with object metadata (p. 60).
Subresources

Amazon S3 uses the subresource mechanism to store object-specific additional information. Because
subresources are subordinates to objects, they are always associated with some other entity such as
an object or a bucket. For more information, see Object subresources (p. 57).
Access control information

You can control access to the objects you store in Amazon S3. Amazon S3 supports both the
resource-based access control, such as an access control list (ACL) and bucket policies, and user-
based access control. For more information, see Identity and access management in Amazon
S3 (p. 209).

Your Amazon S3 resources (for example, buckets and objects) are private by default. You must
explicitly grant permission for others to access these resources. For more information about sharing
objects, see Accessing an object using a presigned URL (p. 144).

Object subresources
Amazon S3 defines a set of subresources associated with buckets and objects. Subresources are
subordinates to objects. This means that subresources don't exist on their own. They are always
associated with some other entity, such as an object or a bucket.

The following subresources are associated with Amazon S3 objects.

• acl: Contains a list of grants identifying the grantees and the permissions granted. When you create an object, the acl identifies the object owner as having full control over the object. You can retrieve an object ACL or replace it with an updated list of grants. Any update to an ACL requires you to replace the existing ACL. For more information about ACLs, see Managing access with ACLs (p. 383).
• torrent: Amazon S3 supports the BitTorrent protocol. Amazon S3 uses the torrent subresource to return the torrent file associated with the specific object. To retrieve a torrent file, you specify the torrent subresource in your GET request. Amazon S3 creates a torrent file and returns it. You can only retrieve the torrent subresource; you cannot create, update, or delete the torrent subresource. For more information, see Retrieving Amazon S3 objects using BitTorrent (p. 153).
  Note
  Amazon S3 does not support the BitTorrent protocol in AWS Regions launched after May 30, 2016.


Creating object key names


The object key (or key name) uniquely identifies the object in an Amazon S3 bucket. Object metadata is a
set of name-value pairs.

When you create an object, you specify the key name, which uniquely identifies the object in the bucket.
For example, on the Amazon S3 console, when you highlight a bucket, a list of objects in your bucket
appears. These names are the object keys. The name for a key is a sequence of Unicode characters whose
UTF-8 encoding is at most 1,024 bytes long.

The Amazon S3 data model is a flat structure: You create a bucket, and the bucket stores objects. There
is no hierarchy of subbuckets or subfolders. However, you can infer logical hierarchy using key name
prefixes and delimiters as the Amazon S3 console does. The Amazon S3 console supports a concept of
folders. For more information about how to edit metadata from the Amazon S3 console, see Editing
object metadata in the Amazon S3 console (p. 63).

Suppose that your bucket (admin-created) has four objects with the following object keys:

Development/Projects.xls

Finance/statement1.pdf

Private/taxdocument.pdf

s3-dg.pdf

The console uses the key name prefixes (Development/, Finance/, and Private/) and delimiter ('/')
to present a folder structure. The s3-dg.pdf key does not have a prefix, so its object appears directly at
the root level of the bucket. If you open the Development/ folder, you see the Projects.xls object
in it.

• Amazon S3 supports buckets and objects, and there is no hierarchy. However, by using prefixes and
delimiters in an object key name, the Amazon S3 console and the AWS SDKs can infer hierarchy and
introduce the concept of folders, as the example following this list shows.
• The Amazon S3 console implements folder object creation by creating a zero-byte object with the
folder prefix and delimiter value as the key. These folder objects don't appear in the console. Otherwise
they behave like any other objects and can be viewed and manipulated through the REST API, AWS CLI,
and AWS SDKs.
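
You can see the same hierarchy inference outside of the console by listing the bucket with a delimiter. The following AWS CLI sketch assumes the admin-created bucket and the four object keys from the example above; the response lists Development/, Finance/, and Private/ as common prefixes and s3-dg.pdf as the only key at the root level.

Example

$ aws s3api list-objects-v2 --bucket admin-created --delimiter '/'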

Object key naming guidelines


You can use any UTF-8 character in an object key name. However, using certain characters in key names
can cause problems with some applications and protocols. The following guidelines help you maximize
compliance with DNS, web-safe characters, XML parsers, and other APIs.

Safe characters
The following character sets are generally safe for use in key names.

Alphanumeric characters
• 0-9
• a-z
• A-Z


Special characters
• Forward slash (/)
• Exclamation point (!)
• Hyphen (-)
• Underscore (_)
• Period (.)
• Asterisk (*)
• Single quote (')
• Open parenthesis (()
• Close parenthesis ())

The following are examples of valid object key names:

• 4my-organization
• my.great_photos-2014/jan/myvacation.jpg
• videos/2014/birthday/video1.wmv

Important
If an object key name ends with a single period (.), or two periods (..), you can’t download the
object using the Amazon S3 console. To download an object with a key name ending with “.” or
“..”, you must use the AWS Command Line Interface (AWS CLI), AWS SDKs, or REST API.

Characters that might require special handling


The following characters in a key name might require additional code handling and likely need to be URL
encoded or referenced as HEX. Some of these are non-printable characters that your browser might not
handle, which also requires special handling:

• Ampersand ("&")
• Dollar ("$")
• ASCII character ranges 00–1F hex (0–31 decimal) and 7F (127 decimal)
• 'At' symbol ("@")
• Equals ("=")
• Semicolon (";")
• Colon (":")
• Plus ("+")
• Space – Significant sequences of spaces might be lost in some uses (especially multiple spaces)
• Comma (",")
• Question mark ("?")

Characters to avoid
Avoid the following characters in a key name because of significant special handling for consistency
across all applications.

• Backslash ("\")
• Left curly brace ("{")
• Non-printable ASCII characters (128–255 decimal characters)


• Caret ("^")
• Right curly brace ("}")
• Percent character ("%")
• Grave accent / back tick ("`")
• Right square bracket ("]")
• Quotation marks
• 'Greater Than' symbol (">")
• Left square bracket ("[")
• Tilde ("~")
• 'Less Than' symbol ("<")
• 'Pound' character ("#")
• Vertical bar / pipe ("|")

XML related object key constraints


As specified by the XML standard on end-of-line handling, all XML text is normalized such that single
carriage returns (ASCII code 13) and carriage returns immediately followed by a line feed (ASCII code
10) are replaced by a single line feed character. To ensure the correct parsing of object keys in XML
requests, carriage returns and other special characters must be replaced with their equivalent XML entity
code when they are inserted within XML tags. The following is a list of such special characters and their
equivalent entity codes:

• ' as &apos;
• " as &quot;
• & as &amp;
• < as &lt;
• > as &gt;
• \r as &#13; or &#x0D;
• \n as &#10; or &#x0A;

Example

The following example illustrates the use of an XML entity code as a substitution for a carriage return.
This DeleteObjects request deletes an object with the key parameter: /some/prefix/objectwith
\rcarriagereturn (where the \r is the carriage return).

<Delete xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Object>
<Key>/some/prefix/objectwith&#13;carriagereturn</Key>
</Object>
</Delete>

Working with object metadata


You can set object metadata in Amazon S3 at the time you upload the object. After you upload the
object, you cannot modify object metadata. The only way to modify object metadata is to make a copy
of the object and set the metadata.


There are two kinds of metadata in Amazon S3: system-defined metadata and user-defined metadata. The
sections below provide more information about system-defined and user-defined metadata. For more
information about editing metadata using the Amazon S3 console, see Editing object metadata in the
Amazon S3 console (p. 63).

System-defined object metadata


For each object stored in a bucket, Amazon S3 maintains a set of system metadata. Amazon S3 processes
this system metadata as needed. For example, Amazon S3 maintains object creation date and size
metadata and uses this information as part of object management.

There are two categories of system metadata:

1. Metadata such as object creation date is system controlled, where only Amazon S3 can modify the
value.
2. Other system metadata, such as the storage class configured for the object and whether the object
has server-side encryption enabled, are examples of system metadata whose values you control.
If your bucket is configured as a website, sometimes you might want to redirect a page request to
another page or an external URL. In this case, a webpage is an object in your bucket. Amazon S3 stores
the page redirect value as system metadata whose value you control.

When you create objects, you can configure values of these system metadata items or update the
values when you need to. For more information about storage classes, see Using Amazon S3 storage
classes (p. 496).

For more information about server-side encryption, see Protecting data using encryption (p. 157).

Note
The PUT request header is limited to 8 KB in size. Within the PUT request header, the system-
defined metadata is limited to 2 KB in size. The size of system-defined metadata is measured by
taking the sum of the number of bytes in the US-ASCII encoding of each key and value.

The following is a list of system-defined metadata and whether you can update it.

• Date: Current date and time. (Can user modify the value? No)
• Content-Length: Object size in bytes. (Can user modify the value? No)
• Content-Type: Object type. (Can user modify the value? Yes)
• Last-Modified: Object creation date or the last modified date, whichever is the latest. (Can user modify the value? No)
• Content-MD5: The base64-encoded 128-bit MD5 digest of the object. (Can user modify the value? No)
• x-amz-server-side-encryption: Indicates whether server-side encryption is enabled for the object, and whether that encryption is from the AWS Key Management Service (AWS KMS) or from Amazon S3 managed encryption (SSE-S3). For more information, see Protecting data using server-side encryption (p. 157). (Can user modify the value? Yes)
• x-amz-version-id: Object version. When you enable versioning on a bucket, Amazon S3 assigns a version number to objects added to the bucket. For more information, see Using versioning in S3 buckets (p. 453). (Can user modify the value? No)
• x-amz-delete-marker: In a bucket that has versioning enabled, this Boolean marker indicates whether the object is a delete marker. (Can user modify the value? No)
• x-amz-storage-class: Storage class used for storing the object. For more information, see Using Amazon S3 storage classes (p. 496). (Can user modify the value? Yes)
• x-amz-website-redirect-location: Redirects requests for the associated object to another object in the same bucket or an external URL. For more information, see (Optional) Configuring a webpage redirect (p. 871). (Can user modify the value? Yes)
• x-amz-server-side-encryption-aws-kms-key-id: If x-amz-server-side-encryption is present and has the value of aws:kms, this indicates the ID of the AWS KMS symmetric customer master key (CMK) that was used for the object. (Can user modify the value? Yes)
• x-amz-server-side-encryption-customer-algorithm: Indicates whether server-side encryption with customer-provided encryption keys (SSE-C) is enabled. For more information, see Protecting data using server-side encryption with customer-provided encryption keys (SSE-C) (p. 185). (Can user modify the value? Yes)

User-defined object metadata


When uploading an object, you can also assign metadata to the object. You provide this optional
information as a name-value (key-value) pair when you send a PUT or POST request to create the object.
When you upload objects using the REST API, the optional user-defined metadata names must begin
with "x-amz-meta-" to distinguish them from other HTTP headers. When you retrieve the object using
the REST API, this prefix is returned. When you upload objects using the SOAP API, the prefix is not
required. When you retrieve the object using the SOAP API, the prefix is removed, regardless of which API
you used to upload the object.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.

When metadata is retrieved through the REST API, Amazon S3 combines headers that have the same
name (ignoring case) into a comma-delimited list. If some metadata contains unprintable characters,
it is not returned. Instead, the x-amz-missing-meta header is returned with a value of the number
of unprintable metadata entries. The HeadObject action retrieves metadata from an object without
returning the object itself. This operation is useful if you're only interested in an object's metadata.
To use HEAD, you must have READ access to the object. For more information, see HeadObject in the
Amazon Simple Storage Service API Reference.
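
For example, the following AWS CLI sketch uses head-object, the CLI counterpart of HeadObject, to return only an object's metadata. The bucket and key names are placeholders. The output includes system-defined values such as ContentLength and ContentType, plus a Metadata map that contains your user-defined keys without the x-amz-meta- prefix.

Example

$ aws s3api head-object --bucket awsexamplebucket1 --key Key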

User-defined metadata is a set of key-value pairs. Amazon S3 stores user-defined metadata keys in
lowercase.

Amazon S3 allows arbitrary Unicode characters in your metadata values.

To avoid issues around the presentation of these metadata values, use US-ASCII characters when using REST and UTF-8 when using SOAP or browser-based uploads via POST.


When using non US-ASCII characters in your metadata values, the provided Unicode string is examined
for non US-ASCII characters. If the string contains only US-ASCII characters, it is presented as is. If the
string contains non US-ASCII characters, it is first character-encoded using UTF-8 and then encoded into
US-ASCII.

The following is an example.

PUT /Key HTTP/1.1


Host: awsexamplebucket1.s3.amazonaws.com
x-amz-meta-nonascii: ÄMÄZÕÑ S3

HEAD /Key HTTP/1.1


Host: awsexamplebucket1.s3.amazonaws.com
x-amz-meta-nonascii: =?UTF-8?B?w4PChE3Dg8KEWsODwpXDg8KRIFMz?=

PUT /Key HTTP/1.1


Host: awsexamplebucket1.s3.amazonaws.com
x-amz-meta-ascii: AMAZONS3

HEAD /Key HTTP/1.1


Host: awsexamplebucket1.s3.amazonaws.com
x-amz-meta-ascii: AMAZONS3

Note
The PUT request header is limited to 8 KB in size. Within the PUT request header, the user-
defined metadata is limited to 2 KB in size. The size of user-defined metadata is measured by
taking the sum of the number of bytes in the UTF-8 encoding of each key and value.

For information about adding metadata to your object after it's been uploaded, see Editing object metadata in the Amazon S3 console (p. 63).

Editing object metadata in the Amazon S3 console


You can use the Amazon S3 console to edit metadata of existing S3 objects. Some metadata is set by
Amazon S3 when you upload the object. For example, Content-Length is the key (name) and the value
is the size of the object in bytes.

You can also set some metadata when you upload the object and later edit it as your needs change. For
example, you might have a set of objects that you initially store in the STANDARD storage class. Over
time, you might no longer need this data to be highly available. So you change the storage class to
GLACIER by editing the value of the x-amz-storage-class key from STANDARD to GLACIER.
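
Outside of the console, one way to make the same kind of change is to copy the object over itself with the new storage class, for example with the AWS CLI copy-object command. The following sketch uses placeholder bucket and key names; like the console action described in the Note that follows, it creates a new copy (or a new version) of the object.

Example

$ aws s3api copy-object --bucket doc-example-bucket --key sample.txt --copy-source doc-example-bucket/sample.txt --storage-class GLACIER
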
Note
Consider the following issues when you are editing object metadata in Amazon S3:

• This action creates a copy of the object with updated settings and the last-modified date. If S3
Versioning is enabled, a new version of the object is created, and the existing object becomes
an older version. The IAM role that changes the property also becomes the owner of the new
object or (object version).
• Editing metadata updates values for existing key names.
• Objects that are encrypted with customer-provided encryption keys (SSE-C) cannot be copied
using the console. You must use the AWS CLI, AWS SDK, or the Amazon S3 REST API.

Warning
When editing metadata of folders, wait for the Edit metadata operation to finish before
adding new objects to the folder. Otherwise, new objects might also be edited.


The following topics describe how to edit metadata of an object using the Amazon S3 console.

Editing system-defined metadata


You can configure some, but not all, system metadata for an S3 object. For a list of system-defined
metadata and whether you can modify their values, see System-defined object metadata (p. 61).

To edit system-defined metadata of an object

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Navigate to your Amazon S3 bucket or folder, and select the check box to the left of the names of
the objects with metadata you want to edit.
3. On the Actions menu, choose Edit actions, and choose Edit metadata.
4. Review the objects listed, and choose Add metadata.
5. For metadata Type, select System-defined.
6. Specify a unique Key and the metadata Value.
7. To edit additional metadata, choose Add metadata. You can also choose Remove to remove a set of
type-key-values.
8. When you are done, choose Edit metadata and Amazon S3 edits the metadata of the specified
objects.

Editing user-defined metadata


You can edit user-defined metadata of an object by combining the metadata prefix, x-amz-meta-, and
a name you choose to create a custom key. For example, if you add the custom name alt-name, the
metadata key would be x-amz-meta-alt-name.

User-defined metadata can be as large as 2 KB total. To calculate the total size of user-defined metadata,
sum the number of bytes in the UTF-8 encoding for each key and value. Both keys and their values must
conform to US-ASCII standards. For more information, see User-defined object metadata (p. 62).
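
The sizing rule can be checked before you upload. The following Java sketch is not part of the AWS SDK; the key names and values are only illustrative, and it applies the rule exactly as stated above (summing the UTF-8 bytes of each key and value) without addressing whether the x-amz-meta- prefix itself counts toward the limit.

Example

import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class UserMetadataSize {

    // Sums the UTF-8 byte length of every user-defined metadata key and value.
    public static int userMetadataSize(Map<String, String> userMetadata) {
        int totalBytes = 0;
        for (Map.Entry<String, String> entry : userMetadata.entrySet()) {
            totalBytes += entry.getKey().getBytes(StandardCharsets.UTF_8).length;
            totalBytes += entry.getValue().getBytes(StandardCharsets.UTF_8).length;
        }
        return totalBytes;
    }

    public static void main(String[] args) {
        Map<String, String> userMetadata = new HashMap<>();
        userMetadata.put("alt-name", "quarterly-report"); // hypothetical key-value pairs
        userMetadata.put("title", "someTitle");
        System.out.println("User-defined metadata size: "
                + userMetadataSize(userMetadata) + " bytes (the limit is 2 KB)");
    }
}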

To edit user-defined metadata of an object

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that contains the objects that you want to add
metadata to.

You can also optionally navigate to a folder.


3. In the Objects list, select the check box next to the names of the objects that you want to add
metadata to.
4. On the Actions menu, choose Edit metadata.
5. Review the objects listed, and choose Add metadata.
6. For metadata Type, choose User-defined.
7. Enter a unique custom Key following x-amz-meta-. Also enter a metadata Value.
8. To add additional metadata, choose Add metadata. You can also choose Remove to remove a set of
type-key-values.
9. Choose Edit metadata.

Amazon S3 edits the metadata of the specified objects.


Uploading objects
When you upload a file to Amazon S3, it is stored as an S3 object. Objects consist of the file data and
metadata that describes the object. You can have an unlimited number of objects in a bucket. Before
you can upload files to an Amazon S3 bucket, you need write permissions for the bucket. For more
information about access permissions, see Identity and access management in Amazon S3 (p. 209).

You can upload any file type—images, backups, data, movies, etc.—into an S3 bucket. The maximum size
of a file that you can upload by using the Amazon S3 console is 160 GB. To upload a file larger than 160
GB, use the AWS CLI, AWS SDK, or Amazon S3 REST API.

If you upload an object with a key name that already exists in a versioning-enabled bucket, Amazon
S3 creates another version of the object instead of replacing the existing object. For more information
about versioning, see Using the S3 console (p. 458).

Depending on the size of the data you are uploading, Amazon S3 offers the following options:

• Upload an object in a single operation using the AWS SDKs, REST API, or AWS CLI—With a single
PUT operation, you can upload a single object up to 5 GB in size.
• Upload a single object using the Amazon S3 Console—With the Amazon S3 Console, you can upload
a single object up to 160 GB in size.
• Upload an object in parts using the AWS SDKs, REST API, or AWS CLI—Using the multipart upload
API, you can upload a single large object, up to 5 TB in size.

The multipart upload API is designed to improve the upload experience for larger objects. You can
upload an object in parts. These object parts can be uploaded independently, in any order, and in
parallel. You can use a multipart upload for objects from 5 MB to 5 TB in size. For more information,
see Uploading and copying objects using multipart upload (p. 72).

When uploading an object, you can optionally request that Amazon S3 encrypt it before saving
it to disk, and decrypt it when you download it. For more information, see Protecting data using
encryption (p. 157).
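
For example, the following AWS CLI sketch requests server-side encryption with Amazon S3 managed keys (SSE-S3) as part of the upload. The bucket and file names are placeholders.

Example

$ aws s3 cp file.txt s3://doc-example-bucket/file.txt --sse AES256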

Using the S3 console


This procedure explains how to upload objects and folders to an S3 bucket using the console.

When you upload an object, the object key name is the file name and any optional prefixes. In the
Amazon S3 console, you can create folders to organize your objects. In Amazon S3, folders are
represented as prefixes that appear in the object key name. If you upload an individual object to a folder
in the Amazon S3 console, the folder name is included in the object key name.

For example, if you upload an object named sample1.jpg to a folder named backup, the key name is
backup/sample1.jpg. However, the object is displayed in the console as sample1.jpg in the backup
folder. For more information about key names, see Working with object metadata (p. 60).
Note
If you rename an object or change any of the properties in the S3 console, for example Storage
Class, Encryption, Metadata, a new object is created to replace the old one. If S3 Versioning
is enabled, a new version of the object is created, and the existing object becomes an older
version. The role that changes the property also becomes the owner of the new object (or object version).

When you upload a folder, Amazon S3 uploads all of the files and subfolders from the specified folder
to your bucket. It then assigns an object key name that is a combination of the uploaded file name
and the folder name. For example, if you upload a folder named /images that contains two files,
sample1.jpg and sample2.jpg, Amazon S3 uploads the files and then assigns the corresponding key


names, images/sample1.jpg and images/sample2.jpg. The key names include the folder name
as a prefix. The Amazon S3 console displays only the part of the key name that follows the last “/”. For
example, within an images folder, the images/sample1.jpg and images/sample2.jpg objects are displayed as sample1.jpg and sample2.jpg.

To upload folders and files to an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to upload your folders or files to.
3. Choose Upload.
4. In the Upload window, do one of the following:

• Drag and drop files and folders to the Upload window.


• Choose Add file or Add folder, choose files or folders to upload, and choose Open.
5. To enable versioning, under Destination, choose Enable Bucket Versioning.
6. To upload the listed files and folders without configuring additional upload options, at the bottom
of the page, choose Upload.

Amazon S3 uploads your objects and folders. When the upload completes, you can see a success
message on the Upload: status page.
7. To configure additional object properties before uploading, see To configure additional object
properties (p. 66).

To configure additional object properties

1. To configure additional object properties, choose Additional upload options.


2. Under Storage class, choose the storage class for the files you're uploading.

For more information about storage classes, see Using Amazon S3 storage classes (p. 496).
3. To update the encryption settings for your objects, under Server-side encryption settings, do the
following.

a. Choose Override default encryption bucket settings.


b. To encrypt the uploaded files using keys that are managed by Amazon S3, choose Amazon S3
key (SSE-S3).

For more information, see Protecting data using server-side encryption with Amazon S3-
managed encryption keys (SSE-S3) (p. 174).
c. To encrypt the uploaded files using the AWS Key Management Service (AWS KMS), choose AWS
Key Management Service key (SSE-KMS). Then choose an option for AWS KMS key.

• AWS managed key (aws/s3) - Choose an AWS managed CMK.


• Choose from your KMS master keys - Choose a customer managed CMK from a list of CMKs in
the same Region as your bucket.

For more information about creating a customer managed AWS KMS CMK, see Creating Keys
in the AWS Key Management Service Developer Guide. For more information about protecting
data with AWS KMS, see Protecting Data Using Server-Side Encryption with CMKs Stored in
AWS Key Management Service (SSE-KMS) (p. 158).
• Enter KMS master key ARN - Specify the AWS KMS key ARN for a customer managed CMK,
and enter the Amazon Resource Name (ARN).

You can use the KMS master key ARN to give an external account the ability to use an object
that is protected by an AWS KMS CMK. To do this, choose Enter KMS master key ARN, and
enter the Amazon Resource Name (ARN) for the external account. Administrators of an
external account that have usage permissions to an object protected by your AWS KMS CMK
can further restrict access by creating a resource-level IAM policy.

Note
To encrypt objects in a bucket, you can use only CMKs that are available in the same
AWS Region as the bucket.
4. To change access control list permissions, under Access control list (ACL), edit permissions.

For information about object access permissions, see Using the S3 console to set ACL permissions for
an object (p. 390). You can grant read access to your objects to the general public (everyone in the
world) for all of the files that you're uploading. We recommend that you do not change the default
setting for public read access. Granting public read access is applicable to a small subset of use cases
such as when buckets are used for websites. You can always make changes to object permissions
after you upload the object.
5. To add tags to all of the objects that you are uploading, choose Add tag. Type a tag name in the Key
field. Type a value for the tag.

Object tagging gives you a way to categorize storage. Each tag is a key-value pair. Key and tag
values are case sensitive. You can have up to 10 tags per object. A tag key can be up to 128 Unicode
characters in length and tag values can be up to 255 Unicode characters in length. For more
information about object tags, see Categorizing your storage using tags (p. 609).
6. To add metadata, choose Add metadata.

a. Under Type, choose System defined or User defined.

For system-defined metadata, you can select common HTTP headers, such as Content-Type
and Content-Disposition. For a list of system-defined metadata and information about whether
you can add the value, see System-defined object metadata (p. 61). Any metadata starting
with prefix x-amz-meta- is treated as user-defined metadata. User-defined metadata is stored
with the object and is returned when you download the object. Both the keys and their values
must conform to US-ASCII standards. User-defined metadata can be as large as 2 KB. For
more information about system defined and user defined metadata, see Working with object
metadata (p. 60).
b. For Key, choose a key.
c. Type a value for the key.
7. To upload your objects, choose Upload.

Amazon S3 uploads your object. When the upload completes, you can see a success message on the
Upload: status page.
8. Choose Exit.

Using the AWS SDKs


You can use the AWS SDK to upload objects in Amazon S3. The SDK provides wrapper libraries for you
to upload data easily. However, if your application requires it, you can use the REST API directly in your
application. You can also use the AWS Command Line Interface (AWS CLI).

Java

The following example creates two objects. The first object has a text string as data, and the second
object is a file. The example creates the first object by specifying the bucket name, object key, and
text data directly in a call to AmazonS3Client.putObject(). The example creates the second
object by using a PutObjectRequest that specifies the bucket name, object key, and file path. The
PutObjectRequest also specifies the ContentType header and title metadata.


For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

import java.io.File;
import java.io.IOException;

public class UploadObject {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String stringObjKeyName = "*** String object key name ***";
String fileObjKeyName = "*** File object key name ***";
String fileName = "*** Path to file to upload ***";

try {
//This code expects that you have AWS credentials set up per:
// https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-credentials.html
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.build();

// Upload a text string as a new object.


s3Client.putObject(bucketName, stringObjKeyName, "Uploaded String Object");

// Upload a file as a new object with ContentType and title specified.


PutObjectRequest request = new PutObjectRequest(bucketName, fileObjKeyName,
new File(fileName));
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentType("text/plain");
metadata.addUserMetadata("title", "someTitle");
request.setMetadata(metadata);
s3Client.putObject(request);
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

.NET

The following C# code example creates two objects with two PutObjectRequest requests:

• The first PutObjectRequest request saves a text string as sample object data. It also specifies
the bucket and object key names.
• The second PutObjectRequest request uploads a file by specifying the file name. This request
also specifies the ContentType header and optional object metadata (a title).


For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 951).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class UploadObjectTest
{
private const string bucketName = "*** bucket name ***";
// For simplicity the example creates two objects from the same file.
// You specify key names for these objects.
private const string keyName1 = "*** key name for first object created ***";
private const string keyName2 = "*** key name for second object created ***";
private const string filePath = @"*** file path ***";
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.EUWest1;

private static IAmazonS3 client;

public static void Main()


{
client = new AmazonS3Client(bucketRegion);
WritingAnObjectAsync().Wait();
}

static async Task WritingAnObjectAsync()


{
try
{
// 1. Put object-specify only key name for the new object.
var putRequest1 = new PutObjectRequest
{
BucketName = bucketName,
Key = keyName1,
ContentBody = "sample text"
};

PutObjectResponse response1 = await client.PutObjectAsync(putRequest1);

// 2. Put the object-set ContentType and add metadata.


var putRequest2 = new PutObjectRequest
{
BucketName = bucketName,
Key = keyName2,
FilePath = filePath,
ContentType = "text/plain"
};

putRequest2.Metadata.Add("x-amz-meta-title", "someTitle");
PutObjectResponse response2 = await client.PutObjectAsync(putRequest2);
}
catch (AmazonS3Exception e)
{
Console.WriteLine(
"Error encountered ***. Message:'{0}' when writing an object"
, e.Message);
}
catch (Exception e)
{
Console.WriteLine(
"Unknown encountered on server. Message:'{0}' when writing an object"
, e.Message);
}
}
}
}

PHP

This topic guides you through using classes from the AWS SDK for PHP to upload an object of
up to 5 GB in size. For larger files, you must use multipart upload API. For more information, see
Uploading and copying objects using multipart upload (p. 72).

This topic assumes that you are already following the instructions for Using the AWS SDK for PHP
and Running PHP Examples (p. 952) and have the AWS SDK for PHP properly installed.

Example — Creating an object in an Amazon S3 bucket by uploading data

The following PHP example creates an object in a specified bucket by uploading data using the
putObject() method. For information about running the PHP examples in this guide, see Running
PHP Examples (p. 952).

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

try {
// Upload data.
$result = $s3->putObject([
'Bucket' => $bucket,
'Key' => $keyname,
'Body' => 'Hello, world!',
'ACL' => 'public-read'
]);

// Print the URL to the object.


echo $result['ObjectURL'] . PHP_EOL;
} catch (S3Exception $e) {
echo $e->getMessage() . PHP_EOL;
}

Ruby

The AWS SDK for Ruby - Version 3 has two ways of uploading an object to Amazon S3. The first
uses a managed file uploader, which makes it easy to upload files of any size from disk. To use the
managed file uploader method:

1. Create an instance of the Aws::S3::Resource class.


2. Reference the target object by bucket name and key. Objects live in a bucket and have unique
keys that identify each object.
3. Call #upload_file on the object.


Example

require 'aws-sdk-s3'

# Uploads an object to a bucket in Amazon Simple Storage Service (Amazon S3).


#
# Prerequisites:
#
# - An S3 bucket.
# - An object to upload to the bucket.
#
# @param s3_resource [Aws::S3::Resource] An initialized S3 resource.
# @param bucket_name [String] The name of the bucket.
# @param object_key [String] The name of the object.
# @param file_path [String] The path and file name of the object to upload.
# @return [Boolean] true if the object was uploaded; otherwise, false.
# @example
# exit 1 unless object_uploaded?(
# Aws::S3::Resource.new(region: 'us-east-1'),
# 'doc-example-bucket',
# 'my-file.txt',
# './my-file.txt'
# )
def object_uploaded?(s3_resource, bucket_name, object_key, file_path)
object = s3_resource.bucket(bucket_name).object(object_key)
object.upload_file(file_path)
return true
rescue StandardError => e
puts "Error uploading object: #{e.message}"
return false
end

The second way that AWS SDK for Ruby - Version 3 can upload an object uses the #put method of
Aws::S3::Object. This is useful if the object is a string or an I/O object that is not a file on disk. To
use this method:

1. Create an instance of the Aws::S3::Resource class.


2. Reference the target object by bucket name and key.
3. Call #put, passing in the string or I/O object.

Example

require 'aws-sdk-s3'

# Uploads an object to a bucket in Amazon Simple Storage Service (Amazon S3).


#
# Prerequisites:
#
# - An S3 bucket.
# - An object to upload to the bucket.
#
# @param s3_resource [Aws::S3::Resource] An initialized S3 resource.
# @param bucket_name [String] The name of the bucket.
# @param object_key [String] The name of the object.
# @param file_path [String] The path and file name of the object to upload.
# @return [Boolean] true if the object was uploaded; otherwise, false.
# @example
# exit 1 unless object_uploaded?(
# Aws::S3::Resource.new(region: 'us-east-1'),
# 'doc-example-bucket',
# 'my-file.txt',

# './my-file.txt'
# )
def object_uploaded?(s3_resource, bucket_name, object_key, file_path)
object = s3_resource.bucket(bucket_name).object(object_key)
File.open(file_path, 'rb') do |file|
object.put(body: file)
end
return true
rescue StandardError => e
puts "Error uploading object: #{e.message}"
return false
end

Using the REST API


You can send REST requests to upload an object. You can send a PUT request to upload data in a single
operation. For more information, see PUT Object.

Using the AWS CLI


If your application requires it, you can send AWS Command Line Interface (AWS CLI) requests to upload
an object. You can send a PUT request to upload data in a single operation. For more information, see
PutObject in the AWS CLI Command Reference.
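
For example, the following put-object command is a minimal sketch of such a single-operation upload. The bucket name, key, file path, content type, and metadata shown are placeholder values that you replace with your own.

aws s3api put-object \
--bucket my-bucket \
--key my-key \
--body path/to/local-file.txt \
--content-type text/plain \
--metadata title=someTitle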

Uploading and copying objects using multipart upload
Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion
of the object's data. You can upload these object parts independently and in any order. If transmission
of any part fails, you can retransmit that part without affecting other parts. After all parts of your object
are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size
reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single
operation.

Using multipart upload provides the following advantages:

• Improved throughput - You can upload parts in parallel to improve throughput.


• Quick recovery from any network issues - Smaller part size minimizes the impact of restarting a failed
upload due to a network error.
• Pause and resume object uploads - You can upload object parts over time. After you initiate a
multipart upload, there is no expiry; you must explicitly complete or stop the multipart upload.
• Begin an upload before you know the final object size - You can upload an object as you are creating it.

We recommend that you use multipart upload in the following ways:

• If you're uploading large objects over a stable high-bandwidth network, use multipart upload to
maximize the use of your available bandwidth by uploading object parts in parallel for multi-threaded
performance.
• If you're uploading over a spotty network, use multipart upload to increase resiliency to network errors
by avoiding upload restarts. When using multipart upload, you need to retry uploading only parts that
are interrupted during the upload. You don't need to restart uploading your object from the beginning.


Multipart upload process


Multipart upload is a three-step process: You initiate the upload, you upload the object parts, and after
you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete
multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then
access the object just as you would any other object in your bucket.

You can list all of your in-progress multipart uploads or get a list of the parts that you have uploaded for
a specific multipart upload. Each of these operations is explained in this section.

Multipart upload initiation

When you send a request to initiate a multipart upload, Amazon S3 returns a response with an upload
ID, which is a unique identifier for your multipart upload. You must include this upload ID whenever you
upload parts, list the parts, complete an upload, or stop an upload. If you want to provide any metadata
describing the object being uploaded, you must provide it in the request to initiate multipart upload.
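
For example, the following create-multipart-upload AWS CLI command is a minimal sketch of an initiation request; the bucket name, key, and metadata are placeholder values. The response includes the UploadId that you use in all later part, list, complete, and stop requests.

aws s3api create-multipart-upload \
--bucket my-bucket \
--key my-key \
--metadata title=someTitle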

Parts upload

When uploading a part, in addition to the upload ID, you must specify a part number. You can choose
any part number between 1 and 10,000. A part number uniquely identifies a part and its position in the
object you are uploading. The part number that you choose doesn’t need to be in a consecutive sequence
(for example, it can be 1, 5, and 14). If you upload a new part using the same part number as a previously
uploaded part, the previously uploaded part is overwritten.

Whenever you upload a part, Amazon S3 returns an ETag header in its response. For each part upload,
you must record the part number and the ETag value. You must include these values in the subsequent
request to complete the multipart upload.
Note
After you initiate a multipart upload and upload one or more parts, you must either complete
or stop the multipart upload in order to stop getting charged for storage of the uploaded parts.
Only after you either complete or stop a multipart upload will Amazon S3 free up the parts
storage and stop charging you for the parts storage.
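
As a sketch of a part upload, the following upload-part AWS CLI command uploads one part using placeholder values. The upload ID comes from the initiation response, and the ETag that you must record is returned in the command output.

aws s3api upload-part \
--bucket my-bucket \
--key my-key \
--part-number 1 \
--upload-id exampleUploadId \
--body part01.bin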

Multipart upload completion

When you complete a multipart upload, Amazon S3 creates an object by concatenating the parts in
ascending order based on the part number. If any object metadata was provided in the initiate multipart
upload request, Amazon S3 associates that metadata with the object. After a successful complete
request, the parts no longer exist.

Your complete multipart upload request must include the upload ID and a list of both part numbers
and corresponding ETag values. The Amazon S3 response includes an ETag that uniquely identifies the
combined object data. This ETag is not necessarily an MD5 hash of the object data.
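
For example, the following complete-multipart-upload AWS CLI command is a minimal sketch of a completion request; the bucket, key, upload ID, and part list are placeholder values. The file parts.json lists each PartNumber together with the ETag value that Amazon S3 returned when that part was uploaded, for example:

{
    "Parts": [
        { "PartNumber": 1, "ETag": "etag-value-returned-for-part-1" },
        { "PartNumber": 2, "ETag": "etag-value-returned-for-part-2" }
    ]
}

aws s3api complete-multipart-upload \
--bucket my-bucket \
--key my-key \
--upload-id exampleUploadId \
--multipart-upload file://parts.json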

You can optionally stop the multipart upload. After stopping a multipart upload, you cannot upload any
part using that upload ID again. All storage from any part of the canceled multipart upload is then freed.
If any part uploads were in-progress, they can still succeed or fail even after you stop. To free all storage
consumed by all parts, you must stop a multipart upload only after all part uploads have completed.

Multipart upload listings

You can list the parts of a specific multipart upload or all in-progress multipart uploads. The list parts
operation returns the parts information that you have uploaded for a specific multipart upload. For each
list parts request, Amazon S3 returns the parts information for the specified multipart upload, up to a
maximum of 1,000 parts. If there are more than 1,000 parts in the multipart upload, you must send a
series of list part requests to retrieve all the parts. Note that the returned list of parts doesn't include
parts that haven't completed uploading. Using the list multipart uploads operation, you can obtain a list
of multipart uploads in progress.

An in-progress multipart upload is an upload that you have initiated, but have not yet completed or
stopped. Each request returns at most 1,000 multipart uploads. If there are more than 1,000 multipart
uploads in progress, you need to send additional requests to retrieve the remaining multipart uploads.
Only use the returned listing for verification. You should not use the result of this listing when sending
a complete multipart upload request. Instead, maintain your own list of the part numbers you specified
when uploading parts and the corresponding ETag values that Amazon S3 returns.
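
For example, the following AWS CLI commands are minimal sketches of the two listing operations, using placeholder bucket, key, and upload ID values. The first lists the parts of one multipart upload, and the second lists the in-progress multipart uploads in a bucket.

aws s3api list-parts \
--bucket my-bucket \
--key my-key \
--upload-id exampleUploadId

aws s3api list-multipart-uploads \
--bucket my-bucket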

Concurrent multipart upload operations


In a distributed development environment, it is possible for your application to initiate several updates
on the same object at the same time. Your application might initiate several multipart uploads using
the same object key. For each of these uploads, your application can then upload parts and send a
complete upload request to Amazon S3 to create the object. When the buckets have versioning enabled,
completing a multipart upload always creates a new version. For buckets that don't have versioning
enabled, it is possible that some other request received between the time when a multipart upload is
initiated and when it is completed might take precedence.
Note
It is possible for some other request received between the time you initiated a multipart upload
and completed it to take precedence. For example, if another operation deletes a key after you
initiate a multipart upload with that key, but before you complete it, the complete multipart
upload response might indicate a successful object creation without you ever seeing the object.

Multipart upload and pricing


After you initiate a multipart upload, Amazon S3 retains all the parts until you either complete or
stop the upload. Throughout its lifetime, you are billed for all storage, bandwidth, and requests for
this multipart upload and its associated parts. If you stop the multipart upload, Amazon S3 deletes
upload artifacts and any parts that you have uploaded, and you are no longer billed for them. For more
information about pricing, see Amazon S3 pricing.
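
For example, the following abort-multipart-upload AWS CLI command is a minimal sketch that stops a single multipart upload (placeholder values shown) so that you are no longer billed for its parts.

aws s3api abort-multipart-upload \
--bucket my-bucket \
--key my-key \
--upload-id exampleUploadId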

API support for multipart upload


You can use the AWS SDKs to upload an object in parts. The following AWS SDK libraries support
multipart upload:

• AWS SDK for Java


• AWS SDK for .NET
• AWS SDK for PHP
• AWS SDK for Ruby
• AWS SDK for Python (Boto)
• AWS SDK for JavaScript in Node.js

These libraries provide a high-level abstraction that makes uploading multipart objects easy. However, if
your application requires, you can use the REST API directly. The following sections in the Amazon Simple
Storage Service API Reference describe the REST API for multipart upload.

• Initiate Multipart Upload


• Upload Part
• Upload Part (Copy)


• Complete Multipart Upload


• Abort Multipart Upload
• List Parts
• List Multipart Uploads

The following sections in the AWS Command Line Interface describe the operations for multipart upload.

• Initiate Multipart Upload


• Upload Part
• Upload Part (Copy)
• Complete Multipart Upload
• Abort Multipart Upload
• List Parts
• List Multipart Uploads

Multipart upload API and permissions


A user must have the necessary permissions to use the multipart upload operations. You can
use access control lists (ACLs), the bucket policy, or the user policy to grant individuals permissions to
perform these operations. The following table lists the required permissions for various multipart upload
operations when using ACLs, a bucket policy, or a user policy.

Create Multipart Upload

    You must be allowed to perform the s3:PutObject action on an object to create a multipart
    upload. The bucket owner can allow other principals to perform the s3:PutObject action.

Initiate Multipart Upload

    You must be allowed to perform the s3:PutObject action on an object to initiate a multipart
    upload. The bucket owner can allow other principals to perform the s3:PutObject action.

Initiator

    Container element that identifies who initiated the multipart upload. If the initiator is an
    AWS account, this element provides the same information as the Owner element. If the initiator
    is an IAM user, this element provides the user ARN and display name.

Upload Part

    You must be allowed to perform the s3:PutObject action on an object to upload a part. The
    bucket owner must allow the initiator to perform the s3:PutObject action on an object in
    order for the initiator to upload a part for that object.

Upload Part (Copy)

    You must be allowed to perform the s3:PutObject action on an object to upload a part. Because
    you are uploading a part from an existing object, you must also be allowed to perform the
    s3:GetObject action on the source object. For the initiator to upload a part for an object,
    the bucket owner must allow the initiator to perform the s3:PutObject action on the object.

Complete Multipart Upload

    You must be allowed to perform the s3:PutObject action on an object to complete a multipart
    upload. The bucket owner must allow the initiator to perform the s3:PutObject action on an
    object in order for the initiator to complete a multipart upload for that object.

Stop Multipart Upload

    You must be allowed to perform the s3:AbortMultipartUpload action to stop a multipart upload.
    By default, the bucket owner and the initiator of the multipart upload are allowed to perform
    this action. If the initiator is an IAM user, that user's AWS account is also allowed to stop
    that multipart upload. In addition to these defaults, the bucket owner can allow other
    principals to perform the s3:AbortMultipartUpload action on an object. The bucket owner can
    deny any principal the ability to perform the s3:AbortMultipartUpload action.

List Parts

    You must be allowed to perform the s3:ListMultipartUploadParts action to list parts in a
    multipart upload. By default, the bucket owner has permission to list parts for any multipart
    upload to the bucket. The initiator of the multipart upload has the permission to list parts
    of the specific multipart upload. If the multipart upload initiator is an IAM user, the AWS
    account controlling that IAM user also has permission to list parts of that upload. In
    addition to these defaults, the bucket owner can allow other principals to perform the
    s3:ListMultipartUploadParts action on an object. The bucket owner can also deny any principal
    the ability to perform the s3:ListMultipartUploadParts action.

List Multipart Uploads

    You must be allowed to perform the s3:ListBucketMultipartUploads action on a bucket to list
    multipart uploads in progress to that bucket. In addition to the default, the bucket owner can
    allow other principals to perform the s3:ListBucketMultipartUploads action on the bucket.

AWS KMS Encrypt and Decrypt related permissions

    To perform a multipart upload with encryption using an AWS Key Management Service (AWS KMS)
    customer master key (CMK), the requester must have permission to the kms:Decrypt and
    kms:GenerateDataKey* actions on the key. These permissions are required because Amazon S3 must
    decrypt and read data from the encrypted file parts before it completes the multipart upload.
    For more information, see Uploading a large file to Amazon S3 with encryption using an AWS KMS
    CMK in the AWS Knowledge Center. If your IAM user or role is in the same AWS account as the
    AWS KMS CMK, then you must have these permissions on the key policy. If your IAM user or role
    belongs to a different account than the CMK, then you must have the permissions on both the
    key policy and your IAM user or role. A minimal example of the required key permissions is
    shown after this list.
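The following is a minimal sketch of an identity-based policy statement granting the AWS KMS permissions described above; the Region, account ID, and key ID in the key ARN are placeholder values that you replace with those of your own CMK.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:GenerateDataKey*"
            ],
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/example-key-id"
        }
    ]
}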

For information on the relationship between ACL permissions and permissions in access policies, see
Mapping of ACL permissions and access policy permissions (p. 386). For information on IAM users, go to
Working with Users and Groups.

Topics
• Configuring a bucket lifecycle policy to abort incomplete multipart uploads (p. 77)
• Uploading an object using multipart upload (p. 78)
• Uploading a directory using the high-level .NET TransferUtility class (p. 88)


• Listing multipart uploads (p. 90)


• Tracking a multipart upload (p. 92)
• Aborting a multipart upload (p. 94)
• Copying an object using multipart upload (p. 98)
• Amazon S3 multipart upload limits (p. 102)

Configuring a bucket lifecycle policy to abort incomplete multipart uploads
As a best practice, we recommend you configure a lifecycle rule using the
AbortIncompleteMultipartUpload action to minimize your storage costs. For more information
about aborting a multipart upload, see Aborting a multipart upload (p. 94).

Amazon S3 supports a bucket lifecycle rule that you can use to direct Amazon S3 to stop multipart
uploads that don't complete within a specified number of days after being initiated. When a multipart
upload is not completed within the timeframe, it becomes eligible for an abort operation and Amazon S3
stops the multipart upload (and deletes the parts associated with the multipart upload).

The following is an example lifecycle configuration that specifies a rule with the
AbortIncompleteMultipartUpload action.

<LifecycleConfiguration>
<Rule>
<ID>sample-rule</ID>
<Prefix></Prefix>
<Status>Enabled</Status>
<AbortIncompleteMultipartUpload>
<DaysAfterInitiation>7</DaysAfterInitiation>
</AbortIncompleteMultipartUpload>
</Rule>
</LifecycleConfiguration>

In the example, the rule does not specify a value for the Prefix element (object key name prefix).
Therefore, it applies to all objects in the bucket for which you initiated multipart uploads. Any multipart
uploads that were initiated and did not complete within seven days become eligible for an abort
operation. The abort action has no effect on completed multipart uploads.

For more information about the bucket lifecycle configuration, see Managing your storage
lifecycle (p. 501).
Note
If the multipart upload is completed within the number of days specified in the rule, the
AbortIncompleteMultipartUpload lifecycle action does not apply (that is, Amazon S3 does
not take any action). Also, this action does not apply to objects. No objects are deleted by this
lifecycle action.

The following put-bucket-lifecycle-configuration CLI command adds the lifecycle configuration for the specified bucket.

$ aws s3api put-bucket-lifecycle-configuration \
--bucket bucketname \
--lifecycle-configuration filename-containing-lifecycle-configuration

To test the CLI command, do the following:


1. Set up the AWS CLI. For instructions, see Developing with Amazon S3 using the AWS CLI (p. 942).
2. Save the following example lifecycle configuration in a file (lifecycle.json). The example
configuration specifies an empty prefix and therefore applies to all objects in the bucket. You can
specify a prefix to restrict the policy to a subset of objects.

{
"Rules": [
{
"ID": "Test Rule",
"Status": "Enabled",
"Filter": {
"Prefix": ""
},
"AbortIncompleteMultipartUpload": {
"DaysAfterInitiation": 7
}
}
]
}

3. Run the following CLI command to set lifecycle configuration on your bucket.

aws s3api put-bucket-lifecycle-configuration \
--bucket bucketname \
--lifecycle-configuration file://lifecycle.json

4. To verify, retrieve the lifecycle configuration using the get-bucket-lifecycle CLI command.

aws s3api get-bucket-lifecycle \
--bucket bucketname

5. To delete the lifecycle configuration, use the delete-bucket-lifecycle CLI command.

aws s3api delete-bucket-lifecycle \
--bucket bucketname

Uploading an object using multipart upload


You can use the multipart upload to programmatically upload a single object to Amazon S3.

For more information, see the following sections.

Using the AWS SDKs (high-level API)


The AWS SDK exposes a high-level API, called TransferManager, that simplifies multipart uploads. For
more information, see Uploading and copying objects using multipart upload (p. 72).

You can upload data from a file or a stream. You can also set advanced options, such as the part size
you want to use for the multipart upload, or the number of concurrent threads you want to use when
uploading the parts. You can also set optional object properties, the storage class, or the access control
list (ACL). You use the PutObjectRequest and the TransferManagerConfiguration classes to set
these advanced options.

When possible, TransferManager tries to use multiple threads to upload multiple parts of a single
upload at once. When dealing with large content sizes and high bandwidth, this can increase throughput
significantly.


In addition to file-upload functionality, the TransferManager class enables you to stop an in-progress
multipart upload. An upload is considered to be in progress after you initiate it and until you complete or
stop it. The TransferManager stops all in-progress multipart uploads on a specified bucket that were
initiated before a specified date and time.

If you need to pause and resume multipart uploads, vary part sizes during the upload, or do not know
the size of the data in advance, use the low-level API. For more information about multipart
uploads, including additional functionality offered by the low-level API methods, see Using the AWS
SDKs (low-level API) (p. 82).

Java

The following example uploads an object using the high-level multipart upload Java API (the
TransferManager class). For instructions on creating and testing a working sample, see Testing the
Amazon S3 Java Code Examples (p. 950).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

import java.io.File;

public class HighLevelMultipartUpload {

public static void main(String[] args) throws Exception {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String keyName = "*** Object key ***";
String filePath = "*** Path for file to upload ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();
TransferManager tm = TransferManagerBuilder.standard()
.withS3Client(s3Client)
.build();

// TransferManager processes all transfers asynchronously,


// so this call returns immediately.
Upload upload = tm.upload(bucketName, keyName, new File(filePath));
System.out.println("Object upload started");

// Optionally, wait for the upload to finish before continuing.


upload.waitForCompletion();
System.out.println("Object upload complete");
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

.NET

To upload a file to an S3 bucket, use the TransferUtility class. When uploading data from a file,
you can provide the object's key name. If you don't, the API uses the file name for the key name.
When uploading data from a stream, you must provide the object's key name.

To set advanced upload options—such as the part size, the number of threads when
uploading the parts concurrently, metadata, the storage class, or ACL—use the
TransferUtilityUploadRequest class.

The following C# example uploads a file to an Amazon S3 bucket in multiple parts. It shows how to
use various TransferUtility.Upload overloads to upload a file. Each successive call to upload
replaces the previous upload. For information about the example's compatibility with a specific
version of the AWS SDK for .NET and instructions for creating and testing a working sample, see
Running the Amazon S3 .NET Code Examples (p. 951).

using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class UploadFileMPUHighLevelAPITest
{
private const string bucketName = "*** provide bucket name ***";
private const string keyName = "*** provide a name for the uploaded object ***";
private const string filePath = "*** provide the full path name of the file to upload ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
UploadFileAsync().Wait();
}

private static async Task UploadFileAsync()


{
try
{
var fileTransferUtility =
new TransferUtility(s3Client);

// Option 1. Upload a file. The file name is used as the object key name.
await fileTransferUtility.UploadAsync(filePath, bucketName);
Console.WriteLine("Upload 1 completed");

// Option 2. Specify object key name explicitly.


await fileTransferUtility.UploadAsync(filePath, bucketName, keyName);
Console.WriteLine("Upload 2 completed");

// Option 3. Upload data from a type of System.IO.Stream.


using (var fileToUpload =

new FileStream(filePath, FileMode.Open, FileAccess.Read))


{
await fileTransferUtility.UploadAsync(fileToUpload,
bucketName, keyName);
}
Console.WriteLine("Upload 3 completed");

// Option 4. Specify advanced settings.


var fileTransferUtilityRequest = new TransferUtilityUploadRequest
{
BucketName = bucketName,
FilePath = filePath,
StorageClass = S3StorageClass.StandardInfrequentAccess,
PartSize = 6291456, // 6 MB.
Key = keyName,
CannedACL = S3CannedACL.PublicRead
};
fileTransferUtilityRequest.Metadata.Add("param1", "Value1");
fileTransferUtilityRequest.Metadata.Add("param2", "Value2");

await fileTransferUtility.UploadAsync(fileTransferUtilityRequest);
Console.WriteLine("Upload 4 completed");
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
}

}
}
}

PHP

This topic explains how to use the high-level Aws\S3\MultipartUploader class from the AWS SDK
for PHP for multipart file uploads. It assumes that you are already following the instructions
for Using the AWS SDK for PHP and Running PHP Examples (p. 952) and have the AWS SDK for PHP
properly installed.

The following PHP example uploads a file to an Amazon S3 bucket. The example demonstrates how
to set parameters for the MultipartUploader object.

For information about running the PHP examples in this guide, see Running PHP
Examples (p. 952).

require 'vendor/autoload.php';

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'

]);

// Prepare the upload parameters.


$uploader = new MultipartUploader($s3, '/path/to/large/file.zip', [
'bucket' => $bucket,
'key' => $keyname
]);

// Perform the upload.


try {
$result = $uploader->upload();
echo "Upload complete: {$result['ObjectURL']}" . PHP_EOL;
} catch (MultipartUploadException $e) {
echo $e->getMessage() . PHP_EOL;
}

Using the AWS SDKs (low-level API)


The AWS SDK exposes a low-level API that closely resembles the Amazon S3 REST API for multipart
uploads (see Uploading and copying objects using multipart upload (p. 72)). Use the low-level API
when you need to pause and resume multipart uploads, vary part sizes during the upload, or do not
know the size of the upload data in advance. When you don't have these requirements, use the high-level
API (see Using the AWS SDKs (high-level API) (p. 78)).

Java

The following example shows how to use the low-level Java classes to upload a file. It performs the
following steps:

• Initiates a multipart upload using the AmazonS3Client.initiateMultipartUpload()


method, and passes in an InitiateMultipartUploadRequest object.
• Saves the upload ID that the AmazonS3Client.initiateMultipartUpload() method returns.
You provide this upload ID for each subsequent multipart upload operation.
• Uploads the parts of the object. For each part, you call the AmazonS3Client.uploadPart()
method. You provide part upload information using an UploadPartRequest object.
• For each part, saves the ETag from the response of the AmazonS3Client.uploadPart()
method in a list. You use the ETag values to complete the multipart upload.
• Calls the AmazonS3Client.completeMultipartUpload() method to complete the multipart
upload.

Example

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;

import java.util.List;

public class LowLevelMultipartUpload {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String keyName = "*** Key name ***";
String filePath = "*** Path to file to upload ***";

File file = new File(filePath);


long contentLength = file.length();
long partSize = 5 * 1024 * 1024; // Set part size to 5 MB.

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();

// Create a list of ETag objects. You retrieve ETags for each object part uploaded,
// then, after each individual part has been uploaded, pass the list of ETags to
// the request to complete the upload.
List<PartETag> partETags = new ArrayList<PartETag>();

// Initiate the multipart upload.


InitiateMultipartUploadRequest initRequest = new
InitiateMultipartUploadRequest(bucketName, keyName);
InitiateMultipartUploadResult initResponse =
s3Client.initiateMultipartUpload(initRequest);

// Upload the file parts.


long filePosition = 0;
for (int i = 1; filePosition < contentLength; i++) {
// Because the last part could be less than 5 MB, adjust the part size as needed.
partSize = Math.min(partSize, (contentLength - filePosition));

// Create the request to upload a part.


UploadPartRequest uploadRequest = new UploadPartRequest()
.withBucketName(bucketName)
.withKey(keyName)
.withUploadId(initResponse.getUploadId())
.withPartNumber(i)
.withFileOffset(filePosition)
.withFile(file)
.withPartSize(partSize);

// Upload the part and add the response's ETag to our list.
UploadPartResult uploadResult = s3Client.uploadPart(uploadRequest);
partETags.add(uploadResult.getPartETag());

filePosition += partSize;
}

// Complete the multipart upload.


CompleteMultipartUploadRequest compRequest = new
CompleteMultipartUploadRequest(bucketName, keyName,
initResponse.getUploadId(), partETags);
s3Client.completeMultipartUpload(compRequest);
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();

} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

.NET

The following C# example shows how to use the low-level AWS SDK for .NET multipart upload API
to upload a file to an S3 bucket. For information about Amazon S3 multipart uploads, see Uploading
and copying objects using multipart upload (p. 72).
Note
When you use the AWS SDK for .NET API to upload large objects, a timeout might occur
while data is being written to the request stream. You can set an explicit timeout using the
UploadPartRequest.

The following C# example uploads a file to an S3 bucket using the low-level multipart upload API.
For information about the example's compatibility with a specific version of the AWS SDK for .NET
and instructions for creating and testing a working sample, see Running the Amazon S3 .NET Code
Examples (p. 951).

using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class UploadFileMPULowLevelAPITest
{
private const string bucketName = "*** provide bucket name ***";
private const string keyName = "*** provide a name for the uploaded object ***";
private const string filePath = "*** provide the full path name of the file to upload ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
Console.WriteLine("Uploading an object");
UploadObjectAsync().Wait();
}

private static async Task UploadObjectAsync()


{
// Create list to store upload part responses.
List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();

// Setup information required to initiate the multipart upload.


InitiateMultipartUploadRequest initiateRequest = new
InitiateMultipartUploadRequest
{

BucketName = bucketName,
Key = keyName
};

// Initiate the upload.


InitiateMultipartUploadResponse initResponse =
await s3Client.InitiateMultipartUploadAsync(initiateRequest);

// Upload parts.
long contentLength = new FileInfo(filePath).Length;
long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB

try
{
Console.WriteLine("Uploading parts");

long filePosition = 0;
for (int i = 1; filePosition < contentLength; i++)
{
UploadPartRequest uploadRequest = new UploadPartRequest
{
BucketName = bucketName,
Key = keyName,
UploadId = initResponse.UploadId,
PartNumber = i,
PartSize = partSize,
FilePosition = filePosition,
FilePath = filePath
};

// Track upload progress.


uploadRequest.StreamTransferProgress +=
new
EventHandler<StreamTransferProgressArgs>(UploadPartProgressEventCallback);

// Upload a part and add the response to our list.


uploadResponses.Add(await s3Client.UploadPartAsync(uploadRequest));

filePosition += partSize;
}

// Setup to complete the upload.


CompleteMultipartUploadRequest completeRequest = new
CompleteMultipartUploadRequest
{
BucketName = bucketName,
Key = keyName,
UploadId = initResponse.UploadId
};
completeRequest.AddPartETags(uploadResponses);

// Complete the upload.


CompleteMultipartUploadResponse completeUploadResponse =
await s3Client.CompleteMultipartUploadAsync(completeRequest);
}
catch (Exception exception)
{
Console.WriteLine("An AmazonS3Exception was thrown: { 0}",
exception.Message);

// Abort the upload.


AbortMultipartUploadRequest abortMPURequest = new
AbortMultipartUploadRequest
{
BucketName = bucketName,
Key = keyName,

UploadId = initResponse.UploadId
};
await s3Client.AbortMultipartUploadAsync(abortMPURequest);
}
}
public static void UploadPartProgressEventCallback(object sender,
StreamTransferProgressArgs e)
{
// Process event.
Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
}
}
}

PHP

This topic shows how to use the low-level uploadPart method from version 3 of the AWS SDK for
PHP to upload a file in multiple parts. It assumes that you are already following the instructions for
Using the AWS SDK for PHP and Running PHP Examples (p. 952) and have the AWS SDK for PHP
properly installed.

The following PHP example uploads a file to an Amazon S3 bucket using the low-level PHP API
multipart upload. For information about running the PHP examples in this guide, see Running PHP
Examples (p. 952).

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';
$filename = '*** Path to and Name of the File to Upload ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

$result = $s3->createMultipartUpload([
'Bucket' => $bucket,
'Key' => $keyname,
'StorageClass' => 'REDUCED_REDUNDANCY',
'Metadata' => [
'param1' => 'value 1',
'param2' => 'value 2',
'param3' => 'value 3'
]
]);
$uploadId = $result['UploadId'];

// Upload the file in parts.


try {
$file = fopen($filename, 'r');
$partNumber = 1;
while (!feof($file)) {
$result = $s3->uploadPart([
'Bucket' => $bucket,
'Key' => $keyname,
'UploadId' => $uploadId,
'PartNumber' => $partNumber,
'Body' => fread($file, 5 * 1024 * 1024),
]);
$parts['Parts'][$partNumber] = [
'PartNumber' => $partNumber,

'ETag' => $result['ETag'],


];
echo "Uploaded part {$partNumber} of {$filename}." . PHP_EOL;
$partNumber++;
}
fclose($file);
} catch (S3Exception $e) {
$result = $s3->abortMultipartUpload([
'Bucket' => $bucket,
'Key' => $keyname,
'UploadId' => $uploadId
]);

echo "Upload of {$filename} failed." . PHP_EOL;


}

// Complete the multipart upload.


$result = $s3->completeMultipartUpload([
'Bucket' => $bucket,
'Key' => $keyname,
'UploadId' => $uploadId,
'MultipartUpload' => $parts,
]);
$url = $result['Location'];

echo "Uploaded {$filename} to {$url}." . PHP_EOL;

Using the AWS SDK for Ruby


The AWS SDK for Ruby version 3 supports Amazon S3 multipart uploads in two ways. For the first option,
you can use managed file uploads. For more information, see Uploading Files to Amazon S3 in the AWS
Developer Blog. Managed file uploads are the recommended method for uploading files to a bucket. They
provide the following benefits:

• Manage multipart uploads for objects larger than 15 MB.


• Correctly open files in binary mode to avoid encoding issues.
• Use multiple threads for uploading parts of large objects in parallel.

Alternatively, you can use the following multipart upload client operations directly:

• create_multipart_upload – Initiates a multipart upload and returns an upload ID.


• upload_part – Uploads a part in a multipart upload.
• upload_part_copy – Uploads a part by copying data from an existing object as data source.
• complete_multipart_upload – Completes a multipart upload by assembling previously uploaded parts.
• abort_multipart_upload – Stops a multipart upload.

For more information, see Using the AWS SDK for Ruby - Version 3 (p. 953).

Using the REST API


The following sections in the Amazon Simple Storage Service API Reference describe the REST API for
multipart upload.

• Initiate Multipart Upload


• Upload Part


• Complete Multipart Upload


• Stop Multipart Upload
• List Parts
• List Multipart Uploads

Using the AWS CLI


The following sections in the AWS Command Line Interface (AWS CLI) describe the operations for
multipart upload.

• Initiate Multipart Upload


• Upload Part
• Upload Part (Copy)
• Complete Multipart Upload
• Abort Multipart Upload
• List Parts
• List Multipart Uploads

You can also use the REST API to make your own REST requests, or you can use one of the AWS SDKs. For
more information about the REST API, see Using the REST API (p. 87). For more information about the
SDKs, see Uploading an object using multipart upload (p. 78).

Uploading a directory using the high-level .NET TransferUtility class
You can use the TransferUtility class to upload an entire directory. By default, the API uploads only
the files at the root of the specified directory. You can, however, specify recursively uploading files in all
of the subdirectories.

To select files in the specified directory based on filtering criteria, specify filtering expressions. For
example, to upload only the .pdf files from a directory, specify the "*.pdf" filter expression.

When uploading files from a directory, you don't specify the key names for the resulting objects. Amazon
S3 constructs the key names using the original file path. For example, assume that you have a directory
called c:\myfolder with the following structure:

Example

C:\myfolder
\a.txt
\b.pdf
\media\
An.mp3

When you upload this directory, Amazon S3 uses the following key names:

Example

a.txt
b.pdf
media/An.mp3

Example

The following C# example uploads a directory to an Amazon S3 bucket. It shows how to use various
TransferUtility.UploadDirectory overloads to upload the directory. Each successive call to
upload replaces the previous upload. For instructions on how to create and test a working sample, see
Running the Amazon S3 .NET Code Examples (p. 951).

using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class UploadDirMPUHighLevelAPITest
{
private const string existingBucketName = "*** bucket name ***";
private const string directoryPath = @"*** directory path ***";
// The example uploads only .txt files.
private const string wildCard = "*.txt";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
static void Main()
{
s3Client = new AmazonS3Client(bucketRegion);
UploadDirAsync().Wait();
}

private static async Task UploadDirAsync()


{
try
{
var directoryTransferUtility =
new TransferUtility(s3Client);

// 1. Upload a directory.
await directoryTransferUtility.UploadDirectoryAsync(directoryPath,
existingBucketName);
Console.WriteLine("Upload statement 1 completed");

// 2. Upload only the .txt files from a directory


// and search recursively.
await directoryTransferUtility.UploadDirectoryAsync(
directoryPath,
existingBucketName,
wildCard,
SearchOption.AllDirectories);
Console.WriteLine("Upload statement 2 completed");

// 3. The same as Step 2 and some optional configuration.


// Search recursively for .txt files to upload.
var request = new TransferUtilityUploadDirectoryRequest
{
BucketName = existingBucketName,
Directory = directoryPath,
SearchOption = SearchOption.AllDirectories,
SearchPattern = wildCard
};

await directoryTransferUtility.UploadDirectoryAsync(request);
Console.WriteLine("Upload statement 3 completed");
}
catch (AmazonS3Exception e)
{
Console.WriteLine(
"Error encountered ***. Message:'{0}' when writing an object",
e.Message);
}
catch (Exception e)
{
Console.WriteLine(
"Unknown encountered on server. Message:'{0}' when writing an object",
e.Message);
}
}
}
}

Listing multipart uploads


You can use the AWS SDKs (low-level API) to retrieve a list of in-progress multipart uploads in Amazon
S3.

Listing multipart uploads using the AWS SDK (low-level API)


Java

The following tasks guide you through using the low-level Java classes to list all in-progress
multipart uploads on a bucket.

Low-level API multipart uploads listing process

1. Create an instance of the ListMultipartUploadsRequest class and provide the bucket name.

2. Run the AmazonS3Client.listMultipartUploads method. The method returns an instance of the
   MultipartUploadListing class that gives you information about the multipart uploads in progress.

The following Java code example demonstrates the preceding tasks.

Example

ListMultipartUploadsRequest allMultipartUploadsRequest =
    new ListMultipartUploadsRequest(existingBucketName);
MultipartUploadListing multipartUploadListing =
    s3Client.listMultipartUploads(allMultipartUploadsRequest);

.NET

To list all of the in-progress multipart uploads on a specific bucket, use the AWS SDK
for .NET low-level multipart upload API's ListMultipartUploadsRequest class.
The AmazonS3Client.ListMultipartUploads method returns an instance of the
ListMultipartUploadsResponse class that provides information about the in-progress
multipart uploads.


An in-progress multipart upload is a multipart upload that has been initiated using the initiate
multipart upload request, but has not yet been completed or stopped. For more information about
Amazon S3 multipart uploads, see Uploading and copying objects using multipart upload (p. 72).

The following C# example shows how to use the AWS SDK for .NET to list all in-progress multipart
uploads on a bucket. For information about the example's compatibility with a specific version of
the AWS SDK for .NET and instructions on how to create and test a working sample, see Running the
Amazon S3 .NET Code Examples (p. 951).

ListMultipartUploadsRequest request = new ListMultipartUploadsRequest
{
    BucketName = bucketName // Bucket receiving the uploads.
};

// s3Client is an initialized IAmazonS3 client.
ListMultipartUploadsResponse response = await s3Client.ListMultipartUploadsAsync(request);

PHP

This topic shows how to use the low-level API classes from version 3 of the AWS SDK for PHP to
list all in-progress multipart uploads on a bucket. It assumes that you are already following the
instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 952) and have the AWS
SDK for PHP properly installed.

The following PHP example demonstrates listing all in-progress multipart uploads on a bucket.

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// Retrieve a list of the current multipart uploads.


$result = $s3->listMultipartUploads([
'Bucket' => $bucket
]);

// Write the list of uploads to the page.


print_r($result->toArray());

Listing multipart uploads using the REST API


The following sections in the Amazon Simple Storage Service API Reference describe the REST API for
listing multipart uploads:

• ListParts - list the uploaded parts for a specific multipart upload.

• ListMultipartUploads - list in-progress multipart uploads.

Listing multipart uploads using the AWS CLI


The following sections in the AWS Command Line Interface describe the operations for listing multipart
uploads.

• list-parts - list the uploaded parts for a specific multipart upload.


• list-multipart-uploads - list in-progress multipart uploads.

Tracking a multipart upload


The high-level multipart upload API provides a listener interface, ProgressListener, to track the upload
progress when uploading an object to Amazon S3. Progress events occur periodically and notify the
listener that bytes have been transferred.

Java

Example

TransferManager tm = new TransferManager(new ProfileCredentialsProvider());

PutObjectRequest request = new PutObjectRequest(


existingBucketName, keyName, new File(filePath));

// Subscribe to the event and provide event handler.


request.setProgressListener(new ProgressListener() {
public void progressChanged(ProgressEvent event) {
System.out.println("Transferred bytes: " +
event.getBytesTransfered());
}
});

Example

The following Java code uploads a file and uses the ProgressListener to track the upload
progress. For instructions on how to create and test a working sample, see Testing the Amazon S3
Java Code Examples (p. 950).

import java.io.File;

import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.event.ProgressEvent;
import com.amazonaws.event.ProgressListener;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class TrackMPUProgressUsingHighLevelAPI {

public static void main(String[] args) throws Exception {


String existingBucketName = "*** Provide bucket name ***";
String keyName = "*** Provide object key ***";
String filePath = "*** file to upload ***";

TransferManager tm = new TransferManager(new ProfileCredentialsProvider());

// For more advanced uploads, you can create a request object


// and supply additional request parameters (ex: progress listeners,
// canned ACLs, etc.)
PutObjectRequest request = new PutObjectRequest(
existingBucketName, keyName, new File(filePath));

// You can ask the upload for its progress, or you can
// add a ProgressListener to your request to receive notifications
// when bytes are transferred.


request.setGeneralProgressListener(new ProgressListener() {
@Override
public void progressChanged(ProgressEvent progressEvent) {
System.out.println("Transferred bytes: " +
progressEvent.getBytesTransferred());
}
});

// TransferManager processes all transfers asynchronously,


// so this call will return immediately.
Upload upload = tm.upload(request);

try {
// You can block and wait for the upload to finish
upload.waitForCompletion();
} catch (AmazonClientException amazonClientException) {
System.out.println("Unable to upload file, upload aborted.");
amazonClientException.printStackTrace();
}
}
}

.NET

The following C# example uploads a file to an S3 bucket using the TransferUtility class, and
tracks the progress of the upload. For information about the example's compatibility with a specific
version of the AWS SDK for .NET and instructions for creating and testing a working sample, see
Running the Amazon S3 .NET Code Examples (p. 951).

using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class TrackMPUUsingHighLevelAPITest
{
private const string bucketName = "*** provide the bucket name ***";
private const string keyName = "*** provide the name for the uploaded object ***";
private const string filePath = "*** provide the full path name of the file to upload ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
TrackMPUAsync().Wait();
}

private static async Task TrackMPUAsync()


{
try
{
var fileTransferUtility = new TransferUtility(s3Client);

// Use TransferUtilityUploadRequest to configure options.


// In this example we subscribe to an event.
var uploadRequest =


new TransferUtilityUploadRequest
{
BucketName = bucketName,
FilePath = filePath,
Key = keyName
};

uploadRequest.UploadProgressEvent +=
new EventHandler<UploadProgressArgs>
(uploadRequest_UploadPartProgressEvent);

await fileTransferUtility.UploadAsync(uploadRequest);
Console.WriteLine("Upload completed");
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
}
}

static void uploadRequest_UploadPartProgressEvent(object sender,


UploadProgressArgs e)
{
// Process event.
Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
}
}
}

Aborting a multipart upload


After you initiate a multipart upload, you begin uploading parts. Amazon S3 stores these parts, but it
creates the object from the parts only after you upload all of them and send a successful request
to complete the multipart upload (you should verify that your request to complete multipart upload is
successful). Upon receiving the complete multipart upload request, Amazon S3 assembles the parts and
creates an object. If you don't send the complete multipart upload request successfully, Amazon S3 does
not assemble the parts and does not create any object.

You are billed for all storage associated with uploaded parts. For more information, see Multipart upload
and pricing (p. 74). So it's important that you either complete the multipart upload to have the object
created or stop the multipart upload to remove any uploaded parts.

You can stop an in-progress multipart upload in Amazon S3 using the AWS Command Line Interface
(AWS CLI), REST API, or AWS SDKs. You can also stop an incomplete multipart upload using a bucket
lifecycle policy.

Using the AWS SDKs (high-level API)


Java

The TransferManager class provides the abortMultipartUploads method to stop multipart


uploads in progress. An upload is considered to be in progress after you initiate it and until you
complete it or stop it. You provide a Date value, and this API stops all the multipart uploads on that
bucket that were initiated before the specified Date and are still in progress.


The following tasks guide you through using the high-level Java classes to stop multipart uploads.

High-level API multipart uploads stopping process

1 Create an instance of the TransferManager class.

2 Run the TransferManager.abortMultipartUploads method by passing the bucket name and a
Date value.

The following Java code stops all multipart uploads in progress that were initiated on a specific
bucket over a week ago. For instructions on how to create and test a working sample, see Testing the
Amazon S3 Java Code Examples (p. 950).

import java.util.Date;

import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.transfer.TransferManager;

public class AbortMPUUsingHighLevelAPI {

public static void main(String[] args) throws Exception {


String existingBucketName = "*** Provide existing bucket name ***";

TransferManager tm = new TransferManager(new ProfileCredentialsProvider());

int sevenDays = 1000 * 60 * 60 * 24 * 7;


Date oneWeekAgo = new Date(System.currentTimeMillis() - sevenDays);

try {
tm.abortMultipartUploads(existingBucketName, oneWeekAgo);
} catch (AmazonClientException amazonClientException) {
System.out.println("Unable to upload file, upload was aborted.");
amazonClientException.printStackTrace();
}
}
}

Note
You can also stop a specific multipart upload. For more information, see Using the AWS
SDKs (low-level API) (p. 96).
.NET

The following C# example stops all in-progress multipart uploads that were initiated on a specific
bucket over a week ago. For information about the example's compatibility with a specific version of
the AWS SDK for .NET and instructions on creating and testing a working sample, see Running the
Amazon S3 .NET Code Examples (p. 951).

using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class AbortMPUUsingHighLevelAPITest
{
private const string bucketName = "*** provide bucket name ***";


// Specify your bucket region (an example region is shown).


private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
AbortMPUAsync().Wait();
}

private static async Task AbortMPUAsync()


{
try
{
var transferUtility = new TransferUtility(s3Client);

// Abort all in-progress uploads initiated before the specified date.


await transferUtility.AbortMultipartUploadsAsync(
bucketName, DateTime.Now.AddDays(-7));
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
}
}

Note
You can also stop a specific multipart upload. For more information, see Using the AWS
SDKs (low-level API) (p. 96).

Using the AWS SDKs (low-level API)


You can stop an in-progress multipart upload by calling the AmazonS3.abortMultipartUpload
method. This method deletes any parts that were uploaded to Amazon S3 and frees up the resources.
You must provide the upload ID, bucket name, and key name. The following Java code example
demonstrates how to stop an in-progress multipart upload.

To stop a multipart upload, you provide the upload ID, and the bucket and key names that are used in
the upload. After you have stopped a multipart upload, you can't use the upload ID to upload additional
parts. For more information about Amazon S3 multipart uploads, see Uploading and copying objects
using multipart upload (p. 72).

Java

The following Java code example stops an in-progress multipart upload.

Example

AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

InitiateMultipartUploadRequest initRequest =
new InitiateMultipartUploadRequest(existingBucketName, keyName);
InitiateMultipartUploadResult initResponse =
s3Client.initiateMultipartUpload(initRequest);

s3Client.abortMultipartUpload(new AbortMultipartUploadRequest(
existingBucketName, keyName, initResponse.getUploadId()));

Note
Instead of a specific multipart upload, you can stop all your multipart uploads initiated
before a specific time that are still in progress. This clean-up operation is useful to stop old
multipart uploads that you initiated but did not complete or stop. For more information,
see Using the AWS SDKs (high-level API) (p. 94).
.NET

The following C# example shows how to stop a multipart upload. For a complete C# sample that
includes the following code, see Using the AWS SDKs (low-level API) (p. 82).

AbortMultipartUploadRequest abortMPURequest = new AbortMultipartUploadRequest


{
BucketName = existingBucketName,
Key = keyName,
UploadId = initResponse.UploadId
};
await AmazonS3Client.AbortMultipartUploadAsync(abortMPURequest);

You can also abort all in-progress multipart uploads that were initiated prior to a specific time. This
clean-up operation is useful for removing multipart uploads that you initiated but never completed or
stopped. For more information, see Using the AWS SDKs (high-level API) (p. 94).
PHP

This example shows how to use a class from version 3 of the AWS SDK for PHP to abort a multipart
upload that is in progress. It assumes that you are already following the instructions for Using the
AWS SDK for PHP and Running PHP Examples (p. 952) and have the AWS SDK for PHP properly
installed. The example uses the abortMultipartUpload() method.

For information about running the PHP examples in this guide, see Running PHP
Examples (p. 952).

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';
$uploadId = '*** Upload ID of upload to Abort ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// Abort the multipart upload.


$s3->abortMultipartUpload([
'Bucket' => $bucket,
'Key' => $keyname,
'UploadId' => $uploadId,
]);

Using the REST API


For more information about using the REST API to stop a multipart upload, see AbortMultipartUpload in
the Amazon Simple Storage Service API Reference.


Using the AWS CLI


For more information about using the AWS CLI to stop a multipart upload, see abort-multipart-upload in
the AWS CLI Command Reference.

Copying an object using multipart upload


The examples in this section show you how to copy objects greater than 5 GB using the multipart
upload API. You can copy objects less than 5 GB in a single operation. For more information, see Copying
objects (p. 102).

Using the AWS SDKs


To copy an object using the low-level API, do the following:

• Initiate a multipart upload by calling the AmazonS3Client.initiateMultipartUpload() method.


• Save the upload ID from the response object that the
AmazonS3Client.initiateMultipartUpload() method returns. You provide this upload ID for
each part-upload operation.
• Copy all of the parts. For each part that you need to copy, create a new instance of the
CopyPartRequest class. Provide the part information, including the source and destination bucket
names, source and destination object keys, upload ID, locations of the first and last bytes of the part,
and part number.
• Save the responses of the AmazonS3Client.copyPart() method calls. Each response includes
the ETag value and part number for the uploaded part. You need this information to complete the
multipart upload.
• Call the AmazonS3Client.completeMultipartUpload() method to complete the copy operation.

Java

Example

The following example shows how to use the Amazon S3 low-level Java API to perform a multipart
copy. For instructions on creating and testing a working sample, see Testing the Amazon S3 Java
Code Examples (p. 950).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class LowLevelMultipartCopy {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String sourceBucketName = "*** Source bucket name ***";
String sourceObjectKey = "*** Source object key ***";
String destBucketName = "*** Target bucket name ***";
String destObjectKey = "*** Target object key ***";


try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Initiate the multipart upload.


InitiateMultipartUploadRequest initRequest = new
InitiateMultipartUploadRequest(destBucketName, destObjectKey);
InitiateMultipartUploadResult initResult =
s3Client.initiateMultipartUpload(initRequest);

// Get the object size to track the end of the copy operation.
GetObjectMetadataRequest metadataRequest = new
GetObjectMetadataRequest(sourceBucketName, sourceObjectKey);
ObjectMetadata metadataResult =
s3Client.getObjectMetadata(metadataRequest);
long objectSize = metadataResult.getContentLength();

// Copy the object using 5 MB parts.


long partSize = 5 * 1024 * 1024;
long bytePosition = 0;
int partNum = 1;
List<CopyPartResult> copyResponses = new ArrayList<CopyPartResult>();
while (bytePosition < objectSize) {
// The last part might be smaller than partSize, so check to make sure
// that lastByte isn't beyond the end of the object.
long lastByte = Math.min(bytePosition + partSize - 1, objectSize - 1);

// Copy this part.


CopyPartRequest copyRequest = new CopyPartRequest()
.withSourceBucketName(sourceBucketName)
.withSourceKey(sourceObjectKey)
.withDestinationBucketName(destBucketName)
.withDestinationKey(destObjectKey)
.withUploadId(initResult.getUploadId())
.withFirstByte(bytePosition)
.withLastByte(lastByte)
.withPartNumber(partNum++);
copyResponses.add(s3Client.copyPart(copyRequest));
bytePosition += partSize;
}

// Complete the upload request to concatenate all uploaded parts and make the copied object available.
CompleteMultipartUploadRequest completeRequest = new
CompleteMultipartUploadRequest(
destBucketName,
destObjectKey,
initResult.getUploadId(),
getETags(copyResponses));
s3Client.completeMultipartUpload(completeRequest);
System.out.println("Multipart copy complete.");
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}

// This is a helper function to construct a list of ETags.


private static List<PartETag> getETags(List<CopyPartResult> responses) {


List<PartETag> etags = new ArrayList<PartETag>();


for (CopyPartResult response : responses) {
etags.add(new PartETag(response.getPartNumber(), response.getETag()));
}
return etags;
}
}

.NET

The following C# example shows how to use the AWS SDK for .NET to copy an Amazon S3 object
that is larger than 5 GB from one source location to another, such as from one bucket to another. To
copy objects that are smaller than 5 GB, use the single-operation copy procedure described in Using
the AWS SDKs (p. 105). For more information about Amazon S3 multipart uploads, see Uploading
and copying objects using multipart upload (p. 72).

This example shows how to copy an Amazon S3 object that is larger than 5 GB from one S3
bucket to another using the AWS SDK for .NET multipart upload API. For information about SDK
compatibility and instructions for creating and testing a working sample, see Running the Amazon
S3 .NET Code Examples (p. 951).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class CopyObjectUsingMPUapiTest
{
private const string sourceBucket = "*** provide the name of the bucket with source object ***";
private const string targetBucket = "*** provide the name of the bucket to copy the object to ***";
private const string sourceObjectKey = "*** provide the name of object to copy ***";
private const string targetObjectKey = "*** provide the name of the object copy ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
Console.WriteLine("Copying an object");
MPUCopyObjectAsync().Wait();
}
private static async Task MPUCopyObjectAsync()
{
// Create a list to store the upload part responses.
List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();
List<CopyPartResponse> copyResponses = new List<CopyPartResponse>();

// Setup information required to initiate the multipart upload.


InitiateMultipartUploadRequest initiateRequest =
new InitiateMultipartUploadRequest
{
BucketName = targetBucket,
Key = targetObjectKey
};


// Initiate the upload.


InitiateMultipartUploadResponse initResponse =
await s3Client.InitiateMultipartUploadAsync(initiateRequest);

// Save the upload ID.


String uploadId = initResponse.UploadId;

try
{
// Get the size of the object.
GetObjectMetadataRequest metadataRequest = new GetObjectMetadataRequest
{
BucketName = sourceBucket,
Key = sourceObjectKey
};

GetObjectMetadataResponse metadataResponse =
await s3Client.GetObjectMetadataAsync(metadataRequest);
long objectSize = metadataResponse.ContentLength; // Length in bytes.

// Copy the parts.


long partSize = 5 * (long)Math.Pow(2, 20); // Part size is 5 MB.

long bytePosition = 0;
for (int i = 1; bytePosition < objectSize; i++)
{
CopyPartRequest copyRequest = new CopyPartRequest
{
DestinationBucket = targetBucket,
DestinationKey = targetObjectKey,
SourceBucket = sourceBucket,
SourceKey = sourceObjectKey,
UploadId = uploadId,
FirstByte = bytePosition,
LastByte = bytePosition + partSize - 1 >= objectSize ?
objectSize - 1 : bytePosition + partSize - 1,
PartNumber = i
};

copyResponses.Add(await s3Client.CopyPartAsync(copyRequest));

bytePosition += partSize;
}

// Set up to complete the copy.


CompleteMultipartUploadRequest completeRequest =
new CompleteMultipartUploadRequest
{
BucketName = targetBucket,
Key = targetObjectKey,
UploadId = initResponse.UploadId
};
completeRequest.AddPartETags(copyResponses);

// Complete the copy.


CompleteMultipartUploadResponse completeUploadResponse =
await s3Client.CompleteMultipartUploadAsync(completeRequest);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
writing an object", e.Message);
}
catch (Exception e)
{


Console.WriteLine("Unknown encountered on server. Message:'{0}' when


writing an object", e.Message);
}
}
}
}

Using the REST API


The following sections in the Amazon Simple Storage Service API Reference describe the REST API for
multipart upload. For copying an existing object, use the Upload Part (Copy) API and specify the source
object by adding the x-amz-copy-source request header in your request.

• Initiate Multipart Upload


• Upload Part
• Upload Part (Copy)
• Complete Multipart Upload
• Abort Multipart Upload
• List Parts
• List Multipart Uploads

You can use these APIs to make your own REST requests, or you can use one of the SDKs we provide. For
more information about using Multipart Upload with the AWS CLI, see Using the AWS CLI (p. 88). For
more information about the SDKs, see API support for multipart upload (p. 74).

Amazon S3 multipart upload limits


The following table provides multipart upload core specifications. For more information, see Uploading
and copying objects using multipart upload (p. 72).

Item                                           Specification

Maximum object size                            5 TB

Maximum number of parts per upload             10,000

Part numbers                                   1 to 10,000 (inclusive)

Part size                                      5 MB to 5 GB. There is no size limit on the
                                               last part of your multipart upload.

Maximum number of parts returned for a         1,000
list parts request

Maximum number of multipart uploads            1,000
returned in a list multipart uploads request

Copying objects
The copy operation creates a copy of an object that is already stored in Amazon S3. You can create
a copy of your object up to 5 GB in a single atomic operation. However, for copying an object that is
greater than 5 GB, you must use the multipart upload API. Using the copy operation, you can:


• Create additional copies of objects


• Rename objects by copying them and deleting the original ones
• Move objects across Amazon S3 locations (e.g., us-west-1 and Europe)
• Change object metadata

Each Amazon S3 object has metadata. It is a set of name-value pairs. You can set object metadata at
the time you upload it. After you upload the object, you cannot modify object metadata. The only way
to modify object metadata is to make a copy of the object and set the metadata. In the copy operation
you set the same object as the source and target.

Each object has metadata, some of it system metadata and some of it user-defined. You control some of
the system metadata, such as the storage class to use for the object and the server-side encryption
configuration. When you copy an object, user-controlled system metadata and user-defined metadata are
also copied. Amazon S3 resets the system-controlled metadata; for example, when you copy an object,
Amazon S3 resets the creation date of the copied object. You don't need to set any of these values in
your copy request.

When copying an object, you might decide to update some of the metadata values. For example, if
your source object is configured to use standard storage, you might choose to use reduced redundancy
storage for the object copy. You might also decide to alter some of the user-defined metadata values
present on the source object. Note that if you choose to update any of the object's user-configurable
metadata (system or user-defined) during the copy, then you must explicitly specify all of the user-
configurable metadata present on the source object in your request, even if you are changing only
one of the metadata values.

For more information about the object metadata, see Working with object metadata (p. 60).
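
As an illustration, the following sketch uses the AWS SDK for Java to replace an object's metadata by
copying the object onto itself. The bucket name, key, and the user-defined "title" value are placeholder
assumptions. Because the copy replaces rather than merges metadata, supply every value that the object
should keep.

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;

public class ReplaceObjectMetadataByCopy {

    public static void main(String[] args) {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        String key = "*** Object key ***";

        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withCredentials(new ProfileCredentialsProvider())
                .withRegion(clientRegion)
                .build();

        // The new metadata replaces the old metadata, so include every
        // user-defined value that you want the object to keep.
        ObjectMetadata newMetadata = new ObjectMetadata();
        newMetadata.addUserMetadata("title", "example-title"); // example value

        // Copy the object onto itself, supplying the replacement metadata.
        CopyObjectRequest copyRequest = new CopyObjectRequest(bucketName, key, bucketName, key)
                .withNewObjectMetadata(newMetadata);
        s3Client.copyObject(copyRequest);
    }
}
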
Note

• Copying objects across locations incurs bandwidth charges.


• If the source object is archived in S3 Glacier or S3 Glacier Deep Archive, you
must first restore a temporary copy before you can copy the object to another bucket. For
information about archiving objects, see Transitioning to the S3 Glacier and S3 Glacier Deep
Archive storage classes (object archival) (p. 504).

When copying objects, you can request Amazon S3 to save the target object encrypted with an AWS
Key Management Service (AWS KMS) customer master key (CMK), an Amazon S3-managed encryption
key, or a customer-provided encryption key. Accordingly, you must specify encryption information in
your request. If the copy source is an object that is stored in Amazon S3 using server-side encryption
with customer provided key, you will need to provide encryption information in your request so
Amazon S3 can decrypt the object for copying. For more information, see Protecting data using
encryption (p. 157).

To copy more than one Amazon S3 object with a single request, you can use Amazon S3 batch
operations. You provide S3 Batch Operations with a list of objects to operate on. S3 Batch Operations
calls the respective API to perform the specified operation. A single Batch Operations job can perform
the specified operation on billions of objects containing exabytes of data.

The S3 Batch Operations feature tracks progress, sends notifications, and stores a detailed completion
report of all actions, providing a fully managed, auditable, serverless experience. You can use S3
Batch Operations through the AWS Management Console, AWS CLI, AWS SDKs, or REST API. For more
information, see the section called “Batch Ops basics” (p. 662).

To copy an object
To copy an object, use the examples below.


Using the S3 console


In the S3 console, you can copy or move an object. For more information, see the procedures below.

To copy an object

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Navigate to the Amazon S3 bucket or folder that contains the objects that you want to copy.
3. Select the check box to the left of the names of the objects that you want to copy.
4. Choose Actions and choose Copy from the list of options that appears.

Alternatively, choose Copy from the options in the upper right.


5. Select the destination type and destination account. To specify the destination path, choose Browse
S3, navigate to the destination, and select the check box to the left of the destination. Choose
Choose destination in the lower right.

Alternatively, enter the destination path.


6. If you do not have bucket versioning enabled, you might be asked to acknowledge that existing
objects with the same name are overwritten. If this is OK, select the check box and proceed. If you
want to keep all versions of objects in this bucket, select Enable Bucket Versioning. You can also
update default encryption and Object Lock properties.
7. Choose Copy in the bottom right and Amazon S3 moves your objects to the destination.

To move objects

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Navigate to the Amazon S3 bucket or folder that contains the objects that you want to move.
3. Select the check box to the left of the names of the objects that you want to move.
4. Choose Actions and choose Move from the list of options that appears.

Alternatively, choose Move from the options in the upper right.


5. To specify the destination path, choose Browse S3, navigate to the destination, and select the check
box to the left of the destination. Choose Choose destination in the lower right.

Alternatively, enter the destination path.


6. If you do not have bucket versioning enabled, you might be asked to acknowledge that existing
objects with the same name are overwritten. If this is OK, select the check box and proceed. If you
want to keep all versions of objects in this bucket, select Enable Bucket Versioning. You can also
update default encryption and Object Lock properties.
7. Choose Move in the bottom right and Amazon S3 moves your objects to the destination.

Note

• This action creates a copy of all specified objects with updated settings, updates the last-
modified date in the specified location, and adds a delete marker to the original object.
• When moving folders, wait for the move action to finish before making additional changes in
the folders.
• Objects encrypted with customer-provided encryption keys (SSE-C) cannot be copied using
the S3 console. To copy objects encrypted with SSE-C, use the AWS CLI, AWS SDK, or the
Amazon S3 REST API.


• This action updates metadata for bucket versioning, encryption, Object Lock features, and
archived objects.

Using the AWS SDKs


The examples in this section show how to copy objects up to 5 GB in a single operation. For copying
objects greater than 5 GB, you must use multipart upload API. For more information, see Copying an
object using multipart upload (p. 98).

Java

Example

The following example copies an object in Amazon S3 using the AWS SDK for Java. For instructions
on creating and testing a working sample, see Testing the Amazon S3 Java Code Examples (p. 950).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;

import java.io.IOException;

public class CopyObjectSingleOperation {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String sourceKey = "*** Source object key *** ";
String destinationKey = "*** Destination object key ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Copy the object into a new object in the same bucket.


CopyObjectRequest copyObjRequest = new CopyObjectRequest(bucketName,
sourceKey, bucketName, destinationKey);
s3Client.copyObject(copyObjRequest);
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}


.NET

The following C# example uses the high-level AWS SDK for .NET to copy objects that are as large
as 5 GB in a single operation. For objects that are larger than 5 GB, use the multipart upload copy
example described in Copying an object using multipart upload (p. 98).

This example makes a copy of an object that is a maximum of 5 GB. For information about the
example's compatibility with a specific version of the AWS SDK for .NET and instructions on how to
create and test a working sample, see Running the Amazon S3 .NET Code Examples (p. 951).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class CopyObjectTest
{
private const string sourceBucket = "*** provide the name of the bucket with source object ***";
private const string destinationBucket = "*** provide the name of the bucket to copy the object to ***";
private const string objectKey = "*** provide the name of object to copy ***";
private const string destObjectKey = "*** provide the destination object key name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
Console.WriteLine("Copying an object");
CopyingObjectAsync().Wait();
}

private static async Task CopyingObjectAsync()


{
try
{
CopyObjectRequest request = new CopyObjectRequest
{
SourceBucket = sourceBucket,
SourceKey = objectKey,
DestinationBucket = destinationBucket,
DestinationKey = destObjectKey
};
CopyObjectResponse response = await s3Client.CopyObjectAsync(request);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
}
}


PHP

This topic guides you through using classes from version 3 of the AWS SDK for PHP to copy a single
object and multiple objects within Amazon S3, from one bucket to another or within the same
bucket.

This topic assumes that you are already following the instructions for Using the AWS SDK for PHP
and Running PHP Examples (p. 952) and have the AWS SDK for PHP properly installed.

The following PHP example illustrates the use of the copyObject() method to copy a single object
within Amazon S3, and the use of a batch of CopyObject calls, created with the getCommand()
method, to make multiple copies of an object.

Copying objects

1 Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class constructor.

2 To make multiple copies of an object, you run a batch of calls to the Amazon S3 client
getCommand() method, which is inherited from the Aws\CommandInterface class.
You provide the CopyObject command as the first argument and an array containing
the source bucket, source key name, target bucket, and target key name as the second
argument.

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\CommandPool;
use Aws\ResultInterface;
use Aws\Exception\AwsException;

$sourceBucket = '*** Your Source Bucket Name ***';
$sourceKeyname = '*** Your Source Object Key ***';
$targetBucket = '*** Your Target Bucket Name ***';
$targetKeyname = '*** Your Target Object Key Name ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// Copy an object.
$s3->copyObject([
'Bucket' => $targetBucket,
'Key' => "{$sourceKeyname}-copy",
'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
]);

// Perform a batch of CopyObject operations.


$batch = array();
for ($i = 1; $i <= 3; $i++) {
$batch[] = $s3->getCommand('CopyObject', [
'Bucket' => $targetBucket,
'Key' => "{targetKeyname}-{$i}",
'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
]);
}
try {
$results = CommandPool::batch($s3, $batch);
foreach($results as $result) {
if ($result instanceof ResultInterface) {
// Result handling here
}
if ($result instanceof AwsException) {
// AwsException handling here


}
}
} catch (\Exception $e) {
// General error handling here
}

Ruby

The following tasks guide you through using the Ruby classes to copy an object in Amazon S3 from
one bucket to another or within the same bucket.

Copying objects

1 Use the Amazon S3 modularized gem for version 3 of the AWS SDK for Ruby, require
'aws-sdk-s3', and provide your AWS credentials. For more information about how
to provide your credentials, see Making requests using AWS account or IAM user
credentials (p. 909).

2 Provide the request information, such as source bucket name, source key name,
destination bucket name, and destination key.

The following Ruby code example demonstrates the preceding tasks using the #copy_object
method to copy an object from one bucket to another.

require 'aws-sdk-s3'

# Copies an object from one Amazon S3 bucket to another.


#
# Prerequisites:
#
# - Two S3 buckets (a source bucket and a target bucket).
# - An object in the source bucket to be copied.
#
# @param s3_client [Aws::S3::Client] An initialized Amazon S3 client.
# @param source_bucket_name [String] The source bucket's name.
# @param source_key [String] The name of the object
# in the source bucket to be copied.
# @param target_bucket_name [String] The target bucket's name.
# @param target_key [String] The name of the copied object.
# @return [Boolean] true if the object was copied; otherwise, false.
# @example
# s3_client = Aws::S3::Client.new(region: 'us-east-1')
# exit 1 unless object_copied?(
# s3_client,
# 'doc-example-bucket1',
# 'my-source-file.txt',
# 'doc-example-bucket2',
# 'my-target-file.txt'
# )
def object_copied?(
s3_client,
source_bucket_name,
source_key,
target_bucket_name,
target_key)

return true if s3_client.copy_object(


bucket: target_bucket_name,
copy_source: source_bucket_name + '/' + source_key,
key: target_key
)
rescue StandardError => e


puts "Error while copying object: #{e.message}"


end

Copying an object using the REST API


This example describes how to copy an object using REST. For more information about the REST API, go
to PUT Object (Copy).

This example copies the flotsam object from the pacific bucket to the jetsam object of the
atlantic bucket, preserving its metadata.

PUT /jetsam HTTP/1.1


Host: atlantic.s3.amazonaws.com
x-amz-copy-source: /pacific/flotsam
Authorization: AWS AKIAIOSFODNN7EXAMPLE:ENoSbxYByFA0UGLZUqJN5EUnLDg=
Date: Wed, 20 Feb 2008 22:12:21 +0000

The signature was generated from the following information.

PUT\r\n
\r\n
\r\n
Wed, 20 Feb 2008 22:12:21 +0000\r\n

x-amz-copy-source:/pacific/flotsam\r\n
/atlantic/jetsam

Amazon S3 returns the following response that specifies the ETag of the object and when it was last
modified.

HTTP/1.1 200 OK
x-amz-id-2: Vyaxt7qEbzv34BnSu5hctyyNSlHTYZFMWK4FtzO+iX8JQNyaLdTshL0KxatbaOZt
x-amz-request-id: 6B13C3C5B34AF333
Date: Wed, 20 Feb 2008 22:13:01 +0000

Content-Type: application/xml
Transfer-Encoding: chunked
Connection: close
Server: AmazonS3
<?xml version="1.0" encoding="UTF-8"?>

<CopyObjectResult>
<LastModified>2008-02-20T22:13:01</LastModified>
<ETag>"7e9c608af58950deeb370c98608ed097"</ETag>
</CopyObjectResult>

Downloading an object
This section explains how to download objects from an S3 bucket.

Data transfer fees apply when you download objects. For information about Amazon S3 features and
pricing, see Amazon S3.
Important
If an object key name consists of a single period (.), or two periods (..), you can’t download the
object using the Amazon S3 console. To download an object with a key name of “.” or “..”, you
must use the AWS CLI, AWS SDKs, or REST API. For more information about naming objects, see
Object key naming guidelines (p. 58).

You can download a single object per request using the Amazon S3 console. To download multiple
objects, use the AWS CLI, AWS SDKs, or REST API.

When you download an object programmatically, its metadata is returned in the response headers. There
are times when you want to override certain response header values returned in a GET response. For
example, you might override the Content-Disposition response header value in your GET request.
The REST GET Object API (see GET Object) allows you to specify query string parameters in your GET
request to override these values. The AWS SDKs for Java, .NET, and PHP also provide necessary objects
you can use to specify values for these response headers in your GET request.

When retrieving objects that are stored encrypted using server-side encryption, you must provide
appropriate request headers. For more information, see Protecting data using encryption (p. 157).

Using the S3 console


This section explains how to use the Amazon S3 console to download objects from an S3 bucket.

Data transfer fees apply when you download objects. For information about Amazon S3 features and
pricing, see Amazon S3.
Important
If an object key name consists of a single period (.), or two periods (..), you can’t download the
object using the Amazon S3 console. To download an object with a key name of “.” or “..”, you
must use the AWS CLI, AWS SDKs, or REST API. For more information about naming objects, see
Object key naming guidelines (p. 58).

To download an object from an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to download an object from.

3. You can download an object from an S3 bucket in any of the following ways:

• Choose the name of the object that you want to download.

On the Overview page, choose Download.


• Choose the name of the object that you want to download and then choose Download or
Download as from the Action menu.
• Choose the name of the object that you want to download. Choose Latest version and then
choose the download icon.

Using the AWS SDKs


Java

When you download an object through the AWS SDK for Java, Amazon S3 returns all of the object's
metadata and an input stream from which to read the object's contents.

To retrieve an object, you do the following:

• Execute the AmazonS3Client.getObject() method, providing the bucket name and object key
in the request.
• Execute one of the S3Object instance methods to process the input stream.


Note
Your network connection remains open until you read all of the data or close the input
stream. We recommend that you read the content of the stream as quickly as possible.

The following are some variations you might use:

• Instead of reading the entire object, you can read only a portion of the object data by specifying
the byte range that you want in the request.
• You can optionally override the response header values by using a ResponseHeaderOverrides
object and setting the corresponding request property. For example, you can use this feature to
indicate that the object should be downloaded into a file with a different file name than the object
key name.

The following example retrieves an object from an Amazon S3 bucket three ways: first, as a
complete object, then as a range of bytes from the object, then as a complete object with overridden
response header values. For more information about getting objects from Amazon S3, see GET
Object. For instructions on creating and testing a working sample, see Testing the Amazon S3 Java
Code Examples (p. 950).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ResponseHeaderOverrides;
import com.amazonaws.services.s3.model.S3Object;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class GetObject2 {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String key = "*** Object key ***";

S3Object fullObject = null, objectPortion = null, headerOverrideObject = null;


try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();

// Get an object and print its contents.


System.out.println("Downloading an object");
fullObject = s3Client.getObject(new GetObjectRequest(bucketName, key));
System.out.println("Content-Type: " +
fullObject.getObjectMetadata().getContentType());
System.out.println("Content: ");
displayTextInputStream(fullObject.getObjectContent());

// Get a range of bytes from an object and print the bytes.


GetObjectRequest rangeObjectRequest = new GetObjectRequest(bucketName, key)
.withRange(0, 9);
objectPortion = s3Client.getObject(rangeObjectRequest);
System.out.println("Printing bytes retrieved.");


displayTextInputStream(objectPortion.getObjectContent());

// Get an entire object, overriding the specified response headers, and print the object's content.
ResponseHeaderOverrides headerOverrides = new ResponseHeaderOverrides()
.withCacheControl("No-cache")
.withContentDisposition("attachment; filename=example.txt");
GetObjectRequest getObjectRequestHeaderOverride = new
GetObjectRequest(bucketName, key)
.withResponseHeaders(headerOverrides);
headerOverrideObject = s3Client.getObject(getObjectRequestHeaderOverride);
displayTextInputStream(headerOverrideObject.getObjectContent());
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
} finally {
// To ensure that the network connection doesn't remain open, close any open input streams.
if (fullObject != null) {
fullObject.close();
}
if (objectPortion != null) {
objectPortion.close();
}
if (headerOverrideObject != null) {
headerOverrideObject.close();
}
}
}

private static void displayTextInputStream(InputStream input) throws IOException {


// Read the text input stream one line at a time and display each line.
BufferedReader reader = new BufferedReader(new InputStreamReader(input));
String line = null;
while ((line = reader.readLine()) != null) {
System.out.println(line);
}
System.out.println();
}
}

.NET

When you download an object, you get all of the object's metadata and a stream from which to read
the contents. You should read the content of the stream as quickly as possible because the data is
streamed directly from Amazon S3 and your network connection will remain open until you read all
the data or close the input stream. You do the following to get an object:

• Execute the getObject method by providing bucket name and object key in the request.
• Execute one of the GetObjectResponse methods to process the stream.

The following are some variations you might use:

• Instead of reading the entire object, you can read only a portion of the object data by specifying
the byte range in the request, as shown in the following C# example:


Example

GetObjectRequest request = new GetObjectRequest


{
BucketName = bucketName,
Key = keyName,
ByteRange = new ByteRange(0, 10)
};

• When retrieving an object, you can optionally override the response header values (see
Downloading an object (p. 109)) by using the ResponseHeaderOverrides object and setting
the corresponding request property. The following C# code example shows how to do this. For
example, you can use this feature to indicate that the object should be downloaded into a file with
a different file name than the object key name.

Example

GetObjectRequest request = new GetObjectRequest


{
BucketName = bucketName,
Key = keyName
};

ResponseHeaderOverrides responseHeaders = new ResponseHeaderOverrides();


responseHeaders.CacheControl = "No-cache";
responseHeaders.ContentDisposition = "attachment; filename=testing.txt";

request.ResponseHeaderOverrides = responseHeaders;

Example

The following C# code example retrieves an object from an Amazon S3 bucket. From the response,
the example reads the object data using the GetObjectResponse.ResponseStream property.
The example also shows how you can use the GetObjectResponse.Metadata collection to read
object metadata. If the object you retrieve has the x-amz-meta-title metadata, the code prints
the metadata value.

For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 951).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class GetObjectTest
{
private const string bucketName = "*** bucket name ***";
private const string keyName = "*** object key ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;

public static void Main()


{
client = new AmazonS3Client(bucketRegion);
ReadObjectDataAsync().Wait();
}

static async Task ReadObjectDataAsync()


{
string responseBody = "";
try
{
GetObjectRequest request = new GetObjectRequest
{
BucketName = bucketName,
Key = keyName
};
using (GetObjectResponse response = await
client.GetObjectAsync(request))
using (Stream responseStream = response.ResponseStream)
using (StreamReader reader = new StreamReader(responseStream))
{
string title = response.Metadata["x-amz-meta-title"]; // Assume you
have "title" as medata added to the object.
string contentType = response.Headers["Content-Type"];
Console.WriteLine("Object metadata, Title: {0}", title);
Console.WriteLine("Content type: {0}", contentType);

responseBody = reader.ReadToEnd(); // Now you process the response


body.
}
}
catch (AmazonS3Exception e)
{
// If bucket or object does not exist
Console.WriteLine("Error encountered ***. Message:'{0}' when reading
object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
reading object", e.Message);
}
}
}
}

PHP

This topic explains how to use a class from the AWS SDK for PHP to retrieve an Amazon S3 object.
You can retrieve an entire object or a byte range from the object. We assume that you are already
following the instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 952) and
have the AWS SDK for PHP properly installed.

When retrieving an object, you can optionally override the response header values by
adding the response keys, ResponseContentType, ResponseContentLanguage,
ResponseContentDisposition, ResponseCacheControl, and ResponseExpires, to the
getObject() method, as shown in the following PHP code example:

Example

$result = $s3->getObject([
'Bucket' => $bucket,
'Key' => $keyname,


'ResponseContentType' => 'text/plain',


'ResponseContentLanguage' => 'en-US',
'ResponseContentDisposition' => 'attachment; filename=testing.txt',
'ResponseCacheControl' => 'No-cache',
'ResponseExpires' => gmdate(DATE_RFC2822, time() + 3600),
]);

For more information about retrieving objects, see Downloading an object (p. 109).

The following PHP example retrieves an object and displays the content of the object in the browser.
The example shows how to use the getObject() method. For information about running the PHP
examples in this guide, see Running PHP Examples (p. 952).

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

try {
// Get the object.
$result = $s3->getObject([
'Bucket' => $bucket,
'Key' => $keyname
]);

// Display the object in the browser.


header("Content-Type: {$result['ContentType']}");
echo $result['Body'];
} catch (S3Exception $e) {
echo $e->getMessage() . PHP_EOL;
}

Using the REST API


You can use the AWS SDKs to retrieve objects from a bucket. However, if your application requires it,
you can send REST requests directly. You can send a GET request to retrieve an object.

For more information about the request and response format, see Get Object.

Deleting Amazon S3 objects


You can delete one or more objects directly from Amazon S3 using the Amazon S3 console, AWS SDKs,
AWS Command Line Interface (AWS CLI), or REST API. Because all objects in your S3 bucket incur
storage costs, you should delete objects that you no longer need. For example, if you're collecting log
files, it's a good idea to delete them when they're no longer needed. You can set up a lifecycle rule to
automatically delete objects such as log files. For more information, see the section called “Setting
lifecycle configuration” (p. 507).

For information about Amazon S3 features and pricing, see Amazon S3 pricing.


You have the following API options when deleting an object:

• Delete a single object — Amazon S3 provides the DELETE API that you can use to delete one object in
a single HTTP request.
• Delete multiple objects — Amazon S3 provides the Multi-Object Delete API that you can use to delete
up to 1,000 objects in a single HTTP request.

When deleting objects from a bucket that is not version-enabled, you provide only the object key name.
However, when deleting objects from a version-enabled bucket, you can optionally provide the version ID
of the object to delete a specific version of the object.

Programmatically deleting objects from a version-enabled bucket
If your bucket is version-enabled, multiple versions of the same object can exist in the bucket. When
working with version-enabled buckets, the delete API enables the following options:

• Specify a non-versioned delete request — Specify only the object's key, and not the version ID. In
this case, Amazon S3 creates a delete marker and returns its version ID in the response. This makes
your object disappear from the bucket. For information about object versioning and the delete marker
concept, see Using versioning in S3 buckets (p. 453).
• Specify a versioned delete request — Specify both the key and also a version ID. In this case the
following two outcomes are possible:
• If the version ID maps to a specific object version, Amazon S3 deletes the specific version of the
object.
• If the version ID maps to the delete marker of that object, Amazon S3 deletes the delete marker.
This makes the object reappear in your bucket.

Deleting objects from an MFA-enabled bucket


When deleting objects from a multi-factor authentication (MFA)-enabled bucket, note the following:

• If you provide an invalid MFA token, the request always fails.


• If you have an MFA-enabled bucket, and you make a versioned delete request (you provide an object
key and version ID), the request fails if you don't provide a valid MFA token. In addition, when using
the Multi-Object Delete API on an MFA-enabled bucket, if any of the deletes are a versioned delete
request (that is, you specify object key and version ID), the entire request fails if you don't provide an
MFA token.

However, in the following cases the request succeeds:

• If you have an MFA-enabled bucket, and you make a non-versioned delete request (you are not
deleting a versioned object), and you don't provide an MFA token, the delete succeeds.
• If you have a Multi-Object Delete request specifying only non-versioned objects to delete from an
MFA-enabled bucket, and you don't provide an MFA token, the deletions succeed.

For information about MFA delete, see Configuring MFA delete (p. 460).

Topics
• Deleting a single object (p. 117)
• Deleting multiple objects (p. 123)


Deleting a single object


You can use the Amazon S3 console or the DELETE API to delete a single existing object from an S3
bucket.

Because all objects in your S3 bucket incur storage costs, you should delete objects that you no longer
need. For example, if you are collecting log files, it's a good idea to delete them when they're no longer
needed. You can set up a lifecycle rule to automatically delete objects such as log files. For more
information, see the section called “Setting lifecycle configuration” (p. 507).

For information about Amazon S3 features and pricing, see Amazon S3 pricing.

Using the S3 console


Follow these steps to use the Amazon S3 console to delete a single object from a bucket.

To delete an object

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to delete an object from.
3. Choose the name of the object that you want to delete.
4. To delete the current version of the object, choose Latest version, and choose the trash can icon.
5. To delete a previous version of the object, choose Latest version, and choose the trash can icon
beside the version that you want to delete.

Using the AWS SDKs


The following examples show how you can use the AWS SDKs to delete an object from a bucket. For
more information, see DELETE Object in the Amazon Simple Storage Service API Reference.

If you have S3 Versioning enabled on the bucket, you have the following options:

• Delete a specific object version by specifying a version ID.


• Delete an object without specifying a version ID, in which case Amazon S3 adds a delete marker to the
object.

For more information about S3 Versioning, see Using versioning in S3 buckets (p. 453).

Java

Example 1: Deleting objects (non-versioned bucket)


The following example assumes that the bucket is not versioning-enabled, so the objects don't have
version IDs. The example uploads three sample objects and then deletes them with a single multi-object
delete request, specifying only the object keys and not version IDs.

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;
import com.amazonaws.services.s3.model.DeleteObjectsResult;


import java.io.IOException;
import java.util.ArrayList;

public class DeleteMultipleObjectsNonVersionedBucket {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.build();

// Upload three sample objects.


ArrayList<KeyVersion> keys = new ArrayList<KeyVersion>();
for (int i = 0; i < 3; i++) {
String keyName = "delete object example " + i;
s3Client.putObject(bucketName, keyName, "Object number " + i + " to be
deleted.");
keys.add(new KeyVersion(keyName));
}
System.out.println(keys.size() + " objects successfully created.");

// Delete the sample objects.


DeleteObjectsRequest multiObjectDeleteRequest = new
DeleteObjectsRequest(bucketName)
.withKeys(keys)
.withQuiet(false);

// Verify that the objects were deleted successfully.


DeleteObjectsResult delObjRes =
s3Client.deleteObjects(multiObjectDeleteRequest);
int successfulDeletes = delObjRes.getDeletedObjects().size();
System.out.println(successfulDeletes + " objects successfully deleted.");
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}
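
A simpler variation, shown in the following sketch (the bucket and key names are placeholders), deletes
just one object from a non-versioned bucket with a single DeleteObject call instead of the multi-object
request used above.

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectRequest;

public class DeleteSingleObjectNonVersionedBucket {

    public static void main(String[] args) {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Object key ***";

        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(clientRegion)
                .build();

        // Because the bucket is not versioning-enabled, only the key is specified.
        s3Client.deleteObject(new DeleteObjectRequest(bucketName, keyName));
        System.out.println("Deleted object " + keyName);
    }
}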

Example 2: Deleting an object (versioned bucket)


The following example deletes an object from a versioned bucket. The example deletes a specific
object version by specifying the object key name and version ID.

The example does the following:

1. Adds a sample object to the bucket. Amazon S3 returns the version ID of the newly added object.
The example uses this version ID in the delete request.
2. Deletes the object version by specifying both the object key name and a version ID. If there are no
other versions of that object, Amazon S3 deletes the object entirely. Otherwise, Amazon S3 only
deletes the specified version.
Note
You can get the version IDs of an object by sending a ListVersions request.


import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
import com.amazonaws.services.s3.model.DeleteVersionRequest;
import com.amazonaws.services.s3.model.PutObjectResult;

import java.io.IOException;

public class DeleteObjectVersionEnabledBucket {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String keyName = "*** Key name ****";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Check to ensure that the bucket is versioning-enabled.


String bucketVersionStatus =
s3Client.getBucketVersioningConfiguration(bucketName).getStatus();
if (!bucketVersionStatus.equals(BucketVersioningConfiguration.ENABLED)) {
System.out.printf("Bucket %s is not versioning-enabled.", bucketName);
} else {
// Add an object.
PutObjectResult putResult = s3Client.putObject(bucketName, keyName,
"Sample content for deletion example.");
System.out.printf("Object %s added to bucket %s\n", keyName,
bucketName);

// Delete the version of the object that we just created.


System.out.println("Deleting versioned object " + keyName);
s3Client.deleteVersion(new DeleteVersionRequest(bucketName, keyName,
putResult.getVersionId()));
System.out.printf("Object %s, version %s deleted\n", keyName,
putResult.getVersionId());
}
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

.NET

The following examples show how to delete an object from both versioned and non-versioned
buckets. For more information about S3 Versioning, see Using versioning in S3 buckets (p. 453).


Example Deleting an object from a non-versioned bucket

The following C# example deletes an object from a non-versioned bucket. The example assumes that
the objects don't have version IDs, so you don't specify version IDs. You specify only the object key.

For information about how to create and test a working sample, see Running the Amazon S3 .NET
Code Examples (p. 951).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class DeleteObjectNonVersionedBucketTest
{
private const string bucketName = "*** bucket name ***";
private const string keyName = "*** object key ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;

public static void Main()


{
client = new AmazonS3Client(bucketRegion);
DeleteObjectNonVersionedBucketAsync().Wait();
}
private static async Task DeleteObjectNonVersionedBucketAsync()
{
try
{
var deleteObjectRequest = new DeleteObjectRequest
{
BucketName = bucketName,
Key = keyName
};

Console.WriteLine("Deleting an object");
await client.DeleteObjectAsync(deleteObjectRequest);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
deleting an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
deleting an object", e.Message);
}
}
}
}

Example Deleting an object from a versioned bucket

The following C# example deletes an object from a versioned bucket. It deletes a specific version of
the object by specifying the object key name and version ID.

The code performs the following tasks:


1. Enables S3 Versioning on a bucket that you specify (if S3 Versioning is already enabled, this has
no effect).
2. Adds a sample object to the bucket. In response, Amazon S3 returns the version ID of the newly
added object. The example uses this version ID in the delete request.
3. Deletes the sample object by specifying both the object key name and a version ID.
Note
You can also get the version ID of an object by sending a ListVersions request.

var listResponse = client.ListVersions(new ListVersionsRequest { BucketName = bucketName, Prefix = keyName });

For information about how to create and test a working sample, see Running the Amazon S3 .NET
Code Examples (p. 951).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class DeleteObjectVersion
{
private const string bucketName = "*** versioning-enabled bucket name ***";
private const string keyName = "*** Object Key Name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;

public static void Main()


{
client = new AmazonS3Client(bucketRegion);
CreateAndDeleteObjectVersionAsync().Wait();
}

private static async Task CreateAndDeleteObjectVersionAsync()


{
try
{
// Add a sample object.
string versionID = await PutAnObject(keyName);

// Delete the object by specifying an object key and a version ID.


DeleteObjectRequest request = new DeleteObjectRequest
{
BucketName = bucketName,
Key = keyName,
VersionId = versionID
};
Console.WriteLine("Deleting an object");
await client.DeleteObjectAsync(request);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
deleting an object", e.Message);
}
catch (Exception e)
{


Console.WriteLine("Unknown encountered on server. Message:'{0}' when


deleting an object", e.Message);
}
}

static async Task<string> PutAnObject(string objectKey)


{
PutObjectRequest request = new PutObjectRequest
{
BucketName = bucketName,
Key = objectKey,
ContentBody = "This is the content body!"
};
PutObjectResponse response = await client.PutObjectAsync(request);
return response.VersionId;
}
}
}

PHP

This example shows how to use classes from version 3 of the AWS SDK for PHP to delete an object
from a non-versioned bucket. For information about deleting an object from a versioned bucket, see
Using the REST API (p. 123).

This example assumes that you are already following the instructions for Using the AWS SDK for
PHP and Running PHP Examples (p. 952) and have the AWS SDK for PHP properly installed. For
information about running the PHP examples in this guide, see Running PHP Examples (p. 952).

The following PHP example deletes an object from a bucket. Because this example shows how to
delete objects from non-versioned buckets, it provides only the bucket name and object key (not a
version ID) in the delete request.

<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// 1. Delete the object from the bucket.


try
{
echo 'Attempting to delete ' . $keyname . '...' . PHP_EOL;

$result = $s3->deleteObject([
'Bucket' => $bucket,
'Key' => $keyname
]);

if ($result['DeleteMarker'])
{
echo $keyname . ' was deleted or does not exist.' . PHP_EOL;
} else {
exit('Error: ' . $keyname . ' was not deleted.' . PHP_EOL);


}
}
catch (S3Exception $e) {
exit('Error: ' . $e->getAwsErrorMessage() . PHP_EOL);
}

// 2. Check to see if the object was deleted.


try
{
echo 'Checking to see if ' . $keyname . ' still exists...' . PHP_EOL;

$result = $s3->getObject([
'Bucket' => $bucket,
'Key' => $keyname
]);

echo 'Error: ' . $keyname . ' still exists.';


}
catch (S3Exception $e) {
exit($e->getAwsErrorMessage());
}

Using the AWS CLI

To delete one object per request, use the DELETE API. For more information, see DELETE Object. For
more information about using the CLI to delete an object, see delete-object.
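
For example, a command similar to the following deletes a single object by key. This is a sketch; the bucket name and object key are placeholders, and you can add --version-id to remove a specific object version from a versioning-enabled bucket.

aws s3api delete-object --bucket DOC-EXAMPLE-BUCKET --key my-object-key.txt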

Using the REST API


You can use the AWS SDKs to delete an object. However, if your application requires it, you can send
REST requests directly. For more information, see DELETE Object in the Amazon Simple Storage Service
API Reference.

Deleting multiple objects


Because all objects in your S3 bucket incur storage costs, you should delete objects that you no longer
need. For example, if you are collecting log files, it's a good idea to delete them when they're no longer
needed. You can set up a lifecycle rule to automatically delete objects such as log files. For more
information, see the section called “Setting lifecycle configuration” (p. 507).

For information about Amazon S3 features and pricing, see Amazon S3 pricing.

You can use the Amazon S3 console or the Multi-Object Delete API to delete multiple objects
simultaneously from an S3 bucket.

Using the S3 console


Follow these steps to use the Amazon S3 console to delete multiple objects from a bucket.

To delete objects

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Navigate to the Amazon S3 bucket or folder that contains the objects that you want to delete.
3. Select the check box to the left of the names of the objects that you want to delete.
4. Choose Actions and choose Delete from the list of options that appears.

Alternatively, choose Delete from the options in the upper right.


5. Enter delete if asked to confirm that you want to delete these objects.
6. Choose Delete objects in the bottom right and Amazon S3 deletes the specified objects.

Warning

• Deleting the specified objects cannot be undone.
• This action deletes all specified objects. When deleting folders, wait for the delete action to
  finish before adding new objects to the folder. Otherwise, new objects might be deleted as
  well.

Using the AWS SDKs


Amazon S3 provides the Multi-Object Delete API, which you can use to delete multiple objects in
a single request. The API supports two modes for the response: verbose and quiet. By default, the
operation uses verbose mode. In verbose mode, the response includes the result of the deletion of
each key that is specified in your request. In quiet mode, the response includes only keys for which the
delete operation encountered an error. If all keys are successfully deleted when you're using quiet mode,
Amazon S3 returns an empty response. For more information, see Delete - Multi-Object Delete.
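
As a minimal sketch with the AWS SDK for Java, quiet mode is requested by setting the quiet flag on the request. The s3Client, bucketName, and keys variables here are assumed to be set up as in the examples that follow.

// Quiet mode: the response lists only the keys that could not be deleted.
DeleteObjectsRequest quietDeleteRequest = new DeleteObjectsRequest(bucketName)
        .withKeys(keys)
        .withQuiet(true);
DeleteObjectsResult quietResult = s3Client.deleteObjects(quietDeleteRequest);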

To learn more about object deletion, see Deleting Amazon S3 objects (p. 115).

Java

The AWS SDK for Java provides the AmazonS3Client.deleteObjects() method for deleting
multiple objects. For each object that you want to delete, you specify the key name. If the bucket is
versioning-enabled, you have the following options:

• Specify only the object's key name. Amazon S3 adds a delete marker to the object.
• Specify both the object's key name and a version ID to be deleted. Amazon S3 deletes the
specified version of the object.
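
For illustration, a KeyVersion entry can be built either way. The key name and version ID below are placeholders, and keys is the same kind of list used in the examples that follow.

// Key only: Amazon S3 adds a delete marker to the object.
keys.add(new KeyVersion("photos/myphoto.jpg"));
// Key plus version ID: Amazon S3 deletes that specific object version.
keys.add(new KeyVersion("photos/myphoto.jpg", "exampleVersionId"));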

Example

The following example uses the Multi-Object Delete API to delete objects from a bucket that
is not version-enabled. The example uploads sample objects to the bucket and then uses the
AmazonS3Client.deleteObjects() method to delete the objects in a single request. In the
DeleteObjectsRequest, the example specifies only the object key names because the objects do
not have version IDs.

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;
import com.amazonaws.services.s3.model.DeleteObjectsResult;

import java.io.IOException;
import java.util.ArrayList;


public class DeleteMultipleObjectsNonVersionedBucket {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.build();

// Upload three sample objects.


ArrayList<KeyVersion> keys = new ArrayList<KeyVersion>();
for (int i = 0; i < 3; i++) {
String keyName = "delete object example " + i;
s3Client.putObject(bucketName, keyName, "Object number " + i + " to be
deleted.");
keys.add(new KeyVersion(keyName));
}
System.out.println(keys.size() + " objects successfully created.");

// Delete the sample objects.


DeleteObjectsRequest multiObjectDeleteRequest = new
DeleteObjectsRequest(bucketName)
.withKeys(keys)
.withQuiet(false);

// Verify that the objects were deleted successfully.


DeleteObjectsResult delObjRes =
s3Client.deleteObjects(multiObjectDeleteRequest);
int successfulDeletes = delObjRes.getDeletedObjects().size();
System.out.println(successfulDeletes + " objects successfully deleted.");
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Example

The following example uses the Multi-Object Delete API to delete objects from a version-enabled
bucket. It does the following:

1. Creates sample objects and then deletes them, specifying the key name and version ID for each
object to delete. The operation deletes only the specified object versions.
2. Creates sample objects and then deletes them by specifying only the key names. Because the
example doesn't specify version IDs, the operation adds a delete marker to each object, without
deleting any specific object versions. After the delete markers are added, these objects will not
appear in the AWS Management Console.
3. Removes the delete markers by specifying the object keys and version IDs of the delete markers.
The operation deletes the delete markers, which results in the objects reappearing in the AWS
Management Console.


import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;
import com.amazonaws.services.s3.model.DeleteObjectsResult;
import com.amazonaws.services.s3.model.DeleteObjectsResult.DeletedObject;
import com.amazonaws.services.s3.model.PutObjectResult;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class DeleteMultipleObjectsVersionEnabledBucket {


private static AmazonS3 S3_CLIENT;
private static String VERSIONED_BUCKET_NAME;

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
VERSIONED_BUCKET_NAME = "*** Bucket name ***";

try {
S3_CLIENT = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Check to make sure that the bucket is versioning-enabled.


String bucketVersionStatus =
S3_CLIENT.getBucketVersioningConfiguration(VERSIONED_BUCKET_NAME).getStatus();
if (!bucketVersionStatus.equals(BucketVersioningConfiguration.ENABLED)) {
System.out.printf("Bucket %s is not versioning-enabled.",
VERSIONED_BUCKET_NAME);
} else {
// Upload and delete sample objects, using specific object versions.
uploadAndDeleteObjectsWithVersions();

// Upload and delete sample objects without specifying version IDs.


// Amazon S3 creates a delete marker for each object rather than
// deleting specific versions.
DeleteObjectsResult unversionedDeleteResult =
uploadAndDeleteObjectsWithoutVersions();

// Remove the delete markers placed on objects in the non-versioned
// create/delete method.
multiObjectVersionedDeleteRemoveDeleteMarkers(unversionedDeleteResult);
}
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}

private static void uploadAndDeleteObjectsWithVersions() {


System.out.println("Uploading and deleting objects with versions specified.");


// Upload three sample objects.


ArrayList<KeyVersion> keys = new ArrayList<KeyVersion>();
for (int i = 0; i < 3; i++) {
String keyName = "delete object without version ID example " + i;
PutObjectResult putResult = S3_CLIENT.putObject(VERSIONED_BUCKET_NAME,
keyName,
"Object number " + i + " to be deleted.");
// Gather the new object keys with version IDs.
keys.add(new KeyVersion(keyName, putResult.getVersionId()));
}

// Delete the specified versions of the sample objects.


DeleteObjectsRequest multiObjectDeleteRequest = new
DeleteObjectsRequest(VERSIONED_BUCKET_NAME)
.withKeys(keys)
.withQuiet(false);

// Verify that the object versions were successfully deleted.


DeleteObjectsResult delObjRes =
S3_CLIENT.deleteObjects(multiObjectDeleteRequest);
int successfulDeletes = delObjRes.getDeletedObjects().size();
System.out.println(successfulDeletes + " objects successfully deleted");
}

private static DeleteObjectsResult uploadAndDeleteObjectsWithoutVersions() {


System.out.println("Uploading and deleting objects with no versions
specified.");

// Upload three sample objects.


ArrayList<KeyVersion> keys = new ArrayList<KeyVersion>();
for (int i = 0; i < 3; i++) {
String keyName = "delete object with version ID example " + i;
S3_CLIENT.putObject(VERSIONED_BUCKET_NAME, keyName, "Object number " + i +
" to be deleted.");
// Gather the new object keys without version IDs.
keys.add(new KeyVersion(keyName));
}

// Delete the sample objects without specifying versions.


DeleteObjectsRequest multiObjectDeleteRequest = new
DeleteObjectsRequest(VERSIONED_BUCKET_NAME).withKeys(keys)
.withQuiet(false);

// Verify that delete markers were successfully added to the objects.


DeleteObjectsResult delObjRes =
S3_CLIENT.deleteObjects(multiObjectDeleteRequest);
int successfulDeletes = delObjRes.getDeletedObjects().size();
System.out.println(successfulDeletes + " objects successfully marked for
deletion without versions.");
return delObjRes;
}

private static void


multiObjectVersionedDeleteRemoveDeleteMarkers(DeleteObjectsResult response) {
List<KeyVersion> keyList = new ArrayList<KeyVersion>();
for (DeletedObject deletedObject : response.getDeletedObjects()) {
// Note that the specified version ID is the version ID for the delete marker.
keyList.add(new KeyVersion(deletedObject.getKey(),
deletedObject.getDeleteMarkerVersionId()));
}
// Create a request to delete the delete markers.
DeleteObjectsRequest deleteRequest = new
DeleteObjectsRequest(VERSIONED_BUCKET_NAME).withKeys(keyList);

// Delete the delete markers, leaving the objects intact in the bucket.


DeleteObjectsResult delObjRes = S3_CLIENT.deleteObjects(deleteRequest);


int successfulDeletes = delObjRes.getDeletedObjects().size();
System.out.println(successfulDeletes + " delete markers successfully deleted");
}
}

.NET

The AWS SDK for .NET provides a convenient method for deleting multiple objects:
DeleteObjects. For each object that you want to delete, you specify the key name and the version
of the object. If the bucket is not versioning-enabled, you specify null for the version ID. If an
exception occurs, review the DeleteObjectsException response to determine which objects were
not deleted and why.

Example Deleting multiple objects from a non-versioning bucket

The following C# example uses the multi-object delete API to delete objects from a bucket that
is not version-enabled. The example uploads the sample objects to the bucket, and then uses the
DeleteObjects method to delete the objects in a single request. In the DeleteObjectsRequest,
the example specifies only the object key names because the version IDs are null.

For information about creating and testing a working sample, see Running the Amazon S3 .NET
Code Examples (p. 951).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class DeleteMultipleObjectsNonVersionedBucketTest
{
private const string bucketName = "*** versioning-enabled bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
MultiObjectDeleteAsync().Wait();
}

static async Task MultiObjectDeleteAsync()


{
// Create sample objects (for subsequent deletion).
var keysAndVersions = await PutObjectsAsync(3);

// a. multi-object delete by specifying the key names and version IDs.


DeleteObjectsRequest multiObjectDeleteRequest = new DeleteObjectsRequest
{
BucketName = bucketName,
Objects = keysAndVersions // This includes the object keys and null version IDs.
};
// You can add a specific object key to the delete request by using the AddKey method.
// multiObjectDeleteRequest.AddKey("TickerReference.csv", null);
try
{


DeleteObjectsResponse response = await


s3Client.DeleteObjectsAsync(multiObjectDeleteRequest);
Console.WriteLine("Successfully deleted all the {0} items",
response.DeletedObjects.Count);
}
catch (DeleteObjectsException e)
{
PrintDeletionErrorStatus(e);
}
}

private static void PrintDeletionErrorStatus(DeleteObjectsException e)


{
// var errorResponse = e.ErrorResponse;
DeleteObjectsResponse errorResponse = e.Response;
Console.WriteLine("x {0}", errorResponse.DeletedObjects.Count);

Console.WriteLine("No. of objects successfully deleted = {0}",


errorResponse.DeletedObjects.Count);
Console.WriteLine("No. of objects failed to delete = {0}",
errorResponse.DeleteErrors.Count);

Console.WriteLine("Printing error data...");


foreach (DeleteError deleteError in errorResponse.DeleteErrors)
{
Console.WriteLine("Object Key: {0}\t{1}\t{2}", deleteError.Key,
deleteError.Code, deleteError.Message);
}
}

static async Task<List<KeyVersion>> PutObjectsAsync(int number)


{
List<KeyVersion> keys = new List<KeyVersion>();
for (int i = 0; i < number; i++)
{
string key = "ExampleObject-" + new System.Random().Next();
PutObjectRequest request = new PutObjectRequest
{
BucketName = bucketName,
Key = key,
ContentBody = "This is the content body!",
};

PutObjectResponse response = await s3Client.PutObjectAsync(request);


KeyVersion keyVersion = new KeyVersion
{
Key = key,
// For non-versioned bucket operations, we only need object key.
// VersionId = response.VersionId
};
keys.Add(keyVersion);
}
return keys;
}
}
}

Example Multi-object deletion for a version-enabled bucket

The following C# example uses the multi-object delete API to delete objects from a version-enabled
bucket. The example performs the following actions:

1. Creates sample objects and deletes them by specifying the key name and version ID for each
object. The operation deletes specific versions of the objects.


2. Creates sample objects and deletes them by specifying only the key names. Because the example
doesn't specify version IDs, the operation only adds delete markers. It doesn't delete any specific
versions of the objects. After deletion, these objects don't appear in the Amazon S3 console.
3. Deletes the delete markers by specifying the object keys and version IDs of the delete markers.
When the operation deletes the delete markers, the objects reappear in the console.

For information about creating and testing a working sample, see Running the Amazon S3 .NET
Code Examples (p. 951).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class DeleteMultipleObjVersionedBucketTest
{
private const string bucketName = "*** versioning-enabled bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
DeleteMultipleObjectsFromVersionedBucketAsync().Wait();
}

private static async Task DeleteMultipleObjectsFromVersionedBucketAsync()


{

// Delete objects (specifying object version in the request).


await DeleteObjectVersionsAsync();

// Delete objects (without specifying object version in the request).


var deletedObjects = await DeleteObjectsAsync();

// Additional exercise - remove the delete markers S3 returned in the
// preceding response.
// This results in the objects reappearing in the bucket (you can
// verify the appearance/disappearance of objects in the console).
await RemoveDeleteMarkersAsync(deletedObjects);
}

private static async Task<List<DeletedObject>> DeleteObjectsAsync()


{
// Upload the sample objects.
var keysAndVersions2 = await PutObjectsAsync(3);

// Delete objects using only keys. Amazon S3 creates a delete marker and
// returns its version ID in the response.
List<DeletedObject> deletedObjects = await
NonVersionedDeleteAsync(keysAndVersions2);
return deletedObjects;
}

private static async Task DeleteObjectVersionsAsync()


{
// Upload the sample objects.
var keysAndVersions1 = await PutObjectsAsync(3);


// Delete the specific object versions.


await VersionedDeleteAsync(keysAndVersions1);
}

private static void PrintDeletionReport(DeleteObjectsException e)


{
var errorResponse = e.Response;
Console.WriteLine("No. of objects successfully deleted = {0}",
errorResponse.DeletedObjects.Count);
Console.WriteLine("No. of objects failed to delete = {0}",
errorResponse.DeleteErrors.Count);
Console.WriteLine("Printing error data...");
foreach (var deleteError in errorResponse.DeleteErrors)
{
Console.WriteLine("Object Key: {0}\t{1}\t{2}", deleteError.Key,
deleteError.Code, deleteError.Message);
}
}

static async Task VersionedDeleteAsync(List<KeyVersion> keys)


{
// a. Perform a multi-object delete by specifying the key names and version IDs.
var multiObjectDeleteRequest = new DeleteObjectsRequest
{
BucketName = bucketName,
Objects = keys // This includes the object keys and specific version IDs.
};
try
{
Console.WriteLine("Executing VersionedDelete...");
DeleteObjectsResponse response = await
s3Client.DeleteObjectsAsync(multiObjectDeleteRequest);
Console.WriteLine("Successfully deleted all the {0} items",
response.DeletedObjects.Count);
}
catch (DeleteObjectsException e)
{
PrintDeletionReport(e);
}
}

static async Task<List<DeletedObject>> NonVersionedDeleteAsync(List<KeyVersion>


keys)
{
// Create a request that includes only the object key names.
DeleteObjectsRequest multiObjectDeleteRequest = new DeleteObjectsRequest();
multiObjectDeleteRequest.BucketName = bucketName;

foreach (var key in keys)


{
multiObjectDeleteRequest.AddKey(key.Key);
}
// Execute DeleteObjects - Amazon S3 adds a delete marker for each object
// deletion. The objects disappear from your bucket.
// You can verify this using the Amazon S3 console.
DeleteObjectsResponse response;
try
{
Console.WriteLine("Executing NonVersionedDelete...");
response = await s3Client.DeleteObjectsAsync(multiObjectDeleteRequest);
Console.WriteLine("Successfully deleted all the {0} items",
response.DeletedObjects.Count);
}


catch (DeleteObjectsException e)
{
PrintDeletionReport(e);
throw; // Some deletes failed. Investigate before continuing.
}
// This response contains the DeletedObjects list, which we use to delete the delete markers.
return response.DeletedObjects;
}

private static async Task RemoveDeleteMarkersAsync(List<DeletedObject>


deletedObjects)
{
var keyVersionList = new List<KeyVersion>();

foreach (var deletedObject in deletedObjects)


{
KeyVersion keyVersion = new KeyVersion
{
Key = deletedObject.Key,
VersionId = deletedObject.DeleteMarkerVersionId
};
keyVersionList.Add(keyVersion);
}
// Create another request to delete the delete markers.
var multiObjectDeleteRequest = new DeleteObjectsRequest
{
BucketName = bucketName,
Objects = keyVersionList
};

// Now, delete the delete marker to bring your objects back to the bucket.
try
{
Console.WriteLine("Removing the delete markers .....");
var deleteObjectResponse = await
s3Client.DeleteObjectsAsync(multiObjectDeleteRequest);
Console.WriteLine("Successfully deleted all the {0} delete markers",
deleteObjectResponse.DeletedObjects.Count);
}
catch (DeleteObjectsException e)
{
PrintDeletionReport(e);
}
}

static async Task<List<KeyVersion>> PutObjectsAsync(int number)


{
var keys = new List<KeyVersion>();

for (var i = 0; i < number; i++)


{
string key = "ObjectToDelete-" + new System.Random().Next();
PutObjectRequest request = new PutObjectRequest
{
BucketName = bucketName,
Key = key,
ContentBody = "This is the content body!",

};

var response = await s3Client.PutObjectAsync(request);


KeyVersion keyVersion = new KeyVersion
{
Key = key,
VersionId = response.VersionId


};

keys.Add(keyVersion);
}
return keys;
}
}
}

PHP

These examples show how to use classes from version 3 of the AWS SDK for PHP to delete multiple
objects from versioned and non-versioned Amazon S3 buckets. For more information about
versioning, see Using versioning in S3 buckets (p. 453).

The examples assume that you are already following the instructions for Using the AWS SDK for PHP
and Running PHP Examples (p. 952) and have the AWS SDK for PHP properly installed.

Example Deleting multiple objects from a non-versioned bucket


The following PHP example uses the deleteObjects() method to delete multiple objects from a
bucket that is not version-enabled.

For information about running the PHP examples in this guide, see Running PHP
Examples (p. 952).

<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// 1. Create a few objects.


for ($i = 1; $i <= 3; $i++) {
$s3->putObject([
'Bucket' => $bucket,
'Key' => "key{$i}",
'Body' => "content {$i}",
]);
}

// 2. List the objects and get the keys.


$keys = $s3->listObjects([
'Bucket' => $bucket
]);

// 3. Delete the objects.


foreach ($keys['Contents'] as $key)
{
$s3->deleteObjects([
'Bucket' => $bucket,
'Delete' => [
'Objects' => [
[
'Key' => $key['Key']
]
]


]
]);
}

Example Deleting multiple objects from a version-enabled bucket

The following PHP example uses the deleteObjects() method to delete multiple objects from a
version-enabled bucket.

For information about running the PHP examples in this guide, see Running PHP
Examples (p. 952).

<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// 1. Enable object versioning for the bucket.


$s3->putBucketVersioning([
'Bucket' => $bucket,
'VersioningConfiguration' => [
'Status' => 'Enabled'
]
]);

// 2. Create a few versions of an object.


for ($i = 1; $i <= 3; $i++) {
$s3->putObject([
'Bucket' => $bucket,
'Key' => $keyname,
'Body' => "content {$i}",
]);
}

// 3. List the objects versions and get the keys and version IDs.
$versions = $s3->listObjectVersions(['Bucket' => $bucket]);

// 4. Delete the object versions.


$deletedResults = 'The following objects were deleted successfully:' . PHP_EOL;
$deleted = false;
$errorResults = 'The following objects could not be deleted:' . PHP_EOL;
$errors = false;

foreach ($versions['Versions'] as $version)


{
$result = $s3->deleteObjects([
'Bucket' => $bucket,
'Delete' => [
'Objects' => [
[
'Key' => $version['Key'],
'VersionId' => $version['VersionId']
]
]
]


]);

if (isset($result['Deleted']))
{
$deleted = true;

$deletedResults .= "Key: {$result['Deleted'][0]['Key']}, " .


"VersionId: {$result['Deleted'][0]['VersionId']}" . PHP_EOL;
}

if (isset($result['Errors']))
{
$errors = true;

$errorResults .= "Key: {$result['Errors'][0]['Key']}, " .


"VersionId: {$result['Errors'][0]['VersionId']}, " .
"Message: {$result['Errors'][0]['Message']}" . PHP_EOL;
}
}

if ($deleted)
{
echo $deletedResults;
}

if ($errors)
{
echo $errorResults;
}

// 5. Suspend object versioning for the bucket.


$s3->putBucketVersioning([
'Bucket' => $bucket,
'VersioningConfiguration' => [
'Status' => 'Suspended'
]
]);

Using the REST API


You can use the AWS SDKs to delete multiple objects using the Multi-Object Delete API. However, if your
application requires it, you can send REST requests directly.

For more information, see Delete Multiple Objects in the Amazon Simple Storage Service API Reference.

Organizing, listing, and working with your objects


In Amazon S3, you can use prefixes to organize your storage. A prefix is a logical grouping of the objects
in a bucket. The prefix value is similar to a directory name that enables you to store similar data under
the same directory in a bucket. When you programmatically upload objects, you can use prefixes to
organize your data.

In the Amazon S3 console, prefixes are called folders. You can view all your objects and folders in the
S3 console by navigating to a bucket. You can also view information about each object, including object
properties.

For more information about listing and organizing your data in Amazon S3, see the following topics.

Topics
• Organizing objects using prefixes (p. 136)


• Listing object keys programmatically (p. 137)


• Organizing objects in the Amazon S3 console using folders (p. 141)
• Viewing an object overview in the Amazon S3 console (p. 143)
• Viewing object properties in the Amazon S3 console (p. 144)

Organizing objects using prefixes


You can use prefixes to organize the data that you store in Amazon S3 buckets. A prefix value is
similar to a directory name that enables you to group similar objects together in a bucket. When you
programmatically upload objects, you can use prefixes to organize your data.

The prefix limits the results to only those keys that begin with the specified prefix. The delimiter causes a
list operation to roll up all the keys that share a common prefix into a single summary list result.

The purpose of the prefix and delimiter parameters is to help you organize and then browse your keys
hierarchically. To do this, first pick a delimiter for your bucket, such as slash (/), that doesn't occur in any
of your anticipated key names. Next, construct your key names by concatenating all containing levels of
the hierarchy, separating each level with the delimiter.

For example, if you were storing information about cities, you might naturally organize them by
continent, then by country, then by province or state. Because these names don't usually contain
punctuation, you might use slash (/) as the delimiter. The following examples use a slash (/) delimiter.

• Europe/France/Nouvelle-Aquitaine/Bordeaux
• North America/Canada/Quebec/Montreal
• North America/USA/Washington/Bellevue
• North America/USA/Washington/Seattle

If you stored data for every city in the world in this manner, it would become awkward to manage
a flat key namespace. By using Prefix and Delimiter with the list operation, you can use the
hierarchy you've created to list your data. For example, to list all the states in USA, set Delimiter='/'
and Prefix='North America/USA/'. To list all the provinces in Canada for which you have data, set
Delimiter='/' and Prefix='North America/Canada/'.
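
The following sketch shows one way to make such a request with the AWS SDK for Java. The bucket name is a placeholder, the prefix and delimiter match the example above, and an existing AmazonS3 client named s3Client is assumed.

ListObjectsV2Request statesRequest = new ListObjectsV2Request()
        .withBucketName("DOC-EXAMPLE-BUCKET")
        .withPrefix("North America/USA/")
        .withDelimiter("/");
ListObjectsV2Result statesResult = s3Client.listObjectsV2(statesRequest);
// Each state for which you have data is returned as a common prefix,
// for example "North America/USA/Washington/".
for (String statePrefix : statesResult.getCommonPrefixes()) {
    System.out.println(statePrefix);
}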

Listing objects using prefixes and delimiters


A list request with a delimiter lets you browse your hierarchy at just one level, skipping over and
summarizing the (possibly millions of) keys nested at deeper levels. For example, assume that you have a
bucket (ExampleBucket) with the following keys.

sample.jpg

photos/2006/January/sample.jpg

photos/2006/February/sample2.jpg

photos/2006/February/sample3.jpg

photos/2006/February/sample4.jpg

The sample bucket has only the sample.jpg object at the root level. To list only the root level objects
in the bucket, you send a GET request on the bucket with "/" delimiter character. In response, Amazon S3
returns the sample.jpg object key because it does not contain the "/" delimiter character. All other keys
contain the delimiter character. Amazon S3 groups these keys and returns a single CommonPrefixes
element with prefix value photos/ that is a substring from the beginning of these keys to the first
occurrence of the specified delimiter.


Example

<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Name>ExampleBucket</Name>
<Prefix></Prefix>
<Marker></Marker>
<MaxKeys>1000</MaxKeys>
<Delimiter>/</Delimiter>
<IsTruncated>false</IsTruncated>
<Contents>
<Key>sample.jpg</Key>
<LastModified>2011-07-24T19:39:30.000Z</LastModified>
<ETag>&quot;d1a7fb5eab1c16cb4f7cf341cf188c3d&quot;</ETag>
<Size>6</Size>
<Owner>
<ID>75cc57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID>
<DisplayName>displayname</DisplayName>
</Owner>
<StorageClass>STANDARD</StorageClass>
</Contents>
<CommonPrefixes>
<Prefix>photos/</Prefix>
</CommonPrefixes>
</ListBucketResult>

For more information about listing object keys programmatically, see Listing object keys
programmatically (p. 137).

Listing object keys programmatically


In Amazon S3, keys can be listed by prefix. You can choose a common prefix for the names of related
keys and mark these keys with a special character that delimits hierarchy. You can then use the list
operation to select and browse keys hierarchically. This is similar to how files are stored in directories
within a file system.

Amazon S3 exposes a list operation that lets you enumerate the keys contained in a bucket. Keys are
selected for listing by bucket and prefix. For example, consider a bucket named "dictionary" that
contains a key for every English word. You might make a call to list all the keys in that bucket that start
with the letter "q". List results are always returned in UTF-8 binary order.

Both the SOAP and REST list operations return an XML document that contains the names of matching
keys and information about the object identified by each key.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.

Groups of keys that share a prefix terminated by a special delimiter can be rolled up by that common
prefix for the purposes of listing. This enables applications to organize and browse their keys
hierarchically, much like how you would organize your files into directories in a file system.

For example, to extend the dictionary bucket to contain more than just English words, you might form
keys by prefixing each word with its language and a delimiter, such as "French/logical". Using this
naming scheme and the hierarchical listing feature, you could retrieve a list of only French words. You
could also browse the top-level list of available languages without having to iterate through all the
lexicographically intervening keys. For more information about this aspect of listing, see Organizing
objects using prefixes (p. 136).

REST API


If your application requires it, you can send REST requests directly. You can send a GET request to return
some or all of the objects in a bucket or you can use selection criteria to return a subset of the objects
in a bucket. For more information, see GET Bucket (List Objects) Version 2 in the Amazon Simple Storage
Service API Reference.

List implementation efficiency

List performance is not substantially affected by the total number of keys in your bucket. It's also not
affected by the presence or absence of the prefix, marker, maxkeys, or delimiter arguments.

Iterating through multipage results

As buckets can contain a virtually unlimited number of keys, the complete results of a list query can
be extremely large. To manage large result sets, the Amazon S3 API supports pagination to split them
into multiple responses. Each list keys response returns a page of up to 1,000 keys and an indicator
that shows whether the response is truncated. You send a series of list keys requests until you have
received all the keys. AWS SDK wrapper libraries provide the same pagination.

Java

The following example lists the object keys in a bucket. The example uses pagination to retrieve
a set of object keys. If there are more keys to return after the first page, Amazon S3 includes a
continuation token in the response. The example uses the continuation token in the subsequent
request to fetch the next set of object keys.

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3ObjectSummary;

import java.io.IOException;

public class ListKeys {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

System.out.println("Listing objects");

// maxKeys is set to 2 to demonstrate the use of


// ListObjectsV2Result.getNextContinuationToken()
ListObjectsV2Request req = new
ListObjectsV2Request().withBucketName(bucketName).withMaxKeys(2);
ListObjectsV2Result result;

do {
result = s3Client.listObjectsV2(req);


for (S3ObjectSummary objectSummary : result.getObjectSummaries()) {


System.out.printf(" - %s (size: %d)\n", objectSummary.getKey(),
objectSummary.getSize());
}
// If there are more than maxKeys keys in the bucket, get a continuation token
// and list the next objects.
String token = result.getNextContinuationToken();
System.out.println("Next Continuation Token: " + token);
req.setContinuationToken(token);
} while (result.isTruncated());
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

.NET

The following C# example lists the object keys for a bucket. In the example, pagination is used to
retrieve a set of object keys. If there are more keys to return, Amazon S3 includes a continuation
token in the response. The code uses the continuation token in the subsequent request to fetch the
next set of object keys.

For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 951).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class ListObjectsTest
{
private const string bucketName = "*** bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;

private static IAmazonS3 client;

public static void Main()


{
client = new AmazonS3Client(bucketRegion);
ListingObjectsAsync().Wait();
}

static async Task ListingObjectsAsync()


{
try
{
ListObjectsV2Request request = new ListObjectsV2Request
{
BucketName = bucketName,


MaxKeys = 10
};
ListObjectsV2Response response;
do
{
response = await client.ListObjectsV2Async(request);

// Process the response.


foreach (S3Object entry in response.S3Objects)
{
Console.WriteLine("key = {0} size = {1}",
entry.Key, entry.Size);
}
Console.WriteLine("Next Continuation Token: {0}",
response.NextContinuationToken);
request.ContinuationToken = response.NextContinuationToken;
} while (response.IsTruncated);
}
catch (AmazonS3Exception amazonS3Exception)
{
Console.WriteLine("S3 error occurred. Exception: " +
amazonS3Exception.ToString());
Console.ReadKey();
}
catch (Exception e)
{
Console.WriteLine("Exception: " + e.ToString());
Console.ReadKey();
}
}
}
}

PHP

This example guides you through using classes from version 3 of the AWS SDK for PHP to list the
object keys contained in an Amazon S3 bucket.

This example assumes that you are already following the instructions for Using the AWS SDK for
PHP and Running PHP Examples (p. 952) and have the AWS SDK for PHP properly installed.

To list the object keys contained in a bucket using the AWS SDK for PHP, you first must list the
objects contained in the bucket and then extract the key from each of the listed objects. When
listing objects in a bucket you have the option of using the low-level Aws\S3\S3Client::listObjects()
method or the high-level Aws\ResultPaginator class.

The low-level listObjects() method maps to the underlying Amazon S3 REST API. Each
listObjects() request returns a page of up to 1,000 objects. If you have more than 1,000 objects
in the bucket, your response will be truncated and you must send another listObjects() request
to retrieve the next set of 1,000 objects.

You can use the high-level ListObjects paginator to make it easier to list the objects contained
in a bucket. To use the ListObjects paginator to create an object list, run the Amazon S3 client
getPaginator() method (inherited from the Aws/AwsClientInterface class) with the ListObjects
command as the first argument and an array to contain the returned objects from the specified
bucket as the second argument.

When used as a ListObjects paginator, the getPaginator() method returns all the objects
contained in the specified bucket. There is no 1,000 object limit, so you don't need to worry whether
the response is truncated.


The following tasks guide you through using the PHP Amazon S3 client methods to list the objects
contained in a bucket from which you can list the object keys.

Example Listing object keys

The following PHP example demonstrates how to list the keys from a specified bucket. It shows
how to use the high-level getPaginator() method to list the objects in a bucket and then extract
the key from each of the objects in the list. It also shows how to use the low-level listObjects()
method to list the objects in a bucket and then extract the key from each of the objects in the
list returned. For information about running the PHP examples in this guide, see Running PHP
Examples (p. 952).

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';

// Instantiate the client.


$s3 = new S3Client([
'version' => 'latest',
'region' => 'us-east-1'
]);

// Use the high-level iterators (returns ALL of your objects).


try {
$results = $s3->getPaginator('ListObjects', [
'Bucket' => $bucket
]);

foreach ($results as $result) {


foreach ($result['Contents'] as $object) {
echo $object['Key'] . PHP_EOL;
}
}
} catch (S3Exception $e) {
echo $e->getMessage() . PHP_EOL;
}

// Use the plain API (returns ONLY up to 1000 of your objects).


try {
$objects = $s3->listObjects([
'Bucket' => $bucket
]);
foreach ($objects['Contents'] as $object) {
echo $object['Key'] . PHP_EOL;
}
} catch (S3Exception $e) {
echo $e->getMessage() . PHP_EOL;
}

Organizing objects in the Amazon S3 console using folders
In Amazon S3, buckets and objects are the primary resources, and objects are stored in buckets. Amazon
S3 has a flat structure instead of a hierarchy like you would see in a file system. However, for the sake
of organizational simplicity, the Amazon S3 console supports the folder concept as a means of grouping
objects. It does this by using a shared name prefix for objects (that is, objects have names that begin with
a common string). Object names are also referred to as key names.

For example, you can create a folder on the console named photos and store an object named
myphoto.jpg in it. The object is then stored with the key name photos/myphoto.jpg, where
photos/ is the prefix.

Here are two more examples:

• If you have three objects in your bucket—logs/date1.txt, logs/date2.txt, and logs/
  date3.txt—the console will show a folder named logs. If you open the folder in the console, you
will see three objects: date1.txt, date2.txt, and date3.txt.
• If you have an object named photos/2017/example.jpg, the console will show you a folder named
photos containing the folder 2017. The folder 2017 will contain the object example.jpg.

You can have folders within folders, but not buckets within buckets. You can upload and copy objects
directly into a folder. Folders can be created, deleted, and made public, but they cannot be renamed.
Objects can be copied from one folder to another.
Important
The Amazon S3 console treats all objects that have a forward slash ("/") character as the last
(trailing) character in the key name as a folder, for example examplekeyname/. You can't
upload an object that has a key name with a trailing "/" character using the Amazon S3 console.
However, you can upload objects that are named with a trailing "/" with the Amazon S3 API by
using the AWS CLI, AWS SDKs, or REST API.
An object that is named with a trailing "/" appears as a folder in the Amazon S3 console. The
Amazon S3 console does not display the content and metadata for such an object. When you
use the console to copy an object named with a trailing "/", a new folder is created in the
destination location, but the object's data and metadata are not copied.
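
For example, a zero-byte object with a trailing "/" (which the console displays as a folder) could be created with a command similar to the following; the bucket and key names are placeholders.

aws s3api put-object --bucket DOC-EXAMPLE-BUCKET --key examplekeyname/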

Topics
• Creating a folder (p. 142)
• Making folders public (p. 143)
• Deleting folders (p. 143)

Creating a folder
This section describes how to use the Amazon S3 console to create a folder.
Important
If your bucket policy prevents uploading objects to this bucket without encryption, tags,
metadata, or access control list (ACL) grantees, you will not be able to create a folder using
this configuration. Instead, upload an empty folder and specify these settings in the upload
configuration.

To create a folder

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to create a folder in.
3. Choose Create folder.
4. Enter a name for the folder (for example, favorite-pics). Then choose Create folder.


Making folders public


We recommend blocking all public access to your Amazon S3 folders and buckets unless you specifically
require a public folder or bucket. When you make a folder public, anyone on the internet can view all the
objects that are grouped in that folder.

In the Amazon S3 console, you can make a folder public. You can also make a folder public by creating a
bucket policy that limits access by prefix. For more information, see Identity and access management in
Amazon S3 (p. 209).
Warning
After you make a folder public in the Amazon S3 console, you can't make it private again.
Instead, you must set permissions on each individual object in the public folder so that the
objects have no public access. For more information, see Configuring ACLs (p. 389).

Deleting folders
This section explains how to use the Amazon S3 console to delete folders from an S3 bucket.

For information about Amazon S3 features and pricing, see Amazon S3.

To delete folders from an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to delete folders from.
3. In the Objects list, select the check box next to the folders and objects that you want to delete.
4. Choose Delete.
5. On the Delete objects page, verify that the names of the folders you selected for deletion are listed.
6. In the Delete objects box, enter delete, and choose Delete objects.

Warning
This action deletes all specified objects. When deleting folders, wait for the delete action to
finish before adding new objects to the folder. Otherwise, new objects might be deleted as well.

Viewing an object overview in the Amazon S3 console
You can use the Amazon S3 console to view an overview of an object. The object overview in the console
provides all the essential information for an object in one place.

To open the overview pane for an object

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that contains the object.
3. In the Objects list, choose the name of the object for which you want an overview.

The object overview opens.


4. To download the object, choose Object actions, and then choose Download. To copy the path of the
object to the clipboard, under Object URL, choose the URL.
5. If versioning is enabled on the bucket, choose Versions to list the versions of the object.


• To download an object version, select the check box next to the version ID, choose Actions, and
then choose Download.
• To delete an object version, select the check box next to the version ID, and choose Delete.

Important
You can undelete an object only if it was deleted as the latest (current) version. You can't
undelete a previous version of an object that was deleted.

Viewing object properties in the Amazon S3 console


You can use the Amazon S3 console to view the properties of an object, including storage class,
encryption settings, tags, and metadata.

To view the properties of an object

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that contains the object.
3. In the Objects list, choose the name of the object you want to view properties for.

The Object overview for your object opens. You can scroll down to view the object properties.
4. On the Object overview page, you can configure the following properties for the object.
Note
If you change the Storage Class, Encryption, or Metadata properties, a new object is
created to replace the old one. If S3 Versioning is enabled, a new version of the object
is created, and the existing object becomes an older version. The role that changes the
property also becomes the owner of the new object (or object version).

a. Storage class – Each object in Amazon S3 has a storage class associated with it. The storage
class that you choose to use depends on how frequently you access the object. The default
storage class for S3 objects is STANDARD. You choose which storage class to use when you
upload an object. For more information about storage classes, see Using Amazon S3 storage
classes (p. 496).

To change the storage class after you upload an object, choose Storage class. Choose the
storage class that you want, and then choose Save.
b. Server-side encryption settings – You can use server-side encryption to encrypt your S3
objects. For more information, see Specifying server-side encryption with AWS KMS (SSE-
KMS) (p. 161) or Specifying Amazon S3 encryption (p. 175).
c. Metadata – Each object in Amazon S3 has a set of name-value pairs that represents its
metadata. For information about adding metadata to an S3 object, see Editing object metadata
in the Amazon S3 console (p. 63).
d. Tags – You categorize storage by adding tags to an S3 object. For more information, see
Categorizing your storage using tags (p. 609).
e. Object Lock legal hold and retention – You can prevent an object from being deleted. For
more information, see Using S3 Object Lock (p. 488).

Accessing an object using a presigned URL


A presigned URL gives you access to the object identified in the URL, provided that the creator of the
presigned URL has permissions to access that object. That is, if you receive a presigned URL to upload an
object, you can upload the object only if the creator of the presigned URL has the necessary permissions
to upload that object.

All objects and buckets are private by default. Presigned URLs are useful if you want your user or
customer to be able to upload a specific object to your bucket, but you don't require them to have AWS
security credentials or permissions.

When you create a presigned URL, you must provide your security credentials and then specify a bucket
name, an object key, an HTTP method (PUT for uploading objects), and an expiration date and time. The
presigned URLs are valid only for the specified duration. That is, you must start the action before the
expiration date and time. If the action consists of multiple steps, such as a multipart upload, all steps
must be started before the expiration, otherwise you will receive an error when Amazon S3 attempts to
start a step with an expired URL.

You can use the presigned URL multiple times, up to the expiration date and time.
Note
Anyone with valid security credentials can create a presigned URL. However, for you to
successfully upload an object, the presigned URL must be created by someone who has
permission to perform the operation that the presigned URL is based upon.

You can generate a presigned URL programmatically using the AWS SDK for Java, .NET, Ruby, PHP,
Node.js, and Python.

If you are using Microsoft Visual Studio, you can also use AWS Explorer to generate a presigned
object URL without writing any code. Anyone who receives a valid presigned URL can then
programmatically upload an object. For more information, see Using Amazon S3 from AWS Explorer. For
instructions on how to install AWS Explorer, see Developing with Amazon S3 using the AWS SDKs, and
explorers (p. 943).

Limiting presigned URL capabilities


A presigned URL provides time-limited access to your S3 buckets. When you
create a presigned URL, you associate it with a specific action. You can share the URL, and anyone with
access to it can perform the action embedded in the URL as if they were the original signing user. The
URL expires and no longer works when it reaches its expiration time. The capabilities of the URL are
limited by the permissions of the user who created it.

In essence, a presigned URL is a bearer token that grants access to anyone who possesses it. As
such, we recommend that you protect presigned URLs appropriately.

If you want to restrict the use of presigned URLs and all S3 access to particular network paths, you
can write AWS Identity and Access Management (IAM) policies that require a particular network path.
These policies can be set on the IAM principal that makes the call, the Amazon S3 bucket, or both. A
network-path restriction on the principal requires the user of those credentials to make requests from
the specified network. A restriction on the bucket limits access to that bucket only to requests originating
from the specified network. Realize that these restrictions also apply outside of the presigned URL
scenario.

The IAM global condition that you use depends on the type of endpoint. If you are using the public
endpoint for Amazon S3, use aws:SourceIp. If you are using a VPC endpoint to Amazon S3, use
aws:SourceVpc or aws:SourceVpce.

The following IAM policy statement requires the principal to access AWS from only the specified network
range. With this policy statement in place, all access is required to originate from that range. This
includes the case of someone using a presigned URL for S3.

{
"Sid": "NetworkRestrictionForIAMPrincipal",
"Effect": "Deny",
"Action": "*",
"Resource": "*",
"Condition": {
"NotIpAddressIfExists": { "aws:SourceIp": "IP-address" },
"BoolIfExists": { "aws:ViaAWSService": "false" }
}
}
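
For the VPC endpoint case mentioned earlier, a comparable restriction can be placed on the bucket itself by
using the aws:SourceVpce condition key. The following statement is only a minimal sketch of a bucket policy
statement; the bucket name (awsexamplebucket1) and the VPC endpoint ID (vpce-1a2b3c4d) are placeholder
values that you would replace with your own before using it.

{
"Sid": "NetworkRestrictionForBucket",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::awsexamplebucket1",
"arn:aws:s3:::awsexamplebucket1/*"
],
"Condition": {
"StringNotEquals": { "aws:SourceVpce": "vpce-1a2b3c4d" }
}
}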

Topics
• Generating a presigned object URL (p. 146)
• Uploading objects using presigned URLs (p. 149)

Generating a presigned object URL


All objects by default are private. Only the object owner has permission to access these objects. However,
the object owner can optionally share objects with others by creating a presigned URL, using their own
security credentials, to grant time-limited permission to download the objects.

When you create a presigned URL for your object, you must provide your security credentials and specify a
bucket name, an object key, the HTTP method (GET to download the object), and an expiration date
and time. The presigned URLs are valid only for the specified duration.

Anyone who receives the presigned URL can then access the object. For example, if you have a video
in your bucket and both the bucket and the object are private, you can share the video with others by
generating a presigned URL.
Note

• Anyone with valid security credentials can create a presigned URL. However, in order to
successfully access an object, the presigned URL must be created by someone who has
permission to perform the operation that the presigned URL is based upon.
• The credentials that you can use to create a presigned URL include:
• IAM instance profile: Valid up to 6 hours
• AWS Security Token Service: Valid up to 36 hours when signed with permanent credentials,
such as the credentials of the AWS account root user or an IAM user
• IAM user: Valid up to 7 days when using AWS Signature Version 4

To create a presigned URL that's valid for up to 7 days, first designate IAM user credentials
(the access key and secret access key) to the SDK that you're using. Then, generate a
presigned URL using AWS Signature Version 4.
• If you created a presigned URL using a temporary token, then the URL expires when the token
expires, even if the URL was created with a later expiration time.
• Since presigned URLs grant access to your Amazon S3 buckets to whoever has the URL, we
recommend that you protect them appropriately. For more details about protecting presigned
URLs, see Uploading objects using presigned URLs (p. 149).

You can generate a presigned URL programmatically using the REST API, the AWS Command Line
Interface, and the AWS SDK for Java, .NET, Ruby, PHP, Node.js, Python, and Go.

Using AWS Explorer for Visual Studio


If you are using Visual Studio, you can generate a presigned URL for an object without writing any
code by using AWS Explorer for Visual Studio. Anyone with this URL can download the object. For more
information, go to Using Amazon S3 from AWS Explorer.

For instructions about how to install the AWS Explorer, see Developing with Amazon S3 using the AWS
SDKs, and explorers (p. 943).

Using the AWS SDKs


The following examples generate a presigned URL that you can give to others so that they can retrieve
an object. For more information, see Accessing an object using a presigned URL (p. 144).

Java

Example
The following example generates a presigned URL that you can give to others so that they can
retrieve an object from an S3 bucket. For more information, see Accessing an object using a
presigned URL (p. 144).

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.HttpMethod;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

import java.io.IOException;
import java.net.URL;

public class GeneratePresignedURL {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String objectKey = "*** Object key ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();

// Set the presigned URL to expire after one hour.


java.util.Date expiration = new java.util.Date();
long expTimeMillis = expiration.getTime();
expTimeMillis += 1000 * 60 * 60;
expiration.setTime(expTimeMillis);

// Generate the presigned URL.


System.out.println("Generating pre-signed URL.");
GeneratePresignedUrlRequest generatePresignedUrlRequest =
new GeneratePresignedUrlRequest(bucketName, objectKey)
.withMethod(HttpMethod.GET)
.withExpiration(expiration);
URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);

System.out.println("Pre-Signed URL: " + url.toString());


} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

.NET

Example

The following example generates a presigned URL that you can give to others so that they can
retrieve an object. For more information, see Accessing an object using a presigned URL (p. 144).

For instructions about how to create and test a working sample, see Running the Amazon S3 .NET
Code Examples (p. 951).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;

namespace Amazon.DocSamples.S3
{
class GenPresignedURLTest
{
private const string bucketName = "*** bucket name ***";
private const string objectKey = "*** object key ***";
// Specify how long the presigned URL lasts, in hours
private const double timeoutDuration = 12;
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
string urlString = GeneratePreSignedURL(timeoutDuration);
}
static string GeneratePreSignedURL(double duration)
{
string urlString = "";
try
{
GetPreSignedUrlRequest request1 = new GetPreSignedUrlRequest
{
BucketName = bucketName,
Key = objectKey,
Expires = DateTime.UtcNow.AddHours(duration)
};
urlString = s3Client.GetPreSignedURL(request1);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
}
return urlString;
}
}
}

Go

You can use the AWS SDK for Go to upload an object. You can send a PUT request to upload data in a single
operation. For more information, see Generate a Pre-Signed URL for an Amazon S3 PUT Operation
with a Specific Payload in the AWS SDK for Go Developer Guide.

Uploading objects using presigned URLs


You can use the AWS SDKs to generate a presigned URL that you, or anyone to whom you give the URL, can use
to upload an object to Amazon S3. When you use the URL to upload an object, Amazon S3 creates the
object in the specified bucket. If an object with the same key that is specified in the presigned URL
already exists in the bucket, Amazon S3 replaces the existing object with the uploaded object.

The following examples show how to upload objects using presigned URLs.

Java

To successfully complete an upload, you must do the following:

• Specify the HTTP PUT verb when creating the GeneratePresignedUrlRequest and
HttpURLConnection objects.
• Interact with the HttpURLConnection object in some way after finishing the upload. The
following example accomplishes this by using the HttpURLConnection object to check the HTTP
response code.

Example

This example generates a presigned URL and uses it to upload sample data as an object. For
instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.HttpMethod;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import com.amazonaws.services.s3.model.S3Object;

import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;

public class GeneratePresignedUrlAndUploadObject {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String objectKey = "*** Object key ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Set the pre-signed URL to expire after one hour.


java.util.Date expiration = new java.util.Date();
long expTimeMillis = expiration.getTime();
expTimeMillis += 1000 * 60 * 60;
expiration.setTime(expTimeMillis);

// Generate the pre-signed URL.


System.out.println("Generating pre-signed URL.");
GeneratePresignedUrlRequest generatePresignedUrlRequest = new
GeneratePresignedUrlRequest(bucketName, objectKey)
.withMethod(HttpMethod.PUT)
.withExpiration(expiration);
URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);

// Create the connection and use it to upload the new object using the pre-signed URL.
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("PUT");
OutputStreamWriter out = new
OutputStreamWriter(connection.getOutputStream());
out.write("This text uploaded as an object via presigned URL.");
out.close();

// Check the HTTP response code. To complete the upload and make the object available,
// you must interact with the connection object in some way.
connection.getResponseCode();
System.out.println("HTTP response code: " + connection.getResponseCode());

// Check to make sure that the object was uploaded successfully.


S3Object object = s3Client.getObject(bucketName, objectKey);
System.out.println("Object " + object.getKey() + " created in bucket " +
object.getBucketName());
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

.NET

The following C# example shows how to use the AWS SDK for .NET to upload an object to an S3
bucket using a presigned URL.

This example generates a presigned URL for a specific object and uses it to upload a file. For
information about the example's compatibility with a specific version of the AWS SDK for .NET and
instructions about how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 951).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.IO;
using System.Net;

namespace Amazon.DocSamples.S3
{
class UploadObjectUsingPresignedURLTest
{
private const string bucketName = "*** provide bucket name ***";
private const string objectKey = "*** provide the name for the uploaded object ***";
private const string filePath = "*** provide the full path name of the file to upload ***";
// Specify how long the presigned URL lasts, in hours
private const double timeoutDuration = 12;
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
var url = GeneratePreSignedURL(timeoutDuration);
UploadObject(url);
}

private static void UploadObject(string url)


{
HttpWebRequest httpRequest = WebRequest.Create(url) as HttpWebRequest;
httpRequest.Method = "PUT";
using (Stream dataStream = httpRequest.GetRequestStream())
{
var buffer = new byte[8000];
using (FileStream fileStream = new FileStream(filePath, FileMode.Open,
FileAccess.Read))
{
int bytesRead = 0;
while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
{
dataStream.Write(buffer, 0, bytesRead);
}
}
}
HttpWebResponse response = httpRequest.GetResponse() as HttpWebResponse;
}

private static string GeneratePreSignedURL(double duration)


{
var request = new GetPreSignedUrlRequest
{
BucketName = bucketName,
Key = objectKey,
Verb = HttpVerb.PUT,
Expires = DateTime.UtcNow.AddHours(duration)
};

string url = s3Client.GetPreSignedURL(request);


return url;
}
}
}

Ruby

The following tasks guide you through using a Ruby script to upload an object using a presigned URL
for SDK for Ruby - Version 3.

Uploading objects - SDK for Ruby - version 3

1. Create an instance of the Aws::S3::Resource class.

2. Provide a bucket name and an object key by calling the #bucket[] and the #object[] methods of
   your Aws::S3::Resource class instance.

   Generate a presigned URL by creating an instance of the URI class, and use it to parse the
   .presigned_url method of your Aws::S3::Resource class instance. You must specify :put as an
   argument to .presigned_url, and you must specify PUT to Net::HTTP::Session#send_request if you
   want to upload an object.

3. Anyone with the presigned URL can upload an object.

   The upload creates an object or replaces any existing object with the same key that is
   specified in the presigned URL.

The following Ruby code example demonstrates the preceding tasks for SDK for Ruby - Version 3.

Example

require 'aws-sdk-s3'
require 'net/http'

# Uploads an object to a bucket in Amazon Simple Storage Service (Amazon S3)


# by using a presigned URL.
#
# Prerequisites:
#
# - An S3 bucket.
# - An object in the bucket to upload content to.
#
# @param s3_client [Aws::S3::Resource] An initialized S3 resource.
# @param bucket_name [String] The name of the bucket.
# @param object_key [String] The name of the object.
# @param object_content [String] The content to upload to the object.
# @param http_client [Net::HTTP] An initialized HTTP client.
# This is especially useful for testing with mock HTTP clients.
# If not specified, a default HTTP client is created.
# @return [Boolean] true if the object was uploaded; otherwise, false.
# @example
# exit 1 unless object_uploaded_to_presigned_url?(
# Aws::S3::Resource.new(region: 'us-east-1'),
# 'doc-example-bucket',
# 'my-file.txt',
# 'This is the content of my-file.txt'
# )
def object_uploaded_to_presigned_url?(
s3_resource,
bucket_name,
object_key,
object_content,
http_client = nil
)
object = s3_resource.bucket(bucket_name).object(object_key)
url = URI.parse(object.presigned_url(:put))

if http_client.nil?
Net::HTTP.start(url.host) do |http|
http.send_request(
'PUT',
url.request_uri,
object_content,
'content-type' => ''
)
end
else
http_client.start(url.host) do |http|
http.send_request(
'PUT',
url.request_uri,
object_content,
'content-type' => ''
)
end
end
content = object.get.body
puts "The presigned URL for the object '#{object_key}' in the bucket " \
"'#{bucket_name}' is:\n\n"
puts url
puts "\nUsing this presigned URL to get the content that " \
"was just uploaded to this object, the object\'s content is:\n\n"
puts content.read
return true
rescue StandardError => e
puts "Error uploading to presigned URL: #{e.message}"
return false
end

PHP

Please see Upload an object using a presigned URL (AWS SDK for PHP).

Retrieving Amazon S3 objects using BitTorrent


BitTorrent is an open, peer-to-peer protocol for distributing files. You can use the BitTorrent protocol to
retrieve any publicly-accessible object in Amazon S3. This section describes why you might want to use
BitTorrent to distribute your data out of Amazon S3 and how to do so.

Amazon S3 supports the BitTorrent protocol so that developers can save costs when distributing content
at high scale. Amazon S3 is useful for simple, reliable storage of any data. The default distribution
mechanism for Amazon S3 data is via client/server download. In client/server distribution, the entire
object is transferred point-to-point from Amazon S3 to every authorized user who requests that object.
While client/server delivery is appropriate for a wide variety of use cases, it is not optimal for everybody.
Specifically, the costs of client/server distribution increase linearly as the number of users downloading
objects increases. This can make it expensive to distribute popular objects.

BitTorrent addresses this problem by recruiting the very clients that are downloading the object as
distributors themselves: Each client downloads some pieces of the object from Amazon S3 and some
from other clients, while simultaneously uploading pieces of the same object to other interested "peers."
The benefit for publishers is that for large, popular files the amount of data actually supplied by Amazon
S3 can be substantially lower than what it would have been serving the same clients via client/server
download. Less data transferred means lower costs for the publisher of the object.
Note

• Amazon S3 does not support the BitTorrent protocol in AWS Regions launched after May 30,
2016.
• You can only get a torrent file for objects that are less than 5 GB in size.

For more information, see the topics below.

Topics
• How you are charged for BitTorrent delivery (p. 154)
• Using BitTorrent to retrieve objects stored in Amazon S3 (p. 154)
• Publishing content using Amazon S3 and BitTorrent (p. 155)

How you are charged for BitTorrent delivery


There is no extra charge for use of BitTorrent with Amazon S3. Data transfer via the BitTorrent
protocol is metered at the same rate as client/server delivery. To be precise, whenever a downloading
BitTorrent client requests a "piece" of an object from the Amazon S3 "seeder," charges accrue just as if an
anonymous request for that piece had been made using the REST or SOAP protocol. These charges will
appear on your Amazon S3 bill and usage reports in the same way. The difference is that if a lot of clients
are requesting the same object simultaneously via BitTorrent, then the amount of data Amazon S3 must
serve to satisfy those clients will be lower than with client/server delivery. This is because the BitTorrent
clients are simultaneously uploading and downloading amongst themselves.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.

The data transfer savings achieved from use of BitTorrent can vary widely depending on how popular
your object is. Less popular objects require heavier use of the "seeder" to serve clients, and thus the
difference between BitTorrent distribution costs and client/server distribution costs might be small for
such objects. In particular, if only one client is ever downloading a particular object at a time, the cost of
BitTorrent delivery will be the same as direct download.

Using BitTorrent to retrieve objects stored in Amazon S3

Any object in Amazon S3 that can be read anonymously can also be downloaded via BitTorrent. Doing so
requires use of a BitTorrent client application. Amazon does not distribute a BitTorrent client application,
but there are many free clients available. The Amazon S3 BitTorrent implementation has been tested to
work with the official BitTorrent client (go to http://www.bittorrent.com/).

The starting point for a BitTorrent download is a .torrent file. This small file describes for BitTorrent
clients both the data to be downloaded and where to get started finding that data. A .torrent file is a
small fraction of the size of the actual object to be downloaded. Once you feed your BitTorrent client
application an Amazon S3 generated .torrent file, it should start downloading immediately from Amazon
S3 and from any "peer" BitTorrent clients.

Retrieving a .torrent file for any publicly available object is easy. Simply add a "?torrent" query string
parameter at the end of the REST GET request for the object. No authentication is required. Once you
have a BitTorrent client installed, downloading an object using BitTorrent might be as easy as
opening this URL in your web browser.

There is no mechanism to fetch the .torrent for an Amazon S3 object using the SOAP API.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.

Example
This example retrieves the Torrent file for the "Nelson" object in the "quotes" bucket.

Sample Request

GET /quotes/Nelson?torrent HTTP/1.0
Date: Wed, 25 Nov 2009 12:00:00 GMT

Sample Response

HTTP/1.1 200 OK
x-amz-request-id: 7CD745EBB7AB5ED9
Date: Wed, 25 Nov 2009 12:00:00 GMT
Content-Disposition: attachment; filename=Nelson.torrent;
Content-Type: application/x-bittorrent
Content-Length: 537
Server: AmazonS3

<body: a Bencoded dictionary as defined by the BitTorrent specification>

Publishing content using Amazon S3 and BitTorrent


Every anonymously readable object stored in Amazon S3 is automatically available for download using
BitTorrent. The process for changing the ACL on an object to allow anonymous READ operations is
described in Identity and access management in Amazon S3 (p. 209).

You can direct your clients to your BitTorrent accessible objects by giving them the .torrent file directly or
by publishing a link to the ?torrent URL of your object, as described by GetObjectTorrent in the Amazon
Simple Storage Service API Reference. One important thing to note is that the .torrent file describing an
Amazon S3 object is generated on demand the first time it is requested (via the REST ?torrent resource).
Generating the .torrent for an object takes time proportional to the size of that object. For large objects,
this time can be significant. Therefore, before publishing a ?torrent link, we suggest making the first
request for it yourself. Amazon S3 might take several minutes to respond to this first request, as it
generates the .torrent file. Unless you update the object in question, subsequent requests for the .torrent
will be fast. Following this procedure before distributing a ?torrent link will ensure a smooth BitTorrent
downloading experience for your customers.

To stop distributing a file using BitTorrent, simply remove anonymous access to it. This can be
accomplished by either deleting the file from Amazon S3, or modifying your access control policy to
prohibit anonymous reads. After doing so, Amazon S3 will no longer act as a "seeder" in the BitTorrent
network for your file, and will no longer serve the .torrent file via the ?torrent REST API. However, after
a .torrent for your file is published, this action might not stop public downloads of your object that
happen exclusively using the BitTorrent peer-to-peer network.

Amazon S3 Security
Cloud security at AWS is the highest priority. As an AWS customer, you benefit from a data center
and network architecture that are built to meet the requirements of the most security-sensitive
organizations.

Security is a shared responsibility between AWS and you. The shared responsibility model describes this
as security of the cloud and security in the cloud:

• Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services
in the AWS Cloud. AWS also provides you with services that you can use securely. The effectiveness
of our security is regularly tested and verified by third-party auditors as part of the AWS compliance
programs. To learn about the compliance programs that apply to Amazon S3, see AWS Services in
Scope by Compliance Program.
• Security in the cloud – Your responsibility is determined by the AWS service that you use. You are also
responsible for other factors including the sensitivity of your data, your organization’s requirements,
and applicable laws and regulations.

This documentation will help you understand how to apply the shared responsibility model when using
Amazon S3. The following topics show you how to configure Amazon S3 to meet your security and
compliance objectives. You'll also learn how to use other AWS services that can help you monitor and
secure your Amazon S3 resources.

Topics
• Data protection in Amazon S3 (p. 156)
• Protecting data using encryption (p. 157)
• Internetwork traffic privacy (p. 202)
• AWS PrivateLink for Amazon S3 (p. 202)
• Identity and access management in Amazon S3 (p. 209)
• Logging and monitoring in Amazon S3 (p. 442)
• Compliance Validation for Amazon S3 (p. 443)
• Resilience in Amazon S3 (p. 444)
• Infrastructure security in Amazon S3 (p. 446)
• Configuration and vulnerability analysis in Amazon S3 (p. 447)
• Security Best Practices for Amazon S3 (p. 448)

Data protection in Amazon S3


Amazon S3 provides a highly durable storage infrastructure designed for mission-critical and primary
data storage. Objects are redundantly stored on multiple devices across multiple facilities in an Amazon
S3 Region. To help better ensure data durability, Amazon S3 PUT and PUT Object copy operations
synchronously store your data across multiple facilities. After the objects are stored, Amazon S3
maintains their durability by quickly detecting and repairing any lost redundancy.

Amazon S3 standard storage offers the following features:

• Backed with the Amazon S3 Service Level Agreement


• Designed to provide 99.999999999% durability and 99.99% availability of objects over a given year
• Designed to sustain the concurrent loss of data in two facilities

Amazon S3 further protects your data using versioning. You can use versioning to preserve, retrieve, and
restore every version of every object that is stored in your Amazon S3 bucket. With versioning, you can
easily recover from both unintended user actions and application failures. By default, requests retrieve
the most recently written version. You can retrieve older versions of an object by specifying a version of
the object in a request.
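
As a minimal sketch of retrieving a specific version with the AWS SDK for Java, you can pass a version ID in
the GET request. The bucket name, object key, and version ID below are placeholder values, and the class
name is hypothetical; this is not one of the guide's official samples.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

public class GetObjectVersionSketch {

    public static void main(String[] args) {
        // Placeholder values; replace them with your own bucket, key, and version ID.
        String bucketName = "*** Bucket name ***";
        String objectKey = "*** Object key ***";
        String versionId = "*** Version ID ***";

        AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

        // Passing a version ID retrieves that version instead of the current version.
        S3Object objectVersion = s3Client.getObject(
                new GetObjectRequest(bucketName, objectKey, versionId));
        System.out.println("Retrieved version: "
                + objectVersion.getObjectMetadata().getVersionId());
    }
}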

For data protection purposes, we recommend that you protect AWS account credentials and set up
individual user accounts with AWS Identity and Access Management (IAM), so that each user is given only
the permissions necessary to fulfill their job duties.

If you require FIPS 140-2 validated cryptographic modules when accessing AWS through a command line
interface or an API, use a FIPS endpoint. For more information about the available FIPS endpoints, see
Federal Information Processing Standard (FIPS) 140-2.

The following security best practices also address data protection in Amazon S3:

• Implement server-side encryption


• Enforce encryption of data in transit
• Consider using Amazon Macie with Amazon S3
• Identify and audit all your Amazon S3 buckets
• Monitor AWS security advisories

Protecting data using encryption


Data protection refers to protecting data while in-transit (as it travels to and from Amazon S3) and at
rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit using Secure
Sockets Layer/Transport Layer Security (SSL/TLS) or client-side encryption. You have the following
options for protecting data at rest in Amazon S3:

• Server-Side Encryption – Request Amazon S3 to encrypt your object before saving it on disks in its
data centers and then decrypt it when you download the objects.
• Client-Side Encryption – Encrypt data client-side and upload the encrypted data to Amazon S3. In this
case, you manage the encryption process, the encryption keys, and related tools.

For more information about server-side encryption and client-side encryption, review the topics listed
below.

Topics

• Protecting data using server-side encryption (p. 157)


• Protecting data using client-side encryption (p. 198)

Protecting data using server-side encryption


Server-side encryption is the encryption of data at its destination by the application or service that
receives it. Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers
and decrypts it for you when you access it. As long as you authenticate your request and you have access
permissions, there is no difference in the way you access encrypted or unencrypted objects. For example,
if you share your objects using a presigned URL, that URL works the same way for both encrypted and
unencrypted objects. Additionally, when you list objects in your bucket, the list API returns a list of all
objects, regardless of whether they are encrypted.
Note
You can't apply different types of server-side encryption to the same object simultaneously.

You have three mutually exclusive options, depending on how you choose to manage the encryption
keys.

Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)

When you use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), each object is encrypted
with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it regularly
rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit
Advanced Encryption Standard (AES-256), to encrypt your data. For more information, see Protecting
data using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) (p. 174).

Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service
(SSE-KMS)

Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-
KMS) is similar to SSE-S3, but with some additional benefits and charges for using this service. There
are separate permissions for the use of a CMK that provides added protection against unauthorized
access of your objects in Amazon S3. SSE-KMS also provides you with an audit trail that shows when your
CMK was used and by whom. Additionally, you can create and manage customer managed CMKs or use
AWS managed CMKs that are unique to you, your service, and your Region. For more information, see
Protecting Data Using Server-Side Encryption with CMKs Stored in AWS Key Management Service (SSE-
KMS) (p. 158).

Server-Side Encryption with Customer-Provided Keys (SSE-C)

With Server-Side Encryption with Customer-Provided Keys (SSE-C), you manage the encryption keys
and Amazon S3 manages the encryption, as it writes to disks, and decryption, when you access your
objects. For more information, see Protecting data using server-side encryption with customer-provided
encryption keys (SSE-C) (p. 185).

Protecting Data Using Server-Side Encryption with CMKs Stored in AWS Key Management Service (SSE-KMS)

Server-side encryption is the encryption of data at its destination by the application or service that
receives it. AWS Key Management Service (AWS KMS) is a service that combines secure, highly available
hardware and software to provide a key management system scaled for the cloud. Amazon S3 uses
AWS KMS customer master keys (CMKs) to encrypt your Amazon S3 objects. AWS KMS encrypts only the
object data. Any object metadata is not encrypted.

If you use CMKs, you use AWS KMS via the AWS Management Console or AWS KMS APIs to centrally
create CMKs, define the policies that control how CMKs can be used, and audit their usage to prove that
they are being used correctly. You can use these CMKs to protect your data in Amazon S3 buckets. When
you use SSE-KMS encryption with an S3 bucket, the AWS KMS CMK must be in the same Region as the
bucket.

There are additional charges for using AWS KMS CMKs. For more information, see AWS KMS concepts -
Customer master keys (CMKs) in the AWS Key Management Service Developer Guide and AWS KMS pricing.
Important
You need the kms:Decrypt permission when you upload or download an Amazon S3
object encrypted with an AWS KMS CMK. This is in addition to the kms:ReEncrypt,
kms:GenerateDataKey, and kms:DescribeKey permissions. For more information, see
Failure to upload a large file to Amazon S3 with encryption using an AWS KMS key.
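
The following IAM policy statement is only a minimal sketch of granting those AWS KMS permissions to a
principal; the key ARN is a placeholder, and the wildcard forms (kms:ReEncrypt*, kms:GenerateDataKey*) are
used so that the related action variants are covered. Adapt it to your own key and scoping requirements.

{
"Version": "2012-10-17",
"Statement": [{
"Sid": "AllowKMSActionsForSSEKMS",
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "arn:aws:kms:us-east-1:111122223333:key/key-id"
}]
}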

AWS managed CMKs and customer managed CMKs


When you use server-side encryption with AWS KMS (SSE-KMS), you can use the default AWS managed
CMK, or you can specify a customer managed CMK that you have already created.

If you don't specify a customer managed CMK, Amazon S3 automatically creates an AWS managed
CMK in your AWS account the first time that you add an object encrypted with SSE-KMS to a bucket. By
default, Amazon S3 uses this CMK for SSE-KMS.

If you want to use a customer managed CMK for SSE-KMS, create the CMK before you configure SSE-
KMS. Then, when you configure SSE-KMS for your bucket, specify the existing customer managed CMK.

Creating your own customer managed CMK gives you more flexibility and control. For example, you
can create, rotate, and disable customer managed CMKs. You can also define access controls and
audit the customer managed CMKs that you use to protect your data. For more information about
customer managed and AWS managed CMKs, see AWS KMS concepts in the AWS Key Management
Service Developer Guide.
Important
When you use an AWS KMS CMK for server-side encryption in Amazon S3, you must choose a
symmetric CMK. Amazon S3 only supports symmetric CMKs and not asymmetric CMKs. For more
information, see Using Symmetric and Asymmetric Keys in the AWS Key Management Service
Developer Guide.

Amazon S3 Bucket Keys


When you configure server-side encryption using AWS KMS (SSE-KMS), you can configure your bucket to
use S3 Bucket Keys for SSE-KMS. This bucket-level key for SSE-KMS can reduce your KMS request costs
by up to 99 percent by decreasing the request traffic from Amazon S3 to AWS KMS.

When you configure your bucket to use S3 Bucket Keys for SSE-KMS on new objects, AWS KMS generates
a bucket-level key that is used to create unique data keys for objects in the bucket. This bucket key is
used for a time-limited period within Amazon S3, further reducing the need for Amazon S3 to make
requests to AWS KMS to complete encryption operations. For more information about using S3 Bucket
Keys, see Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys (p. 166).

AWS Signature Version 4


If you are uploading or accessing objects encrypted by SSE-KMS, you need to use AWS Signature Version
4 for added security. For more information on how to do this using an AWS SDK, see Specifying the
Signature Version in Request Authentication (p. 944).
Important
All GET and PUT requests for an object protected by AWS KMS fail if they are not made via SSL
or TLS, or if they are not made using SigV4.

SSE-KMS highlights
The highlights of SSE-KMS are as follows:

• You can choose a customer managed CMK that you create and manage, or you can choose an AWS
managed CMK that Amazon S3 creates in your AWS account and manages for you. Like a customer
managed CMK, your AWS managed CMK is unique to your AWS account and Region. Only Amazon S3
has permission to use this CMK on your behalf. Amazon S3 only supports symmetric CMKs.
• You can create, rotate, and disable auditable customer managed CMKs from the AWS KMS console.
• The ETag in the response is not the MD5 of the object data.
• The data keys used to encrypt your data are also encrypted and stored alongside the data that they
protect.
• The security controls in AWS KMS can help you meet encryption-related compliance requirements.

Requiring server-side encryption


To require server-side encryption of all objects in a particular Amazon S3 bucket, you can use a policy.
For example, the following bucket policy denies upload object (s3:PutObject) permission to everyone
if the request does not include the x-amz-server-side-encryption header requesting server-side
encryption with SSE-KMS.

{
"Version":"2012-10-17",
"Id":"PutObjectPolicy",
"Statement":[{
"Sid":"DenyUnEncryptedObjectUploads",
"Effect":"Deny",
"Principal":"*",
"Action":"s3:PutObject",
"Resource":"arn:aws:s3:::awsexamplebucket1/*",
"Condition":{
"StringNotEquals":{
"s3:x-amz-server-side-encryption":"aws:kms"
}
}
}
]
}

To require that a particular AWS KMS CMK be used to encrypt the objects in a bucket, you can use the
s3:x-amz-server-side-encryption-aws-kms-key-id condition key. To specify the AWS KMS
CMK, you must use a key Amazon Resource Name (ARN) that is in the "arn:aws:kms:region:acct-
id:key/key-id" format.
Note
When you upload an object, you can specify the AWS KMS CMK using the x-amz-server-
side-encryption-aws-kms-key-id header. If the header is not present in the request,
Amazon S3 assumes the AWS managed CMK. Regardless, the AWS KMS key ID that Amazon S3
uses for object encryption must match the AWS KMS key ID in the policy, otherwise Amazon S3
denies the request.

For a complete list of Amazon S3‐specific condition keys and more information about specifying
condition keys, see Amazon S3 condition keys (p. 232).
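
As a minimal sketch of a bucket policy statement that uses this condition key (the bucket name and key ARN
are placeholders), the following denies uploads that specify a different AWS KMS key:

{
"Version":"2012-10-17",
"Id":"PutObjectKMSKeyPolicy",
"Statement":[{
"Sid":"DenyWrongKMSKey",
"Effect":"Deny",
"Principal":"*",
"Action":"s3:PutObject",
"Resource":"arn:aws:s3:::awsexamplebucket1/*",
"Condition":{
"StringNotEquals":{
"s3:x-amz-server-side-encryption-aws-kms-key-id":"arn:aws:kms:us-east-1:111122223333:key/key-id"
}
}
}
]
}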

Encryption context
Amazon S3 supports an encryption context with the x-amz-server-side-encryption-context
header. An encryption context is an optional set of key-value pairs that can contain additional contextual
information about the data.

For information about the encryption context in Amazon S3, see Encryption context (p. 160). For
general information about encryption context, see AWS Key Management Service Concepts - Encryption
Context in the AWS Key Management Service Developer Guide.

The encryption context can be any value that you want, provided that the header adheres to the Base64-
encoded JSON format. However, because the encryption context is not encrypted and because it is
logged if AWS CloudTrail logging is turned on, the encryption context should not include sensitive
information. We further recommend that your context describe the data being encrypted or decrypted
so that you can better understand the CloudTrail events produced by AWS KMS.
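
As a rough illustration of the format, an encryption context such as the following (the key-value pairs are
hypothetical) would be Base64-encoded before being passed in the x-amz-server-side-encryption-context
header:

{"Department": "Finance", "Project": "example-project"}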

In Amazon S3, the object or bucket Amazon Resource Name (ARN) is commonly used as an encryption
context. If you use SSE-KMS without enabling an S3 Bucket Key, you use the object ARN as your
encryption context, for example, arn:aws:s3:::object_ARN. However, if you use SSE-KMS
and enable an S3 Bucket Key, you use the bucket ARN for your encryption context, for example,
arn:aws:s3:::bucket_ARN. For more information about S3 Bucket Keys, see Reducing the cost of
SSE-KMS with Amazon S3 Bucket Keys (p. 166).

If the key aws:s3:arn is not already in the encryption context, Amazon S3 can append a predefined
key of aws:s3:arn to the encryption context that you provide. Amazon S3 appends this predefined key
when it processes your requests. If you use SSE-KMS without an S3 Bucket Key, the value is equal to the
object ARN. If you use SSE-KMS with an S3 Bucket Key enabled, the value is equal to the bucket ARN.

You can use this predefined key to track relevant requests in CloudTrail, so you can always see which
Amazon S3 ARN was used with which encryption key. You can use CloudTrail logs to verify that the
encryption context is not identical between different Amazon S3 objects and buckets, which provides
additional security. Amazon S3 validates that your full encryption context has a value equal to the object or
bucket ARN.

Topics
• Specifying server-side encryption with AWS KMS (SSE-KMS) (p. 161)
• Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys (p. 166)

Specifying server-side encryption with AWS KMS (SSE-KMS)


When you create an object, you can specify the use of server-side encryption with AWS Key Management
Service (AWS KMS) customer master keys (CMKs) to encrypt your data. This is true when you are either
uploading a new object or copying an existing object. This encryption is known as SSE-KMS.

You can specify SSE-KMS using the S3 console, REST APIs, AWS SDKs, and AWS CLI. For more
information, see the topics below.

Using the S3 console

This topic describes how to set or change the type of encryption for an object using the Amazon S3 console.
Note
If you change an object's encryption, a new object is created to replace the old one. If S3
Versioning is enabled, a new version of the object is created, and the existing object becomes an
older version. The role that changes the property also becomes the owner of the new object (or
object version).

To add or change encryption for an object

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that contains the object.
3. In the Objects list, choose the name of the object that you want to add or change encryption for.

The Object overview opens, displaying the properties for your object.
4. Under Server-side encryption settings, choose Edit.

The Edit server-side encryption page opens.


5. To enable server-side encryption for your object, under Server-side encryption, choose Enable.
6. Under Encryption key type, choose AWS Key Management Service key (SSE-KMS).
Important
If you use the AWS KMS option for your default encryption configuration, you are subject
to the RPS (requests per second) limits of AWS KMS. For more information about AWS KMS
limits and how to request a limit increase, see AWS KMS limits.
7. Under AWS KMS key, choose one of the following:

• AWS managed key (aws/s3)


• Choose from your KMS master keys, and choose your KMS master key.
• Enter KMS master key ARN, and enter your AWS KMS key ARN.

Important
You can only use KMS CMKs that are enabled in the same AWS Region as the bucket. When
you choose Choose from your KMS master keys, the S3 console only lists 100 KMS CMKs
per Region. If you have more than 100 CMKs in the same Region, you can only see the first
100 CMKs in the S3 console. To use a KMS CMK that is not listed in the console, choose
Custom KMS ARN, and enter the KMS CMK ARN.
When you use an AWS KMS CMK for server-side encryption in Amazon S3, you must choose
a CMK that is enabled in the same Region as your bucket. Additionally, Amazon S3 only
supports symmetric CMKs and not asymmetric CMKs. For more information, see Using
Symmetric and Asymmetric Keys in the AWS Key Management Service Developer Guide.

For more information about creating an AWS KMS CMK, see Creating Keys in the AWS Key
Management Service Developer Guide. For more information about using AWS KMS with Amazon
S3, see Protecting Data Using Server-Side Encryption with CMKs Stored in AWS Key Management
Service (SSE-KMS) (p. 158).
8. Choose Save changes.

Note
This action applies encryption to all specified objects. When encrypting folders, wait for the save
operation to finish before adding new objects to the folder.

Using the REST API


When you create an object—that is, when you upload a new object or copy an existing object—you
can specify the use of server-side encryption with AWS Key Management Service (AWS KMS) customer
master keys (CMKs) to encrypt your data. To do this, add the x-amz-server-side-encryption
header to the request. Set the value of the header to the encryption algorithm aws:kms. Amazon S3
confirms that your object is stored using SSE-KMS by returning the response header x-amz-server-
side-encryption.

If you specify the x-amz-server-side-encryption header with a value of aws:kms, you can also use
the following request headers:

• x-amz-server-side-encryption-aws-kms-key-id
• x-amz-server-side-encryption-context
• x-amz-server-side-encryption-bucket-key-enabled
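
The following sample request is only a sketch of how these headers might appear on an object upload; the
bucket name, object key, key ARN, encryption context, and authorization string are placeholder values.

PUT /example-object HTTP/1.1
Host: awsexamplebucket1.s3.amazonaws.com
Date: Wed, 28 Oct 2020 12:00:00 GMT
Authorization: authorization string
Content-Length: 300
x-amz-server-side-encryption: aws:kms
x-amz-server-side-encryption-aws-kms-key-id: arn:aws:kms:us-east-1:111122223333:key/key-id
x-amz-server-side-encryption-context: Base64-encoded JSON
x-amz-server-side-encryption-bucket-key-enabled: true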

Topics
• Amazon S3 REST APIs that support SSE-KMS (p. 162)
• Encryption context (x-amz-server-side-encryption-context) (p. 163)
• AWS KMS key ID (x-amz-server-side-encryption-aws-kms-key-id) (p. 163)
• S3 Bucket Keys (x-amz-server-side-encryption-bucket-key-enabled) (p. 164)

Amazon S3 REST APIs that support SSE-KMS


The following REST APIs accept the x-amz-server-side-encryption, x-amz-server-side-
encryption-aws-kms-key-id, and x-amz-server-side-encryption-context request headers.

• PUT Object — When you upload data using the PUT API, you can specify these request headers.
• PUT Object - Copy— When you copy an object, you have both a source object and a target object.
When you pass SSE-KMS headers with the COPY operation, they are applied only to the target object.
When copying an existing object, regardless of whether the source object is encrypted or not, the
destination object is not encrypted unless you explicitly request server-side encryption.
• POST Object— When you use a POST operation to upload an object, instead of the request headers,
you provide the same information in the form fields.
• Initiate Multipart Upload— When you upload large objects using the multipart upload API, you can
specify these headers. You specify these headers in the initiate request.

The response headers of the following REST APIs return the x-amz-server-side-encryption header
when an object is stored using server-side encryption.

• PUT Object
• PUT Object - Copy
• POST Object
• Initiate Multipart Upload
• Upload Part
• Upload Part - Copy
• Complete Multipart Upload
• Get Object
• Head Object

Important

• All GET and PUT requests for an object protected by AWS KMS will fail if you don't make them
using Secure Sockets Layer (SSL) or Signature Version 4.
• If your object uses SSE-KMS, don't send encryption request headers for GET requests and
HEAD requests, or you’ll get an HTTP 400 BadRequest error.

Encryption context (x-amz-server-side-encryption-context)

If you specify x-amz-server-side-encryption:aws:kms, the Amazon S3 API supports an
encryption context with the x-amz-server-side-encryption-context header. An encryption
context is an optional set of key-value pairs that can contain additional contextual information about the
data.

In Amazon S3, the object or bucket Amazon Resource Name (ARN) is commonly used as an encryption
context. If you use SSE-KMS without enabling an S3 Bucket Key, you use the object ARN as your
encryption context, for example, arn:aws:s3:::object_ARN. However, if you use SSE-KMS
and enable an S3 Bucket Key, you use the bucket ARN for your encryption context, for example,
arn:aws:s3:::bucket_ARN.

For information about the encryption context in Amazon S3, see Encryption context (p. 160). For
general information about encryption context, see AWS Key Management Service Concepts - Encryption
Context in the AWS Key Management Service Developer Guide.

AWS KMS key ID (x-amz-server-side-encryption-aws-kms-key-id)

You can use the x-amz-server-side-encryption-aws-kms-key-id header to specify the
ID of the customer managed CMK used to protect the data. If you specify x-amz-server-side-
encryption:aws:kms, but don't provide x-amz-server-side-encryption-aws-kms-key-id,
Amazon S3 uses the AWS managed CMK in AWS KMS to protect the data. If you want to use a customer
managed AWS KMS CMK, you must provide the x-amz-server-side-encryption-aws-kms-key-id
of the customer managed CMK.
Important
When you use an AWS KMS CMK for server-side encryption in Amazon S3, you must choose a
symmetric CMK. Amazon S3 only supports symmetric CMKs and not asymmetric CMKs. For more

API Version 2006-03-01


163
Amazon Simple Storage Service User Guide
Server-side encryption

information, see Using Symmetric and Asymmetric Keys in the AWS Key Management Service
Developer Guide.

S3 Bucket Keys (x-amz-server-side-encryption-bucket-key-enabled)

You can use the x-amz-server-side-encryption-bucket-key-enabled request header to
enable or disable an S3 Bucket Key at the object level. S3 Bucket Keys can reduce your AWS KMS request
costs by decreasing the request traffic from Amazon S3 to AWS KMS. For more information, see Reducing
the cost of SSE-KMS with Amazon S3 Bucket Keys (p. 166).

If you specify x-amz-server-side-encryption:aws:kms, but don't provide x-amz-server-
side-encryption-bucket-key-enabled, your object uses the S3 Bucket Key settings for the
destination bucket to encrypt your object. For more information, see Configuring an S3 Bucket Key at the
object level using the REST API, AWS SDKs, or AWS CLI (p. 171).

Using the AWS SDKs

When using AWS SDKs, you can request Amazon S3 to use AWS Key Management Service (AWS KMS)
customer master keys (CMKs). This section provides examples of using the AWS SDKs for Java and .NET.
For information about other SDKs, go to Sample Code and Libraries.
Important
When you use an AWS KMS CMK for server-side encryption in Amazon S3, you must choose a
symmetric CMK. Amazon S3 only supports symmetric CMKs and not asymmetric CMKs. For more
information, see Using Symmetric and Asymmetric Keys in the AWS Key Management Service
Developer Guide.

Copy operation

When copying objects, you add the same request properties (ServerSideEncryptionMethod and
ServerSideEncryptionKeyManagementServiceKeyId) to request Amazon S3 to use an AWS KMS
CMK. For more information about copying objects, see Copying objects (p. 102).

Put operation

Java

When uploading an object using the AWS SDK for Java, you can request Amazon S3 to use an AWS
KMS CMK by adding the SSEAwsKeyManagementParams property as shown in the following
request.

PutObjectRequest putRequest = new PutObjectRequest(bucketName,
    keyName, file).withSSEAwsKeyManagementParams(new SSEAwsKeyManagementParams());

In this case, Amazon S3 uses the AWS managed CMK (see Using Server-Side Encryption with CMKs
Stored in AWS KMS (p. 158)). You can optionally create a symmetric customer managed CMK and
specify that in the request.

PutObjectRequest putRequest = new PutObjectRequest(bucketName,
    keyName, file).withSSEAwsKeyManagementParams(new SSEAwsKeyManagementParams(keyID));

For more information about creating customer managed CMKs, see Programming the AWS KMS API
in the AWS Key Management Service Developer Guide.

For working code examples of uploading an object, see the following topics. You will need to update
those code examples and provide encryption information as shown in the preceding code fragment.

• For uploading an object in a single operation, see Uploading objects (p. 65).

• For a multipart upload, see the following topics:
• Using the high-level multipart upload API, see Uploading an object using multipart upload (p. 78).
• If you are using the low-level multipart upload API, see Using the AWS SDKs (low-level API) (p. 82).

.NET

When uploading an object using the AWS SDK for .NET, you can request Amazon S3 to use an AWS
KMS CMK by adding the ServerSideEncryptionMethod property as shown in the following
request.

PutObjectRequest putRequest = new PutObjectRequest
{
    BucketName = bucketName,
    Key = keyName,
    // other properties.
    ServerSideEncryptionMethod = ServerSideEncryptionMethod.AWSKMS
};

In this case, Amazon S3 uses the AWS managed CMK. For more information, see Protecting
Data Using Server-Side Encryption with CMKs Stored in AWS Key Management Service (SSE-
KMS) (p. 158). You can optionally create your own symmetric customer managed CMK and specify
that in the request.

PutObjectRequest putRequest1 = new PutObjectRequest
{
    BucketName = bucketName,
    Key = keyName,
    // other properties.
    ServerSideEncryptionMethod = ServerSideEncryptionMethod.AWSKMS,
    ServerSideEncryptionKeyManagementServiceKeyId = keyId
};

For more information about creating customer managed CMKs, see Programming the AWS KMS API
in the AWS Key Management Service Developer Guide.

For working code examples of uploading an object, see the following topics. You will need to update
these code examples and provide encryption information as shown in the preceding code fragment.

• For uploading an object in a single operation, see Uploading objects (p. 65).
• For a multipart upload, see the following topics:
• Using the high-level multipart upload API, see Uploading an object using multipart upload (p. 78).
• Using the low-level multipart upload API, see Using the AWS SDKs (low-level API) (p. 82).

Presigned URLs

Java

When creating a presigned URL for an object encrypted using an AWS KMS CMK, you must explicitly
specify Signature Version 4.

ClientConfiguration clientConfiguration = new ClientConfiguration();
clientConfiguration.setSignerOverride("AWSS3V4SignerType");
AmazonS3Client s3client = new AmazonS3Client(
        new ProfileCredentialsProvider(), clientConfiguration);
...

For a code example, see Accessing an object using a presigned URL (p. 144).
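
As a minimal sketch that builds on the s3client configured above (the bucket and object names are
placeholders, and the one-hour expiration is arbitrary), generating the presigned URL itself might
look like the following.

java.util.Date expiration = new java.util.Date(System.currentTimeMillis() + 60 * 60 * 1000); // 1 hour
GeneratePresignedUrlRequest urlRequest = new GeneratePresignedUrlRequest(bucketName, keyName)
        .withMethod(com.amazonaws.HttpMethod.GET)
        .withExpiration(expiration);
java.net.URL presignedUrl = s3client.generatePresignedUrl(urlRequest);
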
.NET

When creating a presigned URL for an object encrypted using an AWS KMS CMK, you must explicitly
specify Signature Version 4.

AWSConfigs.S3Config.UseSignatureVersion4 = true;

For a code example, see Accessing an object using a presigned URL (p. 144).

Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys


Amazon S3 Bucket Keys reduce the cost of Amazon S3 server-side encryption using AWS Key
Management Service (SSE-KMS). This new bucket-level key for SSE can reduce AWS KMS request costs by
up to 99 percent by decreasing the request traffic from Amazon S3 to AWS KMS. With a few clicks in the
AWS Management Console, and without any changes to your client applications, you can configure your
bucket to use an S3 Bucket Key for AWS KMS-based encryption on new objects.

S3 Bucket Keys for SSE-KMS


Workloads that access millions or billions of objects encrypted with SSE-KMS can generate large volumes
of requests to AWS KMS. When you use SSE-KMS to protect your data without an S3 Bucket Key,
Amazon S3 uses an individual AWS KMS data key for every object. It makes a call to AWS KMS every
time a request is made against a KMS-encrypted object. For information about how SSE-KMS works,
see Protecting Data Using Server-Side Encryption with CMKs Stored in AWS Key Management Service
(SSE-KMS) (p. 158).

When you configure your bucket to use an S3 Bucket Key for SSE-KMS on new objects, AWS KMS
generates a bucket-level key that is used to create unique data keys for objects in the bucket. This S3
Bucket Key is used for a time-limited period within Amazon S3, reducing the need for Amazon S3 to
make requests to AWS KMS to complete encryption operations. This reduces traffic from S3 to AWS KMS,
allowing you to access AWS KMS-encrypted objects in S3 at a fraction of the previous cost.

Amazon S3 will only share an S3 Bucket Key for objects encrypted by the same AWS KMS customer
master key (CMK).

Configuring S3 Bucket Keys


You can configure your bucket to use an S3 Bucket Key for SSE-KMS on new objects through the Amazon
S3 console, AWS SDKs, AWS CLI, or REST API. You can also override the S3 Bucket Key configuration for
specific objects in a bucket with an individual per-object KMS key using the REST API, AWS SDK, or AWS
CLI. You can also view S3 Bucket Key settings.

Before you configure your bucket to use an S3 Bucket Key, review Changes to note before enabling an S3
Bucket Key (p. 167).

Configuring an S3 Bucket Key using the Amazon S3 console

When you create a new bucket, you can configure your bucket to use an S3 Bucket Key for SSE-KMS on
new objects. You can also configure an existing bucket to use an S3 Bucket Key for SSE-KMS on new
objects by updating your bucket properties. 

For more information, see Configuring your bucket to use an S3 Bucket Key with SSE-KMS for new
objects (p. 169).

REST API, AWS CLI, and AWS SDK support for S3 Bucket Keys

You can use the REST API, AWS CLI, or AWS SDK to configure your bucket to use an S3 Bucket Key for
SSE-KMS on new objects. You can also enable an S3 Bucket Key at the object level.

For more information, see the following: 

• Configuring an S3 Bucket Key at the object level using the REST API, AWS SDKs, or AWS CLI (p. 171)
• Configuring your bucket to use an S3 Bucket Key with SSE-KMS for new objects (p. 169)

The following APIs support S3 Bucket Keys for SSE-KMS:

• PutBucketEncryption
• ServerSideEncryptionRule accepts the BucketKeyEnabled parameter for enabling and
disabling an S3 Bucket Key.
• GetBucketEncryption
• ServerSideEncryptionRule returns the settings for BucketKeyEnabled.
• PutObject, CopyObject, CreateMultipartUpload, and PostObject
• x-amz-server-side-encryption-bucket-key-enabled request header enables or disables an
S3 Bucket Key at the object level.
• HeadObject, GetObject, UploadPartCopy, UploadPart, and CompleteMultipartUpload
• x-amz-server-side-encryption-bucket-key-enabled response header indicates if an S3
Bucket Key is enabled or disabled for an object.

Working with AWS CloudFormation

In AWS CloudFormation, the AWS::S3::Bucket resource includes an encryption property called
BucketKeyEnabled that you can use to enable or disable an S3 Bucket Key.

For more information, see Using AWS CloudFormation (p. 171).

Changes to note before enabling an S3 Bucket Key

Before you enable an S3 Bucket Key, please note the following related changes:

kms:Decrypt permissions for copy and upload


Important
To copy or upload objects with S3 Bucket Keys, the AWS KMS key policy for the CMK must
include the kms:Decrypt permission for the calling principal.

When you enable an S3 Bucket Key, the AWS KMS key policy for the CMK must include the
kms:Decrypt permission for the calling principal. If the calling principal is in a different account than
the AWS KMS CMK, you must also include kms:Decrypt permission in the IAM policy. The call to
kms:Decrypt verifies the integrity of the S3 Bucket Key before using it.

You only need to include kms:Decrypt permissions in the key policy if you use a customer managed
AWS KMS CMK. If you enable an S3 Bucket Key for server-side encryption using an AWS managed CMK
(aws/s3), your AWS KMS key policy already includes kms:Decrypt permissions.

IAM or KMS key policies

If your existing IAM policies or AWS KMS key policies use your object Amazon Resource Name (ARN) as
the encryption context to refine or limit access to your AWS KMS CMKs, these policies won’t work with
an S3 Bucket Key. S3 Bucket Keys use the bucket ARN as encryption context. Before you enable an S3
Bucket Key, update your IAM policies or AWS KMS key policies to use your bucket ARN as encryption
context.

For more information about encryption context and S3 Bucket Keys, see Encryption context (x-amz-
server-side-encryption-context) (p. 163).

AWS KMS CloudTrail events

After you enable an S3 Bucket Key, your AWS KMS CloudTrail events log your bucket ARN instead of your
object ARN. Additionally, you see fewer KMS CloudTrail events for SSE-KMS objects in your logs. Because
key material is time-limited in Amazon S3, fewer requests are made to AWS KMS. 

Using an S3 Bucket Key with replication

You can use S3 Bucket Keys with Same-Region Replication (SRR) and Cross-Region Replication (CRR).

When Amazon S3 replicates an encrypted object, it generally preserves the encryption settings of
the replica object in the destination bucket. However, if the source object is not encrypted and your
destination bucket uses default encryption or an S3 Bucket Key, Amazon S3 encrypts the object with the
destination bucket’s configuration.
Important
To use replication with an S3 Bucket Key, the AWS KMS key policy for the CMK used to encrypt
the object replica must include kms:Decrypt permissions for the calling principal. The call to
kms:Decrypt verifies the integrity of the S3 Bucket Key before using it. For more information,
see Using an S3 Bucket Key with replication (p. 168). For more information about SSE-KMS and
S3 Bucket Key, see Amazon S3 Bucket Keys and replication (p. 601).

The following examples illustrate how an S3 Bucket Key works with replication. For more information,
see Replicating objects created with server-side encryption (SSE) using AWS KMS CMKs (p. 599). 

Example 1 – Source object uses S3 Bucket Keys, destination bucket uses default
encryption

If your source object uses an S3 Bucket Key but your destination bucket uses default encryption with
SSE-KMS, the replica object maintains its S3 Bucket Key encryption settings in the destination bucket.
The destination bucket still uses default encryption with SSE-KMS.

Example 2 – Source object is not encrypted, destination bucket uses an S3 Bucket
Key with SSE-KMS

If your source object is not encrypted and the destination bucket uses an S3 Bucket Key with SSE-KMS,
the source object is encrypted with an S3 Bucket Key using SSE-KMS in the destination bucket. This
results in the ETag of the source object being different from the ETag of the replica object. You must
update applications that use the ETag to accommodate this difference.

Working with S3 Bucket Keys

For more information about enabling and working with S3 Bucket Keys, see the following sections:

• Configuring your bucket to use an S3 Bucket Key with SSE-KMS for new objects (p. 169)
• Configuring an S3 Bucket Key at the object level using the REST API, AWS SDKs, or AWS CLI (p. 171)
• Viewing settings for an S3 Bucket Key (p. 172)

Configuring your bucket to use an S3 Bucket Key with SSE-KMS for new objects

When you configure server-side encryption using SSE-KMS, you can configure your bucket to use an S3
Bucket Key for SSE-KMS on new objects. S3 Bucket Keys decrease the request traffic from Amazon S3 to
AWS Key Management Service (AWS KMS) and reduce the cost of SSE-KMS. For more information, see
Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys (p. 166).

You can configure your bucket to use an S3 Bucket Key for SSE-KMS on new objects by using the Amazon
S3 console, REST API, AWS SDK, AWS CLI, or AWS CloudFormation. If you want to enable or disable an S3
Bucket Key for existing objects, you can use a COPY operation. For more information, see Configuring an
S3 Bucket Key at the object level using the REST API, AWS SDKs, or AWS CLI (p. 171).

When an S3 Bucket Key is enabled for the source or destination bucket, the encryption context
will be the bucket Amazon Resource Name (ARN) and not the object ARN, for example,
arn:aws:s3:::bucket_ARN. You need to update your IAM policies to use the bucket ARN for
the encryption context. For more information, see Granting additional permissions for the IAM role
(p. 600).


Prerequisite:

Before you configure your bucket to use an S3 Bucket Key, review Changes to note before enabling an S3
Bucket Key (p. 167).

Using the S3 console

In the S3 console, you can enable or disable an S3 Bucket Key for a new or existing bucket. Objects in
the S3 console inherit their S3 Bucket Key setting from the bucket configuration. When you enable an S3
Bucket Key for your bucket, new objects that you upload to the bucket use an S3 Bucket Key for server-
side encryption using AWS KMS.

Uploading, copying, or modifying objects in buckets that have an S3 Bucket Key enabled

If you upload, modify, or copy an object in a bucket that has an S3 Bucket Key enabled, the S3 Bucket
Key settings for that object might be updated to align with bucket configuration.

If an object already has an S3 Bucket Key enabled, the S3 Bucket Key settings for that object don't
change when you copy or modify the object. However, if you modify or copy an object that doesn’t have
an S3 Bucket Key enabled, and the destination bucket has an S3 Bucket Key configuration, the object
inherits the destination bucket's S3 Bucket Key settings. For example, if your source object doesn't have
an S3 Bucket Key enabled but the destination bucket has S3 Bucket Key enabled, an S3 Bucket Key will
be enabled for the object.

To enable an S3 Bucket Key when you create a new bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Choose Create bucket.

3. Enter your bucket name, and choose your AWS Region.


4. Under Default encryption, choose Enable.
5. Under Encryption type, choose AWS Key Management Service key (SSE-KMS).
6. Choose an AWS KMS key:

• Choose AWS managed key (aws/s3).


• Choose Customer managed key, and choose a symmetric customer managed CMK in the same
Region as your bucket.
7. Under Bucket Key, choose Enable.
8. Choose Create bucket.

Amazon S3 creates your bucket with an S3 Bucket Key enabled. New objects that you upload to the
bucket will use an S3 Bucket Key. To disable an S3 Bucket Key, follow the previous steps, and choose
Disable.

To enable an S3 Bucket Key for an existing bucket

1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.


2. In the Buckets list, choose the bucket that you want to enable an S3 Bucket Key for.
3. Choose Properties.
4. Under Default encryption, choose Edit.
5. Under Default encryption, choose Enable.
6. Under Encryption type, choose AWS Key Management Service key (SSE-KMS).
7. Choose an AWS KMS key:

• Choose AWS managed key (aws/s3).


• Choose Customer managed key, and choose a symmetric customer managed CMK in the same
Region as your bucket.
8. Under Bucket Key, choose Enable.
9. Choose Save changes.

Amazon S3 enables an S3 Bucket Key for new objects added to your bucket. Existing objects don't
use the S3 Bucket Key. To disable an S3 Bucket Key, follow the previous steps, and choose Disable.

Using the REST API

You can use PutBucketEncryption to enable or disable an S3 Bucket Key for your bucket. To configure
an S3 Bucket Key with PutBucketEncryption, specify the ServerSideEncryptionRule, which includes
default encryption with server-side encryption using AWS KMS customer master keys (CMKs). You can
also optionally use a customer managed CMK by specifying the KMS key ID for the CMK. 

For more information and example syntax, see PutBucketEncryption.

Using the AWS SDK Java

The following example enables default bucket encryption with SSE-KMS and an S3 Bucket Key using the
AWS SDK for Java.

Java

AmazonS3 s3client = AmazonS3ClientBuilder.standard()
    .withRegion(Regions.DEFAULT_REGION)
    .build();

   
ServerSideEncryptionByDefault serverSideEncryptionByDefault = new ServerSideEncryptionByDefault()
    .withSSEAlgorithm(SSEAlgorithm.KMS);
ServerSideEncryptionRule rule = new ServerSideEncryptionRule()
    .withApplyServerSideEncryptionByDefault(serverSideEncryptionByDefault)
    .withBucketKeyEnabled(true);
ServerSideEncryptionConfiguration serverSideEncryptionConfiguration =
    new ServerSideEncryptionConfiguration().withRules(Collections.singleton(rule));

SetBucketEncryptionRequest setBucketEncryptionRequest = new SetBucketEncryptionRequest()
    .withServerSideEncryptionConfiguration(serverSideEncryptionConfiguration)
    .withBucketName(bucketName);

s3client.setBucketEncryption(setBucketEncryptionRequest);

Using the AWS CLI

The following example enables default bucket encryption with SSE-KMS and an S3 Bucket Key using the
AWS CLI.

aws s3api put-bucket-encryption --bucket <bucket-name> --server-side-encryption-configuration '{
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "<KMS-Key-ARN>"
                },
                "BucketKeyEnabled": true
            }
        ]
    }'

Using AWS CloudFormation

For more information about configuring an S3 Bucket Key using AWS CloudFormation, see
AWS::S3::Bucket ServerSideEncryptionRule in the AWS CloudFormation User Guide.

Configuring an S3 Bucket Key at the object level using the REST API, AWS SDKs, or AWS CLI

When you perform a PUT or COPY operation using the REST API, AWS SDKs, or AWS CLI, you can enable
or disable an S3 Bucket Key at the object level. S3 Bucket Keys reduce the cost of server-side encryption
using AWS Key Management Service (AWS KMS) (SSE-KMS) by decreasing request traffic from Amazon
S3 to AWS KMS. For more information, see Reducing the cost of SSE-KMS with Amazon S3 Bucket
Keys (p. 166).

When you configure an S3 Bucket Key for an object using a PUT or COPY operation, Amazon S3 only
updates the settings for that object. The S3 Bucket Key settings for the destination bucket do not
change. If you don't specify an S3 Bucket Key for your object, Amazon S3 applies the S3 Bucket Key
settings for the destination bucket to the object.

Prerequisite:

Before you configure your object to use an S3 Bucket Key, review Changes to note before enabling an S3
Bucket Key (p. 167).

Topics
• Using the REST API (p. 172)

• Using the AWS SDK Java (PutObject) (p. 172)
• Using the AWS CLI (PutObject) (p. 172)

Using the REST API


When you use SSE-KMS, you can enable an S3 Bucket Key for an object using the following APIs:

• PutObject – When you upload an object, you can specify the x-amz-server-side-encryption-
bucket-key-enabled request header to enable or disable an S3 Bucket Key at the object level.
• CopyObject – When you copy an object and configure SSE-KMS, you can specify the x-amz-server-
side-encryption-bucket-key-enabled request header to enable or disable an S3 Bucket Key for
your object.
• PostObject – When you use a POST operation to upload an object and configure SSE-KMS, you can use
the x-amz-server-side-encryption-bucket-key-enabled form field to enable or disable an
S3 Bucket Key for your object.
• CreateMultipartUpload – When you upload large objects using the multipart upload API and configure
SSE-KMS, you can use the x-amz-server-side-encryption-bucket-key-enabled request
header to enable or disable an S3 Bucket Key for your object.

To enable an S3 Bucket Key at the object level, include the x-amz-server-side-encryption-
bucket-key-enabled request header. For more information about SSE-KMS and the REST API,
see Using the REST API (p. 162).

Using the AWS SDK Java (PutObject)


You can use the following example to configure an S3 Bucket Key at the object level using the AWS SDK
for Java.

Java

AmazonS3 s3client = AmazonS3ClientBuilder.standard()
    .withRegion(Regions.DEFAULT_REGION)
    .build();

String bucketName = "bucket name";
String keyName = "key name for object";
String contents = "file contents";

PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, keyName, contents)
    .withBucketKeyEnabled(true);

s3client.putObject(putObjectRequest);
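
A copy request can carry the same object-level setting. The following is a minimal sketch that assumes
the same client and placeholder bucket and key names, and that CopyObjectRequest exposes a
withBucketKeyEnabled setter mirroring the one on PutObjectRequest shown above.

CopyObjectRequest copyObjectRequest = new CopyObjectRequest(
        sourceBucketName, sourceKeyName, destinationBucketName, destinationKeyName)
    // Configure SSE-KMS for the copy and enable an S3 Bucket Key for the destination object.
    .withSSEAwsKeyManagementParams(new SSEAwsKeyManagementParams())
    .withBucketKeyEnabled(true);

s3client.copyObject(copyObjectRequest);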

Using the AWS CLI (PutObject)


You can use the following AWS CLI example to configure an S3 Bucket Key at the object level as part of a
PutObject request.

aws s3api put-object --bucket <bucket name> --key <object key name> --server-side-
encryption aws:kms --bucket-key-enabled --body <filepath>

Viewing settings for an S3 Bucket Key


You can view settings for an S3 Bucket Key at the bucket or object level using the Amazon S3 console,
REST API, AWS CLI, or AWS SDKs.

S3 Bucket Keys decrease request traffic from Amazon S3 to AWS KMS and reduce the cost of server-side
encryption using AWS Key Management Service (SSE-KMS). For more information, see Reducing the cost
of SSE-KMS with Amazon S3 Bucket Keys (p. 166).

To view S3 Bucket Key settings for a bucket or an object that has inherited S3 Bucket Key settings from
the bucket configuration, you need permission to perform the s3:GetEncryptionConfiguration
action. For more information, see GetBucketEncryption in the Amazon Simple Storage Service API
Reference.

Using the S3 console

In the S3 console, you can view the S3 Bucket Key settings for your bucket or object. S3 Bucket Key
settings are inherited from the bucket configuration unless the source object already has an S3 Bucket
Key configured.

Objects and folders in the same bucket can have different S3 Bucket Key settings. For example, if you
upload an object using the REST API and enable an S3 Bucket Key for the object, the object retains its S3
Bucket Key setting in the destination bucket, even if S3 Bucket Key is disabled in the destination bucket.
As another example, if you enable an S3 Bucket Key for an existing bucket, objects that are already in the
bucket do not use an S3 Bucket Key. However, new objects have an S3 Bucket Key enabled.

To view the bucket-level S3 Bucket Key setting

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the bucket that you want to view the S3 Bucket Key setting for.
3. Choose Properties.
4. In the Default encryption section, under Bucket Key, you see the S3 Bucket Key setting for your
bucket.

If you can’t see the S3 Bucket Key setting, you might not have permission to perform the
s3:GetEncryptionConfiguration action. For more information, see GetBucketEncryption in the
Amazon Simple Storage Service API Reference.

To view the S3 Bucket Key setting for your object

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the bucket that contains your object.
3. In the Objects list, choose your object name.
4. On the Details tab, under Server-side encryption settings, choose Edit.

Under Bucket Key, you see the S3 Bucket Key setting for your object but you cannot edit it.

Using the REST API

To return bucket-level S3 Bucket Key settings

To return encryption information for a bucket, including settings for an S3 Bucket Key, use the
GetBucketEncryption operation. S3 Bucket Key settings are returned in the response body in the
ServerSideEncryptionConfiguration with the BucketKeyEnabled setting. For more information,
see GetBucketEncryption in the Amazon S3 API Reference.
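
As a minimal sketch with the AWS SDK for Java (assuming an existing AmazonS3 client and a placeholder
bucket name), you might read the bucket-level setting as follows.

GetBucketEncryptionResult encryptionResult = s3client.getBucketEncryption(
        new GetBucketEncryptionRequest().withBucketName(bucketName));

for (ServerSideEncryptionRule rule :
        encryptionResult.getServerSideEncryptionConfiguration().getRules()) {
    // BucketKeyEnabled indicates whether new objects in this bucket use an S3 Bucket Key by default.
    System.out.println("BucketKeyEnabled: " + rule.getBucketKeyEnabled());
}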

To return object-level settings for an S3 Bucket Key

To return the S3 Bucket Key status for an object, use the HeadObject operation. HeadObject returns
the x-amz-server-side-encryption-bucket-key-enabled response header to show whether an
S3 Bucket Key is enabled or disabled for the object. For more information, see HeadObject in the Amazon
S3 API Reference.
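
The following is a minimal AWS SDK for Java sketch of this check. It assumes an existing AmazonS3
client and placeholder bucket and key names, and it assumes that the ObjectMetadata getter
getBucketKeyEnabled surfaces the x-amz-server-side-encryption-bucket-key-enabled response header.

ObjectMetadata metadata = s3client.getObjectMetadata(bucketName, keyName);

// Assumed to map to the x-amz-server-side-encryption-bucket-key-enabled response header;
// a null value means the header was not returned for this object.
Boolean bucketKeyEnabled = metadata.getBucketKeyEnabled();
System.out.println("S3 Bucket Key enabled: " + bucketKeyEnabled);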

The following API operations also return the x-amz-server-side-encryption-bucket-key-enabled
response header if an S3 Bucket Key is configured for an object:

• PutObject
• PostObject
• CopyObject
• CreateMultipartUpload
• UploadPartCopy
• UploadPart
• CompleteMultipartUpload
• GetObject

Protecting data using server-side encryption with Amazon S3-managed encryption keys (SSE-S3)
Server-side encryption protects data at rest. Amazon S3 encrypts each object with a unique key. As an
additional safeguard, it encrypts the key itself with a master key that it rotates regularly. Amazon S3
server-side encryption uses one of the strongest block ciphers available to encrypt your data, 256-bit
Advanced Encryption Standard (AES-256).

There are no new charges for using server-side encryption with Amazon S3-managed keys (SSE-
S3). However, requests to configure and use SSE-S3 incur standard Amazon S3 request charges. For
information about pricing, see Amazon S3 pricing.

If you need server-side encryption for all of the objects that are stored in a bucket, use a bucket policy.
For example, the following bucket policy denies permissions to upload an object unless the request
includes the x-amz-server-side-encryption header to request server-side encryption:

{
  "Version": "2012-10-17",
  "Id": "PutObjectPolicy",
  "Statement": [
    {
      "Sid": "DenyIncorrectEncryptionHeader",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::awsexamplebucket1/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    },
    {
      "Sid": "DenyUnencryptedObjectUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::awsexamplebucket1/*",
      "Condition": {
        "Null": {
          "s3:x-amz-server-side-encryption": "true"
        }
      }
    }
  ]
}

Note

• Server-side encryption encrypts only the object data, not object metadata.

API Support for Server-Side Encryption


To request server-side encryption using the object creation REST APIs, provide the x-amz-server-
side-encryption request header. For information about the REST APIs, see Using the REST
API (p. 176).

The following Amazon S3 APIs support this header:

• PUT operations—Specify the request header when uploading data using the PUT API. For more
information, see PUT Object.
• Initiate Multipart Upload—Specify the header in the initiate request when uploading large objects
using the multipart upload API. For more information, see Initiate Multipart Upload.
• COPY operations—When you copy an object, you have both a source object and a target object. For
more information, see PUT Object - Copy.

Note
When using a POST operation to upload an object, instead of providing the request header, you
provide the same information in the form fields. For more information, see POST Object.

The AWS SDKs also provide wrapper APIs that you can use to request server-side encryption. You can
also use the AWS Management Console to upload objects and request server-side encryption.

Topics
• Specifying Amazon S3 encryption (p. 175)

Specifying Amazon S3 encryption


When you create an object, you can specify the use of server-side encryption with Amazon S3-managed
encryption keys to encrypt your data. This is true when you are either uploading a new object or copying
an existing object. This encryption is known as SSE-S3.

You can specify SSE-S3 using the S3 console, REST APIs, AWS SDKs, and AWS CLI. For more information,
see the topics below.

Using the S3 console

This topic describes how to set or change the type of encryption for an object using the AWS Management
Console. When you copy an object using the console, it copies the object as is. That means if the
source is encrypted, the target object is also encrypted. The console also allows you to add or change
encryption for an object.
Note
If you change an object's encryption, a new object is created to replace the old one. If S3
Versioning is enabled, a new version of the object is created, and the existing object becomes an
older version. The role that changes the property also becomes the owner of the new object (or
object version).

To add or change encryption for an object

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that contains the object.
3. In the Objects list, choose the name of the object that you want to add or change encryption for.

The Object overview opens, displaying the properties for your object.
4. Under Server-side encryption settings, choose Edit.

The Edit server-side encryption page opens.


5. To enable server-side encryption for your object, under Server-side encryption, choose Enable.
6. To enable server-side encryption using an Amazon S3-managed key, under Encryption key type,
choose Amazon S3 key (SSE-S3).

For more information about using Amazon S3 server-side encryption to encrypt your data, see
Protecting data using server-side encryption with Amazon S3-managed encryption keys (SSE-
S3) (p. 174).
7. Choose Save changes.

Note
This action applies encryption to all specified objects. When encrypting folders, wait for the save
operation to finish before adding new objects to the folder.

Using the REST API

At the time of object creation—that is, when you are uploading a new object or making a copy of an
existing object—you can specify if you want Amazon S3 to encrypt your data by adding the x-amz-
server-side-encryption header to the request. Set the value of the header to the encryption
algorithm AES256 that Amazon S3 supports. Amazon S3 confirms that your object is stored using
server-side encryption by returning the response header x-amz-server-side-encryption.

The following REST upload APIs accept the x-amz-server-side-encryption request header.

• PUT Object
• PUT Object - Copy
• POST Object
• Initiate Multipart Upload

When uploading large objects using the multipart upload API, you can specify server-side encryption
by adding the x-amz-server-side-encryption header to the Initiate Multipart Upload request.
When you are copying an existing object, regardless of whether the source object is encrypted or not, the
destination object is not encrypted unless you explicitly request server-side encryption.

The response headers of the following REST APIs return the x-amz-server-side-encryption header
when an object is stored using server-side encryption.

• PUT Object
• PUT Object - Copy
• POST Object
• Initiate Multipart Upload
• Upload Part
• Upload Part - Copy

• Complete Multipart Upload
• Get Object
• Head Object

Note
Encryption request headers should not be sent for GET requests and HEAD requests if your
object uses SSE-S3; otherwise, you'll get an HTTP 400 Bad Request error.

Using the AWS SDKs

When using AWS SDKs, you can request Amazon S3 to use Amazon S3-managed encryption keys. This
section provides examples of using the AWS SDKs in multiple languages. For information about other
SDKs, go to Sample Code and Libraries.

Java

When you use the AWS SDK for Java to upload an object, you can use server-side encryption
to encrypt it. To request server-side encryption, use the ObjectMetadata property of the
PutObjectRequest to set the x-amz-server-side-encryption request header. When you call
the putObject() method of the AmazonS3Client, Amazon S3 encrypts and saves the data.

You can also request server-side encryption when uploading objects with the multipart upload API:

• When using the high-level multipart upload API, you use the TransferManager methods to
apply server-side encryption to objects as you upload them. You can use any of the upload
methods that take ObjectMetadata as a parameter. For more information, see Uploading an
object using multipart upload (p. 78).
• When using the low-level multipart upload API, you specify server-side encryption when
you initiate the multipart upload. You add the ObjectMetadata property by calling the
InitiateMultipartUploadRequest.setObjectMetadata() method. For more information,
see Using the AWS SDKs (low-level API) (p. 82).

You can't directly change the encryption state of an object (encrypting an unencrypted object or
decrypting an encrypted object). To change an object's encryption state, you make a copy of the
object, specifying the desired encryption state for the copy, and then delete the original object.
Amazon S3 encrypts the copied object only if you explicitly request server-side encryption. To
request encryption of the copied object through the Java API, use the ObjectMetadata property to
specify server-side encryption in the CopyObjectRequest.

Example

The following example shows how to set server-side encryption using the AWS SDK for Java. It
shows how to perform the following tasks:

• Upload a new object using server-side encryption.


• Change an object's encryption state (in this example, encrypting a previously unencrypted object)
by making a copy of the object.
• Check the encryption state of the object.

For more information about server-side encryption, see Using the REST API (p. 176). For
instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 950).

import com.amazonaws.AmazonServiceException;

import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.internal.SSEResultBase;
import com.amazonaws.services.s3.model.*;

import java.io.ByteArrayInputStream;

public class SpecifyServerSideEncryption {

public static void main(String[] args) {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String keyNameToEncrypt = "*** Key name for an object to upload and encrypt
***";
String keyNameToCopyAndEncrypt = "*** Key name for an unencrypted object to be
encrypted by copying ***";
String copiedObjectKeyName = "*** Key name for the encrypted copy of the
unencrypted object ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();

// Upload an object and encrypt it with SSE.


uploadObjectWithSSEEncryption(s3Client, bucketName, keyNameToEncrypt);

// Upload a new unencrypted object, then change its encryption state


// to encrypted by making a copy.
changeSSEEncryptionStatusByCopying(s3Client,
bucketName,
keyNameToCopyAndEncrypt,
copiedObjectKeyName);
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}

private static void uploadObjectWithSSEEncryption(AmazonS3 s3Client, String


bucketName, String keyName) {
String objectContent = "Test object encrypted with SSE";
byte[] objectBytes = objectContent.getBytes();

// Specify server-side encryption.


ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setContentLength(objectBytes.length);
objectMetadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
PutObjectRequest putRequest = new PutObjectRequest(bucketName,
keyName,
new ByteArrayInputStream(objectBytes),
objectMetadata);

// Upload the object and check its encryption status.


PutObjectResult putResult = s3Client.putObject(putRequest);
System.out.println("Object \"" + keyName + "\" uploaded with SSE.");
printEncryptionStatus(putResult);
}

private static void changeSSEEncryptionStatusByCopying(AmazonS3 s3Client,


String bucketName,
String sourceKey,
String destKey) {
// Upload a new, unencrypted object.
PutObjectResult putResult = s3Client.putObject(bucketName, sourceKey, "Object
example to encrypt by copying");
System.out.println("Unencrypted object \"" + sourceKey + "\" uploaded.");
printEncryptionStatus(putResult);

// Make a copy of the object and use server-side encryption when storing the
copy.
CopyObjectRequest request = new CopyObjectRequest(bucketName,
sourceKey,
bucketName,
destKey);
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
request.setNewObjectMetadata(objectMetadata);

// Perform the copy operation and display the copy's encryption status.
CopyObjectResult response = s3Client.copyObject(request);
System.out.println("Object \"" + destKey + "\" uploaded with SSE.");
printEncryptionStatus(response);

// Delete the original, unencrypted object, leaving only the encrypted copy in
Amazon S3.
s3Client.deleteObject(bucketName, sourceKey);
System.out.println("Unencrypted object \"" + sourceKey + "\" deleted.");
}

private static void printEncryptionStatus(SSEResultBase response) {


String encryptionStatus = response.getSSEAlgorithm();
if (encryptionStatus == null) {
encryptionStatus = "Not encrypted with SSE";
}
System.out.println("Object encryption status is: " + encryptionStatus);
}
}

.NET

When you upload an object, you can direct Amazon S3 to encrypt it. To change the encryption state
of an existing object, you make a copy of the object and delete the source object. By default, the
copy operation encrypts the target only if you explicitly request server-side encryption of the target
object. To specify server-side encryption in the CopyObjectRequest, add the following:

ServerSideEncryptionMethod = ServerSideEncryptionMethod.AES256

For a working sample of how to copy an object, see Using the AWS SDKs (p. 105).

The following example uploads an object. In the request, the example directs Amazon S3 to encrypt
the object. The example then retrieves object metadata and verifies the encryption method that
was used. For information about creating and testing a working sample, see Running the Amazon
S3 .NET Code Examples (p. 951).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class SpecifyServerSideEncryptionTest
{
private const string bucketName = "*** bucket name ***";
private const string keyName = "*** key name for object created ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;

public static void Main()


{
client = new AmazonS3Client(bucketRegion);
WritingAnObjectAsync().Wait();
}

static async Task WritingAnObjectAsync()


{
try
{
var putRequest = new PutObjectRequest
{
BucketName = bucketName,
Key = keyName,
ContentBody = "sample text",
ServerSideEncryptionMethod = ServerSideEncryptionMethod.AES256
};

var putResponse = await client.PutObjectAsync(putRequest);

// Determine the encryption state of an object.


GetObjectMetadataRequest metadataRequest = new GetObjectMetadataRequest
{
BucketName = bucketName,
Key = keyName
};
GetObjectMetadataResponse response = await
client.GetObjectMetadataAsync(metadataRequest);
ServerSideEncryptionMethod objectEncryption =
response.ServerSideEncryptionMethod;

Console.WriteLine("Encryption method used: {0}",


objectEncryption.ToString());
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered ***. Message:'{0}' when writing an
object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
}
}

PHP

This topic shows how to use classes from version 3 of the AWS SDK for PHP to add server-side
encryption to objects that you upload to Amazon Simple Storage Service (Amazon S3). It assumes
that you are already following the instructions for Using the AWS SDK for PHP and Running PHP
Examples (p. 952) and have the AWS SDK for PHP properly installed.

To upload an object to Amazon S3, use the Aws\S3\S3Client::putObject() method. To add
the x-amz-server-side-encryption request header to your upload request, specify the
ServerSideEncryption parameter with the value AES256, as shown in the following code
example. For information about server-side encryption requests, see Using the REST API (p. 176).

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

// $filepath should be an absolute path to a file on disk.
$filepath = '*** Your File Path ***';

$s3 = new S3Client([
    'version' => 'latest',
    'region' => 'us-east-1'
]);

// Upload a file with server-side encryption.
$result = $s3->putObject([
    'Bucket' => $bucket,
    'Key' => $keyname,
    'SourceFile' => $filepath,
    'ServerSideEncryption' => 'AES256',
]);

In response, Amazon S3 returns the x-amz-server-side-encryption header with the value of
the encryption algorithm that was used to encrypt your object's data.

When you upload large objects using the multipart upload API, you can specify server-side
encryption for the objects that you are uploading, as follows:

• When using the low-level multipart upload API, specify server-side encryption when you
call the Aws\S3\S3Client::createMultipartUpload() method. To add the x-amz-server-
side-encryption request header to your request, specify the array parameter's
ServerSideEncryption key with the value AES256. For more information about the low-level
multipart upload API, see Using the AWS SDKs (low-level API) (p. 82).
• When using the high-level multipart upload API, specify server-side encryption using the
ServerSideEncryption parameter of the CreateMultipartUpload method. For an example
of using the setOption() method with the high-level multipart upload API, see Uploading an
object using multipart upload (p. 78).

To determine the encryption state of an existing object, retrieve the object metadata by calling the
Aws\S3\S3Client::headObject() method as shown in the following PHP code example.

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';
$keyname = '*** Your Object Key ***';

$s3 = new S3Client([
    'version' => 'latest',
    'region' => 'us-east-1'
]);

// Check which server-side encryption algorithm is used.
$result = $s3->headObject([
    'Bucket' => $bucket,
    'Key' => $keyname,
]);
echo $result['ServerSideEncryption'];

To change the encryption state of an existing object, make a copy of the object using the Aws
\S3\S3Client::copyObject() method and delete the source object. By default, copyObject() does
not encrypt the target unless you explicitly request server-side encryption of the destination object
using the ServerSideEncryption parameter with the value AES256. The following PHP code
example makes a copy of an object and adds server-side encryption to the copied object.

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$sourceBucket = '*** Your Source Bucket Name ***';
$sourceKeyname = '*** Your Source Object Key ***';

$targetBucket = '*** Your Target Bucket Name ***';
$targetKeyname = '*** Your Target Object Key ***';

$s3 = new S3Client([
    'version' => 'latest',
    'region' => 'us-east-1'
]);

// Copy an object and add server-side encryption.
$s3->copyObject([
    'Bucket' => $targetBucket,
    'Key' => $targetKeyname,
    'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
    'ServerSideEncryption' => 'AES256',
]);

• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP Documentation

Ruby

When using the AWS SDK for Ruby to upload an object, you can specify that the object be
stored encrypted at rest with server-side encryption (SSE). When you read the object back, it is
automatically decrypted.

The following AWS SDK for Ruby – Version 3 example demonstrates how to specify that a file
uploaded to Amazon S3 be encrypted at rest.

require 'aws-sdk-s3'

# Uploads a file to an Amazon S3 bucket and then encrypts the file server-side
# by using the 256-bit Advanced Encryption Standard (AES-256) block cipher.
#
# Prerequisites:
#
# - An Amazon S3 bucket.
#
# @param s3_client [Aws::S3::Client] An initialized Amazon S3 client.
# @param bucket_name [String] The name of the bucket.


# @param object_key [String] The name for the uploaded object.
# @param object_content [String] The content to upload into the object.
# @return [Boolean] true if the file was successfully uploaded and then
# encrypted; otherwise, false.
# @example
# exit 1 unless upload_file_encrypted_aes256_at_rest?(
# Aws::S3::Client.new(region: 'us-east-1'),
# 'doc-example-bucket',
# 'my-file.txt',
# 'This is the content of my-file.txt.'
# )
def upload_file_encrypted_aes256_at_rest?(
s3_client,
bucket_name,
object_key,
object_content
)
s3_client.put_object(
bucket: bucket_name,
key: object_key,
body: object_content,
server_side_encryption: 'AES256'
)
return true
rescue StandardError => e
puts "Error uploading object: #{e.message}"
return false
end

For an example that shows how to upload an object without SSE, see Uploading objects (p. 65).

The following code example demonstrates how to determine the encryption state of an existing
object.

require 'aws-sdk-s3'

# Gets the server-side encryption state of an object in an Amazon S3 bucket.
#
#
# Prerequisites:
#
# - An Amazon S3 bucket.
# - An object within that bucket.
#
# @param s3_client [Aws::S3::Client] An initialized Amazon S3 client.
# @param bucket_name [String] The bucket's name.
# @param object_key [String] The object's key.
# @return [String] The server-side encryption state.
# @example
# s3_client = Aws::S3::Client.new(region: 'us-east-1')
# puts get_server_side_encryption_state(
# s3_client,
# 'doc-example-bucket',
# 'my-file.txt'
# )
def get_server_side_encryption_state(s3_client, bucket_name, object_key)
response = s3_client.get_object(
bucket: bucket_name,
key: object_key
)
encryption_state = response.server_side_encryption
encryption_state.nil? ? 'not set' : encryption_state
rescue StandardError => e
"unknown or error: #{e.message}"

API Version 2006-03-01


183
Amazon Simple Storage Service User Guide
Server-side encryption

end

If server-side encryption is not used for the object that is stored in Amazon S3, the underlying
server_side_encryption value is nil (the preceding example method returns 'not set' in that case).

To change the encryption state of an existing object, make a copy of the object and delete the
source object. By default, the copy methods do not encrypt the target unless you explicitly request
server-side encryption. You can request the encryption of the target object by specifying the
server_side_encryption value in the options hash argument as shown in the following Ruby
code example. The code example demonstrates how to copy an object and encrypt the copy.

require 'aws-sdk-s3'

# Copies an object from one Amazon S3 bucket to another,
# changing the object's server-side encryption state during
# the copy operation.
#
# Prerequisites:
#
# - A bucket containing an object to be copied.
# - A separate bucket to copy the object into.
#
# @param s3_client [Aws::S3::Client] An initialized Amazon S3 client.
# @param source_bucket_name [String] The source bucket's name.
# @param source_object_key [String] The name of the object to be copied.
# @param target_bucket_name [String] The target bucket's name.
# @param target_object_key [String] The name of the copied object.
# @param encryption_type [String] The server-side encryption type for
# the copied object.
# @return [Boolean] true if the object was copied with the specified
# server-side encryption; otherwise, false.
# @example
# s3_client = Aws::S3::Client.new(region: 'us-east-1')
# if object_copied_with_encryption?(
# s3_client,
# 'doc-example-bucket1',
# 'my-source-file.txt',
# 'doc-example-bucket2',
# 'my-target-file.txt',
# 'AES256'
# )
# puts 'Copied.'
# else
# puts 'Not copied.'
# end
def object_copied_with_encryption?(
s3_client,
source_bucket_name,
source_object_key,
target_bucket_name,
target_object_key,
encryption_type
)
response = s3_client.copy_object(
bucket: target_bucket_name,
copy_source: source_bucket_name + '/' + source_object_key,
key: target_object_key,
server_side_encryption: encryption_type
)
return true if response.copy_object_result
rescue StandardError => e
puts "Error while copying object: #{e.message}"
end

For examples of setting up encryption using AWS CloudFormation, see Create a bucket with default
encryption and Create a bucket using AWS KMS server-side encryption with an S3 Bucket Key in the AWS
CloudFormation User Guide.

For a sample of how to copy an object without encryption, see Copying objects (p. 102).

Protecting data using server-side encryption with customer-provided encryption keys (SSE-C)
Server-side encryption is about protecting data at rest. Server-side encryption encrypts only the object
data, not object metadata. Using server-side encryption with customer-provided encryption keys (SSE-C)
allows you to set your own encryption keys. With the encryption key you provide as part of your request,
Amazon S3 manages the encryption as it writes to disks and the decryption when you access your objects.
Therefore, you don't need to maintain any code to perform data encryption and decryption. The only
thing you do is manage the encryption keys you provide.

When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256
encryption to your data and removes the encryption key from memory. When you retrieve an object,
you must provide the same encryption key as part of your request. Amazon S3 first verifies that the
encryption key you provided matches the key that was used to encrypt the object, and then it decrypts
the object before returning the object data to you.

There are no new charges for using server-side encryption with customer-provided encryption keys
(SSE-C). However, requests to configure and use SSE-C incur standard Amazon S3 request charges. For
information about pricing, see Amazon S3 pricing.
Important
Amazon S3 does not store the encryption key you provide. Instead, it stores a randomly salted
HMAC value of the encryption key to validate future requests. The salted HMAC value cannot
be used to derive the value of the encryption key or to decrypt the contents of the encrypted
object. That means if you lose the encryption key, you lose the object.

SSE-C overview
This section provides an overview of SSE-C:

• You must use HTTPS.


Important
Amazon S3 rejects any requests made over HTTP when using SSE-C. For security
considerations, we recommend that you consider any key you erroneously send using HTTP to
be compromised. You should discard the key and rotate as appropriate.
• The ETag in the response is not the MD5 of the object data.
• You manage a mapping of which encryption key was used to encrypt which object. Amazon S3 does
not store encryption keys. You are responsible for tracking which encryption key you provided for
which object.
• If your bucket is versioning-enabled, each object version you upload using this feature can have its
own encryption key. You are responsible for tracking which encryption key was used for which object
version.
• Because you manage encryption keys on the client side, you manage any additional safeguards, such
as key rotation, on the client side.
Warning
If you lose the encryption key, any GET request for an object without its encryption key fails,
and you lose the object.

Topics

• Specifying server-side encryption with customer-provided keys (SSE-C) (p. 186)

Specifying server-side encryption with customer-provided keys (SSE-C)


At the time of object creation with the REST API, you can specify server-side encryption with customer-
provided encryption keys (SSE-C). When you use SSE-C, you must provide encryption key information
using the following request headers.

Name Description

x-amz-server-side-encryption-customer-algorithm
    Use this header to specify the encryption algorithm. The header value must be "AES256".

x-amz-server-side-encryption-customer-key
    Use this header to provide the 256-bit, base64-encoded encryption key for Amazon S3 to use to
    encrypt or decrypt your data.

x-amz-server-side-encryption-customer-key-MD5
    Use this header to provide the base64-encoded 128-bit MD5 digest of the encryption key according
    to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the
    encryption key was transmitted without error.

You can use AWS SDK wrapper libraries to add these headers to your request. If you need to, you can
make the Amazon S3 REST API calls directly in your application.
Note
You cannot use the Amazon S3 console to upload an object and request SSE-C. You also cannot
use the console to update (for example, change the storage class or add metadata) an existing
object stored using SSE-C.

Presigned URLs and SSE-C


You can generate a presigned URL that can be used for operations such as uploading a new object,
retrieving an existing object, or retrieving object metadata. Presigned URLs support SSE-C as follows:

• When creating a presigned URL, you must specify the algorithm using the x-amz-server-side-
encryption-customer-algorithm header in the signature calculation.
• When using the presigned URL to upload a new object, retrieve an existing object, or retrieve only
object metadata, you must provide all the encryption headers in your client application.
Note
For non-SSE-C objects, you can generate a presigned URL and directly paste that into a
browser, for example to access the data.
However, this is not true for SSE-C objects because in addition to the presigned URL, you also
need to include HTTP headers that are specific to SSE-C objects. Therefore, you can use the
presigned URL for SSE-C objects only programmatically.

Using the REST API

Amazon S3 REST APIs that support SSE-C


The following Amazon S3 APIs support server-side encryption with customer-provided encryption keys
(SSE-C).

• GET operation — When retrieving objects using the GET API (see GET Object), you can specify the
request headers. Torrents are not supported for objects encrypted using SSE-C.


• HEAD operation — To retrieve object metadata using the HEAD API (see HEAD Object), you can
specify these request headers.
• PUT operation — When uploading data using the PUT Object API (see PUT Object), you can specify
these request headers.
• Multipart Upload — When uploading large objects using the multipart upload API, you can specify
these headers. You specify these headers in the initiate request (see Initiate Multipart Upload) and
in each subsequent part upload request (see Upload Part or Upload Part - Copy). For each part upload
request, the encryption information must be the same as what you provided in the initiate multipart
upload request.
• POST operation — When using a POST operation to upload an object (see POST Object), instead of
the request headers, you provide the same information in the form fields.
• Copy operation — When you copy an object (see PUT Object - Copy), you have both a source object
and a target object:
• If you want the target object encrypted using server-side encryption with AWS managed keys, you
must provide the x-amz-server-side-encryption request header.
• If you want the target object encrypted using SSE-C, you must provide encryption information using
the three headers described in the preceding table.
• If the source object is encrypted using SSE-C, you must provide encryption key information using the
following headers so that Amazon S3 can decrypt the object for copying.

Name: x-amz-copy-source-server-side-encryption-customer-algorithm
Description: Include this header to specify the algorithm Amazon S3 should use to decrypt the
source object. This value must be AES256.

Name: x-amz-copy-source-server-side-encryption-customer-key
Description: Include this header to provide the base64-encoded encryption key for Amazon S3 to use
to decrypt the source object. This encryption key must be the one that you provided Amazon S3 when
you created the source object. Otherwise, Amazon S3 cannot decrypt the object.

Name: x-amz-copy-source-server-side-encryption-customer-key-MD5
Description: Include this header to provide the base64-encoded 128-bit MD5 digest of the
encryption key according to RFC 1321.

Using the AWS SDKs to specify SSE-C for PUT, GET, HEAD, and Copy operations

The following examples show how to request server-side encryption with customer-provided keys (SSE-
C) for objects. The examples perform the following operations. Each operation shows how to specify
SSE-C-related headers in the request:

• Put object—Uploads an object and requests server-side encryption using a customer-provided
encryption key.
• Get object—Downloads the object uploaded in the previous step. In the request, you provide the
same encryption information you provided when you uploaded the object. Amazon S3 needs this
information to decrypt the object so that it can return it to you.
• Get object metadata—Retrieves the object's metadata. You provide the same encryption information
used when the object was created.

• Copy object—Makes a copy of the previously-uploaded object. Because the source object is stored
using SSE-C, you must provide its encryption information in your copy request. By default, Amazon
S3 encrypts the copy of the object only if you explicitly request it. This example directs Amazon S3 to
store an encrypted copy of the object.

Java
Note
This example shows how to upload an object in a single operation. When using the
Multipart Upload API to upload large objects, you provide encryption information in the
same way shown in this example. For examples of multipart uploads that use the AWS SDK
for Java, see Uploading an object using multipart upload (p. 78).

To add the required encryption information, you include an SSECustomerKey in your request. For
more information about the SSECustomerKey class, see the REST API section.

For information about SSE-C, see Protecting data using server-side encryption with customer-
provided encryption keys (SSE-C) (p. 185). For instructions on creating and testing a working
sample, see Testing the Amazon S3 Java Code Examples (p. 950).

Example

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

import javax.crypto.KeyGenerator;
import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class ServerSideEncryptionUsingClientSideEncryptionKey {


private static SSECustomerKey SSE_KEY;
private static AmazonS3 S3_CLIENT;
private static KeyGenerator KEY_GENERATOR;

public static void main(String[] args) throws IOException, NoSuchAlgorithmException {
Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String keyName = "*** Key name ***";
String uploadFileName = "*** File path ***";
String targetKeyName = "*** Target key name ***";

// Create an encryption key.


KEY_GENERATOR = KeyGenerator.getInstance("AES");
KEY_GENERATOR.init(256, new SecureRandom());
SSE_KEY = new SSECustomerKey(KEY_GENERATOR.generateKey());

try {
S3_CLIENT = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();


// Upload an object.
uploadObject(bucketName, keyName, new File(uploadFileName));

// Download the object.


downloadObject(bucketName, keyName);

// Verify that the object is properly encrypted by attempting to retrieve it
// using the encryption key.
retrieveObjectMetadata(bucketName, keyName);

// Copy the object into a new object that also uses SSE-C.
copyObject(bucketName, keyName, targetKeyName);
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}

private static void uploadObject(String bucketName, String keyName, File file) {


PutObjectRequest putRequest = new PutObjectRequest(bucketName, keyName,
file).withSSECustomerKey(SSE_KEY);
S3_CLIENT.putObject(putRequest);
System.out.println("Object uploaded");
}

private static void downloadObject(String bucketName, String keyName) throws IOException {
GetObjectRequest getObjectRequest = new GetObjectRequest(bucketName,
keyName).withSSECustomerKey(SSE_KEY);
S3Object object = S3_CLIENT.getObject(getObjectRequest);

System.out.println("Object content: ");


displayTextInputStream(object.getObjectContent());
}

private static void retrieveObjectMetadata(String bucketName, String keyName) {


GetObjectMetadataRequest getMetadataRequest = new
GetObjectMetadataRequest(bucketName, keyName)
.withSSECustomerKey(SSE_KEY);
ObjectMetadata objectMetadata =
S3_CLIENT.getObjectMetadata(getMetadataRequest);
System.out.println("Metadata retrieved. Object size: " +
objectMetadata.getContentLength());
}

private static void copyObject(String bucketName, String keyName, String targetKeyName)
throws NoSuchAlgorithmException {
// Create a new encryption key for target so that the target is saved using SSE-C.
SSECustomerKey newSSEKey = new SSECustomerKey(KEY_GENERATOR.generateKey());

CopyObjectRequest copyRequest = new CopyObjectRequest(bucketName, keyName,
bucketName, targetKeyName)
.withSourceSSECustomerKey(SSE_KEY)
.withDestinationSSECustomerKey(newSSEKey);

S3_CLIENT.copyObject(copyRequest);
System.out.println("Object copied");
}


private static void displayTextInputStream(S3ObjectInputStream input) throws IOException {
// Read one line at a time from the input stream and display each line.
BufferedReader reader = new BufferedReader(new InputStreamReader(input));
String line;
while ((line = reader.readLine()) != null) {
System.out.println(line);
}
System.out.println();
}
}

.NET
Note
For examples of uploading large objects using the multipart upload API, see Uploading an
object using multipart upload (p. 78) and Using the AWS SDKs (low-level API) (p. 82).

For information about SSE-C, see Protecting data using server-side encryption with customer-
provided encryption keys (SSE-C) (p. 185). For information about creating and testing a working
sample, see Running the Amazon S3 .NET Code Examples (p. 951).

Example

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.IO;
using System.Security.Cryptography;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class SSEClientEncryptionKeyObjectOperationsTest
{
private const string bucketName = "*** bucket name ***";
private const string keyName = "*** key name for new object created ***";
private const string copyTargetKeyName = "*** key name for object copy ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;

public static void Main()


{
client = new AmazonS3Client(bucketRegion);
ObjectOpsUsingClientEncryptionKeyAsync().Wait();
}
private static async Task ObjectOpsUsingClientEncryptionKeyAsync()
{
try
{
// Create an encryption key.
Aes aesEncryption = Aes.Create();
aesEncryption.KeySize = 256;
aesEncryption.GenerateKey();
string base64Key = Convert.ToBase64String(aesEncryption.Key);

// 1. Upload the object.


PutObjectRequest putObjectRequest = await UploadObjectAsync(base64Key);
// 2. Download the object and verify that its contents match what you uploaded.
await DownloadObjectAsync(base64Key, putObjectRequest);


// 3. Get object metadata and verify that the object uses AES-256 encryption.
await GetObjectMetadataAsync(base64Key);
// 4. Copy both the source and target objects using server-side encryption with
// a customer-provided encryption key.
await CopyObjectAsync(aesEncryption, base64Key);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered ***. Message:'{0}' when writing an
object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}

private static async Task<PutObjectRequest> UploadObjectAsync(string base64Key)


{
PutObjectRequest putObjectRequest = new PutObjectRequest
{
BucketName = bucketName,
Key = keyName,
ContentBody = "sample text",
ServerSideEncryptionCustomerMethod =
ServerSideEncryptionCustomerMethod.AES256,
ServerSideEncryptionCustomerProvidedKey = base64Key
};
PutObjectResponse putObjectResponse = await
client.PutObjectAsync(putObjectRequest);
return putObjectRequest;
}
private static async Task DownloadObjectAsync(string base64Key,
PutObjectRequest putObjectRequest)
{
GetObjectRequest getObjectRequest = new GetObjectRequest
{
BucketName = bucketName,
Key = keyName,
// Provide encryption information for the object stored in Amazon S3.
ServerSideEncryptionCustomerMethod =
ServerSideEncryptionCustomerMethod.AES256,
ServerSideEncryptionCustomerProvidedKey = base64Key
};

using (GetObjectResponse getResponse = await


client.GetObjectAsync(getObjectRequest))
using (StreamReader reader = new StreamReader(getResponse.ResponseStream))
{
string content = reader.ReadToEnd();
if (String.Compare(putObjectRequest.ContentBody, content) == 0)
Console.WriteLine("Object content is same as we uploaded");
else
Console.WriteLine("Error...Object content is not same.");

if (getResponse.ServerSideEncryptionCustomerMethod ==
ServerSideEncryptionCustomerMethod.AES256)
Console.WriteLine("Object encryption method is AES256, same as we
set");
else
Console.WriteLine("Error...Object encryption method is not the same
as AES256 we set");


// Assert.AreEqual(putObjectRequest.ContentBody, content);
// Assert.AreEqual(ServerSideEncryptionCustomerMethod.AES256, getResponse.ServerSideEncryptionCustomerMethod);
}
}
private static async Task GetObjectMetadataAsync(string base64Key)
{
GetObjectMetadataRequest getObjectMetadataRequest = new
GetObjectMetadataRequest
{
BucketName = bucketName,
Key = keyName,

// The object stored in Amazon S3 is encrypted, so provide the
// necessary encryption information.
ServerSideEncryptionCustomerMethod =
ServerSideEncryptionCustomerMethod.AES256,
ServerSideEncryptionCustomerProvidedKey = base64Key
};

GetObjectMetadataResponse getObjectMetadataResponse = await


client.GetObjectMetadataAsync(getObjectMetadataRequest);
Console.WriteLine("The object metadata show encryption method used is:
{0}", getObjectMetadataResponse.ServerSideEncryptionCustomerMethod);
// Assert.AreEqual(ServerSideEncryptionCustomerMethod.AES256,
getObjectMetadataResponse.ServerSideEncryptionCustomerMethod);
}
private static async Task CopyObjectAsync(Aes aesEncryption, string base64Key)
{
aesEncryption.GenerateKey();
string copyBase64Key = Convert.ToBase64String(aesEncryption.Key);

CopyObjectRequest copyRequest = new CopyObjectRequest


{
SourceBucket = bucketName,
SourceKey = keyName,
DestinationBucket = bucketName,
DestinationKey = copyTargetKeyName,
// Information about the source object's encryption.
CopySourceServerSideEncryptionCustomerMethod =
ServerSideEncryptionCustomerMethod.AES256,
CopySourceServerSideEncryptionCustomerProvidedKey = base64Key,
// Information about the target object's encryption.
ServerSideEncryptionCustomerMethod =
ServerSideEncryptionCustomerMethod.AES256,
ServerSideEncryptionCustomerProvidedKey = copyBase64Key
};
await client.CopyObjectAsync(copyRequest);
}
}
}

Using the AWS SDKs to specify SSE-C for multipart uploads

The example in the preceding section shows how to request server-side encryption with customer-
provided keys (SSE-C) in the PUT, GET, HEAD, and Copy operations. This section describes other Amazon
S3 APIs that support SSE-C.

Java

To upload large objects, you can use the multipart upload API (see Uploading and copying objects using
multipart upload (p. 72)). You can use either the high-level or low-level API to upload large objects.
These APIs support encryption-related headers in the request.


• When using the high-level TransferManager API, you provide the encryption-specific headers in
the PutObjectRequest (see Uploading an object using multipart upload (p. 78)).
• When using the low-level API, you provide encryption-related information in the
InitiateMultipartUploadRequest, followed by identical encryption information in each
UploadPartRequest. You do not need to provide any encryption-specific headers in your
CompleteMultipartUploadRequest. For examples, see Using the AWS SDKs (low-level
API) (p. 82). A minimal low-level sketch is shown after this list.
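
The following is a minimal low-level sketch, separate from the TransferManager example that
follows. The bucket name, key name, and file path are placeholders. It initiates a multipart upload
with an SSECustomerKey, passes the same key on each UploadPartRequest, and completes the upload
without any encryption headers.

import java.io.File;
import java.security.SecureRandom;
import java.util.ArrayList;
import java.util.List;
import javax.crypto.KeyGenerator;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

public class LowLevelMultipartUploadWithSseC {
    public static void main(String[] args) throws Exception {
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Key name ***";
        File file = new File("*** File path ***");

        AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

        // One customer-provided key for the whole upload.
        KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
        keyGenerator.init(256, new SecureRandom());
        SSECustomerKey sseKey = new SSECustomerKey(keyGenerator.generateKey());

        // 1. Initiate the upload with the SSE-C key.
        InitiateMultipartUploadResult initResult = s3Client.initiateMultipartUpload(
                new InitiateMultipartUploadRequest(bucketName, keyName).withSSECustomerKey(sseKey));

        // 2. Upload each part with the same SSE-C key.
        List<PartETag> partETags = new ArrayList<>();
        long partSize = 5L * 1024 * 1024; // 5 MB
        long filePosition = 0;
        for (int partNumber = 1; filePosition < file.length(); partNumber++) {
            long size = Math.min(partSize, file.length() - filePosition);
            UploadPartResult partResult = s3Client.uploadPart(new UploadPartRequest()
                    .withBucketName(bucketName)
                    .withKey(keyName)
                    .withUploadId(initResult.getUploadId())
                    .withPartNumber(partNumber)
                    .withFile(file)
                    .withFileOffset(filePosition)
                    .withPartSize(size)
                    .withSSECustomerKey(sseKey));
            partETags.add(partResult.getPartETag());
            filePosition += size;
        }

        // 3. Complete the upload. No encryption-specific headers are needed here.
        s3Client.completeMultipartUpload(new CompleteMultipartUploadRequest(
                bucketName, keyName, initResult.getUploadId(), partETags));
    }
}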

The following example uses TransferManager to create objects and shows how to provide SSE-C
related information. The example does the following:

• Creates an object using the TransferManager.upload() method. In the PutObjectRequest
instance, you provide encryption key information in the request. Amazon S3 encrypts the object using
the customer-provided encryption key.
• Makes a copy of the object by calling the TransferManager.copy() method. The example
directs Amazon S3 to encrypt the object copy using a new SSECustomerKey. Because the source
object is encrypted using SSE-C, the CopyObjectRequest also provides the encryption key of the
source object so that Amazon S3 can decrypt the object before copying it.

Example

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.SSECustomerKey;
import com.amazonaws.services.s3.transfer.Copy;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

import javax.crypto.KeyGenerator;
import java.io.File;
import java.security.SecureRandom;

public class ServerSideEncryptionCopyObjectUsingHLwithSSEC {

public static void main(String[] args) throws Exception {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String fileToUpload = "*** File path ***";
String keyName = "*** New object key name ***";
String targetKeyName = "*** Key name for object copy ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();
TransferManager tm = TransferManagerBuilder.standard()
.withS3Client(s3Client)
.build();

// Create an object from a file.


PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName,


keyName, new File(fileToUpload));

// Create an encryption key.


KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
keyGenerator.init(256, new SecureRandom());
SSECustomerKey sseCustomerEncryptionKey = new
SSECustomerKey(keyGenerator.generateKey());

// Upload the object. TransferManager uploads asynchronously, so this call
// returns immediately.
putObjectRequest.setSSECustomerKey(sseCustomerEncryptionKey);
Upload upload = tm.upload(putObjectRequest);

// Optionally, wait for the upload to finish before continuing.


upload.waitForCompletion();
System.out.println("Object created.");

// Copy the object and store the copy using SSE-C with a new key.
CopyObjectRequest copyObjectRequest = new CopyObjectRequest(bucketName,
keyName, bucketName, targetKeyName);
SSECustomerKey sseTargetObjectEncryptionKey = new
SSECustomerKey(keyGenerator.generateKey());
copyObjectRequest.setSourceSSECustomerKey(sseCustomerEncryptionKey);

copyObjectRequest.setDestinationSSECustomerKey(sseTargetObjectEncryptionKey);

// Copy the object. TransferManager copies asynchronously, so this call
// returns immediately.
Copy copy = tm.copy(copyObjectRequest);

// Optionally, wait for the upload to finish before continuing.


copy.waitForCompletion();
System.out.println("Copy complete.");
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

.NET

To upload large objects, you can use the multipart upload API (see Uploading and copying objects using
multipart upload (p. 72)). The AWS SDK for .NET provides both high-level and low-level APIs to upload
large objects. These APIs support encryption-related headers in the request.

• When using the high-level TransferUtility API, you provide the encryption-specific headers in
the TransferUtilityUploadRequest as shown. For code examples, see Uploading an object
using multipart upload (p. 78).

TransferUtilityUploadRequest request = new TransferUtilityUploadRequest()


{
FilePath = filePath,
BucketName = existingBucketName,
Key = keyName,
// Provide encryption information.
ServerSideEncryptionCustomerMethod = ServerSideEncryptionCustomerMethod.AES256,


ServerSideEncryptionCustomerProvidedKey = base64Key,
};

• When using the low-level API, you provide encryption-related information in the initiate multipart
upload request, followed by identical encryption information in the subsequent upload part
requests. You do not need to provide any encryption-specific headers in your complete multipart
upload request. For examples, see Using the AWS SDKs (low-level API) (p. 82).

The following is a low-level multipart upload example that makes a copy of an existing large
object. In the example, the object to be copied is stored in Amazon S3 using SSE-C, and you want
to save the target object also using SSE-C. In the example, you do the following:
• Initiate a multipart upload request by providing an encryption key and related information.
• Provide source and target object encryption keys and related information in the
CopyPartRequest.
• Obtain the size of the source object to be copied by retrieving the object metadata.
• Upload the objects in 5 MB parts.

Example

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class SSECLowLevelMPUcopyObjectTest
{
private const string existingBucketName = "*** bucket name ***";
private const string sourceKeyName = "*** source object key name ***";
private const string targetKeyName = "*** key name for the target object ***";
private const string filePath = @"*** file path ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
static void Main()
{
s3Client = new AmazonS3Client(bucketRegion);
CopyObjClientEncryptionKeyAsync().Wait();
}

private static async Task CopyObjClientEncryptionKeyAsync()


{
Aes aesEncryption = Aes.Create();
aesEncryption.KeySize = 256;
aesEncryption.GenerateKey();
string base64Key = Convert.ToBase64String(aesEncryption.Key);

await CreateSampleObjUsingClientEncryptionKeyAsync(base64Key, s3Client);

await CopyObjectAsync(s3Client, base64Key);


}
private static async Task CopyObjectAsync(IAmazonS3 s3Client, string
base64Key)
{
List<CopyPartResponse> uploadResponses = new List<CopyPartResponse>();


// 1. Initialize.
InitiateMultipartUploadRequest initiateRequest = new
InitiateMultipartUploadRequest
{
BucketName = existingBucketName,
Key = targetKeyName,
ServerSideEncryptionCustomerMethod =
ServerSideEncryptionCustomerMethod.AES256,
ServerSideEncryptionCustomerProvidedKey = base64Key,
};

InitiateMultipartUploadResponse initResponse =
await s3Client.InitiateMultipartUploadAsync(initiateRequest);

// 2. Upload Parts.
long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB
long firstByte = 0;
long lastByte = partSize;

try
{
// First find source object size. Because object is stored encrypted with
// customer provided key you need to provide encryption information in your request.
GetObjectMetadataRequest getObjectMetadataRequest = new
GetObjectMetadataRequest()
{
BucketName = existingBucketName,
Key = sourceKeyName,
ServerSideEncryptionCustomerMethod =
ServerSideEncryptionCustomerMethod.AES256,
ServerSideEncryptionCustomerProvidedKey = base64Key // "*** source object encryption key ***"
};

GetObjectMetadataResponse getObjectMetadataResponse = await


s3Client.GetObjectMetadataAsync(getObjectMetadataRequest);

long filePosition = 0;
for (int i = 1; filePosition <
getObjectMetadataResponse.ContentLength; i++)
{
CopyPartRequest copyPartRequest = new CopyPartRequest
{
UploadId = initResponse.UploadId,
// Source.
SourceBucket = existingBucketName,
SourceKey = sourceKeyName,
// Source object is stored using SSE-C. Provide encryption information.
CopySourceServerSideEncryptionCustomerMethod =
ServerSideEncryptionCustomerMethod.AES256,
CopySourceServerSideEncryptionCustomerProvidedKey =
base64Key, //"***source object encryption key ***",
FirstByte = firstByte,
// If the last part is smaller than our normal part size then use the remaining size.
LastByte = lastByte >
getObjectMetadataResponse.ContentLength ?
getObjectMetadataResponse.ContentLength - 1 : lastByte,

// Target.
DestinationBucket = existingBucketName,
DestinationKey = targetKeyName,
PartNumber = i,


// Encryption information for the target object.


ServerSideEncryptionCustomerMethod =
ServerSideEncryptionCustomerMethod.AES256,
ServerSideEncryptionCustomerProvidedKey = base64Key
};
uploadResponses.Add(await
s3Client.CopyPartAsync(copyPartRequest));
filePosition += partSize;
firstByte += partSize;
lastByte += partSize;
}

// Step 3: complete.
CompleteMultipartUploadRequest completeRequest = new
CompleteMultipartUploadRequest
{
BucketName = existingBucketName,
Key = targetKeyName,
UploadId = initResponse.UploadId,
};
completeRequest.AddPartETags(uploadResponses);

CompleteMultipartUploadResponse completeUploadResponse =
await s3Client.CompleteMultipartUploadAsync(completeRequest);
}
catch (Exception exception)
{
Console.WriteLine("Exception occurred: {0}", exception.Message);
AbortMultipartUploadRequest abortMPURequest = new
AbortMultipartUploadRequest
{
BucketName = existingBucketName,
Key = targetKeyName,
UploadId = initResponse.UploadId
};
s3Client.AbortMultipartUpload(abortMPURequest);
}
}
private static async Task CreateSampleObjUsingClientEncryptionKeyAsync(string
base64Key, IAmazonS3 s3Client)
{
// List to store upload part responses.
List<UploadPartResponse> uploadResponses = new
List<UploadPartResponse>();

// 1. Initialize.
InitiateMultipartUploadRequest initiateRequest = new
InitiateMultipartUploadRequest
{
BucketName = existingBucketName,
Key = sourceKeyName,
ServerSideEncryptionCustomerMethod =
ServerSideEncryptionCustomerMethod.AES256,
ServerSideEncryptionCustomerProvidedKey = base64Key
};

InitiateMultipartUploadResponse initResponse =
await s3Client.InitiateMultipartUploadAsync(initiateRequest);

// 2. Upload Parts.
long contentLength = new FileInfo(filePath).Length;
long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB

try
{
long filePosition = 0;


for (int i = 1; filePosition < contentLength; i++)


{
UploadPartRequest uploadRequest = new UploadPartRequest
{
BucketName = existingBucketName,
Key = sourceKeyName,
UploadId = initResponse.UploadId,
PartNumber = i,
PartSize = partSize,
FilePosition = filePosition,
FilePath = filePath,
ServerSideEncryptionCustomerMethod =
ServerSideEncryptionCustomerMethod.AES256,
ServerSideEncryptionCustomerProvidedKey = base64Key
};

// Upload part and add response to our list.


uploadResponses.Add(await
s3Client.UploadPartAsync(uploadRequest));

filePosition += partSize;
}

// Step 3: complete.
CompleteMultipartUploadRequest completeRequest = new
CompleteMultipartUploadRequest
{
BucketName = existingBucketName,
Key = sourceKeyName,
UploadId = initResponse.UploadId,
//PartETags = new List<PartETag>(uploadResponses)

};
completeRequest.AddPartETags(uploadResponses);

CompleteMultipartUploadResponse completeUploadResponse =
await s3Client.CompleteMultipartUploadAsync(completeRequest);

}
catch (Exception exception)
{
Console.WriteLine("Exception occurred: {0}", exception.Message);
AbortMultipartUploadRequest abortMPURequest = new
AbortMultipartUploadRequest
{
BucketName = existingBucketName,
Key = sourceKeyName,
UploadId = initResponse.UploadId
};
await s3Client.AbortMultipartUploadAsync(abortMPURequest);
}
}
}
}

Protecting data using client-side encryption


Client-side encryption is the act of encrypting data before sending it to Amazon S3. To enable client-side
encryption, you have the following options:

• Use a customer master key (CMK) stored in AWS Key Management Service (AWS KMS).
• Use a master key that you store within your application.


The following AWS SDKs support client-side encryption:

• AWS SDK for .NET


• AWS SDK for Go
• AWS SDK for Java
• AWS SDK for PHP
• AWS SDK for Ruby
• AWS SDK for C++

Option 1: Using a CMK stored in AWS KMS


With this option, you use an AWS KMS CMK for client-side encryption when uploading or downloading
data in Amazon S3.

• When uploading an object — Using the CMK ID, the client first sends a request to AWS KMS for a
new symmetric key that it can use to encrypt your object data. AWS KMS returns two versions of a
randomly generated data key:
• A plaintext version of the data key that the client uses to encrypt the object data.
• A cipher blob of the same data key that the client uploads to Amazon S3 as object metadata.
Note
The client obtains a unique data key for each object that it uploads.
• When downloading an object — The client downloads the encrypted object from Amazon S3 along
with the cipher blob version of the data key stored as object metadata. The client then sends the
cipher blob to AWS KMS to get the plaintext version of the data key so that it can decrypt the object
data.

For more information about AWS KMS, see What is AWS Key Management Service? in the AWS Key
Management Service Developer Guide.

Example

The following code example demonstrates how to upload an object to Amazon S3 using AWS KMS with
the AWS SDK for Java. The example uses an AWS KMS CMK to encrypt data on the client side before
uploading it to Amazon S3. If you already have a CMK, you can use that by specifying the value of the
keyId variable in the example code. If you don't have a CMK, or you need another one, you can generate
one through the Java API. The example code automatically generates a CMK to use.

For instructions on creating and testing a working example, see Testing the Amazon S3 Java Code
Examples (p. 950).

AWSKMS kmsClient = AWSKMSClientBuilder.standard()


.withRegion(Regions.DEFAULT_REGION)
.build();

// create a CMK for testing this example


CreateKeyRequest createKeyRequest = new CreateKeyRequest();
CreateKeyResult createKeyResult = kmsClient.createKey(createKeyRequest);

// --
// specify an Amazon KMS customer master key (CMK) ID
String keyId = createKeyResult.getKeyMetadata().getKeyId();

String s3ObjectKey = "EncryptedContent1.txt";


String s3ObjectContent = "This is the 1st content to encrypt";
// --


AmazonS3EncryptionV2 s3Encryption = AmazonS3EncryptionClientV2Builder.standard()


.withRegion(Regions.US_WEST_2)
.withCryptoConfiguration(new
CryptoConfigurationV2().withCryptoMode(CryptoMode.StrictAuthenticatedEncryption))
.withEncryptionMaterialsProvider(new KMSEncryptionMaterialsProvider(keyId))
.build();

s3Encryption.putObject(bucket_name, s3ObjectKey, s3ObjectContent);


System.out.println(s3Encryption.getObjectAsString(bucket_name, s3ObjectKey));

// schedule deletion of CMK generated for testing


ScheduleKeyDeletionRequest scheduleKeyDeletionRequest =
new
ScheduleKeyDeletionRequest().withKeyId(keyId).withPendingWindowInDays(7);
kmsClient.scheduleKeyDeletion(scheduleKeyDeletionRequest);

s3Encryption.shutdown();
kmsClient.shutdown();

Option 2: Using a master key stored within your application


With this option, you use a master key that is stored within your application for client-side data
encryption.
Important
Your client-side master keys and your unencrypted data are never sent to AWS. It's important
that you safely manage your encryption keys. If you lose them, you can't decrypt your data.

This is how it works:

• When uploading an object — You provide a client-side master key to the Amazon S3 encryption
client. The client uses the master key only to encrypt the data encryption key that it generates
randomly.

The following steps describe the process:


1. The Amazon S3 encryption client generates a one-time-use symmetric key (also known as a data
encryption key or data key) locally. It uses the data key to encrypt the data of a single Amazon S3
object. The client generates a separate data key for each object.
2. The client encrypts the data encryption key using the master key that you provide. The client
uploads the encrypted data key and its material description as part of the object metadata. The
client uses the material description to determine which client-side master key to use for decryption.
3. The client uploads the encrypted data to Amazon S3 and saves the encrypted data key as object
metadata (x-amz-meta-x-amz-key) in Amazon S3.
• When downloading an object — The client downloads the encrypted object from Amazon S3. Using
the material description from the object's metadata, the client determines which master key to use to
decrypt the data key. The client uses that master key to decrypt the data key and then uses the data
key to decrypt the object.

The client-side master key that you provide can be either a symmetric key or a public/private key pair.
The following code examples show how to use each type of key.

For more information, see Client-Side Data Encryption with the AWS SDK for Java and Amazon S3.
Note
If you get a cipher-encryption error message when you use the encryption API for the first time,
your version of the JDK might have a Java Cryptography Extension (JCE) jurisdiction policy file
that limits the maximum key length for encryption and decryption transformations to 128 bits.
The AWS SDK requires a maximum key length of 256 bits.


To check your maximum key length, use the getMaxAllowedKeyLength() method of
the javax.crypto.Cipher class. To remove the key-length restriction, install the Java
Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files.
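
For example, the following small Java snippet prints the maximum AES key length that the current
JVM's JCE policy allows; a value of 2147483647 (Integer.MAX_VALUE) means the key length is
unrestricted.

import javax.crypto.Cipher;

public class MaxKeyLengthCheck {
    public static void main(String[] args) throws Exception {
        // Prints the maximum allowed AES key length (in bits) under the installed
        // JCE jurisdiction policy. The SDK encryption client requires at least 256.
        System.out.println("Max allowed AES key length: "
                + Cipher.getMaxAllowedKeyLength("AES") + " bits");
    }
}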

Example

The following code example shows how to do these tasks:

• Generate a 256-bit AES key.


• Use the AES key to encrypt data on the client side before sending it to Amazon S3.
• Use the AES key to decrypt data received from Amazon S3.
• Print out a string representation of the decrypted object.

For instructions on creating and testing a working example, see Testing the Amazon S3 Java Code
Examples (p. 950).

KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");


keyGenerator.init(256);

// --
// generate a symmetric encryption key for testing
SecretKey secretKey = keyGenerator.generateKey();

String s3ObjectKey = "EncryptedContent2.txt";


String s3ObjectContent = "This is the 2nd content to encrypt";
// --

AmazonS3EncryptionV2 s3Encryption = AmazonS3EncryptionClientV2Builder.standard()


.withRegion(Regions.DEFAULT_REGION)
.withClientConfiguration(new ClientConfiguration())
.withCryptoConfiguration(new
CryptoConfigurationV2().withCryptoMode(CryptoMode.AuthenticatedEncryption))
.withEncryptionMaterialsProvider(new StaticEncryptionMaterialsProvider(new
EncryptionMaterials(secretKey)))
.build();

s3Encryption.putObject(bucket_name, s3ObjectKey, s3ObjectContent);


System.out.println(s3Encryption.getObjectAsString(bucket_name, s3ObjectKey));
s3Encryption.shutdown();

Example

The following code example shows how to do these tasks:

• Generate a 2048-bit RSA key pair for testing purposes.


• Use the RSA keys to encrypt data on the client side before sending it to Amazon S3.
• Use the RSA keys to decrypt data received from Amazon S3.
• Print out a string representation of the decrypted object.

For instructions on creating and testing a working example, see Testing the Amazon S3 Java Code
Examples (p. 950).

KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("RSA");


keyPairGenerator.initialize(2048);

// --
// generate an asymmetric key pair for testing


KeyPair keyPair = keyPairGenerator.generateKeyPair();

String s3ObjectKey = "EncryptedContent3.txt";


String s3ObjectContent = "This is the 3rd content to encrypt";
// --

AmazonS3EncryptionV2 s3Encryption = AmazonS3EncryptionClientV2Builder.standard()


.withRegion(Regions.US_WEST_2)
.withCryptoConfiguration(new
CryptoConfigurationV2().withCryptoMode(CryptoMode.StrictAuthenticatedEncryption))
.withEncryptionMaterialsProvider(new StaticEncryptionMaterialsProvider(new
EncryptionMaterials(keyPair)))
.build();

s3Encryption.putObject(bucket_name, s3ObjectKey, s3ObjectContent);


System.out.println(s3Encryption.getObjectAsString(bucket_name, s3ObjectKey));
s3Encryption.shutdown();

Internetwork traffic privacy


This topic describes how Amazon S3 secures connections from the service to other locations.

Traffic between service and on-premises clients and applications

You have multiple connectivity options between your private network and AWS:

• An AWS Site-to-Site VPN connection. For more information, see What is AWS Site-to-Site VPN?
• An AWS Direct Connect connection. For more information, see What is AWS Direct Connect?
• An AWS PrivateLink connection. For more information, see AWS PrivateLink for Amazon S3 (p. 202).

Access to Amazon S3 via the network is through AWS published APIs. Clients must support Transport
Layer Security (TLS) 1.0. We recommend TLS 1.2 or above. Clients must also support cipher suites
with Perfect Forward Secrecy (PFS), such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Diffie-
Hellman Ephemeral (ECDHE). Most modern systems such as Java 7 and later support these modes.
Additionally, you must sign requests using an access key ID and a secret access key that are associated
with an IAM principal, or you can use the AWS Security Token Service (STS) to generate temporary
security credentials to sign requests.
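
As a minimal sketch of the last point, the following Java example requests temporary security
credentials from AWS STS and uses them to sign Amazon S3 requests. The Region and the
3600-second session duration are placeholder choices for the example.

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
import com.amazonaws.services.securitytoken.model.Credentials;
import com.amazonaws.services.securitytoken.model.GetSessionTokenRequest;

public class TemporaryCredentialsForS3 {
    public static void main(String[] args) {
        // Request temporary security credentials from AWS STS.
        AWSSecurityTokenService sts = AWSSecurityTokenServiceClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .build();
        Credentials temporary = sts.getSessionToken(
                new GetSessionTokenRequest().withDurationSeconds(3600)).getCredentials();

        // Sign Amazon S3 requests with the temporary credentials.
        BasicSessionCredentials sessionCredentials = new BasicSessionCredentials(
                temporary.getAccessKeyId(),
                temporary.getSecretAccessKey(),
                temporary.getSessionToken());

        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .withCredentials(new AWSStaticCredentialsProvider(sessionCredentials))
                .build();

        s3.listBuckets().forEach(bucket -> System.out.println(bucket.getName()));
    }
}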

Traffic between AWS resources in the same Region


A virtual private cloud (VPC) endpoint for Amazon S3 is a logical entity within a VPC that allows
connectivity only to Amazon S3. The VPC routes requests to Amazon S3 and routes responses back to
the VPC. For more information, see VPC Endpoints in the VPC User Guide. For example bucket policies
that you can use to control S3 bucket access from VPC endpoints, see Controlling access from VPC
endpoints with bucket policies (p. 321).

AWS PrivateLink for Amazon S3


With AWS PrivateLink for Amazon S3, you can provision interface VPC endpoints (interface endpoints)
in your virtual private cloud (VPC) instead of connecting over the internet. These endpoints are directly
accessible from applications that are on premises or in a different AWS Region.


Interface endpoints are represented by one or more elastic network interfaces (ENIs) that are assigned
private IP addresses from subnets in your VPC. Requests that are made to interface endpoints for
Amazon S3 are automatically routed to Amazon S3 on the Amazon network. You can also access
interface endpoints in your VPC from on-premises applications through AWS Direct Connect or AWS
Virtual Private Network (AWS VPN). For more information about how to connect your VPC with your on-
premises network, see the AWS Direct Connect User Guide and the AWS Site-to-Site VPN User Guide.

For general information about interface endpoints, see Interface VPC endpoints (AWS PrivateLink).

Topics
• Types of VPC endpoints for Amazon S3 (p. 203)
• Accessing Amazon S3 interface endpoints (p. 203)
• Accessing buckets and S3 access points from S3 interface endpoints (p. 204)
• Updating an on-premises DNS configuration (p. 206)
• Creating a VPC endpoint policy for Amazon S3 (p. 207)

Types of VPC endpoints for Amazon S3


You can use two types of VPC endpoints to access Amazon S3: gateway endpoints and interface
endpoints. A gateway endpoint is a gateway that you specify in your route table to access Amazon S3
from your VPC over the Amazon network. Interface endpoints extend the functionality of gateway
endpoints by using private IP addresses to route requests to Amazon S3 from within your VPC, on
premises, or from a different AWS Region. Interface endpoints are compatible with gateway endpoints. If
you have an existing gateway endpoint in the VPC, you can use both types of endpoints in the same VPC.

Gateway endpoints for Amazon S3:
• Your network traffic remains on the Amazon network.
• Use Amazon S3 public IP addresses.
• Do not allow access from on premises.
• Do not allow access from another AWS Region.
• Not billed.

Interface endpoints for Amazon S3:
• Your network traffic remains on the Amazon network.
• Use private IP addresses from your VPC to access Amazon S3.
• Allow access from on premises.
• Allow access from another AWS Region.
• Billed.

For more information about gateway endpoints, see Gateway VPC endpoints in the Amazon VPC User
Guide.

Accessing Amazon S3 interface endpoints


Important
To access Amazon S3 using AWS PrivateLink, you must update your applications to use
endpoint-specific DNS names.

When you create an interface endpoint, Amazon S3 generates two types of endpoint-specific, S3 DNS
names: regional and zonal.

• Regional DNS names include a unique VPC endpoint ID, a service identifier, the AWS
Region, and vpce.amazonaws.com in their names. For example, for VPC endpoint ID
vpce-1a2b3c4d, the DNS name generated might be similar to
vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com.


• Zonal DNS names include the Availability Zone—for example,
vpce-1a2b3c4d-5e6f-us-east-1a.s3.us-east-1.vpce.amazonaws.com. You might use this option if your
architecture isolates Availability Zones. For example, you could use it for fault containment or to
reduce regional data transfer costs.

Endpoint-specific S3 DNS names can be resolved from the S3 public DNS domain.
Note
Amazon S3 interface endpoints do not support the private DNS feature of interface endpoints.
For more information about Private DNS for interface endpoints, see the Amazon VPC User
Guide.

Accessing buckets and S3 access points from S3 interface endpoints

You can use the AWS CLI or AWS SDK to access buckets, S3 access points, and S3 control APIs through
S3 interface endpoints.

The following image shows the VPC console Details tab, where you can find the DNS name of a VPC
endpoint. In this example, the VPC endpoint ID (vpce-id) is vpce-0e25b8cdd720f900e and the DNS
name is vpce-0e25b8cdd720f900e-argc85vg.s3.us-east-1.vpce.amazonaws.com.

For more about how to view your endpoint-specific DNS names, see Viewing endpoint service private
DNS name configuration in the Amazon VPC User Guide.

AWS CLI examples


Use the --endpoint-url parameter to access S3 buckets, S3 access points, or S3 control APIs through
S3 interface endpoints. Replace the placeholder names and endpoint URLs with values appropriate for
your environment.

Example: Using the endpoint URL to list objects in your bucket

aws s3 --endpoint-url https://bucket.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com ls s3://my-bucket/

Example: Using the endpoint URL to list objects from an access point

aws s3api list-objects-v2 --bucket arn:aws:s3:us-east-1:123456789012:accesspoint/test --endpoint-url https://accesspoint.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com

Example: Using the endpoint URL to list jobs with S3 control

aws s3control --endpoint-url https://control.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com list-jobs --account-id 12345678

AWS SDK examples


Update your SDKs to the latest version, and configure your clients to use an endpoint URL for accessing
a bucket, access point, or S3 control API through S3 interface endpoints. Replace the placeholder
resource names and endpoint URLs with values appropriate for your environment.


SDK for Python (Boto3)

Example: Use an endpoint URL to access an S3 bucket

s3_client = session.client(
service_name='s3',
endpoint_url='https://bucket.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com'
)

Example: Use an endpoint URL to access an S3 access point

ap_client = session.client(
service_name='s3',
endpoint_url='https://accesspoint.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com'
)

Example: Use an endpoint URL to access the S3 control API

control_client = session.client(
service_name='s3control',
endpoint_url='https://control.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com'
)

SDK for Java 1.x

Example: Use an endpoint URL to access an S3 bucket

// bucket client
final AmazonS3 s3 = AmazonS3ClientBuilder.standard().withEndpointConfiguration(
new AwsClientBuilder.EndpointConfiguration(
"https://bucket.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com",
Regions.DEFAULT_REGION.getName()
)
).build();
List<Bucket> buckets = s3.listBuckets();

Example: Use an endpoint URL to access an S3 access point

// accesspoint client
final AmazonS3 s3accesspoint =
AmazonS3ClientBuilder.standard().withEndpointConfiguration(
new AwsClientBuilder.EndpointConfiguration(
"https://accesspoint.vpce-1a2b3c4d-5e6f.s3.us-
east-1.vpce.amazonaws.com",
Regions.DEFAULT_REGION.getName()
)
).build();
ObjectListing objects = s3accesspoint.listObjects("arn:aws:s3:us-east-1:123456789012:accesspoint/prod");

Example: Use an endpoint URL to access the S3 control API

// control client
final AWSS3Control s3control = AWSS3ControlClient.builder().withEndpointConfiguration(
new AwsClientBuilder.EndpointConfiguration(
"https://control.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com",
Regions.DEFAULT_REGION.getName()
)
).build();
final ListJobsResult jobs = s3control.listJobs(new ListJobsRequest());


SDK for Java 2.x

Example: Use an endpoint URL to access an S3 bucket

// bucket client
Region region = Region.US_EAST_1;
s3Client = S3Client.builder().region(region)
.endpointOverride(URI.create("https://bucket.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com"))
.build()

Example: Use an endpoint URL to access an S3 access point

// accesspoint client
Region region = Region.US_EAST_1;
s3Client = S3Client.builder().region(region)
.endpointOverride(URI.create("https://accesspoint.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com"))
.build()

Example: Use an endpoint URL to access the S3 control API

// control client
Region region = Region.US_EAST_1;
s3ControlClient = S3ControlClient.builder().region(region)
.endpointOverride(URI.create("https://control.vpce-1a2b3c4d-5e6f.s3.us-east-1.vpce.amazonaws.com"))
.build()

Updating an on-premises DNS configuration


When using endpoint-specific DNS names to access the interface endpoints for Amazon S3, you don’t
have to update your on-premises DNS resolver. You can resolve the endpoint-specific DNS name with the
private IP address of the interface endpoint from the public Amazon S3 DNS domain.

Using interface endpoints to access Amazon S3 without a gateway endpoint or an internet gateway in
the VPC

Interface endpoints in your VPC can route both in-VPC applications and on-premises applications to
Amazon S3 over the Amazon network, as illustrated in the following diagram.


The diagram illustrates the following:

• Your on-premises network uses AWS Direct Connect or AWS VPN to connect to VPC A.
• Your applications on-premises and in VPC A use endpoint-specific DNS names to access Amazon S3
through the S3 interface endpoint.
• On-premises applications send data to the interface endpoint in the VPC through AWS Direct Connect
(or AWS VPN). AWS PrivateLink moves the data from the interface endpoint to Amazon S3 over the
Amazon network.
• In-VPC applications also send traffic to the interface endpoint. AWS PrivateLink moves the data from
the interface endpoint to Amazon S3 over the Amazon network.

Using gateway endpoints and interface endpoints together in the same VPC to access Amazon S3

You can create interface endpoints and retain the existing gateway endpoint in the same VPC, as the
following diagram shows. By doing this, you allow in-VPC applications to continue accessing Amazon
S3 through the gateway endpoint, which is not billed. Then, only your on-premises applications would
use interface endpoints to access Amazon S3. To access S3 this way, you must update your on-premises
applications to use endpoint-specific DNS names for Amazon S3.

The diagram illustrates the following:

• On-premises applications use endpoint-specific DNS names to send data to the interface endpoint
within the VPC through AWS Direct Connect (or AWS VPN). AWS PrivateLink moves the data from the
interface endpoint to Amazon S3 over the Amazon network.
• Using default regional Amazon S3 names, in-VPC applications send data to the gateway endpoint that
connects to Amazon S3 over the Amazon network.

For more information about gateway endpoints, see Gateway VPC endpoints in the Amazon VPC User
Guide.

Creating a VPC endpoint policy for Amazon S3


You can attach an endpoint policy to your VPC endpoint that controls access to Amazon S3. The policy
specifies the following information:

• The AWS Identity and Access Management (IAM) principal that can perform actions
• The actions that can be performed
• The resources on which actions can be performed


You can also use Amazon S3 bucket policies to restrict access to specific buckets from a specific VPC
endpoint using the aws:sourceVpce condition in your bucket policy. The following examples show
policies that restrict access to a bucket or to an endpoint.

• Example: Restricting access to a specific bucket from a VPC endpoint


• Example: Restricting access to buckets in a specific account from a VPC endpoint
• Example: Restricting access to a specific VPC endpoint in the S3 bucket policy

Important

• When applying the Amazon S3 bucket policies for VPC endpoints described in this section,
you might block your access to the bucket without intending to do so. Bucket permissions
that are intended to specifically limit bucket access to connections originating from your VPC
endpoint can block all connections to the bucket. For information about how to fix this issue,
see My bucket policy has the wrong VPC or VPC endpoint ID. How can I fix the policy so that I
can access the bucket? in the AWS Support Knowledge Center.
• Before using the following example policy, replace the VPC endpoint ID with an appropriate
value for your use case. Otherwise, you won't be able to access your bucket.
• This policy disables console access to the specified bucket, because console requests don't
originate from the specified VPC endpoint.

Example: Restricting access to a specific bucket from a VPC endpoint

You can create an endpoint policy that restricts access to specific Amazon S3 buckets only. This is useful
if you have other AWS services in your VPC that use buckets. The following endpoint policy restricts access
to DOC-EXAMPLE-BUCKET1 only. Replace DOC-EXAMPLE-BUCKET1 with the name of your bucket.

{
"Version": "2012-10-17",
"Id": "Policy1415115909151",
"Statement": [
{ "Sid": "Access-to-specific-bucket-only",
"Principal": "*",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET1",
"arn:aws:s3:::DOC-EXAMPLE-BUCKET1/*"]
}
]
}

Example: Restricting access to buckets in a specific account from a VPC endpoint

You can create a policy that restricts access only to the S3 buckets in a specific AWS account. Use this
to prevent clients within your VPC from accessing buckets that you do not own. The following example
creates a policy that restricts access to resources owned by a single AWS account ID, 111122223333.

{
"Statement": [
{
"Sid": "Access-to-bucket-in-specific-account-only",
"Principal": "",
"Action": [
"s3:GetObject",


"s3:PutObject"
],
"Effect": "Deny",
"Resource": "arn:aws:s3:::",
"Condition": {
"StringNotEquals": {
"s3:ResourceAccount": "111122223333"
}
}
}
]
}

Example: Restricting access to a specific VPC endpoint in the S3 bucket policy

The following Amazon S3 bucket policy allows access to a specific bucket, DOC-EXAMPLE-BUCKET2,
from endpoint vpce-1a2b3c4d only. The policy denies all access to the bucket if the specified endpoint
is not being used. The aws:sourceVpce condition is used to specify the endpoint and does not require
an Amazon Resource Name (ARN) for the VPC endpoint resource, only the endpoint ID. Replace DOC-
EXAMPLE-BUCKET2 and vpce-1a2b3c4d with a real bucket name and endpoint.

{
"Version": "2012-10-17",
"Id": "Policy1415115909152",
"Statement": [
{ "Sid": "Access-to-specific-VPCE-only",
"Principal": "*",
"Action": "s3:*",
"Effect": "Deny",
"Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET2",
"arn:aws:s3:::DOC-EXAMPLE-BUCKET2/*"],
"Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-1a2b3c4d"}}
}
]
}

For more policy examples, see Endpoints for Amazon S3 in the Amazon VPC User Guide.

For more information about VPC connectivity, see Network-to-Amazon VPC connectivity options in the
AWS whitepaper: Amazon Virtual Private Cloud Connectivity Options.

Identity and access management in Amazon S3


By default, all Amazon S3 resources—buckets, objects, and related subresources (for example,
lifecycle configuration and website configuration)—are private. Only the resource owner, the
AWS account that created it, can access the resource. The resource owner can optionally grant access
permissions to others by writing an access policy.

Amazon S3 offers access policy options broadly categorized as resource-based policies and user policies.
Access policies that you attach to your resources (buckets and objects) are referred to as resource-based
policies. For example, bucket policies and access control lists (ACLs) are resource-based policies. You can
also attach access policies to users in your account. These are called user policies. You can choose to use
resource-based policies, user policies, or some combination of these to manage permissions to your
Amazon S3 resources. The following sections provide general guidelines for managing permissions in
Amazon S3.

For more information about managing access to your Amazon S3 objects and buckets, see the topics
below.


Topics
• Overview of managing access (p. 210)
• Access policy guidelines (p. 215)
• How Amazon S3 authorizes a request (p. 219)
• Bucket policies and user policies (p. 226)
• Managing access with ACLs (p. 383)
• Using cross-origin resource sharing (CORS) (p. 397)
• Blocking public access to your Amazon S3 storage (p. 408)
• Managing data access with Amazon S3 access points (p. 418)
• Reviewing bucket access using Access Analyzer for S3 (p. 432)
• Controlling ownership of uploaded objects using S3 Object Ownership (p. 436)
• Verifying bucket ownership with bucket owner condition (p. 438)

Overview of managing access


When granting permissions in Amazon S3, you decide who is getting the permissions, which Amazon S3
resources they are getting permissions for, and the specific actions you want to allow on those resources.
The following sections provide an overview of Amazon S3 resources and how to determine the best
method to control access to them.

Topics
• Amazon S3 resources: buckets and objects (p. 210)
• Amazon S3 bucket and object ownership (p. 211)
• Resource operations (p. 212)
• Managing access to resources (p. 212)
• Which access control method should I use? (p. 215)

Amazon S3 resources: buckets and objects


In AWS, a resource is an entity that you can work with. In Amazon S3, buckets and objects are the
resources, and both have associated subresources.

Bucket subresources include the following:

• lifecycle – Stores lifecycle configuration information. For more information, see Managing your
storage lifecycle (p. 501).
• website – Stores website configuration information if you configure your bucket for website hosting.
For information, see Hosting a static website using Amazon S3 (p. 857).
• versioning – Stores versioning configuration. For more information, see PUT Bucket versioning in
the Amazon Simple Storage Service API Reference.
• policy and acl (access control list) – Store access permission information for the bucket.
• cors (cross-origin resource sharing) – Supports configuring your bucket to allow cross-origin requests.
For more information, see Using cross-origin resource sharing (CORS) (p. 397).
• object ownership – Enables the bucket owner to take ownership of new objects in the bucket,
regardless of who uploads them. For more information, see Controlling ownership of uploaded objects
using S3 Object Ownership (p. 436).
• logging – Enables you to request Amazon S3 to save bucket access logs.

Object subresources include the following:

• acl – Stores a list of access permissions on the object. For more information, see Managing access with
ACLs (p. 383).
• restore – Supports temporarily restoring an archived object. For more information, see POST Object
restore in the Amazon Simple Storage Service API Reference.

An object in the S3 Glacier storage class is an archived object. To access the object, you must first
initiate a restore request, which restores a copy of the archived object. In the request, you specify
the number of days that you want the restored copy to exist. For more information about archiving
objects, see Managing your storage lifecycle (p. 501).
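
As a rough illustration, when you initiate a restore through the AWS CLI you can pass the restore
parameters as JSON. The following is a minimal sketch of the request shape for the RestoreObject
operation; the number of days and the retrieval tier shown are placeholder values.

{
    "Days": 10,
    "GlacierJobParameters": {
        "Tier": "Standard"
    }
}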

Amazon S3 bucket and object ownership


Buckets and objects are Amazon S3 resources. By default, only the resource owner can access these
resources. The resource owner refers to the AWS account that creates the resource. For example:

• The AWS account that you use to create buckets and upload objects owns those resources.
• If you upload an object using AWS Identity and Access Management (IAM) user or role credentials, the
AWS account that the user or role belongs to owns the object.
• A bucket owner can grant cross-account permissions to another AWS account (or users in another
account) to upload objects. In this case, the AWS account that uploads objects owns those objects. The
bucket owner does not have permissions on the objects that other accounts own, with the following
exceptions:
• The bucket owner pays the bills. The bucket owner can deny access to any objects, or delete any
objects in the bucket, regardless of who owns them.
• The bucket owner can archive any objects or restore archived objects regardless of who owns them.
Archival refers to the storage class used to store the objects. For more information, see Managing
your storage lifecycle (p. 501).

Ownership and request authentication


All requests to a bucket are either authenticated or unauthenticated. Authenticated requests must
include a signature value that authenticates the request sender, and unauthenticated requests do not.
For more information about request authentication, see Making requests (p. 900).

A bucket owner can allow unauthenticated requests. For example, unauthenticated PUT Object
requests are allowed when a bucket has a public bucket policy, or when a bucket ACL grants WRITE or
FULL_CONTROL access to the All Users group or the anonymous user specifically. For more information
about public bucket policies and public access control lists (ACLs), see The meaning of "public" (p. 411).

All unauthenticated requests are made by the anonymous user. This user is represented in ACLs by
the specific canonical user ID 65a011a29cdf8ec533ec3d1ccaae921c. If an object is uploaded to a
bucket through an unauthenticated request, the anonymous user owns the object. The default object
ACL grants FULL_CONTROL to the anonymous user as the object's owner. Therefore, Amazon S3 allows
unauthenticated requests to retrieve the object or modify its ACL.

To prevent objects from being modified by the anonymous user, we recommend that you do not
implement bucket policies that allow anonymous public writes to your bucket or use ACLs that allow
the anonymous user write access to your bucket. You can enforce this recommended behavior by using
Amazon S3 Block Public Access.
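As an illustration, Block Public Access is controlled by four independent settings that you can apply at
the bucket or account level. The following JSON is a minimal sketch of the configuration shape used with
the PutPublicAccessBlock operation; turning on all four settings, as shown, is the most restrictive
combination.

{
    "BlockPublicAcls": true,
    "IgnorePublicAcls": true,
    "BlockPublicPolicy": true,
    "RestrictPublicBuckets": true
}
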

For more information about blocking public access, see Blocking public access to your Amazon S3
storage (p. 408). For more information about ACLs, see Managing access with ACLs (p. 383).
Important
We recommend that you don't use the AWS account root user credentials to make authenticated
requests. Instead, create an IAM user and grant that user full access. We refer to these users as
administrator users. You can use the administrator user credentials, instead of the AWS account root
user credentials, to interact with AWS and perform tasks such as creating a bucket, creating users,
and granting permissions. For more information, see AWS account root user credentials and IAM
user credentials in the AWS General Reference and Security best practices in IAM in the IAM User
Guide.

Resource operations
Amazon S3 provides a set of operations to work with the Amazon S3 resources. For a list of available
operations, see Actions defined by Amazon S3 (p. 243).

Managing access to resources


Managing access refers to granting others (AWS accounts and users) permission to perform the resource
operations by writing an access policy. For example, you can grant PUT Object permission to a user
in an AWS account so the user can upload objects to your bucket. In addition to granting permissions
to individual users and accounts, you can grant permissions to everyone (also referred to as anonymous
access) or to all authenticated users (users with AWS credentials). For example, if you configure your
bucket as a website, you may want to make objects public by granting the GET Object permission to
everyone.
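
For that website scenario, a bucket policy along the following lines is a common minimal sketch. It
grants anonymous read access to every object in the bucket (DOC-EXAMPLE-BUCKET2 is a placeholder name),
so use it only for content that is truly meant to be public.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadForWebsite",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET2/*"
        }
    ]
}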

Access policy options


An access policy describes who has access to what. You can associate an access policy with a resource
(a bucket or an object) or with a user. Accordingly, you can categorize the available Amazon S3 access policies as
follows:

• Resource-based policies – Bucket policies and access control lists (ACLs) are resource-based because
you attach them to your Amazon S3 resources.

• ACL – Each bucket and object has an ACL associated with it. An ACL is a list of grants identifying the
grantee and the permission granted. You use ACLs to grant basic read/write permissions to other AWS
accounts. ACLs use an Amazon S3–specific XML schema.

The following is an example bucket ACL. The grant in the ACL shows the bucket owner as having full
control permission.

<?xml version="1.0" encoding="UTF-8"?>
<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Owner>
        <ID>*** Owner-Canonical-User-ID ***</ID>
        <DisplayName>owner-display-name</DisplayName>
    </Owner>
    <AccessControlList>
        <Grant>
            <Grantee xmlns:xsi="http://www.w3.org/2