
Amazon Simple Storage Service

User Guide
API Version 2006-03-01

Amazon Simple Storage Service: User Guide


Copyright © Amazon Web Services, Inc. and/or its affiliates. All rights reserved.

Amazon's trademarks and trade dress may not be used in connection with any product or service that is not
Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or
discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may
or may not be affiliated with, connected to, or sponsored by Amazon.

Table of Contents
What is Amazon S3? ........................................................................................................................... 1
How do I...? ............................................................................................................................... 1
Advantages of using Amazon S3 .................................................................................................. 2
Amazon S3 concepts .................................................................................................................. 2
Buckets ............................................................................................................................. 2
Objects ............................................................................................................................. 3
Keys ................................................................................................................................. 3
Regions ............................................................................................................................. 3
Amazon S3 data consistency model ...................................................................................... 3
Amazon S3 features ................................................................................................................... 5
Storage classes .................................................................................................................. 6
Bucket policies ................................................................................................................... 6
AWS Identity and Access Management .................................................................................. 7
Access control lists ............................................................................................................. 7
Versioning ......................................................................................................................... 7
Operations ........................................................................................................................ 7
Amazon S3 application programming interfaces (API) ..................................................................... 7
The REST interface ............................................................................................................. 8
The SOAP interface ............................................................................................................ 8
Paying for Amazon S3 ................................................................................................................ 8
Related services ......................................................................................................................... 8
Getting started ................................................................................................................................ 10
Setting up ............................................................................................................................... 10
Sign up for AWS .............................................................................................................. 11
Create an IAM user ........................................................................................................... 11
Sign in as an IAM user ...................................................................................................... 12
Step 1: Create a bucket ............................................................................................................. 12
Step 2: Upload an object .......................................................................................................... 13
Step 3: Download an object ...................................................................................................... 14
Using the S3 console ........................................................................................................ 14
Step 4: Copy an object ............................................................................................................. 15
Step 5: Delete the objects and bucket ........................................................................................ 15
Deleting an object ............................................................................................................ 16
Emptying your bucket ....................................................................................................... 16
Deleting your bucket ........................................................................................................ 16
Where do I go from here? ......................................................................................................... 17
Common use scenarios ...................................................................................................... 17
Considerations ................................................................................................................. 17
Advanced features ............................................................................................................ 18
Access control .................................................................................................................. 19
Development resources ..................................................................................................... 23
Working with buckets ....................................................................................................................... 24
Buckets overview ...................................................................................................................... 24
About permissions ............................................................................................................ 25
Managing public access to buckets ..................................................................................... 25
Bucket configuration ......................................................................................................... 26
Naming rules ........................................................................................................................... 27
Example bucket names ..................................................................................................... 28
Creating a bucket ..................................................................................................................... 28
Viewing bucket properties ......................................................................................................... 33
Accessing a bucket ................................................................................................................... 33
Virtual-hosted–style access ................................................................................................ 34
Path-style access .............................................................................................................. 34
Accessing an S3 bucket over IPv6 ....................................................................................... 34

Accessing a bucket through S3 access points ....................................................... 35
Accessing a bucket using S3:// ........................................................................................... 35
Emptying a bucket ................................................................................................................... 35
Deleting a bucket ..................................................................................................................... 37
Setting default bucket encryption .............................................................................................. 40
Using encryption for cross-account operations ..................................................................... 40
Using default encryption with replication ............................................................................ 41
Using Amazon S3 Bucket Keys with default encryption ......................................................... 41
Enabling default encryption ............................................................................................... 41
Monitoring default encryption ........................................................................................... 44
Configuring Transfer Acceleration ............................................................................................... 44
Why use Transfer Acceleration? .......................................................................................... 44
Requirements for using Transfer Acceleration ....................................................................... 45
Getting Started ................................................................................................................ 45
Enabling Transfer Acceleration ........................................................................................... 47
Speed Comparison tool ..................................................................................................... 51
Using Requester Pays ................................................................................................................ 52
How Requester Pays charges work ...................................................................................... 52
Configuring Requester Pays ............................................................................................... 53
Retrieving the requestPayment configuration ....................................................................... 54
Downloading objects in Requester Pays buckets ................................................................... 54
Restrictions and limitations ....................................................................................................... 55
Working with objects ........................................................................................................................ 57
Objects ................................................................................................................................... 57
Subresources .................................................................................................................... 58
Creating object keys ................................................................................................................. 58
Object key naming guidelines ............................................................................................ 59
Working with metadata ............................................................................................................. 61
System-defined object metadata ........................................................................................ 62
User-defined object metadata ............................................................................................ 63
Editing object metadata .................................................................................................... 64
Uploading objects .................................................................................................................... 66
Using multipart upload ............................................................................................................. 74
Multipart upload process ................................................................................................... 74
Concurrent multipart upload operations .............................................................................. 75
Multipart upload and pricing ............................................................................................. 76
API support for multipart upload ....................................................................................... 76
AWS Command Line Interface support for multipart upload .................................................. 76
AWS SDK support for multipart upload ............................................................................... 76
Multipart upload API and permissions ................................................................................. 77
Configuring a lifecycle policy ............................................................................................. 79
Uploading an object using multipart upload ........................................................................ 80
Uploading a directory ....................................................................................................... 93
Listing multipart uploads .................................................................................................. 95
Tracking a multipart upload .............................................................................................. 97
Aborting a multipart upload .............................................................................................. 99
Copying an object .......................................................................................................... 103
Multipart upload limits .................................................................................................... 107
Copying objects ...................................................................................................................... 108
To copy an object ........................................................................................................... 109
Downloading an object ........................................................................................................... 115
Deleting objects ..................................................................................................................... 121
Programmatically deleting objects from a version-enabled bucket ........................................ 121
Deleting objects from an MFA-enabled bucket .................................................................... 122
Deleting a single object ................................................................................................... 122
Deleting multiple objects ................................................................................................. 129
Organizing and listing objects .................................................................................................. 141

Using prefixes ................................................................ 141
Listing objects ................................................................................................................ 143
Using folders ................................................................................................................. 147
Viewing an object overview ............................................................................................. 149
Viewing object properties ................................................................................................ 150
Using presigned URLs ............................................................................................................. 150
Limiting presigned URL capabilities ................................................................................... 150
Sharing an object with a presigned URL ............................................................................ 151
Uploading objects using presigned URLs ............................................................................ 155
Transforming objects .............................................................................................................. 161
Creating Object Lambda Access Points .............................................................................. 162
Configuring IAM policies .................................................................................................. 166
Writing Lambda functions ............................................................................................... 168
Using AWS built functions ............................................................................................... 179
Best practices and guidelines for S3 Object Lambda ........................................................... 181
Security considerations .................................................................................................... 182
Working with access points .............................................................................................................. 184
Configuring IAM policies .......................................................................................................... 184
Condition keys ............................................................................................................... 185
Delegating access control to access points ......................................................................... 185
Access point policy examples ........................................................................................... 186
Creating access points ............................................................................................................. 189
Rules for naming Amazon S3 access points ........................................................................ 189
Creating an access point .................................................................................................. 189
Creating access points restricted to a VPC ......................................................................... 190
Managing public access ................................................................................................... 192
Using access points ................................................................................................................. 193
Monitoring and logging ................................................................................................... 193
Managing access points ................................................................................................... 194
Using a bucket-style alias for your access point .................................................................. 196
Using access points ......................................................................................................... 197
Restrictions and limitations ...................................................................................................... 200
Working with Multi-Region Access Points ........................................................................................... 201
Creating Multi-Region Access Points .......................................................................................... 202
Rules for naming Amazon S3 Multi-Region Access Points ..................................................... 203
Rules for choosing buckets for Amazon S3 Multi-Region Access Points ................................... 204
Blocking public access with Amazon S3 Multi-Region Access Points ....................................... 204
Creating Amazon S3 Multi-Region Access Points ................................................................. 205
Configuring AWS PrivateLink ............................................................................................ 206
Using a Multi-Region Access Point ............................................................................................ 208
Multi-Region Access Point hostnames ................................................................................ 209
Multi-Region Access Points and Amazon S3 Transfer Acceleration ......................................... 209
Multi-Region Access Point permissions .............................................................................. 210
Request routing .............................................................................................................. 211
Bucket replication ........................................................................................................... 212
Supported operations ..................................................................................................... 212
Managing Multi-Region Access Points ........................................................................................ 213
Monitoring and logging ........................................................................................................... 214
Monitoring and logging requests made to Multi-Region Access Point management APIs ........... 214
Using CloudTrail ............................................................................................................. 215
Restrictions and limitations ...................................................................................................... 216
Security ......................................................................................................................................... 218
Data protection ...................................................................................................................... 218
Data encryption ..................................................................................................................... 219
Server-side encryption .................................................................................................... 219
Using client-side encryption ............................................................................................. 261
Internetwork privacy ............................................................................................................... 265

Traffic between service and on-premises clients and applications .......................................... 265
Traffic between AWS resources in the same Region ............................................................. 266
AWS PrivateLink for Amazon S3 ............................................................................................... 266
Types of VPC endpoints .................................................................................................. 266
Restrictions and limitations of AWS PrivateLink for Amazon S3 ............................................. 267
Accessing Amazon S3 interface endpoints .......................................................................... 267
Accessing buckets and S3 access points from S3 interface endpoints ..................................... 267
Updating an on-premises DNS configuration ...................................................................... 271
Creating a VPC endpoint policy ........................................................................................ 272
Identity and access management .............................................................................................. 274
Overview ....................................................................................................................... 275
Access policy guidelines ................................................................................................... 280
Request authorization ..................................................................................................... 284
Bucket policies and user policies ....................................................................................... 291
AWS managed policies .................................................................................................... 458
Managing access with ACLs .............................................................................................. 460
Using CORS ................................................................................................................... 477
Blocking public access ..................................................................................................... 488
Reviewing bucket access .................................................................................................. 497
Controlling object ownership ........................................................................................... 502
Verifying bucket ownership .............................................................................................. 504
Logging and monitoring .......................................................................................................... 508
Compliance Validation ............................................................................................................. 509
Resilience .............................................................................................................................. 510
Backup encryption .......................................................................................................... 511
Infrastructure security ............................................................................................................. 512
Configuration and vulnerability analysis .................................................................................... 513
Security Best Practices ............................................................................................................ 514
Amazon S3 Preventative Security Best Practices ................................................................. 514
Amazon S3 Monitoring and Auditing Best Practices ............................................................ 516
Managing storage ........................................................................................................................... 519
Using S3 Versioning ................................................................................................................ 519
Unversioned, versioning-enabled, and versioning-suspended buckets ..................................... 519
Using S3 Versioning with S3 Lifecycle ............................................................................... 520
S3 Versioning ................................................................................................................. 520
Enabling versioning on buckets ........................................................................................ 523
Configuring MFA delete ................................................................................................... 528
Working with versioning-enabled objects ........................................................................... 529
Working with versioning-suspended objects ....................................................................... 546
Working with archived objects ......................................................................................... 549
Using Object Lock .................................................................................................................. 559
S3 Object Lock ............................................................................................................... 559
Configuring Object Lock on the console ............................................................................ 563
Managing Object Lock .................................................................................................... 564
Managing storage classes ........................................................................................................ 567
Frequently accessed objects ............................................................................................. 567
Automatically optimizing data with changing or unknown access patterns ............................. 567
Infrequently accessed objects ........................................................................................... 568
Archiving objects ............................................................................................................ 569
Amazon S3 on Outposts .................................................................................................. 569
Comparing storage classes ............................................................................................... 570
Setting the storage class of an object ............................................................................... 570
Amazon S3 Intelligent-Tiering .................................................................................................. 571
How S3 Intelligent-Tiering works ..................................................................................... 571
Using S3 Intelligent-Tiering ............................................................................................ 572
Managing S3 Intelligent-Tiering ...................................................................................... 576
Managing lifecycle .................................................................................................................. 578

Managing object lifecycle ................................................................ 578
Creating a lifecycle configuration ...................................................................................... 579
Transitioning objects ....................................................................................................... 579
Expiring objects .............................................................................................................. 584
Setting lifecycle configuration .......................................................................................... 584
Using other bucket configurations .................................................................................... 594
Lifecycle configuration elements ...................................................................................... 596
Examples of lifecycle configuration ................................................................................... 602
Managing inventory ................................................................................................................ 612
Amazon S3 inventory buckets .......................................................................................... 612
Inventory lists ................................................................................................................ 613
Configuring Amazon S3 inventory .................................................................................... 614
Setting up notifications for inventory completion ............................................................... 618
Locating your inventory .................................................................................................. 618
Querying inventory with Athena ....................................................................................... 621
Replicating objects .................................................................................................................. 623
Why use replication ........................................................................................................ 623
When to use Cross-Region Replication .............................................................................. 624
When to use Same-Region Replication .............................................................................. 624
Requirements for replication ............................................................................................ 624
What's replicated? ........................................................................................................... 625
Setting up replication ..................................................................................................... 627
Configuring replication .................................................................................................... 641
Additional configurations ................................................................................................. 666
Getting replication status ................................................................................................ 680
Troubleshooting ............................................................................................................. 682
Additional considerations ................................................................................................. 684
Using object tags ................................................................................................................... 685
API operations related to object tagging ........................................................................... 687
Additional configurations ................................................................................................. 688
Access control ................................................................................................................ 688
Managing object tags ...................................................................................................... 691
Using cost allocation tags ........................................................................................................ 695
More Info ...................................................................................................................... 696
Billing and usage reporting .............................................................................................. 696
Using Amazon S3 Select .......................................................................................................... 710
Requirements and limits .................................................................................................. 710
Constructing a request .................................................................................................... 711
Errors ............................................................................................................................ 711
S3 Select examples ......................................................................................................... 712
SQL Reference ............................................................................................................... 714
Using Batch Operations ........................................................................................................... 738
Batch Operations basics .................................................................................................. 738
Granting permissions ...................................................................................................... 739
Creating a job ................................................................................................................ 744
Supported operations ..................................................................................................... 751
Managing jobs ................................................................................................................ 774
Tracking job status and completion reports ....................................................................... 777
Using tags ..................................................................................................................... 786
Managing S3 Object Lock ................................................................................................ 797
Monitoring Amazon S3 .................................................................................................................... 814
Monitoring tools ..................................................................................................................... 814
Automated tools ............................................................................................................ 814
Manual tools .................................................................................................................. 815
Logging options ..................................................................................................................... 815
Logging with CloudTrail .......................................................................................................... 817
Using CloudTrail logs with Amazon S3 server access logs and CloudWatch Logs ...................... 817

CloudTrail tracking with Amazon S3 SOAP API calls ............................................................ 818
CloudTrail events ............................................................................................................ 818
Example log files ............................................................................................................ 822
Enabling CloudTrail ......................................................................................................... 826
Identifying S3 requests ................................................................................................... 827
Logging server access ............................................................................................................. 833
How do I enable log delivery? .......................................................................................... 833
Log object key format ..................................................................................................... 834
How are logs delivered? .................................................................................................. 834
Best effort server log delivery .......................................................................................... 835
Bucket logging status changes take effect over time ........................................................... 835
Enabling server access logging ......................................................................................... 835
Log format .................................................................................................................... 841
Deleting log files ............................................................................................................ 850
Identifying S3 requests ................................................................................................... 851
Monitoring metrics with CloudWatch ........................................................................................ 854
Metrics and dimensions ................................................................................................... 855
Accessing CloudWatch metrics .......................................................................................... 862
CloudWatch metrics configurations ................................................................................... 863
Amazon S3 Event Notifications ................................................................................................ 867
Overview ....................................................................................................................... 868
Notification types and destinations ................................................................................... 869
Granting permissions ...................................................................................................... 872
Enabling event notifications ............................................................................................. 874
Walkthrough: Configuring SNS or SQS .............................................................................. 877
Configuring notifications using object key name filtering ..................................................... 883
Event message structure ................................................................................................. 887
Using analytics and insights ............................................................................................................. 892
Storage Class Analysis ............................................................................................................. 892
How to set up storage class analysis ................................................................................. 892
Storage class analysis ...................................................................................................... 893
How can I export storage class analysis data? .................................................................... 894
Configuring storage class analysis ..................................................................................... 895
S3 Storage Lens ..................................................................................................................... 897
Understanding S3 Storage Lens ....................................................................................... 898
Working with Organizations ............................................................................................. 903
Setting permissions ........................................................................................................ 905
Viewing storage metrics .................................................................................................. 907
Using Amazon S3 Storage Lens to optimize your storage costs ............................................. 911
Metrics glossary ............................................................................................................. 914
Working with S3 Storage Lens ......................................................................................... 918
Tracing requests using X-Ray ................................................................................................... 943
How X-Ray works with Amazon S3 ................................................................................... 943
Available Regions ........................................................................................................... 943
Hosting a static website .................................................................................................................. 944
Website endpoints .................................................................................................................. 944
Website endpoint examples ............................................................................................. 945
Adding a DNS CNAME ..................................................................................................... 945
Using a custom domain with Route 53 .............................................................................. 946
Key differences between a website endpoint and a REST API endpoint ................................... 946
Enabling website hosting ......................................................................................................... 946
Configuring an index document ............................................................................................... 950
Index document and folders ............................................................................................ 950
Configure an index document .......................................................................................... 951
Configuring a custom error document ....................................................................................... 952
Amazon S3 HTTP response codes ..................................................................................... 952
Configuring a custom error document ............................................................................... 953

Setting permissions for website access ...................................................................................... 954
Step 1: Edit S3 Block Public Access settings ....................................................................... 955
Step 2: Add a bucket policy ............................................................................................. 956
Object access control lists ................................................................................................ 957
Logging web traffic ................................................................................................................. 957
Configuring a redirect ............................................................................................................. 958
Redirect requests to another host ..................................................................................... 958
Configure redirection rules ............................................................................................... 959
Redirect requests for an object ......................................................................................... 964
Example walkthroughs ............................................................................................................ 965
Configuring a static website ............................................................................................. 965
Configuring a static website using a custom domain ........................................................... 971
Developing with Amazon S3 ............................................................................................................ 987
Making requests ..................................................................................................................... 987
About access keys ........................................................................................................... 987
Request endpoints .......................................................................................................... 989
Making requests over IPv6 ............................................................................................... 989
Making requests using the AWS SDKs ............................................................................... 996
Making requests using the REST API ............................................................................... 1020
Using the AWS CLI ................................................................................................................ 1029
Using the AWS SDKs ............................................................................................................. 1030
Specifying the Signature Version in Request Authentication ............................................... 1031
Using the AWS SDK for Java .......................................................................................... 1037
Using the AWS SDK for .NET .......................................................................................... 1038
Using the AWS SDK for PHP and Running PHP Examples ................................................... 1039
Using the AWS SDK for Ruby - Version 3 ......................................................................... 1040
Using the AWS SDK for Python (Boto) ............................................................................. 1041
Using the AWS Mobile SDKs for iOS and Android .............................................................. 1042
Using the AWS Amplify JavaScript Library ....................................................................... 1042
Using the AWS SDK for JavaScript .................................................................................. 1042
Using the REST API ............................................................................................................... 1042
Request routing ............................................................................................................ 1043
Error handling ...................................................................................................................... 1047
The REST error response ................................................................................................ 1047
The SOAP error response ............................................................................................... 1048
Amazon S3 error best practices ...................................................................................... 1049
Reference ............................................................................................................................. 1049
Appendix a: Using the SOAP API ..................................................................................... 1050
Appendix b: Authenticating requests (AWS signature version 2) ........................................... 1052
Optimizing Amazon S3 performance ............................................................................................... 1081
Performance Guidelines ......................................................................................................... 1081
Measure Performance .................................................................................................... 1082
Scale Horizontally ......................................................................................................... 1082
Use Byte-Range Fetches ................................................................................................ 1082
Retry Requests ............................................................................................................. 1082
Combine Amazon S3 and Amazon EC2 in the Same Region ................................................ 1083
Use Transfer Acceleration to Minimize Latency ................................................................. 1083
Use the Latest AWS SDKs .............................................................................................. 1083
Performance Design Patterns ................................................................................................. 1083
Caching Frequently Accessed Content .............................................................................. 1084
Timeouts and Retries for Latency-Sensitive Apps .............................................................. 1084
Horizontal Scaling and Request Parallelization ................................................................. 1085
Accelerating Geographically Disparate Data Transfers ....................................................... 1086
Using S3 on Outposts ................................................................................................................... 1087
Getting started with S3 on Outposts ....................................................................................... 1087
Ordering an Outpost ..................................................................................................... 1088
Setting up S3 on Outposts ............................................................................................ 1088

Restrictions and limitations .................................................................................................... 1088
Specifications ............................................................................................................... 1088
Data consistency model ................................................................................................. 1089
Supported API operations .............................................................................................. 1089
Unsupported Amazon S3 features ................................................................................... 1090
Network restrictions ...................................................................................................... 1091
Using IAM with S3 on Outposts .............................................................................................. 1091
ARNs for S3 on Outposts ............................................................................................... 1092
Accessing S3 on Outposts ...................................................................................................... 1092
Accessing resources using ARNs ...................................................................................... 1093
Accessing S3 on Outposts using a VPC ............................................................................ 1094
Managing connections using cross-account elastic network interfaces .................................. 1095
Permissions for endpoints .............................................................................................. 1095
Encryption options ........................................................................................................ 1096
Monitoring S3 on Outposts .................................................................................................... 1097
Managing capacity ........................................................................................................ 1097
CloudTrail logs ............................................................................................................. 1097
S3 on Outposts event notifications ................................................................................. 1097
Managing S3 on Outposts buckets and objects ......................................................................... 1098
Using the console ......................................................................................................... 1098
Using the AWS CLI ........................................................................................................ 1107
Using the SDK for Java .................................................................................................. 1112
Troubleshooting ............................................................................................................................ 1130
Troubleshooting Amazon S3 by Symptom ................................................................................ 1130
Significant Increases in HTTP 503 Responses to Requests to Buckets with Versioning Enabled .. 1130
Unexpected Behavior When Accessing Buckets Set with CORS ............................................ 1131
Getting Amazon S3 Request IDs for AWS Support ..................................................................... 1131
Using HTTP to Obtain Request IDs .................................................................................. 1131
Using a Web Browser to Obtain Request IDs .................................................................... 1132
Using AWS SDKs to Obtain Request IDs ........................................................................... 1132
Using the AWS CLI to Obtain Request IDs ........................................................................ 1133
Related Topics ...................................................................................................................... 1133
Document history ......................................................................................................................... 1135
Earlier updates ..................................................................................................................... 1143
AWS glossary ............................................................................................................................... 1157


What is Amazon S3?


Amazon Simple Storage Service (Amazon S3) is storage for the Internet. It is designed to make web-scale
computing easier.

Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of
data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable,
reliable, fast, and inexpensive data storage infrastructure that Amazon uses to run its own global
network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to
developers.

This introduction to Amazon Simple Storage Service (Amazon S3) provides a detailed summary of this
web service. After reading this section, you should have a good idea of what it offers and how it can fit in
with your business.

This guide describes how you send requests to create buckets, store and retrieve your objects, and
manage permissions on your resources. The guide also describes access control and the authentication
process. Access control defines who can access objects and buckets within Amazon S3, and the type of
access (for example, READ and WRITE). The authentication process verifies the identity of a user who is
trying to access Amazon Web Services (AWS).

Topics
• How do I...? (p. 1)
• Advantages of using Amazon S3 (p. 2)
• Amazon S3 concepts (p. 2)
• Amazon S3 features (p. 5)
• Amazon S3 application programming interfaces (API) (p. 7)
• Paying for Amazon S3 (p. 8)
• Related services (p. 8)

How do I...?
Information – Relevant sections

General product overview and pricing – Amazon S3

How do I work with buckets? – Buckets overview (p. 24)

How do I work with access points? – Managing data access with Amazon S3 access points (p. 184)

How do I work with objects? – Amazon S3 objects overview (p. 57)

How do I make requests? – Making requests (p. 987)

How do I manage access to my resources? – Identity and access management in Amazon S3 (p. 274)


Advantages of using Amazon S3


Amazon S3 is intentionally built with a minimal feature set that focuses on simplicity and robustness.
Following are some of the advantages of using Amazon S3:

• Creating buckets – Create and name a bucket that stores data. Buckets are the fundamental
containers in Amazon S3 for data storage.
• Storing data – Store an infinite amount of data in a bucket. Upload as many objects as you like into
an Amazon S3 bucket. Each object can contain up to 5 TB of data. Each object is stored and retrieved
using a unique developer-assigned key.
• Downloading data – Download your data or enable others to do so. Download your data anytime you
like, or allow others to do the same.
• Permissions – Grant or deny access to others who want to upload or download data into your
Amazon S3 bucket. Grant upload and download permissions to three types of users. Authentication
mechanisms can help keep data secure from unauthorized access.
• Standard interfaces – Use standards-based REST and SOAP interfaces designed to work with any
internet-development toolkit.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or
the AWS SDKs.

Amazon S3 concepts
This section describes key concepts and terminology you need to understand to use Amazon S3
effectively. They are presented in the order that you will most likely encounter them.

Topics
• Buckets (p. 2)
• Objects (p. 3)
• Keys (p. 3)
• Regions (p. 3)
• Amazon S3 data consistency model (p. 3)

Buckets
A bucket is a container for objects stored in Amazon S3. Every object is contained in a bucket. For
example, if the object named photos/puppy.jpg is stored in the awsexamplebucket1 bucket in the
US West (Oregon) Region, then it is addressable using the URL https://awsexamplebucket1.s3.us-
west-2.amazonaws.com/photos/puppy.jpg.
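For illustration, the following minimal sketch retrieves that same example object with the AWS SDK for Python (Boto3). The bucket and key match the example URL above; in practice, substitute a bucket and object that exist in your account.

import boto3

# Create an S3 client for the Region that contains the bucket.
s3 = boto3.client("s3", region_name="us-west-2")

# An object is addressed by its bucket name and key.
response = s3.get_object(Bucket="awsexamplebucket1", Key="photos/puppy.jpg")
data = response["Body"].read()
print(len(data), "bytes retrieved")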

Buckets serve several purposes:

• They organize the Amazon S3 namespace at the highest level.


• They identify the account responsible for storage and data transfer charges.
• They play a role in access control.
• They serve as the unit of aggregation for usage reporting.

You can configure buckets so that they are created in a specific AWS Region. For more information, see
Accessing a Bucket (p. 33). You can also configure a bucket so that every time an object is added to it,
Amazon S3 generates a unique version ID and assigns it to the object. For more information, see Using
Versioning (p. 519).

For more information about buckets, see Buckets overview (p. 24).

Objects
Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata.
The data portion is opaque to Amazon S3. The metadata is a set of name-value pairs that describe
the object. These include some default metadata, such as the date last modified, and standard HTTP
metadata, such as Content-Type. You can also specify custom metadata at the time the object is
stored.

An object is uniquely identified within a bucket by a key (name) and a version ID. For more information,
see Keys (p. 3) and Using Versioning (p. 519).

Keys
A key is the unique identifier for an object within a bucket. Every object in a bucket has exactly
one key. The combination of a bucket, key, and version ID uniquely identifies each object. So you
can think of Amazon S3 as a basic data map between "bucket + key + version" and the object
itself. Every object in Amazon S3 can be uniquely addressed through the combination of the web
service endpoint, bucket name, key, and optionally, a version. For example, in the URL https://
doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl, "doc" is the name of the bucket and
"2006-03-01/AmazonS3.wsdl" is the key.

For more information about object keys, see Creating object key names (p. 58).

Regions
You can choose the geographical AWS Region where Amazon S3 will store the buckets that you create.
You might choose a Region to optimize latency, minimize costs, or address regulatory requirements.
Objects stored in a Region never leave the Region unless you explicitly transfer them to another Region.
For example, objects stored in the Europe (Ireland) Region never leave it.
Note
You can only access Amazon S3 and its features in AWS Regions that are enabled for your
account.

For a list of Amazon S3 Regions and endpoints, see Regions and Endpoints in the AWS General Reference.

Amazon S3 data consistency model


Amazon S3 provides strong read-after-write consistency for PUTs and DELETEs of objects in your
Amazon S3 bucket in all AWS Regions. This applies to both writes to new objects as well as PUTs that
overwrite existing objects and DELETEs. In addition, read operations on Amazon S3 Select, Amazon
S3 Access Control Lists, Amazon S3 Object Tags, and object metadata (e.g. HEAD object) are strongly
consistent.

Updates to a single key are atomic. For example, if you PUT to an existing key from one thread and
perform a GET on the same key from a second thread concurrently, you will get either the old data or the
new data, but never partial or corrupt data.

Amazon S3 achieves high availability by replicating data across multiple servers within AWS data centers.
If a PUT request is successful, your data is safely stored. Any read (GET or LIST) that is initiated following
the receipt of a successful PUT response will return the data written by the PUT. Here are examples of
this behavior:

• A process writes a new object to Amazon S3 and immediately lists keys within its bucket. The new
object will appear in the list.
• A process replaces an existing object and immediately tries to read it. Amazon S3 will return the new
data.
• A process deletes an existing object and immediately tries to read it. Amazon S3 will not return any
data as the object has been deleted.
• A process deletes an existing object and immediately lists keys within its bucket. The object will not
appear in the listing.
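The following minimal sketch illustrates this behavior with the AWS SDK for Python (Boto3); the bucket name is a placeholder for a bucket that you own. Because of strong read-after-write consistency, the GET that follows the successful PUT returns the data that was just written.

import boto3

s3 = boto3.client("s3")
bucket = "DOC-EXAMPLE-BUCKET"  # placeholder; use a bucket that you own

# Overwrite (or create) the object, then read it back immediately.
s3.put_object(Bucket=bucket, Key="color.txt", Body=b"ruby")
body = s3.get_object(Bucket=bucket, Key="color.txt")["Body"].read()
print(body)  # b'ruby' -- the read reflects the completed write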

Note

• Amazon S3 does not support object locking for concurrent writers. If two PUT requests are
simultaneously made to the same key, the request with the latest timestamp wins. If this is an
issue, you will need to build an object-locking mechanism into your application.
• Updates are key-based. There is no way to make atomic updates across keys. For example,
you cannot make the update of one key dependent on the update of another key unless you
design this functionality into your application.

Bucket configurations have an eventual consistency model. Specifically:

• If you delete a bucket and immediately list all buckets, the deleted bucket might still appear in the list.
• If you enable versioning on a bucket for the first time, it might take a short amount of time for the
change to be fully propagated. We recommend that you wait for 15 minutes after enabling versioning
before issuing write operations (PUT or DELETE) on objects in the bucket.

Concurrent applications
This section provides examples of behavior to be expected from Amazon S3 when multiple clients are
writing to the same items.

In this example, both W1 (write 1) and W2 (write 2) complete before the start of R1 (read 1) and R2 (read
2). Because S3 is strongly consistent, R1 and R2 both return color = ruby.

In the next example, W2 does not complete before the start of R1. Therefore, R1 might return color =
ruby or color = garnet. However, since W1 and W2 finish before the start of R2, R2 returns color =
garnet.


In the last example, W2 begins before W1 has received an acknowledgement. Therefore, these writes are
considered concurrent. Amazon S3 internally uses last-writer-wins semantics to determine which write
takes precedence. However, the order in which Amazon S3 receives the requests and the order in which
applications receive acknowledgements cannot be predicted due to factors such as network latency.
For example, W2 might be initiated by an Amazon EC2 instance in the same region while W1 might be
initiated by a host that is further away. The best way to determine the final value is to perform a read
after both writes have been acknowledged.

Amazon S3 features
This section describes important Amazon S3 features.

Topics
• Storage classes (p. 6)
• Bucket policies (p. 6)
• AWS Identity and Access Management (p. 7)
• Access control lists (p. 7)
• Versioning (p. 7)
• Operations (p. 7)

Storage classes
Amazon S3 offers a range of storage classes designed for different use cases. These include Amazon S3
STANDARD for general-purpose storage of frequently accessed data, Amazon S3 STANDARD_IA for long-
lived, but less frequently accessed data, and S3 Glacier for long-term archive.

For more information, see Using Amazon S3 storage classes (p. 567).

Bucket policies
Bucket policies provide centralized access control to buckets and objects based on a variety of conditions,
including Amazon S3 operations, requesters, resources, and aspects of the request (for example, IP
address). The policies are expressed in the access policy language and enable centralized management of
permissions. The permissions attached to a bucket apply to all of the bucket's objects that are owned by
the bucket owner account.

Both individuals and companies can use bucket policies. When companies register with Amazon S3,
they create an account. Thereafter, the company becomes synonymous with the account. Accounts are
financially responsible for the AWS resources that they (and their employees) create. Accounts have
the power to grant bucket policy permissions and assign employees permissions based on a variety of
conditions. For example, an account could create a policy that gives a user write access:

• To a particular S3 bucket
• From an account's corporate network
• During business hours

An account can grant one user limited read and write access, but allow another to create and delete
buckets also. An account could allow several field offices to store their daily reports in a single bucket. It
could allow each office to write only to a certain set of names (for example, "Nevada/*" or "Utah/*") and
only from the office's IP address range.

Unlike access control lists (described later), which can add (grant) permissions only on individual objects,
policies can either add or deny permissions across all (or a subset) of objects within a bucket. With one
request, an account can set the permissions of any number of objects in a bucket. An account can use
wildcards (similar to regular expression operators) on Amazon Resource Names (ARNs) and other values.
The account could then control access to groups of objects that begin with a common prefix or end with
a given extension, such as .html.

Only the bucket owner is allowed to associate a policy with a bucket. Policies (written in the access policy
language) allow or deny requests based on the following:

• Amazon S3 bucket operations (such as PUT ?acl), and object operations (such as PUT Object, or
GET Object)
• Requester
• Conditions specified in the policy

An account can control access based on specific Amazon S3 operations, such as GetObject,
GetObjectVersion, DeleteObject, or DeleteBucket.

The conditions can be such things as IP addresses, IP address ranges in CIDR notation, dates, user agents,
HTTP referrer, and transports (HTTP and HTTPS).
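As a sketch of how such a policy might be applied programmatically, the following example uses the AWS SDK for Python (Boto3) to attach a bucket policy that allows one IAM user to upload objects only from a given network range. The bucket name, account ID, user name, and IP range are placeholders.

import json
import boto3

bucket = "DOC-EXAMPLE-BUCKET"  # placeholder
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowPutFromCorporateNetwork",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/ExampleUser"},
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }]
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))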

For more information, see Bucket policies and user policies (p. 291).


AWS Identity and Access Management


You can use AWS Identity and Access Management (IAM) to manage access to your Amazon S3 resources.

For example, you can use IAM with Amazon S3 to control the type of access a user or group of users has
to specific parts of an Amazon S3 bucket your AWS account owns.

For more information about IAM, see the following:

• AWS Identity and Access Management (IAM)


• Getting started
• IAM User Guide

Access control lists


You can control access to each of your buckets and objects using an access control list (ACL). For more
information, see Access control list (ACL) overview (p. 460).

Versioning
You can use versioning to keep multiple versions of an object in the same bucket. For more information,
see Using versioning in S3 buckets (p. 519).

Operations
Following are the most common operations that you'll run through the API.

Common operations

• Create a bucket – Create and name your own bucket in which to store your objects.
• Write an object – Store data by creating or overwriting an object. When you write an object, you
specify a unique key in the namespace of your bucket. This is also a good time to specify any access
control you want on the object.
• Read an object – Read data back. You can download the data via HTTP.
• Delete an object – Delete some of your data.
• List keys – List the keys contained in one of your buckets. You can filter the key list based on a prefix.

These operations and all other functionality are described in detail throughout this guide.
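The following minimal sketch runs through these common operations with the AWS SDK for Python (Boto3). The bucket and key names are placeholders; bucket names must be globally unique.

import boto3

s3 = boto3.client("s3", region_name="us-west-2")
bucket = "DOC-EXAMPLE-BUCKET"  # placeholder; must be globally unique

# Create a bucket (outside us-east-1, a location constraint is required).
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)

# Write an object.
s3.put_object(Bucket=bucket, Key="notes/hello.txt", Body=b"Hello, S3!")

# Read the object back.
print(s3.get_object(Bucket=bucket, Key="notes/hello.txt")["Body"].read())

# List keys, optionally filtered by a prefix.
for item in s3.list_objects_v2(Bucket=bucket, Prefix="notes/").get("Contents", []):
    print(item["Key"])

# Delete the object.
s3.delete_object(Bucket=bucket, Key="notes/hello.txt")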

Amazon S3 application programming interfaces (API)
The Amazon S3 architecture is designed to be programming language-neutral, using AWS Supported
interfaces to store and retrieve objects.

Amazon S3 provides a REST and a SOAP interface. They are similar, but there are some differences. For
example, in the REST interface, metadata is returned in HTTP headers. Because we only support HTTP
requests of up to 4 KB (not including the body), the amount of metadata you can supply is restricted.


Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.

The REST interface


The REST API is an HTTP interface to Amazon S3. Using REST, you use standard HTTP requests to create,
fetch, and delete buckets and objects.

You can use any toolkit that supports HTTP to use the REST API. You can even use a browser to fetch
objects, as long as they are anonymously readable.

The REST API uses the standard HTTP headers and status codes, so that standard browsers and toolkits
work as expected. In some areas, we have added functionality to HTTP (for example, we added headers
to support access control). In these cases, we have done our best to add the new functionality in a way
that matched the style of standard HTTP usage.
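For example, an anonymously readable object can be fetched with any HTTP client. The following sketch uses only the Python standard library; the URL is a placeholder for an object that actually has public read access.

import urllib.request

# Placeholder URL for a publicly readable object.
url = "https://DOC-EXAMPLE-BUCKET.s3.us-west-2.amazonaws.com/photos/puppy.jpg"

with urllib.request.urlopen(url) as response:
    print(response.status, response.headers["Content-Type"])
    data = response.read()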

The SOAP interface


Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.

The SOAP API provides a SOAP 1.1 interface using document literal encoding. The most common way to
use SOAP is to download the WSDL (see https://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl),
use a SOAP toolkit such as Apache Axis or Microsoft .NET to create bindings, and then write code that
uses the bindings to call Amazon S3.

Paying for Amazon S3


Pricing for Amazon S3 is designed so that you don't have to plan for the storage requirements of your
application. Most storage providers force you to purchase a predetermined amount of storage and
network transfer capacity: If you exceed that capacity, your service is shut off or you are charged high
overage fees. If you do not exceed that capacity, you pay as though you used it all.

Amazon S3 charges you only for what you actually use, with no hidden fees and no overage charges.
This gives developers a variable-cost service that can grow with their business while enjoying the cost
advantages of the AWS infrastructure.

Before storing anything in Amazon S3, you must register with the service and provide a payment method
that is charged at the end of each month. There are no setup fees to begin using the service. At the end
of the month, your payment method is automatically charged for that month's usage.

For information about paying for Amazon S3 storage, see Amazon S3 Pricing.

Related services
After you load your data into Amazon S3, you can use it with other AWS services. The following are the
services you might use most frequently:

• Amazon Elastic Compute Cloud (Amazon EC2) – This service provides virtual compute resources in
the cloud. For more information, see the Amazon EC2 product details page.
• Amazon EMR – This service enables businesses, researchers, data analysts, and developers to easily
and cost-effectively process vast amounts of data. It uses a hosted Hadoop framework running on the
web-scale infrastructure of Amazon EC2 and Amazon S3. For more information, see the Amazon EMR
product details page.
• AWS Snowball – This service accelerates transferring large amounts of data into and out of AWS using
physical storage devices, bypassing the internet. Each AWS Snowball device type can transport data
at faster-than internet speeds. This transport is done by shipping the data in the devices through a
regional carrier. For more information, see the AWS Snowball product details page.


Getting started with Amazon S3


You can get started with Amazon S3 by working with buckets and objects. A bucket is a container for
objects. An object is a file and any metadata that describes that file.

To store an object in Amazon S3, you create a bucket and then upload the object to the bucket. When
the object is in the bucket, you can open it, download it, and move it. When you no longer need an object
or a bucket, you can clean up your resources.

With Amazon S3, you pay only for what you use. For more information about Amazon S3 features and
pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started with Amazon S3 for
free. For more information, see AWS Free Tier.

Prerequisites

Before you begin, confirm that you've completed the steps in Prerequisite: Setting up Amazon
S3 (p. 10).

Topics
• Prerequisite: Setting up Amazon S3 (p. 10)
• Step 1: Create your first S3 bucket (p. 12)
• Step 2: Upload an object to your bucket (p. 13)
• Step 3: Download an object (p. 14)
• Step 4: Copy your object to a folder (p. 15)
• Step 5: Delete your objects and bucket (p. 15)
• Where do I go from here? (p. 17)

Prerequisite: Setting up Amazon S3


When you sign up for AWS, your AWS account is automatically signed up for all services in AWS,
including Amazon S3. You are charged only for the services that you use.

With Amazon S3, you pay only for what you use. For more information about Amazon S3 features and
pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started with Amazon S3 for
free. For more information, see AWS Free Tier.

To set up Amazon S3, use the steps in the following sections.

When you sign up for AWS and set up Amazon S3, you can optionally change the display language in the
AWS Management Console. For more information, see Changing the language of the AWS Management
Console in the AWS Management Console Getting Started Guide.

Topics
• Sign up for AWS (p. 11)
• Create an IAM user (p. 11)
• Sign in as an IAM user (p. 12)


Sign up for AWS


If you do not have an AWS account, complete the following steps to create one.

To sign up for an AWS account

1. Open https://portal.aws.amazon.com/billing/signup.
2. Follow the online instructions.

Part of the sign-up procedure involves receiving a phone call and entering a verification code on the
phone keypad.

AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view
your current account activity and manage your account by going to https://aws.amazon.com/ and
choosing My Account.

Create an IAM user


When you first create an Amazon Web Services (AWS) account, you begin with a single sign-in identity.
That identity has complete access to all AWS services and resources in the account. This identity is called
the AWS account root user. When you sign in, enter the email address and password that you used to
create the account.
Important
We strongly recommend that you do not use the root user for your everyday tasks, even the
administrative ones. Instead, adhere to the best practice of using the root user only to create
your first IAM user. Then securely lock away the root user credentials and use them to perform
only a few account and service management tasks. To view the tasks that require you to sign in
as the root user, see Tasks that require root user credentials.

If you signed up for AWS but have not created an IAM user for yourself, follow these steps.

To create an administrator user for yourself and add the user to an administrators group
(console)

1. Sign in to the IAM console as the account owner by choosing Root user and entering your AWS
account email address. On the next page, enter your password.
Note
We strongly recommend that you adhere to the best practice of using the Administrator
IAM user that follows and securely lock away the root user credentials. Sign in as the root
user only to perform a few account and service management tasks.
2. In the navigation pane, choose Users and then choose Add user.
3. For User name, enter Administrator.
4. Select the check box next to AWS Management Console access. Then select Custom password, and
then enter your new password in the text box.
5. (Optional) By default, AWS requires the new user to create a new password when first signing in. You
can clear the check box next to User must create a new password at next sign-in to allow the new
user to reset their password after they sign in.
6. Choose Next: Permissions.
7. Under Set permissions, choose Add user to group.
8. Choose Create group.
9. In the Create group dialog box, for Group name enter Administrators.
10. Choose Filter policies, and then select AWS managed - job function to filter the table contents.
11. In the policy list, select the check box for AdministratorAccess. Then choose Create group.
Note
You must activate IAM user and role access to Billing before you can use the
AdministratorAccess permissions to access the AWS Billing and Cost Management
console. To do this, follow the instructions in step 1 of the tutorial about delegating access
to the billing console.
12. Back in the list of groups, select the check box for your new group. Choose Refresh if necessary to
see the group in the list.
13. Choose Next: Tags.
14. (Optional) Add metadata to the user by attaching tags as key-value pairs. For more information
about using tags in IAM, see Tagging IAM entities in the IAM User Guide.
15. Choose Next: Review to see the list of group memberships to be added to the new user. When you
are ready to proceed, choose Create user.

You can use this same process to create more groups and users and to give your users access to your AWS
account resources. To learn about using policies that restrict user permissions to specific AWS resources,
see Access management and Example policies.

Sign in as an IAM user


After you create an IAM user, you can sign in to AWS with your IAM user name and password.

Before you sign in as an IAM user, you can verify the sign-in link for IAM users in the IAM console. On the
IAM Dashboard, under IAM users sign-in link, you can see the sign-in link for your AWS account. The URL
for your sign-in link contains your AWS account ID without dashes (‐).

If you don't want the URL for your sign-in link to contain your AWS account ID, you can create an account
alias. For more information, see Creating, deleting, and listing an AWS account alias in the IAM User
Guide.

To sign in as an IAM user

1. Sign out of the AWS Management Console.


2. Enter your sign-in link.

Your sign-in link includes your AWS account ID (without dashes) or your AWS account alias:

https://aws_account_id_or_alias.signin.aws.amazon.com/console

3. Enter the IAM user name and password that you just created.

When you're signed in, the navigation bar displays "your_user_name @ your_aws_account_id".

Step 1: Create your first S3 bucket


After you sign up for AWS, you're ready to create a bucket in Amazon S3 using the AWS Management
Console. Every object in Amazon S3 is stored in a bucket. Before you can store data in Amazon S3, you
must create a bucket.
Note
You are not charged for creating a bucket. You are charged only for storing objects in the
bucket and for transferring objects in and out of the bucket. The charges that you incur through
following the examples in this guide are minimal (less than $1). For more information about
storage charges, see Amazon S3 pricing.

To create a bucket using the AWS Command Line Interface, see create-bucket in the AWS CLI Command
Reference.

To create a bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Choose Create bucket.

The Create bucket page opens.


3. In Bucket name, enter a DNS-compliant name for your bucket.

The bucket name must:

• Be unique across all of Amazon S3.


• Be between 3 and 63 characters long.
• Not contain uppercase characters.
• Start with a lowercase letter or number.

After you create the bucket, you can't change its name. For information about naming buckets, see
Bucket naming rules (p. 27).
Important
Avoid including sensitive information, such as account numbers, in the bucket name. The
bucket name is visible in the URLs that point to the objects in the bucket.
4. In Region, choose the AWS Region where you want the bucket to reside.

Choose a Region that is close to you geographically to minimize latency and costs and to address
regulatory requirements. Objects stored in a Region never leave that Region unless you explicitly
transfer them to another Region. For a list of Amazon S3 AWS Regions, see AWS Service Endpoints
in the Amazon Web Services General Reference.
5. Keep the remaining settings set to the defaults. For more information on additional bucket settings,
see Creating a bucket (p. 28).
6. Choose Create bucket.

You've created a bucket in Amazon S3.
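If you prefer to script this step, the following sketch creates a bucket with the AWS SDK for Python (Boto3). The bucket name is a placeholder; note that outside the us-east-1 Region you must pass the target Region as a location constraint.

import boto3

s3 = boto3.client("s3", region_name="us-west-2")

# In us-east-1, omit CreateBucketConfiguration entirely.
s3.create_bucket(
    Bucket="DOC-EXAMPLE-BUCKET",  # placeholder; must be globally unique
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)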

Next step

To add an object to your bucket, see Step 2: Upload an object to your bucket (p. 13).

Step 2: Upload an object to your bucket


After creating a bucket in Amazon S3, you're ready to upload an object to the bucket. An object can be
any kind of file: a text file, a photo, a video, and so on.

To upload an object to a bucket

1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.


2. In the Buckets list, choose the name of the bucket that you want to upload your object to.

3. On the Objects tab for your bucket, choose Upload.


4. Under Files and folders, choose Add files.
5. Choose a file to upload, and then choose Open.
6. Choose Upload.

You've successfully uploaded an object to your bucket.
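The same upload can also be scripted. The following sketch uses the AWS SDK for Python (Boto3); the file path, bucket, and key are placeholders. The upload_file transfer method splits large files into multipart uploads automatically.

import boto3

s3 = boto3.client("s3")

# Upload a local file to the bucket under the given object key.
s3.upload_file("sample.jpg", "DOC-EXAMPLE-BUCKET", "sample.jpg")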

Next step

To view your object, see Step 3: Download an object (p. 14).

Step 3: Download an object


After you upload an object to a bucket, you can view information about your object and download the
object to your local computer.

Using the S3 console


This section explains how to use the Amazon S3 console to download an object from an S3 bucket using
a presigned URL.
Note

• You can only download one object at a time.


• Objects with key names that end in one or more periods (".") have those trailing periods
removed from the key name when they are downloaded through the Amazon S3 console. To
download an object and keep the trailing periods in its key name, you must use the AWS
Command Line Interface (AWS CLI), AWS SDKs, or REST API.

To download an object from an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to download an object from.

3. You can download an object from an S3 bucket in any of the following ways:

• Choose the name of the object that you want to download.

On the Overview page, select the object and from the Actions menu choose Download or
Download as if you want to download the object to a specific folder.
• Choose the object that you want to download and then from the Object actions menu choose
Download or Download as if you want to download the object to a specific folder.
• If you want to download a specific version of the object, choose the name of the object that you
want to download. Choose the Versions tab and then from the Actions menu choose Download
or Download as if you want to download the object to a specific folder.

You've successfully downloaded your object.
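You can also download an object outside the console. The following sketch uses the AWS SDK for Python (Boto3); the bucket, key, and local file name are placeholders.

import boto3

s3 = boto3.client("s3")

# Download the object to a local file.
s3.download_file("DOC-EXAMPLE-BUCKET", "sample.jpg", "sample-local.jpg")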

Next step

To copy and paste your object within Amazon S3, see Step 4: Copy your object to a folder (p. 15).


Step 4: Copy your object to a folder


You've already added an object to a bucket and downloaded the object. Now, you create a folder and
copy the object and paste it into the folder.

To copy an object to a folder

1. In the Buckets list, choose your bucket name.


2. Choose Create folder and configure a new folder:

a. Enter a folder name (for example, favorite-pics).


b. For the folder encryption setting, choose None.
c. Choose Save.
3. Navigate to the Amazon S3 bucket or folder that contains the objects that you want to copy.
4. Select the check box to the left of the names of the objects that you want to copy.
5. Choose Actions and choose Copy from the list of options that appears.

Alternatively, choose Copy from the options in the upper right.


6. Choose the destination folder:

a. Choose Browse S3.


b. Choose the option button to the left of the folder name.

To navigate into a folder and choose a subfolder as your destination, choose the folder name.
c. Choose Choose destination.

The path to your destination folder appears in the Destination box. In Destination, you can
alternatively enter your destination path, for example, s3://bucket-name/folder-name/.
7. In the bottom right, choose Copy.

Amazon S3 copies your objects to the destination folder.
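Outside the console, the same copy can be performed with the AWS SDK for Python (Boto3), as in the following sketch; the bucket, key, and folder names are placeholders. A folder in Amazon S3 is simply a shared key name prefix.

import boto3

s3 = boto3.client("s3")
bucket = "DOC-EXAMPLE-BUCKET"  # placeholder

# Copy the object into the "favorite-pics/" prefix (folder) of the same bucket.
s3.copy_object(
    Bucket=bucket,
    Key="favorite-pics/sample.jpg",
    CopySource={"Bucket": bucket, "Key": "sample.jpg"},
)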

Next step

To delete an object and a bucket in Amazon S3, see Step 5: Delete your objects and bucket (p. 15).

Step 5: Delete your objects and bucket


When you no longer need an object or a bucket, we recommend that you delete them to prevent further
charges. If you completed this getting started walkthrough as a learning exercise, and you don't plan to
use your bucket or objects, we recommend that you delete your bucket and objects so that charges no
longer accrue.

Before you delete your bucket, empty the bucket or delete the objects in the bucket. After you delete
your objects and bucket, they are no longer available.

If you want to continue to use the same bucket name, we recommend that you delete the objects or
empty the bucket, but don't delete the bucket. After you delete a bucket, the name becomes available
to reuse. However, another AWS account might create a bucket with the same name before you have a
chance to reuse it.

Topics
• Deleting an object (p. 16)


• Emptying your bucket (p. 16)
• Deleting your bucket (p. 16)

Deleting an object
If you want to choose which objects you delete without emptying all the objects from your bucket, you
can delete an object.

1. In the Buckets list, choose the name of the bucket that you want to delete an object from.
2. Select the check box to the left of the names of the objects that you want to delete.
3. Choose Actions and choose Delete from the list of options that appears.

Alternatively, choose Delete from the options in the upper right.


4. Type permanently delete if asked to confirm that you want to delete these objects.
5. Choose Delete objects in the bottom right and Amazon S3 deletes the specified objects.

Emptying your bucket


If you plan to delete your bucket, you must first empty your bucket, which deletes all the objects in the
bucket.

To empty a bucket

1. In the Buckets list, select the bucket that you want to empty, and then choose Empty.
2. To confirm that you want to empty the bucket and delete all the objects in it, in Empty bucket, type
permanently delete.
Important
Emptying the bucket cannot be undone. Objects added to the bucket while the empty
bucket action is in progress will be deleted.
3. To empty the bucket and delete all the objects in it, choose Empty.

An Empty bucket: Status page opens that you can use to review a summary of failed and successful
object deletions.
4. To return to your bucket list, choose Exit.

Deleting your bucket


After you empty your bucket or delete all the objects from your bucket, you can delete your bucket.

1. To delete a bucket, in the Buckets list, select the bucket.


2. Choose Delete.
3. To confirm deletion, in Delete bucket, type the name of the bucket.
Important
Deleting a bucket cannot be undone. Bucket names are unique. If you delete your bucket,
another AWS user can use the name. If you want to continue to use the same bucket name,
don't delete your bucket. Instead, empty and keep the bucket.
4. To delete your bucket, choose Delete bucket.
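The following sketch performs the same cleanup with the AWS SDK for Python (Boto3); the bucket name is a placeholder. It empties the bucket and then deletes it.

import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("DOC-EXAMPLE-BUCKET")  # placeholder

# Delete every object in the bucket. For a bucket with versioning enabled,
# delete the object versions instead: bucket.object_versions.delete()
bucket.objects.all().delete()

# Delete the now-empty bucket.
bucket.delete()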


Where do I go from here?


In the preceding examples, you learned how to perform some basic Amazon S3 tasks.

The following topics explain various ways in which you can gain a deeper understanding of Amazon S3
so that you can implement it in your applications.

Topics
• Common use scenarios (p. 17)
• Considerations going forward (p. 17)
• Advanced Amazon S3 features (p. 18)
• Access control best practices (p. 19)
• Development resources (p. 23)

Common use scenarios


The AWS Solutions site lists many of the ways you can use Amazon S3. The following list summarizes
some of those ways.

• Backup and storage – Provide data backup and storage services for others.
• Application hosting – Provide services that deploy, install, and manage web applications.
• Media hosting – Build a redundant, scalable, and highly available infrastructure that hosts video,
photo, or music uploads and downloads.
• Software delivery – Host your software applications that customers can download.

For more information, see AWS Solutions.

Considerations going forward


This section introduces you to topics you should consider before launching your own Amazon S3 product.

Topics
• AWS account and security credentials (p. 17)
• Security (p. 18)
• AWS integration (p. 18)
• Pricing (p. 18)

AWS account and security credentials


When you signed up for the service, you created an AWS account using an email address and password.
Those are your AWS account root user credentials. As a best practice, you should not use your root
user credentials to access AWS. Nor should you give your credentials to anyone else. Instead, create
individual users for those who need access to your AWS account. First, create an AWS Identity and Access
Management (IAM) administrator user for yourself and use it for your daily work. For details, see Creating
your first IAM admin user and group in the IAM User Guide. Then create additional IAM users for other
people. For details, see Creating your first IAM delegated user and group in the IAM User Guide.

If you're an account owner or administrator and want to know more about IAM, see the product
description at https://aws.amazon.com/iam or the technical documentation in the IAM User Guide.


Security
Amazon S3 provides authentication mechanisms to secure data stored in Amazon S3 against
unauthorized access. Unless you specify otherwise, only the AWS account owner can access data
uploaded to Amazon S3. For more information about how to manage access to buckets and objects, go
to Identity and access management in Amazon S3 (p. 274).

You can also encrypt your data before uploading it to Amazon S3.

AWS integration
You can use Amazon S3 alone or in concert with one or more other Amazon products. The following are
the most common products used with Amazon S3:

• Amazon EC2
• Amazon EMR
• Amazon SQS
• Amazon CloudFront

Pricing
Learn the pricing structure for storing and transferring data on Amazon S3. For more information, see
Amazon S3 pricing.

Advanced Amazon S3 features


The examples in this guide show how to accomplish the basic tasks of creating a bucket, uploading and
downloading data to and from it, and moving and deleting the data. The following table summarizes
some of the most common advanced functionality offered by Amazon S3. Note that some advanced
functionality is not available in the AWS Management Console and requires that you use the Amazon S3
API.

Link – Functionality

Using Requester Pays buckets for storage transfers and usage (p. 52) – Learn how to configure a
bucket so that a customer pays for the downloads they make.


Access control best practices


Amazon S3 provides a variety of security features and tools. The following scenarios should serve as a
guide to what tools and settings you might want to use when performing certain tasks or operating in
specific environments. Proper application of these tools can help maintain the integrity of your data and
help ensure that your resources are accessible to the intended users.

Topics
• Creating a new bucket (p. 19)
• Storing and sharing data (p. 20)
• Sharing resources (p. 21)
• Protecting data (p. 21)

Creating a new bucket


When creating a new bucket, you should apply the following tools and settings to help ensure that your
Amazon S3 resources are protected. 

Block Public Access

S3 Block Public Access provides four settings to help you avoid inadvertently exposing your S3 resources.
You can apply these settings in any combination to individual access points, buckets, or entire AWS
accounts. If you apply a setting to an account, it applies to all buckets and access points that are owned
by that account. By default, the Block all public access setting is applied to new buckets created in the
Amazon S3 console. 

For more information, see The meaning of "public" (p. 491).

If the S3 Block Public Access settings are too restrictive, you can use AWS Identity and Access
Management (IAM) identities to grant access to specific users rather than disabling all Block Public Access
settings. Using Block Public Access with IAM identities helps ensure that any operation that is blocked by
a Block Public Access setting is rejected unless the requesting user has been given specific permission.

For more information, see Block public access settings (p. 489).
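The following sketch turns on all four Block Public Access settings for a bucket using the AWS SDK for Python (Boto3); the bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")

# Enable all four Block Public Access settings at the bucket level.
s3.put_public_access_block(
    Bucket="DOC-EXAMPLE-BUCKET",  # placeholder
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)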

Grant access with IAM identities

When setting up accounts for new team members who require S3 access, use IAM users and roles to
ensure least privileges. You can also implement a form of IAM multi-factor authentication (MFA) to
support a strong identity foundation. Using IAM identities, you can grant unique permissions to users
and specify what resources they can access and what actions they can take. IAM identities provide
increased capabilities, including the ability to require users to enter login credentials before accessing
shared resources and apply permission hierarchies to different objects within a single bucket.

For more information, see Example 1: Bucket owner granting its users bucket permissions (p. 434).

Bucket policies

With bucket policies, you can personalize bucket access to help ensure that only those users you have
approved can access resources and perform actions within them. In addition to bucket policies, you
should use bucket-level Block Public Access settings to further limit public access to your data.

For more information, see Using bucket policies (p. 397).

When creating policies, avoid the use of wildcards in the Principal element because it effectively
allows anyone to access your Amazon S3 resources. It's better to explicitly list users or groups that are
allowed to access the bucket. Rather than including a wildcard for their actions, grant them specific
permissions when applicable.

To further maintain the practice of least privileges, Deny statements in the Effect element should be
as broad as possible and Allow statements should be as narrow as possible. Deny effects paired with the
"s3:*" action are another good way to implement opt-in best practices for the users included in policy
condition statements.

For more information about specifying conditions for when a policy is in effect, see Amazon S3 condition
key examples (p. 300).

Buckets in a VPC setting

When adding users in a corporate setting, you can use a virtual private cloud (VPC) endpoint to allow any
users in your virtual network to access your Amazon S3 resources. VPC endpoints enable developers to
provide specific access and permissions to groups of users based on the network the user is connected to.
Rather than adding each user to an IAM role or group, you can use VPC endpoints to deny bucket access
if the request doesn’t originate from the specified endpoint.

For more information, see Controlling access from VPC endpoints with bucket policies (p. 398).

Storing and sharing data


Use the following tools and best practices to store and share your Amazon S3 data.

Versioning and Object Lock for data integrity

If you use the Amazon S3 console to manage buckets and objects, you should implement S3 Versioning
and S3 Object Lock. These features help prevent accidental changes to critical data and enable you to
roll back unintended actions. This capability is particularly useful when there are multiple users with full
write and execute permissions accessing the Amazon S3 console.

For information about S3 Versioning, see Using versioning in S3 buckets (p. 519). For information about
Object Lock, see Using S3 Object Lock (p. 559).

Object lifecycle management for cost efficiency

To manage your objects so that they are stored cost effectively throughout their lifecycle, you can pair
lifecycle policies with object versioning. Lifecycle policies define actions that you want S3 to take during
an object's lifetime. For example, you can create a lifecycle policy that will transition objects to another
storage class, archive them, or delete them after a specified period of time. You can define a lifecycle
policy for all objects or a subset of objects in the bucket by using a shared prefix or tag.

For more information, see Managing your storage lifecycle (p. 578).
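As an illustration, the following sketch applies a lifecycle configuration with the AWS SDK for Python (Boto3). The bucket name, prefix, and time periods are placeholders; the rule transitions matching objects to S3 Standard-IA after 30 days and expires them after 365 days.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="DOC-EXAMPLE-BUCKET",  # placeholder
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            "Expiration": {"Days": 365},
        }]
    },
)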

Cross-Region Replication for multiple office locations

When creating buckets that are accessed by different office locations, you should consider implementing
S3 Cross-Region Replication. Cross-Region Replication helps ensure that all users have access to the
resources they need and increases operational efficiency. Cross-Region Replication offers increased
availability by copying objects across S3 buckets in different AWS Regions. However, the use of this tool
increases storage costs.

For more information, see Replicating objects (p. 623).

Permissions for secure static website hosting

When configuring a bucket to be used as a publicly accessed static website, you need to disable all Block
Public Access settings. It is important to only provide s3:GetObject actions and not ListObject or
PutObject permissions when writing the bucket policy for your static website. This helps ensure that
users cannot view all the objects in your bucket or add their own content.

For more information, see Setting permissions for website access (p. 954).

Amazon CloudFront provides the capabilities required to set up a secure static website. Amazon S3
static websites only support HTTP endpoints. CloudFront uses the durable storage of Amazon S3 while
providing additional security headers like HTTPS. HTTPS adds security by encrypting a normal HTTP
request and protecting against common cyber attacks.

For more information, see Getting started with a secure static website in the Amazon CloudFront
Developer Guide.

Sharing resources
There are several different ways that you can share resources with a specific group of users. You can
use the following tools to share a set of documents or other resources to a single group of users,
department, or an office. Although they can all be used to accomplish the same goal, some tools might
pair better than others with your existing settings.

User policies

You can share resources with a limited group of people using IAM groups and user policies. When
creating a new IAM user, you are prompted to create and add them to a group. However, you can create
and add users to groups at any point. If the individuals you intend to share these resources with are
already set up within IAM, you can add them to a common group and share the bucket with their group
within the user policy. You can also use IAM user policies to share individual objects within a bucket.

For more information, see Allowing an IAM user access to one of your buckets (p. 425).

Access control lists

As a general rule, we recommend that you use S3 bucket policies or IAM policies for access control.
Amazon S3 access control lists (ACLs) are a legacy access control mechanism that predates IAM. If
you already use S3 ACLs and you find them sufficient, there is no need to change. However, certain
access control scenarios require the use of ACLs. For example, when a bucket owner wants to grant
permission to objects, but not all objects are owned by the bucket owner, the object owner must first
grant permission to the bucket owner. This is done using an object ACL.

For more information, see Example 3: Bucket owner granting permissions to objects it does not
own (p. 443).

Prefixes

When trying to share specific resources from a bucket, you can replicate folder-level permissions using
prefixes. The Amazon S3 console supports the folder concept as a means of grouping objects by using a
shared name prefix for objects. You can then specify a prefix within the conditions of an IAM user's policy
to grant them explicit permission to access the resources associated with that prefix. 

For more information, see Organizing objects in the Amazon S3 console using folders (p. 147).

Tagging

If you use object tagging to categorize storage, you can share objects that have been tagged with a
specific value with specified users. Resource tagging allows you to control access to objects based on the
tags associated with the resource that a user is trying to access. To do this, use the ResourceTag/key-
name condition within an IAM user policy to allow access to the tagged resources.


For more information, see Controlling access to AWS resources using resource tags in the IAM User Guide.

Protecting data
Use the following tools to help protect data in transit and at rest, both of which are crucial in
maintaining the integrity and accessibility of your data.

Object encryption

Amazon S3 offers several object encryption options that protect data in transit and at rest. Server-side
encryption encrypts your object before saving it on disks in its data centers and then decrypts it when
you download the objects. As long as you authenticate your request and you have access permissions,
there is no difference in the way you access encrypted or unencrypted objects. When setting up server-
side encryption, you have three mutually exclusive options:

• Amazon S3 managed keys (SSE-S3)


• KMS keys stored in AWS Key Management Service (SSE-KMS)
• Customer-provided keys (SSE-C)

For more information, see Protecting data using server-side encryption (p. 219).
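For example, the following sketch uploads an object with server-side encryption requested at write time, using the AWS SDK for Python (Boto3); the bucket and key are placeholders. Use "AES256" for SSE-S3, or "aws:kms" (optionally with SSEKMSKeyId) for SSE-KMS.

import boto3

s3 = boto3.client("s3")

# Store the object encrypted with Amazon S3 managed keys (SSE-S3).
s3.put_object(
    Bucket="DOC-EXAMPLE-BUCKET",  # placeholder
    Key="confidential/report.txt",
    Body=b"example data",
    ServerSideEncryption="AES256",
)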

Client-side encryption is the act of encrypting data before sending it to Amazon S3. For more
information, see Protecting data using client-side encryption (p. 261).

Signing methods

Signature Version 4 is the process of adding authentication information to AWS requests sent by HTTP.
For security, most requests to AWS must be signed with an access key, which consists of an access key ID
and secret access key. These two keys are commonly referred to as your security credentials.

For more information, see Authenticating Requests (AWS Signature Version 4) and Signature Version 4
signing process.
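The AWS SDKs sign requests with Signature Version 4 for you. As an illustration, the following sketch uses the AWS SDK for Python (Boto3) to create a presigned URL, which embeds a Signature Version 4 signature so that the holder can download the object for a limited time; the bucket and key are placeholders.

import boto3

s3 = boto3.client("s3")

# Generate a URL that allows a GET on the object for one hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "DOC-EXAMPLE-BUCKET", "Key": "photos/puppy.jpg"},
    ExpiresIn=3600,
)
print(url)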

Logging and monitoring

Monitoring is an important part of maintaining the reliability, availability, and performance of your
Amazon S3 solutions so that you can more easily debug a multi-point failure if one occurs. Logging can
provide insight into any errors users are receiving, and when and what requests are made. AWS provides
several tools for monitoring your Amazon S3 resources:

• Amazon CloudWatch
• AWS CloudTrail
• Amazon S3 Access Logs
• AWS Trusted Advisor

For more information, see Logging and monitoring in Amazon S3 (p. 508).

Amazon S3 is integrated with AWS CloudTrail, a service that provides a record of actions taken by a
user, a role, or an AWS service in Amazon S3. This feature can be paired with Amazon GuardDuty, which
monitors threats against your Amazon S3 resources by analyzing CloudTrail management events and
CloudTrail S3 data events. These data sources monitor different kinds of activity. For example, S3-related
CloudTrail management events include operations that list or configure S3 buckets. GuardDuty analyzes
S3 data events from all of your S3 buckets and monitors them for malicious and suspicious activity.

For more information, see Amazon S3 protection in Amazon GuardDuty in the Amazon GuardDuty User
Guide.


Development resources
To help you build applications using the language of your choice, we provide the following resources:

• Sample Code and Libraries – The AWS Developer Center has sample code and libraries written
especially for Amazon S3.

You can use these code samples as a means of understanding how to implement the Amazon S3 API.
For more information, see the AWS Developer Center.
• Tutorials – Our Resource Center offers more Amazon S3 tutorials.

These tutorials provide a hands-on approach for learning Amazon S3 functionality. For more
information, see Articles & Tutorials.
• Customer Forum – We recommend that you review the Amazon S3 forum to get an idea of what other
users are doing and to benefit from the questions they ask.

The forum can help you understand what you can and can't do with Amazon S3. The forum also serves
as a place for you to ask questions that other users or AWS representatives might answer. You can use
the forum to report issues with the service or the API. For more information, see Discussion Forums.


Creating, configuring, and working with Amazon S3 buckets
To store your data in Amazon S3, you work with resources known as buckets and objects. A bucket is a
container for objects. An object is a file and any metadata that describes that file.

To store an object in Amazon S3, you create a bucket and then upload the object to a bucket. When the
object is in the bucket, you can open it, download it, and move it. When you no longer need an object or
a bucket, you can clean up your resources.
Note
With Amazon S3, you pay only for what you use. For more information about Amazon S3
features and pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started
with Amazon S3 for free. For more information, see AWS Free Tier.

The topics in this section provide an overview of working with buckets in Amazon S3. They include
information about naming, creating, accessing, and deleting buckets. For more information about
viewing or listing objects in a bucket, see Organizing, listing, and working with your objects (p. 141).

Topics
• Buckets overview (p. 24)
• Bucket naming rules (p. 27)
• Creating a bucket (p. 28)
• Viewing the properties for an S3 bucket (p. 33)
• Accessing a bucket (p. 33)
• Emptying a bucket (p. 35)
• Deleting a bucket (p. 37)
• Setting default server-side encryption behavior for Amazon S3 buckets (p. 40)
• Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration (p. 44)
• Using Requester Pays buckets for storage transfers and usage (p. 52)
• Bucket restrictions and limitations (p. 55)

Buckets overview
To upload your data (photos, videos, documents, etc.) to Amazon S3, you must first create an S3 bucket
in one of the AWS Regions. You can then upload any number of objects to the bucket.

In terms of implementation, buckets and objects are AWS resources, and Amazon S3 provides APIs for
you to manage them. For example, you can create a bucket and upload objects using the Amazon S3 API.
You can also use the Amazon S3 console to perform these operations. The console uses the Amazon S3
APIs to send requests to Amazon S3.

This section describes how to work with buckets. For information about working with objects, see
Amazon S3 objects overview (p. 57).

An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts.
This means that after a bucket is created, the name of that bucket cannot be used by another AWS
account in any AWS Region until the bucket is deleted. You should not depend on specific bucket naming
conventions for availability or security verification purposes. For bucket naming guidelines, see Bucket
naming rules (p. 27).


Amazon S3 creates buckets in a Region that you specify. To optimize latency, minimize costs, or address
regulatory requirements, choose any AWS Region that is geographically close to you. For example, if
you reside in Europe, you might find it advantageous to create buckets in the Europe (Ireland) or Europe
(Frankfurt) Regions. For a list of Amazon S3 Regions, see Regions and Endpoints in the AWS General
Reference.
Note
Objects that belong to a bucket that you create in a specific AWS Region never leave that
Region, unless you explicitly transfer them to another Region. For example, objects that are
stored in the Europe (Ireland) Region never leave it.

Topics
• About permissions (p. 25)
• Managing public access to buckets (p. 25)
• Bucket configuration options (p. 26)

About permissions
You can use your AWS account root user credentials to create a bucket and perform any other Amazon
S3 operation. However, we recommend that you do not use the root user credentials of your AWS
account to make requests, such as to create a bucket. Instead, create an AWS Identity and Access
Management (IAM) user, and grant that user full access (users by default have no permissions).

These users are referred to as administrators. You can use the administrator user credentials, instead
of the root user credentials of your account, to interact with AWS and perform tasks, such as creating a
bucket, creating users, and granting them permissions.

For more information, see AWS account root user credentials and IAM user credentials in the AWS
General Reference and Security best practices in IAM in the IAM User Guide.

The AWS account that creates a resource owns that resource. For example, if you create an IAM user
in your AWS account and grant the user permission to create a bucket, the user can create a bucket.
But the user does not own the bucket; the AWS account that the user belongs to owns the bucket. The
user needs additional permission from the resource owner to perform any other bucket operations. For
more information about managing permissions for your Amazon S3 resources, see Identity and access
management in Amazon S3 (p. 274).

Managing public access to buckets


Public access is granted to buckets and objects through access control lists (ACLs), bucket policies, or
both. To help you manage public access to Amazon S3 resources, Amazon S3 provides settings to block
public access. Amazon S3 Block Public Access settings can override ACLs and bucket policies so that you
can enforce uniform limits on public access to these resources. You can apply Block Public Access settings
to individual buckets or to all buckets in your account.

To help ensure that all of your Amazon S3 buckets and objects have their public access blocked, we
recommend that you turn on all four settings for Block Public Access for your account. These settings
block all public access for all current and future buckets.

Before applying these settings, verify that your applications will work correctly without public access. If
you require some level of public access to your buckets or objects—for example, to host a static website
as described at Hosting a static website using Amazon S3 (p. 944)—you can customize the individual
settings to suit your storage use cases. For more information, see Blocking public access to your Amazon
S3 storage (p. 488).
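
If you manage these settings outside the console, the AWS CLI provides the put-public-access-block command. The following sketch turns on all four Block Public Access settings for a single bucket; the bucket name is a placeholder.

aws s3api put-public-access-block \
    --bucket bucket-name \
    --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

To apply the same settings to every bucket in your account, you can use the account-level variant, aws s3control put-public-access-block, which takes the same configuration along with your account ID.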


Bucket configuration options


Amazon S3 supports various options for you to configure your bucket. For example, you can configure
your bucket for website hosting, add a configuration to manage the lifecycle of objects in the bucket,
and configure the bucket to log all access to the bucket. Amazon S3 supports subresources for you to
store and manage the bucket configuration information. You can use the Amazon S3 API to create and
manage these subresources. However, you can also use the console or the AWS SDKs.
Note
There are also object-level configurations. For example, you can configure object-level
permissions by configuring an access control list (ACL) specific to that object.

These are referred to as subresources because they exist in the context of a specific bucket or object. The
following list describes the subresources that enable you to manage bucket-specific configurations.

• cors (cross-origin resource sharing) – You can configure your bucket to allow cross-origin requests. For
more information, see Using cross-origin resource sharing (CORS) (p. 477).
• event notification – You can enable your bucket to send you notifications of specified bucket events. For
more information, see Amazon S3 Event Notifications (p. 867).
• lifecycle – You can define lifecycle rules for objects in your bucket that have a well-defined lifecycle. For
example, you can define a rule to archive objects one year after creation, or delete an object 10 years after
creation. For more information, see Managing your storage lifecycle (p. 578).
• location – When you create a bucket, you specify the AWS Region where you want Amazon S3 to create
the bucket. Amazon S3 stores this information in the location subresource and provides an API for you to
retrieve this information.
• logging – Logging enables you to track requests for access to your bucket. Each access log record
provides details about a single access request, such as the requester, bucket name, request time, request
action, response status, and error code, if any. Access log information can be useful in security and access
audits. It can also help you learn about your customer base and understand your Amazon S3 bill. For more
information, see Logging requests using server access logging (p. 833).
• object locking – To use S3 Object Lock, you must enable it for a bucket. You can also optionally configure
a default retention mode and period that applies to new objects that are placed in the bucket. For more
information, see Bucket configuration (p. 561).
• policy and ACL (access control list) – All your resources (such as buckets and objects) are private by
default. Amazon S3 supports both bucket policy and access control list (ACL) options for you to grant and
manage bucket-level permissions. Amazon S3 stores the permission information in the policy and acl
subresources. For more information, see Identity and access management in Amazon S3 (p. 274).
• replication – Replication is the automatic, asynchronous copying of objects across buckets in different or
the same AWS Regions. For more information, see Replicating objects (p. 623).
• requestPayment – By default, the AWS account that creates the bucket (the bucket owner) pays for
downloads from the bucket. Using this subresource, the bucket owner can specify that the person
requesting the download will be charged for the download. Amazon S3 provides an API for you to manage
this subresource. For more information, see Using Requester Pays buckets for storage transfers and
usage (p. 52).
• tagging – You can add cost allocation tags to your bucket to categorize and track your AWS costs.
Amazon S3 provides the tagging subresource to store and manage tags on a bucket. Using the tags you
apply to your bucket, AWS generates a cost allocation report with usage and costs aggregated by your
tags. For more information, see Billing and usage reporting for S3 buckets (p. 696).
• transfer acceleration – Transfer Acceleration enables fast, easy, and secure transfers of files over long
distances between your client and an S3 bucket. Transfer Acceleration takes advantage of the globally
distributed edge locations of Amazon CloudFront. For more information, see Configuring fast, secure file
transfers using Amazon S3 Transfer Acceleration (p. 44).
• versioning – Versioning helps you recover objects from accidental overwrites and deletes. We recommend
versioning as a best practice to recover objects from being deleted or overwritten by mistake. For more
information, see Using versioning in S3 buckets (p. 519).
• website – You can configure your bucket for static website hosting. Amazon S3 stores this configuration
by creating a website subresource. For more information, see Hosting a static website using Amazon
S3 (p. 944).
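
Most of these subresources can also be read from the AWS CLI through the corresponding s3api commands. As a brief sketch (the bucket name is a placeholder), the following commands retrieve the location, cors, and lifecycle subresources:

aws s3api get-bucket-location --bucket bucket-name
aws s3api get-bucket-cors --bucket bucket-name
aws s3api get-bucket-lifecycle-configuration --bucket bucket-name

If a particular configuration has never been set on the bucket, the matching get command returns an error or an empty result rather than a default configuration.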

Bucket naming rules


The following rules apply for naming buckets in Amazon S3:

• Bucket names must be between 3 and 63 characters long.


• Bucket names can consist only of lowercase letters, numbers, dots (.), and hyphens (-).
• Bucket names must begin and end with a letter or number.
• Bucket names must not be formatted as an IP address (for example, 192.168.5.4).
• Bucket names must not start with the prefix xn--.
• Bucket names must not end with the suffix -s3alias. This suffix is reserved for access point alias
names. For more information, see Using a bucket-style alias for your access point (p. 196).
• Bucket names must be unique within a partition. A partition is a grouping of Regions. AWS currently
has three partitions: aws (Standard Regions), aws-cn (China Regions), and aws-us-gov (AWS
GovCloud [US] Regions).


• Buckets used with Amazon S3 Transfer Acceleration can't have dots (.) in their names. For more
information about Transfer Acceleration, see Configuring fast, secure file transfers using Amazon S3
Transfer Acceleration (p. 44).

For best compatibility, we recommend that you avoid using dots (.) in bucket names, except for buckets
that are used only for static website hosting. If you include dots in a bucket's name, you can't use virtual-
host-style addressing over HTTPS, unless you perform your own certificate validation. This is because the
security certificates used for virtual hosting of buckets don't work for buckets with dots in their names.

This limitation doesn't affect buckets used for static website hosting, because static website hosting is
only available over HTTP. For more information about virtual-host-style addressing, see Virtual hosting
of buckets (p. 1022). For more information about static website hosting, see Hosting a static website
using Amazon S3 (p. 944).
Note
Before March 1, 2018, buckets created in the US East (N. Virginia) Region could have names
that were up to 255 characters long and included uppercase letters and underscores. Beginning
March 1, 2018, new buckets in US East (N. Virginia) must conform to the same rules applied in
all other Regions.

Example bucket names


The following example bucket names are valid and follow the recommended naming guidelines:

• docexamplebucket1
• log-delivery-march-2020
• my-hosted-content

The following example bucket names are valid but not recommended for uses other than static website
hosting:

• docexamplewebsite.com
• www.docexamplewebsite.com
• my.example.s3.bucket

The following example bucket names are not valid:

• doc_example_bucket (contains underscores)


• DocExampleBucket (contains uppercase letters)
• doc-example-bucket- (ends with a hyphen)

Creating a bucket
To upload your data to Amazon S3, you must first create an Amazon S3 bucket in one of the AWS
Regions. When you create a bucket, you must choose a bucket name and Region. You can optionally
choose other storage management options for the bucket. After you create a bucket, you cannot change
the bucket name or Region. For information about naming buckets, see Bucket naming rules (p. 27).

The AWS account that creates the bucket owns it. You can upload any number of objects to the bucket.
By default, you can create up to 100 buckets in each of your AWS accounts. If you need more buckets,
you can increase your account bucket limit to a maximum of 1,000 buckets by submitting a service limit
increase. To learn how to submit a bucket limit increase, see AWS service quotas in the AWS General
Reference. You can store any number of objects in a bucket.


You can use the Amazon S3 console, Amazon S3 APIs, AWS CLI, or AWS SDKs to create a bucket.

Using the S3 console


1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Choose Create bucket.

The Create bucket wizard opens.


3. In Bucket name, enter a DNS-compliant name for your bucket.

The bucket name must:

• Be unique across all of Amazon S3.


• Be between 3 and 63 characters long.
• Not contain uppercase characters.
• Start with a lowercase letter or number.

After you create the bucket, you can't change its name. For information about naming buckets, see
Bucket naming rules (p. 27).
Important
Avoid including sensitive information, such as account numbers, in the bucket name. The
bucket name is visible in the URLs that point to the objects in the bucket.
4. In Region, choose the AWS Region where you want the bucket to reside.

Choose a Region close to you to minimize latency and costs and address regulatory requirements.
Objects stored in a Region never leave that Region unless you explicitly transfer them to another
Region. For a list of Amazon S3 AWS Regions, see AWS service endpoints in the Amazon Web Services
General Reference.
5. In Bucket settings for Block Public Access, choose the Block Public Access settings that you want to
apply to the bucket.

We recommend that you keep all settings enabled unless you know that you need to turn off one
or more of them for your use case, such as to host a public website. Block Public Access settings
that you enable for the bucket are also enabled for all access points that you create on the bucket.
For more information about blocking public access, see Blocking public access to your Amazon S3
storage (p. 488).
6. (Optional) If you want to enable S3 Object Lock, do the following:

a. Choose Advanced settings, and read the message that appears.


Important
You can only enable S3 Object Lock for a bucket when you create it. If you enable
Object Lock for the bucket, you can't disable it later. Enabling Object Lock also enables
versioning for the bucket. After you enable Object Lock for the bucket, you must
configure the Object Lock settings before any objects in the bucket will be protected.
For more information about configuring protection for objects, see Using S3 Object
Lock (p. 559).
b. If you want to enable Object Lock, enter enable in the text box and choose Confirm.

For more information about the S3 Object Lock feature, see Using S3 Object Lock (p. 559).
Note
To create an Object Lock enabled bucket, you must have the following permissions:
s3:CreateBucket, s3:PutBucketVersioning, and s3:PutBucketObjectLockConfiguration.


7. Choose Create bucket.

Using the AWS SDKs


When you use the AWS SDKs to create a bucket, you must create a client and then use the client to send
a request to create a bucket. As a best practice, you should create your client and bucket in the same
AWS Region. If you don't specify a Region when you create a client or a bucket, Amazon S3 uses the
default Region US East (N. Virginia).

To create a client to access a dual-stack endpoint, you must specify an AWS Region. For more
information, see Dual-stack endpoints (p. 991). For a list of available AWS Regions, see Regions and
endpoints in the AWS General Reference.

When you create a client, the Region maps to the Region-specific endpoint. The client uses this endpoint
to communicate with Amazon S3: s3.<region>.amazonaws.com. If your Region launched after March
20, 2019, your client and bucket must be in the same Region. However, you can use a client in the US
East (N. Virginia) Region to create a bucket in any Region that launched before March 20, 2019. For more
information, see Legacy Endpoints (p. 1026).

These AWS SDK code examples perform the following tasks:

• Create a client by explicitly specifying an AWS Region — In the example, the client uses the s3.us-
west-2.amazonaws.com endpoint to communicate with Amazon S3. You can specify any AWS
Region. For a list of AWS Regions, see Regions and endpoints in the AWS General Reference.
• Send a create bucket request by specifying only a bucket name — The client sends a request to
Amazon S3 to create the bucket in the Region where you created a client.
• Retrieve information about the location of the bucket — Amazon S3 stores bucket location
information in the location subresource that is associated with the bucket.

Java

This example shows how to create an Amazon S3 bucket using the AWS SDK for Java. For
instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 1038).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CreateBucketRequest;
import com.amazonaws.services.s3.model.GetBucketLocationRequest;

import java.io.IOException;

public class CreateBucket2 {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();


if (!s3Client.doesBucketExistV2(bucketName)) {
// Because the CreateBucketRequest object doesn't specify a region, the
// bucket is created in the region specified in the client.
s3Client.createBucket(new CreateBucketRequest(bucketName));

// Verify that the bucket was created by retrieving it and checking its location.
String bucketLocation = s3Client.getBucketLocation(new GetBucketLocationRequest(bucketName));
System.out.println("Bucket location: " + bucketLocation);
}
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it and returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

.NET

For information about how to create and test a working sample, see Running the Amazon S3 .NET
Code Examples (p. 1039).

Example

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class CreateBucketTest
{
private const string bucketName = "*** bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
public static void Main()
{
s3Client = new AmazonS3Client(bucketRegion);
CreateBucketAsync().Wait();
}

static async Task CreateBucketAsync()


{
try
{
if (!(await AmazonS3Util.DoesS3BucketExistAsync(s3Client, bucketName)))
{
var putBucketRequest = new PutBucketRequest
{
BucketName = bucketName,
UseClientRegion = true
};


PutBucketResponse putBucketResponse = await s3Client.PutBucketAsync(putBucketRequest);
}
// Retrieve the bucket location.
string bucketLocation = await FindBucketLocationAsync(s3Client);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
static async Task<string> FindBucketLocationAsync(IAmazonS3 client)
{
string bucketLocation;
var request = new GetBucketLocationRequest()
{
BucketName = bucketName
};
GetBucketLocationResponse response = await
client.GetBucketLocationAsync(request);
bucketLocation = response.Location.ToString();
return bucketLocation;
}
}
}

Ruby

For information about how to create and test a working sample, see Using the AWS SDK for Ruby -
Version 3 (p. 1040).

Example

require 'aws-sdk-s3'

# Creates a bucket in Amazon S3.
#
# @param s3_client [Aws::S3::Client] An initialized Amazon S3 client.
# @param bucket_name [String] The bucket's name.
# @return [Boolean] true if the bucket was created; otherwise, false.
# @example
# s3_client = Aws::S3::Client.new(region: 'us-east-1')
# exit 1 unless bucket_created?(s3_client, 'doc-example-bucket')
def bucket_created?(s3_client, bucket_name)
  s3_client.create_bucket(bucket: bucket_name)
  true
rescue StandardError => e
  puts "Error while creating the bucket named '#{bucket_name}': #{e.message}"
  false
end

Using the AWS CLI


You can also use the AWS Command Line Interface (AWS CLI) to create an S3 bucket. For more
information, see create-bucket in the AWS CLI Command Reference.

For information about the AWS CLI, see What is the AWS Command Line Interface? in the AWS Command
Line Interface User Guide.
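
As a minimal sketch (the bucket name and Region are placeholders), the following commands create a bucket in US West (Oregon) and then confirm that it exists. For Regions other than US East (N. Virginia), the CLI requires a LocationConstraint that matches the Region; for us-east-1, omit the --create-bucket-configuration parameter.

aws s3api create-bucket --bucket bucket-name --region us-west-2 \
    --create-bucket-configuration LocationConstraint=us-west-2

aws s3api head-bucket --bucket bucket-name

head-bucket returns no output and a zero exit code when the bucket exists and you have permission to access it.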


Viewing the properties for an S3 bucket


You can view and configure the properties for an Amazon S3 bucket, including settings for versioning,
tags, default encryption, logging, notifications, and more.

To view the properties for an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to view the properties for.
3. Choose Properties.
4. On the Properties page, you can configure the following properties for the bucket.

• Bucket Versioning – Keep multiple versions of an object in one bucket by using versioning. By
default, versioning is disabled for a new bucket. For information about enabling versioning, see
Enabling versioning on buckets (p. 523).
• Tags – With AWS cost allocation, you can use bucket tags to annotate billing for your use of a
bucket. A tag is a key-value pair that represents a label that you assign to a bucket. To add tags,
choose Tags, and then choose Add tag. For more information, see Using cost allocation S3 bucket
tags (p. 695).
• Default encryption – Enabling default encryption provides you with automatic server-side
encryption. Amazon S3 encrypts an object before saving it to a disk and decrypts the object when
you download it. For more information, see Setting default server-side encryption behavior for
Amazon S3 buckets (p. 40).
• Server access logging – Get detailed records for the requests that are made to your bucket with
server access logging. By default, Amazon S3 doesn't collect server access logs. For information
about enabling server access logging, see Enabling Amazon S3 server access logging (p. 835)
• AWS CloudTrail data events – Use CloudTrail to log data events. By default, trails don't log data
events. Additional charges apply for data events. For more information, see Logging Data Events
for Trails in the AWS CloudTrail User Guide.
• Event notifications – Enable certain Amazon S3 bucket events to send notification messages to a
destination whenever the events occur. To enable events, choose Create event notification, and
then specify the settings you want to use. For more information, see Enabling and configuring
event notifications using the Amazon S3 console (p. 875).
• Transfer acceleration – Enable fast, easy, and secure transfers of files over long distances between
your client and an S3 bucket. For information about enabling transfer acceleration, see Enabling
and using S3 Transfer Acceleration (p. 47).
• Object Lock – Use S3 Object Lock to prevent an object from being deleted or overwritten for a
fixed amount of time or indefinitely. For more information, see Using S3 Object Lock (p. 559).
• Requester Pays – Enable Requester Pays if you want the requester (instead of the bucket owner)
to pay for requests and data transfers. For more information, see Using Requester Pays buckets for
storage transfers and usage (p. 52).
• Static website hosting – You can host a static website on Amazon S3. To enable static website
hosting, choose Static website hosting, and then specify the settings you want to use. For more
information, see Hosting a static website using Amazon S3 (p. 944).
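
Many of the properties in the preceding list can also be retrieved programmatically. A hedged AWS CLI sketch (the bucket name is a placeholder):

aws s3api get-bucket-versioning --bucket bucket-name
aws s3api get-bucket-encryption --bucket bucket-name
aws s3api get-bucket-accelerate-configuration --bucket bucket-name
aws s3api get-bucket-request-payment --bucket bucket-name

Each command maps to one of the properties above; commands for properties that have never been configured, such as get-bucket-encryption on a bucket without default encryption, return an error or an empty response.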

Accessing a bucket
You can access your bucket using the Amazon S3 console. Using the console UI, you can perform almost
all bucket operations without having to write any code.


If you access a bucket programmatically, Amazon S3 supports RESTful architecture in which your buckets
and objects are resources, each with a resource URI that uniquely identifies the resource.

Amazon S3 supports both virtual-hosted–style and path-style URLs to access a bucket. Because
buckets can be accessed using path-style and virtual-hosted–style URLs, we recommend that you
create buckets with DNS-compliant bucket names. For more information, see Bucket restrictions and
limitations (p. 55).
Note
Virtual-hosted-style and path-style requests use the S3 dot Region endpoint structure
(s3.Region), for example, https://my-bucket.s3.us-west-2.amazonaws.com. However,
some older Amazon S3 Regions also support S3 dash Region endpoints s3-Region, for
example, https://my-bucket.s3-us-west-2.amazonaws.com. If your bucket is in one of
these Regions, you might see s3-Region endpoints in your server access logs or AWS CloudTrail
logs. We recommend that you do not use this endpoint structure in your requests.

Virtual-hosted–style access
In a virtual-hosted–style request, the bucket name is part of the domain name in the URL.

Amazon S3 virtual-hosted-style URLs use the following format.

https://bucket-name.s3.Region.amazonaws.com/key name

In this example, my-bucket is the bucket name, US West (Oregon) is the Region, and puppy.png is the
key name:

https://my-bucket.s3.us-west-2.amazonaws.com/puppy.png

For more information about virtual hosted style access, see Virtual Hosted-Style Requests (p. 1023).

Path-style access
In Amazon S3, path-style URLs use the following format.

https://s3.Region.amazonaws.com/bucket-name/key name

For example, if you create a bucket named mybucket in the US West (Oregon) Region, and you want to
access the puppy.jpg object in that bucket, you can use the following path-style URL:

https://s3.us-west-2.amazonaws.com/mybucket/puppy.jpg

For more information, see Path-Style Requests (p. 1022).


Important
Update (September 23, 2020) – We have decided to delay the deprecation of path-style URLs to
ensure that customers have the time that they need to transition to virtual hosted-style URLs.
For more information, see Amazon S3 Path Deprecation Plan – The Rest of the Story in the AWS
News Blog.
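
In practice, the AWS CLI and SDKs choose the addressing style for you, and you can influence that choice through configuration. The following sketch is one way to do this with the CLI; the addressing_style setting accepts auto, virtual, or path, and aws s3 presign generates a time-limited URL for an object (the bucket and key are placeholders).

aws configure set default.s3.addressing_style virtual

aws s3 presign s3://my-bucket/puppy.png --expires-in 3600

With virtual addressing and a DNS-compliant bucket name, the generated URL typically follows the virtual-hosted-style format shown earlier.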

Accessing an S3 bucket over IPv6


Amazon S3 has a set of dual-stack endpoints, which support requests to S3 buckets over both Internet
Protocol version 6 (IPv6) and IPv4. For more information, see Making requests over IPv6 (p. 989).


Accessing a bucket through S3 access points


In addition to accessing a bucket directly, you can access a bucket through an access point. For more
information about the S3 access points feature, see Managing data access with Amazon S3 access
points (p. 184).

S3 access points only support virtual-host-style addressing. To address a bucket through an access point,
use the following format.

https://AccessPointName-AccountId.s3-accesspoint.region.amazonaws.com

Note

• If your access point name includes dash (-) characters, include the dashes in the URL and
insert another dash before the account ID. For example, to use an access point named
finance-docs owned by account 123456789012 in Region us-west-2, the appropriate
URL would be https://finance-docs-123456789012.s3-accesspoint.us-
west-2.amazonaws.com.
• S3 access points don't support access by HTTP, only secure access by HTTPS.
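
With the AWS CLI and SDKs, you can also supply the access point's Amazon Resource Name (ARN) wherever a bucket name is expected. The following sketch downloads an object through the finance-docs access point from the note above; the key and output file names are hypothetical.

aws s3api get-object \
    --bucket arn:aws:s3:us-west-2:123456789012:accesspoint/finance-docs \
    --key financial-report.pdf financial-report.pdf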

Accessing a bucket using S3://


Some AWS services require specifying an Amazon S3 bucket using S3://bucket. The following example
shows the correct format. Be aware that when using this format, the bucket name does not include the
AWS Region.

S3://bucket-name/key-name

The following example uses the sample bucket described in the earlier path-style section.

S3://mybucket/puppy.jpg
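
The high-level aws s3 commands in the AWS CLI use this same URI form (written with a lowercase s3://). For example, assuming the bucket and object from the path-style section exist, you could list the bucket and download the object as follows.

aws s3 ls s3://mybucket/

aws s3 cp s3://mybucket/puppy.jpg ./puppy.jpg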

Emptying a bucket
You can empty a bucket's contents using the Amazon S3 console, AWS SDKs, or AWS Command Line
Interface (AWS CLI). When you empty a bucket, you delete all the objects, but you keep the bucket.
Emptying a bucket cannot be undone. When you empty a bucket that has S3 Bucket Versioning
enabled or suspended, all versions of all the objects in the bucket are deleted. For more information, see
Working with objects in a versioning-enabled bucket (p. 529).

You can also specify a lifecycle configuration on a bucket to expire objects so that Amazon S3 can delete
them. For more information, see Setting lifecycle configuration on a bucket (p. 584).

Troubleshooting

Objects added to the bucket while the empty bucket action is in progress might be deleted. To prevent
new objects from being added to a bucket while the empty bucket action is in progress, you might need
to stop your AWS CloudTrail trails from logging events to the bucket. For more information, see Turning
off logging for a trail in the AWS CloudTrail User Guide.

As an alternative to stopping CloudTrail trails from delivering logs to the bucket, you can add a deny
s3:PutObject statement to your bucket policy. If you want to store new objects in the bucket again, you should
remove the deny s3:PutObject statement from your bucket policy. For more information, see Example —
Object operations (p. 295) and IAM JSON policy elements: Effect in the IAM User Guide

Using the S3 console


You can use the Amazon S3 console to empty a bucket, which deletes all of the objects in the bucket
without deleting the bucket.

To empty an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, select the option next to the name of the bucket that you want to empty,
and then choose Empty.
3. On the Empty bucket page, confirm that you want to empty the bucket by entering the bucket
name into the text field, and then choose Empty.
4. Monitor the progress of the bucket emptying process on the Empty bucket: Status page.

Using the AWS CLI


You can empty a bucket using the AWS CLI only if the bucket does not have Bucket Versioning enabled.
If versioning is not enabled, you can use the rm (remove) AWS CLI command with the --recursive
parameter to empty the bucket (or remove a subset of objects with a specific key name prefix).

The following rm command removes objects that have the key name prefix doc, for example, doc/doc1
and doc/doc2.

$ aws s3 rm s3://bucket-name/doc --recursive

Use the following command to remove all objects without specifying a prefix.

$ aws s3 rm s3://bucket-name --recursive

For more information, see Using high-level S3 commands with the AWS CLI in the AWS Command Line
Interface User Guide.
Note
This command doesn't permanently remove objects from a bucket that has versioning enabled. Instead,
Amazon S3 adds a delete marker for each object that the command deletes. For more information
about S3 Bucket Versioning, see Using versioning in S3 buckets (p. 519).
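
If you do need to permanently remove everything from a versioning-enabled bucket by using the AWS CLI, one approach is to list all object versions and delete markers and then delete each one by its version ID. The following sketch shows the pattern; the bucket name, key, and version ID are placeholders.

aws s3api list-object-versions --bucket bucket-name \
    --query 'Versions[].{Key:Key,VersionId:VersionId}'

aws s3api delete-object --bucket bucket-name --key doc/doc1 --version-id VERSION-ID

Delete markers appear under DeleteMarkers in the same list-object-versions response and are removed the same way. Deleting a specific version is permanent, so use this approach carefully.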

Using the AWS SDKs


You can use the AWS SDKs to empty a bucket or remove a subset of objects that have a specific key
name prefix.

For an example of how to empty a bucket using AWS SDK for Java, see Deleting a bucket (p. 37). The
code deletes all objects, regardless of whether the bucket has versioning enabled, and then it deletes the
bucket. To just empty the bucket, make sure that you remove the statement that deletes the bucket.

For more information about using other AWS SDKs, see Tools for Amazon Web Services.

Using a lifecycle configuration


If you use a lifecycle policy to empty your bucket, the lifecycle policy should include current versions,
non-current versions, delete markers, and incomplete multipart uploads.


You can add lifecycle configuration rules to expire all objects or a subset of objects that have a specific
key name prefix. For example, to remove all objects in a bucket, you can set a lifecycle rule to expire
objects one day after creation.

Amazon S3 supports a bucket lifecycle rule that you can use to stop multipart uploads that don't
complete within a specified number of days after being initiated. We recommend that you configure this
lifecycle rule to minimize your storage costs. For more information, see Configuring a bucket lifecycle
policy to abort incomplete multipart uploads (p. 79).

For more information about using a lifecycle configuration to empty a bucket, see Setting lifecycle
configuration on a bucket (p. 584) and Expiring objects (p. 584).
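
As a hedged sketch of such a configuration (the bucket name is a placeholder, and the one-day periods are only an example), the following AWS CLI command expires current object versions one day after creation, removes noncurrent versions one day after they become noncurrent, and aborts incomplete multipart uploads after one day.

aws s3api put-bucket-lifecycle-configuration --bucket bucket-name \
    --lifecycle-configuration '{
  "Rules": [
    {
      "ID": "empty-bucket",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 1 },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 1 },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 1 }
    }
  ]
}'

On a versioning-enabled bucket, expiring a current version adds a delete marker rather than deleting data immediately; the remaining expired object delete markers can be cleaned up with an additional rule, as described in the lifecycle topics referenced above.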

Deleting a bucket
You can delete an empty Amazon S3 bucket. Before deleting a bucket, consider the following:

• Bucket names are unique. If you delete a bucket, another AWS user can use the name.
• If the bucket hosts a static website, and you created and configured an Amazon Route 53 hosted
zone as described in Configuring a static website using a custom domain registered with Route
53 (p. 971), you must clean up the Route 53 hosted zone settings that are related to the bucket. For
more information, see Step 2: Delete the Route 53 hosted zone (p. 985).
• If the bucket receives log data from Elastic Load Balancing (ELB): We recommend that you stop the
delivery of ELB logs to the bucket before deleting it. After you delete the bucket, if another user
creates a bucket using the same name, your log data could potentially be delivered to that bucket. For
information about ELB access logs, see Access logs in the User Guide for Classic Load Balancers and
Access logs in the User Guide for Application Load Balancers.

Troubleshooting

If you are unable to delete an Amazon S3 bucket, consider the following:

• s3:DeleteBucket permissions – If you cannot delete a bucket, work with your IAM administrator to
confirm that you have s3:DeleteBucket permissions in your IAM user policy.
• s3:DeleteBucket deny statement – If you have s3:DeleteBucket permissions in your IAM policy and
you cannot delete a bucket, the bucket policy might include a deny statement for s3:DeleteBucket.
Buckets created by AWS Elastic Beanstalk have a policy containing this statement by default. Before you can
delete the bucket, you must delete this statement or the bucket policy.

Important
Bucket names are unique. If you delete a bucket, another AWS user can use the name. If you
want to continue to use the same bucket name, don't delete the bucket. We recommend that
you empty the bucket and keep it.

Using the S3 console


To delete an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, select the option next to the name of the bucket that you want to delete, and
then choose Delete at the top of the page.
3. On the Delete bucket page, confirm that you want to delete the bucket by entering the bucket
name into the text field, and then choose Delete bucket.


Note
If the bucket contains any objects, empty the bucket before deleting it by selecting the
empty bucket configuration link in the This bucket is not empty error alert and following
the instructions on the Empty bucket page. Then return to the Delete bucket page and
delete the bucket.

Using the AWS SDK Java


The following example shows you how to delete a bucket using the AWS SDK for Java. First, the code
deletes objects in the bucket and then it deletes the bucket. For information about other AWS SDKs, see
Tools for Amazon Web Services.

Java

The following Java example deletes a bucket that contains objects. The example deletes all objects,
and then it deletes the bucket. The example works for buckets with or without versioning enabled.
Note
For buckets without versioning enabled, you can delete all objects directly and then delete
the bucket. For buckets with versioning enabled, you must delete all object versions before
deleting the bucket.

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 1038).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

import java.util.Iterator;

public class DeleteBucket2 {

public static void main(String[] args) {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Delete all objects from the bucket. This is sufficient
// for unversioned buckets. For versioned buckets, when you attempt to delete objects, Amazon S3 inserts
// delete markers for all objects, but doesn't delete the object versions.
// To delete objects from versioned buckets, delete all of the object versions before deleting
// the bucket (see below for an example).
ObjectListing objectListing = s3Client.listObjects(bucketName);
while (true) {
Iterator<S3ObjectSummary> objIter =
objectListing.getObjectSummaries().iterator();
while (objIter.hasNext()) {
s3Client.deleteObject(bucketName, objIter.next().getKey());
}

// If the bucket contains many objects, the listObjects() call
// might not return all of the objects in the first listing. Check to
// see whether the listing was truncated. If so, retrieve the next page of objects
// and delete them.
if (objectListing.isTruncated()) {
objectListing = s3Client.listNextBatchOfObjects(objectListing);
} else {
break;
}
}

// Delete all object versions (required for versioned buckets).


VersionListing versionList = s3Client.listVersions(new
ListVersionsRequest().withBucketName(bucketName));
while (true) {
Iterator<S3VersionSummary> versionIter =
versionList.getVersionSummaries().iterator();
while (versionIter.hasNext()) {
S3VersionSummary vs = versionIter.next();
s3Client.deleteVersion(bucketName, vs.getKey(), vs.getVersionId());
}

if (versionList.isTruncated()) {
versionList = s3Client.listNextBatchOfVersions(versionList);
} else {
break;
}
}

// After all objects and object versions are deleted, delete the bucket.
s3Client.deleteBucket(bucketName);
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client couldn't
// parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Using the AWS CLI


You can delete a bucket that contains objects with the AWS CLI if it doesn't have versioning enabled.
When you delete a bucket that contains objects, all the objects in the bucket are permanently deleted,
including objects that are transitioned to the S3 Glacier storage class.

If your bucket does not have versioning enabled, you can use the rb (remove bucket) AWS CLI command
with the --force parameter to delete the bucket and all the objects in it. This command deletes all
objects first and then deletes the bucket.

$ aws s3 rb s3://bucket-name --force

For more information, see Using High-Level S3 Commands with the AWS Command Line Interface in the
AWS Command Line Interface User Guide.


Setting default server-side encryption behavior for Amazon S3 buckets
With Amazon S3 default encryption, you can set the default encryption behavior for an S3 bucket so
that all new objects are encrypted when they are stored in the bucket. The objects are encrypted using
server-side encryption with either Amazon S3-managed keys (SSE-S3) or AWS KMS keys stored in AWS
Key Management Service (AWS KMS) (SSE-KMS).

When you configure your bucket to use default encryption with SSE-KMS, you can also enable S3 Bucket
Keys to decrease request traffic from Amazon S3 to AWS Key Management Service (AWS KMS) and
reduce the cost of encryption. For more information, see Reducing the cost of SSE-KMS with Amazon S3
Bucket Keys (p. 228).

When you use server-side encryption, Amazon S3 encrypts an object before saving it to disk and
decrypts it when you download the objects. For more information about protecting data using
server-side encryption and encryption key management, see Protecting data using server-side
encryption (p. 219).

For more information about permissions required for default encryption, see PutBucketEncryption in the
Amazon Simple Storage Service API Reference.

To set up default encryption on a bucket, you can use the Amazon S3 console, AWS CLI, AWS SDKs, or
the REST API. For more information, see the section called “Enabling default encryption” (p. 41).

Encrypting existing objects

To encrypt your existing Amazon S3 objects, you can use Amazon S3 Batch Operations. You provide
S3 Batch Operations with a list of objects to operate on, and Batch Operations calls the respective API
to perform the specified operation. You can use the Batch Operations Copy operation to copy existing
unencrypted objects and write them back to the same bucket as encrypted objects. A single Batch
Operations job can perform the specified operation on billions of objects. For more information, see
Performing large-scale batch operations on Amazon S3 objects (p. 738) and the AWS Storage Blog post
Encrypting objects with Amazon S3 Batch Operations.

You can also encrypt existing objects using the Copy Object API. For more information, see the AWS
Storage Blog post Encrypting existing Amazon S3 objects with the AWS CLI.
Note
Amazon S3 buckets with default bucket encryption using SSE-KMS cannot be used as
destination buckets for the section called “Logging server access” (p. 833). Only SSE-S3
default encryption is supported for server access log destination buckets.

Using encryption for cross-account operations


Be aware of the following when using encryption for cross-account operations:

• The AWS managed key (aws/s3) is used when an AWS KMS key Amazon Resource Name (ARN) or alias is
not provided at request time or through the bucket's default encryption configuration.
• If you're uploading or accessing S3 objects using AWS Identity and Access Management (IAM)
principals that are in the same AWS account as your KMS key, you can use the AWS managed key (aws/
s3).
• Use a customer managed key if you want to grant cross-account access to your S3 objects. You can
configure the policy of a customer managed key to allow access from another account.
• If specifying your own KMS key, you should use a fully qualified KMS key ARN. When using a KMS
key alias, be aware that AWS KMS will resolve the key within the requester’s account. This can result in
data encrypted with a KMS key that belongs to the requester, and not the bucket administrator.


• You must specify a key that you (the requester) have been granted Encrypt permission to. For more
information, see Allows key users to use a KMS key for cryptographic operations in the AWS Key
Management Service Developer Guide.

For more information about when to use customer managed keys and the AWS managed KMS keys, see
Should I use an AWS managed key or a customer managed key to encrypt my objects on Amazon S3?

Using default encryption with replication


When you enable default encryption for a replication destination bucket, the following encryption
behavior applies:

• If objects in the source bucket are not encrypted, the replica objects in the destination bucket are
encrypted using the default encryption settings of the destination bucket. This results in the ETag of
the source object being different from the ETag of the replica object. You must update applications
that use the ETag to account for this difference.
• If objects in the source bucket are encrypted using SSE-S3 or SSE-KMS, the replica objects in the
destination bucket use the same encryption as the source object encryption. The default encryption
settings of the destination bucket are not used.

For more information about using default encryption with SSE-KMS, see Replicating encrypted
objects (p. 675).

Using Amazon S3 Bucket Keys with default encryption
When you configure your bucket to use default encryption for SSE-KMS on new objects, you can also
configure S3 Bucket Keys. S3 Bucket Keys decrease the number of transactions from Amazon S3 to AWS
KMS to reduce the cost of server-side encryption using AWS Key Management Service (SSE-KMS).

When you configure your bucket to use S3 Bucket Keys for SSE-KMS on new objects, AWS KMS generates
a bucket-level key that is used to create a unique data key for objects in the bucket. This bucket key is
used for a time-limited period within Amazon S3, reducing the need for Amazon S3 to make requests to
AWS KMS to complete encryption operations.

For more information about using an S3 Bucket Key, see Using Amazon S3 Bucket Keys (p. 228).

Enabling Amazon S3 default bucket encryption


You can set the default encryption behavior on an Amazon S3 bucket so that all objects are encrypted
when they are stored in the bucket. The objects are encrypted using server-side encryption with either
Amazon S3-managed keys (SSE-S3) or AWS Key Management Service (AWS KMS) keys.

When you configure default encryption using AWS KMS, you can also configure S3 Bucket Key. For more
information, see Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys (p. 228).

Default encryption works with all existing and new Amazon S3 buckets. Without default encryption, to
encrypt all objects stored in a bucket, you must include encryption information with every object storage
request. You must also set up an Amazon S3 bucket policy to reject storage requests that don't include
encryption information.

There are no additional charges for using default encryption for S3 buckets. Requests to configure the
default encryption feature incur standard Amazon S3 request charges. For information about pricing, see
Amazon S3 pricing. For SSE-KMS KMS key storage, AWS KMS charges apply and are listed at AWS KMS
pricing.

Changes to note before enabling default encryption

After you enable default encryption for a bucket, the following encryption behavior applies:

• There is no change to the encryption of the objects that existed in the bucket before default
encryption was enabled.
• When you upload objects after enabling default encryption:
• If your PUT request headers don't include encryption information, Amazon S3 uses the bucket’s
default encryption settings to encrypt the objects.
• If your PUT request headers include encryption information, Amazon S3 uses the encryption
information from the PUT request to encrypt objects before storing them in Amazon S3.
• If you use the SSE-KMS option for your default encryption configuration, you are subject to the RPS
(requests per second) limits of AWS KMS. For more information about AWS KMS limits and how to
request a limit increase, see AWS KMS limits.
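
To see this behavior for yourself, you can upload objects with and without encryption headers and then inspect the result. A hedged AWS CLI sketch (the bucket name, key names, and KMS key ARN are placeholders):

# Relies on the bucket's default encryption settings
aws s3 cp ./report.txt s3://bucket-name/report.txt

# Overrides the default with explicit SSE-KMS headers
aws s3 cp ./report.txt s3://bucket-name/report-kms.txt --sse aws:kms --sse-kms-key-id KMS-Key-ARN

# Reports the ServerSideEncryption value that was applied to the object
aws s3api head-object --bucket bucket-name --key report.txt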

Using the S3 console


To enable default encryption on an Amazon S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want.
3. Choose Properties.
4. Under Default encryption, choose Edit.
5. To enable or disable server-side encryption, choose Enable or Disable.
6. To enable server-side encryption using an Amazon S3-managed key, under Encryption key type,
choose Amazon S3 key (SSE-S3).

For more information about using Amazon S3 server-side encryption to encrypt your data, see
Protecting data using server-side encryption with Amazon S3-managed encryption keys (SSE-
S3) (p. 237).
7. To enable server-side encryption using an AWS KMS key, follow these steps:

a. Under Encryption key type, choose AWS Key Management Service key (SSE-KMS).
Important
If you use the AWS KMS option for your default encryption configuration, you are
subject to the RPS (requests per second) limits of AWS KMS. For more information
about AWS KMS quotas and how to request a quota increase, see Quotas.
b. Under AWS KMS key choose one of the following:

• AWS managed key (aws/s3)


• Choose from your KMS root keys, and choose your KMS root key.
• Enter KMS root key ARN, and enter your AWS KMS key ARN.

Important
You can only use KMS keys that are enabled in the same AWS Region as the bucket.
When you choose Choose from your KMS keys, the S3 console only lists 100 KMS
keys per Region. If you have more than 100 KMS keys in the same Region, you can only
see the first 100 KMS keys in the S3 console. To use a KMS key that is not listed in the
console, choose Custom KMS ARN, and enter the KMS key ARN.


When you use an AWS KMS key for server-side encryption in Amazon S3, you must
choose a symmetric KMS key. Amazon S3 only supports symmetric KMS keys and not
asymmetric KMS keys. For more information, see Using symmetric and asymmetric keys
in the AWS Key Management Service Developer Guide.

For more information about creating an AWS KMS key, see Creating keys in the AWS Key
Management Service Developer Guide. For more information about using AWS KMS with Amazon
S3, see Protecting Data Using Server-Side Encryption with KMS keys Stored in AWS Key
Management Service (SSE-KMS) (p. 220).
8. To use S3 Bucket Keys, under Bucket Key, choose Enable.

When you configure your bucket to use default encryption with SSE-KMS, you can also enable S3
Bucket Key. S3 Bucket Keys decrease request traffic from Amazon S3 to AWS KMS and lower the
cost of encryption. For more information, see Reducing the cost of SSE-KMS with Amazon S3 Bucket
Keys (p. 228).
9. Choose Save changes.

Using the AWS CLI


These examples show you how to configure default encryption using Amazon S3-managed encryption
(SSE-S3) or AWS KMS encryption (SSE-KMS) with an S3 Bucket Key.

For more information about default encryption, see Setting default server-side encryption behavior
for Amazon S3 buckets (p. 40). For more information about using the AWS CLI to configure default
encryption, see put-bucket-encryption.

Example – Default encryption with SSE-S3

This example configures default bucket encryption with Amazon S3-managed encryption.

aws s3api put-bucket-encryption --bucket bucket-name --server-side-encryption-configuration


'{
"Rules": [
{
"ApplyServerSideEncryptionByDefault": {
"SSEAlgorithm": "AES256"
}
}
]
}'

Example – Default encryption with SSE-KMS using an S3 Bucket Key

This example configures default bucket encryption with SSE-KMS using an S3 Bucket Key.

aws s3api put-bucket-encryption --bucket bucket-name --server-side-encryption-configuration


'{
"Rules": [
{
"ApplyServerSideEncryptionByDefault": {
"SSEAlgorithm": "aws:kms",
"KMSMasterKeyID": "KMS-Key-ARN"
},
"BucketKeyEnabled": true
}
]
}'


Using the REST API


Use the REST API PUT Bucket encryption operation to enable default encryption and to set the type of
server-side encryption to use—SSE-S3 or SSE-KMS.

For more information, see PutBucketEncryption in the Amazon Simple Storage Service API Reference.

Monitoring default encryption with CloudTrail and CloudWatch
You can track default encryption configuration requests for Amazon S3 buckets using AWS CloudTrail
events. The following API event names are used in CloudTrail logs:

• PutBucketEncryption
• GetBucketEncryption
• DeleteBucketEncryption

You can also create Amazon CloudWatch Events with S3 bucket-level operations as the event type.
For more information about CloudTrail events, see Enable logging for objects in a bucket using the
console (p. 826).

You can use CloudTrail logs for object-level Amazon S3 actions to track PUT and POST requests to
Amazon S3. You can use these actions to verify whether default encryption is being used to encrypt
objects when incoming PUT requests don't have encryption headers.

When Amazon S3 encrypts an object using the default encryption settings, the log includes
the following field as the name/value pair: "SSEApplied":"Default_SSE_S3" or
"SSEApplied":"Default_SSE_KMS".

When Amazon S3 encrypts an object using the PUT encryption headers, the log includes one of the
following fields as the name/value pair: "SSEApplied":"SSE_S3", "SSEApplied":"SSE_KMS", or
"SSEApplied":"SSE_C".

For multipart uploads, this information is included in the InitiateMultipartUpload API requests. For
more information about using CloudTrail and CloudWatch, see Monitoring Amazon S3 (p. 814).
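
As a hedged example of reviewing these configuration events, you can query recent CloudTrail management events by event name from the AWS CLI:

aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=PutBucketEncryption \
    --max-results 10

The returned events typically identify the bucket, the requester, and the time of the change, which can help you audit when default encryption settings were modified.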

Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration
Amazon S3 Transfer Acceleration is a bucket-level feature that enables fast, easy, and secure transfers
of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage
of the globally distributed edge locations in Amazon CloudFront. As the data arrives at an edge location,
the data is routed to Amazon S3 over an optimized network path.

When you use Transfer Acceleration, additional data transfer charges might apply. For more information
about pricing, see Amazon S3 pricing.

Why use Transfer Acceleration?


You might want to use Transfer Acceleration on a bucket for various reasons:

• Your customers upload to a centralized bucket from all over the world.
• You transfer gigabytes to terabytes of data on a regular basis across continents.


• You can't use all of your available bandwidth over the internet when uploading to Amazon S3.

For more information about when to use Transfer Acceleration, see Amazon S3 FAQs.

Requirements for using Transfer Acceleration


The following are required when you are using Transfer Acceleration on an S3 bucket:

• Transfer Acceleration is only supported on virtual-hosted style requests. For more information about
virtual-hosted style requests, see Making requests using the REST API (p. 1020).
• The name of the bucket used for Transfer Acceleration must be DNS-compliant and must not contain
periods (".").
• Transfer Acceleration must be enabled on the bucket. For more information, see Enabling and using S3
Transfer Acceleration (p. 47).

After you enable Transfer Acceleration on a bucket, it might take up to 20 minutes before the data
transfer speed to the bucket increases.
Note
Transfer Acceleration is currently not supported for buckets located in the following Regions:
• Africa (Cape Town) (af-south-1)
• Asia Pacific (Hong Kong) (ap-east-1)
• Asia Pacific (Osaka) (ap-northeast-3)
• Europe (Stockholm) (eu-north-1)
• Europe (Milan) (eu-south-1)
• Middle East (Bahrain) (me-south-1)
• To access the bucket that is enabled for Transfer Acceleration, you must use the endpoint
bucketname.s3-accelerate.amazonaws.com. Or, use the dual-stack endpoint bucketname.s3-
accelerate.dualstack.amazonaws.com to connect to the enabled bucket over IPv6.
• You must be the bucket owner to set the transfer acceleration state. The bucket owner can
assign permissions to other users to allow them to set the acceleration state on a bucket. The
s3:PutAccelerateConfiguration permission permits users to enable or disable Transfer
Acceleration on a bucket. The s3:GetAccelerateConfiguration permission permits users to
return the Transfer Acceleration state of a bucket, which is either Enabled or Suspended. For more
information about these permissions, see Example — Bucket subresource operations (p. 296) and
Identity and access management in Amazon S3 (p. 274). An example policy that grants these two
permissions is shown after this list.
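
The following identity-based policy is a minimal sketch of how a bucket owner might grant those two
permissions to another user. The bucket name is a placeholder, and you would attach the policy with
your usual IAM tooling:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutAccelerateConfiguration",
                "s3:GetAccelerateConfiguration"
            ],
            "Resource": "arn:aws:s3:::bucketname"
        }
    ]
}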

The following sections describe how to get started and use Amazon S3 Transfer Acceleration for
transferring data.

Topics
• Getting started with Amazon S3 Transfer Acceleration (p. 45)
• Enabling and using S3 Transfer Acceleration (p. 47)
• Using the Amazon S3 Transfer Acceleration Speed Comparison tool (p. 51)

Getting started with Amazon S3 Transfer Acceleration

You can use Amazon S3 Transfer Acceleration for fast, easy, and secure transfers of files over long
distances between your client and an S3 bucket. Transfer Acceleration uses the globally distributed edge


locations in Amazon CloudFront. As the data arrives at an edge location, data is routed to Amazon S3
over an optimized network path.

To get started using Amazon S3 Transfer Acceleration, perform the following steps:

1. Enable Transfer Acceleration on a bucket

You can enable Transfer Acceleration on a bucket any of the following ways:
• Use the Amazon S3 console.
• Use the REST API PUT Bucket accelerate operation.
• Use the AWS CLI and AWS SDKs. For more information, see Developing with Amazon S3 using the
AWS SDKs, and explorers (p. 1030).

For more information, see Enabling and using S3 Transfer Acceleration (p. 47).
Note
For your bucket to work with transfer acceleration, the bucket name must conform to DNS
naming requirements and must not contain periods (".").
2. Transfer data to and from the acceleration-enabled bucket

Use one of the following s3-accelerate endpoint domain names:


• To access an acceleration-enabled bucket, use bucketname.s3-accelerate.amazonaws.com.
• To access an acceleration-enabled bucket over IPv6, use bucketname.s3-
accelerate.dualstack.amazonaws.com.

Amazon S3 dual-stack endpoints support requests to S3 buckets over IPv6 and IPv4. The Transfer
Acceleration dual-stack endpoint only uses the virtual hosted-style type of endpoint name. For
more information, see Getting started making requests over IPv6 (p. 989) and Using Amazon S3
dual-stack endpoints (p. 991).
Note
You can continue to use the regular endpoint in addition to the accelerate endpoints.

You can point your Amazon S3 PUT object and GET object requests to the s3-accelerate
endpoint domain name after you enable Transfer Acceleration. For example, suppose that you
currently have a REST API application using PUT Object that uses the hostname mybucket.s3.us-
east-1.amazonaws.com in the PUT request. To accelerate the PUT, you change the hostname in
your request to mybucket.s3-accelerate.amazonaws.com. To go back to using the standard
upload speed, change the name back to mybucket.s3.us-east-1.amazonaws.com.

After Transfer Acceleration is enabled, it can take up to 20 minutes for you to realize the performance
benefit. However, the accelerate endpoint is available as soon as you enable Transfer Acceleration.

You can use the accelerate endpoint in the AWS CLI, AWS SDKs, and other tools that transfer data
to and from Amazon S3. If you are using the AWS SDKs, some of the supported languages use
an accelerate endpoint client configuration flag so you don't need to explicitly set the endpoint
for Transfer Acceleration to bucketname.s3-accelerate.amazonaws.com. For examples of
how to use an accelerate endpoint client configuration flag, see Enabling and using S3 Transfer
Acceleration (p. 47).

You can use all Amazon S3 operations through the transfer acceleration endpoints except for the
following:

• GET Service (list buckets)


• PUT Bucket (create bucket)
• DELETE Bucket

Also, Amazon S3 Transfer Acceleration does not support cross-Region copies using PUT Object - Copy.

Enabling and using S3 Transfer Acceleration


You can use Amazon S3 Transfer Acceleration to transfer files quickly and securely over long distances
between your client and an S3 bucket. You can enable Transfer Acceleration using the S3 console, the
AWS Command Line Interface (AWS CLI), or the AWS SDKs.

This section provides examples of how to enable Amazon S3 Transfer Acceleration on a bucket and use
the acceleration endpoint for the enabled bucket.

For more information about Transfer Acceleration requirements, see Configuring fast, secure file
transfers using Amazon S3 Transfer Acceleration (p. 44).

Using the S3 console


Note
If you want to compare accelerated and non-accelerated upload speeds, open the Amazon S3
Transfer Acceleration Speed Comparison tool.
The Speed Comparison tool uses multipart upload to transfer a file from your browser to various
AWS Regions with and without Amazon S3 transfer acceleration. You can compare the upload
speed for direct uploads and transfer accelerated uploads by Region.

To enable transfer acceleration for an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to enable transfer acceleration for.
3. Choose Properties.
4. Under Transfer acceleration, choose Edit.
5. Choose Enable, and choose Save changes.

To access accelerated data transfers

1. After Amazon S3 enables transfer acceleration for your bucket, view the Properties tab for the
bucket.
2. Under Transfer acceleration, Accelerated endpoint displays the transfer acceleration endpoint for
your bucket. Use this endpoint to access accelerated data transfers to and from your bucket.

If you suspend transfer acceleration, the accelerate endpoint no longer works.

Using the AWS CLI


The following are examples of AWS CLI commands used for Transfer Acceleration. For instructions on
setting up the AWS CLI, see Developing with Amazon S3 using the AWS CLI (p. 1029).

Enabling Transfer Acceleration on a bucket

Use the AWS CLI put-bucket-accelerate-configuration command to enable or suspend Transfer
Acceleration on a bucket.

The following example sets Status=Enabled to enable Transfer Acceleration on a bucket. You use
Status=Suspended to suspend Transfer Acceleration.


Example

$ aws s3api put-bucket-accelerate-configuration --bucket bucketname --accelerate-configuration Status=Enabled
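
To verify the change, you can retrieve the current acceleration state with the AWS CLI (a sketch; the
bucket name is a placeholder). The get-bucket-accelerate-configuration command returns a Status of
Enabled or Suspended, or no Status field if acceleration has never been configured on the bucket:

$ aws s3api get-bucket-accelerate-configuration --bucket bucketname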

Using Transfer Acceleration


You can direct all Amazon S3 requests made by s3 and s3api AWS CLI commands to the
accelerate endpoint: s3-accelerate.amazonaws.com. To do this, set the configuration value
use_accelerate_endpoint to true in a profile in your AWS Config file. Transfer Acceleration must be
enabled on your bucket to use the accelerate endpoint.

All requests are sent using the virtual style of bucket addressing: my-bucket.s3-
accelerate.amazonaws.com. Any ListBuckets, CreateBucket, and DeleteBucket requests are
not sent to the accelerate endpoint because the endpoint doesn't support those operations.

For more information about use_accelerate_endpoint, see AWS CLI S3 Configuration in the AWS CLI
Command Reference.

The following example sets use_accelerate_endpoint to true in the default profile.

Example

$ aws configure set default.s3.use_accelerate_endpoint true

If you want to use the accelerate endpoint for some AWS CLI commands but not others, you can use
either one of the following two methods:

• Use the accelerate endpoint for any s3 or s3api command by setting the --endpoint-url parameter
to https://s3-accelerate.amazonaws.com.
• Set up separate profiles in your AWS Config file. For example, create one profile that sets
use_accelerate_endpoint to true and a profile that does not set use_accelerate_endpoint.
When you run a command, specify which profile you want to use, depending upon whether you want
to use the accelerate endpoint, as shown in the sketch after this list.
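
For example, the following sketch creates a profile named accelerate (the name is arbitrary, and the
profile also needs credentials configured) that always uses the accelerate endpoint, and then uses it
for a single upload:

$ aws configure set profile.accelerate.s3.use_accelerate_endpoint true
$ aws s3 cp file.txt s3://bucketname/keyname --profile accelerate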

Uploading an object to a bucket enabled for Transfer Acceleration


The following example uploads a file to a bucket enabled for Transfer Acceleration by using the default
profile that has been configured to use the accelerate endpoint.

Example

$ aws s3 cp file.txt s3://bucketname/keyname --region region

The following example uploads a file to a bucket enabled for Transfer Acceleration by using the --
endpoint-url parameter to specify the accelerate endpoint.

Example

$ aws configure set s3.addressing_style virtual
$ aws s3 cp file.txt s3://bucketname/keyname --region region --endpoint-url https://s3-accelerate.amazonaws.com

Using the AWS SDKs


The following are examples of using Transfer Acceleration to upload objects to Amazon S3 using
the AWS SDK. Some of the AWS SDK supported languages (for example, Java and .NET) use an


accelerate endpoint client configuration flag so you don't need to explicitly set the endpoint for Transfer
Acceleration to bucketname.s3-accelerate.amazonaws.com.

Java

Example

The following example shows how to use an accelerate endpoint to upload an object to Amazon S3.
The example does the following:

• Creates an AmazonS3Client that is configured to use accelerate endpoints. All buckets that the
client accesses must have Transfer Acceleration enabled.
• Enables Transfer Acceleration on a specified bucket. This step is necessary only if the bucket you
specify doesn't already have Transfer Acceleration enabled.
• Verifies that transfer acceleration is enabled for the specified bucket.
• Uploads a new object to the specified bucket using the bucket's accelerate endpoint.

For more information about using Transfer Acceleration, see Getting started with Amazon S3
Transfer Acceleration (p. 45). For instructions on creating and testing a working sample, see
Testing the Amazon S3 Java Code Examples (p. 1038).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketAccelerateConfiguration;
import com.amazonaws.services.s3.model.BucketAccelerateStatus;
import com.amazonaws.services.s3.model.GetBucketAccelerateConfigurationRequest;
import com.amazonaws.services.s3.model.SetBucketAccelerateConfigurationRequest;

public class TransferAcceleration {


public static void main(String[] args) {
Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String keyName = "*** Key name ***";

try {
            // Create an Amazon S3 client that is configured to use the accelerate endpoint.
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.enableAccelerateMode()
.build();

// Enable Transfer Acceleration for the specified bucket.


s3Client.setBucketAccelerateConfiguration(
new SetBucketAccelerateConfigurationRequest(bucketName,
new BucketAccelerateConfiguration(
BucketAccelerateStatus.Enabled)));

// Verify that transfer acceleration is enabled for the bucket.


String accelerateStatus = s3Client.getBucketAccelerateConfiguration(
new GetBucketAccelerateConfigurationRequest(bucketName))
.getStatus();
System.out.println("Bucket accelerate status: " + accelerateStatus);

// Upload a new object using the accelerate endpoint.


            s3Client.putObject(bucketName, keyName, "Test object for transfer acceleration");
            System.out.println("Object \"" + keyName + "\" uploaded with transfer acceleration.");
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

.NET

The following example shows how to use the AWS SDK for .NET to enable Transfer Acceleration on
a bucket. For instructions on how to create and test a working sample, see Running the Amazon
S3 .NET Code Examples (p. 1039).

Example

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class TransferAccelerationTest
{
private const string bucketName = "*** bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
public static void Main()
{
s3Client = new AmazonS3Client(bucketRegion);
EnableAccelerationAsync().Wait();
}

static async Task EnableAccelerationAsync()


{
try
{
var putRequest = new PutBucketAccelerateConfigurationRequest
{
BucketName = bucketName,
AccelerateConfiguration = new AccelerateConfiguration
{
Status = BucketAccelerateStatus.Enabled
}
};
await s3Client.PutBucketAccelerateConfigurationAsync(putRequest);

var getRequest = new GetBucketAccelerateConfigurationRequest


{
BucketName = bucketName
};


                var response = await s3Client.GetBucketAccelerateConfigurationAsync(getRequest);
                Console.WriteLine("Acceleration state = '{0}' ", response.Status);


}
catch (AmazonS3Exception amazonS3Exception)
{
                Console.WriteLine(
                    "Error occurred. Message:'{0}' when setting transfer acceleration",
                    amazonS3Exception.Message);
}
}
}
}

When uploading an object to a bucket that has Transfer Acceleration enabled, you specify using the
acceleration endpoint at the time of creating a client.

var client = new AmazonS3Client(new AmazonS3Config
{
    RegionEndpoint = TestRegionEndpoint,
    UseAccelerateEndpoint = true
});

Javascript

For an example of enabling Transfer Acceleration by using the AWS SDK for JavaScript, see Calling
the putBucketAccelerateConfiguration operation in the AWS SDK for JavaScript API Reference.
Python (Boto)

For an example of enabling Transfer Acceleration by using the SDK for Python, see
put_bucket_accelerate_configuration in the AWS SDK for Python (Boto3) API Reference.
Other

For information about using other AWS SDKs, see Sample Code and Libraries.

Using the Amazon S3 Transfer Acceleration Speed Comparison tool

You can use the Amazon S3 Transfer Acceleration Speed Comparison tool to compare accelerated and
non-accelerated upload speeds across Amazon S3 Regions. The Speed Comparison tool uses multipart
uploads to transfer a file from your browser to various Amazon S3 Regions with and without using
Transfer Acceleration.

You can access the Speed Comparison tool using either of the following methods:

• Copy the following URL into your browser window, replacing region with the AWS Region that you
are using (for example, us-west-2) and yourBucketName with the name of the bucket that you
want to evaluate:

https://s3-accelerate-speedtest.s3-accelerate.amazonaws.com/en/accelerate-speed-comparsion.html?region=region&origBucketName=yourBucketName

For a list of the Regions supported by Amazon S3, see Amazon S3 endpoints and quotas in the AWS
General Reference.


• Use the Amazon S3 console.

Using Requester Pays buckets for storage transfers and usage

In general, bucket owners pay for all Amazon S3 storage and data transfer costs that are associated with
their bucket. However, you can configure a bucket to be a Requester Pays bucket. With Requester Pays
buckets, the requester instead of the bucket owner pays the cost of the request and the data download
from the bucket. The bucket owner always pays the cost of storing data.

Typically, you configure buckets to be Requester Pays buckets when you want to share data but not
incur charges associated with others accessing the data. For example, you might use Requester Pays
buckets when making available large datasets, such as zip code directories, reference data, geospatial
information, or web crawling data.
Important
If you enable Requester Pays on a bucket, anonymous access to that bucket is not allowed.

You must authenticate all requests involving Requester Pays buckets. The request authentication enables
Amazon S3 to identify and charge the requester for their use of the Requester Pays bucket.

When the requester assumes an AWS Identity and Access Management (IAM) role before making their
request, the account to which the role belongs is charged for the request. For more information about
IAM roles, see IAM roles in the IAM User Guide.

After you configure a bucket to be a Requester Pays bucket, requesters must include x-amz-request-
payer in their requests either in the header, for POST, GET and HEAD requests, or as a parameter in
a REST request to show that they understand that they will be charged for the request and the data
download.

Requester Pays buckets do not support the following:

• Anonymous requests
• SOAP requests
• Using a Requester Pays bucket as the target bucket for end-user logging, or vice versa. However, you
can turn on end-user logging on a Requester Pays bucket where the target bucket is not a Requester
Pays bucket.

How Requester Pays charges work


The charge for successful Requester Pays requests is straightforward: The requester pays for the data
transfer and the request, and the bucket owner pays for the data storage. However, the bucket owner is
charged for the request under the following conditions:

• The requester doesn't include the parameter x-amz-request-payer in the header (GET, HEAD, or
POST) or as a parameter (REST) in the request (HTTP code 403).
• Request authentication fails (HTTP code 403).
• The request is anonymous (HTTP code 403).
• The request is a SOAP request.

For more information about Requester Pays, see the following topics.

Topics


• Configuring Requester Pays on a bucket (p. 53)
• Retrieving the requestPayment configuration using the REST API (p. 54)
• Downloading objects in Requester Pays buckets (p. 54)

Configuring Requester Pays on a bucket


You can configure an Amazon S3 bucket to be a Requester Pays bucket so that the requester pays the
cost of the request and data download instead of the bucket owner.

This section provides examples of how to configure Requester Pays on an Amazon S3 bucket using the
console and the REST API.

Using the S3 console


To enable Requester Pays for an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to enable Requester Pays for.
3. Choose Properties.
4. Under Requester pays, choose Edit.
5. Choose Enable, and choose Save changes.

Amazon S3 enables Requester Pays for your bucket and displays your Bucket overview. Under
Requester pays, you see Enabled.

Using the REST API


Only the bucket owner can set the RequestPaymentConfiguration.payer configuration value of a
bucket to BucketOwner (the default) or Requester. Setting the requestPayment resource is optional.
By default, the bucket is not a Requester Pays bucket.

To revert a Requester Pays bucket to a regular bucket, you use the value BucketOwner. Typically, you
would use BucketOwner when uploading data to the Amazon S3 bucket, and then you would set the
value to Requester before publishing the objects in the bucket.

To set requestPayment

• Use a PUT request to set the Payer value to Requester on a specified bucket.

PUT ?requestPayment HTTP/1.1
Host: [BucketName].s3.amazonaws.com
Content-Length: 173
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS [Signature]

<RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Payer>Requester</Payer>
</RequestPaymentConfiguration>

If the request succeeds, Amazon S3 returns a response similar to the following.

HTTP/1.1 200 OK
x-amz-id-2: [id]
x-amz-request-id: [request_id]
Date: Wed, 01 Mar 2009 12:00:00 GMT
Content-Length: 0
Connection: close
Server: AmazonS3
x-amz-request-charged: requester

You can set Requester Pays only at the bucket level. You can't set Requester Pays for specific objects
within the bucket.

You can configure a bucket to be BucketOwner or Requester at any time. However, there might be a
few minutes before the new configuration value takes effect.
Note
Bucket owners who give out presigned URLs should consider carefully before configuring a
bucket to be Requester Pays, especially if the URL has a long lifetime. The bucket owner is
charged each time the requester uses a presigned URL that uses the bucket owner's credentials.
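
If you use the AWS CLI instead of the REST API, an equivalent configuration change can be made with
the put-bucket-request-payment command. The following is a minimal sketch (the bucket name is a
placeholder):

$ aws s3api put-bucket-request-payment --bucket bucketname --request-payment-configuration Payer=Requester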

Retrieving the requestPayment configuration using the REST API

You can determine the Payer value that is set on a bucket by requesting the resource requestPayment.

To return the requestPayment resource

• Use a GET request to obtain the requestPayment resource, as shown in the following request.

GET ?requestPayment HTTP/1.1
Host: [BucketName].s3.amazonaws.com
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS [Signature]

If the request succeeds, Amazon S3 returns a response similar to the following.

HTTP/1.1 200 OK
x-amz-id-2: [id]
x-amz-request-id: [request_id]
Date: Wed, 01 Mar 2009 12:00:00 GMT
Content-Type: [type]
Content-Length: [length]
Connection: close
Server: AmazonS3

<?xml version="1.0" encoding="UTF-8"?>
<RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Payer>Requester</Payer>
</RequestPaymentConfiguration>

This response shows that the payer value is set to Requester.
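
A comparable check is available from the AWS CLI. The following sketch (the bucket name is a
placeholder) calls the same underlying operation and prints the Payer value as JSON:

$ aws s3api get-bucket-request-payment --bucket bucketname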

Downloading objects in Requester Pays buckets


Because requesters are charged for downloading data from Requester Pays buckets, the requests must
contain a special parameter, x-amz-request-payer, which confirms that the requester knows that
they will be charged for the download. To access objects in Requester Pays buckets, requests must
include one of the following.


• For GET, HEAD, and POST requests, include x-amz-request-payer: requester in the header.
• For signed URLs, include x-amz-request-payer=requester in the request.

If the request succeeds and the requester is charged, the response includes the header x-amz-request-
charged:requester. If x-amz-request-payer is not in the request, Amazon S3 returns a 403 error
and charges the bucket owner for the request.
Note
Bucket owners do not need to add x-amz-request-payer to their requests.
Ensure that you have included x-amz-request-payer and its value in your signature
calculation. For more information, see Constructing the CanonicalizedAmzHeaders
Element (p. 1059).

Using the REST API


To download objects from a Requester Pays bucket

• Use a GET request to download an object from a Requester Pays bucket, as shown in the following
request.

GET /[destinationObject] HTTP/1.1
Host: [BucketName].s3.amazonaws.com
x-amz-request-payer: requester
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS [Signature]

If the GET request succeeds and the requester is charged, the response includes x-amz-request-
charged:requester.

Amazon S3 can return an Access Denied error for requests that try to get objects from a Requester
Pays bucket. For more information, see Error Responses in the Amazon Simple Storage Service API
Reference.

Using the AWS CLI


To download objects from a Requester Pays bucket using the AWS CLI, you specify --request-payer
requester as part of your get-object request. For more information, see get-object in the AWS CLI
Reference.
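
For example, the following sketch downloads an object from a Requester Pays bucket. The bucket name,
key, and output file name are placeholders:

$ aws s3api get-object --bucket bucketname --key keyname outfile.txt --request-payer requester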

Bucket restrictions and limitations


An Amazon S3 bucket is owned by the AWS account that created it. Bucket ownership is not transferable
to another account.

When you create a bucket, you choose its name and the AWS Region to create it in. After you create a
bucket, you can't change its name or Region.

When naming a bucket, choose a name that is relevant to you or your business. Avoid using names
associated with others. For example, you should avoid using AWS or Amazon in your bucket name.

By default, you can create up to 100 buckets in each of your AWS accounts. If you need additional
buckets, you can increase your account bucket limit to a maximum of 1,000 buckets by submitting a
service limit increase. There is no difference in performance whether you use many buckets or just a few.

For information about how to increase your bucket limit, see AWS service quotas in the AWS General
Reference.


Reusing bucket names

If a bucket is empty, you can delete it. After a bucket is deleted, the name becomes available for reuse.
However, after you delete the bucket, you might not be able to reuse the name for various reasons.

For example, when you delete the bucket and the name becomes available for reuse, another AWS
account might create a bucket with that name. In addition, some time might pass before you can reuse
the name of a deleted bucket. If you want to use the same bucket name, we recommend that you don't
delete the bucket.

For more information about bucket names, see Bucket naming rules (p. 27).

Objects and buckets

There is no limit to the number of objects that you can store in a bucket. You can store all of your objects
in a single bucket, or you can organize them across several buckets. However, you can't create a bucket
from within another bucket.

Bucket operations

The high availability engineering of Amazon S3 is focused on get, put, list, and delete operations. Because
bucket operations work against a centralized, global resource space, it is not appropriate to create or
delete buckets on the high availability code path of your application. It's better to create or delete
buckets in a separate initialization or setup routine that you run less often.

Bucket naming and automatically created buckets

If your application automatically creates buckets, choose a bucket naming scheme that is unlikely to
cause naming conflicts. Ensure that your application logic will choose a different bucket name if a bucket
name is already taken.

For more information about bucket naming, see Bucket naming rules (p. 27).


Uploading, downloading, and working with objects in Amazon S3

To store your data in Amazon S3, you work with resources known as buckets and objects. A bucket is a
container for objects. An object is a file and any metadata that describes that file.

To store an object in Amazon S3, you create a bucket and then upload the object to a bucket. When the
object is in the bucket, you can open it, download it, and move it. When you no longer need an object or
a bucket, you can clean up these resources.

With Amazon S3, you pay only for what you use. For more information about Amazon S3 features and
pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started with Amazon S3 for
free. For more information, see AWS Free Tier.

Topics
• Amazon S3 objects overview (p. 57)
• Creating object key names (p. 58)
• Working with object metadata (p. 61)
• Uploading objects (p. 66)
• Uploading and copying objects using multipart upload (p. 74)
• Copying objects (p. 108)
• Downloading an object (p. 115)
• Deleting Amazon S3 objects (p. 121)
• Organizing, listing, and working with your objects (p. 141)
• Using presigned URLs (p. 150)
• Transforming objects with S3 Object Lambda (p. 161)

Amazon S3 objects overview


Amazon S3 is an object store that uses unique key-values to store as many objects as you want. You store
these objects in one or more buckets, and each object can be up to 5 TB in size. An object consists of the
following:

Key

The name that you assign to an object. You use the object key to retrieve the object. For more
information, see Working with object metadata (p. 61).
Version ID

Within a bucket, a key and version ID uniquely identify an object. The version ID is a string that
Amazon S3 generates when you add an object to a bucket. For more information, see Using
versioning in S3 buckets (p. 519).
Value

The content that you are storing.


An object value can be any sequence of bytes. Objects can range in size from zero to 5 TB. For more
information, see Uploading objects (p. 66).
Metadata

A set of name-value pairs with which you can store information regarding the object. You can assign
metadata, referred to as user-defined metadata, to your objects in Amazon S3. Amazon S3 also
assigns system-metadata to these objects, which it uses for managing objects. For more information,
see Working with object metadata (p. 61).
Subresources

Amazon S3 uses the subresource mechanism to store object-specific additional information. Because
subresources are subordinates to objects, they are always associated with some other entity such as
an object or a bucket. For more information, see Object subresources (p. 58).
Access control information

You can control access to the objects you store in Amazon S3. Amazon S3 supports both the
resource-based access control, such as an access control list (ACL) and bucket policies, and user-
based access control. For more information, see Identity and access management in Amazon
S3 (p. 274).

Your Amazon S3 resources (for example, buckets and objects) are private by default. You must
explicitly grant permission for others to access these resources. For more information about sharing
objects, see Sharing an object with a presigned URL (p. 151).

Object subresources
Amazon S3 defines a set of subresources associated with buckets and objects. Subresources are
subordinates to objects. This means that subresources don't exist on their own. They are always
associated with some other entity, such as an object or a bucket.

The following table lists the subresources associated with Amazon S3 objects.

Subresource | Description
acl | Contains a list of grants identifying the grantees and the permissions granted. When you
create an object, the acl identifies the object owner as having full control over the object. You
can retrieve an object ACL or replace it with an updated list of grants. Any update to an ACL
requires you to replace the existing ACL. For more information about ACLs, see Access control list
(ACL) overview (p. 460).

Creating object key names


The object key (or key name) uniquely identifies the object in an Amazon S3 bucket. Object metadata
is a set of name-value pairs. For more information about object metadata, see Working with object
metadata (p. 61).

When you create an object, you specify the key name, which uniquely identifies the object in the bucket.
For example, on the Amazon S3 console, when you highlight a bucket, a list of objects in your bucket
appears. These names are the object keys. The name for a key is a sequence of Unicode characters whose
UTF-8 encoding is at most 1,024 bytes long.

The Amazon S3 data model is a flat structure: You create a bucket, and the bucket stores objects. There
is no hierarchy of subbuckets or subfolders. However, you can infer logical hierarchy using key name


prefixes and delimiters as the Amazon S3 console does. The Amazon S3 console supports a concept of
folders. For more information about how to edit metadata from the Amazon S3 console, see Editing
object metadata in the Amazon S3 console (p. 64).

Suppose that your bucket (admin-created) has four objects with the following object keys:

Development/Projects.xls

Finance/statement1.pdf

Private/taxdocument.pdf

s3-dg.pdf

The console uses the key name prefixes (Development/, Finance/, and Private/) and delimiter ('/')
to present a folder structure. The s3-dg.pdf key does not have a prefix, so its object appears directly at
the root level of the bucket. If you open the Development/ folder, you see the Projects.xls object
in it.

• Amazon S3 supports buckets and objects, and there is no hierarchy. However, by using prefixes and
delimiters in an object key name, the Amazon S3 console and the AWS SDKs can infer hierarchy and
introduce the concept of folders.
• The Amazon S3 console implements folder object creation by creating a zero-byte object with the
folder prefix and delimiter value as the key. These folder objects don't appear in the console. Otherwise
they behave like any other objects and can be viewed and manipulated through the REST API, AWS CLI,
and AWS SDKs.
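
To see this inferred hierarchy from the command line, you can list the bucket with a delimiter. The
following AWS CLI sketch uses the admin-created example bucket described above; with --delimiter "/",
the response groups Development/, Finance/, and Private/ under CommonPrefixes and returns s3-dg.pdf
under Contents:

$ aws s3api list-objects-v2 --bucket admin-created --delimiter "/"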

Object key naming guidelines


You can use any UTF-8 character in an object key name. However, using certain characters in key names
can cause problems with some applications and protocols. The following guidelines help you maximize
compliance with DNS, web-safe characters, XML parsers, and other APIs.

Safe characters
The following character sets are generally safe for use in key names.

Alphanumeric characters:
• 0-9
• a-z
• A-Z

Special characters:
• Forward slash (/)
• Exclamation point (!)
• Hyphen (-)
• Underscore (_)
• Period (.)
• Asterisk (*)
• Single quote (')
• Open parenthesis (()
• Close parenthesis ())

The following are examples of valid object key names:


• 4my-organization
• my.great_photos-2014/jan/myvacation.jpg
• videos/2014/birthday/video1.wmv

Note
If you use the Amazon S3 console to download an object whose key name ends with one or more
periods ("."), the trailing period(s) are removed from the file name of the downloaded object. To
download an object with the trailing period(s) retained, you must use the AWS Command Line Interface
(AWS CLI), AWS SDKs, or REST API.
In addition, be aware of the following prefix limitations:

• Objects with a prefix of "./" must be uploaded or downloaded with the AWS Command Line
Interface (AWS CLI), AWS SDKs, or REST API. You cannot use the Amazon S3 console.
• Objects with a prefix of "../" cannot be uploaded using the AWS Command Line Interface (AWS
CLI) or Amazon S3 console.

Characters that might require special handling


The following characters in a key name might require additional code handling and likely need to be URL
encoded or referenced as HEX. Some of these are non-printable characters that your browser might not
handle, which also requires special handling:

• Ampersand ("&")
• Dollar ("$")
• ASCII character ranges 00–1F hex (0–31 decimal) and 7F (127 decimal)
• 'At' symbol ("@")
• Equals ("=")
• Semicolon (";")
• Colon (":")
• Plus ("+")
• Space – Significant sequences of spaces might be lost in some uses (especially multiple spaces)
• Comma (",")
• Question mark ("?")

Characters to avoid
Avoid the following characters in a key name because of significant special handling for consistency
across all applications.

• Backslash ("\")
• Left curly brace ("{")
• Non-printable ASCII characters (128–255 decimal characters)
• Caret ("^")
• Right curly brace ("}")
• Percent character ("%")
• Grave accent / back tick ("`")
• Right square bracket ("]")


• Quotation marks
• 'Greater Than' symbol (">")
• Left square bracket ("[")
• Tilde ("~")
• 'Less Than' symbol ("<")
• 'Pound' character ("#")
• Vertical bar / pipe ("|")

XML related object key constraints


As specified by the XML standard on end-of-line handling, all XML text is normalized such that single
carriage returns (ASCII code 13) and carriage returns immediately followed by a line feed (ASCII code
10) are replaced by a single line feed character. To ensure the correct parsing of object keys in XML
requests, carriage returns and other special characters must be replaced with their equivalent XML entity
code when they are inserted within XML tags. The following is a list of such special characters and their
equivalent entity codes:

• ' as &apos;
• " as &quot;
• & as &amp;
• < as &lt;
• > as &gt;
• \r as &#13; or &#x0D;
• \n as &#10; or &#x0A;

Example

The following example illustrates the use of an XML entity code as a substitution for a carriage return.
This DeleteObjects request deletes an object with the key parameter: /some/prefix/objectwith
\rcarriagereturn (where the \r is the carriage return).

<Delete xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Object>
<Key>/some/prefix/objectwith&#13;carriagereturn</Key>
</Object>
</Delete>

Working with object metadata


You can set object metadata in Amazon S3 at the time you upload the object. Object metadata is a set
of name-value pairs. After you upload the object, you cannot modify object metadata. The only way to
modify object metadata is to make a copy of the object and set the metadata.
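
For example, the following AWS CLI sketch copies an object over itself while replacing its metadata.
The bucket name, key, and metadata value are placeholders; note that with --metadata-directive
REPLACE, the metadata that you pass replaces all of the existing user-defined metadata on the copy:

$ aws s3api copy-object --bucket bucketname --key keyname --copy-source bucketname/keyname --metadata-directive REPLACE --metadata alt-name=new-value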

When you create an object, you also specify the key name, which uniquely identifies the object in the
bucket. The object key (or key name) uniquely identifies the object in an Amazon S3 bucket. For more
information, see Creating object key names (p. 58).

There are two kinds of metadata in Amazon S3: system-defined metadata and user-defined metadata. The
sections below provide more information about system-defined and user-defined metadata. For more


information about editing metadata using the Amazon S3 console, see Editing object metadata in the
Amazon S3 console (p. 64).

System-defined object metadata


For each object stored in a bucket, Amazon S3 maintains a set of system metadata. Amazon S3 processes
this system metadata as needed. For example, Amazon S3 maintains object creation date and size
metadata and uses this information as part of object management.

There are two categories of system metadata:

1. Metadata such as object creation date is system controlled, where only Amazon S3 can modify the
value.
2. Other system metadata, such as the storage class configured for the object and whether the object
has server-side encryption enabled, are examples of system metadata whose values you control.
If your bucket is configured as a website, sometimes you might want to redirect a page request to
another page or an external URL. In this case, a webpage is an object in your bucket. Amazon S3 stores
the page redirect value as system metadata whose value you control.

When you create objects, you can configure values of these system metadata items or update the
values when you need to. For more information about storage classes, see Using Amazon S3 storage
classes (p. 567).

For more information about server-side encryption, see Protecting data using encryption (p. 219).

Note
The PUT request header is limited to 8 KB in size. Within the PUT request header, the system-
defined metadata is limited to 2 KB in size. The size of system-defined metadata is measured by
taking the sum of the number of bytes in the US-ASCII encoding of each key and value.

The following table provides a list of system-defined metadata and whether you can update it.

Name | Description | Can user modify the value?

Date | Current date and time. | No

Content-Length | Object size in bytes. | No

Content-Type | Object type. | Yes

Last-Modified | Object creation date or the last modified date, whichever is the latest. | No

Content-MD5 | The base64-encoded 128-bit MD5 digest of the object. | No

x-amz-server-side-encryption | Indicates whether server-side encryption is enabled for the object, and whether that encryption is from the AWS Key Management Service (AWS KMS) or from Amazon S3 managed encryption (SSE-S3). For more information, see Protecting data using server-side encryption (p. 219). | Yes

x-amz-version-id | Object version. When you enable versioning on a bucket, Amazon S3 assigns a version number to objects added to the bucket. For more information, see Using versioning in S3 buckets (p. 519). | No

x-amz-delete-marker | In a bucket that has versioning enabled, this Boolean marker indicates whether the object is a delete marker. | No

x-amz-storage-class | Storage class used for storing the object. For more information, see Using Amazon S3 storage classes (p. 567). | Yes

x-amz-website-redirect-location | Redirects requests for the associated object to another object in the same bucket or an external URL. For more information, see (Optional) Configuring a webpage redirect (p. 958). | Yes

x-amz-server-side-encryption-aws-kms-key-id | If x-amz-server-side-encryption is present and has the value of aws:kms, this indicates the ID of the AWS KMS symmetric KMS key that was used for the object. | Yes

x-amz-server-side-encryption-customer-algorithm | Indicates whether server-side encryption with customer-provided encryption keys (SSE-C) is enabled. For more information, see Protecting data using server-side encryption with customer-provided encryption keys (SSE-C) (p. 248). | Yes

User-defined object metadata


When uploading an object, you can also assign metadata to the object. You provide this optional
information as a name-value (key-value) pair when you send a PUT or POST request to create the object.
When you upload objects using the REST API, the optional user-defined metadata names must begin
with "x-amz-meta-" to distinguish them from other HTTP headers. When you retrieve the object using
the REST API, this prefix is returned. When you upload objects using the SOAP API, the prefix is not
required. When you retrieve the object using the SOAP API, the prefix is removed, regardless of which API
you used to upload the object.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.

When metadata is retrieved through the REST API, Amazon S3 combines headers that have the same
name (ignoring case) into a comma-delimited list. If some metadata contains unprintable characters,
it is not returned. Instead, the x-amz-missing-meta header is returned with a value of the number
of unprintable metadata entries. The HeadObject action retrieves metadata from an object without
returning the object itself. This operation is useful if you're only interested in an object's metadata.
To use HEAD, you must have READ access to the object. For more information, see HeadObject in the
Amazon Simple Storage Service API Reference.

User-defined metadata is a set of key-value pairs. Amazon S3 stores user-defined metadata keys in
lowercase.
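
As an illustration, the following AWS CLI sketch uploads an object with one user-defined metadata
entry and then reads it back. The bucket name, key, and values are placeholders; the CLI adds the
x-amz-meta- prefix for you, and head-object returns the entry under Metadata in its JSON output:

$ aws s3api put-object --bucket bucketname --key keyname --body file.txt --metadata alt-name=example-value
$ aws s3api head-object --bucket bucketname --key keyname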

Amazon S3 allows arbitrary Unicode characters in your metadata values.

To avoid issues around the presentation of these metadata values, you should conform to using US-ASCII
characters when using REST and UTF-8 when using SOAP or browser-based uploads via POST.

When using non US-ASCII characters in your metadata values, the provided Unicode string is examined
for non US-ASCII characters. If the string contains only US-ASCII characters, it is presented as is. If the


string contains non US-ASCII characters, it is first character-encoded using UTF-8 and then encoded into
US-ASCII.

The following is an example.

PUT /Key HTTP/1.1
Host: awsexamplebucket1.s3.amazonaws.com
x-amz-meta-nonascii: ÄMÄZÕÑ S3

HEAD /Key HTTP/1.1
Host: awsexamplebucket1.s3.amazonaws.com
x-amz-meta-nonascii: =?UTF-8?B?w4PChE3Dg8KEWsODwpXDg8KRIFMz?=

PUT /Key HTTP/1.1
Host: awsexamplebucket1.s3.amazonaws.com
x-amz-meta-ascii: AMAZONS3

HEAD /Key HTTP/1.1
Host: awsexamplebucket1.s3.amazonaws.com
x-amz-meta-ascii: AMAZONS3

Note
The PUT request header is limited to 8 KB in size. Within the PUT request header, the user-
defined metadata is limited to 2 KB in size. The size of user-defined metadata is measured by
taking the sum of the number of bytes in the UTF-8 encoding of each key and value.

For information about changing the metadata of your object after it’s been uploaded by creating a copy
of the object, modifying it, and replacing the old object, or creating a new version, see Editing object
metadata in the Amazon S3 console (p. 64).

Editing object metadata in the Amazon S3 console


You can use the Amazon S3 console to edit metadata of existing S3 objects. Some metadata is set by
Amazon S3 when you upload the object. For example, Content-Length is the key (name) and the value
is the size of the object in bytes.

You can also set some metadata when you upload the object and later edit it as your needs change. For
example, you might have a set of objects that you initially store in the STANDARD storage class. Over
time, you might no longer need this data to be highly available. So you change the storage class to
GLACIER by editing the value of the x-amz-storage-class key from STANDARD to GLACIER.
Note
Consider the following issues when you are editing object metadata in Amazon S3:

• This action creates a copy of the object with updated settings and the last-modified date. If S3
Versioning is enabled, a new version of the object is created, and the existing object becomes
an older version. If S3 Versioning is not enabled, a new copy of the object replaces the original
object. The IAM role that changes the property also becomes the owner of the new object or
(object version).
• Editing metadata updates values for existing key names.
• Objects that are encrypted with customer-provided encryption keys (SSE-C) cannot be copied
using the console. You must use the AWS CLI, AWS SDK, or the Amazon S3 REST API.

Warning
When editing metadata of folders, wait for the Edit metadata operation to finish before
adding new objects to the folder. Otherwise, new objects might also be edited.


The following topics describe how to edit metadata of an object using the Amazon S3 console.

Editing system-defined metadata


You can configure some, but not all, system metadata for an S3 object. For a list of system-defined
metadata and whether you can modify their values, see System-defined object metadata (p. 62).

To edit system-defined metadata of an object

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Navigate to your Amazon S3 bucket or folder, and select the check box to the left of the names of
the objects with metadata you want to edit.
3. On the Actions menu, choose Edit actions, and choose Edit metadata.
4. Review the objects listed, and choose Add metadata.
5. For metadata Type, select System-defined.
6. Specify a unique Key and the metadata Value.
7. To edit additional metadata, choose Add metadata. You can also choose Remove to remove a set of
type-key-values.
8. When you are done, choose Edit metadata and Amazon S3 edits the metadata of the specified
objects.

Editing user-defined metadata


You can edit user-defined metadata of an object by combining the metadata prefix, x-amz-meta-, and
a name you choose to create a custom key. For example, if you add the custom name alt-name, the
metadata key would be x-amz-meta-alt-name.

User-defined metadata can be as large as 2 KB total. To calculate the total size of user-defined metadata,
sum the number of bytes in the UTF-8 encoding for each key and value. Both keys and their values must
conform to US-ASCII standards. For more information, see User-defined object metadata (p. 63).

To edit user-defined metadata of an object

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that contains the objects that you want to add
metadata to.

You can also optionally navigate to a folder.


3. In the Objects list, select the check box next to the names of the objects that you want to add
metadata to.
4. On the Actions menu, choose Edit metadata.
5. Review the objects listed, and choose Add metadata.
6. For metadata Type, choose User-defined.
7. Enter a unique custom Key following x-amz-meta-. Also enter a metadata Value.
8. To add additional metadata, choose Add metadata. You can also choose Remove to remove a set of
type-key-values.
9. Choose Edit metadata.

Amazon S3 edits the metadata of the specified objects.


Uploading objects
When you upload a file to Amazon S3, it is stored as an S3 object. Objects consist of the file data and
metadata that describes the object. You can have an unlimited number of objects in a bucket. Before
you can upload files to an Amazon S3 bucket, you need write permissions for the bucket. For more
information about access permissions, see Identity and access management in Amazon S3 (p. 274).

You can upload any file type—images, backups, data, movies, etc.—into an S3 bucket. The maximum size
of a file that you can upload by using the Amazon S3 console is 160 GB. To upload a file larger than 160
GB, use the AWS CLI, AWS SDK, or Amazon S3 REST API.

If you upload an object with a key name that already exists in a versioning-enabled bucket, Amazon
S3 creates another version of the object instead of replacing the existing object. For more information
about versioning, see Using the S3 console (p. 524).

Depending on the size of the data you are uploading, Amazon S3 offers the following options:

• Upload an object in a single operation using the AWS SDKs, REST API, or AWS CLI—With a single
PUT operation, you can upload a single object up to 5 GB in size.
• Upload a single object using the Amazon S3 Console—With the Amazon S3 Console, you can upload
a single object up to 160 GB in size.
• Upload an object in parts using the AWS SDKs, REST API, or AWS CLI—Using the multipart upload
API, you can upload a single large object, up to 5 TB in size.

The multipart upload API is designed to improve the upload experience for larger objects. You can
upload an object in parts. These object parts can be uploaded independently, in any order, and in
parallel. You can use a multipart upload for objects from 5 MB to 5 TB in size. For more information,
see Uploading and copying objects using multipart upload (p. 74).
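
As a command-line sketch of these options, the following high-level AWS CLI copy uploads a file in a
single command. For files above the CLI's multipart threshold, the same command automatically
switches to multipart upload behind the scenes (the bucket name and file name are placeholders):

$ aws s3 cp ./backup.zip s3://bucketname/backup.zip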

When uploading an object, you can optionally request that Amazon S3 encrypt it before saving
it to disk, and decrypt it when you download it. For more information, see Protecting data using
encryption (p. 219).

Using the S3 console


This procedure explains how to upload objects and folders to an S3 bucket using the console.

When you upload an object, the object key name is the file name and any optional prefixes. In the
Amazon S3 console, you can create folders to organize your objects. In Amazon S3, folders are
represented as prefixes that appear in the object key name. If you upload an individual object to a folder
in the Amazon S3 console, the folder name is included in the object key name.

For example, if you upload an object named sample1.jpg to a folder named backup, the key name is
backup/sample1.jpg. However, the object is displayed in the console as sample1.jpg in the backup
folder. For more information about key names, see Working with object metadata (p. 61).
Note
If you rename an object or change any of the properties in the S3 console, for example Storage
Class, Encryption, Metadata, a new object is created to replace the old one. If S3 Versioning
is enabled, a new version of the object is created, and the existing object becomes an older
version. The role that changes the property also becomes the owner of the new object (or object
version).

When you upload a folder, Amazon S3 uploads all of the files and subfolders from the specified folder
to your bucket. It then assigns an object key name that is a combination of the uploaded file name
and the folder name. For example, if you upload a folder named /images that contains two files,
sample1.jpg and sample2.jpg, Amazon S3 uploads the files and then assigns the corresponding key


names, images/sample1.jpg and images/sample2.jpg. The key names include the folder name
as a prefix. The Amazon S3 console displays only the part of the key name that follows the last “/”. For
example, within an images folder the images/sample1.jpg and images/sample2.jpg objects are
displayed as sample1.jpg and sample2.jpg.

To upload folders and files to an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to upload your folders or files to.
3. Choose Upload.
4. In the Upload window, do one of the following:

• Drag and drop files and folders to the Upload window.


• Choose Add file or Add folder, choose files or folders to upload, and choose Open.
5. To enable versioning, under Destination, choose Enable Bucket Versioning.
6. To upload the listed files and folders without configuring additional upload options, at the bottom
of the page, choose Upload.

Amazon S3 uploads your objects and folders. When the upload completes, you can see a success
message on the Upload: status page.
7. To configure additional object properties before uploading, see To configure additional object
properties (p. 67).

To configure additional object properties

1. To configure additional object properties, choose Additional upload options.


2. Under Storage class, choose the storage class for the files you're uploading.

For more information about storage classes, see Using Amazon S3 storage classes (p. 567).
3. To update the encryption settings for your objects, under Server-side encryption settings, do the
following.

a. Choose Override default encryption bucket settings.


b. To encrypt the uploaded files using keys that are managed by Amazon S3, choose Amazon S3
key (SSE-S3).

For more information, see Protecting data using server-side encryption with Amazon S3-
managed encryption keys (SSE-S3) (p. 237).
c. To encrypt the uploaded files using the AWS Key Management Service (AWS KMS), choose AWS
Key Management Service key (SSE-KMS). Then choose an option for AWS KMS key.

• AWS managed key - Choose an AWS managed key.


• Choose from your KMS root keys - Choose a customer managed key from a list of KMS keys in
the same Region as your bucket.

For more information about creating a customer managed key, see Creating Keys in the AWS
Key Management Service Developer Guide. For more information about protecting data with
AWS KMS, see Protecting Data Using Server-Side Encryption with KMS keys Stored in AWS
Key Management Service (SSE-KMS) (p. 220).
• Enter KMS root key ARN - Specify a customer managed key by entering its AWS KMS key
Amazon Resource Name (ARN).

You can use the KMS root key ARN to give an external account the ability to use an object
that is protected by an AWS KMS key. To do this, choose Enter KMS root key ARN, and enter
the Amazon Resource Name (ARN) for the external account. Administrators of an external
account that have usage permissions to an object protected by your KMS key can further
restrict access by creating a resource-level IAM policy.

Note
To encrypt objects in a bucket, you can use only AWS KMS keys that are available in the
same AWS Region as the bucket.
4. To change access control list permissions, under Access control list (ACL), edit permissions.

For information about object access permissions, see Using the S3 console to set ACL permissions for
an object (p. 470). You can grant read access to your objects to the general public (everyone in the
world) for all of the files that you're uploading. We recommend that you do not change the default
setting for public read access. Granting public read access is applicable to a small subset of use cases
such as when buckets are used for websites. You can always make changes to object permissions
after you upload the object.
5. To add tags to all of the objects that you are uploading, choose Add tag. Type a tag name in the Key
field. Type a value for the tag.

Object tagging gives you a way to categorize storage. Each tag is a key-value pair. Tag keys and tag
values are case sensitive. You can have up to 10 tags per object. A tag key can be up to 128 Unicode
characters in length and tag values can be up to 255 Unicode characters in length. For more
information about object tags, see Categorizing your storage using tags (p. 685).
6. To add metadata, choose Add metadata.

a. Under Type, choose System defined or User defined.

For system-defined metadata, you can select common HTTP headers, such as Content-Type
and Content-Disposition. For a list of system-defined metadata and information about whether
you can add the value, see System-defined object metadata (p. 62). Any metadata starting
with prefix x-amz-meta- is treated as user-defined metadata. User-defined metadata is stored
with the object and is returned when you download the object. Both the keys and their values
must conform to US-ASCII standards. User-defined metadata can be as large as 2 KB. For
more information about system defined and user defined metadata, see Working with object
metadata (p. 61).
b. For Key, choose a key.
c. Type a value for the key.
7. To upload your objects, choose Upload.

Amazon S3 uploads your object. When the upload completes, you can see a success message on the
Upload: status page.
8. Choose Exit.

Using the AWS SDKs


You can use the AWS SDKs to upload objects to Amazon S3. The SDKs provide wrapper libraries for you to
upload data easily. For information, see the List of supported SDKs.

Here are a few examples that use select SDKs:

.NET

The following C# code example creates two objects with two PutObjectRequest requests:

• The first PutObjectRequest request saves a text string as sample object data. It also specifies
the bucket and object key names.

• The second PutObjectRequest request uploads a file by specifying the file name. This request
also specifies the ContentType header and optional object metadata (a title).

For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 1039).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class UploadObjectTest
{
private const string bucketName = "*** bucket name ***";
// For simplicity the example creates two objects from the same file.
// You specify key names for these objects.
private const string keyName1 = "*** key name for first object created ***";
private const string keyName2 = "*** key name for second object created ***";
private const string filePath = @"*** file path ***";
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.EUWest1;

private static IAmazonS3 client;

public static void Main()


{
client = new AmazonS3Client(bucketRegion);
WritingAnObjectAsync().Wait();
}

static async Task WritingAnObjectAsync()


{
try
{
// 1. Put object-specify only key name for the new object.
var putRequest1 = new PutObjectRequest
{
BucketName = bucketName,
Key = keyName1,
ContentBody = "sample text"
};

PutObjectResponse response1 = await client.PutObjectAsync(putRequest1);

// 2. Put the object-set ContentType and add metadata.


var putRequest2 = new PutObjectRequest
{
BucketName = bucketName,
Key = keyName2,
FilePath = filePath,
ContentType = "text/plain"
};

putRequest2.Metadata.Add("x-amz-meta-title", "someTitle");
PutObjectResponse response2 = await client.PutObjectAsync(putRequest2);
}
catch (AmazonS3Exception e)
{
Console.WriteLine(
"Error encountered ***. Message:'{0}' when writing an object"
, e.Message);
}

catch (Exception e)
{
Console.WriteLine(
"Unknown encountered on server. Message:'{0}' when writing an object"
, e.Message);
}
}
}
}

Java

The following example creates two objects. The first object has a text string as data, and the second
object is a file. The example creates the first object by specifying the bucket name, object key, and
text data directly in a call to AmazonS3Client.putObject(). The example creates the second
object by using a PutObjectRequest that specifies the bucket name, object key, and file path. The
PutObjectRequest also specifies the ContentType header and title metadata.

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 1038).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

import java.io.File;
import java.io.IOException;

public class UploadObject {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String stringObjKeyName = "*** String object key name ***";
String fileObjKeyName = "*** File object key name ***";
String fileName = "*** Path to file to upload ***";

try {
//This code expects that you have AWS credentials set up per:
// https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-credentials.html
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.build();

// Upload a text string as a new object.


s3Client.putObject(bucketName, stringObjKeyName, "Uploaded String Object");

// Upload a file as a new object with ContentType and title specified.


PutObjectRequest request = new PutObjectRequest(bucketName, fileObjKeyName,
new File(fileName));
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentType("text/plain");
metadata.addUserMetadata("title", "someTitle");
request.setMetadata(metadata);
s3Client.putObject(request);
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.


e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

JavaScript

The following example uploads an existing file to an Amazon S3 bucket in a specific Region.

// Import required AWS SDK clients and commands for Node.js.


import { PutObjectCommand } from "@aws-sdk/client-s3";
import { s3Client } from "./libs/s3Client.js"; // Helper function that creates Amazon S3 service client module.
import path from "path";
import fs from "fs";

const file = "OBJECT_PATH_AND_NAME"; // Path to and name of object. For example '../myFiles/index.js'.
const fileStream = fs.createReadStream(file);

// Set the parameters


export const uploadParams = {
Bucket: "BUCKET_NAME",
// Add the required 'Key' parameter using the 'path' module.
Key: path.basename(file),
// Add the required 'Body' parameter
Body: fileStream,
};

// Upload file to specified bucket.


export const run = async () => {
try {
const data = await s3Client.send(new PutObjectCommand(uploadParams));
console.log("Success", data);
return data; // For unit tests.
} catch (err) {
console.log("Error", err);
}
};
run();

PHP

This topic guides you through using classes from the AWS SDK for PHP to upload an object of
up to 5 GB in size. For larger files, you must use multipart upload API. For more information, see
Uploading and copying objects using multipart upload (p. 74).

This topic assumes that you are already following the instructions for Using the AWS SDK for PHP
and Running PHP Examples (p. 1039) and have the AWS SDK for PHP properly installed.

Example — Creating an object in an Amazon S3 bucket by uploading data

The following PHP example creates an object in a specified bucket by uploading data using the
putObject() method. For information about running the PHP examples in this guide, see Running
PHP Examples (p. 1040).

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

try {
// Upload data.
$result = $s3->putObject([
'Bucket' => $bucket,
'Key' => $keyname,
'Body' => 'Hello, world!',
'ACL' => 'public-read'
]);

// Print the URL to the object.


echo $result['ObjectURL'] . PHP_EOL;
} catch (S3Exception $e) {
echo $e->getMessage() . PHP_EOL;
}

Ruby

The AWS SDK for Ruby - Version 3 has two ways of uploading an object to Amazon S3. The first
uses a managed file uploader, which makes it easy to upload files of any size from disk. To use the
managed file uploader method:

1. Create an instance of the Aws::S3::Resource class.


2. Reference the target object by bucket name and key. Objects live in a bucket and have unique
keys that identify each object.
3. Call #upload_file on the object.

Example

require 'aws-sdk-s3'

# Uploads an object to a bucket in Amazon Simple Storage Service (Amazon S3).


#
# Prerequisites:
#
# - An S3 bucket.
# - An object to upload to the bucket.
#
# @param s3_resource [Aws::S3::Resource] An initialized S3 resource.
# @param bucket_name [String] The name of the bucket.
# @param object_key [String] The name of the object.
# @param file_path [String] The path and file name of the object to upload.
# @return [Boolean] true if the object was uploaded; otherwise, false.
# @example
# exit 1 unless object_uploaded?(
# Aws::S3::Resource.new(region: 'us-east-1'),
# 'doc-example-bucket',
# 'my-file.txt',
# './my-file.txt'
# )
def object_uploaded?(s3_resource, bucket_name, object_key, file_path)
object = s3_resource.bucket(bucket_name).object(object_key)
object.upload_file(file_path)
return true
rescue StandardError => e
puts "Error uploading object: #{e.message}"
return false
end

The second way that AWS SDK for Ruby - Version 3 can upload an object uses the #put method of
Aws::S3::Object. This is useful if the object is a string or an I/O object that is not a file on disk. To
use this method:

1. Create an instance of the Aws::S3::Resource class.


2. Reference the target object by bucket name and key.
3. Call #put, passing in the string or I/O object.

Example

require 'aws-sdk-s3'

# Uploads an object to a bucket in Amazon Simple Storage Service (Amazon S3).


#
# Prerequisites:
#
# - An S3 bucket.
# - An object to upload to the bucket.
#
# @param s3_resource [Aws::S3::Resource] An initialized S3 resource.
# @param bucket_name [String] The name of the bucket.
# @param object_key [String] The name of the object.
# @param file_path [String] The path and file name of the object to upload.
# @return [Boolean] true if the object was uploaded; otherwise, false.
# @example
# exit 1 unless object_uploaded?(
# Aws::S3::Resource.new(region: 'us-east-1'),
# 'doc-example-bucket',
# 'my-file.txt',
# './my-file.txt'
# )
def object_uploaded?(s3_resource, bucket_name, object_key, file_path)
object = s3_resource.bucket(bucket_name).object(object_key)
File.open(file_path, 'rb') do |file|
object.put(body: file)
end
return true
rescue StandardError => e
puts "Error uploading object: #{e.message}"
return false
end

Using the REST API


You can send REST requests to upload an object. You can send a PUT request to upload data in a single
operation. For more information, see PUT Object.
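
If you don't want to construct and sign the raw HTTP request yourself, one common approach is to presign
the request with an SDK and then send the PUT with any HTTP client. The following is a minimal sketch
using Boto3 and the third-party requests library; the bucket name, key, and file path are placeholders.

import boto3
import requests

s3 = boto3.client("s3")

# Generate a presigned URL for a PUT Object request.
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "DOC-EXAMPLE-BUCKET", "Key": "sample1.jpg"},
    ExpiresIn=3600,  # the URL is valid for one hour
)

# Send the REST PUT request with the object data as the body.
with open("sample1.jpg", "rb") as f:
    response = requests.put(url, data=f)

print(response.status_code)  # 200 indicates success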


Using the AWS CLI


You can send a PUT request to upload an object of up to 5 GB in a single operation. For more
information, see the PutObject example in the AWS CLI Command Reference.

Uploading and copying objects using multipart upload

Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion
of the object's data. You can upload these object parts independently and in any order. If transmission
of any part fails, you can retransmit that part without affecting other parts. After all parts of your object
are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size
reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single
operation.

Using multipart upload provides the following advantages:

• Improved throughput - You can upload parts in parallel to improve throughput.


• Quick recovery from any network issues - Smaller part size minimizes the impact of restarting a failed
upload due to a network error.
• Pause and resume object uploads - You can upload object parts over time. After you initiate a
multipart upload, there is no expiry; you must explicitly complete or stop the multipart upload.
• Begin an upload before you know the final object size - You can upload an object as you are creating it.

We recommend that you use multipart upload in the following ways:

• If you're uploading large objects over a stable high-bandwidth network, use multipart upload to
maximize the use of your available bandwidth by uploading object parts in parallel for multi-threaded
performance.
• If you're uploading over a spotty network, use multipart upload to increase resiliency to network errors
by avoiding upload restarts. When using multipart upload, you need to retry uploading only parts that
are interrupted during the upload. You don't need to restart uploading your object from the beginning.

Multipart upload process


Multipart upload is a three-step process: You initiate the upload, you upload the object parts, and after
you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete
multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then
access the object just as you would any other object in your bucket.

You can list all of your in-progress multipart uploads or get a list of the parts that you have uploaded for
a specific multipart upload. Each of these operations is explained in this section.

Multipart upload initiation

When you send a request to initiate a multipart upload, Amazon S3 returns a response with an upload
ID, which is a unique identifier for your multipart upload. You must include this upload ID whenever you
upload parts, list the parts, complete an upload, or stop an upload. If you want to provide any metadata
describing the object being uploaded, you must provide it in the request to initiate multipart upload.

Parts upload

When uploading a part, in addition to the upload ID, you must specify a part number. You can choose
any part number between 1 and 10,000. A part number uniquely identifies a part and its position in the
object you are uploading. The part number that you choose doesn’t need to be in a consecutive sequence
(for example, it can be 1, 5, and 14). If you upload a new part using the same part number as a previously
uploaded part, the previously uploaded part is overwritten.

Whenever you upload a part, Amazon S3 returns an ETag header in its response. For each part upload,
you must record the part number and the ETag value. You must include these values in the subsequent
request to complete the multipart upload.
Note
After you initiate a multipart upload and upload one or more parts, you must either complete
or stop the multipart upload in order to stop getting charged for storage of the uploaded parts.
Only after you either complete or stop a multipart upload will Amazon S3 free up the parts
storage and stop charging you for the parts storage.

Multipart upload completion

When you complete a multipart upload, Amazon S3 creates an object by concatenating the parts in
ascending order based on the part number. If any object metadata was provided in the initiate multipart
upload request, Amazon S3 associates that metadata with the object. After a successful complete
request, the parts no longer exist.

Your complete multipart upload request must include the upload ID and a list of both part numbers
and corresponding ETag values. The Amazon S3 response includes an ETag that uniquely identifies the
combined object data. This ETag is not necessarily an MD5 hash of the object data.
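
To make the three steps concrete, the following is a minimal sketch that performs the initiate, upload
parts, and complete steps with the low-level Boto3 client. The bucket name, key, and file name are
placeholders, and error handling (including aborting the upload on failure) is omitted for brevity.

import boto3

s3 = boto3.client("s3")
bucket, key = "DOC-EXAMPLE-BUCKET", "large-object.bin"
part_size = 8 * 1024 * 1024  # parts other than the last must be at least 5 MB

# 1. Initiate the multipart upload and save the upload ID.
upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

# 2. Upload the parts, recording each part number and ETag.
parts = []
with open("large-object.bin", "rb") as f:
    part_number = 1
    while True:
        data = f.read(part_size)
        if not data:
            break
        response = s3.upload_part(
            Bucket=bucket, Key=key, UploadId=upload_id,
            PartNumber=part_number, Body=data,
        )
        parts.append({"PartNumber": part_number, "ETag": response["ETag"]})
        part_number += 1

# 3. Complete the upload by sending the recorded part numbers and ETags.
s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=upload_id,
    MultipartUpload={"Parts": parts},
)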

You can optionally stop the multipart upload. After stopping a multipart upload, you cannot upload any
part using that upload ID again. All storage from any part of the canceled multipart upload is then freed.
If any part uploads were in-progress, they can still succeed or fail even after you stop. To free all storage
consumed by all parts, you must stop a multipart upload only after all part uploads have completed.

Multipart upload listings

You can list the parts of a specific multipart upload or all in-progress multipart uploads. The list parts
operation returns the parts information that you have uploaded for a specific multipart upload. For each
list parts request, Amazon S3 returns the parts information for the specified multipart upload, up to a
maximum of 1,000 parts. If there are more than 1,000 parts in the multipart upload, you must send a
series of list part requests to retrieve all the parts. Note that the returned list of parts doesn't include
parts that haven't completed uploading. Using the list multipart uploads operation, you can obtain a list
of multipart uploads in progress.

An in-progress multipart upload is an upload that you have initiated, but have not yet completed or
stopped. Each request returns at most 1,000 multipart uploads. If there are more than 1,000 multipart
uploads in progress, you need to send additional requests to retrieve the remaining multipart uploads.
Only use the returned listing for verification. You should not use the result of this listing when sending
a complete multipart upload request. Instead, maintain your own list of the part numbers you specified
when uploading parts and the corresponding ETag values that Amazon S3 returns.
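
As a sketch of the two listing operations, the following Boto3 code lists the in-progress multipart
uploads in a bucket and then the parts of the first upload it finds. The bucket name is a placeholder,
and pagination beyond the first 1,000 results is not shown.

import boto3

s3 = boto3.client("s3")
bucket = "DOC-EXAMPLE-BUCKET"

# List in-progress multipart uploads in the bucket.
uploads = s3.list_multipart_uploads(Bucket=bucket).get("Uploads", [])
for upload in uploads:
    print(upload["Key"], upload["UploadId"], upload["Initiated"])

# List the parts uploaded so far for one specific multipart upload.
if uploads:
    first = uploads[0]
    for part in s3.list_parts(
        Bucket=bucket, Key=first["Key"], UploadId=first["UploadId"]
    ).get("Parts", []):
        print(part["PartNumber"], part["ETag"], part["Size"])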

Concurrent multipart upload operations


In a distributed development environment, it is possible for your application to initiate several updates
on the same object at the same time. Your application might initiate several multipart uploads using
the same object key. For each of these uploads, your application can then upload parts and send a
complete upload request to Amazon S3 to create the object. When a bucket has versioning enabled,
completing a multipart upload always creates a new version. For buckets that don't have versioning
enabled, it is possible that some other request received between the time when a multipart upload is
initiated and when it is completed might take precedence.

Note
It is possible for some other request received between the time you initiated a multipart upload
and completed it to take precedence. For example, if another operation deletes a key after you
initiate a multipart upload with that key, but before you complete it, the complete multipart
upload response might indicate a successful object creation without you ever seeing the object.

Multipart upload and pricing


After you initiate a multipart upload, Amazon S3 retains all the parts until you either complete or
stop the upload. Throughout its lifetime, you are billed for all storage, bandwidth, and requests for
this multipart upload and its associated parts. If you stop the multipart upload, Amazon S3 deletes
upload artifacts and any parts that you have uploaded, and you are no longer billed for them. For more
information about pricing, see Amazon S3 pricing.
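
Because incomplete uploads continue to accrue storage charges, you might periodically stop uploads that
were started long ago and never completed. The following is a minimal sketch that aborts in-progress
multipart uploads older than seven days; the bucket name is a placeholder, and the lifecycle rule
described later in this section is the preferred, automated way to accomplish the same thing.

from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")
bucket = "DOC-EXAMPLE-BUCKET"
cutoff = datetime.now(timezone.utc) - timedelta(days=7)

for upload in s3.list_multipart_uploads(Bucket=bucket).get("Uploads", []):
    if upload["Initiated"] < cutoff:
        # Aborting frees the storage consumed by the uploaded parts.
        s3.abort_multipart_upload(
            Bucket=bucket, Key=upload["Key"], UploadId=upload["UploadId"]
        )
        print("Aborted", upload["Key"], upload["UploadId"])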

API support for multipart upload


The AWS SDK libraries provide a high-level abstraction that makes uploading multipart objects easy. However, if
your application requires, you can use the REST API directly. The following sections in the Amazon Simple
Storage Service API Reference describe the REST API for multipart upload.

• Create Multipart Upload


• Upload Part
• Upload Part (Copy)
• Complete Multipart Upload
• Abort Multipart Upload
• List Parts
• List Multipart Uploads

AWS Command Line Interface support for multipart upload

The following topics in the AWS Command Line Interface describe the operations for multipart upload.

• Initiate Multipart Upload


• Upload Part
• Upload Part (Copy)
• Complete Multipart Upload
• Abort Multipart Upload
• List Parts
• List Multipart Uploads

AWS SDK support for multipart upload


You can use the AWS SDKs to upload an object in parts. For a list of the AWS SDKs that support each API
action, see the following:

• Create Multipart Upload


• Upload Part
• Upload Part (Copy)

API Version 2006-03-01


76
Amazon Simple Storage Service User Guide
Multipart upload API and permissions

• Complete Multipart Upload


• Abort Multipart Upload
• List Parts
• List Multipart Uploads

Multipart upload API and permissions


You must have the necessary permissions to use the multipart upload operations. You can use access
control lists (ACLs), the bucket policy, or the user policy to grant individuals permissions to perform these
operations. The following list describes the required permissions for various multipart upload operations
when using ACLs, a bucket policy, or a user policy.

Create Multipart Upload: You must be allowed to perform the s3:PutObject action on an object to create
a multipart upload. The bucket owner can allow other principals to perform the s3:PutObject action.

Initiate Multipart Upload: You must be allowed to perform the s3:PutObject action on an object to
initiate a multipart upload. The bucket owner can allow other principals to perform the s3:PutObject
action.

Initiator: Container element that identifies who initiated the multipart upload. If the initiator is an AWS
account, this element provides the same information as the Owner element. If the initiator is an IAM user,
this element provides the user ARN and display name.

Upload Part: You must be allowed to perform the s3:PutObject action on an object to upload a part. The
bucket owner must allow the initiator to perform the s3:PutObject action on an object in order for the
initiator to upload a part for that object.

Upload Part (Copy): You must be allowed to perform the s3:PutObject action on an object to upload a
part. Because you are uploading a part from an existing object, you must also be allowed to perform the
s3:GetObject action on the source object. For the initiator to upload a part for an object, the owner of
the bucket must allow the initiator to perform the s3:PutObject action on the object.

Complete Multipart Upload: You must be allowed to perform the s3:PutObject action on an object to
complete a multipart upload. The bucket owner must allow the initiator to perform the s3:PutObject
action on an object in order for the initiator to complete a multipart upload for that object.

Stop Multipart Upload: You must be allowed to perform the s3:AbortMultipartUpload action to stop a
multipart upload. By default, the bucket owner and the initiator of the multipart upload are allowed to
perform this action. If the initiator is an IAM user, that user's AWS account is also allowed to stop that
multipart upload. In addition to these defaults, the bucket owner can allow other principals to perform
the s3:AbortMultipartUpload action on an object. The bucket owner can deny any principal the ability to
perform the s3:AbortMultipartUpload action.

List Parts: You must be allowed to perform the s3:ListMultipartUploadParts action to list parts in a
multipart upload. By default, the bucket owner has permission to list parts for any multipart upload to
the bucket. The initiator of the multipart upload has the permission to list parts of the specific
multipart upload. If the multipart upload initiator is an IAM user, the AWS account controlling that IAM
user also has permission to list parts of that upload. In addition to these defaults, the bucket owner
can allow other principals to perform the s3:ListMultipartUploadParts action on an object. The bucket
owner can also deny any principal the ability to perform the s3:ListMultipartUploadParts action.

List Multipart Uploads: You must be allowed to perform the s3:ListBucketMultipartUploads action on a
bucket to list multipart uploads in progress to that bucket. In addition to the default, the bucket owner
can allow other principals to perform the s3:ListBucketMultipartUploads action on the bucket.

AWS KMS Encrypt and Decrypt related permissions: To perform a multipart upload with encryption using an
AWS Key Management Service (AWS KMS) KMS key, the requester must have permission to the kms:Decrypt and
kms:GenerateDataKey* actions on the key. These permissions are required because Amazon S3 must decrypt
and read data from the encrypted file parts before it completes the multipart upload. For more
information, see Uploading a large file to Amazon S3 with encryption using an AWS KMS key in the AWS
Knowledge Center. If your IAM user or role is in the same AWS account as the KMS key, then you must have
these permissions on the key policy. If your IAM user or role belongs to a different account than the
KMS key, then you must have the permissions on both the key policy and your IAM user or role.

For information on the relationship between ACL permissions and permissions in access policies, see
Mapping of ACL permissions and access policy permissions (p. 463). For information on IAM users, go to
Working with Users and Groups.
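
As one way to grant these permissions, the following sketch applies a bucket policy that allows another
AWS account to upload parts (s3:PutObject) and stop uploads (s3:AbortMultipartUpload) in the bucket. The
bucket name and account ID are placeholders; adjust the policy to your own requirements before using
anything like it.

import json

import boto3

s3 = boto3.client("s3")
bucket = "DOC-EXAMPLE-BUCKET"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowMultipartUploads",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # placeholder account ID
            "Action": ["s3:PutObject", "s3:AbortMultipartUpload"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))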

Topics
• Configuring a bucket lifecycle policy to abort incomplete multipart uploads (p. 79)
• Uploading an object using multipart upload (p. 80)
• Uploading a directory using the high-level .NET TransferUtility class (p. 93)
• Listing multipart uploads (p. 95)
• Tracking a multipart upload (p. 97)
• Aborting a multipart upload (p. 99)
• Copying an object using multipart upload (p. 103)
• Amazon S3 multipart upload limits (p. 107)


Configuring a bucket lifecycle policy to abort incomplete multipart uploads

As a best practice, we recommend you configure a lifecycle rule using the
AbortIncompleteMultipartUpload action to minimize your storage costs. For more information
about aborting a multipart upload, see Aborting a multipart upload (p. 99).

Amazon S3 supports a bucket lifecycle rule that you can use to direct Amazon S3 to stop multipart
uploads that don't complete within a specified number of days after being initiated. When a multipart
upload is not completed within the timeframe, it becomes eligible for an abort operation and Amazon S3
stops the multipart upload (and deletes the parts associated with the multipart upload).

The following is an example lifecycle configuration that specifies a rule with the
AbortIncompleteMultipartUpload action.

<LifecycleConfiguration>
<Rule>
<ID>sample-rule</ID>
<Prefix></Prefix>
<Status>Enabled</Status>
<AbortIncompleteMultipartUpload>
<DaysAfterInitiation>7</DaysAfterInitiation>
</AbortIncompleteMultipartUpload>
</Rule>
</LifecycleConfiguration>

In the example, the rule does not specify a value for the Prefix element (object key name prefix).
Therefore, it applies to all objects in the bucket for which you initiated multipart uploads. Any multipart
uploads that were initiated and did not complete within seven days become eligible for an abort
operation. The abort action has no effect on completed multipart uploads.

For more information about the bucket lifecycle configuration, see Managing your storage
lifecycle (p. 578).
Note
If the multipart upload is completed within the number of days specified in the rule, the
AbortIncompleteMultipartUpload lifecycle action does not apply (that is, Amazon S3 does
not take any action). Also, this action does not apply to objects. No objects are deleted by this
lifecycle action.
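
If you manage your buckets with an AWS SDK rather than the console or the AWS CLI, the same rule can be
applied programmatically. The following is a minimal Boto3 sketch; the bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="DOC-EXAMPLE-BUCKET",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "sample-rule",
                "Filter": {"Prefix": ""},  # empty prefix: the rule applies to all objects
                "Status": "Enabled",
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)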

The following put-bucket-lifecycle-configuration CLI command adds the lifecycle configuration for the
specified bucket.

$ aws s3api put-bucket-lifecycle-configuration \
--bucket bucketname \
--lifecycle-configuration filename-containing-lifecycle-configuration

To test the CLI command, do the following:

1. Set up the AWS CLI. For instructions, see Developing with Amazon S3 using the AWS CLI (p. 1029).
2. Save the following example lifecycle configuration in a file (lifecycle.json). The example
configuration specifies an empty prefix and therefore applies to all objects in the bucket. You can
specify a prefix to restrict the policy to a subset of objects.

{
"Rules": [
{
"ID": "Test Rule",
"Status": "Enabled",
"Filter": {
"Prefix": ""
},
"AbortIncompleteMultipartUpload": {
"DaysAfterInitiation": 7
}
}
]
}

3. Run the following CLI command to set lifecycle configuration on your bucket.

aws s3api put-bucket-lifecycle-configuration \
--bucket bucketname \
--lifecycle-configuration file://lifecycle.json

4. To verify, retrieve the lifecycle configuration using the get-bucket-lifecycle CLI command.

aws s3api get-bucket-lifecycle \
--bucket bucketname

5. To delete the lifecycle configuration, use the delete-bucket-lifecycle CLI command.

aws s3api delete-bucket-lifecycle \
--bucket bucketname

Uploading an object using multipart upload


You can use multipart upload to programmatically upload a single object to Amazon S3.

For more information, see the following sections.

Using the AWS SDKs (high-level API)


The AWS SDK exposes a high-level API, called TransferManager, that simplifies multipart uploads. For
more information, see Uploading and copying objects using multipart upload (p. 74).

You can upload data from a file or a stream. You can also set advanced options, such as the part size
you want to use for the multipart upload, or the number of concurrent threads you want to use when
uploading the parts. You can also set optional object properties, the storage class, or the access control
list (ACL). You use the PutObjectRequest and the TransferManagerConfiguration classes to set
these advanced options.

When possible, TransferManager tries to use multiple threads to upload multiple parts of a single
upload at once. When dealing with large content sizes and high bandwidth, this can increase throughput
significantly.

In addition to file-upload functionality, the TransferManager class enables you to stop an in-progress
multipart upload. An upload is considered to be in progress after you initiate it and until you complete or
stop it. The TransferManager stops all in-progress multipart uploads on a specified bucket that were
initiated before a specified date and time.

If you need to pause and resume multipart uploads, vary part sizes during the upload, or do not know
the size of the data in advance, use the low-level API. For more information about multipart
uploads, including additional functionality offered by the low-level API methods, see Using the AWS
SDKs (low-level API) (p. 87).


Java

The following example uploads an object using the high-level multipart upload Java API (the
TransferManager class). For instructions on creating and testing a working sample, see Testing the
Amazon S3 Java Code Examples (p. 1038).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

import java.io.File;

public class HighLevelMultipartUpload {

public static void main(String[] args) throws Exception {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String keyName = "*** Object key ***";
String filePath = "*** Path for file to upload ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();
TransferManager tm = TransferManagerBuilder.standard()
.withS3Client(s3Client)
.build();

// TransferManager processes all transfers asynchronously,


// so this call returns immediately.
Upload upload = tm.upload(bucketName, keyName, new File(filePath));
System.out.println("Object upload started");

// Optionally, wait for the upload to finish before continuing.


upload.waitForCompletion();
System.out.println("Object upload complete");
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

.NET

To upload a file to an S3 bucket, use the TransferUtility class. When uploading data from a file,
you must provide the object's key name. If you don't, the API uses the file name for the key name.
When uploading data from a stream, you must provide the object's key name.

API Version 2006-03-01


81
Amazon Simple Storage Service User Guide
Uploading an object using multipart upload

To set advanced upload options—such as the part size, the number of threads when
uploading the parts concurrently, metadata, the storage class, or ACL—use the
TransferUtilityUploadRequest class.

The following C# example uploads a file to an Amazon S3 bucket in multiple parts. It shows how to
use various TransferUtility.Upload overloads to upload a file. Each successive call to upload
replaces the previous upload. For information about the example's compatibility with a specific
version of the AWS SDK for .NET and instructions for creating and testing a working sample, see
Running the Amazon S3 .NET Code Examples (p. 1039).

using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class UploadFileMPUHighLevelAPITest
{
private const string bucketName = "*** provide bucket name ***";
private const string keyName = "*** provide a name for the uploaded object ***";
private const string filePath = "*** provide the full path name of the file to upload ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
UploadFileAsync().Wait();
}

private static async Task UploadFileAsync()


{
try
{
var fileTransferUtility =
new TransferUtility(s3Client);

// Option 1. Upload a file. The file name is used as the object key name.
await fileTransferUtility.UploadAsync(filePath, bucketName);
Console.WriteLine("Upload 1 completed");

// Option 2. Specify object key name explicitly.


await fileTransferUtility.UploadAsync(filePath, bucketName, keyName);
Console.WriteLine("Upload 2 completed");

// Option 3. Upload data from a type of System.IO.Stream.


using (var fileToUpload =
new FileStream(filePath, FileMode.Open, FileAccess.Read))
{
await fileTransferUtility.UploadAsync(fileToUpload,
bucketName, keyName);
}
Console.WriteLine("Upload 3 completed");

// Option 4. Specify advanced settings.


var fileTransferUtilityRequest = new TransferUtilityUploadRequest
{


BucketName = bucketName,
FilePath = filePath,
StorageClass = S3StorageClass.StandardInfrequentAccess,
PartSize = 6291456, // 6 MB.
Key = keyName,
CannedACL = S3CannedACL.PublicRead
};
fileTransferUtilityRequest.Metadata.Add("param1", "Value1");
fileTransferUtilityRequest.Metadata.Add("param2", "Value2");

await fileTransferUtility.UploadAsync(fileTransferUtilityRequest);
Console.WriteLine("Upload 4 completed");
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
}

}
}
}

PHP

This topic explains how to use the high-level MultipartUploader class from the AWS SDK for PHP
for multipart file uploads. It assumes that you are already following
the instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 1039) and have the
AWS SDK for PHP properly installed.

The following PHP example uploads a file to an Amazon S3 bucket. The example demonstrates how
to set parameters for the MultipartUploader object.

For information about running the PHP examples in this guide, see Running PHP
Examples (p. 1040).

require 'vendor/autoload.php';

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// Prepare the upload parameters.


$uploader = new MultipartUploader($s3, '/path/to/large/file.zip', [
'bucket' => $bucket,
'key' => $keyname
]);

// Perform the upload.


try {
$result = $uploader->upload();


echo "Upload complete: {$result['ObjectURL']}" . PHP_EOL;


} catch (MultipartUploadException $e) {
echo $e->getMessage() . PHP_EOL;
}

Python

The following example uploads an object using the high-level multipart upload Python API (the
TransferManager class).

"""
Use Boto 3 managed file transfers to manage multipart uploads to and downloads
from an Amazon S3 bucket.

When the file to transfer is larger than the specified threshold, the transfer
manager automatically uses multipart uploads or downloads. This demonstration
shows how to use several of the available transfer manager settings and reports
thread usage and time to transfer.
"""

import sys
import threading

import boto3
from boto3.s3.transfer import TransferConfig

MB = 1024 * 1024
s3 = boto3.resource('s3')

class TransferCallback:
"""
Handle callbacks from the transfer manager.

The transfer manager periodically calls the __call__ method throughout


the upload and download process so that it can take action, such as
displaying progress to the user and collecting data about the transfer.
"""

def __init__(self, target_size):


self._target_size = target_size
self._total_transferred = 0
self._lock = threading.Lock()
self.thread_info = {}

def __call__(self, bytes_transferred):


"""
The callback method that is called by the transfer manager.

Display progress during file transfer and collect per-thread transfer


data. This method can be called by multiple threads, so shared instance
data is protected by a thread lock.
"""
thread = threading.current_thread()
with self._lock:
self._total_transferred += bytes_transferred
if thread.ident not in self.thread_info.keys():
self.thread_info[thread.ident] = bytes_transferred
else:
self.thread_info[thread.ident] += bytes_transferred

target = self._target_size * MB
sys.stdout.write(
f"\r{self._total_transferred} of {target} transferred "


f"({(self._total_transferred / target) * 100:.2f}%).")


sys.stdout.flush()

def upload_with_default_configuration(local_file_path, bucket_name,


object_key, file_size_mb):
"""
Upload a file from a local folder to an Amazon S3 bucket, using the default
configuration.
"""
transfer_callback = TransferCallback(file_size_mb)
s3.Bucket(bucket_name).upload_file(
local_file_path,
object_key,
Callback=transfer_callback)
return transfer_callback.thread_info

def upload_with_chunksize_and_meta(local_file_path, bucket_name, object_key,


file_size_mb, metadata=None):
"""
Upload a file from a local folder to an Amazon S3 bucket, setting a
multipart chunk size and adding metadata to the Amazon S3 object.

The multipart chunk size controls the size of the chunks of data that are
sent in the request. A smaller chunk size typically results in the transfer
manager using more threads for the upload.

The metadata is a set of key-value pairs that are stored with the object
in Amazon S3.
"""
transfer_callback = TransferCallback(file_size_mb)

config = TransferConfig(multipart_chunksize=1 * MB)


extra_args = {'Metadata': metadata} if metadata else None
s3.Bucket(bucket_name).upload_file(
local_file_path,
object_key,
Config=config,
ExtraArgs=extra_args,
Callback=transfer_callback)
return transfer_callback.thread_info

def upload_with_high_threshold(local_file_path, bucket_name, object_key,


file_size_mb):
"""
Upload a file from a local folder to an Amazon S3 bucket, setting a
multipart threshold larger than the size of the file.

Setting a multipart threshold larger than the size of the file results
in the transfer manager sending the file as a standard upload instead of
a multipart upload.
"""
transfer_callback = TransferCallback(file_size_mb)
config = TransferConfig(multipart_threshold=file_size_mb * 2 * MB)
s3.Bucket(bucket_name).upload_file(
local_file_path,
object_key,
Config=config,
Callback=transfer_callback)
return transfer_callback.thread_info

def upload_with_sse(local_file_path, bucket_name, object_key,


file_size_mb, sse_key=None):


"""
Upload a file from a local folder to an Amazon S3 bucket, adding server-side
encryption with customer-provided encryption keys to the object.

When this kind of encryption is specified, Amazon S3 encrypts the object


at rest and allows downloads only when the expected encryption key is
provided in the download request.
"""
transfer_callback = TransferCallback(file_size_mb)
if sse_key:
extra_args = {
'SSECustomerAlgorithm': 'AES256',
'SSECustomerKey': sse_key}
else:
extra_args = None
s3.Bucket(bucket_name).upload_file(
local_file_path,
object_key,
ExtraArgs=extra_args,
Callback=transfer_callback)
return transfer_callback.thread_info

def download_with_default_configuration(bucket_name, object_key,


download_file_path, file_size_mb):
"""
Download a file from an Amazon S3 bucket to a local folder, using the
default configuration.
"""
transfer_callback = TransferCallback(file_size_mb)
s3.Bucket(bucket_name).Object(object_key).download_file(
download_file_path,
Callback=transfer_callback)
return transfer_callback.thread_info

def download_with_single_thread(bucket_name, object_key,


download_file_path, file_size_mb):
"""
Download a file from an Amazon S3 bucket to a local folder, using a
single thread.
"""
transfer_callback = TransferCallback(file_size_mb)
config = TransferConfig(use_threads=False)
s3.Bucket(bucket_name).Object(object_key).download_file(
download_file_path,
Config=config,
Callback=transfer_callback)
return transfer_callback.thread_info

def download_with_high_threshold(bucket_name, object_key,


download_file_path, file_size_mb):
"""
Download a file from an Amazon S3 bucket to a local folder, setting a
multipart threshold larger than the size of the file.

Setting a multipart threshold larger than the size of the file results
in the transfer manager sending the file as a standard download instead
of a multipart download.
"""
transfer_callback = TransferCallback(file_size_mb)
config = TransferConfig(multipart_threshold=file_size_mb * 2 * MB)
s3.Bucket(bucket_name).Object(object_key).download_file(
download_file_path,
Config=config,


Callback=transfer_callback)
return transfer_callback.thread_info

def download_with_sse(bucket_name, object_key, download_file_path,


file_size_mb, sse_key):
"""
Download a file from an Amazon S3 bucket to a local folder, adding a
customer-provided encryption key to the request.

When this kind of encryption is specified, Amazon S3 encrypts the object


at rest and allows downloads only when the expected encryption key is
provided in the download request.
"""
transfer_callback = TransferCallback(file_size_mb)

if sse_key:
extra_args = {
'SSECustomerAlgorithm': 'AES256',
'SSECustomerKey': sse_key}
else:
extra_args = None
s3.Bucket(bucket_name).Object(object_key).download_file(
download_file_path,
ExtraArgs=extra_args,
Callback=transfer_callback)
return transfer_callback.thread_info

Using the AWS SDKs (low-level API)


The AWS SDK exposes a low-level API that closely resembles the Amazon S3 REST API for multipart
uploads (see Uploading and copying objects using multipart upload (p. 74). Use the low-level API
when you need to pause and resume multipart uploads, vary part sizes during the upload, or do not
know the size of the upload data in advance. When you don't have these requirements, use the high-level
API (see Using the AWS SDKs (high-level API) (p. 80)).

Java

The following example shows how to use the low-level Java classes to upload a file. It performs the
following steps:

• Initiates a multipart upload using the AmazonS3Client.initiateMultipartUpload()


method, and passes in an InitiateMultipartUploadRequest object.
• Saves the upload ID that the AmazonS3Client.initiateMultipartUpload() method returns.
You provide this upload ID for each subsequent multipart upload operation.
• Uploads the parts of the object. For each part, you call the AmazonS3Client.uploadPart()
method. You provide part upload information using an UploadPartRequest object.
• For each part, saves the ETag from the response of the AmazonS3Client.uploadPart()
method in a list. You use the ETag values to complete the multipart upload.
• Calls the AmazonS3Client.completeMultipartUpload() method to complete the multipart
upload.

Example

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 1038).

API Version 2006-03-01


87
Amazon Simple Storage Service User Guide
Uploading an object using multipart upload

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class LowLevelMultipartUpload {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String keyName = "*** Key name ***";
String filePath = "*** Path to file to upload ***";

File file = new File(filePath);


long contentLength = file.length();
long partSize = 5 * 1024 * 1024; // Set part size to 5 MB.

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();

// Create a list of ETag objects. You retrieve ETags for each object part uploaded,
// then, after each individual part has been uploaded, pass the list of ETags to
// the request to complete the upload.
List<PartETag> partETags = new ArrayList<PartETag>();

// Initiate the multipart upload.


InitiateMultipartUploadRequest initRequest = new
InitiateMultipartUploadRequest(bucketName, keyName);
InitiateMultipartUploadResult initResponse =
s3Client.initiateMultipartUpload(initRequest);

// Upload the file parts.


long filePosition = 0;
for (int i = 1; filePosition < contentLength; i++) {
// Because the last part could be less than 5 MB, adjust the part size as needed.
partSize = Math.min(partSize, (contentLength - filePosition));

// Create the request to upload a part.


UploadPartRequest uploadRequest = new UploadPartRequest()
.withBucketName(bucketName)
.withKey(keyName)
.withUploadId(initResponse.getUploadId())
.withPartNumber(i)
.withFileOffset(filePosition)
.withFile(file)
.withPartSize(partSize);

// Upload the part and add the response's ETag to our list.
UploadPartResult uploadResult = s3Client.uploadPart(uploadRequest);
partETags.add(uploadResult.getPartETag());

filePosition += partSize;
}

// Complete the multipart upload.


CompleteMultipartUploadRequest compRequest = new
CompleteMultipartUploadRequest(bucketName, keyName,
initResponse.getUploadId(), partETags);
s3Client.completeMultipartUpload(compRequest);
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

.NET

The following C# example shows how to use the low-level AWS SDK for .NET multipart upload API
to upload a file to an S3 bucket. For information about Amazon S3 multipart uploads, see Uploading
and copying objects using multipart upload (p. 74).
Note
When you use the AWS SDK for .NET API to upload large objects, a timeout might occur
while data is being written to the request stream. You can set an explicit timeout using the
UploadPartRequest.

The following C# example uploads a file to an S3 bucket using the low-level multipart upload API.
For information about the example's compatibility with a specific version of the AWS SDK for .NET
and instructions for creating and testing a working sample, see Running the Amazon S3 .NET Code
Examples (p. 1039).

using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class UploadFileMPULowLevelAPITest
{
private const string bucketName = "*** provide bucket name ***";
private const string keyName = "*** provide a name for the uploaded object ***";
private const string filePath = "*** provide the full path name of the file to upload ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
Console.WriteLine("Uploading an object");
UploadObjectAsync().Wait();
}

API Version 2006-03-01


89
Amazon Simple Storage Service User Guide
Uploading an object using multipart upload

private static async Task UploadObjectAsync()


{
// Create list to store upload part responses.
List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();

// Setup information required to initiate the multipart upload.


InitiateMultipartUploadRequest initiateRequest = new
InitiateMultipartUploadRequest
{
BucketName = bucketName,
Key = keyName
};

// Initiate the upload.


InitiateMultipartUploadResponse initResponse =
await s3Client.InitiateMultipartUploadAsync(initiateRequest);

// Upload parts.
long contentLength = new FileInfo(filePath).Length;
long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB

try
{
Console.WriteLine("Uploading parts");

long filePosition = 0;
for (int i = 1; filePosition < contentLength; i++)
{
UploadPartRequest uploadRequest = new UploadPartRequest
{
BucketName = bucketName,
Key = keyName,
UploadId = initResponse.UploadId,
PartNumber = i,
PartSize = partSize,
FilePosition = filePosition,
FilePath = filePath
};

// Track upload progress.


uploadRequest.StreamTransferProgress +=
new
EventHandler<StreamTransferProgressArgs>(UploadPartProgressEventCallback);

// Upload a part and add the response to our list.


uploadResponses.Add(await s3Client.UploadPartAsync(uploadRequest));

filePosition += partSize;
}

// Setup to complete the upload.


CompleteMultipartUploadRequest completeRequest = new
CompleteMultipartUploadRequest
{
BucketName = bucketName,
Key = keyName,
UploadId = initResponse.UploadId
};
completeRequest.AddPartETags(uploadResponses);

// Complete the upload.


CompleteMultipartUploadResponse completeUploadResponse =
await s3Client.CompleteMultipartUploadAsync(completeRequest);
}
catch (Exception exception)


{
Console.WriteLine("An AmazonS3Exception was thrown: {0}",
exception.Message);

// Abort the upload.


AbortMultipartUploadRequest abortMPURequest = new
AbortMultipartUploadRequest
{
BucketName = bucketName,
Key = keyName,
UploadId = initResponse.UploadId
};
await s3Client.AbortMultipartUploadAsync(abortMPURequest);
}
}
public static void UploadPartProgressEventCallback(object sender,
StreamTransferProgressArgs e)
{
// Process event.
Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
}
}
}

PHP

This topic shows how to use the low-level uploadPart method from version 3 of the AWS SDK for
PHP to upload a file in multiple parts. It assumes that you are already following the instructions for
Using the AWS SDK for PHP and Running PHP Examples (p. 1039) and have the AWS SDK for PHP
properly installed.

The following PHP example uploads a file to an Amazon S3 bucket using the low-level PHP API
multipart upload. For information about running the PHP examples in this guide, see Running PHP
Examples (p. 1040).

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';
$filename = '*** Path to and Name of the File to Upload ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

$result = $s3->createMultipartUpload([
'Bucket' => $bucket,
'Key' => $keyname,
'StorageClass' => 'REDUCED_REDUNDANCY',
'Metadata' => [
'param1' => 'value 1',
'param2' => 'value 2',
'param3' => 'value 3'
]
]);
$uploadId = $result['UploadId'];

// Upload the file in parts.


try {
$file = fopen($filename, 'r');
$partNumber = 1;


while (!feof($file)) {
$result = $s3->uploadPart([
'Bucket' => $bucket,
'Key' => $keyname,
'UploadId' => $uploadId,
'PartNumber' => $partNumber,
'Body' => fread($file, 5 * 1024 * 1024),
]);
$parts['Parts'][$partNumber] = [
'PartNumber' => $partNumber,
'ETag' => $result['ETag'],
];
echo "Uploaded part {$partNumber} of {$filename}." . PHP_EOL;
$partNumber++;
}
fclose($file);
} catch (S3Exception $e) {
$result = $s3->abortMultipartUpload([
'Bucket' => $bucket,
'Key' => $keyname,
'UploadId' => $uploadId
]);

echo "Upload of {$filename} failed." . PHP_EOL;
exit(1);
}

// Complete the multipart upload.


$result = $s3->completeMultipartUpload([
'Bucket' => $bucket,
'Key' => $keyname,
'UploadId' => $uploadId,
'MultipartUpload' => $parts,
]);
$url = $result['Location'];

echo "Uploaded {$filename} to {$url}." . PHP_EOL;

Using the AWS SDK for Ruby


The AWS SDK for Ruby version 3 supports Amazon S3 multipart uploads in two ways. For the first option,
you can use managed file uploads. For more information, see Uploading Files to Amazon S3 in the AWS
Developer Blog. Managed file uploads are the recommended method for uploading files to a bucket. They
provide the following benefits:

• Manage multipart uploads for objects larger than 15 MB.
• Correctly open files in binary mode to avoid encoding issues.
• Use multiple threads for uploading parts of large objects in parallel.

Alternatively, you can use the following multipart upload client operations directly:

• create_multipart_upload – Initiates a multipart upload and returns an upload ID.


• upload_part – Uploads a part in a multipart upload.
• upload_part_copy – Uploads a part by copying data from an existing object as data source.
• complete_multipart_upload – Completes a multipart upload by assembling previously uploaded parts.
• abort_multipart_upload – Stops a multipart upload.

For more information, see Using the AWS SDK for Ruby - Version 3 (p. 1040).


Using the REST API


The following sections in the Amazon Simple Storage Service API Reference describe the REST API for
multipart upload.

• Initiate Multipart Upload


• Upload Part
• Complete Multipart Upload
• Stop Multipart Upload
• List Parts
• List Multipart Uploads

Using the AWS CLI


The following sections in the AWS Command Line Interface (AWS CLI) describe the operations for
multipart upload.

• Initiate Multipart Upload


• Upload Part
• Upload Part (Copy)
• Complete Multipart Upload
• Abort Multipart Upload
• List Parts
• List Multipart Uploads

You can also use the REST API to make your own REST requests, or you can use one of the AWS SDKs. For
more information about the REST API, see Using the REST API (p. 93). For more information about the
SDKs, see Uploading an object using multipart upload (p. 80).

Uploading a directory using the high-level .NET


TransferUtility class
You can use the TransferUtility class to upload an entire directory. By default, the API uploads only
the files at the root of the specified directory. You can, however, specify recursively uploading files in all
of the subdirectories.

To select files in the specified directory based on filtering criteria, specify filtering expressions. For
example, to upload only the .pdf files from a directory, specify the "*.pdf" filter expression.

When uploading files from a directory, you don't specify the key names for the resulting objects. Amazon
S3 constructs the key names using the original file path. For example, assume that you have a directory
called c:\myfolder with the following structure:

Example

C:\myfolder
\a.txt
\b.pdf
\media\
An.mp3

When you upload this directory, Amazon S3 uses the following key names:


Example

a.txt
b.pdf
media/An.mp3

Example

The following C# example uploads a directory to an Amazon S3 bucket. It shows how to use various
TransferUtility.UploadDirectory overloads to upload the directory. Each successive call to
upload replaces the previous upload. For instructions on how to create and test a working sample, see
Running the Amazon S3 .NET Code Examples (p. 1039).

using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class UploadDirMPUHighLevelAPITest
{
private const string existingBucketName = "*** bucket name ***";
private const string directoryPath = @"*** directory path ***";
// The example uploads only .txt files.
private const string wildCard = "*.txt";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
static void Main()
{
s3Client = new AmazonS3Client(bucketRegion);
UploadDirAsync().Wait();
}

private static async Task UploadDirAsync()


{
try
{
var directoryTransferUtility =
new TransferUtility(s3Client);

// 1. Upload a directory.
await directoryTransferUtility.UploadDirectoryAsync(directoryPath,
existingBucketName);
Console.WriteLine("Upload statement 1 completed");

// 2. Upload only the .txt files from a directory


// and search recursively.
await directoryTransferUtility.UploadDirectoryAsync(
directoryPath,
existingBucketName,
wildCard,
SearchOption.AllDirectories);
Console.WriteLine("Upload statement 2 completed");

// 3. The same as Step 2 and some optional configuration.


// Search recursively for .txt files to upload.
var request = new TransferUtilityUploadDirectoryRequest
{
BucketName = existingBucketName,


Directory = directoryPath,
SearchOption = SearchOption.AllDirectories,
SearchPattern = wildCard
};

await directoryTransferUtility.UploadDirectoryAsync(request);
Console.WriteLine("Upload statement 3 completed");
}
catch (AmazonS3Exception e)
{
Console.WriteLine(
"Error encountered ***. Message:'{0}' when writing an object",
e.Message);
}
catch (Exception e)
{
Console.WriteLine(
"Unknown encountered on server. Message:'{0}' when writing an object",
e.Message);
}
}
}
}

Listing multipart uploads


You can use the AWS SDKs (low-level API) to retrieve a list of in-progress multipart uploads in Amazon
S3.

Listing multipart uploads using the AWS SDK (low-level API)


Java

The following tasks guide you through using the low-level Java classes to list all in-progress
multipart uploads on a bucket.

Low-level API multipart uploads listing process

1. Create an instance of the ListMultipartUploadsRequest class and provide the bucket name.

2. Run the AmazonS3Client.listMultipartUploads method. The method returns an instance of the
MultipartUploadListing class that gives you information about the multipart uploads in progress.

The following Java code example demonstrates the preceding tasks.

Example

ListMultipartUploadsRequest allMultpartUploadsRequest =
new ListMultipartUploadsRequest(existingBucketName);
MultipartUploadListing multipartUploadListing =
s3Client.listMultipartUploads(allMultpartUploadsRequest);
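
The listing that Amazon S3 returns might be truncated when a bucket has many in-progress uploads. The
following Java code is a minimal sketch that extends the preceding example (it assumes the same
s3Client and existingBucketName variables and is not part of the original example); it pages through
the results by feeding the next key and upload ID markers back into the request until the listing is
no longer truncated.

// Minimal sketch: assumes the s3Client and existingBucketName variables from the preceding example.
ListMultipartUploadsRequest listRequest =
    new ListMultipartUploadsRequest(existingBucketName);
MultipartUploadListing listing;
do {
    listing = s3Client.listMultipartUploads(listRequest);
    for (MultipartUpload upload : listing.getMultipartUploads()) {
        System.out.println("Key: " + upload.getKey()
            + ", upload ID: " + upload.getUploadId()
            + ", initiated: " + upload.getInitiated());
    }
    // Continue from where the previous page of results ended.
    listRequest.setKeyMarker(listing.getNextKeyMarker());
    listRequest.setUploadIdMarker(listing.getNextUploadIdMarker());
} while (listing.isTruncated());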

.NET

To list all of the in-progress multipart uploads on a specific bucket, use the AWS SDK
for .NET low-level multipart upload API's ListMultipartUploadsRequest class.


The AmazonS3Client.ListMultipartUploads method returns an instance of the


ListMultipartUploadsResponse class that provides information about the in-progress
multipart uploads.

An in-progress multipart upload is a multipart upload that has been initiated using the initiate
multipart upload request, but has not yet been completed or stopped. For more information about
Amazon S3 multipart uploads, see Uploading and copying objects using multipart upload (p. 74).

The following C# example shows how to use the AWS SDK for .NET to list all in-progress multipart
uploads on a bucket. For information about the example's compatibility with a specific version of
the AWS SDK for .NET and instructions on how to create and test a working sample, see Running the
Amazon S3 .NET Code Examples (p. 1039).

ListMultipartUploadsRequest request = new ListMultipartUploadsRequest


{
BucketName = bucketName // Bucket receiving the uploads.
};

ListMultipartUploadsResponse response = await


AmazonS3Client.ListMultipartUploadsAsync(request);

PHP

This topic shows how to use the low-level API classes from version 3 of the AWS SDK for PHP to
list all in-progress multipart uploads on a bucket. It assumes that you are already following the
instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 1039) and have the AWS
SDK for PHP properly installed.

The following PHP example demonstrates listing all in-progress multipart uploads on a bucket.

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// Retrieve a list of the current multipart uploads.


$result = $s3->listMultipartUploads([
'Bucket' => $bucket
]);

// Write the list of uploads to the page.


print_r($result->toArray());

Listing multipart uploads using the REST API


The following sections in the Amazon Simple Storage Service API Reference describe the REST API for
listing multipart uploads:

• ListParts – Lists the uploaded parts for a specific multipart upload.
• ListMultipartUploads – Lists in-progress multipart uploads.


Listing multipart uploads using the AWS CLI


The following sections in the AWS Command Line Interface describe the operations for listing multipart
uploads.

• list-parts – Lists the uploaded parts for a specific multipart upload.
• list-multipart-uploads – Lists in-progress multipart uploads.

Tracking a multipart upload


The high-level multipart upload API provides a listener interface, ProgressListener, to track the upload
progress when uploading an object to Amazon S3. Progress events occur periodically and notify the
listener that bytes have been transferred.

Java

Example

TransferManager tm = new TransferManager(new ProfileCredentialsProvider());

PutObjectRequest request = new PutObjectRequest(


existingBucketName, keyName, new File(filePath));

// Subscribe to the event and provide the event handler.
request.setGeneralProgressListener(new ProgressListener() {
public void progressChanged(ProgressEvent progressEvent) {
System.out.println("Transferred bytes: " +
progressEvent.getBytesTransferred());
}
});

Example
The following Java code uploads a file and uses the ProgressListener to track the upload
progress. For instructions on how to create and test a working sample, see Testing the Amazon S3
Java Code Examples (p. 1038).

import java.io.File;

import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.event.ProgressEvent;
import com.amazonaws.event.ProgressListener;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class TrackMPUProgressUsingHighLevelAPI {

public static void main(String[] args) throws Exception {


String existingBucketName = "*** Provide bucket name ***";
String keyName = "*** Provide object key ***";
String filePath = "*** file to upload ***";

TransferManager tm = new TransferManager(new ProfileCredentialsProvider());

// For more advanced uploads, you can create a request object


// and supply additional request parameters (ex: progress listeners,
// canned ACLs, etc.)


PutObjectRequest request = new PutObjectRequest(


existingBucketName, keyName, new File(filePath));

// You can ask the upload for its progress, or you can
// add a ProgressListener to your request to receive notifications
// when bytes are transferred.
request.setGeneralProgressListener(new ProgressListener() {
@Override
public void progressChanged(ProgressEvent progressEvent) {
System.out.println("Transferred bytes: " +
progressEvent.getBytesTransferred());
}
});

// TransferManager processes all transfers asynchronously,


// so this call will return immediately.
Upload upload = tm.upload(request);

try {
// You can block and wait for the upload to finish
upload.waitForCompletion();
} catch (AmazonClientException amazonClientException) {
System.out.println("Unable to upload file, upload aborted.");
amazonClientException.printStackTrace();
}
}
}

.NET

The following C# example uploads a file to an S3 bucket using the TransferUtility class, and
tracks the progress of the upload. For information about the example's compatibility with a specific
version of the AWS SDK for .NET and instructions for creating and testing a working sample, see
Running the Amazon S3 .NET Code Examples (p. 1039).

using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class TrackMPUUsingHighLevelAPITest
{
private const string bucketName = "*** provide the bucket name ***";
private const string keyName = "*** provide the name for the uploaded object
***";
private const string filePath = " *** provide the full path name of the file to
upload **";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
TrackMPUAsync().Wait();
}

private static async Task TrackMPUAsync()


{
try


{
var fileTransferUtility = new TransferUtility(s3Client);

// Use TransferUtilityUploadRequest to configure options.


// In this example we subscribe to an event.
var uploadRequest =
new TransferUtilityUploadRequest
{
BucketName = bucketName,
FilePath = filePath,
Key = keyName
};

uploadRequest.UploadProgressEvent +=
new EventHandler<UploadProgressArgs>
(uploadRequest_UploadPartProgressEvent);

await fileTransferUtility.UploadAsync(uploadRequest);
Console.WriteLine("Upload completed");
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}

static void uploadRequest_UploadPartProgressEvent(object sender,


UploadProgressArgs e)
{
// Process event.
Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
}
}
}

Aborting a multipart upload


After you initiate a multipart upload, you begin uploading parts. Amazon S3 stores these parts, but it
creates the object from the parts only after you upload all of them and send a successful request
to complete the multipart upload (you should verify that your request to complete multipart upload is
successful). Upon receiving the complete multipart upload request, Amazon S3 assembles the parts and
creates an object. If you don't send the complete multipart upload request successfully, Amazon S3 does
not assemble the parts and does not create any object.

You are billed for all storage associated with uploaded parts. For more information, see Multipart upload
and pricing (p. 76). So it's important that you either complete the multipart upload to have the object
created or stop the multipart upload to remove any uploaded parts.

You can stop an in-progress multipart upload in Amazon S3 using the AWS Command Line Interface
(AWS CLI), REST API, or AWS SDKs. You can also stop an incomplete multipart upload using a bucket
lifecycle policy.
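
As an illustration of the lifecycle approach, the following Java code is a minimal sketch (it assumes
an initialized AmazonS3 client named s3Client and a bucket name in bucketName, and it is not one of
the official examples in this guide); it adds a lifecycle rule that stops incomplete multipart uploads
seven days after initiation so that the associated parts are removed.

import java.util.Arrays;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.AbortIncompleteMultipartUpload;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.lifecycle.LifecycleFilter;

public class AbortIncompleteMPULifecycleRule {
    // Illustrative sketch: s3Client and bucketName are assumed to be provided by the caller.
    public static void addRule(AmazonS3 s3Client, String bucketName) {
        BucketLifecycleConfiguration.Rule rule = new BucketLifecycleConfiguration.Rule()
                .withId("Stop incomplete multipart uploads")
                .withFilter(new LifecycleFilter()) // An empty filter applies the rule to the whole bucket.
                .withAbortIncompleteMultipartUpload(
                        new AbortIncompleteMultipartUpload().withDaysAfterInitiation(7))
                .withStatus(BucketLifecycleConfiguration.ENABLED);

        // Replace the bucket's lifecycle configuration with one containing this rule.
        s3Client.setBucketLifecycleConfiguration(bucketName,
                new BucketLifecycleConfiguration(Arrays.asList(rule)));
    }
}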


Using the AWS SDKs (high-level API)


Java

The TransferManager class provides the abortMultipartUploads method to stop multipart


uploads in progress. An upload is considered to be in progress after you initiate it and until you
complete it or stop it. You provide a Date value, and this API stops all the multipart uploads on that
bucket that were initiated before the specified Date and are still in progress.

The following tasks guide you through using the high-level Java classes to stop multipart uploads.

High-level API multipart uploads stopping process

1. Create an instance of the TransferManager class.

2. Run the TransferManager.abortMultipartUploads method by passing the bucket name and a Date value.

The following Java code stops all multipart uploads in progress that were initiated on a specific
bucket over a week ago. For instructions on how to create and test a working sample, see Testing the
Amazon S3 Java Code Examples (p. 1038).

import java.util.Date;

import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.transfer.TransferManager;

public class AbortMPUUsingHighLevelAPI {

public static void main(String[] args) throws Exception {


String existingBucketName = "*** Provide existing bucket name ***";

TransferManager tm = new TransferManager(new ProfileCredentialsProvider());

int sevenDays = 1000 * 60 * 60 * 24 * 7;


Date oneWeekAgo = new Date(System.currentTimeMillis() - sevenDays);

try {
tm.abortMultipartUploads(existingBucketName, oneWeekAgo);
} catch (AmazonClientException amazonClientException) {
System.out.println("Unable to upload file, upload was aborted.");
amazonClientException.printStackTrace();
}
}
}

Note
You can also stop a specific multipart upload. For more information, see Using the AWS
SDKs (low-level API) (p. 101).
.NET

The following C# example stops all in-progress multipart uploads that were initiated on a specific
bucket over a week ago. For information about the example's compatibility with a specific version of
the AWS SDK for .NET and instructions on creating and testing a working sample, see Running the
Amazon S3 .NET Code Examples (p. 1039).

using Amazon;


using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class AbortMPUUsingHighLevelAPITest
{
private const string bucketName = "*** provide bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
AbortMPUAsync().Wait();
}

private static async Task AbortMPUAsync()


{
try
{
var transferUtility = new TransferUtility(s3Client);

// Abort all in-progress uploads initiated before the specified date.


await transferUtility.AbortMultipartUploadsAsync(
bucketName, DateTime.Now.AddDays(-7));
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
}
}

Note
You can also stop a specific multipart upload. For more information, see Using the AWS
SDKs (low-level API) (p. 101).

Using the AWS SDKs (low-level API)


You can stop an in-progress multipart upload by calling the AmazonS3.abortMultipartUpload
method. This method deletes any parts that were uploaded to Amazon S3 and frees up the resources.
You must provide the upload ID, bucket name, and key name. The following Java code example
demonstrates how to stop an in-progress multipart upload.

To stop a multipart upload, you provide the upload ID, and the bucket and key names that are used in
the upload. After you have stopped a multipart upload, you can't use the upload ID to upload additional
parts. For more information about Amazon S3 multipart uploads, see Uploading and copying objects
using multipart upload (p. 74).


Java

The following Java code example stops an in-progress multipart upload.

Example

AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

InitiateMultipartUploadRequest initRequest =
new InitiateMultipartUploadRequest(existingBucketName, keyName);
InitiateMultipartUploadResult initResponse =
s3Client.initiateMultipartUpload(initRequest);

s3Client.abortMultipartUpload(new AbortMultipartUploadRequest(
existingBucketName, keyName, initResponse.getUploadId()));

Note
Instead of a specific multipart upload, you can stop all your multipart uploads initiated
before a specific time that are still in progress. This clean-up operation is useful to stop old
multipart uploads that you initiated but did not complete or stop. For more information,
see Using the AWS SDKs (high-level API) (p. 100).
.NET

The following C# example shows how to stop a multipart upload. For a complete C# sample that
includes the following code, see Using the AWS SDKs (low-level API) (p. 87).

AbortMultipartUploadRequest abortMPURequest = new AbortMultipartUploadRequest


{
BucketName = existingBucketName,
Key = keyName,
UploadId = initResponse.UploadId
};
await AmazonS3Client.AbortMultipartUploadAsync(abortMPURequest);

You can also abort all in-progress multipart uploads that were initiated prior to a specific time. This
clean-up operation is useful for removing multipart uploads that you initiated but never completed or
stopped. For more information, see Using the AWS SDKs (high-level API) (p. 100).
PHP

This example shows how to use a class from version 3 of the AWS SDK for PHP to abort a multipart
upload that is in progress. It assumes that you are already following the instructions for Using the
AWS SDK for PHP and Running PHP Examples (p. 1039) and have the AWS SDK for PHP properly
installed. The example uses the abortMultipartUpload() method.

For information about running the PHP examples in this guide, see Running PHP
Examples (p. 1040).

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';
$uploadId = '*** Upload ID of upload to Abort ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);


// Abort the multipart upload.


$s3->abortMultipartUpload([
'Bucket' => $bucket,
'Key' => $keyname,
'UploadId' => $uploadId,
]);

Using the REST API


For more information about using the REST API to stop a multipart upload, see AbortMultipartUpload in
the Amazon Simple Storage Service API Reference.

Using the AWS CLI


For more information about using the AWS CLI to stop a multipart upload, see abort-multipart-upload in
the AWS CLI Command Reference.

Copying an object using multipart upload


The examples in this section show you how to copy objects greater than 5 GB using the multipart
upload API. You can copy objects less than 5 GB in a single operation. For more information, see Copying
objects (p. 108).

Using the AWS SDKs


To copy an object using the low-level API, do the following:

• Initiate a multipart upload by calling the AmazonS3Client.initiateMultipartUpload() method.


• Save the upload ID from the response object that the
AmazonS3Client.initiateMultipartUpload() method returns. You provide this upload ID for
each part-upload operation.
• Copy all of the parts. For each part that you need to copy, create a new instance of the
CopyPartRequest class. Provide the part information, including the source and destination bucket
names, source and destination object keys, upload ID, locations of the first and last bytes of the part,
and part number.
• Save the responses of the AmazonS3Client.copyPart() method calls. Each response includes
the ETag value and part number for the uploaded part. You need this information to complete the
multipart upload.
• Call the AmazonS3Client.completeMultipartUpload() method to complete the copy operation.

Java

Example

The following example shows how to use the Amazon S3 low-level Java API to perform a multipart
copy. For instructions on creating and testing a working sample, see Testing the Amazon S3 Java
Code Examples (p. 1038).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;


import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class LowLevelMultipartCopy {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String sourceBucketName = "*** Source bucket name ***";
String sourceObjectKey = "*** Source object key ***";
String destBucketName = "*** Target bucket name ***";
String destObjectKey = "*** Target object key ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Initiate the multipart upload.


InitiateMultipartUploadRequest initRequest = new
InitiateMultipartUploadRequest(destBucketName, destObjectKey);
InitiateMultipartUploadResult initResult =
s3Client.initiateMultipartUpload(initRequest);

// Get the object size to track the end of the copy operation.
GetObjectMetadataRequest metadataRequest = new
GetObjectMetadataRequest(sourceBucketName, sourceObjectKey);
ObjectMetadata metadataResult =
s3Client.getObjectMetadata(metadataRequest);
long objectSize = metadataResult.getContentLength();

// Copy the object using 5 MB parts.


long partSize = 5 * 1024 * 1024;
long bytePosition = 0;
int partNum = 1;
List<CopyPartResult> copyResponses = new ArrayList<CopyPartResult>();
while (bytePosition < objectSize) {
// The last part might be smaller than partSize, so check to make sure
// that lastByte isn't beyond the end of the object.
long lastByte = Math.min(bytePosition + partSize - 1, objectSize - 1);

// Copy this part.


CopyPartRequest copyRequest = new CopyPartRequest()
.withSourceBucketName(sourceBucketName)
.withSourceKey(sourceObjectKey)
.withDestinationBucketName(destBucketName)
.withDestinationKey(destObjectKey)
.withUploadId(initResult.getUploadId())
.withFirstByte(bytePosition)
.withLastByte(lastByte)
.withPartNumber(partNum++);
copyResponses.add(s3Client.copyPart(copyRequest));
bytePosition += partSize;
}

// Complete the upload request to concatenate all uploaded parts
// and make the copied object available.
CompleteMultipartUploadRequest completeRequest = new
CompleteMultipartUploadRequest(
destBucketName,
destObjectKey,


initResult.getUploadId(),
getETags(copyResponses));
s3Client.completeMultipartUpload(completeRequest);
System.out.println("Multipart copy complete.");
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}

// This is a helper function to construct a list of ETags.


private static List<PartETag> getETags(List<CopyPartResult> responses) {
List<PartETag> etags = new ArrayList<PartETag>();
for (CopyPartResult response : responses) {
etags.add(new PartETag(response.getPartNumber(), response.getETag()));
}
return etags;
}
}

.NET

The following C# example shows how to use the AWS SDK for .NET to copy an Amazon S3 object
that is larger than 5 GB from one source location to another, such as from one bucket to another. To
copy objects that are smaller than 5 GB, use the single-operation copy procedure described in Using
the AWS SDKs (p. 110). For more information about Amazon S3 multipart uploads, see Uploading
and copying objects using multipart upload (p. 74).

This example shows how to copy an Amazon S3 object that is larger than 5 GB from one S3
bucket to another using the AWS SDK for .NET multipart upload API. For information about SDK
compatibility and instructions for creating and testing a working sample, see Running the Amazon
S3 .NET Code Examples (p. 1039).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class CopyObjectUsingMPUapiTest
{
private const string sourceBucket = "*** provide the name of the bucket with
source object ***";
private const string targetBucket = "*** provide the name of the bucket to copy
the object to ***";
private const string sourceObjectKey = "*** provide the name of object to copy
***";
private const string targetObjectKey = "*** provide the name of the object copy
***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
Console.WriteLine("Copying an object");
MPUCopyObjectAsync().Wait();
}
private static async Task MPUCopyObjectAsync()
{
// Create a list to store the upload part responses.
List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();
List<CopyPartResponse> copyResponses = new List<CopyPartResponse>();

// Setup information required to initiate the multipart upload.


InitiateMultipartUploadRequest initiateRequest =
new InitiateMultipartUploadRequest
{
BucketName = targetBucket,
Key = targetObjectKey
};

// Initiate the upload.


InitiateMultipartUploadResponse initResponse =
await s3Client.InitiateMultipartUploadAsync(initiateRequest);

// Save the upload ID.


String uploadId = initResponse.UploadId;

try
{
// Get the size of the object.
GetObjectMetadataRequest metadataRequest = new GetObjectMetadataRequest
{
BucketName = sourceBucket,
Key = sourceObjectKey
};

GetObjectMetadataResponse metadataResponse =
await s3Client.GetObjectMetadataAsync(metadataRequest);
long objectSize = metadataResponse.ContentLength; // Length in bytes.

// Copy the parts.


long partSize = 5 * (long)Math.Pow(2, 20); // Part size is 5 MB.

long bytePosition = 0;
for (int i = 1; bytePosition < objectSize; i++)
{
CopyPartRequest copyRequest = new CopyPartRequest
{
DestinationBucket = targetBucket,
DestinationKey = targetObjectKey,
SourceBucket = sourceBucket,
SourceKey = sourceObjectKey,
UploadId = uploadId,
FirstByte = bytePosition,
LastByte = bytePosition + partSize - 1 >= objectSize ?
objectSize - 1 : bytePosition + partSize - 1,
PartNumber = i
};

copyResponses.Add(await s3Client.CopyPartAsync(copyRequest));

bytePosition += partSize;
}

// Set up to complete the copy.


CompleteMultipartUploadRequest completeRequest =
new CompleteMultipartUploadRequest


{
BucketName = targetBucket,
Key = targetObjectKey,
UploadId = initResponse.UploadId
};
completeRequest.AddPartETags(copyResponses);

// Complete the copy.


CompleteMultipartUploadResponse completeUploadResponse =
await s3Client.CompleteMultipartUploadAsync(completeRequest);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
}
}

Using the REST API


The following sections in the Amazon Simple Storage Service API Reference describe the REST API for
multipart upload. For copying an existing object, use the Upload Part (Copy) API and specify the source
object by adding the x-amz-copy-source request header in your request.

• Initiate Multipart Upload


• Upload Part
• Upload Part (Copy)
• Complete Multipart Upload
• Abort Multipart Upload
• List Parts
• List Multipart Uploads

You can use these APIs to make your own REST requests, or you can use one of the SDKs we provide. For
more information about using Multipart Upload with the AWS CLI, see Using the AWS CLI (p. 93). For
more information about the SDKs, see AWS SDK support for multipart upload (p. 76).

Amazon S3 multipart upload limits


The following table provides multipart upload core specifications. For more information, see Uploading
and copying objects using multipart upload (p. 74).

Item                                                             Specification

Maximum object size                                              5 TB

Maximum number of parts per upload                               10,000

Part numbers                                                     1 to 10,000 (inclusive)

Part size                                                        5 MB to 5 GB. There is no minimum size
                                                                 limit on the last part of your multipart
                                                                 upload.

Maximum number of parts returned for a list parts request        1,000

Maximum number of multipart uploads returned in a list           1,000
multipart uploads request
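
As a worked example of how these limits interact, the following Java sketch (illustrative only, and
using an assumed 50 GB object size rather than a value from this guide) chooses the smallest part size
that satisfies the 5 MB minimum while keeping the number of parts at or below 10,000.

public class PartSizeCalculator {
    public static void main(String[] args) {
        long minPartSize = 5L * 1024 * 1024;        // 5 MB minimum part size
        long maxParts = 10_000;                     // Maximum number of parts per upload
        long objectSize = 50L * 1024 * 1024 * 1024; // Assumed example: a 50 GB object

        // Smallest part size that keeps the upload within 10,000 parts and above 5 MB.
        long partSize = Math.max(minPartSize, (objectSize + maxParts - 1) / maxParts);
        long partCount = (objectSize + partSize - 1) / partSize;

        System.out.println("Part size: " + partSize + " bytes");
        System.out.println("Number of parts: " + partCount);
    }
}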

Copying objects
The copy operation creates a copy of an object that is already stored in Amazon S3.

You can create a copy of your object up to 5 GB in a single atomic operation. However, to copy an object
that is greater than 5 GB, you must use the multipart upload API.

Using the copy operation, you can:

• Create additional copies of objects


• Rename objects by copying them and deleting the original ones
• Move objects across Amazon S3 locations (e.g., us-west-1 and Europe)
• Change object metadata

Each Amazon S3 object has metadata. It is a set of name-value pairs. You can set object metadata at
the time you upload it. After you upload the object, you cannot modify object metadata. The only way
to modify object metadata is to make a copy of the object and set the metadata. In the copy operation
you set the same object as the source and target.

An object's metadata is a mix of system metadata and user-defined metadata. You control some of the
system metadata, such as the storage class to use for the object and the server-side encryption
configuration. When you copy an object, user-controlled system metadata and user-defined metadata are
also copied. Amazon S3 resets the system-controlled metadata. For example, when you copy an object,
Amazon S3 resets the creation date of the copied object. You don't need to set any of these values in
your copy request.

When copying an object, you might decide to update some of the metadata values. For example, if
your source object is configured to use standard storage, you might choose to use reduced redundancy
storage for the object copy. You might also decide to alter some of the user-defined metadata values
present on the source object. Note that if you choose to update any of the object's user-configurable
metadata (system or user-defined) during the copy, you must explicitly specify all of the user-
configurable metadata present on the source object in your request, even if you are changing only one
of the metadata values.
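
To illustrate, the following Java code is a minimal sketch (with placeholder bucket and key names, not
an example taken from this guide) that replaces an object's metadata by copying the object onto itself
and supplying the complete replacement metadata set. Supplying new object metadata on the copy request
directs Amazon S3 to replace the metadata rather than copy it from the source.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;

public class ReplaceObjectMetadata {
    public static void main(String[] args) {
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Object key ***";

        AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

        // Specify every user-configurable metadata value that the object should have
        // after the copy; values that you omit are not carried over.
        ObjectMetadata newMetadata = new ObjectMetadata();
        newMetadata.setContentType("text/plain");
        newMetadata.addUserMetadata("project", "example");

        // Copy the object onto itself, providing the replacement metadata.
        CopyObjectRequest request = new CopyObjectRequest(bucketName, keyName,
                bucketName, keyName)
                .withNewObjectMetadata(newMetadata);
        s3Client.copyObject(request);
    }
}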

For more information about the object metadata, see Working with object metadata (p. 61).
Note

• Copying objects across locations incurs bandwidth charges.


• If the source object is archived in S3 Glacier or S3 Glacier Deep Archive, you
must first restore a temporary copy before you can copy the object to another bucket. For
information about archiving objects, see Transitioning to the S3 Glacier and S3 Glacier Deep
Archive storage classes (object archival) (p. 581).


When copying objects, you can request Amazon S3 to save the target object encrypted with an AWS
KMS key, an Amazon S3-managed encryption key, or a customer-provided encryption key. Accordingly,
you must specify encryption information in your request. If the copy source is an object that is stored in
Amazon S3 using server-side encryption with a customer-provided key, you must provide encryption
information in your request so that Amazon S3 can decrypt the object for copying. For more information, see
Protecting data using encryption (p. 219).

To copy more than one Amazon S3 object with a single request, you can use Amazon S3 batch
operations. You provide S3 Batch Operations with a list of objects to operate on. S3 Batch Operations
calls the respective API to perform the specified operation. A single Batch Operations job can perform
the specified operation on billions of objects containing exabytes of data.

The S3 Batch Operations feature tracks progress, sends notifications, and stores a detailed completion
report of all actions, providing a fully managed, auditable, serverless experience. You can use S3
Batch Operations through the AWS Management Console, AWS CLI, AWS SDKs, or REST API. For more
information, see the section called “Batch Operations basics” (p. 738).

To copy an object
To copy an object, use the examples below.

Using the S3 console


In the S3 console, you can copy or move an object. For more information, see the procedures below.

To copy an object

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Navigate to the Amazon S3 bucket or folder that contains the objects that you want to copy.
3. Select the check box to the left of the names of the objects that you want to copy.
4. Choose Actions and choose Copy from the list of options that appears.

Alternatively, choose Copy from the options in the upper right.


5. Select the destination type and destination account. To specify the destination path, choose Browse
S3, navigate to the destination, and select the check box to the left of the destination. Choose
Choose destination in the lower right.

Alternatively, enter the destination path.


6. If you do not have bucket versioning enabled, you might be asked to acknowledge that existing
objects with the same name are overwritten. If this is OK, select the check box and proceed. If you
want to keep all versions of objects in this bucket, select Enable Bucket Versioning. You can also
update default encryption and Object Lock properties.
7. Choose Copy in the bottom right and Amazon S3 moves your objects to the destination.

To move objects

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Navigate to the Amazon S3 bucket or folder that contains the objects that you want to move.
3. Select the check box to the left of the names of the objects that you want to move.
4. Choose Actions and choose Move from the list of options that appears.


Alternatively, choose Move from the options in the upper right.


5. To specify the destination path, choose Browse S3, navigate to the destination, and select the check
box to the left of the destination. Choose Choose destination in the lower right.

Alternatively, enter the destination path.


6. If you do not have bucket versioning enabled, you might be asked to acknowledge that existing
objects with the same name are overwritten. If this is OK, select the check box and proceed. If you
want to keep all versions of objects in this bucket, select Enable Bucket Versioning. You can also
update default encryption and Object Lock properties.
7. Choose Move in the bottom right and Amazon S3 moves your objects to the destination.

Note

• This action creates a copy of all specified objects with updated settings, updates the last-
modified date in the specified location, and adds a delete marker to the original object.
• When moving folders, wait for the move action to finish before making additional changes in
the folders.
• Objects encrypted with customer-provided encryption keys (SSE-C) cannot be copied using
the S3 console. To copy objects encrypted with SSE-C, use the AWS CLI, AWS SDK, or the
Amazon S3 REST API.
• This action updates metadata for bucket versioning, encryption, Object Lock features, and
archived objects.

Using the AWS SDKs


The examples in this section show how to copy objects up to 5 GB in a single operation. For copying
objects greater than 5 GB, you must use the multipart upload API. For more information, see Copying an
object using multipart upload (p. 103).

Java

Example

The following example copies an object in Amazon S3 using the AWS SDK for Java. For instructions
on creating and testing a working sample, see Testing the Amazon S3 Java Code Examples (p. 1038).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;

import java.io.IOException;

public class CopyObjectSingleOperation {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String sourceKey = "*** Source object key *** ";
String destinationKey = "*** Destination object key ***";


try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Copy the object into a new object in the same bucket.


CopyObjectRequest copyObjRequest = new CopyObjectRequest(bucketName,
sourceKey, bucketName, destinationKey);
s3Client.copyObject(copyObjRequest);
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

.NET

The following C# example uses the high-level AWS SDK for .NET to copy objects that are as large
as 5 GB in a single operation. For objects that are larger than 5 GB, use the multipart upload copy
example described in Copying an object using multipart upload (p. 103).

This example makes a copy of an object that is a maximum of 5 GB. For information about the
example's compatibility with a specific version of the AWS SDK for .NET and instructions on how to
create and test a working sample, see Running the Amazon S3 .NET Code Examples (p. 1039).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class CopyObjectTest
{
private const string sourceBucket = "*** provide the name of the bucket with
source object ***";
private const string destinationBucket = "*** provide the name of the bucket to
copy the object to ***";
private const string objectKey = "*** provide the name of object to copy ***";
private const string destObjectKey = "*** provide the destination object key
name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
Console.WriteLine("Copying an object");
CopyingObjectAsync().Wait();
}

private static async Task CopyingObjectAsync()


{


try
{
CopyObjectRequest request = new CopyObjectRequest
{
SourceBucket = sourceBucket,
SourceKey = objectKey,
DestinationBucket = destinationBucket,
DestinationKey = destObjectKey
};
CopyObjectResponse response = await s3Client.CopyObjectAsync(request);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
}
}

PHP

This topic guides you through using classes from version 3 of the AWS SDK for PHP to copy a single
object and multiple objects within Amazon S3, from one bucket to another or within the same
bucket.

This topic assumes that you are already following the instructions for Using the AWS SDK for PHP
and Running PHP Examples (p. 1039) and have the AWS SDK for PHP properly installed.

The following PHP example illustrates the use of the copyObject() method to copy a single object
within Amazon S3, and the use of a batch of calls to CopyObject through the getCommand() method to
make multiple copies of an object.

Copying objects

1. Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class constructor.

2. To make multiple copies of an object, run a batch of calls to the Amazon S3 client getCommand()
method, which returns an Aws\CommandInterface object. You provide the CopyObject command name as the
first argument and an array containing the source bucket, source key name, target bucket, and target
key name as the second argument.

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\CommandPool;
use Aws\ResultInterface;
use Aws\Exception\AwsException;

$sourceBucket = '*** Your Source Bucket Name ***';
$sourceKeyname = '*** Your Source Object Key ***';
$targetBucket = '*** Your Target Bucket Name ***';
$targetKeyname = '*** Your Target Object Key ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'


]);

// Copy an object.
$s3->copyObject([
'Bucket' => $targetBucket,
'Key' => "{$sourceKeyname}-copy",
'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
]);

// Perform a batch of CopyObject operations.


$batch = array();
for ($i = 1; $i <= 3; $i++) {
$batch[] = $s3->getCommand('CopyObject', [
'Bucket' => $targetBucket,
'Key' => "{targetKeyname}-{$i}",
'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
]);
}
try {
$results = CommandPool::batch($s3, $batch);
foreach($results as $result) {
if ($result instanceof ResultInterface) {
// Result handling here
}
if ($result instanceof AwsException) {
// AwsException handling here
}
}
} catch (\Exception $e) {
// General error handling here
}

Ruby

The following tasks guide you through using the Ruby classes to copy an object in Amazon S3 from
one bucket to another or within the same bucket.

Copying objects

1. Use the Amazon S3 modularized gem for version 3 of the AWS SDK for Ruby, require 'aws-sdk-s3',
and provide your AWS credentials. For more information about how to provide your credentials, see
Making requests using AWS account or IAM user credentials (p. 996).

2. Provide the request information, such as the source bucket name, source key name, destination
bucket name, and destination key.

The following Ruby code example demonstrates the preceding tasks using the #copy_object
method to copy an object from one bucket to another.

require 'aws-sdk-s3'

# Copies an object from one Amazon S3 bucket to another.


#
# Prerequisites:
#
# - Two S3 buckets (a source bucket and a target bucket).
# - An object in the source bucket to be copied.
#
# @param s3_client [Aws::S3::Client] An initialized Amazon S3 client.
# @param source_bucket_name [String] The source bucket's name.
# @param source_key [String] The name of the object


# in the source bucket to be copied.


# @param target_bucket_name [String] The target bucket's name.
# @param target_key [String] The name of the copied object.
# @return [Boolean] true if the object was copied; otherwise, false.
# @example
# s3_client = Aws::S3::Client.new(region: 'us-east-1')
# exit 1 unless object_copied?(
# s3_client,
# 'doc-example-bucket1',
# 'my-source-file.txt',
# 'doc-example-bucket2',
# 'my-target-file.txt'
# )
def object_copied?(
s3_client,
source_bucket_name,
source_key,
target_bucket_name,
target_key)

return true if s3_client.copy_object(


bucket: target_bucket_name,
copy_source: source_bucket_name + '/' + source_key,
key: target_key
)
rescue StandardError => e
puts "Error while copying object: #{e.message}"
false
end

Copying an object using the REST API


This example describes how to copy an object using REST. For more information about the REST API, go
to PUT Object (Copy).

This example copies the flotsam object from the pacific bucket to the jetsam object of the
atlantic bucket, preserving its metadata.

PUT /jetsam HTTP/1.1


Host: atlantic.s3.amazonaws.com
x-amz-copy-source: /pacific/flotsam
Authorization: AWS AKIAIOSFODNN7EXAMPLE:ENoSbxYByFA0UGLZUqJN5EUnLDg=
Date: Wed, 20 Feb 2008 22:12:21 +0000

The signature was generated from the following information.

PUT\r\n
\r\n
\r\n
Wed, 20 Feb 2008 22:12:21 +0000\r\n

x-amz-copy-source:/pacific/flotsam\r\n
/atlantic/jetsam

Amazon S3 returns the following response that specifies the ETag of the object and when it was last
modified.

HTTP/1.1 200 OK
x-amz-id-2: Vyaxt7qEbzv34BnSu5hctyyNSlHTYZFMWK4FtzO+iX8JQNyaLdTshL0KxatbaOZt
x-amz-request-id: 6B13C3C5B34AF333
Date: Wed, 20 Feb 2008 22:13:01 +0000


Content-Type: application/xml
Transfer-Encoding: chunked
Connection: close
Server: AmazonS3
<?xml version="1.0" encoding="UTF-8"?>

<CopyObjectResult>
<LastModified>2008-02-20T22:13:01</LastModified>
<ETag>"7e9c608af58950deeb370c98608ed097"</ETag>
</CopyObjectResult>

Downloading an object
This section explains how to download objects from an S3 bucket.

Data transfer fees apply when you download objects. For information about Amazon S3 features and
pricing, see Amazon S3.

You can download a single object per request using the Amazon S3 console. To download multiple
objects, use the AWS CLI, AWS SDKs, or REST API.

When you download an object programmatically, its metadata is returned in the response headers. There
are times when you want to override certain response header values returned in a GET response. For
example, you might override the Content-Disposition response header value in your GET request.
The REST GET Object API (see GET Object) allows you to specify query string parameters in your GET
request to override these values. The AWS SDKs for Java, .NET, and PHP also provide necessary objects
you can use to specify values for these response headers in your GET request.

When retrieving objects that are stored encrypted using server-side encryption, you must provide
appropriate request headers. For more information, see Protecting data using encryption (p. 219).

Using the S3 console


This section explains how to use the Amazon S3 console to download an object from an S3 bucket using
a presigned URL.
Note

• You can only download one object at a time.


• If an object's key name ends with one or more periods ("."), downloading it through the Amazon S3
console removes the trailing periods from the name of the downloaded object. To keep the trailing
periods in the downloaded object's name, you must use the AWS Command Line Interface (AWS CLI),
AWS SDKs, or REST API.

To download an object from an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to download an object from.

3. You can download an object from an S3 bucket in any of the following ways:

• Choose the name of the object that you want to download.


On the Overview page, select the object and from the Actions menu choose Download or
Download as if you want to download the object to a specific folder.
• Choose the object that you want to download and then from the Object actions menu choose
Download or Download as if you want to download the object to a specific folder.
• If you want to download a specific version of the object, choose the name of the object that you
want to download. Choose the Versions tab and then from the Actions menu choose Download
or Download as if you want to download the object to a specific folder.

Using the AWS SDKs


Java

When you download an object through the AWS SDK for Java, Amazon S3 returns all of the object's
metadata and an input stream from which to read the object's contents.

To retrieve an object, you do the following:

• Execute the AmazonS3Client.getObject() method, providing the bucket name and object key
in the request.
• Execute one of the S3Object instance methods to process the input stream.

Note
Your network connection remains open until you read all of the data or close the input
stream. We recommend that you read the content of the stream as quickly as possible.

The following are some variations you might use:

• Instead of reading the entire object, you can read only a portion of the object data by specifying
the byte range that you want in the request.
• You can optionally override the response header values by using a ResponseHeaderOverrides
object and setting the corresponding request property. For example, you can use this feature to
indicate that the object should be downloaded into a file with a different file name than the object
key name.

The following example retrieves an object from an Amazon S3 bucket three ways: first, as a
complete object, then as a range of bytes from the object, then as a complete object with overridden
response header values. For more information about getting objects from Amazon S3, see GET
Object. For instructions on creating and testing a working sample, see Testing the Amazon S3 Java
Code Examples (p. 1038).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ResponseHeaderOverrides;
import com.amazonaws.services.s3.model.S3Object;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;


public class GetObject2 {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String key = "*** Object key ***";

S3Object fullObject = null, objectPortion = null, headerOverrideObject = null;


try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();

// Get an object and print its contents.


System.out.println("Downloading an object");
fullObject = s3Client.getObject(new GetObjectRequest(bucketName, key));
System.out.println("Content-Type: " +
fullObject.getObjectMetadata().getContentType());
System.out.println("Content: ");
displayTextInputStream(fullObject.getObjectContent());

// Get a range of bytes from an object and print the bytes.


GetObjectRequest rangeObjectRequest = new GetObjectRequest(bucketName, key)
.withRange(0, 9);
objectPortion = s3Client.getObject(rangeObjectRequest);
System.out.println("Printing bytes retrieved.");
displayTextInputStream(objectPortion.getObjectContent());

// Get an entire object, overriding the specified response headers, and
// print the object's content.
ResponseHeaderOverrides headerOverrides = new ResponseHeaderOverrides()
.withCacheControl("No-cache")
.withContentDisposition("attachment; filename=example.txt");
GetObjectRequest getObjectRequestHeaderOverride = new
GetObjectRequest(bucketName, key)
.withResponseHeaders(headerOverrides);
headerOverrideObject = s3Client.getObject(getObjectRequestHeaderOverride);
displayTextInputStream(headerOverrideObject.getObjectContent());
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
} finally {
// To ensure that the network connection doesn't remain open, close any open input streams.
if (fullObject != null) {
fullObject.close();
}
if (objectPortion != null) {
objectPortion.close();
}
if (headerOverrideObject != null) {
headerOverrideObject.close();
}
}
}

private static void displayTextInputStream(InputStream input) throws IOException {


// Read the text input stream one line at a time and display each line.
BufferedReader reader = new BufferedReader(new InputStreamReader(input));


String line = null;


while ((line = reader.readLine()) != null) {
System.out.println(line);
}
System.out.println();
}
}

.NET

When you download an object, you get all of the object's metadata and a stream from which to read
the contents. You should read the content of the stream as quickly as possible because the data is
streamed directly from Amazon S3 and your network connection will remain open until you read all
the data or close the input stream. You do the following to get an object:

• Execute the getObject method, providing the bucket name and object key in the request.
• Execute one of the GetObjectResponse methods to process the stream.

The following are some variations you might use:

• Instead of reading the entire object, you can read only a portion of the object data by specifying
the byte range that you want in the request, as shown in the following C# example:

Example

GetObjectRequest request = new GetObjectRequest


{
BucketName = bucketName,
Key = keyName,
ByteRange = new ByteRange(0, 10)
};

• When retrieving an object, you can optionally override the response header values (see
Downloading an object (p. 115)) by using the ResponseHeaderOverrides object and setting
the corresponding request property. The following C# code example shows how to do this. For
example, you can use this feature to indicate that the object should be downloaded into a file with
a different file name than the object key name.

Example

GetObjectRequest request = new GetObjectRequest


{
BucketName = bucketName,
Key = keyName
};

ResponseHeaderOverrides responseHeaders = new ResponseHeaderOverrides();


responseHeaders.CacheControl = "No-cache";
responseHeaders.ContentDisposition = "attachment; filename=testing.txt";

request.ResponseHeaderOverrides = responseHeaders;

Example

The following C# code example retrieves an object from an Amazon S3 bucket. From the response,
the example reads the object data using the GetObjectResponse.ResponseStream property.


The example also shows how you can use the GetObjectResponse.Metadata collection to read
object metadata. If the object you retrieve has the x-amz-meta-title metadata, the code prints
the metadata value.

For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 1039).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class GetObjectTest
{
private const string bucketName = "*** bucket name ***";
private const string keyName = "*** object key ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;

public static void Main()


{
client = new AmazonS3Client(bucketRegion);
ReadObjectDataAsync().Wait();
}

static async Task ReadObjectDataAsync()


{
string responseBody = "";
try
{
GetObjectRequest request = new GetObjectRequest
{
BucketName = bucketName,
Key = keyName
};
using (GetObjectResponse response = await
client.GetObjectAsync(request))
using (Stream responseStream = response.ResponseStream)
using (StreamReader reader = new StreamReader(responseStream))
{
string title = response.Metadata["x-amz-meta-title"]; // Assume you have "title" as metadata added to the object.
string contentType = response.Headers["Content-Type"];
Console.WriteLine("Object metadata, Title: {0}", title);
Console.WriteLine("Content type: {0}", contentType);

responseBody = reader.ReadToEnd(); // Now you process the response body.
}
}
catch (AmazonS3Exception e)
{
// If bucket or object does not exist
Console.WriteLine("Error encountered ***. Message:'{0}' when reading
object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
reading object", e.Message);


}
}
}
}

PHP

This topic explains how to use a class from the AWS SDK for PHP to retrieve an Amazon S3 object.
You can retrieve an entire object or a byte range from the object. We assume that you are already
following the instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 1039) and
have the AWS SDK for PHP properly installed.

When retrieving an object, you can optionally override the response header values by
adding the response keys, ResponseContentType, ResponseContentLanguage,
ResponseContentDisposition, ResponseCacheControl, and ResponseExpires, to the
getObject() method, as shown in the following PHP code example:

Example

$result = $s3->getObject([
'Bucket' => $bucket,
'Key' => $keyname,
'ResponseContentType' => 'text/plain',
'ResponseContentLanguage' => 'en-US',
'ResponseContentDisposition' => 'attachment; filename=testing.txt',
'ResponseCacheControl' => 'No-cache',
'ResponseExpires' => gmdate(DATE_RFC2822, time() + 3600),
]);

For more information about retrieving objects, see Downloading an object (p. 115).

The following PHP example retrieves an object and displays the content of the object in the browser.
The example shows how to use the getObject() method. For information about running the PHP
examples in this guide, see Running PHP Examples (p. 1040).

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

try {
// Get the object.
$result = $s3->getObject([
'Bucket' => $bucket,
'Key' => $keyname
]);

// Display the object in the browser.


header("Content-Type: {$result['ContentType']}");
echo $result['Body'];
} catch (S3Exception $e) {
echo $e->getMessage() . PHP_EOL;
}

Using the REST API


You can use the AWS SDKs to retrieve objects from a bucket. However, if your application requires it,
you can send REST requests directly. You can send a GET request to retrieve an object.

For more information about the request and response format, see Get Object.

Using the AWS CLI


The following example shows how to use the AWS CLI to download an object from Amazon S3. For
more information and examples, see get-object in the AWS CLI Command Reference.

aws s3api get-object --bucket DOC-EXAMPLE-BUCKET1 --key dir/my_images.tar.bz2 my_images.tar.bz2
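
To download only part of an object, you can also specify a byte range, similar to the SDK examples earlier
in this topic. The following command is a minimal sketch that reuses the placeholder bucket and key from
the previous example and writes the first ten bytes of the object to a separate local file:

aws s3api get-object --bucket DOC-EXAMPLE-BUCKET1 --key dir/my_images.tar.bz2 --range "bytes=0-9" my_images_part.tar.bz2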

Deleting Amazon S3 objects


You can delete one or more objects directly from Amazon S3 using the Amazon S3 console, AWS SDKs,
AWS Command Line Interface (AWS CLI), or REST API. Because all objects in your S3 bucket incur
storage costs, you should delete objects that you no longer need. For example, if you're collecting log
files, it's a good idea to delete them when they're no longer needed. You can set up a lifecycle rule to
automatically delete objects such as log files. For more information, see the section called “Setting
lifecycle configuration” (p. 584).

For information about Amazon S3 features and pricing, see Amazon S3 pricing.

You have the following API options when deleting an object:

• Delete a single object — Amazon S3 provides the DELETE API that you can use to delete one object in
a single HTTP request.
• Delete multiple objects — Amazon S3 provides the Multi-Object Delete API that you can use to delete
up to 1,000 objects in a single HTTP request.

When deleting objects from a bucket that is not version-enabled, you provide only the object key name.
However, when deleting objects from a version-enabled bucket, you can optionally provide the version ID
of the object to delete a specific version of the object.

Programmatically deleting objects from a version-


enabled bucket
If your bucket is version-enabled, multiple versions of the same object can exist in the bucket. When
working with version-enabled buckets, the delete API enables the following options:

• Specify a non-versioned delete request — Specify only the object's key, and not the version ID. In
this case, Amazon S3 creates a delete marker and returns its version ID in the response. This makes
your object disappear from the bucket. For information about object versioning and the delete marker
concept, see Using versioning in S3 buckets (p. 519).
• Specify a versioned delete request — Specify both the key and a version ID. In this case, the
following two outcomes are possible:


• If the version ID maps to a specific object version, Amazon S3 deletes the specific version of the
object.
• If the version ID maps to the delete marker of that object, Amazon S3 deletes the delete marker.
This makes the object reappear in your bucket.
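
The following minimal sketch, using the AWS SDK for Java with placeholder values, illustrates both
request types. It is not one of the official examples; the full examples later in this section show the
same calls in context.

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectRequest;
import com.amazonaws.services.s3.model.DeleteVersionRequest;

public class DeleteRequestTypesSketch {
    public static void main(String[] args) {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.DEFAULT_REGION)
                .build();
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Key name ***";
        String versionId = "*** Version ID ***";

        // Non-versioned delete request: specify only the key. Amazon S3 adds a
        // delete marker, and the object no longer appears in listings.
        s3Client.deleteObject(new DeleteObjectRequest(bucketName, keyName));

        // Versioned delete request: specify the key and a version ID. Amazon S3
        // permanently removes that version, or removes the delete marker if the
        // version ID identifies one.
        s3Client.deleteVersion(new DeleteVersionRequest(bucketName, keyName, versionId));
    }
}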

Deleting objects from an MFA-enabled bucket


When deleting objects from a multi-factor authentication (MFA)-enabled bucket, note the following:

• If you provide an invalid MFA token, the request always fails.


• If you have an MFA-enabled bucket, and you make a versioned delete request (you provide an object
key and version ID), the request fails if you don't provide a valid MFA token. In addition, when using
the Multi-Object Delete API on an MFA-enabled bucket, if any of the deletes are a versioned delete
request (that is, you specify object key and version ID), the entire request fails if you don't provide an
MFA token.

However, in the following cases the request succeeds:

• If you have an MFA-enabled bucket, and you make a non-versioned delete request (you are not
deleting a versioned object), and you don't provide an MFA token, the delete succeeds.
• If you have a Multi-Object Delete request specifying only non-versioned objects to delete from an
MFA-enabled bucket, and you don't provide an MFA token, the deletions succeed.

For information about MFA delete, see Configuring MFA delete (p. 528).
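
As an illustration only (a sketch under assumptions, not one of the official examples), the following
AWS SDK for Java code shows how a versioned delete request can carry an MFA token. It assumes the
SDK's MultiFactorAuthentication class and the DeleteVersionRequest constructor that accepts it, and all
values shown are placeholders.

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteVersionRequest;
import com.amazonaws.services.s3.model.MultiFactorAuthentication;

public class MfaVersionedDeleteSketch {
    public static void main(String[] args) {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.DEFAULT_REGION)
                .build();

        String bucketName = "*** Bucket name ***";
        String keyName = "*** Key name ***";
        String versionId = "*** Version ID ***";

        // Placeholder device serial number and the current token from that device.
        MultiFactorAuthentication mfa = new MultiFactorAuthentication(
                "*** Device serial number ***", "*** Token ***");

        // A versioned delete request against an MFA-enabled bucket fails unless
        // a valid MFA token is included with the request.
        s3Client.deleteVersion(new DeleteVersionRequest(bucketName, keyName, versionId, mfa));
    }
}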

Topics
• Deleting a single object (p. 122)
• Deleting multiple objects (p. 129)

Deleting a single object


You can use the Amazon S3 console or the DELETE API to delete a single existing object from an S3
bucket.

Because all objects in your S3 bucket incur storage costs, you should delete objects that you no longer
need. For example, if you are collecting log files, it's a good idea to delete them when they're no longer
needed. You can set up a lifecycle rule to automatically delete objects such as log files. For more
information, see the section called “Setting lifecycle configuration” (p. 584).

For information about Amazon S3 features and pricing, see Amazon S3 pricing.

Using the S3 console


Follow these steps to use the Amazon S3 console to delete a single object from a bucket.

To delete an object

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to delete an object from.
3. To delete an object in a versioning-enabled bucket with versioning:


• Off, Amazon S3 creates a delete marker. To delete the object, select the object, choose Delete, and
confirm your choice by typing delete in the text field.
• On, Amazon S3 permanently deletes the object version. Select the object version that you want to
delete, choose Delete, and confirm your choice by typing permanently delete in the text field.

Using the AWS SDKs


The following examples show how you can use the AWS SDKs to delete an object from a bucket. For
more information, see DELETE Object in the Amazon Simple Storage Service API Reference.

If you have S3 Versioning enabled on the bucket, you have the following options:

• Delete a specific object version by specifying a version ID.


• Delete an object without specifying a version ID, in which case Amazon S3 adds a delete marker to the
object.

For more information about S3 Versioning, see Using versioning in S3 buckets (p. 519).

Java

Example Example 1: Deleting an object (non-versioned bucket)

The following example assumes that the bucket is not versioning-enabled and the object doesn't
have any version IDs. In the delete request, you specify only the object key and not a version ID.

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 1038).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectRequest;

import java.io.IOException;

public class DeleteObjectNonVersionedBucket {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String keyName = "*** Key name ****";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

s3Client.deleteObject(new DeleteObjectRequest(bucketName, keyName));


} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {


// Amazon S3 couldn't be contacted for a response, or the client


// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Example Example 2: Deleting an object (versioned bucket)

The following example deletes an object from a versioned bucket. The example deletes a specific
object version by specifying the object key name and version ID.

The example does the following:

1. Adds a sample object to the bucket. Amazon S3 returns the version ID of the newly added object.
The example uses this version ID in the delete request.
2. Deletes the object version by specifying both the object key name and a version ID. If there are no
other versions of that object, Amazon S3 deletes the object entirely. Otherwise, Amazon S3 only
deletes the specified version.
Note
You can get the version IDs of an object by sending a ListVersions request.

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
import com.amazonaws.services.s3.model.DeleteVersionRequest;
import com.amazonaws.services.s3.model.PutObjectResult;

import java.io.IOException;

public class DeleteObjectVersionEnabledBucket {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String keyName = "*** Key name ****";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Check to ensure that the bucket is versioning-enabled.


String bucketVersionStatus =
s3Client.getBucketVersioningConfiguration(bucketName).getStatus();
if (!bucketVersionStatus.equals(BucketVersioningConfiguration.ENABLED)) {
System.out.printf("Bucket %s is not versioning-enabled.", bucketName);
} else {
// Add an object.
PutObjectResult putResult = s3Client.putObject(bucketName, keyName,
"Sample content for deletion example.");
System.out.printf("Object %s added to bucket %s\n", keyName,
bucketName);


// Delete the version of the object that we just created.


System.out.println("Deleting versioned object " + keyName);
s3Client.deleteVersion(new DeleteVersionRequest(bucketName, keyName,
putResult.getVersionId()));
System.out.printf("Object %s, version %s deleted\n", keyName,
putResult.getVersionId());
}
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

.NET

The following examples show how to delete an object from both versioned and non-versioned
buckets. For more information about S3 Versioning, see Using versioning in S3 buckets (p. 519).

Example Deleting an object from a non-versioned bucket

The following C# example deletes an object from a non-versioned bucket. The example assumes that
the objects don't have version IDs, so you don't specify version IDs. You specify only the object key.

For information about how to create and test a working sample, see Running the Amazon S3 .NET
Code Examples (p. 1039).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class DeleteObjectNonVersionedBucketTest
{
private const string bucketName = "*** bucket name ***";
private const string keyName = "*** object key ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;

public static void Main()


{
client = new AmazonS3Client(bucketRegion);
DeleteObjectNonVersionedBucketAsync().Wait();
}
private static async Task DeleteObjectNonVersionedBucketAsync()
{
try
{
var deleteObjectRequest = new DeleteObjectRequest
{
BucketName = bucketName,
Key = keyName
};


Console.WriteLine("Deleting an object");
await client.DeleteObjectAsync(deleteObjectRequest);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
deleting an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
deleting an object", e.Message);
}
}
}
}

Example Deleting an object from a versioned bucket

The following C# example deletes an object from a versioned bucket. It deletes a specific version of
the object by specifying the object key name and version ID.

The code performs the following tasks:

1. Enables S3 Versioning on a bucket that you specify (if S3 Versioning is already enabled, this has
no effect).
2. Adds a sample object to the bucket. In response, Amazon S3 returns the version ID of the newly
added object. The example uses this version ID in the delete request.
3. Deletes the sample object by specifying both the object key name and a version ID.
Note
You can also get the version ID of an object by sending a ListVersions request.

var listResponse = client.ListVersions(new ListVersionsRequest { BucketName = bucketName, Prefix = keyName });

For information about how to create and test a working sample, see Running the Amazon S3 .NET
Code Examples (p. 1039).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class DeleteObjectVersion
{
private const string bucketName = "*** versioning-enabled bucket name ***";
private const string keyName = "*** Object Key Name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;

public static void Main()


{
client = new AmazonS3Client(bucketRegion);
CreateAndDeleteObjectVersionAsync().Wait();
}


private static async Task CreateAndDeleteObjectVersionAsync()


{
try
{
// Add a sample object.
string versionID = await PutAnObject(keyName);

// Delete the object by specifying an object key and a version ID.


DeleteObjectRequest request = new DeleteObjectRequest
{
BucketName = bucketName,
Key = keyName,
VersionId = versionID
};
Console.WriteLine("Deleting an object");
await client.DeleteObjectAsync(request);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
deleting an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
deleting an object", e.Message);
}
}

static async Task<string> PutAnObject(string objectKey)


{
PutObjectRequest request = new PutObjectRequest
{
BucketName = bucketName,
Key = objectKey,
ContentBody = "This is the content body!"
};
PutObjectResponse response = await client.PutObjectAsync(request);
return response.VersionId;
}
}
}

PHP

This example shows how to use classes from version 3 of the AWS SDK for PHP to delete an object
from a non-versioned bucket. For information about deleting an object from a versioned bucket, see
Using the REST API (p. 129).

This example assumes that you are already following the instructions for Using the AWS SDK for
PHP and Running PHP Examples (p. 1039) and have the AWS SDK for PHP properly installed. For
information about running the PHP examples in this guide, see Running PHP Examples (p. 1040).

The following PHP example deletes an object from a bucket. Because this example shows how to
delete objects from non-versioned buckets, it provides only the bucket name and object key (not a
version ID) in the delete request.

<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;


$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// 1. Delete the object from the bucket.


try
{
echo 'Attempting to delete ' . $keyname . '...' . PHP_EOL;

$result = $s3->deleteObject([
'Bucket' => $bucket,
'Key' => $keyname
]);

if ($result['DeleteMarker'])
{
echo $keyname . ' was deleted or does not exist.' . PHP_EOL;
} else {
exit('Error: ' . $keyname . ' was not deleted.' . PHP_EOL);
}
}
catch (S3Exception $e) {
exit('Error: ' . $e->getAwsErrorMessage() . PHP_EOL);
}

// 2. Check to see if the object was deleted.


try
{
echo 'Checking to see if ' . $keyname . ' still exists...' . PHP_EOL;

$result = $s3->getObject([
'Bucket' => $bucket,
'Key' => $keyname
]);

echo 'Error: ' . $keyname . ' still exists.';


}
catch (S3Exception $e) {
exit($e->getAwsErrorMessage());
}

JavaScript

This example shows how to use version 3 of the AWS SDK for JavaScript to delete an object. For
more information about the AWS SDK for JavaScript, see Using the AWS SDK for JavaScript (p. 1042).

import { DeleteObjectCommand } from "@aws-sdk/client-s3";


import { s3Client } from "./libs/s3Client.js"; // Helper function that creates the Amazon S3 service client module.

export const bucketParams = { Bucket: "BUCKET_NAME", Key: "KEY" };

export const run = async () => {


try {
const data = await s3Client.send(new DeleteObjectCommand(bucketParams));
console.log("Success. Object deleted.", data);
return data; // For unit tests.
} catch (err) {
console.log("Error", err);
}


};
run();

Using the AWS CLI

To delete one object per request, use the DELETE API. For more information, see DELETE Object. For
more information about using the CLI to delete an object, see delete-object.
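
For example, the following command (using the same placeholder bucket and key style as the earlier download
example) deletes a single object. For an object in a versioning-enabled bucket, you could also pass
--version-id to remove a specific object version.

aws s3api delete-object --bucket DOC-EXAMPLE-BUCKET1 --key dir/my_images.tar.bz2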

Using the REST API


You can use the AWS SDKs to delete an object. However, if your application requires it, you can send
REST requests directly. For more information, see DELETE Object in the Amazon Simple Storage Service
API Reference.
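
As a rough sketch (the object key, bucket, and authentication details are placeholders), a DELETE Object
request has the following shape. Appending a versionId query string parameter to the object key targets a
specific object version.

DELETE /my_image.gif HTTP/1.1
Host: DOC-EXAMPLE-BUCKET1.s3.amazonaws.com
Date: <date>
Authorization: <authorization string>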

Deleting multiple objects


Because all objects in your S3 bucket incur storage costs, you should delete objects that you no longer
need. For example, if you are collecting log files, it's a good idea to delete them when they're no longer
needed. You can set up a lifecycle rule to automatically delete objects such as log files. For more
information, see the section called “Setting lifecycle configuration” (p. 584).

For information about Amazon S3 features and pricing, see Amazon S3 pricing.

You can use the Amazon S3 console or the Multi-Object Delete API to delete multiple objects
simultaneously from an S3 bucket.

Using the S3 console


Follow these steps to use the Amazon S3 console to delete multiple objects from a bucket.

To delete objects

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Navigate to the Amazon S3 bucket or folder that contains the objects that you want to delete.
3. Select the check box to the left of the names of the objects that you want to delete.
4. Choose Actions and choose Delete from the list of options that appears.

Alternatively, choose Delete from the options in the upper right.


5. Enter delete if asked to confirm that you want to delete these objects.
6. Choose Delete objects in the bottom right and Amazon S3 deletes the specified objects.

Warning

• Deleting the specified objects cannot be undone.


• This action deletes all specified objects. When deleting folders, wait for the delete action to
finish before adding new objects to the folder. Otherwise, new objects might be deleted as
well.
• When you delete an object in a versioning-enabled bucket with versioning Off, Amazon S3 creates a
delete marker. To undo the delete action, delete this delete marker. To confirm this action,
type delete.
• When you delete an object version in a versioning-enabled bucket with versioning On, Amazon S3
permanently deletes the object version. To confirm this action, type permanently delete.


Using the AWS SDKs


Amazon S3 provides the Multi-Object Delete API, which you can use to delete multiple objects in
a single request. The API supports two modes for the response: verbose and quiet. By default, the
operation uses verbose mode. In verbose mode, the response includes the result of the deletion of
each key that is specified in your request. In quiet mode, the response includes only keys for which the
delete operation encountered an error. If all keys are successfully deleted when you're using quiet mode,
Amazon S3 returns an empty response. For more information, see Delete - Multi-Object Delete.

To learn more about object deletion, see Deleting Amazon S3 objects (p. 121).
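
For example, with the AWS SDK for Java, the request object exposes this setting directly. The following
fragment is a sketch that assumes the bucket name and the list of KeyVersion objects are built as in the
examples that follow; it requests quiet mode so that only failed deletions are reported.

DeleteObjectsRequest quietDeleteRequest = new DeleteObjectsRequest(bucketName)
        .withKeys(keys)      // List<KeyVersion> of the objects to delete
        .withQuiet(true);    // Return only the keys that could not be deleted
DeleteObjectsResult delObjRes = s3Client.deleteObjects(quietDeleteRequest);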

Java

The AWS SDK for Java provides the AmazonS3Client.deleteObjects() method for deleting
multiple objects. For each object that you want to delete, you specify the key name. If the bucket is
versioning-enabled, you have the following options:

• Specify only the object's key name. Amazon S3 adds a delete marker to the object.
• Specify both the object's key name and a version ID to be deleted. Amazon S3 deletes the
specified version of the object.

Example

The following example uses the Multi-Object Delete API to delete objects from a bucket that
is not version-enabled. The example uploads sample objects to the bucket and then uses the
AmazonS3Client.deleteObjects() method to delete the objects in a single request. In the
DeleteObjectsRequest, the example specifies only the object key names because the objects do
not have version IDs.

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 1038).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;
import com.amazonaws.services.s3.model.DeleteObjectsResult;

import java.io.IOException;
import java.util.ArrayList;

public class DeleteMultipleObjectsNonVersionedBucket {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.build();

// Upload three sample objects.


ArrayList<KeyVersion> keys = new ArrayList<KeyVersion>();
for (int i = 0; i < 3; i++) {
String keyName = "delete object example " + i;


s3Client.putObject(bucketName, keyName, "Object number " + i + " to be deleted.");
keys.add(new KeyVersion(keyName));
}
System.out.println(keys.size() + " objects successfully created.");

// Delete the sample objects.


DeleteObjectsRequest multiObjectDeleteRequest = new
DeleteObjectsRequest(bucketName)
.withKeys(keys)
.withQuiet(false);

// Verify that the objects were deleted successfully.


DeleteObjectsResult delObjRes =
s3Client.deleteObjects(multiObjectDeleteRequest);
int successfulDeletes = delObjRes.getDeletedObjects().size();
System.out.println(successfulDeletes + " objects successfully deleted.");
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Example

The following example uses the Multi-Object Delete API to delete objects from a version-enabled
bucket. It does the following:

1. Creates sample objects and then deletes them, specifying the key name and version ID for each
object to delete. The operation deletes only the specified object versions.
2. Creates sample objects and then deletes them by specifying only the key names. Because the
example doesn't specify version IDs, the operation adds a delete marker to each object, without
deleting any specific object versions. After the delete markers are added, these objects will not
appear in the AWS Management Console.
3. Removes the delete markers by specifying the object keys and version IDs of the delete markers.
The operation deletes the delete markers, which results in the objects reappearing in the AWS
Management Console.

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;
import com.amazonaws.services.s3.model.DeleteObjectsResult;
import com.amazonaws.services.s3.model.DeleteObjectsResult.DeletedObject;
import com.amazonaws.services.s3.model.PutObjectResult;

import java.io.IOException;
import java.util.ArrayList;


import java.util.List;

public class DeleteMultipleObjectsVersionEnabledBucket {


private static AmazonS3 S3_CLIENT;
private static String VERSIONED_BUCKET_NAME;

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
VERSIONED_BUCKET_NAME = "*** Bucket name ***";

try {
S3_CLIENT = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Check to make sure that the bucket is versioning-enabled.


String bucketVersionStatus =
S3_CLIENT.getBucketVersioningConfiguration(VERSIONED_BUCKET_NAME).getStatus();
if (!bucketVersionStatus.equals(BucketVersioningConfiguration.ENABLED)) {
System.out.printf("Bucket %s is not versioning-enabled.",
VERSIONED_BUCKET_NAME);
} else {
// Upload and delete sample objects, using specific object versions.
uploadAndDeleteObjectsWithVersions();

// Upload and delete sample objects without specifying version IDs.


// Amazon S3 creates a delete marker for each object rather than
// deleting specific versions.
DeleteObjectsResult unversionedDeleteResult =
uploadAndDeleteObjectsWithoutVersions();

// Remove the delete markers placed on objects in the non-versioned
// create/delete method.
multiObjectVersionedDeleteRemoveDeleteMarkers(unversionedDeleteResult);
}
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}

private static void uploadAndDeleteObjectsWithVersions() {


System.out.println("Uploading and deleting objects with versions specified.");

// Upload three sample objects.


ArrayList<KeyVersion> keys = new ArrayList<KeyVersion>();
for (int i = 0; i < 3; i++) {
String keyName = "delete object without version ID example " + i;
PutObjectResult putResult = S3_CLIENT.putObject(VERSIONED_BUCKET_NAME,
keyName,
"Object number " + i + " to be deleted.");
// Gather the new object keys with version IDs.
keys.add(new KeyVersion(keyName, putResult.getVersionId()));
}

// Delete the specified versions of the sample objects.


DeleteObjectsRequest multiObjectDeleteRequest = new
DeleteObjectsRequest(VERSIONED_BUCKET_NAME)
.withKeys(keys)


.withQuiet(false);

// Verify that the object versions were successfully deleted.


DeleteObjectsResult delObjRes =
S3_CLIENT.deleteObjects(multiObjectDeleteRequest);
int successfulDeletes = delObjRes.getDeletedObjects().size();
System.out.println(successfulDeletes + " objects successfully deleted");
}

private static DeleteObjectsResult uploadAndDeleteObjectsWithoutVersions() {


System.out.println("Uploading and deleting objects with no versions
specified.");

// Upload three sample objects.


ArrayList<KeyVersion> keys = new ArrayList<KeyVersion>();
for (int i = 0; i < 3; i++) {
String keyName = "delete object with version ID example " + i;
S3_CLIENT.putObject(VERSIONED_BUCKET_NAME, keyName, "Object number " + i +
" to be deleted.");
// Gather the new object keys without version IDs.
keys.add(new KeyVersion(keyName));
}

// Delete the sample objects without specifying versions.


DeleteObjectsRequest multiObjectDeleteRequest = new
DeleteObjectsRequest(VERSIONED_BUCKET_NAME).withKeys(keys)
.withQuiet(false);

// Verify that delete markers were successfully added to the objects.


DeleteObjectsResult delObjRes =
S3_CLIENT.deleteObjects(multiObjectDeleteRequest);
int successfulDeletes = delObjRes.getDeletedObjects().size();
System.out.println(successfulDeletes + " objects successfully marked for
deletion without versions.");
return delObjRes;
}

private static void


multiObjectVersionedDeleteRemoveDeleteMarkers(DeleteObjectsResult response) {
List<KeyVersion> keyList = new ArrayList<KeyVersion>();
for (DeletedObject deletedObject : response.getDeletedObjects()) {
// Note that the specified version ID is the version ID for the delete marker.
keyList.add(new KeyVersion(deletedObject.getKey(),
deletedObject.getDeleteMarkerVersionId()));
}
// Create a request to delete the delete markers.
DeleteObjectsRequest deleteRequest = new
DeleteObjectsRequest(VERSIONED_BUCKET_NAME).withKeys(keyList);

// Delete the delete markers, leaving the objects intact in the bucket.
DeleteObjectsResult delObjRes = S3_CLIENT.deleteObjects(deleteRequest);
int successfulDeletes = delObjRes.getDeletedObjects().size();
System.out.println(successfulDeletes + " delete markers successfully deleted");
}
}

.NET

The AWS SDK for .NET provides a convenient method for deleting multiple objects:
DeleteObjects. For each object that you want to delete, you specify the key name and the version
of the object. If the bucket is not versioning-enabled, you specify null for the version ID. If an
exception occurs, review the DeleteObjectsException response to determine which objects were
not deleted and why.


Example Deleting multiple objects from a non-versioning bucket

The following C# example uses the multi-object delete API to delete objects from a bucket that
is not version-enabled. The example uploads the sample objects to the bucket, and then uses the
DeleteObjects method to delete the objects in a single request. In the DeleteObjectsRequest,
the example specifies only the object key names because the version IDs are null.

For information about creating and testing a working sample, see Running the Amazon S3 .NET
Code Examples (p. 1039).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class DeleteMultipleObjectsNonVersionedBucketTest
{
private const string bucketName = "*** versioning-enabled bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
MultiObjectDeleteAsync().Wait();
}

static async Task MultiObjectDeleteAsync()


{
// Create sample objects (for subsequent deletion).
var keysAndVersions = await PutObjectsAsync(3);

// a. multi-object delete by specifying the key names and version IDs.


DeleteObjectsRequest multiObjectDeleteRequest = new DeleteObjectsRequest
{
BucketName = bucketName,
Objects = keysAndVersions // This includes the object keys and null version IDs.
};
// You can add a specific object key to the delete request by using the AddKey method:
// multiObjectDeleteRequest.AddKey("TickerReference.csv", null);
try
{
DeleteObjectsResponse response = await
s3Client.DeleteObjectsAsync(multiObjectDeleteRequest);
Console.WriteLine("Successfully deleted all the {0} items",
response.DeletedObjects.Count);
}
catch (DeleteObjectsException e)
{
PrintDeletionErrorStatus(e);
}
}

private static void PrintDeletionErrorStatus(DeleteObjectsException e)


{
// var errorResponse = e.ErrorResponse;
DeleteObjectsResponse errorResponse = e.Response;
Console.WriteLine("x {0}", errorResponse.DeletedObjects.Count);


Console.WriteLine("No. of objects successfully deleted = {0}",


errorResponse.DeletedObjects.Count);
Console.WriteLine("No. of objects failed to delete = {0}",
errorResponse.DeleteErrors.Count);

Console.WriteLine("Printing error data...");


foreach (DeleteError deleteError in errorResponse.DeleteErrors)
{
Console.WriteLine("Object Key: {0}\t{1}\t{2}", deleteError.Key,
deleteError.Code, deleteError.Message);
}
}

static async Task<List<KeyVersion>> PutObjectsAsync(int number)


{
List<KeyVersion> keys = new List<KeyVersion>();
for (int i = 0; i < number; i++)
{
string key = "ExampleObject-" + new System.Random().Next();
PutObjectRequest request = new PutObjectRequest
{
BucketName = bucketName,
Key = key,
ContentBody = "This is the content body!",
};

PutObjectResponse response = await s3Client.PutObjectAsync(request);


KeyVersion keyVersion = new KeyVersion
{
Key = key,
// For non-versioned bucket operations, we only need object key.
// VersionId = response.VersionId
};
keys.Add(keyVersion);
}
return keys;
}
}
}

Example Multi-object deletion for a version-enabled bucket


The following C# example uses the multi-object delete API to delete objects from a version-enabled
bucket. The example performs the following actions:

1. Creates sample objects and deletes them by specifying the key name and version ID for each
object. The operation deletes specific versions of the objects.
2. Creates sample objects and deletes them by specifying only the key names. Because the example
doesn't specify version IDs, the operation only adds delete markers. It doesn't delete any specific
versions of the objects. After deletion, these objects don't appear in the Amazon S3 console.
3. Deletes the delete markers by specifying the object keys and version IDs of the delete markers.
When the operation deletes the delete markers, the objects reappear in the console.

For information about creating and testing a working sample, see Running the Amazon S3 .NET
Code Examples (p. 1039).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;


using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class DeleteMultipleObjVersionedBucketTest
{
private const string bucketName = "*** versioning-enabled bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
DeleteMultipleObjectsFromVersionedBucketAsync().Wait();
}

private static async Task DeleteMultipleObjectsFromVersionedBucketAsync()


{

// Delete objects (specifying object version in the request).


await DeleteObjectVersionsAsync();

// Delete objects (without specifying object version in the request).


var deletedObjects = await DeleteObjectsAsync();

// Additional exercise - remove the delete markers S3 returned in the
// preceding response.
// This results in the objects reappearing in the bucket (you can
// verify the appearance/disappearance of objects in the console).
await RemoveDeleteMarkersAsync(deletedObjects);
}

private static async Task<List<DeletedObject>> DeleteObjectsAsync()


{
// Upload the sample objects.
var keysAndVersions2 = await PutObjectsAsync(3);

// Delete objects using only keys. Amazon S3 creates a delete marker and
// returns its version ID in the response.
List<DeletedObject> deletedObjects = await
NonVersionedDeleteAsync(keysAndVersions2);
return deletedObjects;
}

private static async Task DeleteObjectVersionsAsync()


{
// Upload the sample objects.
var keysAndVersions1 = await PutObjectsAsync(3);

// Delete the specific object versions.


await VersionedDeleteAsync(keysAndVersions1);
}

private static void PrintDeletionReport(DeleteObjectsException e)


{
var errorResponse = e.Response;
Console.WriteLine("No. of objects successfully deleted = {0}",
errorResponse.DeletedObjects.Count);
Console.WriteLine("No. of objects failed to delete = {0}",
errorResponse.DeleteErrors.Count);
Console.WriteLine("Printing error data...");
foreach (var deleteError in errorResponse.DeleteErrors)
{
Console.WriteLine("Object Key: {0}\t{1}\t{2}", deleteError.Key,
deleteError.Code, deleteError.Message);


}
}

static async Task VersionedDeleteAsync(List<KeyVersion> keys)


{
// a. Perform a multi-object delete by specifying the key names and version IDs.
var multiObjectDeleteRequest = new DeleteObjectsRequest
{
BucketName = bucketName,
Objects = keys // This includes the object keys and specific version IDs.
};
try
{
Console.WriteLine("Executing VersionedDelete...");
DeleteObjectsResponse response = await
s3Client.DeleteObjectsAsync(multiObjectDeleteRequest);
Console.WriteLine("Successfully deleted all the {0} items",
response.DeletedObjects.Count);
}
catch (DeleteObjectsException e)
{
PrintDeletionReport(e);
}
}

static async Task<List<DeletedObject>> NonVersionedDeleteAsync(List<KeyVersion>


keys)
{
// Create a request that includes only the object key names.
DeleteObjectsRequest multiObjectDeleteRequest = new DeleteObjectsRequest();
multiObjectDeleteRequest.BucketName = bucketName;

foreach (var key in keys)


{
multiObjectDeleteRequest.AddKey(key.Key);
}
// Execute DeleteObjects - Amazon S3 adds a delete marker for each object
// deletion. The objects disappear from your bucket.
// You can verify that using the Amazon S3 console.
DeleteObjectsResponse response;
try
{
Console.WriteLine("Executing NonVersionedDelete...");
response = await s3Client.DeleteObjectsAsync(multiObjectDeleteRequest);
Console.WriteLine("Successfully deleted all the {0} items",
response.DeletedObjects.Count);
}
catch (DeleteObjectsException e)
{
PrintDeletionReport(e);
throw; // Some deletes failed. Investigate before continuing.
}
// This response contains the DeletedObjects list which we use to delete
// the delete markers.
return response.DeletedObjects;
}

private static async Task RemoveDeleteMarkersAsync(List<DeletedObject>


deletedObjects)
{
var keyVersionList = new List<KeyVersion>();

foreach (var deletedObject in deletedObjects)


{


KeyVersion keyVersion = new KeyVersion


{
Key = deletedObject.Key,
VersionId = deletedObject.DeleteMarkerVersionId
};
keyVersionList.Add(keyVersion);
}
// Create another request to delete the delete markers.
var multiObjectDeleteRequest = new DeleteObjectsRequest
{
BucketName = bucketName,
Objects = keyVersionList
};

// Now, delete the delete marker to bring your objects back to the bucket.
try
{
Console.WriteLine("Removing the delete markers .....");
var deleteObjectResponse = await
s3Client.DeleteObjectsAsync(multiObjectDeleteRequest);
Console.WriteLine("Successfully deleted all the {0} delete markers",
deleteObjectResponse.DeletedObjects.Count);
}
catch (DeleteObjectsException e)
{
PrintDeletionReport(e);
}
}

static async Task<List<KeyVersion>> PutObjectsAsync(int number)


{
var keys = new List<KeyVersion>();

for (var i = 0; i < number; i++)


{
string key = "ObjectToDelete-" + new System.Random().Next();
PutObjectRequest request = new PutObjectRequest
{
BucketName = bucketName,
Key = key,
ContentBody = "This is the content body!",

};

var response = await s3Client.PutObjectAsync(request);


KeyVersion keyVersion = new KeyVersion
{
Key = key,
VersionId = response.VersionId
};

keys.Add(keyVersion);
}
return keys;
}
}
}

PHP

These examples show how to use classes from version 3 of the AWS SDK for PHP to delete multiple
objects from versioned and non-versioned Amazon S3 buckets. For more information about
versioning, see Using versioning in S3 buckets (p. 519).


The examples assume that you are already following the instructions for Using the AWS SDK for PHP
and Running PHP Examples (p. 1039) and have the AWS SDK for PHP properly installed.

Example Deleting multiple objects from a non-versioned bucket

The following PHP example uses the deleteObjects() method to delete multiple objects from a
bucket that is not version-enabled.

For information about running the PHP examples in this guide, see Running PHP
Examples (p. 1040).

<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// 1. Create a few objects.


for ($i = 1; $i <= 3; $i++) {
$s3->putObject([
'Bucket' => $bucket,
'Key' => "key{$i}",
'Body' => "content {$i}",
]);
}

// 2. List the objects and get the keys.


$keys = $s3->listObjects([
'Bucket' => $bucket
]);

// 3. Delete the objects.


foreach ($keys['Contents'] as $key)
{
$s3->deleteObjects([
'Bucket' => $bucket,
'Delete' => [
'Objects' => [
[
'Key' => $key['Key']
]
]
]
]);
}

Example Deleting multiple objects from a version-enabled bucket

The following PHP example uses the deleteObjects() method to delete multiple objects from a
version-enabled bucket.

For information about running the PHP examples in this guide, see Running PHP
Examples (p. 1040).

<?php


require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// 1. Enable object versioning for the bucket.


$s3->putBucketVersioning([
'Bucket' => $bucket,
'VersioningConfiguration' => [
'Status' => 'Enabled'
]
]);

// 2. Create a few versions of an object.


for ($i = 1; $i <= 3; $i++) {
$s3->putObject([
'Bucket' => $bucket,
'Key' => $keyname,
'Body' => "content {$i}",
]);
}

// 3. List the object versions and get the keys and version IDs.
$versions = $s3->listObjectVersions(['Bucket' => $bucket]);

// 4. Delete the object versions.


$deletedResults = 'The following objects were deleted successfully:' . PHP_EOL;
$deleted = false;
$errorResults = 'The following objects could not be deleted:' . PHP_EOL;
$errors = false;

foreach ($versions['Versions'] as $version)


{
$result = $s3->deleteObjects([
'Bucket' => $bucket,
'Delete' => [
'Objects' => [
[
'Key' => $version['Key'],
'VersionId' => $version['VersionId']
]
]
]
]);

if (isset($result['Deleted']))
{
$deleted = true;

$deletedResults .= "Key: {$result['Deleted'][0]['Key']}, " .


"VersionId: {$result['Deleted'][0]['VersionId']}" . PHP_EOL;
}

if (isset($result['Errors']))
{
$errors = true;

$errorResults .= "Key: {$result['Errors'][0]['Key']}, " .


"VersionId: {$result['Errors'][0]['VersionId']}, " .


"Message: {$result['Errors'][0]['Message']}" . PHP_EOL;
}
}

if ($deleted)
{
echo $deletedResults;
}

if ($errors)
{
echo $errorResults;
}

// 5. Suspend object versioning for the bucket.


$s3->putBucketVersioning([
'Bucket' => $bucket,
'VersioningConfiguration' => [
'Status' => 'Suspended'
]
]);

Using the REST API


You can use the AWS SDKs to delete multiple objects using the Multi-Object Delete API. However, if your
application requires it, you can send REST requests directly.

For more information, see Delete Multiple Objects in the Amazon Simple Storage Service API Reference.

Organizing, listing, and working with your objects


In Amazon S3, you can use prefixes to organize your storage. A prefix is a logical grouping of the objects
in a bucket. The prefix value is similar to a directory name that enables you to store similar data under
the same directory in a bucket. When you programmatically upload objects, you can use prefixes to
organize your data.

In the Amazon S3 console, prefixes are called folders. You can view all your objects and folders in the
S3 console by navigating to a bucket. You can also view information about each object, including object
properties.

For more information about listing and organizing your data in Amazon S3, see the following topics.

Topics
• Organizing objects using prefixes (p. 141)
• Listing object keys programmatically (p. 143)
• Organizing objects in the Amazon S3 console using folders (p. 147)
• Viewing an object overview in the Amazon S3 console (p. 149)
• Viewing object properties in the Amazon S3 console (p. 150)

Organizing objects using prefixes


You can use prefixes to organize the data that you store in Amazon S3 buckets. A prefix value is
similar to a directory name that enables you to group similar objects together in a bucket. When you
programmatically upload objects, you can use prefixes to organize your data.


The prefix limits the results to only those keys that begin with the specified prefix. The delimiter causes a
list operation to roll up all the keys that share a common prefix into a single summary list result.

The purpose of the prefix and delimiter parameters is to help you organize and then browse your keys
hierarchically. To do this, first pick a delimiter for your bucket, such as slash (/), that doesn't occur in any
of your anticipated key names. Next, construct your key names by concatenating all containing levels of
the hierarchy, separating each level with the delimiter.

For example, if you were storing information about cities, you might naturally organize them by
continent, then by country, then by province or state. Because these names don't usually contain
punctuation, you might use slash (/) as the delimiter. The following examples use a slash (/) delimiter.

• Europe/France/Nouvelle-Aquitaine/Bordeaux
• North America/Canada/Quebec/Montreal
• North America/USA/Washington/Bellevue
• North America/USA/Washington/Seattle

If you stored data for every city in the world in this manner, it would become awkward to manage
a flat key namespace. By using Prefix and Delimiter with the list operation, you can use the
hierarchy you've created to list your data. For example, to list all the states in USA, set Delimiter='/'
and Prefix='North America/USA/'. To list all the provinces in Canada for which you have data, set
Delimiter='/' and Prefix='North America/Canada/'.
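
The following AWS SDK for Java sketch (the bucket name is a placeholder, and it assumes the example city
keys shown above) lists the state-level prefixes under North America/USA/ by combining a prefix with the
slash delimiter:

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;

public class ListStatesSketch {
    public static void main(String[] args) {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.DEFAULT_REGION)
                .build();

        ListObjectsV2Request request = new ListObjectsV2Request()
                .withBucketName("*** Bucket name ***")
                .withPrefix("North America/USA/")
                .withDelimiter("/");
        ListObjectsV2Result result = s3Client.listObjectsV2(request);

        // Each common prefix corresponds to one state-level grouping,
        // for example "North America/USA/Washington/".
        for (String commonPrefix : result.getCommonPrefixes()) {
            System.out.println(commonPrefix);
        }
    }
}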

Listing objects using prefixes and delimiters


A list request with a delimiter lets you browse your hierarchy at just one level, skipping over and
summarizing the (possibly millions of) keys nested at deeper levels. For example, assume that you have a
bucket (ExampleBucket) with the following keys.

sample.jpg

photos/2006/January/sample.jpg

photos/2006/February/sample2.jpg

photos/2006/February/sample3.jpg

photos/2006/February/sample4.jpg

The sample bucket has only the sample.jpg object at the root level. To list only the root level objects
in the bucket, you send a GET request on the bucket with "/" delimiter character. In response, Amazon S3
returns the sample.jpg object key because it does not contain the "/" delimiter character. All other keys
contain the delimiter character. Amazon S3 groups these keys and returns a single CommonPrefixes
element with prefix value photos/ that is a substring from the beginning of these keys to the first
occurrence of the specified delimiter.

Example

<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Name>ExampleBucket</Name>
<Prefix></Prefix>
<Marker></Marker>
<MaxKeys>1000</MaxKeys>
<Delimiter>/</Delimiter>
<IsTruncated>false</IsTruncated>
<Contents>
<Key>sample.jpg</Key>


<LastModified>2011-07-24T19:39:30.000Z</LastModified>
<ETag>&quot;d1a7fb5eab1c16cb4f7cf341cf188c3d&quot;</ETag>
<Size>6</Size>
<Owner>
<ID>75cc57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID>
<DisplayName>displayname</DisplayName>
</Owner>
<StorageClass>STANDARD</StorageClass>
</Contents>
<CommonPrefixes>
<Prefix>photos/</Prefix>
</CommonPrefixes>
</ListBucketResult>

For more information about listing object keys programmatically, see Listing object keys
programmatically (p. 143).

Listing object keys programmatically


In Amazon S3, keys can be listed by prefix. You can choose a common prefix for the names of related
keys and mark these keys with a special character that delimits hierarchy. You can then use the list
operation to select and browse keys hierarchically. This is similar to how files are stored in directories
within a file system.

Amazon S3 exposes a list operation that lets you enumerate the keys contained in a bucket. Keys are
selected for listing by bucket and prefix. For example, consider a bucket named "dictionary" that
contains a key for every English word. You might make a call to list all the keys in that bucket that start
with the letter "q". List results are always returned in UTF-8 binary order.

Both the SOAP and REST list operations return an XML document that contains the names of matching
keys and information about the object identified by each key.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.

Groups of keys that share a prefix terminated by a special delimiter can be rolled up by that common
prefix for the purposes of listing. This enables applications to organize and browse their keys
hierarchically, much like how you would organize your files into directories in a file system.

For example, to extend the dictionary bucket to contain more than just English words, you might form
keys by prefixing each word with its language and a delimiter, such as "French/logical". Using this
naming scheme and the hierarchical listing feature, you could retrieve a list of only French words. You
could also browse the top-level list of available languages without having to iterate through all the
lexicographically intervening keys. For more information about this aspect of listing, see Organizing
objects using prefixes (p. 141).
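
As a rough sketch of this pattern with the SDK for Python (Boto3), assuming a bucket named dictionary that uses the naming scheme described above, you could browse the top-level languages and then list only the French words:

import boto3

s3 = boto3.client('s3')

# With only a delimiter, keys such as "French/logical" roll up into one
# CommonPrefixes entry per language ("English/", "French/", and so on).
top_level = s3.list_objects_v2(Bucket='dictionary', Delimiter='/')
for language in top_level.get('CommonPrefixes', []):
    print(language['Prefix'])

# With a prefix, only the keys for that language are returned.
french_words = s3.list_objects_v2(Bucket='dictionary', Prefix='French/')
for word in french_words.get('Contents', []):
    print(word['Key'])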

REST API

If your application requires it, you can send REST requests directly. You can send a GET request to return
some or all of the objects in a bucket or you can use selection criteria to return a subset of the objects
in a bucket. For more information, see GET Bucket (List Objects) Version 2 in the Amazon Simple Storage
Service API Reference.

List implementation efficiency

List performance is not substantially affected by the total number of keys in your bucket. It's also not
affected by the presence or absence of the prefix, marker, maxkeys, or delimiter arguments.


Iterating through multipage results

As buckets can contain a virtually unlimited number of keys, the complete results of a list query can
be extremely large. To manage large result sets, the Amazon S3 API supports pagination to split them
into multiple responses. Each list keys response returns a page of up to 1,000 keys, along with an indicator
that shows whether the response is truncated. You send a series of list keys requests until you have received all
the keys. The AWS SDK wrapper libraries provide the same pagination.
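
For example, the following minimal sketch with the SDK for Python (Boto3) lets the paginator follow the continuation tokens for you (the bucket name is a placeholder):

import boto3

s3 = boto3.client('s3')

# The paginator sends ListObjectsV2 requests and follows the continuation
# token in each truncated response until all keys have been returned.
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='DOC-EXAMPLE-BUCKET'):
    for obj in page.get('Contents', []):
        print(obj['Key'], obj['Size'])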

Java

The following example lists the object keys in a bucket. The example uses pagination to retrieve
a set of object keys. If there are more keys to return after the first page, Amazon S3 includes a
continuation token in the response. The example uses the continuation token in the subsequent
request to fetch the next set of object keys.

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 1038).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3ObjectSummary;

import java.io.IOException;

public class ListKeys {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

System.out.println("Listing objects");

// maxKeys is set to 2 to demonstrate the use of


// ListObjectsV2Result.getNextContinuationToken()
ListObjectsV2Request req = new
ListObjectsV2Request().withBucketName(bucketName).withMaxKeys(2);
ListObjectsV2Result result;

do {
result = s3Client.listObjectsV2(req);

for (S3ObjectSummary objectSummary : result.getObjectSummaries()) {


System.out.printf(" - %s (size: %d)\n", objectSummary.getKey(),
objectSummary.getSize());
}
// If there are more than maxKeys keys in the bucket, get a continuation token
// and list the next objects.
String token = result.getNextContinuationToken();
System.out.println("Next Continuation Token: " + token);


req.setContinuationToken(token);
} while (result.isTruncated());
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

.NET

The following C# example lists the object keys for a bucket. In the example, pagination is used to
retrieve a set of object keys. If there are more keys to return, Amazon S3 includes a continuation
token in the response. The code uses the continuation token in the subsequent request to fetch the
next set of object keys.

For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 1039).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class ListObjectsTest
{
private const string bucketName = "*** bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;

private static IAmazonS3 client;

public static void Main()


{
client = new AmazonS3Client(bucketRegion);
ListingObjectsAsync().Wait();
}

static async Task ListingObjectsAsync()


{
try
{
ListObjectsV2Request request = new ListObjectsV2Request
{
BucketName = bucketName,
MaxKeys = 10
};
ListObjectsV2Response response;
do
{
response = await client.ListObjectsV2Async(request);

// Process the response.


foreach (S3Object entry in response.S3Objects)
{


Console.WriteLine("key = {0} size = {1}",


entry.Key, entry.Size);
}
Console.WriteLine("Next Continuation Token: {0}",
response.NextContinuationToken);
request.ContinuationToken = response.NextContinuationToken;
} while (response.IsTruncated);
}
catch (AmazonS3Exception amazonS3Exception)
{
Console.WriteLine("S3 error occurred. Exception: " +
amazonS3Exception.ToString());
Console.ReadKey();
}
catch (Exception e)
{
Console.WriteLine("Exception: " + e.ToString());
Console.ReadKey();
}
}
}
}

PHP

This example guides you through using classes from version 3 of the AWS SDK for PHP to list the
object keys contained in an Amazon S3 bucket.

This example assumes that you are already following the instructions for Using the AWS SDK for
PHP and Running PHP Examples (p. 1039) and have the AWS SDK for PHP properly installed.

To list the object keys contained in a bucket using the AWS SDK for PHP, you first must list the
objects contained in the bucket and then extract the key from each of the listed objects. When
listing objects in a bucket you have the option of using the low-level Aws\S3\S3Client::listObjects()
method or the high-level Aws\ResultPaginator class.

The low-level listObjects() method maps to the underlying Amazon S3 REST API. Each
listObjects() request returns a page of up to 1,000 objects. If you have more than 1,000 objects
in the bucket, your response will be truncated and you must send another listObjects() request
to retrieve the next set of 1,000 objects.

You can use the high-level ListObjects paginator to make it easier to list the objects contained
in a bucket. To use the ListObjects paginator to create an object list, run the Amazon S3
client getPaginator() method (inherited from the Aws/AwsClientInterface class) with the
ListObjects command name as the first argument and an array of command parameters, such as the
bucket name, as the second argument.

When used as a ListObjects paginator, the getPaginator() method returns all the objects
contained in the specified bucket. There is no 1,000 object limit, so you don't need to worry whether
the response is truncated.

The following tasks guide you through using the PHP Amazon S3 client methods to list the objects
contained in a bucket from which you can list the object keys.

Example Listing object keys

The following PHP example demonstrates how to list the keys from a specified bucket. It shows
how to use the high-level getPaginator() method to list the objects in a bucket and then extract


the key from each of the objects in the list. It also shows how to use the low-level listObjects()
method to list the objects in a bucket and then extract the key from each of the objects in the
list returned. For information about running the PHP examples in this guide, see Running PHP
Examples (p. 1040).

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';

// Instantiate the client.


$s3 = new S3Client([
'version' => 'latest',
'region' => 'us-east-1'
]);

// Use the high-level iterators (returns ALL of your objects).


try {
$results = $s3->getPaginator('ListObjects', [
'Bucket' => $bucket
]);

foreach ($results as $result) {


foreach ($result['Contents'] as $object) {
echo $object['Key'] . PHP_EOL;
}
}
} catch (S3Exception $e) {
echo $e->getMessage() . PHP_EOL;
}

// Use the plain API (returns ONLY up to 1000 of your objects).


try {
$objects = $s3->listObjects([
'Bucket' => $bucket
]);
foreach ($objects['Contents'] as $object) {
echo $object['Key'] . PHP_EOL;
}
} catch (S3Exception $e) {
echo $e->getMessage() . PHP_EOL;
}

Organizing objects in the Amazon S3 console using


folders
In Amazon S3, buckets and objects are the primary resources, and objects are stored in buckets. Amazon
S3 has a flat structure instead of a hierarchy like you would see in a file system. However, for the sake
of organizational simplicity, the Amazon S3 console supports the folder concept as a means of grouping
objects. It does this by using a shared name prefix for objects (that is, objects have names that begin with
a common string). Object names are also referred to as key names.

For example, you can create a folder on the console named photos and store an object named
myphoto.jpg in it. The object is then stored with the key name photos/myphoto.jpg, where
photos/ is the prefix.

Here are two more examples:


• If you have three objects in your bucket—logs/date1.txt, logs/date2.txt, and logs/date3.txt—the console will show a folder named logs. If you open the folder in the console, you will see three objects: date1.txt, date2.txt, and date3.txt.
• If you have an object named photos/2017/example.jpg, the console will show you a folder named photos containing the folder 2017. The folder 2017 will contain the object example.jpg.

You can have folders within folders, but not buckets within buckets. You can upload and copy objects
directly into a folder. Folders can be created, deleted, and made public, but they cannot be renamed.
Objects can be copied from one folder to another.
Important
The Amazon S3 console treats all objects that have a forward slash ("/") character as the last
(trailing) character in the key name as a folder, for example examplekeyname/. You can't
upload an object that has a key name with a trailing "/" character using the Amazon S3 console.
However, you can upload objects that are named with a trailing "/" with the Amazon S3 API by
using the AWS CLI, AWS SDKs, or REST API.
An object that is named with a trailing "/" appears as a folder in the Amazon S3 console. The
Amazon S3 console does not display the content and metadata for such an object. When you
use the console to copy an object named with a trailing "/", a new folder is created in the
destination location, but the object's data and metadata are not copied.
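
For example, the following sketch uses the SDK for Python (Boto3) to create a zero-byte object whose key name ends with a trailing "/" (the bucket name is a placeholder). The console then displays that object as a folder named photos.

import boto3

s3 = boto3.client('s3')

# Upload a zero-byte object whose key ends with "/". The Amazon S3 console
# treats this object as a folder named "photos".
s3.put_object(Bucket='DOC-EXAMPLE-BUCKET', Key='photos/')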

Topics
• Creating a folder (p. 148)
• Making folders public (p. 148)
• Deleting folders (p. 149)

Creating a folder
This section describes how to use the Amazon S3 console to create a folder.
Important
If your bucket policy prevents uploading objects to this bucket without encryption, tags,
metadata, or access control list (ACL) grantees, you will not be able to create a folder using
this configuration. Instead, upload an empty folder and specify these settings in the upload
configuration.

To create a folder

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to create a folder in.
3. Choose Create folder.
4. Enter a name for the folder (for example, favorite-pics). Then choose Create folder.

Making folders public


We recommend blocking all public access to your Amazon S3 folders and buckets unless you specifically
require a public folder or bucket. When you make a folder public, anyone on the internet can view all the
objects that are grouped in that folder.

In the Amazon S3 console, you can make a folder public. You can also make a folder public by creating a
bucket policy that limits access by prefix. For more information, see Identity and access management in
Amazon S3 (p. 274).


Warning
After you make a folder public in the Amazon S3 console, you can't make it private again.
Instead, you must set permissions on each individual object in the public folder so that the
objects have no public access. For more information, see Configuring ACLs (p. 467).

Deleting folders
This section explains how to use the Amazon S3 console to delete folders from an S3 bucket.

For information about Amazon S3 features and pricing, see Amazon S3.

To delete folders from an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that you want to delete folders from.
3. In the Objects list, select the check box next to the folders and objects that you want to delete.
4. Choose Delete.
5. On the Delete objects page, verify that the names of the folders you selected for deletion are listed.
6. In the Delete objects box, enter delete, and choose Delete objects.

Warning
This action deletes all specified objects. When deleting folders, wait for the delete action to
finish before adding new objects to the folder. Otherwise, new objects might be deleted as well.

Viewing an object overview in the Amazon S3


console
You can use the Amazon S3 console to view an overview of an object. The object overview in the console
provides all the essential information for an object in one place.

To open the overview pane for an object

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that contains the object.
3. In the Objects list, choose the name of the object for which you want an overview.

The object overview opens.


4. To download the object, choose Object actions, and then choose Download. To copy the path of the
object to the clipboard, under Object URL, choose the URL.
5. If versioning is enabled on the bucket, choose Versions to list the versions of the object.

• To download an object version, select the check box next to the version ID, choose Actions, and
then choose Download.
• To delete an object version, select the check box next to the version ID, and choose Delete.

Important
You can undelete an object only if it was deleted as the latest (current) version. You can't
undelete a previous version of an object that was deleted.


Viewing object properties in the Amazon S3 console


You can use the Amazon S3 console to view the properties of an object, including storage class,
encryption settings, tags, and metadata.

To view the properties of an object

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of the bucket that contains the object.
3. In the Objects list, choose the name of the object you want to view properties for.

The Object overview for your object opens. You can scroll down to view the object properties.
4. On the Object overview page, you can configure the following properties for the object.
Note
If you change the Storage Class, Encryption, or Metadata properties, a new object is
created to replace the old one. If S3 Versioning is enabled, a new version of the object
is created, and the existing object becomes an older version. The role that changes the
property also becomes the owner of the new object (or object version).

a. Storage class – Each object in Amazon S3 has a storage class associated with it. The storage
class that you choose to use depends on how frequently you access the object. The default
storage class for S3 objects is STANDARD. You choose which storage class to use when you
upload an object. For more information about storage classes, see Using Amazon S3 storage
classes (p. 567).

To change the storage class after you upload an object, choose Storage class. Choose the
storage class that you want, and then choose Save. You can also change the storage class
programmatically, as shown in the example after this list.
b. Server-side encryption settings – You can use server-side encryption to encrypt your S3
objects. For more information, see Specifying server-side encryption with AWS KMS (SSE-
KMS) (p. 223) or Specifying Amazon S3 encryption (p. 238).
c. Metadata – Each object in Amazon S3 has a set of name-value pairs that represents its
metadata. For information about adding metadata to an S3 object, see Editing object metadata
in the Amazon S3 console (p. 64).
d. Tags – You categorize storage by adding tags to an S3 object. For more information, see
Categorizing your storage using tags (p. 685).
e. Object lock legal hold and retention – You can prevent an object from being deleted. For
more information, see Using S3 Object Lock (p. 559).
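
The console applies a Storage class, Encryption, or Metadata change by replacing the object, as described in the note above. As a rough programmatic equivalent, the following sketch uses the SDK for Python (Boto3) to change an object's storage class with an in-place copy (the bucket name and key are placeholders):

import boto3

s3 = boto3.client('s3')

bucket = 'DOC-EXAMPLE-BUCKET'      # placeholder bucket name
key = 'photos/myphoto.jpg'         # placeholder object key

# Copy the object over itself with a new storage class. This replaces the
# object, or creates a new version if S3 Versioning is enabled on the bucket.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={'Bucket': bucket, 'Key': key},
    StorageClass='STANDARD_IA'
)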

Using presigned URLs


All objects and buckets are private by default. However, you can use a presigned URL to optionally share
objects or enable your customers/users to upload objects to buckets without AWS security credentials or
permissions.

Limiting presigned URL capabilities


You can use presigned URLs to generate a URL that can be used to access your S3 buckets. When you
create a presigned URL, you associate it with a specific action. You can share the URL, and anyone with
access to it can perform the action embedded in the URL as if they were the original signing user. The
URL will expire and no longer work when it reaches its expiration time. The capabilities of the URL are
limited by the permissions of the user who created the presigned URL.


In essence, presigned URLs are a bearer token that grants access to customers who possess them. As
such, we recommend that you protect them appropriately.

If you want to restrict the use of presigned URLs and all S3 access to particular network paths, you
can write AWS Identity and Access Management (IAM) policies that require a particular network path.
These policies can be set on the IAM principal that makes the call, the Amazon S3 bucket, or both. A
network-path restriction on the principal requires the user of those credentials to make requests from
the specified network. A restriction on the bucket limits access to that bucket only to requests originating
from the specified network. Realize that these restrictions also apply outside of the presigned URL
scenario.

The IAM global condition that you use depends on the type of endpoint. If you are using the public
endpoint for Amazon S3, use aws:SourceIp. If you are using a VPC endpoint to Amazon S3, use
aws:SourceVpc or aws:SourceVpce.

The following IAM policy statement requires the principal to access AWS from only the specified network
range. With this policy statement in place, all access is required to originate from that range. This
includes the case of someone using a presigned URL for S3.

{
"Sid": "NetworkRestrictionForIAMPrincipal",
"Effect": "Deny",
"Action": "",
"Resource": "",
"Condition": {
"NotIpAddressIfExists": { "aws:SourceIp": "IP-address" },
"BoolIfExists": { "aws:ViaAWSService": "false" }
}
}

For more information about using a presigned URL to share or upload objects, see the topics below.

Topics
• Sharing an object with a presigned URL (p. 151)
• Uploading objects using presigned URLs (p. 155)

Sharing an object with a presigned URL


All objects by default are private. Only the object owner has permission to access these objects. However,
the object owner can optionally share objects with others by creating a presigned URL, using their own
security credentials, to grant time-limited permission to download the objects.

When you create a presigned URL for your object, you must provide your security credentials and specify a
bucket name, an object key, the HTTP method (GET to download the object), and an expiration date
and time. The presigned URL is valid only for the specified duration.

Anyone who receives the presigned URL can then access the object. For example, if you have a video
in your bucket and both the bucket and the object are private, you can share the video with others by
generating a presigned URL.
Note

• Anyone with valid security credentials can create a presigned URL. However, in order to
successfully access an object, the presigned URL must be created by someone who has
permission to perform the operation that the presigned URL is based upon.
• The credentials that you can use to create a presigned URL include:
• IAM instance profile: Valid up to 6 hours


• AWS Security Token Service : Valid up to 36 hours when signed with permanent credentials,
such as the credentials of the AWS account root user or an IAM user
• IAM user: Valid up to 7 days when using AWS Signature Version 4

To create a presigned URL that's valid for up to 7 days, first designate IAM user credentials
(the access key and secret access key) to the SDK that you're using. Then, generate a
presigned URL using AWS Signature Version 4.
• If you created a presigned URL using a temporary token, then the URL expires when the token
expires, even if the URL was created with a later expiration time.
• Since presigned URLs grant access to your Amazon S3 buckets to whoever has the URL, we
recommend that you protect them appropriately. For more details about protecting presigned
URLs, see Limiting presigned URL capabilities (p. 150).

Generating a presigned URL


You can generate a presigned URL programmatically using the REST API, the AWS Command Line
Interface, and the AWS SDKs for Java, .NET, Ruby, PHP, Node.js, Python, and Go.

Using AWS Explorer for Visual Studio

If you are using Visual Studio, you can generate a presigned URL for an object without writing any
code by using AWS Explorer for Visual Studio. Anyone with this URL can download the object. For more
information, go to Using Amazon S3 from AWS Explorer.

For instructions about how to install the AWS Explorer, see Developing with Amazon S3 using the AWS
SDKs, and explorers (p. 1030).

Using the AWS SDKs

The following examples generate a presigned URL that you can give to others so that they can retrieve
an object. For more information, see Sharing an object with a presigned URL (p. 151).

.NET

Example

The following example generates a presigned URL that you can give to others so that they can
retrieve an object. For more information, see Sharing an object with a presigned URL (p. 151).

For instructions about how to create and test a working sample, see Running the Amazon S3 .NET
Code Examples (p. 1039).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;

namespace Amazon.DocSamples.S3
{
class GenPresignedURLTest
{
private const string bucketName = "*** bucket name ***";
private const string objectKey = "*** object key ***";
// Specify how long the presigned URL lasts, in hours
private const double timeoutDuration = 12;
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;


public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
string urlString = GeneratePreSignedURL(timeoutDuration);
}
static string GeneratePreSignedURL(double duration)
{
string urlString = "";
try
{
GetPreSignedUrlRequest request1 = new GetPreSignedUrlRequest
{
BucketName = bucketName,
Key = objectKey,
Expires = DateTime.UtcNow.AddHours(duration)
};
urlString = s3Client.GetPreSignedURL(request1);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when
writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
return urlString;
}
}
}

Go

You can use SDK for Go to upload an object. You can send a PUT request to upload data in a single
operation. For more information, see Generate a Pre-Signed URL for an Amazon S3 PUT Operation
with a Specific Payload in the AWS SDK for Go Developer Guide.
Java

Example

The following example generates a presigned URL that you can give to others so that they can
retrieve an object from an S3 bucket. For more information, see Sharing an object with a presigned
URL (p. 151).

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 1038).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.HttpMethod;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

import java.io.IOException;
import java.net.URL;


import java.time.Instant;

public class GeneratePresignedURL {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String objectKey = "*** Object key ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();

// Set the presigned URL to expire after one hour.


java.util.Date expiration = new java.util.Date();
long expTimeMillis = Instant.now().toEpochMilli();
expTimeMillis += 1000 * 60 * 60;
expiration.setTime(expTimeMillis);

// Generate the presigned URL.


System.out.println("Generating pre-signed URL.");
GeneratePresignedUrlRequest generatePresignedUrlRequest =
new GeneratePresignedUrlRequest(bucketName, objectKey)
.withMethod(HttpMethod.GET)
.withExpiration(expiration);
URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);

System.out.println("Pre-Signed URL: " + url.toString());


} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

PHP

For more information about using AWS SDK for PHP Version 3 to generate a presigned URL, see
Amazon S3 pre-signed URL with AWS SDK for PHP Version 3 in the AWS SDK for PHP Developer
Guide.
Python

Generate a presigned URL to share an object by using the SDK for Python (Boto3). For example, use
a Boto3 client and the generate_presigned_url function to generate a presigned URL that GETs
an object.

import boto3
url = boto3.client('s3').generate_presigned_url(
ClientMethod='get_object',
Params={'Bucket': 'BUCKET_NAME', 'Key': 'OBJECT_KEY'},
ExpiresIn=3600)

For a complete example that shows how to generate presigned URLs and how to use the Requests
package to upload and download objects, see the Python presigned URL example on GitHub. For more


information about using the SDK for Python (Boto3) to generate a presigned URL, see Python in the
AWS SDK for Python (Boto3) API Reference.

Uploading objects using presigned URLs


A presigned URL gives you access to the object identified in the URL, provided that the creator of the
presigned URL has permissions to access that object. That is, if you receive a presigned URL to upload an
object, you can upload the object only if the creator of the presigned URL has the necessary permissions
to upload that object.

All objects and buckets by default are private. The presigned URLs are useful if you want your user/
customer to be able to upload a specific object to your bucket, but you don't require them to have AWS
security credentials or permissions.

When you create a presigned URL, you must provide your security credentials and then specify a bucket
name, an object key, an HTTP method (PUT for uploading objects), and an expiration date and time. The
presigned URLs are valid only for the specified duration. That is, you must start the action before the
expiration date and time. If the action consists of multiple steps, such as a multipart upload, all steps
must be started before the expiration, otherwise you will receive an error when Amazon S3 attempts to
start a step with an expired URL.

You can use the presigned URL multiple times, up to the expiration date and time.

Presigned URL access

Since presigned URLs grant access to your Amazon S3 buckets to whoever has the URL, we recommend
that you protect them appropriately. For more details about protecting presigned URLs, see Limiting
presigned URL capabilities (p. 150).

Anyone with valid security credentials can create a presigned URL. However, for you to successfully
upload an object, the presigned URL must be created by someone who has permission to perform the
operation that the presigned URL is based upon.

Generate a presigned URL for object upload

You can generate a presigned URL programmatically using the REST API or the AWS SDKs for .NET,
Java, Ruby, JavaScript, PHP, and Python.

If you are using Microsoft Visual Studio, you can also use AWS Explorer to generate a presigned
object URL without writing any code. Anyone who receives a valid presigned URL can then
programmatically upload an object. For more information, see Using Amazon S3 from AWS Explorer. For
instructions on how to install AWS Explorer, see Developing with Amazon S3 using the AWS SDKs, and
explorers (p. 1030).

You can use the AWS SDK to generate a presigned URL that you, or anyone you give the URL, can use
to upload an object to Amazon S3. When you use the URL to upload an object, Amazon S3 creates the
object in the specified bucket. If an object with the same key that is specified in the presigned URL
already exists in the bucket, Amazon S3 replaces the existing object with the uploaded object.

Examples
The following examples show how to upload objects using presigned URLs.

.NET

The following C# example shows how to use the AWS SDK for .NET to upload an object to an S3
bucket using a presigned URL.


This example generates a presigned URL for a specific object and uses it to upload a file. For
information about the example's compatibility with a specific version of the AWS SDK for .NET and
instructions about how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 1039).

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.IO;
using System.Net;

namespace Amazon.DocSamples.S3
{
class UploadObjectUsingPresignedURLTest
{
private const string bucketName = "*** provide bucket name ***";
private const string objectKey = "*** provide the name for the uploaded object ***";
private const string filePath = "*** provide the full path name of the file to upload ***";
// Specify how long the presigned URL lasts, in hours
private const double timeoutDuration = 12;
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
var url = GeneratePreSignedURL(timeoutDuration);
UploadObject(url);
}

private static void UploadObject(string url)


{
HttpWebRequest httpRequest = WebRequest.Create(url) as HttpWebRequest;
httpRequest.Method = "PUT";
using (Stream dataStream = httpRequest.GetRequestStream())
{
var buffer = new byte[8000];
using (FileStream fileStream = new FileStream(filePath, FileMode.Open,
FileAccess.Read))
{
int bytesRead = 0;
while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
{
dataStream.Write(buffer, 0, bytesRead);
}
}
}
HttpWebResponse response = httpRequest.GetResponse() as HttpWebResponse;
}

private static string GeneratePreSignedURL(double duration)


{
var request = new GetPreSignedUrlRequest
{
BucketName = bucketName,
Key = objectKey,
Verb = HttpVerb.PUT,
Expires = DateTime.UtcNow.AddHours(duration)
};

string url = s3Client.GetPreSignedURL(request);


return url;
}
}
}

Java

To successfully complete an upload, you must do the following:

• Specify the HTTP PUT verb when creating the GeneratePresignedUrlRequest and
HttpURLConnection objects.
• Interact with the HttpURLConnection object in some way after finishing the upload. The
following example accomplishes this by using the HttpURLConnection object to check the HTTP
response code.

Example

This example generates a presigned URL and uses it to upload sample data as an object. For
instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 1038).

import com.amazonaws.AmazonServiceException;
import com.amazonaws.HttpMethod;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import com.amazonaws.services.s3.model.S3Object;

import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;

public class GeneratePresignedUrlAndUploadObject {

public static void main(String[] args) throws IOException {


Regions clientRegion = Regions.DEFAULT_REGION;
String bucketName = "*** Bucket name ***";
String objectKey = "*** Object key ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Set the pre-signed URL to expire after one hour.


java.util.Date expiration = new java.util.Date();
long expTimeMillis = expiration.getTime();
expTimeMillis += 1000 * 60 * 60;
expiration.setTime(expTimeMillis);

// Generate the pre-signed URL.


System.out.println("Generating pre-signed URL.");
GeneratePresignedUrlRequest generatePresignedUrlRequest = new
GeneratePresignedUrlRequest(bucketName, objectKey)
.withMethod(HttpMethod.PUT)
.withExpiration(expiration);


URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);

// Create the connection and use it to upload the new object using the pre-signed URL.
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("PUT");
OutputStreamWriter out = new
OutputStreamWriter(connection.getOutputStream());
out.write("This text uploaded as an object via presigned URL.");
out.close();

// Check the HTTP response code. To complete the upload and make the object available,
// you must interact with the connection object in some way.
connection.getResponseCode();
System.out.println("HTTP response code: " + connection.getResponseCode());

// Check to make sure that the object was uploaded successfully.


S3Object object = s3Client.getObject(bucketName, objectKey);
System.out.println("Object " + object.getKey() + " created in bucket " +
object.getBucketName());
} catch (AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
} catch (SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

JavaScript

Example

For an AWS SDK for JavaScript example on using the presigned URL to upload objects, see Create a
presigned URL to upload objects to an Amazon S3 bucket.

Example

The following AWS SDK for JavaScript example creates a presigned URL, uses it to upload an object, and then deletes the object and the bucket:

// Import the required AWS SDK clients and commands for Node.js
import {
CreateBucketCommand,
DeleteObjectCommand,
PutObjectCommand,
DeleteBucketCommand }
from "@aws-sdk/client-s3";
import { s3Client } from "./libs/s3Client.js"; // Helper function that creates an Amazon S3 service client module.
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import fetch from "node-fetch";

// Set parameters
// Create random names for the Amazon Simple Storage Service (Amazon S3) bucket and key.


export const bucketParams = {


Bucket: `test-bucket-${Math.ceil(Math.random() * 10 ** 10)}`,
Key: `test-object-${Math.ceil(Math.random() * 10 ** 10)}`,
Body: "BODY"
};
export const run = async () => {
try {
// Create an Amazon S3 bucket.
console.log(`Creating bucket ${bucketParams.Bucket}`);
await s3Client.send(new CreateBucketCommand({ Bucket: bucketParams.Bucket }));
console.log(`Waiting for "${bucketParams.Bucket}" bucket creation...`);
} catch (err) {
console.log("Error creating bucket", err);
}
try {
// Create the command.
const command = new PutObjectCommand(bucketParams);

// Create the presigned URL.


const signedUrl = await getSignedUrl(s3Client, command, {
expiresIn: 3600,
});
console.log(
`\nPutting "${bucketParams.Key}" using signedUrl with body "${bucketParams.Body}" in v3`
);
console.log(signedUrl);
const response = await fetch(signedUrl, { method: "PUT", body: bucketParams.Body });
console.log(
`\nResponse returned by signed URL: ${await response.text()}\n`
);
return response;
} catch (err) {
console.log("Error creating presigned URL", err);
}
try {
// Delete the object.
console.log(`\nDeleting object "${bucketParams.Key}" from bucket`);
await s3Client.send(
new DeleteObjectCommand({ Bucket: bucketParams.Bucket, Key: bucketParams.Key })
);
} catch (err) {
console.log("Error deleting object", err);
}
try {
// Delete the Amazon S3 bucket.
console.log(`\nDeleting bucket ${bucketParams.Bucket}`);
await s3Client.send(new DeleteBucketCommand({ Bucket: bucketParams.Bucket }));
} catch (err) {
console.log("Error deleting bucket", err);
}
};
run();

Python

Generate a presigned URL to upload an object by using the SDK for Python (Boto3). For example,
use a Boto3 client and the generate_presigned_url function to generate a presigned URL that
PUTs an object.

import boto3
url = boto3.client('s3').generate_presigned_url(


ClientMethod='put_object',
Params={'Bucket': 'BUCKET_NAME', 'Key': 'OBJECT_KEY'},
ExpiresIn=3600)

For a complete example that shows how to generate presigned URLs and how to use the Requests
package to upload and download objects, see the Python presigned URL example on GitHub. For
more information about using SDK for Python (Boto3) to generate a presigned URL, see Python in
the AWS SDK for Python (Boto) API Reference.
Ruby

The following tasks guide you through using a Ruby script to upload an object using a presigned URL
for SDK for Ruby - Version 3.

Uploading objects - SDK for Ruby - version 3

1. Create an instance of the Aws::S3::Resource class.

2. Provide a bucket name and an object key by calling the #bucket[] and the #object[] methods of your Aws::S3::Resource class instance.

   Generate a presigned URL by creating an instance of the URI class, and use it to parse the .presigned_url method of your Aws::S3::Resource class instance. You must specify :put as an argument to .presigned_url, and you must specify PUT to Net::HTTP::Session#send_request if you want to upload an object.

3. Anyone with the presigned URL can upload an object.

   The upload creates an object or replaces any existing object with the same key that is specified in the presigned URL.

The following Ruby code example demonstrates the preceding tasks for SDK for Ruby - Version 3.

Example

require 'aws-sdk-s3'
require 'net/http'

# Uploads an object to a bucket in Amazon Simple Storage Service (Amazon S3)


# by using a presigned URL.
#
# Prerequisites:
#
# - An S3 bucket.
# - An object in the bucket to upload content to.
#
# @param s3_client [Aws::S3::Resource] An initialized S3 resource.
# @param bucket_name [String] The name of the bucket.
# @param object_key [String] The name of the object.
# @param object_content [String] The content to upload to the object.
# @param http_client [Net::HTTP] An initialized HTTP client.
# This is especially useful for testing with mock HTTP clients.
# If not specified, a default HTTP client is created.
# @return [Boolean] true if the object was uploaded; otherwise, false.
# @example
# exit 1 unless object_uploaded_to_presigned_url?(
# Aws::S3::Resource.new(region: 'us-east-1'),
# 'doc-example-bucket',
# 'my-file.txt',
# 'This is the content of my-file.txt'


# )
def object_uploaded_to_presigned_url?(
s3_resource,
bucket_name,
object_key,
object_content,
http_client = nil
)
object = s3_resource.bucket(bucket_name).object(object_key)
url = URI.parse(object.presigned_url(:put))

if http_client.nil?
Net::HTTP.start(url.host) do |http|
http.send_request(
'PUT',
url.request_uri,
object_content,
'content-type' => ''
)
end
else
http_client.start(url.host) do |http|
http.send_request(
'PUT',
url.request_uri,
object_content,
'content-type' => ''
)
end
end
content = object.get.body
puts "The presigned URL for the object '#{object_key}' in the bucket " \
"'#{bucket_name}' is:\n\n"
puts url
puts "\nUsing this presigned URL to get the content that " \
"was just uploaded to this object, the object\'s content is:\n\n"
puts content.read
return true
rescue StandardError => e
puts "Error uploading to presigned URL: #{e.message}"
return false
end

Transforming objects with S3 Object Lambda


With S3 Object Lambda you can add your own code to Amazon S3 GET requests to modify and process
data as it is returned to an application. You can use custom code to modify the data returned by
standard S3 GET requests to filter rows, dynamically resize images, redact confidential data, and more.
Powered by AWS Lambda functions, your code runs on infrastructure that is fully managed by AWS,
eliminating the need to create and store derivative copies of your data or to run proxies, all with no
changes required to applications.

S3 Object Lambda uses AWS Lambda functions to automatically process the output of a standard S3 GET
request. AWS Lambda is a serverless compute service that runs customer-defined code without requiring
management of underlying compute resources. You can author and execute your own custom Lambda
functions, tailoring data transformation to your specific use cases. You can configure a Lambda function
and attach it to an S3 Object Lambda service endpoint and S3 will automatically call your function.
Then any data retrieved using an S3 GET request through the S3 Object Lambda endpoint will return a
transformed result back to the application. All other requests will be processed as normal, as illustrated
in the following diagram.
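
For example, after you create an Object Lambda access point, retrieving an object through it looks like any other GET request to your application. The following sketch uses the SDK for Python (Boto3); the access point ARN follows the example resources used later in this section, and the object key is a placeholder.

import boto3

s3 = boto3.client('s3')

# Passing the Object Lambda access point ARN as the Bucket parameter routes
# the GET request through the attached Lambda function, and the response
# body contains the transformed data.
response = s3.get_object(
    Bucket='arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/my-object-lambda-ap',
    Key='sample.txt'                # placeholder object key
)
print(response['Body'].read())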


The topics in this section describe how to work with Object Lambda access points.

Topics
• Creating Object Lambda Access Points (p. 162)
• Configuring IAM policies for Object Lambda access points (p. 166)
• Writing and debugging Lambda functions for S3 Object Lambda Access Points (p. 168)
• Using AWS built Lambda functions (p. 179)
• Best practices and guidelines for S3 Object Lambda (p. 181)
• Security considerations for S3 Object Lambda access points (p. 182)

Creating Object Lambda Access Points


An Object Lambda access point is associated with exactly one standard access point and thus one
Amazon S3 bucket. To create an Object Lambda access point, you need the following resources:


• An IAM policy
• An Amazon S3 bucket
• A standard S3 access point
• An AWS Lambda function

The following sections describe how to create an Object Lambda access point using the AWS
Management Console and AWS CLI.

Create an Object Lambda access point


For information about how to create an Object Lambda access point using the REST API, see
CreateAccessPointForObjectLambda in the Amazon Simple Storage Service API Reference.

Using the S3 console

To create an Object Lambda access point using the console

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the navigation pane on the left side of the console, choose Object Lambda access points.
3. On the Object Lambda access points page, choose Create Object Lambda access point.
4. For Object Lambda access point name, enter the name you want to use for the access point.

As with standard access points, there are rules for naming. For more information, see Rules for
naming Amazon S3 access points (p. 189).
5. For Supporting access point, enter or browse to the standard access point that you want to use. The
access point must be in the same AWS Region as the objects you want to transform.
6. For Invoke Lambda function, you can choose to use a prebuilt function or enter the Amazon
Resource Name (ARN) of an AWS Lambda function in your AWS account.

For more information about prebuilt functions, see Using AWS built Lambda functions (p. 179).
7. (Optional) For Range and part number, enable this option if you need to process GET requests that
include range and part number headers. Selecting this option confirms that your Lambda function can
recognize and process these requests. For more information about range headers and part numbers, see
Working with Range and partNumber headers (p. 177).
8. (Optional) Under Payload, add JSON text to provide your Lambda function with additional
information. A payload is optional JSON that you can provide to your Lambda function as input. You
can configure payloads with different parameters for different Object Lambda access points that
invoke the same Lambda function, thereby extending the flexibility of your Lambda function.
9. (Optional) For Request metrics, choose enable or disable to add Amazon S3 monitoring to your
Object Lambda access point. Request metrics are billed at the standard CloudWatch rate.
10. (Optional) Under Object Lambda access point policy, set a resource policy. This resource policy
grants GetObject permission for the specified Object Lambda access point.
11. Choose Create Object Lambda access point.

Using the AWS CLI

The following example creates an Object Lambda access point named my-object-lambda-ap for the
bucket DOC-EXAMPLE-BUCKET1 in account 111122223333. This example assumes that a standard
access point named example-ap has already been created. For information about creating a standard
access point, see the section called “Creating access points” (p. 189).


To create an Object Lambda access point using the AWS CLI

This example uses the AWS prebuilt function compress. For example AWS Lambda functions, see the
section called “Using AWS built functions” (p. 179).

1. Create a bucket. In this example we will use DOC-EXAMPLE-BUCKET1. For information about
creating buckets, see the section called “Creating a bucket” (p. 28).
2. Create a standard access point and attach it to your bucket. In this example we will use example-
ap. For information about creating standard access points, see the section called “Creating access
points” (p. 189)
3. Create a Lambda function in your account that you would like to use to transform your S3 object.
See Using Lambda with the AWS CLI in the AWS Lambda Developer Guide. You can also use an AWS
prebuilt Lambda function.
4. Create a JSON configuration file named my-olap-configuration.json. In this configuration,
provide the supporting access point and the Lambda function ARN that you created in the previous steps.

Example

{
    "SupportingAccessPoint" : "arn:aws:s3:us-east-1:111122223333:accesspoint/example-ap",
    "TransformationConfigurations": [{
        "Actions" : ["GetObject"],
        "ContentTransformation" : {
            "AwsLambda": {
                "FunctionPayload" : "{\"compressionType\":\"gzip\"}",
                "FunctionArn" : "arn:aws:lambda:us-east-1:111122223333:function/compress"
            }
        }
    }]
}

5. Run create-access-point-for-object-lambda to create your Object Lambda access point.

aws s3control create-access-point-for-object-lambda --account-id 111122223333 --name my-object-lambda-ap --configuration file://my-olap-configuration.json

6. (Optional) Create a JSON policy file named my-olap-policy.json.

This resource policy grants GetObject permission for account 444455556666 to the specified
Object Lambda access point.

Example

{
    "Version" : "2008-10-17",
    "Statement": [{
        "Sid": "Grant account 444455556666 GetObject access",
        "Effect": "Allow",
        "Action": "s3-object-lambda:GetObject",
        "Principal" : {
            "AWS": "arn:aws:iam::444455556666:root"
        },
        "Resource": "arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/my-object-lambda-ap"
    }]
}

7. (Optional) Run put-access-point-policy-for-object-lambda to set your resource policy.


aws s3control put-access-point-policy-for-object-lambda --account-id 111122223333 --name my-object-lambda-ap --policy file://my-olap-policy.json

8. (Optional) Specify a Payload.

A payload is optional JSON that you can provide to your AWS Lambda function as input. You can
configure payloads with different parameters for different Object Lambda access points that invoke
the same Lambda function, thereby extending the flexibility of your Lambda function.

The following Object Lambda access point configuration shows a payload with two parameters.

{
"SupportingAccessPoint": "AccessPointArn",
"CloudWatchMetricsEnabled": false,
"TransformationConfigurations": [{
"Actions": ["GetObject"],
"ContentTransformation": {
"AwsLambda": {
"FunctionArn": "FunctionArn",
"FunctionPayload": "{\"res-x\": \"100\",\"res-y\": \"100\"}"
}
}
}]
}

The following Object Lambda access point configuration shows a payload with one parameter and
range and part number enabled.

{
"SupportingAccessPoint":"AccessPointArn",
"CloudWatchMetricsEnabled": false,
"AllowedFeatures": ["GetObject-Range", "GetObject-PartNumber"],
"TransformationConfigurations": [{
"Actions": ["GetObject"],
"ContentTransformation": {
"AwsLambda": {
"FunctionArn":"FunctionArn",
"FunctionPayload": "{\"compression-amount\": \"5\"}"
}
}
}]
}

Important
When using Object Lambda access points, the payload should not contain any confidential information.

Using AWS CloudFormation

For more information about configuring Object Lambda access points using AWS CloudFormation, see
AWS::S3ObjectLambda::AccessPoint in the AWS CloudFormation User Guide.

Using AWS Cloud Development Kit (CDK)

For more information about configuring Object Lambda access points using the AWS CDK, see
AWS::S3ObjectLambda Construct Library in the AWS Cloud Development Kit (CDK) API Reference.


Configuring IAM policies for Object Lambda access


points
S3 access points support AWS Identity and Access Management (IAM) resource policies that allow you to
control the use of the access point by resource, user, or other conditions.

In the case of a single AWS account, the following four resources must have permissions granted to work
with Object Lambda access points:

• The IAM user or role


• The bucket and associated standard access point
• The Object Lambda access point
• The AWS Lambda function

These examples assume that you have the following resources:

• An Amazon S3 bucket with the following Amazon Resource Name (ARN):

arn:aws:s3:::DOC-EXAMPLE-BUCKET1

The bucket delegates access control to the access point, as shown in the following example. For more
information, see Delegating access control to access points (p. 185).

Example bucket policy delegating access control to access points

{
    "Version": "2012-10-17",
    "Statement" : [
    {
        "Effect": "Allow",
        "Principal" : { "AWS": "account-ARN" },
        "Action" : "*",
        "Resource" : [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET1", "arn:aws:s3:::DOC-EXAMPLE-BUCKET1/*" ],
        "Condition": {
            "StringEquals" : { "s3:DataAccessPointAccount" : "Bucket owner's account ID" }
        }
    }]
}

• An Amazon S3 standard access point on this bucket with the following ARN:

arn:aws:s3:us-east-1:111122223333:accesspoint/my-access-point
• An Object Lambda access point with the following ARN:

arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/my-object-lambda-ap
• An AWS Lambda function with the following ARN:

arn:aws:lambda:us-east-1:111122223333:function/MyObjectLambdaFunction

Note
If using a Lambda function from your account you must include the function version in your
policy statement. For example, arn:aws:lambda:us-east-1:111122223333:function/MyObjectLambdaFunction:$LATEST

The following IAM policy grants a user permission to interact with these resources.

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowLambdaInvocation",
"Action": [
"lambda:InvokeFunction"
],
"Effect": "Allow",
"Resource": "arn:aws:lambda:us-east-1:111122223333:function/MyObjectLambdaFunction:
$LATEST",
"Condition": {
"ForAnyValue:StringEquals": {
"aws:CalledVia": [
"s3-object-lambda.amazonaws.com"
]
}
}
},
{
"Sid": "AllowStandardAccessPointAccess",
"Action": [
"s3: Get*",
"s3: List*"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:us-east-1:111122223333:accesspoint/my-access-point/*",
"Condition": {
"ForAnyValue:StringEquals": {
"aws:CalledVia": [
"s3-object-lambda.amazonaws.com"
]
}
}
},
{
"Sid": "AllowObjectLambdaAccess",
"Action": [
"s3-object-lambda:Get*",
"s3-object-lambda:List*"
],
"Effect": "Allow",
"Resource": "arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/my-object-
lambda-ap"
}
]
}

Lambda execution role


Your Lambda function needs permission to send data to S3 Object Lambda when requests
are made to an Object Lambda access point. This is provided by enabling the s3-object-
lambda:WriteGetObjectResponse permission on your Lambda function's execution role. You can
create a new execution role or update an existing one.

To create an execution role in the IAM console

1. Open the Roles page in the IAM console.


2. Choose Create role.
3. Under Common use cases, choose Lambda.


4. Choose Next: Permissions.


5. Under Attach permissions policies, choose the AWS managed policy
AmazonS3ObjectLambdaExecutionRolePolicy.
6. Choose Next: Tags.
7. Choose Next: Review.
8. For Role name, enter s3-object-lambda-role.
9. Choose Create role.
10. Apply the newly created s3-object-lambda-role as your Lambda function's execution role.

For detailed instructions, see Creating a role for an AWS service (console) in the IAM User Guide.
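If you prefer to script the role creation instead of using the console, the following is a minimal boto3 sketch. The role name matches the console steps above; the managed policy ARN path (service-role/) is an assumption, so verify the exact ARN in the IAM console before relying on it.

# A minimal sketch of creating the execution role with boto3.
# The managed policy ARN path (service-role/) is an assumption; confirm it in the IAM console.
import json
import boto3

iam = boto3.client("iam")

assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Create the role that Lambda will assume.
iam.create_role(
    RoleName="s3-object-lambda-role",
    AssumeRolePolicyDocument=json.dumps(assume_role_policy),
)

# Attach the AWS managed policy that grants s3-object-lambda:WriteGetObjectResponse.
iam.attach_role_policy(
    RoleName="s3-object-lambda-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AmazonS3ObjectLambdaExecutionRolePolicy",
)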

To update your Lambda function's execution role

Add the following statement to the execution role that is used by the Lambda function.

{
    "Sid": "AllowObjectLambdaAccess",
    "Action": ["s3-object-lambda:WriteGetObjectResponse"],
    "Effect": "Allow",
    "Resource": "*"
}

For more information about execution roles, see Lambda execution role in the AWS Lambda Developer
Guide.
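As an alternative to editing the role in the console, the statement above can be added as an inline policy with boto3, as in the following sketch. The role name and policy name are placeholders.

# A minimal sketch of adding the WriteGetObjectResponse statement as an inline policy.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowObjectLambdaAccess",
        "Action": ["s3-object-lambda:WriteGetObjectResponse"],
        "Effect": "Allow",
        "Resource": "*",
    }],
}

iam.put_role_policy(
    RoleName="my-lambda-execution-role",      # placeholder: your function's execution role
    PolicyName="AllowWriteGetObjectResponse",  # placeholder policy name
    PolicyDocument=json.dumps(policy_document),
)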

Using context keys with Object Lambda access points


With S3 Object Lambda, GET requests will automatically invoke Lambda functions and all other
requests will be forwarded to S3. S3 Object Lambda will evaluate context keys such as s3-object-
lambda:TlsVersion or s3-object-lambda:AuthType related to the connection or signing of the
request. All other context keys, such as s3:prefix, are evaluated by S3.
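For example, an Object Lambda access point policy might use these keys to reject requests made over older TLS versions. The following boto3 sketch attaches such a policy; the account ID, access point name, and the minimum TLS version of 1.2 are assumptions used only for illustration.

# A minimal sketch: deny GetObject requests below TLS 1.2 on an Object Lambda access point.
# The account ID and access point name are placeholders.
import json
import boto3

s3control = boto3.client("s3control")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireModernTls",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3-object-lambda:GetObject",
        "Resource": "arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/my-object-lambda-ap",
        "Condition": {"NumericLessThan": {"s3-object-lambda:TlsVersion": "1.2"}},
    }],
}

s3control.put_access_point_policy_for_object_lambda(
    AccountId="111122223333",
    Name="my-object-lambda-ap",
    Policy=json.dumps(policy),
)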

Writing and debugging Lambda functions for S3 Object Lambda Access Points

This section provides details about writing and debugging Lambda functions for use with Object Lambda access
points.

Topics
• Working with WriteGetObjectResponse (p. 168)
• Debugging S3 Object Lambda (p. 176)
• Working with Range and partNumber headers (p. 177)
• Event context format and usage (p. 178)

Working with WriteGetObjectResponse


S3 Object Lambda exposes a new Amazon S3 API, WriteGetObjectResponse, which enables the
Lambda function to provide customized data and response headers to the GetObject caller.
WriteGetObjectResponse gives the Lambda author extensive control over the status code, response
headers, and response body based on their processing needs. You can use WriteGetObjectResponse to
respond with the whole transformed object, portions of the transformed object, or other responses
based on the context of your application. The following section shows examples of using
WriteGetObjectResponse.

• Example 1: Respond with a 403 Forbidden


• Example 2: Respond with a transformed image
• Example 3: Stream compressed content

Example 1:

You can use WriteGetObjectResponse to respond with a 403 Forbidden based on the content of the
object.

Java

package com.amazon.s3.objectlambda;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.events.S3ObjectLambdaEvent;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.WriteGetObjectResponseRequest;

import java.io.ByteArrayInputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Example1 {

    public void handleRequest(S3ObjectLambdaEvent event, Context context) throws Exception {
        AmazonS3 s3Client = AmazonS3Client.builder().build();

        // We're checking to see if the request contains all of the information we need.
        // If it does not, we send a 4XX response and a custom error code and message.
        // If we're happy with the request, we retrieve the object from S3 and stream it
        // to the client unchanged.
        var tokenIsNotPresent = !event.getUserRequest().getHeaders().containsKey("requiredToken");
        if (tokenIsNotPresent) {
            s3Client.writeGetObjectResponse(new WriteGetObjectResponseRequest()
                    .withRequestRoute(event.outputRoute())
                    .withRequestToken(event.outputToken())
                    .withStatusCode(403)
                    .withContentLength(0L).withInputStream(new ByteArrayInputStream(new byte[0]))
                    .withErrorCode("MissingRequiredToken")
                    .withErrorMessage("The required token was not present in the request."));
            return;
        }

        // Prepare the presigned URL for use and make the request to S3.
        HttpClient httpClient = HttpClient.newBuilder().build();
        var presignedResponse = httpClient.send(
                HttpRequest.newBuilder(new URI(event.inputS3Url())).GET().build(),
                HttpResponse.BodyHandlers.ofInputStream());

        // Stream the original bytes back to the caller.
        s3Client.writeGetObjectResponse(new WriteGetObjectResponseRequest()
                .withRequestRoute(event.outputRoute())
                .withRequestToken(event.outputToken())
                .withInputStream(presignedResponse.body()));
    }
}

Python

import boto3
import requests

def handler(event, context):


s3 = boto3.client('s3')

"""
Retrieve the operation context object from event. This has info to where the
WriteGetObjectResponse request
should be delivered and a presigned URL in `inputS3Url` where we can download the
requested object from.
The `userRequest` object has information related to the user which made this
`GetObject` request to S3OL.
"""
get_context = event["getObjectContext"]
user_request_headers = event["userRequest"]["headers"]

route = get_context["outputRoute"]
token = get_context["outputToken"]
s3_url = get_context["inputS3Url"]

    # Check for the presence of our custom `SuperSecretToken` header and deny or allow based on that header.
is_token_present = "SuperSecretToken" in user_request_headers

if is_token_present:
        # If the user presented our custom `SuperSecretToken` header, we send the requested object back to the user.
response = requests.get(s3_url)
s3.write_get_object_response(RequestRoute=route, RequestToken=token,
Body=response.content)
else:
# If the token is not present we send an error back to the user.
        s3.write_get_object_response(RequestRoute=route, RequestToken=token, StatusCode=403,
                                     ErrorCode="NoSuperSecretTokenFound",
                                     ErrorMessage="The request was not secret enough.")

# Gracefully exit the Lambda function


return { 'status_code': 200 }

NodeJS

const { S3 } = require('aws-sdk');
const axios = require('axios').default;

exports.handler = async (event) => {
    const s3 = new S3();

    // Retrieve the operation context object from the event. This has information about where the
    // WriteGetObjectResponse request should be delivered and a presigned URL in `inputS3Url`
    // where we can download the requested object from.
    // The `userRequest` object has information related to the user who made this `GetObject`
    // request to S3 Object Lambda.
    const { userRequest, getObjectContext } = event;
    const { outputRoute, outputToken, inputS3Url } = getObjectContext;

    // Check for the presence of our custom `SuperSecretToken` header and deny or allow based on that header.
    const isTokenPresent = Object
        .keys(userRequest.headers)
        .includes("SuperSecretToken");

    if (!isTokenPresent) {
        // If the token is not present, we send an error back to the user. Notice the `await` in front
        // of the request, as we want to wait for this request to finish sending before moving on.
        await s3.writeGetObjectResponse({
            RequestRoute: outputRoute,
            RequestToken: outputToken,
            StatusCode: 403,
            ErrorCode: "NoSuperSecretTokenFound",
            ErrorMessage: "The request was not secret enough.",
        }).promise();
    } else {
        // If the user presented our custom `SuperSecretToken` header, we send the requested object
        // back to the user. Again, notice the presence of `await`.
        const presignedResponse = await axios.get(inputS3Url);
        await s3.writeGetObjectResponse({
            RequestRoute: outputRoute,
            RequestToken: outputToken,
            Body: presignedResponse.data,
        }).promise();
    }

    // Gracefully exit the Lambda function.
    return { statusCode: 200 };
}

Example 2:

When performing an image transformation, you may find that you need all the bytes of the source
object before you can start processing them. Consequently, your WriteGetObjectResponse will return the
whole object to the requesting application in one go.

Java

package com.amazon.s3.objectlambda;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.events.S3ObjectLambdaEvent;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.WriteGetObjectResponseRequest;

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.awt.Image;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;


import java.net.http.HttpResponse;

public class Example2 {

    private static final int HEIGHT = 250;
    private static final int WIDTH = 250;

    public void handleRequest(S3ObjectLambdaEvent event, Context context) throws Exception {
AmazonS3 s3Client = AmazonS3Client.builder().build();
HttpClient httpClient = HttpClient.newBuilder().build();

// Prepare the presigned URL for use and make the request to S3.
var presignedResponse = httpClient.send(
HttpRequest.newBuilder(new URI(event.inputS3Url())).GET().build(),
HttpResponse.BodyHandlers.ofInputStream());

// The entire image is loaded into memory here so that we can resize it.
// Once the resizing is completed, we write the bytes into the body
// of the WriteGetObjectResponse.
var originalImage = ImageIO.read(presignedResponse.body());
var resizingImage = originalImage.getScaledInstance(WIDTH, HEIGHT,
Image.SCALE_DEFAULT);
var resizedImage = new BufferedImage(WIDTH, HEIGHT,
BufferedImage.TYPE_INT_RGB);
resizedImage.createGraphics().drawImage(resizingImage, 0, 0, WIDTH, HEIGHT,
null);

var baos = new ByteArrayOutputStream();


ImageIO.write(resizedImage, "png", baos);

// Stream the bytes back to the caller.


s3Client.writeGetObjectResponse(new WriteGetObjectResponseRequest()
.withRequestRoute(event.outputRoute())
.withRequestToken(event.outputToken())
.withInputStream(new ByteArrayInputStream(baos.toByteArray())));
}
}

Python

import boto3
import requests
import io
from PIL import Image

def handler(event, context):


"""
Retrieve the operation context object from event. This has info to where the
WriteGetObjectResponse request
should be delivered and a presigned URL in `inputS3Url` where we can download the
requested object from.
The `userRequest` object has information related to the user which made this
`GetObject` request to S3OL.
"""
get_context = event["getObjectContext"]
route = get_context["outputRoute"]
token = get_context["outputToken"]
s3_url = get_context["inputS3Url"]

"""
In this case we're resizing `.png` images which are stored in S3 and are accessible
via the presigned url
`inputS3Url`.


"""
image_request = requests.get(s3_url)
image = Image.open(io.BytesIO(image_request.content))
image.thumbnail((256,256), Image.ANTIALIAS)

transformed = io.BytesIO()
image.save(transformed, "png")

# Sending the resized image back to the client


s3 = boto3.client('s3')
s3.write_get_object_response(Body=transformed.getvalue(), RequestRoute=route,
RequestToken=token)

# Gracefully exit the Lambda function


return { 'status_code': 200 }

NodeJS

const { S3 } = require('aws-sdk');
const axios = require('axios').default;
const sharp = require('sharp');

exports.handler = async (event) => {


const s3 = new S3();

    // Retrieve the operation context object from the event. This has information about where the
    // WriteGetObjectResponse request should be delivered and a presigned URL in `inputS3Url`
    // where we can download the requested object from.
const { getObjectContext } = event;
const { outputRoute, outputToken, inputS3Url } = getObjectContext;

    // In this case we're resizing `.png` images that are stored in S3 and are accessible via the
    // presigned URL `inputS3Url`.
const { data } = await axios.get(inputS3Url, { responseType: 'arraybuffer' });

// Resizing the image


const resized = await sharp(data)
.resize({ width: 256, height: 256 })
.toBuffer();

// Sending the resized image back to the client


await s3.writeGetObjectResponse({
RequestRoute: outputRoute,
RequestToken: outputToken,
Body: resized,
}).promise();

// Gracefully exit the Lambda function


return { statusCode: 200 };
}

Example 3:

When compressing objects, compressed data is produced incrementally. Consequently, you can use
WriteGetObjectResponse to return the compressed data as soon as it's ready. As shown in
this example, it is not necessary to know the length of the completed transformation.

Java


package com.amazon.s3.objectlambda;

import com.amazonaws.services.lambda.runtime.events.S3ObjectLambdaEvent;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.WriteGetObjectResponseRequest;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Example3 {

    public void handleRequest(S3ObjectLambdaEvent event, Context context) throws Exception {
        AmazonS3 s3Client = AmazonS3Client.builder().build();
        HttpClient httpClient = HttpClient.newBuilder().build();

        // Request the original object from S3.
        var presignedResponse = httpClient.send(
                HttpRequest.newBuilder(new URI(event.inputS3Url())).GET().build(),
                HttpResponse.BodyHandlers.ofInputStream());

        // We're consuming the incoming response body from the presigned request,
        // applying our transformation on that data, and emitting the transformed bytes
        // into the body of the WriteGetObjectResponse request as soon as they're ready.
        // This example compresses the data from S3, but any processing pertinent
        // to your application can be performed here.
        // GZIPCompressingInputStream is a helper class (not shown here) that gzip-compresses
        // the bytes read from the wrapped stream.
        var bodyStream = new GZIPCompressingInputStream(presignedResponse.body());

        // Stream the bytes back to the caller.
        s3Client.writeGetObjectResponse(new WriteGetObjectResponseRequest()
                .withRequestRoute(event.outputRoute())
                .withRequestToken(event.outputToken())
                .withInputStream(bodyStream));
    }
}

Python

import boto3
import requests
import zlib
from botocore.config import Config

"""
A helper class to work with content iterators. Takes an iterator and compresses the bytes that
come from it. It implements `read` and `__iter__` so the SDK can stream the response.
"""
class Compress:
def __init__(self, content_iter):
self.content = content_iter
self.compressed_obj = zlib.compressobj()

    def read(self, _size):
        for data in self.__iter__():
            return data


def __iter__(self):
while True:
data = next(self.content)
chunk = self.compressed_obj.compress(data)
if not chunk:
break

yield chunk

yield self.compressed_obj.flush()

def handler(event, context):


"""
Setting the `payload_signing_enabled` property to False will allow us to send a
streamed response back to the client
in this scenario a streamed response means that the bytes are not buffered into
memory as we're compressing them
but are sent straight to the user
"""
my_config = Config(
region_name='eu-west-1',
signature_version='s3v4',
s3={
"payload_signing_enabled": False
}
)
s3 = boto3.client('s3', config=my_config)

"""
Retrieve the operation context object from event. This has info to where the
WriteGetObjectResponse request
should be delivered and a presigned URL in `inputS3Url` where we can download the
requested object from.
The `userRequest` object has information related to the user which made this
`GetObject` request to S3OL.
"""
get_context = event["getObjectContext"]
route = get_context["outputRoute"]
token = get_context["outputToken"]
s3_url = get_context["inputS3Url"]

# Compress the `get` request stream


with requests.get(s3_url, stream=True) as r:
compressed = Compress(r.iter_content())

# Send the stream back to the client


s3.write_get_object_response(Body=compressed, RequestRoute=route,
RequestToken=token, ContentType="text/plain",
ContentEncoding="gzip")

# Gracefully exit the Lambda function


return {'status_code': 200}

NodeJS

const { S3 } = require('aws-sdk');
const axios = require('axios').default;
const zlib = require('zlib');

exports.handler = async (event) => {


const s3 = new S3();


    // Retrieve the operation context object from the event. This has information about where the
    // WriteGetObjectResponse request should be delivered and a presigned URL in `inputS3Url`
    // where we can download the requested object from.
const { getObjectContext } = event;
const { outputRoute, outputToken, inputS3Url } = getObjectContext;

    // Let's download the object from S3 and process it as a stream, as it might be a huge object
    // and we don't want to buffer it in memory. Notice the `await`, as we want to wait for
    // `writeGetObjectResponse` to complete before we can exit the Lambda function.
await axios({
method: 'GET',
url: inputS3Url,
responseType: 'stream',
}).then(
// Gzip the stream
response => response.data.pipe(zlib.createGzip())
).then(
// Finally send the gzip-ed stream back to the client
stream => s3.writeGetObjectResponse({
RequestRoute: outputRoute,
RequestToken: outputToken,
Body: stream,
ContentType: "text/plain",
ContentEncoding: "gzip",
}).promise()
);

// Gracefully exit the Lambda function


return { statusCode: 200 };
}

Note
While S3 Object Lambda allows up to 60 seconds to send a complete response to the caller
via WriteGetObjectResponse, the actual amount of time available might be less. For instance,
your Lambda function timeout might be less than 60 seconds, or the caller might have more
stringent timeouts.

The WriteGetObjectResponse call must be made for the original caller to receive a non-500 response.
If the Lambda function returns, exceptionally or otherwise, before the WriteGetObjectResponse API
is called, the original caller receives a 500 response. Exceptions thrown while the response is being
completed result in truncated responses to the caller. If the Lambda function receives a 200
response from the WriteGetObjectResponse API call, then the complete response has been sent to the
original caller. The Lambda function's own response, exceptional or not, is ignored by S3 Object Lambda.

When calling this API, S3 requires the route and request token from the event context. For more
information, see Event context format and usage (p. 178).

These parameters are required to connect the WriteGetObjectResponse call with the original caller. While
it is always appropriate to retry 500 responses, note that the request token is a single-use token
and subsequent attempts to use it may result in 400 Bad Request responses. While the call to
WriteGetObjectResponse with the route and request tokens does not need to be made from the invoked
Lambda function, it does need to be made by an identity in the same account, and it must be completed
before the Lambda function finishes execution.
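One consequence of this behavior is that it is often worth catching errors inside the function and reporting them through WriteGetObjectResponse yourself, rather than letting the function fail and the caller receive a generic 500 response. The following Python sketch illustrates that pattern; the transform helper and the error code are placeholders, not part of any API.

# A minimal sketch of reporting transformation failures through WriteGetObjectResponse
# instead of letting the Lambda function fail. transform() is a placeholder for your own logic.
import boto3
import requests

def transform(data):
    return data  # placeholder: replace with your own transformation

def handler(event, context):
    s3 = boto3.client("s3")
    ctx = event["getObjectContext"]
    route, token, s3_url = ctx["outputRoute"], ctx["outputToken"], ctx["inputS3Url"]

    try:
        original = requests.get(s3_url)
        original.raise_for_status()
        body = transform(original.content)
    except Exception:
        # The request token is single use, so send exactly one error response and stop.
        s3.write_get_object_response(
            RequestRoute=route, RequestToken=token, StatusCode=500,
            ErrorCode="TransformationFailed",
            ErrorMessage="The object could not be transformed.")
        return {"status_code": 500}

    s3.write_get_object_response(RequestRoute=route, RequestToken=token, Body=body)
    return {"status_code": 200}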

Debugging S3 Object Lambda


GetObject requests to S3 Object Lambda access points can result in new error responses when
something goes wrong with the Lambda invocation or execution. These errors follow the same format
as the standard S3 errors. For information about S3 Object Lambda errors, see S3 Object Lambda Error
Code List in the Amazon Simple Storage Service API Reference.

For more information about general Lambda function debugging, see Monitoring and troubleshooting
Lambda applications in the AWS Lambda Developer Guide.

For information about standard Amazon S3 errors, see Error Responses in the Amazon Simple Storage
Service API Reference.

You can enable request metrics in CloudWatch for your Object Lambda access points. These metrics can
be used to monitor the operational performance of your access point.

You can enable CloudTrail data events to get more granular logging about requests made to your Object
Lambda access points. For more information, see Logging data events for trails in the AWS CloudTrail
User Guide.

Working with Range and partNumber headers


When working with large objects, you can use the Range HTTP header to download a specified byte
range from an object, fetching only the specified portion. You can use concurrent connections to Amazon
S3 to fetch different byte ranges from within the same object. You can also use partNumber (an integer
between 1 and 10,000), which effectively performs a 'ranged' GET request for the specified part of the
object. For more information, see GetObject Request Syntax in the Amazon Simple Storage Service API
Reference.

When it receives a GET request, S3 Object Lambda invokes your specified Lambda function first.
Therefore, if your GET request contains range or part number parameters, you must ensure that your
Lambda function is equipped to recognize and manage these parameters. Because there can be multiple
entities connected in such a setup (the requesting client and services such as Lambda and S3), we advise
that all involved entities interpret the requested range (or partNumber) in a uniform manner. This ensures
that the ranges the application is expecting match the ranges your Lambda function is processing.
When building a function to handle requests with range headers, test all combinations of response sizes,
original object sizes, and request range sizes that your application plans to use.

By default, S3 Object Lambda access points respond with a 501 (Not Implemented) error to any GetObject
request that contains a range or part number parameter, either in the headers or query parameters. You
can confirm that your Lambda function is prepared to handle range or part requests by updating your
Object Lambda access point configuration through the AWS Management Console or the AWS CLI.
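The following is a minimal boto3 sketch of updating that configuration. The AllowedFeatures values GetObject-Range and GetObject-PartNumber, the CloudWatch metrics flag, and the account, access point, and function identifiers are assumptions for illustration; note that this call replaces the whole configuration, so include every setting you want to keep.

# A minimal sketch of declaring Range and partNumber support on an Object Lambda access point.
# Account ID, names, and ARNs are placeholders; this call replaces the entire configuration.
import boto3

s3control = boto3.client("s3control")

s3control.put_access_point_configuration_for_object_lambda(
    AccountId="111122223333",
    Name="my-object-lambda-ap",
    Configuration={
        "SupportingAccessPoint": "arn:aws:s3:us-east-1:111122223333:accesspoint/my-access-point",
        "CloudWatchMetricsEnabled": True,
        "AllowedFeatures": ["GetObject-Range", "GetObject-PartNumber"],
        "TransformationConfigurations": [{
            "Actions": ["GetObject"],
            "ContentTransformation": {
                "AwsLambda": {
                    "FunctionArn": "arn:aws:lambda:us-east-1:111122223333:function/MyObjectLambdaFunction"
                }
            },
        }],
    },
)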

The following code example demonstrates how to retrieve the Range header from the GET request and
add it to the presigned URL that Lambda can use to retrieve the requested range from S3.

private HttpRequest.Builder applyRangeHeader(ObjectLambdaEvent event, HttpRequest.Builder presignedRequest) {
var header = event.getUserRequest().getHeaders().entrySet().stream()
.filter(e -> e.getKey().toLowerCase(Locale.ROOT).equals("range"))
.findFirst();

// Add check in the query string itself.


header.ifPresent(entry -> presignedRequest.header(entry.getKey(), entry.getValue()));
return presignedRequest;
}

Range requests to S3 can be made by using headers or query parameters. If the original request used the
Range header, it can be found in the event context at userRequest.headers.Range. If the original
request used a query parameter, it will be present in userRequest.url as 'Range'. In both cases,
the presigned URL that is provided will not contain the specified range, and the Range header should be
added to it in order to retrieve the requested range from S3.


Part requests to S3 are made using query parameters. If the original request included a part number it
can be found in the query parameters in userRequest.url as ‘partNumber’. The presigned URL that is
provided will not contain the specified partNumber.
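A Python equivalent of the idea shown in the Java snippet above might look like the following sketch, which copies a Range value, supplied either as a header or as a query parameter on the original request, onto the request made to the presigned URL. The function name is illustrative; only the event fields come from the S3 Object Lambda event contract.

# A minimal sketch of forwarding a Range (header or query parameter) from the user request
# to the presigned URL request.
from urllib.parse import urlparse, parse_qs

import requests

def fetch_requested_range(event):
    user_request = event["userRequest"]
    presigned_url = event["getObjectContext"]["inputS3Url"]

    headers = {}

    # Case-insensitive lookup of a Range header on the original request.
    for name, value in user_request["headers"].items():
        if name.lower() == "range":
            headers["Range"] = value

    # A Range supplied as a query parameter on the original request.
    query = parse_qs(urlparse(user_request["url"]).query)
    if "Range" in query:
        headers["Range"] = query["Range"][0]

    # The presigned URL itself does not carry the range, so send it as a header.
    return requests.get(presigned_url, headers=headers)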

Event context format and usage


S3 Object Lambda provides context about the request being made in the event passed to Lambda. The
following shows an example request and field descriptions.

{
"xAmzRequestId": "requestId",
"getObjectContext": {
"inputS3Url": "https://my-s3-ap-111122223333.s3-accesspoint.us-
east-1.amazonaws.com/example?X-Amz-Security-Token=<snip>",
"outputRoute": "io-use1-001",
"outputToken": "OutputToken"
},
"configuration": {
"accessPointArn": "arn:aws:s3-object-lambda:us-east-1:111122223333:accesspoint/
example-object-lambda-ap",
"supportingAccessPointArn": "arn:aws:s3:us-east-1:111122223333:accesspoint/example-
ap",
"payload": "{}"
},
"userRequest": {
"url": "https://object-lambda-111122223333.s3-object-lambda.us-
east-1.amazonaws.com/example",
"headers": {
"Host": "object-lambda-111122223333.s3-object-lambda.us-east-1.amazonaws.com",
"Accept-Encoding": "identity",
"X-Amz-Content-SHA256": "e3b0c44298fc1example"
}
},
"userIdentity": {
"type": "AssumedRole",
"principalId": "principalId",
"arn": "arn:aws:sts::111122223333:assumed-role/Admin/example",
"accountId": "111122223333",
"accessKeyId": "accessKeyId",
"sessionContext": {
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "Wed Mar 10 23:41:52 UTC 2021"
},
"sessionIssuer": {
"type": "Role",
"principalId": "principalId",
"arn": "arn:aws:iam::111122223333:role/Admin",
"accountId": "111122223333",
"userName": "Admin"
}
}
},
"protocolVersion": "1.00"
}

• xAmzRequestId ‐ The Amazon S3 request ID for this request. We recommend that you log this value
to help with debugging.
• getObjectContext ‐ The input and output details for connections to Amazon S3 and S3 Object
Lambda.
• inputS3Url ‐ A presigned URL that can be used to fetch the original object from Amazon S3. The
URL is signed using the original caller’s identity, and their permissions will apply when the URL is
used. If there are signed headers in the URL, the Lambda function must include these in the call to
Amazon S3, except for the Host.
• outputRoute ‐ A routing token that is added to the S3 Object Lambda URL when the Lambda
function calls WriteGetObjectResponse.
• outputToken ‐ An opaque token used by S3 Object Lambda to match the
WriteGetObjectResponse call with the original caller.
• configuration ‐ Configuration information about the S3 Object Lambda access point.
• accessPointArn ‐ The Amazon Resource Name (ARN) of the S3 Object Lambda access point that
received this request.
• supportingAccessPointArn ‐ The ARN of the supporting access point that is specified in the S3
Object Lambda access point configuration.
• payload ‐ Custom data that is applied to the S3 Object Lambda access point configuration. S3
Object Lambda treats this as an opaque string, so it might need to be decoded before use.
• userRequest ‐ Information about the original call to S3 Object Lambda.
• url ‐ The decoded URL of the request as received by S3 Object Lambda, excluding any
authorization-related query parameters.
• headers ‐ A map of string to strings containing the HTTP headers and their values from the original
call, excluding any authorization-related headers. If the same header appears multiple times, their
values are combined into a comma-delimited list. The case of the original headers is retained in this
map.
• userIdentity ‐ Details about the identity that made the call to S3 Object Lambda. For more
information, see Logging data events for trails in the AWS CloudTrail User Guide.
• type ‐ The type of identity.
• accountId ‐ The AWS account to which the identity belongs.
• userName ‐ The friendly name of the identity that made the call.
• principalId ‐ The unique identifier for the identity that made the call.
• arn ‐ The ARN of the principal that made the call. The last section of the ARN contains the user or
role that made the call.
• sessionContext ‐ If the request was made with temporary security credentials, this element
provides information about the session that was created for those credentials.
• invokedBy ‐ The name of the AWS service that made the request, such as Amazon EC2 Auto Scaling
or AWS Elastic Beanstalk.
• sessionIssuer ‐ If the request was made with temporary security credentials, this element
provides information about how the credentials were obtained.
• protocolVersion ‐ The version ID of the context provided. The format of this field is {Major
Version}.{Minor Version}. The minor version numbers are always two-digit numbers. Any
removal or change to the semantics of a field will necessitate a major version bump and will require
active opt-in. Amazon S3 can add new fields at any time, at which point you might experience a minor
version bump. Due to the nature of software rollouts, it is possible that you might see multiple minor
versions in use at once.
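As a concrete illustration of these fields, the following sketch pulls the values a typical function needs out of the event and decodes the configured payload. Treating the payload as JSON is an assumption specific to this example, since S3 Object Lambda treats it as an opaque string.

# A minimal sketch of reading the event context fields described above.
# Treating the payload as JSON is an assumption for this example only.
import json

def parse_event_context(event):
    get_object_context = event["getObjectContext"]
    configuration = event["configuration"]

    route = get_object_context["outputRoute"]        # pass to WriteGetObjectResponse
    token = get_object_context["outputToken"]        # single-use token for this request
    presigned_url = get_object_context["inputS3Url"]

    # The payload is whatever string was stored in the access point configuration.
    payload = json.loads(configuration["payload"]) if configuration["payload"] else {}

    # Useful for debugging and log correlation.
    request_id = event["xAmzRequestId"]

    return route, token, presigned_url, payload, request_id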

Using AWS built Lambda functions


AWS provides some prebuilt Lambda functions that you can use with S3 Object Lambda to detect and
redact personally identifiable information (PII) and decompress S3 objects. These Lambda functions
are available in the AWS Serverless Application Repository and can be selected through the AWS
Management Console when you create your Object Lambda access point.

For more information about how to deploy serverless applications from the AWS Serverless Application
Repository, see Deploying Applications in the AWS Serverless Application Repository Developer Guide.

Example 1: PII Access Control


This Lambda function uses Amazon Comprehend, a natural language processing (NLP) service that uses
machine learning to find insights and relationships in text. It automatically detects personally
identifiable information (PII) such as names, addresses, dates, credit card numbers, and social security
numbers from documents in your Amazon S3 bucket. If you have documents in your bucket that include
PII, you can configure the PII Access Control S3 Object Lambda function to detect these PII entity types
and restrict access to unauthorized users.

To get started, deploy the following Lambda function in your account and add its ARN to your
Object Lambda access point configuration.

ARN:

arn:aws:serverlessrepo:us-east-1:839782855223:applications/ComprehendPiiAccessControlS3ObjectLambda

You can view this function in the AWS Management Console by using the following AWS Serverless
Application Repository (SAR) link: ComprehendPiiAccessControlS3ObjectLambda.

To view this function on GitHub, see Amazon Comprehend S3 Object Lambda.
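Once the function has been deployed from the AWS Serverless Application Repository, wiring its ARN into an Object Lambda access point can be scripted as in the following sketch. The account ID, access point names, and the deployed function ARN are placeholders; the shape of the configuration mirrors the one shown earlier in this guide.

# A minimal sketch of attaching a deployed function's ARN to a new Object Lambda access point.
# Account ID, names, and the function ARN are placeholders.
import boto3

s3control = boto3.client("s3control")

s3control.create_access_point_for_object_lambda(
    AccountId="111122223333",
    Name="pii-access-control-olap",
    Configuration={
        "SupportingAccessPoint": "arn:aws:s3:us-east-1:111122223333:accesspoint/my-access-point",
        "TransformationConfigurations": [{
            "Actions": ["GetObject"],
            "ContentTransformation": {
                "AwsLambda": {
                    # ARN of the function created when you deployed the application from the SAR.
                    "FunctionArn": "arn:aws:lambda:us-east-1:111122223333:function/deployed-pii-function"
                }
            },
        }],
    },
)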

Example 2: PII Redaction


This Lambda function uses Amazon Comprehend, a natural language processing (NLP) service that uses
machine learning to find insights and relationships in text. It automatically redacts personally
identifiable information (PII) such as names, addresses, dates, credit card numbers, and social security
numbers from documents in your Amazon S3 bucket. If you have documents in your bucket that
include information such as credit card numbers or bank account information, you can configure the PII
Redaction S3 Object Lambda function to detect PII and then return a copy of these documents in which
PII entity types are redacted.

To get started, deploy the following Lambda function in your account and add its ARN to your
Object Lambda access point configuration.

ARN:

arn:aws:serverlessrepo:us-east-1:839782855223:applications/ComprehendPiiRedactionS3ObjectLambda

You can view this function in the AWS Management Console by using the following SAR link:
ComprehendPiiRedactionS3ObjectLambda.

To view this function on GitHub, see Amazon Comprehend S3 Object Lambda.

Example 3: Decompression
The Lambda function S3ObjectLambdaDecompression can decompress objects stored in S3 in one of six
compressed file formats: bzip2, gzip, snappy, zlib, zstandard, and ZIP. To get started, deploy the
following Lambda function in your account and add its ARN to your Object Lambda access point
configuration.

ARN:

arn:aws:serverlessrepo:eu-west-1:123065155563:applications/S3ObjectLambdaDecompression


You can view this function in the AWS Management Console by using the following SAR link:
S3ObjectLambdaDecompression.

To view this function on GitHub, see S3 Object Lambda Decompression.

Best practices and guidelines for S3 Object Lambda


When using S3 Object Lambda, follow these best practices and guidelines to optimize operations and
performance.

Topics
• Working with S3 Object Lambda (p. 181)
• AWS Services used in connection with S3 Object Lambda (p. 181)
• Working with Range and partNumber GET headers (p. 181)
• Working with AWS CLI and SDKs (p. 182)

Working with S3 Object Lambda


S3 Object Lambda only supports processing GetObject requests. Any non-GET requests, such as
ListObjects or HeadObject, will not invoke Lambda and will return standard, non-transformed API responses.
You can create a maximum of 1,000 Object Lambda access points per AWS account per Region. The
AWS Lambda function that you use must be in the same AWS account and Region as the Object Lambda
access point.

S3 Object Lambda allows up to 60 seconds to stream a complete response to its caller. Your function
is also subject to Lambda default quotas. For more information, see Lambda quotas in the AWS
Lambda Developer Guide. Using S3 Object Lambda invokes your specified Lambda function and you are
responsible for ensuring that any data overwritten or deleted from S3 by your specified Lambda function
or application is intended and correct.

You can only use S3 Object Lambda to perform operations on objects. You cannot use it to perform
other Amazon S3 operations, such as modifying or deleting buckets. For a complete list of S3 operations
that support access points, see Access point compatibility with AWS services (p. 198).

In addition to this list, S3 Object Lambda access points do not support POST Object, Copy (as the source),
or Select Object Content.

AWS Services used in connection with S3 Object Lambda


S3 Object Lambda connects Amazon S3, AWS Lambda, and optionally, other AWS services of your
choosing to deliver objects relevant to requesting applications. All AWS services used in connection with
S3 Object Lambda will continue to be governed by their respective Service Level Agreements (SLA). For
example, in the event that any AWS service does not meet its Service Commitment, you will be eligible to
receive a Service Credit as documented in the service’s SLA.

Working with Range and partNumber GET headers


When working with large objects, you can use the Range HTTP header to download a specified byte
range from an object, fetching only the specified portion. You can use concurrent connections to Amazon
S3 to fetch different byte ranges from within the same object. You can also use partNumber (an integer
between 1 and 10,000), which effectively performs a 'ranged' GET request for the specified part of the
object. For more information, see GetObject Request Syntax in the Amazon Simple Storage Service API
Reference.

When it receives a GET request, S3 Object Lambda invokes your specified Lambda function first.
Therefore, if your GET request contains range or part number parameters, you must ensure that your
Lambda function is equipped to recognize and manage these parameters. Because there can be multiple
entities connected in such a setup (the requesting client and services such as Lambda and S3), we advise
that all involved entities interpret the requested range (or partNumber) in a uniform manner. This ensures
that the ranges the application is expecting match the ranges your Lambda function is processing.
When building a function to handle requests with range headers, test all combinations of response sizes,
original object sizes, and request range sizes that your application plans to use.

By default, S3 Object Lambda access points respond with a 501 (Not Implemented) error to any GetObject
request that contains a range or part number parameter, either in the headers or query parameters. You
can confirm that your Lambda function is prepared to handle range or part requests by updating your
Object Lambda access point configuration through the AWS Management Console or the AWS CLI.

Working with AWS CLI and SDKs


AWS CLI S3 subcommands (cp, mv, and sync) and the use of Transfer Manager are not supported in
conjunction with S3 Object Lambda.

Security considerations for S3 Object Lambda access points
S3 Object Lambda gives customers the ability to perform custom transformations on data as it leaves
S3, using the scale and flexibility of AWS Lambda as a compute platform. S3 and Lambda remain secure
by default, but special consideration by the Lambda author is required in order to maintain this security.
S3 Object Lambda requires that all access be made by authenticated principals (no anonymous access)
and over HTTPS.

To maintain this security, we recommend that the Lambda execution role be carefully scoped to the smallest
set of privileges possible. Additionally, the Lambda function should make its S3 accesses through the
provided presigned URL whenever possible.

Configuring IAM policies


S3 access points support AWS Identity and Access Management (IAM) resource policies that allow you
to control the use of the access point by resource, user, or other conditions. For more information, see
Configuring IAM policies for Object Lambda access points (p. 166).

Encryption behavior
Because Object Lambda access points use both Amazon S3 and AWS Lambda, there are differences in
encryption behavior. For more information about default S3 encryption behavior, see Setting default
server-side encryption behavior for Amazon S3 buckets (p. 40).

• When you use S3 server-side encryption with Object Lambda access points, the object is decrypted
before being sent to AWS Lambda, where it is processed unencrypted, all the way to the original caller
(in the case of a GET request).
• To prevent the key from being logged, S3 rejects GET requests for objects encrypted through server-side
encryption with customer-provided keys. The Lambda function may still retrieve these objects, provided
that it has access to the customer-provided key.
• When you use S3 client-side encryption with Object Lambda access points, make sure that Lambda has
access to the key to decrypt and re-encrypt the object.

Access points security


S3 Object Lambda uses two access points, an Object Lambda access point and a standard S3 access
point, referred to as the supporting access point. When you make a request to an Object Lambda access
point, S3 either invokes Lambda on your behalf or delegates the request to the supporting access point,
depending upon the S3 Object Lambda configuration. When Lambda is invoked for GetObject, S3 will
generate a pre-signed URL to your object on your behalf through the supporting access point. Your
Lambda function will receive this URL as input when invoked.

You may set your Lambda function to use this URL to retrieve the original object, instead of invoking
S3 directly. This model allows you to apply better security boundaries to your objects. You can limit
direct object access through S3 buckets or S3 access points to a limited set of IAM roles or users. This
also protects your Lambda functions from being subject to the Confused Deputy problem, where a
misconfigured function with different permissions than your GetObject invoker could allow or deny
access to objects when it should not.

Object Lambda Access Point public access


S3 Object Lambda does not allow anonymous or public access because Amazon S3 needs to authorize
your identity to complete any S3 Object Lambda request. When invoking GetObject requests through
an Object Lambda access point, you need the lambda:InvokeFunction permission for the configured
Lambda function. Similarly, when invoking other APIs through an Object Lambda access point, you need
to have the required s3:* permissions.

Without these permissions, requests to invoke Lambda or delegate to S3 will fail with a 403 Forbidden
error. All access must be made by authenticated principals. If you require public access, you can use
Lambda@Edge as a possible alternative. For more information, see Customizing at the edge with
Lambda@Edge in the Amazon CloudFront Developer Guide.

For more information about standard access points, see Managing data access with Amazon S3 access
points (p. 184).

For information about working with buckets, see Buckets overview (p. 24). For information about
working with objects, see Amazon S3 objects overview (p. 57).


Managing data access with Amazon S3 access points
Amazon S3 access points simplify data access for any AWS service or customer application that stores
data in S3. Access points are named network endpoints that are attached to buckets that you can use
to perform S3 object operations, such as GetObject and PutObject. Each access point has distinct
permissions and network controls that S3 applies for any request that is made through that access point.
Each access point enforces a customized access point policy that works in conjunction with the bucket
policy that is attached to the underlying bucket. You can configure any access point to accept requests
only from a virtual private cloud (VPC) to restrict Amazon S3 data access to a private network. You can
also configure custom block public access settings for each access point.
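For example, once an access point exists, an SDK caller can address it by its ARN in place of a bucket name when performing object operations. The following is a minimal boto3 sketch; the access point ARN and object key are placeholders.

# A minimal sketch of reading an object through an access point by passing its ARN as the Bucket.
# The access point ARN and object key are placeholders.
import boto3

s3 = boto3.client("s3")

response = s3.get_object(
    Bucket="arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point",
    Key="my-object.txt",
)
print(response["Body"].read())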
Note

• You can only use access points to perform operations on objects. You can't use access points
to perform other Amazon S3 operations, such as modifying or deleting buckets. For a
complete list of S3 operations that support access points, see Access point compatibility with
AWS services (p. 198).
• Access points work with some, but not all, AWS services and features. For example, you can't
configure Cross-Region Replication to operate through an access point. For a complete list of
AWS services that are compatible with S3 access points, see Access point compatibility with
AWS services (p. 198).

This section explains how to work with Amazon S3 access points. For information about working with
buckets, see Buckets overview (p. 24). For information about working with objects, see Amazon S3
objects overview (p. 57).

Topics
• Configuring IAM policies for using access points (p. 184)
• Creating access points (p. 189)
• Using access points (p. 193)
• Access points restrictions and limitations (p. 200)

Configuring IAM policies for using access points


Amazon S3 access points support AWS Identity and Access Management (IAM) resource policies that
allow you to control the use of the access point by resource, user, or other conditions. For an application
or user to be able to access objects through an access point, both the access point and the underlying
bucket must permit the request.
Important
Adding an S3 access point to a bucket doesn't change the bucket's behavior when accessed
through the existing bucket name or ARN. All existing operations against the bucket will
continue to work as before. Restrictions that you include in an access point policy apply only to
requests made through that access point.


Condition keys
S3 access points introduce three new condition keys that can be used in IAM policies to control access to
your resources:

s3:DataAccessPointArn

This is a string that you can use to match on an access point ARN. The following example matches all
access points for AWS account 123456789012 in Region us-west-2:

"Condition" : {
"StringLike": {
"s3:DataAccessPointArn": "arn:aws:s3:us-west-2:123456789012:accesspoint/*"
}
}

s3:DataAccessPointAccount

This is a string operator that you can use to match on the account ID of the owner of an access point.
The following example matches all access points owned by AWS account 123456789012.

"Condition" : {
"StringEquals": {
"s3:DataAccessPointAccount": "123456789012"
}
}

s3:AccessPointNetworkOrigin

This is a string operator that you can use to match on the network origin, either Internet or VPC.
The following example matches only access points with a VPC origin.

"Condition" : {
"StringEquals": {
"s3:AccessPointNetworkOrigin": "VPC"
}
}

For more information about using condition keys with Amazon S3, see Actions, resources, and condition
keys for Amazon S3 (p. 310).

Delegating access control to access points


You can delegate access control for a bucket to the bucket's access points. The following example bucket
policy allows full access to all access points owned by the bucket owner's account. Thus, all access to
this bucket is controlled by the policies attached to its access points. We recommend configuring your
buckets this way for all use cases that don't require direct access to the bucket.

Example Bucket policy delegating access control to access points

{
"Version": "2012-10-17",
"Statement" : [
{
"Effect": "Allow",


"Principal" : { "AWS": "*" },


"Action" : "*",
"Resource" : [ "Bucket ARN", "Bucket ARN/*"],
"Condition": {
"StringEquals" : { "s3:DataAccessPointAccount" : "Bucket owner's account ID" }
}
}]
}

Access point policy examples


The following examples demonstrate how to create IAM policies to control requests made through an
access point.
Note
Permissions granted in an access point policy are only effective if the underlying bucket also
allows the same access. You can accomplish this in two ways:

1. (Recommended) Delegate access control from the bucket to the access point as described in
Delegating access control to access points (p. 185).
2. Add the same permissions contained in the access point policy to the underlying bucket's
policy. The first access point policy example demonstrates how to modify the underlying
bucket policy to allow the necessary access.

Example Access point policy grant

The following access point policy grants IAM user Alice in account 123456789012 permissions to
GET and PUT objects with the prefix Alice/ through access point my-access-point in account
123456789012.

{
"Version":"2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:user/Alice"
},
"Action": ["s3:GetObject", "s3:PutObject"],
"Resource": "arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point/object/
Alice/*"
}]
}

Note
For the access point policy to effectively grant access to Alice, the underlying bucket must also
allow the same access to Alice. You can delegate access control from the bucket to the access
point as described in Delegating access control to access points (p. 185). Or, you can add the
following policy to the underlying bucket to grant the necessary permissions to Alice. Note that
the Resource entry differs between the access point and bucket policies.

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {


"AWS": "arn:aws:iam::123456789012:user/Alice"
},
"Action": ["s3:GetObject", "s3:PutObject"],
"Resource": "arn:aws:s3:::awsexamplebucket1/Alice/*"
}]
}

Example Access point policy with tag condition

The following access point policy grants IAM user Bob in account 123456789012 permissions to GET
objects through access point my-access-point in account 123456789012 that have the tag key data
set with a value of finance.

{
"Version":"2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Principal" : {
"AWS": "arn:aws:iam::123456789012:user/Bob"
},
"Action":"s3:GetObject",
"Resource" : "arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point/object/
*",
"Condition" : {
"StringEquals": {
"s3:ExistingObjectTag/data": "finance"
}
}
}]
}

Example Access point policy allowing bucket listing

The following access point policy allows IAM user Charles in account 123456789012 permission
to view the objects contained in the bucket underlying access point my-access-point in account
123456789012.

{
"Version":"2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:user/Charles"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point"
}]
}

Example Service control policy

The following service control policy requires all new access points to be created with a VPC network
origin. With this policy in place, users in your organization can't create new access points that are
accessible from the internet.

{
"Version": "2012-10-17",


"Statement": [
{
"Effect": "Deny",
"Principal": "*",
"Action": "s3:CreateAccessPoint",
"Resource": "*",
"Condition": {
"StringNotEquals": {
"s3:AccessPointNetworkOrigin": "VPC"
}
}
}]
}

Example Bucket policy to limit S3 operations to VPC network origins

The following bucket policy limits access to all S3 object operations for bucket examplebucket to
access points with a VPC network origin.
Important
Before using a statement like this example, make sure you don't need to use features that aren't
supported by access points, such as Cross-Region Replication.

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Principal": "*",
"Action": [
"s3:AbortMultipartUpload",
"s3:BypassGovernanceRetention",
"s3:DeleteObject",
"s3:DeleteObjectTagging",
"s3:DeleteObjectVersion",
"s3:DeleteObjectVersionTagging",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectLegalHold",
"s3:GetObjectRetention",
"s3:GetObjectTagging",
"s3:GetObjectVersion",
"s3:GetObjectVersionAcl",
"s3:GetObjectVersionTagging",
"s3:ListMultipartUploadParts",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:PutObjectLegalHold",
"s3:PutObjectRetention",
"s3:PutObjectTagging",
"s3:PutObjectVersionAcl",
"s3:PutObjectVersionTagging",
"s3:RestoreObject"
],
"Resource": "arn:aws:s3:::examplebucket/*",
"Condition": {
"StringNotEquals": {
"s3:AccessPointNetworkOrigin": "VPC"
}
}
}
]
}


Creating access points


Amazon S3 provides functionality for creating and managing access points. You can create S3 access
points using the AWS Management Console, AWS Command Line Interface (AWS CLI), AWS SDKs, or
Amazon S3 REST API.

By default, you can create up to 1,000 access points per Region for each of your AWS accounts. If you
need more than 1,000 access points for a single account in a single Region, you can request a service
quota increase. For more information about service quotas and requesting an increase, see AWS Service
Quotas in the AWS General Reference.
Note
Because you might want to publicize your access point name in order to allow users to use the
access point, we recommend that you avoid including sensitive information in the access point
name.

Rules for naming Amazon S3 access points


Access point names must meet the following conditions:

• Must be unique within a single AWS account and Region


• Must comply with DNS naming restrictions
• Must begin with a number or lowercase letter
• Must be between 3 and 50 characters long
• Can't begin or end with a dash
• Can't contain underscores, uppercase letters, or periods
• Can't end with the suffix -s3alias. This suffix is reserved for access point alias names. For more
information, see Using a bucket-style alias for your access point (p. 196).

To create an access point, see the topics below.

Topics
• Creating an access point (p. 189)
• Creating access points restricted to a virtual private cloud (p. 190)
• Managing public access to access points (p. 192)

Creating an access point


An access point is associated with exactly one Amazon S3 bucket. Before you begin, make sure that you
have created a bucket that you want to use with this access point. For more information about creating
buckets, see Creating, configuring, and working with Amazon S3 buckets (p. 24). Amazon S3 access
points support AWS Identity and Access Management (IAM) resource policies that allow you to control
the use of the access point by resource, user, or other conditions. For more information, see Configuring
IAM policies for using access points (p. 184).

By default, you can create up to 1,000 access points per Region for each of your AWS accounts. If you
need more than 1,000 access points for a single account in a single Region, you can request a service
quota increase. For more information about service quotas and requesting an increase, see AWS Service
Quotas in the AWS General Reference.

The following examples demonstrate how to create an access point with the AWS CLI and the S3 console.
For more information about how to create access points using the REST API, see CreateAccessPoint in the
Amazon Simple Storage Service API Reference.


Using the S3 console


To create an access point

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the navigation pane on the left side of the console, choose Access points.
3. On the access points page, choose Create access point.
4. In the Access point name field, enter your desired name for the access point. For more information
about naming access points, see Rules for naming Amazon S3 access points (p. 189).
5. In the Bucket name field, enter the name of a bucket in your account to which you want to attach
the access point, for example DOC-EXAMPLE-BUCKET1. Optionally, you can choose Browse S3 to
browse and search buckets in your account. If you choose Browse S3, select the desired bucket and
choose Choose path to populate the Bucket name field with that bucket's name.
6. (Optional) Choose View to view the contents of the specified bucket in a new browser window.
7. Select a Network origin. If you choose Virtual private cloud (VPC), enter the VPC ID that you want
to use with the access point.

For more information about network origins for access points, see Creating access points restricted
to a virtual private cloud (p. 190).
8. Under Block Public Access settings for this Access Point, select the block public access settings
that you want to apply to the access point. All block public access settings are enabled by default for
new access points, and we recommend that you leave all settings enabled unless you know you have
a specific need to disable any of them. Amazon S3 currently doesn't support changing an access
point's block public access settings after the access point has been created.

For more information about using Amazon S3 Block Public Access with access points, see Managing
public access to access points (p. 192).
9. (Optional) Under Access Point policy - optional, specify the access point policy. For more
information about specifying an access point policy, see Access point policy examples (p. 186).
10. Choose Create access point.

Using the AWS CLI


The following example creates an access point named example-ap for bucket example-bucket in
account 123456789012. To create the access point, you send a request to Amazon S3, specifying the
access point name, the name of the bucket that you want to associate the access point with, and the
account ID for the AWS account that owns the bucket. For information about naming rules, see the
section called “Rules for naming Amazon S3 access points” (p. 189).

aws s3control create-access-point --name example-ap --account-id 123456789012 --bucket example-bucket
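The same call can be made from the AWS SDKs. A boto3 sketch using the values from the CLI example above might look like the following; the printed response shape may vary by SDK version.

# A minimal sketch of creating the same access point with the AWS SDK for Python (Boto3).
import boto3

s3control = boto3.client("s3control")

response = s3control.create_access_point(
    AccountId="123456789012",
    Name="example-ap",
    Bucket="example-bucket",
)
print(response)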

Creating access points restricted to a virtual private cloud
When you create an access point, you can choose to make the access point accessible from the internet,
or you can specify that all requests made through that access point must originate from a specific virtual
private cloud (VPC). An access point that's accessible from the internet is said to have a network origin of
Internet. It can be used from anywhere on the internet, subject to any other access restrictions in place
for the access point, underlying bucket, and related resources, such as the requested objects. An access
point that's only accessible from a specified VPC has a network origin of VPC, and Amazon S3 rejects any
request made to the access point that doesn't originate from that VPC.
Important
You can only specify an access point's network origin when you create the access point. After
you create the access point, you can't change its network origin.

To restrict an access point to VPC-only access, you include the VpcConfiguration parameter with the
request to create the access point. In the VpcConfiguration parameter, you specify the ID of the VPC
that you want to be able to use the access point. Amazon S3 then rejects requests made through the access
point unless they originate from that VPC.

You can retrieve an access point's network origin using the AWS CLI, AWS SDKs, or REST APIs. If an access
point has a VPC configuration specified, its network origin is VPC. Otherwise, the access point's network
origin is Internet.

Example: Create an access point restricted to VPC access

The following example creates an access point named example-vpc-ap for bucket example-bucket
in account 123456789012 that allows access only from VPC vpc-1a2b3c. The example then verifies
that the new access point has a network origin of VPC.

AWS CLI

aws s3control create-access-point --name example-vpc-ap --account-id 123456789012 --bucket example-bucket --vpc-configuration VpcId=vpc-1a2b3c

aws s3control get-access-point --name example-vpc-ap --account-id 123456789012

{
    "Name": "example-vpc-ap",
    "Bucket": "example-bucket",
    "NetworkOrigin": "VPC",
    "VpcConfiguration": {
        "VpcId": "vpc-1a2b3c"
    },
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": true,
        "IgnorePublicAcls": true,
        "BlockPublicPolicy": true,
        "RestrictPublicBuckets": true
    },
    "CreationDate": "2019-11-27T00:00:00Z"
}

To use an access point with a VPC, you must modify the access policy for your VPC endpoint. VPC
endpoints allow traffic to flow from your VPC to Amazon S3. They have access-control policies that
control how resources within the VPC are allowed to interact with S3. Requests from your VPC to S3 only
succeed through an access point if the VPC endpoint policy grants access to both the access point and
the underlying bucket.

The following example policy statement configures a VPC endpoint to allow calls to GetObject for a
bucket named awsexamplebucket1 and an access point named example-vpc-ap.

{
    "Version": "2012-10-17",
    "Statement": [
    {
        "Principal": "*",
        "Action": [
            "s3:GetObject"
        ],
        "Effect": "Allow",
        "Resource": [
            "arn:aws:s3:::awsexamplebucket1/*",
            "arn:aws:s3:us-west-2:123456789012:accesspoint/example-vpc-ap/object/*"
        ]
    }]
}

Note
The "Resource" declaration in this example uses an Amazon Resource Name (ARN) to
specify the access point. For more information about access point ARNs, see Using access
points (p. 193).

For more information about VPC endpoint policies, see Using Endpoint Policies for Amazon S3 in the
VPC User Guide.

Managing public access to access points


Amazon S3 access points support independent block public access settings for each access point. When
you create an access point, you can specify block public access settings that apply to that access point.
For any request made through an access point, Amazon S3 evaluates the block public access settings for
that access point, the underlying bucket, and the bucket owner's account. If any of these settings indicate
that the request should be blocked, Amazon S3 rejects the request.

For more information about the S3 Block Public Access feature, see Blocking public access to your
Amazon S3 storage (p. 488).
Important

• All block public access settings are enabled by default for access points. You must explicitly
disable any settings that you don't want to apply to an access point.
• Amazon S3 currently doesn't support changing an access point's block public access settings
after the access point has been created.

Example: Create an access point with custom Block Public Access settings

This example creates an access point named example-ap for bucket example-bucket in account
123456789012 with non-default Block Public Access settings. The example then retrieves the new
access point's configuration to verify its Block Public Access settings.

AWS CLI

aws s3control create-access-point --name example-ap --account-id 123456789012 --bucket example-bucket --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=true,RestrictPublicBuckets=true

aws s3control get-access-point --name example-ap --account-id 123456789012

{
    "Name": "example-ap",
    "Bucket": "example-bucket",
    "NetworkOrigin": "Internet",
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": false,
        "IgnorePublicAcls": false,
        "BlockPublicPolicy": true,
        "RestrictPublicBuckets": true
    },
    "CreationDate": "2019-11-27T00:00:00Z"
}

Using access points


You can access the objects in an Amazon S3 bucket with an access point using the AWS Management
Console, AWS CLI, AWS SDKs, or the S3 REST APIs.

Access points have Amazon Resource Names (ARNs). Access point ARNs are similar to bucket ARNs, but
they are explicitly typed and encode the access point's Region and the AWS account ID of the access
point's owner. For more information about ARNs, see Amazon Resource Names (ARNs) in the AWS
General Reference.

Access point ARNs use the format arn:aws:s3:region:account-id:accesspoint/resource. For example:

• arn:aws:s3:us-west-2:123456789012:accesspoint/test represents the access point named test, owned by account 123456789012 in Region us-west-2.
• arn:aws:s3:us-west-2:123456789012:accesspoint/* represents all access points under account 123456789012 in Region us-west-2.

ARNs for objects accessed through an access point use the format arn:aws:s3:region:account-
id:accesspoint/access-point-name/object/resource. For example:

• arn:aws:s3:us-west-2:123456789012:accesspoint/test/object/unit-01 represents the object unit-01, accessed through the access point named test, owned by account 123456789012 in Region us-west-2.
• arn:aws:s3:us-west-2:123456789012:accesspoint/test/object/* represents all objects for access point test, in account 123456789012 in Region us-west-2.
• arn:aws:s3:us-west-2:123456789012:accesspoint/test/object/unit-01/finance/* represents all objects under prefix unit-01/finance/ for access point test, in account 123456789012 in Region us-west-2.

Topics
• Monitoring and logging access points (p. 193)
• Using Amazon S3 access points with the Amazon S3 console (p. 194)
• Using a bucket-style alias for your access point (p. 196)
• Using access points with compatible Amazon S3 operations (p. 197)

Monitoring and logging access points


Amazon S3 logs requests made through access points and requests made to the APIs that manage access
points, such as CreateAccessPoint and GetAccessPointPolicy.

For example, suppose that you have the following configuration:

• A bucket named DOC-EXAMPLE-BUCKET1 in Region us-west-2 that contains object my-image.jpg
• An access point named my-bucket-ap that is associated with DOC-EXAMPLE-BUCKET1
• Your AWS account ID is 123456789012


CloudTrail log entries for requests made through access points will include the access point ARN in the
resources section of the log.

"resources": [
{
"type": "AWS::S3::Object",
"ARN": "arn:aws:s3:::DOC-EXAMPLE-BUCKET1/my-image.jpg"
},
{
"accountId": "123456789012",
"type": "AWS::S3::Bucket",
"ARN": "arn:aws:s3:::DOC-EXAMPLE-BUCKET1"
},
{
"accountId": "DOC-EXAMPLE-BUCKET1",
"type": "AWS::S3::AccessPoint",
"ARN": "arn:aws:s3:us-west-2:DOC-EXAMPLE-BUCKET1:accesspoint/my-bucket-ap"
}
]
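
If you want to find these access point management events from the command line, you can query the CloudTrail event history with the AWS CLI. The following is a minimal sketch; the Region, event name, and result limit are illustrative. Note that this lookup returns management events such as CreateAccessPoint; data events, such as object requests made through an access point, are logged only if you have a trail configured to capture S3 data events.

aws cloudtrail lookup-events --region us-west-2 --lookup-attributes AttributeKey=EventName,AttributeValue=CreateAccessPoint --max-results 5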

For more information about S3 Server Access Logs, see Logging requests using server access
logging (p. 833). For more information about AWS CloudTrail, see What is AWS CloudTrail? in the AWS
CloudTrail User Guide.
Note
S3 access points aren't currently compatible with Amazon CloudWatch metrics.

Using Amazon S3 access points with the Amazon S3 console

This section explains how to manage and use your Amazon S3 access points using the AWS Management
Console. Before you begin, navigate to the detail page for the access point you want to manage or use,
as described in the following procedure.

Topics
• Listing access points for your account (p. 194)
• Listing access points for a bucket (p. 195)
• Viewing configuration details for an access point (p. 195)
• Using an access point (p. 195)
• Viewing block public access settings for an access point (p. 195)
• Editing an access point policy (p. 195)
• Deleting an access point (p. 196)

Listing access points for your account


1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the navigation pane on the left side of the console, choose Access points.
3. On the access points page, under Access points, select the AWS Region that contains the access
points that you want to list.
4. (Optional) Search for access points by name by entering a name into the text field next to the
Region dropdown menu.
5. Choose the name of the access point you want to manage or use.


Listing access points for a bucket


To list all access points for a single bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the navigation pane on the left side of the console, choose Buckets.
3. On the Buckets page, select the name of the bucket whose access points you want to list.
4. On the bucket detail page, choose the access points tab.
5. Choose the name of the access point you want to manage or use.

Viewing configuration details for an access point


1. Navigate to the access point detail page for the access point whose details you want to view, as
described in Listing access points for your account (p. 194).
2. Under access point overview, view configuration details and properties for the selected access
point.

Using an access point


1. Navigate to the access point detail page for the access point you want to use, as described in Listing
access points for your account (p. 194).
2. Under the Objects tab, choose the name of an object or objects that you want to access through the
access point. On the object operation pages, the console displays a label above the name of your
bucket that shows the access point that you're currently using. While you're using the access point,
you can only perform the object operations that are allowed by the access point permissions.
Note

• The console view always shows all objects in the bucket. Using an access point as
described in this procedure restricts the operations you can perform on those objects, but
not whether you can see that they exist in the bucket.
• The S3 Management Console doesn't support using virtual private cloud (VPC) access
points to access bucket resources. To access bucket resources from a VPC access point, use
the AWS CLI, AWS SDKs, or Amazon S3 REST APIs.

Viewing block public access settings for an access point


1. Navigate to the access point detail page for the access point whose settings you want to view, as
described in Listing access points for your account (p. 194).
2. Choose Permissions.
3. Under access point policy, review the access point's Block Public Access settings.
Note
You can't change the Block Public Access settings for an access point after the access point
is created.

Editing an access point policy


1. Navigate to the access point detail page for the access point whose policy you want to edit, as
described in Listing access points for your account (p. 194).


2. Choose Permissions.
3. Under access point policy, choose Edit.
4. Enter the access point policy in the text field. The console automatically displays the Amazon
Resource Name (ARN) for the access point, which you can use in the policy.
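
If you manage access points with the AWS CLI instead of the console, you can apply the same policy by using the put-access-point-policy command. The following is a sketch; the account ID, access point name, and policy file name are placeholders.

aws s3control put-access-point-policy --account-id 123456789012 --name example-ap --policy file://ap-policy.json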

Deleting an access point


1. Navigate to the list of access points for your account or for a specific bucket, as described in Listing
access points for your account (p. 194).
2. Select the option button next to the name of the access point that you want to delete.
3. Choose Delete.
4. Confirm that you want to delete your access point by entering its name in the text field that appears,
and choose Delete.
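
You can also delete an access point with the AWS CLI. The following is a sketch; the account ID and access point name are placeholders. Deleting an access point doesn't delete the underlying bucket or its objects.

aws s3control delete-access-point --account-id 123456789012 --name example-ap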

Using a bucket-style alias for your access point


When you create an access point, Amazon S3 automatically generates an alias that you can use instead
of an Amazon S3 bucket name for data access. You can use this access point alias instead of an Amazon
Resource Name (ARN) for any access point data plane operation. For a list of these operations, see Access
point compatibility with AWS services (p. 198).

The following shows an example ARN and access point alias for an access point named my-access-
point.

• ARN — arn:aws:s3:region:account-id:accesspoint/my-access-point
• Access point alias — my-access-point-hrzrlukc5m36ft7okagglf3gmwluquse1b-s3alias

For more information about ARNs, see Amazon Resource Names (ARNs) in the AWS General Reference.

Access point alias names


An access point alias name is created within the same namespace as an Amazon S3 bucket. This alias
name is automatically generated and cannot be changed. An access point alias name meets all the
requirements of a valid Amazon S3 bucket name and consists of the following parts:

[Access point prefix]-[Metadata]-s3alias


Note
The -s3alias suffix is reserved for access point alias names and can't be used for bucket or
access point names. For more information about Amazon S3 bucket naming rules, see Bucket
naming rules (p. 27).
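
If you need to look up the alias for an access point that already exists, you can retrieve the access point's configuration, which includes the alias, with the AWS CLI. The following is a sketch; the account ID and access point name are placeholders.

aws s3control get-access-point --account-id 111122223333 --name my-access-point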

Access point alias use cases and limitations


When adopting access points, you can use access point alias names without requiring extensive code
changes.

When you create an access point, Amazon S3 automatically generates an access point alias name, as
shown in the following example.

aws s3control create-access-point --bucket DOC-EXAMPLE-BUCKET1 --name my-access-point --account-id 111122223333

{
    "AccessPointArn": "arn:aws:s3:region:111122223333:accesspoint/my-access-point",
    "Alias": "my-access-point-aqfqprnstn7aefdfbarligizwgyfouse1a-s3alias"
}

You can use this access point alias name instead of an Amazon S3 bucket name in any data plane
operation. For a list of these operations, see Access point compatibility with AWS services (p. 198).

aws s3api get-object --bucket my-access-point-aqfqprnstn7aefdfbarligizwgyfouse1a-s3alias --key dir/my_data.rtf my_data.rtf

{
    "AcceptRanges": "bytes",
    "LastModified": "2020-01-08T22:16:28+00:00",
    "ContentLength": 910,
    "ETag": "\"00751974dc146b76404bb7290f8f51bb\"",
    "VersionId": "null",
    "ContentType": "text/rtf",
    "Metadata": {}
}

Limitations

• Aliases cannot be configured by customers.
• Aliases cannot be deleted, modified, or disabled on an access point.
• You can use an access point alias name instead of an Amazon S3 bucket name in some data plane operations. For a list of these operations, see Access point compatibility with S3 operations (p. 198).
• You can't use an access point alias name for Amazon S3 control plane operations. For a list of Amazon S3 control plane operations, see Amazon S3 Control in the Amazon Simple Storage Service API Reference.
• Aliases cannot be used in IAM policies.
• Aliases cannot be used as a logging destination for S3 server access logs.
• Aliases cannot be used as a logging destination for AWS CloudTrail logs.
• Amazon SageMaker Ground Truth and Amazon SageMaker Feature Store do not support access point aliases.
• The Amazon Redshift UNLOAD command does not support using an access point alias.

Using access points with compatible Amazon S3 operations

The following examples demonstrate how to use access points with compatible operations in Amazon
S3.

Topics
• Access point compatibility with AWS services (p. 198)
• Access point compatibility with S3 operations (p. 198)
• Request an object through an access point (p. 198)
• Upload an object through an access point alias (p. 199)
• Delete an object through an access point (p. 199)
• List objects through an access point alias (p. 199)
• Add a tag set to an object through an access point (p. 199)
• Grant access permissions through an access point using an ACL (p. 199)


Access point compatibility with AWS services


Amazon S3 access point aliases allow any application that requires an S3 bucket name to easily use an
access point. You can use S3 access point aliases anywhere that you use S3 bucket names to access data in S3.
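
For example, because an access point alias is a valid bucket name for data access, you can list objects through an access point with a high-level AWS CLI command such as the following. This is a sketch; the alias shown is illustrative.

aws s3 ls s3://my-access-point-hrzrlukc5m36ft7okagglf3gmwluquse1b-s3alias/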

Access point compatibility with S3 operations


You can use access points to access a bucket using the following subset of Amazon S3 APIs. All the
operations listed below can accept either access point ARNs or access point aliases:

S3 operations

• AbortMultipartUpload
• CompleteMultipartUpload
• CopyObject (same-region copies only)
• CreateMultipartUpload
• DeleteObject
• DeleteObjectTagging
• GetBucketLocation
• GetObject
• GetObjectAcl
• GetObjectLegalHold
• GetObjectRetention
• GetObjectTagging
• HeadBucket
• HeadObject
• ListMultipartUploads
• ListObjects
• ListObjectsV2
• ListParts
• Presign
• PutObject
• PutObjectLegalHold
• PutObjectRetention
• PutObjectAcl
• PutObjectTagging
• RestoreObject
• UploadPart
• UploadPartCopy (same-region copies only)

Request an object through an access point


The following example requests the object my-image.jpg through the access point prod owned by
account ID 123456789012 in Region us-west-2, and saves the downloaded file as download.jpg.

AWS CLI

aws s3api get-object --key my-image.jpg --bucket arn:aws:s3:us-west-2:123456789012:accesspoint/prod download.jpg


Upload an object through an access point alias


The following example uploads the object my-image.jpg through the access point alias my-access-
point-hrzrlukc5m36ft7okagglf3gmwluquse1b-s3alias owned by account ID 123456789012 in
Region us-west-2.

AWS CLI

aws s3api put-object --bucket my-access-point-hrzrlukc5m36ft7okagglf3gmwluquse1b-s3alias --key my-image.jpg --body my-image.jpg

Delete an object through an access point


The following example deletes the object my-image.jpg through the access point prod owned by
account ID 123456789012 in Region us-west-2.

AWS CLI

aws s3api delete-object --bucket arn:aws:s3:us-west-2:123456789012:accesspoint/prod --key my-image.jpg

List objects through an access point alias


The following example lists objects through the access point alias my-access-point-
hrzrlukc5m36ft7okagglf3gmwluquse1b-s3alias owned by account ID 123456789012 in Region
us-west-2.

AWS CLI

aws s3api list-objects-v2 --bucket my-access-point-hrzrlukc5m36ft7okagglf3gmwluquse1b-s3alias

Add a tag set to an object through an access point


The following example adds a tag set to the existing object my-image.jpg through the access point
prod owned by account ID 123456789012 in Region us-west-2.

AWS CLI

aws s3api put-object-tagging --bucket arn:aws:s3:us-west-2:123456789012:accesspoint/prod --key my-image.jpg --tagging TagSet=[{Key="finance",Value="true"}]

Grant access permissions through an access point using an ACL


The following example applies an ACL to an existing object my-image.jpg through the access point
prod owned by account ID 123456789012 in Region us-west-2.

AWS CLI

aws s3api put-object-acl --bucket arn:aws:s3:us-west-2:123456789012:accesspoint/prod --key my-image.jpg --acl private


Access points restrictions and limitations


Amazon S3 access points have the following restrictions and limitations:

• You can only create access points for buckets that you own.
• Each access point is associated with exactly one bucket, which you must specify when you create the
access point. After you create an access point, you can't associate it with a different bucket. However,
you can delete an access point and then create another one with the same name associated with a
different bucket.
• Access point names must meet certain conditions. For more information about naming access points,
see Rules for naming Amazon S3 access points (p. 189).
• After you create an access point, you can't change its virtual private cloud (VPC) configuration.
• Access point policies are limited to 20 KB in size.
• You can create a maximum of 1,000 access points per AWS account per Region. If you need more than
1,000 access points for a single account in a single Region, you can request a service quota increase.
For more information about service quotas and requesting an increase, see AWS Service Quotas in the
AWS General Reference.
• You can't use an access point as a destination for S3 Replication. For more information about
replication, see Replicating objects (p. 623).
• You can only address access points using virtual-host-style URLs. For more information about virtual-
host-style addressing, see Accessing a bucket (p. 33).
• APIs that control access point functionality (for example, PutAccessPointPolicy and
GetAccessPointPolicy) don't support cross-account calls.
• You must use AWS Signature Version 4 when making requests to an access point using the REST APIs.
For more information about authenticating requests, see Authenticating Requests (AWS Signature
Version 4) in the Amazon Simple Storage Service API Reference.
• Access points only support access over HTTPS.
• Access points don't support anonymous access.


Multi-Region Access Points in Amazon S3

Amazon S3 Multi-Region Access Points provide a global endpoint that applications can use to fulfill
requests from S3 buckets located in multiple AWS Regions. You can use Multi-Region Access Points to
build multi-Region applications with the same simple architecture used in a single Region, and then
run those applications anywhere in the world. Instead of sending requests over the congested public
internet, Multi-Region Access Points provide built-in network resilience with acceleration of internet-
based requests to Amazon S3. Application requests made to a Multi-Region Access Point global endpoint
use AWS Global Accelerator to automatically route over the AWS global network to the S3 bucket with
the lowest network latency.

When you create a Multi-Region Access Point, you specify a set of Regions where you want to store data
to be served through that Multi-Region Access Point. You can use S3 Cross-Region Replication (CRR)
to synchronize data among buckets in those Regions. You can then request or write data through the
Multi-Region Access Point global endpoint. Amazon S3 automatically serves the request to the replicated
dataset from the available Region over the AWS global network with the lowest latency. Multi-Region
Access Points are also compatible with applications running in Amazon virtual private clouds (VPCs),
including those using AWS PrivateLink for Amazon S3 (p. 266).

The following is a graphical representation of a Multi-Region Access Point and how it routes requests to
buckets.


Topics
• Creating Multi-Region Access Points (p. 202)
• Making requests using a Multi-Region Access Point (p. 208)
• Managing Multi-Region Access Points (p. 213)
• Monitoring and logging requests made through a Multi-Region Access Point to underlying
resources (p. 214)
• Multi-Region Access Point restrictions and limitations (p. 216)

Creating Multi-Region Access Points


To create a Multi-Region Access Point in Amazon S3, you specify the name, choose one bucket in each
AWS Region that you want to serve requests for the Multi-Region Access Point, and configure the
Amazon S3 Block Public Access settings for the Multi-Region Access Point. You provide this information
in a create request, which Amazon S3 processes asynchronously. Amazon S3 provides a token that you
can use to monitor the status of the asynchronous creation request.


When you use the API, the request to create a Multi-Region Access Point is asynchronous. When you
submit a request to create a Multi-Region Access Point, Amazon S3 synchronously authorizes the
request. It then immediately returns a token that you can use to track the progress of the creation
request. For more information about tracking asynchronous requests to create and manage Multi-Region
Access Points, see Managing Multi-Region Access Points (p. 213).
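
For example, after you submit a create request with the AWS CLI, you can use the returned token to check the status of the asynchronous request, as sketched below. The account ID is a placeholder, and the request token ARN is the value that your create request returned.

aws s3control describe-multi-region-access-point-operation --account-id 111122223333 --request-token-arn replace-with-token-arn-from-create-request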

After you create the Multi-Region Access Point, you can create an access control policy for it. Each Multi-
Region Access Point can have an associated policy. A Multi-Region Access Point policy is a resource-
based policy that allows you to limit the use of the Multi-Region Access Point by resource, user, or other
conditions.
Note
For an application or user to be able to access an object through a Multi-Region Access Point,
both the access policy for the Multi-Region Access Point and the access policy for the underlying
buckets that contain the object must permit the request. When the two policies are different,
the more restrictive policy takes precedence.

Using a bucket with a Multi-Region Access Point does not change the bucket's behavior when the
bucket is accessed through the existing bucket name or an Amazon Resource Name (ARN). All existing
operations against the bucket continue to work as before. Restrictions that you include in a Multi-Region
Access Point policy apply only to requests that are made through the Multi-Region Access Point.

You can update the policy for a Multi-Region Access Point after creating it, but you can't delete the
policy. The closest possible approximation to deleting a policy is to update the Multi-Region Access Point
policy to deny all permissions.
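
For example, you can apply or update a Multi-Region Access Point policy with the AWS CLI, as sketched below. The account ID, Multi-Region Access Point name, principal, and alias in the Resource ARN are placeholders, and the Policy value is a policy document passed as a JSON string. Like the create request, this request is processed asynchronously.

aws s3control put-multi-region-access-point-policy --account-id 111122223333 --details '{
    "Name": "simple-multiregionaccesspoint-with-two-regions",
    "Policy": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"arn:aws:iam::111122223333:root\"},\"Action\":\"s3:GetObject\",\"Resource\":\"arn:aws:s3::111122223333:accesspoint/mfzwi23gnjvgw.mrap/object/*\"}]}"
}'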

Topics
• Rules for naming Amazon S3 Multi-Region Access Points (p. 203)
• Rules for choosing buckets for Amazon S3 Multi-Region Access Points (p. 204)
• Blocking public access with Amazon S3 Multi-Region Access Points (p. 204)
• Creating Amazon S3 Multi-Region Access Points (p. 205)
• Configuring a Multi-Region Access Point for use with AWS PrivateLink (p. 206)

Rules for naming Amazon S3 Multi-Region Access Points

When you create a Multi-Region Access Point, you give it a name, which is a string that you choose. You
can't change the name of the Multi-Region Access Point after it is created. The name must be unique in
your AWS account, and it must conform to the naming requirements listed in Multi-Region Access Point
restrictions and limitations (p. 216). To help you identify the Multi-Region Access Point, use a name
that is meaningful to you, to your organization, or that reflects the scenario.

You use this name when invoking Multi-Region Access Point management operations, such as
GetMultiRegionAccessPoint and PutMultiRegionAccessPointPolicy. The name is not
used to send requests to the Multi-Region Access Point, and it doesn't need to be exposed to clients who
make requests using the Multi-Region Access Point.
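
For example, you can retrieve the configuration of a Multi-Region Access Point by name with the AWS CLI, as sketched below; the account ID and name are placeholders. The response includes details such as the associated Regions and the alias described in the following paragraphs.

aws s3control get-multi-region-access-point --account-id 111122223333 --name simple-multiregionaccesspoint-with-two-regions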

When Amazon S3 creates a Multi-Region Access Point, it automatically assigns an alias to it. This alias is
a unique alphanumeric string that ends in .mrap. The alias is used to construct the hostname and the
Amazon Resource Name (ARN) for a Multi-Region Access Point. The fully qualified name is also based on
the alias for the Multi-Region Access Point.

You can’t determine the name of a Multi-Region Access Point from its alias, so you can disclose an
alias without risk of exposing the name, purpose, or owner of the Multi-Region Access Point. Amazon
S3 selects the alias for each new Multi-Region Access Point, and the alias can’t be changed. For more


information about addressing a Multi-Region Access Point, see Making requests using a Multi-Region
Access Point (p. 208).

Multi-Region Access Point aliases are unique throughout time and aren’t based on the name or
configuration of a Multi-Region Access Point. If you create a Multi-Region Access Point, and then delete
it and create another one with the same name and configuration, the second Multi-Region Access Point
will have a different alias than the first. New Multi-Region Access Points can never have the same alias as
a previous Multi-Region Access Point.

Rules for choosing buckets for Amazon S3 Multi-Region Access Points

Each Multi-Region Access Point is associated with the Regions where you want to fulfill requests. The
Multi-Region Access Point must be associated with exactly one bucket in each of those Regions. You
specify the name of each bucket in the request to create the Multi-Region Access Point. Each bucket that
supports the Multi-Region Access Point must be owned by the same AWS account that owns the Multi-
Region Access Point.

A single bucket can be used by multiple Multi-Region Access Points.


Important

• You can specify the buckets that are associated with a Multi-Region Access Point only at the
time that you create it. After it is created, you can’t add, modify, or remove buckets from the
Multi-Region Access Point configuration. To change the buckets, you must delete the entire
Multi-Region Access Point and create a new one.
• You can't delete a bucket that is part of a Multi-Region Access Point. If you want to delete a
bucket attached to a Multi-Region Access Point, delete the Multi-Region Access Point first.
• The AWS account that owns the Multi-Region Access Point must also own the associated
buckets. For more information about using permissions with Multi-Region Access Points, see
Multi-Region Access Point permissions (p. 210).
• Not all Regions support Multi-Region Access Points. To see the list of supported Regions, see
Multi-Region Access Point restrictions and limitations (p. 216).

You can create replication rules to synchronize data between buckets. These rules enable you to
automatically copy data from source buckets to destination buckets. Having buckets connected to a
Multi-Region Access Point does not affect how replication works. Configuring replication with Multi-
Region Access Points is described in a later section.

It is important to understand that when you make a request to a Multi-Region Access Point, the Multi-Region
Access Point doesn't take into account which bucket is able to fulfill the request. This is why
replication is recommended: without it, one of the buckets behind the Multi-Region Access Point might have
the necessary data, but there is no guarantee that it is the bucket that receives the request. For more information, see
Configuring bucket replication for use with Multi-Region Access Points (p. 212).
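
The following is a minimal replication sketch using the AWS CLI; it is not a complete Multi-Region Access Point setup. The IAM role, rule ID, and bucket names are placeholders, both buckets must have versioning enabled, and a second, mirrored rule on the other bucket is needed if you want two-way replication.

aws s3api put-bucket-replication --bucket DOC-EXAMPLE-BUCKET1 --replication-configuration '{
    "Role": "arn:aws:iam::111122223333:role/example-replication-role",
    "Rules": [
        {
            "ID": "replicate-to-second-region",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": { "Status": "Disabled" },
            "Destination": { "Bucket": "arn:aws:s3:::DOC-EXAMPLE-BUCKET2" }
        }
    ]
}'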

Blocking public access with Amazon S3 Multi-Region Access Points

Each Multi-Region Access Point has distinct settings for Amazon S3 Block Public Access. These settings
operate in conjunction with the Block Public Access settings for the buckets that underly the Multi-
Region Access Point and for the AWS account that owns both the Multi-Region Access Point and the
underlying buckets.

When Amazon S3 authorizes a request, it applies the most restrictive combination of these settings. If
the Block Public Access settings for any of these resources (the Multi-Region Access Point, the underlying


bucket, or the owner account) block access for the requested action or resource, Amazon S3 rejects the
request.

We recommend that you enable all Block Public Access settings unless you have a specific need to
disable any of them. By default, all Block Public Access settings are enabled for a Multi-Region Access
Point. Be aware that if Block Public Access is enabled, the Multi-Region Access Point will not be able to
accept internet-based requests.
Important
Amazon S3 doesn’t currently support changing the Block Public Access settings for a Multi-
Region Access Point after it has been created.

For more information about Amazon S3 Block Public Access, see Blocking public access to your Amazon
S3 storage (p. 488).

Creating Amazon S3 Multi-Region Access Points


The following example demonstrates how to create a Multi-Region Access Point using the AWS
Management Console.

Using the S3 console


To create a Multi-Region Access Point

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the navigation pane, choose Multi-Region Access Points.
3. In the Multi-Region Access Point name field, supply a name for the Multi-Region Access Point.
4. To select the buckets that will be associated with this Multi-Region Access Point, choose Add buckets.

To create a new bucket, choose Create bucket. After creating the bucket, choose Add buckets to add
the bucket to the Multi-Region Access Point.

For more information about creating buckets, see Creating a bucket (p. 28).
5. Under Block Public Access settings for this Multi-Region Access Point, select the Block Public
Access settings that you want to apply to the Multi-Region Access Point. By default, all Block Public
Access settings are enabled for new Multi-Region Access Points. We recommend that you leave all
settings enabled unless you know that you have a specific need to disable any of them.
Note
Amazon S3 currently doesn't support changing a Multi-Region Access Point's Block Public
Access settings after the Multi-Region Access Point has been created.
6. Choose Create Multi-Region Access Point.

Using the AWS CLI


You can use the AWS CLI to create a Multi-Region Access Point. Remember that when you create the
Multi-Region Access Point, you need to provide all the buckets it will support. There is no option to add
buckets to the Multi-Region Access Point after it has been created.

The following example creates a Multi-Region Access Point with two buckets using the AWS CLI.

aws s3control create-multi-region-access-point --account-id 111122223333 --details '{
    "Name": "simple-multiregionaccesspoint-with-two-regions",
    "PublicAccessBlock": {
        "BlockPublicAcls": true,
        "IgnorePublicAcls": true,
        "BlockPublicPolicy": true,
        "RestrictPublicBuckets": true
    },
    "Regions": [
        { "Bucket": "DOC-EXAMPLE-BUCKET1" },
        { "Bucket": "DOC-EXAMPLE-BUCKET2" }
    ]
}'

Topics
• Configuring a Multi-Region Access Point for use with AWS PrivateLink (p. 206)

Configuring a Multi-Region Access Point for use with AWS PrivateLink

AWS PrivateLink provides you with private connectivity to Amazon S3 using private IP addresses in
your virtual private cloud (VPC). You can provision one or more interface endpoints inside your VPC to
connect to Amazon S3 Multi-Region Access Points.

You can create com.amazonaws.s3-global.accesspoint endpoints for Multi-Region Access Points
through the AWS Management Console, AWS CLI, or AWS SDKs. To learn more about how to configure an
interface endpoint for Multi-Region Access Points, see Interface VPC endpoints in the VPC User Guide.
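
For example, you might create the interface endpoint with the AWS CLI as sketched below. The VPC ID, subnet ID, and security group ID are placeholders that you would replace with values from your own VPC.

aws ec2 create-vpc-endpoint --vpc-id vpc-1a2b3c --vpc-endpoint-type Interface --service-name com.amazonaws.s3-global.accesspoint --subnet-ids subnet-0abc1234567890def --security-group-ids sg-0abc1234567890def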

To make requests to a Multi-Region Access Point via interface endpoints, follow these steps to configure
the VPC and the Multi-Region Access Point.

To configure a Multi-Region Access Point to use with AWS PrivateLink

1. Create or have an appropriate VPC endpoint that can connect to Multi-Region Access Points. For
more information about creating VPC endpoints, see Interface VPC endpoints in the VPC User Guide.
Important
Make sure to create a com.amazonaws.s3-global.accesspoint endpoint. Other endpoint
types cannot access Multi-Region Access Points.

After this VPC endpoint is created, all Multi-Region Access Point requests in the VPC route through
this endpoint if you have private DNS enabled for the endpoint. This is enabled by default.
2. If the Multi-Region Access Point policy does not support connections from VPC endpoints, you will
need to update it.
3. Verify that the individual bucket policies will allow access to the users of the Multi-Region Access
Point.

Remember that Multi-Region Access Points work by routing requests to buckets, not by fulfilling requests
themselves. This is important to remember because the originator of the request must have permissions
to the Multi-Region Access Point and be allowed to access the individual buckets in the Multi-Region
Access Point. Otherwise, the request might be routed to a bucket where the originator doesn't have
permissions to fulfill the request. A Multi-Region Access Point and the buckets must be owned by the
same AWS account. However, VPCs from different accounts can use a Multi-Region Access Point if the
permissions are configured correctly.

Because of this, the VPC endpoint policy must allow access both to the Multi-Region Access Point
and to each underlying bucket that you want to be able to fulfill requests. For example, suppose
that you have a Multi-Region Access Point with alias mfzwi23gnjvgw.mrap. It is backed by buckets
doc-examplebucket1 and doc-examplebucket2, both owned by AWS account 123456789012.

In this case, the following VPC endpoint policy would allow GetObject requests from the VPC made to
mfzwi23gnjvgw.mrap to be fulfilled by either backing bucket.

{
    "Version": "2012-10-17",
    "Statement": [
    {
        "Sid": "Read-buckets-and-MRAP-VPCE-policy",
        "Principal": "*",
        "Action": [
            "s3:GetObject"
        ],
        "Effect": "Allow",
        "Resource": [
            "arn:aws:s3:::doc-examplebucket1/*",
            "arn:aws:s3:::doc-examplebucket2/*",
            "arn:aws:s3::123456789012:accesspoint/mfzwi23gnjvgw.mrap/object/*"
        ]
    }]
}

As mentioned previously, you also must make sure that the Multi-Region Access Point policy is
configured to support access through a VPC endpoint. You don't need to specify the VPC endpoint that
is requesting access. The following sample policy would grant access to any requestor trying to use the
Multi-Region Access Point for the GetObject requests.

{
    "Version": "2012-10-17",
    "Statement": [
    {
        "Sid": "Open-read-MRAP-policy",
        "Effect": "Allow",
        "Principal": "*",
        "Action": [
            "s3:GetObject"
        ],
        "Resource": "arn:aws:s3::123456789012:accesspoint/mfzwi23gnjvgw.mrap/object/*"
    }]
}

The individual buckets would also each need a policy that supports access from requests
submitted through the VPC endpoint. The following example policy grants read access to anonymous
users, which includes requests made through the VPC endpoint.

{
    "Version": "2012-10-17",
    "Statement": [
    {
        "Sid": "Public-read",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": [
            "arn:aws:s3:::doc-examplebucket1/*",
            "arn:aws:s3:::doc-examplebucket2/*"
        ]
    }]
}

For more information about editing a VPC endpoint policy, see Control access to services with VPC endpoints in
the VPC User Guide.


Removing access to a Multi-Region Access Point from a VPC endpoint

If you own a Multi-Region Access Point and want to remove access to it from an interface endpoint,
you must supply a new access policy to the Multi-Region Access Point that prevents access for requests
coming through VPC endpoints. Keep in mind that if the buckets in your Multi-Region Access Point
support requests through VPC endpoints, they will continue to support these requests. If you want to
prevent