Amazon Simple Storage Service

Developer Guide
API Version 2006-03-01

Amazon Simple Storage Service: Developer Guide


Copyright © 2018 Amazon Web Services, Inc. and/or its affiliates. All rights reserved.
Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner
that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not
owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by
Amazon.

Table of Contents
What Is Amazon S3? .......................................................................................................................... 1
How Do I...? ............................................................................................................................... 1
Introduction ...................................................................................................................................... 2
Overview of Amazon S3 and This Guide ....................................................................................... 2
Advantages to Amazon S3 .......................................................................................................... 2
Amazon S3 Concepts .................................................................................................................. 3
Buckets ............................................................................................................................. 3
Objects ............................................................................................................................. 3
Keys ................................................................................................................................. 3
Regions ............................................................................................................................. 4
Amazon S3 Data Consistency Model ..................................................................................... 4
Amazon S3 Features ................................................................................................................... 6
Storage Classes .................................................................................................................. 6
Bucket Policies ................................................................................................................... 6
AWS Identity and Access Management .................................................................................. 7
Access Control Lists ............................................................................................................ 7
Versioning ......................................................................................................................... 7
Operations ........................................................................................................................ 7
Amazon S3 Application Programming Interfaces (API) ..................................................................... 8
The REST Interface ............................................................................................................. 8
The SOAP Interface ............................................................................................................ 8
Paying for Amazon S3 ................................................................................................................ 9
Related Services ......................................................................................................................... 9
Making Requests .............................................................................................................................. 10
About Access Keys .................................................................................................................... 10
AWS Account Access Keys .................................................................................................. 10
IAM User Access Keys ........................................................................................................ 10
Temporary Security Credentials .......................................................................................... 11
Request Endpoints .................................................................................................................... 11
Making Requests over IPv6 ........................................................................................................ 12
Getting Started with IPv6 .................................................................................................. 12
Using IPv6 Addresses in IAM Policies .................................................................................. 13
Testing IP Address Compatibility ........................................................................................ 14
Using Dual-Stack Endpoints ............................................................................................... 14
Making Requests Using the AWS SDKs ........................................................................................ 18
Using AWS Account or IAM User Credentials ........................................................................ 18
Using IAM User Temporary Credentials ............................................................................... 25
Using Federated User Temporary Credentials ....................................................................... 34
Making Requests Using the REST API .......................................................................................... 44
Dual-Stack Endpoints (REST API) ........................................................................................ 45
Virtual Hosting of Buckets ................................................................................................. 45
Request Redirection and the REST API ................................................................................ 49
Buckets ........................................................................................................................................... 52
Creating a Bucket ..................................................................................................................... 52
About Permissions ............................................................................................................ 53
Accessing a Bucket ................................................................................................................... 54
Bucket Configuration Options .................................................................................................... 54
Restrictions and Limitations ....................................................................................................... 56
Rules for Naming ............................................................................................................. 57
Examples of Creating a Bucket ................................................................................................... 57
Using the Amazon S3 Console ........................................................................................... 58
Using the AWS SDK for Java .............................................................................................. 58
Using the AWS SDK for .NET .............................................................................................. 59
Using the AWS SDK for Ruby Version 3 ............................................................................... 60

Using Other AWS SDKs ..................................................................................................... 61
Deleting or Emptying a Bucket .................................................................................................. 61
Delete a Bucket ................................................................................................................ 61
Empty a Bucket ................................................................................................................ 63
Default Encryption for a Bucket ................................................................................................. 65
How to Set Up Amazon S3 Default Bucket Encryption ........................................................... 65
Moving to Default Encryption from Using Bucket Policies for Encryption Enforcement ............... 66
Using Default Encryption with Cross-Region Replication ........................................................ 66
Monitoring Default Encryption with CloudTrail and CloudWatch ............................................. 67
More Info ........................................................................................................................ 67
Bucket Website Configuration .................................................................................................... 68
Using the AWS Management Console ................................................................................. 68
Using the AWS SDK for Java .............................................................................................. 68
Using the AWS SDK for .NET .............................................................................................. 69
Using the SDK for PHP ..................................................................................................... 71
Using the REST API .......................................................................................................... 72
Transfer Acceleration ................................................................................................................ 72
Why use Transfer Acceleration? .......................................................................................... 72
Getting Started ................................................................................................................ 73
Requirements for Using Amazon S3 Transfer Acceleration ...................................................... 74
Transfer Acceleration Examples .......................................................................................... 74
Requester Pays Buckets ............................................................................................................. 79
Configure with the Console ............................................................................................... 80
Configure with the REST API ............................................................................................. 80
Charge Details ................................................................................................................. 82
Access Control ......................................................................................................................... 83
Billing and Usage Reporting ...................................................................................................... 83
Billing Reports ................................................................................................................. 83
Usage Report ................................................................................................................... 85
Understanding Billing and Usage Reports ............................................................................ 86
Using Cost Allocation Tags ................................................................................................ 93
Objects ........................................................................................................................................... 95
Object Key and Metadata .......................................................................................................... 96
Object Keys ..................................................................................................................... 96
Object Metadata .............................................................................................................. 98
Storage Classes ...................................................................................................................... 100
Storage Classes for Frequently Accessed Objects ................................................................ 101
Storage Class That Automatically Optimizes Frequently and Infrequently Accessed Objects ....... 101
Storage Classes for Infrequently Accessed Objects .............................................................. 102
Comparing the Amazon S3 Storage Classes ....................................................................... 103
Setting the Storage Class of an Object .............................................................................. 103
Subresources .......................................................................................................................... 104
Versioning ............................................................................................................................. 104
Object Tagging ....................................................................................................................... 107
API Operations Related to Object Tagging ......................................................................... 108
Object Tagging and Additional Information ....................................................................... 109
Managing Object Tags ..................................................................................................... 112
Lifecycle Management ............................................................................................................. 115
When Should I Use Lifecycle Configuration? ....................................................................... 115
How Do I Configure a Lifecycle? ....................................................................................... 116
Additional Considerations ................................................................................................ 116
Lifecycle Configuration Elements ...................................................................................... 122
Examples of Lifecycle Configuration .................................................................................. 128
Setting Lifecycle Configuration ......................................................................................... 138
Cross-Origin Resource Sharing (CORS) ....................................................................................... 146
Cross-Origin Resource Sharing: Use-case Scenarios .............................................................. 147
How Do I Configure CORS on My Bucket? .......................................................................... 147

How Does Amazon S3 Evaluate the CORS Configuration on a Bucket? .................................... 149
Enabling CORS ............................................................................................................... 149
Troubleshooting CORS .................................................................................................... 155
Operations on Objects ............................................................................................................ 156
Getting Objects .............................................................................................................. 156
Uploading Objects .......................................................................................................... 165
Copying Objects ............................................................................................................. 207
Listing Object Keys ......................................................................................................... 218
Deleting Objects ............................................................................................................. 224
Selecting Content from Objects ........................................................................................ 243
Restoring Archived Objects .............................................................................................. 246
Querying Archived Objects .............................................................................................. 250
Storage Class Analysis ..................................................................................................................... 254
How to Set Up Storage Class Analysis ....................................................................................... 254
Storage Class Analysis ............................................................................................................. 255
How Can I Export Storage Class Analysis Data? .......................................................................... 257
Storage Class Analysis Export File Layout .......................................................................... 258
Amazon S3 Analytics REST APIs ............................................................................................... 259
Inventory ....................................................................................................................................... 260
How to Set Up Amazon S3 Inventory ........................................................................................ 260
Amazon S3 Inventory Buckets .......................................................................................... 260
Setting Up Amazon S3 Inventory ...................................................................................... 261
Inventory Lists ....................................................................................................................... 262
Inventory Consistency ..................................................................................................... 263
Location of Inventory Lists ...................................................................................................... 264
What Is an Inventory Manifest? ........................................................................................ 264
Notify When Inventory Is Complete .......................................................................................... 266
Querying Inventory with Athena .............................................................................................. 267
Amazon S3 Inventory REST APIs .............................................................................................. 268
Managing Access ............................................................................................................................ 269
Introduction ........................................................................................................................... 269
Overview ....................................................................................................................... 270
How Amazon S3 Authorizes a Request .............................................................................. 275
Guidelines for Using the Available Access Policy Options ..................................................... 280
Example Walkthroughs: Managing Access .......................................................................... 283
Using Bucket Policies and User Policies ..................................................................................... 309
Access Policy Language Overview ..................................................................................... 309
Bucket Policy Examples ................................................................................................... 338
User Policy Examples ...................................................................................................... 347
Managing Access with ACLs ..................................................................................................... 370
Access Control List (ACL) Overview ................................................................................... 370
Managing ACLs ............................................................................................................... 376
Blocking Public Access ............................................................................................................. 382
Block Public Access Settings ............................................................................................ 382
The Meaning of "Public" .................................................................................................. 383
Permissions .................................................................................................................... 385
Examples ....................................................................................................................... 386
Protecting Data .............................................................................................................................. 388
Data Encryption ..................................................................................................................... 388
Server-Side Encryption .................................................................................................... 388
Client-Side Encryption ..................................................................................................... 418
Versioning ............................................................................................................................. 425
How to Configure Versioning on a Bucket .......................................................................... 426
MFA Delete .................................................................................................................... 427
Related Topics ................................................................................................................ 428
Examples ....................................................................................................................... 428
Managing Objects in a Versioning-Enabled Bucket .............................................................. 431

Managing Objects in a Versioning-Suspended Bucket .......................................................... 444
Locking Objects ...................................................................................................................... 447
Overview ....................................................................................................................... 448
Managing Object Locks ................................................................................................... 497
Batch Operations ............................................................................................................................ 499
Terminology ........................................................................................................................... 499
The Basics: Jobs ..................................................................................................................... 499
Specifying a Manifest ...................................................................................................... 500
Creating a Job ........................................................................................................................ 500
Create Job Request ......................................................................................................... 501
Create Job Response ....................................................................................................... 501
Granting Permissions for Batch Operations ........................................................................ 502
Related Resources ........................................................................................................... 505
Operations ............................................................................................................................. 505
PUT Object Copy ............................................................................................................ 507
Initiate Restore Object .................................................................................................... 508
Invoke a Lambda Function ............................................................................................... 509
Put Object ACL .............................................................................................................. 509
Put Object Tagging ......................................................................................................... 510
Managing Jobs ....................................................................................................................... 510
Listing Jobs ................................................................................................................... 511
Viewing Job Details ........................................................................................................ 511
Assigning Job Priority ..................................................................................................... 511
Job Status ..................................................................................................................... 511
Tracking Job Failure ........................................................................................................ 513
Notifications and Logging ................................................................................................ 514
Completion Reports ........................................................................................................ 514
Hosting a Static Website ................................................................................................................. 515
Website Endpoints .................................................................................................................. 516
Key Differences Between the Amazon Website and the REST API Endpoint ............................. 516
Configuring a Bucket for Website Hosting ................................................................................. 517
Enabling Website Hosting ................................................................................................ 517
Configuring Index Document Support ............................................................................... 518
Permissions Required for Website Access ........................................................................... 520
(Optional) Configuring Web Traffic Logging ....................................................................... 520
(Optional) Custom Error Document Support ....................................................................... 521
(Optional) Configuring a Redirect ..................................................................................... 522
Example Walkthroughs ............................................................................................................ 529
Example: Setting up a Static Website ................................................................................ 529
Example: Setting up a Static Website Using a Custom Domain .............................................. 531
Example: Speed Up Your Website with Amazon CloudFront .................................................. 537
Clean Up Example Resources ........................................................................................... 540
Notifications .................................................................................................................................. 542
Overview ............................................................................................................................... 542
How to Enable Event Notifications ............................................................................................ 543
Event Notification Types and Destinations ................................................................................. 545
Supported Event Types ................................................................................................... 545
Supported Destinations ................................................................................................... 546
Configuring Notifications with Object Key Name Filtering ............................................................ 546
Examples of Valid Notification Configurations with Object Key Name Filtering ........................ 547
Examples of Notification Configurations with Invalid Prefix/Suffix Overlapping ....................... 549
Granting Permissions to Publish Event Notification Messages to a Destination ................................ 551
Granting Permissions to Invoke an AWS Lambda Function ................................................... 551
Granting Permissions to Publish Messages to an SNS Topic or an SQS Queue .......................... 551
Example Walkthrough 1 .......................................................................................................... 553
Walkthrough Summary ................................................................................................... 553
Step 1: Create an Amazon SNS Topic ................................................................................ 554

Step 2: Create an Amazon SQS Queue .............................................................. 554
Step 3: Add a Notification Configuration to Your Bucket ...................................................... 555
Step 4: Test the Setup .................................................................................................... 558
Example Walkthrough 2 .......................................................................................................... 558
Event Message Structure ......................................................................................................... 558
Cross-Region Replication ................................................................................................................. 563
When to Use CRR ................................................................................................................... 563
Requirements for CRR ............................................................................................................. 563
What Does Amazon S3 Replicate? ............................................................................................. 564
What Is Replicated? ........................................................................................................ 564
What Isn't Replicated? ..................................................................................................... 565
Related Topics ................................................................................................................ 566
Overview of Setting Up CRR .................................................................................................... 566
Replication Configuration Overview .................................................................................. 567
Setting Up Permissions for CRR ........................................................................................ 575
Additional CRR Configurations ................................................................................................. 578
CRR Additional Configuration: Changing Replica Owner ....................................................... 578
CRR Additional Configuration: Replicating Encrypted Objects ............................................... 581
CRR Walkthroughs .................................................................................................................. 586
CRR Example 1: Same AWS Account ................................................................................. 586
CRR Example 2: Different AWS Accounts ............................................................................ 593
CRR Example 3: Change Replica Owner ............................................................................. 594
CRR Example 4: Replicating Encrypted Objects ................................................................... 598
CRR: Status Information .......................................................................................................... 603
Related Topics ................................................................................................................ 604
CRR: Troubleshooting .............................................................................................................. 604
Related Topics ................................................................................................................ 605
CRR: Additional Considerations ................................................................................................. 605
Lifecycle Configuration and Object Replicas ....................................................................... 605
Versioning Configuration and Replication Configuration ...................................................... 606
Logging Configuration and Replication Configuration .......................................................... 606
CRR and Destination Region ............................................................................................ 606
Pausing Replication Configuration .................................................................................... 606
Related Topics ................................................................................................................ 607
Request Routing ............................................................................................................................. 608
Request Redirection and the REST API ...................................................................................... 608
DNS Routing .................................................................................................................. 608
Temporary Request Redirection ........................................................................................ 609
Permanent Request Redirection ........................................................................................ 611
Request Redirection Examples .......................................................................................... 611
DNS Considerations ................................................................................................................ 611
Performance Optimization ............................................................................................................... 613
Request Rate and Performance Guidelines ................................................................................. 613
GET-Intensive Workloads ................................................................................................. 613
TCP Window Scaling ............................................................................................................... 613
TCP Selective Acknowledgement .............................................................................................. 614
Monitoring ..................................................................................................................................... 615
Monitoring Tools .................................................................................................................... 615
Automated Tools ............................................................................................................ 615
Manual Tools ................................................................................................................. 615
Monitoring Metrics with CloudWatch ......................................................................................... 616
Metrics and Dimensions ................................................................................................... 617
Amazon S3 CloudWatch Daily Storage Metrics for Buckets ................................................... 617
Amazon S3 CloudWatch Request Metrics ........................................................................... 617
Amazon S3 CloudWatch Dimensions ................................................................................. 620
Accessing CloudWatch Metrics .......................................................................................... 620
Related Resources ........................................................................................................... 621

Metrics Configurations for Buckets ............................................................................................ 621
Best-Effort CloudWatch Metrics Delivery ............................................................................ 622
Filtering Metrics Configurations ........................................................................................ 622
How to Add Metrics Configurations ................................................................................... 622
Logging API Calls with AWS CloudTrail ...................................................................................... 623
Amazon S3 Information in CloudTrail ................................................................................ 624
Using CloudTrail Logs with Amazon S3 Server Access Logs and CloudWatch Logs .................... 628
Example: Amazon S3 Log File Entries ................................................................................ 628
Related Resources ........................................................................................................... 630
BitTorrent ...................................................................................................................................... 631
How You are Charged for BitTorrent Delivery ............................................................................. 631
Using BitTorrent to Retrieve Objects Stored in Amazon S3 ........................................................... 632
Publishing Content Using Amazon S3 and BitTorrent .................................................................. 632
Error Handling ............................................................................................................................... 634
The REST Error Response ........................................................................................................ 634
Response Headers .......................................................................................................... 634
Error Response ............................................................................................................... 635
The SOAP Error Response ........................................................................................................ 635
Amazon S3 Error Best Practices ................................................................................................ 636
Retry InternalErrors ........................................................................................................ 636
Tune Application for Repeated SlowDown errors ................................................................ 636
Isolate Errors ................................................................................................................. 636
Troubleshooting Amazon S3 ............................................................................................................ 638
Troubleshooting Amazon S3 by Symptom ................................................................................. 638
Significant Increases in HTTP 503 Responses to Requests to Buckets with Versioning Enabled .... 638
Unexpected Behavior When Accessing Buckets Set with CORS .............................................. 638
Getting Amazon S3 Request IDs for AWS Support ...................................................................... 639
Using HTTP to Obtain Request IDs ................................................................................... 639
Using a Web Browser to Obtain Request IDs ...................................................................... 639
Using AWS SDKs to Obtain Request IDs ............................................................................. 639
Using the AWS CLI to Obtain Request IDs .......................................................................... 641
Related Topics ........................................................................................................................ 641
Server Access Logging ..................................................................................................................... 642
How to Enable Server Access Logging ....................................................................................... 642
Log Object Key Format ........................................................................................................... 643
How Are Logs Delivered? ......................................................................................................... 643
Best Effort Server Log Delivery ................................................................................................ 644
Bucket Logging Status Changes Take Effect Over Time ................................................................ 644
Enabling Logging Using the Console ......................................................................................... 644
Enabling Logging Programmatically .......................................................................................... 644
Enabling Logging ........................................................................................................... 645
Granting the Log Delivery Group WRITE and READ_ACP Permissions ..................................... 645
Example: AWS SDK for .NET ............................................................................................. 646
More Info ...................................................................................................................... 647
Log Format ............................................................................................................................ 647
Custom Access Log Information ........................................................................................ 651
Programming Considerations for Extensible Server Access Log Format ................................... 651
Additional Logging for Copy Operations ............................................................................ 651
Deleting Log Files ................................................................................................................... 654
More Info ...................................................................................................................... 654
AWS SDKs and Explorers ................................................................................................................. 655
Specifying the Signature Version in Request Authentication ......................................................... 656
AWS Signature Version 2 Deprecation for Amazon S3 .......................................................... 657
Moving from Signature Version 2 to Signature Version 4 ..................................................... 658
Setting Up the AWS CLI .......................................................................................................... 661
Using the AWS SDK for Java .................................................................................................... 661
The Java API Organization ............................................................................................... 662

Testing the Amazon S3 Java Code Examples ...................................................................... 662
Using the AWS SDK for .NET .................................................................................................... 663
The .NET API Organization ............................................................................................... 663
Running the Amazon S3 .NET Code Examples .................................................................... 664
Using the AWS SDK for PHP and Running PHP Examples ............................................................. 664
AWS SDK for PHP Levels ................................................................................................ 664
Running PHP Examples ................................................................................................... 665
Related Resources ........................................................................................................... 665
Using the AWS SDK for Ruby - Version 3 ................................................................................... 665
The Ruby API Organization .............................................................................................. 665
Testing the Ruby Script Examples ..................................................................................... 666
Using the AWS SDK for Python (Boto) ....................................................................................... 666
Using the AWS Mobile SDKs for iOS and Android ....................................................................... 666
More Info ...................................................................................................................... 667
Using the AWS Amplify JavaScript Library ................................................................................. 667
More Info ...................................................................................................................... 667
Appendices .................................................................................................................................... 668
Appendix A: Using the SOAP API .............................................................................................. 668
Common SOAP API Elements ........................................................................................... 668
Authenticating SOAP Requests ......................................................................................... 668
Setting Access Policy with SOAP ....................................................................................... 669
Appendix B: Authenticating Requests (AWS Signature Version 2) ................................................... 670
Authenticating Requests Using the REST API ...................................................................... 672
Signing and Authenticating REST Requests ........................................................................ 674
Browser-Based Uploads Using POST ................................................................................. 683
Resources ...................................................................................................................................... 699
SQL Reference ............................................................................................................................... 700
SELECT Command .................................................................................................................. 700
SELECT List .................................................................................................................... 700
FROM Clause ................................................................................................................. 700
WHERE Clause ................................................................................................................ 704
LIMIT Clause (Amazon S3 Select only) ............................................................................... 704
Attribute Access ............................................................................................................. 704
Case Sensitivity of Header/Attribute Names ....................................................................... 705
Using Reserved Keywords as User-Defined Terms ................................................................ 706
Scalar Expressions .......................................................................................................... 706
Data Types ............................................................................................................................ 707
Data Type Conversions .................................................................................................... 707
Supported Data Types ..................................................................................................... 707
Operators .............................................................................................................................. 707
Logical Operators ........................................................................................................... 707
Comparison Operators .................................................................................................... 708
Pattern Matching Operators ............................................................................................. 708
Math Operators .............................................................................................................. 708
Operator Precedence ....................................................................................................... 708
Reserved Keywords ................................................................................................................. 709
SQL Functions ........................................................................................................................ 713
Aggregate Functions (Amazon S3 Select only) .................................................................... 713
Conditional Functions ..................................................................................................... 714
Conversion Functions ...................................................................................................... 715
Date Functions ............................................................................................................... 715
String Functions ............................................................................................................. 721
Document History .......................................................................................................................... 724
Earlier Updates ....................................................................................................................... 726
AWS Glossary ................................................................................................................................. 740


What Is Amazon S3?


Amazon Simple Storage Service is storage for the Internet. It is designed to make web-scale computing
easier for developers.

Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of
data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable,
reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of
web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers.

This guide explains the core concepts of Amazon S3, such as buckets and objects, and how to work with
these resources using the Amazon S3 application programming interface (API).

How Do I...?
Information                                           Relevant Sections

General product overview and pricing                  Amazon S3

Get a quick hands-on introduction to Amazon S3        Amazon Simple Storage Service Getting Started Guide

Learn about Amazon S3 key terminology and concepts    Introduction to Amazon S3 (p. 2)

How do I work with buckets?                           Working with Amazon S3 Buckets (p. 52)

How do I work with objects?                           Working with Amazon S3 Objects (p. 95)

How do I make requests?                               Making Requests (p. 10)

How do I manage access to my resources?               Managing Access Permissions to Your Amazon S3
                                                       Resources (p. 269)


Introduction to Amazon S3
This introduction to Amazon Simple Storage Service is intended to give you a detailed summary of this
web service. After reading this section, you should have a good idea of what it offers and how it can fit in
with your business.

Topics
• Overview of Amazon S3 and This Guide (p. 2)
• Advantages to Amazon S3 (p. 2)
• Amazon S3 Concepts (p. 3)
• Amazon S3 Features (p. 6)
• Amazon S3 Application Programming Interfaces (API) (p. 8)
• Paying for Amazon S3 (p. 9)
• Related Services (p. 9)

Overview of Amazon S3 and This Guide


Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of
data, at any time, from anywhere on the web.

This guide describes how you send requests to create buckets, store and retrieve your objects, and
manage permissions on your resources. The guide also describes access control and the authentication
process. Access control defines who can access objects and buckets within Amazon S3, and the type of
access (e.g., READ and WRITE). The authentication process verifies the identity of a user who is trying to
access Amazon Web Services (AWS).

Advantages to Amazon S3
Amazon S3 is intentionally built with a minimal feature set that focuses on simplicity and robustness.
Following are some of the advantages of the Amazon S3 service:

• Create Buckets – Create and name a bucket that stores data. Buckets are the fundamental containers
for data storage in Amazon S3.
• Store data in Buckets – Store a virtually unlimited amount of data in a bucket. Upload as many objects
as you like into an Amazon S3 bucket. Each object can contain up to 5 TB of data. Each object is stored
and retrieved using a unique developer-assigned key.
• Download data – Download your data at any time, or allow others to do the same.
• Permissions – Grant or deny access to others who want to upload data to or download data from your
Amazon S3 bucket. You can grant upload and download permissions to three types of users.
Authentication mechanisms can help keep data secure from unauthorized access.
• Standard interfaces – Use standards-based REST and SOAP interfaces designed to work with any
Internet-development toolkit.


Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or
the AWS SDKs.

Amazon S3 Concepts
Topics
• Buckets (p. 3)
• Objects (p. 3)
• Keys (p. 3)
• Regions (p. 4)
• Amazon S3 Data Consistency Model (p. 4)

This section describes key concepts and terminology you need to understand to use Amazon S3
effectively. They are presented in the order you will most likely encounter them.

Buckets
A bucket is a container for objects stored in Amazon S3. Every object is contained in a bucket. For
example, if the object named photos/puppy.jpg is stored in the johnsmith bucket, then it is
addressable using the URL http://johnsmith.s3.amazonaws.com/photos/puppy.jpg
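
As a rough sketch of how this addressing is used in code, the following example passes the same bucket
name and key to the AWS SDK for Java to download the object. The region and the local file name are
assumptions made for this illustration only.

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;

import java.io.File;

public class GetPuppyPhoto {
    public static void main(String[] args) {
        // Build a client for the region that hosts the bucket (assumed to be US East here).
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .build();

        // The bucket name and key below correspond to the URL
        // http://johnsmith.s3.amazonaws.com/photos/puppy.jpg shown above.
        s3.getObject(new GetObjectRequest("johnsmith", "photos/puppy.jpg"),
                new File("puppy.jpg"));
    }
}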

Buckets serve several purposes: they organize the Amazon S3 namespace at the highest level, they
identify the account responsible for storage and data transfer charges, they play a role in access control,
and they serve as the unit of aggregation for usage reporting.

You can configure buckets so that they are created in a specific region. For more information, see
Buckets and Regions (p. 54). You can also configure a bucket so that every time an object is added
to it, Amazon S3 generates a unique version ID and assigns it to the object. For more information, see
Versioning (p. 425).

For more information about buckets, see Working with Amazon S3 Buckets (p. 52).
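
As a rough illustration of these concepts, the following AWS SDK for Java sketch creates the johnsmith bucket from the example above, stores a local file under the key photos/puppy.jpg, and prints the URL at which the object becomes addressable. The region and the local file name are placeholder assumptions, not values prescribed by this guide.

import java.io.File;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class BucketAndObjectSketch {
    public static void main(String[] args) {
        // Build a client using credentials from the default provider chain
        // (for example, your local AWS credentials file).
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion("us-west-2")    // placeholder region
                .build();

        // Create the bucket that will contain the object.
        s3Client.createBucket("johnsmith");

        // Store a local file under the key "photos/puppy.jpg".
        s3Client.putObject("johnsmith", "photos/puppy.jpg", new File("puppy.jpg"));

        // Print the URL at which the object is now addressable.
        System.out.println(s3Client.getUrl("johnsmith", "photos/puppy.jpg"));
    }
}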

Objects
Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata.
The data portion is opaque to Amazon S3. The metadata is a set of name-value pairs that describe
the object. These include some default metadata, such as the date last modified, and standard HTTP
metadata, such as Content-Type. You can also specify custom metadata at the time the object is stored.

An object is uniquely identified within a bucket by a key (name) and a version ID. For more information,
see Keys (p. 3) and Versioning (p. 425).

Keys
A key is the unique identifier for an object within a bucket. Every object in a bucket has exactly one key. Because the combination of a bucket, key, and version ID uniquely identifies each object, Amazon S3 can be thought of as a basic data map between "bucket + key + version" and the object itself. Every object in Amazon S3 can be uniquely addressed through the combination of the web service endpoint, bucket name, key, and optionally, a version. For example, in the URL http://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl, "doc" is the name of the bucket and "2006-03-01/AmazonS3.wsdl" is the key.

For more information about object keys, see Object Keys.

Regions
You can choose the geographical region where Amazon S3 will store the buckets you create. You might
choose a region to optimize latency, minimize costs, or address regulatory requirements. Objects stored
in a region never leave the region unless you explicitly transfer them to another region. For example,
objects stored in the EU (Ireland) region never leave it.

For a list of Amazon S3 regions and endpoints, see Regions and Endpoints in the AWS General Reference.

Amazon S3 Data Consistency Model


Amazon S3 provides read-after-write consistency for PUTS of new objects in your S3 bucket in all regions
with one caveat. The caveat is that if you make a HEAD or GET request to the key name (to find if the
object exists) before creating the object, Amazon S3 provides eventual consistency for read-after-write.

Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all regions.

Updates to a single key are atomic. For example, if you PUT to an existing key, a subsequent read might
return the old data or the updated data, but it will never return corrupted or partial data.

Amazon S3 achieves high availability by replicating data across multiple servers within Amazon's data
centers. If a PUT request is successful, your data is safely stored. However, information about the changes
must replicate across Amazon S3, which can take some time, and so you might observe the following
behaviors:

• A process writes a new object to Amazon S3 and immediately lists keys within its bucket. Until the
change is fully propagated, the object might not appear in the list.
• A process replaces an existing object and immediately attempts to read it. Until the change is fully
propagated, Amazon S3 might return the prior data.
• A process deletes an existing object and immediately attempts to read it. Until the deletion is fully
propagated, Amazon S3 might return the deleted data.
• A process deletes an existing object and immediately lists keys within its bucket. Until the deletion is
fully propagated, Amazon S3 might list the deleted object.

Note
Amazon S3 does not currently support object locking. If two PUT requests are simultaneously
made to the same key, the request with the latest time stamp wins. If this is an issue, you will
need to build an object-locking mechanism into your application.
Updates are key-based; there is no way to make atomic updates across keys. For example, you
cannot make the update of one key dependent on the update of another key unless you design
this functionality into your application.
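
To make these behaviors concrete, the following AWS SDK for Java sketch writes a brand-new key and reads it back immediately, then overwrites the same key. The bucket name, key, and region are placeholder assumptions; which value the second read returns depends on how far the overwrite has propagated.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class ConsistencySketch {
    public static void main(String[] args) {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion("us-west-2")            // placeholder region
                .build();
        String bucketName = "examplebucket";        // placeholder bucket

        // PUT of a new key: a subsequent GET returns the object
        // (read-after-write consistency), provided no HEAD or GET was made
        // on the key name before the object was created.
        s3Client.putObject(bucketName, "color.txt", "ruby");
        System.out.println(s3Client.getObjectAsString(bucketName, "color.txt"));

        // Overwrite PUT: an immediate GET might return the old value ("ruby")
        // or the new one ("garnet") until the change is fully propagated.
        s3Client.putObject(bucketName, "color.txt", "garnet");
        System.out.println(s3Client.getObjectAsString(bucketName, "color.txt"));
    }
}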

The following table describes the characteristics of eventually consistent read and consistent read.

Eventually Consistent Read                Consistent Read

Stale reads possible                      No stale reads

Lowest read latency                       Potentially higher read latency

Highest read throughput                   Potentially lower read throughput

Concurrent Applications
This section provides examples of eventually consistent and consistent read requests when multiple
clients are writing to the same items.

In this example, both W1 (write 1) and W2 (write 2) complete before the start of R1 (read 1) and R2 (read
2). For a consistent read, R1 and R2 both return color = ruby. For an eventually consistent read, R1
and R2 might return color = red, color = ruby, or no results, depending on the amount of time
that has elapsed.

In the next example, W2 does not complete before the start of R1. Therefore, R1 might return color =
ruby or color = garnet for either a consistent read or an eventually consistent read. Also, depending
on the amount of time that has elapsed, an eventually consistent read might return no results.

For a consistent read, R2 returns color = garnet. For an eventually consistent read, R2 might return
color = ruby, color = garnet, or no results depending on the amount of time that has elapsed.

In the last example, Client 2 performs W2 before Amazon S3 returns a success for W1, so the outcome
of the final value is unknown (color = garnet or color = brick). Any subsequent reads (consistent
read or eventually consistent) might return either value. Also, depending on the amount of time that has
elapsed, an eventually consistent read might return no results.


Amazon S3 Features
Topics
• Storage Classes (p. 6)
• Bucket Policies (p. 6)
• AWS Identity and Access Management (p. 7)
• Access Control Lists (p. 7)
• Versioning (p. 7)
• Operations (p. 7)

This section describes important Amazon S3 features.

Storage Classes
Amazon S3 offers a range of storage classes designed for different use cases. These include Amazon S3
STANDARD for general-purpose storage of frequently accessed data, Amazon S3 STANDARD_IA for long-
lived, but less frequently accessed data, and GLACIER for long-term archive.

For more information, see Storage Classes (p. 100).
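
As a hedged sketch of how a storage class is chosen at upload time, the following AWS SDK for Java request stores an object in the STANDARD_IA class instead of the default STANDARD class. The bucket name, key, file name, and region are placeholder assumptions.

import java.io.File;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.StorageClass;

public class StorageClassSketch {
    public static void main(String[] args) {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion("us-west-2")                    // placeholder region
                .build();

        // Upload a file and store it in the STANDARD_IA storage class
        // instead of the default STANDARD class.
        PutObjectRequest request = new PutObjectRequest(
                "examplebucket", "reports/archive-2018.csv", new File("archive-2018.csv"))
                .withStorageClass(StorageClass.StandardInfrequentAccess);
        s3Client.putObject(request);
    }
}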

Bucket Policies
Bucket policies provide centralized access control to buckets and objects based on a variety of conditions,
including Amazon S3 operations, requesters, resources, and aspects of the request (e.g., IP address). The
policies are expressed in our access policy language and enable centralized management of permissions.
The permissions attached to a bucket apply to all of the objects in that bucket.

Individuals as well as companies can use bucket policies. When companies register with Amazon S3
they create an account. Thereafter, the company becomes synonymous with the account. Accounts
are financially responsible for the Amazon resources they (and their employees) create. Accounts have
the power to grant bucket policy permissions and assign employees permissions based on a variety of
conditions. For example, an account could create a policy that gives a user write access:

• To a particular S3 bucket
• From an account's corporate network
• During business hours

An account can grant one user limited read and write access, but allow another to create and delete
buckets as well. An account could allow several field offices to store their daily reports in a single bucket,
allowing each office to write only to a certain set of names (e.g., "Nevada/*" or "Utah/*") and only from
the office's IP address range.

Unlike access control lists (described below), which can add (grant) permissions only on individual
objects, policies can either add or deny permissions across all (or a subset) of objects within a bucket.
With one request an account can set the permissions of any number of objects in a bucket. An account
can use wildcards (similar to regular expression operators) on Amazon resource names (ARNs) and other
values, so that an account can control access to groups of objects that begin with a common prefix or
end with a given extension such as .html.

Only the bucket owner is allowed to associate a policy with a bucket. Policies, written in the access policy
language, allow or deny requests based on:

• Amazon S3 bucket operations (such as PUT ?acl), and object operations (such as PUT Object, or
GET Object)
• Requester
• Conditions specified in the policy

An account can control access based on specific Amazon S3 operations, such as GetObject,
GetObjectVersion, DeleteObject, or DeleteBucket.

The conditions can be such things as IP addresses, IP address ranges in CIDR notation, dates, user agents,
HTTP referrer and transports (HTTP and HTTPS).

For more information, see Using Bucket Policies and User Policies (p. 309).
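
The following AWS SDK for Java sketch shows what associating such a policy with a bucket can look like: it attaches a policy that allows s3:GetObject on every object in the bucket, but only for requests coming from one address range. The bucket name, the 192.0.2.0/24 range, and the region are placeholder assumptions, not values from this guide.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class BucketPolicySketch {
    public static void main(String[] args) {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion("us-west-2")        // placeholder region
                .build();

        String bucketName = "examplebucket";    // placeholder bucket

        // A policy, written in the access policy language, that allows
        // s3:GetObject on every object in the bucket, but only for requests
        // that originate from the 192.0.2.0/24 address range (an example range).
        String policyText =
            "{" +
            "  \"Version\": \"2012-10-17\"," +
            "  \"Statement\": [{" +
            "    \"Sid\": \"AllowGetFromCorporateNetwork\"," +
            "    \"Effect\": \"Allow\"," +
            "    \"Principal\": \"*\"," +
            "    \"Action\": \"s3:GetObject\"," +
            "    \"Resource\": \"arn:aws:s3:::" + bucketName + "/*\"," +
            "    \"Condition\": {\"IpAddress\": {\"aws:SourceIp\": \"192.0.2.0/24\"}}" +
            "  }]" +
            "}";

        // Only the bucket owner can associate a policy with the bucket.
        s3Client.setBucketPolicy(bucketName, policyText);
    }
}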

AWS Identity and Access Management


You can use AWS Identity and Access Management (IAM) to manage access to your Amazon S3 resources. For example, you can use IAM with Amazon S3 to control the type of access a user or group of users has to specific parts of an Amazon S3 bucket your AWS account owns.

For more information about IAM, see the following:

• AWS Identity and Access Management (IAM)


• Getting Started
• IAM User Guide

Access Control Lists


You can use access control lists (ACLs) to grant basic read and write permissions on individual buckets and objects to other AWS accounts. For more information, see Managing Access with ACLs (p. 370).

Versioning
You can use versioning to keep multiple versions of an object in the same bucket. For more information, see Object Versioning (p. 104).
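
As a minimal sketch (the bucket name and region are placeholder assumptions), the following AWS SDK for Java call enables versioning on a bucket so that subsequent writes receive unique version IDs.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
import com.amazonaws.services.s3.model.SetBucketVersioningConfigurationRequest;

public class EnableVersioningSketch {
    public static void main(String[] args) {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion("us-west-2")        // placeholder region
                .build();

        // Turn on versioning; from this point on, every overwrite or delete
        // creates a new version instead of replacing the object in place.
        BucketVersioningConfiguration configuration =
                new BucketVersioningConfiguration().withStatus(BucketVersioningConfiguration.ENABLED);
        s3Client.setBucketVersioningConfiguration(
                new SetBucketVersioningConfigurationRequest("examplebucket", configuration));
    }
}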

Operations
Following are the most common operations you'll execute through the API.


Common Operations

• Create a Bucket – Create and name your own bucket in which to store your objects.
• Write an Object – Store data by creating or overwriting an object. When you write an object, you
specify a unique key in the namespace of your bucket. This is also a good time to specify any access
control you want on the object.
• Read an Object – Read data back. You can download the data via HTTP or BitTorrent.
• Delete an Object – Delete some of your data.
• List Keys – List the keys contained in one of your buckets. You can filter the key list based on a prefix.

This functionality, and more, is described in detail later in this guide.
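
As a hedged end-to-end sketch of the operations listed above, the following AWS SDK for Java code writes an object, reads it back, lists keys under a prefix, and then deletes the object. The bucket name, keys, and region are placeholder assumptions; creating the bucket itself is shown in the earlier bucket example.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class CommonOperationsSketch {
    public static void main(String[] args) {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion("us-west-2")            // placeholder region
                .build();
        String bucketName = "examplebucket";        // placeholder bucket

        // Write an object by creating (or overwriting) the key "notes/hello.txt".
        s3Client.putObject(bucketName, "notes/hello.txt", "Hello, Amazon S3!");

        // Read the object back.
        String contents = s3Client.getObjectAsString(bucketName, "notes/hello.txt");
        System.out.println(contents);

        // List keys, filtered by the "notes/" prefix.
        ObjectListing listing = s3Client.listObjects(bucketName, "notes/");
        for (S3ObjectSummary summary : listing.getObjectSummaries()) {
            System.out.println(summary.getKey());
        }

        // Delete the object.
        s3Client.deleteObject(bucketName, "notes/hello.txt");
    }
}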

Amazon S3 Application Programming Interfaces (API)
The Amazon S3 architecture is designed to be programming language-neutral, using our supported
interfaces to store and retrieve objects.

Amazon S3 provides a REST and a SOAP interface. They are similar, but there are some differences. For
example, in the REST interface, metadata is returned in HTTP headers. Because we only support HTTP
requests of up to 4 KB (not including the body), the amount of metadata you can supply is restricted.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.

The REST Interface


The REST API is an HTTP interface to Amazon S3. Using REST, you use standard HTTP requests to create,
fetch, and delete buckets and objects.

You can use any toolkit that supports HTTP to use the REST API. You can even use a browser to fetch
objects, as long as they are anonymously readable.

The REST API uses the standard HTTP headers and status codes, so that standard browsers and toolkits
work as expected. In some areas, we have added functionality to HTTP (for example, we added headers
to support access control). In these cases, we have done our best to add the new functionality in a way
that matched the style of standard HTTP usage.

The SOAP Interface


Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.

The SOAP API provides a SOAP 1.1 interface using document literal encoding. The most common way to
use SOAP is to download the WSDL (go to http://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl),
use a SOAP toolkit such as Apache Axis or Microsoft .NET to create bindings, and then write code that
uses the bindings to call Amazon S3.


Paying for Amazon S3


Pricing for Amazon S3 is designed so that you don't have to plan for the storage requirements of your
application. Most storage providers force you to purchase a predetermined amount of storage and
network transfer capacity: If you exceed that capacity, your service is shut off or you are charged high
overage fees. If you do not exceed that capacity, you pay as though you used it all.

Amazon S3 charges you only for what you actually use, with no hidden fees and no overage charges.
This gives developers a variable-cost service that can grow with their business while enjoying the cost
advantages of Amazon's infrastructure.

Before storing anything in Amazon S3, you need to register with the service and provide a payment
instrument that will be charged at the end of each month. There are no set-up fees to begin using the
service. At the end of the month, your payment instrument is automatically charged for that month's
usage.

For information about paying for Amazon S3 storage, see Amazon S3 Pricing.

Related Services
Once you load your data into Amazon S3, you can use it with other services that we provide. The
following services are the ones you might use most frequently:

• Amazon Elastic Compute Cloud – This web service provides virtual compute resources in the cloud.
For more information, go to the Amazon EC2 product details page.
• Amazon EMR – This web service enables businesses, researchers, data analysts, and developers to
easily and cost-effectively process vast amounts of data. It utilizes a hosted Hadoop framework
running on the web-scale infrastructure of Amazon EC2 and Amazon S3. For more information, go to
the Amazon EMR product details page.
• AWS Import/Export – AWS Import/Export enables you to mail a storage device, such as a RAID drive,
to Amazon so that we can upload your (terabytes) of data into Amazon S3. For more information, go
to the AWS Import/Export Developer Guide.


Making Requests
Topics
• About Access Keys (p. 10)
• Request Endpoints (p. 11)
• Making Requests to Amazon S3 over IPv6 (p. 12)
• Making Requests Using the AWS SDKs (p. 18)
• Making Requests Using the REST API (p. 44)

Amazon S3 is a REST service. You can send requests to Amazon S3 using the REST API directly, or using the AWS SDK wrapper libraries (see Sample Code and Libraries), which wrap the underlying Amazon S3 REST API and simplify your programming tasks.

Every interaction with Amazon S3 is either authenticated or anonymous. Authentication is the process of verifying the identity of the requester trying to access an Amazon Web Services (AWS) product.
Authenticated requests must include a signature value that authenticates the request sender. The
signature value is, in part, generated from the requester's AWS access keys (access key ID and secret
access key). For more information about getting access keys, see How Do I Get Security Credentials? in
the AWS General Reference.

If you are using the AWS SDK, the libraries compute the signature from the keys you provide. However,
if you make direct REST API calls in your application, you must write the code to compute the signature
and add it to the request.

About Access Keys


The following sections review the types of access keys that you can use to make authenticated requests.

AWS Account Access Keys


The account access keys provide full access to the AWS resources owned by the account. The following
are examples of access keys:

• Access key ID (a 20-character, alphanumeric string). For example: AKIAIOSFODNN7EXAMPLE


• Secret access key (a 40-character string). For example: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

The access key ID uniquely identifies an AWS account. You can use these access keys to send
authenticated requests to Amazon S3.

IAM User Access Keys


You can create one AWS account for your company; however, there may be several employees in the
organization who need access to your organization's AWS resources. Sharing your AWS account access
keys reduces security, and creating individual AWS accounts for each employee might not be practical.


Also, you cannot easily share resources such as buckets and objects because they are owned by different
accounts. To share resources, you must grant permissions, which is additional work.

In such scenarios, you can use AWS Identity and Access Management (IAM) to create users under your
AWS account with their own access keys and attach IAM user policies granting appropriate resource
access permissions to them. To better manage these users, IAM enables you to create groups of users and
grant group-level permissions that apply to all users in that group.

These users are referred to as IAM users that you create and manage within AWS. The parent account
controls a user's ability to access AWS. Any resources an IAM user creates are under the control of and
paid for by the parent AWS account. These IAM users can send authenticated requests to Amazon S3
using their own security credentials. For more information about creating and managing users under
your AWS account, go to the AWS Identity and Access Management product details page.

Temporary Security Credentials


In addition to creating IAM users with their own access keys, IAM also enables you to grant temporary
security credentials (temporary access keys and a security token) to any IAM user to enable them to
access your AWS services and resources. You can also manage users in your system outside AWS. These
are referred to as federated users. Additionally, users can be applications that you create to access your
AWS resources.

IAM provides the AWS Security Token Service API for you to request temporary security credentials. You
can use either the AWS STS API or the AWS SDK to request these credentials. The API returns temporary
security credentials (access key ID and secret access key), and a security token. These credentials are
valid only for the duration you specify when you request them. You use the access key ID and secret key
the same way you use them when sending requests using your AWS account or IAM user access keys. In
addition, you must include the token in each request you send to Amazon S3.

An IAM user can request these temporary security credentials for their own use or hand them out to
federated users or applications. When requesting temporary security credentials for federated users, you
must provide a user name and an IAM policy defining the permissions you want to associate with these
temporary security credentials. The federated user cannot get more permissions than the parent IAM
user who requested the temporary credentials.

You can use these temporary security credentials in making requests to Amazon S3. The API libraries
compute the necessary signature value using those credentials to authenticate your request. If you send
requests using expired credentials, Amazon S3 denies the request.

For information on signing requests using temporary security credentials in your REST API requests, see
Signing and Authenticating REST Requests (p. 674). For information about sending requests using AWS
SDKs, see Making Requests Using the AWS SDKs (p. 18).

For more information about IAM support for temporary security credentials, see Temporary Security
Credentials in the IAM User Guide.

For added security, you can require multifactor authentication (MFA) when accessing your Amazon
S3 resources by configuring a bucket policy. For information, see Adding a Bucket Policy to Require
MFA (p. 342). After you require MFA to access your Amazon S3 resources, the only way you can access
these resources is by providing temporary credentials that are created with an MFA key. For more
information, see the AWS Multi-Factor Authentication detail page and Configuring MFA-Protected API
Access in the IAM User Guide.

Request Endpoints
You send REST requests to the service's predefined endpoint. For a list of all AWS services and their
corresponding endpoints, go to Regions and Endpoints in the AWS General Reference.


Making Requests to Amazon S3 over IPv6


Amazon Simple Storage Service (Amazon S3) supports the ability to access S3 buckets using the Internet
Protocol version 6 (IPv6), in addition to the IPv4 protocol. Amazon S3 dual-stack endpoints support
requests to S3 buckets over IPv6 and IPv4. There are no additional charges for accessing Amazon S3 over
IPv6. For more information about pricing, see Amazon S3 Pricing.

Topics
• Getting Started Making Requests over IPv6 (p. 12)
• Using IPv6 Addresses in IAM Policies (p. 13)
• Testing IP Address Compatibility (p. 14)
• Using Amazon S3 Dual-Stack Endpoints (p. 14)

Getting Started Making Requests over IPv6


To make a request to an S3 bucket over IPv6, you need to use a dual-stack endpoint. The next section
describes how to make requests over IPv6 by using dual-stack endpoints.

The following are some things you should know before trying to access a bucket over IPv6:

• The client and the network accessing the bucket must be enabled to use IPv6.
• Both virtual hosted-style and path style requests are supported for IPv6 access. For more information,
see Amazon S3 Dual-Stack Endpoints (p. 14).
• If you use source IP address filtering in your AWS Identity and Access Management (IAM) user or bucket
policies, you need to update the policies to include IPv6 address ranges. For more information, see
Using IPv6 Addresses in IAM Policies (p. 13).
• When using IPv6, server access log files output IP addresses in an IPv6 format. You need to update
existing tools, scripts, and software that you use to parse Amazon S3 log files so that they can
parse the IPv6 formatted Remote IP addresses. For more information, see Server Access Log
Format (p. 647) and Amazon S3 Server Access Logging (p. 642).
Note
If you experience issues related to the presence of IPv6 addresses in log files, contact AWS
Support.

Making Requests over IPv6 by Using Dual-Stack Endpoints


You make requests with Amazon S3 API calls over IPv6 by using dual-stack endpoints. The Amazon
S3 API operations work the same way whether you're accessing Amazon S3 over IPv6 or over IPv4.
Performance should be the same too.

When using the REST API, you access a dual-stack endpoint directly. For more information, see Dual-
Stack Endpoints (p. 14).

When using the AWS Command Line Interface (AWS CLI) and AWS SDKs, you can use a parameter or flag
to change to a dual-stack endpoint. You can also specify the dual-stack endpoint directly as an override
of the Amazon S3 endpoint in the config file.

You can use a dual-stack endpoint to access a bucket over IPv6 from any of the following:

• The AWS CLI, see Using Dual-Stack Endpoints from the AWS CLI (p. 15).
• The AWS SDKs, see Using Dual-Stack Endpoints from the AWS SDKs (p. 16).
• The REST API, see Making Requests to Dual-Stack Endpoints by Using the REST API (p. 45).

Features Not Available over IPv6


The following features are not currently supported when accessing an S3 bucket over IPv6:

• Static website hosting from an S3 bucket


• BitTorrent

Using IPv6 Addresses in IAM Policies


Before trying to access a bucket using IPv6, you must ensure that any IAM user or S3 bucket policies that
are used for IP address filtering are updated to include IPv6 address ranges. IP address filtering policies
that are not updated to handle IPv6 addresses may result in clients incorrectly losing or gaining access
to the bucket when they start using IPv6. For more information about managing access permissions with
IAM, see Managing Access Permissions to Your Amazon S3 Resources (p. 269).

IAM policies that filter IP addresses use IP Address Condition Operators. The following bucket policy
identifies the 54.240.143.* range of allowed IPv4 addresses by using IP address condition operators. Any
IP addresses outside of this range will be denied access to the bucket (examplebucket). Since all IPv6
addresses are outside of the allowed range, this policy prevents IPv6 addresses from being able to access
examplebucket.

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::examplebucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "54.240.143.0/24"}
}
}
]
}

You can modify the bucket policy's Condition element to allow both IPv4 (54.240.143.0/24) and
IPv6 (2001:DB8:1234:5678::/64) address ranges as shown in the following example. You can use the
same type of Condition block shown in the example to update both your IAM user and bucket policies.

"Condition": {
"IpAddress": {
"aws:SourceIp": [
"54.240.143.0/24",
"2001:DB8:1234:5678::/64"
]
}
}

Before using IPv6 you must update all relevant IAM user and bucket policies that use IP address filtering
to allow IPv6 address ranges. We recommend that you update your IAM policies with your organization's
IPv6 address ranges in addition to your existing IPv4 address ranges. For an example of a bucket policy
that allows access over both IPv6 and IPv4, see Restricting Access to Specific IP Addresses (p. 340).


You can review your IAM user policies using the IAM console at https://console.aws.amazon.com/iam/.
For more information about IAM, see the IAM User Guide. For information about editing S3 bucket
policies, see How Do I Add an S3 Bucket Policy? in the Amazon Simple Storage Service Console User Guide.

Testing IP Address Compatibility


If you are using Linux/Unix or Mac OS X, you can test whether you can access a dual-stack endpoint
over IPv6 by using the curl command as shown in the following example:

Example

curl -v http://s3.dualstack.us-west-2.amazonaws.com/

You get back information similar to the following example. If you are connected over IPv6 the connected
IP address will be an IPv6 address.

* About to connect() to s3-us-west-2.amazonaws.com port 80 (#0)
* Trying IPv6 address... connected
* Connected to s3.dualstack.us-west-2.amazonaws.com (IPv6 address) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.18.1 (x86_64-unknown-linux-gnu) libcurl/7.18.1 OpenSSL/1.0.1t zlib/1.2.3
> Host: s3.dualstack.us-west-2.amazonaws.com

If you are using Microsoft Windows 7 or Windows 10, you can test whether you can access a dual-stack
endpoint over IPv6 or IPv4 by using the ping command as shown in the following example.

ping ipv6.s3.dualstack.us-west-2.amazonaws.com

Using Amazon S3 Dual-Stack Endpoints


Amazon S3 dual-stack endpoints support requests to S3 buckets over IPv6 and IPv4. This section
describes how to use dual-stack endpoints.

Topics
• Amazon S3 Dual-Stack Endpoints (p. 14)
• Using Dual-Stack Endpoints from the AWS CLI (p. 15)
• Using Dual-Stack Endpoints from the AWS SDKs (p. 16)
• Using Dual-Stack Endpoints from the REST API (p. 17)

Amazon S3 Dual-Stack Endpoints


When you make a request to a dual-stack endpoint, the bucket URL resolves to an IPv6 or an IPv4
address. For more information about accessing a bucket over IPv6, see Making Requests to Amazon S3
over IPv6 (p. 12).

When using the REST API, you directly access an Amazon S3 endpoint by using the endpoint name (URI).
You can access an S3 bucket through a dual-stack endpoint by using a virtual hosted-style or a path-style
endpoint name. Amazon S3 supports only regional dual-stack endpoint names, which means that you
must specify the region as part of the name.

Use the following naming conventions for the dual-stack virtual hosted-style and path-style endpoint
names:


• Virtual hosted-style dual-stack endpoint:

bucketname.s3.dualstack.aws-region.amazonaws.com

 
• Path-style dual-stack endpoint:

s3.dualstack.aws-region.amazonaws.com/bucketname

For more information about endpoint name style, see Accessing a Bucket (p. 54). For a list of Amazon
S3 endpoints, see Regions and Endpoints in the AWS General Reference.
Important
You can use transfer acceleration with dual-stack endpoints. For more information, see Getting
Started with Amazon S3 Transfer Acceleration (p. 73).

When using the AWS Command Line Interface (AWS CLI) and AWS SDKs, you can use a parameter or flag
to change to a dual-stack endpoint. You can also specify the dual-stack endpoint directly as an override
of the Amazon S3 endpoint in the config file. The following sections describe how to use dual-stack
endpoints from the AWS CLI and the AWS SDKs.

Using Dual-Stack Endpoints from the AWS CLI


This section provides examples of AWS CLI commands used to make requests to a dual-stack endpoint.
For instructions on setting up the AWS CLI, see Setting Up the AWS CLI (p. 661).

You set the configuration value use_dualstack_endpoint to true in a profile in your AWS Config
file to direct all Amazon S3 requests made by the s3 and s3api AWS CLI commands to the dual-stack
endpoint for the specified region. You specify the region in the config file or in a command using the --region option.

When using dual-stack endpoints with the AWS CLI, both path and virtual addressing styles are
supported. The addressing style, set in the config file, controls if the bucket name is in the hostname or
part of the URL. By default, the CLI will attempt to use virtual style where possible, but will fall back to
path style if necessary. For more information, see AWS CLI Amazon S3 Configuration.

You can also make configuration changes by using a command, as shown in the following example,
which sets use_dualstack_endpoint to true and addressing_style to virtual in the default
profile.

$ aws configure set default.s3.use_dualstack_endpoint true


$ aws configure set default.s3.addressing_style virtual

If you want to use a dual-stack endpoint for specified AWS CLI commands only (not all commands), you
can use either of the following methods:

• You can use the dual-stack endpoint per command by setting the --endpoint-url parameter to https://s3.dualstack.aws-region.amazonaws.com or http://s3.dualstack.aws-region.amazonaws.com for any s3 or s3api command.

$ aws s3api list-objects --bucket bucketname --endpoint-url https://s3.dualstack.aws-region.amazonaws.com

• You can set up separate profiles in your AWS Config file. For example, create one profile that sets
use_dualstack_endpoint to true and a profile that does not set use_dualstack_endpoint.
When you run a command, specify which profile you want to use, depending upon whether or not you
want to use the dual-stack endpoint.


Note
When using the AWS CLI you currently cannot use transfer acceleration with dual-stack
endpoints. However, support for the AWS CLI is coming soon. For more information, see Using
Transfer Acceleration from the AWS Command Line Interface (AWS CLI) (p. 75).

Using Dual-Stack Endpoints from the AWS SDKs


This section provides examples of how to access a dual-stack endpoint by using the AWS SDKs.

AWS SDK for Java Dual-Stack Endpoint Example


The following example shows how to enable dual-stack endpoints when creating an Amazon S3 client
using the AWS SDK for Java.

For instructions on creating and testing a working Java sample, see Testing the Amazon S3 Java Code
Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class DualStackEndpoints {

public static void main(String[] args) {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";

try {
// Create an Amazon S3 client with dual-stack endpoints enabled.
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.withDualstackEnabled(true)
.build();

s3Client.listObjects(bucketName);
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

If you are using the AWS SDK for Java on Windows, you might have to set the following Java virtual
machine (JVM) property:

java.net.preferIPv6Addresses=true
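
For example, the property can be supplied on the JVM command line with -Djava.net.preferIPv6Addresses=true, or set programmatically very early in your program, before any networking classes resolve addresses. The class name below is a placeholder, and whether the setting is needed at all depends on your environment; this is a sketch, not a required step.

public class PreferIPv6Sketch {
    public static void main(String[] args) {
        // Equivalent to starting the JVM with -Djava.net.preferIPv6Addresses=true.
        // Set it before any code touches java.net.InetAddress so the value is
        // read when the networking classes initialize.
        System.setProperty("java.net.preferIPv6Addresses", "true");

        // ... create the dual-stack-enabled Amazon S3 client here, as shown above ...
    }
}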


AWS .NET SDK Dual-Stack Endpoint Example


When using the AWS SDK for .NET, you use the AmazonS3Config class to enable the use of a dual-stack endpoint, as shown in the following example.

var config = new AmazonS3Config


{
UseDualstackEndpoint = true,
RegionEndpoint = RegionEndpoint.USWest2
};

using (var s3Client = new AmazonS3Client(config))


{
var request = new ListObjectsRequest
{
BucketName = "myBucket"
};

var response = await s3Client.ListObjectsAsync(request);


}

For a full .NET sample for listing objects, see Listing Keys Using the AWS SDK for .NET (p. 222).

For information about how to create and test a working .NET sample, see Running the Amazon S3 .NET
Code Examples (p. 664).

Using Dual-Stack Endpoints from the REST API


For information about making requests to dual-stack endpoints by using the REST API, see Making
Requests to Dual-Stack Endpoints by Using the REST API (p. 45).


Making Requests Using the AWS SDKs


Topics
• Making Requests Using AWS Account or IAM User Credentials (p. 18)
• Making Requests Using IAM User Temporary Credentials (p. 25)
• Making Requests Using Federated User Temporary Credentials (p. 34)

You can send authenticated requests to Amazon S3 using either the AWS SDK or by making the REST
API calls directly in your application. The AWS SDK API uses the credentials that you provide to compute
the signature for authentication. If you use the REST API directly in your applications, you must write
the necessary code to compute the signature for authenticating your request. For a list of available AWS SDKs, see Sample Code and Libraries.

Making Requests Using AWS Account or IAM User Credentials
You can use your AWS account or IAM user security credentials to send authenticated requests to
Amazon S3. This section provides examples of how you can send authenticated requests using the AWS
SDK for Java, AWS SDK for .NET, and AWS SDK for PHP. For a list of available AWS SDKs, go to Sample
Code and Libraries.

Topics
• Making Requests Using AWS Account or IAM User Credentials - AWS SDK for Java (p. 19)
• Making Requests Using AWS Account or IAM User Credentials - AWS SDK for .NET (p. 20)
• Making Requests Using AWS Account or IAM User Credentials - AWS SDK for PHP (p. 22)
• Making Requests Using AWS Account or IAM User Credentials - AWS SDK for Ruby (p. 23)

Each of these AWS SDKs uses an SDK-specific credentials provider chain to find and use credentials and
perform actions on behalf of the credentials owner. What all these credentials provider chains have in
common is that they all look for your local AWS credentials file.

The easiest way to configure credentials for your AWS SDKs is to use an AWS credentials file. If you
use the AWS Command Line Interface (AWS CLI), you may already have a local AWS credentials file
configured. Otherwise, use the following procedure to set up a credentials file:

To create a local AWS credentials file

1. Sign in to the AWS Management Console and open the IAM console at https://
console.aws.amazon.com/iam/.
2. Create a new user with permissions limited to the services and actions that you want your code
to have access to. For more information about creating a new IAM user, see Creating IAM Users
(Console), and follow the instructions through step 8.
3. Choose Download .csv to save a local copy of your AWS credentials.
4. On your computer, navigate to your home directory, and create an .aws directory. On Unix-based
systems, such as Linux or OS X, this is in the following location:

~/.aws

On Windows, this is in the following location:


%HOMEPATH%\.aws

5. In the .aws directory, create a new file named credentials.


6. Open the credentials .csv file that you downloaded from the IAM console, and copy its contents into
the credentials file using the following format:

[default]
aws_access_key_id = your_access_key_id
aws_secret_access_key = your_secret_access_key

7. Save the credentials file, and delete the .csv file that you downloaded in step 3.

Your shared credentials file is now configured on your local computer, and it's ready to be used with the
AWS SDKs.

Making Requests Using AWS Account or IAM User Credentials - AWS SDK for Java
To send authenticated requests to Amazon S3 using your AWS account or IAM user credentials, do the
following:

• Use the AmazonS3ClientBuilder class to create an AmazonS3Client instance.


• Execute one of the AmazonS3Client methods to send requests to Amazon S3. The client generates
the necessary signature from the credentials that you provide and includes it in the request.

The following example performs the preceding tasks. For information on creating and testing a working
sample, see Testing the Amazon S3 Java Code Examples (p. 662).

Example

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.IOException;
import java.util.List;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class MakingRequests {

public static void main(String[] args) throws IOException {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();


// Get a list of objects in the bucket, two at a time, and
// print the name and size of each object.
ListObjectsRequest listRequest = new
ListObjectsRequest().withBucketName(bucketName).withMaxKeys(2);
ObjectListing objects = s3Client.listObjects(listRequest);
while(true) {
List<S3ObjectSummary> summaries = objects.getObjectSummaries();
for(S3ObjectSummary summary : summaries) {
System.out.printf("Object \"%s\" retrieved with size %d\n",
summary.getKey(), summary.getSize());
}
if(objects.isTruncated()) {
objects = s3Client.listNextBatchOfObjects(objects);
}
else {
break;
}
}
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Related Resources
• Using the AWS SDKs, CLI, and Explorers (p. 655)

Making Requests Using AWS Account or IAM User Credentials - AWS SDK for .NET
To send authenticated requests using your AWS account or IAM user credentials:

• Create an instance of the AmazonS3Client class.


• Execute one of the AmazonS3Client methods to send requests to Amazon S3. The client generates
the necessary signature from the credentials that you provide and includes it in the request it sends to
Amazon S3.

The following C# example shows how to perform the preceding tasks. For information about running
the .NET examples in this guide and for instructions on how to store your credentials in a configuration
file, see Running the Amazon S3 .NET Code Examples (p. 664).

Example

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class MakeS3RequestTest
{
private const string bucketName = "*** bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;

public static void Main()


{
using (client = new AmazonS3Client(bucketRegion))
{
Console.WriteLine("Listing objects stored in a bucket");
ListingObjectsAsync().Wait();
}
}

static async Task ListingObjectsAsync()


{
try
{
ListObjectsRequest request = new ListObjectsRequest
{
BucketName = bucketName,
MaxKeys = 2
};
do
{
ListObjectsResponse response = await client.ListObjectsAsync(request);
// Process the response.
foreach (S3Object entry in response.S3Objects)
{
Console.WriteLine("key = {0} size = {1}",
entry.Key, entry.Size);
}

// If the response is truncated, set the marker to get the next


// set of keys.
if (response.IsTruncated)
{
request.Marker = response.NextMarker;
}
else
{
request = null;
}
} while (request != null);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when writing
an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
}
}


Note
You can create the AmazonS3Client client without providing your security credentials.
Requests sent using this client are anonymous requests, without a signature. Amazon S3 returns
an error if you send anonymous requests for a resource that is not publicly available.

For working examples, see Working with Amazon S3 Objects (p. 95) and Working with Amazon S3
Buckets (p. 52). You can test these examples using your AWS Account or an IAM user credentials.

For example, to list all the object keys in your bucket, see Listing Keys Using the AWS SDK
for .NET (p. 222).

Related Resources
• Using the AWS SDKs, CLI, and Explorers (p. 655)

Making Requests Using AWS Account or IAM User Credentials - AWS SDK for PHP
This section explains how to use a class from version 3 of the AWS SDK for PHP to send authenticated
requests using your AWS account or IAM user credentials. It assumes that you are already following the
instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 664) and have the AWS
SDK for PHP properly installed.

The following PHP example shows how the client makes requests using your security credentials to list the buckets for your account and then list the object keys in a specified bucket.

Example

<?php

require 'vendor/autoload.php';

use Aws\Sts\StsClient;
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';

$s3 = new S3Client([


'region' => 'us-east-1',
'version' => 'latest',
]);

// Retrieve the list of buckets.


$result = $s3->listBuckets();

try {
// Retrieve a paginator for listing objects.
$objects = $s3->getPaginator('ListObjects', [
'Bucket' => $bucket
]);

echo "Keys retrieved!" . PHP_EOL;

// Print the list of objects to the page.


foreach ($objects as $object) {
echo $object['Key'] . PHP_EOL;
}
} catch (S3Exception $e) {
    echo $e->getMessage() . PHP_EOL;


}

Note
You can create the S3Client client without providing your security credentials. Requests sent
using this client are anonymous requests, without a signature. Amazon S3 returns an error if you
send anonymous requests for a resource that is not publicly available.

For working examples, see Operations on Objects (p. 156). You can test these examples using your AWS
account or IAM user credentials.

For an example of listing object keys in a bucket, see Listing Keys Using the AWS SDK for PHP (p. 223).

Related Resources
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP Documentation

Making Requests Using AWS Account or IAM User Credentials - AWS SDK for Ruby
Before you can use version 3 of the AWS SDK for Ruby to make calls to Amazon S3, you must set the
AWS access credentials that the SDK uses to verify your access to your buckets and objects. If you
have shared credentials set up in the AWS credentials profile on your local system, version 3 of the
SDK for Ruby can use those credentials without your having to declare them in your code. For more
information about setting up shared credentials, see Making Requests Using AWS Account or IAM User
Credentials (p. 18).

The following Ruby code snippet uses the credentials in a shared AWS credentials file on a local
computer to authenticate a request to get all of the object key names in a specific bucket. It does the
following:

1. Creates an instance of the Aws::S3::Resource class.


2. Makes a request to Amazon S3 by enumerating objects in a bucket using the bucket method of
Aws::S3::Resource. The client generates the necessary signature value from the credentials in the
AWS credentials file on your computer, and includes it in the request it sends to Amazon S3.
3. Prints the array of object key names to the terminal.

Example

# Use the Amazon S3 modularized gem for version 3 of the AWS Ruby SDK.
require 'aws-sdk-s3'

# Get an Amazon S3 resource.
s3 = Aws::S3::Resource.new(region: 'us-west-2')

# Create an array of up to the first 100 object key names in the bucket.
bucket = s3.bucket('example_bucket').objects.collect(&:key)

# Print the array to the terminal.
puts bucket

If you don't have a local AWS credentials file, you can still create the Aws::S3::Resource resource
and execute code against Amazon S3 buckets and objects. Requests that are sent using version 3 of
the SDK for Ruby are anonymous, with no signature by default. Amazon S3 returns an error if you send
anonymous requests for a resource that's not publicly available.


You can use and expand the previous code snippet for SDK for Ruby applications, as in the following
more robust example. The credentials that are used for this example come from a local AWS credentials
file on the computer that is running this application. The credentials are for an IAM user who can list
objects in the bucket that the user specifies when they run the application.

# auth_request_test.rb
# Use the Amazon S3 modularized gem for version 3 of the AWS Ruby SDK.
require 'aws-sdk-s3'

# Usage: ruby auth_request_test.rb list BUCKET

# Set the name of the bucket on which the operations are performed.
# This argument is required
bucket_name = nil

# The operation to perform on the bucket.


operation = 'list' # default
operation = ARGV[0] if (ARGV.length > 0)

if ARGV.length > 1
bucket_name = ARGV[1]
else
exit 1
end

# Get an Amazon S3 resource.


s3 = Aws::S3::Resource.new(region: 'us-west-2')

# Get the bucket by name.


bucket = s3.bucket(bucket_name)

case operation

when 'list'
if bucket.exists?
# Enumerate the bucket contents and object etags.
puts "Contents of '%s':" % bucket_name
puts ' Name => GUID'

bucket.objects.limit(50).each do |obj|
puts " #{obj.key} => #{obj.etag}"
end
else
puts "The bucket '%s' does not exist!" % bucket_name
end

else
puts "Unknown operation: '%s'! Only list is supported." % operation
end


Making Requests Using IAM User Temporary Credentials
Topics
• Making Requests Using IAM User Temporary Credentials - AWS SDK for Java (p. 25)
• Making Requests Using IAM User Temporary Credentials - AWS SDK for .NET (p. 27)
• Making Requests Using AWS Account or IAM User Temporary Credentials - AWS SDK for PHP (p. 29)
• Making Requests Using IAM User Temporary Credentials - AWS SDK for Ruby (p. 31)

An AWS Account or an IAM user can request temporary security credentials and use them to send
authenticated requests to Amazon S3. This section provides examples of how to use the AWS SDKs for Java, .NET, PHP, and Ruby to obtain temporary security credentials and use them to authenticate your requests
to Amazon S3.

Making Requests Using IAM User Temporary Credentials - AWS SDK for Java
An IAM user or an AWS Account can request temporary security credentials (see Making Requests (p. 10))
using the AWS SDK for Java and use them to access Amazon S3. These credentials expire after the
specified session duration. To use IAM temporary security credentials, do the following:

1. Create an instance of the AWSSecurityTokenServiceClient class. For information about providing credentials, see Using the AWS SDKs, CLI, and Explorers (p. 655).
2. Assume the desired role by calling the assumeRole() method of the Security Token Service (STS)
client.
3. Start a session by calling the getSessionToken() method of the STS client. You provide session
information to this method using a GetSessionTokenRequest object.

The method returns the temporary security credentials.


4. Package the temporary security credentials into a BasicSessionCredentials object. You use this
object to provide the temporary security credentials to your Amazon S3 client.
5. Create an instance of the AmazonS3Client class using the temporary security credentials. You send
requests to Amazon S3 using this client. If you send requests using expired credentials, Amazon S3
will return an error.

Note
If you obtain temporary security credentials using your AWS account security credentials, the
temporary credentials are valid for only one hour. You can specify the session duration only if
you use IAM user credentials to request a session.

The following example lists a set of object keys in the specified bucket. The example obtains temporary
security credentials for a two-hour session and uses them to send an authenticated request to Amazon
S3.

If you want to test the sample using IAM user credentials, you will need to create an IAM user under your
AWS Account. For more information about how to create an IAM user, see Creating Your First IAM User
and Administrators Group in the IAM User Guide.

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 662).


// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
import com.amazonaws.services.securitytoken.model.AssumeRoleRequest;
import com.amazonaws.services.securitytoken.model.Credentials;
import com.amazonaws.services.securitytoken.model.GetSessionTokenRequest;
import com.amazonaws.services.securitytoken.model.GetSessionTokenResult;

public class MakingRequestsWithIAMTempCredentials {


public static void main(String[] args) {
String clientRegion = "*** Client region ***";
String roleARN = "*** ARN for role to be assumed ***";
String roleSessionName = "*** Role session name ***";
String bucketName = "*** Bucket name ***";

try {
// Creating the STS client is part of your trusted code. It has
// the security credentials you use to obtain temporary security credentials.
AWSSecurityTokenService stsClient =
AWSSecurityTokenServiceClientBuilder.standard()
.withCredentials(new
ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Assume the IAM role. Note that you cannot assume the role of an AWS root account;
// Amazon S3 will deny access. You must use credentials for an IAM user or an IAM role.
AssumeRoleRequest roleRequest = new AssumeRoleRequest()
.withRoleArn(roleARN)
.withRoleSessionName(roleSessionName);
stsClient.assumeRole(roleRequest);

// Start a session.
GetSessionTokenRequest getSessionTokenRequest = new GetSessionTokenRequest();
// The duration can be set to more than 3600 seconds only if temporary
// credentials are requested by an IAM user rather than an account owner.
getSessionTokenRequest.setDurationSeconds(7200);
GetSessionTokenResult sessionTokenResult =
stsClient.getSessionToken(getSessionTokenRequest);
Credentials sessionCredentials = sessionTokenResult.getCredentials();

// Package the temporary security credentials as a BasicSessionCredentials object
// for an Amazon S3 client object to use.
BasicSessionCredentials basicSessionCredentials = new BasicSessionCredentials(
sessionCredentials.getAccessKeyId(),
sessionCredentials.getSecretAccessKey(),
sessionCredentials.getSessionToken());

// Provide temporary security credentials so that the Amazon S3 client


// can send authenticated requests to Amazon S3. You create the client
// using the basicSessionCredentials object.

API Version 2006-03-01


26
Amazon Simple Storage Service Developer Guide
Using IAM User Temporary Credentials

AmazonS3 s3Client = AmazonS3ClientBuilder.standard()


.withCredentials(new
AWSStaticCredentialsProvider(basicSessionCredentials))
.withRegion(clientRegion)
.build();

// Verify that assuming the role worked and the permissions are set correctly
// by getting a set of object keys from the bucket.
ObjectListing objects = s3Client.listObjects(bucketName);
System.out.println("No. of Objects: " + objects.getObjectSummaries().size());
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Related Resources
• Using the AWS SDKs, CLI, and Explorers (p. 655)

Making Requests Using IAM User Temporary Credentials - AWS SDK for .NET
An IAM user or an AWS account can request temporary security credentials using the AWS SDK for .NET
and use them to access Amazon S3. These credentials expire after the session duration. To get temporary
security credentials and access Amazon S3, do the following:

1. Create an instance of the AWS Security Token Service client, AmazonSecurityTokenServiceClient.
   For information about providing credentials, see Using the AWS SDKs, CLI, and Explorers (p. 655).
2. Start a session by calling the GetSessionToken method of the STS client you created in the
preceding step. You provide session information to this method using a GetSessionTokenRequest
object.

The method returns your temporary security credentials.


3. Package the temporary security credentials in an instance of the SessionAWSCredentials object.
You use this object to provide the temporary security credentials to your Amazon S3 client.
4. Create an instance of the AmazonS3Client class by passing in the temporary security credentials.
You send requests to Amazon S3 using this client. If you send requests using expired credentials,
Amazon S3 returns an error.

Note
If you obtain temporary security credentials using your AWS account security credentials, those
credentials are valid for only one hour. You can specify a session duration only if you use IAM
user credentials to request a session.

The following C# example lists object keys in the specified bucket. For illustration, the example obtains
temporary security credentials for a two-hour session and uses them to send an authenticated request
to Amazon S3.

If you want to test the sample using IAM user credentials, you need to create an IAM user under your
AWS account. For more information about how to create an IAM user, see Creating Your First IAM User
and Administrators Group in the IAM User Guide. For more information about making requests, see
Making Requests (p. 10).

For instructions on creating and testing a working example, see Running the Amazon S3 .NET Code
Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.SecurityToken;
using Amazon.SecurityToken.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class TempCredExplicitSessionStartTest
    {
        private const string bucketName = "*** bucket name ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;

        public static void Main()
        {
            ListObjectsAsync().Wait();
        }

        private static async Task ListObjectsAsync()
        {
            try
            {
                // Credentials use the default AWS SDK for .NET credential search chain.
                // On local development machines, this is your default profile.
                Console.WriteLine("Listing objects stored in a bucket");
                SessionAWSCredentials tempCredentials = await GetTemporaryCredentialsAsync();

                // Create a client by providing temporary security credentials.
                using (s3Client = new AmazonS3Client(tempCredentials, bucketRegion))
                {
                    var listObjectRequest = new ListObjectsRequest
                    {
                        BucketName = bucketName
                    };
                    // Send request to Amazon S3.
                    ListObjectsResponse response = await s3Client.ListObjectsAsync(listObjectRequest);
                    List<S3Object> objects = response.S3Objects;
                    Console.WriteLine("Object count = {0}", objects.Count);
                }
            }
            catch (AmazonS3Exception s3Exception)
            {
                Console.WriteLine(s3Exception.Message, s3Exception.InnerException);
            }
            catch (AmazonSecurityTokenServiceException stsException)
            {
                Console.WriteLine(stsException.Message, stsException.InnerException);
            }
        }

        private static async Task<SessionAWSCredentials> GetTemporaryCredentialsAsync()
        {
            using (var stsClient = new AmazonSecurityTokenServiceClient())
            {
                var getSessionTokenRequest = new GetSessionTokenRequest
                {
                    DurationSeconds = 7200 // seconds
                };

                GetSessionTokenResponse sessionTokenResponse =
                    await stsClient.GetSessionTokenAsync(getSessionTokenRequest);

                Credentials credentials = sessionTokenResponse.Credentials;

                var sessionCredentials =
                    new SessionAWSCredentials(credentials.AccessKeyId,
                                              credentials.SecretAccessKey,
                                              credentials.SessionToken);
                return sessionCredentials;
            }
        }
    }
}

Related Resources
• Using the AWS SDKs, CLI, and Explorers (p. 655)

Making Requests Using AWS Account or IAM User Temporary Credentials - AWS SDK for PHP
This topic explains how to use classes from version 3 of the AWS SDK for PHP to request temporary
security credentials and use them to access Amazon S3. It assumes that you are already following the
instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 664) and have the AWS SDK
for PHP properly installed.

An IAM user or an AWS account can request temporary security credentials using version 3 of the AWS
SDK for PHP. It can then use the temporary credentials to access Amazon S3. The credentials expire when
the session duration expires. By default, the session duration is one hour. If you use IAM user credentials,
you can specify the duration (from 1 to 36 hours) when requesting the temporary security credentials.
For more information about temporary security credentials, see Temporary Security Credentials in the
IAM User Guide. For more information about making requests, see Making Requests (p. 10).
Note
If you obtain temporary security credentials using your AWS account security credentials, the
temporary security credentials are valid for only one hour. You can specify the session duration
only if you use IAM user credentials to request a session.

Example
The following PHP example lists object keys in the specified bucket using temporary security credentials.
The example obtains temporary security credentials for a default one-hour session, and uses them to
send an authenticated request to Amazon S3. For information about running the PHP examples in this
guide, see Running PHP Examples (p. 665).

If you want to test the example using IAM user credentials, you need to create an IAM user under your
AWS account. For information about how to create an IAM user, see Creating Your First IAM User and
Administrators Group in the IAM User Guide. For an example of setting the session duration when
using IAM user credentials to request a session, see Making Requests Using Federated User Temporary
Credentials - AWS SDK for PHP (p. 39).

<?php

require 'vendor/autoload.php';

use Aws\Sts\StsClient;
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';

$sts = new StsClient([
    'version' => 'latest',
    'region'  => 'us-east-1'
]);

$sessionToken = $sts->getSessionToken();

$s3 = new S3Client([
    'region'      => 'us-east-1',
    'version'     => 'latest',
    'credentials' => [
        'key'    => $sessionToken['Credentials']['AccessKeyId'],
        'secret' => $sessionToken['Credentials']['SecretAccessKey'],
        'token'  => $sessionToken['Credentials']['SessionToken']
    ]
]);

$result = $s3->listBuckets();

try {
    // Retrieve a paginator for listing objects.
    $objects = $s3->getPaginator('ListObjects', [
        'Bucket' => $bucket
    ]);

    echo "Keys retrieved!" . PHP_EOL;

    // List objects
    foreach ($objects as $object) {
        echo $object['Key'] . PHP_EOL;
    }
} catch (S3Exception $e) {
    echo $e->getMessage() . PHP_EOL;
}

Related Resources
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP Documentation

Making Requests Using IAM User Temporary Credentials - AWS SDK for Ruby
An IAM user or an AWS account can request temporary security credentials using AWS SDK for Ruby
and use them to access Amazon S3. These credentials expire after the session duration. By default, the
session duration is one hour. If you use IAM user credentials, you can specify the duration (from 1 to 36
hours) when requesting the temporary security credentials. For information about requesting temporary
security credentials, see Making Requests (p. 10).
Note
If you obtain temporary security credentials using your AWS account security credentials, the
temporary security credentials are valid for only one hour. You can specify session duration only
if you use IAM user credentials to request a session.

The following Ruby example creates a temporary user to list the items in a specified bucket for one hour.
To use this example, you must have AWS credentials that have the necessary permissions to create new
AWS Security Token Service (AWS STS) clients, and list Amazon S3 buckets.

require 'aws-sdk-core'
require 'aws-sdk-s3'
require 'aws-sdk-iam'

USAGE = <<DOC

Usage: assumerole_create_bucket_policy.rb -b BUCKET -u USER [-r REGION] [-d] [-h]

Assumes a role for USER to list items in BUCKET for one hour.

BUCKET is required and must already exist.

USER is required and if not found, is created.

If REGION is not supplied, defaults to us-west-2.

-d gives you extra (debugging) information.

-h displays this message and quits.

DOC

$debug = false

def print_debug(s)
  if $debug
    puts s
  end
end

def get_user(region, user_name, create)
  user = nil
  iam = Aws::IAM::Client.new(region: region)

  begin
    user = iam.create_user(user_name: user_name)
    iam.wait_until(:user_exists, user_name: user_name)
    print_debug("Created new user #{user_name}")
  rescue Aws::IAM::Errors::EntityAlreadyExists
    print_debug("Found user #{user_name} in region #{region}")
  end
end

# main
region = 'us-west-2'
user_name = ''
bucket_name = ''

i = 0

while i < ARGV.length
  case ARGV[i]

  when '-b'
    i += 1
    bucket_name = ARGV[i]

  when '-u'
    i += 1
    user_name = ARGV[i]

  when '-r'
    i += 1
    region = ARGV[i]

  when '-d'
    puts 'Debugging enabled'
    $debug = true

  when '-h'
    puts USAGE
    exit 0

  else
    puts 'Unrecognized option: ' + ARGV[i]
    puts USAGE
    exit 1
  end

  i += 1
end

if bucket_name == ''
  puts 'You must supply a bucket name'
  puts USAGE
  exit 1
end

if user_name == ''
  puts 'You must supply a user name'
  puts USAGE
  exit 1
end

# Identify the IAM user that is allowed to list Amazon S3 bucket items for an hour.
user = get_user(region, user_name, true)

# Create a new Amazon STS client and get temporary credentials. This uses a role that was
# already created.
creds = Aws::AssumeRoleCredentials.new(
  client: Aws::STS::Client.new(region: region),
  role_arn: "arn:aws:iam::111122223333:role/assumedrolelist",
  role_session_name: "assumerole-s3-list"
)

# Create an Amazon S3 resource with temporary credentials.
s3 = Aws::S3::Resource.new(region: region, credentials: creds)

puts "Contents of '%s':" % bucket_name
puts '  Name => GUID'

s3.bucket(bucket_name).objects.limit(50).each do |obj|
  puts "  #{obj.key} => #{obj.etag}"
end

Making Requests Using Federated User Temporary Credentials
You can request temporary security credentials and provide them to your federated users or applications
who need to access your AWS resources. This section provides examples of how you can use the AWS SDK
to obtain temporary security credentials for your federated users or applications and send authenticated
requests to Amazon S3 using those credentials. For a list of available AWS SDKs, see Sample Code and
Libraries.
Note
Both the AWS account and an IAM user can request temporary security credentials for federated
users. However, for added security, only an IAM user with the necessary permissions should
request these temporary credentials to ensure that the federated user gets at most the
permissions of the requesting IAM user. In some applications, you might find it suitable to
create an IAM user with specific permissions for the sole purpose of granting temporary security
credentials to your federated users and applications.

Making Requests Using Federated User Temporary Credentials - AWS SDK for Java
You can provide temporary security credentials for your federated users and applications so that they
can send authenticated requests to access your AWS resources. When requesting these temporary
credentials, you must provide a user name and an IAM policy that describes the resource permissions that
you want to grant. By default, the session duration is one hour. You can explicitly set a different duration
value when requesting the temporary security credentials for federated users and applications.
Note
For added security when requesting temporary security credentials for federated users and
applications, we recommend that you use a dedicated IAM user with only the necessary access
permissions. The temporary user you create can never get more permissions than the IAM user
who requested the temporary security credentials. For more information, see AWS Identity and
Access Management FAQs .

To provide security credentials and send an authenticated request to access resources, do the following:

• Create an instance of the AWSSecurityTokenServiceClient class. For information about providing
  credentials, see Using the AWS SDK for Java (p. 661).
• Start a session by calling the getFederationToken() method of the Security Token Service (STS)
client. Provide session information, including the user name and an IAM policy, that you want to attach
to the temporary credentials. You can provide an optional session duration. This method returns your
temporary security credentials.
• Package the temporary security credentials in an instance of the BasicSessionCredentials object.
You use this object to provide the temporary security credentials to your Amazon S3 client.
• Create an instance of the AmazonS3Client class using the temporary security credentials. You send
requests to Amazon S3 using this client. If you send requests using expired credentials, Amazon S3
returns an error.

Example

The example lists keys in the specified S3 bucket. In the example, you obtain temporary security
credentials for a two-hour session for your federated user and use the credentials to send authenticated
requests to Amazon S3. To run the example, you need to create an IAM user with an attached policy that
allows the user to request temporary security credentials and list your AWS resources. The following
policy accomplishes this:

{
  "Statement":[{
    "Action":[
      "s3:ListBucket",
      "sts:GetFederationToken*"
    ],
    "Effect":"Allow",
    "Resource":"*"
  }]
}

For more information about how to create an IAM user, see Creating Your First IAM User and
Administrators Group in the IAM User Guide.

After creating an IAM user and attaching the preceding policy, you can run the following example.
For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.IOException;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.auth.policy.Policy;
import com.amazonaws.auth.policy.Resource;
import com.amazonaws.auth.policy.Statement;
import com.amazonaws.auth.policy.Statement.Effect;
import com.amazonaws.auth.policy.actions.S3Actions;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
import com.amazonaws.services.securitytoken.model.Credentials;
import com.amazonaws.services.securitytoken.model.GetFederationTokenRequest;
import com.amazonaws.services.securitytoken.model.GetFederationTokenResult;
import com.amazonaws.services.s3.model.ObjectListing;

public class MakingRequestsWithFederatedTempCredentials {

    public static void main(String[] args) throws IOException {
        String clientRegion = "*** Client region ***";
        String bucketName = "*** Specify bucket name ***";
        String federatedUser = "*** Federated user name ***";
        String resourceARN = "arn:aws:s3:::" + bucketName;

        try {
            AWSSecurityTokenService stsClient = AWSSecurityTokenServiceClientBuilder
                    .standard()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(clientRegion)
                    .build();

            GetFederationTokenRequest getFederationTokenRequest = new GetFederationTokenRequest();
            getFederationTokenRequest.setDurationSeconds(7200);
            getFederationTokenRequest.setName(federatedUser);

            // Define the policy and add it to the request.
            Policy policy = new Policy();
            policy.withStatements(new Statement(Effect.Allow)
                    .withActions(S3Actions.ListObjects)
                    .withResources(new Resource(resourceARN)));
            getFederationTokenRequest.setPolicy(policy.toJson());

            // Get the temporary security credentials.
            GetFederationTokenResult federationTokenResult =
                    stsClient.getFederationToken(getFederationTokenRequest);
            Credentials sessionCredentials = federationTokenResult.getCredentials();

            // Package the session credentials as a BasicSessionCredentials
            // object for an Amazon S3 client object to use.
            BasicSessionCredentials basicSessionCredentials = new BasicSessionCredentials(
                    sessionCredentials.getAccessKeyId(),
                    sessionCredentials.getSecretAccessKey(),
                    sessionCredentials.getSessionToken());
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withCredentials(new AWSStaticCredentialsProvider(basicSessionCredentials))
                    .withRegion(clientRegion)
                    .build();

            // To verify that the client works, send a listObjects request using
            // the temporary security credentials.
            ObjectListing objects = s3Client.listObjects(bucketName);
            System.out.println("No. of Objects = " + objects.getObjectSummaries().size());
        }
        catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        }
        catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}

Related Resources
• Using the AWS SDKs, CLI, and Explorers (p. 655)

Making Requests Using Federated User Temporary Credentials - AWS SDK for .NET
You can provide temporary security credentials for your federated users and applications so that they
can send authenticated requests to access your AWS resources. When requesting these temporary
credentials, you must provide a user name and an IAM policy that describes the resource permissions
that you want to grant. By default, the duration of a session is one hour. You can explicitly set a different
duration value when requesting the temporary security credentials for federated users and applications.
For information about sending authenticated requests, see Making Requests (p. 10).
Note
When requesting temporary security credentials for federated users and applications, for
added security, we suggest that you use a dedicated IAM user with only the necessary access

permissions. The temporary user you create can never get more permissions than the IAM user
who requested the temporary security credentials. For more information, see AWS Identity and
Access Management FAQs .

You do the following:

• Create an instance of the AWS Security Token Service client, AmazonSecurityTokenServiceClient
  class. For information about providing credentials, see Using the AWS SDK for .NET (p. 663).
• Start a session by calling the GetFederationToken method of the STS client. You need to provide
session information, including the user name and an IAM policy that you want to attach to the
temporary credentials. Optionally, you can provide a session duration. This method returns your
temporary security credentials.
• Package the temporary security credentials in an instance of the SessionAWSCredentials object.
You use this object to provide the temporary security credentials to your Amazon S3 client.
• Create an instance of the AmazonS3Client class by passing the temporary security credentials. You
use this client to send requests to Amazon S3. If you send requests using expired credentials, Amazon
S3 returns an error.

Example

The following C# example lists the keys in the specified bucket. In the example, you obtain temporary
security credentials for a two-hour session for your federated user (User1), and use the credentials to
send authenticated requests to Amazon S3.

• For this exercise, you create an IAM user with minimal permissions. Using the credentials of this IAM
user, you request temporary credentials for others. This example lists only the objects in a specific
bucket. Create an IAM user with the following policy attached:

{
  "Statement":[{
    "Action":[
      "s3:ListBucket",
      "sts:GetFederationToken*"
    ],
    "Effect":"Allow",
    "Resource":"*"
  }]
}

The policy allows the IAM user to request temporary security credentials and access permission only to
list your AWS resources. For more information about how to create an IAM user, see Creating Your First
IAM User and Administrators Group in the IAM User Guide.
• Use the IAM user security credentials to test the following example. The example sends an authenticated
request to Amazon S3 using temporary security credentials. The example specifies the following policy
when requesting temporary security credentials for the federated user (User1), which restricts access
to listing objects in a specific bucket (YourBucketName). You must update the policy and provide your
own existing bucket name.

{
  "Statement":[
    {
      "Sid":"1",
      "Action":["s3:ListBucket"],
      "Effect":"Allow",
      "Resource":"arn:aws:s3:::YourBucketName"
    }
  ]
}

• Example

Update the following sample and provide the bucket name that you specified in the preceding
federated user access policy. For instructions on how to create and test a working example, see
Running the Amazon S3 .NET Code Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.SecurityToken;
using Amazon.SecurityToken.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class TempFederatedCredentialsTest
    {
        private const string bucketName = "*** bucket name ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 client;

        public static void Main()
        {
            ListObjectsAsync().Wait();
        }

        private static async Task ListObjectsAsync()
        {
            try
            {
                Console.WriteLine("Listing objects stored in a bucket");
                // Credentials use the default AWS SDK for .NET credential search chain.
                // On local development machines, this is your default profile.
                SessionAWSCredentials tempCredentials =
                    await GetTemporaryFederatedCredentialsAsync();

                // Create a client by providing temporary security credentials.
                using (client = new AmazonS3Client(tempCredentials, bucketRegion))
                {
                    ListObjectsRequest listObjectRequest = new ListObjectsRequest();
                    listObjectRequest.BucketName = bucketName;

                    ListObjectsResponse response = await client.ListObjectsAsync(listObjectRequest);
                    List<S3Object> objects = response.S3Objects;
                    Console.WriteLine("Object count = {0}", objects.Count);

                    Console.WriteLine("Press any key to continue...");
                    Console.ReadKey();
                }
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered ***. Message:'{0}' when listing objects", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown encountered on server. Message:'{0}' when listing objects", e.Message);
            }
        }

        private static async Task<SessionAWSCredentials> GetTemporaryFederatedCredentialsAsync()
        {
            AmazonSecurityTokenServiceConfig config = new AmazonSecurityTokenServiceConfig();
            AmazonSecurityTokenServiceClient stsClient =
                new AmazonSecurityTokenServiceClient(config);

            GetFederationTokenRequest federationTokenRequest =
                new GetFederationTokenRequest();
            federationTokenRequest.DurationSeconds = 7200;
            federationTokenRequest.Name = "User1";
            federationTokenRequest.Policy = @"{
               ""Statement"":
               [
                 {
                   ""Sid"":""Stmt1311212314284"",
                   ""Action"":[""s3:ListBucket""],
                   ""Effect"":""Allow"",
                   ""Resource"":""arn:aws:s3:::" + bucketName + @"""
                 }
               ]
             }";

            GetFederationTokenResponse federationTokenResponse =
                await stsClient.GetFederationTokenAsync(federationTokenRequest);
            Credentials credentials = federationTokenResponse.Credentials;

            SessionAWSCredentials sessionCredentials =
                new SessionAWSCredentials(credentials.AccessKeyId,
                                          credentials.SecretAccessKey,
                                          credentials.SessionToken);
            return sessionCredentials;
        }
    }
}

Related Resources
• Using the AWS SDKs, CLI, and Explorers (p. 655)

Making Requests Using Federated User Temporary Credentials - AWS SDK for PHP
This topic explains how to use classes from version 3 of the AWS SDK for PHP to request temporary
security credentials for federated users and applications and use them to access resources stored in
Amazon S3. It assumes that you are already following the instructions for Using the AWS SDK for PHP
and Running PHP Examples (p. 664) and have the AWS SDK for PHP properly installed.

You can provide temporary security credentials to your federated users and applications so they can send
authenticated requests to access your AWS resources. When requesting these temporary credentials,
you must provide a user name and an IAM policy that describes the resource permissions that you want
to grant. These credentials expire when the session duration expires. By default, the session duration
is one hour. You can explicitly set a different value for the duration when requesting the temporary
security credentials for federated users and applications. For more information about temporary security
credentials, see Temporary Security Credentials in the IAM User Guide. For information about providing
temporary security credentials to your federated users and applications, see Making Requests (p. 10).

For added security when requesting temporary security credentials for federated users and applications,
we recommend using a dedicated IAM user with only the necessary access permissions. The temporary
user you create can never get more permissions than the IAM user who requested the temporary security
credentials. For information about identity federation, see AWS Identity and Access Management FAQs.

For information about running the PHP examples in this guide, see Running PHP Examples (p. 665).

Example

The following PHP example lists keys in the specified bucket. In the example, you obtain temporary
security credentials for a one-hour session for your federated user (User1). Then you use the temporary
security credentials to send authenticated requests to Amazon S3.

For added security when requesting temporary credentials for others, you use the security credentials
of an IAM user who has permissions to request temporary security credentials. To ensure that the IAM
user grants only the minimum application-specific permissions to the federated user, you can also limit
the access permissions of this IAM user. This example lists only objects in a specific bucket. Create an IAM
user with the following policy attached:

{
  "Statement":[{
    "Action":[
      "s3:ListBucket",
      "sts:GetFederationToken*"
    ],
    "Effect":"Allow",
    "Resource":"*"
  }]
}

The policy allows the IAM user to request temporary security credentials and access permission only to
list your AWS resources. For more information about how to create an IAM user, see Creating Your First
IAM User and Administrators Group in the IAM User Guide.

You can now use the IAM user security credentials to test the following example. The example sends an
authenticated request to Amazon S3 using temporary security credentials. When requesting temporary
security credentials for the federated user (User1), the example specifies the following policy, which
restricts access to list objects in a specific bucket. Update the policy with your bucket name.

{
  "Statement":[
    {
      "Sid":"1",
      "Action":["s3:ListBucket"],
      "Effect":"Allow",
      "Resource":"arn:aws:s3:::YourBucketName"
    }
  ]
}

In the following example, when specifying the policy resource, replace YourBucketName with the name
of your bucket:

<?php

require 'vendor/autoload.php';

use Aws\Sts\StsClient;
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';

// In real applications, the following code is part of your trusted code. It has
// the security credentials that you use to obtain temporary security credentials.
$sts = new StsClient([
    'version' => 'latest',
    'region'  => 'us-east-1'
]);

// Fetch the federated credentials.
$sessionToken = $sts->getFederationToken([
    'Name'            => 'User1',
    'DurationSeconds' => '3600',
    'Policy'          => json_encode([
        'Statement' => [
            'Sid'      => 'randomstatementid' . time(),
            'Action'   => ['s3:ListBucket'],
            'Effect'   => 'Allow',
            'Resource' => 'arn:aws:s3:::' . $bucket
        ]
    ])
]);

// The following will be part of your less trusted code. You provide temporary
// security credentials so the code can send authenticated requests to Amazon S3.
$s3 = new S3Client([
    'region'      => 'us-east-1',
    'version'     => 'latest',
    'credentials' => [
        'key'    => $sessionToken['Credentials']['AccessKeyId'],
        'secret' => $sessionToken['Credentials']['SecretAccessKey'],
        'token'  => $sessionToken['Credentials']['SessionToken']
    ]
]);

try {
    $result = $s3->listObjects([
        'Bucket' => $bucket
    ]);
} catch (S3Exception $e) {
    echo $e->getMessage() . PHP_EOL;
}

Related Resources
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP Documentation

Making Requests Using Federated User Temporary Credentials - AWS SDK for Ruby
You can provide temporary security credentials for your federated users and applications so that they
can send authenticated requests to access your AWS resources. When requesting temporary credentials
from the IAM service, you must provide a user name and an IAM policy that describes the resource
permissions that you want to grant. By default, the session duration is one hour. However, if you are
requesting temporary credentials using IAM user credentials, you can explicitly set a different duration
value when requesting the temporary security credentials for federated users and applications. For
information about temporary security credentials for your federated users and applications, see Making
Requests (p. 10).
Note
For added security when you request temporary security credentials for federated users and
applications, you might want to use a dedicated IAM user with only the necessary access
permissions. The temporary user you create can never get more permissions than the IAM user
who requested the temporary security credentials. For more information, see AWS Identity and
Access Management FAQs .

Example
The following Ruby code example allows a federated user with a limited set of permissions to list keys
in the specified bucket.

require 'aws-sdk-s3'
require 'aws-sdk-iam'

USAGE = <<DOC

Usage: federated_create_bucket_policy.rb -b BUCKET -u USER [-r REGION] [-d] [-h]

Creates a federated policy for USER to list items in BUCKET for one hour.

BUCKET is required and must already exist.

USER is required and if not found, is created.

If REGION is not supplied, defaults to us-west-2.

-d gives you extra (debugging) information.

-h displays this message and quits.

DOC

$debug = false

def print_debug(s)
  if $debug
    puts s
  end
end

def get_user(region, user_name, create)
  user = nil
  iam = Aws::IAM::Client.new(region: region)

  begin
    user = iam.create_user(user_name: user_name)
    iam.wait_until(:user_exists, user_name: user_name)
    print_debug("Created new user #{user_name}")
  rescue Aws::IAM::Errors::EntityAlreadyExists
    print_debug("Found user #{user_name} in region #{region}")
  end
end

# main
region = 'us-west-2'
user_name = ''
bucket_name = ''

i = 0

while i < ARGV.length
  case ARGV[i]

  when '-b'
    i += 1
    bucket_name = ARGV[i]

  when '-u'
    i += 1
    user_name = ARGV[i]

  when '-r'
    i += 1
    region = ARGV[i]

  when '-d'
    puts 'Debugging enabled'
    $debug = true

  when '-h'
    puts USAGE
    exit 0

  else
    puts 'Unrecognized option: ' + ARGV[i]
    puts USAGE
    exit 1
  end

  i += 1
end

if bucket_name == ''
  puts 'You must supply a bucket name'
  puts USAGE
  exit 1
end

if user_name == ''
  puts 'You must supply a user name'
  puts USAGE
  exit 1
end

# Identify the IAM user we allow to list Amazon S3 bucket items for an hour.
user = get_user(region, user_name, true)

# Create a new STS client and get temporary credentials.
sts = Aws::STS::Client.new(region: region)

creds = sts.get_federation_token({
  duration_seconds: 3600,
  name: user_name,
  policy: "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"Stmt1\",\"Effect\":\"Allow\",\"Action\":\"s3:ListBucket\",\"Resource\":\"arn:aws:s3:::#{bucket_name}\"}]}",
})

# Create an Amazon S3 resource with temporary credentials.
s3 = Aws::S3::Resource.new(region: region, credentials: creds)

puts "Contents of '%s':" % bucket_name
puts '  Name => GUID'

s3.bucket(bucket_name).objects.limit(50).each do |obj|
  puts "  #{obj.key} => #{obj.etag}"
end

Making Requests Using the REST API


This section contains information on how to make requests to Amazon S3 endpoints by using the REST
API. For a list of Amazon S3 endpoints, see Regions and Endpoints in the AWS General Reference.

Topics
• Making Requests to Dual-Stack Endpoints by Using the REST API (p. 45)
• Virtual Hosting of Buckets (p. 45)
• Request Redirection and the REST API (p. 49)

When making requests by using the REST API, you can use virtual hosted–style or path-style URIs for the
Amazon S3 endpoints. For more information, see Working with Amazon S3 Buckets (p. 52).

Example Virtual Hosted–Style Request

Following is an example of a virtual hosted–style request to delete the puppy.jpg file from the bucket
named examplebucket.

DELETE /puppy.jpg HTTP/1.1
Host: examplebucket.s3-us-west-2.amazonaws.com
Date: Mon, 11 Apr 2016 12:00:00 GMT
x-amz-date: Mon, 11 Apr 2016 12:00:00 GMT
Authorization: authorization string

Example Path-Style Request

Following is an example of a path-style version of the same request.

DELETE /examplebucket/puppy.jpg HTTP/1.1
Host: s3-us-west-2.amazonaws.com
Date: Mon, 11 Apr 2016 12:00:00 GMT
x-amz-date: Mon, 11 Apr 2016 12:00:00 GMT
Authorization: authorization string

Amazon S3 supports virtual hosted-style and path-style access in all regions. The path-style syntax,
however, requires that you use the region-specific endpoint when attempting to access a bucket.
For example, if you have a bucket called mybucket that resides in the EU (Ireland) region, you want
to use path-style syntax, and the object is named puppy.jpg, the correct URI is http://s3-eu-
west-1.amazonaws.com/mybucket/puppy.jpg.
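
To make the two addressing styles concrete, here is a minimal Java sketch (not part of the original
examples) that builds both forms of URL from a bucket name, key name, and region-specific endpoint.
The bucket, key, and endpoint values are the same illustrative ones used above, not requirements of
the API.

public class S3UrlStyles {

    // Virtual hosted–style: the bucket name is part of the host name.
    static String virtualHostedStyle(String bucket, String key, String endpoint) {
        return "https://" + bucket + "." + endpoint + "/" + key;
    }

    // Path-style: the bucket name is the first slash-delimited path component.
    static String pathStyle(String bucket, String key, String endpoint) {
        return "https://" + endpoint + "/" + bucket + "/" + key;
    }

    public static void main(String[] args) {
        String endpoint = "s3-eu-west-1.amazonaws.com"; // region-specific endpoint
        // Prints https://mybucket.s3-eu-west-1.amazonaws.com/puppy.jpg
        System.out.println(virtualHostedStyle("mybucket", "puppy.jpg", endpoint));
        // Prints https://s3-eu-west-1.amazonaws.com/mybucket/puppy.jpg
        System.out.println(pathStyle("mybucket", "puppy.jpg", endpoint));
    }
}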

If you try to access a bucket outside the US East (N. Virginia) region with path-style syntax that uses
either of the following, you will receive an HTTP 307 Temporary Redirect response and a message
indicating the correct URI for your resource:

• http://s3.amazonaws.com
• An endpoint for a region different from the one where the bucket resides. For example, if you
use http://s3-eu-west-1.amazonaws.com for a bucket that was created in the US West (N.
California) region.

Making Requests to Dual-Stack Endpoints by Using the REST API
When using the REST API, you can directly access a dual-stack endpoint by using a virtual hosted–style
or a path-style endpoint name (URI). All Amazon S3 dual-stack endpoint names include the region in the
name. Unlike the standard IPv4-only endpoints, both virtual hosted–style and path-style endpoints use
region-specific endpoint names.

Example Virtual Hosted–Style Dual-Stack Endpoint Request


You can use a virtual hosted–style endpoint in your REST request as shown in the following example that
retrieves the puppy.jpg object from the bucket named examplebucket.

GET /puppy.jpg HTTP/1.1
Host: examplebucket.s3.dualstack.us-west-2.amazonaws.com
Date: Mon, 11 Apr 2016 12:00:00 GMT
x-amz-date: Mon, 11 Apr 2016 12:00:00 GMT
Authorization: authorization string

Example Path-Style Dual-Stack Endpoint Request


Or you can use a path-style endpoint in your request as shown in the following example.

GET /examplebucket/puppy.jpg HTTP/1.1
Host: s3.dualstack.us-west-2.amazonaws.com
Date: Mon, 11 Apr 2016 12:00:00 GMT
x-amz-date: Mon, 11 Apr 2016 12:00:00 GMT
Authorization: authorization string

For more information about dual-stack endpoints, see Using Amazon S3 Dual-Stack Endpoints (p. 14).
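
If you make requests through the AWS SDK for Java instead of constructing REST requests yourself, you
can point a client at the dual-stack endpoints by enabling the dual-stack option on the client builder.
The following is a minimal sketch; the Region and bucket name are placeholders, and it assumes an SDK
for Java release in which AmazonS3ClientBuilder exposes the withDualstackEnabled option.

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class DualStackClientSketch {
    public static void main(String[] args) {
        // Enabling the dual-stack option directs requests to the
        // s3.dualstack.<region>.amazonaws.com endpoints, which support IPv4 and IPv6.
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_WEST_2)
                .withDualstackEnabled(true)
                .build();

        // Any subsequent request, such as listing objects, uses the dual-stack endpoint.
        s3Client.listObjects("*** Bucket name ***")
                .getObjectSummaries()
                .forEach(summary -> System.out.println(summary.getKey()));
    }
}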

Virtual Hosting of Buckets


Topics
• HTTP Host Header Bucket Specification (p. 46)
• Examples (p. 47)
• Customizing Amazon S3 URLs with CNAMEs (p. 48)
• Limitations (p. 49)
• Backward Compatibility (p. 49)

In general, virtual hosting is the practice of serving multiple web sites from a single web server. One
way to differentiate sites is by using the apparent host name of the request instead of just the path
name part of the URI. An ordinary Amazon S3 REST request specifies a bucket by using the first slash-
delimited component of the Request-URI path. Alternatively, you can use Amazon S3 virtual hosting to
address a bucket in a REST API call by using the HTTP Host header. In practice, Amazon S3 interprets
Host as meaning that most buckets are automatically accessible (for limited types of requests) at
http://bucketname.s3.amazonaws.com. Furthermore, by naming your bucket after your registered
domain name and by making that name a DNS alias for Amazon S3, you can completely customize the
URL of your Amazon S3 resources, for example, http://my.bucketname.com/.

Besides the attractiveness of customized URLs, a second benefit of virtual hosting is the ability to
publish to the "root directory" of your bucket's virtual server. This ability can be important because many
existing applications search for files in this standard location. For example, favicon.ico, robots.txt,
and crossdomain.xml are all expected to be found at the root.
Important
Amazon S3 supports virtual hosted-style and path-style access in all regions. The path-style
syntax, however, requires that you use the region-specific endpoint when attempting to access
a bucket. For example, if you have a bucket called mybucket that resides in the EU (Ireland)
region, you want to use path-style syntax, and the object is named puppy.jpg, the correct URI
is http://s3-eu-west-1.amazonaws.com/mybucket/puppy.jpg.
If you try to access a bucket outside the US East (N. Virginia) region with path-style syntax that
uses either of the following, you will receive an HTTP 307 Temporary Redirect response and a
message indicating the correct URI for your resource:

• http://s3.amazonaws.com
• An endpoint for a region different from the one where the bucket resides. For example, if you
use http://s3-eu-west-1.amazonaws.com for a bucket that was created in the US West
(N. California) region.

Note
Amazon S3 routes any virtual hosted–style requests to the US East (N. Virginia) region by
default if you use the US East (N. Virginia) endpoint (s3.amazonaws.com), instead of the region-
specific endpoint (for example, s3-eu-west-1.amazonaws.com). When you create a bucket, in any
region, Amazon S3 updates DNS to reroute the request to the correct location, which might take
time. In the meantime, the default rule applies and your virtual hosted–style request goes to the
US East (N. Virginia) region, and Amazon S3 redirects it with HTTP 307 redirect to the correct
region. For more information, see Request Redirection and the REST API (p. 608).
When using virtual hosted–style buckets with SSL, the SSL wild card certificate only matches
buckets that do not contain periods. To work around this, use HTTP or write your own certificate
verification logic.

HTTP Host Header Bucket Specification


As long as your GET request does not use the SSL endpoint, you can specify the bucket for the request by
using the HTTP Host header. The Host header in a REST request is interpreted as follows:

• If the Host header is omitted or its value is 's3.amazonaws.com', the bucket for the request will be
the first slash-delimited component of the Request-URI, and the key for the request will be the rest
of the Request-URI. This is the ordinary method, as illustrated by the first and second examples in this
section. Omitting the Host header is valid only for HTTP 1.0 requests.
• Otherwise, if the value of the Host header ends in '.s3.amazonaws.com', the bucket name is the
leading component of the Host header's value up to '.s3.amazonaws.com'. The key for the request
is the Request-URI. This interpretation exposes buckets as subdomains of s3.amazonaws.com, as
illustrated by the third and fourth examples in this section.
• Otherwise, the bucket for the request is the lowercase value of the Host header, and the key for
the request is the Request-URI. This interpretation is useful when you have registered the same DNS
name as your bucket name and have configured that name to be a CNAME alias for Amazon S3. The
procedure for registering domain names and configuring DNS is beyond the scope of this guide, but
the result is illustrated by the final example in this section.

Examples
This section provides example URLs and requests.

Example Path Style Method

This example uses johnsmith.net as the bucket name and homepage.html as the key name.

The URL is as follows:

http://s3.amazonaws.com/johnsmith.net/homepage.html

The request is as follows:

GET /johnsmith.net/homepage.html HTTP/1.1
Host: s3.amazonaws.com

The request with HTTP 1.0 and omitting the host header is as follows:

GET /johnsmith.net/homepage.html HTTP/1.0

For information about DNS-compatible names, see Limitations (p. 49). For more information about
keys, see Keys (p. 3).

Example Virtual Hosted–Style Method

This example uses johnsmith.net as the bucket name and homepage.html as the key name.

The URL is as follows:

http://johnsmith.net.s3.amazonaws.com/homepage.html

The request is as follows:

GET /homepage.html HTTP/1.1
Host: johnsmith.net.s3.amazonaws.com

The virtual hosted–style method requires the bucket name to be DNS-compliant.

Example Virtual Hosted–Style Method for a Bucket in a Region Other Than US East (N.
Virginia) region

This example uses johnsmith.eu as the name for a bucket in the EU (Ireland) region and
homepage.html as the key name.

The URL is as follows:

http://johnsmith.eu.s3-eu-west-1.amazonaws.com/homepage.html

The request is as follows:

GET /homepage.html HTTP/1.1
Host: johnsmith.eu.s3-eu-west-1.amazonaws.com

Note that, instead of using the region-specific endpoint, you can also use the US East (N. Virginia) region
endpoint, no matter what region the bucket resides in.

http://johnsmith.eu.s3.amazonaws.com/homepage.html

The request is as follows:

GET /homepage.html HTTP/1.1
Host: johnsmith.eu.s3.amazonaws.com

Example CNAME Method


This example uses www.johnsmith.net as the bucket name and homepage.html as the
key name. To use this method, you must configure your DNS name as a CNAME alias for
bucketname.s3.amazonaws.com.

The URL is as follows:

http://www.johnsmith.net/homepage.html

The request is as follows:

GET /homepage.html HTTP/1.1
Host: www.johnsmith.net

Customizing Amazon S3 URLs with CNAMEs


Depending on your needs, you might not want "s3.amazonaws.com" to appear on your website or
service. For example, if you host your website images on Amazon S3, you might prefer http://
images.johnsmith.net/ instead of http://johnsmith-images.s3.amazonaws.com/.

The bucket name must be the same as the CNAME. So http://images.johnsmith.net/filename would be the
same as http://images.johnsmith.net.s3.amazonaws.com/filename if a CNAME were created to map
images.johnsmith.net to images.johnsmith.net.s3.amazonaws.com.

Any bucket with a DNS-compatible name can be referenced as follows: http://
[BucketName].s3.amazonaws.com/[Filename], for example, http://images.johnsmith.net.s3.amazonaws.com/
mydog.jpg. By using CNAME, you can map images.johnsmith.net to an Amazon S3 host name so that the
previous URL could become http://images.johnsmith.net/mydog.jpg.

The CNAME DNS record should alias your domain name to the appropriate virtual hosted–style host
name. For example, if your bucket name and domain name are images.johnsmith.net, the CNAME
record should alias to images.johnsmith.net.s3.amazonaws.com.

images.johnsmith.net CNAME images.johnsmith.net.s3.amazonaws.com.

Setting the alias target to s3.amazonaws.com also works, but it may result in extra HTTP redirects.

Amazon S3 uses the host name to determine the bucket name. For example, suppose that you have
configured www.example.com as a CNAME for www.example.com.s3.amazonaws.com. When you
access http://www.example.com, Amazon S3 receives a request similar to the following:

Example

GET / HTTP/1.1
Host: www.example.com
Date: date
Authorization: signatureValue

Because Amazon S3 sees only the original host name www.example.com and is unaware of the CNAME
mapping used to resolve the request, the CNAME and the bucket name must be the same.

Any Amazon S3 endpoint can be used in a CNAME. For example, s3-ap-southeast-1.amazonaws.com can
be used in CNAMEs. For more information about endpoints, see Request Endpoints (p. 11).

To associate a host name with an Amazon S3 bucket using CNAMEs

1. Select a host name that belongs to a domain you control. This example uses the images subdomain
of the johnsmith.net domain.
2. Create a bucket that matches the host name. In this example, the host and bucket names are
images.johnsmith.net.
Note
The bucket name must exactly match the host name.
3. Create a CNAME record that defines the host name as an alias for the Amazon S3 bucket. For
example:

images.johnsmith.net CNAME images.johnsmith.net.s3.amazonaws.com


Important
For request routing reasons, the CNAME record must be defined exactly as shown in the
preceding example. Otherwise, it might appear to operate correctly, but will eventually
result in unpredictable behavior.
Note
The procedure for configuring DNS depends on your DNS server or DNS provider. For
specific information, see your server documentation or contact your provider.

Limitations
Specifying the bucket for the request by using the HTTP Host header is supported for non-SSL requests
and when using the REST API. You cannot specify the bucket in SOAP by using a different endpoint.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.

Backward Compatibility
Early versions of Amazon S3 incorrectly ignored the HTTP Host header. Applications that depend on
this undocumented behavior must be updated to set the Host header correctly. Because Amazon S3
determines the bucket name from Host when it is present, the most likely symptom of this problem is to
receive an unexpected NoSuchBucket error result code.

Request Redirection and the REST API


Topics
• Redirects and HTTP User-Agents (p. 50)
• Redirects and 100-Continue (p. 50)
• Redirect Example (p. 50)

This section describes how to handle HTTP redirects by using the Amazon S3 REST API. For general
information about Amazon S3 redirects, see Request Redirection and the REST API (p. 608) in the
Amazon Simple Storage Service API Reference.

Redirects and HTTP User-Agents


Programs that use the Amazon S3 REST API should handle redirects either at the application layer or the
HTTP layer. Many HTTP client libraries and user agents can be configured to correctly handle redirects
automatically; however, many others have incorrect or incomplete redirect implementations.

Before you rely on a library to fulfill the redirect requirement, test the following cases:

• Verify all HTTP request headers are correctly included in the redirected request (the second request
after receiving a redirect) including HTTP standards such as Authorization and Date.
• Verify non-GET redirects, such as PUT and DELETE, work correctly.
• Verify large PUT requests follow redirects correctly.
• Verify PUT requests follow redirects correctly if the 100-continue response takes a long time to arrive.

HTTP user-agents that strictly conform to RFC 2616 might require explicit confirmation before following
a redirect when the HTTP request method is not GET or HEAD. It is generally safe to follow redirects
generated by Amazon S3 automatically, as the system will issue redirects only to hosts within the
amazonaws.com domain and the effect of the redirected request will be the same as that of the original
request.
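
As one way to exercise these checks at the application layer, the following minimal Java sketch disables
automatic redirect handling and re-sends a request, with its original headers, to the endpoint named in
the Location header of a 307 response. It is an illustrative outline that assumes the caller supplies
already signed headers; it is not a complete client.

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Map;

public class RedirectAwareRequest {

    // Sends a request and, if Amazon S3 answers with 307 Temporary Redirect,
    // repeats the request once against the endpoint named in the Location header.
    static HttpURLConnection sendWithRedirect(String method, String url,
                                              Map<String, String> headers) throws Exception {
        HttpURLConnection conn = open(method, url, headers);
        if (conn.getResponseCode() == 307) {
            String location = conn.getHeaderField("Location");
            conn.disconnect();
            // All original headers (including Authorization and Date) must also be
            // present on the redirected request.
            conn = open(method, location, headers);
        }
        return conn;
    }

    private static HttpURLConnection open(String method, String url,
                                          Map<String, String> headers) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setInstanceFollowRedirects(false); // handle the redirect ourselves
        conn.setRequestMethod(method);
        headers.forEach(conn::setRequestProperty);
        return conn;
    }
}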

Redirects and 100-Continue


To simplify redirect handling, improve efficiency, and avoid the costs associated with sending a
redirected request body twice, configure your application to use 100-continue for PUT operations.
When your application uses 100-continue, it does not send the request body until it receives an
acknowledgement. If the message is rejected based on the headers, the body of the message is not sent.
For more information about 100-continue, go to RFC 2616, Section 8.2.3.
Note
According to RFC 2616, when using Expect: Continue with an unknown HTTP server, you
should not wait an indefinite period before sending the request body. This is because some
HTTP servers do not recognize 100-continue. However, Amazon S3 does recognize if your
request contains an Expect: Continue and will respond with a provisional 100-continue
status or a final status code. Additionally, no redirect error will occur after receiving the
provisional 100 continue go-ahead. This will help you avoid receiving a redirect response while
you are still writing the request body.
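
How you enable 100-continue depends on the HTTP client library you use. As one illustration, with the
Apache HttpClient library (which this guide does not otherwise assume), the request configuration below
asks the client to send Expect: 100-continue and wait for the server's interim response before
transmitting the request body. Treat it as a sketch of the setting rather than a required setup.

import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class ExpectContinueClient {
    public static CloseableHttpClient build() {
        // Ask the client to send "Expect: 100-continue" on entity-enclosing
        // requests (such as PUT) and wait for the server's go-ahead before
        // transmitting the request body.
        RequestConfig config = RequestConfig.custom()
                .setExpectContinueEnabled(true)
                .build();
        return HttpClients.custom()
                .setDefaultRequestConfig(config)
                .build();
    }
}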

Redirect Example
This section provides an example of client-server interaction using HTTP redirects and 100-continue.

Following is a sample PUT to the quotes.s3.amazonaws.com bucket.

PUT /nelson.txt HTTP/1.1
Host: quotes.s3.amazonaws.com
Date: Mon, 15 Oct 2007 22:18:46 +0000
Content-Length: 6
Expect: 100-continue

Amazon S3 returns the following:


HTTP/1.1 307 Temporary Redirect
Location: http://quotes.s3-4c25d83b.amazonaws.com/nelson.txt?rk=8d47490b
Content-Type: application/xml
Transfer-Encoding: chunked
Date: Mon, 15 Oct 2007 22:18:46 GMT
Server: AmazonS3

<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>TemporaryRedirect</Code>
  <Message>Please re-send this request to the
  specified temporary endpoint. Continue to use the
  original request endpoint for future requests.
  </Message>
  <Endpoint>quotes.s3-4c25d83b.amazonaws.com</Endpoint>
  <Bucket>quotes</Bucket>
</Error>

The client follows the redirect response and issues a new request to the
quotes.s3-4c25d83b.amazonaws.com temporary endpoint.

PUT /nelson.txt?rk=8d47490b HTTP/1.1
Host: quotes.s3-4c25d83b.amazonaws.com
Date: Mon, 15 Oct 2007 22:18:46 +0000
Content-Length: 6
Expect: 100-continue

Amazon S3 returns 100 Continue, indicating that the client should proceed with sending the request body.

HTTP/1.1 100 Continue

The client sends the request body.

ha ha\n

Amazon S3 returns the final response.

HTTP/1.1 200 OK
Date: Mon, 15 Oct 2007 22:18:48 GMT
ETag: "a2c8d6b872054293afd41061e93bc289"
Content-Length: 0
Server: AmazonS3


Working with Amazon S3 Buckets


Amazon S3 is cloud storage for the internet. To upload your data (photos, videos, documents, and so on),
you first create a bucket in one of the AWS Regions. You can then upload any number of objects to the
bucket.

In terms of implementation, buckets and objects are resources, and Amazon S3 provides APIs for you to
manage them. For example, you can create a bucket and upload objects using the Amazon S3 API. You
can also use the Amazon S3 console to perform these operations. The console uses the Amazon S3 APIs
to send requests to Amazon S3.

This section explains how to work with buckets. For information about working with objects, see Working
with Amazon S3 Objects (p. 95).

An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts.
This means that after a bucket is created, the name of that bucket cannot be used by another AWS
account in any AWS Region until the bucket is deleted. You should not depend on specific bucket naming
conventions for availability or security verification purposes. For bucket naming guidelines, see Bucket
Restrictions and Limitations (p. 56).

Amazon S3 creates buckets in a region you specify. To optimize latency, minimize costs, or address
regulatory requirements, choose any AWS Region that is geographically close to you. For example, if you
reside in Europe, you might find it advantageous to create buckets in the EU (Ireland) or EU (Frankfurt)
regions. For a list of Amazon S3 regions, see Regions and Endpoints in the AWS General Reference.
Note
Objects belonging to a bucket that you create in a specific AWS Region never leave that region,
unless you explicitly transfer them to another region. For example, objects stored in the EU
(Ireland) region never leave it.

Topics
• Creating a Bucket (p. 52)
• Accessing a Bucket (p. 54)
• Bucket Configuration Options (p. 54)
• Bucket Restrictions and Limitations (p. 56)
• Examples of Creating a Bucket (p. 57)
• Deleting or Emptying a Bucket (p. 61)
• Amazon S3 Default Encryption for S3 Buckets (p. 65)
• Managing Bucket Website Configuration (p. 68)
• Amazon S3 Transfer Acceleration (p. 72)
• Requester Pays Buckets (p. 79)
• Buckets and Access Control (p. 83)
• Billing and Usage Reporting for S3 Buckets (p. 83)

Creating a Bucket
Amazon S3 provides APIs for creating and managing buckets. By default, you can create up to 100
buckets in each of your AWS accounts. If you need more buckets, you can increase your bucket limit by
submitting a service limit increase. To learn how to submit a bucket limit increase, see AWS Service Limits
in the AWS General Reference.

When you create a bucket, you provide a name and the AWS Region where you want to create the
bucket. For information about naming buckets, see Rules for Bucket Naming (p. 57).


You can store any number of objects in a bucket.

You can create a bucket using any of the following methods:

• With the console.


• Programmatically, using the AWS SDKs.
Note
If you need to, you can also make the Amazon S3 REST API calls directly from your code.
However, this can be cumbersome because it requires you to write code to authenticate your
requests. For more information, see PUT Bucket in the Amazon Simple Storage Service API
Reference.

When using the AWS SDKs, you first create a client and then use the client to send a request to create
a bucket.  When you create the client, you can specify an AWS Region. US East (N. Virginia) is the
default Region. Note the following:
• If you create a client by specifying the US East (N. Virginia) Region, the client uses the following
endpoint to communicate with Amazon S3:

s3.amazonaws.com

You can use this client to create a bucket in any AWS Region. In your create bucket request:
• If you don’t specify a Region, Amazon S3 creates the bucket in the US East (N. Virginia) Region.
• If you specify an AWS Region, Amazon S3 creates the bucket in the specified Region.
• If you create a client by specifying any other AWS Region, each of these Regions maps to the Region-
specific endpoint:

s3-<region>.amazonaws.com

For example, if you create a client by specifying the eu-west-1 Region, it maps to the following
region-specific endpoint:

s3-eu-west-1.amazonaws.com

In this case, you can use the client to create a bucket only in the eu-west-1 Region. Amazon S3
returns an error if you specify any other Region in your request to create a bucket.
• If you create a client to access a dual-stack endpoint, you must specify an AWS Region. For more
information, see Dual-Stack Endpoints (p. 14).

For a list of available AWS Regions, see Regions and Endpoints in the AWS General Reference.

For examples, see Examples of Creating a Bucket (p. 57).

About Permissions
You can use your AWS account root credentials to create a bucket and perform any other Amazon
S3 operation. However, AWS recommends not using the root credentials of your AWS account to make
requests, such as a request to create a bucket. Instead, create an IAM user, and grant that user full access
(users by default have no permissions). We refer to these users as administrator users. You can use the
administrator user credentials, instead of the root credentials of your account, to interact with AWS and
perform tasks, such as create a bucket, create users, and grant them permissions.

For more information, see Root Account Credentials vs. IAM User Credentials in the AWS General
Reference and IAM Best Practices in the IAM User Guide.


The AWS account that creates a resource owns that resource. For example, if you create an IAM user in
your AWS account and grant the user permission to create a bucket, the user can create a bucket. But
the user does not own the bucket; the AWS account to which the user belongs owns the bucket. The
user will need additional permission from the resource owner to perform any other bucket operations.
For more information about managing permissions for your Amazon S3 resources, see Managing Access
Permissions to Your Amazon S3 Resources (p. 269).

Accessing a Bucket
You can access your bucket using the Amazon S3 console. Using the console UI, you can perform almost
all bucket operations without having to write any code.

If you access a bucket programmatically, note that Amazon S3 supports RESTful architecture in which
your buckets and objects are resources, each with a resource URI that uniquely identifies the resource.

Amazon S3 supports both virtual-hosted–style and path-style URLs to access a bucket.

• In a virtual-hosted–style URL, the bucket name is part of the domain name in the URL. For example:  
• http://bucket.s3.amazonaws.com
• http://bucket.s3-aws-region.amazonaws.com

In a virtual-hosted–style URL, you can use either of these endpoints. If you make a request to the
http://bucket.s3.amazonaws.com endpoint, the DNS has sufficient information to route your
request directly to the Region where your bucket resides.

For more information, see Virtual Hosting of Buckets (p. 45).

 
• In a path-style URL, the bucket name is not part of the domain (unless you use a Region-specific
endpoint). For example:
• US East (N. Virginia) Region endpoint, http://s3.amazonaws.com/bucket
• Region-specific endpoint, http://s3-aws-region.amazonaws.com/bucket

In a path-style URL, the endpoint you use must match the Region in which the bucket resides. For
example, if your bucket is in the South America (São Paulo) Region, you must use the
http://s3-sa-east-1.amazonaws.com/bucket endpoint. If your bucket is in the US East (N. Virginia) Region,
you must use the http://s3.amazonaws.com/bucket endpoint.

Important
Because buckets can be accessed using path-style and virtual-hosted–style URLs, we
recommend you create buckets with DNS-compliant bucket names. For more information, see
Bucket Restrictions and Limitations (p. 56).
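
If you use the AWS SDK for Java, you can see which URL the client resolves for a given bucket and object
by calling getUrl. The following sketch uses placeholder names and an arbitrary Region; the exact form of
the printed URL depends on the client configuration and on whether the bucket name is DNS-compliant.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class ShowObjectUrl {

    public static void main(String[] args) {
        String bucketName = "*** Bucket name ***";
        String key = "*** Object key ***";

        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion("us-west-2")
                .build();

        // Prints the URL the client would use to address this object (typically a
        // virtual-hosted-style URL when the bucket name is DNS-compliant).
        System.out.println(s3Client.getUrl(bucketName, key));
    }
}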

Accessing an S3 Bucket over IPv6

Amazon S3 has a set of dual-stack endpoints, which support requests to S3 buckets over both Internet
Protocol version 6 (IPv6) and IPv4. For more information, see Making Requests over IPv6 (p. 12).

Bucket Configuration Options


Amazon S3 supports various options for you to configure your bucket. For example, you can configure
your bucket for website hosting, add configuration to manage lifecycle of objects in the bucket, and
configure the bucket to log all access to the bucket. Amazon S3 supports subresources for you to store
and manage the bucket configuration information. That is, using the Amazon S3 API, you can create and
manage these subresources. You can also use the console or the AWS SDKs.
Note
There are also object-level configurations. For example, you can configure object-level
permissions by configuring an access control list (ACL) specific to that object.

These are referred to as subresources because they exist in the context of a specific bucket or object.
The following subresources enable you to manage bucket-specific configurations.

• location – When you create a bucket, you specify the AWS Region where you want Amazon S3 to create
the bucket. Amazon S3 stores this information in the location subresource and provides an API for you
to retrieve this information.

• policy and ACL (access control list) – All your resources (such as buckets and objects) are private by
default. Amazon S3 supports both bucket policy and access control list (ACL) options for you to grant
and manage bucket-level permissions. Amazon S3 stores the permission information in the policy and
acl subresources. For more information, see Managing Access Permissions to Your Amazon S3
Resources (p. 269).

• cors (cross-origin resource sharing) – You can configure your bucket to allow cross-origin requests.
For more information, see Enabling Cross-Origin Resource Sharing.

• website – You can configure your bucket for static website hosting. Amazon S3 stores this
configuration by creating a website subresource. For more information, see Hosting a Static Website on
Amazon S3.

• logging – Logging enables you to track requests for access to your bucket. Each access log record
provides details about a single access request, such as the requester, bucket name, request time,
request action, response status, and error code, if any. Access log information can be useful in security
and access audits. It can also help you learn about your customer base and understand your Amazon S3
bill. For more information, see Amazon S3 Server Access Logging (p. 642).

• event notification – You can enable your bucket to send you notifications of specified bucket events.
For more information, see Configuring Amazon S3 Event Notifications (p. 542).

• versioning – Versioning helps you recover from accidental overwrites and deletes. We recommend
versioning as a best practice to recover objects from being deleted or overwritten by mistake. For more
information, see Using Versioning (p. 425).

• lifecycle – You can define lifecycle rules for objects in your bucket that have a well-defined lifecycle.
For example, you can define a rule to archive objects one year after creation, or delete an object 10
years after creation. For more information, see Object Lifecycle Management.

• cross-region replication – Cross-region replication is the automatic, asynchronous copying of objects
across buckets in different AWS Regions. For more information, see Cross-Region Replication (p. 563).

• tagging – You can add cost allocation tags to your bucket to categorize and track your AWS costs.
Amazon S3 provides the tagging subresource to store and manage tags on a bucket. Using tags you
apply to your bucket, AWS generates a cost allocation report with usage and costs aggregated by your
tags. For more information, see Billing and Usage Reporting for S3 Buckets (p. 83).

• requestPayment – By default, the AWS account that creates the bucket (the bucket owner) pays for
downloads from the bucket. Using this subresource, the bucket owner can specify that the person
requesting the download will be charged for the download. Amazon S3 provides an API for you to
manage this subresource. For more information, see Requester Pays Buckets (p. 79).

• transfer acceleration – Transfer Acceleration enables fast, easy, and secure transfers of files over
long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon
CloudFront’s globally distributed edge locations. For more information, see Amazon S3 Transfer
Acceleration (p. 72).

Bucket Restrictions and Limitations


A bucket is owned by the AWS account that created it. By default, you can create up to 100 buckets
in each of your AWS accounts. If you need additional buckets, you can increase your bucket limit by
submitting a service limit increase. For information about how to increase your bucket limit, see AWS
Service Limits in the AWS General Reference.

Bucket ownership is not transferable; however, if a bucket is empty, you can delete it. After a bucket is
deleted, the name becomes available for reuse, but it might not be available for you to reuse: another
account could create a bucket with that name first, and it can take some time before the name can be
reused at all. If you want to keep using the same bucket name, don't delete the bucket.

There is no limit to the number of objects that can be stored in a bucket and no difference in
performance whether you use many buckets or just a few. You can store all of your objects in a single
bucket, or you can organize them across several buckets.

After you have created a bucket, you can't change its Region.

If you explicitly specify an AWS Region in your create bucket request that is different from the Region
that you specified when you created the client, you might get an error.

You cannot create a bucket within another bucket.

The high-availability engineering of Amazon S3 is focused on get, put, list, and delete operations.
Because bucket operations work against a centralized, global resource space, it is not appropriate to
create or delete buckets on the high-availability code path of your application. It is better to create or
delete buckets in a separate initialization or setup routine that you run less often.
Note
If your application automatically creates buckets, choose a bucket naming scheme that is
unlikely to cause naming conflicts. Ensure that your application logic will choose a different
bucket name if a bucket name is already taken.
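
As a rough sketch of that logic using the AWS SDK for Java (the name prefix, Region, retry count, and
error-code handling shown here are illustrative assumptions), an application might retry bucket creation
with a different suffix when a name is already taken:

import java.util.UUID;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.AmazonS3Exception;

public class CreateUniqueBucket {

    public static void main(String[] args) {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion("us-west-2")
                .build();

        String namePrefix = "examplecorp-app-data"; // hypothetical prefix

        for (int attempt = 0; attempt < 3; attempt++) {
            // Append a random, DNS-compliant suffix so that each retry uses a different name.
            String bucketName = namePrefix + "-" + UUID.randomUUID().toString().substring(0, 8);
            try {
                s3Client.createBucket(bucketName);
                System.out.println("Created bucket: " + bucketName);
                return;
            } catch (AmazonS3Exception e) {
                // The name is taken by another account (or already owned by you); try another name.
                if ("BucketAlreadyExists".equals(e.getErrorCode())
                        || "BucketAlreadyOwnedByYou".equals(e.getErrorCode())) {
                    continue;
                }
                throw e;
            }
        }
        System.err.println("Could not find an available bucket name.");
    }
}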


Rules for Bucket Naming


After you create an S3 bucket, you can't change the bucket name, so choose the name wisely.
Important
On March 1, 2018, we updated our naming conventions for S3 buckets in the US East (N.
Virginia) Region to match the naming conventions that we use in all other worldwide AWS
Regions. Amazon S3 no longer supports creating bucket names that contain uppercase letters
or underscores. This change ensures that each bucket can be addressed using virtual host style
addressing, such as https://myawsbucket.s3.amazonaws.com. We highly recommend
that you review your existing bucket-creation processes to ensure that you follow these DNS-
compliant naming conventions.

The following are the rules for naming S3 buckets in all AWS Regions:

• Bucket names must be unique across all existing bucket names in Amazon S3.
• Bucket names must comply with DNS naming conventions.
• Bucket names must be at least 3 and no more than 63 characters long.
• Bucket names must not contain uppercase characters or underscores.
• Bucket names must start with a lowercase letter or number.
• Bucket names must be a series of one or more labels. Adjacent labels are separated by a single period
(.). Bucket names can contain lowercase letters, numbers, and hyphens. Each label must start and end
with a lowercase letter or a number.
• Bucket names must not be formatted as an IP address (for example, 192.168.5.4).
• When you use virtual hosted–style buckets with Secure Sockets Layer (SSL), the SSL wildcard
certificate only matches buckets that don't contain periods. To work around this, use HTTP or write
your own certificate verification logic. We recommend that you do not use periods (".") in bucket names
when using virtual hosted–style buckets.
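
If your application generates bucket names, you can validate candidate names on the client side before
calling Amazon S3. The following sketch is a simplified check based on the rules above; it does not cover
every edge case, and Amazon S3 remains the authority on which names it accepts.

import java.util.regex.Pattern;

public class BucketNameChecker {

    // Each dot-separated label may contain lowercase letters, numbers, and hyphens,
    // and must start and end with a letter or a number.
    private static final Pattern LABEL = Pattern.compile("[a-z0-9]([a-z0-9-]*[a-z0-9])?");
    private static final Pattern IP_ADDRESS = Pattern.compile("\\d{1,3}(\\.\\d{1,3}){3}");

    public static boolean isValidBucketName(String name) {
        if (name == null || name.length() < 3 || name.length() > 63) {
            return false;
        }
        if (IP_ADDRESS.matcher(name).matches()) {
            return false; // must not be formatted as an IP address
        }
        for (String label : name.split("\\.", -1)) {
            if (!LABEL.matcher(label).matches()) {
                return false; // also rejects empty labels caused by adjacent periods
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValidBucketName("my-bucket.example")); // true
        System.out.println(isValidBucketName("MyAWSbucket"));       // false: uppercase letters
        System.out.println(isValidBucketName("192.168.5.4"));       // false: IP address format
    }
}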

Legacy Non–DNS-Compliant Bucket Names


Beginning on March 1, 2018, we updated our naming conventions for S3 buckets in the US East (N.
Virginia) Region to require DNS-compliant names.

The US East (N. Virginia) Region previously allowed more relaxed standards for bucket naming, which
could have resulted in a bucket name that is not DNS-compliant. For example, MyAWSbucket was a
valid bucket name, even though it contains uppercase letters. If you try to access this bucket by using
a virtual-hosted–style request (http://MyAWSbucket.s3.amazonaws.com/yourobject), the URL
resolves to the bucket myawsbucket and not the bucket MyAWSbucket. In response, Amazon S3 returns
a "bucket not found" error. For more information about virtual-hosted–style access to your buckets, see
Virtual Hosting of Buckets (p. 45).

The legacy rules for bucket names in the US East (N. Virginia) Region allowed bucket names to be as
long as 255 characters, and bucket names could contain any combination of uppercase letters, lowercase
letters, numbers, periods (.), hyphens (-), and underscores (_).

The name of the bucket used for Amazon S3 Transfer Acceleration must be DNS-compliant and must
not contain periods ("."). For more information about transfer acceleration, see Amazon S3 Transfer
Acceleration (p. 72).

Examples of Creating a Bucket


Topics
• Using the Amazon S3 Console (p. 58)


• Using the AWS SDK for Java (p. 58)


• Using the AWS SDK for .NET (p. 59)
• Using the AWS SDK for Ruby Version 3 (p. 60)
• Using Other AWS SDKs (p. 61)

The following code examples create a bucket programmatically using the AWS SDKs for Java, .NET, and
Ruby. The code examples perform the following tasks:

• Create a bucket, if the bucket doesn't already exist—The examples create a bucket by performing the
following tasks:
• Create a client by explicitly specifying an AWS Region (the example uses the eu-west-1 Region).
Accordingly, the client communicates with Amazon S3 using the s3-eu-west-1.amazonaws.com
endpoint. You can specify any other AWS Region. For a list of AWS
Regions, see Regions and Endpoints in the AWS General Reference.
• Send a create bucket request by specifying only a bucket name. The create bucket request doesn't
specify another AWS Region. The client sends a request to Amazon S3 to create the bucket in the
Region you specified when creating the client. Once you have created a bucket, you can't change its
Region.
Note
If you explicitly specify an AWS Region in your create bucket request that is different from
the Region you specified when you created the client, you might get an error. For more
information, see Creating a Bucket (p. 52).

The SDK libraries send the PUT bucket request to Amazon S3 to create the bucket. For more
information, see PUT Bucket.
• Retrieve information about the location of the bucket—Amazon S3 stores bucket location information
in the location subresource that is associated with the bucket. The SDK libraries send the GET Bucket
location request (see GET Bucket location) to retrieve this information.

Using the Amazon S3 Console


To create a bucket using the Amazon S3 console, see How Do I Create an S3 Bucket? in the Amazon
Simple Storage Service Console User Guide.

Using the AWS SDK for Java


Example

This example shows how to create an Amazon S3 bucket using the AWS SDK for Java. For instructions on
creating and testing a working sample, see Testing the Amazon S3 Java Code Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.IOException;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CreateBucketRequest;
import com.amazonaws.services.s3.model.GetBucketLocationRequest;


public class CreateBucket {

public static void main(String[] args) throws IOException {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

if (!s3Client.doesBucketExistV2(bucketName)) {
// Because the CreateBucketRequest object doesn't specify a region, the
// bucket is created in the region specified in the client.
s3Client.createBucket(new CreateBucketRequest(bucketName));

// Verify that the bucket was created by retrieving it and checking its location.
String bucketLocation = s3Client.getBucketLocation(new
GetBucketLocationRequest(bucketName));
System.out.println("Bucket location: " + bucketLocation);
}
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it and returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Using the AWS SDK for .NET


For information about how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 664).

Example

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class CreateBucketTest
{
private const string bucketName = "*** bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;


public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
CreateBucketAsync().Wait();
}

static async Task CreateBucketAsync()


{
try
{
if (!(await AmazonS3Util.DoesS3BucketExistAsync(s3Client, bucketName)))
{
var putBucketRequest = new PutBucketRequest
{
BucketName = bucketName,
UseClientRegion = true
};

PutBucketResponse putBucketResponse = await


s3Client.PutBucketAsync(putBucketRequest);
}
// Retrieve the bucket location.
string bucketLocation = await FindBucketLocationAsync(s3Client);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
}
}
static async Task<string> FindBucketLocationAsync(IAmazonS3 client)
{
string bucketLocation;
var request = new GetBucketLocationRequest()
{
BucketName = bucketName
};
GetBucketLocationResponse response = await
client.GetBucketLocationAsync(request);
bucketLocation = response.Location.ToString();
return bucketLocation;
}
}
}

Using the AWS SDK for Ruby Version 3


For information about how to create and test a working sample, see Using the AWS SDK for Ruby -
Version 3 (p. 665).

Example

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-west-2')
s3.create_bucket(bucket: 'bucket-name')


Using Other AWS SDKs


For information about using other AWS SDKs, see Sample Code and Libraries.

Deleting or Emptying a Bucket


It is easy to delete an empty bucket, but in some situations you might need to delete or empty a bucket
that contains objects. This section explains how to delete objects in an unversioned bucket, how to delete
object versions and delete markers in a bucket that has versioning enabled, and how to empty a bucket
instead of deleting it. For more information about versioning, see Using Versioning (p. 425).

Topics
• Delete a Bucket (p. 61)
• Empty a Bucket (p. 63)

Delete a Bucket
You can delete a bucket and its content programmatically using the AWS SDKs. You can also use lifecycle
configuration on a bucket to empty its content and then delete the bucket. There are additional options,
such as using the Amazon S3 console and the AWS CLI, but these methods have limitations based on the
number of objects in your bucket and the bucket's versioning status.

Delete a Bucket: Using the Amazon S3 Console


The Amazon S3 console supports deleting a bucket that may or may not be empty. For information
about using the Amazon S3 console to delete a bucket, see How Do I Delete an S3 Bucket? in the Amazon
Simple Storage Service Console User Guide.

Delete a Bucket: Using the AWS CLI


You can use the AWS CLI to delete a bucket that contains objects only if the bucket does not have
versioning enabled. In that case, you can use the rb (remove bucket) AWS CLI command with the --force
parameter to remove a non-empty bucket. This command deletes all objects first and then deletes the
bucket.

$ aws s3 rb s3://bucket-name --force

For more information, see Using High-Level S3 Commands with the AWS Command Line Interface in the
AWS Command Line Interface User Guide.

To delete a non-empty bucket that does not have versioning enabled, you have the following options:

• Delete the bucket programmatically using the AWS SDK.


• Delete all of the objects using the bucket's lifecycle configuration and then delete the empty bucket
using the Amazon S3 console.

Delete a Bucket: Using Lifecycle Configuration


You can configure lifecycle rules on your bucket to expire objects; Amazon S3 then deletes the expired
objects. You can add lifecycle configuration rules to expire all objects, or a subset of objects with a
specific key name
prefix. For example, to remove all objects in a bucket, you can set a lifecycle rule to expire objects one
day after creation.

If your bucket has versioning enabled, you can also configure the rule to expire noncurrent objects.

After Amazon S3 deletes all of the objects in your bucket, you can delete the bucket or keep it.
Important
If you just want to empty the bucket and not delete it, make sure you remove the lifecycle
configuration rule you added to empty the bucket so that any new objects you create in the
bucket will remain in the bucket.

For more information, see Object Lifecycle Management (p. 115) and Configuring Object
Expiration (p. 121).
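
As an illustration, the following sketch uses the AWS SDK for Java to add a rule that expires all current
objects one day after creation. The rule ID and client Region are arbitrary, and the builder methods shown
should be treated as a sketch rather than a drop-in implementation; check the SDK documentation for the
exact types in your SDK version.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.lifecycle.LifecycleFilter;
import com.amazonaws.services.s3.model.lifecycle.LifecyclePrefixPredicate;

public class EmptyBucketLifecycleRule {

    public static void main(String[] args) {
        String bucketName = "*** Bucket name ***";

        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion("us-west-2")
                .build();

        // An empty prefix matches every object in the bucket.
        BucketLifecycleConfiguration.Rule expireAll = new BucketLifecycleConfiguration.Rule()
                .withId("expire-all-objects")
                .withFilter(new LifecycleFilter(new LifecyclePrefixPredicate("")))
                .withExpirationInDays(1)
                .withStatus(BucketLifecycleConfiguration.ENABLED);

        s3Client.setBucketLifecycleConfiguration(bucketName,
                new BucketLifecycleConfiguration().withRules(expireAll));
    }
}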

Delete a Bucket: Using the AWS SDKs


You can use the AWS SDKs to delete a bucket. The following sections provide examples of how to delete
a bucket using the AWS SDK for Java and .NET. First, the code deletes objects in the bucket and then it
deletes the bucket. For information about other AWS SDKs, see Tools for Amazon Web Services.

Delete a Bucket Using the AWS SDK for Java


The following Java example deletes a bucket that contains objects. The example deletes all objects, and
then it deletes the bucket. The example works for buckets with or without versioning enabled.
Note
For buckets without versioning enabled, you can delete all objects directly and then delete the
bucket. For buckets with versioning enabled, you must delete all object versions before deleting
the bucket.

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.util.Iterator;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListVersionsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.amazonaws.services.s3.model.S3VersionSummary;
import com.amazonaws.services.s3.model.VersionListing;

public class DeleteBucket {

public static void main(String[] args) {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Delete all objects from the bucket. This is sufficient
// for unversioned buckets. For versioned buckets, when you attempt to delete objects,
// Amazon S3 inserts delete markers for all objects, but doesn't delete the object versions.
// To delete objects from versioned buckets, delete all of the object versions before
// deleting the bucket (see below for an example).
ObjectListing objectListing = s3Client.listObjects(bucketName);
while (true) {
Iterator<S3ObjectSummary> objIter =
objectListing.getObjectSummaries().iterator();
while (objIter.hasNext()) {
s3Client.deleteObject(bucketName, objIter.next().getKey());
}

// If the bucket contains many objects, the listObjects() call
// might not return all of the objects in the first listing. Check to
// see whether the listing was truncated. If so, retrieve the next page
// of objects and delete them.
if (objectListing.isTruncated()) {
objectListing = s3Client.listNextBatchOfObjects(objectListing);
} else {
break;
}
}

// Delete all object versions (required for versioned buckets).


VersionListing versionList = s3Client.listVersions(new
ListVersionsRequest().withBucketName(bucketName));
while (true) {
Iterator<S3VersionSummary> versionIter =
versionList.getVersionSummaries().iterator();
while (versionIter.hasNext()) {
S3VersionSummary vs = versionIter.next();
s3Client.deleteVersion(bucketName, vs.getKey(), vs.getVersionId());
}

if (versionList.isTruncated()) {
versionList = s3Client.listNextBatchOfVersions(versionList);
} else {
break;
}
}

// After all objects and object versions are deleted, delete the bucket.
s3Client.deleteBucket(bucketName);
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client couldn't
// parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Empty a Bucket
You can empty a bucket's content (that is, delete all content, but keep the bucket) programmatically
using the AWS SDK. You can also specify lifecycle configuration on a bucket to expire objects so that
Amazon S3 can delete them. There are additional options, such as using the Amazon S3 console and the
AWS CLI, but these methods have limitations based on the number of objects in your bucket and the
bucket's versioning status.

Topics
• Empty a Bucket: Using the Amazon S3 console (p. 64)
• Empty a Bucket: Using the AWS CLI (p. 64)
• Empty a Bucket: Using Lifecycle Configuration (p. 64)
• Empty a Bucket: Using the AWS SDKs (p. 65)

Empty a Bucket: Using the Amazon S3 console


For information about using the Amazon S3 console to empty a bucket, see How Do I Empty an S3
Bucket? in the Amazon Simple Storage Service Console User Guide

Empty a Bucket: Using the AWS CLI


You can empty a bucket by using the AWS CLI only if the bucket does not have versioning enabled. In that
case, you can use the rm (remove) AWS CLI command with the --recursive parameter to empty the bucket
(or remove a subset of objects with a specific key name prefix).

The following rm command removes objects with key name prefix doc, for example, doc/doc1 and
doc/doc2.

$ aws s3 rm s3://bucket-name/doc --recursive

Use the following command to remove all objects without specifying a prefix.

$ aws s3 rm s3://bucket-name --recursive

For more information, see Using High-Level S3 Commands with the AWS Command Line Interface in the
AWS Command Line Interface User Guide.
Note
If the bucket has versioning enabled, this command does not permanently remove objects. When you
delete an object from a version-enabled bucket, Amazon S3 adds a delete marker rather than removing
the object version, and that is what this command does. For more information about versioning, see
Using Versioning (p. 425).

To empty a bucket with versioning enabled, you have the following options:

• Delete the bucket programmatically using the AWS SDK.


• Use the bucket's lifecycle configuration to request that Amazon S3 delete the objects.
• Use the Amazon S3 console (see How Do I Empty an S3 Bucket? in the Amazon Simple Storage Service
Console User Guide).

Empty a Bucket: Using Lifecycle Configuration


You can configure lifecycle on your bucket to expire objects and request that Amazon S3 delete expired
objects. You can add lifecycle configuration rules to expire all or a subset of objects with a specific key
name prefix. For example, to remove all objects in a bucket, you can set lifecycle rule to expire objects
one day after creation.

If your bucket has versioning enabled, you can also configure the rule to expire noncurrent objects.


Warning
After your objects expire, Amazon S3 deletes the expired objects. If you just want to empty the
bucket and not delete it, make sure you remove the lifecycle configuration rule you added to
empty the bucket so that any new objects you create in the bucket will remain in the bucket.

For more information, see Object Lifecycle Management (p. 115) and Configuring Object
Expiration (p. 121).
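
As a variation on the sketch shown earlier in Delete a Bucket: Using Lifecycle Configuration, the following
AWS SDK for Java sketch (same assumptions as before) adds a rule for a version-enabled bucket that
expires current versions and permanently removes noncurrent versions one day after they become
noncurrent. Delete markers created by the expiration are not removed by this rule; see Object Lifecycle
Management (p. 115) for the additional lifecycle actions that are available.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.lifecycle.LifecycleFilter;
import com.amazonaws.services.s3.model.lifecycle.LifecyclePrefixPredicate;

public class EmptyVersionedBucketLifecycleRule {

    public static void main(String[] args) {
        String bucketName = "*** Bucket name ***";

        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion("us-west-2")
                .build();

        // Expire current versions (Amazon S3 adds delete markers) and permanently
        // remove noncurrent versions one day after they become noncurrent.
        BucketLifecycleConfiguration.Rule rule = new BucketLifecycleConfiguration.Rule()
                .withId("empty-versioned-bucket")
                .withFilter(new LifecycleFilter(new LifecyclePrefixPredicate("")))
                .withExpirationInDays(1)
                .withNoncurrentVersionExpirationInDays(1)
                .withStatus(BucketLifecycleConfiguration.ENABLED);

        s3Client.setBucketLifecycleConfiguration(bucketName,
                new BucketLifecycleConfiguration().withRules(rule));
    }
}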

Empty a Bucket: Using the AWS SDKs


You can use the AWS SDKs to empty a bucket or remove a subset of objects with a specific key name
prefix.

For an example of how to empty a bucket using AWS SDK for Java, see Delete a Bucket Using the AWS
SDK for Java (p. 62). The code deletes all objects, regardless of whether the bucket has versioning
enabled or not, and then it deletes the bucket. To just empty the bucket, make sure you remove the
statement that deletes the bucket.

For more information about using other AWS SDKs, see Tools for Amazon Web Services.

Amazon S3 Default Encryption for S3 Buckets


Amazon S3 default encryption provides a way to set the default encryption behavior for an S3 bucket.
You can set default encryption on a bucket so that all objects are encrypted when they are stored in the
bucket. The objects are encrypted using server-side encryption with either Amazon S3-managed keys
(SSE-S3) or AWS KMS-managed keys (SSE-KMS).

When you use server-side encryption, Amazon S3 encrypts an object before saving it to disk in its
data centers and decrypts it when you download the objects. For more information about protecting
data using server-side encryption and encryption key management, see Protecting Data Using
Encryption (p. 388).

Default encryption works with all existing and new S3 buckets. Without default encryption, to encrypt
all objects stored in a bucket, you must include encryption information with every object storage
request. You must also set up an S3 bucket policy to reject storage requests that don't include encryption
information.

There are no new charges for using default encryption for S3 buckets. Requests to configure the default
encryption feature incur standard Amazon S3 request charges. For information about pricing, see
Amazon S3 Pricing. For SSE-KMS encryption key storage, AWS Key Management Service charges apply
and are listed at AWS KMS Pricing.

Topics
• How Do I Set Up Amazon S3 Default Encryption for an S3 Bucket? (p. 65)
• Moving to Default Encryption from Using Bucket Policies for Encryption Enforcement (p. 66)
• Using Default Encryption with Cross-Region Replication (p. 66)
• Monitoring Default Encryption with CloudTrail and CloudWatch (p. 67)
• More Info (p. 67)

How Do I Set Up Amazon S3 Default Encryption for an S3 Bucket?
This section describes how to set up Amazon S3 default encryption. You can use the AWS SDKs, the
Amazon S3 REST API, the AWS Command Line Interface (AWS CLI), or the Amazon S3 console to enable
the default encryption. The easiest way to set up default encryption for an S3 bucket is by using the AWS
Management Console.

You can set up default encryption on a bucket using any of the following ways:

• Use the Amazon S3 console. For more information, see How Do I Enable Default Encryption for an S3
Bucket? in the Amazon Simple Storage Service Console User Guide.
• Use the following REST APIs:
• Use the REST API PUT Bucket encryption operation to enable default encryption and to set the type
of server-side encryption to use—SSE-S3 or SSE-KMS.
• Use the REST API DELETE Bucket encryption to disable the default encryption of objects. After you
disable default encryption, Amazon S3 encrypts objects only if PUT requests include the encryption
information. For more information, see PUT Object and PUT Object - Copy.
• Use the REST API GET Bucket encryption to check the current default encryption configuration.
• Use the AWS CLI and AWS SDKs. For more information, see Using the AWS SDKs, CLI, and
Explorers (p. 655).
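
For example, the following AWS SDK for Java sketch sets a bucket's default encryption to SSE-S3. The
placeholder bucket name and Region are arbitrary, and the request and configuration class names reflect
the SDK at the time of writing; treat this as a sketch and check the documentation for your SDK version.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ServerSideEncryptionByDefault;
import com.amazonaws.services.s3.model.ServerSideEncryptionConfiguration;
import com.amazonaws.services.s3.model.ServerSideEncryptionRule;
import com.amazonaws.services.s3.model.SetBucketEncryptionRequest;

public class EnableDefaultEncryption {

    public static void main(String[] args) {
        String bucketName = "*** Bucket name ***";

        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion("us-west-2")
                .build();

        // Encrypt new objects that arrive without encryption headers using SSE-S3 (AES256).
        ServerSideEncryptionConfiguration sseConfig = new ServerSideEncryptionConfiguration()
                .withRules(new ServerSideEncryptionRule()
                        .withApplyServerSideEncryptionByDefault(
                                new ServerSideEncryptionByDefault().withSSEAlgorithm("AES256")));

        s3Client.setBucketEncryption(new SetBucketEncryptionRequest()
                .withBucketName(bucketName)
                .withServerSideEncryptionConfiguration(sseConfig));
    }
}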

After you enable default encryption for a bucket, the following encryption behavior applies:

• There is no change to the encryption of the objects that existed in the bucket before default
encryption was enabled.
• When you upload objects after enabling default encryption:
• If your PUT request headers don't include encryption information, Amazon S3 uses the bucket’s
default encryption settings to encrypt the objects.
• If your PUT request headers include encryption information, Amazon S3 uses the encryption
information from the PUT request to encrypt objects before storing them in Amazon S3. If the PUT
succeeds, the response is an HTTP/1.1 200 OK with the encryption information in the response
headers. For more information, see PUT Object.
• If you use the SSE-KMS option for your default encryption configuration, you are subject to the RPS
(requests per second) limits of AWS KMS. For more information about AWS KMS limits and how to
request a limit increase, see AWS KMS limits.

Moving to Default Encryption from Using Bucket Policies for Encryption Enforcement
If you currently enforce object encryption for an S3 bucket by using a bucket policy to reject PUT
requests without encryption headers, we recommend that you use the following procedure to start using
default encryption.

To change from using a bucket policy to reject PUT requests without encryption headers to
using default encryption

1. If you plan to specify that default encryption use SSE-KMS, make sure that all PUT and GET object
requests are signed using Signature Version 4 and sent over an SSL connection to Amazon S3. For
information about using AWS KMS, see Protecting Data Using Server-Side Encryption with AWS
KMS–Managed Keys (SSE-KMS) (p. 389).
Note
By default, the Amazon S3 console, the AWS CLI version 1.11.108 and later, and all AWS
SDKs released after May 2016 use Signature Version 4 signed requests sent to Amazon S3
over an SSL connection.
2. Delete the bucket policy statements that reject PUT requests without encryption headers. (We
recommend that you save a backup copy of the bucket policy that is being replaced.)


3. To ensure that the encryption behavior is set as you want, test multiple PUT requests to closely
simulate your actual workload.
4. If you are using default encryption with SSE-KMS, monitor your clients for failing PUT and GET
requests that weren’t failing before your changes. Most likely these are the requests that you didn't
update according to Step 1. Change the failing PUT or GET requests to be signed with AWS Signature
Version 4 and sent over SSL.

After you enable default encryption for your S3 bucket, objects stored in Amazon S3 through any PUT
requests without encryption headers are encrypted using the bucket-level default encryption settings.

Using Default Encryption with Cross-Region Replication
After you enable default encryption for a cross-region replication destination bucket, the following
encryption behavior applies:

• If objects in the source bucket are not encrypted, the replica objects in the destination bucket are
encrypted using the default encryption settings of the destination bucket. This results in the ETag of
the source object being different from the ETag of the replica object. You must update applications
that use the ETag to accommodate this difference.

• If objects in the source bucket are encrypted using SSE-S3 or SSE-KMS, the replica objects in the
destination bucket use the same encryption as the source object encryption. The default encryption
settings of the destination bucket are not used.

Monitoring Default Encryption with CloudTrail and CloudWatch
You can track default encryption configuration requests through AWS CloudTrail events. The API
event names used in CloudTrail logs are PutBucketEncryption, GetBucketEncryption, and
DeleteBucketEncryption. You can also create Amazon CloudWatch Events with S3 bucket-level
operations as the event type. For more information about CloudTrail events, see How Do I Enable Object-
Level Logging for an S3 Bucket with CloudWatch Data Events?

You can use CloudTrail logs for object-level Amazon S3 actions to track PUT and POST requests to
Amazon S3 to verify whether default encryption is being used to encrypt objects when incoming PUT
requests don't have encryption headers.

When Amazon S3 encrypts an object using the default encryption settings, the log includes
the following field as the name/value pair: "SSEApplied":"Default_SSE_S3" or
"SSEApplied":"Default_SSE_KMS".

When Amazon S3 encrypts an object using the PUT encryption headers, the log includes the
following field as the name/value pair: "SSEApplied":"SSE_S3", "SSEApplied":"SSE_KMS",
or "SSEApplied":"SSE_C". For multipart uploads, this information is included in the
InitiateMultipartUpload API requests. For more information about using CloudTrail and
CloudWatch, see Monitoring Amazon S3 (p. 615).

More Info
• PUT Bucket encryption
• DELETE Bucket encryption


• GET Bucket encryption

Managing Bucket Website Configuration


Topics
• Managing Websites with the AWS Management Console (p. 68)
• Managing Websites with the AWS SDK for Java (p. 68)
• Managing Websites with the AWS SDK for .NET (p. 69)
• Managing Websites with the AWS SDK for PHP (p. 71)
• Managing Websites with the REST API (p. 72)

You can host static websites in Amazon S3 by configuring your bucket for website hosting. For more
information, see Hosting a Static Website on Amazon S3 (p. 515). There are several ways you can
manage your bucket's website configuration. You can use the AWS Management Console to manage
configuration without writing any code. You can programmatically create, update, and delete the website
configuration by using the AWS SDKs. The SDKs provide wrapper classes around the Amazon S3 REST
API. If your application requires it, you can send REST API requests directly from your application.

Managing Websites with the AWS Management Console
For more information, see Configuring a Bucket for Website Hosting (p. 517).

Managing Websites with the AWS SDK for Java


The following example shows how to use the AWS SDK for Java to manage website configuration
for a bucket. To add a website configuration to a bucket, you provide a bucket name and a website
configuration. The website configuration must include an index document and can include an optional
error document. These documents must already exist in the bucket. For more information, see PUT
Bucket website. For more information about the Amazon S3 website feature, see Hosting a Static
Website on Amazon S3 (p. 515).

Example

The following example uses the AWS SDK for Java to add a website configuration to a bucket, retrieve
and print the configuration, and then delete the configuration and verify the deletion. For instructions on
how to create and test a working sample, see Testing the Amazon S3 Java Code Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.IOException;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketWebsiteConfiguration;

public class WebsiteConfiguration {


public static void main(String[] args) throws IOException {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";
String indexDocName = "*** Index document name ***";
String errorDocName = "*** Error document name ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();

// Print the existing website configuration, if it exists.


printWebsiteConfig(s3Client, bucketName);

// Set the new website configuration.


s3Client.setBucketWebsiteConfiguration(bucketName, new
BucketWebsiteConfiguration(indexDocName, errorDocName));

// Verify that the configuration was set properly by printing it.


printWebsiteConfig(s3Client, bucketName);

// Delete the website configuration.


s3Client.deleteBucketWebsiteConfiguration(bucketName);

// Verify that the website configuration was deleted by printing it.


printWebsiteConfig(s3Client, bucketName);
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}

private static void printWebsiteConfig(AmazonS3 s3Client, String bucketName) {


System.out.println("Website configuration: ");
BucketWebsiteConfiguration bucketWebsiteConfig =
s3Client.getBucketWebsiteConfiguration(bucketName);
if (bucketWebsiteConfig == null) {
System.out.println("No website config.");
} else {
System.out.println("Index doc: " +
bucketWebsiteConfig.getIndexDocumentSuffix());
System.out.println("Error doc: " + bucketWebsiteConfig.getErrorDocument());
}
}
}

Managing Websites with the AWS SDK for .NET


The following example shows how to use the AWS SDK for .NET to manage website configuration
for a bucket. To add a website configuration to a bucket, you provide a bucket name and a website
configuration. The website configuration must include an index document and can contain an optional
error document. These documents must be stored in the bucket. For more information, see PUT Bucket
website. For more information about the Amazon S3 website feature, see Hosting a Static Website on
Amazon S3 (p. 515).


Example
The following C# code example adds a website configuration to the specified bucket. The configuration
specifies both the index document and the error document names. For instructions on how to create and
test a working sample, see Running the Amazon S3 .NET Code Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class WebsiteConfigTest
{
private const string bucketName = "*** bucket name ***";
private const string indexDocumentSuffix = "*** index object key ***"; // For example, index.html.
private const string errorDocument = "*** error object key ***"; // For example, error.html.
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;
public static void Main()
{
client = new AmazonS3Client(bucketRegion);
AddWebsiteConfigurationAsync(bucketName, indexDocumentSuffix,
errorDocument).Wait();
}

static async Task AddWebsiteConfigurationAsync(string bucketName,


string indexDocumentSuffix,
string errorDocument)
{
try
{
// 1. Put the website configuration.
PutBucketWebsiteRequest putRequest = new PutBucketWebsiteRequest()
{
BucketName = bucketName,
WebsiteConfiguration = new WebsiteConfiguration()
{
IndexDocumentSuffix = indexDocumentSuffix,
ErrorDocument = errorDocument
}
};
PutBucketWebsiteResponse response = await
client.PutBucketWebsiteAsync(putRequest);

// 2. Get the website configuration.


GetBucketWebsiteRequest getRequest = new GetBucketWebsiteRequest()
{
BucketName = bucketName
};
GetBucketWebsiteResponse getResponse = await
client.GetBucketWebsiteAsync(getRequest);
Console.WriteLine("Index document: {0}",
getResponse.WebsiteConfiguration.IndexDocumentSuffix);
Console.WriteLine("Error document: {0}",
getResponse.WebsiteConfiguration.ErrorDocument);
}


catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
}
}
}
}

Managing Websites with the AWS SDK for PHP


This topic explains how to use classes from the AWS SDK for PHP to configure and manage an Amazon
S3 bucket for website hosting. It assumes that you are already following the instructions for Using
the AWS SDK for PHP and Running PHP Examples (p. 664) and have the AWS SDK for PHP properly
installed. For more information about the Amazon S3 website feature, see Hosting a Static Website on
Amazon S3 (p. 515).

The following PHP example adds a website configuration to the specified bucket. The putBucketWebsite
call explicitly provides the index document and error document names. The example also retrieves the
website configuration, prints the index document suffix from the response, and then deletes the
configuration.

For instructions on creating and testing a working sample, see Using the AWS SDK for PHP and Running
PHP Examples (p. 664).

<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// Add the website configuration.


$s3->putBucketWebsite([
'Bucket' => $bucket,
'WebsiteConfiguration' => [
'IndexDocument' => ['Suffix' => 'index.html'],
'ErrorDocument' => ['Key' => 'error.html']
]
]);

// Retrieve the website configuration.


$result = $s3->getBucketWebsite([
'Bucket' => $bucket
]);
echo $result->getPath('IndexDocument/Suffix');

// Delete the website configuration.


$s3->deleteBucketWebsite([
'Bucket' => $bucket


]);

Related Resources
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP Documentation

Managing Websites with the REST API


You can use the AWS Management Console or the AWS SDK to configure a bucket as a website. However,
if your application requires it, you can send REST requests directly. For more information, see the
following sections in the Amazon Simple Storage Service API Reference.

• PUT Bucket website


• GET Bucket website
• DELETE Bucket website

Amazon S3 Transfer Acceleration


Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances
between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront’s
globally distributed edge locations. As the data arrives at an edge location, it is routed to Amazon S3
over an optimized network path.

When using Transfer Acceleration, additional data transfer charges may apply. For more information
about pricing, see Amazon S3 Pricing.

Topics
• Why Use Amazon S3 Transfer Acceleration? (p. 72)
• Getting Started with Amazon S3 Transfer Acceleration (p. 73)
• Requirements for Using Amazon S3 Transfer Acceleration (p. 74)
• Amazon S3 Transfer Acceleration Examples (p. 74)

Why Use Amazon S3 Transfer Acceleration?


You might want to use Transfer Acceleration on a bucket for various reasons, including the following:

• You have customers that upload to a centralized bucket from all over the world.
• You transfer gigabytes to terabytes of data on a regular basis across continents.
• You are unable to utilize all of your available bandwidth over the Internet when uploading to Amazon
S3.

For more information about when to use Transfer Acceleration, see Amazon S3 FAQs.

Using the Amazon S3 Transfer Acceleration Speed Comparison Tool

You can use the Amazon S3 Transfer Acceleration Speed Comparison tool to compare accelerated and
non-accelerated upload speeds across Amazon S3 regions. The Speed Comparison tool uses multipart
uploads to transfer a file from your browser to various Amazon S3 regions with and without using
Transfer Acceleration.

You can access the Speed Comparison tool using either of the following methods:

• Copy the following URL into your browser window, replacing region with the region that you are
using (for example, us-west-2) and yourBucketName with the name of the bucket that you want to
evaluate:

http://s3-accelerate-speedtest.s3-accelerate.amazonaws.com/en/accelerate-speed-comparsion.html?region=region&origBucketName=yourBucketName

For a list of the regions supported by Amazon S3, see Regions and Endpoints in the Amazon Web
Services General Reference.
• Use the Amazon S3 console. For details, see Enabling Transfer Acceleration in the Amazon Simple
Storage Service Console User Guide.

Getting Started with Amazon S3 Transfer Acceleration

To get started using Amazon S3 Transfer Acceleration, perform the following steps:

1. Enable Transfer Acceleration on a bucket – For your bucket to work with transfer acceleration, the
bucket name must conform to DNS naming requirements and must not contain periods (".").

You can enable Transfer Acceleration on a bucket any of the following ways:
• Use the Amazon S3 console. For more information, see Enabling Transfer Acceleration in the
Amazon Simple Storage Service Console User Guide.
• Use the REST API PUT Bucket accelerate operation.
• Use the AWS CLI and AWS SDKs. For more information, see Using the AWS SDKs, CLI, and
Explorers (p. 655).
2. Transfer data to and from the acceleration-enabled bucket by using one of the following s3-
accelerate endpoint domain names:
• bucketname.s3-accelerate.amazonaws.com – to access an acceleration-enabled bucket.
• bucketname.s3-accelerate.dualstack.amazonaws.com – to access an acceleration-enabled
bucket over IPv6. Amazon S3 dual-stack endpoints support requests to S3 buckets over IPv6 and
IPv4. The Transfer Acceleration dual-stack endpoint only uses the virtual hosted-style type of
endpoint name. For more information, see Getting Started Making Requests over IPv6 (p. 12) and
Using Amazon S3 Dual-Stack Endpoints (p. 14).
Important
Support for the dual-stack accelerated endpoint currently is only available from the AWS
Java SDK. Support for the AWS CLI and other AWS SDKs is coming soon.
Note
You can continue to use the regular endpoint in addition to the accelerate endpoints.

You can point your Amazon S3 PUT object and GET object requests to the s3-accelerate endpoint
domain name after you enable Transfer Acceleration. For example, let's say you currently have a REST
API application using PUT Object that uses the host name mybucket.s3.amazonaws.com in the PUT
request. To accelerate the PUT you simply change the host name in your request to mybucket.s3-
accelerate.amazonaws.com. To go back to using the standard upload speed, simply change the name
back to mybucket.s3.amazonaws.com.
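
If you use the AWS SDK for Python (Boto 3), you can route requests to the accelerate endpoint by setting a client configuration flag instead of editing host names yourself. The following is a minimal sketch; the bucket name, file name, and key name are placeholders, and it assumes Transfer Acceleration is already enabled on the bucket.

import boto3
from botocore.config import Config

# Configure the client to send requests to the s3-accelerate endpoint.
s3 = boto3.client(
    "s3",
    config=Config(s3={"use_accelerate_endpoint": True})
)

# Uploads and downloads now use bucketname.s3-accelerate.amazonaws.com.
s3.upload_file("file.txt", "mybucket", "keyname")  # placeholder file, bucket, and key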

After Transfer Acceleration is enabled, it can take up to 20 minutes for you to realize the performance
benefit. However, the accelerate endpoint will be available as soon as you enable Transfer
Acceleration.

You can use the accelerate endpoint in the AWS CLI, AWS SDKs, and other tools that transfer data
to and from Amazon S3. If you are using the AWS SDKs, some of the supported languages use
an accelerate endpoint client configuration flag so you don't need to explicitly set the endpoint
for Transfer Acceleration to bucketname.s3-accelerate.amazonaws.com. For examples of how
to use an accelerate endpoint client configuration flag, see Amazon S3 Transfer Acceleration
Examples (p. 74).

You can use all of the Amazon S3 operations through the Transfer Acceleration endpoints, except for
the following operations: GET Service (list buckets), PUT Bucket (create bucket), and DELETE Bucket.
Also, Amazon S3 Transfer Acceleration does not support cross-Region copies using PUT Object - Copy.

Requirements for Using Amazon S3 Transfer Acceleration

The following are the requirements for using Transfer Acceleration on an S3 bucket:

• Transfer Acceleration is only supported on virtual-hosted-style requests. For more information about
virtual-hosted-style requests, see Making Requests Using the REST API (p. 44).
• The name of the bucket used for Transfer Acceleration must be DNS-compliant and must not contain
periods (".").
• Transfer Acceleration must be enabled on the bucket. After enabling Transfer Acceleration on a bucket
it might take up to thirty minutes before the data transfer speed to the bucket increases.
• To access the bucket that is enabled for Transfer Acceleration, you must use the endpoint
bucketname.s3-accelerate.amazonaws.com, or the dual-stack endpoint bucketname.s3-
accelerate.dualstack.amazonaws.com to connect to the enabled bucket over IPv6.
• You must be the bucket owner to set the transfer acceleration state. The bucket owner can
assign permissions to other users to allow them to set the acceleration state on a bucket. The
s3:PutAccelerateConfiguration permission permits users to enable or disable Transfer
Acceleration on a bucket. The s3:GetAccelerateConfiguration permission permits users
to return the Transfer Acceleration state of a bucket, which is either Enabled or Suspended.
For more information about these permissions, see Permissions Related to Bucket Subresource
Operations (p. 315) and Managing Access Permissions to Your Amazon S3 Resources (p. 269).

More Info
• GET Bucket accelerate
• PUT Bucket accelerate

Amazon S3 Transfer Acceleration Examples


This section provides examples of how to enable Amazon S3 Transfer Acceleration on a bucket and
use the acceleration endpoint for the enabled bucket. Some of the AWS SDK supported languages
(for example, Java and .NET) use an accelerate endpoint client configuration flag so you don't need to
explicitly set the endpoint for Transfer Acceleration to bucketname.s3-accelerate.amazonaws.com. For
more information about Transfer Acceleration, see Amazon S3 Transfer Acceleration (p. 72).

Topics


• Using the Amazon S3 Console (p. 75)


• Using Transfer Acceleration from the AWS Command Line Interface (AWS CLI) (p. 75)
• Using Transfer Acceleration with the AWS SDK for Java (p. 76)
• Using Transfer Acceleration from the AWS SDK for .NET (p. 77)
• Using Transfer Acceleration from the AWS SDK for JavaScript (p. 79)
• Using Transfer Acceleration from the AWS SDK for Python (Boto) (p. 79)
• Using Other AWS SDKs (p. 79)

Using the Amazon S3 Console


For information about enabling Transfer Acceleration on a bucket using the Amazon S3 console, see
Enabling Transfer Acceleration in the Amazon Simple Storage Service Console User Guide.

Using Transfer Acceleration from the AWS Command Line Interface (AWS CLI)

This section provides examples of AWS CLI commands used for Transfer Acceleration. For instructions on
setting up the AWS CLI, see Setting Up the AWS CLI (p. 661).

Enabling Transfer Acceleration on a Bucket Using the AWS CLI


Use the AWS CLI put-bucket-accelerate-configuration command to enable or suspend Transfer
Acceleration on a bucket. The following example sets Status=Enabled to enable Transfer Acceleration
on a bucket. You use Status=Suspended to suspend Transfer Acceleration.

Example

$ aws s3api put-bucket-accelerate-configuration --bucket bucketname --accelerate-configuration Status=Enabled

Using Transfer Acceleration from the AWS CLI


Setting the configuration value use_accelerate_endpoint to true in a profile in your AWS Config
File will direct all Amazon S3 requests made by s3 and s3api AWS CLI commands to the accelerate
endpoint: s3-accelerate.amazonaws.com. Transfer Acceleration must be enabled on your bucket to
use the accelerate endpoint.

All requests are sent using the virtual style of bucket addressing: my-bucket.s3-
accelerate.amazonaws.com. Any ListBuckets, CreateBucket, and DeleteBucket requests are not
sent to the accelerate endpoint because the endpoint does not support those operations. For more
information about use_accelerate_endpoint, see AWS CLI S3 Configuration.

The following example sets use_accelerate_endpoint to true in the default profile.

Example

$ aws configure set default.s3.use_accelerate_endpoint true

If you want to use the accelerate endpoint for some AWS CLI commands but not others, you can use
either of the following methods:


• You can use the accelerate endpoint per command by setting the --endpoint-url parameter to
https://s3-accelerate.amazonaws.com or http://s3-accelerate.amazonaws.com for any
s3 or s3api command.
• You can set up separate profiles in your AWS Config File. For example, create one profile that sets
use_accelerate_endpoint to true and a profile that does not set use_accelerate_endpoint.
When you run a command, specify which profile you want to use, depending on whether you want
to use the accelerate endpoint.

AWS CLI Examples of Uploading an Object to a Bucket Enabled for Transfer Acceleration

The following example uploads a file to a bucket enabled for Transfer Acceleration by using the default
profile that has been configured to use the accelerate endpoint.

Example

$ aws s3 cp file.txt s3://bucketname/keyname --region region

The following example uploads a file to a bucket enabled for Transfer Acceleration by using the --
endpoint-url parameter to specify the accelerate endpoint.

Example

$ aws configure set s3.addressing_style virtual
$ aws s3 cp file.txt s3://bucketname/keyname --region region --endpoint-url http://s3-accelerate.amazonaws.com

Using Transfer Acceleration with the AWS SDK for Java


Example

The following example shows how to use an accelerate endpoint to upload an object to Amazon S3. The
example does the following:

• Creates an AmazonS3Client that is configured to use accelerate endpoints. All buckets that the client
accesses must have transfer acceleration enabled.
• Enables transfer acceleration on a specified bucket. This step is necessary only if the bucket you specify
doesn't already have transfer acceleration enabled.
• Verifies that transfer acceleration is enabled for the specified bucket.
• Uploads a new object to the specified bucket using the bucket's accelerate endpoint.

For more information about using Transfer Acceleration, see Getting Started with Amazon S3 Transfer
Acceleration (p. 73). For instructions on creating and testing a working sample, see Testing the
Amazon S3 Java Code Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketAccelerateConfiguration;
import com.amazonaws.services.s3.model.BucketAccelerateStatus;
import com.amazonaws.services.s3.model.GetBucketAccelerateConfigurationRequest;
import com.amazonaws.services.s3.model.SetBucketAccelerateConfigurationRequest;

public class TransferAcceleration {

    public static void main(String[] args) {
        String clientRegion = "*** Client region ***";
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Key name ***";

        try {
            // Create an Amazon S3 client that is configured to use the accelerate endpoint.
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(clientRegion)
                    .withCredentials(new ProfileCredentialsProvider())
                    .enableAccelerateMode()
                    .build();

            // Enable Transfer Acceleration for the specified bucket.
            s3Client.setBucketAccelerateConfiguration(
                    new SetBucketAccelerateConfigurationRequest(bucketName,
                            new BucketAccelerateConfiguration(BucketAccelerateStatus.Enabled)));

            // Verify that transfer acceleration is enabled for the bucket.
            String accelerateStatus = s3Client.getBucketAccelerateConfiguration(
                    new GetBucketAccelerateConfigurationRequest(bucketName))
                    .getStatus();
            System.out.println("Bucket accelerate status: " + accelerateStatus);

            // Upload a new object using the accelerate endpoint.
            s3Client.putObject(bucketName, keyName, "Test object for transfer acceleration");
            System.out.println("Object \"" + keyName + "\" uploaded with transfer acceleration.");
        }
        catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        }
        catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}

Using Transfer Acceleration from the AWS SDK for .NET


The following example shows how to use the AWS SDK for .NET to enable Transfer Acceleration on a
bucket. For instructions on how to create and test a working sample, see Running the Amazon S3 .NET
Code Examples (p. 664).

Example

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class TransferAccelerationTest
    {
        private const string bucketName = "*** bucket name ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;

        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            EnableAccelerationAsync().Wait();
        }

        static async Task EnableAccelerationAsync()
        {
            try
            {
                var putRequest = new PutBucketAccelerateConfigurationRequest
                {
                    BucketName = bucketName,
                    AccelerateConfiguration = new AccelerateConfiguration
                    {
                        Status = BucketAccelerateStatus.Enabled
                    }
                };
                await s3Client.PutBucketAccelerateConfigurationAsync(putRequest);

                var getRequest = new GetBucketAccelerateConfigurationRequest
                {
                    BucketName = bucketName
                };
                var response = await s3Client.GetBucketAccelerateConfigurationAsync(getRequest);

                Console.WriteLine("Acceleration state = '{0}' ", response.Status);
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                Console.WriteLine(
                    "Error occurred. Message:'{0}' when setting transfer acceleration",
                    amazonS3Exception.Message);
            }
        }
    }
}

When you upload an object to a bucket that has Transfer Acceleration enabled, you specify that the
client should use the accelerate endpoint when you create the client, as shown:

var client = new AmazonS3Client(new AmazonS3Config
{
    RegionEndpoint = TestRegionEndpoint,
    UseAccelerateEndpoint = true
});

Using Transfer Acceleration from the AWS SDK for JavaScript


For an example of enabling Transfer Acceleration by using the AWS SDK for JavaScript, see Calling the
putBucketAccelerateConfiguration operation in the AWS SDK for JavaScript API Reference.

Using Transfer Acceleration from the AWS SDK for Python (Boto)

For an example of enabling Transfer Acceleration by using the SDK for Python, see
put_bucket_accelerate_configuration in the AWS SDK for Python (Boto 3) API Reference.
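
If it helps to see the shape of the call, the following is a minimal sketch using Boto 3; the bucket name is a placeholder, and the linked API Reference has the complete example.

import boto3

s3 = boto3.client("s3")

# Enable Transfer Acceleration on the bucket (use Status="Suspended" to suspend it).
s3.put_bucket_accelerate_configuration(
    Bucket="mybucket",  # placeholder bucket name
    AccelerateConfiguration={"Status": "Enabled"},
)

# Verify the current acceleration state.
status = s3.get_bucket_accelerate_configuration(Bucket="mybucket")["Status"]
print("Bucket accelerate status:", status)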

Using Other AWS SDKs


For information about using other AWS SDKs, see Sample Code and Libraries.

Requester Pays Buckets


Topics
• Configure Requester Pays by Using the Amazon S3 Console (p. 80)
• Configure Requester Pays with the REST API (p. 80)
• Charge Details (p. 82)

In general, bucket owners pay for all Amazon S3 storage and data transfer costs associated with their
bucket. A bucket owner, however, can configure a bucket to be a Requester Pays bucket. With Requester
Pays buckets, the requester instead of the bucket owner pays the cost of the request and the data
download from the bucket. The bucket owner always pays the cost of storing data.

Typically, you configure buckets to be Requester Pays when you want to share data but not incur charges
associated with others accessing the data. You might, for example, use Requester Pays buckets when
making available large datasets, such as zip code directories, reference data, geospatial information, or
web crawling data.
Important
If you enable Requester Pays on a bucket, anonymous access to that bucket is not allowed.

You must authenticate all requests involving Requester Pays buckets. The request authentication enables
Amazon S3 to identify and charge the requester for their use of the Requester Pays bucket.

When the requester assumes an AWS Identity and Access Management (IAM) role prior to making their
request, the account to which the role belongs is charged for the request. For more information about
IAM roles, see IAM Roles in the IAM User Guide.

After you configure a bucket to be a Requester Pays bucket, requesters must include x-amz-request-
payer in their requests either in the header, for POST, GET and HEAD requests, or as a parameter in
a REST request to show that they understand that they will be charged for the request and the data
download.

Requester Pays buckets do not support the following.

• Anonymous requests
• BitTorrent
• SOAP requests


• You cannot use a Requester Pays bucket as the target bucket for end user logging, or vice versa;
however, you can turn on end user logging on a Requester Pays bucket where the target bucket is not a
Requester Pays bucket.

Configure Requester Pays by Using the Amazon S3 Console

You can configure a bucket for Requester Pays by using the Amazon S3 console.

To configure a bucket for Requester Pays

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Buckets list, click the details icon on the left of the bucket name and then click Properties to
display bucket properties.
3. In the Properties pane, click Requester Pays.
4. Select the Enabled check box.

Configure Requester Pays with the REST API


Topics
• Setting the requestPayment Bucket Configuration (p. 80)
• Retrieving the requestPayment Configuration (p. 81)
• Downloading Objects in Requester Pays Buckets (p. 82)

Setting the requestPayment Bucket Configuration


Only the bucket owner can set the RequestPaymentConfiguration.payer configuration value of a
bucket to BucketOwner, the default, or Requester. Setting the requestPayment resource is optional.
By default, the bucket is not a Requester Pays bucket.

To revert a Requester Pays bucket to a regular bucket, you use the value BucketOwner. Typically, you
would use BucketOwner when uploading data to the Amazon S3 bucket, and then you would set the
value to Requester before publishing the objects in the bucket.

To set requestPayment

• Use a PUT request to set the Payer value to Requester on a specified bucket.

PUT ?requestPayment HTTP/1.1
Host: [BucketName].s3.amazonaws.com
Content-Length: 173
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS [Signature]

<RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Payer>Requester</Payer>
</RequestPaymentConfiguration>

If the request succeeds, Amazon S3 returns a response similar to the following.

HTTP/1.1 200 OK
x-amz-id-2: [id]
x-amz-request-id: [request_id]
Date: Wed, 01 Mar 2009 12:00:00 GMT
Content-Length: 0
Connection: close
Server: AmazonS3
x-amz-request-charged:requester

You can set Requester Pays only at the bucket level; you cannot set Requester Pays for specific objects
within the bucket.

You can configure a bucket to be BucketOwner or Requester at any time. Realize, however, that there
might be a small delay, on the order of minutes, before the new configuration value takes effect.
Note
Bucket owners who give out presigned URLs should think twice before configuring a bucket to
be Requester Pays, especially if the URL has a very long lifetime. The bucket owner is charged
each time the requester uses a presigned URL that uses the bucket owner's credentials.
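
If you prefer to set the requestPayment configuration through an SDK rather than a raw REST request, the call has the same shape. The following is a minimal sketch using the AWS SDK for Python (Boto 3); the bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")

# Make the bucket a Requester Pays bucket; use "BucketOwner" to revert it.
s3.put_bucket_request_payment(
    Bucket="mybucket",  # placeholder bucket name
    RequestPaymentConfiguration={"Payer": "Requester"},
)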

Retrieving the requestPayment Configuration


You can determine the Payer value that is set on a bucket by requesting the resource requestPayment.

To return the requestPayment resource

• Use a GET request to obtain the requestPayment resource, as shown in the following request.

GET ?requestPayment HTTP/1.1


Host: [BucketName].s3.amazonaws.com
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS [Signature]

If the request succeeds, Amazon S3 returns a response similar to the following.

HTTP/1.1 200 OK
x-amz-id-2: [id]
x-amz-request-id: [request_id]
Date: Wed, 01 Mar 2009 12:00:00 GMT
Content-Type: [type]
Content-Length: [length]
Connection: close
Server: AmazonS3

<?xml version="1.0" encoding="UTF-8"?>
<RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Payer>Requester</Payer>
</RequestPaymentConfiguration>

This response shows that the payer value is set to Requester.

Downloading Objects in Requester Pays Buckets


Because requesters are charged for downloading data from Requester Pays buckets, the requests must
contain a special parameter, x-amz-request-payer, which confirms that the requester knows he or
she will be charged for the download. To access objects in Requester Pays buckets, requests must include
one of the following.

• For GET, HEAD, and POST requests, include x-amz-request-payer : requester in the header
• For signed URLs, include x-amz-request-payer=requester in the request

If the request succeeds and the requester is charged, the response includes the header x-amz-request-
charged:requester. If x-amz-request-payer is not in the request, Amazon S3 returns a 403 error
and charges the bucket owner for the request.
Note
Bucket owners do not need to add x-amz-request-payer to their requests.
Ensure that you have included x-amz-request-payer and its value in your signature
calculation. For more information, see Constructing the CanonicalizedAmzHeaders
Element (p. 677).

To download objects from a Requester Pays bucket

• Use a GET request to download an object from a Requester Pays bucket, as shown in the following
request.

GET /[destinationObject] HTTP/1.1
Host: [BucketName].s3.amazonaws.com
x-amz-request-payer : requester
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS [Signature]

If the GET request succeeds and the requester is charged, the response includes x-amz-request-
charged:requester.

Amazon S3 can return an Access Denied error for requests that try to get objects from a Requester
Pays bucket. For more information, see Error Responses.
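
The SDKs expose the same parameter. For example, the following minimal sketch uses the AWS SDK for Python (Boto 3) to download an object from a Requester Pays bucket; the bucket and key names are placeholders.

import boto3

s3 = boto3.client("s3")

# RequestPayer="requester" adds the x-amz-request-payer header to the request.
response = s3.get_object(
    Bucket="bucketname",       # placeholder bucket name
    Key="destinationObject",   # placeholder key name
    RequestPayer="requester",
)
body = response["Body"].read()

# If the requester was charged, the response includes RequestCharged.
print(response.get("RequestCharged"))  # "requester" when the requester is charged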

Charge Details
The charge for successful Requester Pays requests is straightforward: the requester pays for the data
transfer and the request; the bucket owner pays for the data storage. However, the bucket owner is
charged for the request under the following conditions:

• The requester doesn't include the parameter x-amz-request-payer in the header (GET, HEAD, or
POST) or as a parameter (REST) in the request (HTTP code 403).
• Request authentication fails (HTTP code 403).
• The request is anonymous (HTTP code 403).
• The request is a SOAP request.


Buckets and Access Control


Each bucket has an associated access control policy. This policy governs the creation, deletion and
enumeration of objects within the bucket. For more information, see Managing Access Permissions to
Your Amazon S3 Resources (p. 269).

Billing and Usage Reporting for S3 Buckets


When using Amazon Simple Storage Service (Amazon S3), you don't have to pay any upfront fees or
commit to how much content you'll store. As with the other Amazon Web Services (AWS) services, you
pay as you go and pay only for what you use.

AWS provides the following reports for Amazon S3:

• Billing reports – Multiple reports that provide high-level views of all of the activity for the AWS
services that you're using, including Amazon S3. AWS always bills the owner of the S3 bucket for
Amazon S3 fees, unless the bucket was created as a Requester Pays bucket. For more information
about Requester Pays, see Requester Pays Buckets (p. 79). For more information about billing
reports, see AWS Billing Reports for Amazon S3 (p. 83).
• Usage report – A summary of activity for a specific service, aggregated by hour, day, or month.
You can choose which usage type and operation to include. You can also choose how the data is
aggregated. For more information, see AWS Usage Report for Amazon S3 (p. 85).

The following topics provide information about billing and usage reporting for Amazon S3.

Topics
• AWS Billing Reports for Amazon S3 (p. 83)
• AWS Usage Report for Amazon S3 (p. 85)
• Understanding Your AWS Billing and Usage Reports for Amazon S3 (p. 86)
• Using Cost Allocation S3 Bucket Tags (p. 93)

AWS Billing Reports for Amazon S3


Your monthly bill from AWS separates your usage information and cost by AWS service and function.
There are several AWS billing reports available: the monthly report, the cost allocation report, and
detailed billing reports. For information about how to see your billing reports, see Viewing Your Bill in
the AWS Billing and Cost Management User Guide.

You can also download a usage report that gives more detail about your Amazon S3 storage usage than
the billing reports. For more information, see AWS Usage Report for Amazon S3 (p. 85).

The following table lists the charges associated with Amazon S3 usage.

Amazon S3 Usage Charges

Charge | Comments
Storage | You pay for storing objects in your S3 buckets. The rate you’re charged depends on your objects' size, how long you stored the objects during the month, and the storage class: STANDARD, INTELLIGENT_TIERING, STANDARD_IA (IA for infrequent access), ONEZONE_IA, GLACIER, or Reduced Redundancy Storage (RRS). For more information about storage classes, see Storage Classes (p. 100).
Monitoring and Automation | You pay a monthly monitoring and automation fee per object stored in the INTELLIGENT_TIERING storage class to monitor access patterns and move objects between access tiers in INTELLIGENT_TIERING.
Requests | You pay for requests, for example, GET requests, made against your S3 buckets and objects. This includes lifecycle requests. The rates for requests depend on what kind of request you’re making. For information about request pricing, see Amazon S3 Pricing.
Retrievals | You pay for retrieving objects that are stored in STANDARD_IA, ONEZONE_IA, and GLACIER storage.
Early Deletes | If you delete an object stored in INTELLIGENT_TIERING, STANDARD_IA, ONEZONE_IA, or GLACIER storage before the minimum storage commitment has passed, you pay an early deletion fee for that object.
Storage Management | You pay for the storage management features (Amazon S3 inventory, analytics, and object tagging) that are enabled on your account’s buckets.
Bandwidth | You pay for all bandwidth into and out of Amazon S3, except for data transferred in from the internet, data transferred out to an Amazon Elastic Compute Cloud (Amazon EC2) instance when the instance is in the same AWS Region as the S3 bucket, and data transferred out to Amazon CloudFront (CloudFront). You also pay a fee for any data transferred using Amazon S3 Transfer Acceleration.

For detailed information on Amazon S3 usage charges for storage, data transfer, and services, see
Amazon S3 Pricing and the Amazon S3 FAQ.

For information on understanding codes and abbreviations used in the billing and usage reports for
Amazon S3, see Understanding Your AWS Billing and Usage Reports for Amazon S3 (p. 86).

More Info
• AWS Usage Report for Amazon S3 (p. 85)


• Using Cost Allocation S3 Bucket Tags (p. 93)


• AWS Billing and Cost Management
• Amazon S3 Pricing
• Amazon S3 FAQ
• Glacier Pricing

AWS Usage Report for Amazon S3


For more detail about your Amazon S3 storage usage, download dynamically generated AWS usage
reports. You can choose which usage type, operation, and time period to include. You can also choose
how the data is aggregated.

When you download a usage report, you can choose to aggregate usage data by hour, day, or month.
The Amazon S3 usage report lists operations by usage type and AWS Region, for example, the amount of
data transferred out of the Asia Pacific (Sydney) Region.

The Amazon S3 usage report includes the following information:

• Service – Amazon Simple Storage Service


• Operation – The operation performed on your bucket or object. For a detailed explanation of Amazon
S3 operations, see Tracking Operations in Your Usage Reports (p. 93).
• UsageType – One of the following values:
• A code that identifies the type of storage
• A code that identifies the type of request
• A code that identifies the type of retrieval
• A code that identifies the type of data transfer
• A code that identifies early deletions from INTELLIGENT_TIERING, STANDARD_IA, ONEZONE_IA, or
GLACIER storage
• StorageObjectCount – The count of objects stored within a given bucket

For a detailed explanation of Amazon S3 usage types, see Understanding Your AWS Billing and Usage
Reports for Amazon S3 (p. 86).
• Resource – The name of the bucket associated with the listed usage.
• StartTime – Start time of the day that the usage applies to, in Coordinated Universal Time (UTC).
• EndTime – End time of the day that the usage applies to, in Coordinated Universal Time (UTC).
• UsageValue – One of the following volume values:
• The number of requests during the specified time period
• The amount of data transferred, in bytes
• The amount of data stored, in byte-hours, which is the number of bytes stored in a given hour
• The amount of data associated with restorations from GLACIER, STANDARD_IA, or ONEZONE_IA
storage, in bytes

Tip
For detailed information about every request that Amazon S3 receives for your objects, turn
on server access logging for your buckets. For more information, see Amazon S3 Server Access
Logging (p. 642).

You can download a usage report as an XML or a comma-separated values (CSV) file, and you can open
the CSV report in a spreadsheet application.

For information on understanding the usage report, see Understanding Your AWS Billing and Usage
Reports for Amazon S3 (p. 86).

Downloading the AWS Usage Report


You can download a usage report as an .xml or a .csv file.

To download the usage report

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the title bar, choose your AWS Identity and Access Management (IAM) user name, and then choose
My Billing Dashboard.
3. In the navigation pane, choose Reports.
4. In the Other Reports section, choose AWS Usage Report.
5. For Services:, choose Amazon Simple Storage Service.
6. For Download Usage Report, choose the following settings:

• Usage Types – For a detailed explanation of Amazon S3 usage types, see Understanding Your
AWS Billing and Usage Reports for Amazon S3 (p. 86).
• Operation – For a detailed explanation of Amazon S3 operations, see Tracking Operations in Your
Usage Reports (p. 93).
• Time Period – The time period that you want the report to cover.
• Report Granularity – Whether you want the report to include subtotals by the hour, by the day, or
by the month.
7. To choose the format for the report, choose the Download for that format, and then follow the
prompts to see or save the report.

More Info
• Understanding Your AWS Billing and Usage Reports for Amazon S3 (p. 86)
• AWS Billing Reports for Amazon S3 (p. 83)

Understanding Your AWS Billing and Usage Reports for Amazon S3

Amazon S3 billing and usage reports use codes and abbreviations. For example, for usage type, which is
defined in the following table, region is replaced with one of the following abbreviations:

• APN1: Asia Pacific (Tokyo)
• APN2: Asia Pacific (Seoul)
• APS1: Asia Pacific (Singapore)
• APS2: Asia Pacific (Sydney)
• APS3: Asia Pacific (Mumbai)
• CAN1: Canada (Central)
• EUC1: EU (Frankfurt)
• EU: EU (Ireland)
• EUW2: EU (London)
• EUW3: EU (Paris)
• SAE1: South America (São Paulo)
• UGW1: AWS GovCloud (US-West)
• USE1 (or no prefix): US East (N. Virginia)
• USE2: US East (Ohio)
• USW1: US West (N. California)
• USW2: US West (Oregon)

For information about pricing by AWS Region, see Amazon S3 Pricing.

The first column in the following table lists usage types that appear in your billing and usage reports.

Usage Types

The numbers in parentheses refer to the notes that follow the table.

Usage Type | Units | Granularity | Description
region1-region2-AWS-In-Bytes | Bytes | Hourly | The amount of data transferred in to AWS Region1 from AWS Region2
region1-region2-AWS-Out-Bytes | Bytes | Hourly | The amount of data transferred from AWS Region1 to AWS Region2
region-DataTransfer-In-Bytes | Bytes | Hourly | The amount of data transferred into Amazon S3 from the internet
region-DataTransfer-Out-Bytes | Bytes | Hourly | The amount of data transferred from Amazon S3 to the internet (1)
region-C3DataTransfer-In-Bytes | Bytes | Hourly | The amount of data transferred into Amazon S3 from Amazon EC2 within the same AWS Region
region-C3DataTransfer-Out-Bytes | Bytes | Hourly | The amount of data transferred from Amazon S3 to Amazon EC2 within the same AWS Region
region-S3G-DataTransfer-In-Bytes | Bytes | Hourly | The amount of data transferred into Amazon S3 to restore objects from GLACIER storage
region-S3G-DataTransfer-Out-Bytes | Bytes | Hourly | The amount of data transferred from Amazon S3 to transition objects to GLACIER storage
region-DataTransfer-Regional-Bytes | Bytes | Hourly | The amount of data transferred from Amazon S3 to AWS resources within the same AWS Region
StorageObjectCount | Count | Daily | The number of objects stored within a given bucket
region-CloudFront-In-Bytes | Bytes | Hourly | The amount of data transferred into an AWS Region from a CloudFront distribution
region-CloudFront-Out-Bytes | Bytes | Hourly | The amount of data transferred from an AWS Region to a CloudFront distribution
region-EarlyDelete-ByteHrs | Byte-Hours (2) | Hourly | Prorated storage usage for objects deleted from GLACIER storage before the 90-day minimum commitment ended (3)
region-EarlyDelete-SIA | Byte-Hours | Hourly | Prorated storage usage for objects deleted from STANDARD_IA before the 30-day minimum commitment ended (4)
region-EarlyDelete-ZIA | Byte-Hours | Hourly | Prorated storage usage for objects deleted from ONEZONE_IA before the 30-day minimum commitment ended (4)
region-EarlyDelete-SIA-SmObjects | Byte-Hours | Hourly | Prorated storage usage for small objects (smaller than 128 KB) that were deleted from STANDARD_IA before the 30-day minimum commitment ended (4)
region-EarlyDelete-ZIA-SmObjects | Byte-Hours | Hourly | Prorated storage usage for small objects (smaller than 128 KB) that were deleted from ONEZONE_IA before the 30-day minimum commitment ended (4)
region-Inventory-ObjectsListed | Objects | Hourly | The number of objects listed for an object group (objects are grouped by bucket or prefix) with an inventory list
region-Requests-GLACIER-Tier1 | Count | Hourly | The number of PUT, COPY, POST, InitiateMultipartUpload, UploadPart, or CompleteMultipartUpload requests on GLACIER objects
region-Requests-GLACIER-Tier2 | Count | Hourly | The number of GET and all other requests not listed on GLACIER objects
region-Requests-SIA-Tier1 | Count | Hourly | The number of PUT, COPY, POST, or LIST requests on STANDARD_IA objects
region-Requests-ZIA-Tier1 | Count | Hourly | The number of PUT, COPY, POST, or LIST requests on ONEZONE_IA objects
region-Requests-SIA-Tier2 | Count | Hourly | The number of GET and all other non-SIA-Tier1 requests on STANDARD_IA objects
region-Requests-ZIA-Tier2 | Count | Hourly | The number of GET and all other non-ZIA-Tier1 requests on ONEZONE_IA objects
region-Requests-Tier1 | Count | Hourly | The number of PUT, COPY, POST, or LIST requests for STANDARD, RRS, and tags
region-Requests-Tier2 | Count | Hourly | The number of GET and all other non-Tier1 requests
region-Requests-Tier3 | Count | Hourly | The number of GLACIER archive requests and standard restore requests
region-Requests-Tier4 | Count | Hourly | The number of lifecycle transitions to INTELLIGENT_TIERING, STANDARD_IA, or ONEZONE_IA storage
region-Requests-Tier5 | Count | Hourly | The number of Bulk GLACIER restore requests
region-Requests-Tier6 | Count | Hourly | The number of Expedited GLACIER restore requests
region-Bulk-Retrieval-Bytes | Bytes | Hourly | The number of bytes of data retrieved with Bulk GLACIER requests
region-Requests-INT-Tier1 | Count | Hourly | The number of PUT, COPY, POST, or LIST requests on INTELLIGENT_TIERING objects
region-Requests-INT-Tier2 | Count | Hourly | The number of GET and all other non-Tier1 requests for INTELLIGENT_TIERING objects
region-Select-Returned-INT-Bytes | Bytes | Hourly | The number of bytes of data returned with Select requests from INTELLIGENT_TIERING storage
region-Select-Scanned-INT-Bytes | Bytes | Hourly | The number of bytes of data scanned with Select requests from INTELLIGENT_TIERING storage
region-EarlyDelete-INT | Byte-Hours | Hourly | Prorated storage usage for objects deleted from INTELLIGENT_TIERING before the 30-day minimum commitment ended
region-Monitoring-Automation-INT | Objects | Hourly | The number of unique objects monitored and auto-tiered in the INTELLIGENT_TIERING storage class
region-Expedited-Retrieval-Bytes | Bytes | Hourly | The number of bytes of data retrieved with Expedited GLACIER requests
region-Standard-Retrieval-Bytes | Bytes | Hourly | The number of bytes of data retrieved with standard GLACIER requests
region-Retrieval-SIA | Bytes | Hourly | The number of bytes of data retrieved from STANDARD_IA storage
region-Retrieval-ZIA | Bytes | Hourly | The number of bytes of data retrieved from ONEZONE_IA storage
region-StorageAnalytics-ObjCount | Objects | Hourly | The number of unique objects in each object group (where objects are grouped by bucket or prefix) tracked by storage analytics
region-Select-Scanned-Bytes | Bytes | Hourly | The number of bytes of data scanned with Select requests from STANDARD storage
region-Select-Scanned-SIA-Bytes | Bytes | Hourly | The number of bytes of data scanned with Select requests from STANDARD_IA storage
region-Select-Scanned-ZIA-Bytes | Bytes | Hourly | The number of bytes of data scanned with Select requests from ONEZONE_IA storage
region-Select-Returned-Bytes | Bytes | Hourly | The number of bytes of data returned with Select requests from STANDARD storage
region-Select-Returned-SIA-Bytes | Bytes | Hourly | The number of bytes of data returned with Select requests from STANDARD_IA storage
region-Select-Returned-ZIA-Bytes | Bytes | Hourly | The number of bytes of data returned with Select requests from ONEZONE_IA storage
region-TagStorage-TagHrs | Tag-Hours | Daily | The total of tags on all objects in the bucket reported by hour
region-TimedStorage-ByteHrs | Byte-Hours | Daily | The number of byte-hours that data was stored in STANDARD storage
region-TimedStorage-GLACIERByteHrs | Byte-Hours | Daily | The number of byte-hours that data was stored in GLACIER storage
region-TimedStorage-GlacierStaging | Byte-Hours | Daily | The number of byte-hours that data was stored in GLACIER staging storage
region-TimedStorage-INT-Freq-ByteHrs | Byte-Hours | Daily | The number of byte-hours that data was stored in the frequent access tier of INTELLIGENT_TIERING storage
region-TimedStorage-INT-InFreq-ByteHrs | Byte-Hours | Daily | The number of byte-hours that data was stored in the infrequent access tier of INTELLIGENT_TIERING storage
region-TimedStorage-RRS-ByteHrs | Byte-Hours | Daily | The number of byte-hours that data was stored in Reduced Redundancy Storage (RRS)
region-TimedStorage-SIA-ByteHrs | Byte-Hours | Daily | The number of byte-hours that data was stored in STANDARD_IA storage
region-TimedStorage-ZIA-ByteHrs | Byte-Hours | Daily | The number of byte-hours that data was stored in ONEZONE_IA storage
region-TimedStorage-SIA-SmObjects | Byte-Hours | Daily | The number of byte-hours that small objects (smaller than 128 KB) were stored in STANDARD_IA storage
region-TimedStorage-ZIA-SmObjects | Byte-Hours | Daily | The number of byte-hours that small objects (smaller than 128 KB) were stored in ONEZONE_IA storage

Notes:

1. If you terminate a transfer before completion, the amount of data that is transferred might exceed
the amount of data that your application receives. This discrepancy can occur because a transfer
termination request cannot be executed instantaneously, and some amount of data might be in transit
pending execution of the termination request. This data in transit is billed as data transferred “out.”
2. For more information on the byte-hours unit, see Converting Usage Byte-Hours to Billed GB-
Months (p. 93).
3. For objects that are archived to the GLACIER storage class, when they are deleted prior to 90 days,
there is a prorated charge per gigabyte for the remaining days.
4. For objects that are in INTELLIGENT_TIERING, STANDARD_IA, or ONEZONE_IA storage, when they are
deleted, overwritten, or transitioned to a different storage class prior to 30 days, there is a prorated
charge per gigabyte for the remaining days.
5. For small objects (smaller than 128 KB) that are in STANDARD_IA or ONEZONE_IA storage, when
they are deleted, overwritten, or transitioned to a different storage class prior to 30 days, there is a
prorated charge per gigabyte for the remaining days.
6. There is no minimum billable object size for objects in the INTELLIGENT_TIERING storage class, but
objects that are smaller than 128 KB are not eligible for auto-tiering and are always charged at the
rate for the INTELLIGENT_TIERING frequent access tier.


Tracking Operations in Your Usage Reports


Operations describe the action taken on your AWS object or bucket by the specified usage type.
Operations are indicated by self-explanatory codes, such as PutObject or ListBucket. To see which
actions on your bucket generated a specific type of usage, use these codes. When you create a usage
report, you can choose to include All Operations, or a specific operation, for example, GetObject, to
report on.

Converting Usage Byte-Hours to Billed GB-Months


The volume of storage that we bill you for each month is based on the average amount of storage you
used throughout the month. You are billed for all of the object data and metadata stored in buckets
that you created under your AWS account. For more information about metadata, see Object Key and
Metadata (p. 96).

We measure your storage usage in TimedStorage-ByteHrs, which are totaled up at the end of the month
to generate your monthly charges. The usage report reports your storage usage in byte-hours and the
billing reports report storage usage in GB-months. To correlate your usage report to your billing reports,
you need to convert byte-hours into GB-months.

For example, if you store 100 GB (107,374,182,400 bytes) of STANDARD Amazon S3 storage data in your
bucket for the first 15 days in March, and 100 TB (109,951,162,777,600 bytes) of STANDARD Amazon S3
storage data for the final 16 days in March, you will have used 42,259,901,212,262,400 byte-hours.

First, calculate the total byte-hour usage:

  [107,374,182,400 bytes x 15 days x (24 hours/day)]
+ [109,951,162,777,600 bytes x 16 days x (24 hours/day)]
= 42,259,901,212,262,400 byte-hours

Then convert the byte-hours to GB-Months:

42,259,901,212,262,400 byte-hours / 1,073,741,824 bytes per GB / 24 hours per day / 31 days in March
= 52,900 GB-Months
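
The same conversion is easy to script. The following is a small sketch in Python that reproduces the numbers above.

GB = 1_073_741_824      # bytes per GB
HOURS_PER_DAY = 24
DAYS_IN_MARCH = 31

# Total byte-hour usage for the month.
byte_hours = (107_374_182_400 * 15 * HOURS_PER_DAY
              + 109_951_162_777_600 * 16 * HOURS_PER_DAY)

# Convert byte-hours to GB-Months.
gb_months = byte_hours / GB / HOURS_PER_DAY / DAYS_IN_MARCH

print(byte_hours)        # 42259901212262400
print(round(gb_months))  # 52900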

More Info
• AWS Usage Report for Amazon S3 (p. 85)
• AWS Billing Reports for Amazon S3 (p. 83)
• Amazon S3 Pricing
• Amazon S3 FAQ
• Glacier Pricing
• Glacier FAQs

Using Cost Allocation S3 Bucket Tags


To track the storage cost or other criteria for individual projects or groups of projects, label your Amazon
S3 buckets using cost allocation tags. A cost allocation tag is a key-value pair that you associate with an
S3 bucket. After you activate cost allocation tags, AWS uses the tags to organize your resource costs on
your cost allocation report. Cost allocation tags can only be used to label buckets. For information about
tags used for labeling objects, see Object Tagging (p. 107).


The cost allocation report lists the AWS usage for your account by product category and AWS Identity
and Access Management (IAM) user. The report contains the same line items as the detailed billing
report (see Understanding Your AWS Billing and Usage Reports for Amazon S3 (p. 86)) and additional
columns for your tag keys.

AWS provides two types of cost allocation tags, an AWS-generated tag and user-defined tags.
AWS defines, creates, and applies the AWS-generated createdBy tag for you after an Amazon S3
CreateBucket event. You define, create, and apply user-defined tags to your S3 bucket.

You must activate both types of tags separately in the Billing and Cost Management console before they
can appear in your billing reports. For more information about AWS-generated tags, see AWS-Generated
Cost Allocation Tags. For more information about activating tags, see Using Cost Allocation Tags in the
AWS Billing and Cost Management User Guide.

A user-defined cost allocation tag has the following components:

• The tag key. The tag key is the name of the tag. For example, in the tag project/Trinity, project is the
key. The tag key is a case-sensitive string that can contain 1 to 128 Unicode characters.

• The tag value. The tag value is a required string. For example, in the tag project/Trinity, Trinity is the
value. The tag value is a case-sensitive string that can contain from 0 to 256 Unicode characters.

For details on the allowed characters for user-defined tags and other restrictions, see User-Defined Tag
Restrictions in the AWS Billing and Cost Management User Guide.

Each S3 bucket has a tag set. A tag set contains all of the tags that are assigned to that bucket. A tag set
can contain as many as 10 tags, or it can be empty. Keys must be unique within a tag set, but values in
a tag set don't have to be unique. For example, you can have the same value in tag sets named project/
Trinity and cost-center/Trinity.

Within a bucket, if you add a tag that has the same key as an existing tag, the new value overwrites the
old value.

AWS doesn't apply any semantic meaning to your tags. We interpret tags strictly as character strings.

To add, list, edit, or delete tags, you can use the Amazon S3 console, the AWS Command Line Interface
(AWS CLI), or the Amazon S3 API.

For more information about creating tags, see the appropriate topic:

• To create tags in the console, see How Do I View the Properties for an S3 Bucket? in the Amazon Simple
Storage Service Console User Guide.
• To create tags using the Amazon S3 API, see PUT Bucket tagging in the Amazon Simple Storage Service
API Reference.
• To create tags using the AWS CLI, see put-bucket-tagging in the AWS CLI Command Reference.
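
For example, the following minimal sketch uses the AWS SDK for Python (Boto 3) to apply the project/Trinity tag described earlier; the bucket name is a placeholder. Note that PUT Bucket tagging replaces the entire tag set, so include any existing tags that you want to keep.

import boto3

s3 = boto3.client("s3")

# Apply a tag set to the bucket. This replaces any existing tag set.
s3.put_bucket_tagging(
    Bucket="mybucket",  # placeholder bucket name
    Tagging={"TagSet": [{"Key": "project", "Value": "Trinity"}]},
)

# List the tags currently on the bucket.
tags = s3.get_bucket_tagging(Bucket="mybucket")["TagSet"]
print(tags)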

For more information about user-defined tags, see User-Defined Cost Allocation Tags in the AWS Billing
and Cost Management User Guide.

More Info
• Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide
• Understanding Your AWS Billing and Usage Reports for Amazon S3 (p. 86)
• AWS Billing Reports for Amazon S3 (p. 83)


Working with Amazon S3 Objects


Amazon S3 is a simple key-value store designed to store as many objects as you want. You store these
objects in one or more buckets. An object consists of the following:

• Key – The name that you assign to an object. You use the object key to retrieve the object.

For more information, see Object Key and Metadata (p. 96).
• Version ID – Within a bucket, a key and version ID uniquely identify an object.

The version ID is a string that Amazon S3 generates when you add an object to a bucket. For more
information, see Object Versioning (p. 104).
• Value – The content that you are storing.

An object value can be any sequence of bytes. Objects can range in size from zero to 5 TB. For more
information, see Uploading Objects (p. 165).
• Metadata – A set of name-value pairs with which you can store information regarding the object.

You can assign metadata, referred to as user-defined metadata, to your objects in Amazon S3. Amazon
S3 also assigns system-metadata to these objects, which it uses for managing objects. For more
information, see Object Key and Metadata (p. 96).
• Subresources – Amazon S3 uses the subresource mechanism to store object-specific additional
information.

Because subresources are subordinates to objects, they are always associated with some other entity
such as an object or a bucket. For more information, see Object Subresources (p. 104).
• Access Control Information – You can control access to the objects you store in Amazon S3.

Amazon S3 supports both resource-based access control, such as an access control list (ACL) and
bucket policies, and user-based access control. For more information, see Managing Access Permissions
to Your Amazon S3 Resources (p. 269).

For more information about working with objects, see the following sections. Your Amazon S3 resources
(for example buckets and objects) are private by default. You need to explicitly grant permission for
others to access these resources. For example, you might want to share a video or a photo stored in
your Amazon S3 bucket on your website. That works only if you either make the object public or use a
presigned URL on your website. For more information about sharing objects, see Share an Object with
Others (p. 162).
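
For example, a presigned URL can be generated with any of the SDKs. The following is a minimal sketch using the AWS SDK for Python (Boto 3); the bucket and key names are placeholders, and the URL is valid for one hour.

import boto3

s3 = boto3.client("s3")

# Generate a presigned GET URL that grants temporary access to a private object.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "mybucket", "Key": "photos/puppy.jpg"},  # placeholders
    ExpiresIn=3600,  # seconds
)
print(url)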

Topics
• Object Key and Metadata (p. 96)
• Storage Classes (p. 100)
• Object Subresources (p. 104)
• Object Versioning (p. 104)
• Object Tagging (p. 107)
• Object Lifecycle Management (p. 115)
• Cross-Origin Resource Sharing (CORS) (p. 146)

• Operations on Objects (p. 156)

Object Key and Metadata


Each Amazon S3 object has data, a key, and metadata. The object key (or key name) uniquely identifies
the object in a bucket. Object metadata is a set of name-value pairs. You can set object metadata at the
time you upload the object. After you upload the object, you cannot modify object metadata. The only
way to modify object metadata is to make a copy of the object and set the metadata.

Topics
• Object Keys (p. 96)
• Object Metadata (p. 98)

Object Keys
When you create an object, you specify the key name, which uniquely identifies the object in the bucket.
For example, in the Amazon S3 console (see AWS Management Console), when you highlight a bucket, a
list of objects in your bucket appears. These names are the object keys. The name for a key is a sequence
of Unicode characters whose UTF-8 encoding is at most 1024 bytes long.

The Amazon S3 data model is a flat structure: you create a bucket, and the bucket stores objects. There
is no hierarchy of subbuckets or subfolders; however, you can infer logical hierarchy using key name
prefixes and delimiters as the Amazon S3 console does. The Amazon S3 console supports a concept of
folders. Suppose that your bucket (admin-created) has four objects with the following object keys:

Development/Projects1.xls

Finance/statement1.pdf

Private/taxdocument.pdf

s3-dg.pdf

The console uses the key name prefixes (Development/, Finance/, and Private/) and delimiter ('/')
to present a folder structure.

The s3-dg.pdf key does not have a prefix, so its object appears directly at the root level of the bucket.
If you open the Development/ folder, you see the Projects1.xls object in it.
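
You can see the same folder inference programmatically by listing the bucket with a delimiter. The following is a minimal sketch using the AWS SDK for Python (Boto 3) against the example bucket above (the bucket name is the placeholder from the example).

import boto3

s3 = boto3.client("s3")

# List the bucket with '/' as the delimiter to group keys by prefix.
response = s3.list_objects_v2(Bucket="admin-created", Delimiter="/")

# Keys with a common prefix are rolled up, like folders in the console.
for prefix in response.get("CommonPrefixes", []):
    print("Folder:", prefix["Prefix"])   # Development/, Finance/, Private/

# Keys without a prefix appear at the root level.
for obj in response.get("Contents", []):
    print("Object:", obj["Key"])         # s3-dg.pdf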


Note
Amazon S3 supports buckets and objects, and there is no hierarchy in Amazon S3. However, the
prefixes and delimiters in an object key name enable the Amazon S3 console and the AWS SDKs
to infer hierarchy and introduce the concept of folders.

Object Key Naming Guidelines


You can use any UTF-8 character in an object key name. However, using certain characters in key names
may cause problems with some applications and protocols. The following guidelines help you maximize
compliance with DNS, web-safe characters, XML parsers, and other APIs.

Safe Characters
The following character sets are generally safe for use in key names:

Alphanumeric characters:

• 0-9
• a-z
• A-Z

Special characters:

• !
• -
• _
• .
• *
• '
• (
• )

The following are examples of valid object key names:

• 4my-organization
• my.great_photos-2014/jan/myvacation.jpg
• videos/2014/birthday/video1.wmv


Characters That Might Require Special Handling


The following characters in a key name might require additional code handling and likely need to be
URL encoded or referenced as hexadecimal values. Some of these are non-printable characters that your
browser might not handle, which also requires special handling:

• Ampersand ("&")
• Dollar ("$")
• ASCII character ranges 00–1F hex (0–31 decimal) and 7F (127 decimal)
• 'At' symbol ("@")
• Equals ("=")
• Semicolon (";")
• Colon (":")
• Plus ("+")
• Space – Significant sequences of spaces may be lost in some uses (especially multiple spaces)
• Comma (",")
• Question mark ("?")

Characters to Avoid
Avoid the following characters in a key name because they require significant special handling for
consistency across all applications.

• Backslash ("\")
• Left curly brace ("{")
• Non-printable ASCII characters (128–255 decimal characters)
• Caret ("^")
• Right curly brace ("}")
• Percent character ("%")
• Grave accent / back tick ("`")
• Right square bracket ("]")
• Quotation marks
• 'Greater Than' symbol (">")
• Left square bracket ("[")
• Tilde ("~")
• 'Less Than' symbol ("<")
• 'Pound' character ("#")
• Vertical bar / pipe ("|")

Object Metadata
There are two kinds of metadata: system metadata and user-defined metadata.

System-Defined Metadata
For each object stored in a bucket, Amazon S3 maintains a set of system metadata. Amazon S3 processes
this system metadata as needed. For example, Amazon S3 maintains object creation date and size
metadata and uses this information as part of object management.

There are two categories of system metadata:


1. Metadata such as the object creation date is system controlled, meaning that only Amazon S3 can
modify its value.
2. Other system metadata, such as the storage class configured for the object and whether the object
has server-side encryption enabled, are examples of system metadata whose values you control.
If your bucket is configured as a website, sometimes you might want to redirect a page request to
another page or an external URL. In this case, a webpage is an object in your bucket. Amazon S3 stores
the page redirect value as system metadata whose value you control.

When you create objects, you can configure values of these system metadata items or update the
values when you need to. For more information about storage classes, see Storage Classes (p. 100).
For more information about server-side encryption, see Protecting Data Using Encryption (p. 388).

The following list describes the system-defined metadata and whether you can update the value.

• Date – Current date and time. (Can user modify the value? No)

• Content-Length – Object size in bytes. (Can user modify the value? No)

• Last-Modified – Object creation date or the last modified date, whichever is the latest. (Can user
modify the value? No)

• Content-MD5 – The base64-encoded 128-bit MD5 digest of the object. (Can user modify the value? No)

• x-amz-server-side-encryption – Indicates whether server-side encryption is enabled for the object,
and whether that encryption is from the AWS Key Management Service (SSE-KMS) or from AWS
managed encryption (SSE-S3). For more information, see Protecting Data Using Server-Side
Encryption (p. 388). (Can user modify the value? Yes)

• x-amz-version-id – Object version. When you enable versioning on a bucket, Amazon S3 assigns a
version number to objects added to the bucket. For more information, see Using Versioning (p. 425).
(Can user modify the value? No)

• x-amz-delete-marker – In a bucket that has versioning enabled, this Boolean marker indicates whether
the object is a delete marker. (Can user modify the value? No)

• x-amz-storage-class – Storage class used for storing the object. For more information, see Storage
Classes (p. 100). (Can user modify the value? Yes)

• x-amz-website-redirect-location – Redirects requests for the associated object to another object in
the same bucket or an external URL. For more information, see (Optional) Configuring a Webpage
Redirect (p. 522). (Can user modify the value? Yes)

• x-amz-server-side-encryption-aws-kms-key-id – If x-amz-server-side-encryption is present and has
the value aws:kms, this indicates the ID of the AWS Key Management Service (AWS KMS) master
encryption key that was used for the object. (Can user modify the value? Yes)

• x-amz-server-side-encryption-customer-algorithm – Indicates whether server-side encryption with
customer-provided encryption keys (SSE-C) is enabled. For more information, see Protecting Data
Using Server-Side Encryption with Customer-Provided Encryption Keys (SSE-C) (p. 403). (Can user
modify the value? Yes)

User-Defined Metadata
When uploading an object, you can also assign metadata to the object. You provide this optional
information as a name-value (key-value) pair when you send a PUT or POST request to create the object.
When you upload objects using the REST API, the optional user-defined metadata names must begin
with "x-amz-meta-" to distinguish them from other HTTP headers. When you retrieve the object using
the REST API, this prefix is returned. When you upload objects using the SOAP API, the prefix is not
required. When you retrieve the object using the SOAP API, the prefix is removed, regardless of which API
you used to upload the object.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.

When metadata is retrieved through the REST API, Amazon S3 combines headers that have the same
name (ignoring case) into a comma-delimited list. If some metadata contains unprintable characters, it
is not returned. Instead, the x-amz-missing-meta header is returned with a value of the number of
unprintable metadata entries.

User-defined metadata is a set of key-value pairs. Amazon S3 stores user-defined metadata keys in
lowercase. Each key-value pair must conform to US-ASCII when you are using REST and to UTF-8 when
you are using SOAP or browser-based uploads via POST.
Note
The PUT request header is limited to 8 KB in size. Within the PUT request header, the user-
defined metadata is limited to 2 KB in size. The size of user-defined metadata is measured by
taking the sum of the number of bytes in the UTF-8 encoding of each key and value.

For information about adding metadata to your object after it’s been uploaded, see How Do I Add
Metadata to an S3 Object? in the Amazon Simple Storage Service Console User Guide.
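
As a minimal sketch (not taken from this guide), the following shows how user-defined metadata might be attached to an object with the AWS SDK for Java; the SDK sends each entry as an x-amz-meta-* request header. The bucket, key, and file names are placeholders, and an existing AmazonS3 client named s3Client is assumed.

// Sketch only: assumes an existing AmazonS3 client named s3Client.
ObjectMetadata metadata = new ObjectMetadata();
metadata.addUserMetadata("title", "quarterly-report");   // sent as x-amz-meta-title
metadata.addUserMetadata("reviewed", "true");            // sent as x-amz-meta-reviewed

PutObjectRequest putRequest = new PutObjectRequest(
        "examplebucket", "docs/report.pdf", new File("/tmp/report.pdf"))
        .withMetadata(metadata);
s3Client.putObject(putRequest);

// Reading the metadata back returns the user-defined entries (keys are stored in lowercase).
ObjectMetadata stored = s3Client.getObjectMetadata("examplebucket", "docs/report.pdf");
System.out.println(stored.getUserMetadata());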

Storage Classes
Each object in Amazon S3 has a storage class associated with it. For example, if you list the objects in an
S3 bucket, the console shows the storage class for all the objects in the list.


Amazon S3 offers a range of storage classes for the objects that you store. You choose a class depending
on your use case scenario and performance access requirements. All of these storage classes offer high
durability.

Topics
• Storage Classes for Frequently Accessed Objects (p. 101)
• Storage Class That Automatically Optimizes Frequently and Infrequently Accessed Objects (p. 101)
• Storage Classes for Infrequently Accessed Objects (p. 102)
• Comparing the Amazon S3 Storage Classes (p. 103)
• Setting the Storage Class of an Object (p. 103)

Storage Classes for Frequently Accessed Objects


For performance-sensitive use cases (those that require millisecond access time) and frequently accessed
data, Amazon S3 provides the following storage classes:

• STANDARD—The default storage class. If you don't specify the storage class when you upload an
object, Amazon S3 assigns the STANDARD storage class.

 
• REDUCED_REDUNDANCY—The Reduced Redundancy Storage (RRS) storage class is designed for
noncritical, reproducible data that can be stored with less redundancy than the STANDARD storage
class.
Important
We recommend that you not use this storage class. The STANDARD storage class is more cost
effective.

For durability, RRS objects have an average annual expected loss of 0.01% of objects. If an RRS object
is lost, when requests are made to that object, Amazon S3 returns a 405 error.

Storage Class That Automatically Optimizes


Frequently and Infrequently Accessed Objects
The INTELLIGENT_TIERING storage class is designed to optimize storage costs by automatically moving
data to the most cost-effective storage access tier, without performance impact or operational overhead.
INTELLIGENT_TIERING delivers automatic cost savings by moving data on a granular object level
between two access tiers, a frequent access tier and a lower-cost infrequent access tier, when access
patterns change. The INTELLIGENT_TIERING storage class is ideal if you want to optimize storage costs
automatically for long-lived data when access patterns are unknown or unpredictable.

The INTELLIGENT_TIERING storage class stores objects in two access tiers: one tier that is optimized
for frequent access and another lower-cost tier that is optimized for infrequently accessed data. For
a small monthly monitoring and automation fee per object, Amazon S3 monitors access patterns
of the objects in the INTELLIGENT_TIERING storage class and moves objects that have not been
accessed for 30 consecutive days to the infrequent access tier. There are no retrieval fees when using
the INTELLIGENT_TIERING storage class. If an object in the infrequent access tier is accessed, it is
automatically moved back to the frequent access tier. No additional tiering fees apply when objects are
moved between access tiers within the INTELLIGENT_TIERING storage class.
Note
The INTELLIGENT_TIERING storage class is suitable for objects larger than 128 KB that you
plan to store for at least 30 days. If the size of an object is less than 128 KB, it is not eligible for
auto-tiering. Smaller objects can be stored, but they are always charged at the frequent access


tier rates in the INTELLIGENT_TIERING storage class. If you delete an object before the 30-day
minimum, you are charged for 30 days. For pricing information, see Amazon S3 Pricing.

Storage Classes for Infrequently Accessed Objects


The STANDARD_IA and ONEZONE_IA storage classes are designed for long-lived and infrequently
accessed data. (IA stands for infrequent access.) STANDARD_IA and ONEZONE_IA objects are available for
millisecond access (similar to the STANDARD storage class). Amazon S3 charges a retrieval fee for these
objects, so they are most suitable for infrequently accessed data. For pricing information, see Amazon S3
Pricing.

For example, you might choose the STANDARD_IA and ONEZONE_IA storage classes:

• For storing backups.

 
• For older data that is accessed infrequently, but that still requires millisecond access. For example,
when you upload data, you might choose the STANDARD storage class, and use lifecycle configuration
to tell Amazon S3 to transition the objects to the STANDARD_IA or ONEZONE_IA class. For more
information about lifecycle management, see Object Lifecycle Management (p. 115).

Note
The STANDARD_IA and ONEZONE_IA storage classes are suitable for objects larger than 128 KB
that you plan to store for at least 30 days. If an object is less than 128 KB, Amazon S3 charges
you for 128 KB. If you delete an object before the 30-day minimum, you are charged for 30
days. For pricing information, see Amazon S3 Pricing.

These storage classes differ as follows:

• STANDARD_IA—Amazon S3 stores the object data redundantly across multiple geographically


separated Availability Zones (similar to the STANDARD storage class). STANDARD_IA objects are
resilient to the loss of an Availability Zone. This storage class offers greater availability, durability, and
resiliency than the ONEZONE_IA class.

 
• ONEZONE_IA—Amazon S3 stores the object data in only one Availability Zone, which makes it less
expensive than STANDARD_IA. However, the data is not resilient to the physical loss of the Availability
Zone resulting from disasters, such as earthquakes and floods. The ONEZONE_IA storage class is
as durable as STANDARD_IA, but it is less available and less resilient. For a comparison of storage
class durability and availability, see the Durability and Availability table at the end of this section. For
pricing, see Amazon S3 Pricing.

We recommend the following:

• STANDARD_IA—Use for your primary or only copy of data that can't be recreated.
• ONEZONE_IA—Use if you can recreate the data if the Availability Zone fails, and for object replicas
when setting cross-region replication (CRR).

GLACIER Storage Class


The GLACIER storage class is suitable for archiving data where data access is infrequent. This storage
class offers the same durability and resiliency as the STANDARD storage class.

You can set the storage class of an object to GLACIER in the same ways that you do for the other storage
classes as described in the section Setting the Storage Class of an Object (p. 103). However, the


GLACIER archived objects are not available for real-time access. You must first restore the GLACIER
objects before you can access them (STANDARD, RRS, STANDARD_IA, and ONEZONE_IA objects are
available for anytime access). For more information, see Restoring Archived Objects (p. 246).
Important
When you choose the GLACIER storage class, Amazon S3 uses the low-cost Glacier service to
store the objects. Although the objects are stored in Glacier, these remain Amazon S3 objects
that you manage in Amazon S3, and you cannot access them directly through Glacier.

To learn more about the Glacier service, see the Amazon S3 Glacier Developer Guide.

Comparing the Amazon S3 Storage Classes


The following table compares the storage classes.

All of the storage classes except for ONEZONE_IA are designed to be resilient to simultaneous complete
data loss in a single Availability Zone and partial loss in another Availability Zone.

In addition to the performance requirements of your application scenario, consider price. For storage
class pricing, see Amazon S3 Pricing.

Setting the Storage Class of an Object


Amazon S3 APIs support setting (or updating) the storage class of objects as follows:

• When creating a new object, you can specify its storage class. For example, when creating objects
using the PUT Object, POST Object, and Initiate Multipart Upload APIs, you add the x-amz-storage-
class request header to specify a storage class. If you don't add this header, Amazon S3 uses
STANDARD, the default storage class.

 
• You can also change the storage class of an object that is already stored in Amazon S3 by making a
copy of the object using the PUT Object - Copy API. You copy the object in the same bucket using the
same key name and specify request headers as follows:
• Set the x-amz-metadata-directive header to COPY.
• Set the x-amz-storage-class to the storage class that you want to use.

In a versioning-enabled bucket, you cannot change the storage class of a specific version of an object.
When you copy it, Amazon S3 gives it a new version ID.


 
• You can direct Amazon S3 to change the storage class of objects by adding a lifecycle configuration to
a bucket. For more information, see Object Lifecycle Management (p. 115).

 
• When setting up a cross-region replication (CRR) configuration, you can set the storage class for
replicated objects. For more information, see Replication Configuration Overview (p. 567).

To create and update object storage classes, you can use the Amazon S3 console, AWS SDKs, or the AWS
Command Line Interface (AWS CLI). Each uses the Amazon S3 APIs to send requests to Amazon S3.
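
The following minimal sketch (an illustration, not taken from this guide) uses the AWS SDK for Java to set the storage class at upload time and to change the storage class of an existing object by copying it over itself. The bucket and key names are placeholders, and an existing AmazonS3 client named s3Client is assumed.

// Sketch only: assumes an existing AmazonS3 client named s3Client.
// Set the storage class when you create the object.
PutObjectRequest putRequest = new PutObjectRequest(
        "examplebucket", "logs/app.log", new File("/tmp/app.log"))
        .withStorageClass(StorageClass.StandardInfrequentAccess);
s3Client.putObject(putRequest);

// Change the storage class of an existing object by copying it over itself.
CopyObjectRequest copyRequest = new CopyObjectRequest(
        "examplebucket", "logs/app.log",    // source bucket and key
        "examplebucket", "logs/app.log")    // destination (same bucket and key)
        .withStorageClass(StorageClass.StandardInfrequentAccess);
s3Client.copyObject(copyRequest);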

Object Subresources
Amazon S3 defines a set of subresources associated with buckets and objects. Subresources are
subordinate to objects; that is, subresources do not exist on their own. They are always associated with
some other entity, such as an object or a bucket.

The following list describes the subresources associated with Amazon S3 objects.

• acl – Contains a list of grants identifying the grantees and the permissions granted. When you create
an object, the acl identifies the object owner as having full control over the object. You can retrieve
an object ACL or replace it with an updated list of grants. Any update to an ACL requires you to
replace the existing ACL. For more information about ACLs, see Managing Access with ACLs (p. 370).

• torrent – Amazon S3 supports the BitTorrent protocol. Amazon S3 uses the torrent subresource to
return the torrent file associated with the specific object. To retrieve a torrent file, you specify the
torrent subresource in your GET request. Amazon S3 creates a torrent file and returns it. You can only
retrieve the torrent subresource; you cannot create, update, or delete it. For more information, see
Using BitTorrent with Amazon S3 (p. 631).
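
As a brief illustration (a sketch, not taken from this guide), the following shows how the acl subresource might be read and replaced with the AWS SDK for Java. The bucket and key names are placeholders, and an existing AmazonS3 client named s3Client is assumed.

// Sketch only: assumes an existing AmazonS3 client named s3Client.
// Read the ACL currently attached to the object.
AccessControlList acl = s3Client.getObjectAcl("examplebucket", "photos/photo1.jpg");
System.out.println("Owner: " + acl.getOwner().getDisplayName());

// Replace the ACL with a canned ACL (any update replaces the existing ACL as a whole).
s3Client.setObjectAcl("examplebucket", "photos/photo1.jpg",
        CannedAccessControlList.AuthenticatedRead);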

Object Versioning
Use versioning to keep multiple versions of an object in one bucket. For example, you could store my-
image.jpg (version 111111) and my-image.jpg (version 222222) in a single bucket. Versioning
protects you from the consequences of unintended overwrites and deletions. You can also use versioning
to archive objects so you have access to previous versions.
Note
The SOAP API does not support versioning. SOAP support over HTTP is deprecated, but it is still
available over HTTPS. New Amazon S3 features are not supported for SOAP.

To customize your data retention approach and control storage costs, use object versioning with Object
Lifecycle Management (p. 115). For information about creating lifecycle policies using the AWS
Management Console, see How Do I Create a Lifecycle Policy for an S3 Bucket? in the Amazon Simple
Storage Service Console User Guide.

If you have an object expiration lifecycle policy in your non-versioned bucket and you want to maintain
the same permanent delete behavior when you enable versioning, you must add a noncurrent expiration


policy. The noncurrent expiration lifecycle policy will manage the deletes of the noncurrent object
versions in the version-enabled bucket. (A version-enabled bucket maintains one current and zero or
more noncurrent object versions.)

You must explicitly enable versioning on your bucket. By default, versioning is disabled. Regardless
of whether you have enabled versioning, each object in your bucket has a version ID. If you have not
enabled versioning, Amazon S3 sets the value of the version ID to null. If you have enabled versioning,
Amazon S3 assigns a unique version ID value for the object. When you enable versioning on a bucket,
objects already stored in the bucket are unchanged. The version IDs (null), contents, and permissions
remain the same.

Enabling and suspending versioning is done at the bucket level. When you enable versioning for a
bucket, all objects added to it will have a unique version ID. Unique version IDs are randomly generated,
Unicode, UTF-8 encoded, URL-ready, opaque strings that are at most 1024 bytes long. An example
version ID is 3/L4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nr8X8gdRQBpUMLUo. Only
Amazon S3 generates version IDs. They cannot be edited.
Note
For simplicity, we will use much shorter IDs in all our examples.
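
As a minimal sketch (not taken from this guide), versioning might be enabled on a bucket with the AWS SDK for Java as follows. The bucket name and the s3Client variable are placeholder assumptions.

// Sketch only: assumes an existing AmazonS3 client named s3Client.
BucketVersioningConfiguration versioningConfig =
        new BucketVersioningConfiguration(BucketVersioningConfiguration.ENABLED);
s3Client.setBucketVersioningConfiguration(
        new SetBucketVersioningConfigurationRequest("examplebucket", versioningConfig));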

When you PUT an object in a versioning-enabled bucket, the noncurrent version is not overwritten. The
following figure shows that when a new version of photo.gif is PUT into a bucket that already contains
an object with the same name, the original object (ID = 111111) remains in the bucket, Amazon S3
generates a new version ID (121212), and adds the newer version to the bucket.

This functionality prevents you from accidentally overwriting or deleting objects and affords you the
opportunity to retrieve a previous version of an object.

When you DELETE an object, all versions remain in the bucket and Amazon S3 inserts a delete marker, as
shown in the following figure.

The delete marker becomes the current version of the object. By default, GET requests retrieve the most
recently stored version. Performing a simple GET Object request when the current version is a delete
marker returns a 404 Not Found error, as shown in the following figure.


You can, however, GET a noncurrent version of an object by specifying its version ID. In the following
figure, we GET a specific object version, 111111. Amazon S3 returns that object version even though it's
not the current version.

You can permanently delete an object by specifying the version you want to delete. Only the owner
of an Amazon S3 bucket can permanently delete a version. The following figure shows how DELETE
versionId permanently deletes an object from a bucket and that Amazon S3 doesn't insert a delete
marker.
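
As a minimal sketch (not taken from this guide), the following shows how a specific version might be retrieved or permanently deleted with the AWS SDK for Java by supplying its version ID. The bucket, key, and version ID are placeholders, and an existing AmazonS3 client named s3Client is assumed.

// Sketch only: assumes an existing AmazonS3 client named s3Client.
// GET a specific (possibly noncurrent) version of an object.
S3Object oldVersion = s3Client.getObject(
        new GetObjectRequest("examplebucket", "photo.gif", "111111"));
System.out.println("Retrieved version: " + oldVersion.getObjectMetadata().getVersionId());

// Permanently delete that version; no delete marker is inserted.
s3Client.deleteVersion("examplebucket", "photo.gif", "111111");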

You can add additional security by configuring a bucket to enable MFA (multi-factor authentication)
Delete. When you do, the bucket owner must include two forms of authentication in any request
to delete a version or change the versioning state of the bucket. For more information, see MFA
Delete (p. 427).
Important
If you notice a significant increase in the number of HTTP 503 (Slow Down) responses received
for Amazon S3 PUT or DELETE object requests to a bucket that has versioning enabled, you
might have one or more objects in the bucket for which there are millions of versions. For more
information, see Troubleshooting Amazon S3 (p. 638).

For more information, see Using Versioning (p. 425).


Object Tagging
Use object tagging to categorize storage. Each tag is a key-value pair. Consider the following tagging
examples:

• Suppose an object contains protected health information (PHI) data. You might tag the object using
one of the following key-value pairs:

PHI=True

or

Classification=PHI

• Suppose you store project files in your S3 bucket. You might tag these objects with a key called
Project and a value, as shown following:

Project=Blue

• You can add multiple tags to an object, as shown following:

Project=x
Classification=confidential

You can add tags to new objects when you upload them or you can add them to existing objects. Note
the following:

• You can associate up to 10 tags with an object. Tags associated with an object must have unique tag
keys.
• A tag key can be up to 128 Unicode characters in length and tag values can be up to 256 Unicode
characters in length.
• Keys and values are case sensitive.

Object key name prefixes also enable you to categorize storage; however, prefix-based categorization is
one-dimensional. Consider the following object key names:

photos/photo1.jpg
project/projectx/document.pdf
project/projecty/document2.pdf

These key names have the prefixes photos/, project/projectx/, and project/projecty/. These
prefixes enable one-dimensional categorization. That is, everything under a prefix is one category. For
example, the prefix project/projectx identifies all documents related to project x.

With tagging, you now have another dimension. If you want photo1 in the project x category, you can
tag the object accordingly. In addition to data classification, tagging offers other benefits. For example:

• Object tags enable fine-grained access control of permissions. For example, you could grant an IAM
user permissions to read only objects with specific tags.
• Object tags enable fine-grained object lifecycle management in which you can specify tag-based filter,
in addition to key name prefix, in a lifecycle rule.
• When using Amazon S3 analytics, you can configure filters to group objects together for analysis by
object tags, by key name prefix, or by both prefix and tags.


• You can also customize Amazon CloudWatch metrics to display information by specific tag filters.

The following sections provide details.

Important
While it is acceptable to use tags to label objects containing confidential data (such as,
personally identifiable information (PII) or protected health information (PHI)), the tags
themselves shouldn't contain any confidential information.

API Operations Related to Object Tagging


Amazon S3 supports the following API operations that are specifically for object tagging:

Object API Operations

• PUT Object tagging – Replaces tags on an object. You specify tags in the request body. There are two
distinct scenarios of object tag management using this API.
• Object has no tags – Using this API you can add a set of tags to an object (the object has no prior
tags).
• Object has a set of existing tags – To modify the existing tag set, you must first retrieve the existing
tag set, modify it on the client side, and then use this API to replace the tag set. If you send this
request with an empty tag set, Amazon S3 deletes the existing tag set on the object.

 
• GET Object tagging – Returns the tag set associated with an object. Amazon S3 returns object tags in
the response body.

 
• DELETE Object tagging – Deletes the tag set associated with an object.

Other API Operations that Support Tagging

• PUT Object and Initiate Multipart Upload– You can specify tags when you create objects. You specify
tags using the x-amz-tagging request header.

 
• GET Object – Instead of returning the tag set, Amazon S3 returns the object tag count in the x-amz-
tag-count header (only if the requester has permissions to read tags) because the header response
size is limited to 8 KB. If you want to view the tags, you make another request for the GET Object
tagging API operation.

 
• POST Object – You can specify tags in your POST request.

As long as the tags in your request don't exceed the 8 KB HTTP request header size limit, you can
use the PUT Object API to create objects with tags. If the tags you specify exceed the header size
limit, you can use this POST method in which you include the tags in the body.

 
• PUT Object - Copy – You can specify the x-amz-tagging-directive in your request to direct
Amazon S3 to either copy (default behavior) the tags or replace tags by a new set of tags provided in
the request.

Note the following:



• Tagging follows the eventual consistency model. That is, soon after adding tags to an object, if you
try to retrieve the tags, you might get the old tags (if any) on the object. However, a subsequent call
will likely provide the updated tags.

Object Tagging and Additional Information


This section explains how object tagging relates to other configurations.

Object Tagging and Lifecycle Management


In bucket lifecycle configuration, you can specify a filter to select a subset of objects to which the rule
applies. You can specify a filter based on the key name prefixes, object tags, or both.

Suppose you store photos (raw and the finished format) in your Amazon S3 bucket. You might tag these
objects as shown following:

phototype=raw
or
phototype=finished

You might consider archiving the raw photos to Glacier sometime after they are created. You can
configure a lifecycle rule with a filter that identifies the subset of objects with the key name prefix
(photos/) that have a specific tag (phototype=raw).

For more information, see Object Lifecycle Management (p. 115).

Object Tagging and Cross-Region Replication (CRR)


If you configured cross-region replication (CRR) on your bucket, Amazon S3 replicates tags, provided you
grant S3 permission to read the tags. For more information, see Overview of Setting Up CRR (p. 566).

Object Tagging and Access Control Policies


You can also use permissions policies (bucket and user policies) to manage permissions related to object
tagging. For policy actions see the following topics:

• Permissions for Object Operations (p. 313)


• Permissions Related to Bucket Operations (p. 314)

Object tags enable fine-grained access control for managing permissions. You can grant conditional
permissions based on object tags. Amazon S3 supports the following condition keys that you can use to
grant conditional permissions based on object tags:

• s3:ExistingObjectTag/<tag-key> – Use this condition key to verify that an existing object tag
has the specific tag key and value.

 
Note
When granting permissions for the PUT Object and DELETE Object operations, this
condition key is not supported. That is, you cannot create a policy to grant or deny a user
permissions to delete or overwrite an object based on its existing tags.


• s3:RequestObjectTagKeys – Use this condition key to restrict the tag keys that you want to allow
on objects. This is useful when adding tags to objects using the PutObjectTagging and PutObject, and
POST object requests.

 
• s3:RequestObjectTag/<tag-key> – Use this condition key to restrict the tag keys and values that
you want to allow on objects. This is useful when adding tags to objects using the PutObjectTagging
and PutObject, and POST Bucket requests.

For a complete list of Amazon S3 service-specific condition keys, see Available Condition Keys (p. 318).
The following permissions policies illustrate how object tagging enables fine grained access permissions
management.

Example 1: Allow a User to Read Only the Objects that Have a Specific Tag
The following permissions policy grants a user permission to read objects, but the condition limits the
read permission to only objects that have the following specific tag key and value:

security : public

Note that the policy uses the Amazon S3 condition key, s3:ExistingObjectTag/<tag-key> to
specify the key and value.

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::examplebucket/*"
],
"Condition": {
"StringEquals": {
"s3:ExistingObjectTag/security": "public"
}
}
}
]
}

Example 2: Allow a User to Add Object Tags with Restrictions on the Allowed Tag Keys
The following permissions policy grants a user permissions to perform the s3:PutObjectTagging
action, which allows the user to add tags to an existing object. The condition limits the tag keys that the user
is allowed to use. The condition uses the s3:RequestObjectTagKeys condition key to specify the set
of tag keys.

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObjectTagging"
],
"Resource": [
"arn:aws:s3:::examplebucket/*"

API Version 2006-03-01


110
Amazon Simple Storage Service Developer Guide
Object Tagging and Additional Information

],
"Condition": {
"ForAllValues:StringLike": {
"s3:RequestObjectTagKeys": [
"Owner",
"CreationDate"
]
}
}
}
]
}

The policy ensures that the tag set, if specified in the request, has the specified keys. A user might send
an empty tag set in PutObjectTagging, which is allowed by this policy (an empty tag set in the request
removes any existing tags on the object). If you want to prevent a user from removing the tag set, you
can add another condition to ensure that the user provides at least one value. The ForAnyValue qualifier in
the condition ensures that at least one of the specified values is present in the request.

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObjectTagging"
],
"Resource": [
"arn:aws:s3:::examplebucket/*"
],
"Condition": {
"ForAllValues:StringLike": {
"s3:RequestObjectTagKeys": [
"Owner",
"CreationDate"
]
},
"ForAnyValue:StringLike": {
"s3:RequestObjectTagKeys": [
"Owner",
"CreationDate"
]
}
}
}
]
}

For more information, see Creating a Condition That Tests Multiple Key Values (Set Operations) in the
IAM User Guide.

Example 3: Allow a User to Add Object Tags that Include a Specific Tag Key and Value

The following user policy grants a user permissions to perform the s3:PutObjectTagging action,
which allows the user to add tags to an existing object. The condition requires the user to include a
specific tag (Project) with the value set to X.

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",

API Version 2006-03-01


111
Amazon Simple Storage Service Developer Guide
Managing Object Tags

"Action": [
"s3:PutObjectTagging"
],
"Resource": [
"arn:aws:s3:::examplebucket/*"
],
"Condition": {
"StringEquals": {
"s3:RequestObjectTag/Project": "X"
}
}
}
]
}

Related Topics

Managing Object Tags (p. 112)

Managing Object Tags


This section explains how you can add object tags programmatically using the AWS SDK for Java or the
Amazon S3 console.

Topics
• Managing Object Tags the Console (p. 112)
• Managing Tags Using the AWS SDK for Java (p. 112)
• Managing Tags Using the AWS SDK for .NET (p. 113)

Managing Object Tags the Console


You can use the Amazon S3 console to add tags to new objects when you upload them or you can add
them to existing objects. For instructions on how to add tags to objects using the Amazon S3 console,
see Adding Object Tags in the Amazon Simple Storage Service Console User Guide.

Managing Tags Using the AWS SDK for Java


The following example shows how to use the AWS SDK for Java to set tags for a new object and
retrieve or replace tags for an existing object. For more information about object tagging, see Object
Tagging (p. 107). For instructions on creating and testing a working sample, see Testing the Amazon S3
Java Code Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.File;
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

public class ManagingObjectTags {


public static void main(String[] args) {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";
String keyName = "*** Object key ***";
String filePath = "*** File path ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Create an object, add two new tags, and upload the object to Amazon S3.
PutObjectRequest putRequest = new PutObjectRequest(bucketName, keyName, new
File(filePath));
List<Tag> tags = new ArrayList<Tag>();
tags.add(new Tag("Tag 1", "This is tag 1"));
tags.add(new Tag("Tag 2", "This is tag 2"));
putRequest.setTagging(new ObjectTagging(tags));
PutObjectResult putResult = s3Client.putObject(putRequest);

// Retrieve the object's tags.


GetObjectTaggingRequest getTaggingRequest = new
GetObjectTaggingRequest(bucketName, keyName);
GetObjectTaggingResult getTagsResult =
s3Client.getObjectTagging(getTaggingRequest);

// Replace the object's tags with two new tags.


List<Tag> newTags = new ArrayList<Tag>();
newTags.add(new Tag("Tag 3", "This is tag 3"));
newTags.add(new Tag("Tag 4", "This is tag 4"));
s3Client.setObjectTagging(new SetObjectTaggingRequest(bucketName, keyName, new
ObjectTagging(newTags)));
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Managing Tags Using the AWS SDK for .NET


The following example shows how to use the AWS SDK for .NET to set the tags for a new object and
retrieve or replace the tags for an existing object. For more information about object tagging, see Object
Tagging (p. 107).

For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Model;
using System;


using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
public class ObjectTagsTest
{
private const string bucketName = "*** bucket name ***";
private const string keyName = "*** key name for the new object ***";
private const string filePath = @"*** file path ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;

public static void Main()


{
client = new AmazonS3Client(bucketRegion);
PutObjectWithTagsTestAsync().Wait();
}

static async Task PutObjectWithTagsTestAsync()


{
try
{
// 1. Put an object with tags.
var putRequest = new PutObjectRequest
{
BucketName = bucketName,
Key = keyName,
FilePath = filePath,
TagSet = new List<Tag>{
new Tag { Key = "Keyx1", Value = "Value1"},
new Tag { Key = "Keyx2", Value = "Value2" }
}
};

PutObjectResponse response = await client.PutObjectAsync(putRequest);


// 2. Retrieve the object's tags.
GetObjectTaggingRequest getTagsRequest = new GetObjectTaggingRequest
{
BucketName = bucketName,
Key = keyName
};

GetObjectTaggingResponse objectTags = await


client.GetObjectTaggingAsync(getTagsRequest);
for (int i = 0; i < objectTags.Tagging.Count; i++)
Console.WriteLine("Key: {0}, Value: {1}", objectTags.Tagging[i].Key,
objectTags.Tagging[i].Value);

// 3. Replace the tagset.

Tagging newTagSet = new Tagging();


newTagSet.TagSet = new List<Tag>{
new Tag { Key = "Key3", Value = "Value3"},
new Tag { Key = "Key4", Value = "Value4" }
};

PutObjectTaggingRequest putObjTagsRequest = new PutObjectTaggingRequest()


{
BucketName = bucketName,
Key = keyName,
Tagging = newTagSet
};


PutObjectTaggingResponse response2 = await


client.PutObjectTaggingAsync(putObjTagsRequest);

// 4. Retrieve the object's tags.


GetObjectTaggingRequest getTagsRequest2 = new GetObjectTaggingRequest();
getTagsRequest2.BucketName = bucketName;
getTagsRequest2.Key = keyName;
GetObjectTaggingResponse objectTags2 = await
client.GetObjectTaggingAsync(getTagsRequest2);
for (int i = 0; i < objectTags2.Tagging.Count; i++)
Console.WriteLine("Key: {0}, Value: {1}", objectTags2.Tagging[i].Key,
objectTags2.Tagging[i].Value);

}
catch (AmazonS3Exception e)
{
Console.WriteLine(
"Error encountered ***. Message:'{0}' when writing an object"
, e.Message);
}
catch (Exception e)
{
Console.WriteLine(
"Encountered an error. Message:'{0}' when writing an object"
, e.Message);
}
}
}
}

Object Lifecycle Management


To manage your objects so that they are stored cost effectively throughout their lifecycle, configure their
lifecycle. A lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group
of objects. There are two types of actions:

• Transition actions—Define when objects transition to another storage class. For example, you might
choose to transition objects to the STANDARD_IA storage class 30 days after you created them, or
archive objects to the GLACIER storage class one year after creating them.

There are costs associated with the lifecycle transition requests. For pricing information, see Amazon
S3 Pricing.

 
• Expiration actions—Define when objects expire. Amazon S3 deletes expired objects on your behalf.

The lifecycle expiration costs depend on when you choose to expire objects. For more information, see
Configuring Object Expiration (p. 121).

For more information about lifecycle rules, see Lifecycle Configuration Elements (p. 122).

When Should I Use Lifecycle Configuration?


Define lifecycle configuration rules for objects that have a well-defined lifecycle. For example:


• If you upload periodic logs to a bucket, your application might need them for a week or a month. After
that, you might want to delete them.
• Some documents are frequently accessed for a limited period of time. After that, they are infrequently
accessed. At some point, you might not need real-time access to them, but your organization or
regulations might require you to archive them for a specific period. After that, you can delete them.
• You might upload some types of data to Amazon S3 primarily for archival purposes. For example, you
might archive digital media, financial and healthcare records, raw genomics sequence data, long-term
database backups, and data that must be retained for regulatory compliance.

With lifecycle configuration rules, you can tell Amazon S3 to transition objects to less expensive storage
classes, or archive or delete them.

How Do I Configure a Lifecycle?


A lifecycle configuration, an XML file, comprises a set of rules with predefined actions that you want
Amazon S3 to perform on objects during their lifetime.

Amazon S3 provides a set of API operations for managing lifecycle configuration on a bucket. Amazon
S3 stores the configuration as a lifecycle subresource that is attached to your bucket. For details, see the
following:

PUT Bucket lifecycle

GET Bucket lifecycle

DELETE Bucket lifecycle

You can also configure the lifecycle by using the Amazon S3 console or programmatically by using
the AWS SDK wrapper libraries. If you need to, you can also make the REST API calls directly. For more
information, see Setting Lifecycle Configuration on a Bucket (p. 138).
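
As an illustration (a sketch under the assumption of the AWS SDK for Java; not taken from this guide), the following adds a single rule that transitions objects under the logs/ prefix to STANDARD_IA after 30 days and expires them after 365 days. The bucket name and the s3Client variable are placeholders.

// Sketch only: assumes an existing AmazonS3 client named s3Client and imports from
// com.amazonaws.services.s3.model.*, com.amazonaws.services.s3.model.lifecycle.*, and java.util.Arrays.
BucketLifecycleConfiguration.Rule rule = new BucketLifecycleConfiguration.Rule()
        .withId("Archive and expire logs")
        .withFilter(new LifecycleFilter(new LifecyclePrefixPredicate("logs/")))
        .addTransition(new BucketLifecycleConfiguration.Transition()
                .withDays(30)
                .withStorageClass(StorageClass.StandardInfrequentAccess))
        .withExpirationInDays(365)
        .withStatus(BucketLifecycleConfiguration.ENABLED);

s3Client.setBucketLifecycleConfiguration("examplebucket",
        new BucketLifecycleConfiguration().withRules(Arrays.asList(rule)));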

For more information, see the following topics:

• Additional Considerations for Lifecycle Configuration (p. 116)


• Lifecycle Configuration Elements (p. 122)
• Examples of Lifecycle Configuration (p. 128)
• Setting Lifecycle Configuration on a Bucket (p. 138)

Additional Considerations for Lifecycle Configuration


When configuring the lifecycle of objects, you need to understand the following guidelines for
transitioning objects, setting expiration dates, and other object configurations.

Topics
• Transitioning Objects (p. 116)
• Configuring Object Expiration (p. 121)
• Lifecycle and Other Bucket Configurations (p. 121)

Transitioning Objects
You can add rules in a lifecycle configuration to tell Amazon S3 to transition objects to another Amazon
S3 storage class. For example:


• When you know objects are infrequently accessed, you might transition them to the STANDARD_IA
storage class.
• You might want to archive objects that you don't need to access in real time to the GLACIER storage
class.

The following sections describe supported transitions, related constraints, and transitioning to the
GLACIER storage class.

Supported Transitions and Related Constraints


In a lifecycle configuration, you can define rules to transition objects from one storage class to another
to save on storage costs. When you don't know the access patterns of your objects, or your access
patterns are changing over time, you can transition the objects to the INTELLIGENT_TIERING storage
class for automatic cost savings. For information about storage classes, see Storage Classes (p. 100).

Amazon S3 supports a waterfall model for transitioning between storage classes, as shown in the
following diagram.

Note
The diagram does not mention the REDUCED_REDUNDANCY storage class because we don't
recommend using it.

Amazon S3 supports the following lifecycle transitions between storage classes using a lifecycle
configuration:

• You can transition from the STANDARD storage class to any other storage class.
• You can transition from any storage class to the GLACIER storage class.
• You can transition from the STANDARD_IA storage class to the INTELLIGENT_TIERING or ONEZONE_IA
storage classes.
• You can transition from the INTELLIGENT_TIERING storage class to the ONEZONE_IA storage class.

The following lifecycle transitions are not supported:


• You can't transition from any storage class to the STANDARD storage class.
• You can't transition from any storage class to the REDUCED_REDUNDANCY storage class.
• You can't transition from the INTELLIGENT_TIERING storage class to the STANDARD_IA storage class.
• You can't transition from the ONEZONE_IA storage class to the STANDARD_IA or
INTELLIGENT_TIERING storage classes.
• You can't transition from the GLACIER storage class to any other storage class.

The lifecycle storage class transitions have the following constraints:

• From the STANDARD or STANDARD_IA storage class to INTELLIGENT_TIERING. The following


constraints apply:
• For larger objects, there is a cost benefit for transitioning to INTELLIGENT_TIERING. Amazon S3
does not transition objects that are smaller than 128 KB to the INTELLIGENT_TIERING storage class
because it's not cost effective.

 
• From the STANDARD storage classes to STANDARD_IA or ONEZONE_IA. The following constraints
apply:

 
• For larger objects, there is a cost benefit for transitioning to STANDARD_IA or ONEZONE_IA. Amazon
S3 does not transition objects that are smaller than 128 KB to the STANDARD_IA or ONEZONE_IA
storage classes because it's not cost effective.

 
• Objects must be stored at least 30 days in the current storage class before you can transition them
to STANDARD_IA or ONEZONE_IA. For example, you cannot create a lifecycle rule to transition
objects to the STANDARD_IA storage class one day after you create them.

Amazon S3 doesn't transition objects within the first 30 days because newer objects are often
accessed more frequently or deleted sooner than is suitable for STANDARD_IA or ONEZONE_IA
storage.

 
• If you are transitioning noncurrent objects (in versioned buckets), you can transition only objects
that are at least 30 days noncurrent to STANDARD_IA or ONEZONE_IA storage.

 
• From the STANDARD_IA storage class to ONEZONE_IA. The following constraints apply:
• Objects must be stored at least 30 days in the STANDARD_IA storage class before you can transition
them to the ONEZONE_IA class.

You can combine these lifecycle actions to manage an object's complete lifecycle. For example, suppose
that the objects you create have a well-defined lifecycle. Initially, the objects are frequently accessed for
a period of 30 days. Then, objects are infrequently accessed for up to 90 days. After that, the objects are
no longer needed, so you might choose to archive or delete them.

In this scenario, you can create a lifecycle rule in which you specify the initial transition action to
INTELLIGENT_TIERING, STANDARD_IA, or ONEZONE_IA storage, another transition action to GLACIER
storage for archiving, and an expiration action. As you move the objects from one storage class to
another, you save on storage cost. For more information about cost considerations, see Amazon S3
Pricing.
Important
You can't specify a single lifecycle rule for both INTELLIGENT_TIERING (or STANDARD_IA or
ONEZONE_IA) and GLACIER transitions when the GLACIER transition occurs less than 30 days
after the INTELLIGENT_TIERING, STANDARD_IA, or ONEZONE_IA transition. That's because there
is a minimum 30-day storage charge associated with the INTELLIGENT_TIERING, STANDARD_IA,
and ONEZONE_IA storage classes.
The same 30-day minimum applies when you specify a transition from STANDARD_IA storage
to ONEZONE_IA or INTELLIGENT_TIERING storage. You can specify two rules to accomplish this,
but you pay minimum storage charges. For more information about cost considerations, see
Amazon S3 Pricing.

Transitioning to the GLACIER Storage Class (Object Archival)


Using lifecycle configuration, you can transition objects to the GLACIER storage class—that is, archive
data to Glacier, a lower-cost storage solution.
Important
When you choose the GLACIER storage class, Amazon S3 uses the low-cost Glacier service to
store the objects. Although the objects are stored in Glacier, these remain Amazon S3 objects
that you manage in Amazon S3, and you cannot access them directly through Glacier.

Before you archive objects, review the following sections for relevant considerations.

General Considerations

The following are general considerations to keep in mind before you archive objects:

• Encrypted objects remain encrypted throughout the storage class transition process.

 
• Objects in the GLACIER storage class are not available in real time.

Archived objects are Amazon S3 objects, but before you can access an archived object, you must first
restore a temporary copy of it. The restored object copy is available only for the duration you specify
in the restore request. After that, Amazon S3 deletes the temporary copy, and the object remains
archived in Glacier.

You can restore an object by using the Amazon S3 console or programmatically by using the AWS
SDKs wrapper libraries or the Amazon S3 REST API in your code. For more information, see Restoring
Archived Objects (p. 246).

 
• The transition of objects to the GLACIER storage class is one-way.

You cannot use a lifecycle configuration rule to convert the storage class of an object from GLACIER
to any other storage class. If you want to change the storage class of an archived object to another
storage class, you must use the restore operation to make a temporary copy of the object first.
Then use the copy operation to overwrite the object specifying STANDARD, INTELLIGENT_TIERING,
STANDARD_IA, ONEZONE_IA, or REDUCED_REDUNDANCY as the storage class.


• The GLACIER storage class objects are visible and available only through Amazon S3, not through
Glacier.

Amazon S3 stores the archived objects in Glacier. However, these are Amazon S3 objects, and you
can access them only by using the Amazon S3 console or the Amazon S3 API. You cannot access the
archived objects through the Glacier console or the Glacier API.

Cost Considerations

If you are planning to archive infrequently accessed data for a period of months or years, the GLACIER
storage class will usually reduce your storage costs. You should, however, consider the following in order
to ensure that the GLACIER storage class is appropriate for you:

• Storage overhead charges – When you transition objects to the GLACIER storage class, a fixed amount
of storage is added to each object to accommodate metadata for managing the object.

 
• For each object archived to Glacier, Amazon S3 uses 8 KB of storage for the name of the object and
other metadata. Amazon S3 stores this metadata so that you can get a real-time list of your archived
objects by using the Amazon S3 API. For more information, see Get Bucket (List Objects). You are
charged standard Amazon S3 rates for this additional storage.

 
• For each archived object, Glacier adds 32 KB of storage for index and related metadata. This extra
data is necessary to identify and restore your object. You are charged Glacier rates for this additional
storage.

If you are archiving small objects, consider these storage charges. Also consider aggregating many
small objects into a smaller number of large objects to reduce overhead costs.

 
• Number of days you plan to keep objects archived—Glacier is a long-term archival solution. Deleting
data that is archived to Glacier is free if the objects you delete are archived for three months or longer.
If you delete or overwrite an object within three months of archiving it, Amazon S3 charges a prorated
early deletion fee.

 
• Glacier archive request charges— Each object that you transition to the GLACIER storage class
constitutes one archive request. There is a cost for each such request. If you plan to transition a large
number of objects, consider the request costs.

 
• Glacier data restore charges—Glacier is designed for long-term archival of data that you will access
infrequently. For information on data restoration charges, see How much does it cost to retrieve
data from Glacier? in the Amazon S3 FAQ. For information on how to restore data from Glacier, see
Restoring Archived Objects (p. 246).

When you archive objects to Glacier by using object lifecycle management, Amazon S3 transitions these
objects asynchronously. There might be a delay between the transition date in the lifecycle configuration
rule and the date of the physical transition. You are charged Glacier prices based on the transition date
specified in the rule.

The Amazon S3 product detail page provides pricing information and example calculations for archiving
Amazon S3 objects. For more information, see the following topics:

• How is my storage charge calculated for Amazon S3 objects archived to Glacier?


• How am I charged for deleting objects from Glacier that are less than 3 months old?
• How much does it cost to retrieve data from Glacier?
• Amazon S3 Pricing for storage costs for the Standard and GLACIER storage classes.

Restoring Archived Objects

Archived objects are not accessible in real time. You must first initiate a restore request and then wait
until a temporary copy of the object is available for the duration that you specify in the request. After
you receive a temporary copy of the restored object, the object's storage class remains GLACIER (a GET or
HEAD request will return GLACIER as the storage class).
Note
When you restore an archive, you are paying for both the archive (GLACIER rate) and a copy you
restored temporarily (REDUCED_REDUNDANCY storage rate). For information about pricing, see
Amazon S3 Pricing.

You can restore an object copy programmatically or by using the Amazon S3 console. Amazon S3
processes only one restore request at a time per object. For more information, see Restoring Archived
Objects (p. 246).
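
As a minimal sketch (not taken from this guide), a restore request might look like the following with the AWS SDK for Java. The bucket and key names are placeholders, and an existing AmazonS3 client named s3Client is assumed.

// Sketch only: assumes an existing AmazonS3 client named s3Client.
// Ask Amazon S3 to make a temporary copy that stays available for 2 days.
RestoreObjectRequest restoreRequest =
        new RestoreObjectRequest("examplebucket", "archive/backup.tar", 2);
s3Client.restoreObject(restoreRequest);

// Poll the object's metadata; getOngoingRestore() reports whether the restore is still in progress.
ObjectMetadata metadata = s3Client.getObjectMetadata("examplebucket", "archive/backup.tar");
System.out.println("Restore in progress: " + metadata.getOngoingRestore());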

Configuring Object Expiration


When an object reaches the end of its lifetime, Amazon S3 queues it for removal and removes it
asynchronously. There may be a delay between the expiration date and the date at which Amazon S3
removes an object. You are not charged for storage time associated with an object that has expired.

To find when an object is scheduled to expire, use the HEAD Object or the GET Object API operations.
These API operations return response headers that provide this information.
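
With the AWS SDK for Java, for example, the expiration information surfaced in those response headers can be read from the object metadata (a sketch with placeholder bucket and key names, assuming an existing AmazonS3 client named s3Client):

// Sketch only: assumes an existing AmazonS3 client named s3Client.
ObjectMetadata metadata = s3Client.getObjectMetadata("examplebucket", "logs/app.log");
System.out.println("Scheduled expiration: " + metadata.getExpirationTime());
System.out.println("Matching lifecycle rule: " + metadata.getExpirationTimeRuleId());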

If you create a lifecycle expiration rule that causes objects that have been in INTELLIGENT_TIERING,
STANDARD_IA, or ONEZONE_IA storage for less than 30 days to expire, you are charged for 30 days. If
you create a lifecycle expiration rule that causes objects that have been in GLACIER storage for less than
90 days to expire, you are charged for 90 days. For more information, see Amazon S3 Pricing.

Lifecycle and Other Bucket Configurations


In addition to lifecycle configurations, you can associate other configurations with your bucket. This
section explains how lifecycle configuration relates to other bucket configurations.

Lifecycle and Versioning


You can add lifecycle configurations to unversioned buckets and versioning-enabled buckets. For more
information, see Object Versioning (p. 104).

A versioning-enabled bucket maintains one current object version, and zero or more noncurrent object
versions. You can define separate lifecycle rules for current and noncurrent object versions.

For more information, see Lifecycle Configuration Elements (p. 122). For information about versioning,
see Object Versioning (p. 104).

Lifecycle Configuration on MFA-enabled Buckets


Lifecycle configuration on MFA-enabled buckets is not supported.


Lifecycle and Logging


If you have logging enabled on your bucket, Amazon S3 reports the results of an expiration action as
follows:

• If the lifecycle expiration action results in Amazon S3 permanently removing the object, Amazon S3
reports it as an S3.EXPIRE.OBJECT operation in the log record.
• For a versioning-enabled bucket, if the lifecycle expiration action results in a logical deletion of the
current version, in which Amazon S3 adds a delete marker, Amazon S3 reports the logical deletion
as an S3.CREATE.DELETEMARKER operation in the log record. For more information, see Object
Versioning (p. 104).
• When Amazon S3 transitions an object to the GLACIER storage class, it reports it as an operation
S3.TRANSITION.OBJECT in the log record to indicate it has initiated the operation. When the
object is transitioned to the STANDARD_IA (or ONEZONE_IA) storage class, it is reported as an
S3.TRANSITION_SIA.OBJECT (or S3.TRANSITION_ZIA.OBJECT) operation.

More Info

• Lifecycle Configuration Elements (p. 122)


• Transitioning to the GLACIER Storage Class (Object Archival) (p. 119)
• Setting Lifecycle Configuration on a Bucket (p. 138)

Lifecycle Configuration Elements


Topics
• ID Element (p. 123)
• Status Element (p. 123)
• Filter Element (p. 123)
• Elements to Describe Lifecycle Actions (p. 125)

You specify a lifecycle configuration as XML, consisting of one or more lifecycle rules.

<LifecycleConfiguration>
<Rule>
...
</Rule>
<Rule>
...
</Rule>
</LifecycleConfiguration>

Each rule consists of the following:

• Rule metadata that includes a rule ID and a status indicating whether the rule is enabled or disabled. If a
rule is disabled, Amazon S3 doesn't perform any of the actions specified in the rule.
• Filter identifying objects to which the rule applies. You can specify a filter by using an object key prefix,
one or more object tags, or both.
• One or more transition or expiration actions with a date or a time period in the object's lifetime when
you want Amazon S3 to perform the specified action.

The following sections describe the XML elements in a lifecycle configuration. For example lifecycle
configurations, see Examples of Lifecycle Configuration (p. 128).


ID Element
A lifecycle configuration can have up to 1,000 rules. The <ID> element uniquely identifies a rule. ID
length is limited to 255 characters.

Status Element
The <Status> element value can be either Enabled or Disabled. If a rule is disabled, Amazon S3 doesn't
perform any of the actions defined in the rule.

Filter Element
A lifecycle rule can apply to all or a subset of objects in a bucket based on the <Filter> element that you
specify in the lifecycle rule.

You can filter objects by key prefix, object tags, or a combination of both (in which case Amazon S3 uses
a logical AND to combine the filters). Consider the following examples:

• Specifying a filter using key prefixes – This example shows a lifecycle rule that applies to a subset
of objects based on the key name prefix (logs/). For example, the lifecycle rule applies to objects
logs/mylog.txt, logs/temp1.txt, and logs/test.txt. The rule does not apply to the object
example.jpg.

<LifecycleConfiguration>
<Rule>
<Filter>
<Prefix>logs/</Prefix>
</Filter>
transition/expiration actions.
...
</Rule>
...
</LifecycleConfiguration>

If you want to apply a lifecycle action to a subset of objects based on different key name prefixes,
specify separate rules. In each rule, specify a prefix-based filter. For example, to describe a lifecycle
action for objects with key prefixes projectA/ and projectB/, you specify two rules as shown
following:

<LifecycleConfiguration>
<Rule>
<Filter>
<Prefix>projectA/</Prefix>
</Filter>
transition/expiration actions.
...
</Rule>

<Rule>
<Filter>
<Prefix>projectB/</Prefix>
</Filter>
transition/expiration actions.
...
</Rule>
</LifecycleConfiguration>

For more information about object keys, see Object Keys (p. 96).


• Specifying a filter based on object tags – In the following example, the lifecycle rule specifies a filter
based on a tag (key) and value (value). The rule then applies only to a subset of objects with the
specific tag.

<LifecycleConfiguration>
<Rule>
<Filter>
<Tag>
<Key>key</Key>
<Value>value</Value>
</Tag>
</Filter>
transition/expiration actions.
...
</Rule>
</LifecycleConfiguration>

You can specify a filter based on multiple tags. You must wrap the tags in the <And> element, as shown
in the following example. The rule directs Amazon S3 to perform lifecycle actions on objects that have two
tags (with the specific tag key and value).

<LifecycleConfiguration>
<Rule>
<Filter>
<And>
<Tag>
<Key>key1</Key>
<Value>value1</Value>
</Tag>
<Tag>
<Key>key2</Key>
<Value>value2</Value>
</Tag>
...
</And>
</Filter>
transition/expiration actions.
</Rule>
</LifecycleConfiguration>

The lifecycle rule applies to objects that have both of the tags specified. Amazon S3 performs a logical
AND. Note the following:
• Each tag must match both key and value exactly.
• The rule applies only to objects that have all of the tags specified in the rule. If an object has
additional tags, the rule still applies.
Note
When you specify multiple tags in a filter, each tag key must be unique.
• Specifying a filter based on both prefix and one or more tags – In a lifecycle rule, you can specify
a filter based on both the key prefix and one or more tags. Again, you must wrap all of these in the
<And> element as shown following:

<LifecycleConfiguration>
<Rule>
<Filter>
<And>
<Prefix>key-prefix</Prefix>
<Tag>
<Key>key1</Key>
<Value>value1</Value>
</Tag>
<Tag>
<Key>key2</Key>
<Value>value2</Value>
</Tag>
...
</And>
</Filter>
<Status>Enabled</Status>
transition/expiration actions.
</Rule>
</LifecycleConfiguration>

Amazon S3 combines these filters using a logical AND. That is, the rule applies to the subset of objects
that have the specified key prefix and the specified tags. A filter can have only one prefix and zero or more tags.
• You can specify an empty filter, in which case the rule applies to all objects in the bucket.

<LifecycleConfiguration>
<Rule>
<Filter>
</Filter>
<Status>Enabled</Status>
transition/expiration actions.
</Rule>
</LifecycleConfiguration>

Elements to Describe Lifecycle Actions


You can direct Amazon S3 to perform specific actions in an object's lifetime by specifying one or more of
the following predefined actions in a lifecycle rule. The effect of these actions depends on the versioning
state of your bucket.

• Transition action element – You specify the Transition action to transition objects from one storage
class to another. For more information about transitioning objects, see Supported Transitions and
Related Constraints (p. 117). When a specified date or time period in the object's lifetime is reached,
Amazon S3 performs the transition.

For a versioned bucket (versioning-enabled or versioning-suspended bucket), the Transition


action applies to the current object version. To manage noncurrent versions, Amazon S3 defines the
NoncurrentVersionTransition action (described below).

• Expiration action element – The Expiration action expires objects identified in the rule and applies
to eligible objects in any of the Amazon S3 storage classes. For more information about storage
classes, see Storage Classes (p. 100). Amazon S3 makes all expired objects unavailable. Whether the
objects are permanently removed depends on the versioning state of the bucket.
Important
Object expiration lifecycle policies do not remove incomplete multipart uploads. To remove
incomplete multipart uploads, you must use the AbortIncompleteMultipartUpload lifecycle
configuration action that is described later in this section.
• Non-versioned bucket – The Expiration action results in Amazon S3 permanently removing the
object.
• Versioned bucket – For a versioned bucket (that is, versioning-enabled or versioning-suspended),
there are several considerations that guide how Amazon S3 handles the expiration action. For
more information, see Using Versioning (p. 425). Regardless of the versioning state, the following
applies:

• The Expiration action applies only to the current version (it has no impact on noncurrent object
versions).
• Amazon S3 doesn't take any action if there are one or more object versions and the delete marker
is the current version.
• If the current object version is the only object version and it is also a delete marker (also referred
to as an expired object delete marker, where all object versions are deleted and only a delete
marker remains), Amazon S3 removes the expired object delete marker. You can also use
the expiration action to direct Amazon S3 to remove any expired object delete markers. For an
example, see Example 7: Removing Expired Object Delete Markers (p. 136).

Also consider the following when setting up Amazon S3 to manage expiration:


• Versioning-enabled bucket

If the current object version is not a delete marker, Amazon S3 adds a delete marker with a unique
version ID. This makes the current version noncurrent, and the delete marker the current version.
• Versioning-suspended bucket

In a versioning-suspended bucket, the expiration action causes Amazon S3 to create a delete


marker with null as the version ID. This delete marker replaces any object version with a null
version ID in the version hierarchy, which effectively deletes the object.

In addition, Amazon S3 provides the following actions that you can use to manage noncurrent object
versions in a versioned bucket (that is, versioning-enabled and versioning-suspended buckets).

• NoncurrentVersionTransition action element – Use this action to specify how long (from the time
the objects became noncurrent) you want the objects to remain in the current storage class before
Amazon S3 transitions them to the specified storage class. For more information about transitioning
objects, see Supported Transitions and Related Constraints (p. 117).
• NoncurrentVersionExpiration action element – Use this action to specify how long (from the time
the objects became noncurrent) you want to retain noncurrent object versions before Amazon S3
permanently removes them. The deleted object can't be recovered.

This delayed removal of noncurrent objects can be helpful when you need to correct any accidental
deletes or overwrites. For example, you can configure an expiration rule to delete noncurrent versions
five days after they become noncurrent. For example, suppose that on 1/1/2014 10:30 AM UTC, you
create an object called photo.gif (version ID 111111). On 1/2/2014 11:30 AM UTC, you accidentally
delete photo.gif (version ID 111111), which creates a delete marker with a new version ID (such as
version ID 4857693). You now have five days to recover the original version of photo.gif (version ID
111111) before the deletion is permanent. On 1/8/2014 00:00 UTC, the lifecycle rule for expiration
executes and permanently deletes photo.gif (version ID 111111), five days after it became a
noncurrent version.
Important
Object expiration lifecycle policies do not remove incomplete multipart uploads. To remove
incomplete multipart uploads, you must use the AbortIncompleteMultipartUpload lifecycle
configuration action that is described later in this section.

In addition to the transition and expiration actions, you can use the following lifecycle configuration
action to direct Amazon S3 to abort incomplete multipart uploads.

• AbortIncompleteMultipartUpload action element – Use this element to set a maximum time (in days)
that you want to allow multipart uploads to remain in progress. If the applicable multipart uploads
(determined by the key name prefix specified in the lifecycle rule) are not successfully completed
within the predefined time period, Amazon S3 aborts the incomplete multipart uploads. For more
information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy (p. 173).

Note
You cannot specify this lifecycle action in a rule that specifies a filter based on object tags.
• ExpiredObjectDeleteMarker action element – In a versioning-enabled bucket, a delete marker with
zero noncurrent versions is referred to as the expired object delete marker. You can use this lifecycle
action to direct S3 to remove the expired object delete markers. For an example, see Example 7:
Removing Expired Object Delete Markers (p. 136).
Note
You cannot specify this lifecycle action in a rule that specifies a filter based on object tags.

How Amazon S3 Calculates How Long an Object Has Been Noncurrent


In a versioning-enabled bucket, you can have multiple versions of an object. There is always one current
version and zero or more noncurrent versions. Each time you upload an object, the previous current version
is retained as a noncurrent version and the newly added version, the successor, becomes the current
version. To determine the number of days an object has been noncurrent, Amazon S3 looks at when its
successor was created and uses the number of days since the successor was created as the number of
days the object has been noncurrent.
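
For illustration, the following minimal sketch (plain Java with hypothetical timestamps) shows the
calculation: the noncurrent age of a version is the number of whole days that have elapsed since its
successor was created.

import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class NoncurrentAgeSketch {
    public static void main(String[] args) {
        // Time at which the successor version was created (example value).
        Instant successorCreated = Instant.parse("2014-01-02T11:30:00Z");
        Instant now = Instant.parse("2014-01-08T00:00:00Z");

        // The earlier version has been noncurrent for this many whole days.
        long daysNoncurrent = ChronoUnit.DAYS.between(successorCreated, now);
        System.out.println("Days noncurrent: " + daysNoncurrent); // prints 5
    }
}
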
Restoring Previous Versions of an Object When Using Lifecycle Configurations
As explained in detail in the topic Restoring Previous Versions (p. 443), you can use either of
the following two methods to retrieve previous versions of an object:

1. By copying a noncurrent version of the object into the same bucket. The copied object
becomes the current version of that object, and all object versions are preserved.
2. By permanently deleting the current version of the object. When you delete the current
object version, you, in effect, turn the noncurrent version into the current version of that
object.

When using lifecycle configuration rules with versioning-enabled buckets, we recommend as a


best practice that you use the first method.
Because of Amazon S3's eventual consistency semantics, a current version that you permanently
deleted may not disappear until the changes propagate (Amazon S3 may be unaware of this
deletion). In the meantime, the lifecycle rule that you configured to expire noncurrent objects
may permanently remove noncurrent objects, including the one you want to restore. So, copying
the old version, as recommended in the first method, is the safer alternative.
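
For illustration, the following minimal sketch (assuming the AWS SDK for Java version 1.x, with placeholder
bucket, key, and version ID values) shows the first method: copying a noncurrent version onto the same key
in the same bucket so that the copy becomes the current version and all versions are preserved.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;

public class RestorePreviousVersionSketch {
    public static void main(String[] args) {
        // Placeholder values. Replace with your own bucket, key, and version ID.
        String bucketName = "*** Bucket name ***";
        String keyName = "photo.gif";
        String noncurrentVersionId = "*** Noncurrent version ID ***";

        AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

        // Copy the noncurrent version onto the same key in the same bucket.
        // The copy becomes the new current version; all object versions are preserved.
        s3Client.copyObject(new CopyObjectRequest(
                bucketName, keyName, noncurrentVersionId,   // source (a specific version)
                bucketName, keyName));                      // destination (same key)
    }
}
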

Lifecycle Rules: Based on an Object's Age


You can specify a time period, as a number of days from the creation (or modification) of the objects,
after which Amazon S3 takes the action.

When you specify the number of days in the Transition and Expiration actions in a lifecycle
configuration, note the following:

• It is the number of days since object creation when the action will occur.
• Amazon S3 calculates the time by adding the number of days specified in the rule to the object
creation time and rounding the resulting time to the next day midnight UTC. For example, if an object
was created at 1/15/2014 10:30 AM UTC and you specify 3 days in a transition rule, then the transition
date of the object would be calculated as 1/19/2014 00:00 UTC.

Note
Amazon S3 maintains only the last modified date for each object. For example, the Amazon S3
console shows the Last Modified date in the object Properties pane. When you initially create
a new object, this date reflects the date the object is created. If you replace the object, the date

changes accordingly. So when we use the term creation date, it is synonymous with the term last
modified date.

When specifying the number of days in the NoncurrentVersionTransition and


NoncurrentVersionExpiration actions in a lifecycle configuration, note the following:

• It is the number of days from when the version of the object becomes noncurrent (that is, when the
object was overwritten or deleted) until the time when Amazon S3 performs the action on the
specified object or objects.
• Amazon S3 calculates the time by adding the number of days specified in the rule to the time when
the new successor version of the object is created and rounding the resulting time to the next day
midnight UTC. For example, suppose that your bucket has a current version of an object that was
created at 1/1/2014 10:30 AM UTC. If the new successor version of the object that replaces the current
version is created at 1/15/2014 10:30 AM UTC and you specify 3 days in a transition rule, then the
transition date of the noncurrent version is calculated as 1/19/2014 00:00 UTC (see the sketch
following this list).
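
For illustration, the following minimal sketch (plain Java, using the dates from the preceding example)
reproduces the rounding calculation: add the number of days from the rule to the relevant start time
(creation time, or the time the successor version was created), then round up to the next midnight UTC.

import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.temporal.ChronoUnit;

public class LifecycleActionDateSketch {
    public static void main(String[] args) {
        // Start time: the object's creation time (Transition/Expiration) or the
        // time its successor was created (NoncurrentVersion* actions).
        ZonedDateTime start = ZonedDateTime.of(2014, 1, 15, 10, 30, 0, 0, ZoneOffset.UTC);
        int daysInRule = 3;

        // Add the days from the rule, then round up to the next midnight UTC.
        ZonedDateTime actionTime = start.plusDays(daysInRule)
                .truncatedTo(ChronoUnit.DAYS)
                .plusDays(1);
        System.out.println(actionTime); // prints 2014-01-19T00:00Z
    }
}

The rule in the preceding example would therefore act on the object no earlier than 1/19/2014 00:00 UTC.
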

Lifecycle Rules: Based on a Specific Date


When specifying an action in a lifecycle rule, you can specify a date when you want Amazon S3 to take
the action. When the specified date arrives, Amazon S3 applies the action to all qualified objects (based
on the filter criteria).

If you specify a lifecycle action with a date that is in the past, all qualified objects become immediately
eligible for that lifecycle action.
Important
The date-based action is not a one-time action. S3 continues to apply the date-based action
even after the date has passed, as long as the rule status is Enabled.
For example, suppose that you specify a date-based Expiration action to delete all objects
(assume no filter specified in the rule). On the specified date, S3 expires all the objects in
the bucket. S3 also continues to expire any new objects you create in the bucket. To stop the
lifecycle action, you must remove the action from the lifecycle configuration, disable the rule, or
delete the rule from the lifecycle configuration.

The date value must conform to the ISO 8601 format. The time is always midnight UTC.
Note
You can't create the date-based lifecycle rules using the Amazon S3 console, but you can view,
disable, or delete such rules.

Examples of Lifecycle Configuration


This section provides examples of lifecycle configuration. Each example shows how you can specify the
XML in each of the example scenarios.

Topics
• Example 1: Specifying a Filter (p. 129)
• Example 2: Disabling a Lifecycle Rule (p. 130)
• Example 3: Tiering Down Storage Class over an Object's Lifetime (p. 131)
• Example 4: Specifying Multiple Rules (p. 132)
• Example 5: Overlapping Filters, Conflicting Lifecycle Actions, and What Amazon S3 Does (p. 132)
• Example 6: Specifying a Lifecycle Rule for a Versioning-Enabled Bucket (p. 135)
• Example 7: Removing Expired Object Delete Markers (p. 136)
• Example 8: Lifecycle Configuration to Abort Multipart Uploads (p. 137)


Example 1: Specifying a Filter


Each lifecycle rule includes a filter that you can use to identify a subset of objects in your bucket to which
the lifecycle rule applies. The following lifecycle configurations show examples of how you can specify a
filter.

• In this lifecycle configuration rule, the filter specifies a key prefix (tax/). Therefore, the rule applies to
objects with key name prefix tax/, such as tax/doc1.txt and tax/doc2.txt

The rule specifies two actions that direct Amazon S3 to do the following:
• Transition objects to the GLACIER storage class 365 days (one year) after creation.
• Delete objects (the Expiration action) 3650 days (10 years) after creation.

<LifecycleConfiguration>
<Rule>
<ID>Transition and Expiration Rule</ID>
<Filter>
<Prefix>tax/</Prefix>
</Filter>
<Status>Enabled</Status>
<Transition>
<Days>365</Days>
<StorageClass>GLACIER</StorageClass>
</Transition>
<Expiration>
<Days>3650</Days>
</Expiration>
</Rule>
</LifecycleConfiguration>

Instead of specifying object age in terms of days after creation, you can specify a date for each action.
However, you can't use both Date and Days in the same rule.
• If you want the lifecycle rule to apply to all objects in the bucket, specify an empty prefix. In the
following configuration, the rule specifies a Transition action directing Amazon S3 to transition
objects to the GLACIER storage class 0 days after creation, in which case objects are eligible for archival
to Glacier at midnight UTC following creation.

<LifecycleConfiguration>
<Rule>
<ID>Archive all object same-day upon creation</ID>
<Filter>
<Prefix></Prefix>
</Filter>
<Status>Enabled</Status>
<Transition>
<Days>0</Days>
<StorageClass>GLACIER</StorageClass>
</Transition>
</Rule>
</LifecycleConfiguration>

• You can specify zero or one key name prefix and zero or more object tags in a filter. The following
example code applies the lifecycle rule to a subset of objects with the tax/ key prefix and to objects
that have two tags with specific key and value. Note that when you specify more than one filter, you
must include the <And> element as shown (Amazon S3 applies a logical AND to combine the specified filter
conditions).

...
<Filter>

<And>
<Prefix>tax/</Prefix>
<Tag>
<Key>key1</Key>
<Value>value1</Value>
</Tag>
<Tag>
<Key>key2</Key>
<Value>value2</Value>
</Tag>
</And>
</Filter>
...

• You can filter objects based only on tags. For example, the following lifecycle rule applies to objects
that have the two specified tags (it does not specify any prefix):

...
<Filter>
<And>
<Tag>
<Key>key1</Key>
<Value>value1</Value>
</Tag>
<Tag>
<Key>key2</Key>
<Value>value2</Value>
</Tag>
</And>
</Filter>
...

Important
When you have multiple rules in a lifecycle configuration, an object can become eligible for
multiple lifecycle actions. The general rules that Amazon S3 follows in such cases are:

• Permanent deletion takes precedence over transition.


• Transition takes precedence over creation of delete markers.
• When an object is eligible for both a GLACIER and STANDARD_IA (or ONEZONE_IA) transition,
Amazon S3 chooses the GLACIER transition.

For examples, see Example 5: Overlapping Filters, Conflicting Lifecycle Actions, and What
Amazon S3 Does (p. 132)

Example 2: Disabling a Lifecycle Rule


You can temporarily disable a lifecycle rule. The following lifecycle configuration specifies two rules:

• Rule 1 directs Amazon S3 to transition objects with the logs/ prefix to the GLACIER storage class
soon after creation.
• Rule 2 directs Amazon S3 to transition objects with the documents/ prefix to the GLACIER storage
class soon after creation.

In the policy, Rule 1 is enabled and Rule 2 is disabled. Amazon S3 doesn't take any action on disabled
rules.

<LifecycleConfiguration>

<Rule>
<ID>Rule1</ID>
<Filter>
<Prefix>logs/</Prefix>
</Filter>
<Status>Enabled</Status>
<Transition>
<Days>0</Days>
<StorageClass>GLACIER</StorageClass>
</Transition>
</Rule>
<Rule>
<ID>Rule2</ID>
<Filter>
<Prefix>documents/</Prefix>
</Filter>
<Status>Disabled</Status>
<Transition>
<Days>0</Days>
<StorageClass>GLACIER</StorageClass>
</Transition>
</Rule>
</LifecycleConfiguration>

Example 3: Tiering Down Storage Class over an Object's Lifetime


In this example, you leverage lifecycle configuration to tier down the storage class of objects over their
lifetime. Tiering down can help reduce storage costs. For more information about pricing, see Amazon S3
Pricing.

The following lifecycle configuration specifies a rule that applies to objects with key name prefix logs/.
The rule specifies the following actions:

• Two transition actions:


• Transition objects to the STANDARD_IA storage class 30 days after creation.
• Transition objects to the GLACIER storage class 90 days after creation.
• One expiration action that directs Amazon S3 to delete objects a year after creation.

<LifecycleConfiguration>
<Rule>
<ID>example-id</ID>
<Filter>
<Prefix>logs/</Prefix>
</Filter>
<Status>Enabled</Status>
<Transition>
<Days>30</Days>
<StorageClass>STANDARD_IA</StorageClass>
</Transition>
<Transition>
<Days>90</Days>
<StorageClass>GLACIER</StorageClass>
</Transition>
<Expiration>
<Days>365</Days>
</Expiration>
</Rule>
</LifecycleConfiguration>

Note
You can use one rule to describe all lifecycle actions if all actions apply to the same set of
objects (identified by the filter). Otherwise, you can add multiple rules with each specifying a
different filter.

Example 4: Specifying Multiple Rules


You can specify multiple rules if you want different lifecycle actions performed on different objects. The following
lifecycle configuration has two rules:

• Rule 1 applies to objects with the key name prefix classA/. It directs Amazon S3 to transition objects
to the GLACIER storage class one year after creation and expire these objects 10 years after creation.
• Rule 2 applies to objects with key name prefix classB/. It directs Amazon S3 to transition objects to
the STANDARD_IA storage class 90 days after creation and delete them one year after creation.

<LifecycleConfiguration>
<Rule>
<ID>ClassADocRule</ID>
<Filter>
<Prefix>classA/</Prefix>
</Filter>
<Status>Enabled</Status>
<Transition>      
<Days>365</Days>      
<StorageClass>GLACIER</StorageClass>    
</Transition>
<Expiration>
<Days>3650</Days>
</Expiration>
</Rule>
<Rule>
<ID>ClassBDocRule</ID>
<Filter>
<Prefix>classB/</Prefix>
</Filter>
<Status>Enabled</Status>
<Transition>      
<Days>90</Days>      
<StorageClass>STANDARD_IA</StorageClass>    
</Transition>
<Expiration>
<Days>365</Days>
</Expiration>
</Rule>
</LifecycleConfiguration>

Example 5: Overlapping Filters, Conflicting Lifecycle Actions,


and What Amazon S3 Does
You might specify a lifecycle configuration in which rules specify overlapping prefixes or conflicting
actions. The following examples show how Amazon S3 resolves potential conflicts.

Example 1: Overlapping Prefixes (No Conflict)

The following example configuration has two rules that specify overlapping prefixes as follows:

• First rule specifies an empty filter, indicating all objects in the bucket.
• Second rule specifies a key name prefix logs/, indicating only a subset of objects.

Rule 1 requests Amazon S3 to delete all objects one year after creation, and Rule 2 requests Amazon S3
to transition a subset of objects to the STANDARD_IA storage class 30 days after creation.

<LifecycleConfiguration>
<Rule>
<ID>Rule 1</ID>
<Filter>
</Filter>
<Status>Enabled</Status>
<Expiration>
<Days>365</Days>
</Expiration>
</Rule>
<Rule>
<ID>Rule 2</ID>
<Filter>
<Prefix>logs/</Prefix>
</Filter>
<Status>Enabled</Status>
<Transition>
<StorageClass>STANDARD_IA</StorageClass>
<Days>30</Days>
</Transition>
</Rule>
</LifecycleConfiguration>

Example 2: Conflicting Lifecycle Actions

In this example configuration, two rules direct Amazon S3 to perform two different actions on the same
set of objects at the same point in the objects' lifetime:

• Both rules specify the same key name prefix, so both rules apply to the same set of objects.
• Both rules specify the same 365 days after object creation when the rules apply.
• One rule directs Amazon S3 to transition objects to the STANDARD_IA storage class, and the other rule
directs Amazon S3 to expire the objects at the same time.

<LifecycleConfiguration>
<Rule>
<ID>Rule 1</ID>
<Filter>
<Prefix>logs/</Prefix>
</Filter>
<Status>Enabled</Status>
<Expiration>
<Days>365</Days>
</Expiration>
</Rule>
<Rule>
<ID>Rule 2</ID>
<Filter>
<Prefix>logs/</Prefix>
</Filter>
<Status>Enabled</Status>
<Transition>
<StorageClass>STANDARD_IA</StorageClass>
<Days>365</Days>
</Transition>
</Rule>
</LifecycleConfiguration>

In this case, because you want objects to expire (that is, to be removed), there is no point in changing
the storage class, and Amazon S3 simply chooses the expiration action on these objects.

Example 3: Overlapping Prefixes Resulting in Conflicting Lifecycle Actions

In this example, the configuration has two rules that specify overlapping prefixes as follows:

• Rule 1 specifies an empty prefix (indicating all objects).


• Rule 2 specifies a key name prefix (logs/) that identifies a subset of all objects.

For the subset of objects with the logs/ key name prefix, the lifecycle actions in both rules apply: one rule
directs Amazon S3 to transition objects 10 days after creation, and the other rule directs Amazon S3 to
transition objects 365 days after creation.

<LifecycleConfiguration>
<Rule>
<ID>Rule 1</ID>
<Filter>
<Prefix></Prefix>
</Filter>
<Status>Enabled</Status>
<Transition>
<StorageClass>STANDARD_IA</StorageClass>
<Days>10</Days>
</Transition>
</Rule>
<Rule>
<ID>Rule 2</ID>
<Filter>
<Prefix>logs/</Prefix>
</Filter>
<Status>Enabled</Status>
<Transition>
<StorageClass>STANDARD_IA</StorageClass>
<Days>365</Days>
</Transition>
</Rule>
</LifecycleConfiguration>

In this case, Amazon S3 chooses to transition them 10 days after creation.

Example 4: Tag-based Filtering and Resulting Conflicting Lifecycle Actions

Suppose you have the following lifecycle policy that has two rules, each specifying a tag filter:

• Rule 1 specifies a tag-based filter (tag1/value1). This rule directs Amazon S3 to transition objects to
the GLACIER storage class 365 days after creation.
• Rule 2 specifies a tag-based filter (tag2/value2). This rule directs Amazon S3 to expire objects 14
days after creation.

The lifecycle configuration is shown following:

<LifecycleConfiguration>
<Rule>
<ID>Rule 1</ID>
<Filter>
<Tag>
<Key>tag1</Key>

<Value>value1</Value>
</Tag>
</Filter>
<Status>Enabled</Status>
<Transition>
<StorageClass>GLACIER</StorageClass>
<Days>365</Days>
</Transition>
</Rule>
<Rule>
<ID>Rule 2</ID>
<Filter>
<Tag>
<Key>tag2</Key>
<Value>value2</Value>
</Tag>
</Filter>
<Status>Enabled</Status>
<Expiration>
<Days>14</Days>
</Expiration>
</Rule>
</LifecycleConfiguration>

The policy is valid, but if an object has both tags, Amazon S3 has to decide what to do. That is, both
rules apply to the object, and in effect you are directing Amazon S3 to perform conflicting actions. In
this case, Amazon S3 expires the object 14 days after creation. The object is removed, and therefore the
transition action does not come into play.

Example 6: Specifying a Lifecycle Rule for a Versioning-Enabled


Bucket
Suppose you have a versioning-enabled bucket, which means that for each object you have a current
version and zero or more noncurrent versions. You want to maintain one year's worth of history and then
delete the noncurrent versions. For more information about versioning, see Object Versioning (p. 104).

Also, you want to save storage costs by moving noncurrent versions to GLACIER 30 days after they
become noncurrent (assuming cold data for which you don't need real-time access). In addition, you
expect the frequency of access to the current versions to diminish 90 days after creation, so you might
choose to move these objects to the STANDARD_IA storage class.

<LifecycleConfiguration>
<Rule>
<ID>sample-rule</ID>
<Filter>
<Prefix></Prefix>
</Filter>
<Status>Enabled</Status>
<Transition>
<Days>90</Days>
<StorageClass>STANDARD_IA</StorageClass>
</Transition>
<NoncurrentVersionTransition>
<NoncurrentDays>30</NoncurrentDays>
<StorageClass>GLACIER</StorageClass>
</NoncurrentVersionTransition>
<NoncurrentVersionExpiration>
<NoncurrentDays>365</NoncurrentDays>
</NoncurrentVersionExpiration>
</Rule>
</LifecycleConfiguration>

Example 7: Removing Expired Object Delete Markers


A versioning-enabled bucket has one current version and zero or more noncurrent versions for each
object. When you delete an object, note the following:

• If you don't specify a version ID in your delete request, Amazon S3 adds a delete marker instead of
deleting the object. The current object version becomes noncurrent, and then the delete marker
becomes the current version.
• If you specify a version ID in your delete request, Amazon S3 deletes the object version permanently (a
delete marker is not created).
• A delete marker with zero noncurrent versions is referred to as the expired object delete marker.

This example shows a scenario that can create expired object delete markers in your bucket, and how you
can use lifecycle configuration to direct Amazon S3 to remove the expired object delete markers.

Suppose you write a lifecycle policy that specifies the NoncurrentVersionExpiration action to
remove the noncurrent versions 30 days after they become noncurrent as shown following:

<LifecycleConfiguration>
<Rule>
...
<NoncurrentVersionExpiration>
<NoncurrentDays>30</NoncurrentDays>
</NoncurrentVersionExpiration>
</Rule>
</LifecycleConfiguration>

The NoncurrentVersionExpiration action does not apply to the current object versions; it only
removes noncurrent versions.

For current object versions, you have the following options to manage their lifetime depending on
whether or not the current object versions follow a well-defined lifecycle:

• Current object versions follow a well-defined lifecycle.

In this case you can use lifecycle policy with the Expiration action to direct Amazon S3 to remove
current versions as shown in the following example:

<LifecycleConfiguration>
<Rule>
...
<Expiration>
<Days>60</Days>
</Expiration>
<NoncurrentVersionExpiration>
<NoncurrentDays>30</NoncurrentDays>
</NoncurrentVersionExpiration>
</Rule>
</LifecycleConfiguration>

Amazon S3 removes current versions 60 days after they are created by adding a delete marker for
each of the current object versions. This makes the current version noncurrent and the delete marker
becomes the current version. For more information, see Using Versioning (p. 425).

The NoncurrentVersionExpiration action in the same lifecycle configuration removes noncurrent


objects 30 days after they become noncurrent. Thus, all object versions are removed and you have
expired object delete markers, but Amazon S3 detects and removes the expired object delete markers
for you.

• Current object versions don't have a well-defined lifecycle.

In this case, you might remove the objects manually when you don't need them, creating
a delete marker with one or more noncurrent versions. If a lifecycle configuration with the
NoncurrentVersionExpiration action removes all the noncurrent versions, you now have expired
object delete markers.

Specifically for this scenario, Amazon S3 lifecycle configuration provides an Expiration action where
you can request Amazon S3 to remove the expired object delete markers:

<LifecycleConfiguration>
<Rule>
<ID>Rule 1</ID>
<Filter>
<Prefix>logs/</Prefix>
</Filter>
<Status>Enabled</Status>
<Expiration>
<ExpiredObjectDeleteMarker>true</ExpiredObjectDeleteMarker>
</Expiration>
<NoncurrentVersionExpiration>
<NoncurrentDays>30</NoncurrentDays>
</NoncurrentVersionExpiration>
</Rule>
</LifecycleConfiguration>

By setting the ExpiredObjectDeleteMarker element to true in the Expiration action, you direct
Amazon S3 to remove expired object delete markers.
Note
When specifying the ExpiredObjectDeleteMarker lifecycle action, the rule cannot specify a
tag-based filter.

Example 8: Lifecycle Configuration to Abort Multipart Uploads


You can use the multipart upload API to upload large objects in parts. For more information about
multipart uploads, see Multipart Upload Overview (p. 171).

Using lifecycle configuration, you can direct Amazon S3 to abort incomplete multipart uploads
(identified by the key name prefix specified in the rule) if they don't complete within a specified number
of days after initiation. When Amazon S3 aborts a multipart upload, it deletes all parts associated with
the multipart upload. This ensures that you don't have incomplete multipart uploads with parts that are
stored in Amazon S3 and, therefore, you don't have to pay any storage costs for these parts.
Note
When specifying the AbortIncompleteMultipartUpload lifecycle action, the rule cannot
specify a tag-based filter.

The following is an example lifecycle configuration that specifies a rule with the
AbortIncompleteMultipartUpload action. This action requests Amazon S3 to abort incomplete
multipart uploads seven days after initiation.

<LifecycleConfiguration>
<Rule>
<ID>sample-rule</ID>
<Filter>
<Prefix>SomeKeyPrefix/</Prefix>
</Filter>
<Status>rule-status</Status>
<AbortIncompleteMultipartUpload>

<DaysAfterInitiation>7</DaysAfterInitiation>
</AbortIncompleteMultipartUpload>
</Rule>
</LifecycleConfiguration>

Setting Lifecycle Configuration on a Bucket


Topics
• Manage an Object's Lifecycle Using the Amazon S3 Console (p. 138)
• Set Lifecycle Configurations Using the AWS CLI (p. 138)
• Managing Object Lifecycles Using the AWS SDK for Java (p. 141)
• Manage an Object's Lifecycle Using the AWS SDK for .NET (p. 143)
• Manage an Object's Lifecycle Using the AWS SDK for Ruby (p. 146)
• Manage an Object's Lifecycle Using the REST API (p. 146)

This section explains how you can set the lifecycle configuration on a bucket programmatically by using
the AWS SDKs, by using the Amazon S3 console, or by using the AWS CLI. Note the following:

• When you add a lifecycle configuration to a bucket, there is usually some lag before a new or updated
lifecycle configuration is fully propagated to all the Amazon S3 systems. Expect a delay of a few
minutes before the lifecycle configuration fully takes effect. This delay can also occur when you delete
a lifecycle configuration.
• When you disable or delete a lifecycle rule, after a small delay Amazon S3 stops scheduling new
objects for deletion or transition. Any objects that were already scheduled will be unscheduled and
they won't be deleted or transitioned.
• When you add a lifecycle configuration to a bucket, the configuration rules apply to both existing
objects and objects that you add later. For example, if you add a lifecycle configuration rule today with
an expiration action that causes objects with a specific prefix to expire 30 days after creation, Amazon
S3 will queue for removal any existing objects that are more than 30 days old.
• There may be a lag between when the lifecycle configuration rules are satisfied and when the action
triggered by satisfying the rule is taken. However, changes in billing happen as soon as the lifecycle
configuration rule is satisfied, even if the action is not yet taken. For example, you are not charged
for storage after the object expiration time, even if the object is not deleted immediately. Similarly,
you are charged Glacier storage rates as soon as the object transition time elapses, even if the object
is not transitioned to Glacier immediately.

For information about lifecycle configuration, see Object Lifecycle Management (p. 115).

Manage an Object's Lifecycle Using the Amazon S3 Console


You can specify lifecycle rules on a bucket using the Amazon S3 console.

For instructions on how to set up lifecycle rules using the AWS Management Console, see How Do I
Create a Lifecycle Policy for an S3 Bucket? in the Amazon Simple Storage Service Console User Guide.

Set Lifecycle Configurations Using the AWS CLI


You can use the following AWS CLI commands to manage lifecycle configurations:

• put-bucket-lifecycle-configuration
• get-bucket-lifecycle-configuration
• delete-bucket-lifecycle

For instructions to set up the AWS CLI, see Setting Up the AWS CLI (p. 661).

Note that the Amazon S3 lifecycle configuration is an XML document. However, when using the AWS CLI,
you cannot specify the XML; you must specify JSON instead. The following are example XML lifecycle
configurations and the equivalent JSON that you can specify in an AWS CLI command:

• Consider the following example lifecycle configuration:

<LifecycleConfiguration>
<Rule>
<ID>ExampleRule</ID>
<Filter>
<Prefix>documents/</Prefix>
</Filter>
<Status>Enabled</Status>
<Transition>
<Days>365</Days>
<StorageClass>GLACIER</StorageClass>
</Transition>
<Expiration>
<Days>3650</Days>
</Expiration>
</Rule>
</LifecycleConfiguration>

The equivalent JSON is shown:

{
"Rules": [
{
"Filter": {
"Prefix": "documents/"
},
"Status": "Enabled",
"Transitions": [
{
"Days": 365,
"StorageClass": "GLACIER"
}
],
"Expiration": {
"Days": 3650
},
"ID": "ExampleRule"
}
]
}

• Consider the following example lifecycle configuration:

<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Rule>
<ID>id-1</ID>
<Expiration>
<Days>1</Days>
</Expiration>
<Filter>
<And>
<Prefix>myprefix</Prefix>
<Tag>
<Key>mytagkey1</Key>
<Value>mytagvalue1</Value>
</Tag>

<Tag>
<Key>mytagkey2</Key>
<Value>mytagvalue2</Value>
</Tag>
</And>
</Filter>
<Status>Enabled</Status>
</Rule>
</LifecycleConfiguration>

The equivalent JSON is shown:

{
"Rules": [
{
"ID": "id-1",
"Filter": {
"And": {
"Prefix": "myprefix",
"Tags": [
{
"Value": "mytagvalue1",
"Key": "mytagkey1"
},
{
"Value": "mytagvalue2",
"Key": "mytagkey2"
}
]
}
},
"Status": "Enabled",
"Expiration": {
"Days": 1
}
}
]
}

You can test the put-bucket-lifecycle-configuration as follows:

1. Save the JSON lifecycle configuration in a file (lifecycle.json).

2. Run the following AWS CLI command to set the lifecycle configuration on your bucket:

   $ aws s3api put-bucket-lifecycle-configuration  \
       --bucket bucketname  \
       --lifecycle-configuration file://lifecycle.json

3. To verify, retrieve the lifecycle configuration using the get-bucket-lifecycle-configuration
   AWS CLI command as follows:

   $ aws s3api get-bucket-lifecycle-configuration  \
       --bucket bucketname

4. To delete the lifecycle configuration, use the delete-bucket-lifecycle AWS CLI command as
   follows:

   $ aws s3api delete-bucket-lifecycle \
       --bucket bucketname

Managing Object Lifecycles Using the AWS SDK for Java


You can use the AWS SDK for Java to manage the lifecycle configuration of a bucket. For more
information about managing lifecycle configuration, see Object Lifecycle Management (p. 115).
Note
When you add a lifecycle configuration to a bucket, Amazon S3 replaces the bucket's current
lifecycle configuration, if there is one. To update a configuration, you retrieve it, make the
desired changes, and then add the revised lifecycle configuration to the bucket.

Example

The following example shows how to use the AWS SDK for Java to add, update, and delete the lifecycle
configuration of a bucket. The example does the following:

• Adds a lifecycle configuration to a bucket.


• Retrieves the lifecycle configuration and updates it by adding another rule.
• Adds the modified lifecycle configuration to the bucket. Amazon S3 replaces the existing
configuration.
• Retrieves the configuration again and verifies that it has the right number of rules by printing the
number of rules.
• Deletes the lifecycle configuration and verifies that it has been deleted by attempting to retrieve it
again.

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.IOException;
import java.util.Arrays;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration.Transition;
import com.amazonaws.services.s3.model.StorageClass;
import com.amazonaws.services.s3.model.Tag;
import com.amazonaws.services.s3.model.lifecycle.LifecycleAndOperator;
import com.amazonaws.services.s3.model.lifecycle.LifecycleFilter;
import com.amazonaws.services.s3.model.lifecycle.LifecyclePrefixPredicate;
import com.amazonaws.services.s3.model.lifecycle.LifecycleTagPredicate;

public class LifecycleConfiguration {

public static void main(String[] args) throws IOException {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";

// Create a rule to archive objects with the "glacierobjects/" prefix to Glacier


immediately.
BucketLifecycleConfiguration.Rule rule1 = new BucketLifecycleConfiguration.Rule()
.withId("Archive immediately rule")
.withFilter(new LifecycleFilter(new
LifecyclePrefixPredicate("glacierobjects/")))

API Version 2006-03-01


141
Amazon Simple Storage Service Developer Guide
Setting Lifecycle Configuration

.addTransition(new
Transition().withDays(0).withStorageClass(StorageClass.Glacier))
.withStatus(BucketLifecycleConfiguration.ENABLED);

// Create a rule to transition objects to the Standard-Infrequent Access storage class
// after 30 days, then to Glacier after 365 days. Amazon S3 will delete the objects after 3650 days.
// The rule applies to all objects with the tag "archive" set to "true".
BucketLifecycleConfiguration.Rule rule2 = new BucketLifecycleConfiguration.Rule()
        .withId("Archive and then delete rule")
        .withFilter(new LifecycleFilter(new LifecycleTagPredicate(new Tag("archive", "true"))))
        .addTransition(new Transition().withDays(30).withStorageClass(StorageClass.StandardInfrequentAccess))
        .addTransition(new Transition().withDays(365).withStorageClass(StorageClass.Glacier))
        .withExpirationInDays(3650)
        .withStatus(BucketLifecycleConfiguration.ENABLED);

// Add the rules to a new BucketLifecycleConfiguration.
BucketLifecycleConfiguration configuration = new BucketLifecycleConfiguration()
        .withRules(Arrays.asList(rule1, rule2));

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Save the configuration.


s3Client.setBucketLifecycleConfiguration(bucketName, configuration);

// Retrieve the configuration.


configuration = s3Client.getBucketLifecycleConfiguration(bucketName);

// Add a new rule with both a prefix predicate and a tag predicate.
configuration.getRules().add(new BucketLifecycleConfiguration.Rule().withId("NewRule")
        .withFilter(new LifecycleFilter(new LifecycleAndOperator(
                Arrays.asList(new LifecyclePrefixPredicate("YearlyDocuments/"),
                        new LifecycleTagPredicate(new Tag("expire_after", "ten_years"))))))
        .withExpirationInDays(3650)
        .withStatus(BucketLifecycleConfiguration.ENABLED));

// Save the configuration.


s3Client.setBucketLifecycleConfiguration(bucketName, configuration);

// Retrieve the configuration.


configuration = s3Client.getBucketLifecycleConfiguration(bucketName);

// Verify that the configuration now has three rules.


configuration = s3Client.getBucketLifecycleConfiguration(bucketName);
System.out.println("Expected # of rules = 3; found: " +
configuration.getRules().size());

// Delete the configuration.


s3Client.deleteBucketLifecycleConfiguration(bucketName);

// Verify that the configuration has been deleted by attempting to retrieve it.
configuration = s3Client.getBucketLifecycleConfiguration(bucketName);
String s = (configuration == null) ? "No configuration found." : "Configuration found.";

System.out.println(s);
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Manage an Object's Lifecycle Using the AWS SDK for .NET


You can use the AWS SDK for .NET to manage the lifecycle configuration on a bucket. For more
information about managing lifecycle configuration, see Object Lifecycle Management (p. 115).
Note
When you add a lifecycle configuration, Amazon S3 replaces the existing lifecycle configuration
on the specified bucket. To update a configuration, you must first retrieve the lifecycle
configuration, make the changes, and then add the revised lifecycle configuration to the bucket.

Example .NET Code Example

The following example shows how to use the AWS SDK for .NET to add, update, and delete a bucket's
lifecycle configuration. The code example does the following:

• Adds a lifecycle configuration to a bucket.


• Retrieves the lifecycle configuration and updates it by adding another rule.
• Adds the modified lifecycle configuration to the bucket. Amazon S3 replaces the existing lifecycle
configuration.
• Retrieves the configuration again and verifies it by printing the number of rules in the configuration.
• Deletes the lifecycle configuration and verifies the deletion.

For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class LifecycleTest
{
private const string bucketName = "*** bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;
public static void Main()
{

client = new AmazonS3Client(bucketRegion);


AddUpdateDeleteLifecycleConfigAsync().Wait();
}

private static async Task AddUpdateDeleteLifecycleConfigAsync()


{
try
{
var lifeCycleConfiguration = new LifecycleConfiguration()
{
Rules = new List<LifecycleRule>
{
new LifecycleRule
{
Id = "Archive immediately rule",
Filter = new LifecycleFilter()
{
LifecycleFilterPredicate = new
LifecyclePrefixPredicate()
{
Prefix = "glacierobjects/"
}
},
Status = LifecycleRuleStatus.Enabled,
Transitions = new List<LifecycleTransition>
{
new LifecycleTransition
{
Days = 0,
StorageClass = S3StorageClass.Glacier
}
},
},
new LifecycleRule
{
Id = "Archive and then delete rule",
Filter = new LifecycleFilter()
{
LifecycleFilterPredicate = new
LifecyclePrefixPredicate()
{
Prefix = "projectdocs/"
}
},
Status = LifecycleRuleStatus.Enabled,
Transitions = new List<LifecycleTransition>
{
new LifecycleTransition
{
Days = 30,
StorageClass =
S3StorageClass.StandardInfrequentAccess
},
new LifecycleTransition
{
Days = 365,
StorageClass = S3StorageClass.Glacier
}
},
Expiration = new LifecycleRuleExpiration()
{
Days = 3650
}
}
}
};

// Add the configuration to the bucket.


await AddExampleLifecycleConfigAsync(client, lifeCycleConfiguration);

// Retrieve an existing configuration.


lifeCycleConfiguration = await RetrieveLifecycleConfigAsync(client);

// Add a new rule.


lifeCycleConfiguration.Rules.Add(new LifecycleRule
{
Id = "NewRule",
Filter = new LifecycleFilter()
{
LifecycleFilterPredicate = new LifecyclePrefixPredicate()
{
Prefix = "YearlyDocuments/"
}
},
Expiration = new LifecycleRuleExpiration()
{
Days = 3650
}
});

// Add the configuration to the bucket.


await AddExampleLifecycleConfigAsync(client, lifeCycleConfiguration);

// Verify that there are now three rules.


lifeCycleConfiguration = await RetrieveLifecycleConfigAsync(client);
Console.WriteLine("Expected # of rulest=3; found:{0}",
lifeCycleConfiguration.Rules.Count);

// Delete the configuration.


await RemoveLifecycleConfigAsync(client);

// Retrieve a nonexistent configuration.


lifeCycleConfiguration = await RetrieveLifecycleConfigAsync(client);

}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered ***. Message:'{0}' when writing an
object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}

static async Task AddExampleLifecycleConfigAsync(IAmazonS3 client,


LifecycleConfiguration configuration)
{

PutLifecycleConfigurationRequest request = new


PutLifecycleConfigurationRequest
{
BucketName = bucketName,
Configuration = configuration
};
var response = await client.PutLifecycleConfigurationAsync(request);
}

static async Task<LifecycleConfiguration> RetrieveLifecycleConfigAsync(IAmazonS3


client)

{
GetLifecycleConfigurationRequest request = new
GetLifecycleConfigurationRequest
{
BucketName = bucketName
};
var response = await client.GetLifecycleConfigurationAsync(request);
var configuration = response.Configuration;
return configuration;
}

static async Task RemoveLifecycleConfigAsync(IAmazonS3 client)


{
DeleteLifecycleConfigurationRequest request = new
DeleteLifecycleConfigurationRequest
{
BucketName = bucketName
};
await client.DeleteLifecycleConfigurationAsync(request);
}
}
}

Manage an Object's Lifecycle Using the AWS SDK for Ruby


You can use the AWS SDK for Ruby to manage lifecycle configuration on a bucket by using the class
AWS::S3::BucketLifecycleConfiguration. For more information about using the AWS SDK for Ruby
with Amazon S3, see Using the AWS SDK for Ruby - Version 3 (p. 665). For more information about
managing lifecycle configuration, see Object Lifecycle Management (p. 115).

Manage an Object's Lifecycle Using the REST API


You can use the AWS Management Console to set the lifecycle configuration on your bucket. If your
application requires it, you can also send REST requests directly. The following sections in the Amazon
Simple Storage Service API Reference describe the REST API related to the lifecycle configuration.

• PUT Bucket lifecycle


• GET Bucket lifecycle
• DELETE Bucket lifecycle

Cross-Origin Resource Sharing (CORS)


Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one
domain to interact with resources in a different domain. With CORS support, you can build rich client-
side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3
resources.

This section provides an overview of CORS. The subtopics describe how you can enable CORS using the
Amazon S3 console, or programmatically by using the Amazon S3 REST API and the AWS SDKs.

Topics
• Cross-Origin Resource Sharing: Use-case Scenarios (p. 147)
• How Do I Configure CORS on My Bucket? (p. 147)
• How Does Amazon S3 Evaluate the CORS Configuration on a Bucket? (p. 149)
• Enabling Cross-Origin Resource Sharing (CORS) (p. 149)

• Troubleshooting CORS Issues (p. 155)

Cross-Origin Resource Sharing: Use-case Scenarios


The following are example scenarios for using CORS:

• Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website
as described in Hosting a Static Website on Amazon S3 (p. 515). Your users load the website
endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use
JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET
and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket,
website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those
requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from
website.s3-website-us-east-1.amazonaws.com.
• Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a
CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that
is hosting the web font to allow any origin to make these requests.

How Do I Configure CORS on My Bucket?


To configure your bucket to allow cross-origin requests, you create a CORS configuration, which is
an XML document with rules that identify the origins that you will allow to access your bucket, the
operations (HTTP methods) that you will support for each origin, and other operation-specific information.

You can add up to 100 rules to the configuration. You add the XML document as the cors subresource
to the bucket either programmatically or by using the Amazon S3 console. For more information, see
Enabling Cross-Origin Resource Sharing (CORS) (p. 149).

Instead of accessing a website by using an Amazon S3 website endpoint, you can use your own domain,
such as example1.com, to serve your content. For information about using your own domain, see
Example: Setting up a Static Website Using a Custom Domain (p. 531). The following example cors
configuration has three rules, which are specified as CORSRule elements:

• The first rule allows cross-origin PUT, POST, and DELETE requests from the http://
www.example1.com origin. The rule also allows all headers in a preflight OPTIONS request through
the Access-Control-Request-Headers header. In response to preflight OPTIONS requests,
Amazon S3 returns requested headers.
• The second rule allows the same cross-origin requests as the first rule, but the rule applies to another
origin, http://www.example2.com.
• The third rule allows cross-origin GET requests from all origins. The * wildcard character refers to all
origins.

<CORSConfiguration>
<CORSRule>
<AllowedOrigin>http://www.example1.com</AllowedOrigin>

<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>DELETE</AllowedMethod>

<AllowedHeader>*</AllowedHeader>
</CORSRule>
<CORSRule>
<AllowedOrigin>http://www.example2.com</AllowedOrigin>

<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>DELETE</AllowedMethod>

<AllowedHeader>*</AllowedHeader>
</CORSRule>
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
</CORSRule>
</CORSConfiguration>

The CORS configuration also allows optional configuration parameters, as shown in the following CORS
configuration. In this example, the CORS configuration allows cross-origin PUT, POST, and DELETE
requests from the http://www.example.com origin.

<CORSConfiguration>
<CORSRule>
<AllowedOrigin>http://www.example.com</AllowedOrigin>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>DELETE</AllowedMethod>
<AllowedHeader>*</AllowedHeader>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<ExposeHeader>x-amz-server-side-encryption</ExposeHeader>
<ExposeHeader>x-amz-request-id</ExposeHeader>
<ExposeHeader>x-amz-id-2</ExposeHeader>
</CORSRule>
</CORSConfiguration>

The CORSRule element in the preceding configuration includes the following optional elements:

• MaxAgeSeconds—Specifies the amount of time in seconds (in this example, 3000) that the browser
caches an Amazon S3 response to a preflight OPTIONS request for the specified resource. By caching
the response, the browser does not have to send preflight requests to Amazon S3 if the original
request will be repeated.
• ExposeHeader—Identifies the response headers (in this example, x-amz-server-side-
encryption, x-amz-request-id, and x-amz-id-2) that customers are able to access from their
applications (for example, from a JavaScript XMLHttpRequest object).

AllowedMethod Element
In the CORS configuration, you can specify the following values for the AllowedMethod element.

• GET
• PUT
• POST
• DELETE
• HEAD

AllowedOrigin Element
In the AllowedOrigin element, you specify the origins that you want to allow cross-domain requests
from, for example, http://www.example.com. The origin string can contain only one * wildcard
character, such as http://*.example.com. You can optionally specify * as the origin to enable all the
origins to send cross-origin requests. You can also specify https to enable only secure origins.

AllowedHeader Element
The AllowedHeader element specifies which headers are allowed in a preflight request through the
Access-Control-Request-Headers header. Each header name in the Access-Control-Request-
Headers header must match a corresponding entry in the rule. In its response, Amazon S3 sends only the
allowed headers that were requested. For a sample list of headers that can be used in requests to
Amazon S3, go to Common Request Headers in the Amazon Simple Storage Service API Reference guide.

Each AllowedHeader string in the rule can contain at most one * wildcard character. For example,
<AllowedHeader>x-amz-*</AllowedHeader> will enable all Amazon-specific headers.

ExposeHeader Element
Each ExposeHeader element identifies a header in the response that you want customers to be able
to access from their applications (for example, from a JavaScript XMLHttpRequest object). For a list of
common Amazon S3 response headers, go to Common Response Headers in the Amazon Simple Storage
Service API Reference guide.

MaxAgeSeconds Element
The MaxAgeSeconds element specifies the time in seconds that your browser can cache the response for
a preflight request as identified by the resource, the HTTP method, and the origin.

How Does Amazon S3 Evaluate the CORS


Configuration on a Bucket?
When Amazon S3 receives a preflight request from a browser, it evaluates the CORS configuration for the
bucket and uses the first CORSRule rule that matches the incoming browser request to enable a cross-
origin request. For a rule to match, the following conditions must be met:

• The request's Origin header must match an AllowedOrigin element.


• The request method (for example, GET or PUT) or, in the case of a preflight OPTIONS request, the
Access-Control-Request-Method header must be one of the AllowedMethod elements.
• Every header listed in the request's Access-Control-Request-Headers header on the preflight
request must match an AllowedHeader element.

Note
The ACLs and policies continue to apply when you enable CORS on the bucket.
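
One way to see this evaluation in practice is to send a preflight request yourself and inspect the
response. The following is a minimal sketch, not an official example: it assumes a placeholder bucket
endpoint, object key, and origin, and uses java.net.HttpURLConnection to send the same OPTIONS request
that a browser would send (the PreflightCheck class name is arbitrary). If a CORSRule matches the Origin
and Access-Control-Request-Method values, Amazon S3 includes the Access-Control-Allow-* headers in its
response; if no rule matches, those headers are absent.

import java.net.HttpURLConnection;
import java.net.URL;

public class PreflightCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder bucket endpoint and object key; replace with your own values.
        URL url = new URL("https://your-bucket.s3.amazonaws.com/your-object-key");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();

        // Send the headers that a browser sends in a preflight request.
        connection.setRequestMethod("OPTIONS");
        connection.setRequestProperty("Origin", "http://www.example1.com");
        connection.setRequestProperty("Access-Control-Request-Method", "PUT");

        // A matching CORSRule produces the CORS response headers; otherwise they are null.
        System.out.println("HTTP status: " + connection.getResponseCode());
        System.out.println("Access-Control-Allow-Origin: "
                + connection.getHeaderField("Access-Control-Allow-Origin"));
        System.out.println("Access-Control-Allow-Methods: "
                + connection.getHeaderField("Access-Control-Allow-Methods"));
        connection.disconnect();
    }
}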

Enabling Cross-Origin Resource Sharing (CORS)


Enable cross-origin resource sharing by setting a CORS configuration on your bucket using the AWS
Management Console, the REST API, or the AWS SDKs.

Topics
• Enabling Cross-Origin Resource Sharing (CORS) Using the AWS Management Console (p. 150)
• Enabling Cross-Origin Resource Sharing (CORS) Using the AWS SDK for Java (p. 150)
• Enabling Cross-Origin Resource Sharing (CORS) Using the AWS SDK for .NET (p. 152)

• Enabling Cross-Origin Resource Sharing (CORS) Using the REST API (p. 155)

Enabling Cross-Origin Resource Sharing (CORS) Using the AWS


Management Console
You can use the AWS Management Console to set a CORS configuration on your bucket. For instructions,
see How Do I Allow Cross-Domain Resource Sharing with CORS? in the Amazon Simple Storage Service
Console User Guide.

Enabling Cross-Origin Resource Sharing (CORS) Using the AWS


SDK for Java
You can use the AWS SDK for Java to manage cross-origin resource sharing (CORS) for a bucket. For more
information about CORS, see Cross-Origin Resource Sharing (CORS) (p. 146).

Example

The following example:

• Creates a CORS configuration and sets the configuration on a bucket


• Retrieves the configuration and modifies it by adding a rule
• Adds the modified configuration to the bucket
• Deletes the configuration

For instructions on how to create and test a working sample, see Testing the Amazon S3 Java Code
Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketCrossOriginConfiguration;
import com.amazonaws.services.s3.model.CORSRule;

public class CORS {

public static void main(String[] args) throws IOException {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";

// Create two CORS rules.


List<CORSRule.AllowedMethods> rule1AM = new ArrayList<CORSRule.AllowedMethods>();
rule1AM.add(CORSRule.AllowedMethods.PUT);
rule1AM.add(CORSRule.AllowedMethods.POST);
rule1AM.add(CORSRule.AllowedMethods.DELETE);
CORSRule rule1 = new CORSRule().withId("CORSRule1").withAllowedMethods(rule1AM)
        .withAllowedOrigins(Arrays.asList(new String[] { "http://*.example.com" }));

List<CORSRule.AllowedMethods> rule2AM = new ArrayList<CORSRule.AllowedMethods>();
rule2AM.add(CORSRule.AllowedMethods.GET);
CORSRule rule2 = new CORSRule().withId("CORSRule2").withAllowedMethods(rule2AM)
        .withAllowedOrigins(Arrays.asList(new String[] { "*" })).withMaxAgeSeconds(3000)
        .withExposedHeaders(Arrays.asList(new String[] { "x-amz-server-side-encryption" }));

List<CORSRule> rules = new ArrayList<CORSRule>();


rules.add(rule1);
rules.add(rule2);

// Add the rules to a new CORS configuration.
BucketCrossOriginConfiguration configuration = new BucketCrossOriginConfiguration();
configuration.setRules(rules);

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Add the configuration to the bucket.


s3Client.setBucketCrossOriginConfiguration(bucketName, configuration);

// Retrieve and display the configuration.


configuration = s3Client.getBucketCrossOriginConfiguration(bucketName);
printCORSConfiguration(configuration);

// Add another new rule.


List<CORSRule.AllowedMethods> rule3AM = new ArrayList<CORSRule.AllowedMethods>();
rule3AM.add(CORSRule.AllowedMethods.HEAD);
CORSRule rule3 = new CORSRule().withId("CORSRule3").withAllowedMethods(rule3AM)
        .withAllowedOrigins(Arrays.asList(new String[] { "http://www.example.com" }));

rules = configuration.getRules();
rules.add(rule3);
configuration.setRules(rules);
s3Client.setBucketCrossOriginConfiguration(bucketName, configuration);

// Verify that the new rule was added by checking the number of rules in the configuration.
configuration = s3Client.getBucketCrossOriginConfiguration(bucketName);
System.out.println("Expected # of rules = 3, found " +
configuration.getRules().size());

// Delete the configuration.


s3Client.deleteBucketCrossOriginConfiguration(bucketName);
System.out.println("Removed CORS configuration.");

// Retrieve and display the configuration to verify that it was


// successfully deleted.
configuration = s3Client.getBucketCrossOriginConfiguration(bucketName);
printCORSConfiguration(configuration);
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();

}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}

private static void printCORSConfiguration(BucketCrossOriginConfiguration


configuration) {
if (configuration == null) {
System.out.println("Configuration is null.");
} else {
System.out.println("Configuration has " + configuration.getRules().size() + " rules\n");

for (CORSRule rule : configuration.getRules()) {


System.out.println("Rule ID: " + rule.getId());
System.out.println("MaxAgeSeconds: " + rule.getMaxAgeSeconds());
System.out.println("AllowedMethod: " + rule.getAllowedMethods());
System.out.println("AllowedOrigins: " + rule.getAllowedOrigins());
System.out.println("AllowedHeaders: " + rule.getAllowedHeaders());
System.out.println("ExposeHeader: " + rule.getExposedHeaders());
System.out.println();
}
}
}
}

Enabling Cross-Origin Resource Sharing (CORS) Using the AWS


SDK for .NET
To manage cross-origin resource sharing (CORS) for a bucket, you can use the AWS SDK for .NET. For
more information about CORS, see Cross-Origin Resource Sharing (CORS) (p. 146).

Example

The following C# code:

• Creates a CORS configuration and sets the configuration on a bucket


• Retrieves the configuration and modifies it by adding a rule
• Adds the modified configuration to the bucket
• Deletes the configuration

For information about creating and testing a working sample, see Running the Amazon S3 .NET Code
Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class CORSTest

{
private const string bucketName = "*** bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
CORSConfigTestAsync().Wait();
}
private static async Task CORSConfigTestAsync()
{
try
{
// Create a new configuration request and add two rules
CORSConfiguration configuration = new CORSConfiguration
{
Rules = new System.Collections.Generic.List<CORSRule>
{
new CORSRule
{
Id = "CORSRule1",
AllowedMethods = new List<string> {"PUT", "POST", "DELETE"},
AllowedOrigins = new List<string> {"http://*.example.com"}
},
new CORSRule
{
Id = "CORSRule2",
AllowedMethods = new List<string> {"GET"},
AllowedOrigins = new List<string> {"*"},
MaxAgeSeconds = 3000,
ExposeHeaders = new List<string> {"x-amz-server-side-encryption"}
}
}
};

// Add the configuration to the bucket.


await PutCORSConfigurationAsync(configuration);

// Retrieve an existing configuration.


configuration = await RetrieveCORSConfigurationAsync();

// Add a new rule.


configuration.Rules.Add(new CORSRule
{
Id = "CORSRule3",
AllowedMethods = new List<string> { "HEAD" },
AllowedOrigins = new List<string> { "http://www.example.com" }
});

// Add the configuration to the bucket.


await PutCORSConfigurationAsync(configuration);

// Verify that there are now three rules.


configuration = await RetrieveCORSConfigurationAsync();
Console.WriteLine();
Console.WriteLine("Expected # of rules = 3; found: {0}", configuration.Rules.Count);
Console.WriteLine();
Console.WriteLine("Pause before configuration delete. To continue, press Enter...");
Console.ReadKey();

// Delete the configuration.

await DeleteCORSConfigurationAsync();

// Retrieve a nonexistent configuration.


configuration = await RetrieveCORSConfigurationAsync();
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when writing
an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}

static async Task PutCORSConfigurationAsync(CORSConfiguration configuration)


{

PutCORSConfigurationRequest request = new PutCORSConfigurationRequest


{
BucketName = bucketName,
Configuration = configuration
};

var response = await s3Client.PutCORSConfigurationAsync(request);


}

static async Task<CORSConfiguration> RetrieveCORSConfigurationAsync()


{
GetCORSConfigurationRequest request = new GetCORSConfigurationRequest
{
BucketName = bucketName

};
var response = await s3Client.GetCORSConfigurationAsync(request);
var configuration = response.Configuration;
PrintCORSRules(configuration);
return configuration;
}

static async Task DeleteCORSConfigurationAsync()


{
DeleteCORSConfigurationRequest request = new DeleteCORSConfigurationRequest
{
BucketName = bucketName
};
await s3Client.DeleteCORSConfigurationAsync(request);
}

static void PrintCORSRules(CORSConfiguration configuration)


{
Console.WriteLine();

if (configuration == null)
{
Console.WriteLine("\nConfiguration is null");
return;
}

Console.WriteLine("Configuration has {0} rules:", configuration.Rules.Count);


foreach (CORSRule rule in configuration.Rules)
{
Console.WriteLine("Rule ID: {0}", rule.Id);
Console.WriteLine("MaxAgeSeconds: {0}", rule.MaxAgeSeconds);
Console.WriteLine("AllowedMethod: {0}", string.Join(", ", rule.AllowedMethods.ToArray()));
Console.WriteLine("AllowedOrigins: {0}", string.Join(", ",
rule.AllowedOrigins.ToArray()));
Console.WriteLine("AllowedHeaders: {0}", string.Join(", ",
rule.AllowedHeaders.ToArray()));
Console.WriteLine("ExposeHeader: {0}", string.Join(", ",
rule.ExposeHeaders.ToArray()));
}
}
}
}

Enabling Cross-Origin Resource Sharing (CORS) Using the REST


API
To set a CORS configuration on your bucket, you can use the AWS Management Console. If your
application requires it, you can also send REST requests directly. The following sections in the Amazon
Simple Storage Service API Reference describe the REST API actions related to the CORS configuration:

• PUT Bucket cors


• GET Bucket cors
• DELETE Bucket cors
• OPTIONS object

Troubleshooting CORS Issues


If you encounter unexpected behavior while accessing buckets set with the CORS configuration, try the
following steps to troubleshoot:

1. Verify that the CORS configuration is set on the bucket.

For instructions, see Editing Bucket Permissions in the Amazon Simple Storage Service Console User
Guide. If the CORS configuration is set, the console displays an Edit CORS Configuration link in the
Permissions section of the bucket properties.
2. Capture the complete request and response using a tool of your choice. For each request Amazon S3
receives, there must be a CORS rule that matches the data in your request, as follows:
a. Verify that the request has the Origin header.

If the header is missing, Amazon S3 doesn't treat the request as a cross-origin request, and doesn't
send CORS response headers in the response.
b. Verify that the Origin header in your request matches at least one of the AllowedOrigin elements
in the specified CORSRule.

The scheme, the host, and the port values in the Origin request header must match the
AllowedOrigin elements in the CORSRule. For example, if you set the CORSRule to
allow the origin http://www.example.com, then both https://www.example.com and
http://www.example.com:80 origins in your request don't match the allowed origin in your
configuration.
c. Verify that the method in your request (or in a preflight request, the method specified in the
Access-Control-Request-Method) is one of the AllowedMethod elements in the same
CORSRule.
d. For a preflight request, if the request includes an Access-Control-Request-Headers header,
verify that the CORSRule includes the AllowedHeader entries for each value in the Access-
Control-Request-Headers header.

Operations on Objects
Amazon S3 enables you to store, retrieve, and delete objects. You can retrieve an entire object or a
portion of an object. If you have enabled versioning on your bucket, you can retrieve a specific version
of the object. You can also retrieve a subresource associated with your object and update it where
applicable. You can make a copy of your existing object. Depending on the object size, the following
upload and copy related considerations apply:

• Uploading objects—You can upload objects of up to 5 GB in size in a single operation. For objects
greater than 5 GB you must use the multipart upload API.

Using the multipart upload API you can upload objects up to 5 TB each. For more information, see
Uploading Objects Using Multipart Upload API (p. 171).
• Copying objects—The copy operation creates a copy of an object that is already stored in Amazon S3.

You can create a copy of your object up to 5 GB in size in a single atomic operation. However, for
copying an object greater than 5 GB, you must use the multipart upload API. For more information, see
Copying Objects (p. 207).

You can use the REST API (see Making Requests Using the REST API (p. 44)) to work with objects or use
one of the following AWS SDK libraries:

• AWS SDK for Java


• AWS SDK for .NET
• AWS SDK for PHP

These libraries provide a high-level abstraction that makes working with objects easy. However, if your
application requires it, you can use the REST API directly.

Getting Objects
Topics
• Related Resources (p. 157)
• Get an Object Using the AWS SDK for Java (p. 157)
• Get an Object Using the AWS SDK for .NET (p. 159)
• Get an Object Using the AWS SDK for PHP (p. 161)
• Get an Object Using the REST API (p. 162)
• Share an Object with Others (p. 162)

You can retrieve objects directly from Amazon S3. You have the following options when retrieving an
object:

• Retrieve an entire object—A single GET operation can return the entire object stored in Amazon
S3.
• Retrieve an object in parts—Using the Range HTTP header in a GET request, you can retrieve a specific
range of bytes in an object stored in Amazon S3.

You resume fetching other parts of the object whenever your application is ready. This resumable
download is useful when you need only portions of your object data. It is also useful where network
connectivity is poor and you need to react to failures.

Note
Amazon S3 doesn't support retrieving multiple ranges of data per GET request.

When you retrieve an object, its metadata is returned in the response headers. There are times when you
want to override certain response header values returned in a GET response. For example, you might
override the Content-Disposition response header value in your GET request. The REST GET Object
API (see GET Object) allows you to specify query string parameters in your GET request to override these
values.

The AWS SDKs for Java, .NET, and PHP also provide necessary objects you can use to specify values for
these response headers in your GET request.

When retrieving objects that are stored encrypted using server-side encryption you will need to provide
appropriate request headers. For more information, see Protecting Data Using Encryption (p. 388).

Related Resources
• Using the AWS SDKs, CLI, and Explorers (p. 655)

Get an Object Using the AWS SDK for Java


When you download an object through the AWS SDK for Java, Amazon S3 returns all of the object's
metadata and an input stream from which to read the object's contents.

To retrieve an object, you do the following:

• Execute the AmazonS3Client.getObject() method, providing the bucket name and object key in
the request.
• Execute one of the S3Object instance methods to process the input stream.

Note
Your network connection remains open until you read all of the data or close the input stream.
We recommend that you read the content of the stream as quickly as possible.

The following are some variations you might use:

• Instead of reading the entire object, you can read only a portion of the object data by specifying the
byte range that you want in the request.
• You can optionally override the response header values (see Getting Objects (p. 156)) by using a
ResponseHeaderOverrides object and setting the corresponding request property. For example,
you can use this feature to indicate that the object should be downloaded into a file with a different
file name than the object key name.

The following example retrieves an object from an Amazon S3 bucket three ways: first, as a complete
object, then as a range of bytes from the object, then as a complete object with overridden response
header values. For more information about getting objects from Amazon S3, see GET Object.
For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.BufferedReader;

import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ResponseHeaderOverrides;
import com.amazonaws.services.s3.model.S3Object;

public class GetObject {

public static void main(String[] args) throws IOException {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";
String key = "*** Object key ***";

S3Object fullObject = null, objectPortion = null, headerOverrideObject = null;


try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();

// Get an object and print its contents.


System.out.println("Downloading an object");
fullObject = s3Client.getObject(new GetObjectRequest(bucketName, key));
System.out.println("Content-Type: " +
fullObject.getObjectMetadata().getContentType());
System.out.println("Content: ");
displayTextInputStream(fullObject.getObjectContent());

// Get a range of bytes from an object and print the bytes.


GetObjectRequest rangeObjectRequest = new GetObjectRequest(bucketName, key)
.withRange(0,9);
objectPortion = s3Client.getObject(rangeObjectRequest);
System.out.println("Printing bytes retrieved.");
displayTextInputStream(objectPortion.getObjectContent());

// Get an entire object, overriding the specified response headers, and print the object's content.
ResponseHeaderOverrides headerOverrides = new ResponseHeaderOverrides()
.withCacheControl("No-cache")

.withContentDisposition("attachment; filename=example.txt");
GetObjectRequest getObjectRequestHeaderOverride = new
GetObjectRequest(bucketName, key)

.withResponseHeaders(headerOverrides);
headerOverrideObject = s3Client.getObject(getObjectRequestHeaderOverride);
displayTextInputStream(headerOverrideObject.getObjectContent());
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
finally {

// To ensure that the network connection doesn't remain open, close any open input streams.
if(fullObject != null) {
fullObject.close();
}
if(objectPortion != null) {
objectPortion.close();
}
if(headerOverrideObject != null) {
headerOverrideObject.close();
}
}
}

private static void displayTextInputStream(InputStream input) throws IOException {


// Read the text input stream one line at a time and display each line.
BufferedReader reader = new BufferedReader(new InputStreamReader(input));
String line = null;
while ((line = reader.readLine()) != null) {
System.out.println(line);
}
System.out.println();
}
}

Get an Object Using the AWS SDK for .NET


When you download an object, you get all of the object's metadata and a stream from which to read the
contents. You should read the content of the stream as quickly as possible because the data is streamed
directly from Amazon S3 and your network connection will remain open until you read all the data or
close the input stream. You do the following to get an object:

• Execute the GetObjectAsync method by providing the bucket name and object key in the request.
• Execute one of the GetObjectResponse methods to process the stream.

The following are some variations you might use:

• Instead of reading the entire object, you can read only a portion of the object data by specifying the
byte range in the request, as shown in the following C# example:

Example

GetObjectRequest request = new GetObjectRequest


{
BucketName = bucketName,
Key = keyName,
ByteRange = new ByteRange(0, 10)
};

• When retrieving an object, you can optionally override the response header values (see Getting
Objects (p. 156)) by using the ResponseHeaderOverrides object and setting the corresponding
request property. The following C# code example shows how to do this. For example, you can use this
feature to indicate that the object should be downloaded into a file with a different filename than the
object key name.

Example

GetObjectRequest request = new GetObjectRequest


{
BucketName = bucketName,

Key = keyName
};

ResponseHeaderOverrides responseHeaders = new ResponseHeaderOverrides();


responseHeaders.CacheControl = "No-cache";
responseHeaders.ContentDisposition = "attachment; filename=testing.txt";

request.ResponseHeaderOverrides = responseHeaders;

Example
The following C# code example retrieves an object from an Amazon S3 bucket. From the response,
the example reads the object data using the GetObjectResponse.ResponseStream property.
The example also shows how you can use the GetObjectResponse.Metadata collection to read
object metadata. If the object you retrieve has the x-amz-meta-title metadata, the code prints the
metadata value.

For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class GetObjectTest
{
private const string bucketName = "*** bucket name ***";
private const string keyName = "*** object key ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;

public static void Main()


{
client = new AmazonS3Client(bucketRegion);
ReadObjectDataAsync().Wait();
}

static async Task ReadObjectDataAsync()


{
string responseBody = "";
try
{
GetObjectRequest request = new GetObjectRequest
{
BucketName = bucketName,
Key = keyName
};
using (GetObjectResponse response = await client.GetObjectAsync(request))
using (Stream responseStream = response.ResponseStream)
using (StreamReader reader = new StreamReader(responseStream))
{
string title = response.Metadata["x-amz-meta-title"]; // Assume you have "title" as metadata added to the object.
string contentType = response.Headers["Content-Type"];

Console.WriteLine("Object metadata, Title: {0}", title);


Console.WriteLine("Content type: {0}", contentType);

responseBody = reader.ReadToEnd(); // Now you process the response body.
}
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered ***. Message:'{0}' when writing an
object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
}
}

Get an Object Using the AWS SDK for PHP


This topic explains how to use a class from the AWS SDK for PHP to retrieve an Amazon S3 object. You
can retrieve an entire object or a byte range from the object. We assume that you are already following
the instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 664) and have the AWS
SDK for PHP properly installed.

When retrieving an object, you can optionally override the response header values by
adding the response keys, ResponseContentType, ResponseContentLanguage,
ResponseContentDisposition, ResponseCacheControl, and ResponseExpires, to the
getObject() method, as shown in the following PHP code example:

Example

$result = $s3->getObject([
'Bucket' => $bucket,
'Key' => $keyname,
'ResponseContentType' => 'text/plain',
'ResponseContentLanguage' => 'en-US',
'ResponseContentDisposition' => 'attachment; filename=testing.txt',
'ResponseCacheControl' => 'No-cache',
'ResponseExpires' => gmdate(DATE_RFC2822, time() + 3600),
]);

For more information about retrieving objects, see Getting Objects (p. 156).

The following PHP example retrieves an object and displays the content of the object in the browser. The
example shows how to use the getObject() method. For information about running the PHP examples
in this guide, see Running PHP Examples (p. 665).

<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

try {
// Get the object.
$result = $s3->getObject([
'Bucket' => $bucket,
'Key' => $keyname
]);

// Display the object in the browser.


header("Content-Type: {$result['ContentType']}");
echo $result['Body'];
} catch (S3Exception $e) {
echo $e->getMessage() . PHP_EOL;
}

Related Resources
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• AWS SDK for PHP Documentation

Get an Object Using the REST API


You can use the AWS SDKs to retrieve objects from a bucket. However, if your application requires
it, you can send REST requests directly. You can send a GET request to retrieve an object. For more
information about the request and response format, go to GET Object.
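
As a minimal illustration, not an official example, the following sketch uses java.net.HttpURLConnection
to issue a GET Object request against the bucket's REST endpoint. It assumes the object is publicly
readable so that the request does not need to be signed; for private objects you would sign the request
or use a presigned URL, as described in the next section. The endpoint URL and the RestGetObject class
name are placeholders.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestGetObject {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; the object must allow anonymous reads for an unsigned request.
        URL url = new URL("https://your-bucket.s3.amazonaws.com/your-object-key");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        System.out.println("HTTP status: " + connection.getResponseCode());
        System.out.println("Content-Type: " + connection.getHeaderField("Content-Type"));

        // Read and print the object data line by line.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
        connection.disconnect();
    }
}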

Share an Object with Others


Topics
• Generate a Presigned Object URL using AWS Explorer for Visual Studio (p. 163)
• Generate a presigned Object URL Using the AWS SDK for Java (p. 163)
• Generate a Presigned Object URL Using AWS SDK for .NET (p. 164)

All objects by default are private. Only the object owner has permission to access these objects. However,
the object owner can optionally share objects with others by creating a presigned URL, using their own
security credentials, to grant time-limited permission to download the objects.

When you create a presigned URL for your object, you must provide your security credentials and specify
a bucket name, an object key, the HTTP method (GET to download the object), and an expiration date
and time. The presigned URLs are valid only for the specified duration.

Anyone who receives the presigned URL can then access the object. For example, if you have a video
in your bucket and both the bucket and the object are private, you can share the video with others by
generating a presigned URL.
Note
Anyone with valid security credentials can create a presigned URL. However, in order to
successfully access an object, the presigned URL must be created by someone who has
permission to perform the operation that the presigned URL is based upon.

You can generate a presigned URL programmatically using the AWS SDKs for Java and .NET.

Generate a Presigned Object URL using AWS Explorer for Visual Studio
If you are using Visual Studio, you can generate a presigned URL for an object without writing any
code by using AWS Explorer for Visual Studio. Anyone with this URL can download the object. For more
information, go to Using Amazon S3 from AWS Explorer.

For instructions about how to install the AWS Explorer, see Using the AWS SDKs, CLI, and
Explorers (p. 655).

Generate a presigned Object URL Using the AWS SDK for Java
Example

The following example generates a presigned URL that you can give to others so that they can retrieve
an object from an S3 bucket. For more information, see Share an Object with Others (p. 162).

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.IOException;
import java.net.URL;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.HttpMethod;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class GeneratePresignedURL {

public static void main(String[] args) throws IOException {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";
String objectKey = "*** Object key ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();

// Set the presigned URL to expire after one hour.


java.util.Date expiration = new java.util.Date();
long expTimeMillis = expiration.getTime();
expTimeMillis += 1000 * 60 * 60;
expiration.setTime(expTimeMillis);

// Generate the presigned URL.


System.out.println("Generating pre-signed URL.");
GeneratePresignedUrlRequest generatePresignedUrlRequest =
new GeneratePresignedUrlRequest(bucketName, objectKey)
.withMethod(HttpMethod.GET)
.withExpiration(expiration);
URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);

System.out.println("Pre-Signed URL: " + url.toString());


}

catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Generate a Presigned Object URL Using AWS SDK for .NET


Example
The following example generates a presigned URL that you can give to others so that they can retrieve
an object. For more information, see Share an Object with Others (p. 162).

For instructions about how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Model;
using System;

namespace Amazon.DocSamples.S3
{
class GenPresignedURLTest
{
private const string bucketName = "*** bucket name ***";
private const string objectKey = "*** object key ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
string urlString = GeneratePreSignedURL();
}
static string GeneratePreSignedURL()
{
string urlString = "";
try
{
GetPreSignedUrlRequest request1 = new GetPreSignedUrlRequest
{
BucketName = bucketName,
Key = objectKey,
Expires = DateTime.Now.AddMinutes(5)
};
urlString = s3Client.GetPreSignedURL(request1);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when writing
an object", e.Message);
}

catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
return urlString;
}
}
}

Uploading Objects
Depending on the size of the data you are uploading, Amazon S3 offers the following options:

• Upload objects in a single operation—With a single PUT operation, you can upload objects up to 5
GB in size.

For more information, see Uploading Objects in a Single Operation (p. 165).
• Upload objects in parts—Using the multipart upload API, you can upload large objects, up to 5 TB.

The multipart upload API is designed to improve the upload experience for larger objects. You can
upload objects in parts. These object parts can be uploaded independently, in any order, and in
parallel. You can use a multipart upload for objects from 5 MB to 5 TB in size. For more information,
see Uploading Objects Using Multipart Upload API (p. 171).

We recommend that you use multipart uploading in the following ways:

• If you're uploading large objects over a stable high-bandwidth network, use multipart uploading to
maximize the use of your available bandwidth by uploading object parts in parallel for multi-threaded
performance.
• If you're uploading over a spotty network, use multipart uploading to increase resiliency to network
errors by avoiding upload restarts. When using multipart uploading, you need to retry uploading only
parts that are interrupted during the upload. You don't need to restart uploading your object from the
beginning.

For more information about multipart uploads, see Multipart Upload Overview (p. 171).

Topics
• Uploading Objects in a Single Operation (p. 165)
• Uploading Objects Using Multipart Upload API (p. 171)
• Uploading Objects Using Presigned URLs (p. 203)

When uploading an object, you can optionally request that Amazon S3 encrypt it before saving
it to disk, and decrypt it when you download it. For more information, see Protecting Data Using
Encryption (p. 388).

Related Topics

Using the AWS SDKs, CLI, and Explorers (p. 655)

Uploading Objects in a Single Operation


Topics
• Upload an Object Using the AWS SDK for Java (p. 166)

• Upload an Object Using the AWS SDK for .NET (p. 167)
• Upload an Object Using the AWS SDK for PHP (p. 168)
• Upload an Object Using the AWS SDK for Ruby (p. 169)
• Upload an Object Using the REST API (p. 170)

You can use the AWS SDK to upload objects. The SDK provides wrapper libraries for you to upload data
easily. However, if your application requires it, you can use the REST API directly in your application.

Upload an Object Using the AWS SDK for Java


Example

The following example creates two objects. The first object has a text string as data, and the second
object is a file. The example creates the first object by specifying the bucket name, object key, and
text data directly in a call to AmazonS3Client.putObject(). The example creates the second
object by using a PutObjectRequest that specifies the bucket name, object key, and file path. The
PutObjectRequest also specifies the ContentType header and title metadata.

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.File;
import java.io.IOException;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

public class UploadObject {

public static void main(String[] args) throws IOException {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";
String stringObjKeyName = "*** String object key name ***";
String fileObjKeyName = "*** File object key name ***";
String fileName = "*** Path to file to upload ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();

// Upload a text string as a new object.


s3Client.putObject(bucketName, stringObjKeyName, "Uploaded String Object");

// Upload a file as a new object with ContentType and title specified.


PutObjectRequest request = new PutObjectRequest(bucketName, fileObjKeyName, new
File(fileName));
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentType("text/plain");
metadata.addUserMetadata("x-amz-meta-title", "someTitle");
request.setMetadata(metadata);

s3Client.putObject(request);
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Upload an Object Using the AWS SDK for .NET


Example

The following C# code example creates two objects with two PutObjectRequest requests:

• The first PutObjectRequest request saves a text string as sample object data. It also specifies the
bucket and object key names.
• The second PutObjectRequest request uploads a file by specifying the file name. This request also
specifies the ContentType header and optional object metadata (a title).

For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class UploadObjectTest
{
private const string bucketName = "*** bucket name ***";
// Example creates two objects (for simplicity, we upload same file twice).
// You specify key names for these objects.
private const string keyName1 = "*** key name for first object created ***";
private const string keyName2 = "*** key name for second object created ***";
private const string filePath = @"*** file path ***";
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.EUWest1;

private static IAmazonS3 client;

public static void Main()


{
client = new AmazonS3Client(bucketRegion);
WritingAnObjectAsync().Wait();
}

static async Task WritingAnObjectAsync()


{
try
{

// 1. Put object-specify only key name for the new object.


var putRequest1 = new PutObjectRequest
{
BucketName = bucketName,
Key = keyName1,
ContentBody = "sample text"
};

PutObjectResponse response1 = await client.PutObjectAsync(putRequest1);

// 2. Put the object-set ContentType and add metadata.


var putRequest2 = new PutObjectRequest
{
BucketName = bucketName,
Key = keyName2,
FilePath = filePath,
ContentType = "text/plain"
};
putRequest2.Metadata.Add("x-amz-meta-title", "someTitle");
}
catch (AmazonS3Exception e)
{
Console.WriteLine(
"Error encountered ***. Message:'{0}' when writing an object"
, e.Message);
}
catch (Exception e)
{
Console.WriteLine(
"Unknown encountered on server. Message:'{0}' when writing an object"
, e.Message);
}
}
}
}

Upload an Object Using the AWS SDK for PHP


This topic guides you through using classes from the AWS SDK for PHP to upload an object of up to 5 GB
in size. For larger files you must use multipart upload API. For more information, see Uploading Objects
Using Multipart Upload API (p. 171).

This topic assumes that you are already following the instructions for Using the AWS SDK for PHP and
Running PHP Examples (p. 664) and have the AWS SDK for PHP properly installed.

Example of Creating an Object in an Amazon S3 bucket by Uploading Data

The following PHP example creates an object in a specified bucket by uploading data using the
putObject() method. For information about running the PHP examples in this guide, go to Running
PHP Examples (p. 665).

<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';

$s3 = new S3Client([


'version' => 'latest',

'region' => 'us-east-1'


]);

try {
// Upload data.
$result = $s3->putObject([
'Bucket' => $bucket,
'Key' => $keyname,
'Body' => 'Hello, world!',
'ACL' => 'public-read'
]);

// Print the URL to the object.


echo $result['ObjectURL'] . PHP_EOL;
} catch (S3Exception $e) {
echo $e->getMessage() . PHP_EOL;
}

Related Resources

• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class


• AWS SDK for PHP Documentation

Upload an Object Using the AWS SDK for Ruby


The AWS SDK for Ruby - Version 3 has two ways of uploading an object to Amazon S3. The first uses a
managed file uploader, which makes it easy to upload files of any size from disk. To use the managed file
uploader method:

1. Create an instance of the Aws::S3::Resource class.


2. Reference the target object by bucket name and key. Objects live in a bucket and have unique keys
that identify each object.
3. Call #upload_file on the object.

Example

require 'aws-sdk-s3'

s3 = Aws::S3::Resource.new(region:'us-west-2')
obj = s3.bucket('bucket-name').object('key')
obj.upload_file('/path/to/source/file')

The second way that AWS SDK for Ruby - Version 3 can upload an object uses the #put method of
Aws::S3::Object. This is useful if the object is a string or an I/O object that is not a file on disk. To use
this method:

1. Create an instance of the Aws::S3::Resource class.


2. Reference the target object by bucket name and key.
3. Call #put, passing in the string or I/O object.

Example

require 'aws-sdk-s3'

s3 = Aws::S3::Resource.new(region:'us-west-2')
obj = s3.bucket('bucket-name').object('key')

# string data
obj.put(body: 'Hello World!')

# I/O object
File.open('/path/to/source.file', 'rb') do |file|
obj.put(body: file)
end

Upload an Object Using the REST API


You can use the AWS SDKs to upload an object. However, if your application requires it, you can send REST
requests directly. You can send a PUT request to upload data in a single operation. For more information,
see PUT Object.
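
A PUT Object request normally must be signed. As a minimal sketch, not an official example, the following
code avoids writing the signing logic by first generating a presigned PUT URL with the AWS SDK for Java
and then sending the data with java.net.HttpURLConnection. The region, bucket name, and object key are
placeholders, and the RestPutObject class name is arbitrary. Presigned uploads are covered in more detail
in Uploading Objects Using Presigned URLs (p. 203).

import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;

import com.amazonaws.HttpMethod;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class RestPutObject {
    public static void main(String[] args) throws Exception {
        String clientRegion = "*** Client region ***";
        String bucketName = "*** Bucket name ***";
        String objectKey = "*** Object key ***";

        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(clientRegion)
                .withCredentials(new ProfileCredentialsProvider())
                .build();

        // Generate a presigned URL that authorizes a PUT request for the next five minutes.
        java.util.Date expiration = new java.util.Date(System.currentTimeMillis() + 1000 * 60 * 5);
        URL url = s3Client.generatePresignedUrl(new GeneratePresignedUrlRequest(bucketName, objectKey)
                .withMethod(HttpMethod.PUT)
                .withExpiration(expiration));

        // Send the PUT request directly and write the object data to the request body.
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setDoOutput(true);
        connection.setRequestMethod("PUT");
        OutputStreamWriter writer = new OutputStreamWriter(connection.getOutputStream());
        writer.write("Uploaded with a direct REST PUT request");
        writer.close();

        System.out.println("HTTP status: " + connection.getResponseCode());
    }
}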

Uploading Objects Using Multipart Upload API


Topics
• Multipart Upload Overview (p. 171)
• Using the AWS Java SDK for Multipart Upload (High-Level API) (p. 177)
• Using the AWS Java SDK for a Multipart Upload (Low-Level API) (p. 181)
• Using the AWS SDK for .NET for Multipart Upload (High-Level API) (p. 186)
• Using the AWS SDK for .NET for Multipart Upload (Low-Level API) (p. 193)
• Using the AWS PHP SDK for Multipart Upload (p. 198)
• Using the AWS PHP SDK for Multipart Upload (Low-Level API) (p. 200)
• Using the AWS SDK for Ruby for Multipart Upload (p. 202)
• Using the REST API for Multipart Upload (p. 203)

Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion
of the object's data. You can upload these object parts independently and in any order. If transmission
of any part fails, you can retransmit that part without affecting other parts. After all parts of your object
are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size
reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single
operation.

Using multipart upload provides the following advantages:

• Improved throughput - You can upload parts in parallel to improve throughput.


• Quick recovery from any network issues - Smaller part size minimizes the impact of restarting a failed
upload due to a network error.
• Pause and resume object uploads - You can upload object parts over time. Once you initiate a
multipart upload there is no expiry; you must explicitly complete or abort the multipart upload.
• Begin an upload before you know the final object size - You can upload an object as you are creating it.

For more information, see Multipart Upload Overview (p. 171).

Multipart Upload Overview


Topics
• Concurrent Multipart Upload Operations (p. 172)
• Multipart Upload and Pricing (p. 173)
• Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy (p. 173)
• Amazon S3 Multipart Upload Limits (p. 174)
• API Support for Multipart Upload (p. 175)
• Multipart Upload API and Permissions (p. 175)

The Multipart upload API enables you to upload large objects in parts. You can use this API to upload
new large objects or make a copy of an existing object (see Operations on Objects (p. 156)).

Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and
after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete
multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then
access the object just as you would any other object in your bucket.

You can list all of your in-progress multipart uploads or get a list of the parts that you have uploaded for
a specific multipart upload. Each of these operations is explained in this section.

Multipart Upload Initiation

When you send a request to initiate a multipart upload, Amazon S3 returns a response with an upload
ID, which is a unique identifier for your multipart upload. You must include this upload ID whenever you
upload parts, list the parts, complete an upload, or abort an upload. If you want to provide any metadata
describing the object being uploaded, you must provide it in the request to initiate multipart upload.
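
For example, the following AWS CLI command initiates a multipart upload. This is a minimal sketch; the
bucket name and object key are placeholders. The response contains the UploadId value that you pass to
all subsequent part upload, list, complete, and abort requests.

aws s3api create-multipart-upload \
    --bucket examplebucket \
    --key large-object.zip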

Parts Upload

When uploading a part, in addition to the upload ID, you must specify a part number. You can choose
any part number between 1 and 10,000. A part number uniquely identifies a part and its position in the
object you are uploading. The part numbers that you choose don't need to be a consecutive sequence (for
example, they can be 1, 5, and 14). If you upload a new part using the same part number as a previously
uploaded part, the previously uploaded part is overwritten. Whenever you upload a part, Amazon S3
returns an ETag header in its response. For each part upload, you must record the part number and the
ETag value. You need to include these values in the subsequent request to complete the multipart upload.
Note
After you initiate a multipart upload and upload one or more parts, you must either complete or
abort the multipart upload to stop being charged for storage of the uploaded parts. Amazon S3
frees up the parts storage and stops charging you for it only after you complete or abort the
multipart upload.
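
For example, the following AWS CLI command uploads one part. This is a minimal sketch; the bucket
name, object key, part file, and upload ID are placeholders. The response contains the ETag for the part,
which you record along with the part number.

aws s3api upload-part \
    --bucket examplebucket \
    --key large-object.zip \
    --part-number 1 \
    --body part01.bin \
    --upload-id "EXAMPLE-UPLOAD-ID"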

Multipart Upload Completion (or Abort)

When you complete a multipart upload, Amazon S3 creates an object by concatenating the parts in
ascending order based on the part number. If any object metadata was provided in the initiate multipart
upload request, Amazon S3 associates that metadata with the object. After a successful complete
request, the parts no longer exist. Your complete multipart upload request must include the upload ID
and a list of both part numbers and corresponding ETag values. The Amazon S3 response includes an
ETag that uniquely identifies the combined object data. This ETag is not necessarily an MD5 hash of the
object data.

You can optionally abort the multipart upload. After aborting a multipart upload, you cannot upload any
part using that upload ID again. All storage consumed by any parts from the aborted multipart upload
is then freed. If any part uploads were in progress, they can still succeed or fail even after you abort the
upload. To free all storage consumed by all parts, abort a multipart upload only after all part uploads
have completed.
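
For example, the following AWS CLI commands show the complete and abort operations. These are
minimal sketches; the bucket name, object key, upload ID, and ETag values are placeholders.

# Complete the upload by supplying the part numbers and their corresponding ETag values.
aws s3api complete-multipart-upload \
    --bucket examplebucket \
    --key large-object.zip \
    --upload-id "EXAMPLE-UPLOAD-ID" \
    --multipart-upload '{"Parts": [{"PartNumber": 1, "ETag": "exampleetag1"}, {"PartNumber": 2, "ETag": "exampleetag2"}]}'

# Or abort the upload, which deletes any parts that were uploaded.
aws s3api abort-multipart-upload \
    --bucket examplebucket \
    --key large-object.zip \
    --upload-id "EXAMPLE-UPLOAD-ID"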

Multipart Upload Listings

You can list the parts of a specific multipart upload or all in-progress multipart uploads. The list parts
operation returns the parts information that you have uploaded for a specific multipart upload. For each
list parts request, Amazon S3 returns the parts information for the specified multipart upload, up to a
maximum of 1,000 parts. If there are more than 1,000 parts in the multipart upload, you must send a
series of list parts requests to retrieve all the parts. Note that the returned list of parts doesn't include
parts that haven't completed uploading.

Using the list multipart uploads operation, you can obtain a list of multipart uploads that are in progress.
An in-progress multipart upload is an upload that you have initiated, but have not yet completed or
aborted. Each request returns at most 1,000 multipart uploads. If there are more than 1,000 multipart
uploads in progress, you need to send additional requests to retrieve the remaining multipart uploads.

Use the returned listing only for verification. You should not use the result of this listing when sending a
complete multipart upload request. Instead, maintain your own list of the part numbers that you specified
when uploading parts and the corresponding ETag values that Amazon S3 returns.
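
For example, the following AWS CLI commands list the parts of one multipart upload and all in-progress
multipart uploads in a bucket. These are minimal sketches; the bucket name, object key, and upload ID
are placeholders.

# List the parts uploaded for a specific multipart upload.
aws s3api list-parts \
    --bucket examplebucket \
    --key large-object.zip \
    --upload-id "EXAMPLE-UPLOAD-ID"

# List all in-progress multipart uploads in the bucket.
aws s3api list-multipart-uploads \
    --bucket examplebucket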

Concurrent Multipart Upload Operations


In a distributed development environment, it is possible for your application to initiate several updates
on the same object at the same time. Your application might initiate several multipart uploads using the
same object key. For each of these uploads, your application can then upload parts and send a complete
upload request to Amazon S3 to create the object. When a bucket has versioning enabled, completing a
multipart upload always creates a new version. For buckets that do not have versioning enabled, it is
possible that some other request received between the time when a multipart upload is initiated and
when it is completed might take precedence.
Note
It is possible for some other request received between the time you initiated a multipart upload
and completed it to take precedence. For example, if another operation deletes a key after you
initiate a multipart upload with that key, but before you complete it, the complete multipart
upload response might indicate a successful object creation without you ever seeing the object.

Multipart Upload and Pricing

Once you initiate a multipart upload, Amazon S3 retains all the parts until you either complete or
abort the upload. Throughout its lifetime, you are billed for all storage, bandwidth, and requests for
this multipart upload and its associated parts. If you abort the multipart upload, Amazon S3 deletes
upload artifacts and any parts that you have uploaded, and you are no longer billed for them. For more
information about pricing, see Amazon S3 Pricing.

Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy

After you initiate a multipart upload, you begin uploading parts. Amazon S3 stores these parts, but it
creates the object from the parts only after you upload all of them and send a successful request
to complete the multipart upload (you should verify that your request to complete multipart upload is
successful). Upon receiving the complete multipart upload request, Amazon S3 assembles the parts and
creates an object.

If you don't send the complete multipart upload request successfully, Amazon S3 will not assemble the
parts and will not create any object. Therefore, the parts remain in Amazon S3 and you pay for the parts
that are stored in Amazon S3. As a best practice, we recommend you configure a lifecycle rule (using the
AbortIncompleteMultipartUpload action) to minimize your storage costs.

Amazon S3 supports a bucket lifecycle rule that you can use to direct Amazon S3 to abort multipart
uploads that don't complete within a specified number of days after being initiated. When a multipart
upload is not completed within the time frame, it becomes eligible for an abort operation and Amazon
S3 aborts the multipart upload (and deletes the parts associated with the multipart upload).

The following is an example lifecycle configuration that specifies a rule with the
AbortIncompleteMultipartUpload action.

<LifecycleConfiguration>
    <Rule>
        <ID>sample-rule</ID>
        <Prefix></Prefix>
        <Status>Enabled</Status>
        <AbortIncompleteMultipartUpload>
            <DaysAfterInitiation>7</DaysAfterInitiation>
        </AbortIncompleteMultipartUpload>
    </Rule>
</LifecycleConfiguration>

In the example, the rule does not specify a value for the Prefix element (object key name prefix)
and therefore it applies to all objects in the bucket for which you initiated multipart uploads. Any
multipart uploads that were initiated and did not complete within seven days become eligible for an
abort operation (the action has no effect on completed multipart uploads).

For more information about the bucket lifecycle configuration, see Object Lifecycle
Management (p. 115).

Note
If the multipart upload is completed within the number of days specified in the rule, the
AbortIncompleteMultipartUpload lifecycle action does not apply (that is, Amazon S3 does not
take any action). Also, this action does not apply to objects; no objects are deleted by this
lifecycle action.

The following put-bucket-lifecycle CLI command adds the lifecycle configuration for the
specified bucket.

$ aws s3api put-bucket-lifecycle \
    --bucket bucketname \
    --lifecycle-configuration filename-containing-lifecycle-configuration

To test the CLI command, do the following:

1. Set up the AWS CLI. For instructions, see Setting Up the AWS CLI (p. 661).
2. Save the following example lifecycle configuration in a file (lifecycle.json). The example
configuration specifies an empty prefix and therefore applies to all objects in the bucket. You can
specify a prefix to restrict the policy to a subset of objects.

{
    "Rules": [
        {
            "ID": "Test Rule",
            "Status": "Enabled",
            "Prefix": "",
            "AbortIncompleteMultipartUpload": {
                "DaysAfterInitiation": 7
            }
        }
    ]
}

3. Run the following CLI command to set lifecycle configuration on your bucket.

aws s3api put-bucket-lifecycle \
    --bucket bucketname \
    --lifecycle-configuration file://lifecycle.json

4. To verify, retrieve the lifecycle configuration using the get-bucket-lifecycle CLI command.

aws s3api get-bucket-lifecycle \
    --bucket bucketname

5. To delete the lifecycle configuration, use the delete-bucket-lifecycle CLI command.

aws s3api delete-bucket-lifecycle \
    --bucket bucketname

Amazon S3 Multipart Upload Limits


The following list provides the multipart upload core specifications. For more information, see Multipart
Upload Overview (p. 171).

• Maximum object size: 5 TB
• Maximum number of parts per upload: 10,000
• Part numbers: 1 to 10,000 (inclusive)
• Part size: 5 MB to 5 GB; the last part can be less than 5 MB
• Maximum number of parts returned for a list parts request: 1,000
• Maximum number of multipart uploads returned in a list multipart uploads request: 1,000

API Support for Multipart Upload

You can use an AWS SDK to upload an object in parts. The following AWS SDK libraries support multipart
upload:

• AWS SDK for Java


• AWS SDK for .NET
• AWS SDK for PHP

These libraries provide a high-level abstraction that makes uploading multipart objects easy. However,
if your application requires it, you can use the REST API directly. The following sections in the Amazon
Simple Storage Service API Reference describe the REST API for multipart upload.

• Initiate Multipart Upload


• Upload Part
• Upload Part (Copy)
• Complete Multipart Upload
• Abort Multipart Upload
• List Parts
• List Multipart Uploads

Multipart Upload API and Permissions

An individual must have the necessary permissions to use the multipart upload operations. You can use
ACLs, the bucket policy, or the user policy to grant individuals permissions to perform these operations.
The following list shows the required permissions for various multipart upload operations when using
ACLs, the bucket policy, or the user policy.

Initiate Multipart Upload
You must be allowed to perform the s3:PutObject action on an object to initiate a multipart upload. The
bucket owner can allow other principals to perform the s3:PutObject action.

Initiator
Container element that identifies who initiated the multipart upload. If the initiator is an AWS account,
this element provides the same information as the Owner element. If the initiator is an IAM user, this
element provides the user ARN and display name.

Upload Part
You must be allowed to perform the s3:PutObject action on an object to upload a part. Only the initiator
of a multipart upload can upload parts. The bucket owner must allow the initiator to perform the
s3:PutObject action on an object in order for the initiator to upload a part for that object.

Upload Part (Copy)
You must be allowed to perform the s3:PutObject action on an object to upload a part. Because you are
uploading a part from an existing object, you must also be allowed to perform s3:GetObject on the
source object. Only the initiator of a multipart upload can upload parts. For the initiator to upload a part
for an object, the owner of the bucket must allow the initiator to perform the s3:PutObject action on the
object.

Complete Multipart Upload
You must be allowed to perform the s3:PutObject action on an object to complete a multipart upload.
Only the initiator of a multipart upload can complete that multipart upload. The bucket owner must allow
the initiator to perform the s3:PutObject action on an object in order for the initiator to complete a
multipart upload for that object.

Abort Multipart Upload
You must be allowed to perform the s3:AbortMultipartUpload action to abort a multipart upload. By
default, the bucket owner and the initiator of the multipart upload are allowed to perform this action. If
the initiator is an IAM user, that user's AWS account is also allowed to abort that multipart upload. In
addition to these defaults, the bucket owner can allow other principals to perform the
s3:AbortMultipartUpload action on an object. The bucket owner can deny any principal the ability to
perform the s3:AbortMultipartUpload action.

List Parts
You must be allowed to perform the s3:ListMultipartUploadParts action to list parts in a multipart
upload. By default, the bucket owner has permission to list parts for any multipart upload to the bucket.
The initiator of the multipart upload has the permission to list parts of the specific multipart upload. If
the multipart upload initiator is an IAM user, the AWS account controlling that IAM user also has
permission to list parts of that upload. In addition to these defaults, the bucket owner can allow other
principals to perform the s3:ListMultipartUploadParts action on an object. The bucket owner can also
deny any principal the ability to perform the s3:ListMultipartUploadParts action.

List Multipart Uploads
You must be allowed to perform the s3:ListBucketMultipartUploads action on a bucket to list multipart
uploads in progress to that bucket. In addition to the default, the bucket owner can allow other principals
to perform the s3:ListBucketMultipartUploads action on the bucket.
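
For example, a bucket owner can use a bucket policy to grant a user in another account the permissions
needed to upload parts, list them, and abort the upload. The following AWS CLI command is a minimal
sketch; the bucket name, account ID, and user name are placeholders, and you could equally supply the
policy from a file.

aws s3api put-bucket-policy \
    --bucket examplebucket \
    --policy '{
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowMultipartUploadOperations",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/ExampleUser"},
            "Action": [
                "s3:PutObject",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": "arn:aws:s3:::examplebucket/*"
        }]
    }'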

For information on the relationship between ACL permissions and permissions in access policies, see
Mapping of ACL Permissions and Access Policy Permissions (p. 373). For information on IAM users, go
to Working with Users and Groups.

Using the AWS Java SDK for Multipart Upload (High-Level API)
Topics
• Upload a File (p. 177)
• Abort Multipart Uploads (p. 178)
• Track Multipart Upload Progress (p. 179)

The AWS SDK for Java exposes a high-level API, called TransferManager, that simplifies multipart
uploads (see Uploading Objects Using Multipart Upload API (p. 171)). You can upload data from a file
or a stream. You can also set advanced options, such as the part size you want to use for the multipart
upload, or the number of concurrent threads you want to use when uploading the parts. You can also
set optional object properties, the storage class, or the ACL. You use the PutObjectRequest and the
TransferManagerConfiguration classes to set these advanced options.

When possible, TransferManager attempts to use multiple threads to upload multiple parts of a single
upload at once. When dealing with large content sizes and high bandwidth, this can increase throughput
significantly.

In addition to file-upload functionality, the TransferManager class enables you to abort an in-progress
multipart upload. An upload is considered to be in progress after you initiate it and until you complete or
abort it. The TransferManager aborts all in-progress multipart uploads on a specified bucket that were
initiated before a specified date and time.

For more information about multipart uploads, including additional functionality offered by the low-
level API methods, see Uploading Objects Using Multipart Upload API (p. 171).

Upload a File

Example

The following example shows how to upload an object using the high-level multipart-upload Java API
(the TransferManager class). For instructions on creating and testing a working sample, see Testing
the Amazon S3 Java Code Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.File;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

public class HighLevelMultipartUpload {

public static void main(String[] args) throws Exception {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";
String keyName = "*** Object key ***";
String filePath = "*** Path for file to upload ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()

.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();
TransferManager tm = TransferManagerBuilder.standard()
.withS3Client(s3Client)
.build();

// TransferManager processes all transfers asynchronously,


// so this call returns immediately.
Upload upload = tm.upload(bucketName, keyName, new File(filePath));
System.out.println("Object upload started");

// Optionally, wait for the upload to finish before continuing.


upload.waitForCompletion();
System.out.println("Object upload complete");
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Abort Multipart Uploads

Example

The following example uses the high-level Java API (the TransferManager class) to abort all in-
progress multipart uploads that were initiated on a specific bucket over a week ago. For instructions on
creating and testing a working sample, see Testing the Amazon S3 Java Code Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.util.Date;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;

public class HighLevelAbortMultipartUpload {

public static void main(String[] args) {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();
TransferManager tm = TransferManagerBuilder.standard()
.withS3Client(s3Client)

.build();

// sevenDays is the duration of seven days in milliseconds.


long sevenDays = 1000 * 60 * 60 * 24 * 7;
Date oneWeekAgo = new Date(System.currentTimeMillis() - sevenDays);
tm.abortMultipartUploads(bucketName, oneWeekAgo);
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client couldn't
// parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Track Multipart Upload Progress

The high-level Java multipart upload API provides a listener interface, ProgressListener, to track
progress when uploading an object to Amazon S3. Progress events periodically notify the listener that
bytes have been transferred.

The following example demonstrates how to subscribe to a ProgressEvent event and write a handler:

Example

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.File;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.event.ProgressEvent;
import com.amazonaws.event.ProgressListener;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

public class HighLevelTrackMultipartUpload {

public static void main(String[] args) throws Exception {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";
String keyName = "*** Object key ***";
String filePath = "*** Path to file to upload ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();
TransferManager tm = TransferManagerBuilder.standard()
.withS3Client(s3Client)
.build();

PutObjectRequest request = new PutObjectRequest(bucketName, keyName, new File(filePath));

// To receive notifications when bytes are transferred, add a


// ProgressListener to your request.
request.setGeneralProgressListener(new ProgressListener() {
public void progressChanged(ProgressEvent progressEvent) {
System.out.println("Transferred bytes: " +
progressEvent.getBytesTransferred());
}
});
// TransferManager processes all transfers asynchronously,
// so this call returns immediately.
Upload upload = tm.upload(request);

// Optionally, you can wait for the upload to finish before continuing.
upload.waitForCompletion();
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Using the AWS Java SDK for a Multipart Upload (Low-Level API)
Topics
• Upload a File (p. 181)
• List Multipart Uploads (p. 183)
• Abort a Multipart Upload (p. 184)

The AWS SDK for Java exposes a low-level API that closely resembles the Amazon S3 REST API for
multipart uploads (see Uploading Objects Using Multipart Upload API (p. 171)). Use the low-level API
when you need to pause and resume multipart uploads, vary part sizes during the upload, or do not
know the size of the upload data in advance. When you don't have these requirements, use the high-level
API (see Using the AWS Java SDK for Multipart Upload (High-Level API) (p. 177)).

Upload a File

The following example shows how to use the low-level Java classes to upload a file. It performs the
following steps:

• Initiates a multipart upload using the AmazonS3Client.initiateMultipartUpload() method,


and passes in an InitiateMultipartUploadRequest object.
• Saves the upload ID that the AmazonS3Client.initiateMultipartUpload() method returns.
You provide this upload ID for each subsequent multipart upload operation.
• Uploads the parts of the object. For each part, you call the AmazonS3Client.uploadPart()
method. You provide part upload information using an UploadPartRequest object.
• For each part, saves the ETag from the response of the AmazonS3Client.uploadPart() method in
a list. You use the ETag values to complete the multipart upload.
• Calls the AmazonS3Client.completeMultipartUpload() method to complete the multipart
upload.

Example

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.UploadPartRequest;
import com.amazonaws.services.s3.model.UploadPartResult;

public class LowLevelMultipartUpload {

public static void main(String[] args) throws IOException {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";
String keyName = "*** Key name ***";
String filePath = "*** Path to file to upload ***";

File file = new File(filePath);


long contentLength = file.length();
long partSize = 5 * 1024 * 1024; // Set part size to 5 MB.

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();

// Create a list of ETag objects. You retrieve ETags for each object part uploaded,
// then, after each individual part has been uploaded, pass the list of ETags to
// the request to complete the upload.
List<PartETag> partETags = new ArrayList<PartETag>();

// Initiate the multipart upload.


InitiateMultipartUploadRequest initRequest = new
InitiateMultipartUploadRequest(bucketName, keyName);
InitiateMultipartUploadResult initResponse =
s3Client.initiateMultipartUpload(initRequest);

// Upload the file parts.


long filePosition = 0;
for (int i = 1; filePosition < contentLength; i++) {
// Because the last part could be less than 5 MB, adjust the part size as needed.
partSize = Math.min(partSize, (contentLength - filePosition));

// Create the request to upload a part.


UploadPartRequest uploadRequest = new UploadPartRequest()
.withBucketName(bucketName)
.withKey(keyName)
.withUploadId(initResponse.getUploadId())
.withPartNumber(i)
.withFileOffset(filePosition)
.withFile(file)
.withPartSize(partSize);

// Upload the part and add the response's ETag to our list.
UploadPartResult uploadResult = s3Client.uploadPart(uploadRequest);
partETags.add(uploadResult.getPartETag());

filePosition += partSize;
}

// Complete the multipart upload.


CompleteMultipartUploadRequest compRequest = new
CompleteMultipartUploadRequest(bucketName, keyName,
initResponse.getUploadId(), partETags);
s3Client.completeMultipartUpload(compRequest);
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client

// couldn't parse the response from Amazon S3.


e.printStackTrace();
}
}
}

List Multipart Uploads

Example

The following example shows how to retrieve a list of in-progress multipart uploads using the low-level
Java API:

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.util.List;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListMultipartUploadsRequest;
import com.amazonaws.services.s3.model.MultipartUpload;
import com.amazonaws.services.s3.model.MultipartUploadListing;

public class ListMultipartUploads {

public static void main(String[] args) {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Retrieve a list of all in-progress multipart uploads.


ListMultipartUploadsRequest allMultipartUploadsRequest = new
ListMultipartUploadsRequest(bucketName);
MultipartUploadListing multipartUploadListing =
s3Client.listMultipartUploads(allMultipartUploadsRequest);
List<MultipartUpload> uploads = multipartUploadListing.getMultipartUploads();

// Display information about all in-progress multipart uploads.


System.out.println(uploads.size() + " multipart upload(s) in progress.");
for (MultipartUpload u : uploads) {
System.out.println("Upload in progress: Key = \"" + u.getKey() + "\", id =
" + u.getUploadId());
}
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}

}
}

Abort a Multipart Upload

You can abort an in-progress multipart upload by calling the AmazonS3Client.abortMultipartUpload()
method. This method deletes all parts that were uploaded to Amazon S3 and frees the resources. You
provide the upload ID, bucket name, and key name.

Example

The following example shows how to abort multipart uploads using the low-level Java API.

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.util.List;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.AbortMultipartUploadRequest;
import com.amazonaws.services.s3.model.ListMultipartUploadsRequest;
import com.amazonaws.services.s3.model.MultipartUpload;
import com.amazonaws.services.s3.model.MultipartUploadListing;

public class LowLevelAbortMultipartUpload {

public static void main(String[] args) {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withRegion(clientRegion)
.withCredentials(new ProfileCredentialsProvider())
.build();

// Find all in-progress multipart uploads.


ListMultipartUploadsRequest allMultipartUploadsRequest = new
ListMultipartUploadsRequest(bucketName);
MultipartUploadListing multipartUploadListing =
s3Client.listMultipartUploads(allMultipartUploadsRequest);

List<MultipartUpload> uploads = multipartUploadListing.getMultipartUploads();


System.out.println("Before deletions, " + uploads.size() + " multipart uploads
in progress.");

// Abort each upload.


for (MultipartUpload u : uploads) {
System.out.println("Upload in progress: Key = \"" + u.getKey() + "\", id =
" + u.getUploadId());
s3Client.abortMultipartUpload(new AbortMultipartUploadRequest(bucketName,
u.getKey(), u.getUploadId()));
System.out.println("Upload deleted: Key = \"" + u.getKey() + "\", id = " +
u.getUploadId());
}

// Verify that all in-progress multipart uploads have been aborted.


multipartUploadListing =
s3Client.listMultipartUploads(allMultipartUploadsRequest);

uploads = multipartUploadListing.getMultipartUploads();
System.out.println("After aborting uploads, " + uploads.size() + " multipart
uploads in progress.");
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Note
Instead of aborting multipart uploads individually, you can abort all of your in-progress
multipart uploads that were initiated before a specific time. This clean-up operation is useful
for aborting multipart uploads that you initiated but that didn't complete or were aborted. For
more information, see Abort Multipart Uploads (p. 178).

Using the AWS SDK for .NET for Multipart Upload (High-Level API)
Topics
• Upload a File to an S3 Bucket Using the AWS SDK for .NET (High-Level API) (p. 186)
• Upload a Directory (p. 187)
• Abort Multipart Uploads to an S3 Bucket Using the AWS SDK for .NET (High-Level API) (p. 189)
• Track the Progress of a Multipart Upload to an S3 Bucket Using the AWS SDK for .NET (High-level
API) (p. 190)

The AWS SDK for .NET exposes a high-level API that simplifies multipart uploads (see Uploading Objects
Using Multipart Upload API (p. 171)). You can upload data from a file, a directory, or a stream. For more
information about Amazon S3 multipart uploads, see Multipart Upload Overview (p. 171).

The TransferUtility class provides methods for uploading files and directories, tracking upload
progress, and aborting multipart uploads.

Upload a File to an S3 Bucket Using the AWS SDK for .NET (High-Level API)

To upload a file to an S3 bucket, use the TransferUtility class. When uploading data from a file,
you must provide the object's key name. If you don't, the API uses the file name for the key name. When
uploading data from a stream, you must provide the object's key name.

To set advanced upload options—such as the part size, the number of threads when uploading the parts
concurrently, metadata, the storage class, or ACL—use the TransferUtilityUploadRequest class.

The following C# example uploads a file to an Amazon S3 bucket in multiple parts. It shows how to use
various TransferUtility.Upload overloads to upload a file. Each successive call to upload replaces
the previous upload. For information about the example's compatibility with a specific version of the
AWS SDK for .NET and instructions for creating and testing a working sample, see Running the Amazon
S3 .NET Code Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class UploadFileMPUHighLevelAPITest
{
private const string bucketName = "*** provide bucket name ***";
private const string keyName = "*** provide a name for the uploaded object ***";
private const string filePath = "*** provide the full path name of the file to
upload ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
UploadFileAsync().Wait();
}

private static async Task UploadFileAsync()


{
try
{
var fileTransferUtility =
new TransferUtility(s3Client);

// Option 1. Upload a file. The file name is used as the object key name.
await fileTransferUtility.UploadAsync(filePath, bucketName);
Console.WriteLine("Upload 1 completed");

// Option 2. Specify object key name explicitly.


await fileTransferUtility.UploadAsync(filePath, bucketName, keyName);
Console.WriteLine("Upload 2 completed");

// Option 3. Upload data from a type of System.IO.Stream.


using (var fileToUpload =
new FileStream(filePath, FileMode.Open, FileAccess.Read))
{
await fileTransferUtility.UploadAsync(fileToUpload,
bucketName, keyName);
}
Console.WriteLine("Upload 3 completed");

// Option 4. Specify advanced settings.


var fileTransferUtilityRequest = new TransferUtilityUploadRequest
{
BucketName = bucketName,
FilePath = filePath,
StorageClass = S3StorageClass.StandardInfrequentAccess,
PartSize = 6291456, // 6 MB.
Key = keyName,
CannedACL = S3CannedACL.PublicRead
};
fileTransferUtilityRequest.Metadata.Add("param1", "Value1");
fileTransferUtilityRequest.Metadata.Add("param2", "Value2");

await fileTransferUtility.UploadAsync(fileTransferUtilityRequest);
Console.WriteLine("Upload 4 completed");
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when writing
an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}

}
}
}

More Info

AWS SDK for .NET

Upload a Directory

You can use the TransferUtility class to upload an entire directory. By default, the API uploads only
the files at the root of the specified directory. You can, however, specify that files in all of the
subdirectories be uploaded recursively.

To select files in the specified directory based on filtering criteria, specify filtering expressions. For
example, to upload only the .pdf files from a directory, specify the "*.pdf" filter expression.

When uploading files from a directory, you don't specify the key names for the resulting objects. Amazon
S3 constructs the key names using the original file path. For example, assume that you have a directory
called c:\myfolder with the following structure:

Example

C:\myfolder
\a.txt
\b.pdf
\media\
An.mp3

When you upload this directory, Amazon S3 uses the following key names:

Example

a.txt
b.pdf
media/An.mp3

Example

The following C# example uploads a directory to an Amazon S3 bucket. It shows how to use various
TransferUtility.UploadDirectory overloads to upload the directory. Each successive call to
upload replaces the previous upload. For instructions on how to create and test a working sample, see
Running the Amazon S3 .NET Code Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class UploadDirMPUHighLevelAPITest
{
private const string existingBucketName = "*** bucket name ***";
private const string directoryPath = @"*** directory path ***";
// The example uploads only .txt files.
private const string wildCard = "*.txt";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;
static void Main()
{
s3Client = new AmazonS3Client(bucketRegion);
UploadDirAsync().Wait();
}

private static async Task UploadDirAsync()


{
try

{
var directoryTransferUtility =
new TransferUtility(s3Client);

// 1. Upload a directory.
await directoryTransferUtility.UploadDirectoryAsync(directoryPath,
existingBucketName);
Console.WriteLine("Upload statement 1 completed");

// 2. Upload only the .txt files from a directory


// and search recursively.
await directoryTransferUtility.UploadDirectoryAsync(
directoryPath,
existingBucketName,
wildCard,
SearchOption.AllDirectories);
Console.WriteLine("Upload statement 2 completed");

// 3. The same as Step 2 and some optional configuration.


// Search recursively for .txt files to upload.
var request = new TransferUtilityUploadDirectoryRequest
{
BucketName = existingBucketName,
Directory = directoryPath,
SearchOption = SearchOption.AllDirectories,
SearchPattern = wildCard
};

await directoryTransferUtility.UploadDirectoryAsync(request);
Console.WriteLine("Upload statement 3 completed");
}
catch (AmazonS3Exception e)
{
Console.WriteLine(
"Error encountered ***. Message:'{0}' when writing an object",
e.Message);
}
catch (Exception e)
{
Console.WriteLine(
"Unknown encountered on server. Message:'{0}' when writing an object",
e.Message);
}
}
}
}

Abort Multipart Uploads to an S3 Bucket Using the AWS SDK for .NET (High-Level API)

To abort in-progress multipart uploads, use the TransferUtility class from the AWS SDK for .NET.
You provide a DateTime value. The API then aborts all of the multipart uploads that were initiated
before the specified date and time and removes the uploaded parts. An upload is considered to be in
progress after you initiate it and until you complete or abort it.

Because you are billed for all storage associated with uploaded parts, it's important that you either
complete the multipart upload to finish creating the object or abort it to remove uploaded parts. For
more information about Amazon S3 multipart uploads, see Multipart Upload Overview (p. 171). For
information about pricing, see Multipart Upload and Pricing (p. 173).

The following C# example aborts all in-progress multipart uploads that were initiated on a specific
bucket over a week ago. For information about the example's compatibility with a specific version of the
AWS SDK for .NET and instructions on creating and testing a working sample, see Running the Amazon
S3 .NET Code Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class AbortMPUUsingHighLevelAPITest
{
private const string bucketName = "*** provide bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
AbortMPUAsync().Wait();
}

private static async Task AbortMPUAsync()


{
try
{
var transferUtility = new TransferUtility(s3Client);

// Abort all in-progress uploads initiated before the specified date.


await transferUtility.AbortMultipartUploadsAsync(
bucketName, DateTime.Now.AddDays(-7));
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when writing
an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
}
}

Note
You can also abort a specific multipart upload. For more information, see List Multipart Uploads
to an S3 Bucket Using the AWS SDK for .NET (Low-level) (p. 195).

More Info

AWS SDK for .NET

Track the Progress of a Multipart Upload to an S3 Bucket Using the AWS SDK for .NET (High-level API)

The following C# example uploads a file to an S3 bucket using the TransferUtility class, and tracks
the progress of the upload. For information about the example's compatibility with a specific version
of the AWS SDK for .NET and instructions for creating and testing a working sample, see Running the
Amazon S3 .NET Code Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Transfer;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class TrackMPUUsingHighLevelAPITest
{
private const string bucketName = "*** provide the bucket name ***";
private const string keyName = "*** provide the name for the uploaded object ***";
private const string filePath = " *** provide the full path name of the file to
upload **";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
TrackMPUAsync().Wait();
}

private static async Task TrackMPUAsync()


{
try
{
var fileTransferUtility = new TransferUtility(s3Client);

// Use TransferUtilityUploadRequest to configure options.


// In this example we subscribe to an event.
var uploadRequest =
new TransferUtilityUploadRequest
{
BucketName = bucketName,
FilePath = filePath,
Key = keyName
};

uploadRequest.UploadProgressEvent +=
new EventHandler<UploadProgressArgs>
(uploadRequest_UploadPartProgressEvent);

await fileTransferUtility.UploadAsync(uploadRequest);
Console.WriteLine("Upload completed");
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when writing
an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}

static void uploadRequest_UploadPartProgressEvent(object sender, UploadProgressArgs e)

{
// Process event.
Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
}
}
}

More Info

AWS SDK for .NET

Using the AWS SDK for .NET for Multipart Upload (Low-Level API)
The AWS SDK for .NET exposes a low-level API that closely resembles the Amazon S3 REST API for
multipart upload (see Using the REST API for Multipart Upload (p. 203)). Use the low-level API when
you need to pause and resume multipart uploads, vary part sizes during the upload, or when you do
not know the size of the data in advance. Use the high-level API (see Using the AWS SDK for .NET for
Multipart Upload (High-Level API) (p. 186)) whenever you don't have these requirements.

Topics
• Upload a File to an S3 Bucket Using the AWS SDK for .NET (Low-Level API) (p. 193)
• List Multipart Uploads to an S3 Bucket Using the AWS SDK for .NET (Low-level) (p. 195)
• Track the Progress of a Multipart Upload to an S3 Bucket Using the AWS SDK for .NET (Low-
Level) (p. 196)
• Abort Multipart Uploads to an S3 Bucket Using the AWS SDK for .NET (Low-Level) (p. 196)

Upload a File to an S3 Bucket Using the AWS SDK for .NET (Low-Level API)

The following C# example shows how to use the low-level AWS SDK for .NET multipart upload API to
upload a file to an S3 bucket. For information about Amazon S3 multipart uploads, see Multipart Upload
Overview (p. 171).
Note
When you use the AWS SDK for .NET API to upload large objects, a timeout might occur
while data is being written to the request stream. You can set an explicit timeout using the
UploadPartRequest.

The following C# example uploads a file to an S3 bucket using the low-level multipart upload API.
For information about the example's compatibility with a specific version of the AWS SDK for .NET
and instructions for creating and testing a working sample, see Running the Amazon S3 .NET Code
Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class UploadFileMPULowLevelAPITest
{
private const string bucketName = "*** provide bucket name ***";
private const string keyName = "*** provide a name for the uploaded object ***";
private const string filePath = "*** provide the full path name of the file to
upload ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
Console.WriteLine("Uploading an object");

UploadObjectAsync().Wait();
}

private static async Task UploadObjectAsync()


{
// Create list to store upload part responses.
List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();

// Setup information required to initiate the multipart upload.


InitiateMultipartUploadRequest initiateRequest = new
InitiateMultipartUploadRequest
{
BucketName = bucketName,
Key = keyName
};

// Initiate the upload.


InitiateMultipartUploadResponse initResponse =
await s3Client.InitiateMultipartUploadAsync(initiateRequest);

// Upload parts.
long contentLength = new FileInfo(filePath).Length;
long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB

try
{
Console.WriteLine("Uploading parts");

long filePosition = 0;
for (int i = 1; filePosition < contentLength; i++)
{
UploadPartRequest uploadRequest = new UploadPartRequest
{
BucketName = bucketName,
Key = keyName,
UploadId = initResponse.UploadId,
PartNumber = i,
PartSize = partSize,
FilePosition = filePosition,
FilePath = filePath
};

// Track upload progress.


uploadRequest.StreamTransferProgress +=
new
EventHandler<StreamTransferProgressArgs>(UploadPartProgressEventCallback);

// Upload a part and add the response to our list.


uploadResponses.Add(await s3Client.UploadPartAsync(uploadRequest));

filePosition += partSize;
}

// Setup to complete the upload.


CompleteMultipartUploadRequest completeRequest = new
CompleteMultipartUploadRequest
{
BucketName = bucketName,
Key = keyName,
UploadId = initResponse.UploadId
};
completeRequest.AddPartETags(uploadResponses);

// Complete the upload.


CompleteMultipartUploadResponse completeUploadResponse =
await s3Client.CompleteMultipartUploadAsync(completeRequest);

}
catch (Exception exception)
{
Console.WriteLine("An AmazonS3Exception was thrown: { 0}",
exception.Message);

// Abort the upload.


AbortMultipartUploadRequest abortMPURequest = new
AbortMultipartUploadRequest
{
BucketName = bucketName,
Key = keyName,
UploadId = initResponse.UploadId
};
await s3Client.AbortMultipartUploadAsync(abortMPURequest);
}
}
public static void UploadPartProgressEventCallback(object sender,
StreamTransferProgressArgs e)
{
// Process event.
Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
}
}
}

More Info

AWS SDK for .NET

List Multipart Uploads to an S3 Bucket Using the AWS SDK for .NET (Low-level)

To list all of the in-progress multipart uploads on a specific bucket, use the AWS SDK
for .NET low-level multipart upload API's ListMultipartUploadsRequest class.
The AmazonS3Client.ListMultipartUploads method returns an instance of the
ListMultipartUploadsResponse class that provides information about the in-progress multipart
uploads.

An in-progress multipart upload is a multipart upload that has been initiated using the initiate multipart
upload request, but has not yet been completed or aborted. For more information about Amazon S3
multipart uploads, see Multipart Upload Overview (p. 171).

The following C# example shows how to use the AWS SDK for .NET to list all in-progress multipart
uploads on a bucket. For information about the example's compatibility with a specific version of the
AWS SDK for .NET and instructions on how to create and test a working sample, see Running the Amazon
S3 .NET Code Examples (p. 664).

ListMultipartUploadsRequest request = new ListMultipartUploadsRequest
{
    BucketName = bucketName // Bucket receiving the uploads.
};

ListMultipartUploadsResponse response = await s3Client.ListMultipartUploadsAsync(request);

More Info

AWS SDK for .NET

Track the Progress of a Multipart Upload to an S3 Bucket Using the AWS SDK for .NET (Low-Level)

To track the progress of a multipart upload, use the UploadPartRequest.StreamTransferProgress
event provided by the AWS SDK for .NET low-level multipart upload API. The event occurs periodically. It
returns information such as the total number of bytes to transfer and the number of bytes transferred.

The following C# example shows how to track the progress of multipart uploads. For a complete C#
sample that includes the following code, see Upload a File to an S3 Bucket Using the AWS SDK for .NET
(Low-Level API) (p. 193).

UploadPartRequest uploadRequest = new UploadPartRequest


{
// Provide the request data.
};

uploadRequest.StreamTransferProgress +=
new EventHandler<StreamTransferProgressArgs>(UploadPartProgressEventCallback);

...
public static void UploadPartProgressEventCallback(object sender,
StreamTransferProgressArgs e)
{
// Process the event.
Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
}

More Info

AWS SDK for .NET

Abort Multipart Uploads to an S3 Bucket Using the AWS SDK for .NET (Low-Level)

You can abort an in-progress multipart upload by calling the


AmazonS3Client.AbortMultipartUploadAsync method. In addition to aborting the upload, this
method deletes all parts that were uploaded to Amazon S3.

To abort a multipart upload, you provide the upload ID, and the bucket and key names that are used
in the upload. After you have aborted a multipart upload, you can't use the upload ID to upload
additional parts. For more information about Amazon S3 multipart uploads, see Multipart Upload
Overview (p. 171).

The following C# example shows how to abort a multipart upload. For a complete C# sample that
includes the following code, see Upload a File to an S3 Bucket Using the AWS SDK for .NET (Low-Level
API) (p. 193).

AbortMultipartUploadRequest abortMPURequest = new AbortMultipartUploadRequest
{
    BucketName = existingBucketName,
    Key = keyName,
    UploadId = initResponse.UploadId
};
await s3Client.AbortMultipartUploadAsync(abortMPURequest);

You can also abort all in-progress multipart uploads that were initiated prior to a specific time. This
clean-up operation is useful for aborting multipart uploads that didn't complete or were aborted. For
more information, see Abort Multipart Uploads to an S3 Bucket Using the AWS SDK for .NET (High-Level
API) (p. 189).
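As a rough sketch of that clean-up pattern, the following C# fragment uses the high-level TransferUtility
class (in the Amazon.S3.Transfer namespace) to abort every multipart upload in a bucket that was initiated
more than a week ago. The s3Client and bucketName variables and the one-week cutoff are placeholders
rather than part of the preceding sample.

// Sketch only: abort all multipart uploads initiated before a cutoff time.
// s3Client, bucketName, and the one-week cutoff are placeholder values.
var transferUtility = new TransferUtility(s3Client);
await transferUtility.AbortMultipartUploadsAsync(bucketName, DateTime.UtcNow.AddDays(-7));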


More Info

AWS SDK for .NET


Using the AWS PHP SDK for Multipart Upload


You can upload large files to Amazon S3 in multiple parts. You must use a multipart upload for files
larger than 5 GB. The AWS SDK for PHP exposes the MultipartUploader class that simplifies multipart
uploads.

The upload method of the MultipartUploader class is best used for a simple multipart upload. If you
need to pause and resume multipart uploads, vary part sizes during the upload, or do not know the size
of the data in advance, use the low-level PHP API. For more information, see Using the AWS PHP SDK for
Multipart Upload (Low-Level API) (p. 200).

For more information about multipart uploads, see Uploading Objects Using Multipart Upload
API (p. 171). For information on uploading files that are less than 5GB in size, see Upload an Object
Using the AWS SDK for PHP (p. 168).

Upload a File Using the High-Level Multipart Upload

This topic explains how to use the high-level Aws\S3\MultipartUploader class from version 3 of the
AWS SDK for PHP for multipart file uploads. It assumes that you are already following the
instructions for Using the AWS SDK for PHP and Running PHP Examples (p. 664) and have the AWS
SDK for PHP properly installed.

The following PHP example uploads a file to an Amazon S3 bucket. The example demonstrates how to
set parameters for the MultipartUploader object.

For information about running the PHP examples in this guide, see Running PHP Examples (p. 665).

<?php

require 'vendor/autoload.php';

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// Prepare the upload parameters.


$uploader = new MultipartUploader($s3, '/path/to/large/file.zip', [
'bucket' => $bucket,
'key' => $keyname
]);

// Perform the upload.


try {
$result = $uploader->upload();
echo "Upload complete: {$result['ObjectURL']}" . PHP_EOL;
} catch (MultipartUploadException $e) {
echo $e->getMessage() . PHP_EOL;
}

Related Resources

• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class


• Amazon S3 Multipart Uploads


• AWS SDK for PHP Documentation


Using the AWS PHP SDK for Multipart Upload (Low-Level API)
Topics
• Upload a File in Multiple Parts Using the PHP SDK Low-Level API (p. 200)
• List Multipart Uploads Using the Low-Level AWS SDK for PHP API (p. 201)
• Abort a Multipart Upload (p. 202)

The AWS SDK for PHP exposes a low-level API that closely resembles the Amazon S3 REST API for
multipart upload (see Using the REST API for Multipart Upload (p. 203) ). Use the low-level API when
you need to pause and resume multipart uploads, vary part sizes during the upload, or if you do not
know the size of the data in advance. Use the AWS SDK for PHP high-level abstractions (see Using the
AWS PHP SDK for Multipart Upload (p. 198)) whenever you don't have these requirements.

Upload a File in Multiple Parts Using the PHP SDK Low-Level API
This topic shows how to use the low-level uploadPart method from version 3 of the AWS SDK
for PHP to upload a file in multiple parts. It assumes that you are already following the instructions
for Using the AWS SDK for PHP and Running PHP Examples (p. 664) and have the AWS SDK for PHP
properly installed.

The following PHP example uploads a file to an Amazon S3 bucket using the low-level PHP API
multipart upload. For information about running the PHP examples in this guide, see Running PHP
Examples (p. 665).

<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';
$filename = '*** Path to and Name of the File to Upload ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

$result = $s3->createMultipartUpload([
'Bucket' => $bucket,
'Key' => $keyname,
'StorageClass' => 'REDUCED_REDUNDANCY',
'ACL' => 'public-read',
'Metadata' => [
'param1' => 'value 1',
'param2' => 'value 2',
'param3' => 'value 3'
]
]);
$uploadId = $result['UploadId'];

// Upload the file in parts.


try {
$file = fopen($filename, 'r');
$partNumber = 1;
while (!feof($file)) {
$result = $s3->uploadPart([
'Bucket' => $bucket,
'Key' => $keyname,
'UploadId' => $uploadId,


'PartNumber' => $partNumber,


'Body' => fread($file, 5 * 1024 * 1024),
]);
$parts['Parts'][$partNumber] = [
'PartNumber' => $partNumber,
'ETag' => $result['ETag'],
];
echo "Uploaded part {$partNumber} of {$filename}." . PHP_EOL;
$partNumber++;


}
fclose($file);
} catch (S3Exception $e) {
$result = $s3->abortMultipartUpload([
'Bucket' => $bucket,
'Key' => $keyname,
'UploadId' => $uploadId
]);

echo "Upload of {$filename} failed." . PHP_EOL;


}

// Complete the multipart upload.


$result = $s3->completeMultipartUpload([
'Bucket' => $bucket,
'Key' => $keyname,
'UploadId' => $uploadId,
'MultipartUpload' => $parts,
]);
$url = $result['Location'];

echo "Uploaded {$filename} to {$url}." . PHP_EOL;

Related Resources

• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class


• Amazon S3 Multipart Uploads
• AWS SDK for PHP Documentation

List Multipart Uploads Using the Low-Level AWS SDK for PHP API
This topic shows how to use the low-level API classes from version 3 of the AWS SDK for PHP to list all
in-progress multipart uploads on a bucket. It assumes that you are already following the instructions
for Using the AWS SDK for PHP and Running PHP Examples (p. 664) and have the AWS SDK for PHP
properly installed.

The following PHP example demonstrates listing all in-progress multipart uploads on a bucket.

<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// Retrieve a list of the current multipart uploads.


$result = $s3->listMultipartUploads([
'Bucket' => $bucket
]);

// Write the list of uploads to the page.


print_r($result->toArray());

Related Resources

• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class


• Amazon S3 Multipart Uploads
• AWS SDK for PHP Documentation

Abort a Multipart Upload

This topic describes how to use a class from version 3 of the AWS SDK for PHP to abort a multipart
upload that is in progress. It assumes that you are already following the instructions for Using the AWS
SDK for PHP and Running PHP Examples (p. 664) and have the AWS SDK for PHP properly installed.

The following PHP example shows how to abort an in-progress multipart upload using the
abortMultipartUpload() method. For information about running the PHP examples in this guide,
see Running PHP Examples (p. 665).

<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';
$uploadId = '*** Upload ID of upload to Abort ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// Abort the multipart upload.


$s3->abortMultipartUpload([
'Bucket' => $bucket,
'Key' => $keyname,
'UploadId' => $uploadId,
]);

Related Resources

• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class


• Amazon S3 Multipart Uploads
• AWS SDK for PHP Documentation

Using the AWS SDK for Ruby for Multipart Upload


The AWS SDK for Ruby version 3 supports Amazon S3 multipart uploads in two ways. For the first option,
you can use a managed file upload helper. This is the recommended method for uploading files to a
bucket and it provides the following benefits:

• Manages multipart uploads for objects larger than 15MB.


• Correctly opens files in binary mode to avoid encoding issues.


• Uses multiple threads for uploading parts of large objects in parallel.

For more information, see Uploading Files to Amazon S3 in the AWS Developer Blog.

Alternatively, you can use the following multipart upload client operations directly:

• create_multipart_upload – Initiates a multipart upload and returns an upload ID.


• upload_part – Uploads a part in a multipart upload.
• upload_part_copy – Uploads a part by copying data from an existing object as data source.
• complete_multipart_upload – Completes a multipart upload by assembling previously uploaded parts.
• abort_multipart_upload – Aborts a multipart upload.

For more information, see Using the AWS SDK for Ruby - Version 3 (p. 665).

Using the REST API for Multipart Upload


The following sections in the Amazon Simple Storage Service API Reference describe the REST API for
multipart upload.

• Initiate Multipart Upload


• Upload Part
• Complete Multipart Upload
• Abort Multipart Upload
• List Parts
• List Multipart Uploads

You can use these APIs to make your own REST requests, or you can use one of the SDKs we provide. For
more information about the SDKs, see API Support for Multipart Upload (p. 175).

Uploading Objects Using Presigned URLs


Topics
• Upload an Object Using a Presigned URL (AWS SDK for Java) (p. 204)
• Upload an Object to an S3 Bucket Using a Presigned URL (AWS SDK for .NET) (p. 205)
• Upload an Object Using a Presigned URL (AWS SDK for Ruby) (p. 206)

A presigned URL gives you access to the object identified in the URL, provided that the creator of the
presigned URL has permissions to access that object. That is, if you receive a presigned URL to upload an
object, you can upload the object only if the creator of the presigned URL has the necessary permissions
to upload that object.

All objects and buckets by default are private. The presigned URLs are useful if you want your user/
customer to be able to upload a specific object to your bucket, but you don't require them to have AWS
security credentials or permissions. When you create a presigned URL, you must provide your security
credentials and then specify a bucket name, an object key, an HTTP method (PUT for uploading objects),
and an expiration date and time. The presigned URLs are valid only for the specified duration.

You can generate a presigned URL programmatically using the AWS SDK for Java or the AWS SDK
for .NET. If you are using Microsoft Visual Studio, you can also use AWS Explorer to generate a
presigned object URL without writing any code. Anyone who receives a valid presigned URL can then
programmatically upload an object.


For more information, go to Using Amazon S3 from AWS Explorer.

For instructions about how to install AWS Explorer, see Using the AWS SDKs, CLI, and Explorers (p. 655).
Note
Anyone with valid security credentials can create a presigned URL. However, in order for you
to successfully upload an object, the presigned URL must be created by someone who has
permission to perform the operation that the presigned URL is based upon.

Upload an Object Using a Presigned URL (AWS SDK for Java)


You can use the AWS SDK for Java to generate a presigned URL that you, or anyone you give the URL,
can use to upload an object to Amazon S3. When you use the URL to upload an object, Amazon S3
creates the object in the specified bucket. If an object with the same key that is specified in the presigned
URL already exists in the bucket, Amazon S3 replaces the existing object with the uploaded object. To
successfully complete an upload, you must do the following:

• Specify the HTTP PUT verb when creating the GeneratePresignedUrlRequest and
HttpURLConnection objects.
• Interact with the HttpURLConnection object in some way after finishing the upload. The following
example accomplishes this by using the HttpURLConnection object to check the HTTP response
code.

Example
This example generates a presigned URL and uses it to upload sample data as an object. For instructions
on creating and testing a working sample, see Testing the Amazon S3 Java Code Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.HttpMethod;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import com.amazonaws.services.s3.model.S3Object;

public class GeneratePresignedUrlAndUploadObject {

public static void main(String[] args) throws IOException {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";
String objectKey = "*** Object key ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Set the pre-signed URL to expire after one hour.


java.util.Date expiration = new java.util.Date();
long expTimeMillis = expiration.getTime();


expTimeMillis += 1000 * 60 * 60;


expiration.setTime(expTimeMillis);

// Generate the pre-signed URL.


System.out.println("Generating pre-signed URL.");
GeneratePresignedUrlRequest generatePresignedUrlRequest = new
GeneratePresignedUrlRequest(bucketName, objectKey)
.withMethod(HttpMethod.PUT)
.withExpiration(expiration);
URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);

// Create the connection and use it to upload the new object using the pre-signed URL.
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("PUT");
OutputStreamWriter out = new OutputStreamWriter(connection.getOutputStream());
out.write("This text uploaded as an object via presigned URL.");
out.close();

// Check the HTTP response code. To complete the upload and make the object available,
// you must interact with the connection object in some way.
connection.getResponseCode();
System.out.println("HTTP response code: " + connection.getResponseCode());

// Check to make sure that the object was uploaded successfully.


S3Object object = s3Client.getObject(bucketName, objectKey);
System.out.println("Object " + object.getKey() + " created in bucket " +
object.getBucketName());
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Upload an Object to an S3 Bucket Using a Presigned URL (AWS SDK for .NET)
The following C# example shows how to use the AWS SDK for .NET to upload an object to an S3 bucket
using a presigned URL. For more information about presigned URLs, see Uploading Objects Using
Presigned URLs (p. 203).

This example generates a presigned URL for a specific object and uses it to upload a file. For information
about the example's compatibility with a specific version of the AWS SDK for .NET and instructions about
how to create and test a working sample, see Running the Amazon S3 .NET Code Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.IO;
using System.Net;


namespace Amazon.DocSamples.S3
{
class UploadObjectUsingPresignedURLTest
{
private const string bucketName = "*** provide bucket name ***";
private const string objectKey = "*** provide the name for the uploaded object
***";
private const string filePath = "*** provide the full path name of the file to
upload ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
var url = GeneratePreSignedURL();
UploadObject(url);
}

private static void UploadObject(string url)


{
HttpWebRequest httpRequest = WebRequest.Create(url) as HttpWebRequest;
httpRequest.Method = "PUT";
using (Stream dataStream = httpRequest.GetRequestStream())
{
var buffer = new byte[8000];
using (FileStream fileStream = new FileStream(filePath, FileMode.Open,
FileAccess.Read))
{
int bytesRead = 0;
while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
{
dataStream.Write(buffer, 0, bytesRead);
}
}
}
HttpWebResponse response = httpRequest.GetResponse() as HttpWebResponse;
}

private static string GeneratePreSignedURL()


{
var request = new GetPreSignedUrlRequest
{
BucketName = bucketName,
Key = objectKey,
Verb = HttpVerb.PUT,
Expires = DateTime.Now.AddMinutes(5)
};

string url = s3Client.GetPreSignedURL(request);


return url;
}
}
}

More Info

AWS SDK for .NET

Upload an Object Using a Presigned URL (AWS SDK for Ruby)


The following tasks guide you through using a Ruby script to upload an object using a presigned URL for
SDK for Ruby - Version 3.


Uploading Objects - SDK for Ruby - Version 3

1 Create an instance of the Aws::S3::Resource class.

2 Provide a bucket name and an object key by calling the #bucket and #object methods of your
Aws::S3::Resource class instance.

Generate a presigned URL by creating an instance of the URI class, and use it to parse the URL
that the .presigned_url method of your object returns. You must specify :put as an argument
to .presigned_url, and you must specify PUT to Net::HTTP::Session#send_request if you want to
upload an object.

3 Anyone with the presigned URL can upload an object.

The upload creates an object or replaces any existing object with the same key that is
specified in the presigned URL.

The following Ruby code example demonstrates the preceding tasks for SDK for Ruby - Version 3.

Example

#Uploading an object using a presigned URL for SDK for Ruby - Version 3.

require 'aws-sdk-s3'
require 'net/http'

s3 = Aws::S3::Resource.new(region:'us-west-2')

obj = s3.bucket('BucketName').object('KeyName')
# Replace BucketName with the name of your bucket.
# Replace KeyName with the name of the object you are creating or replacing.

url = URI.parse(obj.presigned_url(:put))

body = "Hello World!"


# This is the contents of your object. In this case, it's a simple string.

Net::HTTP.start(url.host) do |http|
http.send_request("PUT", url.request_uri, body, {
# This is required, or Net::HTTP will add a default unsigned content-type.
"content-type" => "",
})
end

puts obj.get.body.read
# This will print out the contents of your object to the terminal window.

Copying Objects
Topics
• Related Resources (p. 208)
• Copying Objects in a Single Operation (p. 208)
• Copying Objects Using the Multipart Upload API (p. 214)

The copy operation creates a copy of an object that is already stored in Amazon S3. You can create
a copy of your object up to 5 GB in a single atomic operation. However, for copying an object that is
greater than 5 GB, you must use the multipart upload API. Using the copy operation, you can:


• Create additional copies of objects


• Rename objects by copying them and deleting the original ones
• Move objects across Amazon S3 locations (e.g., us-west-1 and EU)
• Change object metadata

Each Amazon S3 object has metadata. It is a set of name-value pairs. You can set object metadata at
the time you upload it. After you upload the object, you cannot modify object metadata. The only way
to modify object metadata is to make a copy of the object and set the metadata. In the copy operation
you set the same object as the source and target.

Each object has metadata. Some of it is system metadata, and the rest is user-defined. Users control some
of the system metadata, such as the storage class to use for the object and the server-side encryption
configuration. When you copy an object, user-controlled system metadata and user-defined metadata are
also copied. Amazon S3 resets the system-controlled metadata. For example, when you copy an object,
Amazon S3 resets the creation date of the copied object. You don't need to set any of these values in
your copy request.

When copying an object, you might decide to update some of the metadata values. For example, if
your source object is configured to use standard storage, you might choose to use reduced redundancy
storage for the object copy. You might also decide to alter some of the user-defined metadata values
present on the source object. Note that if you choose to update any of the object's user-configurable
metadata (system or user-defined) during the copy, then you must explicitly specify all of the user-
configurable metadata present on the source object in your request, even if you are changing only
one of the metadata values.
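As a minimal sketch of this rule, the following C# fragment copies an object onto itself and replaces its
metadata in a single request. The s3Client, sourceBucket, and sourceKey variables and the metadata keys
shown are illustrative placeholders, and the replacement metadata must include every user-configurable
value that you want the copy to keep.

// Sketch only: replace object metadata by copying the object onto itself.
// s3Client, sourceBucket, sourceKey, and the metadata values are placeholders.
CopyObjectRequest replaceMetadataRequest = new CopyObjectRequest
{
    SourceBucket = sourceBucket,
    SourceKey = sourceKey,
    DestinationBucket = sourceBucket, // Same object as source and target.
    DestinationKey = sourceKey,
    MetadataDirective = S3MetadataDirective.REPLACE
};
// Specify all of the user-defined metadata to keep, not just the value being changed.
replaceMetadataRequest.Metadata.Add("project", "phoenix");
replaceMetadataRequest.Metadata.Add("reviewed", "true");
CopyObjectResponse replaceMetadataResponse = await s3Client.CopyObjectAsync(replaceMetadataRequest);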

For more information about the object metadata, see Object Key and Metadata (p. 96).
Note
Copying objects across locations incurs bandwidth charges.
Note
If the source object is archived in Amazon Glacier (the storage class of the object is GLACIER),
you must first restore a temporary copy before you can copy the object to another bucket. For
information about archiving objects, see Transitioning to the GLACIER Storage Class (Object
Archival) (p. 119).

When copying objects, you can request Amazon S3 to save the target object encrypted using an AWS
Key Management Service (KMS) encryption key, an Amazon S3-managed encryption key, or a customer-
provided encryption key. Accordingly, you must specify encryption information in your request. If the
copy source is an object that is stored in Amazon S3 using server-side encryption with a customer-provided
key, you must provide encryption information in your request so Amazon S3 can decrypt the
object for copying. For more information, see Protecting Data Using Encryption (p. 388).

Related Resources
• Using the AWS SDKs, CLI, and Explorers (p. 655)

Copying Objects in a Single Operation


The examples in this section show how to copy objects up to 5 GB in a single operation. For copying
objects greater than 5 GB, you must use multipart upload API. For more information, see Copying
Objects Using the Multipart Upload API (p. 214).

Topics
• Copy an Object Using the AWS SDK for Java (p. 209)
• Copy an Amazon S3 Object in a Single Operation Using the AWS SDK for .NET (p. 209)


• Copy an Object Using the AWS SDK for PHP (p. 211)
• Copy an Object Using the AWS SDK for Ruby (p. 212)
• Copy an Object Using the REST API (p. 213)

Copy an Object Using the AWS SDK for Java


Example

The following example shows how to copy an object in Amazon S3 using the AWS SDK for Java.
For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.IOException;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;

public class CopyObjectSingleOperation {

public static void main(String[] args) throws IOException {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";
String sourceKey = "*** Source object key *** ";
String destinationKey = "*** Destination object key ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Copy the object into a new object in the same bucket.


CopyObjectRequest copyObjRequest = new CopyObjectRequest(bucketName, sourceKey,
bucketName, destinationKey);
s3Client.copyObject(copyObjRequest);
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Copy an Amazon S3 Object in a Single Operation Using the AWS SDK for .NET
The following C# example shows how to use the high-level AWS SDK for .NET to copy objects that are
as big as 5 GB in a single operation. For objects that are bigger than 5 GB, use the multipart upload
copy example described in Copy an Amazon S3 Object Using the AWS SDK for .NET Multipart Upload
API (p. 216).

This example makes a copy of an object that is a maximum of 5 GB. For information about the example's
compatibility with a specific version of the AWS SDK for .NET and instructions on how to create and test
a working sample, see Running the Amazon S3 .NET Code Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class CopyObjectTest
{
private const string sourceBucket = "*** provide the name of the bucket with source
object ***";
private const string destinationBucket = "*** provide the name of the bucket to
copy the object to ***";
private const string objectKey = "*** provide the name of object to copy ***";
private const string destObjectKey = "*** provide the destination object key name
***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
Console.WriteLine("Copying an object");
CopyingObjectAsync().Wait();
}

private static async Task CopyingObjectAsync()


{
try
{
CopyObjectRequest request = new CopyObjectRequest
{
SourceBucket = sourceBucket,
SourceKey = objectKey,
DestinationBucket = destinationBucket,
DestinationKey = destObjectKey
};
CopyObjectResponse response = await s3Client.CopyObjectAsync(request);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when writing
an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
}
}


More Info

AWS SDK for .NET

Copy an Object Using the AWS SDK for PHP


This topic guides you through using classes from version 3 of the AWS SDK for PHP to copy a single
object and multiple objects within Amazon S3, from one bucket to another or within the same bucket.

This topic assumes that you are already following the instructions for Using the AWS SDK for PHP and
Running PHP Examples (p. 664) and have the AWS SDK for PHP properly installed.

The following tasks guide you through using PHP SDK classes to copy an object that is already stored in
Amazon S3.

The following tasks guide you through using PHP classes to make multiple copies of an object within
Amazon S3.

Copying Objects

1 Create an instance of an Amazon S3 client by using the Aws\S3\S3Client class


constructor.

2 To make multiple copies of an object, you execute a batch of calls to the Amazon S3
client getCommand() method, which returns an Aws\CommandInterface object. You provide
the CopyObject command name as the first argument and an array containing the source
bucket, source key name, target bucket, and target key name as the second argument.

Example of Copying Objects within Amazon S3

The following PHP example illustrates the use of the copyObject() method to copy a single object
within Amazon S3, and the use of a batch of calls to CopyObject via the getCommand() method to make
multiple copies of an object.

<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\CommandPool;
use Aws\ResultInterface;
use Aws\Exception\AwsException;

$sourceBucket = '*** Your Source Bucket Name ***';


$sourceKeyname = '*** Your Source Object Key ***';
$targetBucket = '*** Your Target Bucket Name ***';
$targetKeyname = '*** Your Target Key Name ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// Copy an object.
$s3->copyObject([
'Bucket' => $targetBucket,
'Key' => "{$sourceKeyname}-copy",
'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
]);

// Perform a batch of CopyObject operations.


$batch = array();


for ($i = 1; $i <= 3; $i++) {


$batch[] = $s3->getCommand('CopyObject', [
'Bucket' => $targetBucket,
'Key' => "{targetKeyname}-{$i}",
'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
]);
}
try {
$results = CommandPool::batch($s3, $batch);
foreach($results as $result) {
if ($result instanceof ResultInterface) {
// Result handling here
}
if ($result instanceof AwsException) {
// AwsException handling here
}
}
} catch (\Exception $e) {
// General error handling here
}

Related Resources

• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class


• AWS SDK for PHP Documentation

Copy an Object Using the AWS SDK for Ruby


The following tasks guide you through using the Ruby classes to copy an object in Amazon S3, either from
one bucket to another or within the same bucket.

Copying Objects

1 Use the Amazon S3 modularized gem for version 3 of the AWS SDK for Ruby, require 'aws-
sdk-s3', and provide your AWS credentials. For more information about how to provide
your credentials, see Making Requests Using AWS Account or IAM User Credentials (p. 18).

2 Provide the request information, such as source bucket name, source key name,
destination bucket name, and destination key.

The following Ruby code example demonstrates the preceding tasks using the #copy_object method
to copy an object from one bucket to another.

Example

require 'aws-sdk-s3'

source_bucket_name = '*** Provide bucket name ***'


target_bucket_name = '*** Provide bucket name ***'
source_key = '*** Provide source key ***'
target_key = '*** Provide target key ***'

s3 = Aws::S3::Client.new(region: 'us-west-2')
s3.copy_object({bucket: target_bucket_name, copy_source: source_bucket_name + '/' +
source_key, key: target_key})

puts "Copying file #{source_key} to #{target_key}."


Copy an Object Using the REST API


This example describes how to copy an object using REST. For more information about the REST API, go
to PUT Object (Copy).

This example copies the flotsam object from the pacific bucket to the jetsam object of the
atlantic bucket, preserving its metadata.

PUT /jetsam HTTP/1.1


Host: atlantic.s3.amazonaws.com
x-amz-copy-source: /pacific/flotsam
Authorization: AWS AKIAIOSFODNN7EXAMPLE:ENoSbxYByFA0UGLZUqJN5EUnLDg=
Date: Wed, 20 Feb 2008 22:12:21 +0000

The signature was generated from the following information.

PUT\r\n
\r\n
\r\n
Wed, 20 Feb 2008 22:12:21 +0000\r\n

x-amz-copy-source:/pacific/flotsam\r\n
/atlantic/jetsam

Amazon S3 returns the following response that specifies the ETag of the object and when it was last
modified.

HTTP/1.1 200 OK
x-amz-id-2: Vyaxt7qEbzv34BnSu5hctyyNSlHTYZFMWK4FtzO+iX8JQNyaLdTshL0KxatbaOZt
x-amz-request-id: 6B13C3C5B34AF333
Date: Wed, 20 Feb 2008 22:13:01 +0000

Content-Type: application/xml
Transfer-Encoding: chunked
Connection: close
Server: AmazonS3
<?xml version="1.0" encoding="UTF-8"?>

<CopyObjectResult>
<LastModified>2008-02-20T22:13:01</LastModified>
<ETag>"7e9c608af58950deeb370c98608ed097"</ETag>
</CopyObjectResult>


Copying Objects Using the Multipart Upload API


The examples in this section show you how to copy objects greater than 5 GB using the multipart upload
API. You can copy objects less than 5 GB in a single operation. For more information, see Copying Objects
in a Single Operation (p. 208).

Topics
• Copy an Object Using the AWS SDK for Java Multipart Upload API (p. 214)
• Copy an Amazon S3 Object Using the AWS SDK for .NET Multipart Upload API (p. 216)
• Copy Object Using the REST Multipart Upload API (p. 218)

Copy an Object Using the AWS SDK for Java Multipart Upload API
To copy an Amazon S3 object that is larger than 5 GB with the AWS SDK for Java, use the low-level Java
API. For objects smaller than 5 GB, use the single-operation copy described in Copy an Object Using the
AWS SDK for Java (p. 209).

To copy an object using the low-level Java API, do the following:

• Initiate a multipart upload by executing the AmazonS3Client.initiateMultipartUpload()


method.
• Save the upload ID from the response object that the
AmazonS3Client.initiateMultipartUpload() method returns. You provide this upload ID for
each part-upload operation.
• Copy all of the parts. For each part that you need to copy, create a new instance of the
CopyPartRequest class. Provide the part information, including the source and destination bucket
names, source and destination object keys, upload ID, locations of the first and last bytes of the part,
and part number.
• Save the responses of the AmazonS3Client.copyPart() method calls. Each response includes
the ETag value and part number for the uploaded part. You need this information to complete the
multipart upload.
• Call the AmazonS3Client.completeMultipartUpload() method to complete the copy operation.

Example

The following example shows how to use the Amazon S3 low-level Java API to perform a multipart
copy. For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.*;
import com.amazonaws.services.s3.model.*;

public class LowLevelMultipartCopy {

public static void main(String[] args) throws IOException {


String clientRegion = "*** Client region ***";


String sourceBucketName = "*** Source bucket name ***";
String sourceObjectKey = "*** Source object key ***";
String destBucketName = "*** Target bucket name ***";
String destObjectKey = "*** Target object key ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Initiate the multipart upload.


InitiateMultipartUploadRequest initRequest = new
InitiateMultipartUploadRequest(destBucketName, destObjectKey);
InitiateMultipartUploadResult initResult =
s3Client.initiateMultipartUpload(initRequest);

// Get the object size to track the end of the copy operation.
GetObjectMetadataRequest metadataRequest = new
GetObjectMetadataRequest(sourceBucketName, sourceObjectKey);
ObjectMetadata metadataResult = s3Client.getObjectMetadata(metadataRequest);
long objectSize = metadataResult.getContentLength();

// Copy the object using 5 MB parts.


long partSize = 5 * 1024 * 1024;
long bytePosition = 0;
int partNum = 1;
List<CopyPartResult> copyResponses = new ArrayList<CopyPartResult>();
while (bytePosition < objectSize) {
// The last part might be smaller than partSize, so check to make sure
// that lastByte isn't beyond the end of the object.
long lastByte = Math.min(bytePosition + partSize - 1, objectSize - 1);

// Copy this part.


CopyPartRequest copyRequest = new CopyPartRequest()
.withSourceBucketName(sourceBucketName)
.withSourceKey(sourceObjectKey)
.withDestinationBucketName(destBucketName)
.withDestinationKey(destObjectKey)
.withUploadId(initResult.getUploadId())
.withFirstByte(bytePosition)
.withLastByte(lastByte)
.withPartNumber(partNum++);
copyResponses.add(s3Client.copyPart(copyRequest));
bytePosition += partSize;
}

// Complete the upload request to concatenate all uploaded parts and make the copied object available.
CompleteMultipartUploadRequest completeRequest = new
CompleteMultipartUploadRequest(
destBucketName,
destObjectKey,

initResult.getUploadId(),

getETags(copyResponses));
s3Client.completeMultipartUpload(completeRequest);
System.out.println("Multipart copy complete.");
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}


catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}

// This is a helper function to construct a list of ETags.


private static List<PartETag> getETags(List<CopyPartResult> responses) {
List<PartETag> etags = new ArrayList<PartETag>();
for (CopyPartResult response : responses) {
etags.add(new PartETag(response.getPartNumber(), response.getETag()));
}
return etags;
}
}

Copy an Amazon S3 Object Using the AWS SDK for .NET Multipart Upload API
The following C# example shows how to use the AWS SDK for .NET to copy an Amazon S3 object that
is larger than 5 GB from one source location to another, such as from one bucket to another. To copy
objects that are smaller than 5 GB, use the single-operation copy procedure described in Copy an
Amazon S3 Object in a Single Operation Using the AWS SDK for .NET (p. 209). For more information
about Amazon S3 multipart uploads, see Multipart Upload Overview (p. 171).

This example shows how to copy an Amazon S3 object that is larger than 5 GB from one S3 bucket to
another using the AWS SDK for .NET multipart upload API. For information about SDK compatibility
and instructions for creating and testing a working sample, see Running the Amazon S3 .NET Code
Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class CopyObjectUsingMPUapiTest
{
private const string sourceBucket = "*** provide the name of the bucket with source
object ***";
private const string targetBucket = "*** provide the name of the bucket to copy the
object to ***";
private const string sourceObjectKey = "*** provide the name of object to copy
***";
private const string targetObjectKey = "*** provide the name of the object copy
***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
Console.WriteLine("Copying an object");
MPUCopyObjectAsync().Wait();
}
private static async Task MPUCopyObjectAsync()


{
// Create a list to store the upload part responses.
List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();
List<CopyPartResponse> copyResponses = new List<CopyPartResponse>();

// Setup information required to initiate the multipart upload.


InitiateMultipartUploadRequest initiateRequest =
new InitiateMultipartUploadRequest
{
BucketName = targetBucket,
Key = targetObjectKey
};

// Initiate the upload.


InitiateMultipartUploadResponse initResponse =
await s3Client.InitiateMultipartUploadAsync(initiateRequest);

// Save the upload ID.


String uploadId = initResponse.UploadId;

try
{
// Get the size of the object.
GetObjectMetadataRequest metadataRequest = new GetObjectMetadataRequest
{
BucketName = sourceBucket,
Key = sourceObjectKey
};

GetObjectMetadataResponse metadataResponse =
await s3Client.GetObjectMetadataAsync(metadataRequest);
long objectSize = metadataResponse.ContentLength; // Length in bytes.

// Copy the parts.


long partSize = 5 * (long)Math.Pow(2, 20); // Part size is 5 MB.

long bytePosition = 0;
for (int i = 1; bytePosition < objectSize; i++)
{
CopyPartRequest copyRequest = new CopyPartRequest
{
DestinationBucket = targetBucket,
DestinationKey = targetObjectKey,
SourceBucket = sourceBucket,
SourceKey = sourceObjectKey,
UploadId = uploadId,
FirstByte = bytePosition,
LastByte = bytePosition + partSize - 1 >= objectSize ? objectSize -
1 : bytePosition + partSize - 1,
PartNumber = i
};

copyResponses.Add(await s3Client.CopyPartAsync(copyRequest));

bytePosition += partSize;
}

// Set up to complete the copy.


CompleteMultipartUploadRequest completeRequest =
new CompleteMultipartUploadRequest
{
BucketName = targetBucket,
Key = targetObjectKey,
UploadId = initResponse.UploadId
};
completeRequest.AddPartETags(copyResponses);


// Complete the copy.


CompleteMultipartUploadResponse completeUploadResponse =
await s3Client.CompleteMultipartUploadAsync(completeRequest);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when writing
an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
}
}

More Info

AWS SDK for .NET

Copy Object Using the REST Multipart Upload API


The following sections in the Amazon Simple Storage Service API Reference describe the REST API for
multipart upload. For copying an existing object you use the Upload Part (Copy) API and specify the
source object by adding the x-amz-copy-source request header in your request.

• Initiate Multipart Upload


• Upload Part
• Upload Part (Copy)
• Complete Multipart Upload
• Abort Multipart Upload
• List Parts
• List Multipart Uploads

You can use these APIs to make your own REST requests, or you can use one of the SDKs we provide. For
more information about the SDKs, see API Support for Multipart Upload (p. 175).
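As a rough illustration of the Upload Part (Copy) request shape, the following hypothetical request copies
the first 5 MB of a source object into part 1 of a multipart upload. The bucket names, object keys, upload
ID, date, and Authorization value are placeholders only; a real request must carry a valid signature.

PUT /destination-object?partNumber=1&uploadId=ExampleUploadId HTTP/1.1
Host: destination-bucket.s3.amazonaws.com
x-amz-copy-source: /source-bucket/source-object
x-amz-copy-source-range: bytes=0-5242879
Date: Wed, 28 Mar 2018 22:12:21 +0000
Authorization: AWS AKIAIOSFODNN7EXAMPLE:ExampleSignatureValue=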

Listing Object Keys


Keys can be listed by prefix. By choosing a common prefix for the names of related keys and marking
these keys with a special character that delimits hierarchy, you can use the list operation to select and
browse keys hierarchically. This is similar to how files are stored in directories within a file system.

Amazon S3 exposes a list operation that lets you enumerate the keys contained in a bucket. Keys are
selected for listing by bucket and prefix. For example, consider a bucket named "dictionary" that contains
a key for every English word. You might make a call to list all the keys in that bucket that start with the
letter "q". List results are always returned in UTF-8 binary order.

Both the SOAP and REST list operations return an XML document that contains the names of matching
keys and information about the object identified by each key.
Note
SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3
features will not be supported for SOAP. We recommend that you use either the REST API or the
AWS SDKs.


Groups of keys that share a prefix terminated by a special delimiter can be rolled up by that common
prefix for the purposes of listing. This enables applications to organize and browse their keys
hierarchically, much like how you would organize your files into directories in a file system. For
example, to extend the dictionary bucket to contain more than just English words, you might form
keys by prefixing each word with its language and a delimiter, such as "French/logical". Using this
naming scheme and the hierarchical listing feature, you could retrieve a list of only French words. You
could also browse the top-level list of available languages without having to iterate through all the
lexicographically intervening keys.

For more information on this aspect of listing, see Listing Keys Hierarchically Using a Prefix and
Delimiter (p. 219).

List Implementation Efficiency

List performance is not substantially affected by the total number of keys in your bucket, nor by
the presence or absence of the prefix, marker, maxkeys, or delimiter arguments. For information on
improving overall bucket performance, including the list operation, see Request Rate and Performance
Guidelines (p. 613).

Iterating Through Multi-Page Results


As buckets can contain a virtually unlimited number of keys, the complete results of a list query can
be extremely large. To manage large result sets, the Amazon S3 API supports pagination to split them
into multiple responses. Each list keys response returns a page of up to 1,000 keys and an indicator
that shows whether the result is truncated. You send a series of list keys requests until you have received all
the keys. AWS SDK wrapper libraries provide the same pagination.

The following Java and .NET SDK examples show how to use pagination when listing keys in a bucket:

• Listing Keys Using the AWS SDK for Java (p. 220)
• Listing Keys Using the AWS SDK for .NET (p. 222)

Related Resources
• Using the AWS SDKs, CLI, and Explorers (p. 655)

Listing Keys Hierarchically Using a Prefix and Delimiter


The prefix and delimiter parameters limit the kind of results returned by a list operation. Prefix limits
results to only those keys that begin with the specified prefix, and delimiter causes list to roll up all keys
that share a common prefix into a single summary list result.

The purpose of the prefix and delimiter parameters is to help you organize and then browse your keys
hierarchically. To do this, first pick a delimiter for your bucket, such as slash (/), that doesn't occur in any
of your anticipated key names. Next, construct your key names by concatenating all containing levels of
the hierarchy, separating each level with the delimiter.

For example, if you were storing information about cities, you might naturally organize them by
continent, then by country, then by province or state. Because these names don't usually contain
punctuation, you might select slash (/) as the delimiter. The following examples use a slash (/) delimiter.

• Europe/France/Aquitaine/Bordeaux
• North America/Canada/Quebec/Montreal
• North America/USA/Washington/Bellevue


• North America/USA/Washington/Seattle

If you stored data for every city in the world in this manner, it would become awkward to manage
a flat key namespace. By using Prefix and Delimiter with the list operation, you can use the
hierarchy you've created to list your data. For example, to list all the states in USA, set Delimiter='/'
and Prefix='North America/USA/'. To list all the provinces in Canada for which you have data, set
Delimiter='/' and Prefix='North America/Canada/'.
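As a minimal sketch of such a request using the AWS SDK for .NET (rather than raw REST parameters), the
following C# fragment lists only the state-level names under North America/USA/. The bucketName and
s3Client variables are placeholders.

// Sketch only: list one level of the key hierarchy with a prefix and a delimiter.
// bucketName and s3Client are placeholders.
ListObjectsV2Request listRequest = new ListObjectsV2Request
{
    BucketName = bucketName,
    Prefix = "North America/USA/",
    Delimiter = "/"
};
ListObjectsV2Response listResponse = await s3Client.ListObjectsV2Async(listRequest);

// Keys that share a deeper prefix are rolled up into common prefixes,
// for example "North America/USA/Washington/".
foreach (string commonPrefix in listResponse.CommonPrefixes)
{
    Console.WriteLine(commonPrefix);
}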

A list request with a delimiter lets you browse your hierarchy at just one level, skipping over and
summarizing the (possibly millions of) keys nested at deeper levels. For example, assume you have a
bucket (ExampleBucket) with the following keys.

sample.jpg

photos/2006/January/sample.jpg

photos/2006/February/sample2.jpg

photos/2006/February/sample3.jpg

photos/2006/February/sample4.jpg

The sample bucket has only the sample.jpg object at the root level. To list only the root-level objects
in the bucket, you send a GET request on the bucket with the "/" delimiter character. In response, Amazon
S3 returns the sample.jpg object key because it does not contain the "/" delimiter character. All other
keys contain the delimiter character. Amazon S3 groups these keys and returns a single CommonPrefixes
element with the prefix value photos/, which is a substring from the beginning of these keys to the first
occurrence of the specified delimiter.

Example

<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Name>ExampleBucket</Name>
<Prefix></Prefix>
<Marker></Marker>
<MaxKeys>1000</MaxKeys>
<Delimiter>/</Delimiter>
<IsTruncated>false</IsTruncated>
<Contents>
<Key>sample.jpg</Key>
<LastModified>2011-07-24T19:39:30.000Z</LastModified>
<ETag>&quot;d1a7fb5eab1c16cb4f7cf341cf188c3d&quot;</ETag>
<Size>6</Size>
<Owner>
<ID>75cc57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID>
<DisplayName>displayname</DisplayName>
</Owner>
<StorageClass>STANDARD</StorageClass>
</Contents>
<CommonPrefixes>
<Prefix>photos/</Prefix>
</CommonPrefixes>
</ListBucketResult>

Listing Keys Using the AWS SDK for Java


Example

The following example lists the object keys in a bucket. The example uses pagination to retrieve a set
of object keys. If there are more keys to return after the first page, Amazon S3 includes a continuation
token in the response. The example uses the continuation token in the subsequent request to fetch the
next set of object keys.

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.IOException;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class ListKeys {

public static void main(String[] args) throws IOException {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

System.out.println("Listing objects");

// maxKeys is set to 2 to demonstrate the use of


// ListObjectsV2Result.getNextContinuationToken()
ListObjectsV2Request req = new
ListObjectsV2Request().withBucketName(bucketName).withMaxKeys(2);
ListObjectsV2Result result;

do {
result = s3Client.listObjectsV2(req);

for (S3ObjectSummary objectSummary : result.getObjectSummaries()) {


System.out.printf(" - %s (size: %d)\n", objectSummary.getKey(),
objectSummary.getSize());
}
// If there are more than maxKeys keys in the bucket, get a continuation token
// and list the next objects.
String token = result.getNextContinuationToken();
System.out.println("Next Continuation Token: " + token);
req.setContinuationToken(token);
} while (result.isTruncated());
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();


}
}
}

Listing Keys Using the AWS SDK for .NET


Example

The following C# example lists the object keys for a bucket. In the example, we use pagination to retrieve
a set of object keys. If there are more keys to return, Amazon S3 includes a continuation token in the
response. The code uses the continuation token in the subsequent request to fetch the next set of object
keys.

For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.


// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-
developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class ListObjectsTest
{
private const string bucketName = "*** bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;

private static IAmazonS3 client;

public static void Main()


{
client = new AmazonS3Client(bucketRegion);
ListingObjectsAsync().Wait();
}

static async Task ListingObjectsAsync()


{
try
{
ListObjectsV2Request request = new ListObjectsV2Request
{
BucketName = bucketName,
MaxKeys = 10
};
ListObjectsV2Response response;
do
{
response = await client.ListObjectsV2Async(request);

// Process the response.


foreach (S3Object entry in response.S3Objects)
{
Console.WriteLine("key = {0} size = {1}",
entry.Key, entry.Size);
}


Console.WriteLine("Next Continuation Token: {0}",


response.NextContinuationToken);
request.ContinuationToken = response.NextContinuationToken;
} while (response.IsTruncated);
}
catch (AmazonS3Exception amazonS3Exception)
{
Console.WriteLine("S3 error occurred. Exception: " +
amazonS3Exception.ToString());
Console.ReadKey();
}
catch (Exception e)
{
Console.WriteLine("Exception: " + e.ToString());
Console.ReadKey();
}
}
}
}

Listing Keys Using the AWS SDK for PHP


This topic guides you through using classes from version 3 of the AWS SDK for PHP to list the object keys
contained in an Amazon S3 bucket.

This topic assumes that you are already following the instructions for Using the AWS SDK for PHP and
Running PHP Examples (p. 664) and have the AWS SDK for PHP properly installed.

To list the object keys contained in a bucket using the AWS SDK for PHP, you first list the objects
contained in the bucket and then extract the key from each of the listed objects. When listing objects in
a bucket, you can use either the low-level Aws\S3\S3Client::listObjects() method or the high-level
Aws\ResultPaginator class.

The low-level listObjects() method maps to the underlying Amazon S3 REST API. Each
listObjects() request returns a page of up to 1,000 objects. If you have more than 1,000 objects in
the bucket, the response is truncated and you must send another listObjects() request to retrieve
the next set of up to 1,000 objects.

The high-level ListObjects paginator makes it easier to list the objects contained in a bucket. To use
the ListObjects paginator, call the Amazon S3 client getPaginator() method (inherited from the
Aws\AwsClientInterface class) with the ListObjects command as the first argument and an array of
command parameters, such as the bucket name, as the second argument. The paginator transparently
issues as many requests as needed and returns all of the objects contained in the specified bucket, so
you don't need to check whether the response is truncated.

The following tasks guide you through using the PHP Amazon S3 client methods to list the objects
contained in a bucket, from which you can extract the object keys.

Example of Listing Object Keys


The following PHP example demonstrates how to list the keys from a specified bucket. It shows how
to use the high-level getPaginator() method to list the objects in a bucket and then extract the key
from each object in the list. It also shows how to use the low-level listObjects() method to list the
objects in a bucket and then extract the key from each object in the returned list. For information
about running the PHP examples in this guide, see Running PHP Examples (p. 665).

<?php


require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucket = '*** Your Bucket Name ***';

// Instantiate the client.


$s3 = new S3Client([
'version' => 'latest',
'region' => 'us-east-1'
]);

// Use the high-level iterators (returns ALL of your objects).


try {
$results = $s3->getPaginator('ListObjects', [
'Bucket' => $bucket
]);

foreach ($results as $result) {


foreach ($result['Contents'] as $object) {
echo $object['Key'] . PHP_EOL;
}
}
} catch (S3Exception $e) {
echo $e->getMessage() . PHP_EOL;
}

// Use the plain API (returns ONLY up to 1000 of your objects).


try {
$objects = $s3->listObjects([
'Bucket' => $bucket
]);
foreach ($objects['Contents'] as $object) {
echo $object['Key'] . PHP_EOL;
}
} catch (S3Exception $e) {
echo $e->getMessage() . PHP_EOL;
}

Related Resources
• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class
• Paginators
• AWS SDK for PHP Documentation

Listing Keys Using the REST API


You can use the AWS SDK to list the object keys in a bucket. However, if your application requires it,
you can send REST requests directly. You can send a GET request to return some or all of the objects
in a bucket or you can use selection criteria to return a subset of the objects in a bucket. For more
information, go to GET Bucket (List Objects) Version 2.

Deleting Objects
Topics
• Deleting Objects from a Version-Enabled Bucket (p. 225)
• Deleting Objects from an MFA-Enabled Bucket (p. 225)


• Related Resources (p. 226)


• Deleting One Object Per Request (p. 226)
• Deleting Multiple Objects Per Request (p. 231)

You can delete one or more objects directly from Amazon S3. You have the following options when
deleting an object:

• Delete a single object—Amazon S3 provides the DELETE API that you can use to delete one object in a
single HTTP request.
• Delete multiple objects—Amazon S3 also provides the Multi-Object Delete API that you can use to
delete up to 1000 objects in a single HTTP request.

When deleting objects from a bucket that is not version-enabled, you provide only the object key name.
However, when deleting objects from a version-enabled bucket, you can optionally provide the version ID
of the object to delete a specific version of the object.

Deleting Objects from a Version-Enabled Bucket


If your bucket is version-enabled, then multiple versions of the same object can exist in the bucket. When
working with version-enabled buckets, the delete API enables the following options:

• Specify a non-versioned delete request—That is, you specify only the object's key, and not the
version ID. In this case, Amazon S3 creates a delete marker and returns its version ID in the response.
This makes your object disappear from the bucket. For information about object versioning and the
delete marker concept, see Object Versioning (p. 104).
• Specify a versioned delete request—That is, you specify both the key and also a version ID. In this
case the following two outcomes are possible:
• If the version ID maps to a specific object version, then Amazon S3 deletes the specific version of the
object.
• If the version ID maps to the delete marker of that object, Amazon S3 deletes the delete marker.
This makes the object reappear in your bucket.
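For illustration, the following minimal AWS SDK for Java sketch contrasts the two request types. The bucket name, key name, and version ID are placeholders (not values from this guide), and the client setup is abbreviated; the full worked examples appear later in this section.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteVersionRequest;

public class DeleteRequestTypesSketch {

    public static void main(String[] args) {
        // Placeholder values; replace with your own bucket, key, and version ID.
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Key name ***";
        String versionId = "*** Version ID ***";

        AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

        // Non-versioned delete request: only the key is specified, so Amazon S3
        // adds a delete marker instead of removing any object version.
        s3Client.deleteObject(bucketName, keyName);

        // Versioned delete request: the key and version ID are specified, so Amazon S3
        // removes that specific version (or, if the version ID identifies a delete
        // marker, removes the delete marker).
        s3Client.deleteVersion(new DeleteVersionRequest(bucketName, keyName, versionId));
    }
}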

Deleting Objects from an MFA-Enabled Bucket


When deleting objects from a Multi Factor Authentication (MFA) enabled bucket, note the following:

• If you provide an invalid MFA token, the request always fails.


• If you have an MFA-enabled bucket, and you make a versioned delete request (you provide an object
key and version ID), the request will fail if you don't provide a valid MFA token. In addition, when using
the Multi-Object Delete API on an MFA-enabled bucket, if any of the deletes is a versioned delete
request (that is, you specify object key and version ID), the entire request will fail if you don't provide
an MFA token.

On the other hand, in the following cases the request succeeds:

• If you have an MFA-enabled bucket, and you make a non-versioned delete request (you are not
deleting a versioned object), and you don't provide an MFA token, the delete succeeds.
• If you have a Multi-Object Delete request specifying only non-versioned objects to delete from an
MFA-enabled bucket, and you don't provide an MFA token, the deletions succeed.

For information on MFA delete, see MFA Delete (p. 427).
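As an illustration of the versioned case, the following minimal AWS SDK for Java sketch supplies an MFA device serial number and token on a versioned delete request by using the SDK's MultiFactorAuthentication class. All of the values shown are placeholders, not values from this guide.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteVersionRequest;
import com.amazonaws.services.s3.model.MultiFactorAuthentication;

public class DeleteVersionWithMfaSketch {

    public static void main(String[] args) {
        // Placeholder values; replace with your own.
        String bucketName = "*** MFA delete-enabled bucket name ***";
        String keyName = "*** Key name ***";
        String versionId = "*** Version ID ***";
        String mfaSerialNumber = "*** MFA device serial number or ARN ***";
        String mfaToken = "*** Current code from the MFA device ***";

        AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

        // A versioned delete against an MFA delete-enabled bucket must carry a valid
        // MFA token, or the request fails.
        s3Client.deleteVersion(new DeleteVersionRequest(bucketName, keyName, versionId,
                new MultiFactorAuthentication(mfaSerialNumber, mfaToken)));
    }
}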

API Version 2006-03-01


225
Amazon Simple Storage Service Developer Guide
Deleting Objects

Related Resources
• Using the AWS SDKs, CLI, and Explorers (p. 655)

Deleting One Object Per Request


Topics
• Deleting an Object Using the AWS SDK for Java (p. 226)
• Deleting an Object Using the AWS SDK for .NET (p. 228)
• Deleting an Object Using the AWS SDK for PHP (p. 231)
• Deleting an Object Using the REST API (p. 231)

To delete one object per request, use the DELETE API (see DELETE Object). To learn more about object
deletion, see Deleting Objects (p. 224).

You can use either the REST API directly or the wrapper libraries provided by the AWS SDKs that simplify
application development.

Deleting an Object Using the AWS SDK for Java


You can delete an object from a bucket. If you have versioning enabled on the bucket, you have the
following options:

• Delete a specific object version by specifying a version ID.


• Delete an object without specifying a version ID, in which case S3 adds a delete marker to the object.

For more information about versioning, see Object Versioning (p. 104).

Example 1: Deleting an Object (Non-Versioned Bucket)


The following example deletes an object from a bucket. The example assumes that the bucket is not
versioning-enabled and the object doesn't have any version IDs. In the delete request, you specify only
the object key and not a version ID. For instructions on creating and testing a working sample, see
Testing the Amazon S3 Java Code Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.IOException;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectRequest;

public class DeleteObjectNonVersionedBucket {

public static void main(String[] args) throws IOException {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";
String keyName = "*** Key name ****";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()


.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

s3Client.deleteObject(new DeleteObjectRequest(bucketName, keyName));


}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Example 2: Deleting an Object (Versioned Bucket)

The following example deletes an object from a versioned bucket. The example deletes a specific object
version by specifying the object key name and version ID. The example does the following:

1. Adds a sample object to the bucket. Amazon S3 returns the version ID of the newly added object. The
example uses this version ID in the delete request.
2. Deletes the object version by specifying both the object key name and a version ID. If there are no
other versions of that object, Amazon S3 deletes the object entirely. Otherwise, Amazon S3 only
deletes the specified version.
Note
You can get the version IDs of an object by sending a ListVersions request.
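The following minimal AWS SDK for Java sketch shows one way to retrieve those version IDs with the SDK's listVersions method. The bucket and key names are placeholders, and the sketch omits handling of truncated (paginated) listings.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListVersionsRequest;
import com.amazonaws.services.s3.model.S3VersionSummary;
import com.amazonaws.services.s3.model.VersionListing;

public class ListObjectVersionsSketch {

    public static void main(String[] args) {
        // Placeholder values; replace with your own bucket and key.
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Key name ***";

        AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

        // List the versions of the key and print each version ID. For brevity, this
        // sketch does not follow truncated (paginated) listings.
        VersionListing versionListing = s3Client.listVersions(
                new ListVersionsRequest().withBucketName(bucketName).withPrefix(keyName));
        for (S3VersionSummary versionSummary : versionListing.getVersionSummaries()) {
            System.out.println(versionSummary.getKey() + " : " + versionSummary.getVersionId());
        }
    }
}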

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.IOException;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
import com.amazonaws.services.s3.model.DeleteVersionRequest;
import com.amazonaws.services.s3.model.PutObjectResult;

public class DeleteObjectVersionEnabledBucket {

public static void main(String[] args) throws IOException {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";
String keyName = "*** Key name ****";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Check to ensure that the bucket is versioning-enabled.


String bucketVersionStatus =
s3Client.getBucketVersioningConfiguration(bucketName).getStatus();
if(!bucketVersionStatus.equals(BucketVersioningConfiguration.ENABLED)) {
System.out.printf("Bucket %s is not versioning-enabled.", bucketName);
}
else {
// Add an object.
PutObjectResult putResult = s3Client.putObject(bucketName, keyName, "Sample content for deletion example.");
System.out.printf("Object %s added to bucket %s\n", keyName, bucketName);

// Delete the version of the object that we just created.


System.out.println("Deleting versioned object " + keyName);
s3Client.deleteVersion(new DeleteVersionRequest(bucketName, keyName,
putResult.getVersionId()));
System.out.printf("Object %s, version %s deleted\n", keyName,
putResult.getVersionId());
}
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Deleting an Object Using the AWS SDK for .NET


When you delete an object from a non-versioned bucket, the object is removed. If you have versioning
enabled on the bucket, you have the following options:

• Delete a specific version of an object by specifying a version ID.


• Delete an object without specifying a version ID. Amazon S3 adds a delete marker. For more
information about delete markers, see Object Versioning (p. 104).

The following examples show how to delete an object from both versioned and non-versioned buckets.
For more information about versioning, see Object Versioning (p. 104).

Example Deleting an Object from a Non-versioned Bucket


The following C# example deletes an object from a non-versioned bucket. The example assumes that
the objects don't have version IDs, so you don't specify version IDs. You specify only the object key. For
information about how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{


class DeleteObjectNonVersionedBucketTest
{
private const string bucketName = "*** bucket name ***";
private const string keyName = "*** object key ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;

public static void Main()


{
client = new AmazonS3Client(bucketRegion);
DeleteObjectNonVersionedBucketAsync().Wait();
}

private static async Task DeleteObjectNonVersionedBucketAsync()


{
try
{
var deleteObjectRequest = new DeleteObjectRequest
{
BucketName = bucketName,
Key = keyName
};

Console.WriteLine("Deleting an object");
await client.DeleteObjectAsync(deleteObjectRequest);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when writing
an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}
}
}

Example Deleting an Object from a Versioned Bucket

The following C# example deletes an object from a versioned bucket. It deletes a specific version of the
object by specifying the object key name and version ID.

The code performs the following tasks:

1. Enables versioning on a bucket that you specify (if versioning is already enabled, this has no effect).
2. Adds a sample object to the bucket. In response, Amazon S3 returns the version ID of the newly added
object. The example uses this version ID in the delete request.
3. Deletes the sample object by specifying both the object key name and a version ID.
Note
You can also get the version ID of an object by sending a ListVersions request:

var listResponse = client.ListVersions(new ListVersionsRequest { BucketName =


bucketName, Prefix = keyName });

For information about how to create and test a working sample, see Running the Amazon S3 .NET Code
Examples (p. 664).


// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class DeleteObjectVersion
{
private const string bucketName = "*** versioning-enabled bucket name ***";
private const string keyName = "*** Object Key Name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 client;

public static void Main()


{
client = new AmazonS3Client(bucketRegion);
CreateAndDeleteObjectVersionAsync().Wait();
}

private static async Task CreateAndDeleteObjectVersionAsync()


{
try
{
// Add a sample object.
string versionID = await PutAnObject(keyName);

// Delete the object by specifying an object key and a version ID.


DeleteObjectRequest request = new DeleteObjectRequest
{
BucketName = bucketName,
Key = keyName,
VersionId = versionID
};
Console.WriteLine("Deleting an object");
await client.DeleteObjectAsync(request);
}
catch (AmazonS3Exception e)
{
Console.WriteLine("Error encountered on server. Message:'{0}' when writing
an object", e.Message);
}
catch (Exception e)
{
Console.WriteLine("Unknown encountered on server. Message:'{0}' when
writing an object", e.Message);
}
}

static async Task<string> PutAnObject(string objectKey)


{
PutObjectRequest request = new PutObjectRequest
{
BucketName = bucketName,
Key = objectKey,
ContentBody = "This is the content body!"
};
PutObjectResponse response = await client.PutObjectAsync(request);
return response.VersionId;
}


}
}

Deleting an Object Using the AWS SDK for PHP


This topic shows how to use classes from version 3 of the AWS SDK for PHP to delete an object from a
non-versioned bucket. For information on deleting an object from a versioned bucket, see Deleting an
Object Using the REST API (p. 231).

This topic assumes that you are already following the instructions for Using the AWS SDK for PHP and
Running PHP Examples (p. 664) and have the AWS SDK for PHP properly installed.

The following PHP example deletes an object from a bucket. Because this example shows how to delete
objects from non-versioned buckets, it provides only the bucket name and object key (not a version ID)
in the delete request. For information about running the PHP examples in this guide, see Running PHP
Examples (p. 665).

<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// Delete an object from the bucket.


$s3->deleteObject([
'Bucket' => $bucket,
'Key' => $keyname
]);

Related Resources

• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class


• AWS SDK for PHP Documentation

Deleting an Object Using the REST API


You can use the AWS SDKs to delete an object. However, if your application requires it, you can send
REST requests directly. For more information, go to DELETE Object in the Amazon Simple Storage Service
API Reference.

Deleting Multiple Objects Per Request


Topics
• Deleting Multiple Objects Using the AWS SDK for Java (p. 232)
• Deleting Multiple Objects Using the AWS SDK for .NET (p. 236)
• Deleting Multiple Objects Using the AWS SDK for PHP (p. 241)
• Deleting Multiple Objects Using the REST API (p. 243)


Amazon S3 provides the Multi-Object Delete API (see Delete - Multi-Object Delete), which enables you
to delete multiple objects in a single request. The API supports two modes for the response: verbose and
quiet. By default, the operation uses verbose mode. In verbose mode, the response includes the result
of the deletion of each key that is specified in your request. In quiet mode, the response includes only
keys for which the delete operation encountered an error. If all keys are successfully deleted when you're
using quiet mode, Amazon S3 returns an empty response.
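As a quick illustration, the following minimal AWS SDK for Java sketch requests quiet mode on a Multi-Object Delete request. The bucket and key names are placeholders, and the complete worked examples appear in the sections that follow.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsResult;

public class QuietModeDeleteSketch {

    public static void main(String[] args) {
        String bucketName = "*** Bucket name ***"; // placeholder

        AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

        // withQuiet(true) requests quiet mode: the response reports only keys that
        // Amazon S3 failed to delete. withQuiet(false), or omitting the call, selects
        // the default verbose mode, which reports every key.
        DeleteObjectsRequest request = new DeleteObjectsRequest(bucketName)
                .withKeys("sample-key-1", "sample-key-2") // placeholder keys
                .withQuiet(true);

        // In quiet mode, a fully successful delete returns an empty list here; failed
        // deletes are reported through a MultiObjectDeleteException.
        DeleteObjectsResult result = s3Client.deleteObjects(request);
        System.out.println(result.getDeletedObjects().size() + " deletions reported.");
    }
}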

To learn more about object deletion, see Deleting Objects (p. 224).

You can use the REST API directly or use the AWS SDKs.

Deleting Multiple Objects Using the AWS SDK for Java


The AWS SDK for Java provides the AmazonS3Client.deleteObjects() method for deleting multiple
objects. For each object that you want to delete, you specify the key name. If the bucket is versioning-
enabled, you have the following options:

• Specify only the object's key name. Amazon S3 will add a delete marker to the object.
• Specify both the object's key name and a version ID to be deleted. Amazon S3 will delete the specified
version of the object.

Example

The following example uses the Multi-Object Delete API to delete objects from a bucket that
is not version-enabled. The example uploads sample objects to the bucket and then uses the
AmazonS3Client.deleteObjects() method to delete the objects in a single request. In the
DeleteObjectsRequest, the example specifies only the object key names because the objects do not
have version IDs.

For more information about deleting objects, see Deleting Objects (p. 224). For instructions on creating
and testing a working sample, see Testing the Amazon S3 Java Code Examples (p. 662).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.IOException;
import java.util.ArrayList;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;
import com.amazonaws.services.s3.model.DeleteObjectsResult;

public class DeleteMultipleObjectsNonVersionedBucket {

public static void main(String[] args) throws IOException {


String clientRegion = "*** Client region ***";
String bucketName = "*** Bucket name ***";

try {
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Upload three sample objects.


ArrayList<KeyVersion> keys = new ArrayList<KeyVersion>();


for (int i = 0; i < 3; i++) {
String keyName = "delete object example " + i;
s3Client.putObject(bucketName, keyName, "Object number " + i + " to be deleted.");
keys.add(new KeyVersion(keyName));
}
System.out.println(keys.size() + " objects successfully created.");

// Delete the sample objects.


DeleteObjectsRequest multiObjectDeleteRequest = new
DeleteObjectsRequest(bucketName)
.withKeys(keys)
.withQuiet(false);

// Verify that the objects were deleted successfully.


DeleteObjectsResult delObjRes =
s3Client.deleteObjects(multiObjectDeleteRequest);
int successfulDeletes = delObjRes.getDeletedObjects().size();
System.out.println(successfulDeletes + " objects successfully deleted.");
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}
}

Example

The following example uses the Multi-Object Delete API to delete objects from a version-enabled bucket.
It does the following:

1. Creates sample objects and then deletes them, specifying the key name and version ID for each object
to delete. The operation deletes only the specified object versions.
2. Creates sample objects and then deletes them by specifying only the key names. Because the example
doesn't specify version IDs, the operation adds a delete marker to each object, without deleting any
specific object versions. After the delete markers are added, these objects will not appear in the AWS
Management Console.
3. Removes the delete markers by specifying the object keys and version IDs of the delete markers.
The operation deletes the delete markers, which results in the objects reappearing in the AWS
Management Console.

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;


import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;
import com.amazonaws.services.s3.model.DeleteObjectsResult;
import com.amazonaws.services.s3.model.DeleteObjectsResult.DeletedObject;
import com.amazonaws.services.s3.model.PutObjectResult;

public class DeleteMultipleObjectsVersionEnabledBucket {


private static AmazonS3 S3_CLIENT;
private static String VERSIONED_BUCKET_NAME;

public static void main(String[] args) throws IOException {


String clientRegion = "*** Client region ***";
VERSIONED_BUCKET_NAME = "*** Bucket name ***";

try {
S3_CLIENT = AmazonS3ClientBuilder.standard()
.withCredentials(new ProfileCredentialsProvider())
.withRegion(clientRegion)
.build();

// Check to make sure that the bucket is versioning-enabled.


String bucketVersionStatus =
S3_CLIENT.getBucketVersioningConfiguration(VERSIONED_BUCKET_NAME).getStatus();
if(!bucketVersionStatus.equals(BucketVersioningConfiguration.ENABLED)) {
System.out.printf("Bucket %s is not versioning-enabled.",
VERSIONED_BUCKET_NAME);
}
else {
// Upload and delete sample objects, using specific object versions.
uploadAndDeleteObjectsWithVersions();

// Upload and delete sample objects without specifying version IDs.


// Amazon S3 creates a delete marker for each object rather than deleting
// specific versions.
DeleteObjectsResult unversionedDeleteResult =
uploadAndDeleteObjectsWithoutVersions();

// Remove the delete markers placed on objects in the
// non-versioned create/delete method.
multiObjectVersionedDeleteRemoveDeleteMarkers(unversionedDeleteResult);
}
}
catch(AmazonServiceException e) {
// The call was transmitted successfully, but Amazon S3 couldn't process
// it, so it returned an error response.
e.printStackTrace();
}
catch(SdkClientException e) {
// Amazon S3 couldn't be contacted for a response, or the client
// couldn't parse the response from Amazon S3.
e.printStackTrace();
}
}

private static void uploadAndDeleteObjectsWithVersions() {


System.out.println("Uploading and deleting objects with versions specified.");

// Upload three sample objects.


ArrayList<KeyVersion> keys = new ArrayList<KeyVersion>();
for (int i = 0; i < 3; i++) {
String keyName = "delete object without version ID example " + i;
PutObjectResult putResult = S3_CLIENT.putObject(VERSIONED_BUCKET_NAME,
keyName,
"Object number " + i + " to be deleted.");


// Gather the new object keys with version IDs.


keys.add(new KeyVersion(keyName, putResult.getVersionId()));
}

// Delete the specified versions of the sample objects.


DeleteObjectsRequest multiObjectDeleteRequest = new
DeleteObjectsRequest(VERSIONED_BUCKET_NAME)
.withKeys(keys)
.withQuiet(false);

// Verify that the object versions were successfully deleted.


DeleteObjectsResult delObjRes = S3_CLIENT.deleteObjects(multiObjectDeleteRequest);
int successfulDeletes = delObjRes.getDeletedObjects().size();
System.out.println(successfulDeletes + " objects successfully deleted");
}

private static DeleteObjectsResult uploadAndDeleteObjectsWithoutVersions() {


System.out.println("Uploading and deleting objects with no versions specified.");

// Upload three sample objects.


ArrayList<KeyVersion> keys = new ArrayList<KeyVersion>();
for (int i = 0; i < 3; i++) {
String keyName = "delete object with version ID example " + i;
S3_CLIENT.putObject(VERSIONED_BUCKET_NAME, keyName, "Object number " + i + " to be deleted.");
// Gather the new object keys without version IDs.
keys.add(new KeyVersion(keyName));
}

// Delete the sample objects without specifying versions.


DeleteObjectsRequest multiObjectDeleteRequest = new
DeleteObjectsRequest(VERSIONED_BUCKET_NAME).withKeys(keys)
.withQuiet(false);

// Verify that delete markers were successfully added to the objects.


DeleteObjectsResult delObjRes = S3_CLIENT.deleteObjects(multiObjectDeleteRequest);
int successfulDeletes = delObjRes.getDeletedObjects().size();
System.out.println(successfulDeletes + " objects successfully marked for deletion without versions.");
return delObjRes;
}

private static void multiObjectVersionedDeleteRemoveDeleteMarkers(DeleteObjectsResult


response) {
List<KeyVersion> keyList = new ArrayList<KeyVersion>();
for (DeletedObject deletedObject : response.getDeletedObjects()) {
// Note that the specified version ID is the version ID for the delete marker.
keyList.add(new KeyVersion(deletedObject.getKey(),
deletedObject.getDeleteMarkerVersionId()));
}
// Create a request to delete the delete markers.
DeleteObjectsRequest deleteRequest = new
DeleteObjectsRequest(VERSIONED_BUCKET_NAME).withKeys(keyList);

// Delete the delete markers, leaving the objects intact in the bucket.
DeleteObjectsResult delObjRes = S3_CLIENT.deleteObjects(deleteRequest);
int successfulDeletes = delObjRes.getDeletedObjects().size();
System.out.println(successfulDeletes + " delete markers successfully deleted");
}
}


Deleting Multiple Objects Using the AWS SDK for .NET


The AWS SDK for .NET provides a convenient method for deleting multiple objects: DeleteObjects.
For each object that you want to delete, you specify the key name and the version of the object. If the
bucket is not versioning-enabled, you specify null for the version ID. If an exception occurs, review the
DeleteObjectsException response to determine which objects were not deleted and why.

Example Deleting Multiple Objects from a Non-Versioning Bucket


The following C# example uses the multi-object delete API to delete objects from a bucket that
is not version-enabled. The example uploads the sample objects to the bucket, and then uses the
DeleteObjects method to delete the objects in a single request. In the DeleteObjectsRequest, the
example specifies only the object key names because the version IDs are null.

For information about creating and testing a working sample, see Running the Amazon S3 .NET Code
Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class DeleteMultipleObjectsNonVersionedBucketTest
{
private const string bucketName = "*** versioning-enabled bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
MultiObjectDeleteAsync().Wait();
}

static async Task MultiObjectDeleteAsync()


{
// Create sample objects (for subsequent deletion).
var keysAndVersions = await PutObjectsAsync(3);

// a. multi-object delete by specifying the key names and version IDs.


DeleteObjectsRequest multiObjectDeleteRequest = new DeleteObjectsRequest
{
BucketName = bucketName,
Objects = keysAndVersions // This includes the object keys and null version IDs.
};
// You can add specific object key to the delete request using the .AddKey.
// multiObjectDeleteRequest.AddKey("TickerReference.csv", null);
try
{
DeleteObjectsResponse response = await
s3Client.DeleteObjectsAsync(multiObjectDeleteRequest);
Console.WriteLine("Successfully deleted all the {0} items",
response.DeletedObjects.Count);
}
catch (DeleteObjectsException e)


{
PrintDeletionErrorStatus(e);
}
}

private static void PrintDeletionErrorStatus(DeleteObjectsException e)


{
// var errorResponse = e.ErrorResponse;
DeleteObjectsResponse errorResponse = e.Response;
Console.WriteLine("x {0}", errorResponse.DeletedObjects.Count);

Console.WriteLine("No. of objects successfully deleted = {0}",


errorResponse.DeletedObjects.Count);
Console.WriteLine("No. of objects failed to delete = {0}",
errorResponse.DeleteErrors.Count);

Console.WriteLine("Printing error data...");


foreach (DeleteError deleteError in errorResponse.DeleteErrors)
{
Console.WriteLine("Object Key: {0}\t{1}\t{2}", deleteError.Key,
deleteError.Code, deleteError.Message);
}
}

static async Task<List<KeyVersion>> PutObjectsAsync(int number)


{
List<KeyVersion> keys = new List<KeyVersion>();
for (int i = 0; i < number; i++)
{
string key = "ExampleObject-" + new System.Random().Next();
PutObjectRequest request = new PutObjectRequest
{
BucketName = bucketName,
Key = key,
ContentBody = "This is the content body!",
};

PutObjectResponse response = await s3Client.PutObjectAsync(request);


KeyVersion keyVersion = new KeyVersion
{
Key = key,
// For non-versioned bucket operations, we only need object key.
// VersionId = response.VersionId
};
keys.Add(keyVersion);
}
return keys;
}
}
}

Example Multi-Object Deletion for a Version-Enabled Bucket

The following C# example uses the multi-object delete API to delete objects from a version-enabled
bucket. The example performs the following actions:

1. Creates sample objects and deletes them by specifying the key name and version ID for each object.
The operation deletes specific versions of the objects.
2. Creates sample objects and deletes them by specifying only the key names. Because the example
doesn't specify version IDs, the operation only adds delete markers. It doesn't delete any specific
versions of the objects. After deletion, these objects don't appear in the Amazon S3 console.
3. Deletes the delete markers by specifying the object keys and version IDs of the delete markers. When
the operation deletes the delete markers, the objects reappear in the console.


For information about creating and testing a working sample, see Running the Amazon S3 .NET Code
Examples (p. 664).

// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-s3-developer-guide/blob/master/LICENSE-SAMPLECODE.)

using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
class DeleteMultipleObjVersionedBucketTest
{
private const string bucketName = "*** versioning-enabled bucket name ***";
// Specify your bucket region (an example region is shown).
private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
private static IAmazonS3 s3Client;

public static void Main()


{
s3Client = new AmazonS3Client(bucketRegion);
DeleteMultipleObjectsFromVersionedBucketAsync().Wait();
}

private static async Task DeleteMultipleObjectsFromVersionedBucketAsync()


{

// Delete objects (specifying object version in the request).


await DeleteObjectVersionsAsync();

// Delete objects (without specifying object version in the request).


var deletedObjects = await DeleteObjectsAsync();

// Additional exercise - remove the delete markers S3 returned in the preceding response.
// This results in the objects reappearing in the bucket (you can
// verify the appearance/disappearance of objects in the console).
await RemoveDeleteMarkersAsync(deletedObjects);
}

private static async Task<List<DeletedObject>> DeleteObjectsAsync()


{
// Upload the sample objects.
var keysAndVersions2 = await PutObjectsAsync(3);

// Delete objects using only keys. Amazon S3 creates a delete marker and
// returns its version ID in the response.
List<DeletedObject> deletedObjects = await
NonVersionedDeleteAsync(keysAndVersions2);
return deletedObjects;
}

private static async Task DeleteObjectVersionsAsync()


{
// Upload the sample objects.
var keysAndVersions1 = await PutObjectsAsync(3);

// Delete the specific object versions.


await VersionedDeleteAsync(keysAndVersions1);
}


private static void PrintDeletionReport(DeleteObjectsException e)


{
var errorResponse = e.Response;
Console.WriteLine("No. of objects successfully deleted = {0}",
errorResponse.DeletedObjects.Count);
Console.WriteLine("No. of objects failed to delete = {0}",
errorResponse.DeleteErrors.Count);
Console.WriteLine("Printing error data...");
foreach (var deleteError in errorResponse.DeleteErrors)
{
Console.WriteLine("Object Key: {0}\t{1}\t{2}", deleteError.Key,
deleteError.Code, deleteError.Message);
}
}

static async Task VersionedDeleteAsync(List<KeyVersion> keys)


{
// a. Perform a multi-object delete by specifying the key names and version IDs.
var multiObjectDeleteRequest = new DeleteObjectsRequest
{
BucketName = bucketName,
Objects = keys // This includes the object keys and specific version IDs.
};
try
{
Console.WriteLine("Executing VersionedDelete...");
DeleteObjectsResponse response = await
s3Client.DeleteObjectsAsync(multiObjectDeleteRequest);
Console.WriteLine("Successfully deleted all the {0} items",
response.DeletedObjects.Count);
}
catch (DeleteObjectsException e)
{
PrintDeletionReport(e);
}
}

static async Task<List<DeletedObject>> NonVersionedDeleteAsync(List<KeyVersion>


keys)
{
// Create a request that includes only the object key names.
DeleteObjectsRequest multiObjectDeleteRequest = new DeleteObjectsRequest();
multiObjectDeleteRequest.BucketName = bucketName;

foreach (var key in keys)


{
multiObjectDeleteRequest.AddKey(key.Key);
}
// Execute DeleteObjects - Amazon S3 adds a delete marker for each object
// deletion. The objects disappear from your bucket.
// You can verify that using the Amazon S3 console.
DeleteObjectsResponse response;
try
{
Console.WriteLine("Executing NonVersionedDelete...");
response = await s3Client.DeleteObjectsAsync(multiObjectDeleteRequest);
Console.WriteLine("Successfully deleted all the {0} items",
response.DeletedObjects.Count);
}
catch (DeleteObjectsException e)
{
PrintDeletionReport(e);
throw; // Some deletes failed. Investigate before continuing.
}


// This response contains the DeletedObjects list, which we use to delete
// the delete markers.
return response.DeletedObjects;
}

private static async Task RemoveDeleteMarkersAsync(List<DeletedObject>


deletedObjects)
{
var keyVersionList = new List<KeyVersion>();

foreach (var deletedObject in deletedObjects)


{
KeyVersion keyVersion = new KeyVersion
{
Key = deletedObject.Key,
VersionId = deletedObject.DeleteMarkerVersionId
};
keyVersionList.Add(keyVersion);
}
// Create another request to delete the delete markers.
var multiObjectDeleteRequest = new DeleteObjectsRequest
{
BucketName = bucketName,
Objects = keyVersionList
};

// Now, delete the delete marker to bring your objects back to the bucket.
try
{
Console.WriteLine("Removing the delete markers .....");
var deleteObjectResponse = await
s3Client.DeleteObjectsAsync(multiObjectDeleteRequest);
Console.WriteLine("Successfully deleted all the {0} delete markers",
deleteObjectResponse.DeletedObjects.Count);
}
catch (DeleteObjectsException e)
{
PrintDeletionReport(e);
}
}

static async Task<List<KeyVersion>> PutObjectsAsync(int number)


{
var keys = new List<KeyVersion>();

for (var i = 0; i < number; i++)


{
string key = "ObjectToDelete-" + new System.Random().Next();
PutObjectRequest request = new PutObjectRequest
{
BucketName = bucketName,
Key = key,
ContentBody = "This is the content body!",

};

var response = await s3Client.PutObjectAsync(request);


KeyVersion keyVersion = new KeyVersion
{
Key = key,
VersionId = response.VersionId
};

keys.Add(keyVersion);
}
return keys;


}
}
}

Deleting Multiple Objects Using the AWS SDK for PHP


This topic shows how to use classes from version 3 of the AWS SDK for PHP to delete multiple objects
from versioned and non-versioned Amazon S3 buckets. For more information about versioning, see
Using Versioning (p. 425).

This topic assumes that you are already following the instructions for Using the AWS SDK for PHP and
Running PHP Examples (p. 664) and have the AWS SDK for PHP properly installed.

Example Deleting Multiple Objects from a Non-Versioned Bucket


The following PHP example uses the deleteObjects() method to delete multiple objects from a
bucket that is not version-enabled.

For information about running the PHP examples in this guide, see Running PHP Examples (p. 665).

<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// 1. Create a few objects.


for ($i = 1; $i <= 3; $i++) {
$s3->putObject([
'Bucket' => $bucket,
'Key' => "key{$i}",
'Body' => "content {$i}",
]);
}

// 2. List the objects and get the keys.


$keys = $s3->listObjects([
'Bucket' => $bucket
]) ->getPath('Contents/*/Key');

// 3. Delete the objects.


$s3->deleteObjects([
'Bucket' => $bucket,
'Delete' => [
'Objects' => array_map(function ($key) {
return ['Key' => $key];
}, $keys)
],
]);

Example Deleting Multiple Objects from a Version-enabled Bucket


The following PHP example uses the deleteObjects() method to delete multiple objects from a
version-enabled bucket.

For information about running the PHP examples in this guide, see Running PHP Examples (p. 665).


<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$bucket = '*** Your Bucket Name ***';


$keyname = '*** Your Object Key ***';

$s3 = new S3Client([


'version' => 'latest',
'region' => 'us-east-1'
]);

// 1. Enable object versioning for the bucket.


$s3->putBucketVersioning([
'Bucket' => $bucket,
'Status' => 'Enabled',
]);

// 2. Create a few versions of an object.


for ($i = 1; $i <= 3; $i++) {
$s3->putObject([
'Bucket' => $bucket,
'Key' => $keyname,
'Body' => "content {$i}",
]);
}

// 3. List the object versions and get the keys and version IDs.
$versions = $s3->listObjectVersions(['Bucket' => $bucket])
->getPath('Versions');

// 4. Delete the object versions.


$result = $s3->deleteObjects([
'Bucket' => $bucket,
'Delete' => [
'Objects' => array_map(function ($version) {
return [
'Key' => $version['Key'],
'VersionId' => $version['VersionId']
];
}, $versions),
],
]);

echo "The following objects were deleted successfully:". PHP_EOL;


foreach ($result['Deleted'] as $object) {
echo "Key: {$object['Key']}, VersionId: {$object['VersionId']}" . PHP_EOL;
}

echo PHP_EOL . "The following objects could not be deleted:" . PHP_EOL;


foreach ($result['Errors'] as $object) {
echo "Key: {$object['Key']}, VersionId: {$object['VersionId']}" . PHP_EOL;
}

// 5. Suspend object versioning for the bucket.


$s3->putBucketVersioning([
'Bucket' => $bucket,
'Status' => 'Suspended',
]);

Related Resources

• AWS SDK for PHP for Amazon S3 Aws\S3\S3Client Class


• AWS SDK for PHP Documentation

Deleting Multiple Objects Using the REST API


You can use the AWS SDKs to delete multiple objects using the Multi-Object Delete API. However, if your
application requires it, you can send REST requests directly. For more information, go to Delete Multiple
Objects in the Amazon Simple Storage Service API Reference.

Selecting Content from Objects


With Amazon S3 Select, you can use simple structured query language (SQL) statements to filter the
contents of Amazon S3 objects and retrieve just the subset of data that you need. By using Amazon S3
Select to filter this data, you can reduce the amount of data that Amazon S3 transfers, which reduces the
cost and latency to retrieve this data.

Amazon S3 Select works on objects stored in CSV, JSON, or Apache Parquet format. It also works
with objects that are compressed with GZIP or BZIP2 (for CSV and JSON objects only), and server-
side encrypted objects. You can specify the format of the results as either CSV or JSON, and you can
determine how the records in the result are delimited.

You pass SQL expressions to Amazon S3 in the request. Amazon S3 Select supports a subset of SQL. For
more information about the SQL elements that are supported by Amazon S3 Select, see SQL Reference
for Amazon S3 Select and Glacier Select (p. 700).

You can perform SQL queries using AWS SDKs, the SELECT Object Content REST API, the AWS Command
Line Interface (AWS CLI), or the Amazon S3 console. The Amazon S3 console limits the amount of data
returned to 40 MB. To retrieve more data, use the AWS CLI or the API.

Requirements and Limits


The following are requirements for using Amazon S3 Select:

• You must have s3:GetObject permission for the object you are querying.
• If the object you are querying is encrypted with a customer-provided encryption key (SSE-C), you must
use https, and you must provide the encryption key in the request.

The following limits apply when using Amazon S3 Select:

• The maximum length of a SQL expression is 256 KB.


• The maximum length of a record in the result is 1 MB.

Additional limitations apply when using Amazon S3 Select with Parquet objects:

• Amazon S3 Select supports only columnar compression using GZIP or Snappy. Amazon S3 Select
doesn't support whole-object compression for Parquet objects.
• Amazon S3 Select doesn't support Parquet output. You must specify the output format as CSV or
JSON.
• The maximum uncompressed block size is 256 MB.
• The maximum number of columns is 100.
• You must use the data types specified in the object's schema.
• Selecting on a repeated field returns only the last value.


Constructing a Request
When you construct a request, you provide details of the object that is being queried using an
InputSerialization object. You provide details of how the results are to be returned using an
OutputSerialization object. You also include the SQL expression that Amazon S3 uses to filter the
request.

For more information about constructing an Amazon S3 Select request, see SELECT Object Content in
the Amazon Simple Storage Service API Reference. You can also see one of the SDK code examples in the
following sections.

Errors
Amazon S3 Select returns an error code and associated error message when an issue is encountered
while attempting to execute a query. For a list of error codes and descriptions, see the Special Errors
section of the SELECT Object Content page in the Amazon Simple Storage Service API Reference.

Topics
• Related Resources (p. 244)
• Selecting Content from Objects Using the SDK for Java (p. 244)
• Selecting Content from Objects Using the REST API (p. 246)
• Selecting Content from Objects Using Other SDKs (p. 246)

Related Resources
• Using the AWS SDKs, CLI, and Explorers (p. 655)

Selecting Content from Objects Using the SDK for Java


With the AWS SDK for Java, you use the selectObjectContent method to select the contents of an
object. On success, the method returns the results of the SQL expression. The specified bucket and
object key must exist, or an error results.

Example
The following Java code returns the value of the first column for each record that is stored in an object
that contains data stored in CSV format. It also requests Progress and Stats messages to be returned.
You must provide a valid bucket name and an object that contains data in CSV format.

For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code
Examples (p. 662).

package com.amazonaws;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CSVInput;
import com.amazonaws.services.s3.model.CSVOutput;
import com.amazonaws.services.s3.model.CompressionType;
import com.amazonaws.services.s3.model.ExpressionType;
import com.amazonaws.services.s3.model.InputSerialization;
import com.amazonaws.services.s3.model.OutputSerialization;
import com.amazonaws.services.s3.model.SelectObjectContentEvent;
import com.amazonaws.services.s3.model.SelectObjectContentEventVisitor;
import com.amazonaws.services.s3.model.SelectObjectContentRequest;
import com.amazonaws.services.s3.model.SelectObjectContentResult;


import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.concurrent.atomic.AtomicBoolean;

import static com.amazonaws.util.IOUtils.copy;

/**
* This example shows how to query data from S3Select and consume the response
* in the form of an InputStream of records and write it to a file.
*/

public class RecordInputStreamExample {

private static final String BUCKET_NAME = "${my-s3-bucket}";


private static final String CSV_OBJECT_KEY = "${my-csv-object-key}";
private static final String S3_SELECT_RESULTS_PATH = "${my-s3-select-results-path}";
private static final String QUERY = "select s._1 from S3Object s";

public static void main(String[] args) throws Exception {


final AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();

SelectObjectContentRequest request = generateBaseCSVRequest(BUCKET_NAME,


CSV_OBJECT_KEY, QUERY);
final AtomicBoolean isResultComplete = new AtomicBoolean(false);

try (OutputStream fileOutputStream = new FileOutputStream(new File


(S3_SELECT_RESULTS_PATH));
SelectObjectContentResult result = s3Client.selectObjectContent(request)) {
InputStream resultInputStream = result.getPayload().getRecordsInputStream(
new SelectObjectContentEventVisitor() {
@Override
public void visit(SelectObjectContentEvent.StatsEvent event)
{
System.out.println(
"Received Stats, Bytes Scanned: " +
event.getDetails().getBytesScanned()
+ " Bytes Processed: " +
event.getDetails().getBytesProcessed());
}

/*
* An End Event informs that the request has finished successfully.
*/
@Override
public void visit(SelectObjectContentEvent.EndEvent event)
{
isResultComplete.set(true);
System.out.println("Received End Event. Result is complete.");
}
}
);

copy(resultInputStream, fileOutputStream);
}

/*
* The End Event indicates all matching records have been transmitted.
* If the End Event is not received, the results may be incomplete.
*/
if (!isResultComplete.get()) {
throw new Exception("S3 Select request was incomplete as End Event was not
received.");


}
}

private static SelectObjectContentRequest generateBaseCSVRequest(String bucket, String


key, String query) {
SelectObjectContentRequest request = new SelectObjectContentRequest();
request.setBucketName(bucket);
request.setKey(key);
request.setExpression(query);
request.setExpressionType(ExpressionType.SQL);

InputSerialization inputSerialization = new InputSerialization();


inputSerialization.setCsv(new CSVInput());
inputSerialization.setCompressionType(CompressionType.NONE);
request.setInputSerialization(inputSerialization);

OutputSerialization outputSerialization = new OutputSerialization();


outputSerialization.setCsv(new CSVOutput());
request.setOutputSerialization(outputSerialization);

return request;
}
}

Selecting Content from Objects Using the REST API


You can use the AWS SDK to select content from objects. However, if your application requires it, you can
send REST requests directly. For more information about the request and response format, see SELECT
Object Content.

Selecting Content