
Contents

Azure Blob Storage documentation


Overview
What is Azure Blob Storage?
Compare core storage services
Blob storage
Overview
Introduction to Blob storage
Quickstarts
Work with blobs
Azure portal
Storage Explorer
PowerShell
CLI
.NET
.NET (v12 SDK)
.NET (v11 SDK)
Java
Java (v12 SDK)
Java (v8 SDK)
Python
Python (v12 SDK)
Python (v2.1 SDK)
JavaScript
JavaScript (v12 SDK for Node.js)
JavaScript (v10 SDK for Node.js)
JavaScript (v12 SDK for browser)
JavaScript (v10 SDK for browser)
C++ (v12 SDK)
Go
PHP
Ruby
Xamarin
Tutorials
Upload and process an image
1 - Upload and retrieve image data in the cloud
2 - Trigger Azure Functions to process blobs
3 - Secure application data
4 - Monitor and troubleshoot storage
Migrate data to cloud with AzCopy
Optimize storage application performance and scalability
1 - Create a VM and storage account
2 - Upload large data to an Azure storage account
3 - Download large data from an Azure storage account
4 - Verify throughput and latency metrics
Host a static website
Design applications for high availability
1 - Make your application highly available
2 - Simulate a failure in reading data from the primary region
Encrypt and decrypt blobs using Azure Key Vault
Add role assignment conditions
Portal
PowerShell
CLI
Samples
.NET
Java
Python
JavaScript
C++
Other languages
Azure PowerShell
Azure CLI
Azure Resource Graph queries
Concepts
Storage accounts
Overview
Premium performance
Authorization
Authorizing data operations
Authorize with Azure AD
Authorize with Azure roles
Authorize with conditions
Actions and attributes for conditions
Security for conditions
Example conditions
Authorize with Shared Key
Delegate access with shared access signatures (SAS)
Authorizing management operations
Security
Security recommendations
Security baseline
Azure Storage encryption at rest
Encryption with customer-managed keys
Encryption on the request with customer-provided keys
Encryption scopes
Use Azure Private Endpoints
Configure network routing preference
Data protection
Overview
Soft delete for containers
Soft delete for blobs
Versioning
Point-in-time restore
Snapshots
Change feed
Immutable storage for blobs
Overview
Time-based retention policies
Legal holds
Redundancy and disaster recovery
Data redundancy
Customer-managed failover for disaster recovery
Blob access tiers and lifecycle management
Overview
Blob rehydration from Archive tier
Lifecycle management policies
Object replication
Performance, scaling, and cost optimization
Performance and scalability checklist
Latency in Blob storage
Azure Storage reserved capacity
Plan and manage costs
Scalability and performance targets
Blob storage
Standard storage accounts
Premium block blob storage accounts
Premium page blob storage accounts
Storage resource provider
Find, search, and understand blob data
Blob inventory
Blob index tags
Full text search
Azure Cognitive Search
Data migration
Storage migration overview
Compare data transfer solutions
Large dataset, low network bandwidth
Large dataset, moderate to high network bandwidth
Small dataset, low to moderate network bandwidth
Periodic data transfer
Monitoring
Monitor Blob storage
Transition from classic metrics
Monitoring (classic)
Storage Analytics
Metrics
Logs
Monitor, diagnose, and troubleshoot
Protocol support
NFS 3.0
Overview
Performance considerations
Known issues
SFTP
Overview
Known issues
Event handling
Page blob features
Static websites
Upgrading to Data Lake Storage Gen2
Blob Storage feature support
How to
Create and manage storage accounts
Create a storage account
Upgrade a storage account
Recover a deleted storage account
Get account configuration properties
Create and manage containers
Create or delete a container (.NET)
List containers (.NET)
Manage properties and metadata (.NET)
Create and manage blobs
List blobs (SDK)
Manage blob properties and metadata (.NET)
Copy a blob (SDK)
Use blob index tags
Enable blob inventory reports
Calculate blobs count and total size
Authorize access to blob data
Authorization options for users
Portal
PowerShell
Azure CLI
Manage access rights with Azure RBAC
Authenticate and authorize with Azure AD
Get an access token for authorization with Azure AD
Authorize from an application running in Azure
Authorize from a native or web application
Authorize with Shared Key
View and manage account keys
Configure connection strings
Use the Azure Storage REST API
Prevent authorization with Shared Key
Delegate access with shared access signatures (SAS)
Create a user delegation SAS
PowerShell
Azure CLI
.NET
Create a service SAS
Create an account SAS (.NET)
Define a stored access policy
Create a SAS expiration policy
Manage anonymous read access to blob data
Configure anonymous read access for containers and blobs
Prevent anonymous read access to blob data
Access public containers and blobs anonymously (.NET)
Secure blob data
Manage Azure Storage encryption
Check whether a blob is encrypted
Manage encryption keys for the storage account
Check the encryption key model for the account
Configure encryption with customer-managed keys
Provide an encryption key on a request
Manage encryption scopes
Enable infrastructure encryption for the account
Configure client-side encryption
.NET
Java
Python
Configure network security
Require secure transfer
Configure firewalls and virtual networks
Manage Transport Layer Security (TLS)
Enforce minimum TLS version for incoming requests
Configure TLS version for a client application
Configure network routing preference
Enable threat protection with Microsoft Defender for Storage
Protect blob data
Lock a storage account
Enable soft delete for containers
Enable blob versioning
Enable and manage soft delete for blobs
Enable blob soft delete
Manage and restore soft-deleted blobs
Enable point-in-time restore
Create snapshots (.NET)
Process change feed logs
Configure an immutability policy
Configure version-level immutability policies
Configure container-level immutability policies
Manage redundancy and failover
Change redundancy configuration
Design highly available applications
Check the Last Sync Time property
Initiate account failover
Manage blob tiering and lifecycle
Change a blob's access tier
Manage data archiving
Archive a blob
Rehydrate an archived blob
Handle an event on blob rehydration
Configure lifecycle management policies
Manage object replication
Configure object replication policies
Prevent object replication across tenants
Manage concurrency
Use a storage emulator
Use the Azurite open-source emulator
Use Azurite to run automated tests
Use the Azure Storage emulator (deprecated)
Host a static website
Host a static website
Integrate with Azure CDN
Use GitHub Actions to deploy a static site to Azure Storage
Use a custom domain
Route events to a custom endpoint
Transfer data
AzCopy
Get started
Authorize with Azure AD
Optimize performance
Use logs to find errors and resume jobs
Examples: Upload
Examples: Download
Examples: Copy between accounts
Examples: Synchronize
Examples: Amazon S3 buckets
Examples: Google Cloud Storage buckets
Azure Data Factory
Transfer data by using SFTP
Mount storage by using NFS
Mount storage from Linux using blobfuse
Transfer data with the Data Movement library
Develop with blobs
iOS
Java
Use the Spring Boot Starter
Move across regions
Upgrade to Data Lake Storage Gen2
Monitor
Scenarios and best practices
Use Storage insights
Monitor (classic)
Enable and manage metrics (classic)
Enable and manage logs (classic)
Troubleshoot
Latency issues
Reference
Blob Storage API reference
AzCopy v10
Configuration settings
azcopy
azcopy bench
azcopy copy
azcopy doc
azcopy env
azcopy jobs
azcopy jobs clean
azcopy jobs list
azcopy jobs remove
azcopy jobs resume
azcopy jobs show
azcopy load
azcopy load clfs
azcopy list
azcopy login
azcopy logout
azcopy make
azcopy remove
azcopy sync
Resource Manager template
Monitoring data
Host keys (SFTP) support
Azure Policy built-ins
Resources
Azure updates
Azure Storage Explorer
Download Storage Explorer
Get started with Storage Explorer
Sign in to Storage Explorer
Storage Explorer networking
Storage Explorer release notes
Troubleshoot Storage Explorer
Storage Explorer command line options
Storage Explorer direct link
Storage Explorer security
Storage Explorer soft delete
Storage Explorer blob versioning
Storage Explorer manage Azure Blob storage
Storage Explorer create file shares
Storage Explorer support policy and lifecycle
Storage Explorer accessibility
Blob storage on Microsoft Q&A
Blob storage on Stack Overflow
Pricing for block blobs
Pricing for page blobs
Azure pricing calculator
Videos
Compare access with NFS to Azure Blob Storage, Azure Files, and Azure NetApp Files
NuGet packages (.NET)
Microsoft.Azure.Storage.Common (version 11.x)
Azure.Storage.Common (version 12.x)
Microsoft.Azure.Storage.Blob (version 11.x)
Azure.Storage.Blobs (version 12.x)
Azure Configuration Manager
Azure Storage Data Movement library
Storage Resource Provider library
Source code
.NET
Azure Storage client library
Version 12.x
Version 11.x and earlier
Data Movement library
Storage Resource Provider library
Java
Azure Storage client library version 12.x
Azure Storage client library version 8.x and earlier
Node.js
Azure Storage client library version 12.x
Azure Storage client library version 10.x
Python
Azure Storage client library version 12.x
Azure Storage client library version 2.1
Compliance offerings
Data Lake Storage Gen2
Switch to Data Lake Storage Gen1 documentation
Overview
Introduction to Data Lake Storage
Tutorials
Use with Synapse SQL
Use with Databricks and Spark
Use with Apache Hive and HDInsight
Use with Databricks Delta and Event Grid
Use with other Azure services
Concepts
Best practices
Query acceleration
Premium tier for Data Lake Storage
Architecture
Azure Blob File System driver for Hadoop
Azure Blob File System URI
About hierarchical namespaces
Security
Security recommendations
Access control model
Access control lists
Compatibility
Multi-protocol access
Supported Blob storage features
Supported Azure services
Supported open source platforms
Monitoring
Monitor Blob storage
Transition from classic metrics
Monitoring (classic)
Storage Analytics
Metrics
Logs
Monitor, diagnose, and troubleshoot
How to
Create a storage account
Migrate accounts and data stores
Migrate from Data Lake Storage Gen1
Migrate from Gen1
Migrate by using Azure portal
Migrate HDFS stores
Migrate an HDFS store offline
Migrate an HDFS store online
Transfer data
AzCopy
Get started
Authorize with Azure AD
Optimize performance
Use logs to find errors and resume jobs
Examples: Upload
Examples: Download
Examples: Copy between accounts
Examples: Synchronize
Examples: Amazon S3 buckets
Examples: Google Cloud Storage buckets
Azure Data Factory
Transfer data with the DistCp tool
Transfer data with the Data Movement library
Manage access control
Azure portal
Storage Explorer
PowerShell
CLI
.NET
Java
Python
JavaScript
Work with data
Storage Explorer
PowerShell
CLI
.NET
Java
Python
JavaScript
Query acceleration
Hadoop File System CLI
Use with Azure services
Reference
.NET
Java
Python
JavaScript
C++
REST
Azure CLI
Query acceleration reference
AzCopy v10
azcopy
azcopy bench
azcopy copy
azcopy doc
azcopy env
azcopy jobs
azcopy jobs clean
azcopy jobs list
azcopy jobs remove
azcopy jobs resume
azcopy jobs show
azcopy list
azcopy login
azcopy logout
azcopy make
azcopy remove
azcopy sync
Resource Manager template
Azure Policy built-ins
Resources
Known issues
Azure Roadmap
Azure updates
Azure Storage client tools
Azure Storage Explorer
Download Storage Explorer
Get started with Storage Explorer
Storage Explorer release notes
Troubleshoot Storage Explorer
Storage Explorer security
Storage Explorer blob versioning
Storage Explorer manage Azure Blob storage
Storage Explorer create file shares
Storage Explorer support policy and lifecycle
Storage Explorer accessibility
Microsoft Q&A question page
Azure Storage on Stack Overflow
Pricing
Azure pricing calculator
Pricing calculator
Service updates
Stack Overflow
Videos
Partners
What is Azure Blob storage?
11/25/2021 • 2 minutes to read

Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing
massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model
or definition, such as text or binary data.

About Blob storage


Blob storage is designed for:
Serving images or documents directly to a browser.
Storing files for distributed access.
Streaming video and audio.
Writing to log files.
Storing data for backup and restore, disaster recovery, and archiving.
Storing data for analysis by an on-premises or Azure-hosted service.
Users or client applications can access objects in Blob storage via HTTP/HTTPS, from anywhere in the world.
Objects in Blob storage are accessible via the Azure Storage REST API, Azure PowerShell, Azure CLI, or an Azure
Storage client library. Client libraries are available for different languages, including:
.NET
Java
Node.js
Python
Go
PHP
Ruby

About Azure Data Lake Storage Gen2


Blob storage supports Azure Data Lake Storage Gen2, Microsoft's enterprise big data analytics solution for the
cloud. Azure Data Lake Storage Gen2 offers a hierarchical file system as well as the advantages of Blob storage,
including:
Low-cost, tiered storage
High availability
Strong consistency
Disaster recovery capabilities
For more information about Data Lake Storage Gen2, see Introduction to Azure Data Lake Storage Gen2.

Next steps
Introduction to Azure Blob storage
Introduction to Azure Data Lake Storage Gen2
Introduction to the core Azure Storage services
11/25/2021 • 11 minutes to read

The Azure Storage platform is Microsoft's cloud storage solution for modern data storage scenarios. Core
storage services offer a massively scalable object store for data objects, disk storage for Azure virtual machines
(VMs), a file system service for the cloud, a messaging store for reliable messaging, and a NoSQL store. The
services are:
Durable and highly available. Redundancy ensures that your data is safe in the event of transient
hardware failures. You can also opt to replicate data across datacenters or geographical regions for additional
protection from local catastrophe or natural disaster. Data replicated in this way remains highly available in
the event of an unexpected outage.
Secure. All data written to an Azure storage account is encrypted by the service. Azure Storage provides you
with fine-grained control over who has access to your data.
Scalable. Azure Storage is designed to be massively scalable to meet the data storage and performance
needs of today's applications.
Managed. Azure handles hardware maintenance, updates, and critical issues for you.
Accessible. Data in Azure Storage is accessible from anywhere in the world over HTTP or HTTPS. Microsoft
provides client libraries for Azure Storage in a variety of languages, including .NET, Java, Node.js, Python, PHP,
Ruby, Go, and others, as well as a mature REST API. Azure Storage supports scripting in Azure PowerShell or
Azure CLI. And the Azure portal and Azure Storage Explorer offer easy visual solutions for working with your
data.

Core storage services


The Azure Storage platform includes the following data services:
Azure Blobs: A massively scalable object store for text and binary data. Also includes support for big data
analytics through Data Lake Storage Gen2.
Azure Files: Managed file shares for cloud or on-premises deployments.
Azure Queues: A messaging store for reliable messaging between application components.
Azure Tables: A NoSQL store for schemaless storage of structured data.
Azure Disks: Block-level storage volumes for Azure VMs.
Each service is accessed through a storage account. To get started, see Create a storage account.

Example scenarios
The following table compares Files, Blobs, Disks, Queues, and Tables, and shows example scenarios for each.

Azure Files
Description: Offers fully managed cloud file shares that you can access from anywhere via the industry standard Server Message Block (SMB) protocol. You can mount Azure file shares from cloud or on-premises deployments of Windows, Linux, and macOS.
When to use: You want to "lift and shift" an application to the cloud that already uses the native file system APIs to share data between it and other applications running in Azure. You want to replace or supplement on-premises file servers or NAS devices. You want to store development and debugging tools that need to be accessed from many virtual machines.

Azure Blobs
Description: Allows unstructured data to be stored and accessed at a massive scale in block blobs. Also supports Azure Data Lake Storage Gen2 for enterprise big data analytics solutions.
When to use: You want your application to support streaming and random access scenarios. You want to be able to access application data from anywhere. You want to build an enterprise data lake on Azure and perform big data analytics.

Azure Disks
Description: Allows data to be persistently stored and accessed from an attached virtual hard disk.
When to use: You want to "lift and shift" applications that use native file system APIs to read and write data to persistent disks. You want to store data that is not required to be accessed from outside the virtual machine to which the disk is attached.

Azure Queues
Description: Allows for asynchronous message queueing between application components.
When to use: You want to decouple application components and use asynchronous messaging to communicate between them. For guidance around when to use Queue storage versus Service Bus queues, see Storage queues and Service Bus queues - compared and contrasted.

Azure Tables
Description: Allows you to store structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design.
When to use: You want to store flexible datasets like user data for web applications, address books, device information, or other types of metadata your service requires. For guidance around when to use Table storage versus the Azure Cosmos DB Table API, see Developing with Azure Cosmos DB Table API and Azure Table storage.

Blob storage
Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing
massive amounts of unstructured data, such as text or binary data.
Blob storage is ideal for:
Serving images or documents directly to a browser.
Storing files for distributed access.
Streaming video and audio.
Storing data for backup and restore, disaster recovery, and archiving.
Storing data for analysis by an on-premises or Azure-hosted service.
Objects in Blob storage can be accessed from anywhere in the world via HTTP or HTTPS. Users or client
applications can access blobs via URLs, the Azure Storage REST API, Azure PowerShell, Azure CLI, or an Azure
Storage client library. The storage client libraries are available for multiple languages, including .NET, Java,
Node.js, Python, PHP, and Ruby.
For more information about Blob storage, see Introduction to Blob storage.

Azure Files
Azure Files enables you to set up highly available network file shares that can be accessed by using the standard
Server Message Block (SMB) protocol. That means that multiple VMs can share the same files with both read
and write access. You can also read the files using the REST interface or the storage client libraries.
One thing that distinguishes Azure Files from files on a corporate file share is that you can access the files from
anywhere in the world using a URL that points to the file and includes a shared access signature (SAS) token.
You can generate SAS tokens; they allow specific access to a private asset for a specific amount of time.
File shares can be used for many common scenarios:
Many on-premises applications use file shares. This feature makes it easier to migrate those applications
that share data to Azure. If you mount the file share to the same drive letter that the on-premises
application uses, the part of your application that accesses the file share should work with minimal, if any,
changes.
Configuration files can be stored on a file share and accessed from multiple VMs. Tools and utilities used
by multiple developers in a group can be stored on a file share, ensuring that everybody can find them,
and that they use the same version.
Resource logs, metrics, and crash dumps are just three examples of data that can be written to a file share
and processed or analyzed later.
For more information about Azure Files, see Introduction to Azure Files.
Some SMB features are not applicable to the cloud. For more information, see Features not supported by the
Azure File service.
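Programmatic access to Azure Files is also available. The following is a minimal, hypothetical sketch (not part of this article) that assumes the Azure.Storage.Files.Shares client library; the connection string, share, directory, and file names are placeholders.

// Illustrative sketch: create a file share, create a directory, and upload a file.
// All names and the connection string below are placeholders.
using System.IO;
using Azure.Storage.Files.Shares;

class ShareExample
{
    static void Main()
    {
        string connectionString = "<storage-account-connection-string>";

        // Create the share and a directory if they don't already exist.
        var share = new ShareClient(connectionString, "config");
        share.CreateIfNotExists();

        ShareDirectoryClient directory = share.GetDirectoryClient("app-settings");
        directory.CreateIfNotExists();

        // Upload a local file into the directory.
        ShareFileClient file = directory.GetFileClient("settings.json");
        using FileStream stream = File.OpenRead("settings.json");
        file.Create(stream.Length);
        file.UploadRange(new Azure.HttpRange(0, stream.Length), stream);
    }
}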

Queue storage
The Azure Queue service is used to store and retrieve messages. Queue messages can be up to 64 KB in size,
and a queue can contain millions of messages. Queues are generally used to store lists of messages to be
processed asynchronously.
For example, say you want your customers to be able to upload pictures, and you want to create thumbnails for
each picture. You could have your customer wait for you to create the thumbnails while uploading the pictures.
An alternative would be to use a queue. When the customer finishes their upload, write a message to the queue.
Then have an Azure Function retrieve the message from the queue and create the thumbnails. Each of the parts
of this processing can be scaled separately, giving you more control when tuning it for your usage.
For more information about Azure Queues, see Introduction to Queues.
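To make the thumbnail scenario concrete, here is a minimal sketch assuming the Azure.Storage.Queues client library; the queue name and message text are illustrative placeholders, not part of this article.

// Illustrative sketch of the thumbnail scenario using Azure.Storage.Queues.
// The queue name and message contents are placeholders.
using System;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

class QueueExample
{
    static void Main()
    {
        string connectionString = "<storage-account-connection-string>";
        var queue = new QueueClient(connectionString, "thumbnail-requests");
        queue.CreateIfNotExists();

        // Producer: after the customer uploads a picture, record its blob name.
        queue.SendMessage("pictures/cat-photo.jpg");

        // Consumer (for example, an Azure Function or worker role):
        QueueMessage[] messages = queue.ReceiveMessages(maxMessages: 10);
        foreach (QueueMessage message in messages)
        {
            Console.WriteLine($"Create thumbnail for: {message.MessageText}");
            queue.DeleteMessage(message.MessageId, message.PopReceipt);
        }
    }
}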

Table storage
Azure Table storage is now part of Azure Cosmos DB. To see Azure Table storage documentation, see the Azure
Table Storage Overview. In addition to the existing Azure Table storage service, there is a new Azure Cosmos DB
Table API offering that provides throughput-optimized tables, global distribution, and automatic secondary
indexes. To learn more and try out the new premium experience, see Azure Cosmos DB Table API.
For more information about Table storage, see Overview of Azure Table storage.
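As a hedged illustration, the following sketch assumes the Azure.Data.Tables client library, which works against both Azure Table storage and the Azure Cosmos DB Table API; the table name and entity values are placeholders.

// Illustrative sketch using Azure.Data.Tables. The table name, keys, and
// property values are placeholders.
using System;
using Azure.Data.Tables;

class TableExample
{
    static void Main()
    {
        string connectionString = "<storage-account-connection-string>";
        var table = new TableClient(connectionString, "DeviceInfo");
        table.CreateIfNotExists();

        // Store a schemaless entity keyed by partition key and row key.
        var entity = new TableEntity("building-1", "device-42")
        {
            { "Model", "thermostat" },
            { "FirmwareVersion", "2.3.1" }
        };
        table.AddEntity(entity);

        // Read it back.
        TableEntity stored = table.GetEntity<TableEntity>("building-1", "device-42");
        Console.WriteLine(stored["Model"]);
    }
}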

Disk storage
An Azure managed disk is a virtual hard disk (VHD). You can think of it like a physical disk in an on-premises
server, but virtualized. Azure managed disks are stored as page blobs, which are random I/O storage objects in
Azure. We call a managed disk 'managed' because it is an abstraction over page blobs, blob containers, and
Azure storage accounts. With managed disks, all you have to do is provision the disk, and Azure takes care of
the rest.
For more information about managed disks, see Introduction to Azure managed disks.

Types of storage accounts


Azure Storage offers several types of storage accounts. Each type supports different features and has its own
pricing model. For more information about storage account types, see Azure storage account overview.

Secure access to storage accounts


Every request to Azure Storage must be authorized. Azure Storage supports the following authorization
methods:
Azure Active Directory (Azure AD) integration for blob, queue, and table data. Azure Storage
supports authentication and authorization with Azure AD for the Blob and Queue services via Azure role-
based access control (Azure RBAC). Authorization with Azure AD is also supported for the Table service in
preview. Authorizing requests with Azure AD is recommended for superior security and ease of use. For
more information, see Authorize access to data in Azure Storage.
Azure AD authorization over SMB for Azure Files. Azure Files supports identity-based authorization
over SMB (Server Message Block) through either Azure Active Directory Domain Services (Azure AD DS) or
on-premises Active Directory Domain Services (preview). Your domain-joined Windows VMs can access
Azure file shares using Azure AD credentials. For more information, see Overview of Azure Files identity-
based authentication support for SMB access and Planning for an Azure Files deployment.
Authorization with Shared Key. The Azure Storage Blob, Files, Queue, and Table services support
authorization with Shared Key. A client using Shared Key authorization passes a header with every request
that is signed using the storage account access key. For more information, see Authorize with Shared Key.
Authorization using shared access signatures (SAS). A shared access signature (SAS) is a string
containing a security token that can be appended to the URI for a storage resource. The security token
encapsulates constraints such as permissions and the interval of access. For more information, see Using
Shared Access Signatures (SAS). An illustrative example of a SAS-appended URI follows this list.
Anonymous access to containers and blobs. A container and its blobs may be publicly available. When
you specify that a container or blob is public, anyone can read it anonymously; no authentication is required.
For more information, see Manage anonymous read access to containers and blobs.
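As a hypothetical illustration of the SAS format, a shared access signature appended to a blob URI looks like the following; every parameter value shown is a placeholder, not a working token.

https://mystorageaccount.blob.core.windows.net/mycontainer/myblob.txt?sv=2020-08-04&sr=b&sp=r&st=2021-11-25T00:00:00Z&se=2021-11-26T00:00:00Z&sig=<signature>

Here sr=b scopes the token to a single blob, sp=r grants read permission, st and se bound the validity interval, and sig carries the signature computed over those parameters.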

Encryption
There are two basic kinds of encryption available for the core storage services. For more information about
security and encryption, see the Azure Storage security guide.
Encryption at rest
Azure Storage encryption protects and safeguards your data to meet your organizational security and
compliance commitments. Azure Storage automatically encrypts all data prior to persisting to the storage
account and decrypts it prior to retrieval. The encryption, decryption, and key management processes are
transparent to users. Customers can also choose to manage their own keys using Azure Key Vault. For more
information, see Azure Storage encryption for data at rest.
Client-side encryption
The Azure Storage client libraries provide methods for encrypting data from the client library before sending it
across the wire and decrypting the response. Data encrypted via client-side encryption is also encrypted at rest
by Azure Storage. For more information about client-side encryption, see Client-side encryption with .NET for
Azure Storage.

Redundancy
To ensure that your data is durable, Azure Storage stores multiple copies of your data. When you set up your
storage account, you select a redundancy option. For more information, see Azure Storage redundancy.

Transfer data to and from Azure Storage


You have several options for moving data into or out of Azure Storage. Which option you choose depends on
the size of your dataset and your network bandwidth. For more information, see Choose an Azure solution for
data transfer.

Pricing
When making decisions about how your data is stored and accessed, you should also consider the costs
involved. For more information, see Azure Storage pricing.

Storage APIs, libraries, and tools


You can access resources in a storage account by any language that can make HTTP/HTTPS requests.
Additionally, the core Azure Storage services offer programming libraries for several popular languages. These
libraries simplify many aspects of working with Azure Storage by handling details such as synchronous and
asynchronous invocation, batching of operations, exception management, automatic retries, operational
behavior, and so forth. Libraries are currently available for the following languages and platforms, with others in
the pipeline:
Azure Storage data API and library references
Azure Storage REST API
Azure Storage client library for .NET
Azure Storage client library for Java/Android
Azure Storage client library for Node.js
Azure Storage client library for Python
Azure Storage client library for PHP
Azure Storage client library for Ruby
Azure Storage client library for C++
Azure Storage management API and library references
Storage Resource Provider REST API
Storage Resource Provider Client Library for .NET
Storage Service Management REST API (Classic)
Azure Storage data movement API and library references
Storage Import/Export Service REST API
Storage Data Movement Client Library for .NET
Tools and utilities
Azure PowerShell Cmdlets for Storage
Azure CLI Cmdlets for Storage
AzCopy Command-Line Utility
Azure Storage Explorer is a free, standalone app from Microsoft that enables you to work visually with Azure
Storage data on Windows, macOS, and Linux.
Azure Resource Manager templates for Azure Storage

Next steps
To get up and running with core Azure Storage services, see Create a storage account.
Introduction to Azure Blob storage
11/25/2021 • 4 minutes to read

Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing
massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model
or definition, such as text or binary data.

About Blob storage


Blob storage is designed for:
Serving images or documents directly to a browser.
Storing files for distributed access.
Streaming video and audio.
Writing to log files.
Storing data for backup and restore, disaster recovery, and archiving.
Storing data for analysis by an on-premises or Azure-hosted service.
Users or client applications can access objects in Blob storage via HTTP/HTTPS, from anywhere in the world.
Objects in Blob storage are accessible via the Azure Storage REST API, Azure PowerShell, Azure CLI, or an Azure
Storage client library. Client libraries are available for different languages, including:
.NET
Java
Node.js
Python
Go
PHP
Ruby

About Azure Data Lake Storage Gen2


Blob storage supports Azure Data Lake Storage Gen2, Microsoft's enterprise big data analytics solution for the
cloud. Azure Data Lake Storage Gen2 offers a hierarchical file system as well as the advantages of Blob storage,
including:
Low-cost, tiered storage
High availability
Strong consistency
Disaster recovery capabilities
For more information about Data Lake Storage Gen2, see Introduction to Azure Data Lake Storage Gen2.

Blob storage resources


Blob storage offers three types of resources:
The storage account
A container in the storage account
A blob in a container
The following diagram shows the relationship between these resources.

Storage accounts
A storage account provides a unique namespace in Azure for your data. Every object that you store in Azure
Storage has an address that includes your unique account name. The combination of the account name and the
Azure Storage blob endpoint forms the base address for the objects in your storage account.
For example, if your storage account is named mystorageaccount, then the default endpoint for Blob storage is:

http://mystorageaccount.blob.core.windows.net

To create a storage account, see Create a storage account. To learn more about storage accounts, see Azure
storage account overview.
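For illustration, a minimal sketch (assuming the Azure.Storage.Blobs and Azure.Identity packages; the account, container, and blob names are placeholders) shows how the endpoint, container name, and blob name combine into a blob URI that a client can use.

// Illustrative sketch: the account endpoint plus container and blob names
// form the full blob URI. All names below are placeholders.
using System;
using Azure.Identity;
using Azure.Storage.Blobs;

class EndpointExample
{
    static void Main()
    {
        // https://<account>.blob.core.windows.net/<container>/<blob>
        var blobUri = new Uri("https://mystorageaccount.blob.core.windows.net/mycontainer/myblob.txt");

        // Authorize with Azure AD credentials and read the blob's properties.
        var blob = new BlobClient(blobUri, new DefaultAzureCredential());
        Console.WriteLine(blob.GetProperties().Value.ContentLength);
    }
}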
Containers
A container organizes a set of blobs, similar to a directory in a file system. A storage account can include an
unlimited number of containers, and a container can store an unlimited number of blobs.

NOTE
The container name must be lowercase. For more information about naming containers, see Naming and Referencing
Containers, Blobs, and Metadata.

Blobs
Azure Storage supports three types of blobs:
Block blobs store text and binary data. Block blobs are made up of blocks of data that can be managed
individually. Block blobs can store up to about 190.7 TiB.
Append blobs are made up of blocks like block blobs, but are optimized for append operations. Append
blobs are ideal for scenarios such as logging data from virtual machines.
Page blobs store random access files up to 8 TiB in size. Page blobs store virtual hard drive (VHD) files and
serve as disks for Azure virtual machines. For more information about page blobs, see Overview of Azure
page blobs
For more information about the different types of blobs, see Understanding Block Blobs, Append Blobs, and
Page Blobs.
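The following minimal sketch, assuming the Azure.Storage.Blobs v12 library and placeholder names, contrasts how block blobs and append blobs are typically written.

// Illustrative sketch: uploading a block blob versus appending to an append blob.
// Container and blob names are placeholders.
using System.IO;
using System.Text;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

class BlobTypesExample
{
    static void Main()
    {
        string connectionString = "<storage-account-connection-string>";
        var container = new BlobContainerClient(connectionString, "demo");
        container.CreateIfNotExists();

        // Block blob: upload (or overwrite) a whole object in one call.
        BlobClient blockBlob = container.GetBlobClient("document.txt");
        blockBlob.Upload(new MemoryStream(Encoding.UTF8.GetBytes("hello")), overwrite: true);

        // Append blob: create once, then keep adding blocks (for example, log entries).
        AppendBlobClient appendBlob = container.GetAppendBlobClient("app.log");
        appendBlob.CreateIfNotExists();
        appendBlob.AppendBlock(new MemoryStream(Encoding.UTF8.GetBytes("log line 1\n")));
    }
}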

Move data to Blob storage


A number of solutions exist for migrating existing data to Blob storage:
AzCopy is an easy-to-use command-line tool for Windows and Linux that copies data to and from Blob
storage, across containers, or across storage accounts. For more information about AzCopy, see Transfer data
with the AzCopy v10.
The Azure Storage Data Movement librar y is a .NET library for moving data between Azure Storage
services. The AzCopy utility is built with the Data Movement library. For more information, see the reference
documentation for the Data Movement library.
Azure Data Factor y supports copying data to and from Blob storage by using the account key, a shared
access signature, a service principal, or managed identities for Azure resources. For more information, see
Copy data to or from Azure Blob storage by using Azure Data Factory.
Blobfuse is a virtual file system driver for Azure Blob storage. You can use blobfuse to access your existing
block blob data in your Storage account through the Linux file system. For more information, see How to
mount Blob storage as a file system with blobfuse.
Azure Data Box service is available to transfer on-premises data to Blob storage when large datasets or
network constraints make uploading data over the wire unrealistic. Depending on your data size, you can
request Azure Data Box Disk, Azure Data Box, or Azure Data Box Heavy devices from Microsoft. You can then
copy your data to those devices and ship them back to Microsoft to be uploaded into Blob storage.
The Azure Import/Export service provides a way to import or export large amounts of data to and from
your storage account using hard drives that you provide. For more information, see Use the Microsoft Azure
Import/Export service to transfer data to Blob storage.

Next steps
Create a storage account
Scalability and performance targets for Blob storage
Quickstart: Upload, download, and list blobs with
the Azure portal
11/25/2021 • 3 minutes to read

In this quickstart, you learn how to use the Azure portal to create a container in Azure Storage, and to upload
and download block blobs in that container.

Prerequisites
To access Azure Storage, you'll need an Azure subscription. If you don't already have a subscription, create a free
account before you begin.
All access to Azure Storage takes place through a storage account. For this quickstart, create a storage account
using the Azure portal, Azure PowerShell, or Azure CLI. For help creating a storage account, see Create a storage
account.

Create a container
To create a container in the Azure portal, follow these steps:
1. Navigate to your new storage account in the Azure portal.
2. In the left menu for the storage account, scroll to the Data storage section, then select Blob containers.
3. Select the + Container button.
4. Type a name for your new container. The container name must be lowercase, must start with a letter or
number, and can include only letters, numbers, and the dash (-) character. For more information about
container and blob names, see Naming and referencing containers, blobs, and metadata.
5. Set the level of public access to the container. The default level is Private (no anonymous access).
6. Select OK to create the container.
Upload a block blob
Block blobs consist of blocks of data assembled to make a blob. Most scenarios using Blob storage employ block
blobs. Block blobs are ideal for storing text and binary data in the cloud, like files, images, and videos. This
quickstart shows how to work with block blobs.
To upload a block blob to your new container in the Azure portal, follow these steps:
1. In the Azure portal, navigate to the container you created in the previous section.
2. Select the container to show a list of blobs it contains. This container is new, so it won't yet contain any
blobs.
3. Select the Upload button to open the upload blade and browse your local file system to find a file to
upload as a block blob. You can optionally expand the Advanced section to configure other settings for
the upload operation.
4. Select the Upload button to upload the blob.
5. Upload as many blobs as you like in this way. You'll see that the new blobs are now listed within the
container.

Download a block blob


You can download a block blob to display in the browser or save to your local file system. To download a block
blob, follow these steps:
1. Navigate to the list of blobs that you uploaded in the previous section.
2. Right-click the blob you want to download, and select Download.
Delete a block blob
To delete one or more blobs in the Azure portal, follow these steps:
1. In the Azure portal, navigate to the container.
2. Display the list of blobs in the container.
3. Use the checkbox to select one or more blobs from the list.
4. Select the Delete button to delete the selected blobs.
5. In the dialog, confirm the deletion, and indicate whether you also want to delete blob snapshots.

Clean up resources
To remove all the resources you created in this quickstart, you can simply delete the container. All blobs in the
container will also be deleted.
To delete the container:
1. In the Azure portal, navigate to the list of containers in your storage account.
2. Select the container to delete.
3. Select the More button (...), and select Delete.
4. Confirm that you want to delete the container.
Next steps
In this quickstart, you learned how to create a container and upload a blob with Azure portal. To learn about
working with Blob storage from a web app, continue to a tutorial that shows how to upload images to a storage
account.
Tutorial: Upload image data in the cloud with Azure Storage
Quickstart: Use Azure Storage Explorer to create a
blob
11/25/2021 • 4 minutes to read

In this quickstart, you learn how to use Azure Storage Explorer to create a container and a blob. Next, you learn
how to download the blob to your local computer, and how to view all of the blobs in a container. You also learn
how to create a snapshot of a blob, manage container access policies, and create a shared access signature.

Prerequisites
To access Azure Storage, you'll need an Azure subscription. If you don't already have a subscription, create a free
account before you begin.
All access to Azure Storage takes place through a storage account. For this quickstart, create a storage account
using the Azure portal, Azure PowerShell, or Azure CLI. For help creating a storage account, see Create a storage
account.
This quickstart requires that you install Azure Storage Explorer. To install Azure Storage Explorer for Windows,
Macintosh, or Linux, see Azure Storage Explorer.

Log in to Storage Explorer


On first launch, the Microsoft Azure Storage Explorer - Connect window is shown. Storage Explorer
provides several ways to connect to storage accounts. The following table lists the different ways you can
connect:

Task: Add an Azure Account
Purpose: Redirects you to your organization's sign-in page to authenticate you to Azure.

Task: Use a connection string or shared access signature URI
Purpose: Can be used to directly access a container or storage account with a SAS token or a shared connection string.

Task: Use a storage account name and key
Purpose: Use the storage account name and key of your storage account to connect to Azure storage.

Select Add an Azure Account and click Sign in... Follow the on-screen prompts to sign into your Azure
account.
After Storage Explorer finishes connecting, it displays the Explorer tab. This view gives you insight to all of your
Azure storage accounts as well as local storage configured through the Azurite storage emulator, Cosmos DB
accounts, or Azure Stack environments.
Create a container
To create a container, expand the storage account you created in the preceding step. Select Blob Containers,
right-click and select Create Blob Container. Enter the name for your blob container. See the Create a
container section for a list of rules and restrictions on naming blob containers. When complete, press Enter to
create the blob container. Once the blob container has been successfully created, it is displayed under the Blob
Containers folder for the selected storage account.

Upload blobs to the container


Blob storage supports block blobs, append blobs, and page blobs. VHD files used to back IaaS VMs are page
blobs. Append blobs are used for logging, such as when you want to write to a file and then keep adding more
information. Most files stored in Blob storage are block blobs.
On the container ribbon, select Upload. This operation gives you the option to upload a folder or a file.
Choose the files or folder to upload. Select the blob type. Acceptable choices are Append, Page, or Block blob.
If uploading a .vhd or .vhdx file, choose Upload .vhd/.vhdx files as page blobs (recommended).
In the Upload to folder (optional) field, enter a folder name to store the files or folders in a folder under the
container. If no folder is specified, the files are uploaded directly under the container.
When you select OK, the selected files are queued and uploaded one by one. When the upload is complete,
the results are shown in the Activities window.

View blobs in a container


In the Azure Storage Explorer application, select a container under a storage account. The main pane shows a
list of the blobs in the selected container.
Download blobs
To download blobs using Azure Storage Explorer , with a blob selected, select Download from the ribbon. A
file dialog opens and provides you the ability to enter a file name. Select Save to start the download of a blob to
the local location.

Manage snapshots
Azure Storage Explorer provides the capability to take and manage snapshots of your blobs. To take a snapshot
of a blob, right-click the blob and select Create Snapshot. To view snapshots for a blob, right-click the blob and
select Manage Snapshots. A list of the snapshots for the blob is shown in the current tab.

Generate a shared access signature


You can use Storage Explorer to generate a shared access signature (SAS). Right-click a storage account,
container, or blob and choose Get Shared Access Signature.... Choose the start and expiry time, and
permissions for the SAS URL and select Create. Storage Explorer generates the SAS token with the parameters
you specified and displays it for copying.
When you create a SAS for a storage account, Storage Explorer generates an account SAS. For more information
about the account SAS, see Create an account SAS.
When you create a SAS for a container or blob, Storage Explorer generates a service SAS. For more information
about the service SAS, see Create a service SAS.

NOTE
When you create a SAS with Storage Explorer, the SAS is always signed with the storage account key. Storage Explorer
does not currently support creating a user delegation SAS, which is a SAS that is signed with Azure AD credentials.

Next steps
In this quickstart, you learned how to transfer files between a local disk and Azure Blob storage using Azure
Storage Explorer . To learn more about working with Blob storage, continue to the Blob storage overview.
Introduction to Azure Blob Storage
Quickstart: Upload, download, and list blobs with
PowerShell
11/25/2021 • 5 minutes to read

Use the Azure PowerShell module to create and manage Azure resources. Creating or managing Azure
resources can be done from the PowerShell command line or in scripts. This guide describes using PowerShell
to transfer files between local disk and Azure Blob storage.

Prerequisites
To access Azure Storage, you'll need an Azure subscription. If you don't already have a subscription, then create a
free account before you begin.
You will also need the Storage Blob Data Contributor role to read, write, and delete Azure Storage containers and
blobs.

NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

This quickstart requires the Azure PowerShell module Az version 0.7 or later. Run
Get-InstalledModule -Name Az -AllVersions | select Name,Version to find the version. If you need to install or
upgrade, see Install Azure PowerShell module.

Sign in to Azure
Sign in to your Azure subscription with the Connect-AzAccount command and follow the on-screen directions.

Connect-AzAccount

If you don't know which location you want to use, you can list the available locations. Display the list of locations
by using the following code example and find the one you want to use. This example uses eastus. Store the
location in a variable and use the variable so you can change it in one place.

Get-AzLocation | select Location


$location = "eastus"

Create a resource group


Create an Azure resource group with New-AzResourceGroup. A resource group is a logical container into which
Azure resources are deployed and managed.

$resourceGroup = "myResourceGroup"
New-AzResourceGroup -Name $resourceGroup -Location $location
Create a storage account
Create a standard, general-purpose storage account with LRS replication by using New-AzStorageAccount. Next,
get the storage account context that defines the storage account you want to use. When acting on a storage
account, reference the context instead of repeatedly passing in the credentials. Use the following example to
create a storage account called mystorageaccount with locally redundant storage (LRS) and blob encryption
(enabled by default).

$storageAccount = New-AzStorageAccount -ResourceGroupName $resourceGroup `
  -Name "mystorageaccount" `
  -SkuName Standard_LRS `
  -Location $location

$ctx = $storageAccount.Context

Create a container
Blobs are always uploaded into a container. You can organize groups of blobs like the way you organize your
files on your computer in folders.
Set the container name, then create the container by using New-AzStorageContainer. Set the permissions to
blob to allow public access of the files. The container name in this example is quickstartblobs.

$containerName = "quickstartblobs"
New-AzStorageContainer -Name $containerName -Context $ctx -Permission blob

Upload blobs to the container


Blob storage supports block blobs, append blobs, and page blobs. VHD files that back IaaS VMs are page blobs.
Use append blobs for logging, such as when you want to write to a file and then keep adding more information.
Most files stored in Blob storage are block blobs.
To upload a file to a block blob, get a container reference, then get a reference to the block blob in that container.
Once you have the blob reference, you can upload data to it by using Set-AzStorageBlobContent. This operation
creates the blob if it doesn't exist, or overwrites the blob if it exists.
The following examples upload Image001.jpg and Image002.png from the D:\_TestImages folder on the local
disk to the container you created.
# upload a file to the default account (inferred) access tier
Set-AzStorageBlobContent -File "D:\_TestImages\Image000.jpg" `
-Container $containerName `
-Blob "Image001.jpg" `
-Context $ctx

# upload a file to the Hot access tier


Set-AzStorageBlobContent -File "D:\_TestImages\Image001.jpg" `
-Container $containerName `
-Blob "Image001.jpg" `
-Context $ctx `
-StandardBlobTier Hot

# upload another file to the Cool access tier


Set-AzStorageBlobContent -File "D:\_TestImages\Image002.png" `
-Container $containerName `
-Blob "Image002.png" `
-Context $ctx `
-StandardBlobTier Cool

# upload a file to a folder to the Archive access tier


Set-AzStorageBlobContent -File "D:\_TestImages\foldername\Image003.jpg" `
-Container $containerName `
-Blob "Foldername/Image003.jpg" `
-Context $ctx `
-StandardBlobTier Archive

Upload as many files as you like before continuing.

List the blobs in a container


Get a list of blobs in the container by using Get-AzStorageBlob. This example shows just the names of the blobs
uploaded.

Get-AzStorageBlob -Container $ContainerName -Context $ctx | select Name

Download blobs
Download the blobs to your local disk. For each blob you want to download, set the name and call Get-AzStorageBlobContent to download the blob.
This example downloads the blobs to D:\_TestImages\Downloads on the local disk.

# download first blob


Get-AzStorageBlobContent -Blob "Image001.jpg" `
-Container $containerName `
-Destination "D:\_TestImages\Downloads\" `
-Context $ctx

# download another blob


Get-AzStorageBlobContent -Blob "Image002.png" `
-Container $containerName `
-Destination "D:\_TestImages\Downloads\" `
-Context $ctx

Data transfer with AzCopy


The AzCopy command-line utility offers high-performance, scriptable data transfer for Azure Storage. You can
use AzCopy to transfer data to and from Blob storage and Azure Files. For more information about AzCopy v10,
the latest version of AzCopy, see Get started with AzCopy. To learn about using AzCopy v10 with Blob storage,
see Transfer data with AzCopy and Blob storage.
The following example uses AzCopy to upload a local file to a blob. Remember to replace the sample values with
your own values:

azcopy login
azcopy copy 'C:\myDirectory\myTextFile.txt' 'https://mystorageaccount.blob.core.windows.net/mycontainer/myTextFile.txt'

Clean up resources
Remove all of the assets you've created. The easiest way to remove the assets is to delete the resource group.
Removing the resource group also deletes all resources included within the group. In the following example,
removing the resource group removes the storage account and the resource group itself.

Remove-AzResourceGroup -Name $resourceGroup

Next steps
In this quickstart, you transferred files between a local file system and Azure Blob storage. To learn more about
working with Blob storage by using PowerShell, explore Azure PowerShell samples for Blob storage.
Azure PowerShell samples for Azure Blob storage
Microsoft Azure PowerShell Storage cmdlets reference
Storage PowerShell cmdlets
Microsoft Azure Storage Explorer
Microsoft Azure Storage Explorer is a free, standalone app from Microsoft that enables you to work visually
with Azure Storage data on Windows, macOS, and Linux.
Quickstart: Create, download, and list blobs with
Azure CLI
11/25/2021 • 5 minutes to read

The Azure CLI is Azure's command-line experience for managing Azure resources. You can use it in your browser
with Azure Cloud Shell. You can also install it on macOS, Linux, or Windows and run it from the command line.
In this quickstart, you learn to use the Azure CLI to upload and download data to and from Azure Blob storage.

Prerequisites
To access Azure Storage, you'll need an Azure subscription. If you don't already have a subscription, create a free
account before you begin.
All access to Azure Storage takes place through a storage account. For this quickstart, create a storage account
using the Azure portal, Azure PowerShell, or Azure CLI. For help creating a storage account, see Create a storage
account.
Prepare your environment for the Azure CLI
Use the Bash environment in Azure Cloud Shell.

If you prefer, install the Azure CLI to run CLI reference commands.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
This article requires version 2.0.46 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is
already installed.

Authorize access to Blob storage


You can authorize access to Blob storage from the Azure CLI either with Azure AD credentials or by using the
storage account access key. Using Azure AD credentials is recommended. This article shows how to authorize
Blob storage operations using Azure AD.
Azure CLI commands for data operations against Blob storage support the --auth-mode parameter, which
enables you to specify how to authorize a given operation. Set the --auth-mode parameter to login to
authorize with Azure AD credentials. For more information, see Authorize access to blob or queue data with
Azure CLI.
Only Blob storage data operations support the --auth-mode parameter. Management operations, such as
creating a resource group or storage account, automatically use Azure AD credentials for authorization.
Create a resource group
Create an Azure resource group with the az group create command. A resource group is a logical container into
which Azure resources are deployed and managed.
Remember to replace placeholder values in angle brackets with your own values:

az group create \
--name <resource-group> \
--location <location>

Create a storage account


Create a general-purpose storage account with the az storage account create command. The general-purpose
storage account can be used for all four services: blobs, files, tables, and queues.
Remember to replace placeholder values in angle brackets with your own values:

az storage account create \
    --name <storage-account> \
    --resource-group <resource-group> \
    --location <location> \
    --sku Standard_ZRS \
    --encryption-services blob

Create a container
Blobs are always uploaded into a container. You can organize groups of blobs in containers similar to the way
you organize your files on your computer in folders. Create a container for storing blobs with the az storage
container create command.
The following example uses your Azure AD account to authorize the operation to create the container. Before
you create the container, assign the Storage Blob Data Contributor role to yourself. Even if you are the account
owner, you need explicit permissions to perform data operations against the storage account. For more
information about assigning Azure roles, see Assign an Azure role for access to blob data.
Remember to replace placeholder values in angle brackets with your own values:

az ad signed-in-user show --query objectId -o tsv | az role assignment create \
    --role "Storage Blob Data Contributor" \
    --assignee @- \
    --scope "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"

az storage container create \
    --account-name <storage-account> \
    --name <container> \
    --auth-mode login

IMPORTANT
Azure role assignments may take a few minutes to propagate.

You can also use the storage account key to authorize the operation to create the container. For more
information about authorizing data operations with Azure CLI, see Authorize access to blob or queue data with
Azure CLI.

Upload a blob
Blob storage supports block blobs, append blobs, and page blobs. The examples in this quickstart show how to
work with block blobs.
First, create a file to upload to a block blob. If you're using Azure Cloud Shell, use the following command to
create a file:

vi helloworld

When the file opens, press Insert. Type Hello world, then press Esc. Next, type :x, then press Enter.
In this example, you upload a blob to the container you created in the last step using the az storage blob upload
command. It's not necessary to specify a file path since the file was created at the root directory. Remember to
replace placeholder values in angle brackets with your own values:

az storage blob upload \
    --account-name <storage-account> \
    --container-name <container> \
    --name helloworld \
    --file helloworld \
    --auth-mode login

This operation creates the blob if it doesn't already exist, and overwrites it if it does. Upload as many files as you
like before continuing.
To upload multiple files at the same time, you can use the az storage blob upload-batch command.
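For example, a command along the following lines uploads every file in a local folder to the container; the placeholder values are yours to substitute.

az storage blob upload-batch \
    --account-name <storage-account> \
    --destination <container> \
    --source <local-folder> \
    --auth-mode login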

List the blobs in a container


List the blobs in the container with the az storage blob list command. Remember to replace placeholder values
in angle brackets with your own values:

az storage blob list \
    --account-name <storage-account> \
    --container-name <container> \
    --output table \
    --auth-mode login

Download a blob
Use the az storage blob download command to download the blob you uploaded earlier. Remember to replace
placeholder values in angle brackets with your own values:

az storage blob download \
--account-name <storage-account> \
--container-name <container> \
--name helloworld \
--file ~/destination/path/for/file \
--auth-mode login

Data transfer with AzCopy


The AzCopy command-line utility offers high-performance, scriptable data transfer for Azure Storage. You can
use AzCopy to transfer data to and from Blob storage and Azure Files. For more information about AzCopy v10,
the latest version of AzCopy, see Get started with AzCopy. To learn about using AzCopy v10 with Blob storage,
see Transfer data with AzCopy and Blob storage.
The following example uses AzCopy to upload a local file to a blob. Remember to replace the sample values with
your own values:

azcopy login
azcopy copy 'C:\myDirectory\myTextFile.txt'
'https://mystorageaccount.blob.core.windows.net/mycontainer/myTextFile.txt'
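
AzCopy copies in both directions. As a sketch that reuses the sample values from the upload command above, the following command downloads the blob back to a local directory:

azcopy copy 'https://mystorageaccount.blob.core.windows.net/mycontainer/myTextFile.txt' 'C:\myDirectory\'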

Clean up resources
If you want to delete the resources you created as part of this quickstart, including the storage account, delete
the resource group by using the az group delete command. Remember to replace placeholder values in angle
brackets with your own values:

az group delete \
--name <resource-group> \
--no-wait

Next steps
In this quickstart, you learned how to transfer files between a local file system and a container in Azure Blob
storage. To learn more about working with Blob storage by using Azure CLI, explore Azure CLI samples for Blob
storage.
Azure CLI samples for Blob storage
Quickstart: Azure Blob Storage client library v12 for
.NET
11/25/2021 • 7 minutes to read

Get started with the Azure Blob Storage client library v12 for .NET. Azure Blob Storage is Microsoft's object
storage solution for the cloud. Follow steps to install the package and try out example code for basic tasks. Blob
storage is optimized for storing massive amounts of unstructured data.
The examples in this quickstart show you how to use the Azure Blob Storage client library v12 for .NET to:
Get the connection string
Create a container
Upload a blob to a container
List blobs in a container
Download a blob
Delete a container
Additional resources:
API reference documentation
Library source code
Package (NuGet)
Samples

Prerequisites
Azure subscription - create one for free
Azure storage account - create a storage account
Current .NET Core SDK for your operating system. Be sure to get the SDK and not the runtime.

Setting up
This section walks you through preparing a project to work with the Azure Blob Storage client library v12 for
.NET.
Create the project
Create a .NET Core application named BlobQuickstartV12.
1. In a console window (such as cmd, PowerShell, or Bash), use the dotnet new command to create a new
console app with the name BlobQuickstartV12. This command creates a simple "Hello World" C# project
with a single source file: Program.cs.

dotnet new console -n BlobQuickstartV12

2. Switch to the newly created BlobQuickstartV12 directory.

cd BlobQuickstartV12
3. Inside the BlobQuickstartV12 directory, create another directory called data. This is where the blob data
files will be created and stored.

mkdir data

Install the package


While still in the application directory, install the Azure Blob Storage client library for .NET package by using the
dotnet add package command.

dotnet add package Azure.Storage.Blobs

Set up the app framework


From the project directory:
1. Open the Program.cs file in your editor.
2. Remove the Console.WriteLine("Hello World!"); statement.
3. Add using directives.
4. Update the Main method declaration to support async.
Here's the code:

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using System;
using System.IO;
using System.Threading.Tasks;

namespace BlobQuickstartV12
{
    class Program
    {
        static async Task Main()
        {
        }
    }
}

Copy your credentials from the Azure portal


When the sample application makes a request to Azure Storage, it must be authorized. To authorize a request,
add your storage account credentials to the application as a connection string. To view your storage account
credentials, follow these steps:
1. Sign in to the Azure portal.
2. Locate your storage account.
3. In the storage account menu pane, under Security + networking, select Access keys. Here, you can
view the account access keys and the complete connection string for each key.
4. In the Access keys pane, select Show keys.
5. In the key1 section, locate the Connection string value. Select the Copy to clipboard icon to copy the
connection string. You will add the connection string value to an environment variable in the next section.

Configure your storage connection string


After you copy the connection string, write it to a new environment variable on the local machine running the
application. To set the environment variable, open a console window, and follow the instructions for your
operating system. Replace <yourconnectionstring> with your actual connection string.
Windows

setx AZURE_STORAGE_CONNECTION_STRING "<yourconnectionstring>"

After you add the environment variable in Windows, you must start a new instance of the command window.
Linux

export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"

macOS

export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"

Restart programs
After you add the environment variable, restart any running programs that will need to read the environment
variable. For example, restart your development environment or editor before you continue.

Object model
Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data
that does not adhere to a particular data model or definition, such as text or binary data. Blob storage offers
three types of resources:
The storage account
A container in the storage account
A blob in the container
The following diagram shows the relationship between these resources.
Use the following .NET classes to interact with these resources:
BlobServiceClient: The BlobServiceClient class allows you to manipulate Azure Storage resources and blob
containers.
BlobContainerClient: The BlobContainerClient class allows you to manipulate Azure Storage containers and
their blobs.
BlobClient: The BlobClient class allows you to manipulate Azure Storage blobs.

Code examples
The sample code snippets in the following sections show you how to perform basic data operations with the
Azure Blob Storage client library for .NET.
Get the connection string
The code below retrieves the connection string for the storage account from the environment variable created in
the Configure your storage connection string section.
Add this code inside the Main method:

Console.WriteLine("Azure Blob Storage v12 - .NET quickstart sample\n");

// Retrieve the connection string for use with the application. The storage
// connection string is stored in an environment variable on the machine
// running the application called AZURE_STORAGE_CONNECTION_STRING. If the
// environment variable is created after the application is launched in a
// console or with Visual Studio, the shell or application needs to be closed
// and reloaded to take the environment variable into account.
string connectionString = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");

Create a container
Decide on a name for the new container. The code below appends a GUID value to the container name to ensure
that it is unique.

IMPORTANT
Container names must be lowercase. For more information about naming containers and blobs, see Naming and
Referencing Containers, Blobs, and Metadata.

Create an instance of the BlobServiceClient class. Then, call the CreateBlobContainerAsync method to create the
container in your storage account.
Add this code to the end of the Main method:
// Create a BlobServiceClient object which will be used to create a container client
BlobServiceClient blobServiceClient = new BlobServiceClient(connectionString);

//Create a unique name for the container


string containerName = "quickstartblobs" + Guid.NewGuid().ToString();

// Create the container and return a container client object


BlobContainerClient containerClient = await blobServiceClient.CreateBlobContainerAsync(containerName);

Upload a blob to a container


The following code snippet:
1. Creates a text file in the local data directory.
2. Gets a reference to a BlobClient object by calling the GetBlobClient method on the container from the Create
a container section.
3. Uploads the local text file to the blob by calling the UploadAsync method. This method creates the blob if it
doesn't already exist, and overwrites it if it does.
Add this code to the end of the Main method:

// Create a local file in the ./data/ directory for uploading and downloading
string localPath = "./data/";
string fileName = "quickstart" + Guid.NewGuid().ToString() + ".txt";
string localFilePath = Path.Combine(localPath, fileName);

// Write text to the file


await File.WriteAllTextAsync(localFilePath, "Hello, World!");

// Get a reference to a blob


BlobClient blobClient = containerClient.GetBlobClient(fileName);

Console.WriteLine("Uploading to Blob storage as blob:\n\t {0}\n", blobClient.Uri);

// Upload data from the local file


await blobClient.UploadAsync(localFilePath, true);

List blobs in a container


List the blobs in the container by calling the GetBlobsAsync method. In this case, only one blob has been added
to the container, so the listing operation returns just that one blob.
Add this code to the end of the Main method:

Console.WriteLine("Listing blobs...");

// List all blobs in the container


await foreach (BlobItem blobItem in containerClient.GetBlobsAsync())
{
Console.WriteLine("\t" + blobItem.Name);
}

Download a blob
Download the previously created blob by calling the DownloadToAsync method. The example code adds a suffix
of "DOWNLOADED" to the file name so that you can see both files in local file system.
Add this code to the end of the Main method:
// Download the blob to a local file
// Append the string "DOWNLOADED" before the .txt extension
// so you can compare the files in the data directory
string downloadFilePath = localFilePath.Replace(".txt", "DOWNLOADED.txt");

Console.WriteLine("\nDownloading blob to\n\t{0}\n", downloadFilePath);

// Download the blob's contents and save it to a file


await blobClient.DownloadToAsync(downloadFilePath);

Delete a container
The following code cleans up the resources the app created by deleting the entire container by using
DeleteAsync. It also deletes the local files created by the app.
The app pauses for user input by calling Console.ReadLine before it deletes the blob, container, and local files.
This is a good chance to verify that the resources were actually created correctly, before they are deleted.
Add this code to the end of the Main method:

// Clean up
Console.Write("Press any key to begin clean up");
Console.ReadLine();

Console.WriteLine("Deleting blob container...");


await containerClient.DeleteAsync();

Console.WriteLine("Deleting the local source and downloaded files...");


File.Delete(localFilePath);
File.Delete(downloadFilePath);

Console.WriteLine("Done");

Run the code


This app creates a test file in your local data folder and uploads it to Blob storage. The example then lists the
blobs in the container and downloads the file with a new name so that you can compare the old and new files.
Navigate to your application directory, then build and run the application.

dotnet build

dotnet run

The output of the app is similar to the following example:


Azure Blob Storage v12 - .NET quickstart sample

Uploading to Blob storage as blob:


https://mystorageacct.blob.core.windows.net/quickstartblobs60c70d78-8d93-43ae-954d-
8322058cfd64/quickstart2fe6c5b4-7918-46cb-96f4-8c4c5cb2fd31.txt

Listing blobs...
quickstart2fe6c5b4-7918-46cb-96f4-8c4c5cb2fd31.txt

Downloading blob to
./data/quickstart2fe6c5b4-7918-46cb-96f4-8c4c5cb2fd31DOWNLOADED.txt

Press any key to begin clean up


Deleting blob container...
Deleting the local source and downloaded files...
Done

Before you begin the clean up process, check your data folder for the two files. You can open them and observe
that they are identical.
After you've verified the files, press the Enter key to delete the test files and finish the demo.

Next steps
In this quickstart, you learned how to upload, download, and list blobs using .NET.
To see Blob storage sample apps, continue to:
Azure Blob Storage SDK v12 .NET samples
For tutorials, samples, quick starts and other documentation, visit Azure for .NET and .NET Core developers.
To learn more about .NET Core, see Get started with .NET in 10 minutes.
Quickstart: Azure Blob storage client library v11 for
.NET
11/25/2021 • 10 minutes to read

Get started with the Azure Blob Storage client library v11 for .NET. Azure Blob Storage is Microsoft's object
storage solution for the cloud. Follow steps to install the package and try out example code for basic tasks. Blob
storage is optimized for storing massive amounts of unstructured data.

NOTE
This quickstart uses a legacy version of the Azure Blob storage client library. To get started with the latest version, see
Quickstart: Azure Blob storage client library v12 for .NET.

Use the Azure Blob Storage client library for .NET to:
Create a container
Set permissions on a container
Create a blob in Azure Storage
Download the blob to your local computer
List all of the blobs in a container
Delete a container
Additional resources:
API reference documentation
Library source code
Package (NuGet)
Samples

Prerequisites
Azure subscription - create one for free
Azure Storage account - create a storage account
Current .NET Core SDK for your operating system. Be sure to get the SDK and not the runtime.

Setting up
This section walks you through preparing a project to work with the Azure Blob Storage client library for .NET.
Create the project
First, create a .NET Core application named blob-quickstart.
1. In a console window (such as cmd, PowerShell, or Bash), use the dotnet new command to create a new
console app with the name blob-quickstart. This command creates a simple "Hello World" C# project with
a single source file: Program.cs.

dotnet new console -n blob-quickstart


2. Switch to the newly created blob-quickstart folder and build the app to verify that all is well.

cd blob-quickstart

dotnet build

The expected output from the build should look something like this:

C:\QuickStarts\blob-quickstart> dotnet build


Microsoft (R) Build Engine version 16.0.450+ga8dc7f1d34 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

Restore completed in 44.31 ms for C:\QuickStarts\blob-quickstart\blob-quickstart.csproj.


blob-quickstart -> C:\QuickStarts\blob-quickstart\bin\Debug\netcoreapp2.1\blob-quickstart.dll

Build succeeded.
0 Warning(s)
0 Error(s)

Time Elapsed 00:00:03.08

Install the package


While still in the application directory, install the Azure Blob Storage client library for .NET package by using the
dotnet add package command.

dotnet add package Microsoft.Azure.Storage.Blob

Set up the app framework


From the project directory:
1. Open the Program.cs file in your editor
2. Remove the Console.WriteLine statement
3. Add using directives
4. Create a ProcessAsync method where the main code for the example will reside
5. Asynchronously call the ProcessAsync method from Main
Here's the code:
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;

namespace blob_quickstart
{
    class Program
    {
        public static async Task Main()
        {
            Console.WriteLine("Azure Blob Storage - .NET quickstart sample\n");

            await ProcessAsync();

            Console.WriteLine("Press any key to exit the sample application.");
            Console.ReadLine();
        }

        private static async Task ProcessAsync()
        {
        }
    }
}

Copy your credentials from the Azure portal


When the sample application makes a request to Azure Storage, it must be authorized. To authorize a request,
add your storage account credentials to the application as a connection string. View your storage account
credentials by following these steps:
1. Navigate to the Azure portal.
2. Locate your storage account.
3. In the Settings section of the storage account overview, select Access keys. Here, you can view your
account access keys and the complete connection string for each key.
4. Find the Connection string value under key1, and select the Copy button to copy the connection
string. You will add the connection string value to an environment variable in the next step.

Configure your storage connection string


After you have copied your connection string, write it to a new environment variable on the local machine
running the application. To set the environment variable, open a console window, and follow the instructions for
your operating system. Replace <yourconnectionstring> with your actual connection string.
Windows

setx AZURE_STORAGE_CONNECTION_STRING "<yourconnectionstring>"

After you add the environment variable in Windows, you must start a new instance of the command window.
Linux
export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"

macOS

export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"

After you add the environment variable, restart any running programs that will need to read the environment
variable. For example, restart your development environment or editor before continuing.

Object model
Azure Blob storage is optimized for storing massive amounts of unstructured data. Unstructured data is data
that does not adhere to a particular data model or definition, such as text or binary data. Blob storage offers
three types of resources:
The storage account.
A container in the storage account
A blob in a container
The following diagram shows the relationship between these resources.

Use the following .NET classes to interact with these resources:


CloudStorageAccount: The CloudStorageAccount class represents your Azure storage account. Use this class
to authorize access to Blob storage using your account access keys.
CloudBlobClient: The CloudBlobClient class provides a point of access to the Blob service in your code.
CloudBlobContainer: The CloudBlobContainer class represents a blob container in your code.
CloudBlockBlob: The CloudBlockBlob object represents a block blob in your code. Block blobs are made up of
blocks of data that can be managed individually.

Code examples
These example code snippets show you how to perform the following with the Azure Blob storage client library
for .NET:
Authenticate the client
Create a container
Set permissions on a container
Upload blobs to a container
List the blobs in a container
Download blobs
Delete a container
Authenticate the client
The code below checks that the environment variable contains a connection string that can be parsed to create a
CloudStorageAccount object pointing to the storage account. To check that the connection string is valid, use the
TryParse method. If TryParse is successful, it initializes the storageAccount variable and returns true .
Add this code inside the ProcessAsync method:

// Retrieve the connection string for use with the application. The storage
// connection string is stored in an environment variable on the machine
// running the application called AZURE_STORAGE_CONNECTION_STRING. If the
// environment variable is created after the application is launched in a
// console or with Visual Studio, the shell or application needs to be closed
// and reloaded to take the environment variable into account.
string storageConnectionString = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");

// Check whether the connection string can be parsed.


CloudStorageAccount storageAccount;
if (CloudStorageAccount.TryParse(storageConnectionString, out storageAccount))
{
    // If the connection string is valid, proceed with operations against Blob
    // storage here.
    // ADD OTHER OPERATIONS HERE
}
else
{
    // Otherwise, let the user know that they need to define the environment variable.
    Console.WriteLine(
        "A connection string has not been defined in the system environment variables. " +
        "Add an environment variable named 'AZURE_STORAGE_CONNECTION_STRING' with your storage " +
        "connection string as a value.");
    Console.WriteLine("Press any key to exit the application.");
    Console.ReadLine();
}

NOTE
To perform the rest of the operations in this article, replace // ADD OTHER OPERATIONS HERE in the code above with the
code snippets in the following sections.

Create a container
To create the container, first create an instance of the CloudBlobClient object, which points to Blob storage in
your storage account. Next, create an instance of the CloudBlobContainer object, then create the container.
In this case, the code calls the CreateAsync method to create the container. A GUID value is appended to the
container name to ensure that it is unique. In a production environment, it's often preferable to use the
CreateIfNotExistsAsync method to create a container only if it does not already exist.

IMPORTANT
Container names must be lowercase. For more information about naming containers and blobs, see Naming and
Referencing Containers, Blobs, and Metadata.
// Create the CloudBlobClient that represents the
// Blob storage endpoint for the storage account.
CloudBlobClient cloudBlobClient = storageAccount.CreateCloudBlobClient();

// Create a container called 'quickstartblobs' and


// append a GUID value to it to make the name unique.
CloudBlobContainer cloudBlobContainer =
cloudBlobClient.GetContainerReference("quickstartblobs" +
Guid.NewGuid().ToString());
await cloudBlobContainer.CreateAsync();

Set permissions on a container


Set permissions on the container so that any blobs in the container are public. If a blob is public, it can be
accessed anonymously by any client.

// Set the permissions so the blobs are public.


BlobContainerPermissions permissions = new BlobContainerPermissions
{
PublicAccess = BlobContainerPublicAccessType.Blob
};
await cloudBlobContainer.SetPermissionsAsync(permissions);

Upload blobs to a container


The following code snippet gets a reference to a CloudBlockBlob object by calling the GetBlockBlobReference
method on the container created in the previous section. It then uploads the selected local file to the blob by
calling the UploadFromFileAsync method. This method creates the blob if it doesn't already exist, and overwrites
it if it does.

// Create a file in your local MyDocuments folder to upload to a blob.


string localPath = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
string localFileName = "QuickStart_" + Guid.NewGuid().ToString() + ".txt";
string sourceFile = Path.Combine(localPath, localFileName);
// Write text to the file.
File.WriteAllText(sourceFile, "Hello, World!");

Console.WriteLine("Temp file = {0}", sourceFile);


Console.WriteLine("Uploading to Blob storage as blob '{0}'", localFileName);

// Get a reference to the blob address, then upload the file to the blob.
// Use the value of localFileName for the blob name.
CloudBlockBlob cloudBlockBlob = cloudBlobContainer.GetBlockBlobReference(localFileName);
await cloudBlockBlob.UploadFromFileAsync(sourceFile);

List the blobs in a container


List the blobs in the container by using the ListBlobsSegmentedAsync method. In this case, only one blob has
been added to the container, so the listing operation returns just that one blob.
If there are too many blobs to return in one call (by default, more than 5000), the ListBlobsSegmentedAsync
method returns a segment of the total result set and a continuation token. To retrieve the next segment of blobs,
pass in the continuation token returned by the previous call, and repeat until the continuation token is
null. A null continuation token indicates that all of the blobs have been retrieved. The example code shows how to
use the continuation token as a best practice.
// List the blobs in the container.
Console.WriteLine("List blobs in container.");
BlobContinuationToken blobContinuationToken = null;
do
{
    var results = await cloudBlobContainer.ListBlobsSegmentedAsync(null, blobContinuationToken);
    // Get the value of the continuation token returned by the listing call.
    blobContinuationToken = results.ContinuationToken;
    foreach (IListBlobItem item in results.Results)
    {
        Console.WriteLine(item.Uri);
    }
} while (blobContinuationToken != null); // Loop while the continuation token is not null.

Download blobs
Download the blob created previously to your local file system by using the DownloadToFileAsync method. The
example code adds a suffix of "_DOWNLOADED" to the blob name so that you can see both files in local file
system.

// Download the blob to a local file, using the reference created earlier.
// Append the string "_DOWNLOADED" before the .txt extension so that you
// can see both files in MyDocuments.
string destinationFile = sourceFile.Replace(".txt", "_DOWNLOADED.txt");
Console.WriteLine("Downloading blob to {0}", destinationFile);
await cloudBlockBlob.DownloadToFileAsync(destinationFile, FileMode.Create);

Delete a container
The following code cleans up the resources the app created by deleting the entire container using
CloudBlobContainer.DeleteIfExistsAsync. You can also delete the local files if you like.

Console.WriteLine("Press the 'Enter' key to delete the example files, " +


"example container, and exit the application.");
Console.ReadLine();
// Clean up resources. This includes the container and the two temp files.
Console.WriteLine("Deleting the container");
if (cloudBlobContainer != null)
{
await cloudBlobContainer.DeleteIfExistsAsync();
}
Console.WriteLine("Deleting the source, and downloaded files");
File.Delete(sourceFile);
File.Delete(destinationFile);

Run the code


This app creates a test file in your local MyDocuments folder and uploads it to Blob storage. The example then
lists the blobs in the container and downloads the file with a new name so that you can compare the old and
new files.
Navigate to your application directory, then build and run the application.

dotnet build

dotnet run
The output of the app is similar to the following example:

Azure Blob storage - .NET Quickstart example

Created container 'quickstartblobs33c90d2a-eabd-4236-958b-5cc5949e731f'

Temp file = C:\Users\myusername\Documents\QuickStart_c5e7f24f-a7f8-4926-a9da-96


97c748f4db.txt
Uploading to Blob storage as blob 'QuickStart_c5e7f24f-a7f8-4926-a9da-9697c748f
4db.txt'

Listing blobs in container.


https://storagesamples.blob.core.windows.net/quickstartblobs33c90d2a-eabd-4236-
958b-5cc5949e731f/QuickStart_c5e7f24f-a7f8-4926-a9da-9697c748f4db.txt

Downloading blob to C:\Users\myusername\Documents\QuickStart_c5e7f24f-a7f8-4926


-a9da-9697c748f4db_DOWNLOADED.txt

Press any key to delete the example files and example container.

When you press the Enter key, the application deletes the storage container and the files. Before you press it,
check your MyDocuments folder for the two files. You can open them and observe that they are identical. You can
also copy the blob's URL from the console window and paste it into a browser to view the contents of the blob.
After you've verified the files, press the Enter key to delete the test files and finish the demo.

Next steps
In this quickstart, you learned how to upload, download, and list blobs using .NET.
To learn how to create a web app that uploads an image to Blob storage, continue to:
Upload and process an image
To learn more about .NET Core, see Get started with .NET in 10 minutes.
To explore a sample application that you can deploy from Visual Studio for Windows, see the .NET Photo
Gallery Web Application Sample with Azure Blob Storage.
Quickstart: Manage blobs with Java v12 SDK
11/25/2021 • 8 minutes to read

In this quickstart, you learn to manage blobs by using Java. Blobs are objects that can hold large amounts of text
or binary data, including images, documents, streaming media, and archive data. You'll upload, download, and
list blobs, and you'll create and delete containers.
Additional resources:
API reference documentation
Library source code
Package (Maven)
Samples

Prerequisites
An Azure account with an active subscription. Create an account for free.
An Azure Storage account. Create a storage account.
Java Development Kit (JDK) version 8 or above.
Apache Maven.

Setting up
This section walks you through preparing a project to work with the Azure Blob Storage client library v12 for
Java.
Create the project
Create a Java application named blob-quickstart-v12.
1. In a console window (such as cmd, PowerShell, or Bash), use Maven to create a new console app with the
name blob-quickstart-v12. Type the following mvn command to create a "Hello world!" Java project.
PowerShell
Bash

mvn archetype:generate `
--define interactiveMode=n `
--define groupId=com.blobs.quickstart `
--define artifactId=blob-quickstart-v12 `
--define archetypeArtifactId=maven-archetype-quickstart `
--define archetypeVersion=1.4

2. The output from generating the project should look something like this:
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------< org.apache.maven:standalone-pom >-------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] --------------------------------[ pom ]---------------------------------
[INFO]
[INFO] >>> maven-archetype-plugin:3.1.2:generate (default-cli) > generate-sources @ standalone-pom
>>>
[INFO]
[INFO] <<< maven-archetype-plugin:3.1.2:generate (default-cli) < generate-sources @ standalone-pom
<<<
[INFO]
[INFO]
[INFO] --- maven-archetype-plugin:3.1.2:generate (default-cli) @ standalone-pom ---
[INFO] Generating project in Batch mode
[INFO] ----------------------------------------------------------------------------
[INFO] Using following parameters for creating project from Archetype: maven-archetype-quickstart:1.4
[INFO] ----------------------------------------------------------------------------
[INFO] Parameter: groupId, Value: com.blobs.quickstart
[INFO] Parameter: artifactId, Value: blob-quickstart-v12
[INFO] Parameter: version, Value: 1.0-SNAPSHOT
[INFO] Parameter: package, Value: com.blobs.quickstart
[INFO] Parameter: packageInPathFormat, Value: com/blobs/quickstart
[INFO] Parameter: version, Value: 1.0-SNAPSHOT
[INFO] Parameter: package, Value: com.blobs.quickstart
[INFO] Parameter: groupId, Value: com.blobs.quickstart
[INFO] Parameter: artifactId, Value: blob-quickstart-v12
[INFO] Project created from Archetype in dir: C:\QuickStarts\blob-quickstart-v12
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 7.056 s
[INFO] Finished at: 2019-10-23T11:09:21-07:00
[INFO] ------------------------------------------------------------------------

3. Switch to the newly created blob-quickstart-v12 folder.

cd blob-quickstart-v12

3. Inside the blob-quickstart-v12 directory, create another directory called data. This is where the blob data
files will be created and stored.

mkdir data

Install the package


Open the pom.xml file in your text editor. Add the following dependency element to the group of dependencies.

<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-storage-blob</artifactId>
<version>12.13.0</version>
</dependency>

Set up the app framework


From the project directory:
1. Navigate to the /src/main/java/com/blobs/quickstart directory
2. Open the App.java file in your editor
3. Delete the System.out.println("Hello world!"); statement
4. Add import directives
Here's the code:

package com.blobs.quickstart;

/**
* Azure blob storage v12 SDK quickstart
*/
import com.azure.storage.blob.*;
import com.azure.storage.blob.models.*;
import java.io.*;

public class App
{
    public static void main( String[] args ) throws IOException
    {
    }
}

Copy your credentials from the Azure portal


When the sample application makes a request to Azure Storage, it must be authorized. To authorize a request,
add your storage account credentials to the application as a connection string. To view your storage account
credentials, follow these steps:
1. Sign in to the Azure portal.
2. Locate your storage account.
3. In the storage account menu pane, under Security + networking, select Access keys. Here, you can
view the account access keys and the complete connection string for each key.

4. In the Access keys pane, select Show keys.


5. In the key1 section, locate the Connection string value. Select the Copy to clipboard icon to copy the
connection string. You will add the connection string value to an environment variable in the next section.

Configure your storage connection string


After you copy the connection string, write it to a new environment variable on the local machine running the
application. To set the environment variable, open a console window, and follow the instructions for your
operating system. Replace <yourconnectionstring> with your actual connection string.
Windows

setx AZURE_STORAGE_CONNECTION_STRING "<yourconnectionstring>"

After you add the environment variable in Windows, you must start a new instance of the command window.
Linux

export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"

macOS

export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"

Restart programs
After you add the environment variable, restart any running programs that will need to read the environment
variable. For example, restart your development environment or editor before you continue.

Object model
Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data
that does not adhere to a particular data model or definition, such as text or binary data. Blob storage offers
three types of resources:
The storage account
A container in the storage account
A blob in the container
The following diagram shows the relationship between these resources.

Use the following Java classes to interact with these resources:


BlobServiceClient: The BlobServiceClient class allows you to manipulate Azure Storage resources and blob
containers. The storage account provides the top-level namespace for the Blob service.
BlobServiceClientBuilder: The BlobServiceClientBuilder class provides a fluent builder API to help aid the
configuration and instantiation of BlobServiceClient objects.
BlobContainerClient: The BlobContainerClient class allows you to manipulate Azure Storage containers and
their blobs.
BlobClient: The BlobClient class allows you to manipulate Azure Storage blobs.
BlobItem: The BlobItem class represents individual blobs returned from a call to listBlobs.

Code examples
These example code snippets show you how to perform the following with the Azure Blob Storage client library
for Java:
Get the connection string
Create a container
Upload blobs to a container
List the blobs in a container
Download blobs
Delete a container
Get the connection string
The code below retrieves the connection string for the storage account from the environment variable created in
the Configure your storage connection string section.
Add this code inside the Main method:

System.out.println("Azure Blob Storage v12 - Java quickstart sample\n");

// Retrieve the connection string for use with the application. The storage
// connection string is stored in an environment variable on the machine
// running the application called AZURE_STORAGE_CONNECTION_STRING. If the environment variable
// is created after the application is launched in a console or with
// Visual Studio, the shell or application needs to be closed and reloaded
// to take the environment variable into account.
String connectStr = System.getenv("AZURE_STORAGE_CONNECTION_STRING");

Create a container
Decide on a name for the new container. The code below appends a UUID value to the container name to ensure
that it is unique.

IMPORTANT
Container names must be lowercase. For more information about naming containers and blobs, see Naming and
Referencing Containers, Blobs, and Metadata.

Next, create an instance of the BlobServiceClient class, then call the createBlobContainer method to create the
container in your storage account.
Add this code to the end of the Main method:

// Create a BlobServiceClient object which will be used to create a container client


BlobServiceClient blobServiceClient = new
BlobServiceClientBuilder().connectionString(connectStr).buildClient();

//Create a unique name for the container


String containerName = "quickstartblobs" + java.util.UUID.randomUUID();

// Create the container and return a container client object


BlobContainerClient containerClient = blobServiceClient.createBlobContainer(containerName);

Upload blobs to a container


The following code snippet:
1. Creates a text file in the local data directory.
2. Gets a reference to a BlobClient object by calling the getBlobClient method on the container from the Create
a container section.
3. Uploads the local text file to the blob by calling the uploadFromFile method. This method creates the blob if it
doesn't already exist, but will not overwrite it if it does.
Add this code to the end of the Main method:

// Create a local file in the ./data/ directory for uploading and downloading
String localPath = "./data/";
String fileName = "quickstart" + java.util.UUID.randomUUID() + ".txt";
File localFile = new File(localPath + fileName);

// Write text to the file


FileWriter writer = new FileWriter(localPath + fileName, true);
writer.write("Hello, World!");
writer.close();

// Get a reference to a blob


BlobClient blobClient = containerClient.getBlobClient(fileName);

System.out.println("\nUploading to Blob storage as blob:\n\t" + blobClient.getBlobUrl());

// Upload the blob


blobClient.uploadFromFile(localPath + fileName);

List the blobs in a container


List the blobs in the container by calling the listBlobs method. In this case, only one blob has been added to the
container, so the listing operation returns just that one blob.
Add this code to the end of the Main method:

System.out.println("\nListing blobs...");

// List the blob(s) in the container.


for (BlobItem blobItem : containerClient.listBlobs()) {
System.out.println("\t" + blobItem.getName());
}

Download blobs
Download the previously created blob by calling the downloadToFile method. The example code adds a suffix of
"DOWNLOAD" to the file name so that you can see both files in local file system.
Add this code to the end of the Main method:

// Download the blob to a local file


// Append the string "DOWNLOAD" before the .txt extension so that you can see both files.
String downloadFileName = fileName.replace(".txt", "DOWNLOAD.txt");
File downloadedFile = new File(localPath + downloadFileName);

System.out.println("\nDownloading blob to\n\t " + localPath + downloadFileName);

blobClient.downloadToFile(localPath + downloadFileName);

Delete a container
The following code cleans up the resources the app created by removing the entire container using the delete
method. It also deletes the local files created by the app.
The app pauses for user input by calling System.console().readLine() before it deletes the blob, container, and
local files. This is a good chance to verify that the resources were created correctly, before they are deleted.
Add this code to the end of the Main method:
// Clean up
System.out.println("\nPress the Enter key to begin clean up");
System.console().readLine();

System.out.println("Deleting blob container...");


containerClient.delete();

System.out.println("Deleting the local source and downloaded files...");


localFile.delete();
downloadedFile.delete();

System.out.println("Done");

Run the code


This app creates a test file in your local folder and uploads it to Blob storage. The example then lists the blobs in
the container and downloads the file with a new name so that you can compare the old and new files.
Navigate to the directory containing the pom.xml file and compile the project by using the following mvn
command.

mvn compile

Then, build the package.

mvn package

Run the following mvn command to execute the app.

mvn exec:java -Dexec.mainClass="com.blobs.quickstart.App" -Dexec.cleanupDaemonThreads=false

The output of the app is similar to the following example:

Azure Blob Storage v12 - Java quickstart sample

Uploading to Blob storage as blob:


https://mystorageacct.blob.core.windows.net/quickstartblobsf9aa68a5-260e-47e6-bea2-
2dcfcfa1fd9a/quickstarta9c3a53e-ae9d-4863-8b34-f3d807992d65.txt

Listing blobs...
quickstarta9c3a53e-ae9d-4863-8b34-f3d807992d65.txt

Downloading blob to
./data/quickstarta9c3a53e-ae9d-4863-8b34-f3d807992d65DOWNLOAD.txt

Press the Enter key to begin clean up

Deleting blob container...


Deleting the local source and downloaded files...
Done

Before you begin the clean up process, check your data folder for the two files. You can open them and observe
that they are identical.
After you've verified the files, press the Enter key to delete the test files and finish the demo.
Next steps
In this quickstart, you learned how to upload, download, and list blobs using Java.
To see Blob storage sample apps, continue to:
Azure Blob Storage SDK v12 Java samples
To learn more, see the Azure SDK for Java.
For tutorials, samples, quickstarts, and other documentation, visit Azure for Java cloud developers.
Quickstart: Manage blobs with Java v8 SDK
11/25/2021 • 8 minutes to read

In this quickstart, you learn to manage blobs by using Java. Blobs are objects that can hold large amounts of text
or binary data, including images, documents, streaming media, and archive data. You'll upload, download, and
list blobs. You'll also create, set permissions on, and delete containers.

NOTE
This quickstart uses a legacy version of the Azure Blob storage client library. To get started with the latest version, see
Quickstart: Manage blobs with Java v12 SDK.

Prerequisites
An Azure account with an active subscription. Create an account for free.
An Azure Storage account. Create a storage account.
An IDE that has Maven integration. This guide uses Eclipse with the "Eclipse IDE for Java Developers"
configuration.

Download the sample application


The sample application is a basic console application.
Use git to download a copy of the application to your development environment.

git clone https://github.com/Azure-Samples/storage-blobs-java-quickstart.git

This command clones the repository to your local git folder. To open the project, launch Eclipse and close the
Welcome screen. Select File then Open Projects from File System. Make sure Detect and configure
project natures is checked. Select Directory then navigate to where you stored the cloned repository. Inside
the cloned repository, select the blobAzureApp folder. Make sure the blobAzureApp project appears as an
Eclipse project, then select Finish.
Once the project completes importing, open AzureApp.java (located in blobQuickstart.blobAzureApp
inside of src/main/java), and replace the accountname and accountkey inside of the storageConnectionString
string. Then run the application. Specific instructions for completing these tasks are described in the following
sections.

Copy your credentials from the Azure portal


The sample application needs to authenticate access to your storage account. To authenticate, add your storage
account credentials to the application as a connection string. View your storage account credentials by following
these steps:
1. Sign in to the Azure portal.
2. Locate your storage account.
3. In the Settings section of the storage account overview, select Access keys. Here, you can view your
account access keys and the complete connection string for each key.
4. Find the Connection string value under key1, and select the Copy button to copy the connection
string. You will add the connection string value to an environment variable in the next step.

Configure your storage connection string


In the application, you must provide the connection string for your storage account. Open the AzureApp.Java
file. Find the storageConnectionString variable and paste the connection string value that you copied in the
previous section. Your storageConnectionString variable should look similar to the following code example:

public static final String storageConnectionString =


"DefaultEndpointsProtocol=https;" +
"AccountName=<account-name>;" +
"AccountKey=<account-key>";

Run the sample


This sample application creates a test file in your default directory (C:\Users\<user>\AppData\Local\Temp, for
Windows users), uploads it to Blob storage, lists the blobs in the container, then downloads the file with a new
name so you can compare the old and new files.
Run the sample using Maven at the command line. Open a shell and navigate to blobAzureApp inside of your
cloned directory. Then enter mvn compile exec:java.
The following example shows the output if you were to run the application on Windows.

Azure Blob storage quick start sample


Creating container: quickstartcontainer
Creating a sample file at: C:\Users\<user>\AppData\Local\Temp\sampleFile514658495642546986.txt
Uploading the sample file
URI of blob is:
https://myexamplesacct.blob.core.windows.net/quickstartcontainer/sampleFile514658495642546986.txt
The program has completed successfully.
Press the 'Enter' key while in the console to delete the sample files, example container, and exit the
application.

Deleting the container


Deleting the source, and downloaded files

Before you continue, check your default directory (C:\Users\<user>\AppData\Local\Temp, for Windows users)
for the sample file. Copy the URL for the blob out of the console window and paste it into a browser to view the
contents of the file in Blob storage. If you compare the sample file in your directory with the contents stored in
Blob storage, you will see that they are the same.

NOTE
You can also use a tool such as the Azure Storage Explorer to view the files in Blob storage. Azure Storage Explorer is a free
cross-platform tool that allows you to access your storage account information.
After you've verified the files, press the Enter key to complete the demo and delete the test files. Now that you
know what the sample does, open the AzureApp.java file to look at the code.

Understand the sample code


Next, we walk through the sample code so that you can understand how it works.
Get references to the storage objects
The first thing to do is create the references to the objects used to access and manage Blob storage. These
objects build on each other -- each is used by the next one in the list.
Create an instance of the CloudStorageAccount object pointing to the storage account.
The CloudStorageAccount object is a representation of your storage account and it allows you to set
and access storage account properties programmatically. Using the CloudStorageAccount object you
can create an instance of the CloudBlobClient , which is necessary to access the blob service.
Create an instance of the CloudBlobClient object, which points to the Blob service in your storage
account.
The CloudBlobClient provides you a point of access to the blob service, allowing you to set and access
Blob storage properties programmatically. Using the CloudBlobClient you can create an instance of the
CloudBlobContainer object, which is necessary to create containers.
Create an instance of the CloudBlobContainer object, which represents the container you are accessing.
Use containers to organize your blobs like you use folders on your computer to organize your files.
Once you have the CloudBlobContainer , you can create an instance of the CloudBlockBlob object that
points to the specific blob you're interested in, and perform an upload, download, copy, or other
operation.

IMPORTANT
Container names must be lowercase. For more information about containers, see Naming and Referencing Containers,
Blobs, and Metadata.

Create a container
In this section, you create an instance of the objects, create a new container, and then set permissions on the
container so the blobs are public and can be accessed with just a URL. The container is called
quickstartcontainer.
This example uses CreateIfNotExists because we want to create a new container each time the sample is run. In a
production environment, where you use the same container throughout an application, it's better practice to
only call CreateIfNotExists once. Alternatively, you can create the container ahead of time so you don't need to
create it in the code.

// Parse the connection string and create a blob client to interact with Blob storage
storageAccount = CloudStorageAccount.parse(storageConnectionString);
blobClient = storageAccount.createCloudBlobClient();
container = blobClient.getContainerReference("quickstartcontainer");

// Create the container if it does not exist with public access.


System.out.println("Creating container: " + container.getName());
container.createIfNotExists(BlobContainerPublicAccessType.CONTAINER, new BlobRequestOptions(), new
OperationContext());

Upload blobs to the container


To upload a file to a block blob, get a reference to the blob in the target container. Once you have the blob
reference, you can upload data to it by using CloudBlockBlob.Upload. This operation creates the blob if it doesn't
already exist, or overwrites the blob if it already exists.
The sample code creates a local file to be used for the upload and download, storing the file to be uploaded as
source and the name of the blob in blob. The following example uploads the file to your container called
quickstartcontainer.

//Creating a sample file


sourceFile = File.createTempFile("sampleFile", ".txt");
System.out.println("Creating a sample file at: " + sourceFile.toString());
Writer output = new BufferedWriter(new FileWriter(sourceFile));
output.write("Hello Azure!");
output.close();

//Getting a blob reference


CloudBlockBlob blob = container.getBlockBlobReference(sourceFile.getName());

//Creating blob and uploading file to it


System.out.println("Uploading the sample file ");
blob.uploadFromFile(sourceFile.getAbsolutePath());

There are several upload methods that you can use with Blob storage, including upload, uploadBlock,
uploadFullBlob, uploadStandardBlobTier, and uploadText. For example, if you have a string, you can use the
uploadText method rather than the upload method.
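As a minimal sketch (the blob name greeting.txt is hypothetical and reuses the container reference created earlier), uploading a string directly looks like this:

// Upload a short string directly to a new block blob, without creating a local file first.
CloudBlockBlob textBlob = container.getBlockBlobReference("greeting.txt");
textBlob.uploadText("Hello Azure!");
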
Block blobs can be any type of text or binary file. Page blobs are primarily used for the VHD files that back IaaS
VMs. Use append blobs for logging, such as when you want to write to a file and then keep adding more
information. Most objects stored in Blob storage are block blobs.
List the blobs in a container
You can get a list of files in the container using CloudBlobContainer.ListBlobs. The following code retrieves the
list of blobs, then loops through them, showing the URIs of the blobs found. You can copy the URI from the
command window and paste it into a browser to view the file.

//Listing contents of container


for (ListBlobItem blobItem : container.listBlobs()) {
System.out.println("URI of blob is: " + blobItem.getUri());
}

Download blobs
Download blobs to your local disk using CloudBlob.DownloadToFile.
The following code downloads the blob uploaded in a previous section, adding a suffix of "_DOWNLOADED" to
the blob name so you can see both files on local disk.

// Download blob. In most cases, you would have to retrieve the reference
// to cloudBlockBlob here. However, we created that reference earlier, and
// haven't changed the blob we're interested in, so we can reuse it.
// Here we are creating a new file to download to. Alternatively you can also pass in the path as a string
// into the downloadToFile method: blob.downloadToFile("/path/to/new/file").
downloadedFile = new File(sourceFile.getParentFile(), "downloadedFile.txt");
blob.downloadToFile(downloadedFile.getAbsolutePath());

Clean up resources
If you no longer need the blobs that you have uploaded, you can delete the entire container using
CloudBlobContainer.DeleteIfExists. This method also deletes the files in the container.
try {
    if(container != null)
        container.deleteIfExists();
} catch (StorageException ex) {
    System.out.println(String.format("Service error. Http code: %d and error code: %s",
        ex.getHttpStatusCode(), ex.getErrorCode()));
}

System.out.println("Deleting the source, and downloaded files");

if(downloadedFile != null)
downloadedFile.deleteOnExit();

if(sourceFile != null)
sourceFile.deleteOnExit();

Next steps
In this article, you learned how to transfer files between a local disk and Azure Blob storage using Java. To learn
more about working with Java, continue to our GitHub source code repository.
Java API Reference
Code Samples for Java
Quickstart: Manage blobs with Python v12 SDK
11/25/2021 • 7 minutes to read

In this quickstart, you learn to manage blobs by using Python. Blobs are objects that can hold large amounts of
text or binary data, including images, documents, streaming media, and archive data. You'll upload, download,
and list blobs, and you'll create and delete containers.
More resources:
API reference documentation
Library source code
Package (Python Package Index)
Samples

Prerequisites
An Azure account with an active subscription. Create an account for free.
An Azure Storage account. Create a storage account.
Python 2.7 or 3.6+.

Setting up
This section walks you through preparing a project to work with the Azure Blob Storage client library v12 for
Python.
Create the project
Create a Python application named blob-quickstart-v12.
1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for the project.

mkdir blob-quickstart-v12

2. Switch to the newly created blob-quickstart-v12 directory.

cd blob-quickstart-v12

3. Inside the blob-quickstart-v12 directory, create another directory called data. This directory is where the
blob data files will be created and stored.

mkdir data

Install the package


While still in the application directory, install the Azure Blob Storage client library for Python package by using
the pip install command.

pip install azure-storage-blob

This command installs the Azure Blob Storage client library for Python package and all the libraries on which it
depends. In this case, that is just the Azure core library for Python.
Set up the app framework
From the project directory:
1. Open a new text file in your code editor
2. Add import statements
3. Create the structure for the program, including basic exception handling
Here's the code:

import os, uuid
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient, __version__

try:
    print("Azure Blob Storage v" + __version__ + " - Python quickstart sample")

    # Quick start code goes here

except Exception as ex:
    print('Exception:')
    print(ex)

4. Save the new file as blob-quickstart-v12.py in the blob-quickstart-v12 directory.


Copy your credentials from the Azure portal
When the sample application makes a request to Azure Storage, it must be authorized. To authorize a request,
add your storage account credentials to the application as a connection string. To view your storage account
credentials, follow these steps:
1. Sign in to the Azure portal.
2. Locate your storage account.
3. In the storage account menu pane, under Security + networking, select Access keys. Here, you can
view the account access keys and the complete connection string for each key.

4. In the Access keys pane, select Show keys.


5. In the key1 section, locate the Connection string value. Select the Copy to clipboard icon to copy the
connection string. You will add the connection string value to an environment variable in the next section.
Configure your storage connection string
After you copy the connection string, write it to a new environment variable on the local machine running the
application. To set the environment variable, open a console window, and follow the instructions for your
operating system. Replace <yourconnectionstring> with your actual connection string.
Windows

setx AZURE_STORAGE_CONNECTION_STRING "<yourconnectionstring>"

After you add the environment variable in Windows, you must start a new instance of the command window.
Linux

export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"

macOS

export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"

Restart programs
After you add the environment variable, restart any running programs that will need to read the environment
variable. For example, restart your development environment or editor before you continue.

Object model
Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data
that doesn't adhere to a particular data model or definition, such as text or binary data. Blob storage offers three
types of resources:
The storage account
A container in the storage account
A blob in the container
The following diagram shows the relationship between these resources.

Use the following Python classes to interact with these resources (a short sketch of how they relate follows this list):


BlobServiceClient: The BlobServiceClient class allows you to manipulate Azure Storage resources and blob
containers.
ContainerClient: The ContainerClient class allows you to manipulate Azure Storage containers and their
blobs.
BlobClient: The BlobClient class allows you to manipulate Azure Storage blobs.
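
The following is a minimal sketch of how these three client types relate, assuming the connection string is available in the AZURE_STORAGE_CONNECTION_STRING environment variable; the container and blob names are illustrative only:

import os
from azure.storage.blob import BlobServiceClient

# Account-level client, built from the connection string
service_client = BlobServiceClient.from_connection_string(
    os.getenv("AZURE_STORAGE_CONNECTION_STRING"))

# Container-level client, scoped to a single container (name is illustrative)
container_client = service_client.get_container_client("example-container")

# Blob-level client, scoped to a single blob inside that container
blob_client = container_client.get_blob_client("example.txt")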

Code examples
These example code snippets show you how to do the following tasks with the Azure Blob Storage client library
for Python:
Get the connection string
Create a container
Upload blobs to a container
List the blobs in a container
Download blobs
Delete a container
Get the connection string
The code below retrieves the storage account connection string from the environment variable created in the
Configure your storage connection string section.
Add this code inside the try block:

# Retrieve the connection string for use with the application. The storage
# connection string is stored in an environment variable on the machine
# running the application called AZURE_STORAGE_CONNECTION_STRING. If the environment variable is
# created after the application is launched in a console or with Visual Studio,
# the shell or application needs to be closed and reloaded to take the
# environment variable into account.
connect_str = os.getenv('AZURE_STORAGE_CONNECTION_STRING')

Create a container
Decide on a name for the new container. The code below appends a UUID value to the container name to ensure
that it's unique.

IMPORTANT
Container names must be lowercase. For more information about naming containers and blobs, see Naming and
Referencing Containers, Blobs, and Metadata.

Create an instance of the BlobServiceClient class by calling the from_connection_string method. Then, call the
create_container method to actually create the container in your storage account.
Add this code to the end of the try block:

# Create the BlobServiceClient object which will be used to create a container client
blob_service_client = BlobServiceClient.from_connection_string(connect_str)

# Create a unique name for the container
container_name = str(uuid.uuid4())

# Create the container
container_client = blob_service_client.create_container(container_name)

Upload blobs to a container


The following code snippet:
1. Creates a local directory to hold data files.
2. Creates a text file in the local directory.
3. Gets a reference to a BlobClient object by calling the get_blob_client method on the BlobServiceClient from
the Create a container section.
4. Uploads the local text file to the blob by calling the upload_blob method.
Add this code to the end of the try block:
# Create a local directory to hold blob data (no error if the directory already exists)
local_path = "./data"
os.makedirs(local_path, exist_ok=True)

# Create a file in the local data directory to upload and download
local_file_name = str(uuid.uuid4()) + ".txt"
upload_file_path = os.path.join(local_path, local_file_name)

# Write text to the file
file = open(upload_file_path, 'w')
file.write("Hello, World!")
file.close()

# Create a blob client using the local file name as the name for the blob
blob_client = blob_service_client.get_blob_client(container=container_name, blob=local_file_name)

print("\nUploading to Azure Storage as blob:\n\t" + local_file_name)

# Upload the created file
with open(upload_file_path, "rb") as data:
    blob_client.upload_blob(data)

List the blobs in a container


List the blobs in the container by calling the list_blobs method. In this case, only one blob has been added to the
container, so the listing operation returns just that one blob.
Add this code to the end of the try block:

print("\nListing blobs...")

# List the blobs in the container
blob_list = container_client.list_blobs()
for blob in blob_list:
    print("\t" + blob.name)

Download blobs
Download the previously created blob by calling the download_blob method. The example code adds a suffix of
"DOWNLOAD" to the file name so that you can see both files in local file system.
Add this code to the end of the try block:

# Download the blob to a local file
# Add 'DOWNLOAD' before the .txt extension so you can see both files in the data directory
download_file_path = os.path.join(local_path, str.replace(local_file_name, '.txt', 'DOWNLOAD.txt'))
print("\nDownloading blob to \n\t" + download_file_path)

with open(download_file_path, "wb") as download_file:
    download_file.write(blob_client.download_blob().readall())

Delete a container
The following code cleans up the resources the app created by removing the entire container using the
delete_container method. You can also delete the local files, if you like.
The app pauses for user input by calling input() before it deletes the blob, container, and local files. Verify that
the resources were created correctly, before they're deleted.
Add this code to the end of the try block:
# Clean up
print("\nPress the Enter key to begin clean up")
input()

print("Deleting blob container...")
container_client.delete_container()

print("Deleting the local source and downloaded files...")
os.remove(upload_file_path)
os.remove(download_file_path)
os.rmdir(local_path)

print("Done")

Run the code


This app creates a test file in your local folder and uploads it to Azure Blob Storage. The example then lists the
blobs in the container, and downloads the file with a new name. You can compare the old and new files.
Navigate to the directory containing the blob-quickstart-v12.py file, then execute the following python
command to run the app.

python blob-quickstart-v12.py

The output of the app is similar to the following example:

Azure Blob Storage v12 - Python quickstart sample

Uploading to Azure Storage as blob:
        quickstartcf275796-2188-4057-b6fb-038352e35038.txt

Listing blobs...
        quickstartcf275796-2188-4057-b6fb-038352e35038.txt

Downloading blob to
        ./data/quickstartcf275796-2188-4057-b6fb-038352e35038DOWNLOAD.txt

Press the Enter key to begin clean up

Deleting blob container...
Deleting the local source and downloaded files...
Done

Before you begin the cleanup process, check your data folder for the two files. You can open them and observe
that they're identical.
After you've verified the files, press the Enter key to delete the test files and finish the demo.
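
If you'd rather verify the files programmatically than open them by hand, a small optional check can be added to the try block just before the clean-up code. This is only a sketch and isn't part of the quickstart sample; it reuses the upload_file_path and download_file_path variables defined earlier:

import filecmp

# Prints True when the uploaded source file and the downloaded copy match byte for byte
print(filecmp.cmp(upload_file_path, download_file_path, shallow=False))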

Next steps
In this quickstart, you learned how to upload, download, and list blobs using Python.
To see Blob storage sample apps, continue to:
Azure Blob Storage SDK v12 Python samples
To learn more, see the Azure Storage client libraries for Python.
For tutorials, samples, quickstarts, and other documentation, visit Azure for Python Developers.
Quickstart: Manage blobs with Python v2.1 SDK

In this quickstart, you learn to manage blobs by using Python. Blobs are objects that can hold large amounts of
text or binary data, including images, documents, streaming media, and archive data. You'll upload, download,
and list blobs, and you'll create and delete containers.

NOTE
This quickstart uses a legacy version of the Azure Blob storage client library. To get started with the latest version, see
Quickstart: Manage blobs with Python v12 SDK.

Prerequisites
An Azure account with an active subscription. Create an account for free.
An Azure Storage account. Create a storage account.
Python.
Azure Storage SDK for Python.

Download the sample application


The sample application in this quickstart is a basic Python application.
Use the following git command to download the application to your development environment.

git clone https://github.com/Azure-Samples/storage-blobs-python-quickstart.git

To review the Python program, open the example.py file at the root of the repository.

Copy your credentials from the Azure portal


The sample application needs to authorize access to your storage account. Provide your storage account
credentials to the application in the form of a connection string. To view your storage account credentials:
1. In the Azure portal, go to your storage account.
2. In the Settings section of the storage account overview, select Access keys to display your account
access keys and connection string.
3. Note the name of your storage account, which you'll need for authorization.
4. Find the Key value under key1 , and select Copy to copy the account key.
Configure your storage connection string
In the application, provide your storage account name and account key to create a BlockBlobService object.
1. Open the example.py file from the Solution Explorer in your IDE.
2. Replace the accountname and accountkey values with your storage account name and key:

block_blob_service = BlockBlobService(
    account_name='accountname', account_key='accountkey')

3. Save and close the file.

Run the sample


The sample program creates a test file in your Documents folder, uploads the file to Blob storage, lists the blobs
in the file, and downloads the file with a new name.
1. Install the dependencies:

pip install azure-storage-blob==2.1.0

2. Go to the sample application:

cd storage-blobs-python-quickstart

3. Run the sample:

python example.py

You'll see messages similar to the following output:

Temp file = C:\Users\azureuser\Documents\QuickStart_9f4ed0f9-22d3-43e1-98d0-8b2c05c01078.txt

Uploading to Blob storage as blobQuickStart_9f4ed0f9-22d3-43e1-98d0-8b2c05c01078.txt

List blobs in the container
         Blob name: QuickStart_9f4ed0f9-22d3-43e1-98d0-8b2c05c01078.txt

Downloading blob to C:\Users\azureuser\Documents\QuickStart_9f4ed0f9-22d3-43e1-98d0-8b2c05c01078_DOWNLOADED.txt

4. Before you continue, go to your Documents folder and check for the two files.
QuickStart_<universally-unique-identifier>
QuickStart_<universally-unique-identifier>_DOWNLOADED
5. You can open them and see they're the same.
You can also use a tool like the Azure Storage Explorer. It's good for viewing the files in Blob storage.
Azure Storage Explorer is a free cross-platform tool that lets you access your storage account info.
6. After you've looked at the files, press any key to finish the sample and delete the test files.

Learn about the sample code


Now that you know what the sample does, open the example.py file to look at the code.
Get references to the storage objects
In this section, you instantiate the objects, create a new container, and then set permissions on the container so
the blobs are public. You'll call the container quickstartblobs .

# Create the BlockBlobService that the system uses to call the Blob service for the storage account.
block_blob_service = BlockBlobService(
    account_name='accountname', account_key='accountkey')

# Create a container called 'quickstartblobs'.
container_name = 'quickstartblobs'
block_blob_service.create_container(container_name)

# Set the permission so the blobs are public.
block_blob_service.set_container_acl(
    container_name, public_access=PublicAccess.Container)

First, you create the references used to access and manage Blob storage. These steps build on each other.
Instantiate the BlockBlobService object, which points to the Blob service in your storage account.
Call create_container to create the container that will hold your blobs. The system uses containers to organize your blobs like you use folders on your computer to organize your files.
Once you have the container, you refer to a specific blob by passing the container name and a blob name to the BlockBlobService upload, download, and copy methods.

IMPORTANT
Container names must be lowercase. For more information about container and blob names, see Naming and Referencing
Containers, Blobs, and Metadata.

Upload blobs to the container


Blob storage supports block blobs, append blobs, and page blobs. Block blobs can be as large as 4.7 TB, and can
be anything from Excel spreadsheets to large video files. You can use append blobs for logging when you want
to write to a file and then keep adding more information. Page blobs are primarily used for the Virtual Hard Disk
(VHD) files that back infrastructure as a service virtual machines (IaaS VMs). Block blobs are the most commonly
used. This quickstart uses block blobs.
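
For the logging scenario mentioned above, the legacy SDK also ships an AppendBlobService class. The following is only an illustrative sketch, not part of this quickstart's sample; the account credentials, container name, and blob name are placeholders:

from azure.storage.blob import AppendBlobService

append_blob_service = AppendBlobService(
    account_name='accountname', account_key='accountkey')
append_blob_service.create_container('logs')

# Create an empty append blob, then keep appending log lines to it
append_blob_service.create_blob('logs', 'app.log')
append_blob_service.append_blob_from_text('logs', 'app.log', 'application started\n')
append_blob_service.append_blob_from_text('logs', 'app.log', 'first request handled\n')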
To upload a file to a blob, get the full file path by joining the directory name with the file name on your local
drive. You can then upload the file to the specified path using the create_blob_from_path method.
The sample code creates a local file the system uses for the upload and download, storing the file the system
uploads as full_path_to_file and the name of the blob as local_file_name. This example uploads the file to your
container called quickstartblobs :
# Create a file in Documents to test the upload and download.
local_path = os.path.expanduser("~\Documents")
local_file_name = "QuickStart_" + str(uuid.uuid4()) + ".txt"
full_path_to_file = os.path.join(local_path, local_file_name)

# Write text to the file.
file = open(full_path_to_file, 'w')
file.write("Hello, World!")
file.close()

print("Temp file = " + full_path_to_file)
print("\nUploading to Blob storage as blob" + local_file_name)

# Upload the created file, use local_file_name for the blob name.
block_blob_service.create_blob_from_path(
    container_name, local_file_name, full_path_to_file)

There are several upload methods that you can use with Blob storage. For example, if you have a memory
stream, you can use the create_blob_from_stream method rather than create_blob_from_path .
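
A brief sketch of the in-memory variant follows; the stream contents and blob name are illustrative, and block_blob_service and container_name are the objects created earlier in this quickstart:

import io

# Upload the contents of an in-memory stream instead of a file on disk
stream = io.BytesIO(b"Hello from a stream")
block_blob_service.create_blob_from_stream(container_name, 'from-stream.txt', stream)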
List the blobs in a container
The following code creates a generator for the list_blobs method. The code loops through the list of blobs in
the container and prints their names to the console.

# List the blobs in the container.
print("\nList blobs in the container")
generator = block_blob_service.list_blobs(container_name)
for blob in generator:
    print("\t Blob name: " + blob.name)

Download the blobs


Download blobs to your local disk using the get_blob_to_path method. The following code downloads the blob
you uploaded previously. The system appends _DOWNLOADED to the blob name so you can see both files on
your local disk.

# Download the blob(s).
# Add '_DOWNLOADED' before the '.txt' extension so you can see both files in Documents.
full_path_to_file2 = os.path.join(local_path, local_file_name.replace(
    '.txt', '_DOWNLOADED.txt'))
print("\nDownloading blob to " + full_path_to_file2)
block_blob_service.get_blob_to_path(
    container_name, local_file_name, full_path_to_file2)

Clean up resources
If you no longer need the blobs uploaded in this quickstart, you can delete the entire container using the
delete_container method. To delete individual files instead, use the delete_blob method.

# Clean up resources. This includes the container and the temp files.
block_blob_service.delete_container(container_name)
os.remove(full_path_to_file)
os.remove(full_path_to_file2)
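
If you only want to remove a single blob rather than the whole container, a one-line sketch of the delete_blob call mentioned above, reusing the container_name and local_file_name variables from this sample:

# Remove a single blob while leaving the container in place
block_blob_service.delete_blob(container_name, local_file_name)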

Resources for developing Python applications with blobs


For more about Python development with Blob storage, see these additional resources:
Binaries and source code
View, download, and install the Python client library source code for Azure Storage on GitHub.
Client library reference and samples
For more about the Python client library, see the Azure Storage libraries for Python.
Explore Blob storage samples written using the Python client library.

Next steps
In this quickstart, you learned how to transfer files between a local disk and Azure Blob storage using Python.
For more about the Storage Explorer and Blobs, see Manage Azure Blob storage resources with Storage Explorer.
Quickstart: Manage blobs with JavaScript v12 SDK
in Node.js

In this quickstart, you learn to manage blobs by using Node.js. Blobs are objects that can hold large amounts of
text or binary data, including images, documents, streaming media, and archive data. You'll upload, download,
and list blobs, and you'll create and delete containers.
Additional resources:
API reference documentation
Library source code
Package (Node Package Manager)
Samples

Prerequisites
An Azure account with an active subscription. Create an account for free.
An Azure Storage account. Create a storage account.
Node.js.

Setting up
This section walks you through preparing a project to work with the Azure Blob storage client library v12 for
JavaScript.
Create the project
Create a JavaScript application named blob-quickstart-v12.
1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for the project.

mkdir blob-quickstart-v12

2. Switch to the newly created blob-quickstart-v12 directory.

cd blob-quickstart-v12

3. Create a new text file called package.json. This file defines the Node.js project. Save this file in the blob-quickstart-v12 directory. Here is the content of the file; the uuid package is listed as a dependency because the sample code uses it to generate unique container and blob names:

{
  "name": "blob-quickstart-v12",
  "version": "1.0.0",
  "description": "Use the @azure/storage-blob SDK version 12 to interact with Azure Blob storage",
  "main": "blob-quickstart-v12.js",
  "scripts": {
    "start": "node blob-quickstart-v12.js"
  },
  "author": "Your Name",
  "license": "MIT",
  "dependencies": {
    "@azure/storage-blob": "^12.0.0",
    "@types/dotenv": "^4.0.3",
    "dotenv": "^6.0.0",
    "uuid": "^8.3.2"
  }
}

You can put your own name in for the author field, if you'd like.
Install the package
While still in the blob-quickstart-v12 directory, install the Azure Blob storage client library for JavaScript
package by using the npm install command. This command reads the package.json file and installs the Azure
Blob storage client library v12 for JavaScript package and all the libraries on which it depends.

npm install

Set up the app framework


From the project directory:
1. Open another new text file in your code editor
2. Add require calls to load Azure and Node.js modules
3. Create the structure for the program, including basic exception handling
Here's the code:

const { BlobServiceClient } = require('@azure/storage-blob');
const { v1: uuidv1 } = require('uuid');

async function main() {
    console.log('Azure Blob storage v12 - JavaScript quickstart sample');
    // Quick start code goes here
}

main().then(() => console.log('Done')).catch((ex) => console.log(ex.message));

4. Save the new file as blob-quickstart-v12.js in the blob-quickstart-v12 directory.


Copy your credentials from the Azure portal
When the sample application makes a request to Azure Storage, it must be authorized. To authorize a request,
add your storage account credentials to the application as a connection string. To view your storage account
credentials, follow these steps:
1. Sign in to the Azure portal.
2. Locate your storage account.
3. In the storage account menu pane, under Security + networking , select Access keys . Here, you can
view the account access keys and the complete connection string for each key.

4. In the Access keys pane, select Show keys .


5. In the key1 section, locate the Connection string value. Select the Copy to clipboard icon to copy the
connection string. You will add the connection string value to an environment variable in the next section.

Configure your storage connection string


After you copy the connection string, write it to a new environment variable on the local machine running the
application. To set the environment variable, open a console window, and follow the instructions for your
operating system. Replace <yourconnectionstring> with your actual connection string.
Windows

setx AZURE_STORAGE_CONNECTION_STRING "<yourconnectionstring>"

After you add the environment variable in Windows, you must start a new instance of the command window.
Linux

export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"

macOS

export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"

Restart programs
After you add the environment variable, restart any running programs that will need to read the environment
variable. For example, restart your development environment or editor before you continue.

Object model
Azure Blob storage is optimized for storing massive amounts of unstructured data. Unstructured data is data
that does not adhere to a particular data model or definition, such as text or binary data. Blob storage offers
three types of resources:
The storage account
A container in the storage account
A blob in the container
The following diagram shows the relationship between these resources.

Use the following JavaScript classes to interact with these resources (a short sketch of how they relate follows this list):


BlobServiceClient: The BlobServiceClient class allows you to manipulate Azure Storage resources and blob
containers.
ContainerClient: The ContainerClient class allows you to manipulate Azure Storage containers and their
blobs.
BlobClient: The BlobClient class allows you to manipulate Azure Storage blobs.
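
The following is a minimal sketch of how these client types relate, assuming the connection string is available in the AZURE_STORAGE_CONNECTION_STRING environment variable; the container and blob names are illustrative only:

const { BlobServiceClient } = require('@azure/storage-blob');

// Account-level client, built from the connection string
const serviceClient = BlobServiceClient.fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING);

// Container-level client, scoped to a single container
const containerClient = serviceClient.getContainerClient('example-container');

// Blob-level client, scoped to a single blob inside that container
const blobClient = containerClient.getBlobClient('example.txt');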

Code examples
These example code snippets show you how to perform the following with the Azure Blob storage client library
for JavaScript:
Get the connection string
Create a container
Upload blobs to a container
List the blobs in a container
Download blobs
Delete a container
Get the connection string
The code below retrieves the connection string for the storage account from the environment variable created in
the Configure your storage connection string section.
Add this code inside the main function:

// Retrieve the connection string for use with the application. The storage
// connection string is stored in an environment variable on the machine
// running the application called AZURE_STORAGE_CONNECTION_STRING. If the
// environment variable is created after the application is launched in a
// console or with Visual Studio, the shell or application needs to be closed
// and reloaded to take the environment variable into account.
const AZURE_STORAGE_CONNECTION_STRING = process.env.AZURE_STORAGE_CONNECTION_STRING;

Create a container
Decide on a name for the new container. The code below appends a UUID value to the container name to ensure
that it is unique.

IMPORTANT
Container names must be lowercase. For more information about naming containers and blobs, see Naming and
Referencing Containers, Blobs, and Metadata.

Create an instance of the BlobServiceClient class by calling the fromConnectionString method. Then, call the
getContainerClient method to get a reference to a container. Finally, call create to actually create the container in
your storage account.
Add this code to the end of the main function:

// Create the BlobServiceClient object which will be used to create a container client
const blobServiceClient = BlobServiceClient.fromConnectionString(AZURE_STORAGE_CONNECTION_STRING);

// Create a unique name for the container
const containerName = 'quickstart' + uuidv1();

console.log('\nCreating container...');
console.log('\t', containerName);

// Get a reference to a container
const containerClient = blobServiceClient.getContainerClient(containerName);

// Create the container
const createContainerResponse = await containerClient.create();
console.log("Container was created successfully. requestId: ", createContainerResponse.requestId);

Upload blobs to a container


The following code snippet:
1. Creates a text string to upload to a blob.
2. Gets a reference to a BlockBlobClient object by calling the getBlockBlobClient method on the ContainerClient
from the Create a container section.
3. Uploads the text string data to the blob by calling the upload method.
Add this code to the end of the main function:

// Create a unique name for the blob
const blobName = 'quickstart' + uuidv1() + '.txt';

// Get a block blob client
const blockBlobClient = containerClient.getBlockBlobClient(blobName);

console.log('\nUploading to Azure storage as blob:\n\t', blobName);

// Upload data to the blob
const data = 'Hello, World!';
const uploadBlobResponse = await blockBlobClient.upload(data, data.length);
console.log("Blob was uploaded successfully. requestId: ", uploadBlobResponse.requestId);

List the blobs in a container


List the blobs in the container by calling the listBlobsFlat method. In this case, only one blob has been added to
the container, so the listing operation returns just that one blob.
Add this code to the end of the main function:

console.log('\nListing blobs...');

// List the blob(s) in the container.
for await (const blob of containerClient.listBlobsFlat()) {
    console.log('\t', blob.name);
}

Download blobs
Download the previously created blob by calling the download method. The example code includes a helper
function called streamToString , which is used to read a Node.js readable stream into a string.
Add this code to the end of the main function:

// Get blob content from position 0 to the end
// In Node.js, get downloaded data by accessing downloadBlockBlobResponse.readableStreamBody
// In browsers, get downloaded data by accessing downloadBlockBlobResponse.blobBody
const downloadBlockBlobResponse = await blockBlobClient.download(0);
console.log('\nDownloaded blob content...');
console.log('\t', await streamToString(downloadBlockBlobResponse.readableStreamBody));

Add this helper function after the main function:

// A helper function used to read a Node.js readable stream into a string
async function streamToString(readableStream) {
    return new Promise((resolve, reject) => {
        const chunks = [];
        readableStream.on("data", (data) => {
            chunks.push(data.toString());
        });
        readableStream.on("end", () => {
            resolve(chunks.join(""));
        });
        readableStream.on("error", reject);
    });
}

Delete a container
The following code cleans up the resources the app created by removing the entire container using the delete
method. You can also delete the local files, if you like.
Add this code to the end of the main function:

console.log('\nDeleting container...');

// Delete container
const deleteContainerResponse = await containerClient.delete();
console.log("Container was deleted successfully. requestId: ", deleteContainerResponse.requestId);

Run the code


This app creates a text string and uploads it to Blob storage. The example then lists the blob(s) in the container,
downloads the blob, and displays the downloaded data.
From a console prompt, navigate to the directory containing the blob-quickstart-v12.js file, then execute the
following node command to run the app.

node blob-quickstart-v12.js

The output of the app is similar to the following example:


Azure Blob storage v12 - JavaScript quickstart sample

Creating container...
        quickstart4a0780c0-fb72-11e9-b7b9-b387d3c488da

Uploading to Azure Storage as blob:
        quickstart4a3128d0-fb72-11e9-b7b9-b387d3c488da.txt

Listing blobs...
        quickstart4a3128d0-fb72-11e9-b7b9-b387d3c488da.txt

Downloaded blob content...
        Hello, World!

Deleting container...
Done

Step through the code in your debugger and check your Azure portal throughout the process. Check to see that
the container is being created. You can open the blob inside the container and view the contents.

Next steps
In this quickstart, you learned how to upload, download, and list blobs using JavaScript.
For tutorials, samples, quickstarts, and other documentation, visit:
Azure for JavaScript developer center
To learn how to deploy a web app that uses Azure Blob storage, see Tutorial: Upload image data in the cloud
with Azure Storage
To see Blob storage sample apps, continue to Azure Blob storage client library v12 JavaScript samples.
To learn more, see the Azure Blob storage client library for JavaScript.
Quickstart: Manage blobs with JavaScript v10 SDK
in Node.js

In this quickstart, you learn to manage blobs by using Node.js. Blobs are objects that can hold large amounts of
text or binary data, including images, documents, streaming media, and archive data. You'll upload, download,
list, and delete blobs, and you'll manage containers.

NOTE
This quickstart uses a legacy version of the Azure Blob storage client library. To get started with the latest version, see
Quickstart: Manage blobs with JavaScript v12 SDK in Node.js.

Prerequisites
An Azure account with an active subscription. Create an account for free.
An Azure Storage account. Create a storage account.
Node.js.

Download the sample application


The sample application in this quickstart is a simple Node.js console application. To begin, clone the repository
to your machine using the following command:

git clone https://github.com/Azure-Samples/azure-storage-js-v10-quickstart.git

Next, change folders for the application:

cd azure-storage-js-v10-quickstart

Now, open the folder in your favorite code editing environment.

Configure your storage credentials


Before running the application, you must provide the security credentials for your storage account. The sample
repository includes a file named .env.example. Rename this file by removing the .example extension, which
results in a file named .env. Inside the .env file, add your account name and access key values after the
AZURE_STORAGE_ACCOUNT_NAME and AZURE_STORAGE_ACCOUNT_ACCESS_KEY keys.

Install required packages


In the application directory, run npm install to install the required packages for the application.

npm install

Run the sample


Now that the dependencies are installed, you can run the sample by issuing the following command:

npm start

The output from the app will be similar to the following example:

Container "demo" is created


Containers:
- container-one
- container-two
- demo
Blob "quickstart.txt" is uploaded
Local file "./readme.md" is uploaded
Blobs in "demo" container:
- quickstart.txt
- readme-stream.md
- readme.md
Blob downloaded blob content: "hello!"
Blob "quickstart.txt" is deleted
Container "demo" is deleted
Done

If you're using a new storage account for this quickstart, then you may only see the demo container listed under
the label "Containers:".

Understanding the code


The sample begins by importing a number of classes and functions from the Azure Blob storage namespace.
Each of the imported items is discussed in context as they're used in the sample.

const {
Aborter,
BlobURL,
BlockBlobURL,
ContainerURL,
ServiceURL,
SharedKeyCredential,
StorageURL,
uploadStreamToBlockBlob
} = require('@azure/storage-blob');

Credentials are read from environment variables based on the appropriate context.

if (process.env.NODE_ENV !== 'production') {
    require('dotenv').config();
}

The dotenv module loads environment variables when running the app locally for debugging. Values are
defined in a file named .env and loaded into the current execution context. In production, the server
configuration provides these values, which is why this code only runs when the script is not running under a
"production" environment.
The next block of modules is imported to help interface with the file system.

const fs = require('fs');
const path = require('path');
The purpose of these modules is as follows:
fs is the native Node.js module used to work with the file system
path is required to determine the absolute path of the file, which is used when uploading a file to Blob
storage
Next, environment variable values are read and set aside in constants.

const STORAGE_ACCOUNT_NAME = process.env.AZURE_STORAGE_ACCOUNT_NAME;
const ACCOUNT_ACCESS_KEY = process.env.AZURE_STORAGE_ACCOUNT_ACCESS_KEY;

The next set of constants helps to reveal the intent of file size calculations during upload operations.

const ONE_MEGABYTE = 1024 * 1024;
const FOUR_MEGABYTES = 4 * ONE_MEGABYTE;

Requests made by the API can be set to time out after a given interval. The Aborter class is responsible for
managing how requests are timed-out and the following constant is used to define timeouts used in this
sample.

const ONE_MINUTE = 60 * 1000;

Calling code
To support JavaScript's async/await syntax, all the calling code is wrapped in a function named execute. Then
execute is called and handled as a promise.

async function execute() {
    // commands...
}

execute().then(() => console.log("Done")).catch((e) => console.log(e));

All of the following code runs inside the execute function where the // commands... comment is placed.
First, the relevant variables are declared to assign names, sample content and to point to the local file to upload
to Blob storage.

const containerName = "demo";
const blobName = "quickstart.txt";
const content = "hello!";
const localFilePath = "./readme.md";

Account credentials are used to create a pipeline, which is responsible for managing how requests are sent to
the REST API. Pipelines are thread-safe and specify logic for retry policies, logging, HTTP response
deserialization rules, and more.

const credentials = new SharedKeyCredential(STORAGE_ACCOUNT_NAME, ACCOUNT_ACCESS_KEY);
const pipeline = StorageURL.newPipeline(credentials);
const serviceURL = new ServiceURL(`https://${STORAGE_ACCOUNT_NAME}.blob.core.windows.net`, pipeline);

The following classes are used in this block of code:


The SharedKeyCredential class is responsible for wrapping storage account credentials to provide them
to a request pipeline.
The StorageURL class is responsible for creating a new pipeline.
The ServiceURL models a URL used in the REST API. Instances of this class allow you to perform actions
like list containers and provide context information to generate container URLs.
The instance of ServiceURL is used with the ContainerURL and BlockBlobURL instances to manage containers
and blobs in your storage account.

const containerURL = ContainerURL.fromServiceURL(serviceURL, containerName);
const blockBlobURL = BlockBlobURL.fromContainerURL(containerURL, blobName);

The containerURL and blockBlobURL variables are reused throughout the sample to act on the storage account.
At this point, the container doesn't exist in the storage account. The instance of ContainerURL represents a URL
that you can act upon. By using this instance, you can create and delete the container. The location of this
container equates to a location such as this:

https://<ACCOUNT_NAME>.blob.core.windows.net/demo

The blockBlobURL is used to manage individual blobs, allowing you to upload, download, and delete blob
content. The URL represented here is similar to this location:

https://<ACCOUNT_NAME>.blob.core.windows.net/demo/quickstart.txt

As with the container, the block blob doesn't exist yet. The blockBlobURL variable is used later to create the blob
by uploading content.
Using the Aborter class
Requests made by the API can be set to time out after a given interval. The Aborter class is responsible for
managing how requests are timed out. The following code creates a context where a set of requests is given 30
minutes to execute.

const aborter = Aborter.timeout(30 * ONE_MINUTE);

Aborters give you control over requests (see the sketch after this list) by allowing you to:
designate the amount of time given for a batch of requests
designate how long an individual request has to execute in the batch
allow you to cancel requests
use the Aborter.none static member to stop your requests from timing out altogether
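
A short sketch of these options, written as if it ran inside the execute function shown earlier; the withTimeout child-aborter call is an assumption about the v10 Aborter API rather than something used elsewhere in this sample:

// Give a whole batch of operations 30 minutes in total
const batchAborter = Aborter.timeout(30 * ONE_MINUTE);

// Give one request inside that batch only one minute (withTimeout is assumed to return a child aborter)
const containerProperties = await containerURL.getProperties(batchAborter.withTimeout(ONE_MINUTE));

// Send a request that the client never times out
const blobProperties = await blockBlobURL.getProperties(Aborter.none);

// Cancel every request still tied to the batch aborter
batchAborter.abort();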
Create a container
To create a container, the ContainerURL's create method is used.

await containerURL.create(aborter);
console.log(`Container: "${containerName}" is created`);

As the name of the container is defined when calling ContainerURL.fromServiceURL(serviceURL, containerName), calling the create method is all that's required to create the container.
Show container names
Accounts can store a vast number of containers. The following code demonstrates how to list containers in a
segmented fashion, which allows you to cycle through a large number of containers. The showContainerNames
function is passed instances of ServiceURL and Aborter.

console.log("Containers:");
await showContainerNames(serviceURL, aborter);

The showContainerNames function uses the listContainersSegment method to request batches of container
names from the storage account.

async function showContainerNames(serviceURL, aborter) {
    let marker = undefined;

    do {
        const listContainersResponse = await serviceURL.listContainersSegment(aborter, marker);
        marker = listContainersResponse.nextMarker;
        for (let container of listContainersResponse.containerItems) {
            console.log(` - ${ container.name }`);
        }
    } while (marker);
}

When the response is returned, the containerItems are iterated to log each container name to the console.
Upload text
To upload text to the blob, use the upload method.

await blockBlobURL.upload(aborter, content, content.length);
console.log(`Blob "${blobName}" is uploaded`);

Here the text and its length are passed into the method.
Upload a local file
To upload a local file to the container, you need a container URL and the path to the file.

await uploadLocalFile(aborter, containerURL, localFilePath);
console.log(`Local file "${localFilePath}" is uploaded`);

The uploadLocalFile function calls the uploadFileToBlockBlob function, which takes the file path and an instance
of the destination of the block blob as arguments.

async function uploadLocalFile(aborter, containerURL, filePath) {
    filePath = path.resolve(filePath);

    const fileName = path.basename(filePath);
    const blockBlobURL = BlockBlobURL.fromContainerURL(containerURL, fileName);

    return await uploadFileToBlockBlob(aborter, filePath, blockBlobURL);
}

Upload a stream
Uploading streams is also supported. This sample opens a local file as a stream to pass to the upload method.
await uploadStream(aborter, containerURL, localFilePath);
console.log(`Local file "${localFilePath}" is uploaded as a stream`);

The uploadStream function calls uploadStreamToBlockBlob to upload the stream to the storage container.

async function uploadStream(aborter, containerURL, filePath) {
    filePath = path.resolve(filePath);

    const fileName = path.basename(filePath).replace('.md', '-stream.md');
    const blockBlobURL = BlockBlobURL.fromContainerURL(containerURL, fileName);

    const stream = fs.createReadStream(filePath, {
        highWaterMark: FOUR_MEGABYTES,
    });

    const uploadOptions = {
        bufferSize: FOUR_MEGABYTES,
        maxBuffers: 5,
    };

    return await uploadStreamToBlockBlob(
        aborter,
        stream,
        blockBlobURL,
        uploadOptions.bufferSize,
        uploadOptions.maxBuffers);
}

During an upload, uploadStreamToBlockBlob allocates buffers to cache data from the stream in case a retry is
necessary. The maxBuffers value designates at most how many buffers are used as each buffer creates a
separate upload request. Ideally, more buffers equate to higher speeds, but at the cost of higher memory usage.
The upload speed plateaus when the number of buffers is high enough that the bottleneck transitions to the
network or disk instead of the client.
Show blob names
Just as accounts can contain many containers, each container can potentially contain a vast number of blobs. Access to each blob in a container is available via an instance of the ContainerURL class.

console.log(`Blobs in "${containerName}" container:`);
await showBlobNames(aborter, containerURL);

The function showBlobNames calls listBlobFlatSegment to request batches of blobs from the container.

async function showBlobNames(aborter, containerURL) {
    let marker = undefined;

    do {
        const listBlobsResponse = await containerURL.listBlobFlatSegment(Aborter.none, marker);
        marker = listBlobsResponse.nextMarker;
        for (const blob of listBlobsResponse.segment.blobItems) {
            console.log(` - ${ blob.name }`);
        }
    } while (marker);
}

Download a blob
Once a blob is created, you can download the contents by using the download method.
const downloadResponse = await blockBlobURL.download(aborter, 0);
const downloadedContent = await streamToString(downloadResponse.readableStreamBody);
console.log(`Downloaded blob content: "${downloadedContent}"`);

The response is returned as a stream. In this example, the stream is converted to a string by using the following
streamToString helper function.

// A helper method used to read a Node.js readable stream into a string
async function streamToString(readableStream) {
    return new Promise((resolve, reject) => {
        const chunks = [];
        readableStream.on("data", data => {
            chunks.push(data.toString());
        });
        readableStream.on("end", () => {
            resolve(chunks.join(""));
        });
        readableStream.on("error", reject);
    });
}

Delete a blob
The delete method from a BlockBlobURL instance deletes a blob from the container.

await blockBlobURL.delete(aborter)
console.log(`Block blob "${blobName}" is deleted`);

Delete a container
The delete method from a ContainerURL instance deletes a container from the storage account.

await containerURL.delete(aborter);
console.log(`Container "${containerName}" is deleted`);

Clean up resources
All data written to the storage account is automatically deleted at the end of the code sample.

Next steps
This quickstart demonstrates how to manage blobs and containers in Azure Blob storage using Node.js. To learn
more about working with this SDK, refer to the GitHub repository.
Azure Storage v10 SDK for JavaScript repository Azure Storage JavaScript API Reference
Quickstart: Manage blobs with JavaScript v12 SDK
in a browser

Azure Blob storage is optimized for storing large amounts of unstructured data. Blobs are objects that can hold
text or binary data, including images, documents, streaming media, and archive data. In this quickstart, you learn
to manage blobs by using JavaScript in a browser. You'll upload and list blobs, and you'll create and delete
containers.
Additional resources:
API reference documentation
Library source code
Package (npm)
Samples

Prerequisites
An Azure account with an active subscription
An Azure Storage account
Node.js
Microsoft Visual Studio Code
A Visual Studio Code extension for browser debugging, such as:
Debugger for Microsoft Edge
Debugger for Chrome
Debugger for Firefox

Object model
Blob storage offers three types of resources:
The storage account
A container in the storage account
A blob in the container
The following diagram shows the relationship between these resources.

In this quickstart, you'll use the following JavaScript classes to interact with these resources (a short sketch of how they fit together follows this list):
BlobServiceClient: The BlobServiceClient class allows you to manipulate Azure Storage resources and blob
containers.
ContainerClient: The ContainerClient class allows you to manipulate Azure Storage containers and their
blobs.
BlockBlobClient: The BlockBlobClient class allows you to manipulate Azure Storage blobs.
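
The following is a compact sketch of how these client types fit together in the browser; the SAS URL placeholder, container name, and blob name are illustrative, and the real SAS URL is generated later in this quickstart:

const { BlobServiceClient } = require("@azure/storage-blob");

// Account-level client, authorized by the SAS token embedded in the URL
const blobServiceClient = new BlobServiceClient("<blob-service-sas-url>");

// Container-level client, scoped to a single container
const containerClient = blobServiceClient.getContainerClient("example-container");

// Block-blob-level client, used to upload and delete an individual blob
const blockBlobClient = containerClient.getBlockBlobClient("example.txt");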

Setting up
This section walks you through preparing a project to work with the Azure Blob storage client library v12 for
JavaScript.
Create a CORS rule
Before your web application can access blob storage from the client, you must configure your account to enable
cross-origin resource sharing, or CORS.
In the Azure portal, select your storage account. To define a new CORS rule, navigate to the Settings section
and select CORS . For this quickstart, you create an open CORS rule:

The following table describes each CORS setting and explains the values used to define the rule.

Setting | Value | Description
Allowed origins | * | Accepts a comma-delimited list of domains set as acceptable origins. Setting the value to * allows all domains access to the storage account.
Allowed methods | DELETE, GET, HEAD, MERGE, POST, OPTIONS, and PUT | Lists the HTTP verbs allowed to execute against the storage account. For the purposes of this quickstart, select all available options.
Allowed headers | * | Defines a list of request headers (including prefixed headers) allowed by the storage account. Setting the value to * allows all headers access.
Exposed headers | * | Lists the allowed response headers by the account. Setting the value to * allows the account to send any header.
Max age | 86400 | The maximum amount of time the browser caches the preflight OPTIONS request, in seconds. A value of 86400 allows the cache to remain for a full day.

After you fill in the fields with the values from this table, click the Save button.

IMPORTANT
Ensure any settings you use in production expose the minimum amount of access necessary to your storage account to
maintain secure access. The CORS settings described here are appropriate for a quickstart as it defines a lenient security
policy. These settings, however, are not recommended for a real-world context.

Create a shared access signature


The shared access signature (SAS) is used by code running in the browser to authorize Azure Blob storage
requests. By using the SAS, the client can authorize access to storage resources without the account access key
or connection string. For more information on SAS, see Using shared access signatures (SAS).
Follow these steps to get the Blob service SAS URL:
1. In the Azure portal, select your storage account.
2. Navigate to the Security + networking section and select Shared access signature .
3. Scroll down and click the Generate SAS and connection string button.
4. Scroll down further and locate the Blob service SAS URL field.
5. Click the Copy to clipboard button at the far-right end of the Blob service SAS URL field.
6. Save the copied URL somewhere for use in an upcoming step.
Add the Azure Blob storage client library
On your local computer, create a new folder called azure-blobs-js-browser and open it in Visual Studio Code.
Select View > Terminal to open a console window inside Visual Studio Code. Run the following Node.js
Package Manager (npm) command in the terminal window to create a package.json file.

npm init -y

The Azure SDK is composed of many separate packages. You can choose which packages you need based on the
services you intend to use. Run following npm command in the terminal window to install the
@azure/storage-blob package.

npm install --save @azure/storage-blob

Bundle the Azure Blob storage client library


To use Azure SDK libraries on a website, convert your code to work inside the browser. You do this using a tool
called a bundler. Bundling takes JavaScript code written using Node.js conventions and converts it into a format
that's understood by browsers. This quickstart article uses the Parcel bundler.
Install Parcel by running the following npm command in the terminal window:

npm install -g parcel-bundler

In Visual Studio Code, open the package.json file and add a browserslist entry between the license and dependencies entries. This browserslist targets the latest version of three popular browsers. The full package.json file should now look like this:

{
  "name": "azure-blobs-javascript",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "browserslist": [
    "last 1 Edge version",
    "last 1 Chrome version",
    "last 1 Firefox version"
  ],
  "dependencies": {
    "@azure/storage-blob": "^12.1.1"
  }
}

Save the package.json file.


Import the Azure Blob storage client library
To use Azure SDK libraries inside JavaScript, import the @azure/storage-blob package. Create a new file in Visual
Studio Code containing the following JavaScript code.

// index.js
const { BlobServiceClient } = require("@azure/storage-blob");
// Now do something interesting with BlobServiceClient

Save the file as index.js in the azure-blobs-js-browser directory.


Implement the HTML page
Create a new file in Visual Studio Code and add the following HTML code.
<!-- index.html -->
<!DOCTYPE html>
<html>

<body>
<button id="create-container-button">Create container</button>
<button id="delete-container-button">Delete container</button>
<button id="select-button">Select and upload files</button>
<input type="file" id="file-input" multiple style="display: none;" />
<button id="list-button">List files</button>
<button id="delete-button">Delete selected files</button>
<p><b>Status:</b></p>
<p id="status" style="height:160px; width: 593px; overflow: scroll;" />
<p><b>Files:</b></p>
<select id="file-list" multiple style="height:222px; width: 593px; overflow: scroll;" />
</body>

<script src="./index.js"></script>

</html>

Save the file as index.html in the azure-blobs-js-browser folder.

Code examples
The example code shows you how to accomplish the following tasks with the Azure Blob storage client library
for JavaScript:
Declare fields for UI elements
Add your storage account info
Create client objects
Create and delete a storage container
List blobs
Upload blobs
Delete blobs
You'll run the code after you add all the snippets to the index.js file.
Declare fields for UI elements
Add the following code to the end of the index.js file.

const createContainerButton = document.getElementById("create-container-button");
const deleteContainerButton = document.getElementById("delete-container-button");
const selectButton = document.getElementById("select-button");
const fileInput = document.getElementById("file-input");
const listButton = document.getElementById("list-button");
const deleteButton = document.getElementById("delete-button");
const status = document.getElementById("status");
const fileList = document.getElementById("file-list");

const reportStatus = message => {
    status.innerHTML += `${message}<br/>`;
    status.scrollTop = status.scrollHeight;
}

Save the index.js file.


This code declares fields for each HTML element and implements a reportStatus function to display output.
In the following sections, add each new block of JavaScript code after the previous block.
Add your storage account info
Add code to access your storage account. Replace the placeholder with your Blob service SAS URL that you
generated earlier. Add the following code to the end of the index.js file.

// Update <placeholder> with your Blob service SAS URL string
const blobSasUrl = "<placeholder>";

Save the index.js file.


Create client objects
Create BlobServiceClient and ContainerClient objects for interacting with the Azure Blob storage service. Add
the following code to the end of the index.js file.

// Create a new BlobServiceClient
const blobServiceClient = new BlobServiceClient(blobSasUrl);

// Create a unique name for the container by
// appending the current time to the file name
const containerName = "container" + new Date().getTime();

// Get a container client from the BlobServiceClient
const containerClient = blobServiceClient.getContainerClient(containerName);

Save the index.js file.


Create and delete a storage container
Create and delete the storage container when you click the corresponding button on the web page. Add the
following code to the end of the index.js file.

const createContainer = async () => {
    try {
        reportStatus(`Creating container "${containerName}"...`);
        await containerClient.create();
        reportStatus(`Done.`);
    } catch (error) {
        reportStatus(error.message);
    }
};

const deleteContainer = async () => {
    try {
        reportStatus(`Deleting container "${containerName}"...`);
        await containerClient.delete();
        reportStatus(`Done.`);
    } catch (error) {
        reportStatus(error.message);
    }
};

createContainerButton.addEventListener("click", createContainer);
deleteContainerButton.addEventListener("click", deleteContainer);

Save the index.js file.


List blobs
List the contents of the storage container when you click the List files button. Add the following code to the end
of the index.js file.
const listFiles = async () => {
    fileList.size = 0;
    fileList.innerHTML = "";
    try {
        reportStatus("Retrieving file list...");
        let iter = containerClient.listBlobsFlat();
        let blobItem = await iter.next();
        while (!blobItem.done) {
            fileList.size += 1;
            fileList.innerHTML += `<option>${blobItem.value.name}</option>`;
            blobItem = await iter.next();
        }
        if (fileList.size > 0) {
            reportStatus("Done.");
        } else {
            reportStatus("The container does not contain any files.");
        }
    } catch (error) {
        reportStatus(error.message);
    }
};

listButton.addEventListener("click", listFiles);

Save the index.js file.


This code calls the ContainerClient.listBlobsFlat function, then uses an iterator to retrieve the name of each
BlobItem returned. For each BlobItem , it updates the Files list with the name property value.
Upload blobs
Upload files to the storage container when you click the Select and upload files button. Add the following
code to the end of the index.js file.

const uploadFiles = async () => {
    try {
        reportStatus("Uploading files...");
        const promises = [];
        for (const file of fileInput.files) {
            const blockBlobClient = containerClient.getBlockBlobClient(file.name);
            promises.push(blockBlobClient.uploadBrowserData(file));
        }
        await Promise.all(promises);
        reportStatus("Done.");
        listFiles();
    }
    catch (error) {
        reportStatus(error.message);
    }
}

selectButton.addEventListener("click", () => fileInput.click());
fileInput.addEventListener("change", uploadFiles);

Save the index.js file.


This code connects the Select and upload files button to the hidden file-input element. The button click
event triggers the file input click event and displays the file picker. After you select files and close the dialog
box, the input event occurs and the uploadFiles function is called. This function creates a BlockBlobClient
object, then calls the browser-only uploadBrowserData function for each file you selected. Each call returns a
Promise . Each Promise is added to a list so that they can all be awaited together, causing the files to upload in
parallel.
Delete blobs
Delete files from the storage container when you click the Delete selected files button. Add the following code
to the end of the index.js file.

const deleteFiles = async () => {
    try {
        if (fileList.selectedOptions.length > 0) {
            reportStatus("Deleting files...");
            for (const option of fileList.selectedOptions) {
                await containerClient.deleteBlob(option.text);
            }
            reportStatus("Done.");
            listFiles();
        } else {
            reportStatus("No files selected.");
        }
    } catch (error) {
        reportStatus(error.message);
    }
};

deleteButton.addEventListener("click", deleteFiles);

Save the index.js file.


This code calls the ContainerClient.deleteBlob function to remove each file selected in the list. It then calls the
listFiles function shown earlier to refresh the contents of the Files list.

Run the code


To run the code inside the Visual Studio Code debugger, configure the launch.json file for your browser.
Configure the debugger
To set up the debugger extension in Visual Studio Code:
1. Select Run > Add Configuration
2. Select Edge , Chrome , or Firefox , depending on which extension you installed in the Prerequisites section
earlier.
Adding a new configuration creates a launch.json file and opens it in the editor. Modify the launch.json file so that the url value is http://localhost:1234/index.html, as shown here:

{
  // Use IntelliSense to learn about possible attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
  "version": "0.2.0",
  "configurations": [
    {
      "type": "edge",
      "request": "launch",
      "name": "Launch Edge against localhost",
      "url": "http://localhost:1234/index.html",
      "webRoot": "${workspaceFolder}"
    }
  ]
}

After updating, save the launch.json file. This configuration tells Visual Studio Code which browser to open and
which URL to load.
Launch the web server
To launch the local development web server, select View > Terminal to open a console window inside Visual
Studio Code, then enter the following command.

parcel index.html

Parcel bundles your code and starts a local development server for your page at
http://localhost:1234/index.html . Changes you make to index.js will automatically be built and reflected on the
development server whenever you save the file.
If you receive a message that says configured port 1234 could not be used, you can change the port by
running the command parcel -p <port#> index.html. In the launch.json file, update the port in the URL path to
match, as shown in the following example.
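For example, if you start Parcel with parcel -p 5000 index.html (5000 is just an illustrative port, not one this quickstart requires), the url value in launch.json would become:

{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "edge",
            "request": "launch",
            "name": "Launch Edge against localhost",
            "url": "http://localhost:5000/index.html",
            "webRoot": "${workspaceFolder}"
        }
    ]
}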
Start debugging
Run the page in the debugger and get a feel for how blob storage works. If any errors occur, the Status pane on
the web page will display the error message received.
To open index.html in the browser with the Visual Studio Code debugger attached, select Run > Start
Debugging or press F5 in Visual Studio Code.
Use the web app
In the Azure portal, you can verify the results of the API calls as you follow the steps below.
Step 1 - Create a container
1. In the web app, select Create container . The status indicates that a container was created.
2. To verify in the Azure portal, select your storage account. Under Blob service, select Containers. Verify that
the new container appears. (You may need to select Refresh.)
Step 2 - Upload a blob to the container
1. On your local computer, create and save a test file, such as test.txt.
2. In the web app, click Select and upload files .
3. Browse to your test file, and then select Open . The status indicates that the file was uploaded, and the file list
was retrieved.
4. In the Azure portal, select the name of the new container that you created earlier. Verify that the test file
appears.
Step 3 - Delete the blob
1. In the web app, under Files , select the test file.
2. Select Delete selected files . The status indicates that the file was deleted and that the container contains no
files.
3. In the Azure portal, select Refresh . Verify that you see No blobs found .
Step 4 - Delete the container
1. In the web app, select Delete container . The status indicates that the container was deleted.
2. In the Azure portal, select the <account-name> | Containers link at the top-left of the portal pane.
3. Select Refresh . The new container disappears.
4. Close the web app.
Clean up resources
Click on the Terminal console in Visual Studio Code and press CTRL+C to stop the web server.
To clean up the resources created during this quickstart, go to the Azure portal and delete the resource group
you created in the Prerequisites section.
Next steps
In this quickstart, you learned how to upload, list, and delete blobs using JavaScript. You also learned how to
create and delete a blob storage container.
For tutorials, samples, quickstarts, and other documentation, visit:
Azure for JavaScript documentation
To learn more, see the Azure Blob storage client library for JavaScript.
To see Blob storage sample apps, continue to Azure Blob storage client library v12 JavaScript samples.
Quickstart: Manage blobs with JavaScript v10 SDK
in browser

In this quickstart, you learn to manage blobs by using JavaScript code running entirely in the browser. Blobs are
objects that can hold large amounts of text or binary data, including images, documents, streaming media, and
archive data. You'll use required security measures to ensure protected access to your blob storage account.

NOTE
This quickstart uses a legacy version of the Azure Blob storage client library. To get started with the latest version, see
Quickstart: Manage blobs with JavaScript v12 SDK in a browser.

Prerequisites
An Azure account with an active subscription. Create an account for free.
An Azure Storage account. Create a storage account.
A local web server. This article uses Node.js to open a basic server.
Visual Studio Code.
A VS Code extension for browser debugging, such as Debugger for Chrome or Debugger for Microsoft Edge.

Setting up storage account CORS rules


Before your web application can access a blob storage from the client, you must configure your account to
enable cross-origin resource sharing, or CORS.
Return to the Azure portal and select your storage account. To define a new CORS rule, navigate to the Settings
section and click on the CORS link. Next, click the Add button to open the Add CORS rule window. For this
quickstart, you create an open CORS rule:
The following table describes each CORS setting and explains the values used to define the rule.

| Setting | Value | Description |
| --- | --- | --- |
| Allowed origins | * | Accepts a comma-delimited list of domains set as acceptable origins. Setting the value to * allows all domains access to the storage account. |
| Allowed methods | delete, get, head, merge, post, options, and put | Lists the HTTP verbs allowed to execute against the storage account. For the purposes of this quickstart, select all available options. |
| Allowed headers | * | Defines a list of request headers (including prefixed headers) allowed by the storage account. Setting the value to * allows all headers access. |
| Exposed headers | * | Lists the allowed response headers for the account. Setting the value to * allows the account to send any header. |
| Max age (seconds) | 86400 | The maximum amount of time the browser caches the preflight OPTIONS request. A value of 86400 allows the cache to remain for a full day. |

IMPORTANT
Ensure any settings you use in production expose the minimum amount of access necessary to your storage account to
maintain secure access. The CORS settings described here define a lenient security policy that is appropriate for a
quickstart. These settings are not recommended for a real-world context.

Next, you use the Azure cloud shell to create a security token.

Use Azure Cloud Shell


Azure hosts Azure Cloud Shell, an interactive shell environment that you can use through your browser. You can
use either Bash or PowerShell with Cloud Shell to work with Azure services. You can use the Cloud Shell
preinstalled commands to run the code in this article without having to install anything on your local
environment.
To start Azure Cloud Shell, use one of the following options:
Select Try It in the upper-right corner of a code block. Selecting Try It doesn't automatically copy the code to Cloud Shell.
Go to https://shell.azure.com, or select the Launch Cloud Shell button, to open Cloud Shell in your browser.
Select the Cloud Shell button on the menu bar at the upper right in the Azure portal.

To run the code in this article in Azure Cloud Shell:


1. Start Cloud Shell.
2. Select the Copy button on a code block to copy the code.
3. Paste the code into the Cloud Shell session by selecting Ctrl+Shift+V on Windows and Linux, or by
selecting Cmd+Shift+V on macOS.
4. Select Enter to run the code.

Create a shared access signature


The shared access signature (SAS) is used by the code running in the browser to authorize requests to Blob
storage. By using the SAS, the client can authorize access to storage resources without the account access key or
connection string. For more information on SAS, see Using shared access signatures (SAS).
You can create a SAS using the Azure CLI through the Azure cloud shell, or with the Azure portal or Azure
Storage Explorer. The following table describes the parameters you need to provide values for to generate a SAS
with the CLI.

| Parameter | Description | Placeholder |
| --- | --- | --- |
| expiry | The expiration date of the access token in YYYY-MM-DD format. Enter tomorrow's date for use with this quickstart. | FUTURE_DATE |
| account-name | The storage account name. Use the name set aside in an earlier step. | YOUR_STORAGE_ACCOUNT_NAME |
| account-key | The storage account key. Use the key set aside in an earlier step. | YOUR_STORAGE_ACCOUNT_KEY |

Use the following CLI command, with actual values for each placeholder, to generate a SAS that you can use in
your JavaScript code.

az storage account generate-sas \
    --permissions racwdl \
    --resource-types sco \
    --services b \
    --expiry FUTURE_DATE \
    --account-name YOUR_STORAGE_ACCOUNT_NAME \
    --account-key YOUR_STORAGE_ACCOUNT_KEY

You may find the series of letters after each parameter a bit cryptic. Each letter is the first letter of the
corresponding permission, resource type, or service. The following table explains where the values come from:

| Parameter | Value | Description |
| --- | --- | --- |
| permissions | racwdl | This SAS allows read, append, create, write, delete, and list capabilities. |
| resource-types | sco | The resources affected by the SAS are service, container, and object. |
| services | b | The service affected by the SAS is the blob service. |

Now that the SAS is generated, copy the return value and save it somewhere for use in an upcoming step. If you
generated your SAS using a method other than the Azure CLI, you will need to remove the initial ? if it is
present. This character is a URL separator that is already provided in the URL template later in this topic where
the SAS is used.
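If you prefer to handle this in code rather than by editing the string, a one-line helper like the following (not part of the quickstart sample, just an illustrative sketch) strips the leading ? when present:

// Remove a leading "?" so the token can follow the "?" already present in the container URL template.
const normalizeSas = sas => sas.startsWith("?") ? sas.slice(1) : sas;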

IMPORTANT
In production, always pass SAS tokens using TLS. Also, SAS tokens should be generated on the server and sent to the
HTML page, which then passes them back to Azure Blob Storage. One approach you may consider is to use a serverless
function to generate SAS tokens. The Azure portal includes function templates that can generate a SAS with a
JavaScript function.
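As a rough illustration of that approach, the following sketch shows how a server-side function might build a container-scoped SAS. It uses the newer @azure/storage-blob (v12) package rather than the v10 browser bundle used in this quickstart, and the account name, key, and container name are placeholders you would load from secure configuration.

const {
    StorageSharedKeyCredential,
    generateBlobSASQueryParameters,
    ContainerSASPermissions
} = require("@azure/storage-blob");

// Returns a SAS token string scoped to one container, valid for one hour.
function createContainerSas(accountName, accountKey, containerName) {
    const credential = new StorageSharedKeyCredential(accountName, accountKey);
    const sasValues = {
        containerName,
        permissions: ContainerSASPermissions.parse("racwdl"),
        expiresOn: new Date(Date.now() + 60 * 60 * 1000)
    };
    // Sign the values with the account key on the server; only the resulting
    // token string is sent to the browser (over TLS).
    return generateBlobSASQueryParameters(sasValues, credential).toString();
}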

Implement the HTML page


In this section, you'll create a basic web page and configure VS Code to launch and debug the page. Before you
can launch, however, you'll need to use Node.js to start a local web server and serve the page when your
browser requests it. Next, you'll add JavaScript code to call various blob storage APIs and display the results in
the page. You can also see the results of these calls in the Azure portal, Azure Storage Explorer, and the Azure
Storage extension for VS Code.
Set up the web application
First, create a new folder named azure-blobs-javascript and open it in VS Code. Then create a new file in VS
Code, add the following HTML, and save it as index.html in the azure-blobs-javascript folder.

<!DOCTYPE html>
<html>

<body>
<button id="create-container-button">Create container</button>
<button id="delete-container-button">Delete container</button>
<button id="select-button">Select and upload files</button>
<input type="file" id="file-input" multiple style="display: none;" />
<button id="list-button">List files</button>
<button id="delete-button">Delete selected files</button>
<p><b>Status:</b></p>
<p id="status" style="height:160px; width: 593px; overflow: scroll;" />
<p><b>Files:</b></p>
<select id="file-list" multiple style="height:222px; width: 593px; overflow: scroll;" />
</body>

<!-- You'll add code here later in this quickstart. -->

</html>

Configure the debugger


To set up the debugger extension in VS Code, select Debug > Add Configuration..., then select Chrome or
Edge, depending on which extension you installed in the Prerequisites section earlier. This action creates a
launch.json file and opens it in the editor.
Next, modify the launch.json file so that the url value includes /index.html as shown:

{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "type": "chrome",
            "request": "launch",
            "name": "Launch Chrome against localhost",
            "url": "http://localhost:8080/index.html",
            "webRoot": "${workspaceFolder}"
        }
    ]
}

This configuration tells VS Code which browser to launch and which URL to load.
Launch the web server
To launch the local Node.js web server, select View > Terminal to open a console window inside VS Code, then
enter the following command.

npx http-server

This command will install the http-server package and launch the server, making the current folder available
through default URLs including the one indicated in the previous step.
Start debugging
To launch index.html in the browser with the VS Code debugger attached, select Debug > Start Debugging or
press F5 in VS Code.
The UI displayed doesn't do anything yet, but you'll add JavaScript code in the following section to implement
each function shown. You can then set breakpoints and interact with the debugger when it's paused on your
code.
When you make changes to index.html, be sure to reload the page to see the changes in the browser. In VS
Code, you can also select Debug > Restart Debugging or press CTRL+SHIFT+F5.
Add the blob storage client library
To enable calls to the blob storage API, first Download the Azure Storage SDK for JavaScript - Blob client library,
extract the contents of the zip, and place the azure-storage-blob.js file in the azure-blobs-javascript folder.
Next, paste the following HTML into index.html after the </body> closing tag, replacing the placeholder
comment.

<script src="azure-storage-blob.js" charset="utf-8"></script>

<script>
// You'll add code here in the following sections.
</script>

This code adds a reference to the script file and provides a place for your own JavaScript code. For the purposes
of this quickstart, we're using the azure-storage-blob.js script file so that you can open it in VS Code, read its
contents, and set breakpoints. In production, you should use the more compact azure-storage-blob.min.js file
that is also provided in the zip file.
You can find out more about each blob storage function in the reference documentation. Note that some of the
functions in the SDK are only available in Node.js or only available in the browser.
The code in azure-storage-blob.js exports a global variable called azblob , which you'll use in your JavaScript
code to access the blob storage APIs.
Add the initial JavaScript code
Next, paste the following code into the <script> element shown in the previous code block, replacing the
placeholder comment.

const createContainerButton = document.getElementById("create-container-button");
const deleteContainerButton = document.getElementById("delete-container-button");
const selectButton = document.getElementById("select-button");
const fileInput = document.getElementById("file-input");
const listButton = document.getElementById("list-button");
const deleteButton = document.getElementById("delete-button");
const status = document.getElementById("status");
const fileList = document.getElementById("file-list");

const reportStatus = message => {
    status.innerHTML += `${message}<br/>`;
    status.scrollTop = status.scrollHeight;
}

This code creates fields for each HTML element that the following code will use, and implements a reportStatus
function to display output.
In the following sections, add each new block of JavaScript code after the previous block.
Add your storage account info
Next, add code to access your storage account, replacing the placeholders with your account name and the SAS
you generated in a previous step.

const accountName = "<Add your storage account name>";
const sasString = "<Add the SAS you generated earlier>";
const containerName = "testcontainer";
const containerURL = new azblob.ContainerURL(
    `https://${accountName}.blob.core.windows.net/${containerName}?${sasString}`,
    azblob.StorageURL.newPipeline(new azblob.AnonymousCredential));

This code uses your account info and SAS to create a ContainerURL instance, which is useful for creating and
manipulating a storage container.
Create and delete a storage container
Next, add code to create and delete the storage container when you press the corresponding button.

const createContainer = async () => {
    try {
        reportStatus(`Creating container "${containerName}"...`);
        await containerURL.create(azblob.Aborter.none);
        reportStatus(`Done.`);
    } catch (error) {
        reportStatus(error.body.message);
    }
};

const deleteContainer = async () => {
    try {
        reportStatus(`Deleting container "${containerName}"...`);
        await containerURL.delete(azblob.Aborter.none);
        reportStatus(`Done.`);
    } catch (error) {
        reportStatus(error.body.message);
    }
};

createContainerButton.addEventListener("click", createContainer);
deleteContainerButton.addEventListener("click", deleteContainer);

This code calls the ContainerURL create and delete functions, passing azblob.Aborter.none so the operations
never time out. To keep things simple for this quickstart, this code assumes that your storage account has been
created and is enabled. In production code, use an Aborter instance to add timeout functionality, as in the
sketch that follows.
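For example, a version of createContainer that gives up after 30 seconds might look like the following sketch (the 30-second value is an arbitrary choice for illustration):

const createContainerWithTimeout = async () => {
    try {
        reportStatus(`Creating container "${containerName}"...`);
        // Aborter.timeout cancels the request if it hasn't completed within
        // the specified number of milliseconds.
        await containerURL.create(azblob.Aborter.timeout(30 * 1000));
        reportStatus(`Done.`);
    } catch (error) {
        reportStatus(error.body ? error.body.message : error.message);
    }
};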
List blobs
Next, add code to list the contents of the storage container when you press the List files button.
const listFiles = async () => {
    fileList.size = 0;
    fileList.innerHTML = "";
    try {
        reportStatus("Retrieving file list...");
        let marker = undefined;
        do {
            const listBlobsResponse = await containerURL.listBlobFlatSegment(
                azblob.Aborter.none, marker);
            marker = listBlobsResponse.nextMarker;
            const items = listBlobsResponse.segment.blobItems;
            for (const blob of items) {
                fileList.size += 1;
                fileList.innerHTML += `<option>${blob.name}</option>`;
            }
        } while (marker);
        if (fileList.size > 0) {
            reportStatus("Done.");
        } else {
            reportStatus("The container does not contain any files.");
        }
    } catch (error) {
        reportStatus(error.body.message);
    }
};

listButton.addEventListener("click", listFiles);

This code calls the ContainerURL.listBlobFlatSegment function in a loop to ensure that all segments are
retrieved. For each segment, it loops over the list of blob items it contains and updates the Files list.
Upload blobs
Next, add code to upload files to the storage container when you press the Select and upload files button.

const uploadFiles = async () => {
    try {
        reportStatus("Uploading files...");
        const promises = [];
        for (const file of fileInput.files) {
            const blockBlobURL = azblob.BlockBlobURL.fromContainerURL(containerURL, file.name);
            promises.push(azblob.uploadBrowserDataToBlockBlob(
                azblob.Aborter.none, file, blockBlobURL));
        }
        await Promise.all(promises);
        reportStatus("Done.");
        listFiles();
    } catch (error) {
        reportStatus(error.body.message);
    }
}

selectButton.addEventListener("click", () => fileInput.click());
fileInput.addEventListener("change", uploadFiles);

This code connects the Select and upload files button to the hidden file-input element. In this way, the
button click event triggers the file input click event and displays the file picker. After you select files and
close the dialog box, the input's change event occurs and the uploadFiles function is called. This function calls
the browser-only uploadBrowserDataToBlockBlob function for each file you selected. Each call returns a Promise,
which is added to a list so that they can all be awaited together, causing the files to upload in parallel.
Delete blobs
Next, add code to delete files from the storage container when you press the Delete selected files button.
const deleteFiles = async () => {
    try {
        if (fileList.selectedOptions.length > 0) {
            reportStatus("Deleting files...");
            for (const option of fileList.selectedOptions) {
                const blobURL = azblob.BlobURL.fromContainerURL(containerURL, option.text);
                await blobURL.delete(azblob.Aborter.none);
            }
            reportStatus("Done.");
            listFiles();
        } else {
            reportStatus("No files selected.");
        }
    } catch (error) {
        reportStatus(error.body.message);
    }
};

deleteButton.addEventListener("click", deleteFiles);

This code calls the BlobURL.delete function to remove each file selected in the list. It then calls the listFiles
function shown earlier to refresh the contents of the Files list.
Run and test the web application
At this point, you can launch the page and experiment to get a feel for how blob storage works. If any errors
occur (for example, when you try to list files before you've created the container), the Status pane will display
the error message received. You can also set breakpoints in the JavaScript code to examine the values returned
by the storage APIs.

Clean up resources
To clean up the resources created during this quickstart, go to the Azure portal and delete the resource group
you created in the Prerequisites section.

Next steps
In this quickstart, you've created a simple website that accesses blob storage from browser-based JavaScript. To
learn how you can host a website itself on blob storage, continue to the following tutorial:
Host a static website on Blob Storage
Quickstart: Azure Blob Storage client library v12 for
C++

Get started with the Azure Blob Storage client library v12 for C++. Azure Blob Storage is Microsoft's object
storage solution for the cloud. Follow steps to install the package and try out example code for basic tasks. Blob
Storage is optimized for storing massive amounts of unstructured data.
Use the Azure Blob Storage client library v12 for C++ to:
Create a container
Upload a blob to Azure Storage
List all of the blobs in a container
Download the blob to your local computer
Delete a container
Resources:
API reference documentation
Library source code
Samples

Prerequisites
Azure subscription
Azure storage account
C++ compiler
CMake
Vcpkg - C and C++ package manager

Setting up
This section walks you through preparing a project to work with the Azure Blob Storage client library v12 for
C++.
Install the packages
The vcpkg install command will install the Azure Storage Blobs SDK for C++ and necessary dependencies:

vcpkg.exe install azure-storage-blobs-cpp:x64-windows

For more information, visit GitHub to acquire and build the Azure SDK for C++.
Create the project
In Visual Studio, create a new C++ console application for Windows called BlobQuickstartV12.
Copy your credentials from the Azure portal
When the sample application makes a request to Azure Storage, it must be authorized. To authorize a request,
add your storage account credentials to the application as a connection string. To view your storage account
credentials, follow these steps:
1. Sign in to the Azure portal.
2. Locate your storage account.
3. In the storage account menu pane, under Security + networking , select Access keys . Here, you can
view the account access keys and the complete connection string for each key.

4. In the Access keys pane, select Show keys .


5. In the key1 section, locate the Connection string value. Select the Copy to clipboard icon to copy the
connection string. You will add the connection string value to an environment variable in the next section.
Configure your storage connection string
After you copy the connection string, write it to a new environment variable on the local machine running the
application. To set the environment variable, open a console window, and follow the instructions for your
operating system. Replace <yourconnectionstring> with your actual connection string.
Windows

setx AZURE_STORAGE_CONNECTION_STRING "<yourconnectionstring>"

After you add the environment variable in Windows, you must start a new instance of the command window.
Linux

export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"

macOS

export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"

Restart programs
After you add the environment variable, restart any running programs that will need to read the environment
variable. For example, restart your development environment or editor before you continue.

Object model
Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data
that doesn't adhere to a particular data model or definition, such as text or binary data. Blob Storage offers three
types of resources:
The storage account
A container in the storage account
A blob in the container
The following diagram shows the relationship between these resources.

Use these C++ classes to interact with these resources:


BlobServiceClient: The BlobServiceClient class allows you to manipulate Azure Storage resources and blob
containers.
BlobContainerClient: The BlobContainerClient class allows you to manipulate Azure Storage containers and
their blobs.
BlobClient: The BlobClient class allows you to manipulate Azure Storage blobs. It's the base class for all
specialized blob classes.
BlockBlobClient: The BlockBlobClient class allows you to manipulate Azure Storage block blobs.

Code examples
These example code snippets show you how to do the following tasks with the Azure Blob Storage client library
for C++:
Add include files
Get the connection string
Create a container
Upload blobs to a container
List the blobs in a container
Download blobs
Delete a container
Add include files
From the project directory:
1. Open the BlobQuickstartV12.sln solution file in Visual Studio
2. Inside Visual Studio, open the BlobQuickstartV12.cpp source file
3. Remove any code inside main that was autogenerated
4. Add #include statements

#include <stdlib.h>
#include <iostream>
#include <azure/storage/blobs.hpp>

Get the connection string


The code below retrieves the connection string for your storage account from the environment variable created
in Configure your storage connection string.
Add this code inside main() :

// Retrieve the connection string for use with the application. The storage
// connection string is stored in an environment variable on the machine
// running the application called AZURE_STORAGE_CONNECTION_STRING.
// Note that _MSC_VER is set when using MSVC compiler.
static const char* AZURE_STORAGE_CONNECTION_STRING = "AZURE_STORAGE_CONNECTION_STRING";
#if !defined(_MSC_VER)
const char* connectionString = std::getenv(AZURE_STORAGE_CONNECTION_STRING);
#else
// Use getenv_s for MSVC
size_t requiredSize;
getenv_s(&requiredSize, NULL, NULL, AZURE_STORAGE_CONNECTION_STRING);
if (requiredSize == 0) {
throw std::runtime_error("missing connection string from env.");
}
std::vector<char> value(requiredSize);
getenv_s(&requiredSize, value.data(), value.size(), AZURE_STORAGE_CONNECTION_STRING);
std::string connectionStringStr = std::string(value.begin(), value.end());
const char* connectionString = connectionStringStr.c_str();
#endif

Create a container
Create an instance of the BlobContainerClient class by calling the CreateFromConnectionString function. Then
call CreateIfNotExists to create the actual container in your storage account.

IMPORTANT
Container names must be lowercase. For more information about naming containers and blobs, see Naming and
Referencing Containers, Blobs, and Metadata.
Add this code to the end of main() :

using namespace Azure::Storage::Blobs;

std::string containerName = "myblobcontainer";

// Initialize a new instance of BlobContainerClient
BlobContainerClient containerClient
    = BlobContainerClient::CreateFromConnectionString(connectionString, containerName);

// Create the container. This will do nothing if the container already exists.
std::cout << "Creating container: " << containerName << std::endl;
containerClient.CreateIfNotExists();

Upload blobs to a container


The following code snippet:
1. Declares a string containing "Hello Azure!".
2. Gets a reference to a BlockBlobClient object by calling GetBlockBlobClient on the container from the Create a
container section.
3. Uploads the string to the blob by calling the UploadFrom function. This function creates the blob if it doesn't
already exist, or updates it if it does.
Add this code to the end of main() :

std::string blobName = "blob.txt";
uint8_t blobContent[] = "Hello Azure!";

// Create the block blob client
BlockBlobClient blobClient = containerClient.GetBlockBlobClient(blobName);

// Upload the blob
std::cout << "Uploading blob: " << blobName << std::endl;
blobClient.UploadFrom(blobContent, sizeof(blobContent));

List the blobs in a container


List the blobs in the container by calling the ListBlobs function. Only one blob has been added to the container,
so the operation returns just that blob.
Add this code to the end of main() :

std::cout << "Listing blobs..." << std::endl;


auto listBlobsResponse = containerClient.ListBlobs();
for (auto blobItem : listBlobsResponse.Blobs)
{
std::cout << "Blob name: " << blobItem.Name << std::endl;
}

Download blobs
Get the properties of the uploaded blob. Then, declare and resize a new std::vector<uint8_t> object by using
the properties of the uploaded blob. Download the previously created blob into the new std::vector<uint8_t>
object by calling the DownloadTo function in the BlobClient base class. Finally, display the downloaded blob data.
Add this code to the end of main() :
auto properties = blobClient.GetProperties().Value;
std::vector<uint8_t> downloadedBlob(properties.BlobSize);

blobClient.DownloadTo(downloadedBlob.data(), downloadedBlob.size());
std::cout << "Downloaded blob contents: " << std::string(downloadedBlob.begin(), downloadedBlob.end()) << std::endl;

Delete a Blob
The following code deletes the blob from the Azure Blob Storage container by calling the BlobClient.Delete
function.

std::cout << "Deleting blob: " << blobName << std::endl;


blobClient.Delete();

Delete a container
The following code cleans up the resources the app created by deleting the entire container by using
BlobContainerClient.Delete.
Add this code to the end of main() :

std::cout << "Deleting container: " << containerName << std::endl;


containerClient.Delete();

Run the code


This app creates a container and uploads a text file to Azure Blob Storage. The example then lists the blobs in the
container, downloads the file, and displays the file contents. Finally, the app deletes the blob and the container.
The output of the app is similar to the following example:

Azure Blob Storage v12 - C++ quickstart sample


Creating container: myblobcontainer
Uploading blob: blob.txt
Listing blobs...
Blob name: blob.txt
Downloaded blob contents: Hello Azure!
Deleting blob: blob.txt
Deleting container: myblobcontainer

Next steps
In this quickstart, you learned how to upload, download, and list blobs using C++. You also learned how to
create and delete an Azure Blob Storage container.
To see a C++ Blob Storage sample, continue to:
Azure Blob Storage SDK v12 for C++ sample
Quickstart: Upload, download, and list blobs using
Go

In this quickstart, you learn how to use the Go programming language to upload, download, and list block blobs
in a container in Azure Blob storage.

Prerequisites
To access Azure Storage, you'll need an Azure subscription. If you don't already have a subscription, create a free
account before you begin.
All access to Azure Storage takes place through a storage account. For this quickstart, create a storage account
using the Azure portal, Azure PowerShell, or Azure CLI. For help creating a storage account, see Create a storage
account.
Make sure you have the following additional prerequisites installed:
Go 1.8 or above
Azure Storage Blob SDK for Go, using the following command:
go get -u github.com/Azure/azure-storage-blob-go/azblob

NOTE
Make sure that you capitalize Azure in the URL to avoid case-related import problems when working with the
SDK. Also capitalize Azure in your import statements.

Download the sample application


The sample application used in this quickstart is a basic Go application.
Use git to download a copy of the application to your development environment.

git clone https://github.com/Azure-Samples/storage-blobs-go-quickstart

This command clones the repository to your local git folder. To open the Go sample for Blob storage, look for
storage-quickstart.go file.

Copy your credentials from the Azure portal


The sample application needs to authorize access to your storage account. Provide your storage account
credentials to the application in the form of a connection string. To view your storage account credentials:
1. In the Azure portal, go to your storage account.
2. In the Settings section of the storage account overview, select Access keys to display your account
access keys and connection string.
3. Note the name of your storage account, which you'll need for authorization.
4. Find the Key value under key1, and select Copy to copy the account key.

Configure your storage connection string


This solution requires your storage account name and key to be securely stored in environment variables local
to the machine running the sample. Follow one of the examples below depending on your operating System to
create the environment variables.
Linux
Windows

export AZURE_STORAGE_ACCOUNT="<youraccountname>"
export AZURE_STORAGE_ACCESS_KEY="<youraccountkey>"

Run the sample


This sample creates a test file in the current folder, uploads the test file to Blob storage, lists the blobs in the
container, and downloads the file into a buffer.
To run the sample, issue the following command:
go run storage-quickstart.go

The following output is an example of the output returned when running the application:

Azure Blob storage quick start sample


Creating a container named quickstart-5568059279520899415
Creating a dummy file to test the upload and download
Uploading the file with blob name: 630910657703031215
Blob name: 630910657703031215
Downloaded the blob: hello world
this is a blob
Press the enter key to delete the sample files, example container, and exit the application.

When you press the key to continue, the sample program deletes the storage container and the files.

TIP
You can also use a tool such as the Azure Storage Explorer to view the files in Blob storage. Azure Storage Explorer is a free
cross-platform tool that allows you to access your storage account information.

Understand the sample code


Next, we walk through the sample code so that you can understand how it works.
Create ContainerURL and BlobURL objects
First, create the references to the ContainerURL and BlobURL objects used to access and manage Blob storage.
These objects offer low-level APIs such as Create, Upload, and Download to issue REST APIs.
Use SharedKeyCredential struct to store your credentials.
Create a Pipeline using the credentials and options. The pipeline specifies things like retry policies,
logging, deserialization of HTTP response payloads, and more.
Instantiate a new ContainerURL , and a new BlobURL object to run operations on container (Create)
and blobs (Upload and Download).
Once you have the ContainerURL, you can instantiate the BlobURL object that points to a blob, and perform
operations such as upload, download, and copy.

IMPORTANT
Container names must be lowercase. See Naming and Referencing Containers, Blobs, and Metadata for more information
about container and blob names.

In this section, you create a new container. The container is named quickstart-[random string].

// From the Azure portal, get your storage account name and key and set environment variables.
accountName, accountKey := os.Getenv("AZURE_STORAGE_ACCOUNT"), os.Getenv("AZURE_STORAGE_ACCESS_KEY")
if len(accountName) == 0 || len(accountKey) == 0 {
    log.Fatal("Either the AZURE_STORAGE_ACCOUNT or AZURE_STORAGE_ACCESS_KEY environment variable is not set")
}

// Create a default request pipeline using your storage account name and account key.
credential, err := azblob.NewSharedKeyCredential(accountName, accountKey)
if err != nil {
    log.Fatal("Invalid credentials with error: " + err.Error())
}
p := azblob.NewPipeline(credential, azblob.PipelineOptions{})

// Create a random string for the quick start container
containerName := fmt.Sprintf("quickstart-%s", randomString())

// From the Azure portal, get your storage account blob service URL endpoint.
URL, _ := url.Parse(
    fmt.Sprintf("https://%s.blob.core.windows.net/%s", accountName, containerName))

// Create a ContainerURL object that wraps the container URL and a request
// pipeline to make requests.
containerURL := azblob.NewContainerURL(*URL, p)

// Create the container
fmt.Printf("Creating a container named %s\n", containerName)
ctx := context.Background() // This example uses a never-expiring context
_, err = containerURL.Create(ctx, azblob.Metadata{}, azblob.PublicAccessNone)
handleErrors(err)

Upload blobs to the container


Blob storage supports block blobs, append blobs, and page blobs. Block blobs are the most commonly used, and
that is what is used in this quickstart.
To upload a file to a blob, open the file using os.Open . You can then upload the file to the specified path using
one of the REST APIs: Upload (PutBlob), StageBlock/CommitBlockList (PutBlock/PutBlockList).
Alternatively, the SDK offers high-level APIs that are built on top of the low-level REST APIs. As an example,
UploadFileToBlockBlob function uses StageBlock (PutBlock) operations to concurrently upload a file in chunks
to optimize the throughput. If the file is less than 256 MB, it uses Upload (PutBlob) instead to complete the
transfer in a single transaction.
The following example uploads the file to your container named quickstart-[random string].

// Create a file to test the upload and download.
fmt.Printf("Creating a dummy file to test the upload and download\n")
data := []byte("hello world this is a blob\n")
fileName := randomString()
err = ioutil.WriteFile(fileName, data, 0700)
handleErrors(err)

// Here's how to upload a blob.
blobURL := containerURL.NewBlockBlobURL(fileName)
file, err := os.Open(fileName)
handleErrors(err)

// You can use the low-level Upload (PutBlob) API to upload files. Low-level APIs are simple wrappers for the Azure Storage REST APIs.
// Note that Upload can upload up to 256MB data in one shot. Details: https://docs.microsoft.com/rest/api/storageservices/put-blob
// To upload more than 256MB, use StageBlock (PutBlock) and CommitBlockList (PutBlockList) functions.
// The following is commented out intentionally because we will instead use the UploadFileToBlockBlob API to upload the blob.
// _, err = blobURL.Upload(ctx, file, azblob.BlobHTTPHeaders{ContentType: "text/plain"}, azblob.Metadata{}, azblob.BlobAccessConditions{})
// handleErrors(err)

// The high-level API UploadFileToBlockBlob function uploads blocks in parallel for optimal performance, and can handle large files as well.
// This function calls StageBlock/CommitBlockList for files larger than 256 MB, and calls Upload for any file smaller.
fmt.Printf("Uploading the file with blob name: %s\n", fileName)
_, err = azblob.UploadFileToBlockBlob(ctx, file, blobURL, azblob.UploadToBlockBlobOptions{
    BlockSize:   4 * 1024 * 1024,
    Parallelism: 16})
handleErrors(err)

List the blobs in a container


Get a list of files in the container using the ListBlobsFlatSegment method on a ContainerURL. It returns a single
segment of blobs (up to 5,000) starting from the specified Marker. Use an empty Marker to start enumeration
from the beginning. Blob names are returned in lexicographic order. After getting a segment, process it, and then
call ListBlobsFlatSegment again, passing the previously returned Marker.

// List the container that we have created above
fmt.Println("Listing the blobs in the container:")
for marker := (azblob.Marker{}); marker.NotDone(); {
    // Get a result segment starting with the blob indicated by the current Marker.
    listBlob, err := containerURL.ListBlobsFlatSegment(ctx, marker, azblob.ListBlobsSegmentOptions{})
    handleErrors(err)

    // ListBlobs returns the start of the next segment; you MUST use this to get
    // the next segment (after processing the current result segment).
    marker = listBlob.NextMarker

    // Process the blobs returned in this result segment (if the segment is empty, the loop body won't execute)
    for _, blobInfo := range listBlob.Segment.BlobItems {
        fmt.Print("    Blob name: " + blobInfo.Name + "\n")
    }
}

Download the blob


Download blobs using the Download low-level function on a BlobURL. This returns a DownloadResponse
struct. Run the function Body on the struct to get a RetryReader stream for reading data. If a connection fails
while reading, the stream makes additional requests to re-establish the connection and continue reading.
Specifying a RetryReaderOptions value with MaxRetryRequests set to 0 (the default) returns the original
response body, and no retries are performed. Alternatively, use the high-level APIs DownloadBlobToBuffer or
DownloadBlobToFile to simplify your code.
The following code downloads the blob using the Download function. The contents of the blob is written into a
buffer and shown on the console.

// Here's how to download the blob
downloadResponse, err := blobURL.Download(ctx, 0, azblob.CountToEnd, azblob.BlobAccessConditions{}, false)

// NOTE: automatic retries are performed if the connection fails
bodyStream := downloadResponse.Body(azblob.RetryReaderOptions{MaxRetryRequests: 20})

// Read the body into a buffer
downloadedData := bytes.Buffer{}
_, err = downloadedData.ReadFrom(bodyStream)
handleErrors(err)

Clean up resources
If you no longer need the blobs uploaded in this quickstart, you can delete the entire container using the Delete
method.

// Cleaning up the quick start by deleting the container and the file created locally
fmt.Printf("Press enter key to delete the sample files, example container, and exit the application.\n")
bufio.NewReader(os.Stdin).ReadBytes('\n')
fmt.Printf("Cleaning up.\n")
containerURL.Delete(ctx, azblob.ContainerAccessConditions{})
file.Close()
os.Remove(fileName)

Resources for developing Go applications with blobs


See these additional resources for Go development with Blob storage:
View and install the Go client library source code for Azure Storage on GitHub.
Explore Blob storage samples written using the Go client library.

Next steps
In this quickstart, you learned how to transfer files between a local disk and Azure blob storage using Go. For
more information about the Azure Storage Blob SDK, view the Source Code and API Reference.
Transfer objects to/from Azure Blob storage using
PHP

In this quickstart, you learn how to use PHP to upload, download, and list block blobs in a container in Azure
Blob storage.

Prerequisites
To access Azure Storage, you'll need an Azure subscription. If you don't already have a subscription, create a free
account before you begin.
All access to Azure Storage takes place through a storage account. For this quickstart, create a storage account
using the Azure portal, Azure PowerShell, or Azure CLI. For help creating a storage account, see Create a storage
account.
Make sure you have the following additional prerequisites installed:
PHP
Azure Storage SDK for PHP

Download the sample application


The sample application used in this quickstart is a basic PHP application.
Use git to download a copy of the application to your development environment.

git clone https://github.com/Azure-Samples/storage-blobs-php-quickstart.git

This command clones the repository to your local git folder. To open the PHP sample application, look for the
storage-blobs-php-quickstart folder, and open the phpqs.php file.

Copy your credentials from the Azure portal


The sample application needs to authorize access to your storage account. Provide your storage account
credentials to the application in the form of a connection string. To view your storage account credentials:
1. In the Azure portal, go to your storage account.
2. In the Settings section of the storage account overview, select Access keys to display your account
access keys and connection string.
3. Note the name of your storage account, which you'll need for authorization.
4. Find the Key value under key1, and select Copy to copy the account key.
Configure your storage connection string
In the application, you must provide your storage account name and account key to create the BlobRestProxy
instance for your application. Store these identifiers in environment variables on the local machine running the
application. Use one of the following examples, depending on your operating system, to create the environment
variables. Replace the youraccountname and youraccountkey values with your account name and key.
Linux
Windows

export ACCOUNT_NAME=<youraccountname>
export ACCOUNT_KEY=<youraccountkey>

Configure your environment


Take the folder from your local git folder and place it in a directory served by your PHP server. Then, open a
command prompt scoped to that same directory and enter: php composer.phar install

Run the sample


This sample creates a test file in the '.' folder. The sample program uploads the test file to Blob storage, lists the
blobs in the container, and downloads the file with a new name.
Run the sample. The following output is an example of the output returned when running the application:

Uploading BlockBlob: HelloWorld.txt


These are the blobs present in the container: HelloWorld.txt:
https://myexamplesacct.blob.core.windows.net/blockblobsleqvxd/HelloWorld.txt

This is the content of the blob uploaded: Hello Azure!

When you press the button displayed, the sample program deletes the storage container and the files. Before
you continue, check your server's folder for the two files. You can open them and see they are identical.
You can also use a tool such as the Azure Storage Explorer to view the files in Blob storage. Azure Storage
Explorer is a free cross-platform tool that allows you to access your storage account information.
After you've verified the files, hit any key to finish the demo and delete the test files. Now that you know what
the sample does, open the phpqs.php file to look at the code.

Understand the sample code


Next, we walk through the sample code so that you can understand how it works.
Get references to the storage objects
The first thing to do is create the references to the objects used to access and manage Blob storage. These
objects build on each other, and each is used by the next one in the list.
Create an instance of the Azure storage BlobRestProxy object to set up connection credentials.
Create the BlobSer vice object that points to the Blob service in your storage account.
Create the Container object, which represents the container you are accessing. Containers are used to
organize your blobs like you use folders on your computer to organize your files.
Once you have the blobClient container object, you can create the Block blob object that points to the specific
blob in which you are interested. Then you can perform operations such as upload, download, and copy.

IMPORTANT
Container names must be lowercase. See Naming and Referencing Containers, Blobs, and Metadata for more information
about container and blob names.

In this section, you set up an instance of the Azure storage client, instantiate the blob service object, create a new
container, and set permissions on the container so the blobs are public. The container is named blockblobs with a
random string appended.

# Setup a specific instance of an Azure::Storage::Client
$connectionString = "DefaultEndpointsProtocol=https;AccountName=".getenv('account_name').";AccountKey=".getenv('account_key');

// Create blob client.
$blobClient = BlobRestProxy::createBlobService($connectionString);

# Create the BlobService that represents the Blob service for the storage account
$createContainerOptions = new CreateContainerOptions();

$createContainerOptions->setPublicAccess(PublicAccessType::CONTAINER_AND_BLOBS);

// Set container metadata.
$createContainerOptions->addMetaData("key1", "value1");
$createContainerOptions->addMetaData("key2", "value2");

$containerName = "blockblobs".generateRandomString();

try {
    // Create container.
    $blobClient->createContainer($containerName, $createContainerOptions);

Upload blobs to the container


Blob storage supports block blobs, append blobs, and page blobs. Block blobs are the most commonly used, and
that is what is used in this quickstart.
To upload a file to a blob, get the full path of the file by joining the directory name and the file name on your
local drive. You can then upload the file to the specified path using the createBlockBlob() method.
The sample code takes a local file and uploads it to Azure. The file is stored as myfile and the name of the blob
as fileToUpload in the code. The following example uploads the file to the container you created earlier.
$myfile = fopen("HelloWorld.txt", "w") or die("Unable to open file!");
fclose($myfile);

# Upload file as a block blob
echo "Uploading BlockBlob: ".PHP_EOL;
echo $fileToUpload;
echo "<br />";

$content = fopen($fileToUpload, "r");

// Upload blob
$blobClient->createBlockBlob($containerName, $fileToUpload, $content);

To perform a partial update of the content of a block blob, use the createblocklist() method. Block blobs can be
as large as 4.7 TB, and can be anything from Excel spreadsheets to large video files. Page blobs are primarily
used for the VHD files used to back IaaS VMs. Append blobs are used for logging, such as when you want to
write to a file and then keep adding more information. Append blobs should be used in a single-writer model.
Most objects stored in Blob storage are block blobs.
List the blobs in a container
You can get a list of files in the container using the listBlobs() method. The following code retrieves the list of
blobs, then loops through them, showing the names of the blobs found in a container.

$listBlobsOptions = new ListBlobsOptions();
$listBlobsOptions->setPrefix("HelloWorld");

echo "These are the blobs present in the container: ";

do{
    $result = $blobClient->listBlobs($containerName, $listBlobsOptions);
    foreach ($result->getBlobs() as $blob)
    {
        echo $blob->getName().": ".$blob->getUrl()."<br />";
    }

    $listBlobsOptions->setContinuationToken($result->getContinuationToken());
} while($result->getContinuationToken());

Get the content of your blobs


Get the contents of your blobs using the getBlob() method. The following code displays the contents of the
blob uploaded in a previous section.

$blob = $blobClient->getBlob($containerName, $fileToUpload);
fpassthru($blob->getContentStream());

Clean up resources
If you no longer need the blobs uploaded in this quickstart, you can delete the entire container using the
deleteContainer() method. If the files created are no longer needed, you use the deleteBlob() method to
delete the files.
// Delete blob.
echo "Deleting Blob".PHP_EOL;
echo $fileToUpload;
echo "<br />";
$blobClient->deleteBlob($_GET["containerName"], $fileToUpload);

// Delete container.
echo "Deleting Container".PHP_EOL;
echo $_GET["containerName"].PHP_EOL;
echo "<br />";
$blobClient->deleteContainer($_GET["containerName"]);

// Delete local file.
echo "Deleting file".PHP_EOL;
echo "<br />";
unlink($fileToUpload);

Resources for developing PHP applications with blobs


See these additional resources for PHP development with Blob storage:
View, download, and install the PHP client library source code for Azure Storage on GitHub.
Explore Blob storage samples written using the PHP client library.

Next steps
In this quickstart, you learned how to transfer files between a local disk and Azure blob storage using PHP. To
learn more about working with PHP, continue to our PHP Developer center.
PHP Developer Center
For more information about the Storage Explorer and Blobs, see Manage Azure Blob storage resources with
Storage Explorer.
Quickstart: Azure Blob Storage client library for
Ruby

Learn how to use Ruby to create, download, and list blobs in a container in Microsoft Azure Blob Storage.

Prerequisites
To access Azure Storage, you'll need an Azure subscription. If you don't already have a subscription, create a free
account before you begin.
All access to Azure Storage takes place through a storage account. For this quickstart, create a storage account
using the Azure portal, Azure PowerShell, or Azure CLI. For help creating a storage account, see Create a storage
account.
Make sure you have the following additional prerequisites installed:
Ruby
Azure Storage library for Ruby, using the RubyGem package:

gem install azure-storage-blob

Download the sample application


The sample application used in this quickstart is a basic Ruby application.
Use Git to download a copy of the application to your development environment. This command clones the
repository to your local machine:

git clone https://github.com/Azure-Samples/storage-blobs-ruby-quickstart.git

Navigate to the storage-blobs-ruby-quickstart folder, and open the example.rb file in your code editor.

Copy your credentials from the Azure portal


The sample application needs to authorize access to your storage account. Provide your storage account
credentials to the application in the form of a connection string. To view your storage account credentials:
1. In the Azure portal, go to your storage account.
2. In the Settings section of the storage account overview, select Access keys to display your account
access keys and connection string.
3. Note the name of your storage account, which you'll need for authorization.
4. Find the Key value under key1, and select Copy to copy the account key.
Configure your storage connection string
Provide your storage account name and account key to create a BlobService instance for your application.
The following code in the example.rb file instantiates a new BlobService object. Replace the accountname and
accountkey values with your account name and key.

# Create a BlobService object
account_name = "accountname"
account_key = "accountkey"

blob_client = Azure::Storage::Blob::BlobService.create(
    storage_account_name: account_name,
    storage_access_key: account_key
)

Run the sample


The sample creates a container in Blob Storage, creates a new blob in the container, lists the blobs in the
container, and downloads the blob to a local file.
Run the sample. Here is an example of the output from running the application:

C:\azure-samples\storage-blobs-ruby-quickstart> ruby example.rb

Creating a container: quickstartblobs18cd9ec0-f4ac-4688-a979-75c31a70503e

Creating blob: QuickStart_6f8f29a8-879a-41fb-9db2-0b8595180728.txt

List blobs in the container following continuation token


Blob name: QuickStart_6f8f29a8-879a-41fb-9db2-0b8595180728.txt

Downloading blob to C:/Users/azureuser/Documents/QuickStart_6f8f29a8-879a-41fb-9db2-0b8595180728.txt

Paused, press the Enter key to delete resources created by the sample and exit the application

When you press Enter to continue, the sample program deletes the storage container and the local file. Before
you continue, check your Documents folder for the downloaded file.
You can also use Azure Storage Explorer to view the files in your storage account. Azure Storage Explorer is a
free cross-platform tool that allows you to access your storage account information.
After you've verified the files, press the Enter key to delete the test files and end the demo. Open the example.rb
file to look at the code.

Understand the sample code


Next, we walk through the sample code so you can understand how it works.
Get references to the storage objects
The first thing to do is create instances of the objects used to access and manage Blob Storage. These objects
build on each other. Each is used by the next one in the list.
Create an instance of the Azure storage BlobService object to set up connection credentials.
Create the Container object, which represents the container you're accessing. Containers are used to organize
your blobs like you use folders on your computer to organize your files.
Once you have the container object, you can create a Block blob object that points to a specific blob in which
you're interested. Use the Block object to create, download, and copy blobs.

IMPORTANT
Container names must be lowercase. For more information about container and blob names, see Naming and Referencing
Containers, Blobs, and Metadata.

The following example code:


Creates a new container
Sets permissions on the container so the blobs are public. The container is called quickstartblobs with a
unique ID appended.

# Create a container
container_name = "quickstartblobs" + SecureRandom.uuid
puts "\nCreating a container: " + container_name
container = blob_client.create_container(container_name)

# Set the permission so the blobs are public
blob_client.set_container_acl(container_name, "container")

Create a blob in the container


Blob Storage supports block blobs, append blobs, and page blobs. To create a blob, call the create_block_blob
method passing in the data for the blob.
The following example creates a blob called QuickStart_ with a unique ID and a .txt file extension in the container
created earlier.

# Create a new block blob containing 'Hello, World!'


blob_name = "QuickStart_" + SecureRandom.uuid + ".txt"
blob_data = "Hello, World!"
puts "\nCreating blob: " + blob_name
blob_client.create_block_blob(container.name, blob_name, blob_data)

Block blobs can be as large as 4.7 TB, and can be anything from spreadsheets to large video files. Page blobs are
primarily used for the VHD files that back IaaS virtual machines. Append blobs are commonly used for logging,
such as when you want to write to a file and then keep adding more information.
List the blobs in a container
Get a list of files in the container using the list_blobs method. The following code retrieves the list of blobs, then
displays their names.
# List the blobs in the container
puts "\nList blobs in the container following continuation token"
nextMarker = nil
loop do
  blobs = blob_client.list_blobs(container_name, { marker: nextMarker })
  blobs.each do |blob|
    puts "\tBlob name: #{blob.name}"
  end
  nextMarker = blobs.continuation_token
  break unless nextMarker && !nextMarker.empty?
end

Download a blob
Download a blob to your local disk using the get_blob method. The following code downloads the blob created
in a previous section.

# Download the blob

# Set the path to the local folder for downloading
if(is_windows)
  local_path = File.expand_path("~/Documents")
else
  local_path = File.expand_path("~/")
end

# Create the full path to the downloaded file
full_path_to_file = File.join(local_path, blob_name)

puts "\nDownloading blob to " + full_path_to_file

# Get the blob content and write it to the local file
blob, content = blob_client.get_blob(container_name, blob_name)
File.open(full_path_to_file, "wb") { |f| f.write(content) }

Clean up resources
If a blob is no longer needed, use delete_blob to remove it. Delete an entire container using the delete_container
method. Deleting a container also deletes any blobs stored in the container.

# Clean up resources, including the container and the downloaded file


blob_client.delete_container(container_name)
File.delete(full_path_to_file)

Resources for developing Ruby applications with blobs


See these additional resources for Ruby development:
View and download the Ruby client library source code for Azure Storage on GitHub.
Explore Azure samples written using the Ruby client library.
Sample: Getting Started with Azure Storage in Ruby

Next steps
In this quickstart, you learned how to transfer files between Azure Blob Storage and a local disk by using Ruby.
To learn more about working with Blob Storage, continue to the Storage account overview.
Storage account overview
For more information about the Storage Explorer and Blobs, see Manage Azure Blob Storage resources with
Storage Explorer.
Quickstart: Azure Blob Storage client library v12
with Xamarin
11/25/2021 • 6 minutes to read

Get started with the Azure Blob Storage client library v12 with Xamarin. Azure Blob Storage is Microsoft's object
storage solution for the cloud. Follow steps to install the package and try out example code for basic tasks. Blob
storage is optimized for storing massive amounts of unstructured data.
Use the Azure Blob Storage client library v12 with Xamarin to:
Create a container
Upload a blob to Azure Storage
List all of the blobs in a container
Download the blob to your device
Delete a container
Reference links:
API reference documentation
Library source code
Package (NuGet)
Sample

Prerequisites
Azure subscription - create one for free
Azure storage account - create a storage account
Visual Studio with Mobile Development for .NET workload installed or Visual Studio for Mac

Setting up
This section walks you through preparing a project to work with the Azure Blob Storage client library v12 with
Xamarin.
Create the project
1. Open Visual Studio and create a Blank Forms App.
2. Name it: BlobQuickstartV12
Install the package
1. Right-click your solution in the Solution Explorer pane and select Manage NuGet Packages for Solution .
2. Search for Azure.Storage.Blobs and install the latest stable version into all projects in your solution.
Set up the app framework
From the BlobQuickstartV12 directory:
1. Open up the MainPage.xaml file in your editor
2. Remove everything between the <ContentPage></ContentPage> elements and replace with the below:
<StackLayout HorizontalOptions="Center" VerticalOptions="Center">
    <Button x:Name="uploadButton" Text="Upload Blob" Clicked="Upload_Clicked" IsEnabled="False"/>
    <Button x:Name="listButton" Text="List Blobs" Clicked="List_Clicked" IsEnabled="False" />
    <Button x:Name="downloadButton" Text="Download Blob" Clicked="Download_Clicked" IsEnabled="False" />
    <Button x:Name="deleteButton" Text="Delete Container" Clicked="Delete_Clicked" IsEnabled="False" />
    <Label Text="" x:Name="resultsLabel" HorizontalTextAlignment="Center" Margin="0,20,0,0" TextColor="Red" />
</StackLayout>

Copy your credentials from the Azure portal


When the sample application makes a request to Azure Storage, it must be authorized. To authorize a request,
add your storage account credentials to the application as a connection string. View your storage account
credentials by following these steps:
1. Sign in to the Azure portal.
2. Locate your storage account.
3. In the Settings section of the storage account overview, select Access keys . Here, you can view your
account access keys and the complete connection string for each key.
4. Find the Connection string value under key1 , and select the Copy button to copy the connection
string. You will add the connection string value to an environment variable in the next step.

Configure your storage connection string


After you have copied your connection string, set it to a class level variable in your MainPage.xaml.cs file. Open
up MainPage.xaml.cs and find the storageConnectionString variable. Replace <yourconnectionstring> with your
actual connection string.
Here's the code:

string storageConnectionString = "<yourconnectionstring>";

Object model
Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data
that does not adhere to a particular data model or definition, such as text or binary data. Blob storage offers
three types of resources:
The storage account
A container in the storage account
A blob in the container
The following diagram shows the relationship between these resources.
Use the following .NET classes to interact with these resources:
BlobServiceClient: The BlobServiceClient class allows you to manipulate Azure Storage resources and blob
containers.
BlobContainerClient: The BlobContainerClient class allows you to manipulate Azure Storage containers and
their blobs.
BlobClient: The BlobClient class allows you to manipulate Azure Storage blobs.
BlobDownloadInfo: The BlobDownloadInfo class represents the properties and content returned from
downloading a blob.
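For orientation, here is a minimal sketch (not part of the quickstart sample) showing how these classes chain together. It assumes the Azure.Storage.Blobs using directives, a valid connection string, and an async context; the container and blob names are illustrative.

// Minimal sketch: the service client hands out container clients, which hand out blob clients.
BlobServiceClient service = new BlobServiceClient(storageConnectionString);
BlobContainerClient container = service.GetBlobContainerClient("sample-container");
BlobClient blob = container.GetBlobClient("sample-blob.txt");

// BlobDownloadInfo carries the content stream and properties returned from a download.
BlobDownloadInfo download = await blob.DownloadAsync();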

Code examples
These example code snippets show you how to perform the following tasks with the Azure Blob Storage client
library for .NET in a Xamarin.Forms app:
Create class level variables
Create a container
Upload blobs to a container
List the blobs in a container
Download blobs
Delete a container
Create class level variables
The code below declares several class level variables. They are needed to communicate with Azure Blob Storage
throughout the rest of this sample.
These are in addition to the connection string for the storage account set in the Configure your storage
connection string section.
Add this code as class level variables inside the MainPage.xaml.cs file:

string storageConnectionString = "{set in the Configure your storage connection string section}";
string fileName = $"{Guid.NewGuid()}-temp.txt";

BlobServiceClient client;
BlobContainerClient containerClient;
BlobClient blobClient;

Create a container
Decide on a name for the new container. The code below appends a GUID value to the container name to ensure
that it is unique.

IMPORTANT
Container names must be lowercase. For more information about naming containers and blobs, see Naming and
Referencing Containers, Blobs, and Metadata.

Create an instance of the BlobServiceClient class. Then, call the CreateBlobContainerAsync method to create the
container in your storage account.
Add this code to MainPage.xaml.cs file:

protected async override void OnAppearing()
{
    string containerName = $"quickstartblobs{Guid.NewGuid()}";

    client = new BlobServiceClient(storageConnectionString);

    containerClient = await client.CreateBlobContainerAsync(containerName);

    resultsLabel.Text = "Container Created\n";

    blobClient = containerClient.GetBlobClient(fileName);

    uploadButton.IsEnabled = true;
}

Upload blobs to a container


The following code snippet:
1. Creates a MemoryStream of text.
2. Uploads the text to a blob by calling the UploadBlobAsync method of the BlobContainerClient class, passing in
both the file name and the MemoryStream of text. This method creates the blob if it doesn't already exist, and
throws an exception if it does.
Add this code to the MainPage.xaml.cs file:

async void Upload_Clicked(object sender, EventArgs e)
{
    using MemoryStream memoryStream = new MemoryStream(Encoding.UTF8.GetBytes("Hello World!"));

    await containerClient.UploadBlobAsync(fileName, memoryStream);

    resultsLabel.Text += "Blob Uploaded\n";

    uploadButton.IsEnabled = false;
    listButton.IsEnabled = true;
}

List the blobs in a container


List the blobs in the container by calling the GetBlobsAsync method. In this case, only one blob has been added
to the container, so the listing operation returns just that one blob.
Add this code to the MainPage.xaml.cs file:

async void List_Clicked(object sender, EventArgs e)
{
    await foreach (BlobItem blobItem in containerClient.GetBlobsAsync())
    {
        resultsLabel.Text += blobItem.Name + "\n";
    }

    listButton.IsEnabled = false;
    downloadButton.IsEnabled = true;
}

Download blobs
Download the previously created blob by calling the DownloadAsync method. The example code copies the
Stream representation of the blob first into a MemoryStream and then into a StreamReader so the text can be
displayed.
Add this code to the MainPage.xaml.cs file:

async void Download_Clicked(object sender, EventArgs e)
{
    BlobDownloadInfo downloadInfo = await blobClient.DownloadAsync();

    using MemoryStream memoryStream = new MemoryStream();

    await downloadInfo.Content.CopyToAsync(memoryStream);
    memoryStream.Position = 0;

    using StreamReader streamReader = new StreamReader(memoryStream);

    resultsLabel.Text += "Blob Contents: \n";
    resultsLabel.Text += await streamReader.ReadToEndAsync();
    resultsLabel.Text += "\n";

    downloadButton.IsEnabled = false;
    deleteButton.IsEnabled = true;
}

Delete a container
The following code cleans up the resources the app created by deleting the entire container by using
DeleteAsync.
The app first prompts to confirm before it deletes the blob and container. This is a good chance to verify that the
resources were created correctly, before they are deleted.
Add this code to the MainPage.xaml.cs file:

async void Delete_Clicked(object sender, EventArgs e)
{
    var deleteContainer = await Application.Current.MainPage.DisplayAlert("Delete Container",
        "You are about to delete the container. Proceed?", "OK", "Cancel");

    if (deleteContainer == false)
        return;

    await containerClient.DeleteAsync();

    resultsLabel.Text += "Container Deleted";

    deleteButton.IsEnabled = false;
}

Run the code


When the app starts, the container is created as the main page appears. Then select the buttons in order to
upload, list, and download the blobs, and to delete the container.
To run the app on Windows, press F5. To run the app on Mac, press Cmd+Enter.
The app writes to the screen after every operation. The output of the app is similar to the example below:
Container Created
Blob Uploaded
98d9a472-8e98-4978-ba4f-081d69d2e6f8-temp.txt
Blob Contents:
Hello World!
Container Deleted

Before you begin the clean-up process, verify that the blob contents displayed on screen match the value that
was uploaded.
After you've verified the values, confirm the prompt to delete the container and finish the demo.

Next steps
In this quickstart, you learned how to upload, download, and list blobs using Azure Blob Storage client library
v12 with Xamarin.
To see Blob storage sample apps, continue to:
Azure Blob Storage SDK v12 Xamarin sample
For tutorials, samples, quick starts and other documentation, visit Azure for mobile developers.
To learn more about Xamarin, see Getting started with Xamarin.
Tutorial: Upload image data in the cloud with Azure
Storage
11/25/2021 • 11 minutes to read

This tutorial is part one of a series. In this tutorial, you'll learn how to deploy a web app. The web app uses the
Azure Blob Storage client library to upload images to a storage account. When you're finished, you'll have a web
app that stores and displays images from Azure storage.
.NET v12 SDK
JavaScript v12 SDK

In part one of the series, you learn how to:

Create a storage account


Create a container and set permissions
Retrieve an access key
Deploy a web app to Azure
Configure app settings
Interact with the web app
Prerequisites
To complete this tutorial, you need an Azure subscription. Create a free account before you begin.

Use Azure Cloud Shell


Azure hosts Azure Cloud Shell, an interactive shell environment that you can use through your browser. You can
use either Bash or PowerShell with Cloud Shell to work with Azure services. You can use the Cloud Shell
preinstalled commands to run the code in this article without having to install anything on your local
environment.
To start Azure Cloud Shell:

Select Try It in the upper-right corner of a code block. Selecting Try It doesn't automatically copy the code to Cloud Shell.
Go to https://shell.azure.com, or select the Launch Cloud Shell button to open Cloud Shell in your browser.
Select the Cloud Shell button on the menu bar at the upper right in the Azure portal.

To run the code in this article in Azure Cloud Shell:


1. Start Cloud Shell.
2. Select the Copy button on a code block to copy the code.
3. Paste the code into the Cloud Shell session by selecting Ctrl+Shift+V on Windows and Linux, or by
selecting Cmd+Shift+V on macOS.
4. Select Enter to run the code.
To install and use the CLI locally, run Azure CLI version 2.0.4 or later. Run az --version to find the version. If you
need to install or upgrade, see Install the Azure CLI.

Create a resource group


The following example creates a resource group named myResourceGroup .

PowerShell
Azure CLI

Create a resource group with the New-AzResourceGroup command. An Azure resource group is a logical
container into which Azure resources are deployed and managed.

New-AzResourceGroup -Name myResourceGroup -Location southeastasia

Create a storage account


The sample uploads images to a blob container in an Azure storage account. A storage account provides a
unique namespace to store and access your Azure storage data objects.
IMPORTANT
In part 2 of the tutorial, you use Azure Event Grid with Blob storage. Make sure to create your storage account in an
Azure region that supports Event Grid. For a list of supported regions, see Azure products by region.

In the following command, replace your own globally unique name for the Blob storage account where you see
the <blob_storage_account> placeholder.

PowerShell
Azure CLI

Create a storage account in the resource group you created by using the New-AzStorageAccount command.

$blobStorageAccount="<blob_storage_account>"

New-AzStorageAccount -ResourceGroupName myResourceGroup -Name $blobStorageAccount `
    -SkuName Standard_LRS -Location southeastasia -Kind StorageV2 -AccessTier Hot

Create Blob storage containers


The app uses two containers in the Blob storage account. Containers are similar to folders and store blobs. The
images container is where the app uploads full-resolution images. In a later part of the series, an Azure function
app uploads resized image thumbnails to the thumbnails container.
The images container's public access is set to off. The thumbnails container's public access is set to container.
The container public access setting permits users who visit the web page to view the thumbnails.

PowerShell
Azure CLI

Get the storage account key by using the Get-AzStorageAccountKey command. Then, use this key to create two
containers with the New-AzStorageContainer command.

$blobStorageAccountKey = (Get-AzStorageAccountKey -ResourceGroupName myResourceGroup `
    -Name $blobStorageAccount).Key1
$blobStorageContext = New-AzStorageContext -StorageAccountName $blobStorageAccount `
    -StorageAccountKey $blobStorageAccountKey

New-AzStorageContainer -Name images -Context $blobStorageContext
New-AzStorageContainer -Name thumbnails -Permission Container -Context $blobStorageContext

Make a note of your Blob storage account name and key. The sample app uses these settings to connect to the
storage account to upload the images.

Create an App Service plan


An App Service plan specifies the location, size, and features of the web server farm that hosts your app.
The following example creates an App Service plan named myAppServicePlan in the Free pricing tier:

PowerShell
Azure CLI

Create an App Service plan with the New-AzAppServicePlan command.


New-AzAppServicePlan -ResourceGroupName myResourceGroup -Name myAppServicePlan -Tier "Free"

Create a web app


The web app provides a hosting space for the sample app code that's deployed from the GitHub sample
repository.
In the following command, replace <web_app> with a unique name. Valid characters are a-z, 0-9, and -. If
<web_app> isn't unique, you get the error message: Website with given name <web_app> already exists. The
default URL of the web app is https://<web_app>.azurewebsites.net.

PowerShell
Azure CLI

Create a web app in the myAppServicePlan App Service plan with the New-AzWebApp command.

$webapp="<web_app>"

New-AzWebApp -ResourceGroupName myResourceGroup -Name $webapp -AppServicePlan myAppServicePlan

Deploy the sample app from the GitHub repository


.NET v12 SDK
JavaScript v12 SDK

App Service supports several ways to deploy content to a web app. In this tutorial, you deploy the web app from
a public GitHub sample repository. Configure GitHub deployment to the web app with the az webapp
deployment source config command.
The sample project contains an ASP.NET MVC app. The app accepts an image, saves it to a storage account, and
displays images from a thumbnail container. The web app uses the Azure.Storage, Azure.Storage.Blobs, and
Azure.Storage.Blobs.Models namespaces to interact with the Azure Storage service.

az webapp deployment source config --name $webapp --resource-group myResourceGroup \
    --branch master --manual-integration \
    --repo-url https://github.com/Azure-Samples/storage-blob-upload-from-webapp

az webapp deployment source config --name $webapp --resource-group myResourceGroup `
    --branch master --manual-integration `
    --repo-url https://github.com/Azure-Samples/storage-blob-upload-from-webapp

Configure web app settings


.NET v12 SDK
JavaScript v12 SDK

The sample web app uses the Azure Storage APIs for .NET to upload images. Storage account credentials are set
in the app settings for the web app. Add app settings to the deployed app with the az webapp config appsettings
set or New-AzStaticWebAppSetting command.
az webapp config appsettings set --name $webapp --resource-group myResourceGroup \
    --settings AzureStorageConfig__AccountName=$blobStorageAccount \
    AzureStorageConfig__ImageContainer=images \
    AzureStorageConfig__ThumbnailContainer=thumbnails \
    AzureStorageConfig__AccountKey=$blobStorageAccountKey

az webapp config appsettings set --name $webapp --resource-group myResourceGroup `
    --settings AzureStorageConfig__AccountName=$blobStorageAccount `
    AzureStorageConfig__ImageContainer=images `
    AzureStorageConfig__ThumbnailContainer=thumbnails `
    AzureStorageConfig__AccountKey=$blobStorageAccountKey

After you deploy and configure the web app, you can test the image upload functionality in the app.
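Inside the app, those AzureStorageConfig__* settings surface through ASP.NET Core configuration: a double underscore maps to a nested configuration key, so the values bind to a section named AzureStorageConfig. The following sketch is illustrative; the property names mirror the AzureStorageConfig type used later in this tutorial, but the exact class in the sample repository may differ.

// Illustrative options class bound from the AzureStorageConfig configuration section.
public class AzureStorageConfig
{
    public string AccountName { get; set; }
    public string AccountKey { get; set; }
    public string ImageContainer { get; set; }
    public string ThumbnailContainer { get; set; }
}

// In Startup.ConfigureServices (sketch):
// services.Configure<AzureStorageConfig>(Configuration.GetSection("AzureStorageConfig"));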

Upload an image
To test the web app, browse to the URL of your published app. The default URL of the web app is
https://<web_app>.azurewebsites.net.

.NET v12 SDK


JavaScript v12 SDK

Select the Upload photos region to specify and upload a file, or drag a file onto the region. The image
disappears if successfully uploaded. The Generated Thumbnails section will remain empty until we test it later
in this tutorial.

In the sample code, the UploadFileToStorage task in the Storagehelper.cs file is used to upload the images to the
images container within the storage account using the UploadAsync method. The following code sample
contains the UploadFileToStorage task.
public static async Task<bool> UploadFileToStorage(Stream fileStream, string fileName,
                                                   AzureStorageConfig _storageConfig)
{
    // Create a URI to the blob
    Uri blobUri = new Uri("https://" +
                          _storageConfig.AccountName +
                          ".blob.core.windows.net/" +
                          _storageConfig.ImageContainer +
                          "/" + fileName);

    // Create StorageSharedKeyCredentials object by reading
    // the values from the configuration (appsettings.json)
    StorageSharedKeyCredential storageCredentials =
        new StorageSharedKeyCredential(_storageConfig.AccountName, _storageConfig.AccountKey);

    // Create the blob client.
    BlobClient blobClient = new BlobClient(blobUri, storageCredentials);

    // Upload the file
    await blobClient.UploadAsync(fileStream);

    return await Task.FromResult(true);
}

The following classes and methods are used in the preceding task:

Class                      | Method
Uri                        | Uri constructor
StorageSharedKeyCredential | StorageSharedKeyCredential(String, String) constructor
BlobClient                 | UploadAsync

Verify the image is shown in the storage account


Sign in to the Azure portal. From the left menu, select Storage accounts , then select the name of your storage
account. Select Containers , then select the images container.
Verify the image is shown in the container.

Test thumbnail viewing


To test thumbnail viewing, you'll upload an image to the thumbnails container to check whether the app can
read the thumbnails container.
Sign in to the Azure portal. From the left menu, select Storage accounts , then select the name of your storage
account. Select Containers , then select the thumbnails container. Select Upload to open the Upload blob
pane.
Choose a file with the file picker and select Upload .
Navigate back to your app to verify that the image uploaded to the thumbnails container is visible.

.NET v12 SDK


JavaScript v12 SDK

In part two of the series, you automate thumbnail image creation so you don't need this image. In the
thumbnails container, select the image you uploaded, and select Delete to remove the image.
You can enable Content Delivery Network (CDN) to cache content from your Azure storage account. For more
information, see Integrate an Azure storage account with Azure CDN.

Next steps
In part one of the series, you learned how to configure a web app to interact with storage.
Go on to part two of the series to learn about using Event Grid to trigger an Azure function to resize an image.
Use Event Grid to trigger an Azure Function to resize an uploaded image
Tutorial: Automate resizing uploaded images using
Event Grid
11/25/2021 • 8 minutes to read

Azure Event Grid is an eventing service for the cloud. Event Grid enables you to create subscriptions to events
raised by Azure services or third-party resources.
This tutorial is part two of a series of Storage tutorials. It extends the previous Storage tutorial to add serverless
automatic thumbnail generation using Azure Event Grid and Azure Functions. Event Grid enables Azure
Functions to respond to Azure Blob storage events and generate thumbnails of uploaded images. An event
subscription is created against the Blob storage create event. When a blob is added to a specific Blob storage
container, a function endpoint is called. Data passed to the function binding from Event Grid is used to access the
blob and generate the thumbnail image.
You use the Azure CLI and the Azure portal to add the resizing functionality to an existing image upload app.

.NET v12 SDK


Node.js v10 SDK
In this tutorial, you learn how to:
Create an Azure Storage account
Deploy serverless code using Azure Functions
Create a Blob storage event subscription in Event Grid

Prerequisites
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

To complete this tutorial:


You need an Azure subscription. This tutorial doesn't work with the free subscription.
You must have completed the previous Blob storage tutorial: Upload image data in the cloud with Azure
Storage.
Create an Azure Storage account
Azure Functions requires a general storage account. In addition to the Blob storage account you created in the
previous tutorial, create a separate general storage account in the resource group. Storage account names must
be between 3 and 24 characters in length and may contain numbers and lowercase letters only.
Set variables to hold the name of the resource group that you created in the previous tutorial, the location for
resources to be created, and the name of the new storage account that Azure Functions requires. Then, create
the storage account for the Azure function.

PowerShell
Azure CLI

Use the New-AzStorageAccount command.


1. Specify a name for the resource group.

$resourceGroupName="myResourceGroup"

2. Specify the location for the storage account.

$location="eastus"

3. Specify the name of the storage account to be used by the function.

$functionstorage="<name of the storage account to be used by the function>"

4. Create a storage account.

New-AzStorageAccount -ResourceGroupName $resourceGroupName -AccountName $functionstorage `
    -Location $location -SkuName Standard_LRS -Kind StorageV2

Create a function app


You must have a function app to host the execution of your function. The function app provides an environment
for serverless execution of your function code.
In the following command, provide your own unique function app name. The function app name is used as the
default DNS domain for the function app, and so the name needs to be unique across all apps in Azure.
Specify a name for the function app that's to be created, then create the Azure function.
PowerShell
Azure CLI

Create a function app by using the New-AzFunctionApp command.


1. Specify a name for the function app.

$functionapp="<name of the function app>"

2. Create a function app.


New-AzFunctionApp -Location $location -Name $functionapp -ResourceGroupName $resourceGroupName `
    -Runtime PowerShell -StorageAccountName $functionstorage

Now configure the function app to connect to the Blob storage account you created in the previous tutorial.

Configure the function app


The function needs credentials for the Blob storage account, which are added to the application settings of the
function app using either the az functionapp config appsettings set or Update-AzFunctionAppSetting command.
.NET v12 SDK
Node.js v10 SDK

storageConnectionString=$(az storage account show-connection-string --resource-group $resourceGroupName \
    --name $blobStorageAccount --query connectionString --output tsv)

az functionapp config appsettings set --name $functionapp --resource-group $resourceGroupName --settings \
    "AzureWebJobsStorage=$storageConnectionString THUMBNAIL_CONTAINER_NAME=thumbnails THUMBNAIL_WIDTH=100 FUNCTIONS_EXTENSION_VERSION=~2 FUNCTIONS_WORKER_RUNTIME=dotnet"

$storageConnectionString=$(az storage account show-connection-string --resource-group $resourceGroupName `
    --name $blobStorageAccount --query connectionString --output tsv)

Update-AzFunctionAppSetting -Name $functionapp -ResourceGroupName $resourceGroupName -AppSetting `
    @{AzureWebJobsStorage=$storageConnectionString; THUMBNAIL_CONTAINER_NAME='thumbnails'; THUMBNAIL_WIDTH=100; FUNCTIONS_EXTENSION_VERSION='~2'; 'FUNCTIONS_WORKER_RUNTIME'='dotnet'}

The FUNCTIONS_EXTENSION_VERSION=~2 setting makes the function app run on version 2.x of the Azure Functions
runtime.
You can now deploy a function code project to this function app.

Deploy the function code


.NET v12 SDK
Node.js v10 SDK

The sample C# resize function is available on GitHub. Deploy this code project to the function app by using the
az functionapp deployment source config command.

az functionapp deployment source config --name $functionapp --resource-group $resourceGroupName \
    --branch master --manual-integration \
    --repo-url https://github.com/Azure-Samples/function-image-upload-resize

The image resize function is triggered by HTTP requests sent to it from the Event Grid service. You tell Event Grid
that you want to get these notifications at your function's URL by creating an event subscription. For this tutorial
you subscribe to blob-created events.
The data passed to the function from the Event Grid notification includes the URL of the blob. That URL is in turn
passed to the input binding to obtain the uploaded image from Blob storage. The function generates a
thumbnail image and writes the resulting stream to a separate container in Blob storage.
This project uses EventGridTrigger for the trigger type. Using the Event Grid trigger is recommended over
generic HTTP triggers. Event Grid automatically validates Event Grid Function triggers. With generic HTTP
triggers, you must implement the validation response.
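As a rough illustration of that shape, the following compiled-C# sketch shows an Event Grid-triggered function whose blob input binding is bound to the URL carried in the event data. It is not the sample's actual code (the sample implements the function in run.csx), and the resize logic is omitted.

using System.IO;
using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;

public static class ThumbnailSketch
{
    [FunctionName("Thumbnail")]
    public static void Run(
        [EventGridTrigger] EventGridEvent eventGridEvent,
        [Blob("{data.url}", FileAccess.Read)] Stream input,  // 'input' is the created blob's content, ready to be resized
        ILogger log)
    {
        // The payload of a Microsoft.Storage.BlobCreated event carries the blob URL.
        var created = ((JObject)eventGridEvent.Data).ToObject<StorageBlobCreatedEventData>();
        log.LogInformation($"Blob created: {created.Url}");

        // The real function resizes the image here and writes the result to the thumbnails container.
    }
}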

.NET v12 SDK


Node.js v10 SDK

To learn more about this function, see the function.json and run.csx files.
The function project code is deployed directly from the public sample repository. To learn more about
deployment options for Azure Functions, see Continuous deployment for Azure Functions.

Create an event subscription


An event subscription indicates which provider-generated events you want sent to a specific endpoint. In this
case, the endpoint is exposed by your function. Use the following steps to create an event subscription that
sends notifications to your function in the Azure portal:
1. In the Azure portal, at the top of the page, search for and select Function App and choose the function
app that you just created. Select Functions and choose the Thumbnail function.

2. Select Integration, then choose the Event Grid Trigger and select Create Event Grid
subscription.

3. Use the event subscription settings as specified in the table.


Setting           | Suggested value           | Description
Name              | imageresizersub           | Name that identifies your new event subscription.
Topic type        | Storage accounts          | Choose the Storage account event provider.
Subscription      | Your Azure subscription   | By default, your current Azure subscription is selected.
Resource group    | myResourceGroup           | Select Use existing and choose the resource group you have been using in this tutorial.
Resource          | Your Blob storage account | Choose the Blob storage account you created.
System Topic Name | imagestoragesystopic      | Specify a name for the system topic. To learn about system topics, see System topics overview.
Event types       | Blob created              | Uncheck all types other than Blob created. Only event types of Microsoft.Storage.BlobCreated are passed to the function.
Endpoint type     | autogenerated             | Pre-defined as Azure Function.
Endpoint          | autogenerated             | Name of the function. In this case, it's Thumbnail.

4. Switch to the Filters tab, and do the following actions:
   a. Select the Enable subject filtering option.
   b. For Subject begins with, enter the following value: /blobServices/default/containers/images/.

5. Select Create to add the event subscription. This creates an event subscription that triggers the
Thumbnail function when a blob is added to the images container. The function resizes the images and
adds them to the thumbnails container.

Now that the backend services are configured, you test the image resize functionality in the sample web app.

Test the sample app


To test image resizing in the web app, browse to the URL of your published app. The default URL of the web app
is https://<web_app>.azurewebsites.net.
.NET v12 SDK
Node.js v10 SDK

Click the Upload photos region to select and upload a file. You can also drag a photo to this region.
Notice that after the uploaded image disappears, a copy of the uploaded image is displayed in the Generated
Thumbnails carousel. This image was resized by the function, added to the thumbnails container, and
downloaded by the web client.

Next steps
In this tutorial, you learned how to:
Create a general Azure Storage account
Deploy serverless code using Azure Functions
Create a Blob storage event subscription in Event Grid
Advance to part three of the Storage tutorial series to learn how to secure access to the storage account.
Secure access to an application's data in the cloud
To learn more about Event Grid, see An introduction to Azure Event Grid.
To try another tutorial that features Azure Functions, see Create a function that integrates with Azure Logic
Apps.
Secure access to application data
11/25/2021 • 4 minutes to read

This tutorial is part three of a series. You learn how to secure access to the storage account.
In part three of the series, you learn how to:
Use SAS tokens to access thumbnail images
Turn on server-side encryption
Enable HTTPS-only transport
Azure Blob storage provides a robust service to store files for applications. This tutorial extends the previous
topic to show how to secure access to your storage account from a web application. When you're finished, the
images are encrypted and the web app uses secure SAS tokens to access the thumbnail images.

Prerequisites
To complete this tutorial you must have completed the previous Storage tutorial: Automate resizing uploaded
images using Event Grid.

Set container public access


In this part of the tutorial series, SAS tokens are used for accessing the thumbnails. In this step, you set the
public access of the thumbnails container to off .

PowerShell
Azure CLI

$blobStorageAccount="<blob_storage_account>"

$blobStorageAccountKey = (Get-AzStorageAccountKey -ResourceGroupName myResourceGroup `
    -AccountName $blobStorageAccount).Key1

Set-AzStorageAccount -ResourceGroupName "myResourceGroup" -AccountName $blobStorageAccount `
    -KeyName $blobStorageAccountKey -AllowBlobPublicAccess $false

Configure SAS tokens for thumbnails


In part one of this tutorial series, the web application was showing images from a public container. In this part of
the series, you use shared access signatures (SAS) tokens to retrieve the thumbnail images. SAS tokens allow
you to provide restricted access to a container or blob based on IP, protocol, time interval, or rights allowed. For
more information about SAS, see Grant limited access to Azure Storage resources using shared access
signatures (SAS).
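As a point of reference, here is a minimal sketch of building a service SAS for a single blob with the v12 library; the container name, blob name, and credential values are illustrative, and the sasTokens branch described below does the equivalent for the whole thumbnails container.

// Define what the SAS grants: which resource, for how long, over which protocol, with which rights.
BlobSasBuilder sasBuilder = new BlobSasBuilder
{
    BlobContainerName = "thumbnails",
    BlobName = "sample.png",
    Resource = "b",                                   // "b" = blob, "c" = container
    StartsOn = DateTimeOffset.UtcNow.AddMinutes(-5),  // start slightly in the past to absorb clock skew
    ExpiresOn = DateTimeOffset.UtcNow.AddHours(1),
    Protocol = SasProtocol.Https                      // restrict the token to HTTPS requests
};
sasBuilder.SetPermissions(BlobSasPermissions.Read);   // read-only rights

// Sign the SAS with the account key to produce the query string appended to the blob URL.
StorageSharedKeyCredential credential =
    new StorageSharedKeyCredential("<account-name>", "<account-key>");
string sasToken = sasBuilder.ToSasQueryParameters(credential).ToString();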
In this example, the source code repository uses the sasTokens branch, which has an updated code sample.
Delete the existing GitHub deployment with the az webapp deployment source delete. Next, configure GitHub
deployment to the web app with the az webapp deployment source config command.
In the following commands, <web_app> is the name of your web app.

az webapp deployment source delete --name <web_app> --resource-group myResourceGroup

az webapp deployment source config --name <web_app> \
    --resource-group myResourceGroup --branch sasTokens --manual-integration \
    --repo-url https://github.com/Azure-Samples/storage-blob-upload-from-webapp

az webapp deployment source delete --name <web_app> --resource-group myResourceGroup

az webapp deployment source config --name <web_app> `
    --resource-group myResourceGroup --branch sasTokens --manual-integration `
    --repo-url https://github.com/Azure-Samples/storage-blob-upload-from-webapp

The sasTokens branch of the repository updates the StorageHelper.cs file. It replaces the GetThumbNailUrls
task with the code example below. The updated task retrieves the thumbnail URLs by using a BlobSasBuilder to
specify the start time, expiry time, and permissions for the SAS token. Once deployed the web app now retrieves
the thumbnails with a URL using a SAS token. The updated task is shown in the following example:
public static async Task<List<string>> GetThumbNailUrls(AzureStorageConfig _storageConfig)
{
    List<string> thumbnailUrls = new List<string>();

    // Create a URI to the storage account
    Uri accountUri = new Uri("https://" + _storageConfig.AccountName + ".blob.core.windows.net/");

    // Create BlobServiceClient from the account URI
    BlobServiceClient blobServiceClient = new BlobServiceClient(accountUri);

    // Get reference to the container
    BlobContainerClient container =
        blobServiceClient.GetBlobContainerClient(_storageConfig.ThumbnailContainer);

    if (container.Exists())
    {
        // Set the expiration time and permissions for the container.
        // In this case, the start time is specified as a few
        // minutes in the past, to mitigate clock skew.
        // The shared access signature will be valid immediately.
        BlobSasBuilder sas = new BlobSasBuilder
        {
            Resource = "c",
            BlobContainerName = _storageConfig.ThumbnailContainer,
            StartsOn = DateTimeOffset.UtcNow.AddMinutes(-5),
            ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
        };

        sas.SetPermissions(BlobContainerSasPermissions.All);

        // Create StorageSharedKeyCredentials object by reading
        // the values from the configuration (appsettings.json)
        StorageSharedKeyCredential storageCredential =
            new StorageSharedKeyCredential(_storageConfig.AccountName, _storageConfig.AccountKey);

        // Create a SAS URI to the storage account
        UriBuilder sasUri = new UriBuilder(accountUri);
        sasUri.Query = sas.ToSasQueryParameters(storageCredential).ToString();

        foreach (BlobItem blob in container.GetBlobs())
        {
            // Create the URI using the SAS query token.
            string sasBlobUri = container.Uri + "/" +
                                blob.Name + sasUri.Query;

            // Return the URI string for the container, including the SAS token.
            thumbnailUrls.Add(sasBlobUri);
        }
    }

    return await Task.FromResult(thumbnailUrls);
}

The following classes, properties, and methods are used in the preceding task:

Class                      | Properties | Methods
StorageSharedKeyCredential |            |
BlobServiceClient          |            | GetBlobContainerClient
BlobContainerClient        | Uri        | Exists, GetBlobs
BlobSasBuilder             |            | SetPermissions, ToSasQueryParameters
BlobItem                   | Name       |
UriBuilder                 | Query      |
List                       |            | Add

Azure Storage encryption


Azure Storage encryption helps you protect and safeguard your data by encrypting data at rest and by handling
encryption and decryption. All data is encrypted using 256-bit AES encryption, one of the strongest block
ciphers available.
You can choose to have Microsoft manage encryption keys, or you can bring your own keys with customer-
managed keys stored in Azure Key Vault or Key Vault Managed Hardware Security Model (HSM) (preview). For
more information, see Customer-managed keys for Azure Storage encryption.
Azure Storage encryption automatically encrypts data in all performance tiers (Standard and Premium), all
deployment models (Azure Resource Manager and Classic), and all of the Azure Storage services (Blob, Queue,
Table, and File).

Enable HTTPS only


In order to ensure that requests for data to and from a storage account are secure, you can limit requests to
HTTPS only. Update the storage account required protocol by using the az storage account update command.

az storage account update --resource-group myresourcegroup --name <storage-account-name> --https-only true

Test the connection using curl over the HTTP protocol.

curl http://<storage-account-name>.blob.core.windows.net/<container>/<blob-name> -I

Now that secure transfer is required, you receive the following message:

HTTP/1.1 400 The account being accessed does not support http.

Next steps
In part three of the series, you learned how to secure access to the storage account, such as how to:
Use SAS tokens to access thumbnail images
Turn on server-side encryption
Enable HTTPS-only transport
Advance to part four of the series to learn how to monitor and troubleshoot a cloud storage application.
Monitor and troubleshoot a cloud storage application
Monitor and troubleshoot a cloud storage
application
11/25/2021 • 3 minutes to read

This tutorial is part four and the final part of a series. You learn how to monitor and troubleshoot a cloud
storage application.
In part four of the series, you learn how to:
Turn on logging and metrics
Enable alerts for authorization errors
Run test traffic with incorrect SAS tokens
Download and analyze logs
Azure storage analytics provides logging and metric data for a storage account. This data provides insights into
the health of your storage account. To collect data from Azure storage analytics, you can configure logging,
metrics and alerts. This process involves turning on logging, configuring metrics, and enabling alerts.
Logging and metrics from storage accounts are enabled from the Diagnostics tab in the Azure portal. Storage
logging enables you to record details for both successful and failed requests in your storage account. These logs
enable you to see details of read, write, and delete operations against your Azure tables, queues, and blobs. They
also enable you to see the reasons for failed requests such as timeouts, throttling, and authorization errors.

Sign in to the Azure portal

Sign in to the Azure portal.

Turn on logging and metrics


From the left menu, select Resource Groups, select myResourceGroup, and then select your storage account
in the resource list.
Under Diagnostics settings (classic), set Status to On. Ensure that all of the options under Blob properties are
enabled.
When complete, select Save.
Enable alerts
Alerts provide a way to email administrators or trigger a webhook based on a metric breaching a threshold. In
this example, you enable an alert for the SASClientOtherError metric.
Navigate to the storage account in the Azure portal
Under the Monitoring section, select Alerts (classic).
Select Add metric alert (classic) and complete the Add rule form by filling in the required information. From
the Metric dropdown, select SASClientOtherError. To allow your alert to trigger upon the first error, from the
Condition dropdown, select Greater than or equal to.
Simulate an error
To simulate a valid alert, you can attempt to request a non-existent blob from your storage account. The
following command requires a storage container name. You can either use the name of an existing container or
create a new one for the purposes of this example.
Replace the placeholders with real values (make sure <INCORRECT_BLOB_NAME> is set to a value that does not exist)
and run the command.
sasToken=$(az storage blob generate-sas \
    --account-name <STORAGE_ACCOUNT_NAME> \
    --account-key <STORAGE_ACCOUNT_KEY> \
    --container-name <CONTAINER_NAME> \
    --name <INCORRECT_BLOB_NAME> \
    --permissions r \
    --expiry `date --date="next day" +%Y-%m-%d`)

curl https://<STORAGE_ACCOUNT_NAME>.blob.core.windows.net/<CONTAINER_NAME>/<INCORRECT_BLOB_NAME>?$sasToken

The following image is an example alert that is based on the simulated failure run with the preceding example.

Download and view logs


Storage logs store data in a set of blobs in a blob container named $logs in your storage account. This container
does not show up if you list all the blob containers in your account but you can see its contents if you access it
directly.
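You can also read the $logs container programmatically with any Blob Storage client. The following sketch uses the .NET v12 client library and assumes a connectionString variable and an async context; the log blob name in the comment is illustrative.

// The $logs container is hidden from container listings, but it can be opened directly by name.
BlobContainerClient logsContainer = new BlobContainerClient(connectionString, "$logs");

await foreach (BlobItem logBlob in logsContainer.GetBlobsAsync())
{
    Console.WriteLine(logBlob.Name);   // for example: blob/2021/11/25/0000/000000.log
}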
In this scenario, you use Microsoft Message Analyzer to interact with your Azure storage account.
Download Microsoft Message Analyzer
Download Microsoft Message Analyzer and install the application.
Launch the application and choose File > Open > From Other File Sources .
In the File Selector dialog, select + Add Azure Connection . Enter in your storage account name and
account key and click OK .

Once you are connected, expand the containers in the storage tree view to view the log blobs. Select the latest
log and click OK .

On the New Session dialog, click Start to view your log.


Once the log opens, you can view the storage events. As you can see from the following image, there was an
SASClientOtherError triggered on the storage account. For additional information on storage logging, visit
Storage Analytics.
Storage Explorer is another tool that can be used to interact with your storage accounts, including the $logs
container and the logs that are contained in it.

Next steps
In part four and the final part of the series, you learned how to monitor and troubleshoot your storage account,
such as how to:
Turn on logging and metrics
Enable alerts for authorization errors
Run test traffic with incorrect SAS tokens
Download and analyze logs
Follow this link to see pre-built storage samples.
Azure storage script samples
Tutorial: Migrate on-premises data to cloud storage
with AzCopy
11/25/2021 • 6 minutes to read

AzCopy is a command-line tool for copying data to or from Azure Blob storage, Azure Files, and Azure Table
storage by using simple commands. The commands are designed for optimal performance. Using AzCopy, you
can copy data between a file system and a storage account, or between storage accounts. AzCopy can also be
used to copy data from on-premises storage to a storage account.
In this tutorial, you learn how to:
Create a storage account.
Use AzCopy to upload all your data.
Modify the data for test purposes.
Create a scheduled task or cron job to identify new files to upload.
If you don't have an Azure subscription, create a free account before you begin.

Prerequisites
To complete this tutorial, download the latest version of AzCopy. See Get started with AzCopy.
If you're on Windows, you need Schtasks, because this tutorial uses it to schedule a task. Linux users use the
crontab command instead.
To create a general-purpose v2 storage account in the Azure portal, follow these steps:
1. On the Azure portal menu, select All ser vices . In the list of resources, type Storage Accounts . As you
begin typing, the list filters based on your input. Select Storage Accounts .
2. On the Storage Accounts window that appears, choose + New .
3. On the Basics blade, select the subscription in which to create the storage account.
4. Under the Resource group field, select your desired resource group, or create a new resource group. For
more information on Azure resource groups, see Azure Resource Manager overview.
5. Next, enter a name for your storage account. The name you choose must be unique across Azure. The name
also must be between 3 and 24 characters in length, and may include only numbers and lowercase letters.
6. Select a region for your storage account, or use the default region.
7. Select a performance tier. The default tier is Standard.
8. Specify how the storage account will be replicated. The default redundancy option is Geo-redundant storage
(GRS). For more information about available replication options, see Azure Storage redundancy.
9. Additional options are available on the Advanced , Networking , Data protection , and Tags blades. To use
Azure Data Lake Storage, choose the Advanced blade, and then set Hierarchical namespace to Enabled .
For more information, see Azure Data Lake Storage Gen2 Introduction
10. Select Review + Create to review your storage account settings and create the account.
11. Select Create .
The following image shows the settings on the Basics blade for a new storage account:
Create a container
The first step is to create a container, because blobs must always be uploaded into a container. Containers are
used as a method of organizing groups of blobs like you would files on your computer, in folders.
Follow these steps to create a container:
1. Select the Storage accounts button from the main page, and select the storage account that you
created.
2. Select Blobs under Services, and then select Container.
Container names must start with a letter or number. They can contain only letters, numbers, and the hyphen
character (-). For more rules about naming blobs and containers, see Naming and referencing containers, blobs,
and metadata.

Download AzCopy
Download the AzCopy V10 executable file.
Windows (zip)
Linux (tar)
macOS (zip)
Place the AzCopy file anywhere on your computer. Add the location of the file to your system path variable so
that you can refer to this executable file from any folder on your computer.

Authenticate with Azure AD


First, assign the Storage Blob Data Contributor role to your identity. See Assign an Azure role for access to blob
data.
Then, open a command prompt, type the following command, and press the ENTER key.

azcopy login

This command returns an authentication code and the URL of a website. Open the website, provide the code,
and then choose the Next button.

A sign-in window will appear. In that window, sign into your Azure account by using your Azure account
credentials. After you've successfully signed in, you can close the browser window and begin using AzCopy.

Upload contents of a folder to Blob storage


You can use AzCopy to upload all files in a folder to Blob storage on Windows or Linux. To upload all blobs in a
folder, enter the following AzCopy command:

azcopy copy "<local-folder-path>" "https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>" --recursive=true

Replace the <local-folder-path> placeholder with the path to a folder that contains files (for example,
C:\myFolder or /mnt/myFolder).
Replace the <storage-account-name> placeholder with the name of your storage account.
Replace the <container-name> placeholder with the name of the container that you created.
To upload the contents of the specified directory to Blob storage recursively, specify the --recursive option.
When you run AzCopy with this option, all subfolders and their files are uploaded as well.

Upload modified files to Blob storage


You can use AzCopy to upload files based on their last-modified time.
To try this, modify or create new files in your source directory for test purposes. Then, use the AzCopy sync
command.

azcopy sync "<local-folder-path>" "https://<storage-account-name>.blob.core.windows.net/<container-name>" --recursive=true

Replace the <local-folder-path> placeholder with the path to a folder that contains files (for example,
C:\myFolder or /mnt/myFolder).
Replace the <storage-account-name> placeholder with the name of your storage account.
Replace the <container-name> placeholder with the name of the container that you created.
To learn more about the sync command, see Synchronize files.

Create a scheduled task


You can create a scheduled task or cron job that runs an AzCopy command script. The script identifies and
uploads new on-premises data to cloud storage at a specific time interval.
Copy the AzCopy command to a text editor. Update the parameter values of the AzCopy command to the
appropriate values. Save the file as script.sh (Linux) or script.bat (Windows) for AzCopy.
These examples assume that your folder is named myFolder , your storage account name is mystorageaccount
and your container name is mycontainer .

NOTE
The Linux example appends a SAS token. You'll need to provide one in your command. The current version of AzCopy V10
doesn't support Azure AD authorization in cron jobs.

Linux
Windows

azcopy sync "/mnt/myfiles" "https://mystorageaccount.blob.core.windows.net/mycontainer?sv=2018-03-


28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-05-30T06:57:40Z&st=2019-05-
29T22:57:40Z&spr=https&sig=BXHippZxxx54hQn%2F4tBY%2BE2JHGCTRv52445rtoyqgFBUo%3D" --recursive=true

In this tutorial, Schtasks is used to create a scheduled task on Windows. The Crontab command is used to create
a cron job on Linux.
Schtasks enables an administrator to create, delete, query, change, run, and end scheduled tasks on a local or
remote computer. Cron enables Linux and Unix users to run commands or scripts at a specified date and time
by using cron expressions.

Linux
Windows

To create a cron job on Linux, enter the following command on a terminal:

crontab -e
*/5 * * * * sh /path/to/script.sh

Specifying the cron expression */5 * * * * in the command indicates that the shell script script.sh should
run every five minutes. You can schedule the script to run at a specific time daily, monthly, or yearly. To learn
more about setting the date and time for job execution, see cron expressions.
To validate that the scheduled task/cron job runs correctly, create new files in your myFolder directory. Wait five
minutes to confirm that the new files have been uploaded to your storage account. Go to your log directory to
view output logs of the scheduled task or cron job.

Next steps
To learn more about ways to move on-premises data to Azure Storage and vice versa, follow this link:
Move data to and from Azure Storage.
For more information about AzCopy, see any of these articles:
Get started with AzCopy
Transfer data with AzCopy and blob storage
Transfer data with AzCopy and file storage
Transfer data with AzCopy and Amazon S3 buckets
Configure, optimize, and troubleshoot AzCopy
Create a virtual machine and storage account for a
scalable application
11/25/2021 • 4 minutes to read

This tutorial is part one of a series. This tutorial shows you how to deploy an application that uploads and
downloads large amounts of random data to an Azure storage account. When you're finished, you have a
console application running on a virtual machine that uploads and downloads large amounts of data to a
storage account.
In part one of the series, you learn how to:
Create a storage account
Create a virtual machine
Configure a custom script extension
If you don't have an Azure subscription, create a free account before you begin.

NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

Use Azure Cloud Shell


Azure hosts Azure Cloud Shell, an interactive shell environment that you can use through your browser. You can
use either Bash or PowerShell with Cloud Shell to work with Azure services. You can use the Cloud Shell
preinstalled commands to run the code in this article without having to install anything on your local
environment.
To start Azure Cloud Shell:

Select Try It in the upper-right corner of a code block. Selecting Try It doesn't automatically copy the code to Cloud Shell.
Go to https://shell.azure.com, or select the Launch Cloud Shell button to open Cloud Shell in your browser.
Select the Cloud Shell button on the menu bar at the upper right in the Azure portal.

To run the code in this article in Azure Cloud Shell:


1. Start Cloud Shell.
2. Select the Copy button on a code block to copy the code.
3. Paste the code into the Cloud Shell session by selecting Ctrl+Shift+V on Windows and Linux, or by
selecting Cmd+Shift+V on macOS.
4. Select Enter to run the code.
If you choose to install and use PowerShell locally, this tutorial requires the Azure PowerShell module Az
version 0.7 or later. Run Get-Module -ListAvailable Az to find the version. If you need to upgrade, see Install
Azure PowerShell module. If you are running PowerShell locally, you also need to run Connect-AzAccount to
create a connection with Azure.
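If you're running PowerShell locally, the version check and sign-in described above might look like the
following minimal sketch; both cmdlets are part of the standard Az module.

# Check the installed Az module version (0.7 or later is required for this tutorial).
Get-Module -ListAvailable Az

# Sign in; a browser or device-code prompt completes authentication.
Connect-AzAccount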

Create a resource group


Create an Azure resource group with New-AzResourceGroup. A resource group is a logical container into which
Azure resources are deployed and managed.

New-AzResourceGroup -Name myResourceGroup -Location EastUS

Create a storage account


The sample uploads 50 large files to a blob container in an Azure Storage account. A storage account provides a
unique namespace to store and access your Azure storage data objects. Create a storage account in the resource
group you created by using the New-AzStorageAccount command.
In the following command, substitute your own globally unique name for the Blob storage account where you
see the <blob_storage_account> placeholder.

$storageAccount = New-AzStorageAccount -ResourceGroupName myResourceGroup `
  -Name "<blob_storage_account>" `
  -Location EastUS `
  -SkuName Standard_LRS `
  -Kind Storage

Create a virtual machine


Create a virtual machine configuration. This configuration includes the settings that are used when deploying
the virtual machine such as a virtual machine image, size, and authentication configuration. When running this
step, you are prompted for credentials. The values that you enter are configured as the user name and password
for the virtual machine.
Create the virtual machine with New-AzVM.
# Variables for common values
$resourceGroup = "myResourceGroup"
$location = "eastus"
$vmName = "myVM"

# Create user object


$cred = Get-Credential -Message "Enter a username and password for the virtual machine."

# Create a subnet configuration


$subnetConfig = New-AzVirtualNetworkSubnetConfig -Name mySubnet -AddressPrefix 192.168.1.0/24

# Create a virtual network


$vnet = New-AzVirtualNetwork -ResourceGroupName $resourceGroup -Location $location `
-Name MYvNET -AddressPrefix 192.168.0.0/16 -Subnet $subnetConfig

# Create a public IP address and specify a DNS name


$pip = New-AzPublicIpAddress -ResourceGroupName $resourceGroup -Location $location `
-Name "mypublicdns$(Get-Random)" -AllocationMethod Static -IdleTimeoutInMinutes 4

# Create a virtual network card and associate with public IP address


$nic = New-AzNetworkInterface -Name myNic -ResourceGroupName $resourceGroup -Location $location `
-SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id

# Create a virtual machine configuration


$vmConfig = New-AzVMConfig -VMName myVM -VMSize Standard_DS14_v2 | `
Set-AzVMOperatingSystem -Windows -ComputerName myVM -Credential $cred | `
Set-AzVMSourceImage -PublisherName MicrosoftWindowsServer -Offer WindowsServer `
-Skus 2016-Datacenter -Version latest | Add-AzVMNetworkInterface -Id $nic.Id

# Create a virtual machine


New-AzVM -ResourceGroupName $resourceGroup -Location $location -VM $vmConfig

Write-host "Your public IP address is $($pip.IpAddress)"

Deploy configuration
For this tutorial, there are prerequisites that must be installed on the virtual machine. The custom script
extension is used to run a PowerShell script that completes the following tasks:
Install .NET Core 2.0
Install Chocolatey
Install Git
Clone the sample repo
Restore NuGet packages
Create 50 1-GB files with random data
Run the following cmdlet to finalize configuration of the virtual machine. This step takes 5-15 minutes to
complete.

# Start a CustomScript extension to use a simple PowerShell script to install .NET Core,
# dependencies, and pre-create the files to upload.
Set-AzVMCustomScriptExtension -ResourceGroupName myResourceGroup `
  -VMName myVM `
  -Location EastUS `
  -FileUri https://raw.githubusercontent.com/azure-samples/storage-dotnet-perf-scale-app/master/setup_env.ps1 `
  -Run 'setup_env.ps1' `
  -Name DemoScriptExtension
Next steps
In part one of the series, you learned about creating a storage account, deploying a virtual machine, and
configuring the virtual machine with the required prerequisites. In particular, you learned how to:
Create a storage account
Create a virtual machine
Configure a custom script extension
Advance to part two of the series to upload large amounts of data to a storage account using exponential retry
and parallelism.
Upload large amounts of large files in parallel to a storage account
Upload large amounts of random data in parallel to
Azure storage

This tutorial is part two of a series. It shows you how to deploy an application that uploads large amounts of
random data to an Azure storage account.
In part two of the series, you learn how to:
Configure the connection string
Build the application
Run the application
Validate the number of connections
Microsoft Azure Blob Storage provides a scalable service for storing your data. To ensure your application is as
performant as possible, an understanding of how Blob storage works is recommended. Knowledge of the limits
for Azure blobs is important; to learn more about these limits, see Scalability and performance targets for Blob
storage.
Partition naming is another potentially important factor when designing a high-performance application using
blobs. For block sizes greater than or equal to 4 MiB, High-Throughput block blobs are used, and partition
naming will not impact performance. For block sizes less than 4 MiB, Azure storage uses a range-based
partitioning scheme to scale and load balance. This configuration means that files with similar naming
conventions or prefixes go to the same partition. This logic includes the name of the container that the files are
being uploaded to. In this tutorial, you use files that have GUIDs for names as well as randomly generated
content. They are then uploaded to five different containers with random names.

Prerequisites
To complete this tutorial, you must have completed the previous Storage tutorial: Create a virtual machine and
storage account for a scalable application.

Remote into your virtual machine


Use the following command on your local machine to create a remote desktop session with the virtual machine.
Replace the IP address with the publicIPAddress of your virtual machine. When prompted, enter the credentials
you used when creating the virtual machine.

mstsc /v:<publicIpAddress>

Configure the connection string


In the Azure portal, navigate to your storage account. Select Access keys under Settings in your storage
account. Copy the connection string from the primary or secondary key. Log in to the virtual machine you
created in the previous tutorial. Open a Command Prompt as an administrator and run the setx command
with the /m switch; this command saves a machine-level environment variable. The environment variable is
not available until you reload the Command Prompt. Replace <storageConnectionString> in the following
sample:
setx storageconnectionstring "<storageConnectionString>" /m

When finished, open another Command Prompt, navigate to D:\git\storage-dotnet-perf-scale-app and type
dotnet build to build the application.

Run the application


Navigate to D:\git\storage-dotnet-perf-scale-app.
Type dotnet run to run the application. The first time you run dotnet, it populates your local package cache to
improve restore speed and enable offline access. This command takes up to a minute to complete and only
happens once.

dotnet run

The application creates five randomly named containers and begins uploading the files in the staging directory
to the storage account.
The UploadFilesAsync method is shown in the following example:

.NET v12 SDK


.NET v11 SDK

private static async Task UploadFilesAsync()


{
// Create five randomly named containers to store the uploaded files.
BlobContainerClient[] containers = await GetRandomContainersAsync();

// Path to the directory to upload


string uploadPath = Directory.GetCurrentDirectory() + "\\upload";

// Start a timer to measure how long it takes to upload all the files.
Stopwatch timer = Stopwatch.StartNew();

try
{
Console.WriteLine($"Iterating in directory: {uploadPath}");
int count = 0;

Console.WriteLine($"Found {Directory.GetFiles(uploadPath).Length} file(s)");

// Specify the StorageTransferOptions


BlobUploadOptions options = new BlobUploadOptions
{
TransferOptions = new StorageTransferOptions
{
// Set the maximum number of workers that
// may be used in a parallel transfer.
MaximumConcurrency = 8,

// Set the maximum length of a transfer to 50MB.


MaximumTransferSize = 50 * 1024 * 1024
}
};

// Create a queue of tasks that will each upload one file.


var tasks = new Queue<Task<Response<BlobContentInfo>>>();

// Iterate through the files


foreach (string filePath in Directory.GetFiles(uploadPath))
{
BlobContainerClient container = containers[count % 5];
string fileName = Path.GetFileName(filePath);
Console.WriteLine($"Uploading {fileName} to container {container.Name}");
BlobClient blob = container.GetBlobClient(fileName);

// Add the upload task to the queue


tasks.Enqueue(blob.UploadAsync(filePath, options));
count++;
}

// Run all the tasks asynchronously.


await Task.WhenAll(tasks);

timer.Stop();
Console.WriteLine($"Uploaded {count} files in {timer.Elapsed.TotalSeconds} seconds");
}
catch (RequestFailedException ex)
{
Console.WriteLine($"Azure request failed: {ex.Message}");
}
catch (DirectoryNotFoundException ex)
{
Console.WriteLine($"Error parsing files in the directory: {ex.Message}");
}
catch (Exception ex)
{
Console.WriteLine($"Exception: {ex.Message}");
}
}

The following example is a truncated application output running on a Windows system.

Created container 2dbb45f4-099e-49eb-880c-5b02ebac135e


Created container 0d784365-3bdf-4ef2-b2b2-c17b6480792b
Created container 42ac67f2-a316-49c9-8fdb-860fb32845d7
Created container f0357772-cb04-45c3-b6ad-ff9b7a5ee467
Created container 92480da9-f695-4a42-abe8-fb35e71eb887
Iterating in directory: C:\git\myapp\upload
Found 5 file(s)
Uploading 1d596d16-f6de-4c4c-8058-50ebd8141e4d.pdf to container 2dbb45f4-099e-49eb-880c-5b02ebac135e
Uploading 242ff392-78be-41fb-b9d4-aee8152a6279.pdf to container 0d784365-3bdf-4ef2-b2b2-c17b6480792b
Uploading 38d4d7e2-acb4-4efc-ba39-f9611d0d55ef.pdf to container 42ac67f2-a316-49c9-8fdb-860fb32845d7
Uploading 45930d63-b0d0-425f-a766-cda27ff00d32.pdf to container f0357772-cb04-45c3-b6ad-ff9b7a5ee467
Uploading 5129b385-5781-43be-8bac-e2fbb7d2bd82.pdf to container 92480da9-f695-4a42-abe8-fb35e71eb887
Uploaded 5 files in 16.9552163 seconds

Validate the connections


While the files are being uploaded, you can verify the number of concurrent connections to your storage
account. Open a console window and type netstat -a | find /c "blob:https". This command shows the
number of connections that are currently open. As you can see from the following example, 800 connections
were open while uploading the random files to the storage account. This value changes while the upload is
running. Because the files are uploaded in parallel block chunks, the amount of time required to transfer the
contents is greatly reduced.

C:\>netstat -a | find /c "blob:https"


800

C:\>
Next steps
In part two of the series, you learned about uploading large amounts of random data to a storage account in
parallel, such as how to:
Configure the connection string
Build the application
Run the application
Validate the number of connections
Advance to part three of the series to download large amounts of data from a storage account.
Download large amounts of random data from Azure storage
Download large amounts of random data from
Azure storage

This tutorial is part three of a series. This tutorial shows you how to download large amounts of data from Azure
storage.
In part three of the series, you learn how to:
Update the application
Run the application
Validate the number of connections

Prerequisites
To complete this tutorial, you must have completed the previous Storage tutorial: Upload large amounts of
random data in parallel to Azure storage.

Remote into your virtual machine


To create a remote desktop session with the virtual machine, use the following command on your local machine.
Replace the IP address with the publicIPAddress of your virtual machine. When prompted, enter the credentials
used when creating the virtual machine.

mstsc /v:<publicIpAddress>

Update the application


In the previous tutorial, you only uploaded files to the storage account. Open
D:\git\storage-dotnet-perf-scale-app\Program.cs in a text editor. Replace the Main method with the following
sample. This example comments out the upload task and uncomments the download task and the task to delete
the content in the storage account when complete.
public static async Task Main(string[] args)
{
Console.WriteLine("Azure Blob storage performance and scalability sample");
// Set threading and default connection limit to 100 to
// ensure multiple threads and connections can be opened.
// This is in addition to parallelism with the storage
// client library that is defined in the functions below.
ThreadPool.SetMinThreads(100, 4);
ServicePointManager.DefaultConnectionLimit = 100; // (Or More)

bool exception = false;


try
{
// Call the UploadFilesAsync function.
// await UploadFilesAsync();

// Uncomment the following line to enable downloading of files from the storage account.
// This is commented out initially to support the tutorial at
// https://docs.microsoft.com/azure/storage/blobs/storage-blob-scalable-app-download-files
await DownloadFilesAsync();
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
exception = true;
}
finally
{
// The following function will delete the container and all files contained in them.
// This is commented out initially as the tutorial at
// https://docs.microsoft.com/azure/storage/blobs/storage-blob-scalable-app-download-files
// has you upload only for one tutorial and download for the other.
if (!exception)
{
// await DeleteExistingContainersAsync();
}
Console.WriteLine("Press any key to exit the application");
Console.ReadKey();
}
}

After the application has been updated, you need to build the application again. Open a Command Prompt and
navigate to D:\git\storage-dotnet-perf-scale-app . Rebuild the application by running dotnet build as seen in
the following example:

dotnet build

Run the application


Now that the application has been rebuilt, it is time to run the application with the updated code. If it is not
already open, open a Command Prompt and navigate to D:\git\storage-dotnet-perf-scale-app.
Type dotnet run to run the application.

dotnet run

The DownloadFilesAsync task is shown in the following example:

.NET v12 SDK


.NET v11 SDK
The application reads the containers located in the storage account specified in the storageconnectionstring .
It iterates through the blobs using the GetBlobs method and downloads them to the local machine using the
DownloadToAsync method.

private static async Task DownloadFilesAsync()


{
BlobServiceClient blobServiceClient = GetBlobServiceClient();

// Path to the local directory to download the files to


string downloadPath = Directory.GetCurrentDirectory() + "\\download\\";
Directory.CreateDirectory(downloadPath);
Console.WriteLine($"Created directory {downloadPath}");

// Specify the StorageTransferOptions


var options = new StorageTransferOptions
{
// Set the maximum number of workers that
// may be used in a parallel transfer.
MaximumConcurrency = 8,

// Set the maximum length of a transfer to 50MB.


MaximumTransferSize = 50 * 1024 * 1024
};

List<BlobContainerClient> containers = new List<BlobContainerClient>();

foreach (BlobContainerItem container in blobServiceClient.GetBlobContainers())


{
containers.Add(blobServiceClient.GetBlobContainerClient(container.Name));
}

// Start a timer to measure how long it takes to download all the files.
Stopwatch timer = Stopwatch.StartNew();

// Download the blobs


try
{
int count = 0;

// Create a queue of tasks that will each download one file.


var tasks = new Queue<Task<Response>>();

foreach (BlobContainerClient container in containers)


{
// Iterate through the files
foreach (BlobItem blobItem in container.GetBlobs())
{
string fileName = downloadPath + blobItem.Name;
Console.WriteLine($"Downloading {blobItem.Name} to {downloadPath}");

BlobClient blob = container.GetBlobClient(blobItem.Name);

// Add the download task to the queue


tasks.Enqueue(blob.DownloadToAsync(fileName, default, options));
count++;
}
}

// Run all the tasks asynchronously.


await Task.WhenAll(tasks);

// Report the elapsed time.


timer.Stop();
Console.WriteLine($"Downloaded {count} files in {timer.Elapsed.TotalSeconds} seconds");
}
catch (RequestFailedException ex)
{
Console.WriteLine($"Azure request failed: {ex.Message}");
}
catch (DirectoryNotFoundException ex)
{
Console.WriteLine($"Error parsing files in the directory: {ex.Message}");
}
catch (Exception ex)
{
Console.WriteLine($"Exception: {ex.Message}");
}
}

Validate the connections


While the files are being downloaded, you can verify the number of concurrent connections to your storage
account. Open a console window and type netstat -a | find /c "blob:https". This command shows the
number of connections that are currently open. As you can see from the following example, over 280
connections were open when downloading files from the storage account.

C:\>netstat -a | find /c "blob:https"


289

C:\>

Next steps
In part three of the series, you learned about downloading large amounts of data from a storage account,
including how to:
Run the application
Validate the number of connections
Go to part four of the series to verify throughput and latency metrics in the portal.
Verify throughput and latency metrics in the portal
Verify throughput and latency metrics for a storage
account

This tutorial is part four and the final part of a series. In the previous tutorials, you learned how to upload and
download large amounts of random data to and from an Azure storage account. This tutorial shows you how to use
metrics to view throughput and latency in the Azure portal.
In part four of the series, you learn how to:
Configure charts in the Azure portal
Verify throughput and latency metrics
Azure Storage metrics uses Azure Monitor to provide a unified view into the performance and availability of
your storage account.

Configure metrics
Navigate to Metrics under SETTINGS in your storage account.
Choose Blob from the SUB SERVICE drop-down.
Under METRIC , select one of the metrics found in the following table:
The following metrics give you an idea of the latency and throughput of the application. The metrics you
configure in the portal are 1-minute averages. If a transaction finished in the middle of a minute, that minute's
data is halved for the average. In the application, the upload and download operations were timed and provided
output of the actual amount of time it took to upload and download the files. This information can be used
in conjunction with the portal metrics to fully understand throughput.

METRIC | DEFINITION

Success E2E Latency | The average end-to-end latency of successful requests made to a storage service or the
specified API operation. This value includes the required processing time within Azure Storage to read the
request, send the response, and receive acknowledgment of the response.

Success Server Latency | The average time used to process a successful request by Azure Storage. This value
does not include the network latency specified in Success E2E Latency.

Transactions | The number of requests made to a storage service or the specified API operation. This number
includes successful and failed requests, as well as requests that produced errors. In the example, the block size
was set to 100 MB. In this case, each 100-MB block is considered a transaction.

Ingress | The amount of ingress data. This number includes ingress from an external client into Azure Storage
as well as ingress within Azure.

Egress | The amount of egress data. This number includes egress from an external client into Azure Storage as
well as egress within Azure. As a result, this number does not reflect billable egress.
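These metrics can also be retrieved programmatically. The following is a minimal PowerShell sketch, not part
of the original sample, that reads one of the metrics above at a one-minute grain using Get-AzMetric from the
Az.Monitor module; the resource group and account names are the placeholders used earlier in this series.

# Build the resource ID of the blob service for the storage account (names are placeholders).
$resourceId = (Get-AzStorageAccount -ResourceGroupName "myResourceGroup" `
    -Name "<blob_storage_account>").Id + "/blobServices/default"

# Retrieve Success E2E Latency, averaged per minute, for the last hour.
Get-AzMetric -ResourceId $resourceId `
    -MetricName "SuccessE2ELatency" `
    -TimeGrain 00:01:00 `
    -StartTime (Get-Date).AddHours(-1) `
    -EndTime (Get-Date) `
    -AggregationType Average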

Select Last 24 hours (Automatic) next to Time. Choose Last hour and Minute for Time granularity, then
click Apply.

Charts can have more than one metric assigned to them, but assigning more than one metric disables the ability
to group by dimensions.

Dimensions
Dimensions are used to look deeper into the charts and get more detailed information. Different metrics have
different dimensions. One dimension that is available is the API name dimension. This dimension breaks out
the chart into each separate API call. The first image below shows an example chart of total transactions for a
storage account. The second image shows the same chart but with the API name dimension selected. As you can
see, each transaction is listed giving more details into how many calls were made by API name.
Clean up resources
When no longer needed, delete the resource group, virtual machine, and all related resources. To do so, select
the resource group for the VM and click Delete.
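If you prefer to clean up from PowerShell instead of the portal, deleting the resource group removes the
virtual machine, the storage account, and all related resources in one step. This is a minimal sketch assuming
the myResourceGroup name used earlier in the series.

# Delete the resource group and everything it contains.
Remove-AzResourceGroup -Name "myResourceGroup"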

Next steps
In part four of the series, you learned about viewing metrics for the example solution, such as how to:
Configure charts in the Azure portal
Verify throughput and latency metrics
Follow this link to see pre-built storage samples.
Azure storage script samples
Tutorial: Host a static website on Blob Storage

In this tutorial, you'll learn how to build and deploy a static website to Azure Storage. When you're finished, you
will have a static website that users can access publicly.
In this tutorial, you learn how to:
Configure static website hosting
Deploy a Hello World website
Static websites have some limitations. For example, if you want to configure headers, you'll have to use Azure
Content Delivery Network (Azure CDN). There's no way to configure headers as part of the static website feature
itself. Also, AuthN and AuthZ are not supported.
If these features are important for your scenario, consider using Azure Static Web Apps. It's a great alternative to
static websites and is also appropriate in cases where you don't require a web server to render content. You can
configure headers and AuthN / AuthZ is fully supported. Azure Static Web Apps also provides a fully managed
continuous integration and continuous delivery (CI/CD) workflow from GitHub source to global deployment.

Prerequisites
To access Azure Storage, you'll need an Azure subscription. If you don't already have a subscription, create a free
account before you begin.
All access to Azure Storage takes place through a storage account. For this quickstart, create a storage account
using the Azure portal, Azure PowerShell, or Azure CLI. For help creating a storage account, see Create a storage
account.

NOTE
Static websites are now available for general-purpose v2 Standard storage accounts as well as storage accounts with
hierarchical namespace enabled.
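If you don't already have a suitable account, the following is a minimal PowerShell sketch for creating a
Standard general-purpose v2 storage account. The resource group, account name, and location are
placeholders, and the account name must be globally unique.

# Create a Standard general-purpose v2 storage account that can host a static website.
New-AzStorageAccount -ResourceGroupName "myResourceGroup" `
    -Name "mystaticsiteaccount" `
    -Location "EastUS" `
    -SkuName Standard_LRS `
    -Kind StorageV2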

This tutorial uses Visual Studio Code, a free tool for programmers, to build the static website and deploy it to an
Azure Storage account.
After you install Visual Studio Code, install the Azure Storage preview extension. This extension integrates Azure
Storage management functionality with Visual Studio Code. You will use the extension to deploy your static
website to Azure Storage. To install the extension:
1. Launch Visual Studio Code.
2. On the toolbar, click Extensions . Search for Azure Storage, and select the Azure Storage extension from
the list. Then click the Install button to install the extension.
Sign in to the Azure portal
Sign in to the Azure portal to get started.

Configure static website hosting


The first step is to configure your storage account to host a static website in the Azure portal. When you
configure your account for static website hosting, Azure Storage automatically creates a container named $web.
The $web container will contain the files for your static website. (An equivalent PowerShell sketch for this
configuration appears after the numbered steps below.)
1. Open the Azure portal in your web browser.
2. Locate your storage account and display the account overview.
3. Select Static website to display the configuration page for static websites.
4. Select Enabled to enable static website hosting for the storage account.
5. In the Index document name field, specify a default index page of index.html. The default index page is
displayed when a user navigates to the root of your static website.
6. In the Error document path field, specify a default error page of 404.html. The default error page is
displayed when a user attempts to navigate to a page that does not exist in your static website.
7. Click Save . The Azure portal now displays your static website endpoint.
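If you'd rather script this configuration than click through the portal, the Az.Storage module exposes an
equivalent cmdlet. The following is a minimal sketch assuming an existing storage account named
mystorageaccount in myResourceGroup (both placeholder names):

# Get the storage account and enable static website hosting with the same index and error documents.
$account = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount"
Enable-AzStorageStaticWebsite -Context $account.Context `
    -IndexDocument "index.html" `
    -ErrorDocument404Path "404.html"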

Deploy a Hello World website


Next, create a Hello World web page with Visual Studio Code and deploy it to the static website hosted in your
Azure Storage account.
1. Create an empty folder named mywebsite on your local file system.
2. Launch Visual Studio Code, and open the folder that you just created from the Explorer panel.

3. Create the default index file in the mywebsite folder and name it index.html.

4. Open index.html in the editor, paste the following text into the file, and save it:

<!DOCTYPE html>
<html>
<body>
<h1>Hello World!</h1>
</body>
</html>

5. Create the default error file and name it 404.html.


6. Open 404.html in the editor, paste the following text into the file, and save it:

<!DOCTYPE html>
<html>
<body>
<h1>404</h1>
</body>
</html>

7. Right-click under the mywebsite folder in the Explorer panel and select Deploy to Static Website... to
deploy your website. You will be prompted to log in to Azure to retrieve a list of subscriptions.
8. Select the subscription containing the storage account for which you enabled static website hosting. Next,
select the storage account when prompted.
Visual Studio Code will now upload your files to your web endpoint, and show the success status bar. Launch the
website to view it in Azure.
You've successfully completed the tutorial and deployed a static website to Azure.

Feature support
This table shows how this feature is supported in your account and the impact on support when you enable
certain capabilities.

STORAGE ACCOUNT TYPE | BLOB STORAGE (DEFAULT SUPPORT) | DATA LAKE STORAGE GEN2 1 | NFS 3.0 1

Standard general-purpose v2

Premium block blobs
1 Data Lake Storage Gen2 and the Network File System (NFS) 3.0 protocol both require a storage account with a
hierarchical namespace enabled.

Next steps
In this tutorial, you learned how to configure your Azure Storage account for static website hosting, and how to
create and deploy a static website to an Azure endpoint.
Next, learn how to configure a custom domain with your static website.
Map a custom domain to an Azure Blob Storage endpoint
Tutorial: Build a highly available application with
Blob storage

This tutorial is part one of a series. In it, you learn how to make your application data highly available in Azure.
When you've completed this tutorial, you will have a console application that uploads and retrieves a blob from
a read-access geo-zone-redundant (RA-GZRS) storage account.
Geo-redundancy in Azure Storage replicates transactions asynchronously from a primary region to a secondary
region that is hundreds of miles away. This replication process guarantees that the data in the secondary region
is eventually consistent. The console application uses the circuit breaker pattern to determine which endpoint to
connect to, automatically switching between endpoints as failures and recoveries are simulated.
If you don't have an Azure subscription, create a free account before you begin.
In part one of the series, you learn how to:
Create a storage account
Set the connection string
Run the console application

Prerequisites
To complete this tutorial:
.NET v12 SDK
.NET v11 SDK
Python v12 SDK
Python v2.1
Node.js v12 SDK
Node.js v11 SDK

We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.

Sign in to the Azure portal


Sign in to the Azure portal.

Create a storage account


A storage account provides a unique namespace to store and access your Azure Storage data objects.
Follow these steps to create a read-access geo-zone-redundant (RA-GZRS) storage account:
1. Select the Create a resource button in the Azure portal.
2. Select Storage account - blob, file, table, queue from the New page.
3. Fill out the storage account form with the following information, as shown in the following image and
select Create:

SETTING | SAMPLE VALUE | DESCRIPTION

Subscription | My subscription | For details about your subscriptions, see Subscriptions.

Resource group | myResourceGroup | For valid resource group names, see Naming rules and restrictions.

Name | mystorageaccount | A unique name for your storage account.

Location | East US | Choose a location.

Performance | Standard | Standard performance is a good option for the example scenario.

Account kind | StorageV2 | Using a general-purpose v2 storage account is recommended. For more information
on types of Azure storage accounts, see Storage account overview.

Replication | Read-access geo-zone-redundant storage (RA-GZRS) | The primary region is zone-redundant and is
replicated to a secondary region, with read access to the secondary region enabled.

Access tier | Hot | Use the hot tier for frequently-accessed data.
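If you would rather create the same account from PowerShell, the following sketch is roughly equivalent to the
portal settings above; it assumes a resource group named myResourceGroup already exists, and the account
name is a placeholder that must be globally unique.

# Create an RA-GZRS, general-purpose v2 storage account with the hot access tier.
New-AzStorageAccount -ResourceGroupName "myResourceGroup" `
    -Name "mystorageaccount" `
    -Location "EastUS" `
    -SkuName Standard_RAGZRS `
    -Kind StorageV2 `
    -AccessTier Hot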

Download the sample


.NET v12 SDK
.NET v11 SDK
Python v12 SDK
Python v2.1
Node.js v12 SDK
Node.js v11 SDK

We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.

Configure the sample


.NET v12 SDK
.NET v11 SDK
Python v12 SDK
Python v2.1
Node.js v12 SDK
Node.js v11 SDK

We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.

Run the console application


.NET v12 SDK
.NET v11 SDK
Python v12 SDK
Python v2.1
Node.js v12 SDK
Node.js v11 SDK

We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.

Understand the sample code


.NET v12 SDK
.NET v11 SDK
Python v12 SDK
Python v2.1
Node.js v12 SDK
Node.js v11 SDK

We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.

Next steps
In part one of the series, you learned about making an application highly available with RA-GZRS storage
accounts.
Advance to part two of the series to learn how to simulate a failure and force your application to use the
secondary RA-GZRS endpoint.
Simulate a failure in reading from the primary region
Tutorial: Simulate a failure in reading data from the
primary region

This tutorial is part two of a series. In it, you learn about the benefits of read-access geo-zone-redundant storage
(RA-GZRS) by simulating a failure.
In order to simulate a failure, you can use either static routing or Fiddler. Both methods will allow you to
simulate failure for requests to the primary endpoint of your read-access geo-zone-redundant (RA-GZRS) storage
account, leading the application to read from the secondary endpoint instead.
If you don't have an Azure subscription, create a free account before you begin.
In part two of the series, you learn how to:
Run and pause the application
Simulate a failure with an invalid static route or Fiddler
Simulate primary endpoint restoration

Prerequisites
Before you begin this tutorial, complete the previous tutorial: Make your application data highly available with
Azure storage.
To simulate a failure with static routing, you will use an elevated command prompt.
To simulate a failure using Fiddler, download and install Fiddler.

Simulate a failure with an invalid static route


You can create an invalid static route for all requests to the primary endpoint of your read-access
geo-zone-redundant (RA-GZRS) storage account. In this tutorial, the local host is used as the gateway for
routing requests to the storage account. Using the local host as the gateway causes all requests to your storage
account primary endpoint to loop back inside the host, which subsequently leads to failure. Use the following
steps to simulate a failure and primary endpoint restoration with an invalid static route.
Start and pause the application
Use the instructions in the previous tutorial to launch the sample and download the test file, confirming that it
comes from primary storage. Depending on your target platform, you can then manually pause the sample or
wait at a prompt.
Simulate failure
While the application is paused, open a command prompt on Windows as an administrator or run terminal as
root on Linux.
Get information about the storage account primary endpoint domain by entering the following command on a
command prompt or terminal, replacing STORAGEACCOUNTNAME with the name of your storage account.

nslookup STORAGEACCOUNTNAME.blob.core.windows.net

Copy the IP address of your storage account to a text editor for later use.
To get the IP address of your local host, type ipconfig on the Windows command prompt, or ifconfig on the
Linux terminal.
To add a static route for a destination host, type the following command on a Windows command prompt or
Linux terminal, replacing <destination_ip> with your storage account IP address and <gateway_ip> with your
local host IP address.
Linux

route add <destination_ip> gw <gateway_ip>

Windows

route add <destination_ip> <gateway_ip>

In the window with the running sample, resume the application or press the appropriate key to download the
sample file and confirm that it comes from secondary storage. You can then pause the sample again or wait at
the prompt.
Simulate primary endpoint restoration
To simulate the primary endpoint becoming functional again, delete the invalid static route from the routing
table. This allows all requests to the primary endpoint to be routed through the default gateway. Type the
following command on a Windows command prompt or Linux terminal.
Linux

route del <destination_ip> gw <gateway_ip>

Windows

route delete <destination_ip>

You can then resume the application or press the appropriate key to download the sample file again, this time
confirming that it once again comes from primary storage.

Simulate a failure with Fiddler


To simulate failure with Fiddler, you inject a failed response for requests to the primary endpoint of your RA-
GZRS storage account.
The following sections show how to simulate a failure and primary endpoint restoration with Fiddler.
Launch Fiddler
Open Fiddler, select Rules and Customize Rules.
The Fiddler ScriptEditor launches and displays the SampleRules.js file. This file is used to customize Fiddler.
Paste the following code sample in the OnBeforeResponse function, replacing STORAGEACCOUNTNAME with the name
of your storage account. Depending on the sample, you may also need to replace HelloWorld with the name of
the test file (or a prefix such as sampleFile ) being downloaded. The new code is commented out to ensure that
it doesn't run immediately.
Once complete, select File and Save to save your changes. Leave the ScriptEditor window open for use in the
following steps.

/*
// Simulate data center failure
// After it is successfully downloading the blob, pause the code in the sample,
// uncomment these lines of script, and save the script.
// It will intercept the (probably successful) responses and send back a 503 error.
// When you're ready to stop sending back errors, comment these lines of script out again
// and save the changes.

if ((oSession.hostname == "STORAGEACCOUNTNAME.blob.core.windows.net")
&& (oSession.PathAndQuery.Contains("HelloWorld"))) {
oSession.responseCode = 503;
}
*/
Start and pause the application
Use the instructions in the previous tutorial to launch the sample and download the test file, confirming that it
comes from primary storage. Depending on your target platform, you can then manually pause the sample or
wait at a prompt.
Simulate failure
While the application is paused, switch back to Fiddler and uncomment the custom rule you saved in the
OnBeforeResponse function. Be sure to select File and Save to save your changes so the rule will take effect. This
code looks for requests to the RA-GZRS storage account and, if the path contains the name of the sample file,
returns a response code of 503 - Service Unavailable .
In the window with the running sample, resume the application or press the appropriate key to download the
sample file and confirm that it comes from secondary storage. You can then pause the sample again or wait at
the prompt.
Simulate primary endpoint restoration
In Fiddler, remove or comment out the custom rule again. Select File and Save to ensure the rule will no longer
be in effect.
In the window with the running sample, resume the application or press the appropriate key to download the
sample file and confirm that it comes from primary storage once again. You can then exit the sample.

Next steps
In part two of the series, you learned about simulating a failure to test read access geo-redundant storage.
To learn more about how RA-GZRS storage works, as well as its associated risks, read the following article:
Designing HA apps with RA-GZRS
Tutorial - Encrypt and decrypt blobs using Azure
Key Vault

This tutorial covers how to make use of client-side storage encryption with Azure Key Vault. It walks you through
how to encrypt and decrypt a blob in a console application using these technologies.
Estimated time to complete: 20 minutes
For overview information about Azure Key Vault, see What is Azure Key Vault?.
For overview information about client-side encryption for Azure Storage, see Client-Side Encryption and Azure
Key Vault for Microsoft Azure Storage.

Prerequisites
To complete this tutorial, you must have the following:
An Azure Storage account
Visual Studio 2013 or later
Azure PowerShell

Overview of client-side encryption


For an overview of client-side encryption for Azure Storage, see Client-Side Encryption and Azure Key Vault for
Microsoft Azure Storage.
Here is a brief description of how client-side encryption works:
1. The Azure Storage client SDK generates a content encryption key (CEK), which is a one-time-use symmetric
key.
2. Customer data is encrypted using this CEK.
3. The CEK is then wrapped (encrypted) using the key encryption key (KEK). The KEK is identified by a key
identifier and can be an asymmetric key pair or a symmetric key and can be managed locally or stored in
Azure Key Vault. The Storage client itself never has access to the KEK. It just invokes the key wrapping
algorithm that is provided by Key Vault. Customers can choose to use custom providers for key
wrapping/unwrapping if they want.
4. The encrypted data is then uploaded to the Azure Storage service.

Set up your Azure Key Vault


In order to proceed with this tutorial, you need to do the following steps, which are outlined in the tutorial
Quickstart: Set and retrieve a secret from Azure Key Vault by using a .NET web app:
Create a key vault.
Add a key or secret to the key vault.
Register an application with Azure Active Directory.
Authorize the application to use the key or secret.
Make note of the ClientID and ClientSecret that were generated when registering an application with Azure
Active Directory.
Create both keys in the key vault. We assume for the rest of the tutorial that you have used the following names:
ContosoKeyVault and TestRSAKey1.
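If you want to create the vault and key from PowerShell rather than through the linked quickstart, a minimal
sketch looks like the following; the resource group name is a placeholder, and the vault name must be globally
unique.

# Create the key vault (the name must be globally unique).
New-AzKeyVault -Name "ContosoKeyVault" -ResourceGroupName "myResourceGroup" -Location "East US"

# Add a software-protected RSA key that the sample uses as the key encryption key (KEK).
Add-AzKeyVaultKey -VaultName "ContosoKeyVault" -Name "TestRSAKey1" -Destination "Software"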

Create a console application with packages and AppSettings


In Visual Studio, create a new console application.
Add the necessary NuGet packages in the Package Manager Console.

Install-Package Microsoft.Azure.ConfigurationManager
Install-Package Microsoft.Azure.Storage.Common
Install-Package Microsoft.Azure.Storage.Blob
Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory

Install-Package Microsoft.Azure.KeyVault
Install-Package Microsoft.Azure.KeyVault.Extensions

Add AppSettings to the App.Config.

<appSettings>
<add key="accountName" value="myaccount"/>
<add key="accountKey" value="theaccountkey"/>
<add key="clientId" value="theclientid"/>
<add key="clientSecret" value="theclientsecret"/>
<add key="container" value="stuff"/>
</appSettings>

Add the following using directives and make sure to add a reference to System.Configuration to the project.

.NET v12 SDK


.NET v11 SDK

We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.

Add a method to get a token to your console application


The following method is used by Key Vault classes that need to authenticate for access to your key vault.
.NET v12 SDK
.NET v11 SDK

We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.

Access Azure Storage and Key Vault in your program


In the Main() method, add the following code.
.NET v12 SDK
.NET v11 SDK

We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.
NOTE
Key Vault Object Models
It is important to understand that there are actually two Key Vault object models to be aware of: one is based on the
REST API (KeyVault namespace) and the other is an extension for client-side encryption.
The Key Vault Client interacts with the REST API and understands JSON Web Keys and secrets for the two kinds of things
that are contained in Key Vault.
The Key Vault Extensions are classes that seem specifically created for client-side encryption in Azure Storage. They contain
an interface for keys (IKey) and classes based on the concept of a Key Resolver. There are two implementations of IKey
that you need to know: RSAKey and SymmetricKey. Now they happen to coincide with the things that are contained in a
Key Vault, but at this point they are independent classes (so the Key and Secret retrieved by the Key Vault Client do not
implement IKey).

Encrypt blob and upload


Add the following code to encrypt a blob and upload it to your Azure storage account. The ResolveKeyAsync
method that is used returns an IKey.
.NET v12 SDK
.NET v11 SDK

We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.

NOTE
If you look at the BlobEncryptionPolicy constructor, you will see that it can accept a key and/or a resolver. Be aware that
right now you cannot use a resolver for encryption because it does not currently support a default key.

Decrypt blob and download


Decryption is really where using the Resolver classes makes sense. The ID of the key used for encryption is
associated with the blob in its metadata, so there is no reason for you to retrieve the key and remember the
association between key and blob. You just have to make sure that the key remains in Key Vault.
The private key of an RSA Key remains in Key Vault, so for decryption to occur, the Encrypted Key from the blob
metadata that contains the CEK is sent to Key Vault for decryption.
Add the following to decrypt the blob that you just uploaded.

.NET v12 SDK


.NET v11 SDK

We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.

NOTE
There are a couple of other kinds of resolvers to make key management easier, including: AggregateKeyResolver and
CachingKeyResolver.
Use Key Vault secrets
The way to use a secret with client-side encryption is via the SymmetricKey class because a secret is essentially a
symmetric key. But, as noted above, a secret in Key Vault does not map exactly to a SymmetricKey. There are a
few things to understand:
The key in a SymmetricKey has to be a fixed length: 128, 192, 256, 384, or 512 bits.
The key in a SymmetricKey should be Base64 encoded.
A Key Vault secret that will be used as a SymmetricKey needs to have a Content Type of "application/octet-
stream" in Key Vault.
Here is an example in PowerShell of creating a secret in Key Vault that can be used as a SymmetricKey. Please
note that the hard-coded value, $key, is for demonstration purposes only. In your own code you'll want to
generate this key.

# Here we are making a 128-bit key so we have 16 characters.
# The characters are in the ASCII range of UTF8 so they are
# each 1 byte. 16 x 8 = 128.
$key = "qwertyuiopasdfgh"
$b = [System.Text.Encoding]::UTF8.GetBytes($key)
$enc = [System.Convert]::ToBase64String($b)
$secretvalue = ConvertTo-SecureString $enc -AsPlainText -Force

# Substitute the VaultName and Name in this command.
$secret = Set-AzureKeyVaultSecret -VaultName 'ContosoKeyVault' -Name 'TestSecret2' -SecretValue $secretvalue `
  -ContentType "application/octet-stream"

In your console application, you can use the same call as before to retrieve this secret as a SymmetricKey.
.NET v12 SDK
.NET v11 SDK

We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.

Next steps
For more information about using Microsoft Azure Storage with C#, see Microsoft Azure Storage Client Library
for .NET.
For more information about the Blob REST API, see Blob Service REST API.
For the latest information on Microsoft Azure Storage, go to the Microsoft Azure Storage Team Blog.
Tutorial: Add a role assignment condition to restrict
access to blobs using the Azure portal (preview)

IMPORTANT
Azure ABAC and Azure role assignment conditions are currently in preview. This preview version is provided without a
service level agreement, and it's not recommended for production workloads. Certain features might not be supported or
might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

In most cases, a role assignment will grant the permissions you need to Azure resources. However, in some
cases you might want to provide more fine-grained access control by adding a role assignment condition.
In this tutorial, you learn how to:
Add a condition to a role assignment
Restrict access to blobs based on a blob index tag

Prerequisites
For information about the prerequisites to add or edit role assignment conditions, see Conditions prerequisites.

Condition
In this tutorial, you restrict access to blobs with a specific tag. For example, you add a condition to a role
assignment so that Chandra can only read files with the tag Project=Cascade .

If Chandra tries to read a blob without the tag Project=Cascade , access is not allowed.

Here is what the condition looks like in code:


(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}
AND
SubOperationMatches{'Blob.Read.WithTagConditions'})
)
OR
(

@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>
] StringEqualsIgnoreCase 'Cascade'
)
)

Step 1: Create a user


1. Sign in to the Azure portal as an Owner of a subscription.
2. Click Azure Active Directory.
3. Create a user or find an existing user. This tutorial uses Chandra as the example.

Step 2: Set up storage


1. Create a storage account that is compatible with the blob index tags feature. For more information, see
Manage and find Azure Blob data with blob index tags.
2. Create a new container within the storage account and set the Public access level to Private (no
anonymous access) .
3. In the container, click Upload to open the Upload blob pane.
4. Find a text file to upload.
5. Click Advanced to expand the pane.
6. In the Blob index tags section, add the following blob index tag to the text file.
If you don't see the Blob index tags section and you just registered your subscription, you might need to
wait a few minutes for changes to propagate. For more information, see Use blob index tags to manage
and find data on Azure Blob Storage.

NOTE
Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to
blob index tags, you must use blob index tags with conditions.

KEY | VALUE

Project | Cascade
7. Click the Upload button to upload the file.
8. Upload a second text file.
9. Add the following blob index tag to the second text file.

KEY | VALUE

Project | Baker

Step 3: Assign a storage blob data role


1. Open the resource group.
2. Click Access control (IAM) .
3. Click the Role assignments tab to view the role assignments at this scope.
4. Click Add > Add role assignment .

The Add role assignment page opens.


5. On the Roles tab, select the Storage Blob Data Reader role.

6. On the Members tab, select the user you created earlier.

7. (Optional) In the Description box, enter Read access to blobs with the tag Project=Cascade .
8. Click Next .

Step 4: Add a condition


1. On the Conditions (optional) tab, click Add condition .
The Add role assignment condition page appears.
2. In the Add action section, click Add action .
The Select an action pane appears. This pane is a filtered list of data actions based on the role assignment
that will be the target of your condition.

3. Under Read a blob, click Read content from a blob with tag conditions and then click Select.
4. In the Build expression section, click Add expression .
The Expression section expands.
5. Specify the following expression settings:
SETTING | VALUE

Attribute source | Resource

Attribute | Blob index tags [Values in key]

Key | Project

Operator | StringEqualsIgnoreCase

Value | Cascade

6. Scroll up to Editor type and click Code .


The condition is displayed as code. You can make changes to the condition in this code editor. To go back
to the visual editor, click Visual .
7. Click Save to add the condition and return to the Add role assignment page.
8. Click Next .
9. On the Review + assign tab, click Review + assign to assign the role with a condition.
After a few moments, the security principal is assigned the role at the selected scope.

Step 5: Assign Reader role


Repeat the previous steps to assign the Reader role to the user you created earlier at resource group
scope.

NOTE
You typically don't need to assign the Reader role. However, this is done so that you can test the condition using
the Azure portal.
Step 6: Test the condition
1. In a new window, open the Azure portal.
2. Sign in as the user you created earlier.
3. Open the storage account and container you created.
4. Ensure that the authentication method is set to Azure AD User Account and not Access key .

5. Click the Baker text file.


You should NOT be able to view or download the blob and an authorization failed message should be
displayed.
6. Click Cascade text file.
You should be able to view and download the blob.

Step 7: Clean up resources


1. Remove the role assignment you added.
2. Delete the test storage account you created.
3. Delete the user you created.

Next steps
Example Azure role assignment conditions
Actions and attributes for Azure role assignment conditions in Azure Storage (preview)
Azure role assignment condition format and syntax
Tutorial: Add a role assignment condition to restrict
access to blobs using Azure PowerShell (preview)

IMPORTANT
Azure ABAC and Azure role assignment conditions are currently in preview. This preview version is provided without a
service level agreement, and it's not recommended for production workloads. Certain features might not be supported or
might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

In most cases, a role assignment will grant the permissions you need to Azure resources. However, in some
cases you might want to provide more fine-grained access control by adding a role assignment condition.
In this tutorial, you learn how to:
Add a condition to a role assignment
Restrict access to blobs based on a blob index tag

Prerequisites
For information about the prerequisites to add or edit role assignment conditions, see Conditions prerequisites.

Condition
In this tutorial, you restrict access to blobs with a specific tag. For example, you add a condition to a role
assignment so that Chandra can only read files with the tag Project=Cascade.

If Chandra tries to read a blob without the tag Project=Cascade, access is not allowed.

Here is what the condition looks like in code:


(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}
AND
SubOperationMatches{'Blob.Read.WithTagConditions'})
)
OR
(

@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>
] StringEquals 'Cascade'
)
)

Step 1: Install prerequisites


1. Open a PowerShell window.
2. Use Get-InstalledModule to check versions of installed modules.

Get-InstalledModule -Name Az
Get-InstalledModule -Name Az.Resources
Get-InstalledModule -Name Az.Storage

3. If necessary, use Install-Module to install the required versions for the Az, Az.Resources, and Az.Storage
modules.

Install-Module -Name Az -RequiredVersion 5.5.0


Install-Module -Name Az.Resources -RequiredVersion 3.2.1
Install-Module -Name Az.Storage -RequiredVersion 2.5.2-preview -AllowPrerelease

4. Close and reopen PowerShell to refresh session.

Step 2: Sign in to Azure


1. Use the Connect-AzAccount command and follow the instructions that appear to sign in to your directory
as User Access Administrator or Owner.

Connect-AzAccount

2. Use Get-AzSubscription to list all of your subscriptions.

Get-AzSubscription

3. Determine the subscription ID and initialize the variable.

$subscriptionId = "<subscriptionId>"

4. Set the subscription as the active subscription.

$context = Get-AzSubscription -SubscriptionId $subscriptionId


Set-AzContext $context
Step 3: Create a user
1. Use New-AzureADUser to create a user or find an existing user. This tutorial uses Chandra as the example.
2. Initialize the variable for the object ID of the user.

$userObjectId = "<userObjectId>"

Step 4: Set up storage


1. Use New-AzStorageAccount to create a storage account that is compatible with the blob index feature.
For more information, see Manage and find Azure Blob data with blob index tags (preview).
2. Use New-AzStorageContainer to create a new blob container within the storage account and set the
Public access level to Private (no anonymous access) .
3. Use Set-AzStorageBlobContent to upload a text file to the container. (A combined sketch of these setup
steps appears after this list.)
4. Add the following blob index tag to the text file. For more information, see Use blob index tags (preview)
to manage and find data on Azure Blob Storage.

NOTE
Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to
blob index tags, you must use blob index tags with conditions.

KEY | VALUE

Project | Cascade

5. Upload a second text file to the container.


6. Add the following blob index tag to the second text file.

KEY | VALUE

Project | Baker

7. Initialize the following variables with the names you used.

$resourceGroup = "<resourceGroup>"
$storageAccountName = "<storageAccountName>"
$containerName = "<containerName>"
$blobNameCascade = "<blobNameCascade>"
$blobNameBaker = "<blobNameBaker>"
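The storage setup steps above can be combined into a short script. This is a minimal sketch, not the tutorial's
required commands; it assumes your Az.Storage version supports the -Tag parameter on
Set-AzStorageBlobContent, and the local file paths are placeholders.

# Create the storage account and a private container, reusing the variables initialized above.
$account = New-AzStorageAccount -ResourceGroupName $resourceGroup -Name $storageAccountName `
    -Location "EastUS" -SkuName Standard_LRS -Kind StorageV2
$ctx = $account.Context
New-AzStorageContainer -Name $containerName -Permission Off -Context $ctx

# Upload the two text files, attaching a blob index tag to each (file paths are placeholders).
Set-AzStorageBlobContent -File "C:\temp\cascade.txt" -Container $containerName `
    -Blob $blobNameCascade -Context $ctx -Tag @{ "Project" = "Cascade" }
Set-AzStorageBlobContent -File "C:\temp\baker.txt" -Container $containerName `
    -Blob $blobNameBaker -Context $ctx -Tag @{ "Project" = "Baker" }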

Step 5: Assign a role with a condition


1. Initialize the Storage Blob Data Reader role variables.

$roleDefinitionName = "Storage Blob Data Reader"


$roleDefinitionId = "2a2b9908-6ea1-4ae2-8e65-a410df84e7d1"

2. Initialize the scope for the resource group.


$scope = "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup"

3. Initialize the condition.

$condition = "((!
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND
SubOperationMatches{'Blob.Read.WithTagConditions'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_se
nsitive`$>] StringEquals 'Cascade'))"

In PowerShell, if your condition includes a dollar sign ($), you must prefix it with a backtick (`). For
example, this condition uses dollar signs to delineate the tag key name.
4. Initialize the condition version and description.

$conditionVersion = "2.0"
$description = "Read access to blobs with the tag Project=Cascade"

5. Use New-AzRoleAssignment to assign the Storage Blob Data Reader role with a condition to the user at a
resource group scope.

New-AzRoleAssignment -ObjectId $userObjectId -Scope $scope -RoleDefinitionId $roleDefinitionId -Description $description -Condition $condition -ConditionVersion $conditionVersion

Here's an example of the output:

RoleAssignmentId : /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microso
ft.Authorization/roleAssignments/<roleAssignmentId>
Scope : /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>
DisplayName : Chandra
SignInName : [email protected]
RoleDefinitionName : Storage Blob Data Reader
RoleDefinitionId : 2a2b9908-6ea1-4ae2-8e65-a410df84e7d1
ObjectId : <userObjectId>
ObjectType : User
CanDelegate : False
Description : Read access to blobs with the tag Project=Cascade
ConditionVersion : 2.0
Condition : ((!
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND
SubOperationMatches{'Blob.Read.WithTagConditions'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/co
ntainers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'))

Step 6: (Optional) View the condition in the Azure portal


1. In the Azure portal, open the resource group.
2. Click Access control (IAM) .
3. On the Role assignments tab, find the role assignment.
4. In the Condition column, click View/Edit to view the condition.
Step 7: Test the condition
1. Open a new PowerShell window.
2. Use Connect-AzAccount to sign in as Chandra.

Connect-AzAccount

3. Initialize the following variables with the names you used.

$storageAccountName = "<storageAccountName>"
$containerName = "<containerName>"
$blobNameBaker = "<blobNameBaker>"
$blobNameCascade = "<blobNameCascade>"

4. Use New-AzStorageContext to create a specific context to access your storage account more easily.

$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName

5. Use Get-AzStorageBlob to try to read the file for the Baker project.

Get-AzStorageBlob -Container $containerName -Blob $blobNameBaker -Context $bearerCtx

Here's an example of the output. Notice that you can't read the file because of the condition you added.
Get-AzStorageBlob : This request is not authorized to perform this operation using this permission.
HTTP Status Code:
403 - HTTP Error Message: This request is not authorized to perform this operation using this
permission.
ErrorCode: AuthorizationPermissionMismatch
ErrorMessage: This request is not authorized to perform this operation using this permission.
RequestId: <requestId>
Time: Sat, 24 Apr 2021 13:26:25 GMT
At line:1 char:1
+ Get-AzStorageBlob -Container $containerName -Blob $blobNameBaker -Con ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : CloseError: (:) [Get-AzStorageBlob], StorageException
+ FullyQualifiedErrorId :
StorageException,Microsoft.WindowsAzure.Commands.Storage.Blob.Cmdlet.GetAzureStorageBlob
Command

6. Read the file for the Cascade project.

Get-AzStorageBlob -Container $containerName -Blob $blobNameCascade -Context $bearerCtx

Here's an example of the output. Notice that you can read the file because it has the tag Project=Cascade.

AccountName: <storageAccountName>, ContainerName: <containerName>

Name            BlobType  Length ContentType LastModified         AccessTier SnapshotTime
----            --------  ------ ----------- ------------         ---------- ------------
CascadeFile.txt BlockBlob 7      text/plain  2021-04-24 05:35:24Z Hot

Step 8: (Optional) Edit the condition


1. In the other PowerShell window, use Get-AzRoleAssignment to get the role assignment you added.

$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectId

2. Edit the condition.

$condition = "((!
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND
SubOperationMatches{'Blob.Read.WithTagConditions'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_se
nsitive`$>] StringEquals 'Cascade' OR
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sen
sitive`$>] StringEquals 'Baker'))"

3. Update the condition and description properties of the role assignment object.

$testRa.Condition = $condition
$testRa.Description = "Read access to blobs with the tag Project=Cascade or Project=Baker"

4. Use Set-AzRoleAssignment to update the condition for the role assignment.


Set-AzRoleAssignment -InputObject $testRa -PassThru

Here's an example of the output:

RoleAssignmentId : /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microso
ft.Authorization/roleAssignments/<roleAssignmentId>
Scope : /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>
DisplayName : Chandra
SignInName : [email protected]
RoleDefinitionName : Storage Blob Data Reader
RoleDefinitionId : 2a2b9908-6ea1-4ae2-8e65-a410df84e7d1
ObjectId : <userObjectId>
ObjectType : User
CanDelegate : False
Description : Read access to blobs with the tag Project=Cascade or Project=Baker
ConditionVersion : 2.0
Condition : ((!
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND
SubOperationMatches{'Blob.Read.WithTagConditions'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/co
ntainers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade' OR
@Resource[Microsoft.S

torage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>]
StringEquals 'Baker'))

Step 9: Clean up resources


1. Use Remove-AzRoleAssignment to remove the role assignment and condition you added.

Remove-AzRoleAssignment -ObjectId $userObjectId -RoleDefinitionName $roleDefinitionName -ResourceGroupName $resourceGroup

2. Delete the storage account you created.


3. Delete the user you created.

Next steps
Example Azure role assignment conditions
Actions and attributes for Azure role assignment conditions in Azure Storage (preview)
Azure role assignment condition format and syntax
Tutorial: Add a role assignment condition to restrict
access to blobs using Azure CLI (preview)
11/25/2021 • 6 minutes to read • Edit Online

IMPORTANT
Azure ABAC and Azure role assignment conditions are currently in preview. This preview version is provided without a
service level agreement, and it's not recommended for production workloads. Certain features might not be supported or
might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

In most cases, a role assignment will grant the permissions you need to Azure resources. However, in some
cases you might want to provide more fine-grained access control by adding a role assignment condition.
In this tutorial, you learn how to:
Add a condition to a role assignment
Restrict access to blobs based on a blob index tag

Prerequisites
For information about the prerequisites to add or edit role assignment conditions, see Conditions prerequisites.

Condition
In this tutorial, you restrict access to blobs with a specific tag. For example, you add a condition to a role
assignment so that Chandra can only read files with the tag Project=Cascade.

If Chandra tries to read a blob without the tag Project=Cascade, access is not allowed.

Here is what the condition looks like in code:


(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}
AND
SubOperationMatches{'Blob.Read.WithTagConditions'})
)
OR
(
 @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
)
)

Step 1: Sign in to Azure


1. Use the az login command and follow the instructions that appear to sign in to your directory as User
Access Administrator or Owner.

az login

2. Use az account show to get the ID of your subscription.

az account show

3. Determine the subscription ID and initialize the variable.

subscriptionId="<subscriptionId>"

Step 2: Create a user


1. Use az ad user create to create a user or find an existing user. This tutorial uses Chandra as the example. (A rough sketch of this step appears after this list.)
2. Initialize the variable for the object ID of the user.

userObjectId="<userObjectId>"

Step 3: Set up storage


You can authorize access to Blob storage from the Azure CLI either with Azure AD credentials or by using the
storage account access key. This article shows how to authorize Blob storage operations using Azure AD. For
more information, see Quickstart: Create, download, and list blobs with Azure CLI.
1. Use az storage account to create a storage account that is compatible with the blob index feature. For
more information, see Manage and find Azure Blob data with blob index tags (preview).
2. Use az storage container to create a new blob container within the storage account and set the Public
access level to Private (no anonymous access).
3. Use az storage blob upload to upload a text file to the container.
4. Add the following blob index tag to the text file. For more information, see Use blob index tags (preview)
to manage and find data on Azure Blob Storage. (A combined sketch of steps 1 through 6 appears after this list.)
NOTE
Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to
blob index tags, you must use blob index tags with conditions.

KEY        VALUE

Project    Cascade

5. Upload a second text file to the container.


6. Add the following blob index tag to the second text file.

KEY        VALUE

Project    Baker

7. Initialize the following variables with the names you used.

resourceGroup="<resourceGroup>"
storageAccountName="<storageAccountName>"
containerName="<containerName>"
blobNameCascade="<blobNameCascade>"
blobNameBaker="<blobNameBaker>"

Step 4: Assign a role with a condition


1. Initialize the Storage Blob Data Reader role variables.

roleDefinitionName="Storage Blob Data Reader"
roleDefinitionId="2a2b9908-6ea1-4ae2-8e65-a410df84e7d1"

2. Initialize the scope for the resource group.

scope="/subscriptions/$subscriptionId/resourceGroups/$resourceGroup"

3. Initialize the condition.

condition="((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}
AND SubOperationMatches{'Blob.Read.WithTagConditions'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<\$key_case_se
nsitive\$>] StringEquals 'Cascade'))"

In Bash, if history expansion is enabled, you might see the message bash: !: event not found because of
the exclamation point (!). In this case, you can disable history expansion with the command set +H . To
re-enable history expansion, use set -H .
In Bash, a dollar sign ($) has special meaning for expansion. If your condition includes a dollar sign ($),
you might need to prefix it with a backslash (\). For example, this condition uses dollar signs to delineate
the tag key name. For more information about rules for quotation marks in Bash, see Double Quotes.
4. Initialize the condition version and description.
conditionVersion="2.0"
description="Read access to blobs with the tag Project=Cascade"

5. Use az role assignment create to assign the Storage Blob Data Reader role with a condition to the user at
a resource group scope.

az role assignment create --assignee-object-id $userObjectId --scope $scope --role $roleDefinitionId --description "$description" --condition "$condition" --condition-version $conditionVersion

Here's an example of the output:

{
"canDelegate": null,
"condition": "((!
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND
SubOperationMatches{'Blob.Read.WithTagConditions'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sen
sitive$>] StringEquals 'Cascade'))",
"conditionVersion": "2.0",
"description": "Read access to blobs with the tag Project=Cascade",
"id":
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Authorization/rol
eAssignments/{roleAssignmentId}",
"name": "{roleAssignmentId}",
"principalId": "{userObjectId}",
"principalType": "User",
"resourceGroup": "{resourceGroup}",
"roleDefinitionId":
"/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/2a2b9908-6ea1-
4ae2-8e65-a410df84e7d1",
"scope": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}",
"type": "Microsoft.Authorization/roleAssignments"
}

Step 5: (Optional) View the condition in the Azure portal


1. In the Azure portal, open the resource group.
2. Click Access control (IAM) .
3. On the Role assignments tab, find the role assignment.
4. In the Condition column, click View/Edit to view the condition.
Step 6: Test the condition
1. Open a new command window.
2. Use az login to sign in as Chandra.

az login

3. Initialize the following variables with the names you used.

storageAccountName="<storageAccountName>"
containerName="<containerName>"
blobNameBaker="<blobNameBaker>"
blobNameCascade="<blobNameCascade>"

4. Use az storage blob show to try to read the properties of the file for the Baker project.

az storage blob show --account-name $storageAccountName --container-name $containerName --name $blobNameBaker --auth-mode login

Here's an example of the output. Notice that you can't read the file because of the condition you added.

You do not have the required permissions needed to perform this operation.
Depending on your operation, you may need to be assigned one of the following roles:
"Storage Blob Data Contributor"
"Storage Blob Data Reader"
"Storage Queue Data Contributor"
"Storage Queue Data Reader"

If you want to use the old authentication method and allow querying for the right account key, please
use the "--auth-mode" parameter and "key" value.

5. Read the properties of the file for the Cascade project.


az storage blob show --account-name $storageAccountName --container-name $containerName --name $blobNameCascade --auth-mode login

Here's an example of the output. Notice that you can read the properties of the file because it has the tag
Project=Cascade.

{
"container": "<containerName>",
"content": "",
"deleted": false,
"encryptedMetadata": null,
"encryptionKeySha256": null,
"encryptionScope": null,
"isAppendBlobSealed": null,
"isCurrentVersion": null,
"lastAccessedOn": null,
"metadata": {},
"name": "<blobNameCascade>",
"objectReplicationDestinationPolicy": null,
"objectReplicationSourceProperties": [],
"properties": {
"appendBlobCommittedBlockCount": null,
"blobTier": "Hot",
"blobTierChangeTime": null,
"blobTierInferred": true,
"blobType": "BlockBlob",
"contentLength": 7,
"contentRange": null,

...

Step 7: (Optional) Edit the condition


1. In the other command window, use az role assignment list to get the role assignment you added.

az role assignment list --assignee $userObjectId --resource-group $resourceGroup

The output will be similar to the following:


[
{
"canDelegate": null,
"condition": "((!
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND
SubOperationMatches{'Blob.Read.WithTagConditions'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sen
sitive$>] StringEquals 'Cascade'))",
"conditionVersion": "2.0",
"description": "Read access to blobs with the tag Project=Cascade",
"id":
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Authorization/rol
eAssignments/{roleAssignmentId}",
"name": "{roleAssignmentId}",
"principalId": "{userObjectId}",
"principalName": "[email protected]",
"principalType": "User",
"resourceGroup": "{resourceGroup}",
"roleDefinitionId":
"/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/2a2b9908-6ea1-
4ae2-8e65-a410df84e7d1",
"roleDefinitionName": "Storage Blob Data Reader",
"scope": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}",
"type": "Microsoft.Authorization/roleAssignments"
}
]

2. Create a JSON file with the following format and update the condition and description properties.

{
"canDelegate": null,
"condition": "((!
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND
SubOperationMatches{'Blob.Read.WithTagConditions'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sen
sitive$>] StringEquals 'Cascade' OR
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sens
itive$>] StringEquals 'Baker'))",
"conditionVersion": "2.0",
"description": "Read access to blobs with the tag Project=Cascade or Project=Baker",
"id":
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Authorization/rol
eAssignments/{roleAssignmentId}",
"name": "{roleAssignmentId}",
"principalId": "{userObjectId}",
"principalName": "[email protected]",
"principalType": "User",
"resourceGroup": "{resourceGroup}",
"roleDefinitionId":
"/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/2a2b9908-6ea1-
4ae2-8e65-a410df84e7d1",
"roleDefinitionName": "Storage Blob Data Reader",
"scope": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}",
"type": "Microsoft.Authorization/roleAssignments"
}

3. Use az role assignment update to update the condition for the role assignment.

az role assignment update --role-assignment "./path/roleassignment.json"

Step 8: Clean up resources


1. Use az role assignment delete to remove the role assignment and condition you added.

az role assignment delete --assignee $userObjectId --role "$roleDefinitionName" --resource-group $resourceGroup

2. Delete the storage account you created.


3. Delete the user you created.

Next steps
Example Azure role assignment conditions
Actions and attributes for Azure role assignment conditions in Azure Storage (preview)
Azure role assignment condition format and syntax
Azure Storage samples using v12 .NET client
libraries
11/25/2021 • 2 minutes to read • Edit Online

The following table provides an overview of our samples repository and the scenarios covered in each sample.
Click on the links to view the corresponding sample code in GitHub.

NOTE
These samples use the latest Azure Storage .NET v12 library. For legacy v11 code, see Azure Blob Storage Samples for .NET
in the GitHub repository.

Blob samples
Authentication
Authenticate using a connection string
Authenticate using a shared key credential
Authenticate with Azure Identity
Authenticate using an Active Directory token
Anonymously access a public blob
Batching
Delete several blobs in one request
Set several blob access tiers in one request
Fine-grained control in a batch request
Catch errors from a failed sub-operation
Blob
Upload a file to a blob
Download a blob to a file
Download an image
List all blobs in a container
Troubleshooting
Trigger a recoverable error using a container client

Data Lake Storage Gen2 samples


Authentication
Anonymously access a public file
Authenticate using a shared key credential
Authenticate using a shared access signature (SAS)
Authenticate using an Active Directory token
File system
Create a file using a file system client
Get properties on a file and a directory
Rename a file and a directory
Directory
Create a directory
Create a file using a directory client
List directories
Traverse files and directories
File
Upload a file
Upload by appending to a file
Download a file
Set and get a file access control list
Set and get permissions of a file
Troubleshooting
Trigger a recoverable error

Azure Files samples


Authentication
Authenticate using a connection string
Authenticate using a shared key credential
Authenticate using a shared access signature (SAS)
File shares
Create a share and upload a file
Download a file
Traverse files and directories
Troubleshooting
Trigger a recoverable error using a share client

Queue samples
Authentication
Authenticate using Azure Active Directory
Authenticate using a connection string
Authenticate using a shared key credential
Authenticate using a shared access signature (SAS)
Authenticate using an Active Directory token
Queue
Create a queue and add a message
Message
Receive and process messages
Peek at messages
Receive messages and update visibility timeout
Troubleshooting
Trigger a recoverable error using a queue client

Table samples (v11)


Create table
Delete entity/table
Insert/merge/replace entity
Query entities
Query tables
Table ACL/properties
Update entity

Azure code sample libraries


To view the complete .NET sample libraries, go to:
Azure blob code samples
Azure Data Lake code samples
Azure Files code samples
Azure queue code samples
You can browse and clone the GitHub repository for each library.

Getting started guides


Check out the following guides if you are looking for instructions on how to install and get started with the
Azure Storage Client Libraries.
Getting Started with Azure Blob Service in .NET
Getting Started with Azure Queue Service in .NET
Getting Started with Azure Table Service in .NET
Getting Started with Azure File Service in .NET

Next steps
For information on samples for other languages:
Java: Azure Storage samples using Java
Python: Azure Storage samples using Python
JavaScript/Node.js: Azure Storage samples using JavaScript
C++: Azure Storage samples using C++
All other languages: Azure Storage samples
Azure Storage samples using v12 Java client libraries
11/25/2021 • 2 minutes to read • Edit Online

The following table provides an overview of our samples repository and the scenarios covered in each sample.
Click on the links to view the corresponding sample code in GitHub.

NOTE
These samples use the latest Azure Storage Java v12 library. For legacy v8 code, see Getting Started with Azure Blob
Service in Java in the GitHub repository.

Blob samples
Authentication
Authenticate using a shared key credential
Authenticate using Azure Identity
Blob service
Create a blob service client
List containers
Delete containers
Batching
Create a blob batch client
Bulk delete blobs
Set access tier on a batch of blobs
Container
Create a container client
Create a container
List blobs
Delete a container
Blob
Upload a blob
Download a blob
Delete a blob
Upload a blob from a large file
Download a large blob to a file
Troubleshooting
Trigger a recoverable error using a container client
Data Lake Storage Gen2 samples
Data Lake service
Create a Data Lake service client
Create a file system client
File system
Create a file system
Create a directory
Create a file and subdirectory
Create a file client
List paths in a file system
Delete a file system
List file systems in an Azure storage account
Directory
Create a directory client
Create a parent directory
Create a child directory
Create a file in a child directory
Get directory properties
Delete a child directory
Delete a parent folder
File
Create a file using a file client
Delete a file
Set access controls on a file
Get access controls on a file

Azure File samples


Authentication
Authenticate using a connection string
File service
Create file shares
Get properties
List shares
Delete shares
File share
Create a share client
Create a share
Create a share snapshot
Create a directory using a share client
Get properties of a share
Get root directory and list directories
Delete a share
Directory
Create a parent directory
Create a child directory
Create a file in a child directory
List directories and files
Delete a child folder
Delete a parent folder
File
Create a file client
Upload a file
Download a file
Get file properties
Delete a file

Queue samples
Authentication
Authenticate using a SAS token
Queue service
Create a queue
List queues
Delete queues
Queue
Create a queue client
Add messages to a queue
Message
Get the count of messages
Peek at messages
Receive messages
Update a message
Delete the first message
Clear all messages
Delete a queue

Table samples (v11)


Create table
Delete entity/table
Insert/merge/replace entity
Query entities
Query tables
Table ACL/properties
Update entity

Azure code sample libraries


To view the complete Java sample libraries, go to:
Azure blob code samples
Azure Data Lake code samples
Azure Files code samples
Azure queue code samples
You can browse and clone the GitHub repository for each library.

Getting started guides


Check out the following guides if you are looking for instructions on how to install and get started with the
Azure Storage Client Libraries.
Getting Started with Azure Blob Service in Java
Getting Started with Azure Queue Service in Java
Getting Started with Azure Table Service in Java
Getting Started with Azure File Service in Java

Next steps
For information on samples for other languages:
.NET: Azure Storage samples using .NET
Python: Azure Storage samples using Python
JavaScript/Node.js: Azure Storage samples using JavaScript
C++: Azure Storage samples using C++
All other languages: Azure Storage samples
Azure Storage samples using v12 Python client
libraries
11/25/2021 • 3 minutes to read • Edit Online

The following tables provide an overview of our samples repository and the scenarios covered in each sample.
Click on the links to view the corresponding sample code in GitHub.

NOTE
These samples use the latest Azure Storage Python v12 library. For legacy v2.1 code, see Azure Storage: Getting Started with
Azure Storage in Python in the GitHub repository.

Blob samples
Authentication
Create blob service client using a connection string
Create container client using a connection string
Create blob client using a connection string
Create blob service client using a shared access key
Create blob client from URL
Create blob client SAS URL
Create blob service client using ClientSecretCredential
Create SAS token
Create blob service client using Azure Identity
Create blob snapshot
Blob service
Get blob service account info
Set blob service properties
Get blob service properties
Get blob service stats
Create container using service client
List containers
Delete container using service client
Get container client
Get blob client
Container
Create container client from service
Create container client using SAS URL
Create container using container client
Get container properties
Delete container using container client
Acquire lease on container
Set container metadata
Set container access policy
Get container access policy
Generate SAS token
Create container client using SAS token
Upload blob to container
List blobs in container
Get blob client
Blob
Upload a blob
Download a blob
Delete blob
Undelete blob
Get blob properties
Delete multiple blobs
Copy blob from URL
Abort copy blob from URL
Acquire lease on blob

Data Lake Storage Gen2 samples


Data Lake service
Create Data Lake service client
File system
Create file system client
Delete file system
Directory
Create directory client
Get directory permissions
Set directory permissions
Rename directory
Get directory properties
Delete directory
File
Create file client
Create file
Get file permissions
Set file permissions
Append data to file
Read data from file

Azure Files samples


Authentication
Create share service client from connection string
Create share service client from account and access key
Generate SAS token
File service
Set service properties
Get service properties
Create shares using file service client
List shares using file service client
Delete shares using file service client
File share
Create share client from connection string
Get share client
Create share using file share client
Create share snapshot
Delete share using file share client
Set share quota
Set share metadata
Get share properties
Directory
Create directory
Upload file to directory
Delete file from directory
Delete directory
Create subdirectory
List directories and files
Delete subdirectory
Get subdirectory client
List files in directory
File
Create file client
Create file
Upload file
Download file
Delete file
Copy file from URL

Queue samples
Authentication
Authenticate using connection string
Create queue service client token
Create queue client from connection string
Generate queue client SAS token
Queue service
Create queue service client
Set queue service properties
Get queue service properties
Create queue using service client
Delete queue using service client
Queue
Create queue client
Set queue metadata
Get queue properties
Create queue using queue client
Delete queue using queue client
List queues
Get queue client
Message
Send messages
Receive messages
Peek message
Update message
Delete message
Clear messages
Set message access policy

Table samples (SDK v2.1)


Create table
Delete entity/table
Insert/merge/replace entity
Query entities
Query tables
Table ACL/properties
Update entity

Azure code sample libraries


To view the complete Python sample libraries, go to:
Azure blob code samples
Azure Data Lake code samples
Azure Files code samples
Azure queue code samples
You can browse and clone the GitHub repository for each library.

Getting started guides


Check out the following guides if you are looking for instructions on how to install and get started with the
Azure Storage client libraries.
Getting Started with Azure Blob Service in Python
Getting Started with Azure Queue Service in Python
Getting Started with Azure Table Service in Python
Getting Started with Azure File Service in Python

Next steps
For information on samples for other languages:
.NET: Azure Storage samples using .NET
Java: Azure Storage samples using Java
JavaScript/Node.js: Azure Storage samples using JavaScript
C++: Azure Storage samples using C++
All other languages: Azure Storage samples
Azure Storage samples using v12 JavaScript client
libraries
11/25/2021 • 2 minutes to read • Edit Online

The following tables provide an overview of our samples repository and the scenarios covered in each sample.
Click on the links to view the corresponding sample code in GitHub.

NOTE
These samples use the latest Azure Storage JavaScript v12 library. For legacy v11 code, see Getting Started with Azure
Blob Service in Node.js in the GitHub repository.

Blob samples
Authentication
Authenticate using connection string
Authenticate using SAS connection string
Authenticate using shared key credential
Authenticate using AnonymousCredential
Authenticate using Azure Active Directory
Authenticate using a proxy
Connect using a custom pipeline
Blob service
Create blob service client using a SAS URL
Container
Create a container
Create a container using a shared key credential
List containers
List containers using an iterator
List containers by page
Delete a container
Blob
Create a blob
List blobs
Download a blob
List blobs using an iterator
List blobs by page
List blobs by hierarchy
Listing blobs without using await
Create a blob snapshot
Download a blob snapshot
Parallel upload a stream to a blob
Parallel download block blob
Set the access tier on a blob
Troubleshooting
Trigger a recoverable error using a container client

Data Lake Storage Gen2 samples


Create a Data Lake service client
Create a file system
List file systems
Create a file
List paths in a file system
Download a file
Delete a file system

Azure Files samples


Authentication
Authenticate using a connection string
Authenticate using a shared key credential
Authenticate using AnonymousCredential
Connect using a custom pipeline
Connect using a proxy
Share
Create a share
List shares
List shares by page
Delete a share
Directory
Create a directory
List files and directories
List files and directories by page
File
Parallel upload a file
Parallel upload a readable stream
Parallel download a file
List file handles
List file handles by page

Queue samples
Authentication
Authenticate using a connection string
Authenticate using a shared key credential
Authenticate using AnonymousCredential
Connect using a custom pipeline
Connect using a proxy
Authenticate using Azure Active Directory
Queue service
Create a queue service client
Queue
Create a new queue
List queues
List queues by page
Delete a queue
Message
Send a message into a queue
Peek at messages
Receive messages
Delete messages

Table samples (v11)


Batch entities
Create table
Delete entity/table
Insert/merge/replace entity
List tables
Query entities
Query tables
Range query
Shared Access Signature (SAS)
Table ACL
Table Cross-Origin Resource Sharing (CORS) rules
Table properties
Table stats
Update entity

Azure code sample libraries


To view the complete JavaScript sample libraries, go to:
Azure Blob code samples
Azure Data Lake code samples
Azure Files code samples
Azure Queue code samples
You can browse and clone the GitHub repository for each library.

Getting started guides


Check out the following guides if you are looking for instructions on how to install and get started with the
Azure Storage Client Libraries.
Getting Started with Azure Blob Service in JavaScript
Getting Started with Azure Queue Service in JavaScript
Getting Started with Azure Table Service in JavaScript

Next steps
For information on samples for other languages:
.NET: Azure Storage samples using .NET
Java: Azure Storage samples using Java
Python: Azure Storage samples using Python
C++: Azure Storage samples using C++
All other languages: Azure Storage samples
Azure Storage samples using v12 C++ client
libraries
11/25/2021 • 2 minutes to read • Edit Online

The following table provides an overview of our samples repository and the scenarios covered in each sample.
Click on the links to view the corresponding sample code in GitHub.

NOTE
These samples use the latest Azure Storage C++ v12 library.

Blob samples
Authenticate using a connection string
Create a blob container
Get a blob client
Upload a blob
Set metadata on a blob
Get blob properties
Download a blob

Data Lake Storage Gen2 samples


Create a service client using a connection string
Create a file system client using a connection string
Create a file system
Create a directory
Create a file
Append data to a file
Flush file data
Read a file
List all file systems
Delete file system

Azure Files samples


Create a share client using a connection string
Create a file share
Get a file client
Upload a file
Set metadata on a file
Get file properties
Download a file

Azure code sample libraries


To view the complete C++ sample libraries, go to:
Azure Blob code samples
Azure Data Lake code samples
Azure Files code samples
You can browse and clone the GitHub repository for each library.

Getting started guides


Check out the following guides if you are looking for instructions on how to install and get started with the
Azure Storage Client Libraries.
Quickstart: Azure Blob storage library v12 - C++

Next steps
For information on samples for other languages:
.NET: Azure Storage samples using .NET
Java: Azure Storage samples using Java
Python: Azure Storage samples using Python
JavaScript/Node.js: Azure Storage samples using JavaScript
All other languages: Azure Storage samples
Azure Storage samples
11/25/2021 • 2 minutes to read • Edit Online

Use the links below to view and download Azure Storage sample code and applications.

Azure Code Samples library


The Azure Code Samples library includes samples for Azure Storage that you can download and run locally. The
Code Sample Library provides sample code in .zip format. Alternatively, you can browse and clone the GitHub
repository for each sample.

.NET samples
To explore the .NET samples, download the .NET Storage Client Library from NuGet. The .NET storage client
library is also available in the Azure SDK for .NET.
Azure Storage samples using .NET

Java samples
To explore the Java samples, download the Java Storage Client Library.
Azure Storage samples using Java

Python samples
To explore the Python samples, download the Python Storage Client Library.
Azure Storage samples using Python

Node.js samples
To explore the Node.js samples, download the Node.js Storage Client Library.
Azure Storage samples using JavaScript/Node.js

C++ samples
To explore the C++ samples, get the Azure Storage Client Library for C++ from GitHub.
Get started with Azure Blobs
Get started with Azure Data Lake
Get started with Azure Files

Azure CLI
To explore the Azure CLI samples, first Install the Azure CLI.
Get started with the Azure CLI
Azure Storage samples using the Azure CLI

API reference and source code


LANGUAGE    API REFERENCE                       SOURCE CODE

.NET        .NET Client Library Reference       Source code for the .NET storage client library

Java        Java Client Library Reference       Source code for the Java storage client library

Python      Python Client Library Reference     Source code for the Python storage client library

Node.js     Node.js Client Library Reference    Source code for the Node.js storage client library

C++         C++ Client Library Reference        Source code for the C++ storage client library

Azure CLI   Azure CLI Library Reference         Source code for the Azure CLI storage client library

Next steps
The following articles index each of the samples by service (blob, file, queue, table).
Azure Storage samples using .NET
Azure Storage samples using Java
Azure Storage samples using JavaScript
Azure Storage samples using Python
Azure Storage samples using C++
Azure Storage samples using the Azure CLI
Azure PowerShell samples for Azure Blob storage
11/25/2021 • 2 minutes to read • Edit Online

The following table includes links to PowerShell script samples that create and manage Azure Storage.

SCRIPT                                                                 DESCRIPTION

Storage accounts

Create a storage account and retrieve/rotate the access keys          Creates an Azure Storage account and retrieves and rotates one of its access keys.

Migrate Blobs across storage accounts using AzCopy on Windows         Migrate blobs across Azure Storage accounts using AzCopy on Windows.

Blob storage

Calculate the total size of a Blob storage container                  Calculates the total size of all the blobs in a container.

Calculate the size of a Blob storage container for billing purposes   Calculates the size of a container in Blob storage for the purpose of estimating billing costs.

Delete containers with a specific prefix                               Deletes containers starting with a specified string.
Azure CLI samples for Azure Blob storage
11/25/2021 • 2 minutes to read • Edit Online

The following table includes links to Bash scripts built using the Azure CLI that create and manage Azure
Storage.

SCRIPT                                                                 DESCRIPTION

Storage accounts

Create a storage account and retrieve/rotate the access keys          Creates an Azure Storage account and retrieves and rotates its access keys.

Blob storage

Calculate the total size of a Blob storage container                  Calculates the total size of all the blobs in a container.

Delete containers with a specific prefix                               Deletes containers starting with a specified string.
Azure Resource Graph sample queries for Azure
Storage
11/25/2021 • 3 minutes to read • Edit Online

This page is a collection of Azure Resource Graph sample queries for Azure Storage. For a complete list of Azure
Resource Graph samples, see Resource Graph samples by Category and Resource Graph samples by Table.

Sample queries
Find storage accounts with a specific case-insensitive tag on the resource group
Similar to the 'Find storage accounts with a specific case-sensitive tag on the resource group' query, but when
it's necessary to look for a case-insensitive tag name and tag value, use mv-expand with the bagexpansion
parameter. This query uses more quota than the original query, so use mv-expand only if necessary.

Resources
| where type =~ 'microsoft.storage/storageaccounts'
| join kind=inner (
ResourceContainers
| where type =~ 'microsoft.resources/subscriptions/resourcegroups'
| mv-expand bagexpansion=array tags
| where isnotempty(tags)
| where tags[0] =~ 'key1' and tags[1] =~ 'value1'
| project subscriptionId, resourceGroup)
on subscriptionId, resourceGroup
| project-away subscriptionId1, resourceGroup1

Azure CLI

az graph query -q "Resources | where type =~ 'microsoft.storage/storageaccounts' | join kind=inner (ResourceContainers | where type =~ 'microsoft.resources/subscriptions/resourcegroups' | mv-expand bagexpansion=array tags | where isnotempty(tags) | where tags[0] =~ 'key1' and tags[1] =~ 'value1' | project subscriptionId, resourceGroup) on subscriptionId, resourceGroup | project-away subscriptionId1, resourceGroup1"

Find storage accounts with a specific case-sensitive tag on the resource group
The following query uses an inner join to connect storage accounts with resource groups that have a
specified case-sensitive tag name and tag value.

Resources
| where type =~ 'microsoft.storage/storageaccounts'
| join kind=inner (
ResourceContainers
| where type =~ 'microsoft.resources/subscriptions/resourcegroups'
| where tags['Key1'] =~ 'Value1'
| project subscriptionId, resourceGroup)
on subscriptionId, resourceGroup
| project-away subscriptionId1, resourceGroup1
Azure CLI

az graph query -q "Resources | where type =~ 'microsoft.storage/storageaccounts' | join kind=inner (ResourceContainers | where type =~ 'microsoft.resources/subscriptions/resourcegroups' | where tags['Key1'] =~ 'Value1' | project subscriptionId, resourceGroup) on subscriptionId, resourceGroup | project-away subscriptionId1, resourceGroup1"

List all storage accounts with a specific tag value


Combine the filter functionality of the previous example and filter by the Azure resource type property. This
query also limits our search for specific types of Azure resources with a specific tag name and value.

Resources
| where type =~ 'Microsoft.Storage/storageAccounts'
| where tags['tag with a space']=='Custom value'

Azure CLI

az graph query -q "Resources | where type =~ 'Microsoft.Storage/storageAccounts' | where tags['tag with a space']=='Custom value'"

Show resources that contain storage


Instead of explicitly defining the type to match, this example query will find any Azure resource that contains
the word storage.

Resources
| where type contains 'storage' | distinct type

Azure CLI

az graph query -q "Resources | where type contains 'storage' | distinct type"

Next steps
Learn more about the query language.
Learn more about how to explore resources.
See samples of Starter language queries.
See samples of Advanced language queries.
Storage account overview
11/25/2021 • 6 minutes to read • Edit Online

An Azure storage account contains all of your Azure Storage data objects: blobs, file shares, queues, tables, and
disks. The storage account provides a unique namespace for your Azure Storage data that's accessible from
anywhere in the world over HTTP or HTTPS. Data in your storage account is durable and highly available, secure,
and massively scalable.
To learn how to create an Azure storage account, see Create a storage account.

Types of storage accounts


Azure Storage offers several types of storage accounts. Each type supports different features and has its own
pricing model. Consider these differences before you create a storage account to determine the type of account
that's best for your applications.
The following table describes the types of storage accounts recommended by Microsoft for most scenarios. All
of these use the Azure Resource Manager deployment model.

Standard general-purpose v2
  Supported storage services: Blob (including Data Lake Storage1), Queue, and Table storage, Azure Files
  Redundancy options: LRS/GRS/RA-GRS, ZRS/GZRS/RA-GZRS2
  Usage: Standard storage account type for blobs, file shares, queues, and tables. Recommended for most
  scenarios using Azure Storage. Note that if you want support for NFS file shares in Azure Files, use the
  premium file shares account type.

Premium block blobs3
  Supported storage services: Blob storage (including Data Lake Storage1)
  Redundancy options: LRS, ZRS2
  Usage: Premium storage account type for block blobs and append blobs. Recommended for scenarios with high
  transaction rates, or scenarios that use smaller objects or require consistently low storage latency.
  Learn more about example workloads.

Premium file shares3
  Supported storage services: Azure Files
  Redundancy options: LRS, ZRS2
  Usage: Premium storage account type for file shares only. Recommended for enterprise or high-performance
  scale applications. Use this account type if you want a storage account that supports both SMB and NFS
  file shares.

Premium page blobs3
  Supported storage services: Page blobs only
  Redundancy options: LRS
  Usage: Premium storage account type for page blobs only. Learn more about page blobs and sample use cases.

1 Data Lake Storage is a set of capabilities dedicated to big data analytics, built on Azure Blob storage. For more
information, see Introduction to Data Lake Storage Gen2 and Create a storage account to use with Data Lake
Storage Gen2.
2 Zone-redundant storage (ZRS) and geo-zone-redundant storage (GZRS/RA-GZRS) are available only for
standard general-purpose v2, premium block blobs, and premium file shares accounts in certain regions. For
more information, see Azure Storage redundancy.
3 Premium performance storage accounts use solid-state drives (SSDs) for low latency and high throughput.
Legacy storage accounts are also supported. For more information, see Legacy storage account types.

Storage account endpoints


A storage account provides a unique namespace in Azure for your data. Every object that you store in Azure
Storage has an address that includes your unique account name. The combination of the account name and the
Azure Storage service endpoint forms the endpoints for your storage account.
When naming your storage account, keep these rules in mind:
Storage account names must be between 3 and 24 characters in length and may contain numbers and
lowercase letters only.
Your storage account name must be unique within Azure. No two storage accounts can have the same name.
The following table lists the format of the endpoint for each of the Azure Storage services.

STORAGE SERVICE           ENDPOINT

Blob storage https://<storage-account>.blob.core.windows.net

Data Lake Storage Gen2 https://<storage-account>.dfs.core.windows.net

Azure Files https://<storage-account>.file.core.windows.net

Queue storage https://<storage-account>.queue.core.windows.net

Table storage https://<storage-account>.table.core.windows.net

Construct the URL for accessing an object in a storage account by appending the object's location in the storage
account to the endpoint. For example, the URL for a blob will be similar to:
http://*mystorageaccount*.blob.core.windows.net/*mycontainer*/*myblob*

You can also configure your storage account to use a custom domain for blobs. For more information, see
Configure a custom domain name for your Azure Storage account.

Migrate a storage account


The following table summarizes and points to guidance on how to move, upgrade, or migrate a storage account:

Move a storage account to a different subscription
  Azure Resource Manager provides options for moving a resource to a different subscription. For more
  information, see Move resources to a new resource group or subscription.

Move a storage account to a different resource group
  Azure Resource Manager provides options for moving a resource to a different resource group. For more
  information, see Move resources to a new resource group or subscription.

Move a storage account to a different region
  To move a storage account, create a copy of your storage account in another region. Then, move your data to
  that account by using AzCopy, or another tool of your choice. For more information, see Move an Azure
  Storage account to another region.

Upgrade to a general-purpose v2 storage account
  You can upgrade a general-purpose v1 storage account or Blob storage account to a general-purpose v2
  account. Note that this action cannot be undone. For more information, see Upgrade to a general-purpose v2
  storage account.

Migrate a classic storage account to Azure Resource Manager
  The Azure Resource Manager deployment model is superior to the classic deployment model in terms of
  functionality, scalability, and security. For more information about migrating a classic storage account to
  Azure Resource Manager, see the "Migration of storage accounts" section of Platform-supported migration of
  IaaS resources from classic to Azure Resource Manager.

Transfer data into a storage account


Microsoft provides services and utilities for importing your data from on-premises storage devices or third-
party cloud storage providers. Which solution you use depends on the quantity of data you're transferring. For
more information, see Azure Storage migration overview.

Storage account encryption


All data in your storage account is automatically encrypted on the service side. For more information about
encryption and key management, see Azure Storage encryption for data at rest.

Storage account billing


Azure Storage bills based on your storage account usage. All objects in a storage account are billed together as a
group. Storage costs are calculated according to the following factors:
Region refers to the geographical region in which your account is based.
Account type refers to the type of storage account you're using.
Access tier refers to the data usage pattern you've specified for your general-purpose v2 or Blob storage
account.
Capacity refers to how much of your storage account allotment you're using to store data.
Redundancy determines how many copies of your data are maintained at one time, and in what locations.
Transactions refer to all read and write operations to Azure Storage.
Data egress refers to any data transferred out of an Azure region. When the data in your storage account is
accessed by an application that isn't running in the same region, you're charged for data egress. For
information about using resource groups to group your data and services in the same region to limit egress
charges, see What is an Azure resource group?.
The Azure Storage pricing page provides detailed pricing information based on account type, storage capacity,
replication, and transactions. The Data Transfers pricing details provides detailed pricing information for data
egress. You can use the Azure Storage pricing calculator to help estimate your costs.
Azure services cost money. Azure Cost Management helps you set budgets and configure alerts to keep
spending under control. Analyze, manage, and optimize your Azure costs with Cost Management. To learn more,
see the quickstart on analyzing your costs.

Legacy storage account types


The following table describes the legacy storage account types. These account types are not recommended by
Microsoft, but may be used in certain scenarios:

Standard general-purpose v1
  Supported storage services: Blob, Queue, and Table storage, Azure Files
  Redundancy options: LRS/GRS/RA-GRS
  Deployment model: Resource Manager, Classic
  Usage: General-purpose v1 accounts may not have the latest features or the lowest per-gigabyte pricing.
  Consider using for these scenarios:
    - Your applications require the Azure classic deployment model.
    - Your applications are transaction-intensive or use significant geo-replication bandwidth, but don't
      require large capacity. In this case, general-purpose v1 may be the most economical choice.
    - You use a version of the Azure Storage REST API that is earlier than 2014-02-14 or a client library with
      a version lower than 4.x, and you can't upgrade your application.

Standard Blob storage
  Supported storage services: Blob storage (block blobs and append blobs only)
  Redundancy options: LRS/GRS/RA-GRS
  Deployment model: Resource Manager
  Usage: Microsoft recommends using standard general-purpose v2 accounts instead when possible.

Next steps
Create a storage account
Upgrade to a general-purpose v2 storage account
Recover a deleted storage account
Premium block blob storage accounts
11/25/2021 • 13 minutes to read • Edit Online

Premium block blob storage accounts make data available via high-performance hardware. Data is stored on
solid-state drives (SSDs) which are optimized for low latency. SSDs provide higher throughput compared to
traditional hard drives. File transfer is much faster because data is stored on instantly accessible memory chips.
All parts of a drive are accessible at once. By contrast, the performance of a hard disk drive (HDD) depends on the
proximity of data to the read/write heads.

High performance workloads


Premium block blob storage accounts are ideal for workloads that require fast and consistent response times
and/or have a high number of input/output operations per second (IOPS). Example workloads include:
Interactive workloads . Highly interactive and real-time applications must write data quickly. E-
commerce and mapping applications often require instant updates and user feedback. For example, in an
e-commerce application, less frequently viewed items are likely not cached. However, they must be
instantly displayed to the customer on demand. Interactive editing or multi-player online gaming
applications maintain a quality experience by providing real-time updates.
IoT/ streaming analytics . In an IoT scenario, lots of smaller write operations might be pushed to the
cloud every second. Large amounts of data might be taken in, aggregated for analysis purposes, and then
deleted almost immediately. The high ingestion capabilities of premium block blob storage make it
efficient for this type of workload.
Ar tificial intelligence/machine learning (AI/ML) . AI/ML deals with the consumption and processing
of different data types like visuals, speech, and text. This high-performance computing type of workload
deals with large amounts of data that requires rapid response and efficient ingestion times for data
analysis.

Cost effectiveness
Premium block blob storage accounts have a higher storage cost but a lower transaction cost as compared to
standard general-purpose v2 accounts. If your applications and workloads execute a large number of
transactions, premium block blob storage can be cost-effective, especially if the workload is write-heavy.
In most cases, workloads executing more than 35 to 40 transactions per second per terabyte (TPS/TB) are good
candidates for this type of account. For example, if your workload executes 500 million read operations and 100
million write operations in a month, then you can calculate the TPS/TB as follows:
Write transactions per second = 100,000,000 / (30 x 24 x 60 x 60) = 39 (rounded to the nearest whole
number)
Read transactions per second = 500,000,000 / (30 x 24 x 60 x 60) = 193 (rounded to the nearest whole
number)
Total transactions per second = 193 + 39 = 232
Assuming your account had 5 TB of data on average, then TPS/TB would be 232 / 5 = 46 (rounded to the nearest whole number).
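
If you want to rerun this arithmetic with your own numbers, a quick PowerShell sketch follows; the values are the example figures from this section, not pricing inputs.

# Example values from the scenario above; substitute your own monthly counts and capacity.
$readOpsPerMonth  = 500000000
$writeOpsPerMonth = 100000000
$capacityTB       = 5

$secondsPerMonth = 30 * 24 * 60 * 60                              # 2,592,000 seconds

$writeTps = [math]::Round($writeOpsPerMonth / $secondsPerMonth)   # ~39
$readTps  = [math]::Round($readOpsPerMonth / $secondsPerMonth)    # ~193
$totalTps = $writeTps + $readTps                                  # ~232

$tpsPerTB = [math]::Round($totalTps / $capacityTB)                # ~46
"TPS/TB: $tpsPerTB"
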
NOTE
Prices differ per operation and per region. Use the Azure pricing calculator to compare pricing between standard and
premium performance tiers.

The following table demonstrates the cost-effectiveness of premium block blob storage accounts. The numbers
in this table are based on an Azure Data Lake Storage Gen2-enabled premium block blob storage account (also
referred to as the premium tier for Azure Data Lake Storage). Each column represents the number of
transactions in a month. Each row represents the percentage of transactions that are read transactions. Each cell
in the table shows the percentage of cost reduction associated with a read transaction percentage and the
number of transactions executed.
For example, assuming that your account is in the East US 2 region, the number of transactions with your
account exceeds 90M, and 70% of those transactions are read transactions, premium block blob storage
accounts are more cost-effective.

NOTE
If you prefer to evaluate cost effectiveness based on the number of transactions per second for each TB of data, you can
use the column headings that appear at the bottom of the table.

Premium scenarios
This section contains real-world examples of how some of our Azure Storage partners use premium block blob
storage. Some of them also enable Azure Data Lake Storage Gen2 which introduces a hierarchical file structure
that can further enhance transaction performance in certain scenarios.

TIP
If you have an analytics use case, we highly recommend that you use Azure Data Lake Storage Gen2 along with a
premium block blob storage account.

This section contains the following examples:


Fast data hydration
Interactive editing applications
Data visualization software
E-commerce businesses
Interactive analytics
Data processing pipelines
Internet of Things (IoT)
Machine Learning
Real-time streaming analytics
Fast data hydration
Premium block blob storage can help you hydrate or bring up your environment quickly. In industries such as
banking, certain regulatory requirements might require companies to regularly tear down their environments,
and then bring them back up from scratch. The data used to hydrate their environment must load quickly.
Some of our partners store a copy of their MongoDB instance each week to a premium block blob storage
account. The system is then torn down. To get the system back online quickly again, the latest copy of the
MongoDB instance is read and loaded. For audit purposes, previous copies are maintained in cloud storage for a
period of time.
Interactive editing applications
In applications where multiple users edit the same content, the speed of updates becomes critical for a smooth
user experience.
Some of our partners develop video editing software. Any update that a user makes to a video is immediately
visible to other users. Users can focus on their tasks instead of waiting for content updates to appear. The low
latencies associated with premium block blob storage help to create this seamless and collaborative experience.
Data visualization software
Users can be far more productive with data visualization software if rendering time is quick.
We've seen companies in the mapping industry use mapping editors to detect issues with maps. These editors
use data that is generated from customer Global Positioning System (GPS) data. To create map overlays, the
editing software renders small sections of a map by quickly performing key lookups.
In one case, before using premium block blob storage, a partner used HBase clusters backed by standard
general-purpose v2 storage. However, it became expensive to keep large clusters running all of the time. This
partner decided to move away from this architecture, and instead used premium block blob storage for fast key
lookups. To create overlays, they used REST APIs to render tiles corresponding to GPS coordinates. The premium
block blob storage account provided them with a cost-effective solution, and latencies were far more
predictable.
E-commerce businesses
In addition to supporting their customer facing stores, e-commerce businesses might also provide data
warehousing and analytics solutions to internal teams. We've seen partners use premium block blob storage
accounts to support the low latency requirements by these data warehousing and analytics solutions. In one
case, a catalog team maintains a data warehousing application for data that pertains to offers, pricing, ship
methods, suppliers, inventory, and logistics. Information is queried, scanned, extracted, and mined for multiple
use cases. The team runs analytics on this data to provide various merchandising teams with relevant insights
and information.
Interactive analytics
In almost every industry, there is a need for enterprises to query and analyze their data interactively.
Data scientists, analysts, and developers can derive time-sensitive insights faster by running queries on data that
is stored in a premium block blob storage account. Executives can load their dashboards much more quickly
when the data that appears in those dashboards comes from a premium block blob storage account instead of a
standard general-purpose v2 account.
In one scenario, analysts needed to analyze telemetry data from millions of devices quickly to better understand
how their products are used, and to make product release decisions. Storing data in SQL databases is expensive.
To reduce cost, and to increase queryable surface area, they used an Azure Data Lake Storage Gen2 enabled
premium block blob storage account and performed computation in Presto and Spark to produce insights from
hive tables. This way, even rarely accessed data has all of the same power of compute as frequently accessed
data.
To close the gap between SQL's subsecond performance and Presto's input/output operations per second (IOPS)
to external storage, consistency and speed are critical, especially when dealing with small optimized row
columnar (ORC) files. A premium block blob storage account when used with Data Lake Storage Gen2, has
repeatedly demonstrated a 3X performance improvement over a standard general-purpose v2 account in this
scenario. Queries executed fast enough to feel local to the compute machine.
In another case, a partner stores and queries logs that are generated from their security solution. The logs are
generated by using Databricks, and then stored in a Data Lake Storage Gen2-enabled premium block blob
storage account. End users query and search this data by using Azure Data Explorer. They chose this type of
account to increase stability and increase the performance of interactive queries. They also set the life cycle
management Delete Action policy to a few days, which helps to reduce costs. This policy prevents them from
keeping the data forever. Instead, data is deleted once it is no longer needed.
Data processing pipelines
In almost every industry, there is a need for enterprises to process data. Raw data from multiple sources needs
to be cleansed and processed so that it becomes useful for downstream consumption in tools such as data
dashboards that help users make decisions.
While speed of processing is not always the top concern when processing data, some industries require it. For
example, companies in the financial services industry often need to process data reliably and in the quickest way
possible. To detect fraud, those companies must process inputs from various sources, identify risks to their
customers, and take swift action.
In some cases, we've seen partners use multiple standard storage accounts to store data from various sources.
Some of this data is then moved to a Data Lake Storage enabled premium block blob storage account where a
data processing application frequently reads newly arriving data. Directory listing calls in this account were
much faster and performed much more consistently than they would otherwise perform in a standard general-
purpose v2 account. The speed and consistency offered by the account ensured that new data was always made
available to downstream processing systems as quickly as possible. This helped them catch and act upon
potential security risks promptly.
Internet of Things (IoT)
IoT has become a significant part of our daily lives. IoT is used to track car movements, control lights, and
monitor our health. It also has industrial applications. For example, companies use IoT to enable their smart
factory projects, improve agricultural output, and on oil rigs for predictive maintenance. Premium block blob
storage accounts add significant value to these scenarios.
We have partners in the mining industry. They use a Data Lake Storage Gen2-enabled premium block blob
storage account along with HDInsight (HBase) to ingest time-series sensor data from multiple mining equipment
types, with a very taxing load profile. Premium block blob storage has helped to satisfy their need for high
sample rate ingestion. It's also cost effective, because premium block blob storage is cost optimized for
workloads that perform a large number of write transactions, and this workload generates a large number of
small write transactions (in the tens of thousands per second).
Machine Learning
In many cases, a lot of data has to be processed to train a machine learning model. To complete this processing,
compute machines must run for a long time. Compared to storage costs, compute costs usually account for a
much larger percentage of your bill, so reducing the amount of time that your compute machines run can lead
to significant savings. The low latency that you get by using premium block blob storage can significantly reduce
this time and your bill.
We have partners that deploy data processing pipelines to Spark clusters where they run machine learning
training and inference. They store Spark tables (Parquet files) and checkpoints to a premium block blob storage
account. Spark checkpoints can create a huge number of nested files and folders. Their directory listing
operations are fast because they combined the low latency of a premium block blob storage account with the
hierarchical data structure made available with Data Lake Storage Gen2.
We also have partners in the semiconductor industry with use cases that intersect IoT and machine learning. IoT
devices attached to machines in the manufacturing plant take images of semiconductor wafers and send those
to their account. Using deep learning inference, the system can inform the on-premises machines if there is an
issue with the production and whether an action needs to be taken. They must be able to load and process images
quickly and reliably. Using a Data Lake Storage Gen2-enabled premium block blob storage account helps to make
this possible.
Real-time streaming analytics
To support interactive analytics in near real time, a system must ingest and process large amounts of data, and
then make that data available to downstream systems. Using a Data Lake Storage Gen2 enabled premium block
blob storage account is perfect for these types of scenarios.
Companies in the media and entertainment industry can generate a large number of logs and telemetry data in
a short amount of time as they broadcast an event. Some of our partners rely on multiple content delivery
network (CDN) partners for streaming. They must make near real-time decisions about which CDN partners to
allocate traffic to. Therefore, data needs to be available for querying a few seconds after it is ingested. To
facilitate this quick decision making, they use data stored within premium block blob storage, and process that
data in Azure Data Explorer (ADX). All of the telemetry that is uploaded to storage is transformed in ADX, where
it can be stored in a familiar format that operators and executives can query quickly and reliably.
Data is uploaded into multiple premium performance Blob Storage accounts. Each account is connected to an
Event Grid and Event Hub resource. ADX retrieves the data from Blob Storage, performs any required
transformations to normalize the data (for example, decompressing zip files or converting from JSON to CSV).
Then, the data is made available for query through ADX and dashboards displayed in Grafana. Grafana
dashboards are used by operators, executives, and other users. The customer retains their original logs in
premium performance storage, or they copy them to a general-purpose v2 storage account where they can be
stored in the hot or cool access tier for long-term retention and future analysis.

Getting started with premium


First, check to make sure your favorite Blob Storage features are compatible with premium block blob storage
accounts, then create the account.

NOTE
You can't convert an existing standard general-purpose v2 storage account to a premium block blob storage account. To
migrate to a premium block blob storage account, you must create a premium block blob storage account, and migrate
the data to the new account.

Check for Blob Storage feature compatibility


Some Blob Storage features aren't yet supported or have partial support in premium block blob storage
accounts. Before choosing premium, review the Blob Storage feature support in Azure Storage accounts article
to determine whether the features that you intend to use are fully supported in your account. Feature support is
always expanding so make sure to periodically review this article for updates.
Create a new Storage account
To create a premium block blob storage account, make sure to choose the Premium performance option and
the Block blobs account type as you create the account.


If your storage account is going to be used for analytics, we highly recommend that you use Azure Data Lake
Storage Gen2 along with a premium block blob storage account. To unlock Azure Data Lake Storage Gen2
capabilities, enable the Hierarchical namespace setting in the Advanced tab of the Create storage account
page.
The following image shows this setting in the Create storage account page.
For complete guidance, see Create a storage account.
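As a minimal sketch (not a substitute for the full guidance linked above), the same account can be created with Azure PowerShell. The resource group, account name, and region below are placeholders, and -EnableHierarchicalNamespace is included only if you want the Data Lake Storage Gen2 capabilities discussed earlier.

# Create a premium block blob storage account (hierarchical namespace optional)
New-AzStorageAccount -ResourceGroupName "my-resource-group" `
    -Name "mypremiumblockblob" `
    -Location "eastus2" `
    -SkuName Premium_LRS `
    -Kind BlockBlobStorage `
    -EnableHierarchicalNamespace $true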

See also
Storage account overview
Introduction to Azure Data Lake Storage Gen2
Create a storage account to use with Azure Data Lake Storage Gen2
Premium tier for Azure Data Lake Storage
Authorize access to data in Azure Storage

Each time you access data in your storage account, your client application makes a request over HTTP/HTTPS to
Azure Storage. By default, every resource in Azure Storage is secured, and every request to a secure resource
must be authorized. Authorization ensures that the client application has the appropriate permissions to access
data in your storage account.
The following table describes the options that Azure Storage offers for authorizing access to data:

Azure artifact | Shared Key (storage account key) | Shared access signature (SAS) | Azure Active Directory (Azure AD) | On-premises Active Directory Domain Services | Anonymous public read access
Azure Blobs | Supported | Supported | Supported | Not supported | Supported
Azure Files (SMB) | Supported | Not supported | Supported, only with AAD Domain Services | Supported, credentials must be synced to Azure AD | Not supported
Azure Files (REST) | Supported | Supported | Not supported | Not supported | Not supported
Azure Queues | Supported | Supported | Supported | Not supported | Not supported
Azure Tables | Supported | Supported | Supported (preview) | Not supported | Not supported

Each authorization option is briefly described below:


Azure Active Director y (Azure AD) integration for authorizing requests to blob, queue, and table
resources. Microsoft recommends using Azure AD credentials to authorize requests to data when
possible for optimal security and ease of use. For more information about Azure AD integration, see the
articles for either blob, queue, or table resources.
You can use Azure role-based access control (Azure RBAC) to manage a security principal's permissions
to blob, queue, and table resources in a storage account. You can additionally use Azure attribute-based
access control (ABAC) to add conditions to Azure role assignments for blob resources. For more
information about RBAC, see What is Azure role-based access control (Azure RBAC)?. For more
information about ABAC, see What is Azure attribute-based access control (Azure ABAC)? (preview).
Azure Active Director y Domain Ser vices (Azure AD DS) authentication for Azure Files. Azure
Files supports identity-based authorization over Server Message Block (SMB) through Azure AD DS. You
can use Azure RBAC for fine-grained control over a client's access to Azure Files resources in a storage
account. For more information about Azure Files authentication using domain services, see the overview.
On-premises Active Director y Domain Ser vices (AD DS, or on-premises AD DS)
authentication for Azure Files. Azure Files supports identity-based authorization over SMB through AD
DS. Your AD DS environment can be hosted in on-premises machines or in Azure VMs. SMB access to
Files is supported using AD DS credentials from domain joined machines, either on-premises or in Azure.
You can use a combination of Azure RBAC for share level access control and NTFS DACLs for
directory/file level permission enforcement. For more information about Azure Files authentication using
domain services, see the overview.
Shared Key authorization for blobs, files, queues, and tables. A client using Shared Key passes a
header with every request that is signed using the storage account access key. For more information, see
Authorize with Shared Key.
You can disallow Shared Key authorization for a storage account. When Shared Key authorization is
disallowed, clients must use Azure AD to authorize requests for data in that storage account. For more
information, see Prevent Shared Key authorization for an Azure Storage account. A brief example of
disallowing Shared Key authorization appears after this list.
Shared access signatures for blobs, files, queues, and tables. Shared access signatures (SAS) provide
limited delegated access to resources in a storage account. Adding constraints on the time interval for
which the signature is valid or on permissions it grants provides flexibility in managing access. For more
information, see Using shared access signatures (SAS).
Anonymous public read access for containers and blobs. When anonymous access is configured, then
clients can read blob data without authorization. For more information, see Manage anonymous read
access to containers and blobs.
You can disallow anonymous public read access for a storage account. When anonymous public read
access is disallowed, then users cannot configure containers to enable anonymous access, and all
requests must be authorized. For more information, see Prevent anonymous public read access to
containers and blobs.
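As mentioned in the Shared Key option above, Shared Key authorization can be disallowed at the account level. A minimal Azure PowerShell sketch, with placeholder resource group and account names:

# Require Azure AD authorization by disallowing Shared Key access
Set-AzStorageAccount -ResourceGroupName "my-resource-group" `
    -Name "mystorageaccount" `
    -AllowSharedKeyAccess $false

After this change, requests signed with the account key, or with account and service SAS tokens derived from it, are refused, while Azure AD authorization (including user delegation SAS) continues to work.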

Next steps
Authorize access with Azure Active Directory to either blob, queue, or table resources.
Authorize with Shared Key
Grant limited access to Azure Storage resources using shared access signatures (SAS)
Authorize access to blobs using Azure Active
Directory

Azure Storage supports using Azure Active Directory (Azure AD) to authorize requests to blob data. With Azure
AD, you can use Azure role-based access control (Azure RBAC) to grant permissions to a security principal,
which may be a user, group, or application service principal. The security principal is authenticated by Azure AD
to return an OAuth 2.0 token. The token can then be used to authorize a request against the Blob service.
Authorizing requests against Azure Storage with Azure AD provides superior security and ease of use over
Shared Key authorization. Microsoft recommends using Azure AD authorization with your blob applications
when possible to assure access with minimum required privileges.
Authorization with Azure AD is available for all general-purpose and Blob storage accounts in all public regions
and national clouds. Only storage accounts created with the Azure Resource Manager deployment model
support Azure AD authorization.
Blob storage additionally supports creating shared access signatures (SAS) that are signed with Azure AD
credentials. For more information, see Grant limited access to data with shared access signatures.

Overview of Azure AD for blobs


When a security principal (a user, group, or application) attempts to access a blob resource, the request must be
authorized, unless it is a blob available for anonymous access. With Azure AD, access to a resource is a two-step
process. First, the security principal's identity is authenticated and an OAuth 2.0 token is returned. Next, the
token is passed as part of a request to the Blob service and used by the service to authorize access to the
specified resource.
The authentication step requires that an application request an OAuth 2.0 access token at runtime. If an
application is running from within an Azure entity such as an Azure VM, a virtual machine scale set, or an Azure
Functions app, it can use a managed identity to access blob data. To learn how to authorize requests made by a
managed identity to the Azure Blob service, see Authorize access to blob data with managed identities for Azure
resources.
The authorization step requires that one or more Azure RBAC roles be assigned to the security principal making
the request. For more information, see Assign Azure roles for access rights.
Native applications and web applications that make requests to the Azure Blob service can also authorize access
with Azure AD. To learn how to request an access token and use it to authorize requests for blob data, see
Authorize access to Azure Storage with Azure AD from an Azure Storage application.
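The same two-step pattern can be seen from an interactive Azure PowerShell session: signing in authenticates the security principal with Azure AD, and a storage context created with -UseConnectedAccount then sends the resulting OAuth token with each request. This is a minimal sketch with placeholder account and container names:

# Sign in with an Azure AD account (interactive)
Connect-AzAccount

# Create a storage context that uses the signed-in Azure AD credentials (OAuth)
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -UseConnectedAccount

# List blobs; this request is authorized with an OAuth 2.0 bearer token
Get-AzStorageBlob -Container "mycontainer" -Context $ctx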

Assign Azure roles for access rights


Azure Active Directory (Azure AD) authorizes access rights to secured resources through Azure RBAC. Azure
Storage defines a set of built-in RBAC roles that encompass common sets of permissions used to access blob
data. You can also define custom roles for access to blob data. To learn more about assigning Azure roles for
blob access, see Assign an Azure role for access to blob data.
An Azure AD security principal may be a user, a group, an application service principal, or a managed identity for
Azure resources. The RBAC roles that are assigned to a security principal determine the permissions that the
principal will have. To learn more about assigning Azure roles for blob access, see Assign an Azure role for
access to blob data.
In some cases you may need to enable fine-grained access to blob resources or to simplify permissions when
you have a large number of role assignments for a storage resource. You can use Azure attribute-based access
control (Azure ABAC) to configure conditions on role assignments. You can use conditions with a custom role or
select built-in roles. For more information about configuring conditions for Azure storage resources with ABAC,
see Authorize access to blobs using Azure role assignment conditions (preview). For details about supported
conditions for blob data operations, see Actions and attributes for Azure role assignment conditions in Azure
Storage (preview).
Resource scope
Before you assign an Azure RBAC role to a security principal, determine the scope of access that the security
principal should have. As a best practice, grant only the narrowest possible scope.
Azure RBAC roles defined at a broader scope are inherited by the resources beneath them.
You can scope access to Azure blob resources at the following levels, beginning with the narrowest scope:
An individual container. At this scope, a role assignment applies to all of the blobs in the container, as well
as container properties and metadata.
The storage account. At this scope, a role assignment applies to all containers and their blobs.
The resource group. At this scope, a role assignment applies to all of the containers in all of the storage
accounts in the resource group.
The subscription. At this scope, a role assignment applies to all of the containers in all of the storage
accounts in all of the resource groups in the subscription.
A management group. At this scope, a role assignment applies to all of the containers in all of the storage
accounts in all of the resource groups in all of the subscriptions in the management group.
For more information about scope for Azure RBAC role assignments, see Understand scope for Azure RBAC.
Azure built-in roles for blobs
Azure RBAC provides a number of built-in roles for authorizing access to blob data using Azure AD and OAuth.
Some examples of roles that provide permissions to data resources in Azure Storage include:
Storage Blob Data Owner: Use to set ownership and manage POSIX access control for Azure Data Lake
Storage Gen2. For more information, see Access control in Azure Data Lake Storage Gen2.
Storage Blob Data Contributor: Use to grant read/write/delete permissions to Blob storage resources.
Storage Blob Data Reader: Use to grant read-only permissions to Blob storage resources.
Storage Blob Delegator: Get a user delegation key to use to create a shared access signature that is signed
with Azure AD credentials for a container or blob.
To learn how to assign an Azure built-in role to a security principal, see Assign an Azure role for access to blob
data. To learn how to list Azure RBAC roles and their permissions, see List Azure role definitions.
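As a rough sketch of a role assignment, the following Azure PowerShell example grants Storage Blob Data Reader at the scope of a single container, in line with the narrowest-scope guidance above. The subscription ID, resource group, account, container, and user shown here are placeholders:

# Assign Storage Blob Data Reader scoped to one container
$scope = "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Storage/storageAccounts/mystorageaccount/blobServices/default/containers/mycontainer"

New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Storage Blob Data Reader" `
    -Scope $scope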
For more information about how built-in roles are defined for Azure Storage, see Understand role definitions.
For information about creating Azure custom roles, see Azure custom roles.
Only roles explicitly defined for data access permit a security principal to access blob data. Built-in roles such as
Owner , Contributor , and Storage Account Contributor permit a security principal to manage a storage
account, but do not provide access to the blob data within that account via Azure AD. However, if a role includes
Microsoft.Storage/storageAccounts/listKeys/action , then a user to whom that role is assigned can access
data in the storage account via Shared Key authorization with the account access keys. For more information,
see Choose how to authorize access to blob data in the Azure portal.
For detailed information about Azure built-in roles for Azure Storage for both the data services and the
management service, see the Storage section in Azure built-in roles for Azure RBAC. Additionally, for
information about the different types of roles that provide permissions in Azure, see Classic subscription
administrator roles, Azure roles, and Azure AD roles.

IMPORTANT
Azure role assignments may take up to 30 minutes to propagate.

Access permissions for data operations


For details on the permissions required to call specific Blob service operations, see Permissions for calling data
operations.

Access data with an Azure AD account


Access to blob data via the Azure portal, PowerShell, or Azure CLI can be authorized either by using the user's
Azure AD account or by using the account access keys (Shared Key authorization).
Data access from the Azure portal
The Azure portal can use either your Azure AD account or the account access keys to access blob data in an
Azure storage account. Which authorization scheme the Azure portal uses depends on the Azure roles that are
assigned to you.
When you attempt to access blob data, the Azure portal first checks whether you have been assigned an Azure
role with Microsoft.Storage/storageAccounts/listkeys/action . If you have been assigned a role with this
action, then the Azure portal uses the account key for accessing blob data via Shared Key authorization. If you
have not been assigned a role with this action, then the Azure portal attempts to access data using your Azure
AD account.
To access blob data from the Azure portal using your Azure AD account, you need permissions to access blob
data, and you also need permissions to navigate through the storage account resources in the Azure portal. The
built-in roles provided by Azure Storage grant access to blob resources, but they don't grant permissions to
storage account resources. For this reason, access to the portal also requires the assignment of an Azure
Resource Manager role such as the Reader role, scoped to the level of the storage account or higher. The Reader
role grants the most restricted permissions, but another Azure Resource Manager role that grants access to
storage account management resources is also acceptable. To learn more about how to assign permissions to
users for data access in the Azure portal with an Azure AD account, see Assign an Azure role for access to blob
data.
The Azure portal indicates which authorization scheme is in use when you navigate to a container. For more
information about data access in the portal, see Choose how to authorize access to blob data in the Azure portal.
Data access from PowerShell or Azure CLI
Azure CLI and PowerShell support signing in with Azure AD credentials. After you sign in, your session runs
under those credentials. To learn more, see one of the following articles:
Choose how to authorize access to blob data with Azure CLI
Run PowerShell commands with Azure AD credentials to access blob data

Feature support
This table shows how this feature is supported in your account and the impact on support when you enable
certain capabilities.
Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 1 | NFS 3.0 1 | SFTP 1
Standard general-purpose v2
Premium block blobs
1 Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP)
support all require a storage account with a hierarchical namespace enabled.

Next steps
Authorize access to data in Azure Storage
Assign an Azure role for access to blob data
Authorize access to blobs using Azure role
assignment conditions (preview)

Attribute-based access control (ABAC) is an authorization strategy that defines access levels based on attributes
associated with security principals, resources, requests, and the environment. Azure ABAC builds on Azure role-
based access control (Azure RBAC) by adding conditions to Azure role assignments in the existing identity and
access management (IAM) system. This preview includes support for role assignment conditions on Blobs and
Data Lake Storage Gen2. It enables you to author role-assignment conditions based on resource and request
attributes.

IMPORTANT
Azure ABAC and Azure role assignment conditions are currently in preview. This preview version is provided without a
service level agreement, and it's not recommended for production workloads. Certain features might not be supported or
might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

Overview of conditions in Azure Storage


Azure Storage enables the use of Azure Active Directory (Azure AD) to authorize requests to blob, queue, and
table resources. Azure AD authorizes access rights to secured resources by using Azure RBAC. Azure Storage
defines a set of Azure built-in roles that encompass common sets of permissions used to access blob and queue
data. You can also define custom roles with select set of permissions. Azure Storage supports role assignments
for storage accounts or blob containers.
However, in some cases you might need to enable finer-grained access to resources or simplify the hundreds of
role assignments for a storage resource. You can configure conditions on role assignments for DataActions to
achieve these goals. You can use conditions with a custom role or select built-in roles. Note that conditions are not
supported for management actions through the Storage resource provider.
Conditions in Azure Storage are supported for blobs. You can use conditions with accounts that have the
hierarchical namespace (HNS) feature enabled on them. Conditions are currently not supported for queue, table,
or file resources in Azure Storage.

Supported attributes and operations


In this preview, you can add conditions to built-in roles or custom roles. Using custom roles allows you to grant
only the essential permissions or data actions to your users. The built-in roles supported in this preview include
Storage Blob Data Reader, Storage Blob Data Contributor and Storage Blob Data Owner.
If you're working with conditions based on blob index tags, you should use the Storage Blob Data Owner role,
since permissions for tag operations are included in that role.

NOTE
Blob index tags are not supported for Data Lake Storage Gen2 storage accounts, which use a hierarchical namespace. You
should not author role-assignment conditions using index tags on storage accounts that have HNS enabled.

The Azure role assignment condition format allows use of @Resource or @Request attributes in the conditions. A
@Resource attribute refers to an existing attribute of a storage resource that is being accessed, such as a storage
account, a container, or a blob. A @Request attribute refers to an attribute included in a storage operation
request.
For the full list of attributes supported for each DataAction, please see the Actions and attributes for Azure role
assignment conditions in Azure Storage (preview).
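As an illustrative sketch (separate from the official examples later in this guide), a @Resource condition could limit blob reads to a single container. The container name is a placeholder, and the attribute is assumed to be addressed as Microsoft.Storage/storageAccounts/blobServices/containers:name, following the containers:name attribute described in the actions and attributes article:

(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'contoso-container'
 )
)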

See also
Security considerations for Azure role assignment conditions in Azure Storage (preview)
Actions and attributes for Azure role assignment conditions in Azure Storage (preview)
What is Azure attribute-based access control (Azure ABAC)? (preview)
Actions and attributes for Azure role assignment
conditions in Azure Storage (preview)

IMPORTANT
Azure ABAC and Azure role assignment conditions are currently in preview. This preview version is provided without a
service level agreement, and it's not recommended for production workloads. Certain features might not be supported or
might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

This article describes the supported attribute dictionaries that can be used in conditions on Azure role
assignments for each Azure Storage DataAction. For the list of Blob service operations that are affected by a
specific permission or DataAction, see Permissions for Blob service operations.
To understand the role assignment condition format, see Azure role assignment condition format and syntax.

Suboperations
Multiple Storage service operations can be associated with a single permission or DataAction. However, each of
these operations that are associated with the same permission might support different parameters.
Suboperations enable you to differentiate between service operations that require the same permission but
support different set of attributes for conditions. Thus, by using a suboperation, you can specify one condition
for access to a subset of operations that support a given parameter. Then, you can use another access condition
for operations with the same action that doesn't support that parameter.
For example, the Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write action is required for
over a dozen different service operations. Some of these operations can accept blob index tags as request
parameter, while others don't. For operations that accept blob index tags as a parameter, you can use blob index
tags in a Request condition. However, if such a condition is defined on the
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write action, all operations that don't accept
tags as a request parameter cannot evaluate this condition, and will fail the authorization access check.
In this case, the optional suboperation Blob.Write.WithTagHeaders can be used to apply a condition to only those
operations that support blob index tags as a request parameter.
Similarly, only select operations on the Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read
action can support blob index tags as a precondition for access. This subset of operations is identified by
the Blob.Read.WithTagConditions suboperation.

NOTE
Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob
index tags, you must use blob index tags with conditions. For more information, see Manage and find data on Azure Blob
Storage with Blob Index (preview).

In this preview, storage accounts support the following suboperations:

DataAction | Suboperation | Display name | Description
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read | Blob.Read.WithTagConditions | Blob read operations that support conditions on tags | Includes REST operations Get Blob, Get Blob Metadata, Get Blob Properties, Get Block List, Get Page Ranges, Query Blob Contents.
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write; Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action | Blob.Write.WithTagHeaders | Blob writes for content with optional tags | Includes REST operations Put Blob, Copy Blob, Copy Blob From URL and Put Block List.

Actions and suboperations


The following table lists the supported actions and suboperations for conditions in Azure Storage.

Display name | Description | DataAction
Delete a blob | DataAction for deleting blobs. | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete
Read a blob | DataAction for reading blobs. | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read
Read content from a blob with tag conditions | REST operations: Get Blob, Get Blob Metadata, Get Blob Properties, Get Block List, Get Page Ranges and Query Blob Contents. | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read (suboperation Blob.Read.WithTagConditions)
Write to a blob | DataAction for writing to blobs. | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write
Write to a blob with blob index tags | REST operations: Put Blob, Put Block List, Copy Blob and Copy Blob From URL. | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write (suboperation Blob.Write.WithTagHeaders)
Create a blob or snapshot, or append data | DataAction for creating blobs. | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action
Write content to a blob with blob index tags | REST operations: Put Blob, Put Block List, Copy Blob and Copy Blob From URL. | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action (suboperation Blob.Write.WithTagHeaders)
Delete a version of a blob | DataAction for deleting a version of a blob. | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action
Changes ownership of a blob | DataAction for changing ownership of a blob. | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/manageOwnership/action
Modify permissions of a blob | DataAction for modifying permissions of a blob. | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/modifyPermissions/action
Rename file or directory | DataAction for renaming files or directories. | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action
Permanently delete a blob overriding soft-delete | DataAction for permanently deleting a blob overriding soft-delete. | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/permanentDelete/action
All data operations for accounts with HNS | DataAction for all data operations on storage accounts with HNS. | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action
Read blob index tags | DataAction for reading blob index tags. | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read
Write blob index tags | DataAction for writing blob index tags. | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write

Attributes
The following table lists the descriptions for the supported attributes for conditions in Azure Storage.

Display name | Description | Attribute
Container name | Name of a storage container or file system. Use when you want to check the container name. | containers:name
Blob path | Path of a virtual directory, blob, folder or file resource. Use when you want to check the blob name or folders in a blob path. | blobs:path
Blob index tags [Keys] | Index tags on a blob resource. Arbitrary user-defined key-value properties that you can store alongside a blob resource. Use when you want to check the key in blob index tags. | tags&$keys$&
Blob index tags [Values in key] | Index tags on a blob resource. Arbitrary user-defined key-value properties that you can store alongside a blob resource. Use when you want to check both the key (case-sensitive) and value in blob index tags. | tags:keyname<$key_case_sensitive$>

NOTE
Attributes and values listed are considered case-insensitive, unless stated otherwise.
NOTE
When specifying conditions for Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path attribute,
the values shouldn't include the container name or a preceding '/' character. Use the path characters without any URL
encoding.

NOTE
Blob index tags are not supported for Data Lake Storage Gen2 storage accounts, which have a hierarchical namespace
(HNS). You should not author role-assignment conditions using index tags on storage accounts that have HNS enabled.

Attributes available for each action


The following table lists which attributes you can use in your condition expressions depending on the action you
target. If you select multiple actions for a single condition, there might be fewer attributes to choose from for
your condition because the attributes must be available across the selected actions.

DataAction (and suboperation, if any) | Attributes (type, applies to)
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete | containers:name (string, ResourceAttributeOnly); blobs:path (string, ResourceAttributeOnly)
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read | containers:name (string, ResourceAttributeOnly); blobs:path (string, ResourceAttributeOnly)
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read (suboperation Blob.Read.WithTagConditions) | containers:name (string, ResourceAttributeOnly); blobs:path (string, ResourceAttributeOnly); tags (dictionaryOfString, ResourceAttributeOnly)
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write | containers:name (string, ResourceAttributeOnly); blobs:path (string, ResourceAttributeOnly)
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write (suboperation Blob.Write.WithTagHeaders) | containers:name (string, ResourceAttributeOnly); blobs:path (string, ResourceAttributeOnly); tags (dictionaryOfString, RequestAttributeOnly)
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action | containers:name (string, ResourceAttributeOnly); blobs:path (string, ResourceAttributeOnly)
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action (suboperation Blob.Write.WithTagHeaders) | containers:name (string, ResourceAttributeOnly); blobs:path (string, ResourceAttributeOnly); tags (dictionaryOfString, RequestAttributeOnly)
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action | containers:name (string, ResourceAttributeOnly); blobs:path (string, ResourceAttributeOnly)
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/manageOwnership/action | containers:name (string, ResourceAttributeOnly); blobs:path (string, ResourceAttributeOnly)
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/modifyPermissions/action | containers:name (string, ResourceAttributeOnly); blobs:path (string, ResourceAttributeOnly)
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action | containers:name (string, ResourceAttributeOnly); blobs:path (string, ResourceAttributeOnly)
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/permanentDelete/action | containers:name (string, ResourceAttributeOnly); blobs:path (string, ResourceAttributeOnly)
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action | containers:name (string, ResourceAttributeOnly); blobs:path (string, ResourceAttributeOnly)
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read | containers:name (string, ResourceAttributeOnly); blobs:path (string, ResourceAttributeOnly); tags (dictionaryOfString, ResourceAttributeOnly)
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write | containers:name (string, ResourceAttributeOnly); blobs:path (string, ResourceAttributeOnly); tags (dictionaryOfString, RequestAttributeOnly)

See also
Example Azure role assignment conditions (preview)
Azure role assignment condition format and syntax (preview)
What is Azure attribute-based access control (Azure ABAC)? (preview)
Security considerations for Azure role assignment
conditions in Azure Storage (preview)

IMPORTANT
Azure ABAC and Azure role assignment conditions are currently in preview. This preview version is provided without a
service level agreement, and it is not recommended for production workloads. Certain features might not be supported
or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure
Previews.

To fully secure resources using Azure attribute-based access control (Azure ABAC), you must also protect the
attributes used in the Azure role assignment conditions. For instance, if your condition is based on a file path,
then you should beware that access can be compromised if the principal has an unrestricted permission to
rename a file path.
This article describes security considerations that you should factor into your role assignment conditions.

Use of other authorization mechanisms


Role assignment conditions are only evaluated when using Azure RBAC for authorization. These conditions can
be bypassed if you allow access using alternate authorization methods:
Shared Key authorization
Account shared access signature (SAS)
Service SAS.
Similarly, conditions are not evaluated when access is granted using access control lists (ACLs) in storage
accounts with a hierarchical namespace (HNS).
You can prevent shared key, account-level SAS, and service-level SAS authorization by disabling shared key
authorization for your storage account. Since user delegation SAS depends on Azure RBAC, role-assignment
conditions are evaluated when using this method of authorization.

NOTE
Role-assignment conditions are not evaluated when access is granted using ACLs with Data Lake Storage Gen2. In this
case, you must plan the scope of access so it does not overlap with that granted through ACLs.

Securing storage attributes used in conditions


Blob path
When using blob path as a @Resource attribute for a condition, you should also prevent users from renaming a
blob to get access to a file when using accounts that have a hierarchical namespace. For example, if you want to
author a condition based on blob path, you should also restrict the user's access to the following actions:
Action | Description
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action | This action allows customers to rename a file using the Path Create API.
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action | This action allows access to various file system and path operations.

Blob index tags


Blob index tags are used as free-form attributes for conditions in storage. If you author any access conditions by
using these tags, you must also protect the tags themselves. Specifically, the
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write DataAction allows users to modify
the tags on a storage object. You can restrict this action to prevent users from manipulating a tag key or value to
gain access to unauthorized objects.
In addition, if blob index tags are used in conditions, data may be vulnerable if the data and the associated index
tags are updated in separate operations. You can use @Request conditions on blob write operations to require
that index tags be set in the same update operation. This approach can help secure data from the instant it's
written to storage.
Tags on copied blobs
By default, blob index tags are not copied from a source blob to the destination when you use Copy Blob API or
any of its variants. To preserve the scope of access for blob upon copy, you should copy the tags as well.
Tags on snapshots
Tags on blob snapshots cannot be modified. This implies that you must update the tags on a blob before taking
the snapshot. If you modify the tags on a base blob, the tags on its snapshot will continue to have their previous
value.
If a tag on a base blob is modified after a snapshot is taken, the scope of access may be different for the base
blob and the snapshot.
Tags on blob versions
Blob index tags aren't copied when a blob version is created through the Put Blob, Put Block List or Copy Blob
APIs. You can specify tags through the header for these APIs.
Tags can be set individually on a current base blob and on each blob version. When you modify tags on a base
blob, the tags on previous versions are not updated. If you want to change the scope of access for a blob and all
its versions using tags, you must update the tags on each version.
Querying and filtering limitations for versions and snapshots
When using tags to query and filter blobs in a container, only the base blobs are included in the response. Blob
versions or snapshots with the requested keys and values aren't included.

Roles and permissions


If you're using role assignment conditions for Azure built-in roles, you should carefully review all the
permissions that the role grants to a principal.
Inherited role assignments
Role assignments can be configured for a management group, subscription, resource group, storage account, or
a container, and are inherited at each level in the stated order. Azure RBAC has an additive model, so the effective
permissions are the sum of role assignments at each level. If a principal has the same permission assigned to
them through multiple role assignments, then access for an operation using that permission is evaluated
separately for each assignment at every level.
Since conditions are implemented as conditions on role assignments, any unconditional role assignment can
allow users to bypass the condition. Let's say you assign the Storage Blob Data Contributor role to a user for a
storage account and on a subscription, but add a condition only to the assignment for the storage account. In
this case, the user will have unrestricted access to the storage account through the role assignment at the
subscription level.
That's why you should apply conditions consistently for all role assignments across a resource hierarchy.

Other considerations
Condition operations that write blobs
Many operations that write blobs require either the
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write or the
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action permission. Built-in roles, such as
Storage Blob Data Owner and Storage Blob Data Contributor grant both permissions to a security principal.
When you define a role assignment condition on these roles, you should use identical conditions on both these
permissions to ensure consistent access restrictions for write operations.
Behavior for Copy Blob and Copy Blob from URL
For the Copy Blob and Copy Blob From URL operations, @Request conditions using blob path as an attribute on the
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write action and its suboperations are
evaluated only for the destination blob.
For conditions on the source blob, @Resource conditions on the
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read action are evaluated.
Behavior for Get Page Ranges
For the Get Page Ranges operation, @Resource conditions using
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags as an attribute on the
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read action and its suboperations are
evaluated only for the destination blob.
Conditions don't apply for access to the blob specified by the prevsnapshot URI parameter in the API.

See also
Authorize access to blobs using Azure role assignment conditions (preview)
Actions and attributes for Azure role assignment conditions in Azure Storage (preview)
What is Azure attribute-based access control (Azure ABAC)? (preview)
Example Azure role assignment conditions (preview)

IMPORTANT
Azure ABAC and Azure role assignment conditions are currently in preview. This preview version is provided without a
service level agreement, and it's not recommended for production workloads. Certain features might not be supported or
might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

This article lists some examples of role assignment conditions.

Prerequisites
For information about the prerequisites to add or edit role assignment conditions, see Conditions prerequisites.

Example 1: Read access to blobs with a tag


This condition allows users to read blobs with a blob index tag key of Project and a tag value of Cascade.
Attempts to access blobs without this key-value tag will not be allowed.

TIP
Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob
index tags, you must use blob index tags with conditions. For more information, see Manage and find Azure Blob data
with blob index tags (preview).

(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.Read.WithTagConditions'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
 )
)

Azure portal
Here are the settings to add this condition using the Azure portal.

Condition #1 | Setting
Actions | Read content from a blob with tag conditions
Attribute source | Resource
Attribute | Blob index tags [Values in key]
Key | {keyName}
Operator | StringEquals
Value | {keyValue}

Azure PowerShell
Here's how to add this condition using Azure PowerShell.

$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.Read.WithTagConditions'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sensitive`$>] StringEquals 'Cascade'))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
$testRa.Condition = $condition
$testRa.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $testRa -PassThru
Here's how to test this condition.

$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
Get-AzStorageBlob -Container <containerName> -Blob <blobName> -Context $bearerCtx

Example 2: New blobs must include a tag


This condition requires that any new blobs must include a blob index tag key of Project and a tag value of
Cascade.

TIP
Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob
index tags, you must use blob index tags with conditions. For more information, see Manage and find Azure Blob data
with blob index tags (preview).

There are two permissions that allow you to create new blobs, so you must target both. You must add this
condition to any role assignments that include one of the following permissions.
/blobs/write (create or update)
/blobs/add/action (create)

(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
  AND
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
 )
 OR
 (
  @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
 )
)

Azure portal
Here are the settings to add this condition using the Azure portal.

Condition #1 | Setting
Actions | Write to a blob with blob index tags; Write content to a blob with blob index tags
Attribute source | Request
Attribute | Blob index tags [Values in key]
Key | {keyName}
Operator | StringEquals
Value | {keyValue}

Azure PowerShell
Here's how to add this condition using Azure PowerShell.

$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})) OR (@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sensitive`$>] StringEquals 'Cascade'))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
$testRa.Condition = $condition
$testRa.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $testRa -PassThru

Here's how to test this condition.


$localSrcFile = # path to an example file, can be an empty txt
$ungrantedTag = @{'Project'='Baker'}
$grantedTag = @{'Project'='Cascade'}
# Get new context for request
$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
# try ungranted tags
$content = Set-AzStorageBlobContent -File $localSrcFile -Container example2 -Blob "Example2.txt" -Tag $ungrantedTag -Context $bearerCtx
# try granted tags
$content = Set-AzStorageBlobContent -File $localSrcFile -Container example2 -Blob "Example2.txt" -Tag $grantedTag -Context $bearerCtx

Example 3: Existing blobs must have tag keys


This condition requires that any existing blobs be tagged with at least one of the allowed blob index tag keys:
Project or Program. This condition is useful for adding governance to existing blobs.

TIP
Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob
index tags, you must use blob index tags with conditions. For more information, see Manage and find Azure Blob data
with blob index tags (preview).

There are two permissions that allow you to update tags on existing blobs, so you must target both. You must
add this condition to any role assignments that include one of the following permissions.
/blobs/write (update or create, cannot exclude create)
/blobs/tags/write

(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND
SubOperationMatches{'Blob.Write.WithTagHeaders'})
AND
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})
)
OR
(
@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&$keys$&]
ForAllOfAnyValues:StringEquals {'Project', 'Program'}
)
)

Azure portal
Here are the settings to add this condition using the Azure portal.
Condition #1 | Setting

Actions Write to a blob with blob index tags


Write blob index tags

Attribute source Request

Attribute Blob index tags [Keys]

Operator ForAllOfAnyValues:StringEquals

Value {keyName1}
{keyName2}

Azure PowerShell
Here's how to add this condition using Azure PowerShell.

$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND


SubOperationMatches{'Blob.Write.WithTagHeaders'}) AND !
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})) OR
(@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&`$keys`$&]
ForAllOfAnyValues:StringEquals {'Project', 'Program'}))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
$testRa.Condition = $condition
$testRa.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $testRa -PassThru

Here's how to test this condition.


$localSrcFile = # path to an example file, can be an empty txt
$ungrantedTag = @{'Mode'='Baker'}
$grantedTag = @{'Program'='Alpine';'Project'='Cascade'}
# Get new context for request
$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
# try ungranted tags
$content = Set-AzStorageBlobContent -File $localSrcFile -Container example3 -Blob "Example3.txt" -Tag $ungrantedTag -Context $bearerCtx
# try granted tags
$content = Set-AzStorageBlobContent -File $localSrcFile -Container example3 -Blob "Example3.txt" -Tag $grantedTag -Context $bearerCtx

Example 4: Existing blobs must have a tag key and values


This condition requires that any existing blobs have a blob index tag key of Project and tag values of Cascade,
Baker, or Skagit. This condition is useful for adding governance to existing blobs.

TIP
Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob
index tags, you must use blob index tags with conditions. For more information, see Manage and find Azure Blob data
with blob index tags (preview).

There are two permissions that allow you to update tags on existing blobs, so you must target both. You must
add this condition to any role assignments that include one of the following permissions.
/blobs/write (update or create, cannot exclude create)
/blobs/tags/write

(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND
SubOperationMatches{'Blob.Write.WithTagHeaders'})
AND
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})
)
OR
(
@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&$keys$&]
ForAnyOfAnyValues:StringEquals {'Project'}
AND

@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>]
ForAllOfAnyValues:StringEquals {'Cascade', 'Baker', 'Skagit'}
)
)

Azure portal
Here are the settings to add this condition using the Azure portal.

Condition #1 | Setting

Actions Write to a blob with blob index tags


Write blob index tags

Attribute source Request

Attribute Blob index tags [Keys]

Operator ForAnyOfAnyValues:StringEquals

Value {keyName}

Operator And

Expression 2

Attribute source Request

Attribute Blob index tags [Values in key]

Key {keyName}

Operator ForAllOfAnyValues:StringEquals

Value {keyValue1}
{keyValue2}
{keyValue3}
Azure PowerShell
Here's how to add this condition using Azure PowerShell.

$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND


SubOperationMatches{'Blob.Write.WithTagHeaders'}) AND !
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})) OR
(@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&`$keys`$&]
ForAnyOfAnyValues:StringEquals {'Project'} AND
@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sensitive`$
>] ForAllOfAnyValues:StringEquals {'Cascade', 'Baker', 'Skagit'}))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
$testRa.Condition = $condition
$testRa.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $testRa -PassThru

Here's how to test this condition.


$localSrcFile = <pathToLocalFile>
$ungrantedTag = @{'Project'='Alpine'}
$grantedTag1 = @{'Project'='Cascade'}
$grantedTag2 = @{'Project'='Baker'}
$grantedTag3 = @{'Project'='Skagit'}
# Get new context for request
$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
# try ungranted tags
Set-AzStorageBlobTag -Container example4 -Blob "Example4.txt" -Tag $ungrantedTag -Context $bearerCtx
# try granted tags
Set-AzStorageBlobTag -Container example4 -Blob "Example4.txt" -Tag $grantedTag1 -Context $bearerCtx
Set-AzStorageBlobTag -Container example4 -Blob "Example4.txt" -Tag $grantedTag2 -Context $bearerCtx
Set-AzStorageBlobTag -Container example4 -Blob "Example4.txt" -Tag $grantedTag3 -Context $bearerCtx

Example 5: Read, write, or delete blobs in named containers


This condition allows users to read, write, or delete blobs in storage containers named blobs-example-container.
This condition is useful for sharing specific storage containers with other users in a subscription.
There are four permissions for read, write, and delete of existing blobs, so you must target all permissions. You
must add this condition to any role assignments that include one of the following permissions.
/blobs/delete
/blobs/read
/blobs/write (update or create)
/blobs/add/action (create)
Suboperations are not used in this condition because the subOperation is needed only when conditions are
authored based on tags.

(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'})
AND
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
AND
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})
AND
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})
)
OR
(
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-
container'
)
)

Azure portal
Here are the settings to add this condition using the Azure portal.
Condition #1 | Setting

Actions Delete a blob


Read a blob
Write to a blob
Create a blob or snapshot, or append data

Attribute source Resource

Attribute Container name

Operator StringEquals

Value {containerName}

Azure PowerShell
Here's how to add this condition using Azure PowerShell.
$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'})
AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}) AND !
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'}) AND !
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-
container'))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
$testRa.Condition = $condition
$testRa.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $testRa -PassThru

Here's how to test this condition.

$localSrcFile = <pathToLocalFile>
$grantedContainer = "blobs-example-container"
$ungrantedContainer = "ungranted"
# Get new context for request
$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
# Ungranted Container actions
$content = Set-AzStorageBlobContent -File $localSrcFile -Container $ungrantedContainer -Blob "Example5.txt" -Context $bearerCtx
$content = Get-AzStorageBlobContent -Container $ungrantedContainer -Blob "Example5.txt" -Context $bearerCtx
$content = Remove-AzStorageBlob -Container $ungrantedContainer -Blob "Example5.txt" -Context $bearerCtx
# Granted Container actions
$content = Set-AzStorageBlobContent -File $localSrcFile -Container $grantedContainer -Blob "Example5.txt" -Context $bearerCtx
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "Example5.txt" -Context $bearerCtx
$content = Remove-AzStorageBlob -Container $grantedContainer -Blob "Example5.txt" -Context $bearerCtx

Example 6: Read access to blobs in named containers with a path


This condition allows read access to storage containers named blobs-example-container with a blob path of
readonly/*. This condition is useful for sharing specific parts of storage containers for read access with other
users in the subscription.
You must add this condition to any role assignments that include the following permission.
/blobs/read
(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
)
OR
(
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-
example-container'
AND
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike
'readonly/*'
)
)

Azure portal
Here are the settings to add this condition using the Azure portal.

Condition #1 | Setting

Actions Read a blob

Attribute source Resource

Attribute Container name

Operator StringEquals

Value {containerName}

Expression 2

Operator And

Attribute source Resource

Attribute Blob path

Operator StringLike

Value {pathString}
Azure PowerShell
Here's how to add this condition using Azure PowerShell.

$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-
container' AND @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike
'readonly/*'))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
$testRa.Condition = $condition
$testRa.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $testRa -PassThru

Here's how to test this condition.

$grantedContainer = "blobs-example-container"
# Get new context for request
$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
# Try to get ungranted blob
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "Ungranted.txt" -Context $bearerCtx
# Try to get granted blob
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "readonly/Example6.txt" -Context $bearerCtx

Example 7: Write access to blobs in named containers with a path


This condition allows a partner (an Azure AD guest user) to drop files into storage containers named
Contosocorp with a path of uploads/contoso/*. This condition is useful for allowing other users to put data in
storage containers.
You must add this condition to any role assignments that include the following permissions.
/blobs/write (create or update)
/blobs/add/action (create)

(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})
AND
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})
)
OR
(
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'contosocorp'
AND
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike
'uploads/contoso/*'
)
)

Azure portal
Here are the settings to add this condition using the Azure portal.

Condition #1 | Setting

Actions Write to a blob


Create a blob or snapshot, or append data

Attribute source Resource

Attribute Container name

Operator StringEquals

Value {containerName}

Expression 2

Operator And

Attribute source Resource



Attribute Blob path

Operator StringLike

Value {pathString}

Azure PowerShell
Here's how to add this condition using Azure PowerShell.

$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})
AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'contosocorp' AND
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike
'uploads/contoso/*'))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
$testRa.Condition = $condition
$testRa.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $testRa -PassThru

Here's how to test this condition.


$grantedContainer = "contosocorp"
$localSrcFile = <pathToLocalFile>
$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
# Try to set ungranted blob
$content = Set-AzStorageBlobContent -Container $grantedContainer -Blob "Example7.txt" -Context $bearerCtx -File $localSrcFile
# Try to set granted blob
$content = Set-AzStorageBlobContent -Container $grantedContainer -Blob "uploads/contoso/Example7.txt" -Context $bearerCtx -File $localSrcFile

Example 8: Read access to blobs with a tag and a path


This condition allows a user to read blobs with a blob index tag key of Program, a tag value of Alpine, and a blob
path of logs*. The blob path of logs* also includes the blob name.

TIP
Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob
index tags, you must use blob index tags with conditions. For more information, see Manage and find Azure Blob data
with blob index tags (preview).

You must add this condition to any role assignments that include the following permission.
/blobs/read
(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}
AND
SubOperationMatches{'Blob.Read.WithTagConditions'})
)
OR
(

@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Program<$key_case_sensitive$>
] StringEquals 'Alpine'
)
)
AND
(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
)
OR
(
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'logs*'
)
)

Azure portal
Here are the settings to add this condition using the Azure portal.

Condition #1 | Setting

Actions Read content from a blob with tag conditions

Attribute source Resource

Attribute Blob index tags [Values in key]

Key {keyName}

Operator StringEquals

Value {keyValue}
Condition #2 | Setting

Actions Read a blob

Attribute source Resource

Attribute Blob path

Operator StringLike

Value {pathString}
Azure PowerShell
Here's how to add this condition using Azure PowerShell.

$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND


SubOperationMatches{'Blob.Read.WithTagConditions'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Program<`$key_case_sensitive
`$>] StringEquals 'Alpine')) AND ((!
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'logs*'))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
$testRa.Condition = $condition
$testRa.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $testRa -PassThru

Here's how to test this condition.

$grantedContainer = "contosocorp"
# Get new context for request
$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
# Try to get ungranted blobs
# Wrong name but right tags
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "AlpineFile.txt" -Context $bearerCtx
# Right name but wrong tags
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "logsAlpine.txt" -Context $bearerCtx
# Try to get granted blob
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "logs/AlpineFile.txt" -Context $bearerCtx

Example 9: Allow read and write access to blobs based on tags and
custom security attributes
This condition allows read and write access to blobs if the user has a custom security attribute that matches the
blob index tag.
For example, if Brenda has the attribute Project=Baker , she can only read and write blobs with the
Project=Baker blob index tag. Similarly, Chandra can only read and write blobs with Project=Cascade .

For more information, see Allow read access to blobs based on tags and custom security attributes.

(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND
SubOperationMatches{'Blob.Read.WithTagConditions'})
)
OR
(
@Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project] StringEquals
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>
]
)
)
AND
(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND
SubOperationMatches{'Blob.Write.WithTagHeaders'})
AND
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'} AND
SubOperationMatches{'Blob.Write.WithTagHeaders'})
)
OR
(
@Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project] StringEquals
@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>]
)
)

Azure portal
Here are the settings to add this condition using the Azure portal.

Condition #1 | Setting

Actions Read content from a blob with tag conditions

Attribute source Principal

Attribute <attributeset>_<key>

Operator StringEquals

Option Attribute

Attribute source Resource

Attribute Blob index tags [Values in key]

Key <key>

Condition #2 | Setting

Actions Write to a blob with blob index tags


Write to a blob with blob index tags

Attribute source Principal

Attribute <attributeset>_<key>

Operator StringEquals

Option Attribute

Attribute source Request

Attribute Blob index tags [Values in key]

Key <key>

Example 10: Allow read access to blobs based on tags and multi-value
custom security attributes
This condition allows read access to blobs if the user has a custom security attribute with any values that
matches the blob index tag.
For example, if Chandra has the Project attribute with the values Baker and Cascade, she can only read blobs
with the Project=Baker or Project=Cascade blob index tag.
For more information, see Allow read access to blobs based on tags and custom security attributes.
(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND
SubOperationMatches{'Blob.Read.WithTagConditions'})
)
OR
(

@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>
] ForAnyOfAnyValues:StringEquals
@Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project]
)
)

Azure portal
Here are the settings to add this condition using the Azure portal.

Condition #1 | Setting

Actions Read content from a blob with tag conditions

Attribute source Resource

Attribute Blob index tags [Values in key]

Key <key>

Operator ForAnyOfAnyValues:StringEquals

Option Attribute

Attribute source Principal

Attribute <attributeset>_<key>

Next steps
Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)
Actions and attributes for Azure role assignment conditions in Azure Storage (preview)
Azure role assignment condition format and syntax (preview)
Grant limited access to Azure Storage resources
using shared access signatures (SAS)
11/25/2021 • 12 minutes to read • Edit Online

A shared access signature (SAS) provides secure delegated access to resources in your storage account. With a
SAS, you have granular control over how a client can access your data. For example:
What resources the client may access.
What permissions they have to those resources.
How long the SAS is valid.

Types of shared access signatures


Azure Storage supports three types of shared access signatures:
User delegation SAS
Service SAS
Account SAS
User delegation SAS
A user delegation SAS is secured with Azure Active Directory (Azure AD) credentials and also by the permissions
specified for the SAS. A user delegation SAS applies to Blob storage only.
For more information about the user delegation SAS, see Create a user delegation SAS (REST API).
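As a rough illustration, the following Azure PowerShell sketch creates a user delegation SAS for a container. The storage account and container names are placeholders, and the signed-in identity is assumed to have been granted the generateUserDelegationKey action; when the storage context is created with -UseConnectedAccount, New-AzStorageContainerSASToken issues a user delegation SAS.

# Sign in first with Connect-AzAccount.
# An OAuth-based context causes the cmdlet to issue a user delegation SAS.
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -UseConnectedAccount

# Read and list access to one container, valid for one hour.
$sas = New-AzStorageContainerSASToken -Context $ctx -Name "<container>" -Permission rl -ExpiryTime (Get-Date).AddHours(1)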
Service SAS
A service SAS is secured with the storage account key. A service SAS delegates access to a resource in only one
of the Azure Storage services: Blob storage, Queue storage, Table storage, or Azure Files.
For more information about the service SAS, see Create a service SAS (REST API).
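For comparison, here is a minimal Azure PowerShell sketch of creating a service SAS for a single blob; the account name, key, container, and blob names are placeholders.

# Shared Key context, signed with the storage account key.
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"

# Read-only SAS for one blob, valid for one hour.
$sas = New-AzStorageBlobSASToken -Context $ctx -Container "<container>" -Blob "<blob-name>" -Permission r -ExpiryTime (Get-Date).AddHours(1)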
Account SAS
An account SAS is secured with the storage account key. An account SAS delegates access to resources in one or
more of the storage services. All of the operations available via a service or user delegation SAS are also
available via an account SAS.
You can also delegate access to the following:
Service-level operations (for example, the Get/Set Service Properties and Get Service Stats operations).
Read, write, and delete operations that aren't permitted with a service SAS.
For more information about the account SAS, see Create an account SAS (REST API).
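The following sketch creates an account SAS with Azure PowerShell; the account name and key are placeholders, and the permissions shown (read and list on the Blob service) are only an example.

# Account SAS signed with the storage account key.
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"

# Read and list access to the Blob service at the service, container, and object levels.
$sas = New-AzStorageAccountSASToken -Context $ctx -Service Blob -ResourceType Service,Container,Object -Permission "rl" -ExpiryTime (Get-Date).AddHours(1)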
NOTE
Microsoft recommends that you use Azure AD credentials when possible as a security best practice, rather than using the
account key, which can be more easily compromised. When your application design requires shared access signatures for
access to Blob storage, use Azure AD credentials to create a user delegation SAS when possible for superior security. For
more information, see Authorize access to data in Azure Storage.

A shared access signature can take one of the following two forms:
Ad hoc SAS. When you create an ad hoc SAS, the start time, expiry time, and permissions are specified in the SAS URI. Any type of SAS can be an ad hoc SAS.
Service SAS with stored access policy. A stored access policy is defined on a resource container, which can be a blob container, table, queue, or file share. The stored access policy can be used to manage constraints for one or more service shared access signatures. When you associate a service SAS with a stored access policy, the SAS inherits the constraints (the start time, expiry time, and permissions) defined for the stored access policy.

NOTE
A user delegation SAS or an account SAS must be an ad hoc SAS. Stored access policies are not supported for the user
delegation SAS or the account SAS.
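As an illustration of the second form, the following Azure PowerShell sketch defines a stored access policy on a container and then issues a service SAS that references it; the account, key, container, and policy names are placeholders.

# Shared Key context.
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"

# Define a stored access policy on the container.
New-AzStorageContainerStoredAccessPolicy -Container "<container>" -Policy "read-policy" -Permission rl -ExpiryTime (Get-Date).AddDays(7) -Context $ctx

# Issue a service SAS that inherits its constraints from the stored access policy.
$sas = New-AzStorageContainerSASToken -Name "<container>" -Policy "read-policy" -Context $ctx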

How a shared access signature works


A shared access signature is a signed URI that points to one or more storage resources. The URI includes a token
that contains a special set of query parameters. The token indicates how the resources may be accessed by the
client. One of the query parameters, the signature, is constructed from the SAS parameters and signed with the
key that was used to create the SAS. This signature is used by Azure Storage to authorize access to the storage
resource.

NOTE
It's not possible to audit the generation of SAS tokens. Any user that has privileges to generate a SAS token, either by
using the account key, or via an Azure role assignment, can do so without the knowledge of the owner of the storage
account. Be careful to restrict permissions that allow users to generate SAS tokens. To prevent users from generating a
SAS that is signed with the account key for blob and queue workloads, you can disallow Shared Key access to the storage
account. For more information, see Prevent authorization with Shared Key.

SAS signature and authorization


You can sign a SAS token with a user delegation key or with a storage account key (Shared Key).
Signing a SAS token with a user delegation key
You can sign a SAS token by using a user delegation key that was created using Azure Active Directory (Azure
AD) credentials. A user delegation SAS is signed with the user delegation key.
To get the key, and then create the SAS, an Azure AD security principal must be assigned an Azure role that
includes the Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey action. For more
information, see Create a user delegation SAS (REST API).
Signing a SAS token with an account key
Both a service SAS and an account SAS are signed with the storage account key. To create a SAS that is signed
with the account key, an application must have access to the account key.
When a request includes a SAS token, that request is authorized based on how that SAS token is signed. The
access key or credentials that you use to create a SAS token are also used by Azure Storage to grant access to a
client that possesses the SAS.
The following table summarizes how each type of SAS token is authorized.

Type of SAS | Type of authorization

User delegation SAS (Blob storage only) | Azure AD

Service SAS | Shared Key

Account SAS | Shared Key

Microsoft recommends using a user delegation SAS when possible for superior security.
SAS token
The SAS token is a string that you generate on the client side, for example by using one of the Azure Storage
client libraries. The SAS token is not tracked by Azure Storage in any way. You can create an unlimited number of
SAS tokens on the client side. After you create a SAS, you can distribute it to client applications that require
access to resources in your storage account.
Client applications provide the SAS URI to Azure Storage as part of a request. Then, the service checks the SAS
parameters and the signature to verify that it is valid. If the service verifies that the signature is valid, then the
request is authorized. Otherwise, the request is declined with error code 403 (Forbidden).
Here's an example of a service SAS URI, showing the resource URI and the SAS token. Because the SAS token
comprises the URI query string, the resource URI must be followed first by a question mark, and then by the
SAS token:
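For example, an illustrative service SAS URI might look like the following, where the account, container, blob, and signature values are placeholders:

https://<storage-account>.blob.core.windows.net/<container>/<blob>?sp=r&st=2021-11-25T02:00:00Z&se=2021-11-25T10:00:00Z&spr=https&sv=2020-08-04&sr=b&sig=<signature>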

When to use a shared access signature


Use a SAS to give secure access to resources in your storage account to any client who does not otherwise have
permissions to those resources.
A common scenario where a SAS is useful is a service where users read and write their own data to your
storage account. In a scenario where a storage account stores user data, there are two typical design patterns:
1. Clients upload and download data via a front-end proxy service, which performs authentication. This
front-end proxy service allows the validation of business rules. But for large amounts of data, or high-
volume transactions, creating a service that can scale to match demand may be expensive or difficult.

2. A lightweight service authenticates the client as needed and then generates a SAS. Once the client
application receives the SAS, it can access storage account resources directly. Access permissions are
defined by the SAS and for the interval allowed by the SAS. The SAS mitigates the need for routing all
data through the front-end proxy service.
Many real-world services may use a hybrid of these two approaches. For example, some data might be
processed and validated via the front-end proxy. Other data is saved and/or read directly using SAS.
Additionally, a SAS is required to authorize access to the source object in a copy operation in certain scenarios:
When you copy a blob to another blob that resides in a different storage account.
You can optionally use a SAS to authorize access to the destination blob as well.
When you copy a file to another file that resides in a different storage account.
You can optionally use a SAS to authorize access to the destination file as well.
When you copy a blob to a file, or a file to a blob.
You must use a SAS even if the source and destination objects reside within the same storage account.

Best practices when using SAS


When you use shared access signatures in your applications, you need to be aware of two potential risks:
If a SAS is leaked, it can be used by anyone who obtains it, which can potentially compromise your
storage account.
If a SAS provided to a client application expires and the application is unable to retrieve a new SAS from
your service, then the application's functionality may be hindered.
The following recommendations for using shared access signatures can help mitigate these risks:
Always use HTTPS to create or distribute a SAS. If a SAS is passed over HTTP and intercepted, an
attacker performing a man-in-the-middle attack is able to read the SAS. Then, they can use that SAS just
as the intended user could have. This can potentially compromise sensitive data or allow data corruption by a malicious user.
Use a user delegation SAS when possible. A user delegation SAS provides superior security to a
service SAS or an account SAS. A user delegation SAS is secured with Azure AD credentials, so that you
do not need to store your account key with your code.
Have a revocation plan in place for a SAS. Make sure you are prepared to respond if a SAS is
compromised.
Define a stored access policy for a ser vice SAS. Stored access policies give you the option to
revoke permissions for a service SAS without having to regenerate the storage account keys. Set the
expiration on these very far in the future (or infinite) and make sure it's regularly updated to move it
farther into the future.
Use near-term expiration times on an ad hoc SAS, service SAS, or account SAS. In this way, even
if a SAS is compromised, it's valid only for a short time. This practice is especially important if you cannot
reference a stored access policy. Near-term expiration times also limit the amount of data that can be
written to a blob by limiting the time available to upload to it.
Have clients automatically renew the SAS if necessary. Clients should renew the SAS well before
the expiration, in order to allow time for retries if the service providing the SAS is unavailable. This might
be unnecessary in some cases. For example, you might intend for the SAS to be used for a small number
of immediate, short-lived operations. These operations are expected to be completed within the
expiration period. As a result, you are not expecting the SAS to be renewed. However, if you have a client
that is routinely making requests via SAS, then the possibility of expiration comes into play.
Be careful with SAS start time. If you set the start time for a SAS to the current time, failures might
occur intermittently for the first few minutes. This is due to different machines having slightly different
current times (known as clock skew). In general, set the start time to be at least 15 minutes in the past. Or,
don't set it at all, which will make it valid immediately in all cases. The same generally applies to expiry
time as well--remember that you may observe up to 15 minutes of clock skew in either direction on any
request. For clients using a REST version prior to 2012-02-12, the maximum duration for a SAS that does
not reference a stored access policy is 1 hour. Any policies that specify a longer term than 1 hour will fail.
Be careful with SAS datetime format. For some utilities (such as AzCopy), you need datetime formats to be '+%Y-%m-%dT%H:%M:%SZ'. This format specifically includes the seconds. A short sketch showing these start, expiry, and format settings appears after this list.
Be specific with the resource to be accessed. A security best practice is to provide a user with the
minimum required privileges. If a user only needs read access to a single entity, then grant them read
access to that single entity, and not read/write/delete access to all entities. This also helps lessen the
damage if a SAS is compromised because the SAS has less power in the hands of an attacker.
Understand that your account will be billed for any usage, including via a SAS. If you provide
write access to a blob, a user may choose to upload a 200 GB blob. If you've given them read access as
well, they may choose to download it 10 times, incurring 2 TB in egress costs for you. Again, provide
limited permissions to help mitigate the potential actions of malicious users. Use short-lived SAS to
reduce this threat (but be mindful of clock skew on the end time).
Validate data written using a SAS. When a client application writes data to your storage account,
keep in mind that there can be problems with that data. If you plan to validate data, perform that
validation after the data is written and before it is used by your application. This practice also protects
against corrupt or malicious data being written to your account, either by a user who properly acquired
the SAS, or by a user exploiting a leaked SAS.
Know when not to use a SAS. Sometimes the risks associated with a particular operation against your
storage account outweigh the benefits of using a SAS. For such operations, create a middle-tier service
that writes to your storage account after performing business rule validation, authentication, and
auditing. Also, sometimes it's simpler to manage access in other ways. For example, if you want to make
all blobs in a container publicly readable, you can make the container Public, rather than providing a SAS
to every client for access.
Use Azure Monitor and Azure Storage logs to monitor your application. Authorization failures
can occur because of an outage in your SAS provider service. They can also occur from an inadvertent
removal of a stored access policy. You can use Azure Monitor and storage analytics logging to observe
any spike in these types of authorization failures. For more information, see Azure Storage metrics in
Azure Monitor and Azure Storage Analytics logging.
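To make the timing guidance above concrete, here is a minimal Azure PowerShell sketch that backdates the start time, keeps the expiry near-term, and renders the times in an ISO 8601 (UTC) format; the account name, key, and container are placeholders.

# Shared Key context.
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"

# Backdate the start time to absorb clock skew and keep the expiry short.
$start  = (Get-Date).ToUniversalTime().AddMinutes(-15)
$expiry = (Get-Date).ToUniversalTime().AddHours(1)

$sas = New-AzStorageContainerSASToken -Context $ctx -Name "<container>" -Permission rl -StartTime $start -ExpiryTime $expiry

# The same times rendered as '+%Y-%m-%dT%H:%M:%SZ' style strings, for tools that require that format.
$startIso  = $start.ToString("yyyy-MM-ddTHH:mm:ssZ")
$expiryIso = $expiry.ToString("yyyy-MM-ddTHH:mm:ssZ")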

NOTE
Storage doesn't track the number of shared access signatures that have been generated for a storage account, and no API
can provide this detail. If you need to know the number of shared access signatures that have been generated for a
storage account, you must track the number manually.

Get started with SAS


To get started with shared access signatures, see the following articles for each SAS type.
User delegation SAS
Create a user delegation SAS for a container or blob with PowerShell
Create a user delegation SAS for a container or blob with the Azure CLI
Create a user delegation SAS for a container or blob with .NET
Service SAS
Create a service SAS for a container or blob with .NET
Account SAS
Create an account SAS with .NET

Next steps
Delegate access with a shared access signature (REST API)
Create a user delegation SAS (REST API)
Create a service SAS (REST API)
Create an account SAS (REST API)
Use the Azure Storage resource provider to access
management resources
11/25/2021 • 4 minutes to read • Edit Online

Azure Resource Manager is the deployment and management service for Azure. The Azure Storage resource
provider is a service that is based on Azure Resource Manager and that provides access to management
resources for Azure Storage. You can use the Azure Storage resource provider to create, update, manage, and
delete resources such as storage accounts, private endpoints, and account access keys. For more information
about Azure Resource Manager, see Azure Resource Manager overview.
You can use the Azure Storage resource provider to perform actions such as creating or deleting a storage
account or getting a list of storage accounts in a subscription. To authorize requests against the Azure Storage
resource provider, use Azure Active Directory (Azure AD). This article describes how to assign permissions to
management resources, and points to examples that show how to make requests against the Azure Storage
resource provider.
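For example, the following Azure PowerShell sketch calls the resource provider to create a storage account, list the accounts in the subscription, and then delete the account; the resource group name, account name, and region are placeholders.

# Sign in first with Connect-AzAccount.
New-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<storage-account>" -Location "<region>" -SkuName Standard_LRS -Kind StorageV2

# List the storage accounts in the subscription.
Get-AzStorageAccount

# Delete the account when it is no longer needed.
Remove-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<storage-account>"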

Management resources versus data resources


Microsoft provides two REST APIs for working with Azure Storage resources. These APIs form the basis of all
actions you can perform against Azure Storage. The Azure Storage REST API enables you to work with data in
your storage account, including blob, queue, file, and table data. The Azure Storage resource provider REST API
enables you to work with the storage account and related resources.
A request that reads or writes blob data requires different permissions than a request that performs a
management operation. Azure RBAC provides fine-grained control over permissions to both types of resources.
When you assign an Azure role to a security principal, make sure that you understand what permissions that
principal will be granted. For a detailed reference that describes which actions are associated with each Azure
built-in role, see Azure built-in roles.
Azure Storage supports using Azure AD to authorize requests against Blob and Queue storage. For information
about Azure roles for blob and queue data operations, see Authorize access to blobs and queues using Active
Directory.

Assign management permissions with Azure role-based access control (Azure RBAC)
Every Azure subscription has an associated Azure Active Directory that manages users, groups, and applications.
A user, group, or application is also referred to as a security principal in the context of the Microsoft identity
platform. You can grant access to resources in a subscription to a security principal that is defined in the Active
Directory by using Azure role-based access control (Azure RBAC).
When you assign an Azure role to a security principal, you also indicate the scope at which the permissions
granted by the role are in effect. For management operations, you can assign a role at the level of the
subscription, the resource group, or the storage account. You can assign an Azure role to a security principal by
using the Azure portal, the Azure CLI tools, PowerShell, or the Azure Storage resource provider REST API.
For more information, see What is Azure role-based access control (Azure RBAC)? and Classic subscription
administrator roles, Azure roles, and Azure AD administrator roles.
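As an illustration, the following sketch assigns a built-in management role scoped to a single storage account with Azure PowerShell; the sign-in name, subscription ID, resource group, and account name are placeholders.

New-AzRoleAssignment -SignInName "<user@example.com>" `
    -RoleDefinitionName "Storage Account Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"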
Built-in roles for management operations
Azure provides built-in roles that grant permissions to call management operations. Azure Storage also provides
built-in roles specifically for use with the Azure Storage resource provider.
Built-in roles that grant permissions to call storage management operations include the roles described in the
following table:

Azure role | Description | Includes access to account keys?

Owner | Can manage all storage resources and access to resources. | Yes, provides permissions to view and regenerate the storage account keys.

Contributor | Can manage all storage resources, but cannot manage access to resources. | Yes, provides permissions to view and regenerate the storage account keys.

Reader | Can view information about the storage account, but cannot view the account keys. | No.

Storage Account Contributor | Can manage the storage account, get information about the subscription's resource groups and resources, and create and manage subscription resource group deployments. | Yes, provides permissions to view and regenerate the storage account keys.

User Access Administrator | Can manage access to the storage account. | Yes, permits a security principal to assign any permissions to themselves and others.

Virtual Machine Contributor | Can manage virtual machines, but not the storage account to which they are connected. | Yes, provides permissions to view and regenerate the storage account keys.

The third column in the table indicates whether the built-in role supports the
Microsoft.Storage/storageAccounts/listkeys/action . This action grants permissions to read and
regenerate the storage account keys. Permissions to access Azure Storage management resources do not also
include permissions to access data. However, if a user has access to the account keys, then they can use the
account keys to access Azure Storage data via Shared Key authorization.
Custom roles for management operations
Azure also supports defining Azure custom roles for access to management resources. For more information
about custom roles, see Azure custom roles.

Code samples
For code examples that show how to authorize and call management operations from the Azure Storage
management libraries, see the following samples:
.NET
Java
Node.js
Python

Azure Resource Manager versus classic deployments


The Resource Manager and classic deployment models represent two different ways of deploying and managing
your Azure solutions. Microsoft recommends using the Azure Resource Manager deployment model when you
create a new storage account. If possible, Microsoft also recommends that you recreate existing classic storage
accounts with the Resource Manager model. Although you can create a storage account using the classic
deployment model, the classic model is less flexible and will eventually be deprecated.
For more information about Azure deployment models, see Resource Manager and classic deployment.

Next steps
Azure Resource Manager overview
What is Azure role-based access control (Azure RBAC)?
Scalability targets for the Azure Storage resource provider
Security recommendations for Blob storage
11/25/2021 • 9 minutes to read • Edit Online

This article contains security recommendations for Blob storage. Implementing these recommendations will
help you fulfill your security obligations as described in our shared responsibility model. For more information
on how Microsoft fulfills service provider responsibilities, see Shared responsibility in the cloud.
Some of the recommendations included in this article can be automatically monitored by Microsoft Defender
for Cloud, which is the first line of defense in protecting your resources in Azure. For information on Microsoft
Defender for Cloud, see What is Microsoft Defender for Cloud?
Microsoft Defender for Cloud periodically analyzes the security state of your Azure resources to identify
potential security vulnerabilities. It then provides you with recommendations on how to address them. For more
information on Microsoft Defender for Cloud recommendations, see Security recommendations in Microsoft
Defender for Cloud.

Data protection
Recommendation | Comments | Defender for Cloud

Use the Azure Resource Manager Create new storage accounts using the -
deployment model Azure Resource Manager deployment
model for important security
enhancements, including superior
Azure role-based access control (Azure
RBAC) and auditing, Resource
Manager-based deployment and
governance, access to managed
identities, access to Azure Key Vault for
secrets, and Azure AD-based
authentication and authorization for
access to Azure Storage data and
resources. If possible, migrate existing
storage accounts that use the classic
deployment model to use Azure
Resource Manager. For more
information about Azure Resource
Manager, see Azure Resource Manager
overview.

Enable Microsoft Defender for all of Microsoft Defender for Storage Yes
your storage accounts provides an additional layer of security
intelligence that detects unusual and
potentially harmful attempts to access
or exploit storage accounts. Security
alerts are triggered in Microsoft
Defender for Cloud when anomalies in
activity occur and are also sent via
email to subscription administrators,
with details of suspicious activity and
recommendations on how to
investigate and remediate threats. For
more information, see Configure
Microsoft Defender for Storage.

Turn on soft delete for blobs Soft delete for blobs enables you to -
recover blob data after it has been
deleted. For more information on soft
delete for blobs, see Soft delete for
Azure Storage blobs.

Turn on soft delete for containers Soft delete for containers enables you -
to recover a container after it has been
deleted. For more information on soft
delete for containers, see Soft delete
for containers.

Lock storage account to prevent Apply an Azure Resource Manager lock


accidental or malicious deletion or to your storage account to protect the
configuration changes account from accidental or malicious
deletion or configuration change.
Locking a storage account does not
prevent data within that account from
being deleted. It only prevents the
account itself from being deleted. For
more information, see Apply an Azure
Resource Manager lock to a storage
account.

Store business-critical data in Configure legal holds and time-based -


immutable blobs retention policies to store blob data in
a WORM (Write Once, Read Many)
state. Blobs stored immutably can be
read, but cannot be modified or
deleted for the duration of the
retention interval. For more
information, see Store business-critical
blob data with immutable storage.

Require secure transfer (HTTPS) to the When you require secure transfer for a -
storage account storage account, all requests to the
storage account must be made over
HTTPS. Any requests made over HTTP
are rejected. Microsoft recommends
that you always require secure transfer
for all of your storage accounts. For
more information, see Require secure
transfer to ensure secure connections.

Limit shared access signature (SAS) Requiring HTTPS when a client uses a -
tokens to HTTPS connections only SAS token to access blob data helps to
minimize the risk of eavesdropping.
For more information, see Grant
limited access to Azure Storage
resources using shared access
signatures (SAS).

Identity and access management


Recommendation | Comments | Defender for Cloud

Use Azure Active Directory (Azure AD) Azure AD provides superior security -
to authorize access to blob data and ease of use over Shared Key for
authorizing requests to Blob storage.
For more information, see Authorize
access to data in Azure Storage.

Keep in mind the principle of least When assigning a role to a user, group, -
privilege when assigning permissions or application, grant that security
to an Azure AD security principal via principal only those permissions that
Azure RBAC are necessary for them to perform
their tasks. Limiting access to
resources helps prevent both
unintentional and malicious misuse of
your data.

Use a user delegation SAS to grant A user delegation SAS is secured with -
limited access to blob data to clients Azure Active Directory (Azure AD)
credentials and also by the permissions
specified for the SAS. A user delegation
SAS is analogous to a service SAS in
terms of its scope and function, but
offers security benefits over the service
SAS. For more information, see Grant
limited access to Azure Storage
resources using shared access
signatures (SAS).

Secure your account access keys with Microsoft recommends using Azure -
Azure Key Vault AD to authorize requests to Azure
Storage. However, if you must use
Shared Key authorization, then secure
your account keys with Azure Key
Vault. You can retrieve the keys from
the key vault at runtime, instead of
saving them with your application. For
more information about Azure Key
Vault, see Azure Key Vault overview.

Regenerate your account keys Rotating the account keys periodically -


periodically reduces the risk of exposing your data
to malicious actors.

Disallow Shared Key authorization When you disallow Shared Key -


authorization for a storage account,
Azure Storage rejects all subsequent
requests to that account that are
authorized with the account access
keys. Only secured requests that are
authorized with Azure AD will succeed.
For more information, see Prevent
Shared Key authorization for an Azure
Storage account.

Keep in mind the principle of least When creating a SAS, specify only -
privilege when assigning permissions those permissions that are required by
to a SAS the client to perform its function.
Limiting access to resources helps
prevent both unintentional and
malicious misuse of your data.

Have a revocation plan in place for any If a SAS is compromised, you will want -
SAS that you issue to clients to revoke that SAS as soon as possible.
To revoke a user delegation SAS,
revoke the user delegation key to
quickly invalidate all signatures
associated with that key. To revoke a
service SAS that is associated with a
stored access policy, you can delete the
stored access policy, rename the policy,
or change its expiry time to a time that
is in the past. For more information,
see Grant limited access to Azure
Storage resources using shared access
signatures (SAS).

If a service SAS is not associated with a A service SAS that is not associated -
stored access policy, then set the with a stored access policy cannot be
expiry time to one hour or less revoked. For this reason, limiting the
expiry time so that the SAS is valid for
one hour or less is recommended.

Disable anonymous public read access Anonymous public read access to a -


to containers and blobs container and its blobs grants read-
only access to those resources to any
client. Avoid enabling public read
access unless your scenario requires it.
To learn how to disable anonymous
public access for a storage account, see
Configure anonymous public read
access for containers and blobs.

Networking
Recommendation | Comments | Defender for Cloud

Configure the minimum required Require that clients use a more secure -
version of Transport Layer Security version of TLS to make requests
(TLS) for a storage account. against an Azure Storage account by
configuring the minimum version of
TLS for that account. For more
information, see Configure minimum
required version of Transport Layer
Security (TLS) for a storage account

Enable the Secure transfer required When you enable the Secure Yes
option on all of your storage accounts transfer required option, all requests
made against the storage account
must take place over secure
connections. Any requests made over
HTTP will fail. For more information,
see Require secure transfer in Azure
Storage.

Enable firewall rules Configure firewall rules to limit access -


to your storage account to requests
that originate from specified IP
addresses or ranges, or from a list of
subnets in an Azure Virtual Network
(VNet). For more information about
configuring firewall rules, see Configure
Azure Storage firewalls and virtual
networks.

Allow trusted Microsoft services to Turning on firewall rules for your -


access the storage account storage account blocks incoming
requests for data by default, unless the
requests originate from a service
operating within an Azure Virtual
Network (VNet) or from allowed public
IP addresses. Requests that are
blocked include those from other
Azure services, from the Azure portal,
from logging and metrics services, and
so on. You can permit requests from
other Azure services by adding an
exception to allow trusted Microsoft
services to access the storage account.
For more information about adding an
exception for trusted Microsoft
services, see Configure Azure Storage
firewalls and virtual networks.

Use private endpoints A private endpoint assigns a private IP -


address from your Azure Virtual
Network (VNet) to the storage
account. It secures all traffic between
your VNet and the storage account
over a private link. For more
information about private endpoints,
see Connect privately to a storage
account using Azure Private Endpoint.

Use VNet service tags A service tag represents a group of IP -


address prefixes from a given Azure
service. Microsoft manages the
address prefixes encompassed by the
service tag and automatically updates
the service tag as addresses change.
For more information about service
tags supported by Azure Storage, see
Azure service tags overview. For a
tutorial that shows how to use service
tags to create outbound network rules,
see Restrict access to PaaS resources.

Limit network access to specific Limiting network access to networks Yes


networks hosting clients requiring access
reduces the exposure of your
resources to network attacks.

Configure network routing preference You can configure network routing -


preference for your Azure storage
account to specify how network traffic
is routed to your account from clients
over the Internet using the Microsoft
global network or Internet routing. For
more information, see Configure
network routing preference for Azure
Storage.

Logging/Monitoring
Recommendation | Comments | Defender for Cloud

Track how requests are authorized | Enable Azure Storage logging to track how each request made against Azure Storage was authorized. The logs indicate whether a request was made anonymously, by using an OAuth 2.0 token, by using Shared Key, or by using a shared access signature (SAS). For more information, see Monitoring Azure Blob Storage with Azure Monitor or Azure Storage analytics logging with Classic Monitoring. | -

Set up alerts in Azure Monitor | Configure log alerts to evaluate resource logs at a set frequency and fire an alert based on the results. For more information, see Log alerts in Azure Monitor. | -

Next steps
Azure security documentation
Secure development documentation.
Azure Storage encryption for data at rest
11/25/2021 • 4 minutes to read • Edit Online

Azure Storage uses server-side encryption (SSE) to automatically encrypt your data when it is persisted to the
cloud. Azure Storage encryption protects your data and helps you meet your organizational security and compliance commitments.

About Azure Storage encryption


Data in Azure Storage is encrypted and decrypted transparently using 256-bit AES encryption, one of the
strongest block ciphers available, and is FIPS 140-2 compliant. Azure Storage encryption is similar to BitLocker
encryption on Windows.
Azure Storage encryption is enabled for all storage accounts, including both Resource Manager and classic
storage accounts. Azure Storage encryption cannot be disabled. Because your data is secured by default, you
don't need to modify your code or applications to take advantage of Azure Storage encryption.
Data in a storage account is encrypted regardless of performance tier (standard or premium), access tier (hot or
cool), or deployment model (Azure Resource Manager or classic). All blobs in the archive tier are also encrypted.
All Azure Storage redundancy options support encryption, and all data in both the primary and secondary
regions is encrypted when geo-replication is enabled. All Azure Storage resources are encrypted, including
blobs, disks, files, queues, and tables. All object metadata is also encrypted. There is no additional cost for Azure
Storage encryption.
Every block blob, append blob, or page blob that was written to Azure Storage after October 20, 2017 is
encrypted. Blobs created prior to this date continue to be encrypted by a background process. To force the
encryption of a blob that was created before October 20, 2017, you can rewrite the blob. To learn how to check
the encryption status of a blob, see Check the encryption status of a blob.
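To check that flag programmatically, here is a minimal sketch using the azure-storage-blob Python SDK (v12); the account URL, container, and blob names are hypothetical placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

# Hypothetical account, container, and blob names.
blob = BlobClient(
    account_url="https://mystorageaccount.blob.core.windows.net",
    container_name="mycontainer",
    blob_name="myblob.txt",
    credential=DefaultAzureCredential(),
)

# server_encrypted is True when the blob's contents and metadata are encrypted at rest.
props = blob.get_blob_properties()
print(f"Encrypted at rest: {props.server_encrypted}")
```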
For more information about the cryptographic modules underlying Azure Storage encryption, see Cryptography
API: Next Generation.
For information about encryption and key management for Azure managed disks, see Server-side encryption of
Azure managed disks.

About encryption key management


Data in a new storage account is encrypted with Microsoft-managed keys by default. You can continue to rely on
Microsoft-managed keys for the encryption of your data, or you can manage encryption with your own keys. If
you choose to manage encryption with your own keys, you have two options. You can use either type of key
management, or both:
You can specify a customer-managed key to use for encrypting and decrypting data in Blob storage and in
Azure Files.1,2 Customer-managed keys must be stored in Azure Key Vault or Azure Key Vault Managed
Hardware Security Model (HSM) (preview). For more information about customer-managed keys, see Use
customer-managed keys for Azure Storage encryption.
You can specify a customer-provided key on Blob storage operations. A client making a read or write request
against Blob storage can include an encryption key on the request for granular control over how blob data is
encrypted and decrypted. For more information about customer-provided keys, see Provide an encryption
key on a request to Blob storage.
The following table compares key management options for Azure Storage encryption.
Key management parameter | Microsoft-managed keys | Customer-managed keys | Customer-provided keys
Encryption/decryption operations | Azure | Azure | Azure
Azure Storage services supported | All | Blob storage, Azure Files 1,2 | Blob storage
Key storage | Microsoft key store | Azure Key Vault or Key Vault HSM | Customer's own key store
Key rotation responsibility | Microsoft | Customer | Customer
Key control | Microsoft | Customer | Customer

1 For information about creating an account that supports using customer-managed keys with Queue storage,
see Create an account that supports customer-managed keys for queues.
2 For information about creating an account that supports using customer-managed keys with Table storage, see

Create an account that supports customer-managed keys for tables.

NOTE
Microsoft-managed keys are rotated appropriately per compliance requirements. If you have specific key rotation
requirements, Microsoft recommends that you move to customer-managed keys so that you can manage and audit the
rotation yourself.

Doubly encrypt data with infrastructure encryption


Customers who require high levels of assurance that their data is secure can also enable 256-bit AES encryption
at the Azure Storage infrastructure level. When infrastructure encryption is enabled, data in a storage account is
encrypted twice — once at the service level and once at the infrastructure level — with two different encryption
algorithms and two different keys. Double encryption of Azure Storage data protects against a scenario where
one of the encryption algorithms or keys may be compromised. In this scenario, the additional layer of
encryption continues to protect your data.
Service-level encryption supports the use of either Microsoft-managed keys or customer-managed keys with
Azure Key Vault. Infrastructure-level encryption relies on Microsoft-managed keys and always uses a separate
key.
For more information about how to create a storage account that enables infrastructure encryption, see Create a
storage account with infrastructure encryption enabled for double encryption of data.

Next steps
What is Azure Key Vault?
Customer-managed keys for Azure Storage encryption
Encryption scopes for Blob storage
Provide an encryption key on a request to Blob storage
Customer-managed keys for Azure Storage
encryption
11/25/2021 • 6 minutes to read

You can use your own encryption key to protect the data in your storage account. When you specify a customer-
managed key, that key is used to protect and control access to the key that encrypts your data. Customer-
managed keys offer greater flexibility to manage access controls.
You must use one of the following Azure key stores to store your customer-managed keys:
Azure Key Vault
Azure Key Vault Managed Hardware Security Module (HSM)
You can either create your own keys and store them in the key vault or managed HSM, or you can use the Azure
Key Vault APIs to generate keys. The storage account and the key vault or managed HSM must be in the same
region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions.

NOTE
Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for
configuration.

About customer-managed keys


The following diagram shows how Azure Storage uses Azure Active Directory and a key vault or managed HSM
to make requests using the customer-managed key:

The following list explains the numbered steps in the diagram:


1. An Azure Key Vault admin grants permissions to encryption keys to the managed identity that's associated
with the storage account.
2. An Azure Storage admin configures encryption with a customer-managed key for the storage account.
3. Azure Storage uses the managed identity that's associated with the storage account to authenticate access to
Azure Key Vault via Azure Active Directory.
4. Azure Storage wraps the account encryption key with the customer-managed key in Azure Key Vault.
5. For read/write operations, Azure Storage sends requests to Azure Key Vault to unwrap the account
encryption key to perform encryption and decryption operations.
The managed identity that's associated with the storage account must have these permissions at a minimum to
access a customer-managed key in Azure Key Vault:
wrapkey
unwrapkey
get
For more information about key permissions, see Key types, algorithms, and operations.
Azure Policy provides a built-in policy to require that storage accounts use customer-managed keys for Blob
Storage and Azure Files workloads. For more information, see the Storage section in Azure Policy built-in policy
definitions.

Customer-managed keys for queues and tables


Data stored in Queue and Table storage is not automatically protected by a customer-managed key when
customer-managed keys are enabled for the storage account. You can optionally configure these services to be
included in this protection at the time that you create the storage account.
For more information about how to create a storage account that supports customer-managed keys for queues
and tables, see Create an account that supports customer-managed keys for tables and queues.
Data in Blob storage and Azure Files is always protected by customer-managed keys when customer-managed
keys are configured for the storage account.

Enable customer-managed keys for a storage account


When you configure a customer-managed key, Azure Storage wraps the root data encryption key for the
account with the customer-managed key in the associated key vault or managed HSM. Enabling customer-
managed keys does not impact performance, and takes effect immediately.
When you enable or disable customer-managed keys, or when you modify the key or the key version, the
protection of the root encryption key changes, but the data in your Azure Storage account does not need to be
re-encrypted.
Customer-managed keys can be enabled only on existing storage accounts. The key vault or managed HSM must
be configured to grant permissions to the managed identity that is associated with the storage account. The
managed identity is available only after the storage account is created.
You can switch between customer-managed keys and Microsoft-managed keys at any time. For more
information about Microsoft-managed keys, see About encryption key management.
To learn how to configure Azure Storage encryption with customer-managed keys in a key vault, see Configure
encryption with customer-managed keys stored in Azure Key Vault. To configure customer-managed keys in a
managed HSM, see Configure encryption with customer-managed keys stored in Azure Key Vault Managed
HSM.
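How you enable the key depends on the tool you use; as a rough, non-authoritative sketch, the configuration with the azure-mgmt-storage Python SDK might look like the following. The subscription, resource group, account, vault, and key names are hypothetical, the account is assumed to already have a managed identity with access to the key, and model names can vary between SDK versions:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import (
    Encryption,
    KeyVaultProperties,
    StorageAccountUpdateParameters,
)

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Point the account's encryption at a key in Azure Key Vault. Omitting key_version
# lets Azure Storage check daily for new versions and use the latest one.
client.storage_accounts.update(
    resource_group_name="my-rg",
    account_name="mystorageaccount",
    parameters=StorageAccountUpdateParameters(
        encryption=Encryption(
            key_source="Microsoft.Keyvault",
            key_vault_properties=KeyVaultProperties(
                key_name="my-cmk",
                key_vault_uri="https://my-vault.vault.azure.net/",
            ),
        )
    ),
)
```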

IMPORTANT
Customer-managed keys rely on managed identities for Azure resources, a feature of Azure AD. Managed identities do
not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a
managed identity is automatically assigned to your storage account under the covers. If you subsequently move the
subscription, resource group, or storage account from one Azure AD directory to another, the managed identity
associated with the storage account is not transferred to the new tenant, so customer-managed keys may no longer
work. For more information, see Transferring a subscription between Azure AD directories in FAQs and known
issues with managed identities for Azure resources.

Azure Storage encryption supports RSA and RSA-HSM keys of sizes 2048, 3072, and 4096. For more
information about keys, see About keys.
Using a key vault or managed HSM has associated costs. For more information, see Key Vault pricing.

Update the key version


When you configure encryption with customer-managed keys, you have two options for updating the key
version:
Automatically update the key version: To automatically update a customer-managed key when a
new version is available, omit the key version when you enable encryption with customer-managed keys
for the storage account. If the key version is omitted, then Azure Storage checks the key vault or managed
HSM daily for a new version of a customer-managed key. Azure Storage automatically uses the latest
version of the key.
Manually update the key version: To use a specific version of a key for Azure Storage encryption,
specify that key version when you enable encryption with customer-managed keys for the storage
account. If you specify the key version, then Azure Storage uses that version for encryption until you
manually update the key version.
When the key version is explicitly specified, then you must manually update the storage account to use
the new key version URI when a new version is created. To learn how to update the storage account to
use a new version of the key, see Configure encryption with customer-managed keys stored in Azure Key
Vault or Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM.
When you update the key version, the protection of the root encryption key changes, but the data in your Azure
Storage account is not re-encrypted. There is no further action required from the user.

NOTE
To rotate a key, create a new version of the key in the key vault or managed HSM, according to your compliance policies.
You can rotate your key manually or create a function to rotate it on a schedule.
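As an illustration of that rotation step, here is a minimal sketch using the azure-keyvault-keys Python SDK; the vault URL and key name are hypothetical. Creating an RSA key under an existing name adds a new version of that key, which Azure Storage picks up automatically if you configured automatic key version updates:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

key_client = KeyClient(
    vault_url="https://my-vault.vault.azure.net/",  # hypothetical vault
    credential=DefaultAzureCredential(),
)

# Creating a key under an existing name produces a new key version, not a new key.
new_version = key_client.create_rsa_key("my-cmk", size=2048)
print(new_version.properties.version)
```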

Revoke access to customer-managed keys


You can revoke the storage account's access to the customer-managed key at any time. After access to customer-
managed keys is revoked, or after the key has been disabled or deleted, clients cannot call operations that read
from or write to a blob or its metadata. Attempts to call any of the following operations will fail with error code
403 (Forbidden) for all users:
List Blobs, when called with the include=metadata parameter on the request URI
Get Blob
Get Blob Properties
Get Blob Metadata
Set Blob Metadata
Snapshot Blob, when called with the x-ms-meta-name request header
Copy Blob
Copy Blob From URL
Set Blob Tier
Put Block
Put Block From URL
Append Block
Append Block From URL
Put Blob
Put Page
Put Page From URL
Incremental Copy Blob
To call these operations again, restore access to the customer-managed key.
All data operations that are not listed in this section may proceed after customer-managed keys are revoked or a
key is disabled or deleted.
To revoke access to customer-managed keys, use PowerShell or Azure CLI.

Customer-managed keys for Azure managed disks


Customer-managed keys are also available for managing encryption of Azure managed disks. Customer-
managed keys behave differently for managed disks than for Azure Storage resources. For more information,
see Server-side encryption of Azure managed disks for Windows or Server side encryption of Azure managed
disks for Linux.

Next steps
Azure Storage encryption for data at rest
Configure encryption with customer-managed keys stored in Azure Key Vault
Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM
Provide an encryption key on a request to Blob
storage
11/25/2021 • 3 minutes to read

Clients making requests against Azure Blob storage have the option to provide an AES-256 encryption key on a
per-request basis. Including the encryption key on the request provides granular control over encryption
settings for Blob storage operations. Customer-provided keys can be stored in Azure Key Vault or in another key
store.

Encrypting read and write operations


When a client application provides an encryption key on the request, Azure Storage performs encryption and
decryption transparently while reading and writing blob data. Azure Storage writes an SHA-256 hash of the
encryption key alongside the blob's contents. The hash is used to verify that all subsequent operations against
the blob use the same encryption key.
Azure Storage does not store or manage the encryption key that the client sends with the request. The key is
securely discarded as soon as the encryption or decryption process is complete.
When a client creates or updates a blob using a customer-provided key on the request, then subsequent read
and write requests for that blob must also provide the key. If the key is not provided on a request for a blob that
has already been encrypted with a customer-provided key, then the request fails with error code 409 (Conflict).
If the client application sends an encryption key on the request, and the storage account is also encrypted using
a Microsoft-managed key or a customer-managed key, then Azure Storage uses the key provided on the request
for encryption and decryption.
To send the encryption key as part of the request, a client must establish a secure connection to Azure Storage
using HTTPS.
Each blob snapshot can have its own encryption key.
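As a hedged illustration of the read/write flow, the following Python sketch uses the azure-storage-blob SDK; the connection string, container, and blob names are hypothetical. The client generates an AES-256 key, derives its SHA-256 hash, and passes both on every read and write:

```python
import base64
import hashlib
import os

from azure.storage.blob import BlobServiceClient, CustomerProvidedEncryptionKey

# Generate a 256-bit key; in practice, keep it in a secure store such as Azure Key Vault.
key_bytes = os.urandom(32)
cpk = CustomerProvidedEncryptionKey(
    key_value=base64.b64encode(key_bytes).decode(),
    key_hash=base64.b64encode(hashlib.sha256(key_bytes).digest()).decode(),
)

service = BlobServiceClient.from_connection_string("<connection-string>")  # hypothetical
blob = service.get_blob_client("mycontainer", "report.txt")

# The key travels with the request over HTTPS; Azure Storage does not persist it.
blob.upload_blob(b"sensitive data", overwrite=True, cpk=cpk)
data = blob.download_blob(cpk=cpk).readall()  # the same key is required to read the blob
```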

Request headers for specifying customer-provided keys


For REST calls, clients can use the following headers to securely pass encryption key information on a request to
Blob storage:

Request header | Description
x-ms-encryption-key | Required for both write and read requests. A Base64-encoded AES-256 encryption key value.
x-ms-encryption-key-sha256 | Required for both write and read requests. The Base64-encoded SHA256 of the encryption key.
x-ms-encryption-algorithm | Required for write requests, optional for read requests. Specifies the algorithm to use when encrypting data using the given key. The value of this header must be AES256.

Specifying encryption keys on the request is optional. However, if you specify one of the headers listed above for
a write operation, then you must specify all of them.
Blob storage operations supporting customer-provided keys
The following Blob storage operations support sending customer-provided encryption keys on a request:
Put Blob
Put Block List
Put Block
Put Block from URL
Put Page
Put Page from URL
Append Block
Set Blob Properties
Set Blob Metadata
Get Blob
Get Blob Properties
Get Blob Metadata
Snapshot Blob

Rotate customer-provided keys


To rotate an encryption key that was used to encrypt a blob, download the blob and then re-upload it with the
new encryption key.
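A minimal sketch of that rotation, again with the azure-storage-blob Python SDK; the helper function and its arguments are hypothetical, and old_key must be the key that currently protects the blob:

```python
import base64
import hashlib

from azure.storage.blob import BlobClient, CustomerProvidedEncryptionKey


def make_cpk(key_bytes: bytes) -> CustomerProvidedEncryptionKey:
    """Wrap raw AES-256 key bytes in the values Blob storage expects on the request."""
    return CustomerProvidedEncryptionKey(
        key_value=base64.b64encode(key_bytes).decode(),
        key_hash=base64.b64encode(hashlib.sha256(key_bytes).digest()).decode(),
    )


def rotate_customer_provided_key(blob: BlobClient, old_key: bytes, new_key: bytes) -> None:
    # Download with the key that currently protects the blob...
    contents = blob.download_blob(cpk=make_cpk(old_key)).readall()
    # ...then overwrite the blob so it is re-encrypted with the new key.
    blob.upload_blob(contents, overwrite=True, cpk=make_cpk(new_key))
```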

IMPORTANT
The Azure portal cannot be used to read from or write to a container or blob that is encrypted with a key provided on the
request.
Be sure to protect the encryption key that you provide on a request to Blob storage in a secure key store like Azure Key
Vault. If you attempt a write operation on a container or blob without the encryption key, the operation will fail, and you
will lose access to the object.

Feature support
This table shows how this feature is supported in your account and the impact on support when you enable
certain capabilities.

Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 1 | NFS 3.0 1 | SFTP 1
Standard general-purpose v2
Premium block blobs

1 Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP)
support all require a storage account with a hierarchical namespace enabled.

Next steps
Specify a customer-provided key on a request to Blob storage with .NET
Azure Storage encryption for data at rest
Encryption scopes for Blob storage
11/25/2021 • 5 minutes to read

Encryption scopes enable you to manage encryption with a key that is scoped to a container or an individual
blob. You can use encryption scopes to create secure boundaries between data that resides in the same storage
account but belongs to different customers.
For more information about working with encryption scopes, see Create and manage encryption scopes.

How encryption scopes work


By default, a storage account is encrypted with a key that is scoped to the entire storage account. When you
define an encryption scope, you specify a key that may be scoped to a container or an individual blob. When the
encryption scope is applied to a blob, the blob is encrypted with that key. When the encryption scope is applied
to a container, it serves as the default scope for blobs in that container, so that all blobs that are uploaded to that
container may be encrypted with the same key. The container can be configured to enforce the default
encryption scope for all blobs in the container, or to permit an individual blob to be uploaded to the container
with an encryption scope other than the default.
Read operations on a blob that was created with an encryption scope happen transparently, so long as the
encryption scope is not disabled.
Key management
When you define an encryption scope, you can specify whether the scope is protected with a Microsoft-
managed key or with a customer-managed key that is stored in Azure Key Vault. Different encryption scopes on
the same storage account can use either Microsoft-managed or customer-managed keys. You can also switch
the type of key used to protect an encryption scope from a customer-managed key to a Microsoft-managed key,
or vice versa, at any time. For more information about customer-managed keys, see Customer-managed keys
for Azure Storage encryption. For more information about Microsoft-managed keys, see About encryption key
management.
If you define an encryption scope with a customer-managed key, then you can choose to update the key version
either automatically or manually. If you choose to automatically update the key version, then Azure Storage
checks the key vault or managed HSM daily for a new version of the customer-managed key and automatically
updates the key to the latest version. For more information about updating the key version for a customer-
managed key, see Update the key version.
Azure Policy provides a built-in policy to require that encryption scopes use customer-managed keys. For more
information, see the Storage section in Azure Policy built-in policy definitions.
A storage account may have up to 10,000 encryption scopes that are protected with customer-managed keys
for which the key version is automatically updated. If your storage account already has 10,000 encryption
scopes that are protected with customer-managed keys that are being automatically updated, then the key
version must be updated manually for any additional encryption scopes that are protected with customer-
managed keys.
Infrastructure encryption
Infrastructure encryption in Azure Storage enables double encryption of data. With infrastructure encryption,
data is encrypted twice — once at the service level and once at the infrastructure level — with two different
encryption algorithms and two different keys.
Infrastructure encryption is supported for an encryption scope, as well as at the level of the storage account. If
infrastructure encryption is enabled for an account, then any encryption scope created on that account
automatically uses infrastructure encryption. If infrastructure encryption is not enabled at the account level, then
you have the option to enable it for an encryption scope at the time that you create the scope. The infrastructure
encryption setting for an encryption scope cannot be changed after the scope is created.
For more information about infrastructure encryption, see Enable infrastructure encryption for double
encryption of data.
Encryption scopes for containers and blobs
When you create a container, you can specify a default encryption scope for the blobs that are subsequently
uploaded to that container. When you specify a default encryption scope for a container, you can decide how the
default encryption scope is enforced:
You can require that all blobs uploaded to the container use the default encryption scope. In this case, every
blob in the container is encrypted with the same key.
You can permit a client to override the default encryption scope for the container, so that a blob may be
uploaded with an encryption scope other than the default scope. In this case, the blobs in the container may
be encrypted with different keys.
The following table summarizes the behavior of a blob upload operation, depending on how the default
encryption scope is configured for the container:

The encryption scope defined on the container is... | Uploading a blob with the default encryption scope... | Uploading a blob with an encryption scope other than the default scope...
A default encryption scope with overrides permitted | Succeeds | Succeeds
A default encryption scope with overrides prohibited | Succeeds | Fails

A default encryption scope for a container can be specified only at the time that the container is created.
If no default encryption scope is specified for the container, then you can upload a blob using any encryption
scope that you've defined for the storage account. The encryption scope must be specified at the time that the
blob is uploaded.
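As a rough sketch of both behaviors with the azure-storage-blob Python SDK, assuming the encryption scopes 'contoso-scope' and 'fabrikam-scope' already exist on the account (all names here are hypothetical):

```python
from azure.storage.blob import BlobServiceClient, ContainerEncryptionScope

service = BlobServiceClient.from_connection_string("<connection-string>")  # hypothetical

# Create a container whose default encryption scope is enforced for every upload.
container = service.create_container(
    "contoso-invoices",
    container_encryption_scope=ContainerEncryptionScope(
        default_encryption_scope="contoso-scope",
        prevent_encryption_scope_override=True,
    ),
)

# Uploads without an explicit scope are encrypted with the container default...
container.upload_blob("january.pdf", b"...")

# ...while requesting a different scope fails because overrides are prohibited.
container.upload_blob("february.pdf", b"...", encryption_scope="fabrikam-scope")
```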

Disabling an encryption scope


When you disable an encryption scope, any subsequent read or write operations made with the encryption
scope will fail with HTTP error code 403 (Forbidden). If you re-enable the encryption scope, read and write
operations will proceed normally again.
When an encryption scope is disabled, you are no longer billed for it. Disable any encryption scopes that are not
needed to avoid unnecessary charges.
If your encryption scope is protected with a customer-managed key, and you revoke the key in the key vault, the
data will become inaccessible. Be sure to disable the encryption scope prior to revoking the key in key vault to
avoid being charged for the encryption scope.
Keep in mind that customer-managed keys are protected by soft delete and purge protection in the key vault,
and a deleted key is subject to the behavior defined by those properties. For more information, see one of
the following topics in the Azure Key Vault documentation:
How to use soft-delete with PowerShell
How to use soft-delete with CLI
IMPORTANT
It is not possible to delete an encryption scope.

Feature support
This table shows how this feature is supported in your account and the impact on support when you enable
certain capabilities.

Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 1 | NFS 3.0 1 | SFTP 1
Standard general-purpose v2
Premium block blobs

1 Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP)
support all require a storage account with a hierarchical namespace enabled.

Next steps
Azure Storage encryption for data at rest
Create and manage encryption scopes
Customer-managed keys for Azure Storage encryption
What is Azure Key Vault?
Use private endpoints for Azure Storage
11/25/2021 • 8 minutes to read

You can use private endpoints for your Azure Storage accounts to allow clients on a virtual network (VNet) to
securely access data over a Private Link. The private endpoint uses a separate IP address from the VNet address
space for each storage account service. Network traffic between the clients on the VNet and the storage account
traverses over the VNet and a private link on the Microsoft backbone network, eliminating exposure from the
public internet.
Using private endpoints for your storage account enables you to:
Secure your storage account by configuring the storage firewall to block all connections on the public
endpoint for the storage service.
Increase security for the virtual network (VNet), by enabling you to block exfiltration of data from the VNet.
Securely connect to storage accounts from on-premises networks that connect to the VNet using VPN or
ExpressRoutes with private-peering.

Conceptual overview

A private endpoint is a special network interface for an Azure service in your Virtual Network (VNet). When you
create a private endpoint for your storage account, it provides secure connectivity between clients on your VNet
and your storage. The private endpoint is assigned an IP address from the IP address range of your VNet. The
connection between the private endpoint and the storage service uses a secure private link.
Applications in the VNet can connect to the storage service over the private endpoint seamlessly, using the
same connection strings and authorization mechanisms that they would use otherwise. Private
endpoints can be used with all protocols supported by the storage account, including REST and SMB.
Private endpoints can be created in subnets that use Service Endpoints. Clients in a subnet can thus connect to
one storage account using private endpoint, while using service endpoints to access others.
When you create a private endpoint for a storage service in your VNet, a consent request is sent for approval to
the storage account owner. If the user requesting the creation of the private endpoint is also an owner of the
storage account, this consent request is automatically approved.
Storage account owners can manage consent requests and the private endpoints, through the 'Private
endpoints' tab for the storage account in the Azure portal.

TIP
If you want to restrict access to your storage account through the private endpoint only, configure the storage firewall to
deny or control access through the public endpoint.

You can secure your storage account to only accept connections from your VNet, by configuring the storage
firewall to deny access through its public endpoint by default. You don't need a firewall rule to allow traffic from
a VNet that has a private endpoint, since the storage firewall only controls access through the public endpoint.
Private endpoints instead rely on the consent flow for granting subnets access to the storage service.

NOTE
When copying blobs between storage accounts, your client must have network access to both accounts. So if you choose
to use a private link for only one account (either the source or the destination), make sure that your client has network
access to the other account. To learn about other ways to configure network access, see Configure Azure Storage firewalls
and virtual networks.

Creating a private endpoint


To create a private endpoint by using the Azure Portal, see Connect privately to a storage account from the
Storage Account experience in the Azure portal.
To create a private endpoint by using PowerShell or the Azure CLI, see either of these articles. Both of them
feature an Azure web app as the target service, but the steps to create a private link are the same for an Azure
Storage account.
Create a private endpoint using Azure CLI
Create a private endpoint using Azure PowerShell
When you create a private endpoint, you must specify the storage account and the storage service to which it
connects.
You need a separate private endpoint for each storage resource that you need to access, namely Blobs, Data
Lake Storage Gen2, Files, Queues, Tables, or Static Websites. On the private endpoint, these storage services are
defined as the target sub-resource of the associated storage account.
If you create a private endpoint for the Data Lake Storage Gen2 storage resource, then you should also create
one for the Blob storage resource. That's because operations that target the Data Lake Storage Gen2 endpoint
might be redirected to the Blob endpoint. By creating a private endpoint for both resources, you ensure that
operations can complete successfully.

TIP
Create a separate private endpoint for the secondary instance of the storage service for better read performance on RA-
GRS accounts. Make sure to create a general-purpose v2 (Standard or Premium) storage account.

For read access to the secondary region with a storage account configured for geo-redundant storage, you need
separate private endpoints for both the primary and secondary instances of the service. You don't need to create
a private endpoint for the secondary instance for failover. The private endpoint will automatically connect to
the new primary instance after failover. For more information about storage redundancy options, see Azure
Storage redundancy.

Connecting to a private endpoint


Clients on a VNet using the private endpoint should use the same connection string for the storage account, as
clients connecting to the public endpoint. We rely upon DNS resolution to automatically route the connections
from the VNet to the storage account over a private link.

IMPORTANT
Use the same connection string to connect to the storage account using private endpoints, as you'd use otherwise. Please
don't connect to the storage account using its privatelink subdomain URL.

We create a private DNS zone attached to the VNet with the necessary updates for the private endpoints, by
default. However, if you're using your own DNS server, you may need to make additional changes to your DNS
configuration. The section on DNS changes below describes the updates required for private endpoints.

DNS changes for private endpoints


When you create a private endpoint, the DNS CNAME resource record for the storage account is updated to an
alias in a subdomain with the prefix privatelink . By default, we also create a private DNS zone, corresponding
to the privatelink subdomain, with the DNS A resource records for the private endpoints.
When you resolve the storage endpoint URL from outside the VNet with the private endpoint, it resolves to the
public endpoint of the storage service. When resolved from the VNet hosting the private endpoint, the storage
endpoint URL resolves to the private endpoint's IP address.
For the illustrated example above, the DNS resource records for the storage account 'StorageAccountA', when
resolved from outside the VNet hosting the private endpoint, will be:

Name | Type | Value
StorageAccountA.blob.core.windows.net | CNAME | StorageAccountA.privatelink.blob.core.windows.net
StorageAccountA.privatelink.blob.core.windows.net | CNAME | <storage service public endpoint>
<storage service public endpoint> | A | <storage service public IP address>

As previously mentioned, you can deny or control access for clients outside the VNet through the public
endpoint using the storage firewall.
The DNS resource records for StorageAccountA, when resolved by a client in the VNet hosting the private
endpoint, will be:

Name | Type | Value
StorageAccountA.blob.core.windows.net | CNAME | StorageAccountA.privatelink.blob.core.windows.net
StorageAccountA.privatelink.blob.core.windows.net | A | 10.1.1.5

This approach enables access to the storage account using the same connection string for clients on the
VNet hosting the private endpoints, as well as clients outside the VNet.
If you are using a custom DNS server on your network, clients must be able to resolve the FQDN for the storage
account endpoint to the private endpoint IP address. You should configure your DNS server to delegate your
private link subdomain to the private DNS zone for the VNet, or configure the A records for
StorageAccountA.privatelink.blob.core.windows.net with the private endpoint IP address.
TIP
When using a custom or on-premises DNS server, you should configure your DNS server to resolve the storage account
name in the privatelink subdomain to the private endpoint IP address. You can do this by delegating the
privatelink subdomain to the private DNS zone of the VNet, or configuring the DNS zone on your DNS server and
adding the DNS A records.
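One simple way to see which address a client actually resolves is sketched below with the Python standard library; the account name is hypothetical. Run it from a VM inside the VNet and from a machine outside it and compare the results:

```python
import socket

host = "storageaccounta.blob.core.windows.net"  # hypothetical account endpoint

# Inside the VNet with the private endpoint this should print a private address
# such as 10.1.1.5; outside the VNet it should print a public address.
addresses = sorted({info[4][0] for info in socket.getaddrinfo(host, 443)})
print(addresses)
```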

The recommended DNS zone names for private endpoints for storage services, and the associated endpoint
target sub-resources, are:

Storage service | Target sub-resource | Zone name
Blob service | blob | privatelink.blob.core.windows.net
Data Lake Storage Gen2 | dfs | privatelink.dfs.core.windows.net
File service | file | privatelink.file.core.windows.net
Queue service | queue | privatelink.queue.core.windows.net
Table service | table | privatelink.table.core.windows.net
Static Websites | web | privatelink.web.core.windows.net

For more information on configuring your own DNS server to support private endpoints, refer to the following
articles:
Name resolution for resources in Azure virtual networks
DNS configuration for private endpoints

Pricing
For pricing details, see Azure Private Link pricing.

Known Issues
Keep in mind the following known issues about private endpoints for Azure Storage.
Storage access constraints for clients in VNets with private endpoints
Clients in VNets with existing private endpoints face constraints when accessing other storage accounts that
have private endpoints. For example, suppose a VNet N1 has a private endpoint for a storage account A1 for
Blob storage. If storage account A2 has a private endpoint in a VNet N2 for Blob storage, then clients in VNet N1
must also access Blob storage in account A2 using a private endpoint. If storage account A2 does not have any
private endpoints for Blob storage, then clients in VNet N1 can access Blob storage in that account without a
private endpoint.
This constraint is a result of the DNS changes made when account A2 creates a private endpoint.
Network Security Group rules for subnets with private endpoints
Currently, you can't configure Network Security Group (NSG) rules and user-defined routes for private
endpoints. NSG rules applied to the subnet hosting the private endpoint are not applied to the private endpoint.
They are applied only to other endpoints (for example, network interface controllers). A limited workaround for
this issue is to implement your access rules for private endpoints on the source subnets, though this approach
may require a higher management overhead.
Copying blobs between storage accounts
You can copy blobs between storage accounts by using private endpoints only if you use the Azure REST API, or
tools that use the REST API. These tools include AzCopy, Storage Explorer, Azure PowerShell, Azure CLI, and the
Azure Blob Storage SDKs.
Only private endpoints that target the Blob storage resource are supported. Private endpoints that target the
Data Lake Storage Gen2 or the File resource are not yet supported. Also, copying between storage accounts by
using the Network File System (NFS) protocol is not yet supported.

Next steps
Configure Azure Storage firewalls and virtual networks
Security recommendations for Blob storage
Network routing preference for Azure Storage
11/25/2021 • 3 minutes to read

You can configure network routing preference for your Azure storage account to specify how network traffic is
routed to your account from clients over the internet. By default, traffic from the internet is routed to the public
endpoint of your storage account over the Microsoft global network. Azure Storage provides additional options
for configuring how traffic is routed to your storage account.
Configuring routing preference gives you the flexibility to optimize your traffic either for premium network
performance or for cost. When you configure a routing preference, you specify how traffic will be directed to the
public endpoint for your storage account by default. You can also publish route-specific endpoints for your
storage account.

NOTE
This feature is not supported in premium performance storage accounts or accounts configured to use Zone-redundant
storage (ZRS).

Microsoft global network versus Internet routing


By default, clients outside of the Azure environment access your storage account over the Microsoft global
network. The Microsoft global network is optimized for low-latency path selection to deliver premium network
performance with high reliability. Both inbound and outbound traffic are routed through the point of presence
(POP) that is closest to the client. This default routing configuration ensures that traffic to and from your storage
account traverses over the Microsoft global network for the bulk of its path, maximizing network performance.
You can change the routing configuration for your storage account so that both inbound and outbound traffic
are routed to and from clients outside of the Azure environment through the POP closest to the storage account.
This route minimizes the traversal of your traffic over the Microsoft global network, handing it off to the transit
ISP at the earliest opportunity. Utilizing this routing configuration lowers networking costs.
The following diagram shows how traffic flows between the client and the storage account for each routing
preference:
For more information on routing preference in Azure, see What is routing preference?.

Routing configuration
For step-by-step guidance that shows you how to configure the routing preference and route-specific endpoints,
see Configure network routing preference for Azure Storage.
You can choose between the Microsoft global network and internet routing as the default routing preference for
the public endpoint of your storage account. The default routing preference applies to all traffic from clients
outside Azure and affects the endpoints for Azure Data Lake Storage Gen2, Blob storage, Azure Files, and static
websites. Configuring routing preference is not supported for Azure Queues or Azure Tables.
You can also publish route-specific endpoints for your storage account. When you publish route-specific
endpoints, Azure Storage creates new public endpoints for your storage account that route traffic over the
desired path. This flexibility enables you to direct traffic to your storage account over a specific route without
changing your default routing preference.
For example, publishing an internet route-specific endpoint for the storage account 'StorageAccountA' publishes the following endpoints for your storage account:

Storage service | Route-specific endpoint
Blob service | StorageAccountA-internetrouting.blob.core.windows.net
Data Lake Storage Gen2 | StorageAccountA-internetrouting.dfs.core.windows.net
File service | StorageAccountA-internetrouting.file.core.windows.net
Static Websites | StorageAccountA-internetrouting.web.core.windows.net

If you have a read-access geo-redundant storage (RA-GRS) or a read-access geo-zone-redundant storage (RA-
GZRS) storage account, publishing route-specific endpoints also automatically creates the corresponding
endpoints in the secondary region for read access.

Storage service | Route-specific read-only secondary endpoint
Blob service | StorageAccountA-internetrouting-secondary.blob.core.windows.net
Data Lake Storage Gen2 | StorageAccountA-internetrouting-secondary.dfs.core.windows.net
File service | StorageAccountA-internetrouting-secondary.file.core.windows.net
Static Websites | StorageAccountA-internetrouting-secondary.web.core.windows.net

The connection strings for the published route-specific endpoints can be copied via the Azure portal. These
connection strings can be used for Shared Key authorization with all existing Azure Storage SDKs and APIs.
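For example, a client could target the internet route-specific endpoint with a connection string copied from the portal, sketched here with the azure-storage-blob Python SDK; the account name, key placeholder, and endpoint are hypothetical:

```python
from azure.storage.blob import BlobServiceClient

# Connection string for the route-specific endpoint (values are placeholders).
conn_str = (
    "DefaultEndpointsProtocol=https;"
    "AccountName=storageaccounta;"
    "AccountKey=<account-key>;"
    "BlobEndpoint=https://storageaccounta-internetrouting.blob.core.windows.net;"
)

service = BlobServiceClient.from_connection_string(conn_str)
for container in service.list_containers():
    print(container.name)
```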

Regional availability
Routing preference for Azure Storage is available in the following regions:
Central US
Central US EUAP
East US
East US 2
East US 2 EUAP
South Central US
West Central US
West US
West US 2
France Central
France South
Germany North
Germany West Central
North Central US
North Europe
Norway East
Switzerland North
Switzerland West
UK South
UK West
West Europe
UAE Central
East Asia
Southeast Asia
Japan East
Japan West
West India
Australia East
Australia Southeast

Known issues
The following known issues affect the routing preference for Azure Storage:
Access requests for the route-specific endpoint for the Microsoft global network fail with HTTP error 404 or
equivalent. Routing over the Microsoft global network works as expected when it is set as the default routing
preference for the public endpoint.

Pricing and billing


For pricing and billing details, see the Pricing section in What is routing preference?.

Next steps
What is routing preference?
Configure network routing preference
Configure Azure Storage firewalls and virtual networks
Security recommendations for Blob storage
Data protection overview
11/25/2021 • 11 minutes to read

Azure Storage provides data protection for Blob Storage and Azure Data Lake Storage Gen2 to help you to
prepare for scenarios where you need to recover data that has been deleted or overwritten. It's important to
think about how to best protect your data before an incident occurs that could compromise it. This guide can
help you decide in advance which data protection features your scenario requires, and how to implement them.
If you should need to recover data that has been deleted or overwritten, this overview also provides guidance
on how to proceed, based on your scenario.
In the Azure Storage documentation, data protection refers to strategies for protecting the storage account and
data within it from being deleted or modified, or for restoring data after it has been deleted or modified. Azure
Storage also offers options for disaster recovery, including multiple levels of redundancy to protect your data
from service outages due to hardware problems or natural disasters, and customer-managed failover in the
event that the data center in the primary region becomes unavailable. For more information about how your
data is protected from service outages, see Disaster recovery.

Recommendations for basic data protection


If you are looking for basic data protection coverage for your storage account and the data that it contains, then
Microsoft recommends taking the following steps to begin with:
Configure an Azure Resource Manager lock on the storage account to protect the account from deletion or
configuration changes. Learn more...
Enable container soft delete for the storage account to recover a deleted container and its contents. Learn
more...
Save the state of a blob at regular intervals:
For Blob Storage workloads, enable blob versioning to automatically save the state of your data each
time a blob is overwritten. Learn more...
For Azure Data Lake Storage workloads, take manual snapshots to save the state of your data at a
particular point in time. Learn more...
These options, as well as additional data protection options for other scenarios, are described in more detail in
the following section.
For an overview of the costs involved with these features, see Summary of cost considerations.
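As a hedged sketch of the soft delete and versioning parts of that baseline, here is how it might look with the azure-mgmt-storage Python SDK; the subscription, resource group, and account names are hypothetical, and model names can differ between SDK versions:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import BlobServiceProperties, DeleteRetentionPolicy

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Enable container soft delete, blob soft delete, and blob versioning for the account.
client.blob_services.set_service_properties(
    resource_group_name="my-rg",
    account_name="mystorageaccount",
    parameters=BlobServiceProperties(
        container_delete_retention_policy=DeleteRetentionPolicy(enabled=True, days=7),
        delete_retention_policy=DeleteRetentionPolicy(enabled=True, days=7),
        is_versioning_enabled=True,
    ),
)
```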

Overview of data protection options


The following table summarizes the options available in Azure Storage for common data protection scenarios.
Choose the scenarios that are applicable to your situation to learn more about the options available to you. Note
that not all features are available at this time for storage accounts with a hierarchical namespace enabled.

Scenario: Prevent a storage account from being deleted or modified.
Data protection option: Azure Resource Manager lock (Learn more...)
Recommendations: Lock all of your storage accounts with an Azure Resource Manager lock to prevent deletion of the storage account.
Benefit: Protects the storage account against deletion or configuration changes. Does not protect containers or blobs in the account from being deleted or overwritten.
Available for Data Lake Storage: Yes

Scenario: Prevent a blob version from being deleted for an interval that you control.
Data protection option: Immutability policy on a blob version (preview) (Learn more...)
Recommendations: Set an immutability policy on an individual blob version to protect business-critical documents, for example, in order to meet legal or regulatory compliance requirements.
Benefit: Protects a blob version from being deleted and its metadata from being overwritten. An overwrite operation creates a new version. If at least one container has version-level immutability enabled, the storage account is also protected from deletion. Container deletion fails if at least one blob exists in the container.
Available for Data Lake Storage: No

Scenario: Prevent a container and its blobs from being deleted or modified for an interval that you control.
Data protection option: Immutability policy on a container (Learn more...)
Recommendations: Set an immutability policy on a container to protect business-critical documents, for example, in order to meet legal or regulatory compliance requirements.
Benefit: Protects a container and its blobs from all deletes and overwrites. When a legal hold or a locked time-based retention policy is in effect, the storage account is also protected from deletion. Containers for which no immutability policy has been set are not protected from deletion.
Available for Data Lake Storage: Yes, in preview

Scenario: Restore a deleted container within a specified interval.
Data protection option: Container soft delete (Learn more...)
Recommendations: Enable container soft delete for all storage accounts, with a minimum retention interval of 7 days. Enable blob versioning and blob soft delete together with container soft delete to protect individual blobs in a container. Store containers that require different retention periods in separate storage accounts.
Benefit: A deleted container and its contents may be restored within the retention period. Only container-level operations (e.g., Delete Container) can be restored. Container soft delete does not enable you to restore an individual blob in the container if that blob is deleted.
Available for Data Lake Storage: Yes

Scenario: Automatically save the state of a blob in a previous version when it is overwritten.
Data protection option: Blob versioning (Learn more...)
Recommendations: Enable blob versioning, together with container soft delete and blob soft delete, for storage accounts where you need optimal protection for blob data. Store blob data that does not require versioning in a separate account to limit costs.
Benefit: Every blob write operation creates a new version. The current version of a blob may be restored from a previous version if the current version is deleted or overwritten.
Available for Data Lake Storage: No

Scenario: Restore a deleted blob or blob version within a specified interval.
Data protection option: Blob soft delete (Learn more...)
Recommendations: Enable blob soft delete for all storage accounts, with a minimum retention interval of 7 days. Enable blob versioning and container soft delete together with blob soft delete for optimal protection of blob data. Store blobs that require different retention periods in separate storage accounts.
Benefit: A deleted blob or blob version may be restored within the retention period.
Available for Data Lake Storage: Yes, in preview

Scenario: Restore a set of block blobs to a previous point in time.
Data protection option: Point-in-time restore (Learn more...)
Recommendations: To use point-in-time restore to revert to an earlier state, design your application to delete individual block blobs rather than deleting containers.
Benefit: A set of block blobs may be reverted to their state at a specific point in the past. Only operations performed on block blobs are reverted. Any operations performed on containers, page blobs, or append blobs are not reverted.
Available for Data Lake Storage: No

Scenario: Manually save the state of a blob at a given point in time.
Data protection option: Blob snapshot (Learn more...)
Recommendations: Recommended as an alternative to blob versioning when versioning is not appropriate for your scenario, due to cost or other considerations, or when the storage account has a hierarchical namespace enabled.
Benefit: A blob may be restored from a snapshot if the blob is overwritten. If the blob is deleted, snapshots are also deleted.
Available for Data Lake Storage: Yes, in preview

A blob can be Roll-your-own Recommended for Data can be restored AzCopy and Azure
deleted or solution for copying peace-of-mind from the second Data Factory are
overwritten, but th