About Blob Storage (Azure)
Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing
massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model
or definition, such as text or binary data.
Next steps
Introduction to Azure Blob storage
Introduction to Azure Data Lake Storage Gen2
Introduction to the core Azure Storage services
The Azure Storage platform is Microsoft's cloud storage solution for modern data storage scenarios. Core
storage services offer a massively scalable object store for data objects, disk storage for Azure virtual machines
(VMs), a file system service for the cloud, a messaging store for reliable messaging, and a NoSQL store. These
services offer the following benefits:
Durable and highly available. Redundancy ensures that your data is safe in the event of transient
hardware failures. You can also opt to replicate data across datacenters or geographical regions for additional
protection from local catastrophe or natural disaster. Data replicated in this way remains highly available in
the event of an unexpected outage.
Secure. All data written to an Azure storage account is encrypted by the service. Azure Storage provides you
with fine-grained control over who has access to your data.
Scalable. Azure Storage is designed to be massively scalable to meet the data storage and performance
needs of today's applications.
Managed. Azure handles hardware maintenance, updates, and critical issues for you.
Accessible. Data in Azure Storage is accessible from anywhere in the world over HTTP or HTTPS. Microsoft
provides client libraries for Azure Storage in a variety of languages, including .NET, Java, Node.js, Python, PHP,
Ruby, Go, and others, as well as a mature REST API. Azure Storage supports scripting in Azure PowerShell or
Azure CLI. And the Azure portal and Azure Storage Explorer offer easy visual solutions for working with your
data.
Example scenarios
The following table compares Files, Blobs, Disks, Queues, and Tables, and shows example scenarios for each.
Azure Files
Description: Offers fully managed cloud file shares that you can access from anywhere via the industry standard
Server Message Block (SMB) protocol. You can mount Azure file shares from cloud or on-premises deployments
of Windows, Linux, and macOS.
Example scenarios: You want to "lift and shift" an application to the cloud that already uses the native file system
APIs to share data between it and other applications running in Azure. You want to replace or supplement
on-premises file servers or NAS devices.

Azure Blobs
Description: Allows unstructured data to be stored and accessed at a massive scale in block blobs. Also supports
Azure Data Lake Storage Gen2 for enterprise big data analytics solutions.
Example scenarios: You want your application to support streaming and random access scenarios. You want to be
able to access application data from anywhere. You want to build an enterprise data lake on Azure and perform
big data analytics.

Azure Disks
Description: Allows data to be persistently stored and accessed from an attached virtual hard disk.
Example scenario: You want to "lift and shift" applications that use native file system APIs to read and write data
to persistent disks.

Azure Queues
Description: Allows for asynchronous message queueing between application components.
Example scenario: You want to decouple application components and use asynchronous messaging to
communicate between them.

Azure Tables
Description: Allow you to store structured NoSQL data in the cloud, providing a key/attribute store with a
schemaless design.
Example scenarios: You want to store flexible datasets like user data for web applications, address books, device
information, or other types of metadata your service requires.
Blob storage
Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing
massive amounts of unstructured data, such as text or binary data.
Blob storage is ideal for:
Serving images or documents directly to a browser.
Storing files for distributed access.
Streaming video and audio.
Storing data for backup and restore, disaster recovery, and archiving.
Storing data for analysis by an on-premises or Azure-hosted service.
Objects in Blob storage can be accessed from anywhere in the world via HTTP or HTTPS. Users or client
applications can access blobs via URLs, the Azure Storage REST API, Azure PowerShell, Azure CLI, or an Azure
Storage client library. The storage client libraries are available for multiple languages, including .NET, Java,
Node.js, Python, PHP, and Ruby.
For more information about Blob storage, see Introduction to Blob storage.
Azure Files
Azure Files enables you to set up highly available network file shares that can be accessed by using the standard
Server Message Block (SMB) protocol. That means that multiple VMs can share the same files with both read
and write access. You can also read the files using the REST interface or the storage client libraries.
One thing that distinguishes Azure Files from files on a corporate file share is that you can access the files from
anywhere in the world using a URL that points to the file and includes a shared access signature (SAS) token.
You can generate SAS tokens; they allow specific access to a private asset for a specific amount of time.
File shares can be used for many common scenarios:
Many on-premises applications use file shares. This feature makes it easier to migrate those applications
that share data to Azure. If you mount the file share to the same drive letter that the on-premises
application uses, the part of your application that accesses the file share should work with minimal, if any,
changes.
Configuration files can be stored on a file share and accessed from multiple VMs. Tools and utilities used
by multiple developers in a group can be stored on a file share, ensuring that everybody can find them,
and that they use the same version.
Resource logs, metrics, and crash dumps are just three examples of data that can be written to a file share
and processed or analyzed later.
For more information about Azure Files, see Introduction to Azure Files.
Some SMB features are not applicable to the cloud. For more information, see Features not supported by the
Azure File service.
Queue storage
The Azure Queue service is used to store and retrieve messages. Queue messages can be up to 64 KB in size,
and a queue can contain millions of messages. Queues are generally used to store lists of messages to be
processed asynchronously.
For example, say you want your customers to be able to upload pictures, and you want to create thumbnails for
each picture. You could have your customer wait for you to create the thumbnails while uploading the pictures.
An alternative would be to use a queue. When the customer finishes their upload, write a message to the queue.
Then have an Azure Function retrieve the message from the queue and create the thumbnails. Each of the parts
of this processing can be scaled separately, giving you more control when tuning it for your usage.
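As a minimal sketch of that flow with the Azure CLI (the queue, account, and message names are illustrative, and
the commands assume you're already signed in and authorized against the storage account), the upload front end
could enqueue a message like this:
az storage queue create --name thumbnail-requests --account-name mystorageaccount
az storage message put --queue-name thumbnail-requests --content "photo-1234.jpg" --account-name mystorageaccount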
For more information about Azure Queues, see Introduction to Queues.
Table storage
Azure Table storage is now part of Azure Cosmos DB. To see Azure Table storage documentation, see the Azure
Table Storage Overview. In addition to the existing Azure Table storage service, there is a new Azure Cosmos DB
Table API offering that provides throughput-optimized tables, global distribution, and automatic secondary
indexes. To learn more and try out the new premium experience, see Azure Cosmos DB Table API.
For more information about Table storage, see Overview of Azure Table storage.
Disk storage
An Azure managed disk is a virtual hard disk (VHD). You can think of it like a physical disk in an on-premises
server, but virtualized. Azure managed disks are stored as page blobs, which are random IO storage objects in
Azure. We call a managed disk 'managed' because it is an abstraction over page blobs, blob containers, and
Azure storage accounts. With managed disks, all you have to do is provision the disk, and Azure takes care of
the rest.
For more information about managed disks, see Introduction to Azure managed disks.
Encryption
There are two basic kinds of encryption available for the core storage services. For more information about
security and encryption, see the Azure Storage security guide.
Encryption at rest
Azure Storage encryption protects and safeguards your data to meet your organizational security and
compliance commitments. Azure Storage automatically encrypts all data prior to persisting to the storage
account and decrypts it prior to retrieval. The encryption, decryption, and key management processes are
transparent to users. Customers can also choose to manage their own keys using Azure Key Vault. For more
information, see Azure Storage encryption for data at rest.
Client-side encryption
The Azure Storage client libraries provide methods for encrypting data from the client library before sending it
across the wire and decrypting the response. Data encrypted via client-side encryption is also encrypted at rest
by Azure Storage. For more information about client-side encryption, see Client-side encryption with .NET for
Azure Storage.
Redundancy
To ensure that your data is durable, Azure Storage stores multiple copies of your data. When you set up your
storage account, you select a redundancy option. For more information, see Azure Storage redundancy.
Pricing
When making decisions about how your data is stored and accessed, you should also consider the costs
involved. For more information, see Azure Storage pricing.
Next steps
To get up and running with core Azure Storage services, see Create a storage account.
Introduction to Azure Blob storage
Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing
massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model
or definition, such as text or binary data.
Storage accounts
A storage account provides a unique namespace in Azure for your data. Every object that you store in Azure
Storage has an address that includes your unique account name. The combination of the account name and the
Azure Storage blob endpoint forms the base address for the objects in your storage account.
For example, if your storage account is named mystorageaccount, then the default endpoint for Blob storage is:
http://mystorageaccount.blob.core.windows.net
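A blob's address appends the container and blob names to that endpoint. For example, a blob named myblob.txt
in a container named mycontainer (both names are hypothetical) would be addressable at:
http://mystorageaccount.blob.core.windows.net/mycontainer/myblob.txt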
To create a storage account, see Create a storage account. To learn more about storage accounts, see Azure
storage account overview.
Containers
A container organizes a set of blobs, similar to a directory in a file system. A storage account can include an
unlimited number of containers, and a container can store an unlimited number of blobs.
NOTE
The container name must be lowercase. For more information about naming containers, see Naming and Referencing
Containers, Blobs, and Metadata.
Blobs
Azure Storage supports three types of blobs:
Block blobs store text and binary data. Block blobs are made up of blocks of data that can be managed
individually. Block blobs can store up to about 190.7 TiB.
Append blobs are made up of blocks like block blobs, but are optimized for append operations. Append
blobs are ideal for scenarios such as logging data from virtual machines.
Page blobs store random access files up to 8 TiB in size. Page blobs store virtual hard drive (VHD) files and
serve as disks for Azure virtual machines. For more information about page blobs, see Overview of Azure
page blobs.
For more information about the different types of blobs, see Understanding Block Blobs, Append Blobs, and
Page Blobs.
Next steps
Create a storage account
Scalability and performance targets for Blob storage
Quickstart: Upload, download, and list blobs with
the Azure portal
In this quickstart, you learn how to use the Azure portal to create a container in Azure Storage, and to upload
and download block blobs in that container.
Prerequisites
To access Azure Storage, you'll need an Azure subscription. If you don't already have a subscription, create a free
account before you begin.
All access to Azure Storage takes place through a storage account. For this quickstart, create a storage account
using the Azure portal, Azure PowerShell, or Azure CLI. For help creating a storage account, see Create a storage
account.
Create a container
To create a container in the Azure portal, follow these steps:
1. Navigate to your new storage account in the Azure portal.
2. In the left menu for the storage account, scroll to the Data storage section, then select Blob containers.
3. Select the + Container button.
4. Type a name for your new container. The container name must be lowercase, must start with a letter or
number, and can include only letters, numbers, and the dash (-) character. For more information about
container and blob names, see Naming and referencing containers, blobs, and metadata.
5. Set the level of public access to the container. The default level is Private (no anonymous access).
6. Select OK to create the container.
Upload a block blob
Block blobs consist of blocks of data assembled to make a blob. Most scenarios using Blob storage employ block
blobs. Block blobs are ideal for storing text and binary data in the cloud, like files, images, and videos. This
quickstart shows how to work with block blobs.
To upload a block blob to your new container in the Azure portal, follow these steps:
1. In the Azure portal, navigate to the container you created in the previous section.
2. Select the container to show a list of blobs it contains. This container is new, so it won't yet contain any
blobs.
3. Select the Upload button to open the upload blade and browse your local file system to find a file to
upload as a block blob. You can optionally expand the Advanced section to configure other settings for
the upload operation.
4. Select the Upload button to upload the blob.
5. Upload as many blobs as you like in this way. You'll see that the new blobs are now listed within the
container.
Clean up resources
To remove all the resources you created in this quickstart, you can simply delete the container. All blobs in the
container will also be deleted.
To delete the container:
1. In the Azure portal, navigate to the list of containers in your storage account.
2. Select the container to delete.
3. Select the More button (...), and select Delete.
4. Confirm that you want to delete the container.
Next steps
In this quickstart, you learned how to create a container and upload a blob with Azure portal. To learn about
working with Blob storage from a web app, continue to a tutorial that shows how to upload images to a storage
account.
Tutorial: Upload image data in the cloud with Azure Storage
Quickstart: Use Azure Storage Explorer to create a
blob
In this quickstart, you learn how to use Azure Storage Explorer to create a container and a blob. Next, you learn
how to download the blob to your local computer, and how to view all of the blobs in a container. You also learn
how to create a snapshot of a blob, manage container access policies, and create a shared access signature.
Prerequisites
To access Azure Storage, you'll need an Azure subscription. If you don't already have a subscription, create a free
account before you begin.
All access to Azure Storage takes place through a storage account. For this quickstart, create a storage account
using the Azure portal, Azure PowerShell, or Azure CLI. For help creating a storage account, see Create a storage
account.
This quickstart requires that you install Azure Storage Explorer. To install Azure Storage Explorer for Windows,
Macintosh, or Linux, see Azure Storage Explorer.
When Storage Explorer starts, you can connect to a storage account or service in one of several ways:
Use a connection string or shared access signature URI: can be used to directly access a container or storage
account with a SAS token or a shared connection string.
Use a storage account name and key: use the storage account name and key of your storage account to connect
to Azure Storage.
Select Add an Azure Account and click Sign in...: follow the on-screen prompts to sign into your Azure
account.
After Storage Explorer finishes connecting, it displays the Explorer tab. This view gives you insight to all of your
Azure storage accounts as well as local storage configured through the Azurite storage emulator, Cosmos DB
accounts, or Azure Stack environments.
Create a container
To create a container, expand the storage account you created in the preceding step. Select Blob Containers,
right-click and select Create Blob Container. Enter the name for your blob container. See the Create a
container section for a list of rules and restrictions on naming blob containers. When complete, press Enter to
create the blob container. Once the blob container has been successfully created, it is displayed under the Blob
Containers folder for the selected storage account.
Manage snapshots
Azure Storage Explorer provides the capability to take and manage snapshots of your blobs. To take a snapshot
of a blob, right-click the blob and select Create Snapshot . To view snapshots for a blob, right-click the blob and
select Manage Snapshots. A list of the snapshots for the blob is shown in the current tab.
NOTE
When you create a SAS with Storage Explorer, the SAS is always assigned with the storage account key. Storage Explorer
does not currently support creating a user delegation SAS, which is a SAS that is signed with Azure AD credentials.
Next steps
In this quickstart, you learned how to transfer files between a local disk and Azure Blob storage using Azure
Storage Explorer. To learn more about working with Blob storage, continue to the Blob storage overview.
Introduction to Azure Blob Storage
Quickstart: Upload, download, and list blobs with
PowerShell
Use the Azure PowerShell module to create and manage Azure resources. Creating or managing Azure
resources can be done from the PowerShell command line or in scripts. This guide describes using PowerShell
to transfer files between local disk and Azure Blob storage.
Prerequisites
To access Azure Storage, you'll need an Azure subscription. If you don't already have a subscription, then create a
free account before you begin.
You will also need the Storage Blob Data Contributor role to read, write, and delete Azure Storage containers and
blobs.
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
This quickstart requires the Azure PowerShell module Az version 0.7 or later. Run
Get-InstalledModule -Name Az -AllVersions | select Name,Version to find the version. If you need to install or
upgrade, see Install Azure PowerShell module.
Sign in to Azure
Sign in to your Azure subscription with the Connect-AzAccount command and follow the on-screen directions.
Connect-AzAccount
If you don't know which location you want to use, you can list the available locations. Display the list of locations
by using the following code example and find the one you want to use. This example uses eastus . Store the
location in a variable and use the variable so you can change it in one place.
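A sketch of those commands, assuming the Az module is installed and you're signed in:
Get-AzLocation | Select-Object Location
$location = "eastus"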
$resourceGroup = "myResourceGroup"
New-AzResourceGroup -Name $resourceGroup -Location $location
Create a storage account
Create a standard, general-purpose storage account with LRS replication by using New-AzStorageAccount. Next,
get the storage account context that defines the storage account you want to use. When acting on a storage
account, reference the context instead of repeatedly passing in the credentials. Use the following example to
create a storage account called mystorageaccount with locally redundant storage (LRS) and blob encryption
(enabled by default).
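A sketch of the account creation; the name mystorageaccount is illustrative and must be globally unique:
$storageAccount = New-AzStorageAccount -ResourceGroupName $resourceGroup `
  -Name "mystorageaccount" `
  -SkuName Standard_LRS `
  -Location $location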
$ctx = $storageAccount.Context
Create a container
Blobs are always uploaded into a container. You can organize groups of blobs in containers, similar to the way
you organize files in folders on your computer.
Set the container name, then create the container by using New-AzStorageContainer. Set the permissions to
blob to allow public access of the files. The container name in this example is quickstartblobs.
$containerName = "quickstartblobs"
New-AzStorageContainer -Name $containerName -Context $ctx -Permission blob
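To put blobs into the container, upload local files with the Set-AzStorageBlobContent cmdlet. The file and blob
names below are illustrative:
Set-AzStorageBlobContent -File "D:\_TestImages\Image001.jpg" `
  -Container $containerName `
  -Blob "Image001.jpg" `
  -Context $ctx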
Download blobs
Download the blobs to your local disk. For each blob you want to download, set the name and call Get-
AzStorageBlobContent to download the blob.
This example downloads the blobs to D:\_TestImages\Downloads on the local disk.
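A sketch of the download with Get-AzStorageBlobContent; the blob name is illustrative and should match one
you uploaded:
Get-AzStorageBlobContent -Blob "Image001.jpg" `
  -Container $containerName `
  -Destination "D:\_TestImages\Downloads\" `
  -Context $ctx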
Clean up resources
Remove all of the assets you've created. The easiest way to remove the assets is to delete the resource group.
Removing the resource group also deletes all resources included within the group. In the following example,
removing the resource group removes the storage account and the resource group itself.
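A sketch of that cleanup:
Remove-AzResourceGroup -Name $resourceGroup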
Next steps
In this quickstart, you transferred files between a local file system and Azure Blob storage. To learn more about
working with Blob storage by using PowerShell, explore Azure PowerShell samples for Blob storage.
Azure PowerShell samples for Azure Blob storage
Microsoft Azure PowerShell Storage cmdlets reference
Storage PowerShell cmdlets
Microsoft Azure Storage Explorer
Microsoft Azure Storage Explorer is a free, standalone app from Microsoft that enables you to work visually
with Azure Storage data on Windows, macOS, and Linux.
Quickstart: Create, download, and list blobs with
Azure CLI
The Azure CLI is Azure's command-line experience for managing Azure resources. You can use it in your browser
with Azure Cloud Shell. You can also install it on macOS, Linux, or Windows and run it from the command line.
In this quickstart, you learn to use the Azure CLI to upload and download data to and from Azure Blob storage.
Prerequisites
To access Azure Storage, you'll need an Azure subscription. If you don't already have a subscription, create a free
account before you begin.
All access to Azure Storage takes place through a storage account. For this quickstart, create a storage account
using the Azure portal, Azure PowerShell, or Azure CLI. For help creating a storage account, see Create a storage
account.
Prepare your environment for the Azure CLI
Use the Bash environment in Azure Cloud Shell.
If you prefer, install the Azure CLI to run CLI reference commands.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
This article requires version 2.0.46 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is
already installed.
az group create \
--name <resource-group> \
--location <location>
Create a container
Blobs are always uploaded into a container. You can organize groups of blobs in containers similar to the way
you organize your files on your computer in folders. Create a container for storing blobs with the az storage
container create command.
The following example uses your Azure AD account to authorize the operation to create the container. Before
you create the container, assign the Storage Blob Data Contributor role to yourself. Even if you are the account
owner, you need explicit permissions to perform data operations against the storage account. For more
information about assigning Azure roles, see Assign an Azure role for access to blob data.
Remember to replace placeholder values in angle brackets with your own values:
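A typical form of the command looks like the following sketch; --auth-mode login authorizes the operation with
your Azure AD credentials:
az storage container create \
    --account-name <storage-account> \
    --name <container> \
    --auth-mode login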
IMPORTANT
Azure role assignments may take a few minutes to propagate.
You can also use the storage account key to authorize the operation to create the container. For more
information about authorizing data operations with Azure CLI, see Authorize access to blob or queue data with
Azure CLI.
Upload a blob
Blob storage supports block blobs, append blobs, and page blobs. The examples in this quickstart show how to
work with block blobs.
First, create a file to upload to a block blob. If you're using Azure Cloud Shell, use the following command to
create a file:
vi helloworld
When the file opens, press insert. Type Hello world, then press Esc. Next, type :x, then press Enter.
In this example, you upload a blob to the container you created in the last step using the az storage blob upload
command. It's not necessary to specify a file path since the file was created at the root directory. Remember to
replace placeholder values in angle brackets with your own values:
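A sketch of the upload, using the helloworld file created above:
az storage blob upload \
    --account-name <storage-account> \
    --container-name <container> \
    --name helloworld \
    --file helloworld \
    --auth-mode login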
This operation creates the blob if it doesn't already exist, and overwrites it if it does. Upload as many files as you
like before continuing.
To upload multiple files at the same time, you can use the az storage blob upload-batch command.
Download a blob
Use the az storage blob download command to download the blob you uploaded earlier. Remember to replace
placeholder values in angle brackets with your own values:
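A sketch of the download command described above:
az storage blob download \
    --account-name <storage-account> \
    --container-name <container> \
    --name helloworld \
    --file <destination-path> \
    --auth-mode login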
Clean up resources
If you want to delete the resources you created as part of this quickstart, including the storage account, delete
the resource group by using the az group delete command. Remember to replace placeholder values in angle
brackets with your own values:
az group delete \
--name <resource-group> \
--no-wait
Next steps
In this quickstart, you learned how to transfer files between a local file system and a container in Azure Blob
storage. To learn more about working with Blob storage by using Azure CLI, explore Azure CLI samples for Blob
storage.
Azure CLI samples for Blob storage
Quickstart: Azure Blob Storage client library v12 for
.NET
Get started with the Azure Blob Storage client library v12 for .NET. Azure Blob Storage is Microsoft's object
storage solution for the cloud. Follow steps to install the package and try out example code for basic tasks. Blob
storage is optimized for storing massive amounts of unstructured data.
The examples in this quickstart show you how to use the Azure Blob Storage client library v12 for .NET to:
Get the connection string
Create a container
Upload a blob to a container
List blobs in a container
Download a blob
Delete a container
Additional resources:
API reference documentation
Library source code
Package (NuGet)
Samples
Prerequisites
Azure subscription - create one for free
Azure storage account - create a storage account
Current .NET Core SDK for your operating system. Be sure to get the SDK and not the runtime.
Setting up
This section walks you through preparing a project to work with the Azure Blob Storage client library v12 for
.NET.
Create the project
Create a .NET Core application named BlobQuickstartV12.
1. In a console window (such as cmd, PowerShell, or Bash), use the dotnet new command to create a new
console app with the name BlobQuickstartV12. This command creates a simple "Hello World" C# project
with a single source file: Program.cs.
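The command is presumably the standard console template invocation:
dotnet new console -n BlobQuickstartV12
2. Switch to the newly created BlobQuickstartV12 directory: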
cd BlobQuickstartV12
3. Inside the BlobQuickstartV12 directory, create another directory called data. This is where the blob data
files will be created and stored.
mkdir data
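The code in this quickstart uses the Azure.Storage.Blobs client library. This guide assumes the package has been
added to the project, for example with:
dotnet add package Azure.Storage.Blobs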
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using System;
using System.IO;
using System.Threading.Tasks;
namespace BlobQuickstartV12
{
class Program
{
static async Task Main()
{
}
}
}
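This quickstart reads the storage account connection string from an environment variable named
AZURE_STORAGE_CONNECTION_STRING. Assuming you've copied the connection string from your storage account
in the Azure portal, you can set the variable on Windows with a command like the following:
Windows
setx AZURE_STORAGE_CONNECTION_STRING "<yourconnectionstring>"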
After you add the environment variable in Windows, you must start a new instance of the command window.
Linux
export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
macOS
export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
Restart programs
After you add the environment variable, restart any running programs that will need to read the environment
variable. For example, restart your development environment or editor before you continue.
Object model
Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data
that does not adhere to a particular data model or definition, such as text or binary data. Blob storage offers
three types of resources:
The storage account
A container in the storage account
A blob in the container
The following diagram shows the relationship between these resources.
Use the following .NET classes to interact with these resources:
BlobServiceClient: The BlobServiceClient class allows you to manipulate Azure Storage resources and blob
containers.
BlobContainerClient: The BlobContainerClient class allows you to manipulate Azure Storage containers and
their blobs.
BlobClient: The BlobClient class allows you to manipulate Azure Storage blobs.
Code examples
The sample code snippets in the following sections show you how to perform basic data operations with the
Azure Blob Storage client library for .NET.
Get the connection string
The code below retrieves the connection string for the storage account from the environment variable created in
the Configure your storage connection string section.
Add this code inside the Main method:
// Retrieve the connection string for use with the application. The storage
// connection string is stored in an environment variable on the machine
// running the application called AZURE_STORAGE_CONNECTION_STRING. If the
// environment variable is created after the application is launched in a
// console or with Visual Studio, the shell or application needs to be closed
// and reloaded to take the environment variable into account.
string connectionString = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");
Create a container
Decide on a name for the new container. The code below appends a GUID value to the container name to ensure
that it is unique.
IMPORTANT
Container names must be lowercase. For more information about naming containers and blobs, see Naming and
Referencing Containers, Blobs, and Metadata.
Create an instance of the BlobServiceClient class. Then, call the CreateBlobContainerAsync method to create the
container in your storage account.
Add this code to the end of the Main method:
// Create a BlobServiceClient object which will be used to create a container client
BlobServiceClient blobServiceClient = new BlobServiceClient(connectionString);
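// Create a unique name for the container (a sketch of the step described above; the "quickstartblobs" prefix is illustrative)
string containerName = "quickstartblobs" + Guid.NewGuid().ToString();

// Create the container and return a container client object
BlobContainerClient containerClient = await blobServiceClient.CreateBlobContainerAsync(containerName);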
// Create a local file in the ./data/ directory for uploading and downloading
string localPath = "./data/";
string fileName = "quickstart" + Guid.NewGuid().ToString() + ".txt";
string localFilePath = Path.Combine(localPath, fileName);
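// Write some text to the local file, then upload it to the container
// (a sketch; containerClient comes from the container snippet above)
await File.WriteAllTextAsync(localFilePath, "Hello, World!");

// Get a reference to a blob with the same name as the local file and upload the file
BlobClient blobClient = containerClient.GetBlobClient(fileName);
await blobClient.UploadAsync(localFilePath, true);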
Console.WriteLine("Listing blobs...");
Download a blob
Download the previously created blob by calling the DownloadToAsync method. The example code adds a suffix
of "DOWNLOADED" to the file name so that you can see both files in local file system.
Add this code to the end of the Main method:
// Download the blob to a local file
// Append the string "DOWNLOADED" before the .txt extension
// so you can compare the files in the data directory
string downloadFilePath = localFilePath.Replace(".txt", "DOWNLOADED.txt");
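// Download the blob's contents and save them to the downloadFilePath
// (a sketch; blobClient comes from the upload snippet above)
Console.WriteLine("\nDownloading blob to\n\t{0}\n", downloadFilePath);
await blobClient.DownloadToAsync(downloadFilePath);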
Delete a container
The following code cleans up the resources the app created by deleting the entire container by using
DeleteAsync. It also deletes the local files created by the app.
The app pauses for user input by calling Console.ReadLine before it deletes the blob, container, and local files.
This is a good chance to verify that the resources were actually created correctly, before they are deleted.
Add this code to the end of the Main method:
// Clean up
Console.Write("Press any key to begin clean up");
Console.ReadLine();
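// Delete the container and the local files created by the quickstart
// (a sketch; containerClient, localFilePath, and downloadFilePath come from the earlier snippets)
Console.WriteLine("Deleting blob container...");
await containerClient.DeleteAsync();

Console.WriteLine("Deleting the local source and downloaded files...");
File.Delete(localFilePath);
File.Delete(downloadFilePath);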
Console.WriteLine("Done");
dotnet build
dotnet run
Listing blobs...
quickstart2fe6c5b4-7918-46cb-96f4-8c4c5cb2fd31.txt
Downloading blob to
./data/quickstart2fe6c5b4-7918-46cb-96f4-8c4c5cb2fd31DOWNLOADED.txt
Before you begin the clean up process, check your data folder for the two files. You can open them and observe
that they are identical.
After you've verified the files, press the Enter key to delete the test files and finish the demo.
Next steps
In this quickstart, you learned how to upload, download, and list blobs using .NET.
To see Blob storage sample apps, continue to:
Azure Blob Storage SDK v12 .NET samples
For tutorials, samples, quick starts and other documentation, visit Azure for .NET and .NET Core developers.
To learn more about .NET Core, see Get started with .NET in 10 minutes.
Quickstart: Azure Blob storage client library v11 for
.NET
Get started with the Azure Blob Storage client library v11 for .NET. Azure Blob Storage is Microsoft's object
storage solution for the cloud. Follow steps to install the package and try out example code for basic tasks. Blob
storage is optimized for storing massive amounts of unstructured data.
NOTE
This quickstart uses a legacy version of the Azure Blob storage client library. To get started with the latest version, see
Quickstart: Azure Blob storage client library v12 for .NET.
Use the Azure Blob Storage client library for .NET to:
Create a container
Set permissions on a container
Create a blob in Azure Storage
Download the blob to your local computer
List all of the blobs in a container
Delete a container
Additional resources:
API reference documentation
Library source code
Package (NuGet)
Samples
Prerequisites
Azure subscription - create one for free
Azure Storage account - create a storage account
Current .NET Core SDK for your operating system. Be sure to get the SDK and not the runtime.
Setting up
This section walks you through preparing a project to work with the Azure Blob Storage client library for .NET.
Create the project
First, create a .NET Core application named blob-quickstart.
1. In a console window (such as cmd, PowerShell, or Bash), use the dotnet new command to create a new
console app with the name blob-quickstart. This command creates a simple "Hello World" C# project with
a single source file: Program.cs.
cd blob-quickstart
dotnet build
The expected output from the build should look something like this:
Build succeeded.
0 Warning(s)
0 Error(s)
namespace blob_quickstart
{
class Program
{
public static async Task Main()
{
Console.WriteLine("Azure Blob Storage - .NET quickstart sample\n");
await ProcessAsync();
After you add the environment variable in Windows, you must start a new instance of the command window.
Linux
export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
MacOS
export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
After you add the environment variable, restart any running programs that will need to read the environment
variable. For example, restart your development environment or editor before continuing.
Object model
Azure Blob storage is optimized for storing massive amounts of unstructured data. Unstructured data is data
that does not adhere to a particular data model or definition, such as text or binary data. Blob storage offers
three types of resources:
The storage account.
A container in the storage account
A blob in a container
The following diagram shows the relationship between these resources.
Code examples
These example code snippets show you how to perform the following with the Azure Blob storage client library
for .NET:
Authenticate the client
Create a container
Set permissions on a container
Upload blobs to a container
List the blobs in a container
Download blobs
Delete a container
Authenticate the client
The code below checks that the environment variable contains a connection string that can be parsed to create a
CloudStorageAccount object pointing to the storage account. To check that the connection string is valid, use the
TryParse method. If TryParse is successful, it initializes the storageAccount variable and returns true .
Add this code inside the ProcessAsync method:
// Retrieve the connection string for use with the application. The storage
// connection string is stored in an environment variable on the machine
// running the application called AZURE_STORAGE_CONNECTION_STRING. If the
// environment variable is created after the application is launched in a
// console or with Visual Studio, the shell or application needs to be closed
// and reloaded to take the environment variable into account.
string storageConnectionString = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");
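// Check whether the connection string can be parsed (a sketch of the TryParse check described above)
if (CloudStorageAccount.TryParse(storageConnectionString, out CloudStorageAccount storageAccount))
{
    // ADD OTHER OPERATIONS HERE
}
else
{
    // Otherwise, let the user know that they need to define the environment variable.
    Console.WriteLine(
        "A connection string has not been defined in the system environment variables. " +
        "Add an environment variable named 'AZURE_STORAGE_CONNECTION_STRING' with your storage " +
        "connection string as a value.");
}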
NOTE
To perform the rest of the operations in this article, replace // ADD OTHER OPERATIONS HERE in the code above with the
code snippets in the following sections.
Create a container
To create the container, first create an instance of the CloudBlobClient object, which points to Blob storage in
your storage account. Next, create an instance of the CloudBlobContainer object, then create the container.
In this case, the code calls the CreateAsync method to create the container. A GUID value is appended to the
container name to ensure that it is unique. In a production environment, it's often preferable to use the
CreateIfNotExistsAsync method to create a container only if it does not already exist.
IMPORTANT
Container names must be lowercase. For more information about naming containers and blobs, see Naming and
Referencing Containers, Blobs, and Metadata.
// Create the CloudBlobClient that represents the
// Blob storage endpoint for the storage account.
CloudBlobClient cloudBlobClient = storageAccount.CreateCloudBlobClient();
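// Create a container called 'quickstartblobs' with a GUID appended, then set the permissions
// so the blobs are public (a sketch; the name prefix and file path below are illustrative)
CloudBlobContainer cloudBlobContainer =
    cloudBlobClient.GetContainerReference("quickstartblobs" + Guid.NewGuid().ToString());
await cloudBlobContainer.CreateAsync();

BlobContainerPermissions permissions = new BlobContainerPermissions
{
    PublicAccess = BlobContainerPublicAccessType.Blob
};
await cloudBlobContainer.SetPermissionsAsync(permissions);

// Create a dummy file to upload (used by the upload snippet below; assumes System.IO is imported)
string localFileName = "QuickStart_" + Guid.NewGuid().ToString() + ".txt";
string sourceFile = Path.Combine(Path.GetTempPath(), localFileName);
File.WriteAllText(sourceFile, "Hello, World!");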
// Get a reference to the blob address, then upload the file to the blob.
// Use the value of localFileName for the blob name.
CloudBlockBlob cloudBlockBlob = cloudBlobContainer.GetBlockBlobReference(localFileName);
await cloudBlockBlob.UploadFromFileAsync(sourceFile);
Download blobs
Download the blob created previously to your local file system by using the DownloadToFileAsync method. The
example code adds a suffix of "_DOWNLOADED" to the blob name so that you can see both files in the local file
system.
// Download the blob to a local file, using the reference created earlier.
// Append the string "_DOWNLOADED" before the .txt extension so that you
// can see both files in MyDocuments.
string destinationFile = sourceFile.Replace(".txt", "_DOWNLOADED.txt");
Console.WriteLine("Downloading blob to {0}", destinationFile);
await cloudBlockBlob.DownloadToFileAsync(destinationFile, FileMode.Create);
Delete a container
The following code cleans up the resources the app created by deleting the entire container using CloudBlob
Container.DeleteAsync. You can also delete the local files if you like.
dotnet build
dotnet run
The output of the app is similar to the following example:
Press any key to delete the example files and example container.
When you press the Enter key, the application deletes the storage container and the files. Before you delete
them, check your MyDocuments folder for the two files. You can open them and observe that they are identical.
Copy the blob's URL from the console window and paste it into a browser to view the contents of the blob.
After you've verified the files, hit any key to finish the demo and delete the test files.
Next steps
In this quickstart, you learned how to upload, download, and list blobs using .NET.
To learn how to create a web app that uploads an image to Blob storage, continue to:
Upload and process an image
To learn more about .NET Core, see Get started with .NET in 10 minutes.
To explore a sample application that you can deploy from Visual Studio for Windows, see the .NET Photo
Gallery Web Application Sample with Azure Blob Storage.
Quickstart: Manage blobs with Java v12 SDK
In this quickstart, you learn to manage blobs by using Java. Blobs are objects that can hold large amounts of text
or binary data, including images, documents, streaming media, and archive data. You'll upload, download, and
list blobs, and you'll create and delete containers.
Additional resources:
API reference documentation
Library source code
Package (Maven)
Samples
Prerequisites
An Azure account with an active subscription. Create an account for free.
An Azure Storage account. Create a storage account.
Java Development Kit (JDK) version 8 or above.
Apache Maven.
Setting up
This section walks you through preparing a project to work with the Azure Blob Storage client library v12 for
Java.
Create the project
Create a Java application named blob-quickstart-v12.
1. In a console window (such as cmd, PowerShell, or Bash), use Maven to create a new console app with the
name blob-quickstart-v12. Type the following mvn command to create a "Hello world!" Java project.
PowerShell
mvn archetype:generate `
--define interactiveMode=n `
--define groupId=com.blobs.quickstart `
--define artifactId=blob-quickstart-v12 `
--define archetypeArtifactId=maven-archetype-quickstart `
--define archetypeVersion=1.4
2. The output from generating the project should look something like this:
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------< org.apache.maven:standalone-pom >-------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] --------------------------------[ pom ]---------------------------------
[INFO]
[INFO] >>> maven-archetype-plugin:3.1.2:generate (default-cli) > generate-sources @ standalone-pom
>>>
[INFO]
[INFO] <<< maven-archetype-plugin:3.1.2:generate (default-cli) < generate-sources @ standalone-pom
<<<
[INFO]
[INFO]
[INFO] --- maven-archetype-plugin:3.1.2:generate (default-cli) @ standalone-pom ---
[INFO] Generating project in Batch mode
[INFO] ----------------------------------------------------------------------------
[INFO] Using following parameters for creating project from Archetype: maven-archetype-quickstart:1.4
[INFO] ----------------------------------------------------------------------------
[INFO] Parameter: groupId, Value: com.blobs.quickstart
[INFO] Parameter: artifactId, Value: blob-quickstart-v12
[INFO] Parameter: version, Value: 1.0-SNAPSHOT
[INFO] Parameter: package, Value: com.blobs.quickstart
[INFO] Parameter: packageInPathFormat, Value: com/blobs/quickstart
[INFO] Parameter: version, Value: 1.0-SNAPSHOT
[INFO] Parameter: package, Value: com.blobs.quickstart
[INFO] Parameter: groupId, Value: com.blobs.quickstart
[INFO] Parameter: artifactId, Value: blob-quickstart-v12
[INFO] Project created from Archetype in dir: C:\QuickStarts\blob-quickstart-v12
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 7.056 s
[INFO] Finished at: 2019-10-23T11:09:21-07:00
[INFO] ------------------------------------------------------------------------
cd blob-quickstart-v12
4. Inside the blob-quickstart-v12 directory, create another directory called data. This is where the blob data
files will be created and stored.
mkdir data
<dependency>
<groupId>com.azure</groupId>
<artifactId>azure-storage-blob</artifactId>
<version>12.13.0</version>
</dependency>
package com.blobs.quickstart;
/**
* Azure blob storage v12 SDK quickstart
*/
import com.azure.storage.blob.*;
import com.azure.storage.blob.models.*;
import java.io.*;
After you add the environment variable in Windows, you must start a new instance of the command window.
Linux
export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
macOS
export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
Restart programs
After you add the environment variable, restart any running programs that will need to read the environment
variable. For example, restart your development environment or editor before you continue.
Object model
Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data
that does not adhere to a particular data model or definition, such as text or binary data. Blob storage offers
three types of resources:
The storage account
A container in the storage account
A blob in the container
The following diagram shows the relationship between these resources.
Code examples
These example code snippets show you how to perform the following with the Azure Blob Storage client library
for Java:
Get the connection string
Create a container
Upload blobs to a container
List the blobs in a container
Download blobs
Delete a container
Get the connection string
The code below retrieves the connection string for the storage account from the environment variable created in
the Configure your storage connection string section.
Add this code inside the Main method:
// Retrieve the connection string for use with the application. The storage
// connection string is stored in an environment variable on the machine
// running the application called AZURE_STORAGE_CONNECTION_STRING. If the environment variable
// is created after the application is launched in a console or with
// Visual Studio, the shell or application needs to be closed and reloaded
// to take the environment variable into account.
String connectStr = System.getenv("AZURE_STORAGE_CONNECTION_STRING");
Create a container
Decide on a name for the new container. The code below appends a UUID value to the container name to ensure
that it is unique.
IMPORTANT
Container names must be lowercase. For more information about naming containers and blobs, see Naming and
Referencing Containers, Blobs, and Metadata.
Next, create an instance of the BlobContainerClient class, then call the create method to actually create the
container in your storage account.
Add this code to the end of the Main method:
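// Build the service client from the connection string, then create the container
// (a sketch; the "quickstartblobs" prefix is illustrative)
BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
        .connectionString(connectStr)
        .buildClient();
String containerName = "quickstartblobs" + java.util.UUID.randomUUID();
BlobContainerClient containerClient = blobServiceClient.createBlobContainer(containerName);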
// Create a local file in the ./data/ directory for uploading and downloading
String localPath = "./data/";
String fileName = "quickstart" + java.util.UUID.randomUUID() + ".txt";
File localFile = new File(localPath + fileName);
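// Write text to the local file, then upload it to the container
// (a sketch; assumes the enclosing main method declares throws IOException)
FileWriter writer = new FileWriter(localFile, true);
writer.write("Hello, World!");
writer.close();

// Get a reference to a blob with the same name as the local file and upload the file
BlobClient blobClient = containerClient.getBlobClient(fileName);
blobClient.uploadFromFile(localPath + fileName);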
System.out.println("\nListing blobs...");
Download blobs
Download the previously created blob by calling the downloadToFile method. The example code adds a suffix of
"DOWNLOAD" to the file name so that you can see both files in local file system.
Add this code to the end of the Main method:
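// Derive the local file name for the downloaded copy; the "DOWNLOAD" suffix matches the description above
String downloadFileName = fileName.replace(".txt", "DOWNLOAD.txt");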
blobClient.downloadToFile(localPath + downloadFileName);
Delete a container
The following code cleans up the resources the app created by removing the entire container using the delete
method. It also deletes the local files created by the app.
The app pauses for user input by calling System.console().readLine() before it deletes the blob, container, and
local files. This is a good chance to verify that the resources were created correctly, before they are deleted.
Add this code to the end of the Main method:
// Clean up
System.out.println("\nPress the Enter key to begin clean up");
System.console().readLine();
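// Delete the container and the local files (a sketch; names come from the earlier snippets)
System.out.println("Deleting blob container...");
containerClient.delete();

System.out.println("Deleting the local source and downloaded files...");
localFile.delete();
new File(localPath + downloadFileName).delete();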
System.out.println("Done");
mvn compile
mvn package
Listing blobs...
quickstarta9c3a53e-ae9d-4863-8b34-f3d807992d65.txt
Downloading blob to
./data/quickstarta9c3a53e-ae9d-4863-8b34-f3d807992d65DOWNLOAD.txt
Before you begin the clean up process, check your data folder for the two files. You can open them and observe
that they are identical.
After you've verified the files, press the Enter key to delete the test files and finish the demo.
Next steps
In this quickstart, you learned how to upload, download, and list blobs using Java.
To see Blob storage sample apps, continue to:
Azure Blob Storage SDK v12 Java samples
To learn more, see the Azure SDK for Java.
For tutorials, samples, quickstarts, and other documentation, visit Azure for Java cloud developers.
Quickstart: Manage blobs with Java v8 SDK
In this quickstart, you learn to manage blobs by using Java. Blobs are objects that can hold large amounts of text
or binary data, including images, documents, streaming media, and archive data. You'll upload, download, and
list blobs. You'll also create, set permissions on, and delete containers.
NOTE
This quickstart uses a legacy version of the Azure Blob storage client library. To get started with the latest version, see
Quickstart: Manage blobs with Java v12 SDK.
Prerequisites
An Azure account with an active subscription. Create an account for free.
An Azure Storage account. Create a storage account.
An IDE that has Maven integration. This guide uses Eclipse with the "Eclipse IDE for Java Developers"
configuration.
This command clones the repository to your local git folder. To open the project, launch Eclipse and close the
Welcome screen. Select File then Open Projects from File System. Make sure Detect and configure
project natures is checked. Select Directory then navigate to where you stored the cloned repository. Inside
the cloned repository, select the blobAzureApp folder. Make sure the blobAzureApp project appears as an
Eclipse project, then select Finish .
Once the project completes importing, open AzureApp.java (located in blobQuickstart.blobAzureApp
inside of src/main/java), and replace the accountname and accountkey inside of the storageConnectionString
string. Then run the application. Specific instructions for completing these tasks are described in the following
sections.
Before you continue, check your default directory (C:\Users\<user>\AppData\Local\Temp for Windows users)
for the sample file. Copy the URL for the blob out of the console window and paste it into a browser to view the
contents of the file in Blob storage. If you compare the sample file in your directory with the contents stored in
Blob storage, you will see that they are the same.
NOTE
You can also use a tool such as the Azure Storage Explorer to view the files in Blob storage. Azure Storage Explorer is a free
cross-platform tool that allows you to access your storage account information.
After you've verified the files, press the Enter key to complete the demo and delete the test files. Now that you
know what the sample does, open the AzureApp.java file to look at the code.
IMPORTANT
Container names must be lowercase. For more information about containers, see Naming and Referencing Containers,
Blobs, and Metadata.
Create a container
In this section, you create an instance of the objects, create a new container, and then set permissions on the
container so the blobs are public and can be accessed with just a URL. The container is called
quickstartcontainer.
This example uses CreateIfNotExists because we want to create a new container each time the sample is run. In a
production environment, where you use the same container throughout an application, it's better practice to
only call CreateIfNotExists once. Alternatively, you can create the container ahead of time so you don't need to
create it in the code.
// Parse the connection string and create a blob client to interact with Blob storage
storageAccount = CloudStorageAccount.parse(storageConnectionString);
blobClient = storageAccount.createCloudBlobClient();
container = blobClient.getContainerReference("quickstartcontainer");
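// Create the container if it doesn't exist, then set the permissions so the blobs are public
// (a sketch of the steps described above)
container.createIfNotExists();
BlobContainerPermissions containerPermissions = new BlobContainerPermissions();
containerPermissions.setPublicAccess(BlobContainerPublicAccessType.CONTAINER);
container.uploadPermissions(containerPermissions);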
There are several upload methods including upload, uploadBlock, uploadFullBlob, uploadStandardBlobTier, and
uploadText which you can use with Blob storage. For example, if you have a string, you can use the UploadText
method rather than the Upload method.
Block blobs can be any type of text or binary file. Page blobs are primarily used for the VHD files that back IaaS
VMs. Use append blobs for logging, such as when you want to write to a file and then keep adding more
information. Most objects stored in Blob storage are block blobs.
List the blobs in a container
You can get a list of files in the container using CloudBlobContainer.ListBlobs. The following code retrieves the
list of blobs, then loops through them, showing the URIs of the blobs found. You can copy the URI from the
command window and paste it into a browser to view the file.
Download blobs
Download blobs to your local disk using CloudBlob.DownloadToFile.
The following code downloads the blob uploaded in a previous section, adding a suffix of "_DOWNLOADED" to
the blob name so you can see both files on local disk.
// Download blob. In most cases, you would have to retrieve the reference
// to cloudBlockBlob here. However, we created that reference earlier, and
// haven't changed the blob we're interested in, so we can reuse it.
// Here we are creating a new file to download to. Alternatively you can also pass in the path as a string
// into the downloadToFile method: blob.downloadToFile("/path/to/new/file").
downloadedFile = new File(sourceFile.getParentFile(), "downloadedFile.txt");
blob.downloadToFile(downloadedFile.getAbsolutePath());
Clean up resources
If you no longer need the blobs that you have uploaded, you can delete the entire container using
CloudBlobContainer.DeleteIfExists. This method also deletes the files in the container.
try {
if(container != null)
container.deleteIfExists();
} catch (StorageException ex) {
System.out.println(String.format("Service error. Http code: %d and error code: %s", ex.getHttpStatusCode(),
ex.getErrorCode()));
}
if(downloadedFile != null)
downloadedFile.deleteOnExit();
if(sourceFile != null)
sourceFile.deleteOnExit();
Next steps
In this article, you learned how to transfer files between a local disk and Azure Blob storage using Java. To learn
more about working with Java, continue to our GitHub source code repository.
Java API Reference Code Samples for Java
Quickstart: Manage blobs with Python v12 SDK
In this quickstart, you learn to manage blobs by using Python. Blobs are objects that can hold large amounts of
text or binary data, including images, documents, streaming media, and archive data. You'll upload, download,
and list blobs, and you'll create and delete containers.
More resources:
API reference documentation
Library source code
Package (Python Package Index)
Samples
Prerequisites
An Azure account with an active subscription. Create an account for free.
An Azure Storage account. Create a storage account.
Python 2.7 or 3.6+.
Setting up
This section walks you through preparing a project to work with the Azure Blob Storage client library v12 for
Python.
Create the project
Create a Python application named blob-quickstart-v12.
1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for the project.
mkdir blob-quickstart-v12
cd blob-quickstart-v12
3. Inside the blob-quickstart-v12 directory, create another directory called data. This directory is where the
blob data files will be created and stored.
mkdir data
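While still in the project directory, install the Azure Blob Storage client library for Python; the command
referenced below is presumably:
pip install azure-storage-blob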
This command installs the Azure Blob Storage client library for Python package and all the libraries on which it
depends. In this case, that is just the Azure core library for Python.
Set up the app framework
From the project directory:
1. Open a new text file in your code editor
2. Add import statements
3. Create the structure for the program, including basic exception handling
Here's the code:
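# Import statements (a sketch; uuid is used later to generate unique names)
import os, uuid
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient, __version__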
try:
print("Azure Blob Storage v" + __version__ + " - Python quickstart sample")
After you add the environment variable in Windows, you must start a new instance of the command window.
Linux
export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
macOS
export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
Restart programs
After you add the environment variable, restart any running programs that will need to read the environment
variable. For example, restart your development environment or editor before you continue.
Object model
Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data
that doesn't adhere to a particular data model or definition, such as text or binary data. Blob storage offers three
types of resources:
The storage account
A container in the storage account
A blob in the container
The following diagram shows the relationship between these resources.
Code examples
These example code snippets show you how to do the following tasks with the Azure Blob Storage client library
for Python:
Get the connection string
Create a container
Upload blobs to a container
List the blobs in a container
Download blobs
Delete a container
Get the connection string
The code below retrieves the storage account connection string from the environment variable created in the
Configure your storage connection string section.
Add this code inside the try block:
# Retrieve the connection string for use with the application. The storage
# connection string is stored in an environment variable on the machine
# running the application called AZURE_STORAGE_CONNECTION_STRING. If the environment variable is
# created after the application is launched in a console or with Visual Studio,
# the shell or application needs to be closed and reloaded to take the
# environment variable into account.
connect_str = os.getenv('AZURE_STORAGE_CONNECTION_STRING')
Create a container
Decide on a name for the new container. The code below appends a UUID value to the container name to ensure
that it's unique.
IMPORTANT
Container names must be lowercase. For more information about naming containers and blobs, see Naming and
Referencing Containers, Blobs, and Metadata.
Create an instance of the BlobServiceClient class by calling the from_connection_string method. Then, call the
create_container method to actually create the container in your storage account.
Add this code to the end of the try block:
# Create the BlobServiceClient object which will be used to create a container client
blob_service_client = BlobServiceClient.from_connection_string(connect_str)
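The container name and the create_container call described above do not appear in this fragment; based on that description, they would look roughly like this (the exact name format is illustrative):
# Create a unique name for the container
container_name = str(uuid.uuid4())

# Create the container
container_client = blob_service_client.create_container(container_name)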
Upload blobs to a container
The following snippet gets a BlobClient reference by calling the get_blob_client method, using the local file name as the blob name. Add this code to the end of the try block:
# Create a blob client using the local file name as the name for the blob
blob_client = blob_service_client.get_blob_client(container=container_name, blob=local_file_name)
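The file-creation and upload calls that belong with this step are not included in the fragment. A sketch follows; the ./data path and the file content are illustrative, local_file_name is assumed to be a unique file name defined earlier, and in the full sample the local file is created before the blob client above is obtained:
# Create a local file in the ./data directory to upload (illustrative content)
upload_file_path = os.path.join("./data", local_file_name)
with open(upload_file_path, "w") as file:
    file.write("Hello, World!")

print("\nUploading to Azure Storage as blob:\n\t" + local_file_name)
# Upload the created file to the blob
with open(upload_file_path, "rb") as data:
    blob_client.upload_blob(data)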
print("\nListing blobs...")
Download blobs
Download the previously created blob by calling the download_blob method. The example code adds a suffix of
"DOWNLOAD" to the file name so that you can see both files in the local file system.
Add this code to the end of the try block:
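# The download snippet is not reproduced in the fragment above; this is a sketch
# consistent with the description (the ./data folder matches the project setup)
download_file_path = os.path.join("./data", local_file_name.replace(".txt", "DOWNLOAD.txt"))
print("\nDownloading blob to \n\t" + download_file_path)
with open(download_file_path, "wb") as download_file:
    download_file.write(blob_client.download_blob().readall())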
Delete a container
The following code cleans up the resources the app created by removing the entire container using the
delete_container method. You can also delete the local files, if you like.
The app pauses for user input by calling input() before it deletes the blob, container, and local files. Verify that
the resources were created correctly before they're deleted.
Add this code to the end of the try block:
# Clean up
print("\nPress the Enter key to begin clean up")
input()
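# The deletion calls themselves are not shown in the fragment above; roughly, assuming the
# container_client from the create step and the local paths used in the earlier sketches:
# Delete the container and everything in it
blob_service_client.delete_container(container_name)
# Delete the local files created by the sample
os.remove(upload_file_path)
os.remove(download_file_path)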
print("Done")
Run the code
Navigate to the directory that contains the blob-quickstart-v12.py file, then execute the following command to run the app:
python blob-quickstart-v12.py
The output of the app is similar to the following example:
Listing blobs...
quickstartcf275796-2188-4057-b6fb-038352e35038.txt
Downloading blob to
./data/quickstartcf275796-2188-4057-b6fb-038352e35038DOWNLOAD.txt
Before you begin the cleanup process, check your data folder for the two files. You can open them and observe
that they're identical.
After you've verified the files, press the Enter key to delete the test files and finish the demo.
Next steps
In this quickstart, you learned how to upload, download, and list blobs using Python.
To see Blob storage sample apps, continue to:
Azure Blob Storage SDK v12 Python samples
To learn more, see the Azure Storage client libraries for Python.
For tutorials, samples, quickstarts, and other documentation, visit Azure for Python Developers.
Quickstart: Manage blobs with Python v2.1 SDK
11/25/2021 • 6 minutes to read • Edit Online
In this quickstart, you learn to manage blobs by using Python. Blobs are objects that can hold large amounts of
text or binary data, including images, documents, streaming media, and archive data. You'll upload, download,
and list blobs, and you'll create and delete containers.
NOTE
This quickstart uses a legacy version of the Azure Blob storage client library. To get started with the latest version, see
Quickstart: Manage blobs with Python v12 SDK.
Prerequisites
An Azure account with an active subscription. Create an account for free.
An Azure Storage account. Create a storage account.
Python.
Azure Storage SDK for Python.
To review the Python program, open the example.py file at the root of the repository. In the file, provide your storage account name and account key where the BlockBlobService object is created:
block_blob_service = BlockBlobService(
account_name='accountname', account_key='accountkey')
Then, in a console window, switch to the sample folder and run the program:
cd storage-blobs-python-quickstart
python example.py
4. Before you continue, go to your Documents folder and check for the two files.
QuickStart_<universally-unique-identifier>
QuickStart_<universally-unique-identifier>_DOWNLOADED
5. You can open them and see they're the same.
You can also use a tool like the Azure Storage Explorer. It's good for viewing the files in Blob storage.
Azure Storage Explorer is a free cross-platform tool that lets you access your storage account info.
6. After you've looked at the files, press any key to finish the sample and delete the test files.
# Create the BlockBlobService that the system uses to call the Blob service for the storage account.
block_blob_service = BlockBlobService(
account_name='accountname', account_key='accountkey')
First, you create the references to the objects used to access and manage Blob storage. These objects build on
each other, and each is used by the next one in the list.
Instantiate the BlockBlobService object, which points to the Blob service in your storage account.
Instantiate the CloudBlobContainer object, which represents the container you're accessing. The system
uses containers to organize your blobs like you use folders on your computer to organize your files.
Once you have the Cloud Blob container, instantiate the CloudBlockBlob object that points to the specific blob
that you're interested in. You can then upload, download, and copy the blob as you need.
IMPORTANT
Container names must be lowercase. For more information about container and blob names, see Naming and Referencing
Containers, Blobs, and Metadata.
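For reference, creating the container and making its blobs publicly readable with the legacy v2.1 library looks roughly like this (the container name is illustrative):
from azure.storage.blob import BlockBlobService, PublicAccess

block_blob_service = BlockBlobService(account_name='accountname', account_key='accountkey')
container_name = 'quickstartblobs'
block_blob_service.create_container(container_name)

# Make the blobs in the container publicly accessible
block_blob_service.set_container_acl(container_name, public_access=PublicAccess.Container)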
# Upload the created file, use local_file_name for the blob name.
block_blob_service.create_blob_from_path(
container_name, local_file_name, full_path_to_file)
There are several upload methods that you can use with Blob storage. For example, if you have a memory
stream, you can use the create_blob_from_stream method rather than create_blob_from_path .
List the blobs in a container
The following code creates a generator for the list_blobs method. The code loops through the list of blobs in
the container and prints their names to the console.
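The listing code itself is not shown in the fragment; with the v2.1 library it would look roughly like this:
# List the blobs in the container
print("\nList blobs in the container")
generator = block_blob_service.list_blobs(container_name)
for blob in generator:
    print("\t Blob name: " + blob.name)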
Clean up resources
If you no longer need the blobs uploaded in this quickstart, you can delete the entire container using the
delete_container method. To delete individual files instead, use the delete_blob method.
# Clean up resources. This includes the container and the temp files.
block_blob_service.delete_container(container_name)
os.remove(full_path_to_file)
os.remove(full_path_to_file2)
Next steps
In this quickstart, you learned how to transfer files between a local disk and Azure Blob storage using Python.
For more about the Storage Explorer and Blobs, see Manage Azure Blob storage resources with Storage Explorer.
Quickstart: Manage blobs with JavaScript v12 SDK
in Node.js
11/25/2021 • 8 minutes to read • Edit Online
In this quickstart, you learn to manage blobs by using Node.js. Blobs are objects that can hold large amounts of
text or binary data, including images, documents, streaming media, and archive data. You'll upload, download,
and list blobs, and you'll create and delete containers.
Additional resources:
API reference documentation
Library source code
Package (Node Package Manager)
Samples
Prerequisites
An Azure account with an active subscription. Create an account for free.
An Azure Storage account. Create a storage account.
Node.js.
Setting up
This section walks you through preparing a project to work with the Azure Blob storage client library v12 for
JavaScript.
Create the project
Create a JavaScript application named blob-quickstart-v12.
1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for the project.
mkdir blob-quickstart-v12
2. Switch to the newly created blob-quickstart-v12 directory.
cd blob-quickstart-v12
3. Create a new text file called package.json. This file defines the Node.js project. Save this file in the blob-quickstart-v12 directory. Here are the contents of the file:
{
"name": "blob-quickstart-v12",
"version": "1.0.0",
"description": "Use the @azure/storage-blob SDK version 12 to interact with Azure Blob storage",
"main": "blob-quickstart-v12.js",
"scripts": {
"start": "node blob-quickstart-v12.js"
},
"author": "Your Name",
"license": "MIT",
"dependencies": {
"@azure/storage-blob": "^12.0.0",
"@types/dotenv": "^4.0.3",
"dotenv": "^6.0.0"
}
}
You can put your own name in for the author field, if you'd like.
Install the package
While still in the blob-quickstart-v12 directory, install the Azure Blob storage client library for JavaScript
package by using the npm install command. This command reads the package.json file and installs the Azure
Blob storage client library v12 for JavaScript package and all the libraries on which it depends.
npm install
Windows
setx AZURE_STORAGE_CONNECTION_STRING "<yourconnectionstring>"
After you add the environment variable in Windows, you must start a new instance of the command window.
Linux
export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
macOS
export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
Restart programs
After you add the environment variable, restart any running programs that will need to read the environment
variable. For example, restart your development environment or editor before you continue.
Object model
Azure Blob storage is optimized for storing massive amounts of unstructured data. Unstructured data is data
that does not adhere to a particular data model or definition, such as text or binary data. Blob storage offers
three types of resources:
The storage account
A container in the storage account
A blob in the container
The following diagram shows the relationship between these resources.
Code examples
These example code snippets show you how to perform the following with the Azure Blob storage client library
for JavaScript:
Get the connection string
Create a container
Upload blobs to a container
List the blobs in a container
Download blobs
Delete a container
Get the connection string
The code below retrieves the connection string for the storage account from the environment variable created in
the Configure your storage connection string section.
Add this code inside the main function:
// Retrieve the connection string for use with the application. The storage
// connection string is stored in an environment variable on the machine
// running the application called AZURE_STORAGE_CONNECTION_STRING. If the
// environment variable is created after the application is launched in a
// console or with Visual Studio, the shell or application needs to be closed
// and reloaded to take the environment variable into account.
const AZURE_STORAGE_CONNECTION_STRING = process.env.AZURE_STORAGE_CONNECTION_STRING;
Create a container
Decide on a name for the new container. The code below appends a UUID value to the container name to ensure
that it is unique.
IMPORTANT
Container names must be lowercase. For more information about naming containers and blobs, see Naming and
Referencing Containers, Blobs, and Metadata.
Create an instance of the BlobServiceClient class by calling the fromConnectionString method. Then, call the
getContainerClient method to get a reference to a container. Finally, call create to actually create the container in
your storage account.
Add this code to the end of the main function:
// Create the BlobServiceClient object which will be used to create a container client
const blobServiceClient = BlobServiceClient.fromConnectionString(AZURE_STORAGE_CONNECTION_STRING);
console.log('\nCreating container...');
console.log('\t', containerName);
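The fragment above references containerName and then stops. The missing pieces (the unique container name, the container client, and the create call) would look roughly like this; the uuid helper is an assumption, and any unique suffix works:
const { v1: uuidv1 } = require("uuid"); // assumes the uuid package is installed

// Create a unique name for the new container
const containerName = "quickstart" + uuidv1();

// Get a reference to a container and create it in the storage account
const containerClient = blobServiceClient.getContainerClient(containerName);
const createContainerResponse = await containerClient.create();
console.log("Container was created successfully. requestId: ", createContainerResponse.requestId);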
List the blobs in a container
List the blobs in the container by calling the listBlobsFlat method on the ContainerClient. Add this code to the end of the main function:
console.log('\nListing blobs...');
Download blobs
Download the previously created blob by calling the download method. The example code includes a helper
function called streamToString , which is used to read a Node.js readable stream into a string.
Add this code to the end of the main function:
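// The download snippet is not reproduced above; this sketch assumes blockBlobClient is the
// BlockBlobClient used in the (omitted) upload step
// Get blob content from position 0 to the end; in Node.js, read it from readableStreamBody
const downloadBlockBlobResponse = await blockBlobClient.download(0);
console.log("\nDownloaded blob content...");
console.log("\t", await streamToString(downloadBlockBlobResponse.readableStreamBody));

// Helper used above: read a Node.js readable stream into a string
async function streamToString(readableStream) {
    return new Promise((resolve, reject) => {
        const chunks = [];
        readableStream.on("data", (data) => chunks.push(data.toString()));
        readableStream.on("end", () => resolve(chunks.join("")));
        readableStream.on("error", reject);
    });
}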
Delete a container
The following code cleans up the resources the app created by removing the entire container using the delete
method. You can also delete the local files, if you like.
Add this code to the end of the main function:
console.log('\nDeleting container...');
// Delete container
const deleteContainerResponse = await containerClient.delete();
console.log("Container was deleted successfully. requestId: ", deleteContainerResponse.requestId);
node blob-quickstart-v12.js
Creating container...
quickstart4a0780c0-fb72-11e9-b7b9-b387d3c488da
Listing blobs...
quickstart4a3128d0-fb72-11e9-b7b9-b387d3c488da.txt
Deleting container...
Done
Step through the code in your debugger and check your Azure portal throughout the process. Check to see that
the container is being created. You can open the blob inside the container and view the contents.
Next steps
In this quickstart, you learned how to upload, download, and list blobs using JavaScript.
For tutorials, samples, quickstarts, and other documentation, visit:
Azure for JavaScript developer center
To learn how to deploy a web app that uses Azure Blob storage, see Tutorial: Upload image data in the cloud
with Azure Storage
To see Blob storage sample apps, continue to Azure Blob storage client library v12 JavaScript samples.
To learn more, see the Azure Blob storage client library for JavaScript.
Quickstart: Manage blobs with JavaScript v10 SDK
in Node.js
11/25/2021 • 9 minutes to read • Edit Online
In this quickstart, you learn to manage blobs by using Node.js. Blobs are objects that can hold large amounts of
text or binary data, including images, documents, streaming media, and archive data. You'll upload, download,
list, and delete blobs, and you'll manage containers.
NOTE
This quickstart uses a legacy version of the Azure Blob storage client library. To get started with the latest version, see
Quickstart: Manage blobs with JavaScript v12 SDK in Node.js.
Prerequisites
An Azure account with an active subscription. Create an account for free.
An Azure Storage account. Create a storage account.
Node.js.
cd azure-storage-js-v10-quickstart
npm install
npm start
The output from the app will be similar to the following example:
If you're using a new storage account for this quickstart, then you may only see the demo container listed under
the label "Containers:".
const {
Aborter,
BlobURL,
BlockBlobURL,
ContainerURL,
ServiceURL,
SharedKeyCredential,
StorageURL,
uploadStreamToBlockBlob
} = require('@azure/storage-blob');
Credentials are read from environment variables based on the appropriate context.
The dotenv module loads environment variables when running the app locally for debugging. Values are
defined in a file named .env and loaded into the current execution context. In production, the server
configuration provides these values, which is why this code only runs when the script is not running under a
"production" environment.
The next block of modules is imported to help interface with the file system.
const fs = require('fs');
const path = require('path');
The purpose of these modules is as follows:
fs is the native Node.js module used to work with the file system
path is required to determine the absolute path of the file, which is used when uploading a file to Blob
storage
Next, environment variable values are read and set aside in constants.
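Those constants are not reproduced here; based on the description, they would look roughly like this (the exact variable names are assumptions):
const STORAGE_ACCOUNT_NAME = process.env.AZURE_STORAGE_ACCOUNT_NAME;
const ACCOUNT_ACCESS_KEY = process.env.AZURE_STORAGE_ACCOUNT_ACCESS_KEY;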
The next set of constants helps to reveal the intent of file size calculations during upload operations.
Requests made by the API can be set to time out after a given interval. The Aborter class is responsible for
managing how requests are timed out, and the following constant is used to define the timeouts used in this
sample.
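Neither block of constants is shown in this fragment; based on the descriptions, and on the FOUR_MEGABYTES and Aborter timeout values used later in the sample, they would look roughly like this:
const ONE_MEGABYTE = 1024 * 1024;
const FOUR_MEGABYTES = 4 * ONE_MEGABYTE;
const ONE_MINUTE = 60 * 1000;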
Calling code
To support JavaScript's async/await syntax, all the calling code is wrapped in a function named execute. Then
execute is called and handled as a promise.
All of the following code runs inside the execute function where the // commands... comment is placed.
First, the relevant variables are declared to assign names, sample content and to point to the local file to upload
to Blob storage.
Account credentials are used to create a pipeline, which is responsible for managing how requests are sent to
the REST API. Pipelines are thread-safe and specify logic for retry policies, logging, HTTP response
deserialization rules, and more.
The containerURL and blockBlobURL variables are reused throughout the sample to act on the storage account.
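The pipeline and URL objects themselves are not shown here. A sketch using the classes imported earlier follows; containerName and blobName are assumed to be declared earlier in the sample, and the account constants come from the sketch above:
// Build a request pipeline from the shared key credential
const credentials = new SharedKeyCredential(STORAGE_ACCOUNT_NAME, ACCOUNT_ACCESS_KEY);
const pipeline = StorageURL.newPipeline(credentials);
const serviceURL = new ServiceURL(`https://${STORAGE_ACCOUNT_NAME}.blob.core.windows.net`, pipeline);

// URL objects for the container and the block blob the sample works with
const containerURL = ContainerURL.fromServiceURL(serviceURL, containerName);
const blockBlobURL = BlockBlobURL.fromContainerURL(containerURL, blobName);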
At this point, the container doesn't exist in the storage account. The instance of ContainerURL represents a URL
that you can act upon. By using this instance, you can create and delete the container. The location of this
container equates to a location such as this:
https://<ACCOUNT_NAME>.blob.core.windows.net/demo
The blockBlobURL is used to manage individual blobs, allowing you to upload, download, and delete blob
content. The URL represented here is similar to this location:
https://<ACCOUNT_NAME>.blob.core.windows.net/demo/quickstart.txt
As with the container, the block blob doesn't exist yet. The blockBlobURL variable is used later to create the blob
by uploading content.
Using the Aborter class
Requests made by the API can be set to time out after a given interval. The Aborter class is responsible for
managing how requests are timed out. The following code creates a context where a set of requests is given 30
minutes to execute.
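The statement itself is not reproduced here; using the ONE_MINUTE constant from the earlier sketch, it would be:
const aborter = Aborter.timeout(30 * ONE_MINUTE);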
await containerURL.create(aborter);
console.log(`Container: "${containerName}" is created`);
console.log("Containers:");
await showContainerNames(serviceURL, aborter);
The showContainerNames function uses the listContainersSegment method to request batches of container
names from the storage account.
do {
const listContainersResponse = await serviceURL.listContainersSegment(aborter, marker);
marker = listContainersResponse.nextMarker;
for(let container of listContainersResponse.containerItems) {
console.log(` - ${ container.name }`);
}
} while (marker);
}
When the response is returned, then the containerItems are iterated to log the name to the console.
Upload text
To upload text to the blob, use the upload method.
Here the text and its length are passed into the method.
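The call is not reproduced here; with the aborter and the content variable assumed from earlier in the sample, it would look like this:
await blockBlobURL.upload(aborter, content, content.length);
console.log(`Blob "${blobName}" is uploaded`);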
Upload a local file
To upload a local file to the container, you need a container URL and the path to the file.
The uploadLocalFile function calls the uploadFileToBlockBlob function, which takes the file path and an instance
of the destination of the block blob as arguments.
filePath = path.resolve(filePath);
Upload a stream
Uploading streams is also supported. This sample opens a local file as a stream to pass to the upload method.
await uploadStream(containerURL, localFilePath, aborter);
console.log(`Local file "${localFilePath}" is uploaded as a stream`);
The uploadStream function calls uploadStreamToBlockBlob to upload the stream to the storage container.
const uploadOptions = {
bufferSize: FOUR_MEGABYTES,
maxBuffers: 5,
};
During an upload, uploadStreamToBlockBlob allocates buffers to cache data from the stream in case a retry is
necessary. The maxBuffers value designates at most how many buffers are used as each buffer creates a
separate upload request. Ideally, more buffers equate to higher speeds, but at the cost of higher memory usage.
The upload speed plateaus when the number of buffers is high enough that the bottleneck transitions to the
network or disk instead of the client.
Show blob names
Just as accounts can contain many containers, each container can potentially contain a vast number of blobs.
Access to each blob in a container is available via an instance of the ContainerURL class.
The function showBlobNames calls listBlobFlatSegment to request batches of blobs from the container.
do {
const listBlobsResponse = await containerURL.listBlobFlatSegment(Aborter.none, marker);
marker = listBlobsResponse.nextMarker;
for (const blob of listBlobsResponse.segment.blobItems) {
console.log(` - ${ blob.name }`);
}
} while (marker);
}
Download a blob
Once a blob is created, you can download the contents by using the download method.
const downloadResponse = await blockBlobURL.download(aborter, 0);
const downloadedContent = await streamToString(downloadResponse.readableStreamBody);
console.log(`Downloaded blob content: "${downloadedContent}"`);
The response is returned as a stream. In this example, the stream is converted to a string by using the following
streamToString helper function.
Delete a blob
The delete method from a BlockBlobURL instance deletes a blob from the container.
await blockBlobURL.delete(aborter)
console.log(`Block blob "${blobName}" is deleted`);
Delete a container
The delete method from a ContainerURL instance deletes a container from the storage account.
await containerURL.delete(aborter);
console.log(`Container "${containerName}" is deleted`);
Clean up resources
All data written to the storage account is automatically deleted at the end of the code sample.
Next steps
This quickstart demonstrates how to manage blobs and containers in Azure Blob storage using Node.js. To learn
more about working with this SDK, refer to the GitHub repository.
Azure Storage v10 SDK for JavaScript repository Azure Storage JavaScript API Reference
Quickstart: Manage blobs with JavaScript v12 SDK
in a browser
11/25/2021 • 12 minutes to read • Edit Online
Azure Blob storage is optimized for storing large amounts of unstructured data. Blobs are objects that can hold
text or binary data, including images, documents, streaming media, and archive data. In this quickstart, you learn
to manage blobs by using JavaScript in a browser. You'll upload and list blobs, and you'll create and delete
containers.
Additional resources:
API reference documentation
Library source code
Package (npm)
Samples
Prerequisites
An Azure account with an active subscription
An Azure Storage account
Node.js
Microsoft Visual Studio Code
A Visual Studio Code extension for browser debugging, such as:
Debugger for Microsoft Edge
Debugger for Chrome
Debugger for Firefox
Object model
Blob storage offers three types of resources:
The storage account
A container in the storage account
A blob in the container
The following diagram shows the relationship between these resources.
In this quickstart, you'll use the following JavaScript classes to interact with these resources:
BlobServiceClient: The BlobServiceClient class allows you to manipulate Azure Storage resources and blob
containers.
ContainerClient: The ContainerClient class allows you to manipulate Azure Storage containers and their
blobs.
BlockBlobClient: The BlockBlobClient class allows you to manipulate Azure Storage blobs.
Setting up
This section walks you through preparing a project to work with the Azure Blob storage client library v12 for
JavaScript.
Create a CORS rule
Before your web application can access blob storage from the client, you must configure your account to enable
cross-origin resource sharing, or CORS.
In the Azure portal, select your storage account. To define a new CORS rule, navigate to the Settings section
and select CORS . For this quickstart, you create an open CORS rule:
The following table describes each CORS setting and explains the values used to define the rule.
Allowed methods: DELETE, GET, HEAD, MERGE, POST, OPTIONS, and PUT. This setting lists the HTTP verbs allowed to
execute against the storage account. For the purposes of this quickstart, select all available options.
After you fill in the fields with the values from this table, click the Save button.
IMPORTANT
Ensure any settings you use in production expose the minimum amount of access necessary to your storage account to
maintain secure access. The CORS settings described here are appropriate for a quickstart as it defines a lenient security
policy. These settings, however, are not recommended for a real-world context.
npm init -y
The Azure SDK is composed of many separate packages. You can choose which packages you need based on the
services you intend to use. Run the following npm command in the terminal window to install the
@azure/storage-blob package:
npm install @azure/storage-blob
In Visual Studio Code, open the package.json file and add a browserslist between the license and
dependencies entries. This browserslist targets the latest version of three popular browsers. The full
package.json file should now look like this:
{
"name": "azure-blobs-javascript",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC",
"browserslist": [
"last 1 Edge version",
"last 1 Chrome version",
"last 1 Firefox version"
],
"dependencies": {
"@azure/storage-blob": "^12.1.1"
}
}
// index.js
const { BlobServiceClient } = require("@azure/storage-blob");
// Now do something interesting with BlobServiceClient
<body>
<button id="create-container-button">Create container</button>
<button id="delete-container-button">Delete container</button>
<button id="select-button">Select and upload files</button>
<input type="file" id="file-input" multiple style="display: none;" />
<button id="list-button">List files</button>
<button id="delete-button">Delete selected files</button>
<p><b>Status:</b></p>
<p id="status" style="height:160px; width: 593px; overflow: scroll;" />
<p><b>Files:</b></p>
<select id="file-list" multiple style="height:222px; width: 593px; overflow: scroll;" />
</body>
<script src="./index.js"></script>
</html>
Code examples
The example code shows you how to accomplish the following tasks with the Azure Blob storage client library
for JavaScript:
Declare fields for UI elements
Add your storage account info
Create client objects
Create and delete a storage container
List blobs
Upload blobs
Delete blobs
You'll run the code after you add all the snippets to the index.js file.
Declare fields for UI elements
Add the following code to the end of the index.js file.
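// The field declarations are not reproduced above; they mirror the element IDs in the HTML shown earlier
const createContainerButton = document.getElementById("create-container-button");
const deleteContainerButton = document.getElementById("delete-container-button");
const selectButton = document.getElementById("select-button");
const fileInput = document.getElementById("file-input");
const listButton = document.getElementById("list-button");
const deleteButton = document.getElementById("delete-button");
const status = document.getElementById("status");
const fileList = document.getElementById("file-list");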
createContainerButton.addEventListener("click", createContainer);
deleteContainerButton.addEventListener("click", deleteContainer);
listButton.addEventListener("click", listFiles);
deleteButton.addEventListener("click", deleteFiles);
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"type": "edge",
"request": "launch",
"name": "Launch Edge against localhost",
"url": "http://localhost:1234/index.html",
"webRoot": "${workspaceFolder}"
}
]
}
After updating, save the launch.json file. This configuration tells Visual Studio Code which browser to open and
which URL to load.
Launch the web server
To launch the local development web server, select View > Terminal to open a console window inside Visual
Studio Code, then enter the following command.
parcel index.html
Parcel bundles your code and starts a local development server for your page at
http://localhost:1234/index.html . Changes you make to index.js will automatically be built and reflected on the
development server whenever you save the file.
If you receive a message that says configured port 1234 could not be used, you can change the port by
running the command parcel -p <port#> index.html . In the launch.json file, update the port in the URL path to
match.
Start debugging
Run the page in the debugger and get a feel for how blob storage works. If any errors occur, the Status pane on
the web page will display the error message received.
To open index.html in the browser with the Visual Studio Code debugger attached, select Run > Start
Debugging or press F5 in Visual Studio Code.
Use the web app
In the Azure portal, you can verify the results of the API calls as you follow the steps below.
Step 1 - Create a container
1. In the web app, select Create container . The status indicates that a container was created.
2. To verify in the Azure portal, select your storage account. Under Blob service, select Containers. Verify that
the new container appears. (You may need to select Refresh .)
Step 2 - Upload a blob to the container
1. On your local computer, create and save a test file, such as test.txt.
2. In the web app, click Select and upload files .
3. Browse to your test file, and then select Open . The status indicates that the file was uploaded, and the file list
was retrieved.
4. In the Azure portal, select the name of the new container that you created earlier. Verify that the test file
appears.
Step 3 - Delete the blob
1. In the web app, under Files , select the test file.
2. Select Delete selected files . The status indicates that the file was deleted and that the container contains no
files.
3. In the Azure portal, select Refresh . Verify that you see No blobs found .
Step 4 - Delete the container
1. In the web app, select Delete container . The status indicates that the container was deleted.
2. In the Azure portal, select the <account-name> | Containers link at the top-left of the portal pane.
3. Select Refresh . The new container disappears.
4. Close the web app.
Clean up resources
Click on the Terminal console in Visual Studio Code and press CTRL+C to stop the web server.
To clean up the resources created during this quickstart, go to the Azure portal and delete the resource group
you created in the Prerequisites section.
Next steps
In this quickstart, you learned how to upload, list, and delete blobs using JavaScript. You also learned how to
create and delete a blob storage container.
For tutorials, samples, quickstarts, and other documentation, visit:
Azure for JavaScript documentation
To learn more, see the Azure Blob storage client library for JavaScript.
To see Blob storage sample apps, continue to Azure Blob storage client library v12 JavaScript samples.
Quickstart: Manage blobs with JavaScript v10 SDK
in browser
11/25/2021 • 12 minutes to read • Edit Online
In this quickstart, you learn to manage blobs by using JavaScript code running entirely in the browser. Blobs are
objects that can hold large amounts of text or binary data, including images, documents, streaming media, and
archive data. You'll use required security measures to ensure protected access to your blob storage account.
NOTE
This quickstart uses a legacy version of the Azure Blob storage client library. To get started with the latest version, see
Quickstart: Manage blobs with JavaScript v12 SDK in a browser.
Prerequisites
An Azure account with an active subscription. Create an account for free.
An Azure Storage account. Create a storage account.
A local web server. This article uses Node.js to open a basic server.
Visual Studio Code.
A VS Code extension for browser debugging, such as Debugger for Chrome or Debugger for Microsoft Edge.
Allowed methods: DELETE, GET, HEAD, MERGE, POST, OPTIONS, and PUT. This setting lists the HTTP verbs allowed to
execute against the storage account. For the purposes of this quickstart, select all available options.
Next, you use the Azure cloud shell to create a security token.
Use the following CLI command, with actual values for each placeholder, to generate a SAS that you can use in
your JavaScript code.
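The command itself is not reproduced above. A sketch using az storage account generate-sas follows; the flag set, expiry date, and permission letters are assumptions, so confirm them with az storage account generate-sas --help for your CLI version:
az storage account generate-sas \
    --account-name <account-name> \
    --account-key <account-key> \
    --expiry 2022-12-31 \
    --https-only \
    --permissions acdlrw \
    --resource-types sco \
    --services b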
You may find the series of values after each parameter a bit cryptic. These parameter values are taken from the
first letter of their respective permission. The following table explains where the values come from:
Parameter | Value | Description
Now that the SAS is generated, copy the return value and save it somewhere for use in an upcoming step. If you
generated your SAS using a method other than the Azure CLI, you will need to remove the initial ? if it is
present. This character is a URL separator that is already provided in the URL template later in this topic where
the SAS is used.
IMPORTANT
In production, always pass SAS tokens using TLS. Also, SAS tokens should be generated on the server and sent to the
HTML page in order to pass them back to Azure Blob Storage. One approach you may consider is to use a serverless function to
generate SAS tokens. The Azure Portal includes function templates that feature the ability to generate a SAS with a
JavaScript function.
<!DOCTYPE html>
<html>
<body>
<button id="create-container-button">Create container</button>
<button id="delete-container-button">Delete container</button>
<button id="select-button">Select and upload files</button>
<input type="file" id="file-input" multiple style="display: none;" />
<button id="list-button">List files</button>
<button id="delete-button">Delete selected files</button>
<p><b>Status:</b></p>
<p id="status" style="height:160px; width: 593px; overflow: scroll;" />
<p><b>Files:</b></p>
<select id="file-list" multiple style="height:222px; width: 593px; overflow: scroll;" />
</body>
</html>
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"type": "chrome",
"request": "launch",
"name": "Launch Chrome against localhost",
"url": "http://localhost:8080/index.html",
"webRoot": "${workspaceFolder}"
}
]
}
This configuration tells VS Code which browser to launch and which URL to load.
Launch the web server
To launch the local Node.js web server, select View > Terminal to open a console window inside VS Code, then
enter the following command.
npx http-server
This command will install the http-server package and launch the server, making the current folder available
through default URLs including the one indicated in the previous step.
Start debugging
To launch index.html in the browser with the VS Code debugger attached, select Debug > Start Debugging or
press F5 in VS Code.
The UI displayed doesn't do anything yet, but you'll add JavaScript code in the following section to implement
each function shown. You can then set breakpoints and interact with the debugger when it's paused on your
code.
When you make changes to index.html, be sure to reload the page to see the changes in the browser. In VS
Code, you can also select Debug > Restart Debugging or press CTRL + SHIFT + F5.
Add the blob storage client library
To enable calls to the blob storage API, first Download the Azure Storage SDK for JavaScript - Blob client library,
extract the contents of the zip, and place the azure-storage-blob.js file in the azure-blobs-javascript folder.
Next, paste the following HTML into index.html after the </body> closing tag, replacing the placeholder
comment.
<script>
// You'll add code here in the following sections.
</script>
This code adds a reference to the script file and provides a place for your own JavaScript code. For the purposes
of this quickstart, we're using the azure-storage-blob.js script file so that you can open it in VS Code, read its
contents, and set breakpoints. In production, you should use the more compact azure-storage.blob.min.js file
that is also provided in the zip file.
You can find out more about each blob storage function in the reference documentation. Note that some of the
functions in the SDK are only available in Node.js or only available in the browser.
The code in azure-storage-blob.js exports a global variable called azblob , which you'll use in your JavaScript
code to access the blob storage APIs.
Add the initial JavaScript code
Next, paste the following code into the <script> element shown in the previous code block, replacing the
placeholder comment.
This code creates fields for each HTML element that the following code will use, and implements a reportStatus
function to display output.
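That code is not included in this fragment. The reportStatus helper referenced by the later snippets would look roughly like this; the element fields follow the same getElementById pattern as in the v12 quickstart above:
const status = document.getElementById("status");
const fileList = document.getElementById("file-list");
// ...the remaining button and input fields are declared the same way...

// Helper used throughout the sample to display output on the page
const reportStatus = message => {
    status.innerHTML += `${message}<br/>`;
    status.scrollTop = status.scrollHeight;
};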
In the following sections, add each new block of JavaScript code after the previous block.
Add your storage account info
Next, add code to access your storage account, replacing the placeholders with your account name and the SAS
you generated in a previous step.
This code uses your account info and SAS to create a ContainerURL instance, which is useful for creating and
manipulating a storage container.
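The snippet itself is not shown here; a sketch of how the ContainerURL might be built from the account name and SAS (the container name is illustrative):
const accountName = "<ACCOUNT_NAME>";   // your storage account name
const sasString = "<SAS_STRING>";       // the SAS generated earlier, without a leading '?'
const containerName = "testcontainer";  // illustrative container name

const containerURL = new azblob.ContainerURL(
    `https://${accountName}.blob.core.windows.net/${containerName}?${sasString}`,
    azblob.StorageURL.newPipeline(new azblob.AnonymousCredential()));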
Create and delete a storage container
Next, add code to create and delete the storage container when you press the corresponding button.
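The createContainer and deleteContainer handlers wired up below are not included in the fragment; a sketch consistent with the description:
const createContainer = async () => {
    try {
        reportStatus(`Creating container "${containerName}"...`);
        await containerURL.create(azblob.Aborter.none);
        reportStatus(`Done.`);
    } catch (error) {
        reportStatus(error.body.message);
    }
};

const deleteContainer = async () => {
    try {
        reportStatus(`Deleting container "${containerName}"...`);
        await containerURL.delete(azblob.Aborter.none);
        reportStatus(`Done.`);
    } catch (error) {
        reportStatus(error.body.message);
    }
};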
createContainerButton.addEventListener("click", createContainer);
deleteContainerButton.addEventListener("click", deleteContainer);
This code calls the ContainerURL create and delete functions without using an Aborter instance. To keep things
simple for this quickstart, this code assumes that your storage account has been created and is enabled. In
production code, use an Aborter instance to add timeout functionality.
List blobs
Next, add code to list the contents of the storage container when you press the List files button.
const listFiles = async () => {
fileList.size = 0;
fileList.innerHTML = "";
try {
reportStatus("Retrieving file list...");
let marker = undefined;
do {
const listBlobsResponse = await containerURL.listBlobFlatSegment(
azblob.Aborter.none, marker);
marker = listBlobsResponse.nextMarker;
const items = listBlobsResponse.segment.blobItems;
for (const blob of items) {
fileList.size += 1;
fileList.innerHTML += `<option>${blob.name}</option>`;
}
} while (marker);
if (fileList.size > 0) {
reportStatus("Done.");
} else {
reportStatus("The container does not contain any files.");
}
} catch (error) {
reportStatus(error.body.message);
}
};
listButton.addEventListener("click", listFiles);
This code calls the ContainerURL.listBlobFlatSegment function in a loop to ensure that all segments are
retrieved. For each segment, it loops over the list of blob items it contains and updates the Files list.
Upload blobs
Next, add code to upload files to the storage container when you press the Select and upload files button.
This code connects the Select and upload files button to the hidden file-input element. In this way, the
button click event triggers the file input click event and displays the file picker. After you select files and
close the dialog box, the input event occurs and the uploadFiles function is called. This function calls the
browser-only uploadBrowserDataToBlockBlob function for each file you selected. Each call returns a Promise,
which is added to a list so that they can all be awaited at once, causing the files to upload in parallel.
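The upload code itself is not reproduced above; a sketch that matches the description, calling uploadBrowserDataToBlockBlob for each selected file and awaiting all the uploads together:
const uploadFiles = async () => {
    try {
        reportStatus("Uploading files...");
        const promises = [];
        for (const file of fileInput.files) {
            const blockBlobURL = azblob.BlockBlobURL.fromContainerURL(containerURL, file.name);
            promises.push(azblob.uploadBrowserDataToBlockBlob(azblob.Aborter.none, file, blockBlobURL));
        }
        await Promise.all(promises);
        reportStatus("Done.");
        listFiles();
    } catch (error) {
        reportStatus(error.body.message);
    }
};

selectButton.addEventListener("click", () => fileInput.click());
fileInput.addEventListener("input", uploadFiles);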
Delete blobs
Next, add code to delete files from the storage container when you press the Delete selected files button.
const deleteFiles = async () => {
try {
if (fileList.selectedOptions.length > 0) {
reportStatus("Deleting files...");
for (const option of fileList.selectedOptions) {
const blobURL = azblob.BlobURL.fromContainerURL(containerURL, option.text);
await blobURL.delete(azblob.Aborter.none);
}
reportStatus("Done.");
listFiles();
} else {
reportStatus("No files selected.");
}
} catch (error) {
reportStatus(error.body.message);
}
};
deleteButton.addEventListener("click", deleteFiles);
This code calls the BlobURL.delete function to remove each file selected in the list. It then calls the listFiles
function shown earlier to refresh the contents of the Files list.
Run and test the web application
At this point, you can launch the page and experiment to get a feel for how blob storage works. If any errors
occur (for example, when you try to list files before you've created the container), the Status pane will display
the error message received. You can also set breakpoints in the JavaScript code to examine the values returned
by the storage APIs.
Clean up resources
To clean up the resources created during this quickstart, go to the Azure portal and delete the resource group
you created in the Prerequisites section.
Next steps
In this quickstart, you've created a simple website that accesses blob storage from browser-based JavaScript. To
learn how you can host a website itself on blob storage, continue to the following tutorial:
Host a static website on Blob Storage
Quickstart: Azure Blob Storage client library v12 for
C++
11/25/2021 • 6 minutes to read • Edit Online
Get started with the Azure Blob Storage client library v12 for C++. Azure Blob Storage is Microsoft's object
storage solution for the cloud. Follow steps to install the package and try out example code for basic tasks. Blob
Storage is optimized for storing massive amounts of unstructured data.
Use the Azure Blob Storage client library v12 for C++ to:
Create a container
Upload a blob to Azure Storage
List all of the blobs in a container
Download the blob to your local computer
Delete a container
Resources:
API reference documentation
Library source code
Samples
Prerequisites
Azure subscription
Azure storage account
C++ compiler
CMake
Vcpkg - C and C++ package manager
Setting up
This section walks you through preparing a project to work with the Azure Blob Storage client library v12 for
C++.
Install the packages
The vcpkg install command will install the Azure Storage Blobs SDK for C++ and necessary dependencies:
For more information, visit GitHub to acquire and build the Azure SDK for C++.
Create the project
In Visual Studio, create a new C++ console application for Windows called BlobQuickstartV12.
Copy your credentials from the Azure portal
When the sample application makes a request to Azure Storage, it must be authorized. To authorize a request,
add your storage account credentials to the application as a connection string. To view your storage account
credentials, follow these steps:
1. Sign in to the Azure portal.
2. Locate your storage account.
3. In the storage account menu pane, under Security + networking , select Access keys . Here, you can
view the account access keys and the complete connection string for each key.
Windows
setx AZURE_STORAGE_CONNECTION_STRING "<yourconnectionstring>"
After you add the environment variable in Windows, you must start a new instance of the command window.
Linux
export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
macOS
export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
Restart programs
After you add the environment variable, restart any running programs that will need to read the environment
variable. For example, restart your development environment or editor before you continue.
Object model
Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data
that doesn't adhere to a particular data model or definition, such as text or binary data. Blob Storage offers three
types of resources:
The storage account
A container in the storage account
A blob in the container
The following diagram shows the relationship between these resources.
Code examples
These example code snippets show you how to do the following tasks with the Azure Blob Storage client library
for C++:
Add include files
Get the connection string
Create a container
Upload blobs to a container
List the blobs in a container
Download blobs
Delete a container
Add include files
From the project directory:
1. Open the BlobQuickstartV12.sln solution file in Visual Studio
2. Inside Visual Studio, open the BlobQuickstartV12.cpp source file
3. Remove any code inside main that was autogenerated
4. Add #include statements
#include <stdlib.h>
#include <iostream>
#include <azure/storage/blobs.hpp>
// Retrieve the connection string for use with the application. The storage
// connection string is stored in an environment variable on the machine
// running the application called AZURE_STORAGE_CONNECTION_STRING.
// Note that _MSC_VER is set when using MSVC compiler.
static const char* AZURE_STORAGE_CONNECTION_STRING = "AZURE_STORAGE_CONNECTION_STRING";
#if !defined(_MSC_VER)
const char* connectionString = std::getenv(AZURE_STORAGE_CONNECTION_STRING);
#else
// Use getenv_s for MSVC
size_t requiredSize;
getenv_s(&requiredSize, NULL, NULL, AZURE_STORAGE_CONNECTION_STRING);
if (requiredSize == 0) {
throw std::runtime_error("missing connection string from env.");
}
std::vector<char> value(requiredSize);
getenv_s(&requiredSize, value.data(), value.size(), AZURE_STORAGE_CONNECTION_STRING);
std::string connectionStringStr = std::string(value.begin(), value.end());
const char* connectionString = connectionStringStr.c_str();
#endif
Create a container
Create an instance of the BlobContainerClient class by calling the CreateFromConnectionString function. Then
call CreateIfNotExists to create the actual container in your storage account.
IMPORTANT
Container names must be lowercase. For more information about naming containers and blobs, see Naming and
Referencing Containers, Blobs, and Metadata.
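The construction of containerClient is not shown in the snippet that follows; it would come first in main() and look roughly like this (the container name is illustrative):
std::string containerName = "myblobcontainer";
auto containerClient = Azure::Storage::Blobs::BlobContainerClient::CreateFromConnectionString(connectionString, containerName);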
Add this code to the end of main() :
// Create the container. This will do nothing if the container already exists.
std::cout << "Creating container: " << containerName << std::endl;
containerClient.CreateIfNotExists();
Download blobs
Get the properties of the uploaded blob. Then, declare and resize a new std::vector<uint8_t> object by using
the properties of the uploaded blob. Download the previously created blob into the new std::vector<uint8_t>
object by calling the DownloadTo function in the BlobClient base class. Finally, display the downloaded blob data.
Add this code to the end of main() :
auto properties = blobClient.GetProperties().Value;
std::vector<uint8_t> downloadedBlob(properties.BlobSize);
blobClient.DownloadTo(downloadedBlob.data(), downloadedBlob.size());
std::cout << "Downloaded blob contents: " << std::string(downloadedBlob.begin(), downloadedBlob.end()) <<
std::endl;
Delete a Blob
The following code deletes the blob from the Azure Blob Storage container by calling the BlobClient.Delete
function.
Delete a container
The following code cleans up the resources the app created by deleting the entire container by using
BlobContainerClient.Delete.
Add this code to the end of main() :
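// The deletion snippet is not reproduced above; roughly, using the containerClient created earlier:
std::cout << "Deleting container: " << containerName << std::endl;
containerClient.Delete();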
Next steps
In this quickstart, you learned how to upload, download, and list blobs using C++. You also learned how to
create and delete an Azure Blob Storage container.
To see a C++ Blob Storage sample, continue to:
Azure Blob Storage SDK v12 for C++ sample
Quickstart: Upload, download, and list blobs using
Go
11/25/2021 • 8 minutes to read • Edit Online
In this quickstart, you learn how to use the Go programming language to upload, download, and list block blobs
in a container in Azure Blob storage.
Prerequisites
To access Azure Storage, you'll need an Azure subscription. If you don't already have a subscription, create a free
account before you begin.
All access to Azure Storage takes place through a storage account. For this quickstart, create a storage account
using the Azure portal, Azure PowerShell, or Azure CLI. For help creating a storage account, see Create a storage
account.
Make sure you have the following additional prerequisites installed:
Go 1.8 or above
Azure Storage Blob SDK for Go, using the following command:
go get -u github.com/Azure/azure-storage-blob-go/azblob
NOTE
Make sure that you capitalize Azure in the URL to avoid case-related import problems when working with the
SDK. Also capitalize Azure in your import statements.
This command clones the repository to your local git folder. To open the Go sample for Blob storage, look for
storage-quickstart.go file.
export AZURE_STORAGE_ACCOUNT="<youraccountname>"
export AZURE_STORAGE_ACCESS_KEY="<youraccountkey>"
The following output is an example of the output returned when running the application:
When you press the key to continue, the sample program deletes the storage container and the files.
TIP
You can also use a tool such as the Azure Storage Explorer to view the files in Blob storage. Azure Storage Explorer is a free
cross-platform tool that allows you to access your storage account information.
IMPORTANT
Container names must be lowercase. See Naming and Referencing Containers, Blobs, and Metadata for more information
about container and blob names.
In this section, you create a new container. The container is called quickstartblobs-[random string].
// From the Azure portal, get your storage account name and key and set environment variables.
accountName, accountKey := os.Getenv("AZURE_STORAGE_ACCOUNT"), os.Getenv("AZURE_STORAGE_ACCESS_KEY")
if len(accountName) == 0 || len(accountKey) == 0 {
	log.Fatal("Either the AZURE_STORAGE_ACCOUNT or AZURE_STORAGE_ACCESS_KEY environment variable is not set")
}
// Create a default request pipeline using your storage account name and account key.
credential, err := azblob.NewSharedKeyCredential(accountName, accountKey)
if err != nil {
log.Fatal("Invalid credentials with error: " + err.Error())
}
p := azblob.NewPipeline(credential, azblob.PipelineOptions{})
// From the Azure portal, get your storage account blob service URL endpoint.
URL, _ := url.Parse(
fmt.Sprintf("https://%s.blob.core.windows.net/%s", accountName, containerName))
// Create a ContainerURL object that wraps the container URL and a request
// pipeline to make requests.
containerURL := azblob.NewContainerURL(*URL, p)
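The container-creation call is not included in this fragment. A sketch follows; it requires the standard context package, and handleErrors is the sample's error helper used below:
// Create the container on the service (a background context is used for simplicity)
ctx := context.Background()
fmt.Printf("Creating a container named %s\n", containerName)
_, err = containerURL.Create(ctx, azblob.Metadata{}, azblob.PublicAccessNone)
handleErrors(err)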
// You can use the low-level Upload (PutBlob) API to upload files. Low-level APIs are simple wrappers for the Azure Storage REST APIs.
// Note that Upload can upload up to 256MB data in one shot. Details: https://docs.microsoft.com/rest/api/storageservices/put-blob
// To upload more than 256MB, use StageBlock (PutBlock) and CommitBlockList (PutBlockList) functions.
// The following is commented out intentionally because we will instead use the UploadFileToBlockBlob API to upload the blob.
// _, err = blobURL.Upload(ctx, file, azblob.BlobHTTPHeaders{ContentType: "text/plain"}, azblob.Metadata{}, azblob.BlobAccessConditions{})
// handleErrors(err)

// The high-level UploadFileToBlockBlob function uploads blocks in parallel for optimal performance and can handle large files as well.
// This function calls StageBlock/CommitBlockList for files larger than 256 MB, and calls Upload for any smaller file.
fmt.Printf("Uploading the file with blob name: %s\n", fileName)
_, err = azblob.UploadFileToBlockBlob(ctx, file, blobURL, azblob.UploadToBlockBlobOptions{
BlockSize: 4 * 1024 * 1024,
Parallelism: 16})
handleErrors(err)
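The opening of the listing loop is missing from the fragment below; based on the marker pattern it uses, it would look roughly like this:
fmt.Println("Listing the blobs in the container:")
for marker := (azblob.Marker{}); marker.NotDone(); {
	// Get a result segment starting with the blob indicated by the current marker
	listBlob, err := containerURL.ListBlobsFlatSegment(ctx, marker, azblob.ListBlobsSegmentOptions{})
	handleErrors(err)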
// ListBlobs returns the start of the next segment; you MUST use this to get
// the next segment (after processing the current result segment).
marker = listBlob.NextMarker
// Process the blobs returned in this result segment (if the segment is empty, the loop body won't execute)
for _, blobInfo := range listBlob.Segment.BlobItems {
fmt.Print(" Blob name: " + blobInfo.Name + "\n")
}
}
Clean up resources
If you no longer need the blobs uploaded in this quickstart, you can delete the entire container using the Delete
method.
// Cleaning up the quick start by deleting the container and the file created locally
fmt.Printf("Press enter key to delete the sample files, example container, and exit the application.\n")
bufio.NewReader(os.Stdin).ReadBytes('\n')
fmt.Printf("Cleaning up.\n")
containerURL.Delete(ctx, azblob.ContainerAccessConditions{})
file.Close()
os.Remove(fileName)
Next steps
In this quickstart, you learned how to transfer files between a local disk and Azure blob storage using Go. For
more information about the Azure Storage Blob SDK, view the Source Code and API Reference.
Transfer objects to/from Azure Blob storage using
PHP
11/25/2021 • 6 minutes to read • Edit Online
In this quickstart, you learn how to use PHP to upload, download, and list block blobs in a container in Azure
Blob storage.
Prerequisites
To access Azure Storage, you'll need an Azure subscription. If you don't already have a subscription, create a free
account before you begin.
All access to Azure Storage takes place through a storage account. For this quickstart, create a storage account
using the Azure portal, Azure PowerShell, or Azure CLI. For help creating a storage account, see Create a storage
account.
Make sure you have the following additional prerequisites installed:
PHP
Azure Storage SDK for PHP
This command clones the repository to your local git folder. To open the PHP sample application, look for the
storage-blobs-php-quickstart folder, and open the phpqs.php file.
export ACCOUNT_NAME=<youraccountname>
export ACCOUNT_KEY=<youraccountkey>
When you press the button displayed, the sample program deletes the storage container and the files. Before
you continue, check your server's folder for the two files. You can open them and see they are identical.
You can also use a tool such as the Azure Storage Explorer to view the files in Blob storage. Azure Storage
Explorer is a free cross-platform tool that allows you to access your storage account information.
After you've verified the files, hit any key to finish the demo and delete the test files. Now that you know what
the sample does, open the phpqs.php file to look at the code.
IMPORTANT
Container names must be lowercase. See Naming and Referencing Containers, Blobs, and Metadata for more information
about container and blob names.
In this section, you set up an instance of the Azure storage client, instantiate the blob service object, create a new
container, and set permissions on the container so the blobs are public. The container is called quickstartblobs.
# Create the BlobService that represents the Blob service for the storage account
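// (Not shown in this fragment) Build the connection string from the environment variables
// set earlier and create the blob client; the exact variable names here are assumptions.
$connectionString = "DefaultEndpointsProtocol=https;AccountName=".getenv('ACCOUNT_NAME').";AccountKey=".getenv('ACCOUNT_KEY');
$blobClient = BlobRestProxy::createBlobService($connectionString);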
$createContainerOptions = new CreateContainerOptions();
$createContainerOptions->setPublicAccess(PublicAccessType::CONTAINER_AND_BLOBS);
$containerName = "blockblobs".generateRandomString();
try {
// Create container.
$blobClient->createContainer($containerName, $createContainerOptions);
//Upload blob
$blobClient->createBlockBlob($containerName, $fileToUpload, $content);
To perform a partial update of the content of a block blob, use the createblocklist() method. Block blobs can be
as large as 4.7 TB, and can be anything from Excel spreadsheets to large video files. Page blobs are primarily
used for the VHD files used to back IaaS VMs. Append blobs are used for logging, such as when you want to
write to a file and then keep adding more information. Append blob should be used in a single writer model.
Most objects stored in Blob storage are block blobs.
List the blobs in a container
You can get a list of files in the container using the listBlobs() method. The following code retrieves the list of
blobs, then loops through them, showing the names of the blobs found in a container.
do{
$result = $blobClient->listBlobs($containerName, $listBlobsOptions);
foreach ($result->getBlobs() as $blob)
{
echo $blob->getName().": ".$blob->getUrl()."<br />";
}
$listBlobsOptions->setContinuationToken($result->getContinuationToken());
} while($result->getContinuationToken());
Clean up resources
If you no longer need the blobs uploaded in this quickstart, you can delete the entire container using the
deleteContainer() method. If the files created are no longer needed, you use the deleteBlob() method to
delete the files.
// Delete blob.
echo "Deleting Blob".PHP_EOL;
echo $fileToUpload;
echo "<br />";
$blobClient->deleteBlob($_GET["containerName"], $fileToUpload);
// Delete container.
echo "Deleting Container".PHP_EOL;
echo $_GET["containerName"].PHP_EOL;
echo "<br />";
$blobClient->deleteContainer($_GET["containerName"]);
Next steps
In this quickstart, you learned how to transfer files between a local disk and Azure blob storage using PHP. To
learn more about working with PHP, continue to our PHP Developer center.
PHP Developer Center
For more information about the Storage Explorer and Blobs, see Manage Azure Blob storage resources with
Storage Explorer.
Quickstart: Azure Blob Storage client library for
Ruby
11/25/2021 • 5 minutes to read • Edit Online
Learn how to use Ruby to create, download, and list blobs in a container in Microsoft Azure Blob Storage.
Prerequisites
To access Azure Storage, you'll need an Azure subscription. If you don't already have a subscription, create a free
account before you begin.
All access to Azure Storage takes place through a storage account. For this quickstart, create a storage account
using the Azure portal, Azure PowerShell, or Azure CLI. For help creating a storage account, see Create a storage
account.
Make sure you have the following additional prerequisites installed:
Ruby
Azure Storage library for Ruby. Use the RubyGem package to install it:
gem install azure-storage-blob
Navigate to the storage-blobs-ruby-quickstart folder, and open the example.rb file in your code editor.
blob_client = Azure::Storage::Blob::BlobService.create(
storage_account_name: account_name,
storage_access_key: account_key
)
Paused, press the Enter key to delete resources created by the sample and exit the application
When you press Enter to continue, the sample program deletes the storage container and the local file. Before
you continue, check your Documents folder for the downloaded file.
You can also use Azure Storage Explorer to view the files in your storage account. Azure Storage Explorer is a
free cross-platform tool that allows you to access your storage account information.
After you've verified the files, press the Enter key to delete the test files and end the demo. Open the example.rb
file to look at the code.
IMPORTANT
Container names must be lowercase. For more information about container and blob names, see Naming and Referencing
Containers, Blobs, and Metadata.
# Create a container
container_name = "quickstartblobs" + SecureRandom.uuid
puts "\nCreating a container: " + container_name
container = blob_client.create_container(container_name)
Block blobs can be as large as 4.7 TB, and can be anything from spreadsheets to large video files. Page blobs are
primarily used for the VHD files that back IaaS virtual machines. Append blobs are commonly used for logging,
such as when you want to write to a file and then keep adding more information.
List the blobs in a container
Get a list of files in the container using the list_blobs method. The following code retrieves the list of blobs, then
displays their names.
# List the blobs in the container
puts "\nList blobs in the container following continuation token"
nextMarker = nil
loop do
blobs = blob_client.list_blobs(container_name, { marker: nextMarker })
blobs.each do |blob|
puts "\tBlob name: #{blob.name}"
end
nextMarker = blobs.continuation_token
break unless nextMarker && !nextMarker.empty?
end
Download a blob
Download a blob to your local disk using the get_blob method. The following code downloads the blob created
in a previous section.
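A sketch of that call (the file name and local download path variables are assumed from earlier in the sample):
# Download the blob content, then write it to a local file.
blob, content = blob_client.get_blob(container_name, file_to_upload)
File.open(File.join(local_path, blob.name), "wb") { |file| file.write(content) }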
Clean up resources
If a blob is no longer needed, use delete_blob to remove it. Delete an entire container using the delete_container
method. Deleting a container also deletes any blobs stored in the container.
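For example (a sketch using the names from this quickstart):
# Delete the blob, then delete the container and everything still in it.
blob_client.delete_blob(container_name, file_to_upload)
blob_client.delete_container(container_name)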
Next steps
In this quickstart, you learned how to transfer files between Azure Blob Storage and a local disk by using Ruby.
To learn more about working with Blob Storage, continue to the Storage account overview.
Storage account overview
For more information about the Storage Explorer and Blobs, see Manage Azure Blob Storage resources with
Storage Explorer.
Quickstart: Azure Blob Storage client library v12
with Xamarin
11/25/2021 • 6 minutes to read • Edit Online
Get started with the Azure Blob Storage client library v12 with Xamarin. Azure Blob Storage is Microsoft's object
storage solution for the cloud. Follow steps to install the package and try out example code for basic tasks. Blob
storage is optimized for storing massive amounts of unstructured data.
Use the Azure Blob Storage client library v12 with Xamarin to:
Create a container
Upload a blob to Azure Storage
List all of the blobs in a container
Download the blob to your device
Delete a container
Reference links:
API reference documentation
Library source code
Package (NuGet)
Sample
Prerequisites
Azure subscription - create one for free
Azure storage account - create a storage account
Visual Studio with Mobile Development for .NET workload installed or Visual Studio for Mac
Setting up
This section walks you through preparing a project to work with the Azure Blob Storage client library v12 with
Xamarin.
Create the project
1. Open Visual Studio and create a Blank Forms App.
2. Name it: BlobQuickstartV12
Install the package
1. Right-click your solution in the Solution Explorer pane and select Manage NuGet Packages for Solution .
2. Search for Azure.Storage.Blobs and install the latest stable version into all projects in your solution.
Set up the app framework
From the BlobQuickstartV12 directory:
1. Open up the MainPage.xaml file in your editor
2. Remove everything between the <ContentPage></ContentPage> elements and replace with the below:
<StackLayout HorizontalOptions="Center" VerticalOptions="Center">
</StackLayout>
Object model
Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data
that does not adhere to a particular data model or definition, such as text or binary data. Blob storage offers
three types of resources:
The storage account
A container in the storage account
A blob in the container
The following diagram shows the relationship between these resources.
Use the following .NET classes to interact with these resources:
BlobServiceClient: The BlobServiceClient class allows you to manipulate Azure Storage resources and blob
containers.
BlobContainerClient: The BlobContainerClient class allows you to manipulate Azure Storage containers and
their blobs.
BlobClient: The BlobClient class allows you to manipulate Azure Storage blobs.
BlobDownloadInfo: The BlobDownloadInfo class represents the properties and content returned from
downloading a blob.
Code examples
These example code snippets show you how to perform the following tasks with the Azure Blob Storage client
library for .NET in a Xamarin.Forms app:
Create class level variables
Create a container
Upload blobs to a container
List the blobs in a container
Download blobs
Delete a container
Create class level variables
The code below declares several class-level variables. They are needed to communicate with Azure Blob Storage
throughout the rest of this sample.
These are in addition to the connection string for the storage account set in the Configure your storage
connection string section.
Add this code as class level variables inside the MainPage.xaml.cs file:
string storageConnectionString = "{set in the Configure your storage connection string section}";
string fileName = $"{Guid.NewGuid()}-temp.txt";
BlobServiceClient client;
BlobContainerClient containerClient;
BlobClient blobClient;
Create a container
Decide on a name for the new container. The code below appends a GUID value to the container name to ensure
that it is unique.
IMPORTANT
Container names must be lowercase. For more information about naming containers and blobs, see Naming and
Referencing Containers, Blobs, and Metadata.
Create an instance of the BlobServiceClient class. Then, call the CreateBlobContainerAsync method to create the
container in your storage account.
Add this code to MainPage.xaml.cs file:
blobClient = containerClient.GetBlobClient(fileName);
uploadButton.IsEnabled = true;
}
uploadButton.IsEnabled = false;
listButton.IsEnabled = true;
}
listButton.IsEnabled = false;
downloadButton.IsEnabled = true;
}
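The snippets above show only the closing lines of the create, upload, and list button handlers from the sample. A minimal sketch of the container-creation handler (the handler name and the GUID-suffixed container name are assumptions based on the description above) might look like this:
async void ContainerCreateButton_Clicked(object sender, EventArgs e)
{
    // Append a GUID so the container name is unique.
    string containerName = "quickstartblobs" + Guid.NewGuid().ToString();

    // Create the service client from the connection string, then create the container.
    client = new BlobServiceClient(storageConnectionString);
    containerClient = await client.CreateBlobContainerAsync(containerName);

    // Get a client for the blob that the other handlers upload, list, and download.
    blobClient = containerClient.GetBlobClient(fileName);
    uploadButton.IsEnabled = true;
}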
Download blobs
Download the previously created blob by calling the DownloadAsync method. The example code copies the
Stream representation of the blob first into a MemoryStream and then into a StreamReader so the text can be
displayed.
Add this code to the MainPage.xaml.cs file:
// Download the blob and copy its contents into a readable stream.
BlobDownloadInfo downloadInfo = await blobClient.DownloadAsync();
using MemoryStream memoryStream = new MemoryStream();

await downloadInfo.Content.CopyToAsync(memoryStream);
memoryStream.Position = 0;

downloadButton.IsEnabled = false;
deleteButton.IsEnabled = true;
}
Delete a container
The following code cleans up the resources the app created by deleting the entire container by using
DeleteAsync.
The app first prompts to confirm before it deletes the blob and container. This is a good chance to verify that the
resources were created correctly, before they are deleted.
Add this code to the MainPage.xaml.cs file:
if (deleteContainer == false)
return;
await containerClient.DeleteAsync();
deleteButton.IsEnabled = false;
}
Before you begin the clean-up process, verify that the blob contents shown on screen match the value that
was uploaded.
After you've verified the values, confirm the prompt to delete the container and finish the demo.
Next steps
In this quickstart, you learned how to upload, download, and list blobs using Azure Blob Storage client library
v12 with Xamarin.
To see Blob storage sample apps, continue to:
Azure Blob Storage SDK v12 Xamarin sample
For tutorials, samples, quick starts and other documentation, visit Azure for mobile developers.
To learn more about Xamarin, see Getting started with Xamarin.
Tutorial: Upload image data in the cloud with Azure
Storage
11/25/2021 • 11 minutes to read • Edit Online
This tutorial is part one of a series. In this tutorial, you'll learn how to deploy a web app. The web app uses the
Azure Blob Storage client library to upload images to a storage account. When you're finished, you'll have a web
app that stores and displays images from Azure storage.
.NET v12 SDK
JavaScript v12 SDK
PowerShell
Azure CLI
Create a resource group with the New-AzResourceGroup command. An Azure resource group is a logical
container into which Azure resources are deployed and managed.
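For example (the resource group name and region here match the values used later in this tutorial):
New-AzResourceGroup -Name myResourceGroup -Location eastus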
In the following command, replace your own globally unique name for the Blob storage account where you see
the <blob_storage_account> placeholder.
PowerShell
Azure CLI
Create a storage account in the resource group you created by using the New-AzStorageAccount command.
$blobStorageAccount="<blob_storage_account>"
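A PowerShell sketch of the account creation (the SKU and kind are illustrative choices; adjust them for your scenario):
New-AzStorageAccount -ResourceGroupName myResourceGroup `
  -Name $blobStorageAccount `
  -Location eastus `
  -SkuName Standard_LRS `
  -Kind StorageV2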
PowerShell
Azure CLI
Get the storage account key by using the Get-AzStorageAccountKey command. Then, use this key to create two
containers with the New-AzStorageContainer command.
Make a note of your Blob storage account name and key. The sample app uses these settings to connect to the
storage account to upload the images.
PowerShell
Azure CLI
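A PowerShell sketch of those steps (the images and thumbnails container names match the app settings used later in this tutorial; the public access level on the thumbnails container is an assumption):
$blobStorageAccountKey = (Get-AzStorageAccountKey -ResourceGroupName myResourceGroup -Name $blobStorageAccount)[0].Value
$blobStorageContext = New-AzStorageContext -StorageAccountName $blobStorageAccount -StorageAccountKey $blobStorageAccountKey

New-AzStorageContainer -Name images -Context $blobStorageContext
New-AzStorageContainer -Name thumbnails -Permission Container -Context $blobStorageContext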
PowerShell
Azure CLI
Create a web app in the myAppServicePlan App Service plan with the New-AzWebApp command.
$webapp="<web_app>"
App Service supports several ways to deploy content to a web app. In this tutorial, you deploy the web app from
a public GitHub sample repository. Configure GitHub deployment to the web app with the az webapp
deployment source config command.
The sample project contains an ASP.NET MVC app. The app accepts an image, saves it to a storage account, and
displays images from a thumbnail container. The web app uses the Azure.Storage, Azure.Storage.Blobs, and
Azure.Storage.Blobs.Models namespaces to interact with the Azure Storage service.
The sample web app uses the Azure Storage APIs for .NET to upload images. Storage account credentials are set
in the app settings for the web app. Add app settings to the deployed app with the az webapp config appsettings
set or New-AzStaticWebAppSetting command.
az webapp config appsettings set --name $webapp --resource-group myResourceGroup \
--settings AzureStorageConfig__AccountName=$blobStorageAccount \
AzureStorageConfig__ImageContainer=images \
AzureStorageConfig__ThumbnailContainer=thumbnails \
AzureStorageConfig__AccountKey=$blobStorageAccountKey
After you deploy and configure the web app, you can test the image upload functionality in the app.
Upload an image
To test the web app, browse to the URL of your published app. The default URL of the web app is
https://<web_app>.azurewebsites.net .
Select the Upload photos region to specify and upload a file, or drag a file onto the region. The image
disappears if successfully uploaded. The Generated Thumbnails section will remain empty until we test it later
in this tutorial.
In the sample code, the UploadFileToStorage task in the Storagehelper.cs file is used to upload the images to the
images container within the storage account using the UploadAsync method. The following code sample
contains the UploadFileToStorage task.
public static async Task<bool> UploadFileToStorage(Stream fileStream, string fileName,
AzureStorageConfig _storageConfig)
{
// Create a URI to the blob
Uri blobUri = new Uri("https://" +
_storageConfig.AccountName +
".blob.core.windows.net/" +
_storageConfig.ImageContainer +
"/" + fileName);
The following classes and methods are used in the preceding task:
CLASS         METHOD
BlobClient    UploadAsync
In part two of the series, you automate thumbnail image creation so you don't need this image. In the
thumbnails container, select the image you uploaded, and select Delete to remove the image.
You can enable Content Delivery Network (CDN) to cache content from your Azure storage account. For more
information, see Integrate an Azure storage account with Azure CDN.
Next steps
In part one of the series, you learned how to configure a web app to interact with storage.
Go on to part two of the series to learn about using Event Grid to trigger an Azure function to resize an image.
Use Event Grid to trigger an Azure Function to resize an uploaded image
Tutorial: Automate resizing uploaded images using
Event Grid
11/25/2021 • 8 minutes to read • Edit Online
Azure Event Grid is an eventing service for the cloud. Event Grid enables you to create subscriptions to events
raised by Azure services or third-party resources.
This tutorial is part two of a series of Storage tutorials. It extends the previous Storage tutorial to add serverless
automatic thumbnail generation using Azure Event Grid and Azure Functions. Event Grid enables Azure
Functions to respond to Azure Blob storage events and generate thumbnails of uploaded images. An event
subscription is created against the Blob storage create event. When a blob is added to a specific Blob storage
container, a function endpoint is called. Data passed to the function binding from Event Grid is used to access the
blob and generate the thumbnail image.
You use the Azure CLI and the Azure portal to add the resizing functionality to an existing image upload app.
Prerequisites
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
PowerShell
Azure CLI
$resourceGroupName="myResourceGroup"
$location="eastus"
Now configure the function app to connect to the Blob storage account you created in the previous tutorial.
The FUNCTIONS_EXTENSION_VERSION=~2 setting makes the function app run on version 2.x of the Azure Functions
runtime.
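A sketch of this step with the Azure CLI (the function app name, storage connection string, and thumbnail settings are assumptions based on the sample; only FUNCTIONS_EXTENSION_VERSION comes from the text above):
az functionapp config appsettings set --name <function_app> --resource-group myResourceGroup \
  --settings AzureWebJobsStorage="<storage_connection_string>" \
  THUMBNAIL_CONTAINER_NAME=thumbnails THUMBNAIL_WIDTH=100 \
  FUNCTIONS_EXTENSION_VERSION=~2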
You can now deploy a function code project to this function app.
The sample C# resize function is available on GitHub. Deploy this code project to the function app by using the
az functionapp deployment source config command.
The image resize function is triggered by HTTP requests sent to it from the Event Grid service. You tell Event Grid
that you want to get these notifications at your function's URL by creating an event subscription. For this tutorial
you subscribe to blob-created events.
The data passed to the function from the Event Grid notification includes the URL of the blob. That URL is in turn
passed to the input binding to obtain the uploaded image from Blob storage. The function generates a
thumbnail image and writes the resulting stream to a separate container in Blob storage.
This project uses EventGridTrigger for the trigger type. Using the Event Grid trigger is recommended over
generic HTTP triggers. Event Grid automatically validates Event Grid Function triggers. With generic HTTP
triggers, you must implement the validation response.
To learn more about this function, see the function.json and run.csx files.
The function project code is deployed directly from the public sample repository. To learn more about
deployment options for Azure Functions, see Continuous deployment for Azure Functions.
2. Select Integration , then choose the Event Grid Trigger and select Create Event Grid
subscription .
SETTING             SUGGESTED VALUE              DESCRIPTION
Resource            Your Blob storage account    Choose the Blob storage account you created.
System Topic Name   imagestoragesystopic         Specify a name for the system topic. To learn
                                                 about system topics, see System topics overview.
Event types         Blob created                 Uncheck all types other than Blob created . Only
                                                 event types of Microsoft.Storage.BlobCreated
                                                 are passed to the function.
5. Select Create to add the event subscription. This creates an event subscription that triggers the
Thumbnail function when a blob is added to the images container. The function resizes the images and
adds them to the thumbnails container.
Now that the backend services are configured, you test the image resize functionality in the sample web app.
Click the Upload photos region to select and upload a file. You can also drag a photo to this region.
Notice that after the uploaded image disappears, a copy of the uploaded image is displayed in the Generated
Thumbnails carousel. This image was resized by the function, added to the thumbnails container, and
downloaded by the web client.
Next steps
In this tutorial, you learned how to:
Create a general Azure Storage account
Deploy serverless code using Azure Functions
Create a Blob storage event subscription in Event Grid
Advance to part three of the Storage tutorial series to learn how to secure access to the storage account.
Secure access to an application's data in the cloud
To learn more about Event Grid, see An introduction to Azure Event Grid.
To try another tutorial that features Azure Functions, see Create a function that integrates with Azure Logic
Apps.
Secure access to application data
11/25/2021 • 4 minutes to read • Edit Online
This tutorial is part three of a series. You learn how to secure access to the storage account.
In part three of the series, you learn how to:
Use SAS tokens to access thumbnail images
Turn on server-side encryption
Enable HTTPS-only transport
Azure Blob storage provides a robust service to store files for applications. This tutorial extends the previous
topic to show how to secure access to your storage account from a web application. When you're finished, the
images are encrypted and the web app uses secure SAS tokens to access the thumbnail images.
Prerequisites
To complete this tutorial you must have completed the previous Storage tutorial: Automate resizing uploaded
images using Event Grid.
PowerShell
Azure CLI
$blobStorageAccount="<blob_storage_account>"
The sasTokens branch of the repository updates the StorageHelper.cs file. It replaces the GetThumbNailUrls
task with the code example below. The updated task retrieves the thumbnail URLs by using a BlobSasBuilder to
specify the start time, expiry time, and permissions for the SAS token. Once deployed, the web app retrieves
the thumbnails with URLs that include a SAS token. The updated task is shown in the following example:
public static async Task<List<string>> GetThumbNailUrls(AzureStorageConfig _storageConfig)
{
List<string> thumbnailUrls = new List<string>();
if (container.Exists())
{
// Set the expiration time and permissions for the container.
// In this case, the start time is specified as a few
// minutes in the past, to mitigate clock skew.
// The shared access signature will be valid immediately.
BlobSasBuilder sas = new BlobSasBuilder
{
Resource = "c",
BlobContainerName = _storageConfig.ThumbnailContainer,
StartsOn = DateTimeOffset.UtcNow.AddMinutes(-5),
ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
};
sas.SetPermissions(BlobContainerSasPermissions.All);
// Return the URI string for each blob, including the SAS token.
// (sasBlobUri is the blob URI with sas.ToSasQueryParameters appended; it is built
// in code not shown in this excerpt.)
thumbnailUrls.Add(sasBlobUri);
}
}
return await Task.FromResult(thumbnailUrls);
}
The following classes, properties, and methods are used in the preceding task:
CLASS                         PROPERTIES    METHODS
StorageSharedKeyCredential
BlobServiceClient                           GetBlobContainerClient
BlobSasBuilder                              SetPermissions, ToSasQueryParameters
BlobItem                      Name
UriBuilder                    Query
List                                        Add
curl http://<storage-account-name>.blob.core.windows.net/<container>/<blob-name> -I
Now that secure transfer is required, you receive the following message:
HTTP/1.1 400 The account being accessed does not support http.
Next steps
In part three of the series, you learned how to secure access to the storage account, such as how to:
Use SAS tokens to access thumbnail images
Turn on server-side encryption
Enable HTTPS-only transport
Advance to part four of the series to learn how to monitor and troubleshoot a cloud storage application.
Monitor and troubleshoot a cloud storage application
Monitor and troubleshoot a cloud storage
application
11/25/2021 • 3 minutes to read • Edit Online
This tutorial is part four and the final part of a series. You learn how to monitor and troubleshoot a cloud
storage application.
In part four of the series, you learn how to:
Turn on logging and metrics
Enable alerts for authorization errors
Run test traffic with incorrect SAS tokens
Download and analyze logs
Azure storage analytics provides logging and metric data for a storage account. This data provides insights into
the health of your storage account. To collect data from Azure storage analytics, you can configure logging,
metrics and alerts. This process involves turning on logging, configuring metrics, and enabling alerts.
Logging and metrics from storage accounts are enabled from the Diagnostics tab in the Azure portal. Storage
logging enables you to record details for both successful and failed requests in your storage account. These logs
enable you to see details of read, write, and delete operations against your Azure tables, queues, and blobs. They
also enable you to see the reasons for failed requests such as timeouts, throttling, and authorization errors.
curl https://<STORAGE_ACCOUNT_NAME>.blob.core.windows.net/<CONTAINER_NAME>/<INCORRECT_BLOB_NAME>?$sasToken
The following image is an example alert that is based on the simulated failure run with the preceding example.
Once you are connected, expand the containers in the storage tree view to view the log blobs. Select the latest
log and click OK .
Next steps
In part four and the final part of the series, you learned how to monitor and troubleshoot your storage account,
such as how to:
Turn on logging and metrics
Enable alerts for authorization errors
Run test traffic with incorrect SAS tokens
Download and analyze logs
Follow this link to see pre-built storage samples.
Azure storage script samples
Tutorial: Migrate on-premises data to cloud storage
with AzCopy
11/25/2021 • 6 minutes to read • Edit Online
AzCopy is a command-line tool for copying data to or from Azure Blob storage, Azure Files, and Azure Table
storage, by using simple commands. The commands are designed for optimal performance. Using AzCopy, you
can either copy data between a file system and a storage account, or between storage accounts. AzCopy can
also be used to copy data from an on-premises location to a storage account.
In this tutorial, you learn how to:
Create a storage account.
Use AzCopy to upload all your data.
Modify the data for test purposes.
Create a scheduled task or cron job to identify new files to upload.
If you don't have an Azure subscription, create a free account before you begin.
Prerequisites
To complete this tutorial, download the latest version of AzCopy. See Get started with AzCopy.
If you're on Windows, you'll need Schtasks, because this tutorial uses it to schedule a task. Linux users use the
crontab command instead.
To create a general-purpose v2 storage account in the Azure portal, follow these steps:
1. On the Azure portal menu, select All services . In the list of resources, type Storage Accounts . As you
begin typing, the list filters based on your input. Select Storage Accounts .
2. On the Storage Accounts window that appears, choose + New .
3. On the Basics blade, select the subscription in which to create the storage account.
4. Under the Resource group field, select your desired resource group, or create a new resource group. For
more information on Azure resource groups, see Azure Resource Manager overview.
5. Next, enter a name for your storage account. The name you choose must be unique across Azure. The name
also must be between 3 and 24 characters in length, and may include only numbers and lowercase letters.
6. Select a region for your storage account, or use the default region.
7. Select a performance tier. The default tier is Standard.
8. Specify how the storage account will be replicated. The default redundancy option is Geo-redundant storage
(GRS). For more information about available replication options, see Azure Storage redundancy.
9. Additional options are available on the Advanced , Networking , Data protection , and Tags blades. To use
Azure Data Lake Storage, choose the Advanced blade, and then set Hierarchical namespace to Enabled .
For more information, see Azure Data Lake Storage Gen2 Introduction
10. Select Review + Create to review your storage account settings and create the account.
11. Select Create .
The following image shows the settings on the Basics blade for a new storage account:
Create a container
The first step is to create a container, because blobs must always be uploaded into a container. Containers are
used as a method of organizing groups of blobs like you would files on your computer, in folders.
Follow these steps to create a container:
1. Select the Storage accounts button from the main page, and select the storage account that you
created.
2. Select Blobs under Services , and then select Container .
Container names must start with a letter or number. They can contain only letters, numbers, and the hyphen
character (-). For more rules about naming blobs and containers, see Naming and referencing containers, blobs,
and metadata.
Download AzCopy
Download the AzCopy V10 executable file.
Windows (zip)
Linux (tar)
macOS (zip)
Place the AzCopy file anywhere on your computer. Add the location of the file to your system path variable so
that you can refer to this executable file from any folder on your computer.
azcopy login
This command returns an authentication code and the URL of a website. Open the website, provide the code,
and then choose the Next button.
A sign-in window will appear. In that window, sign into your Azure account by using your Azure account
credentials. After you've successfully signed in, you can close the browser window and begin using AzCopy.
Replace the <local-folder-path> placeholder with the path to a folder that contains files (For example:
C:\myFolder or /mnt/myFolder ).
Replace the <storage-account-name> placeholder with the name of your storage account.
Replace the <container-name> placeholder with the name of the container that you created.
To upload the contents of the specified directory to Blob storage recursively, specify the --recursive option.
When you run AzCopy with this option, all subfolders and their files are uploaded as well.
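The upload command that those placeholders refer to looks roughly like this (AzCopy v10 syntax):
azcopy copy "<local-folder-path>" "https://<storage-account-name>.blob.core.windows.net/<container-name>" --recursive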
Replace the <local-folder-path> placeholder with the path to a folder that contains files (For example:
C:\myFolder or /mnt/myFolder ).
Replace the <storage-account-name> placeholder with the name of your storage account.
Replace the <container-name> placeholder with the name of the container that you created.
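A sketch of the corresponding sync command:
azcopy sync "<local-folder-path>" "https://<storage-account-name>.blob.core.windows.net/<container-name>"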
To learn more about the sync command, see Synchronize files.
NOTE
The Linux example appends a SAS token. You'll need to provide one in your command. The current version of AzCopy V10
doesn't support Azure AD authorization in cron jobs.
Linux
Windows
In this tutorial, Schtasks is used to create a scheduled task on Windows. The Crontab command is used to create
a cron job on Linux.
Schtasks enables an administrator to create, delete, query, change, run, and end scheduled tasks on a local or
remote computer. Cron enables Linux and Unix users to run commands or scripts at a specified date and time
by using cron expressions.
Linux
Windows
crontab -e
*/5 * * * * sh /path/to/script.sh
Specifying the cron expression */5 * * * * in the command indicates that the shell script script.sh should
run every five minutes. You can schedule the script to run at a specific time daily, monthly, or yearly. To learn
more about setting the date and time for job execution, see cron expressions.
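On Windows, a comparable schedule can be created with Schtasks (the task name and script path below are placeholders):
schtasks /CREATE /SC minute /MO 5 /TN "AzCopy Upload" /TR "C:\path\to\script.bat"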
To validate that the scheduled task/cron job runs correctly, create new files in your myFolder directory. Wait five
minutes to confirm that the new files have been uploaded to your storage account. Go to your log directory to
view output logs of the scheduled task or cron job.
Next steps
To learn more about ways to move on-premises data to Azure Storage and vice versa, follow this link:
Move data to and from Azure Storage.
For more information about AzCopy, see any of these articles:
Get started with AzCopy
Transfer data with AzCopy and blob storage
Transfer data with AzCopy and file storage
Transfer data with AzCopy and Amazon S3 buckets
Configure, optimize, and troubleshoot AzCopy
Create a virtual machine and storage account for a
scalable application
11/25/2021 • 4 minutes to read • Edit Online
This tutorial is part one of a series. This tutorial shows you how to deploy an application that uploads and
downloads large amounts of random data to an Azure storage account. When you're finished, you have a
console application running on a virtual machine that uploads and downloads large amounts of data to a
storage account.
In part one of the series, you learn how to:
Create a storage account
Create a virtual machine
Configure a custom script extension
If you don't have an Azure subscription, create a free account before you begin.
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
Deploy configuration
For this tutorial, there are prerequisites that must be installed on the virtual machine. The custom script
extension is used to run a PowerShell script that completes the following tasks:
Install .NET Core 2.0
Install Chocolatey
Install Git
Clone the sample repo
Restore NuGet packages
Create 50 1-GB files with random data
Run the following cmdlet to finalize configuration of the virtual machine. This step takes 5-15 minutes to
complete.
# Start a CustomScript extension to use a simple PowerShell script to install .NET core, dependencies, and
pre-create the files to upload.
Set-AzVMCustomScriptExtension -ResourceGroupName myResourceGroup `
-VMName myVM `
-Location EastUS `
-FileUri https://raw.githubusercontent.com/azure-samples/storage-dotnet-perf-scale-
app/master/setup_env.ps1 `
-Run 'setup_env.ps1' `
-Name DemoScriptExtension
Next steps
In part one of the series, you learned about creating a storage account, deploying a virtual machine and
configuring the virtual machine with the required pre-requisites such as how to:
Create a storage account
Create a virtual machine
Configure a custom script extension
Advance to part two of the series to upload large amounts of data to a storage account using exponential retry
and parallelism.
Upload large amounts of large files in parallel to a storage account
Upload large amounts of random data in parallel to
Azure storage
11/25/2021 • 7 minutes to read • Edit Online
This tutorial is part two of a series. This tutorial shows you how to deploy an application that uploads large
amounts of random data to an Azure storage account.
In part two of the series, you learn how to:
Configure the connection string
Build the application
Run the application
Validate the number of connections
Microsoft Azure Blob Storage provides a scalable service for storing your data. To ensure your application is as
performant as possible, an understanding of how blob storage works is recommended. Knowledge of the limits
for Azure blobs is important; to learn more about these limits, see Scalability and performance targets for Blob
storage.
Partition naming is another potentially important factor when designing a high-performance application using
blobs. For block sizes greater than or equal to 4 MiB, High-Throughput block blobs are used, and partition
naming will not impact performance. For block sizes less than 4 MiB, Azure storage uses a range-based
partitioning scheme to scale and load balance. This configuration means that files with similar naming
conventions or prefixes go to the same partition. This logic includes the name of the container that the files are
being uploaded to. In this tutorial, you use files that have GUIDs for names as well as randomly generated
content. They are then uploaded to five different containers with random names.
Prerequisites
To complete this tutorial, you must have completed the previous Storage tutorial: Create a virtual machine and
storage account for a scalable application.
mstsc /v:<publicIpAddress>
When finished, open another Command Prompt , navigate to D:\git\storage-dotnet-perf-scale-app and type
dotnet build to build the application.
dotnet run
The application creates five randomly named containers and begins uploading the files in the staging directory
to the storage account.
The UploadFilesAsync method is shown in the following example:
// Start a timer to measure how long it takes to upload all the files.
Stopwatch timer = Stopwatch.StartNew();
try
{
Console.WriteLine($"Iterating in directory: {uploadPath}");
int count = 0;
timer.Stop();
Console.WriteLine($"Uploaded {count} files in {timer.Elapsed.TotalSeconds} seconds");
}
catch (RequestFailedException ex)
{
Console.WriteLine($"Azure request failed: {ex.Message}");
}
catch (DirectoryNotFoundException ex)
{
Console.WriteLine($"Error parsing files in the directory: {ex.Message}");
}
catch (Exception ex)
{
Console.WriteLine($"Exception: {ex.Message}");
}
}
C:\>
Next steps
In part two of the series, you learned about uploading large amounts of random data to a storage account in
parallel, such as how to:
Configure the connection string
Build the application
Run the application
Validate the number of connections
Advance to part three of the series to download large amounts of data from a storage account.
Download large amounts of random data from Azure storage
Download large amounts of random data from
Azure storage
11/25/2021 • 6 minutes to read • Edit Online
This tutorial is part three of a series. This tutorial shows you how to download large amounts of data from Azure
storage.
In part three of the series, you learn how to:
Update the application
Run the application
Validate the number of connections
Prerequisites
To complete this tutorial, you must have completed the previous Storage tutorial: Upload large amounts of
random data in parallel to Azure storage.
mstsc /v:<publicIpAddress>
// Uncomment the following line to enable downloading of files from the storage account.
// This is commented out initially to support the tutorial at
// https://docs.microsoft.com/azure/storage/blobs/storage-blob-scalable-app-download-files
await DownloadFilesAsync();
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
exception = true;
}
finally
{
// The following function will delete the container and all files contained in them.
// This is commented out initially as the tutorial at
// https://docs.microsoft.com/azure/storage/blobs/storage-blob-scalable-app-download-files
// has you upload only for one tutorial and download for the other.
if (!exception)
{
// await DeleteExistingContainersAsync();
}
Console.WriteLine("Press any key to exit the application");
Console.ReadKey();
}
}
After the application has been updated, you need to build the application again. Open a Command Prompt and
navigate to D:\git\storage-dotnet-perf-scale-app . Rebuild the application by running dotnet build as seen in
the following example:
dotnet build
dotnet run
// Start a timer to measure how long it takes to download all the files.
Stopwatch timer = Stopwatch.StartNew();
C:\>
Next steps
In part three of the series, you learned about downloading large amounts of data from a storage account,
including how to:
Run the application
Validate the number of connections
Go to part four of the series to verify throughput and latency metrics in the portal.
Verify throughput and latency metrics in the portal
Verify throughput and latency metrics for a storage
account
11/25/2021 • 2 minutes to read • Edit Online
This tutorial is part four and the final part of a series. In the previous tutorials, you learned how to upload and
download large amounts of random data to an Azure storage account. This tutorial shows you how you can use
metrics to view throughput and latency in the Azure portal.
In part four of the series, you learn how to:
Configure charts in the Azure portal
Verify throughput and latency metrics
Azure storage metrics uses Azure monitor to provide a unified view into the performance and availability of
your storage account.
Configure metrics
Navigate to Metrics under SETTINGS in your storage account.
Choose Blob from the SUB SERVICE drop-down.
Under METRIC , select one of the metrics found in the following table:
The following metrics give you an idea of the latency and throughput of the application. The metrics you
configure in the portal are 1-minute averages. If a transaction finishes in the middle of a minute, that minute's
data is halved for the average. In the application, the upload and download operations were timed, and the
output showed the actual amount of time it took to upload and download the files. This information can be used
in conjunction with the portal metrics to fully understand throughput.
METRIC                   DEFINITION
Success E2E Latency      The average end-to-end latency of successful requests made to a storage
                         service or the specified API operation. This value includes the required
                         processing time within Azure Storage to read the request, send the
                         response, and receive acknowledgment of the response.
Success Server Latency   The average time used to process a successful request by Azure Storage.
                         This value does not include the network latency specified in
                         SuccessE2ELatency.
Select Last 24 hours (Automatic) next to Time . Choose Last hour and Minute for Time granularity , then
click Apply .
Charts can have more than one metric assigned to them, but assigning more than one metric disables the ability
to group by dimensions.
Dimensions
Dimensions are used to look deeper into the charts and get more detailed information. Different metrics have
different dimensions. One dimension that is available is the API name dimension. This dimension breaks out
the chart into each separate API call. The first image below shows an example chart of total transactions for a
storage account. The second image shows the same chart but with the API name dimension selected. As you can
see, each transaction is listed giving more details into how many calls were made by API name.
Clean up resources
When no longer needed, delete the resource group, virtual machine, and all related resources. To do so, select
the resource group for the VM and click Delete.
Next steps
In part four of the series, you learned about viewing metrics for the example solution, such as how to:
Configure charts in the Azure portal
Verify throughput and latency metrics
Follow this link to see pre-built storage samples.
Azure storage script samples
Tutorial: Host a static website on Blob Storage
11/25/2021 • 4 minutes to read • Edit Online
In this tutorial, you'll learn how to build and deploy a static website to Azure Storage. When you're finished, you
will have a static website that users can access publicly.
In this tutorial, you learn how to:
Configure static website hosting
Deploy a Hello World website
Static websites have some limitations. For example, if you want to configure headers, you'll have to use Azure
Content Delivery Network (Azure CDN). There's no way to configure headers as part of the static website feature
itself. Also, AuthN and AuthZ are not supported.
If these features are important for your scenario, consider using Azure Static Web Apps. It's a great alternative to
static websites and is also appropriate in cases where you don't require a web server to render content. You can
configure headers and AuthN / AuthZ is fully supported. Azure Static Web Apps also provides a fully managed
continuous integration and continuous delivery (CI/CD) workflow from GitHub source to global deployment.
Prerequisites
To access Azure Storage, you'll need an Azure subscription. If you don't already have a subscription, create a free
account before you begin.
All access to Azure Storage takes place through a storage account. For this quickstart, create a storage account
using the Azure portal, Azure PowerShell, or Azure CLI. For help creating a storage account, see Create a storage
account.
NOTE
Static websites are now available for general-purpose v2 Standard storage accounts as well as storage accounts with
hierarchical namespace enabled.
This tutorial uses Visual Studio Code, a free tool for programmers, to build the static website and deploy it to an
Azure Storage account.
After you install Visual Studio Code, install the Azure Storage preview extension. This extension integrates Azure
Storage management functionality with Visual Studio Code. You will use the extension to deploy your static
website to Azure Storage. To install the extension:
1. Launch Visual Studio Code.
2. On the toolbar, click Extensions . Search for Azure Storage, and select the Azure Storage extension from
the list. Then click the Install button to install the extension.
Sign in to the Azure portal
Sign in to the Azure portal to get started.
3. Create the default index file in the mywebsite folder and name it index.html.
4. Open index.html in the editor, paste the following text into the file, and save it:
<!DOCTYPE html>
<html>
<body>
<h1>Hello World!</h1>
</body>
</html>
<!DOCTYPE html>
<html>
<body>
<h1>404</h1>
</body>
</html>
7. Right-click under the mywebsite folder in the Explorer panel and select Deploy to Static Website... to
deploy your website. You will be prompted to log in to Azure to retrieve a list of subscriptions.
8. Select the subscription containing the storage account for which you enabled static website hosting. Next,
select the storage account when prompted.
Visual Studio Code will now upload your files to your web endpoint, and show the success status bar. Launch the
website to view it in Azure.
You've successfully completed the tutorial and deployed a static website to Azure.
Feature support
This table shows how this feature is supported in your account and the impact on support when you enable
certain capabilities.
Standard general-purpose v2
1 Data Lake Storage Gen2 and the Network File System (NFS) 3.0 protocol both require a storage account with a
hierarchical namespace enabled.
Next steps
In this tutorial, you learned how to configure your Azure Storage account for static website hosting, and how to
create and deploy a static website to an Azure endpoint.
Next, learn how to configure a custom domain with your static website.
Map a custom domain to an Azure Blob Storage endpoint
Tutorial: Build a highly available application with
Blob storage
11/25/2021 • 12 minutes to read • Edit Online
This tutorial is part one of a series. In it, you learn how to make your application data highly available in Azure.
When you've completed this tutorial, you will have a console application that uploads and retrieves a blob from
a read-access geo-zone-redundant (RA-GZRS) storage account.
Geo-redundancy in Azure Storage replicates transactions asynchronously from a primary region to a secondary
region that is hundreds of miles away. This replication process guarantees that the data in the secondary region
is eventually consistent. The console application uses the circuit breaker pattern to determine which endpoint to
connect to, automatically switching between endpoints as failures and recoveries are simulated.
If you don't have an Azure subscription, create a free account before you begin.
In part one of the series, you learn how to:
Create a storage account
Set the connection string
Run the console application
Prerequisites
To complete this tutorial:
.NET v12 SDK
.NET v11 SDK
Python v12 SDK
Python v2.1
Node.js v12 SDK
Node.js v11 SDK
We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.
We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.
We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.
We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.
We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.
Next steps
In part one of the series, you learned about making an application highly available with RA-GZRS storage
accounts.
Advance to part two of the series to learn how to simulate a failure and force your application to use the
secondary RA-GZRS endpoint.
Simulate a failure in reading from the primary region
Tutorial: Simulate a failure in reading data from the
primary region
11/25/2021 • 5 minutes to read • Edit Online
This tutorial is part two of a series. In it, you learn about the benefits of read-access geo-zone-redundant storage
(RA-GZRS) by simulating a failure.
In order to simulate a failure, you can use either static routing or Fiddler. Both methods allow you to
simulate failure for requests to the primary endpoint of your read-access geo-zone-redundant (RA-GZRS)
storage account, leading the application to read from the secondary endpoint instead.
If you don't have an Azure subscription, create a free account before you begin.
In part two of the series, you learn how to:
Run and pause the application
Simulate a failure with an invalid static route or Fiddler
Simulate primary endpoint restoration
Prerequisites
Before you begin this tutorial, complete the previous tutorial: Make your application data highly available with
Azure storage.
To simulate a failure with static routing, you will use an elevated command prompt.
To simulate a failure using Fiddler, download and install Fiddler
nslookup STORAGEACCOUNTNAME.blob.core.windows.net
Copy the IP address of your storage account to a text editor for later use.
To get the IP address of your local host, type ipconfig on the Windows command prompt, or ifconfig on the
Linux terminal.
To add a static route for a destination host, type the following command on a Windows command prompt or
Linux terminal, replacing <destination_ip> with your storage account IP address and <gateway_ip> with your
local host IP address.
Linux
Windows
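A sketch of the command on Linux:
route add -host <destination_ip> gw <gateway_ip>
And on Windows:
route add <destination_ip> <gateway_ip>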
In the window with the running sample, resume the application or press the appropriate key to download the
sample file and confirm that it comes from secondary storage. You can then pause the sample again or wait at
the prompt.
Simulate primary endpoint restoration
To simulate the primary endpoint becoming functional again, delete the invalid static route from the routing
table. This allows all requests to the primary endpoint to be routed through the default gateway. Type the
following command on a Windows command prompt or Linux terminal.
Linux
Windows
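A sketch of the command on Linux:
route del -host <destination_ip>
And on Windows:
route delete <destination_ip>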
You can then resume the application or press the appropriate key to download the sample file again, this time
confirming that it once again comes from primary storage.
/*
// Simulate data center failure
// After it is successfully downloading the blob, pause the code in the sample,
// uncomment these lines of script, and save the script.
// It will intercept the (probably successful) responses and send back a 503 error.
// When you're ready to stop sending back errors, comment these lines of script out again
// and save the changes.
if ((oSession.hostname == "STORAGEACCOUNTNAME.blob.core.windows.net")
&& (oSession.PathAndQuery.Contains("HelloWorld"))) {
oSession.responseCode = 503;
}
*/
Start and pause the application
Use the instructions in the previous tutorial to launch the sample and download the test file, confirming that it
comes from primary storage. Depending on your target platform, you can then manually pause the sample or
wait at a prompt.
Simulate failure
While the application is paused, switch back to Fiddler and uncomment the custom rule you saved in the
OnBeforeResponse function. Be sure to select File and Save to save your changes so the rule will take effect. This
code looks for requests to the RA-GZRS storage account and, if the path contains the name of the sample file,
returns a response code of 503 - Service Unavailable .
In the window with the running sample, resume the application or press the appropriate key to download the
sample file and confirm that it comes from secondary storage. You can then pause the sample again or wait at
the prompt.
Simulate primary endpoint restoration
In Fiddler, remove or comment out the custom rule again. Select File and Save to ensure the rule will no longer
be in effect.
In the window with the running sample, resume the application or press the appropriate key to download the
sample file and confirm that it comes from primary storage once again. You can then exit the sample.
Next steps
In part two of the series, you learned about simulating a failure to test read access geo-redundant storage.
To learn more about how RA-GZRS storage works, as well as its associated risks, read the following article:
Designing HA apps with RA-GZRS
Tutorial - Encrypt and decrypt blobs using Azure
Key Vault
11/25/2021 • 7 minutes to read • Edit Online
This tutorial covers how to make use of client-side storage encryption with Azure Key Vault. It walks you through
how to encrypt and decrypt a blob in a console application using these technologies.
Estimated time to complete: 20 minutes
For overview information about Azure Key Vault, see What is Azure Key Vault?.
For overview information about client-side encryption for Azure Storage, see Client-Side Encryption and Azure
Key Vault for Microsoft Azure Storage.
Prerequisites
To complete this tutorial, you must have the following:
An Azure Storage account
Visual Studio 2013 or later
Azure PowerShell
Install-Package Microsoft.Azure.ConfigurationManager
Install-Package Microsoft.Azure.Storage.Common
Install-Package Microsoft.Azure.Storage.Blob
Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory
Install-Package Microsoft.Azure.KeyVault
Install-Package Microsoft.Azure.KeyVault.Extensions
<appSettings>
<add key="accountName" value="myaccount"/>
<add key="accountKey" value="theaccountkey"/>
<add key="clientId" value="theclientid"/>
<add key="clientSecret" value="theclientsecret"/>
<add key="container" value="stuff"/>
</appSettings>
Add the following using directives and make sure to add a reference to System.Configuration to the project.
We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.
We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.
We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.
NOTE
Key Vault Object Models
It is important to understand that there are actually two Key Vault object models to be aware of: one is based on the
REST API (KeyVault namespace) and the other is an extension for client-side encryption.
The Key Vault Client interacts with the REST API and understands JSON Web Keys and secrets for the two kinds of things
that are contained in Key Vault.
The Key Vault Extensions are classes that seem specifically created for client-side encryption in Azure Storage. They contain
an interface for keys (IKey) and classes based on the concept of a Key Resolver. There are two implementations of IKey
that you need to know: RSAKey and SymmetricKey. Now they happen to coincide with the things that are contained in a
Key Vault, but at this point they are independent classes (so the Key and Secret retrieved by the Key Vault Client do not
implement IKey).
We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.
NOTE
If you look at the BlobEncryptionPolicy constructor, you will see that it can accept a key and/or a resolver. Be aware that
right now you cannot use a resolver for encryption because it does not currently support a default key.
We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.
NOTE
There are a couple of other kinds of resolvers to make key management easier, including: AggregateKeyResolver and
CachingKeyResolver.
Use Key Vault secrets
The way to use a secret with client-side encryption is via the SymmetricKey class because a secret is essentially a
symmetric key. But, as noted above, a secret in Key Vault does not map exactly to a SymmetricKey. There are a
few things to understand:
The key in a SymmetricKey has to be a fixed length: 128, 192, 256, 384, or 512 bits.
The key in a SymmetricKey should be Base64 encoded.
A Key Vault secret that will be used as a SymmetricKey needs to have a Content Type of "application/octet-
stream" in Key Vault.
Here is an example in PowerShell of creating a secret in Key Vault that can be used as a SymmetricKey. Please
note that the hard-coded value, $key, is for demonstration purposes only. In your own code, you'll want to
generate this key.
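A PowerShell sketch (the vault and secret names are placeholders, and the key value must be replaced with a securely generated, Base64-encoded key of a supported length):
# Demo only: substitute a securely generated, Base64-encoded 256-bit key in real code.
$key = "<Base64-encoded-256-bit-key>"
$secretvalue = ConvertTo-SecureString $key -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName "<your-key-vault>" -Name "TestSecret" `
    -SecretValue $secretvalue -ContentType "application/octet-stream"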
In your console application, you can use the same call as before to retrieve this secret as a SymmetricKey.
.NET v12 SDK
.NET v11 SDK
We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For
more information, see Announcing the Azure Storage v12 Client Libraries.
Next steps
For more information about using Microsoft Azure Storage with C#, see Microsoft Azure Storage Client Library
for .NET.
For more information about the Blob REST API, see Blob Service REST API.
For the latest information on Microsoft Azure Storage, go to the Microsoft Azure Storage Team Blog.
Tutorial: Add a role assignment condition to restrict
access to blobs using the Azure portal (preview)
11/25/2021 • 4 minutes to read • Edit Online
IMPORTANT
Azure ABAC and Azure role assignment conditions are currently in preview. This preview version is provided without a
service level agreement, and it's not recommended for production workloads. Certain features might not be supported or
might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
In most cases, a role assignment will grant the permissions you need to Azure resources. However, in some
cases you might want to provide more fine-grained access control by adding a role assignment condition.
In this tutorial, you learn how to:
Add a condition to a role assignment
Restrict access to blobs based on a blob index tag
Prerequisites
For information about the prerequisites to add or edit role assignment conditions, see Conditions prerequisites.
Condition
In this tutorial, you restrict access to blobs with a specific tag. For example, you add a condition to a role
assignment so that Chandra can only read files with the tag Project=Cascade .
If Chandra tries to read a blob without the tag Project=Cascade , access is not allowed.
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEqualsIgnoreCase 'Cascade'
)
)
NOTE
Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to
blob index tags, you must use blob index tags with conditions.
KEY        VALUE
Project    Cascade
7. Click the Upload button to upload the file.
8. Upload a second text file.
9. Add the following blob index tag to the second text file.
KEY        VALUE
Project    Baker
7. (Optional) In the Description box, enter Read access to blobs with the tag Project=Cascade .
8. Click Next .
3. Under Read a blob, click Read content from a blob with tag conditions and then click Select.
4. In the Build expression section, click Add expression .
The Expression section expands.
5. Specify the following expression settings:
SETTING      VALUE
Key Project
Operator StringEqualsIgnoreCase
Value Cascade
NOTE
You typically don't need to assign the Reader role. However, this is done so that you can test the condition using
the Azure portal.
Step 6: Test the condition
1. In a new window, open the Azure portal.
2. Sign in as the user you created earlier.
3. Open the storage account and container you created.
4. Ensure that the authentication method is set to Azure AD User Account and not Access key .
Next steps
Example Azure role assignment conditions
Actions and attributes for Azure role assignment conditions in Azure Storage (preview)
Azure role assignment condition format and syntax
Tutorial: Add a role assignment condition to restrict
access to blobs using Azure PowerShell (preview)
11/25/2021 • 5 minutes to read • Edit Online
IMPORTANT
Azure ABAC and Azure role assignment conditions are currently in preview. This preview version is provided without a
service level agreement, and it's not recommended for production workloads. Certain features might not be supported or
might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
In most cases, a role assignment will grant the permissions you need to Azure resources. However, in some
cases you might want to provide more fine-grained access control by adding a role assignment condition.
In this tutorial, you learn how to:
Add a condition to a role assignment
Restrict access to blobs based on a blob index tag
Prerequisites
For information about the prerequisites to add or edit role assignment conditions, see Conditions prerequisites.
Condition
In this tutorial, you restrict access to blobs with a specific tag. For example, you add a condition to a role
assignment so that Chandra can only read files with the tag Project=Cascade.
If Chandra tries to read a blob without the tag Project=Cascade, access is not allowed.
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}
  AND
  SubOperationMatches{'Blob.Read.WithTagConditions'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
 )
)
Get-InstalledModule -Name Az
Get-InstalledModule -Name Az.Resources
Get-InstalledModule -Name Az.Storage
3. If necessary, use Install-Module to install the required versions for the Az, Az.Resources, and Az.Storage
modules.
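A minimal sketch of that installation, assuming the PowerShell Gallery as the source (specific module versions are not pinned here):
Install-Module -Name Az, Az.Resources, Az.Storage -Repository PSGallery -Scope CurrentUser -Force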
Connect-AzAccount
Get-AzSubscription
$subscriptionId = "<subscriptionId>"
$userObjectId = "<userObjectId>"
NOTE
Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to
blob index tags, you must use blob index tags with conditions.
KEY        VALUE
Project    Cascade
KEY        VALUE
Project    Baker
$resourceGroup = "<resourceGroup>"
$storageAccountName = "<storageAccountName>"
$containerName = "<containerName>"
$blobNameCascade = "<blobNameCascade>"
$blobNameBaker = "<blobNameBaker>"
$condition = "((!
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND
SubOperationMatches{'Blob.Read.WithTagConditions'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_se
nsitive`$>] StringEquals 'Cascade'))"
In PowerShell, if your condition includes a dollar sign ($), you must prefix it with a backtick (`). For
example, this condition uses dollar signs to delineate the tag key name.
4. Initialize the condition version and description.
$conditionVersion = "2.0"
$description = "Read access to blobs with the tag Project=Cascade"
5. Use New-AzRoleAssignment to assign the Storage Blob Data Reader role with a condition to the user at a
resource group scope.
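A minimal sketch of this step, assuming a recent Az.Resources module that supports the -Condition and -ConditionVersion parameters and the variables initialized in the previous steps:
$scope = "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup"
New-AzRoleAssignment -ObjectId $userObjectId -Scope $scope -RoleDefinitionName "Storage Blob Data Reader" -Description $description -Condition $condition -ConditionVersion $conditionVersion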
RoleAssignmentId : /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microso
ft.Authorization/roleAssignments/<roleAssignmentId>
Scope : /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>
DisplayName : Chandra
SignInName : [email protected]
RoleDefinitionName : Storage Blob Data Reader
RoleDefinitionId : 2a2b9908-6ea1-4ae2-8e65-a410df84e7d1
ObjectId : <userObjectId>
ObjectType : User
CanDelegate : False
Description : Read access to blobs with the tag Project=Cascade
ConditionVersion : 2.0
Condition : ((!
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND
SubOperationMatches{'Blob.Read.WithTagConditions'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/co
ntainers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'))
Connect-AzAccount
$storageAccountName = "<storageAccountName>"
$containerName = "<containerName>"
$blobNameBaker = "<blobNameBaker>"
$blobNameCascade = "<blobNameCascade>"
4. Use New-AzStorageContext to create a specific context to access your storage account more easily.
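A minimal sketch of this step; the $bearerCtx variable name is illustrative, and -UseConnectedAccount makes the context use the signed-in Azure AD account instead of an account key:
$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName -UseConnectedAccount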
5. Use Get-AzStorageBlob to try to read the file for the Baker project.
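A sketch of that call, assuming the $bearerCtx context from the previous step:
Get-AzStorageBlob -Container $containerName -Blob $blobNameBaker -Context $bearerCtx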
Here's an example of the output. Notice that you can't read the file because of the condition you added.
Get-AzStorageBlob : This request is not authorized to perform this operation using this permission.
HTTP Status Code:
403 - HTTP Error Message: This request is not authorized to perform this operation using this
permission.
ErrorCode: AuthorizationPermissionMismatch
ErrorMessage: This request is not authorized to perform this operation using this permission.
RequestId: <requestId>
Time: Sat, 24 Apr 2021 13:26:25 GMT
At line:1 char:1
+ Get-AzStorageBlob -Container $containerName -Blob $blobNameBaker -Con ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : CloseError: (:) [Get-AzStorageBlob], StorageException
+ FullyQualifiedErrorId :
StorageException,Microsoft.WindowsAzure.Commands.Storage.Blob.Cmdlet.GetAzureStorageBlob
Command
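Next, read the file for the Cascade project (again a sketch, using the same assumed context):
Get-AzStorageBlob -Container $containerName -Blob $blobNameCascade -Context $bearerCtx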
Here's an example of the output. Notice that you can read the file because it has the tag Project=Cascade.
Name            BlobType  Length ContentType LastModified         AccessTier SnapshotTime IsDeleted VersionId
----            --------  ------ ----------- ------------         ---------- ------------ --------- ---------
CascadeFile.txt BlockBlob 7      text/plain  2021-04-24 05:35:24Z Hot
$condition = "((!
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND
SubOperationMatches{'Blob.Read.WithTagConditions'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_se
nsitive`$>] StringEquals 'Cascade' OR
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sen
sitive`$>] StringEquals 'Baker'))"
$testRa.Condition = $condition
$testRa.Description = "Read access to blobs with the tag Project=Cascade or Project=Baker"
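A minimal sketch of the surrounding Get-AzRoleAssignment and Set-AzRoleAssignment calls, following the pattern shown at the end of this document (the role assignment is retrieved into $testRa before the two assignments above and then written back):
$scope = "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName "Storage Blob Data Reader" -ObjectId $userObjectId
$testRa.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $testRa -PassThru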
RoleAssignmentId : /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microso
ft.Authorization/roleAssignments/<roleAssignmentId>
Scope : /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>
DisplayName : Chandra
SignInName : [email protected]
RoleDefinitionName : Storage Blob Data Reader
RoleDefinitionId : 2a2b9908-6ea1-4ae2-8e65-a410df84e7d1
ObjectId : <userObjectId>
ObjectType : User
CanDelegate : False
Description : Read access to blobs with the tag Project=Cascade or Project=Baker
ConditionVersion : 2.0
Condition : ((!
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND
SubOperationMatches{'Blob.Read.WithTagConditions'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/co
ntainers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade' OR
@Resource[Microsoft.S
torage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>]
StringEquals 'Baker'))
Next steps
Example Azure role assignment conditions
Actions and attributes for Azure role assignment conditions in Azure Storage (preview)
Azure role assignment condition format and syntax
Tutorial: Add a role assignment condition to restrict
access to blobs using Azure CLI (preview)
11/25/2021 • 6 minutes to read • Edit Online
IMPORTANT
Azure ABAC and Azure role assignment conditions are currently in preview. This preview version is provided without a
service level agreement, and it's not recommended for production workloads. Certain features might not be supported or
might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
In most cases, a role assignment will grant the permissions you need to Azure resources. However, in some
cases you might want to provide more fine-grained access control by adding a role assignment condition.
In this tutorial, you learn how to:
Add a condition to a role assignment
Restrict access to blobs based on a blob index tag
Prerequisites
For information about the prerequisites to add or edit role assignment conditions, see Conditions prerequisites.
Condition
In this tutorial, you restrict access to blobs with a specific tag. For example, you add a condition to a role
assignment so that Chandra can only read files with the tag Project=Cascade.
If Chandra tries to read a blob without the tag Project=Cascade, access is not allowed.
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}
  AND
  SubOperationMatches{'Blob.Read.WithTagConditions'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
 )
)
az login
az account show
subscriptionId="<subscriptionId>"
userObjectId="<userObjectId>"
KEY        VALUE
Project    Cascade
KEY        VALUE
Project    Baker
resourceGroup="<resourceGroup>"
storageAccountName="<storageAccountName>"
containerName="<containerName>"
blobNameCascade="<blobNameCascade>"
blobNameBaker="<blobNameBaker>"
scope="/subscriptions/$subscriptionId/resourceGroups/$resourceGroup"
condition="((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}
AND SubOperationMatches{'Blob.Read.WithTagConditions'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<\$key_case_se
nsitive\$>] StringEquals 'Cascade'))"
In Bash, if history expansion is enabled, you might see the message bash: !: event not found because of
the exclamation point (!). In this case, you can disable history expansion with the command set +H . To
re-enable history expansion, use set -H .
In Bash, a dollar sign ($) has special meaning for expansion. If your condition includes a dollar sign ($),
you might need to prefix it with a backslash (\). For example, this condition uses dollar signs to delineate
the tag key name. For more information about rules for quotation marks in Bash, see Double Quotes.
4. Initialize the condition version and description.
conditionVersion="2.0"
description="Read access to blobs with the tag Project=Cascade"
5. Use az role assignment create to assign the Storage Blob Data Reader role with a condition to the user at
a resource group scope.
{
"canDelegate": null,
"condition": "((!
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND
SubOperationMatches{'Blob.Read.WithTagConditions'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sen
sitive$>] StringEquals 'Cascade'))",
"conditionVersion": "2.0",
"description": "Read access to blobs with the tag Project=Cascade",
"id":
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Authorization/rol
eAssignments/{roleAssignmentId}",
"name": "{roleAssignmentId}",
"principalId": "{userObjectId}",
"principalType": "User",
"resourceGroup": "{resourceGroup}",
"roleDefinitionId":
"/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/2a2b9908-6ea1-
4ae2-8e65-a410df84e7d1",
"scope": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}",
"type": "Microsoft.Authorization/roleAssignments"
}
az login
storageAccountName="<storageAccountName>"
containerName="<containerName>"
blobNameBaker="<blobNameBaker>"
blobNameCascade="<blobNameCascade>"
4. Use az storage blob show to try to read the properties of the file for the Baker project.
Here's an example of the output. Notice that you can't read the file because of the condition you added.
You do not have the required permissions needed to perform this operation.
Depending on your operation, you may need to be assigned one of the following roles:
"Storage Blob Data Contributor"
"Storage Blob Data Reader"
"Storage Queue Data Contributor"
"Storage Queue Data Reader"
If you want to use the old authentication method and allow querying for the right account key, please
use the "--auth-mode" parameter and "key" value.
Here's an example of the output. Notice that you can read the properties of the file because it has the tag
Project=Cascade.
{
"container": "<containerName>",
"content": "",
"deleted": false,
"encryptedMetadata": null,
"encryptionKeySha256": null,
"encryptionScope": null,
"isAppendBlobSealed": null,
"isCurrentVersion": null,
"lastAccessedOn": null,
"metadata": {},
"name": "<blobNameCascade>",
"objectReplicationDestinationPolicy": null,
"objectReplicationSourceProperties": [],
"properties": {
"appendBlobCommittedBlockCount": null,
"blobTier": "Hot",
"blobTierChangeTime": null,
"blobTierInferred": true,
"blobType": "BlockBlob",
"contentLength": 7,
"contentRange": null,
...
2. Create a JSON file with the following format and update the condition and description properties.
{
"canDelegate": null,
"condition": "((!
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND
SubOperationMatches{'Blob.Read.WithTagConditions'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sen
sitive$>] StringEquals 'Cascade' OR
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sens
itive$>] StringEquals 'Baker'))",
"conditionVersion": "2.0",
"description": "Read access to blobs with the tag Project=Cascade or Project=Baker",
"id":
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Authorization/rol
eAssignments/{roleAssignmentId}",
"name": "{roleAssignmentId}",
"principalId": "{userObjectId}",
"principalName": "[email protected]",
"principalType": "User",
"resourceGroup": "{resourceGroup}",
"roleDefinitionId":
"/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/2a2b9908-6ea1-
4ae2-8e65-a410df84e7d1",
"roleDefinitionName": "Storage Blob Data Reader",
"scope": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}",
"type": "Microsoft.Authorization/roleAssignments"
}
3. Use az role assignment update to update the condition for the role assignment.
Next steps
Example Azure role assignment conditions
Actions and attributes for Azure role assignment conditions in Azure Storage (preview)
Azure role assignment condition format and syntax
Azure Storage samples using v12 .NET client
libraries
11/25/2021 • 2 minutes to read • Edit Online
The following table provides an overview of our samples repository and the scenarios covered in each sample.
Click on the links to view the corresponding sample code in GitHub.
NOTE
These samples use the latest Azure Storage .NET v12 library. For legacy v11 code, see Azure Blob Storage Samples for .NET
in the GitHub repository.
Blob samples
Authentication
Authenticate using a connection string
Authenticate using a shared key credential
Authenticate with Azure Identity
Authenticate using an Active Directory token
Anonymously access a public blob
Batching
Delete several blobs in one request
Set several blob access tiers in one request
Fine-grained control in a batch request
Catch errors from a failed sub-operation
Blob
Upload a file to a blob
Download a blob to a file
Download an image
List all blobs in a container
Troubleshooting
Trigger a recoverable error using a container client
Queue samples
Authentication
Authenticate using Azure Active Directory
Authenticate using a connection string
Authenticate using a shared key credential
Authenticate using a shared access signature (SAS)
Authenticate using an Active Directory token
Queue
Create a queue and add a message
Message
Receive and process messages
Peek at messages
Receive messages and update visibility timeout
Troubleshooting
Trigger a recoverable error using a queue client
Next steps
For information on samples for other languages:
Java: Azure Storage samples using Java
Python: Azure Storage samples using Python
JavaScript/Node.js: Azure Storage samples using JavaScript
C++: Azure Storage samples using C++
All other languages: Azure Storage samples
Azure Storage samples using v12 Java client libraries
11/25/2021 • 2 minutes to read • Edit Online
The following table provides an overview of our samples repository and the scenarios covered in each sample.
Click on the links to view the corresponding sample code in GitHub.
NOTE
These samples use the latest Azure Storage Java v12 library. For legacy v8 code, see Getting Started with Azure Blob
Service in Java in the GitHub repository.
Blob samples
Authentication
Authenticate using a shared key credential
Authenticate using Azure Identity
Blob service
Create a blob service client
List containers
Delete containers
Batching
Create a blob batch client
Bulk delete blobs
Set access tier on a batch of blobs
Container
Create a container client
Create a container
List blobs
Delete a container
Blob
Upload a blob
Download a blob
Delete a blob
Upload a blob from a large file
Download a large blob to a file
Troubleshooting
Trigger a recoverable error using a container client
Data Lake Storage Gen2 samples
Data Lake service
Create a Data Lake service client
Create a file system client
File system
Create a file system
Create a directory
Create a file and subdirectory
Create a file client
List paths in a file system
Delete a file system
List file systems in an Azure storage account
Directory
Create a directory client
Create a parent directory
Create a child directory
Create a file in a child directory
Get directory properties
Delete a child directory
Delete a parent folder
File
Create a file using a file client
Delete a file
Set access controls on a file
Get access controls on a file
Queue samples
Authentication
Authenticate using a SAS token
Queue service
Create a queue
List queues
Delete queues
Queue
Create a queue client
Add messages to a queue
Message
Get the count of messages
Peek at messages
Receive messages
Update a message
Delete the first message
Clear all messages
Delete a queue
Next steps
For information on samples for other languages:
.NET: Azure Storage samples using .NET
Python: Azure Storage samples using Python
JavaScript/Node.js: Azure Storage samples using JavaScript
C++: Azure Storage samples using C++
All other languages: Azure Storage samples
Azure Storage samples using v12 Python client
libraries
11/25/2021 • 3 minutes to read • Edit Online
The following tables provide an overview of our samples repository and the scenarios covered in each sample.
Click on the links to view the corresponding sample code in GitHub.
NOTE
These samples use the latest Azure Storage Python v12 library. For legacy v2.1 code, see Azure Storage: Getting Started with
Azure Storage in Python in the GitHub repository.
Blob samples
Authentication
Create blob service client using a connection string
Create container client using a connection string
Create blob client using a connection string
Create blob service client using a shared access key
Create blob client from URL
Create blob client SAS URL
Create blob service client using ClientSecretCredential
Create SAS token
Create blob service client using Azure Identity
Create blob snapshot
Blob service
Get blob service account info
Set blob service properties
Get blob service properties
Get blob service stats
Create container using service client
List containers
Delete container using service client
Get container client
Get blob client
Container
Create container client from service
Create container client using SAS URL
Create container using container client
Get container properties
Delete container using container client
Acquire lease on container
Set container metadata
Set container access policy
Get container access policy
Generate SAS token
Create container client using SAS token
Upload blob to container
List blobs in container
Get blob client
Blob
Upload a blob
Download a blob
Delete blob
Undelete blob
Get blob properties
Delete multiple blobs
Copy blob from URL
Abort copy blob from URL
Acquire lease on blob
Queue samples
Authentication
Authenticate using connection string
Create queue service client token
Create queue client from connection string
Generate queue client SAS token
Queue service
Create queue service client
Set queue service properties
Get queue service properties
Create queue using service client
Delete queue using service client
Queue
Create queue client
Set queue metadata
Get queue properties
Create queue using queue client
Delete queue using queue client
List queues
Get queue client
Message
Send messages
Receive messages
Peek message
Update message
Delete message
Clear messages
Set message access policy
Next steps
For information on samples for other languages:
.NET: Azure Storage samples using .NET
Java: Azure Storage samples using Java
JavaScript/Node.js: Azure Storage samples using JavaScript
C++: Azure Storage samples using C++
All other languages: Azure Storage samples
Azure Storage samples using v12 JavaScript client
libraries
11/25/2021 • 2 minutes to read • Edit Online
The following tables provide an overview of our samples repository and the scenarios covered in each sample.
Click on the links to view the corresponding sample code in GitHub.
NOTE
These samples use the latest Azure Storage JavaScript v12 library. For legacy v11 code, see Getting Started with Azure
Blob Service in Node.js in the GitHub repository.
Blob samples
Authentication
Authenticate using connection string
Authenticate using SAS connection string
Authenticate using shared key credential
Authenticate using AnonymousCredential
Authenticate using Azure Active Directory
Authenticate using a proxy
Connect using a custom pipeline
Blob service
Create blob service client using a SAS URL
Container
Create a container
Create a container using a shared key credential
List containers
List containers using an iterator
List containers by page
Delete a container
Blob
Create a blob
List blobs
Download a blob
List blobs using an iterator
List blobs by page
List blobs by hierarchy
Listing blobs without using await
Create a blob snapshot
Download a blob snapshot
Parallel upload a stream to a blob
Parallel download block blob
Set the access tier on a blob
Troubleshooting
Trigger a recoverable error using a container client
Queue samples
Authentication
Authenticate using a connection string
Authenticate using a shared key credential
Authenticate using AnonymousCredential
Connect using a custom pipeline
Connect using a proxy
Authenticate using Azure Active Directory
Queue service
Create a queue service client
Queue
Create a new queue
List queues
List queues by page
Delete a queue
Message
Send a message into a queue
Peek at messages
Receive messages
Delete messages
Next steps
For information on samples for other languages:
.NET: Azure Storage samples using .NET
Java: Azure Storage samples using Java
Python: Azure Storage samples using Python
C++: Azure Storage samples using C++
All other languages: Azure Storage samples
Azure Storage samples using v12 C++ client
libraries
11/25/2021 • 2 minutes to read • Edit Online
The following table provides an overview of our samples repository and the scenarios covered in each sample.
Click on the links to view the corresponding sample code in GitHub.
NOTE
These samples use the latest Azure Storage C++ v12 library.
Blob samples
Authenticate using a connection string
Create a blob container
Get a blob client
Upload a blob
Set metadata on a blob
Get blob properties
Download a blob
Next steps
For information on samples for other languages:
.NET: Azure Storage samples using .NET
Java: Azure Storage samples using Java
Python: Azure Storage samples using Python
JavaScript/Node.js: Azure Storage samples using JavaScript
All other languages: Azure Storage samples
Azure Storage samples
11/25/2021 • 2 minutes to read • Edit Online
Use the links below to view and download Azure Storage sample code and applications.
.NET samples
To explore the .NET samples, download the .NET Storage Client Library from NuGet. The .NET storage client
library is also available in the Azure SDK for .NET.
Azure Storage samples using .NET
Java samples
To explore the Java samples, download the Java Storage Client Library.
Azure Storage samples using Java
Python samples
To explore the Python samples, download the Python Storage Client Library.
Azure Storage samples using Python
Node.js samples
To explore the Node.js samples, download the Node.js Storage Client Library.
Azure Storage samples using JavaScript/Node.js
C++ samples
To explore the C++ samples, get the Azure Storage Client Library for C++ from GitHub.
Get started with Azure Blobs
Get started with Azure Data Lake
Get started with Azure Files
Azure CLI
To explore the Azure CLI samples, first Install the Azure CLI.
Get started with the Azure CLI
Azure Storage samples using the Azure CLI
.NET       .NET Client Library Reference       Source code for the .NET storage client library
Java       Java Client Library Reference       Source code for the Java storage client library
Python     Python Client Library Reference     Source code for the Python storage client library
Node.js    Node.js Client Library Reference    Source code for the Node.js storage client library
C++        C++ Client Library Reference        Source code for the C++ storage client library
Azure CLI  Azure CLI Library Reference         Source code for the Azure CLI storage client library
Next steps
The following articles index each of the samples by service (blob, file, queue, table).
Azure Storage samples using .NET
Azure Storage samples using Java
Azure Storage samples using JavaScript
Azure Storage samples using Python
Azure Storage samples using C++
Azure Storage samples using the Azure CLI
Azure PowerShell samples for Azure Blob storage
11/25/2021 • 2 minutes to read • Edit Online
The following table includes links to PowerShell script samples that create and manage Azure Storage.
Storage accounts
Create a storage account and retrieve/rotate the access keys: Creates an Azure Storage account and retrieves and rotates one of its access keys.
Migrate Blobs across storage accounts using AzCopy on Windows: Migrate blobs across Azure Storage accounts using AzCopy on Windows.
Blob storage
Calculate the total size of a Blob storage container: Calculates the total size of all the blobs in a container.
Calculate the size of a Blob storage container for billing purposes: Calculates the size of a container in Blob storage for the purpose of estimating billing costs.
Delete containers with a specific prefix: Deletes containers starting with a specified string.
Azure CLI samples for Azure Blob storage
11/25/2021 • 2 minutes to read • Edit Online
The following table includes links to Bash scripts built using the Azure CLI that create and manage Azure
Storage.
Storage accounts
Create a storage account and retrieve/rotate the access keys: Creates an Azure Storage account and retrieves and rotates its access keys.
Blob storage
Calculate the total size of a Blob storage container: Calculates the total size of all the blobs in a container.
Delete containers with a specific prefix: Deletes containers starting with a specified string.
Azure Resource Graph sample queries for Azure
Storage
11/25/2021 • 3 minutes to read • Edit Online
This page is a collection of Azure Resource Graph sample queries for Azure Storage. For a complete list of Azure
Resource Graph samples, see Resource Graph samples by Category and Resource Graph samples by Table.
Sample queries
Find storage accounts with a specific case-insensitive tag on the resource group
Similar to the 'Find storage accounts with a specific case-sensitive tag on the resource group' query, but when
it's necessary to look for a case-insensitive tag name and tag value, use mv-expand with the bagexpansion
parameter. This query uses more quota than the original query, so use mv-expand only if necessary.
Resources
| where type =~ 'microsoft.storage/storageaccounts'
| join kind=inner (
ResourceContainers
| where type =~ 'microsoft.resources/subscriptions/resourcegroups'
| mv-expand bagexpansion=array tags
| where isnotempty(tags)
| where tags[0] =~ 'key1' and tags[1] =~ 'value1'
| project subscriptionId, resourceGroup)
on subscriptionId, resourceGroup
| project-away subscriptionId1, resourceGroup1
Find storage accounts with a specific case-sensitive tag on the resource group
The following query uses an inner join to connect storage accounts with resource groups that have a
specified case-sensitive tag name and tag value.
Resources
| where type =~ 'microsoft.storage/storageaccounts'
| join kind=inner (
ResourceContainers
| where type =~ 'microsoft.resources/subscriptions/resourcegroups'
| where tags['Key1'] =~ 'Value1'
| project subscriptionId, resourceGroup)
on subscriptionId, resourceGroup
| project-away subscriptionId1, resourceGroup1
Resources
| where type =~ 'Microsoft.Storage/storageAccounts'
| where tags['tag with a space']=='Custom value'
Resources
| where type contains 'storage' | distinct type
Next steps
Learn more about the query language.
Learn more about how to explore resources.
See samples of Starter language queries.
See samples of Advanced language queries.
Storage account overview
11/25/2021 • 6 minutes to read • Edit Online
An Azure storage account contains all of your Azure Storage data objects: blobs, file shares, queues, tables, and
disks. The storage account provides a unique namespace for your Azure Storage data that's accessible from
anywhere in the world over HTTP or HTTPS. Data in your storage account is durable and highly available, secure,
and massively scalable.
To learn how to create an Azure storage account, see Create a storage account.
Standard general-purpose v2
  Supported storage services: Blob (including Data Lake Storage1 ), Queue, and Table storage, Azure Files
  Redundancy options: LRS/GRS/RA-GRS, ZRS/GZRS/RA-GZRS2
  Usage: Standard storage account type for blobs, file shares, queues, and tables. Recommended for most scenarios using Azure Storage. Note that if you want support for NFS file shares in Azure Files, use the premium file shares account type.
Premium block blobs3
  Supported storage services: Blob storage (including Data Lake Storage1 )
  Redundancy options: LRS, ZRS2
  Usage: Premium storage account type for block blobs and append blobs. Recommended for scenarios with high transaction rates, or scenarios that use smaller objects or require consistently low storage latency. Learn more about example workloads.
Premium page blobs3
  Supported storage services: Page blobs only
  Redundancy options: LRS
  Usage: Premium storage account type for page blobs only. Learn more about page blobs and sample use cases.
1 Data Lake Storage is a set of capabilities dedicated to big data analytics, built on Azure Blob storage. For
more
information, see Introduction to Data Lake Storage Gen2 and Create a storage account to use with Data Lake
Storage Gen2.
2 Zone-redundant storage (ZRS) and geo-zone-redundant storage (GZRS/RA-GZRS) are available only for
standard general-purpose v2, premium block blobs, and premium file shares accounts in certain regions. For
more information, see Azure Storage redundancy.
3 Premium performance storage accounts use solid-state drives (SSDs) for low latency and high throughput.
Legacy storage accounts are also supported. For more information, see Legacy storage account types.
Construct the URL for accessing an object in a storage account by appending the object's location in the storage
account to the endpoint. For example, the URL for a blob will be similar to:
http://*mystorageaccount*.blob.core.windows.net/*mycontainer*/*myblob*
You can also configure your storage account to use a custom domain for blobs. For more information, see
Configure a custom domain name for your Azure Storage account.
Move a storage account to a different subscription: Azure Resource Manager provides options for moving a resource to a different subscription. For more information, see Move resources to a new resource group or subscription.
Move a storage account to a different resource group: Azure Resource Manager provides options for moving a resource to a different resource group. For more information, see Move resources to a new resource group or subscription.
Move a storage account to a different region: To move a storage account, create a copy of your storage account in another region. Then, move your data to that account by using AzCopy, or another tool of your choice. For more information, see Move an Azure Storage account to another region.
Upgrade to a general-purpose v2 storage account: You can upgrade a general-purpose v1 storage account or Blob storage account to a general-purpose v2 account. Note that this action cannot be undone. For more information, see Upgrade to a general-purpose v2 storage account.
Migrate a classic storage account to Azure Resource Manager: The Azure Resource Manager deployment model is superior to the classic deployment model in terms of functionality, scalability, and security. For more information about migrating a classic storage account to Azure Resource Manager, see the "Migration of storage accounts" section of Platform-supported migration of IaaS resources from classic to Azure Resource Manager.
Next steps
Create a storage account
Upgrade to a general-purpose v2 storage account
Recover a deleted storage account
Premium block blob storage accounts
11/25/2021 • 13 minutes to read • Edit Online
Premium block blob storage accounts make data available via high-performance hardware. Data is stored on
solid-state drives (SSDs) which are optimized for low latency. SSDs provide higher throughput compared to
traditional hard drives. File transfer is much faster because data is stored on instantly accessible memory chips.
All parts of a drive are accessible at once. By contrast, the performance of a hard disk drive (HDD) depends on the
proximity of data to the read/write heads.
Cost effectiveness
Premium block blob storage accounts have a higher storage cost but a lower transaction cost as compared to
standard general-purpose v2 accounts. If your applications and workloads execute a large number of
transactions, premium block blob storage can be cost-effective, especially if the workload is write-heavy.
In most cases, workloads executing more than 35 to 40 transactions per second per terabyte (TPS/TB) are good
candidates for this type of account. For example, if your workload executes 500 million read operations and 100
million write operations in a month, then you can calculate the TPS/TB as follows:
Write transactions per second = 100,000,000 / (30 x 24 x 60 x 60) = 39 (rounded to the nearest whole
number)
Read transactions per second = 500,000,000 / (30 x 24 x 60 x 60) = 193 (rounded to the nearest whole
number)
Total transactions per second = 193 + 39 = 232
Assuming your account had 5 TB of data on average, then TPS/TB would be 232 / 5 = 46 (rounded to the nearest whole number).
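The same arithmetic as a small PowerShell sketch, using the example figures above (swap in your own monthly operation counts):
$readsPerMonth   = 500000000
$writesPerMonth  = 100000000
$secondsPerMonth = 30 * 24 * 60 * 60   # 2,592,000
$tbStored        = 5
$readTps  = [math]::Round($readsPerMonth / $secondsPerMonth)    # 193
$writeTps = [math]::Round($writesPerMonth / $secondsPerMonth)   # 39
$tpsPerTb = [math]::Floor(($readTps + $writeTps) / $tbStored)   # 46
"TPS/TB: $tpsPerTb"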
NOTE
Prices differ per operation and per region. Use the Azure pricing calculator to compare pricing between standard and
premium performance tiers.
The following table demonstrates the cost-effectiveness of premium block blob storage accounts. The numbers
in this table are based on an Azure Data Lake Storage Gen2-enabled premium block blob storage account (also
referred to as the premium tier for Azure Data Lake Storage). Each column represents the number of
transactions in a month. Each row represents the percentage of transactions that are read transactions. Each cell
in the table shows the percentage of cost reduction associated with a read transaction percentage and the
number of transactions executed.
For example, assuming that your account is in the East US 2 region, the number of transactions with your
account exceeds 90M, and 70% of those transactions are read transactions, premium block blob storage
accounts are more cost-effective.
NOTE
If you prefer to evaluate cost effectiveness based on the number of transactions per second for each TB of data, you can
use the column headings that appear at the bottom of the table.
Premium scenarios
This section contains real-world examples of how some of our Azure Storage partners use premium block blob
storage. Some of them also enable Azure Data Lake Storage Gen2 which introduces a hierarchical file structure
that can further enhance transaction performance in certain scenarios.
TIP
If you have an analytics use case, we highly recommend that you use Azure Data Lake Storage Gen2 along with a
premium block blob storage account.
NOTE
You can't convert an existing standard general-purpose v2 storage account to a premium block blob storage account. To
migrate to a premium block blob storage account, you must create a premium block blob storage account, and migrate
the data to the new account.
NOTE
Some Blob Storage features aren't yet supported or have partial support in premium block blob storage accounts. Before
choosing premium, review the Blob Storage feature support in Azure Storage accounts article to determine whether the
features that you intend to use are fully supported in your account. Feature support is always expanding so make sure to
periodically review this article for updates.
If your storage account is going to be used for analytics, we highly recommend that you use Azure Data Lake
Storage Gen2 along with a premium block blob storage account. To unlock Azure Data Lake Storage Gen2
capabilities, enable the Hierarchical namespace setting in the Advanced tab of the Create storage account
page.
The following image shows this setting in the Create storage account page.
For complete guidance, see Create a storage account.
See also
Storage account overview
Introduction to Azure Data Lake Storage Gen2
Create a storage account to use with Azure Data Lake Storage Gen2
Premium tier for Azure Data Lake Storage
Authorize access to data in Azure Storage
11/25/2021 • 3 minutes to read • Edit Online
Each time you access data in your storage account, your client application makes a request over HTTP/HTTPS to
Azure Storage. By default, every resource in Azure Storage is secured, and every request to a secure resource
must be authorized. Authorization ensures that the client application has the appropriate permissions to access
data in your storage account.
The following table describes the options that Azure Storage offers for authorizing access to data:
AZURE ARTIFACT: Azure Files (SMB)
  Shared Key (storage account key): Supported
  Shared access signature (SAS): Not supported
  Azure Active Directory (Azure AD): Supported, only with AAD Domain Services
  On-premises Active Directory Domain Services: Supported, credentials must be synced to Azure AD
  Anonymous public read access: Not supported
AZURE ARTIFACT: Azure Files (REST)
  Shared Key (storage account key): Supported
  Shared access signature (SAS): Supported
  Azure Active Directory (Azure AD): Not supported
  On-premises Active Directory Domain Services: Not supported
  Anonymous public read access: Not supported
Next steps
Authorize access with Azure Active Directory to either blob, queue, or table resources.
Authorize with Shared Key
Grant limited access to Azure Storage resources using shared access signatures (SAS)
Authorize access to blobs using Azure Active
Directory
11/25/2021 • 8 minutes to read • Edit Online
Azure Storage supports using Azure Active Directory (Azure AD) to authorize requests to blob data. With Azure
AD, you can use Azure role-based access control (Azure RBAC) to grant permissions to a security principal,
which may be a user, group, or application service principal. The security principal is authenticated by Azure AD
to return an OAuth 2.0 token. The token can then be used to authorize a request against the Blob service.
Authorizing requests against Azure Storage with Azure AD provides superior security and ease of use over
Shared Key authorization. Microsoft recommends using Azure AD authorization with your blob applications
when possible to assure access with minimum required privileges.
Authorization with Azure AD is available for all general-purpose and Blob storage accounts in all public regions
and national clouds. Only storage accounts created with the Azure Resource Manager deployment model
support Azure AD authorization.
Blob storage additionally supports creating shared access signatures (SAS) that are signed with Azure AD
credentials. For more information, see Grant limited access to data with shared access signatures.
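A minimal PowerShell sketch of that capability, assuming the Az.Storage module; the account and container names are placeholders, and because the context uses the signed-in Azure AD account, the resulting SAS is signed with Azure AD credentials:
$ctx = New-AzStorageContext -StorageAccountName "<storageAccountName>" -UseConnectedAccount
New-AzStorageContainerSASToken -Name "<containerName>" -Permission r -ExpiryTime (Get-Date).AddHours(1) -Context $ctx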
IMPORTANT
Azure role assignments may take up to 30 minutes to propagate.
Feature support
This table shows how this feature is supported in your account and the impact on support when you enable
certain capabilities.
STORAGE ACCOUNT TYPE          BLOB STORAGE (DEFAULT SUPPORT)   DATA LAKE STORAGE GEN2 1   NFS 3.0 1   SFTP 1
Standard general-purpose v2
1 Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP)
support all require a storage account with a hierarchical namespace enabled.
Next steps
Authorize access to data in Azure Storage
Assign an Azure role for access to blob data
Authorize access to blobs using Azure role
assignment conditions (preview)
11/25/2021 • 2 minutes to read • Edit Online
Attribute-based access control (ABAC) is an authorization strategy that defines access levels based on attributes
associated with security principals, resources, requests, and the environment. Azure ABAC builds on Azure role-
based access control (Azure RBAC) by adding conditions to Azure role assignments in the existing identity and
access management (IAM) system. This preview includes support for role assignment conditions on Blobs and
Data Lake Storage Gen2. It enables you to author role-assignment conditions based on resource and request
attributes.
IMPORTANT
Azure ABAC and Azure role assignment conditions are currently in preview. This preview version is provided without a
service level agreement, and it's not recommended for production workloads. Certain features might not be supported or
might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
NOTE
Blob index tags are not supported for Data Lake Storage Gen2 storage accounts, which use a hierarchical namespace. You
should not author role-assignment conditions using index tags on storage accounts that have HNS enabled.
The Azure role assignment condition format allows use of @Resource or @Request attributes in the conditions. A
@Resource attribute refers to an existing attribute of a storage resource that is being accessed, such as a storage
account, a container, or a blob. A @Request attribute refers to an attribute included in a storage operation
request.
For the full list of attributes supported for each DataAction, please see the Actions and attributes for Azure role
assignment conditions in Azure Storage (preview).
See also
Security considerations for Azure role assignment conditions in Azure Storage (preview)
Actions and attributes for Azure role assignment conditions in Azure Storage (preview)
What is Azure attribute-based access control (Azure ABAC)? (preview)
Actions and attributes for Azure role assignment
conditions in Azure Storage (preview)
11/25/2021 • 5 minutes to read • Edit Online
IMPORTANT
Azure ABAC and Azure role assignment conditions are currently in preview. This preview version is provided without a
service level agreement, and it's not recommended for production workloads. Certain features might not be supported or
might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
This article describes the supported attribute dictionaries that can be used in conditions on Azure role
assignments for each Azure Storage DataAction. For the list of Blob service operations that are affected by a
specific permission or DataAction, see Permissions for Blob service operations.
To understand the role assignment condition format, see Azure role assignment condition format and syntax.
Suboperations
Multiple Storage service operations can be associated with a single permission or DataAction. However,
operations associated with the same permission might support different parameters. Suboperations enable you
to differentiate between service operations that require the same permission but support different sets of
attributes for conditions. Thus, by using a suboperation, you can specify one condition for access to a subset of
operations that support a given parameter. Then, you can use another access condition for operations with the
same action that don't support that parameter.
For example, the Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write action is required for
over a dozen different service operations. Some of these operations can accept blob index tags as request
parameter, while others don't. For operations that accept blob index tags as a parameter, you can use blob index
tags in a Request condition. However, if such a condition is defined on the
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write action, all operations that don't accept
tags as a request parameter cannot evaluate this condition, and will fail the authorization access check.
In this case, the optional suboperation Blob.Write.WithTagHeaders can be used to apply a condition to only those
operations that support blob index tags as a request parameter.
Similarly, only select operations on the Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read
action support blob index tags as a precondition for access. This subset of operations is identified by
the Blob.Read.WithTagConditions suboperation.
NOTE
Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob
index tags, you must use blob index tags with conditions. For more information, see Manage and find data on Azure Blob
Storage with Blob Index (preview).
Read content from a blob with tag conditions
  REST operations: Get Blob, Get Blob Metadata, Get Blob Properties, Get Block List, Get Page Ranges and Query Blob Contents.
  DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read
  Suboperation: Blob.Read.WithTagConditions
Write to a blob with blob index tags
  REST operations: Put Blob, Put Block List, Copy Blob and Copy Blob From URL.
  DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write
  Suboperation: Blob.Write.WithTagHeaders
Write content to a blob with blob index tags
  REST operations: Put Blob, Put Block List, Copy Blob and Copy Blob From URL.
  DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action
  Suboperation: Blob.Write.WithTagHeaders
All data operations for accounts with HNS
  Description: DataAction for all data operations on storage accounts with HNS.
  DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action
Read blob index tags
  Description: DataAction for reading blob index tags.
  DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read
Write blob index tags
  Description: DataAction for writing blob index tags.
  DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write
Attributes
The following table lists the descriptions for the supported attributes for conditions in Azure Storage.
Blob index tags [Values in key]
  Description: Index tags on a blob resource. Arbitrary user-defined key-value properties that you can store alongside a blob resource. Use when you want to check both the key (case-sensitive) and value in blob index tags.
  Attribute: tags:keyname<$key_case_sensitive$>
NOTE
Attributes and values listed are considered case-insensitive, unless stated otherwise.
NOTE
When specifying conditions for Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path attribute,
the values shouldn't include the container name or a preceding '/' character. Use the path characters without any URL
encoding.
NOTE
Blob index tags are not supported for Data Lake Storage Gen2 storage accounts, which have a hierarchical namespace
(HNS). You should not author role-assignment conditions using index tags on storage accounts that have HNS enabled.
DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete
  Attribute: containers:name (string, ResourceAttributeOnly)
DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read
  Attribute: containers:name (string, ResourceAttributeOnly)
DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read
Suboperation: Blob.Read.WithTagConditions
  Attribute: containers:name (string, ResourceAttributeOnly)
DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write
  Attribute: containers:name (string, ResourceAttributeOnly)
DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write
Suboperation: Blob.Write.WithTagHeaders
  Attribute: containers:name (string, ResourceAttributeOnly)
DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action
  Attribute: containers:name (string, ResourceAttributeOnly)
DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action
Suboperation: Blob.Write.WithTagHeaders
  Attribute: containers:name (string, ResourceAttributeOnly)
DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action
  Attribute: containers:name (string, ResourceAttributeOnly)
DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/manageOwnership/action
  Attribute: containers:name (string, ResourceAttributeOnly)
DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/modifyPermissions/action
  Attribute: containers:name (string, ResourceAttributeOnly)
DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action
  Attribute: containers:name (string, ResourceAttributeOnly)
DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/permanentDelete/action
  Attribute: containers:name (string, ResourceAttributeOnly)
DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action
  Attribute: containers:name (string, ResourceAttributeOnly)
DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read
  Attribute: containers:name (string, ResourceAttributeOnly)
DataAction: Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write
  Attribute: containers:name (string, ResourceAttributeOnly)
See also
Example Azure role assignment conditions (preview)
Azure role assignment condition format and syntax (preview)
What is Azure attribute-based access control (Azure ABAC)? (preview)
Security considerations for Azure role assignment
conditions in Azure Storage (preview)
11/25/2021 • 5 minutes to read • Edit Online
IMPORTANT
Azure ABAC and Azure role assignment conditions are currently in preview. This preview version is provided without a
service level agreement, and it is not recommended for production workloads. Certain features might not be supported
or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure
Previews.
To fully secure resources using Azure attribute-based access control (Azure ABAC), you must also protect the
attributes used in the Azure role assignment conditions. For instance, if your condition is based on a file path,
then you should be aware that access can be compromised if the principal has an unrestricted permission to
rename a file path.
This article describes security considerations that you should factor into your role assignment conditions.
NOTE
Role-assignment conditions are not evaluated when access is granted using ACLs with Data Lake Storage Gen2. In this
case, you must plan the scope of access so it does not overlap with that granted through ACLs.
Other considerations
Condition operations that write blobs
Many operations that write blobs require either the
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write or the
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action permission. Built-in roles, such as
Storage Blob Data Owner and Storage Blob Data Contributor grant both permissions to a security principal.
When you define a role assignment condition on these roles, you should use identical conditions on both these
permissions to ensure consistent access restrictions for write operations.
Behavior for Copy Blob and Copy Blob from URL
For the Copy Blob and Copy Blob From URL operations, @Request conditions using blob path as attribute on the
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write action and its suboperations are
evaluated only for the destination blob.
For conditions on the source blob, @Resource conditions on the
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read action are evaluated.
Behavior for Get Page Ranges
For the Get Page Ranges operation, @Resource conditions using
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags as an attribute on the
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read action and its suboperations are
evaluated only for the destination blob.
Conditions don't apply for access to the blob specified by the prevsnapshot URI parameter in the API.
See also
Authorize access to blobs using Azure role assignment conditions (preview)
Actions and attributes for Azure role assignment conditions in Azure Storage (preview)
What is Azure attribute-based access control (Azure ABAC)? (preview)
Example Azure role assignment conditions (preview)
11/25/2021 • 12 minutes to read • Edit Online
IMPORTANT
Azure ABAC and Azure role assignment conditions are currently in preview. This preview version is provided without a
service level agreement, and it's not recommended for production workloads. Certain features might not be supported or
might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
Prerequisites
For information about the prerequisites to add or edit role assignment conditions, see Conditions prerequisites.
TIP
Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob
index tags, you must use blob index tags with conditions. For more information, see Manage and find Azure Blob data
with blob index tags (preview).
(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}
AND
SubOperationMatches{'Blob.Read.WithTagConditions'})
)
OR
(
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>
] StringEquals 'Cascade'
)
)
Azure portal
Here are the settings to add this condition using the Azure portal.
CONDITION #1        SETTING
Key {keyName}
Operator StringEquals
Value {keyValue}
Azure PowerShell
Here's how to add this condition using Azure PowerShell.
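A minimal sketch that follows the Get-AzRoleAssignment/Set-AzRoleAssignment pattern shown at the end of this article, using the Project=Cascade example from the condition above ($scope, $roleDefinitionName, and $userObjectID are assumed to be initialized):
$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.Read.WithTagConditions'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sensitive`$>] StringEquals 'Cascade'))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
$testRa.Condition = $condition
$testRa.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $testRa -PassThru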
TIP
Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob
index tags, you must use blob index tags with conditions. For more information, see Manage and find Azure Blob data
with blob index tags (preview).
There are two permissions that allow you to create new blobs, so you must target both. You must add this
condition to any role assignments that include one of the following permissions.
/blobs/write (create or update)
/blobs/add/action (create)
(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND
SubOperationMatches{'Blob.Write.WithTagHeaders'})
AND
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'} AND
SubOperationMatches{'Blob.Write.WithTagHeaders'})
)
OR
(
@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>]
StringEquals 'Cascade'
)
)
Azure portal
Here are the settings to add this condition using the Azure portal.
CONDITION #1        SETTING
Key {keyName}
Operator StringEquals
Value {keyValue}
Azure PowerShell
Here's how to add this condition using Azure PowerShell.
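A sketch of the corresponding PowerShell condition string for the example above; the remaining Get-AzRoleAssignment and Set-AzRoleAssignment steps are the same as in the previous sketch:
$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})) OR (@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sensitive`$>] StringEquals 'Cascade'))"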
TIP
Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob
index tags, you must use blob index tags with conditions. For more information, see Manage and find Azure Blob data
with blob index tags (preview).
There are two permissions that allow you to update tags on existing blobs, so you must target both. You must
add this condition to any role assignments that include one of the following permissions.
/blobs/write (update or create, cannot exclude create)
/blobs/tags/write
(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND
SubOperationMatches{'Blob.Write.WithTagHeaders'})
AND
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})
)
OR
(
@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&$keys$&]
ForAllOfAnyValues:StringEquals {'Project', 'Program'}
)
)
Azure portal
Here are the settings to add this condition using the Azure portal.
Condition #1 settings:
Operator: ForAllOfAnyValues:StringEquals
Value: {keyName1}, {keyName2}
Azure PowerShell
Here's how to add this condition using Azure PowerShell.
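Again as a hedged sketch under the same assumptions, the condition shown above can be attached to a role assignment like this:
# Condition: tags written to existing blobs may only use the keys Project or Program
$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})) OR (@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&`$keys`$&] ForAllOfAnyValues:StringEquals {'Project', 'Program'}))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
$testRa.Condition = $condition
$testRa.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $testRa -PassThru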
TIP
Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob
index tags, you must use blob index tags with conditions. For more information, see Manage and find Azure Blob data
with blob index tags (preview).
There are two permissions that allow you to update tags on existing blobs, so you must target both. You must
add this condition to any role assignments that include one of the following permissions.
/blobs/write (update or create, cannot exclude create)
/blobs/tags/write
(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND
SubOperationMatches{'Blob.Write.WithTagHeaders'})
AND
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})
)
OR
(
@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&$keys$&]
ForAnyOfAnyValues:StringEquals {'Project'}
AND
@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>]
ForAllOfAnyValues:StringEquals {'Cascade', 'Baker', 'Skagit'}
)
)
Azure portal
Here are the settings to add this condition using the Azure portal.
Condition #1 settings:
Operator: ForAnyOfAnyValues:StringEquals
Value: {keyName}
Operator: And
Expression 2:
Key: {keyName}
Operator: ForAllOfAnyValues:StringEquals
Value: {keyValue1}, {keyValue2}, {keyValue3}
Azure PowerShell
Here's how to add this condition using Azure PowerShell.
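As a hedged sketch with the same assumed variables, the tag-key-and-value condition shown above could be applied as follows:
# Condition: the Project tag key may only be written with the values Cascade, Baker, or Skagit
$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'}) AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})) OR (@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&`$keys`$&] ForAnyOfAnyValues:StringEquals {'Project'} AND @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sensitive`$>] ForAllOfAnyValues:StringEquals {'Cascade', 'Baker', 'Skagit'}))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
$testRa.Condition = $condition
$testRa.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $testRa -PassThru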
(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'})
AND
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
AND
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})
AND
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})
)
OR
(
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
)
)
Azure portal
Here are the settings to add this condition using the Azure portal.
Condition #1 settings:
Operator: StringEquals
Value: {containerName}
Azure PowerShell
Here's how to add this condition using Azure PowerShell.
$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'})
AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}) AND !
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'}) AND !
(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-
container'))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
$testRa.Condition = $condition
$testRa.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $testRa -PassThru
$localSrcFile = <pathToLocalFile>
$grantedContainer = "blobs-example-container"
$ungrantedContainer = "ungranted"
# Get new context for request
$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
# Ungranted Container actions
$content = Set-AzStorageBlobContent -File $localSrcFile -Container $ungrantedContainer -Blob "Example5.txt" -Context $bearerCtx
$content = Get-AzStorageBlobContent -Container $ungrantedContainer -Blob "Example5.txt" -Context $bearerCtx
$content = Remove-AzStorageBlob -Container $ungrantedContainer -Blob "Example5.txt" -Context $bearerCtx
# Granted Container actions
$content = Set-AzStorageBlobContent -File $localSrcFile -Container $grantedContainer -Blob "Example5.txt" -Context $bearerCtx
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "Example5.txt" -Context $bearerCtx
$content = Remove-AzStorageBlob -Container $grantedContainer -Blob "Example5.txt" -Context $bearerCtx
Azure portal
Here are the settings to add this condition using the Azure portal.
Condition #1 settings:
Operator: StringEquals
Value: {containerName}
Operator: And
Expression 2:
Operator: StringLike
Value: {pathString}
Azure PowerShell
Here's how to add this condition using Azure PowerShell.
$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-
container' AND @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike
'readonly/*'))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
$testRa.Condition = $condition
$testRa.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $testRa -PassThru
$grantedContainer = "blobs-example-container"
# Get new context for request
$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
# Try to get ungranted blob
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "Ungranted.txt" -Context $bearerCtx
# Try to get granted blob
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "readonly/Example6.txt" -Context $bearerCtx
(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})
AND
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})
)
OR
(
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'contosocorp'
AND
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'uploads/contoso/*'
)
)
Azure portal
Here are the settings to add this condition using the Azure portal.
Condition #1 settings:
Operator: StringEquals
Value: {containerName}
Operator: And
Expression 2:
Operator: StringLike
Value: {pathString}
Azure PowerShell
Here's how to add this condition using Azure PowerShell.
$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})
AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})) OR
(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'contosocorp' AND
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike
'uploads/contoso/*'))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
$testRa.Condition = $condition
$testRa.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $testRa -PassThru
TIP
Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob
index tags, you must use blob index tags with conditions. For more information, see Manage and find Azure Blob data
with blob index tags (preview).
You must add this condition to any role assignments that include the following permission.
/blobs/read
(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'}
AND
SubOperationMatches{'Blob.Read.WithTagConditions'})
)
OR
(
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Program<$key_case_sensitive$>] StringEquals 'Alpine'
)
)
AND
(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
)
OR
(
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'logs*'
)
)
Azure portal
Here are the settings to add this condition using the Azure portal.
Condition #1 settings:
Key: {keyName}
Operator: StringEquals
Value: {keyValue}
Condition #2 settings:
Operator: StringLike
Value: {pathString}
Azure PowerShell
Here's how to add this condition using Azure PowerShell.
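As a hedged sketch under the same assumptions as the earlier examples ($scope, $roleDefinitionName, and $userObjectID already defined), attaching the combined tag-and-path condition shown above might look like this, after which the test commands below exercise it:
# Condition: read access requires both the tag Program=Alpine and a blob path starting with logs
$condition = "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.Read.WithTagConditions'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Program<`$key_case_sensitive`$>] StringEquals 'Alpine')) AND ((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'logs*'))"
$testRa = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName $roleDefinitionName -ObjectId $userObjectID
$testRa.Condition = $condition
$testRa.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $testRa -PassThru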
$grantedContainer = "contosocorp"
# Get new context for request
$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
# Try to get ungranted blobs
# Wrong name but right tags
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "AlpineFile.txt" -Context $bearerCtx
# Right name but wrong tags
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "logsAlpine.txt" -Context $bearerCtx
# Try to get granted blob
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "logs/AlpineFile.txt" -Context $bearerCtx
Example 9: Allow read and write access to blobs based on tags and custom security attributes
This condition allows read and write access to blobs if the user has a custom security attribute that matches the
blob index tag.
For example, if Brenda has the attribute Project=Baker, she can only read and write blobs with the
Project=Baker blob index tag. Similarly, Chandra can only read and write blobs with Project=Cascade.
For more information, see Allow read access to blobs based on tags and custom security attributes.
(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND
SubOperationMatches{'Blob.Read.WithTagConditions'})
)
OR
(
@Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project] StringEquals
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>]
)
)
AND
(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND
SubOperationMatches{'Blob.Write.WithTagHeaders'})
AND
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'} AND
SubOperationMatches{'Blob.Write.WithTagHeaders'})
)
OR
(
@Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project] StringEquals
@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>]
)
)
Azure portal
Here are the settings to add this condition using the Azure portal.
Condition #1 settings:
Attribute: <attributeset>_<key>
Operator: StringEquals
Option: Attribute
Key: <key>
Condition #2 settings:
Attribute: <attributeset>_<key>
Operator: StringEquals
Option: Attribute
Key: <key>
Example 10: Allow read access to blobs based on tags and multi-value custom security attributes
This condition allows read access to blobs if the user has a custom security attribute with any value that
matches the blob index tag.
For example, if Chandra has the Project attribute with the values Baker and Cascade, she can only read blobs
with the Project=Baker or Project=Cascade blob index tag.
For more information, see Allow read access to blobs based on tags and custom security attributes.
(
(
!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND
SubOperationMatches{'Blob.Read.WithTagConditions'})
)
OR
(
@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] ForAnyOfAnyValues:StringEquals
@Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project]
)
)
Azure portal
Here are the settings to add this condition using the Azure portal.
Condition #1 settings:
Key: <key>
Operator: ForAnyOfAnyValues:StringEquals
Option: Attribute
Attribute: <attributeset>_<key>
Next steps
Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)
Actions and attributes for Azure role assignment conditions in Azure Storage (preview)
Azure role assignment condition format and syntax (preview)
Grant limited access to Azure Storage resources
using shared access signatures (SAS)
11/25/2021 • 12 minutes to read • Edit Online
A shared access signature (SAS) provides secure delegated access to resources in your storage account. With a
SAS, you have granular control over how a client can access your data. For example:
What resources the client may access.
What permissions they have to those resources.
How long the SAS is valid.
A shared access signature can take one of the following two forms:
Ad hoc SAS. When you create an ad hoc SAS, the start time, expiry time, and permissions are specified
in the SAS URI. Any type of SAS can be an ad hoc SAS.
Service SAS with stored access policy. A stored access policy is defined on a resource container,
which can be a blob container, table, queue, or file share. The stored access policy can be used to manage
constraints for one or more service shared access signatures. When you associate a service SAS with a
stored access policy, the SAS inherits the constraints (the start time, expiry time, and permissions)
defined for the stored access policy.
NOTE
A user delegation SAS or an account SAS must be an ad hoc SAS. Stored access policies are not supported for the user
delegation SAS or the account SAS.
NOTE
It's not possible to audit the generation of SAS tokens. Any user that has privileges to generate a SAS token, either by
using the account key, or via an Azure role assignment, can do so without the knowledge of the owner of the storage
account. Be careful to restrict permissions that allow users to generate SAS tokens. To prevent users from generating a
SAS that is signed with the account key for blob and queue workloads, you can disallow Shared Key access to the storage
account. For more information, see Prevent authorization with Shared Key.
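As a hedged example, disallowing Shared Key authorization on an account can be done with Azure PowerShell roughly as follows (the resource group and account names below are placeholders):
# Disallow Shared Key (account key and account-key SAS) authorization for the storage account
Set-AzStorageAccount -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" -AllowSharedKeyAccess $false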
TYPE OF SAS | TYPE OF AUTHORIZATION
Microsoft recommends using a user delegation SAS when possible for superior security.
SAS token
The SAS token is a string that you generate on the client side, for example by using one of the Azure Storage
client libraries. The SAS token is not tracked by Azure Storage in any way. You can create an unlimited number of
SAS tokens on the client side. After you create a SAS, you can distribute it to client applications that require
access to resources in your storage account.
Client applications provide the SAS URI to Azure Storage as part of a request. Then, the service checks the SAS
parameters and the signature to verify that it is valid. If the service verifies that the signature is valid, then the
request is authorized. Otherwise, the request is declined with error code 403 (Forbidden).
Here's an example of a service SAS URI, showing the resource URI and the SAS token. Because the SAS token
comprises the URI query string, the resource URI must be followed first by a question mark, and then by the
SAS token:
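As an illustration only (the account, container, and token values below are placeholders, not a working signature), a service SAS URI has the general form <resource URI>?<SAS token>, and one hedged way to generate such a URI with Azure PowerShell is New-AzStorageBlobSASToken, assuming an existing storage context in $ctx:
# General form: https://<account>.blob.core.windows.net/<container>/<blob>?sv=<version>&st=<start>&se=<expiry>&sr=b&sp=r&sig=<signature>
# Generate a read-only service SAS URI for a blob (placeholder names)
$sasUri = New-AzStorageBlobSASToken -Container "mycontainer" -Blob "myblob.txt" -Permission r -ExpiryTime (Get-Date).AddHours(1) -FullUri -Context $ctx
$sasUri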
2. A lightweight service authenticates the client as needed and then generates a SAS. Once the client
application receives the SAS, it can access storage account resources directly. Access permissions are
defined by the SAS and for the interval allowed by the SAS. The SAS mitigates the need for routing all
data through the front-end proxy service.
Many real-world services may use a hybrid of these two approaches. For example, some data might be
processed and validated via the front-end proxy. Other data is saved and/or read directly using SAS.
Additionally, a SAS is required to authorize access to the source object in a copy operation in certain scenarios:
When you copy a blob to another blob that resides in a different storage account.
You can optionally use a SAS to authorize access to the destination blob as well.
When you copy a file to another file that resides in a different storage account.
You can optionally use a SAS to authorize access to the destination file as well.
When you copy a blob to a file, or a file to a blob.
You must use a SAS even if the source and destination objects reside within the same storage account.
NOTE
Storage doesn't track the number of shared access signatures that have been generated for a storage account, and no API
can provide this detail. If you need to know the number of shared access signatures that have been generated for a
storage account, you must track the number manually.
Next steps
Delegate access with a shared access signature (REST API)
Create a user delegation SAS (REST API)
Create a service SAS (REST API)
Create an account SAS (REST API)
Use the Azure Storage resource provider to access
management resources
11/25/2021 • 4 minutes to read • Edit Online
Azure Resource Manager is the deployment and management service for Azure. The Azure Storage resource
provider is a service that is based on Azure Resource Manager and that provides access to management
resources for Azure Storage. You can use the Azure Storage resource provider to create, update, manage, and
delete resources such as storage accounts, private endpoints, and account access keys. For more information
about Azure Resource Manager, see Azure Resource Manager overview.
You can use the Azure Storage resource provider to perform actions such as creating or deleting a storage
account or getting a list of storage accounts in a subscription. To authorize requests against the Azure Storage
resource provider, use Azure Active Directory (Azure AD). This article describes how to assign permissions to
management resources, and points to examples that show how to make requests against the Azure Storage
resource provider.
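For instance, here is a hedged Azure PowerShell sketch of two common management-plane operations, listing the storage accounts in a subscription and creating a new account (the resource group, account name, and location are placeholders):
# List storage accounts in the current subscription (management-plane operation)
Get-AzStorageAccount
# Create a new storage account through the Azure Storage resource provider
New-AzStorageAccount -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" -Location "eastus" -SkuName Standard_LRS -Kind StorageV2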
AZURE ROLE | DESCRIPTION | INCLUDES ACCESS TO ACCOUNT KEYS?
Owner | Can manage all storage resources and access to resources. | Yes, provides permissions to view and regenerate the storage account keys.
Contributor | Can manage all storage resources, but cannot manage access to resources. | Yes, provides permissions to view and regenerate the storage account keys.
Storage Account Contributor | Can manage the storage account, get information about the subscription's resource groups and resources, and create and manage subscription resource group deployments. | Yes, provides permissions to view and regenerate the storage account keys.
User Access Administrator | Can manage access to the storage account. | Yes, permits a security principal to assign any permissions to themselves and others.
Virtual Machine Contributor | Can manage virtual machines, but not the storage account to which they are connected. | Yes, provides permissions to view and regenerate the storage account keys.
The third column in the table indicates whether the built-in role supports the
Microsoft.Storage/storageAccounts/listkeys/action. This action grants permissions to read and
regenerate the storage account keys. Permissions to access Azure Storage management resources do not also
include permissions to access data. However, if a user has access to the account keys, then they can use the
account keys to access Azure Storage data via Shared Key authorization.
Custom roles for management operations
Azure also supports defining Azure custom roles for access to management resources. For more information
about custom roles, see Azure custom roles.
Code samples
For code examples that show how to authorize and call management operations from the Azure Storage
management libraries, see the following samples:
.NET
Java
Node.js
Python
Next steps
Azure Resource Manager overview
What is Azure role-based access control (Azure RBAC)?
Scalability targets for the Azure Storage resource provider
Security recommendations for Blob storage
11/25/2021 • 9 minutes to read • Edit Online
This article contains security recommendations for Blob storage. Implementing these recommendations will
help you fulfill your security obligations as described in our shared responsibility model. For more information
on how Microsoft fulfills service provider responsibilities, see Shared responsibility in the cloud.
Some of the recommendations included in this article can be automatically monitored by Microsoft Defender
for Cloud, which is the first line of defense in protecting your resources in Azure. For information on Microsoft
Defender for Cloud, see What is Microsoft Defender for Cloud?
Microsoft Defender for Cloud periodically analyzes the security state of your Azure resources to identify
potential security vulnerabilities. It then provides you with recommendations on how to address them. For more
information on Microsoft Defender for Cloud recommendations, see Security recommendations in Microsoft
Defender for Cloud.
Data protection
Recommendation: Use the Azure Resource Manager deployment model
Comments: Create new storage accounts using the Azure Resource Manager deployment model for important security enhancements, including superior Azure role-based access control (Azure RBAC) and auditing, Resource Manager-based deployment and governance, access to managed identities, access to Azure Key Vault for secrets, and Azure AD-based authentication and authorization for access to Azure Storage data and resources. If possible, migrate existing storage accounts that use the classic deployment model to use Azure Resource Manager. For more information about Azure Resource Manager, see Azure Resource Manager overview.
Defender for Cloud: -

Recommendation: Enable Microsoft Defender for all of your storage accounts
Comments: Microsoft Defender for Storage provides an additional layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit storage accounts. Security alerts are triggered in Microsoft Defender for Cloud when anomalies in activity occur and are also sent via email to subscription administrators, with details of suspicious activity and recommendations on how to investigate and remediate threats. For more information, see Configure Microsoft Defender for Storage.
Defender for Cloud: Yes

Recommendation: Turn on soft delete for blobs
Comments: Soft delete for blobs enables you to recover blob data after it has been deleted. For more information on soft delete for blobs, see Soft delete for Azure Storage blobs.
Defender for Cloud: -

Recommendation: Turn on soft delete for containers
Comments: Soft delete for containers enables you to recover a container after it has been deleted. For more information on soft delete for containers, see Soft delete for containers.
Defender for Cloud: -

Recommendation: Require secure transfer (HTTPS) to the storage account
Comments: When you require secure transfer for a storage account, all requests to the storage account must be made over HTTPS. Any requests made over HTTP are rejected. Microsoft recommends that you always require secure transfer for all of your storage accounts. For more information, see Require secure transfer to ensure secure connections.
Defender for Cloud: -

Recommendation: Limit shared access signature (SAS) tokens to HTTPS connections only
Comments: Requiring HTTPS when a client uses a SAS token to access blob data helps to minimize the risk of eavesdropping. For more information, see Grant limited access to Azure Storage resources using shared access signatures (SAS).
Defender for Cloud: -

Recommendation: Use Azure Active Directory (Azure AD) to authorize access to blob data
Comments: Azure AD provides superior security and ease of use over Shared Key for authorizing requests to Blob storage. For more information, see Authorize access to data in Azure Storage.
Defender for Cloud: -

Recommendation: Keep in mind the principle of least privilege when assigning permissions to an Azure AD security principal via Azure RBAC
Comments: When assigning a role to a user, group, or application, grant that security principal only those permissions that are necessary for them to perform their tasks. Limiting access to resources helps prevent both unintentional and malicious misuse of your data.
Defender for Cloud: -

Recommendation: Use a user delegation SAS to grant limited access to blob data to clients
Comments: A user delegation SAS is secured with Azure Active Directory (Azure AD) credentials and also by the permissions specified for the SAS. A user delegation SAS is analogous to a service SAS in terms of its scope and function, but offers security benefits over the service SAS. For more information, see Grant limited access to Azure Storage resources using shared access signatures (SAS).
Defender for Cloud: -

Recommendation: Secure your account access keys with Azure Key Vault
Comments: Microsoft recommends using Azure AD to authorize requests to Azure Storage. However, if you must use Shared Key authorization, then secure your account keys with Azure Key Vault. You can retrieve the keys from the key vault at runtime, instead of saving them with your application. For more information about Azure Key Vault, see Azure Key Vault overview.
Defender for Cloud: -

Recommendation: Keep in mind the principle of least privilege when assigning permissions to a SAS
Comments: When creating a SAS, specify only those permissions that are required by the client to perform its function. Limiting access to resources helps prevent both unintentional and malicious misuse of your data.
Defender for Cloud: -

Recommendation: Have a revocation plan in place for any SAS that you issue to clients
Comments: If a SAS is compromised, you will want to revoke that SAS as soon as possible. To revoke a user delegation SAS, revoke the user delegation key to quickly invalidate all signatures associated with that key. To revoke a service SAS that is associated with a stored access policy, you can delete the stored access policy, rename the policy, or change its expiry time to a time that is in the past. For more information, see Grant limited access to Azure Storage resources using shared access signatures (SAS).
Defender for Cloud: -

Recommendation: If a service SAS is not associated with a stored access policy, then set the expiry time to one hour or less
Comments: A service SAS that is not associated with a stored access policy cannot be revoked. For this reason, limiting the expiry time so that the SAS is valid for one hour or less is recommended.
Defender for Cloud: -
Networking
Recommendation: Configure the minimum required version of Transport Layer Security (TLS) for a storage account
Comments: Require that clients use a more secure version of TLS to make requests against an Azure Storage account by configuring the minimum version of TLS for that account. For more information, see Configure minimum required version of Transport Layer Security (TLS) for a storage account.
Defender for Cloud: -

Recommendation: Enable the Secure transfer required option on all of your storage accounts
Comments: When you enable the Secure transfer required option, all requests made against the storage account must take place over secure connections. Any requests made over HTTP will fail. For more information, see Require secure transfer in Azure Storage.
Defender for Cloud: Yes
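As a hedged example, both of these settings can be applied with Azure PowerShell (the resource group and account names are placeholders):
# Require TLS 1.2 as the minimum version and require HTTPS (secure transfer) for the account
Set-AzStorageAccount -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" -MinimumTlsVersion TLS1_2 -EnableHttpsTrafficOnly $true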
Logging/Monitoring
Recommendation: Track how requests are authorized
Comments: Enable Azure Storage logging to track how each request made against Azure Storage was authorized. The logs indicate whether a request was made anonymously, by using an OAuth 2.0 token, by using Shared Key, or by using a shared access signature (SAS). For more information, see Monitoring Azure Blob Storage with Azure Monitor or Azure Storage analytics logging with Classic Monitoring.
Defender for Cloud: -
Next steps
Azure security documentation
Secure development documentation.
Azure Storage encryption for data at rest
11/25/2021 • 4 minutes to read • Edit Online
Azure Storage uses server-side encryption (SSE) to automatically encrypt your data when it is persisted to the
cloud. Azure Storage encryption protects your data and helps you meet your organizational security and
compliance commitments.
KEY MANAGEMENT PARAMETER | MICROSOFT-MANAGED KEYS | CUSTOMER-MANAGED KEYS | CUSTOMER-PROVIDED KEYS
Azure Storage services supported | All | Blob storage, Azure Files 1,2 | Blob storage
Key storage | Microsoft key store | Azure Key Vault or Key Vault Managed HSM | Customer's own key store
1 For information about creating an account that supports using customer-managed keys with Queue storage,
see Create an account that supports customer-managed keys for queues.
2 For information about creating an account that supports using customer-managed keys with Table storage, see Create an account that supports customer-managed keys for tables.
NOTE
Microsoft-managed keys are rotated appropriately per compliance requirements. If you have specific key rotation
requirements, Microsoft recommends that you move to customer-managed keys so that you can manage and audit the
rotation yourself.
Next steps
What is Azure Key Vault?
Customer-managed keys for Azure Storage encryption
Encryption scopes for Blob storage
Provide an encryption key on a request to Blob storage
Customer-managed keys for Azure Storage
encryption
11/25/2021 • 6 minutes to read • Edit Online
You can use your own encryption key to protect the data in your storage account. When you specify a customer-
managed key, that key is used to protect and control access to the key that encrypts your data. Customer-
managed keys offer greater flexibility to manage access controls.
You must use one of the following Azure key stores to store your customer-managed keys:
Azure Key Vault
Azure Key Vault Managed Hardware Security Module (HSM)
You can either create your own keys and store them in the key vault or managed HSM, or you can use the Azure
Key Vault APIs to generate keys. The storage account and the key vault or managed HSM must be in the same
region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions.
NOTE
Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for
configuration.
IMPORTANT
Customer-managed keys rely on managed identities for Azure resources, a feature of Azure AD. Managed identities do
not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a
managed identity is automatically assigned to your storage account under the covers. If you subsequently move the
subscription, resource group, or storage account from one Azure AD directory to another, the managed identity
associated with the storage account is not transferred to the new tenant, so customer-managed keys may no longer
work. For more information, see Transferring a subscription between Azure AD directories in FAQs and known
issues with managed identities for Azure resources.
Azure storage encryption supports RSA and RSA-HSM keys of sizes 2048, 3072 and 4096. For more
information about keys, see About keys.
Using a key vault or managed HSM has associated costs. For more information, see Key Vault pricing.
NOTE
To rotate a key, create a new version of the key in the key vault or managed HSM, according to your compliance policies.
You can rotate your key manually or create a function to rotate it on a schedule.
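As a hedged sketch, pointing a storage account at a customer-managed key in Azure Key Vault with Azure PowerShell might look like the following (all names are placeholders, and the account's managed identity is assumed to already have the required access to the vault):
# Configure the storage account to use a customer-managed key from Azure Key Vault
Set-AzStorageAccount -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" -KeyvaultEncryption -KeyName "mykey" -KeyVersion "<key-version>" -KeyVaultUri "https://mykeyvault.vault.azure.net"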
Next steps
Azure Storage encryption for data at rest
Configure encryption with customer-managed keys stored in Azure Key Vault
Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM
Provide an encryption key on a request to Blob
storage
11/25/2021 • 3 minutes to read • Edit Online
Clients making requests against Azure Blob storage have the option to provide an AES-256 encryption key on a
per-request basis. Including the encryption key on the request provides granular control over encryption
settings for Blob storage operations. Customer-provided keys can be stored in Azure Key Vault or in another key
store.
x-ms-encryption-key-sha256: Required for both write and read requests. The Base64-encoded SHA256 of the encryption key.
Specifying encryption keys on the request is optional. However, if you specify one of the headers listed above for
a write operation, then you must specify all of them.
Blob storage operations supporting customer-provided keys
The following Blob storage operations support sending customer-provided encryption keys on a request:
Put Blob
Put Block List
Put Block
Put Block from URL
Put Page
Put Page from URL
Append Block
Set Blob Properties
Set Blob Metadata
Get Blob
Get Blob Properties
Get Blob Metadata
Snapshot Blob
IMPORTANT
The Azure portal cannot be used to read from or write to a container or blob that is encrypted with a key provided on the
request.
Be sure to protect the encryption key that you provide on a request to Blob storage in a secure key store like Azure Key
Vault. If you attempt a write operation on a container or blob without the encryption key, the operation will fail, and you
will lose access to the object.
Feature support
This table shows how this feature is supported in your account and the impact on support when you enable
certain capabilities.
Standard general-purpose v2
1 Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP)
support all require a storage account with a hierarchical namespace enabled.
Next steps
Specify a customer-provided key on a request to Blob storage with .NET
Azure Storage encryption for data at rest
Encryption scopes for Blob storage
11/25/2021 • 5 minutes to read • Edit Online
Encryption scopes enable you to manage encryption with a key that is scoped to a container or an individual
blob. You can use encryption scopes to create secure boundaries between data that resides in the same storage
account but belongs to different customers.
For more information about working with encryption scopes, see Create and manage encryption scopes.
THE ENCRYPTION SCOPE DEFINED ON THE CONTAINER IS... | UPLOADING A BLOB WITH THE DEFAULT ENCRYPTION SCOPE... | UPLOADING A BLOB WITH AN ENCRYPTION SCOPE OTHER THAN THE DEFAULT SCOPE...
A default encryption scope must be specified for a container at the time that the container is created.
If no default encryption scope is specified for the container, then you can upload a blob using any encryption
scope that you've defined for the storage account. The encryption scope must be specified at the time that the
blob is uploaded.
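As a hedged sketch, an encryption scope protected by Microsoft-managed keys can be created and then specified at upload time with Azure PowerShell (the resource group, account, container, and scope names are placeholders; -EncryptionScope on the upload cmdlet is assumed to be available in a recent Az.Storage version):
# Create an encryption scope that uses Microsoft-managed keys
New-AzStorageEncryptionScope -ResourceGroupName "myresourcegroup" -StorageAccountName "mystorageaccount" -EncryptionScopeName "scope1" -StorageEncryption
# Upload a blob under that encryption scope
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -UseConnectedAccount
Set-AzStorageBlobContent -File "example.txt" -Container "mycontainer" -Blob "example.txt" -EncryptionScope "scope1" -Context $ctx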
Feature support
This table shows how this feature is supported in your account and the impact on support when you enable
certain capabilities.
Standard general-purpose v2
1 Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP)
support all require a storage account with a hierarchical namespace enabled.
Next steps
Azure Storage encryption for data at rest
Create and manage encryption scopes
Customer-managed keys for Azure Storage encryption
What is Azure Key Vault?
Use private endpoints for Azure Storage
11/25/2021 • 8 minutes to read • Edit Online
You can use private endpoints for your Azure Storage accounts to allow clients on a virtual network (VNet) to
securely access data over a Private Link. The private endpoint uses a separate IP address from the VNet address
space for each storage account service. Network traffic between the clients on the VNet and the storage account
traverses over the VNet and a private link on the Microsoft backbone network, eliminating exposure from the
public internet.
Using private endpoints for your storage account enables you to:
Secure your storage account by configuring the storage firewall to block all connections on the public
endpoint for the storage service.
Increase security for the virtual network (VNet), by enabling you to block exfiltration of data from the VNet.
Securely connect to storage accounts from on-premises networks that connect to the VNet using VPN or
ExpressRoute with private peering.
Conceptual overview
A private endpoint is a special network interface for an Azure service in your Virtual Network (VNet). When you
create a private endpoint for your storage account, it provides secure connectivity between clients on your VNet
and your storage. The private endpoint is assigned an IP address from the IP address range of your VNet. The
connection between the private endpoint and the storage service uses a secure private link.
Applications in the VNet can connect to the storage service over the private endpoint seamlessly, using the
same connection strings and authorization mechanisms that they would use otherwise. Private
endpoints can be used with all protocols supported by the storage account, including REST and SMB.
Private endpoints can be created in subnets that use Service Endpoints. Clients in a subnet can thus connect to
one storage account using a private endpoint, while using service endpoints to access others.
When you create a private endpoint for a storage service in your VNet, a consent request is sent for approval to
the storage account owner. If the user requesting the creation of the private endpoint is also an owner of the
storage account, this consent request is automatically approved.
Storage account owners can manage consent requests and the private endpoints, through the 'Private
endpoints' tab for the storage account in the Azure portal.
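As a hedged sketch, creating a private endpoint for the Blob service of a storage account with Azure PowerShell might look roughly like the following (the resource names and location are placeholders, and $storageAccount and $subnet are assumed to have been retrieved beforehand with Get-AzStorageAccount and Get-AzVirtualNetwork):
# Define the connection to the storage account's Blob sub-resource
$plsConnection = New-AzPrivateLinkServiceConnection -Name "myPlsConnection" -PrivateLinkServiceId $storageAccount.Id -GroupId "blob"
# Create the private endpoint in the chosen subnet
New-AzPrivateEndpoint -ResourceGroupName "myresourcegroup" -Name "myPrivateEndpoint" -Location "eastus" -Subnet $subnet -PrivateLinkServiceConnection $plsConnection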
TIP
If you want to restrict access to your storage account through the private endpoint only, configure the storage firewall to
deny or control access through the public endpoint.
You can secure your storage account to only accept connections from your VNet, by configuring the storage
firewall to deny access through its public endpoint by default. You don't need a firewall rule to allow traffic from
a VNet that has a private endpoint, since the storage firewall only controls access through the public endpoint.
Private endpoints instead rely on the consent flow for granting subnets access to the storage service.
NOTE
When copying blobs between storage accounts, your client must have network access to both accounts. So if you choose
to use a private link for only one account (either the source or the destination), make sure that your client has network
access to the other account. To learn about other ways to configure network access, see Configure Azure Storage firewalls
and virtual networks.
TIP
Create a separate private endpoint for the secondary instance of the storage service for better read performance on RA-
GRS accounts. Make sure to create a general-purpose v2 (Standard or Premium) storage account.
For read access to the secondary region with a storage account configured for geo-redundant storage, you need
separate private endpoints for both the primary and secondary instances of the service. You don't need to create
a private endpoint for the secondary instance for failover. The private endpoint will automatically connect to
the new primary instance after failover. For more information about storage redundancy options, see Azure
Storage redundancy.
IMPORTANT
Use the same connection string to connect to the storage account using private endpoints, as you'd use otherwise. Please
don't connect to the storage account using its privatelink subdomain URL.
We create a private DNS zone attached to the VNet with the necessary updates for the private endpoints, by
default. However, if you're using your own DNS server, you may need to make additional changes to your DNS
configuration. The section on DNS changes below describes the updates required for private endpoints.
NAME | TYPE | VALUE
StorageAccountA.privatelink.blob.core.windows.net | CNAME | <storage service public endpoint>
As previously mentioned, you can deny or control access for clients outside the VNet through the public
endpoint using the storage firewall.
The DNS resource records for StorageAccountA, when resolved by a client in the VNet hosting the private
endpoint, will be:
NAME | TYPE | VALUE
StorageAccountA.privatelink.blob.core.windows.net | A | 10.1.1.5
This approach enables access to the storage account using the same connection string for clients on the
VNet hosting the private endpoints, as well as clients outside the VNet.
If you are using a custom DNS server on your network, clients must be able to resolve the FQDN for the storage
account endpoint to the private endpoint IP address. You should configure your DNS server to delegate your
private link subdomain to the private DNS zone for the VNet, or configure the A records for
StorageAccountA.privatelink.blob.core.windows.net with the private endpoint IP address.
TIP
When using a custom or on-premises DNS server, you should configure your DNS server to resolve the storage account
name in the privatelink subdomain to the private endpoint IP address. You can do this by delegating the
privatelink subdomain to the private DNS zone of the VNet, or configuring the DNS zone on your DNS server and
adding the DNS A records.
The recommended DNS zone names for private endpoints for storage services, and the associated endpoint
target sub-resources, are:
For more information on configuring your own DNS server to support private endpoints, refer to the following
articles:
Name resolution for resources in Azure virtual networks
DNS configuration for private endpoints
Pricing
For pricing details, see Azure Private Link pricing.
Known Issues
Keep in mind the following known issues about private endpoints for Azure Storage.
Storage access constraints for clients in VNets with private endpoints
Clients in VNets with existing private endpoints face constraints when accessing other storage accounts that
have private endpoints. For example, suppose a VNet N1 has a private endpoint for a storage account A1 for
Blob storage. If storage account A2 has a private endpoint in a VNet N2 for Blob storage, then clients in VNet N1
must also access Blob storage in account A2 using a private endpoint. If storage account A2 does not have any
private endpoints for Blob storage, then clients in VNet N1 can access Blob storage in that account without a
private endpoint.
This constraint is a result of the DNS changes made when account A2 creates a private endpoint.
Network Security Group rules for subnets with private endpoints
Currently, you can't configure Network Security Group (NSG) rules and user-defined routes for private
endpoints. NSG rules applied to the subnet hosting the private endpoint are not applied to the private endpoint.
They are applied only to other endpoints (For example: network interface controllers). A limited workaround for
this issue is to implement your access rules for private endpoints on the source subnets, though this approach
may require a higher management overhead.
Copying blobs between storage accounts
You can copy blobs between storage accounts by using private endpoints only if you use the Azure REST API, or
tools that use the REST API. These tools include AzCopy, Storage Explorer, Azure PowerShell, Azure CLI, and the
Azure Blob Storage SDKs.
Only private endpoints that target the Blob storage resource are supported. Private endpoints that target the
Data Lake Storage Gen2 or the File resource are not yet supported. Also, copying between storage accounts by
using the Network File System (NFS) protocol is not yet supported.
Next steps
Configure Azure Storage firewalls and virtual networks
Security recommendations for Blob storage
Network routing preference for Azure Storage
11/25/2021 • 3 minutes to read • Edit Online
You can configure network routing preference for your Azure storage account to specify how network traffic is
routed to your account from clients over the internet. By default, traffic from the internet is routed to the public
endpoint of your storage account over the Microsoft global network. Azure Storage provides additional options
for configuring how traffic is routed to your storage account.
Configuring routing preference gives you the flexibility to optimize your traffic either for premium network
performance or for cost. When you configure a routing preference, you specify how traffic will be directed to the
public endpoint for your storage account by default. You can also publish route-specific endpoints for your
storage account.
NOTE
This feature is not supported in premium performance storage accounts or accounts configured to use Zone-redundant
storage (ZRS).
Routing configuration
For step-by-step guidance that shows you how to configure the routing preference and route-specific endpoints,
see Configure network routing preference for Azure Storage.
You can choose between the Microsoft global network and internet routing as the default routing preference for
the public endpoint of your storage account. The default routing preference applies to all traffic from clients
outside Azure and affects the endpoints for Azure Data Lake Storage Gen2, Blob storage, Azure Files, and static
websites. Configuring routing preference is not supported for Azure Queues or Azure Tables.
You can also publish route-specific endpoints for your storage account. When you publish route-specific
endpoints, Azure Storage creates new public endpoints for your storage account that route traffic over the
desired path. This flexibility enables you to direct traffic to your storage account over a specific route without
changing your default routing preference.
For example, publishing an internet route-specific endpoint for the 'StorageAccountA' will publish the following
endpoints for your storage account:
STORAGE SERVICE | ROUTE-SPECIFIC ENDPOINT
If you have a read-access geo-redundant storage (RA-GRS) or a read-access geo-zone-redundant storage (RA-
GZRS) storage account, publishing route-specific endpoints also automatically creates the corresponding
endpoints in the secondary region for read access.
The connection strings for the published route-specific endpoints can be copied via the Azure portal. These
connection strings can be used for Shared Key authorization with all existing Azure Storage SDKs and APIs.
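As a hedged example, the default routing preference and the route-specific endpoints can be configured with Azure PowerShell (the resource group and account names are placeholders):
# Set internet routing as the default for the public endpoint and publish both route-specific endpoints
Set-AzStorageAccount -ResourceGroupName "myresourcegroup" -Name "StorageAccountA" -RoutingChoice InternetRouting -PublishInternetEndpoint $true -PublishMicrosoftEndpoint $true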
Regional availability
Routing preference for Azure Storage is available in the following regions:
Central US
Central US EUAP
East US
East US 2
East US 2 EUAP
South Central US
West Central US
West US
West US 2
France Central
France South
Germany North
Germany West Central
North Central US
North Europe
Norway East
Switzerland North
Switzerland West
UK South
UK West
West Europe
UAE Central
East Asia
Southeast Asia
Japan East
Japan West
West India
Australia East
Australia Southeast
The following known issues affect the routing preference for Azure Storage:
Access requests for the route-specific endpoint for the Microsoft global network fail with HTTP error 404 or
equivalent. Routing over the Microsoft global network works as expected when it is set as the default routing
preference for the public endpoint.
Next steps
What is routing preference?
Configure network routing preference
Configure Azure Storage firewalls and virtual networks
Security recommendations for Blob storage
Data protection overview
11/25/2021 • 11 minutes to read • Edit Online
Azure Storage provides data protection for Blob Storage and Azure Data Lake Storage Gen2 to help you to
prepare for scenarios where you need to recover data that has been deleted or overwritten. It's important to
think about how to best protect your data before an incident occurs that could compromise it. This guide can
help you decide in advance which data protection features your scenario requires, and how to implement them.
If you should need to recover data that has been deleted or overwritten, this overview also provides guidance
on how to proceed, based on your scenario.
In the Azure Storage documentation, data protection refers to strategies for protecting the storage account and
data within it from being deleted or modified, or for restoring data after it has been deleted or modified. Azure
Storage also offers options for disaster recovery, including multiple levels of redundancy to protect your data
from service outages due to hardware problems or natural disasters, and customer-managed failover in the
event that the data center in the primary region becomes unavailable. For more information about how your
data is protected from service outages, see Disaster recovery.
Scenario: Prevent a storage account from being deleted or modified.
Option: Azure Resource Manager lock (Learn more...)
Recommendations: Lock all of your storage accounts with an Azure Resource Manager lock to prevent deletion of the storage account.
Benefit: Protects the storage account against deletion or configuration changes. Does not protect containers or blobs in the account from being deleted or overwritten.
Available for Data Lake Storage: Yes

Scenario: Prevent a container and its blobs from being deleted or modified for an interval that you control.
Option: Immutability policy on a container (Learn more...)
Recommendations: Set an immutability policy on a container to protect business-critical documents, for example, in order to meet legal or regulatory compliance requirements.
Benefit: Protects a container and its blobs from all deletes and overwrites. When a legal hold or a locked time-based retention policy is in effect, the storage account is also protected from deletion. Containers for which no immutability policy has been set are not protected from deletion.
Available for Data Lake Storage: Yes, in preview

Scenario: Restore a deleted container within a specified interval.
Option: Container soft delete (Learn more...)
Recommendations: Enable container soft delete for all storage accounts, with a minimum retention interval of 7 days. Enable blob versioning and blob soft delete together with container soft delete to protect individual blobs in a container. Store containers that require different retention periods in separate storage accounts.
Benefit: A deleted container and its contents may be restored within the retention period. Only container-level operations (e.g., Delete Container) can be restored. Container soft delete does not enable you to restore an individual blob in the container if that blob is deleted.
Available for Data Lake Storage: Yes

Scenario: Restore a deleted blob or blob version within a specified interval.
Option: Blob soft delete (Learn more...)
Recommendations: Enable blob soft delete for all storage accounts, with a minimum retention interval of 7 days. Enable blob versioning and container soft delete together with blob soft delete for optimal protection of blob data.
Benefit: A deleted blob or blob version may be restored within the retention period.
Available for Data Lake Storage: Yes, in preview

Scenario: Restore a set of block blobs to a previous point in time.
Option: Point-in-time restore (Learn more...)
Recommendations: To use point-in-time restore to revert to an earlier state, design your application to delete individual block blobs rather than deleting containers.
Benefit: A set of block blobs may be reverted to their state at a specific point in the past. Only operations performed on block blobs are reverted. Any operations performed on containers, page blobs, or append blobs are not reverted.
Available for Data Lake Storage: No

Scenario: Manually save the state of a blob at a given point in time.
Option: Blob snapshot (Learn more...)
Recommendations: Recommended as an alternative to blob versioning when versioning is not appropriate for your scenario, due to cost or other considerations, or when the storage account has a hierarchical namespace enabled.
Benefit: A blob may be restored from a snapshot if the blob is overwritten. If the blob is deleted, snapshots are also deleted.
Available for Data Lake Storage: Yes, in preview
Scenario: A blob can be deleted or overwritten, but th...
Option: Roll-your-own solution for copying...
Recommendations: Recommended for peace-of-mind...
Benefit: Data can be restored from the second...
Available for Data Lake Storage: AzCopy and Azure Data Factory are...