Expert Tips for Working with Windows Azure Blob Storage and Silverlight

Advanced techniques for working with Azure blob storage and Silverlight apps

Chris Auld, Chris Klug

October 29, 2010

On the face of it, the Windows Azure Blob storage service is a fairly simple offering: a massively scalable bit bucket in the sky. On further reflection, though, you'll find that blob (aka BLOB) storage is a rather sophisticated service. In this article, we hope to share some insights into the more advanced features of blob storage. We'll discuss the security model that can be applied to blob assets, and on the basis of this we will describe how to consume blob storage from a lightweight client without the use of the StorageClient .NET API. Finally, we'll look at the Windows Azure Content Delivery Network (CDN) and discuss some of the exciting scenarios that can be solved using this feature.

We assume that you've spent some time working with Windows Azure storage already and that you understand how to set up a storage account and consume the storage endpoint from a .NET application using the StorageClient API. For an introduction, we suggest the Windows Azure Training Kit from Microsoft.

Storage Security Using Shared Access Signatures

Windows Azure storage is exposed as a set of RESTful web services. This means that it should be possible to consume the storage service from any client capable of basic HTTP communications. To retrieve the contents of a public blob, one can make a simple HTTP request to the blob URL. This is a brilliant solution if you want to store lots of public data in the cloud: Blob storage acts as a highly scalable static content web server. A more challenging scenario arises when you want to deal with non-public resources—in particular, providing write access to blob storage.

Each blob container defines an access level. In order to write blob data or to read data from read-only containers, requests to the REST service must be signed using a secret key. This secret key approach provides a high level of security, provided the key is not compromised. Our goal is to allow a Silverlight client to upload data directly to blob storage. However, working with the secret key on the client machine would be asking for trouble; no amount of obfuscation or faux encryption will keep that key safe from a dedicated "bad guy" in possession of our client.

To solve the shared secret problem, Azure blob storage offers a feature called Shared Access Signatures. A Shared Access Signature allows permissions to be defined on a much more granular level. A Shared Access Signature is a set of query string parameters that define a permission set and a validity period. This query string is then digitally signed using the shared key. Using this approach, a trusted client that possesses the secret key can create a Shared Access Signature, then hand off that signature to an untrusted client for use.
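To make this concrete, here is what a signed request looks like on the wire. The query string fields are real Shared Access Signature parameters, but the values below are illustrative only:

http://myaccount.blob.core.windows.net/pictures/photo.jpg?sr=b&sp=r&se=2010-10-29T23%3A00%3A00Z&sig=dD80ihBh5jfNpymO5Hg1IdiJIEvHcJpCMiCMnN%2FRnbI%3D

Here sr identifies the signed resource (b for blob, c for container), sp the granted permissions, se the expiry time, and sig the HMAC-SHA256 signature computed with the account's secret key. Any client holding this URL can read the blob until the expiry time, without ever seeing the key itself.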

Shared Access Signatures provide an elegant solution for our Silverlight client. We can generate a Shared Access Signature including appropriate write permissions and hand this to the Silverlight client, which can then directly upload data into blob storage.

Using Container-Level Policies

There are two broad flavors of Shared Access Signature. We refer to them as ad hoc signatures and policy-based signatures.

Ad hoc signatures. An ad hoc signature directly encodes the permission set into the Shared Access Signature by specifying an end time and permission set. Clients in receipt of an ad hoc signature will be able to perform the actions defined in the permission set for the lifetime of the signature. It is not possible to revoke an ad hoc signature except by deleting the blob or container to which it relates.

Ad hoc signatures are ideal for short-lived grants with a limited permission set. For example, we might use an ad hoc signature to grant read access to a private blob for one minute. We'd be unlikely to grant a month of write permissions to a container using this approach.

As an example, to create a Shared Access Signature valid for one minute of read access to a container, we can use the following code.

string SAS = container.GetSharedAccessSignature(new SharedAccessPolicy()
{
    Permissions = SharedAccessPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(1)
});

Policy-based signatures. Policy-based signatures provide an additional level of intermediation, and because of this they can be revoked. A policy-based signature does not sign a permission set; rather, it signs a pointer to a set of permissions defined as a container-level policy. It is possible to revoke a policy-based Shared Access Signature, even within its validity timeframe, by changing the container-level policy.

To use this approach, we first create a container-level access policy.

protected void CreatePolicy(object sender, EventArgs e)
{
    var permissions = _container.GetPermissions();
    SharedAccessPolicy policy = new SharedAccessPolicy()
    {
        Permissions = SharedAccessPermissions.Write
    };
    permissions.SharedAccessPolicies.Add("SilverlightAccess", policy);
    _container.SetPermissions(permissions);
}

We can then create a Shared Access Signature providing one hour of access based on this policy.

SharedAccessPolicy sasReq = new SharedAccessPolicy();
sasReq.SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1);
SAS = container.GetSharedAccessSignature(sasReq, "SilverlightAccess");

If we need to revoke access within that one-hour period, we simply remove the underlying container-level policy.

protected void RevokePolicy(object sender, EventArgs e)
{
    var permissions = _container.GetPermissions();
    permissions.SharedAccessPolicies.Remove("SilverlightAccess");
    _container.SetPermissions(permissions);
}

Creating Shared Access Signatures for a Silverlight Client

To hand off a Shared Access Signature to our Silverlight application, we will generate it in the code behind of the page that hosts the Silverlight *.xap file, then pass it in as an initialization parameter. The sample code for this article includes an ASP.NET page called admin.aspx that includes the code to create and remove the container-level policy. The page that hosts the Silverlight *.xap control includes code to create a Shared Access Signature based on this policy with a validity period of one hour.
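The mechanics of that hand-off are worth sketching. The code below is illustrative rather than lifted from the sample: the configuration key, container name, parameter names, and page member are all our own choices.

// Code-behind of the hosting page. The SAS is URL-encoded because it
// contains '&' and '=' characters that would break the initParams format.
protected string InitParams { get; private set; }

protected void Page_Load(object sender, EventArgs e)
{
    var account = CloudStorageAccount.Parse(
        ConfigurationManager.AppSettings["StorageConnectionString"]);
    var container = account.CreateCloudBlobClient().GetContainerReference("uploads");

    // Sign against the "SilverlightAccess" container-level policy created earlier.
    string sas = container.GetSharedAccessSignature(
        new SharedAccessPolicy { SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1) },
        "SilverlightAccess");

    InitParams = string.Format("sas={0},account={1},container={2}",
        HttpUtility.UrlEncode(sas), account.Credentials.AccountName, "uploads");
}

// In the .aspx markup, inside the Silverlight <object> element:
// <param name="initParams" value="<%= InitParams %>" />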

Accessing Blob Storage from Silverlight

Once the Silverlight client has a Shared Access Signature, it can make REST calls directly to the storage endpoint. The only snag with this is that there is no prebuilt cloud storage client for Silverlight. The StorageClient API that we use in the full version of the .NET Framework is not compatible with Silverlight.

But RESTful service calls are simply HTTP requests, and Silverlight has no problem making HTTP requests. Silverlight is therefore perfectly capable of acting as a lightweight client for Azure blob storage; we just need to craft the HTTP requests ourselves.

Cross-Domain Considerations

To minimize the scope for malicious Silverlight applications, they are allowed to make cross-domain calls—that is, calls to a domain other than the one the *.xap was downloaded from—only if the server receiving the call serves up a client access policy document. A client access policy is an XML file that defines what Silverlight clients are allowed to access on that domain.

Silverlight will automatically look for this file as soon as a cross-domain call is made. It expects it to be located at the root of the service and have a specific name, clientaccesspolicy.xml. So when trying to access blob storage, Silverlight will automatically look for a blob called clientaccesspolicy.xml at the root of the blob storage: [http://myaccount.blob.core.windows.net/clientaccesspolicy.xml].

Before we can use blob storage from Silverlight, we need to upload a client access policy to blob storage. This is done by uploading a clientaccesspolicy.xml file to a specific blob container called $root.
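As a sketch, a maximally permissive policy looks like the one below: any calling domain may send any request headers, which is what the storage REST calls need, since they carry custom x-ms-* headers. Tighten the domain list for production. The container and file names are fixed by convention; the upload code that follows is illustrative StorageClient usage.

<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="*" />
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true" />
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>

// Upload the policy file as a public blob in the $root container so that
// it is served from the domain root, where Silverlight looks for it.
var root = client.GetContainerReference("$root");
root.CreateIfNotExist();
root.SetPermissions(new BlobContainerPermissions
{
    PublicAccess = BlobContainerPublicAccessType.Blob
});

var policyBlob = root.GetBlobReference("clientaccesspolicy.xml");
policyBlob.Properties.ContentType = "text/xml";
policyBlob.UploadFile("clientaccesspolicy.xml");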

One of the great things about working with Windows Azure is the developer tools from the Windows Azure SDK. At design time, we're big fans of using the Development Storage service—this provides a simulated storage endpoint that runs entirely on a developer's machine. Unfortunately, development storage works a little bit differently than the real thing when it comes to the $root container. The root container is not actually served out of the domain root. Rather it sits under a path structure: [http://127.0.0.1:10000/devstoreaccount1/clientaccesspolicy.xml]. This makes it impossible for Silverlight to find the clientaccesspolicy.xml file, and therefore it throws an exception whenever a Silverlight client, served up from a different domain—a Windows Azure Web Role, for example—tries to call development storage. We propose two approaches for solving this issue.

The first, and simplest, is to upload the Silverlight XAP package to blob storage as well and run it from there. This circumvents the issue as the calls to blob storage are no longer cross-domain calls. It is, however, not the greatest debugging experience. We need to attach the debugger manually in order to debug Silverlight.

Our preferred solution is to use a lightweight proxy to receive calls at "correct" Uniform Resource Identifiers (URIs) and route them on to the development storage service. We created a console application that listens for incoming requests on port 11000 instead of 10000, then forwards each incoming request to the correct port, adding the /devstoreaccount1/ path fragment to the request URI. You will find the code for this proxy along with the sample code for this article at the Download the Code link at the top of this article.
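A minimal sketch of the idea follows. This is not the downloadable sample code; it handles one request at a time and glosses over error handling and a few HTTP corner cases, but it shows the essential rewrite-and-forward loop:

using System;
using System.IO;
using System.Net;

// Forwards http://127.0.0.1:11000/... to
// http://127.0.0.1:10000/devstoreaccount1/... so that Silverlight can find
// clientaccesspolicy.xml at the domain root of development storage.
class DevStorageProxy
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://127.0.0.1:11000/");
        listener.Start();

        while (true)
        {
            HttpListenerContext ctx = listener.GetContext();
            string target = "http://127.0.0.1:10000/devstoreaccount1" +
                            ctx.Request.Url.PathAndQuery;

            var forward = (HttpWebRequest)WebRequest.Create(target);
            forward.Method = ctx.Request.HttpMethod;

            // Copy request headers, skipping those HttpWebRequest manages itself.
            foreach (string name in ctx.Request.Headers)
                if (!WebHeaderCollection.IsRestricted(name))
                    forward.Headers[name] = ctx.Request.Headers[name];
            forward.ContentType = ctx.Request.ContentType;

            if (ctx.Request.HasEntityBody)
                using (Stream body = forward.GetRequestStream())
                    ctx.Request.InputStream.CopyTo(body);

            HttpWebResponse response;
            try { response = (HttpWebResponse)forward.GetResponse(); }
            catch (WebException ex) { response = (HttpWebResponse)ex.Response; }

            using (response)
            {
                ctx.Response.StatusCode = (int)response.StatusCode;
                // Relay response headers, skipping ones HttpListener controls.
                foreach (string name in response.Headers)
                    if (name != "Content-Length" && name != "Transfer-Encoding" &&
                        name != "Connection" && name != "Keep-Alive" &&
                        name != "WWW-Authenticate")
                        ctx.Response.Headers[name] = response.Headers[name];
                response.GetResponseStream().CopyTo(ctx.Response.OutputStream);
                ctx.Response.Close();
            }
        }
    }
}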

Creating the Silverlight Client

The core of the Silverlight client is contained in the Azure.Storage namespace. This is a lightweight storage API that we have written for use from Silverlight. You can see the interface structure of this library in Figure 1.

Figure 1: Structure of Azure.Storage namespace

The storage client code that we have provided currently targets only blob storage. Although it may be tempting to extend this to support table storage and queue storage, it's important to remember that there is no equivalent of Shared Access Signatures for these storage types—the challenge of key security will surface once more.

Uploads are performed in a block-wise fashion. Files are split into blocks and uploaded piece by piece. While this is not strictly necessary for blobs of less than 64MB, it is useful for improving performance by parallelizing uploads and provides an ability to retry failed blocks on failure. Silverlight relies on the underlying HTTP stack of the hosting browser; different browsers have differing attitudes to the strict application of the two connections per domain limit, so your mileage may vary. The sample code uses one connection per file, but you may choose some other heuristic. You can see the interface of the primary upload class in Figure 2.

Figure 2: Primary upload class interface

Note the need to upload (put) a block list to finalize the upload of the blob.
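For reference, the two-step REST sequence looks like the following sketch. It illustrates the wire protocol rather than reproducing the sample library's code; the account, container, and variable names are our own, and sharedAccessSignature is assumed to already begin with a question mark.

// 1. PUT each block of data to the blob URI with comp=block. Block IDs must
//    be Base64 encoded, and all IDs for a blob must decode to the same length.
string blockId = Convert.ToBase64String(BitConverter.GetBytes(blockIndex));
string putBlockUri = string.Format(
    "http://myaccount.blob.core.windows.net/uploads/{0}{1}&comp=block&blockid={2}",
    blobName, sharedAccessSignature, Uri.EscapeDataString(blockId));

// 2. PUT the block list with comp=blocklist to commit the blob. The request
//    body lists the block IDs in the order the blocks should be concatenated:
//
//    <?xml version="1.0" encoding="utf-8"?>
//    <BlockList><Latest>AAAAAA==</Latest><Latest>AQAAAA==</Latest></BlockList>
string putBlockListUri = string.Format(
    "http://myaccount.blob.core.windows.net/uploads/{0}{1}&comp=blocklist",
    blobName, sharedAccessSignature);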

The REST protocol requires a full suite of HTTP verbs—we variously need to GET, POST, PUT, and DELETE. This means that in Silverlight we need to use the ClientHttp stack instead of the more familiar BrowserHttp stack; the latter does not support HTTP verbs other than GET and POST. All the REST calls are made by AzureClient.MakeRequest.

UriBuilder uri = new UriBuilder(GetUri(Type, AccountName, path));
AddQueryParameters(uri, SharedAccessSignature, queryParameters);
HttpWebRequest req = (HttpWebRequest)WebRequestCreator.ClientHttp.Create(uri.Uri);
req.Method = method;
AddHeaders(req, customHeaders);

The Shared Access Signature, as well as the account name and container name, are passed to the Silverlight control using InitParameters. The Silverlight client can then use these pieces of information to build the URIs required to make the RESTful HTTP calls directly to the blob storage endpoint. It should be trivial to rehost the sample code in any ASP.NET application regardless of whether it is hosted in a Windows Azure Web Role or otherwise.
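On the Silverlight side, those values arrive in the StartupEventArgs. A minimal sketch in App.xaml.cs, assuming the parameter keys from the hosting-page example above (the MainPage constructor is likewise illustrative):

private void Application_Startup(object sender, StartupEventArgs e)
{
    // Keys must match those written by the hosting page.
    string account = e.InitParams["account"];
    string container = e.InitParams["container"];
    string sas = System.Windows.Browser.HttpUtility.UrlDecode(e.InitParams["sas"]);

    this.RootVisual = new MainPage(account, container, sas);
}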

The code that we provide in the Azure.Storage client contains extra functionality that is not used by the sample application itself. The IFileSystem interface can be used to work with blob storage in a fashion that more resembles a traditional file system.

Now that we have a convenient mechanism for uploading blobs, we can take a look at interesting uses of blob storage for serving that content back up to users. If you've run the sample code, you will notice that there is a combo box for selecting the Cache-Control value for the blob. This is to support the Windows Azure CDN; we will examine this feature in the following section.

The Windows Azure CDN

A CDN consists of a number of geographically dispersed servers that act in concert to cache information around the globe. By intelligently routing client requests to the nearest CDN node, rather than to a centralized web server, application developers can achieve significant performance improvements for their end users as well as reduce the load on crucial web server resources.

CDN technologies are not a new concept; however, they are not the sort of resource a typical web developer has close at hand. Running a fleet of global data centers and the necessary network infrastructure to intelligently route requests is well beyond the means of even comparatively large providers. While third-party CDN offerings have been available for some time, they've traditionally been expensive and complex to set up.

The Windows Azure CDN changes the game for CDN delivery. Put simply, it allows developers to flag their storage account as being CDN enabled and to then serve public blobs, stored in Windows Azure blob storage, via a 20-node global CDN. There is no dependency on Windows Azure compute; even developers deploying traditional on-premises applications can take advantage of the CDN. At 15 to 20 cents per gigabyte of data transferred, it's a no-brainer. Indeed, outside of the US and Europe, CDN traffic is less than half the price of serving content directly from blob storage itself.

In CDN parlance, there are two servers that may be involved in a typical request. The origin server is the centralized server that holds the authoritative master copy of the content; the edge server is the secondary server, located close to the client, which holds a cached copy of the content.

In the case of the Windows Azure CDN, our Windows Azure storage account, located in one of the six primary Windows Azure data centers around the world, acts as the origin server. The content is lazy loaded to the edge servers. This means that if an edge server receives a request for a blob that it does not hold in local cache, it will in turn retrieve that resource from the origin server.

Working with the Windows Azure CDN is a fairly simple process:

  1. Enable the CDN for your Windows Azure storage account.

  2. (Optionally) map a friendly URL—for example, [http://cdn.mycoolservice.com].

  3. Upload content to blob storage setting the Cache-Control header.

  4. Request content using the CDN URL.

CDN-Enabling Our Uploads

To take advantage of the Windows Azure CDN, you will need to use an actual Windows Azure storage account—this will not work with development storage. Enable the CDN for the storage account by using the Windows Azure Developer Portal.

We won't bother mapping a friendly URL; instead we'll use the default CDN URL that is assigned to our account. It will look something like http://xxxx.vo.msecnd.net/. The sample code included with this article includes functionality to set the Cache-Control header on file upload. It does so by setting the HTTP header x-ms-blob-cache-control on the PUT blob operation.
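If you upload with the StorageClient API rather than raw REST, the same header is exposed as a blob property. A quick sketch (the blob name, file path, and cache lifetime are illustrative):

CloudBlob blob = container.GetBlobReference("videos/intro.wmv");
blob.Properties.CacheControl = "public, max-age=3600"; // let edge servers cache for an hour
blob.UploadFile(@"C:\content\intro.wmv");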

We like the Fiddler diagnostics tool for testing purposes. It allows us to see the headers returned on each request as well as the time taken. Compare the two screenshots in Figures 3A and 3B. Figure 3A shows the file downloaded from the blob storage endpoint.

Figure 3A: File downloaded from the blob storage endpoint

Note the elapsed time of just over 6 seconds (this is from the US South Central Windows Azure data center to New Zealand). Figure 3B shows the file downloaded from the CDN. Note that this took just 2.3 seconds.

Figure 3B: File downloaded from the CDN

The traceroute reports for the file transfers tell the whole story. As you can see in Figure 4A, from Wellington, New Zealand, the request to the blob endpoint travels first to Auckland, New Zealand, before making the long journey across the Pacific Ocean to Los Angeles and on, over several more hops, to the Windows Azure data center.

Figure 4A: Traceroute report for file downloaded from the blob storage endpoint

As shown in Figure 4B, the request to the CDN instead travels from Auckland across the Tasman Sea to Sydney, Australia, and is served from a data center there.

There is no doubt that the Windows Azure CDN can provide significant performance improvements for web-based applications. It is useful even for applications that may not be running in Windows Azure compute. Files can be pushed from on-premises applications to blob storage and then subsequently served from the CDN.

Working Smarter with Azure

We've provided some insights into Windows Azure blob storage that we hope will help you better understand it and work smarter and more efficiently with blob storage in your Silverlight client applications. Download the sample code and try these techniques for yourself.

Chris Auld is Director of Strategy and Innovation at Intergen, an Australasian Microsoft Gold Partner. Chris is a regular presenter on Windows Azure application architecture at events around the world. Chris is a Microsoft MVP and Regional Director and blogs at www.syringe.net.nz.

Chris Klug is a Silverlight Solutions Specialist at Intergen and identifies himself as a devigner—that's a developer/designer hybrid. Chris blogs extensively on Silverlight and Windows Phone 7 at chris.59north.com.
