Wednesday, 10 October 2012

Next Stop: Aditi Technologies

I am excited to announce that I have officially joined Aditi Technologies as Director of Product Services.  Taking what I have learned building large-scale solutions in Windows Azure, I will be responsible for building Aditi's own portfolio of SaaS services and IP/frameworks.  We have a number of exciting projects underway, and I hope to blog more about what we are building soon.

Along with this move, I get to rejoin Wade (now my boss!) and Steve as well as some of my former Cumulux colleagues.  I took this role because I see a great opportunity to build software and services in the 'cloud' and I am convinced that Aditi has been making the right investments.  It doesn't hurt at all that I get to work with top-notch technologists either.

Along the way, I plan to build a team to deliver on these cloud services.  If you think you have what it takes to build great software, send me a note and your resume.  Thanks!



Sunday, 19 February 2012

Monitoring in Windows Azure

For the last year since leaving Microsoft, I have been deeply involved in building a world-class SaaS monitoring service called AzureOps.  During this time, I inevitably learned not only how best to monitor services running in Windows Azure, but also the common pitfalls among our beta users.  It is one thing to be a Technical Evangelist like I was and occasionally use a service for a demo or two, and quite another to attempt to build a business on it.

Monitoring a running service in Windows Azure can actually be daunting if you have not worked with it before.  In this coming series, I will attempt to share the knowledge we have gained building AzureOps and working with our customers.  The series will be grounded in these four areas:

  1. Choosing what to monitor in Windows Azure
  2. Getting the diagnostics data from Windows Azure
  3. Interpreting diagnostics data and making adjustments
  4. Maintaining your service in Windows Azure

Each one of these areas will be a post in the series and I will update this post to keep a link to the latest.  I will use AzureOps as an example in some cases to highlight both what we learned as well as the approach we take now due to this experience.

If you are interested in monitoring your own services in Windows Azure, grab an invite and get started today!

Wednesday, 24 August 2011

Handling Continuation Tokens in Windows Azure - Gotcha

I spent the last few hours debugging an issue where a query in Windows Azure table storage was not returning any results, even though I knew the data was there.  It didn't start that way, of course.  Rather, code that had previously been working simply stopped working.  Tracing through and debugging showed me it was a case of a method not returning data when it should have.

Now, I have known for quite some time that you must handle continuation tokens and can never assume that a query will always return data on the first response (Steve talks about it waaaay back when here).  However, what I did not know was that different methods of enumeration will give you different results.  Let me explain by showing the code.

var q = this.CreateQuery()
    .Where(f => f.PartitionKey.CompareTo(start.GetTicks()) > 0)
    .AsTableServiceQuery();

var first = q.FirstOrDefault();
if (first != null)
    return new DateTime(long.Parse(first.PartitionKey));

In this scenario, you would assume that you have continuation tokens nailed because you have the magical AsTableServiceQuery extension method in use.  It will magically chase the tokens until conclusion for you.  However, this code does not work!  It will actually return null in cases where you do not hit the partition server that holds your query results on the first try.

I could easily reproduce the query in LINQPad:

var q = ctx.CreateQuery<Foo>("WADPerformanceCountersTable")
    .Where(f => f.RowKey.CompareTo("9232a4ca79344adf9b1a942d37deb44a") > 0 && f.RowKey.CompareTo("9232a4ca79344adf9b1a942d37deb44a__|") < 0)
    .Where(f => f.PartitionKey.CompareTo(DateTime.Now.AddDays(-30).GetTicks()) > 0);

Yet, this query worked perfectly.  I got exactly 1 result, as I expected.  I was pretty stumped for a bit, then I realized what was happening.  You see, FirstOrDefault will not trigger the full enumeration required to generate the necessary two round-trips to table storage (the first one returns the continuation token, the second returns the results).  It just will not force the continuation token to be chased.  It turns out to be a pretty simple fix:

var first = q.AsEnumerable().SingleOrDefault();

Hours wasted for that one simple line fix.  Hope this saves someone the pain I just went through.
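For reference, the same fix can be made explicit by forcing the query to execute before picking a result. This is a sketch assuming the v1.x StorageClient, where CloudTableQuery<T>.Execute() chases continuation tokens as you enumerate:

```csharp
// Sketch: force full server-side enumeration so the storage client
// chases continuation tokens before we pick a result.
var q = this.CreateQuery()
    .Where(f => f.PartitionKey.CompareTo(start.GetTicks()) > 0)
    .AsTableServiceQuery();

// Execute() walks every continuation token as the result is enumerated,
// so FirstOrDefault over it sees the real result set.
var first = q.Execute().FirstOrDefault();
```

Either way, the key is that something has to actually drive the enumeration across partition server boundaries.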

Thursday, 14 July 2011

How to Diagnose Windows Azure "Error Attaching Debugger" Errors

I was working on a Windows Azure website solution the other day and suddenly started getting this error when I tried to run the site with a debugger:


This error is one of the hardest to diagnose.  Typically, it means that there is something crashing in your website before the debugger can attach.  A good candidate to check is your global.asax to see if you have changed anything there.  I knew that the global.asax had not been changed, so it was puzzling.  Naturally, I took the normal course of action:

  1. Ran the website without debugging inside the emulator.
  2. Ran the website with and without debugging outside the emulator.
  3. Tried it on another machine.

None of these methods gave me any clue what the issue was, as they all worked perfectly fine.  It was killing me that it only happened when debugging inside the emulator, and only on one machine (the one I really wanted to work).  I was desperately looking for a solution that did not involve rebuilding the machine.  I turned on SysInternals' DebugView to see if there were debug messages telling me what the problem was.  I saw a number of interesting things, but nothing that really stood out as the source of the error.  However, I did notice the process ID of what appeared to be reporting errors:


Looking at Process Explorer, I found this was for DFAgent.exe (the Dev Fabric Agent).  I could see that it was starting with an environment variable, so I took a look at where that was happening:


That gave me a direction to start looking.  I opened the %UserProfile%\AppData\Local\Temp directory and found a conveniently named file there called Visual Studio Web Debugger.log. 


A quick look showed it to be HTML, so one rename later and voilà!


One of our developers had overridden the <httpErrors> setting in web.config, which was disallowed on my one machine.  I opened my applicationHost.config using an elevated (administrator) Notepad and sure enough:


So, the moral of the story is: next time, just take a look at this log file and you might find the issue.  I suspect the reason this only happened on debug, and not when running without the debugger, is that the debugger looks for a file called debugattach.aspx.  Since this file does not exist on my machine, it throws a 404, which in turn tries to access the <httpErrors> setting, which culminates in the 500.19 server error.  I hope this saves someone the many hours I spent finding it, and I hope it prevents you from rebuilding your machine as I almost did.

Tuesday, 17 May 2011

Windows Azure Bootstrapper

One of the common patterns I have noticed across many customers is the desire to download a resource from the web and execute it as part of the bootup process for a Windows Azure web or worker role.  The resource could be any number of things (e.g. exe, zip, msi, cmd, etc.), but typically is something that the customer does not want to package directly in the deployment package (cspkg) for size or update/maintainability reasons.

While the pattern is pretty common, the ways to approach the problem can certainly vary.  A more complete solution will need to deal with the following issues:

  • Downloading from arbitrary http(s) sources or Windows Azure blob storage
  • Logging
  • Parsing configuration from the RoleEnvironment or app.config
  • Interacting with the RoleEnvironment to get ports, DIP addresses, and Local Resource paths
  • Unzipping resources
  • Launching processes
  • Ensuring that resources are only installed once (or downloaded and unzipped once)
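The "install once" requirement in particular is easy to get wrong. As a rough illustration of the overall pattern the Bootstrapper automates (all names here are hypothetical, not the tool's actual API):

```csharp
// Hypothetical sketch of the bootstrap pattern: download a resource to a
// Local Resource path, run it, and use a marker file so it only happens once.
public static void BootstrapOnce(string blobUrl, string exeName, string args)
{
    var root = RoleEnvironment.GetLocalResource("temp").RootPath;
    var marker = Path.Combine(root, exeName + ".installed");
    if (File.Exists(marker))
        return;  // already installed on this instance

    var target = Path.Combine(root, exeName);
    new WebClient().DownloadFile(blobUrl, target);

    using (var proc = Process.Start(target, args))
    {
        proc.WaitForExit();
    }
    File.WriteAllText(marker, DateTime.UtcNow.ToString("o"));
}
```

The real tool layers logging, configuration parsing, unzipping, and blob-storage authentication on top of this basic loop.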

With these goals in mind, we built the Windows Azure Bootstrapper.  It is a pretty simple tool to use and requires only packaging of the .exe and the .config file itself in your role.  With these two items in place, you can then script out fairly complicated installations.  For example, you could prepare your roles with MVC3 using a command like this:

bootstrapper.exe -get -lr $lr(temp) -run $lr(temp)\AspNetMVC3ToolsUpdateSetup.exe -args /q

Check out the project page for more examples, but the possibilities are pretty endless here.  One customer uses the Bootstrapper to download agents and drivers from their blob storage account to install at startup for their web and worker roles.  Other folks use it to simply copy files out of blob storage and lay them out correctly on disk.

Of course, none of this would be available in the community if not for the great guys working at National Instruments.  They allowed us to take this code written for them and turn it over to the community.

Enjoy and let us know your feedback or any bugs you find.

Tuesday, 11 January 2011

A new Career Path

As some of you may know, I recently left Microsoft after almost 4 years.  It was a wonderful experience and I met and worked with some truly great people there.  I was lucky enough to join at the time when the cloud was just getting started at Microsoft.  Code words like CloudDB, Strata, Sitka, Red Dog, and others I won't mention here flew around like crazy.

I started as the SQL Server Data Services (SSDS) evangelist and later became the Windows Azure Evangelist (after SSDS eventually became SQL Azure).  For the last 2 years, Windows Azure has been my life and it was great seeing a technology very rapidly grow from CTP to a full featured platform in such a short time.

Well, I am not moving far from Windows Azure - I still strongly believe in what Microsoft has done here.  I recently joined Cumulux as Director of Cloud Services.  I will be driving product strategy, some key customer engagements, and take part in the leadership team there.  Cumulux is a close Microsoft partner with a Windows Azure focus, so it is a great fit for me for both personal as well as professional reasons.

While I will miss the great folks I got to work with at Microsoft and being in the thick of everything, I am very excited to begin this new career path with Cumulux.

Sunday, 19 December 2010

Using WebDeploy with Windows Azure

Update:  Looks like Wade also blogged about this, showing a similar way with some early scripts I wrote.  However, there are some important differences around versioning of WebDeploy and creating users that you should pay attention to here.  Also, I am using plugins, which are much easier to consume.

One of the features announced at PDC was the coming ability to use the standard IIS7 WebDeploy capability with Windows Azure.  This is exciting news, but it comes with an important caveat.  This feature is strictly for development purposes only.  That is, you should not expect anything you deploy using WebDeploy to persist long-term on your running Windows Azure instances.  If you are familiar with Windows Azure, you know the reason: any changes to the OS post-startup are not durable.

So, the use case for WebDeploy in Windows Azure is in the case of a single instance of a role during development.  The idea is that you would:

  1. Deploy your hosted service (using the .cspkg upload and deploy) with a single instance with WebDeploy enabled.
  2. Use WebDeploy on subsequent deploys for instant feedback
  3. Deploy the final version again using .cspkg (without WebDeploy enabled) so the results are durable with at least 2 instances.

The Visual Studio team will shortly be supplying some tooling to make this scenario simple.  However, in the interim, it is relatively straightforward to implement this yourself and do what the tooling will eventually do for you.

If you look at the Windows Azure Training Kit, you will find the Advanced Web and Worker Lab Exercise 3 and it will show you the main things to get this done.  You simply need to extrapolate from this to get the whole thing working.

We will be using Startup Tasks to perform a little bit of bootstrapping when the role instance (note the singular) starts in order to use Web Deploy.  This will be implemented using the Role Plugin feature to make this a snap to consume.  Right now, the plugins are undocumented, but if you peruse your SDK bin folder, it won't be too hard to figure out how they work.

The nice thing about using a plugin is that you don't need to litter your service definition with anything in order to use the feature.  You simply need to include a single "Import" element and you are done!

Install the Plugin

In order to use this feature, simply extract the contents of the zip file into "%programfiles%\Windows Azure SDK\v1.3\bin\plugins\WebDeploy".  You might have to extract the files locally in your profile first and copy them into this location if you run UAC.  At the end of the day, you need a folder called "WebDeploy" in this location with all the files in this zip in it.

Use the Plugin

To use the plugin you simply need to add one line to your Service Definition:

      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
      <Import moduleName="WebDeploy"/>

Notice, in my example, I am also including the Remote Access and Remote Forwarder plugins as well.  You must have RemoteAccess enabled to use this plugin as we will rely on the user created here.  The Remote Forwarder is required on one role in your solution.

Next, you should hit publish on the Windows Azure project (not Web Site) and setup an RDP user.  We will be using this same user later in order to deploy because by default the RDP user is also an Admin with permission to use Web Deploy.  Alternatively, you could create an admin user with a Startup Task, but this method is easier (and you get RDP).  If you have not setup the certificates for RDP access before, it is a one time process outlined here.





Now, you publish this to Windows Azure using the normal Windows Azure publishing process:


That's it.  Your running instance has WebDeploy and the IIS Management Service running now.  All you had to do was import the WebDeploy plugin and make sure you also used RDP (which most developers will enable during development anyway).

At this point, it is a pretty simple matter to publish using WebDeploy.  Just right click the Web Site (not the Cloud Project) and hit publish:


You will need to type in the name of your project in the Service URL.  By default, it will assume port 8172; if you have chosen a different public VIP port (by editing the plugin -see *hint below), here is where you need to update it (using the full https:// syntax).

Next, you need to update the Site/Application.  The format for this is ROLENAME_IN_0_SITENAME, so you need to look in your Service Definition to find this:


Notice, in my example, my ROLENAME is "WebUX" and the Site name is "Web".  This will differ for each Web Role potentially, so make sure you check this carefully.

Finally, check the "Allow untrusted certificate" option and use the same RDP username and password you created on deploy.  That's all.  It should be possible for you to use Web Deploy now with your running instance:  Hit Publish and Done!

How it works

If you check the plugin, you will see that we use 2 scripts and WebPI to do this.  Actually, this is the command line version of WebPI, so it will run without prompts.  You can also download it, but it is packaged already in the plugin.

The first script, called EnableWebAdmin.cmd simply enables the IIS Management service and gets it running.  By default, this sets up the service to run and listen on port 8172.  If you check the .csplugin file, you will notice we have opened that port on the Load Balancer as well.

start /w ocsetup IIS-ManagementService
reg add HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\WebManagement\Server /v EnableRemoteManagement /t REG_DWORD /d 1 /f
net start wmsvc
sc config WMSVC start= auto
exit /b 0

*Hint: if you work at a place like Microsoft that prevents ports other than 443 to host SSL traffic, here would be a good place to map 443 to 8172 in order to deploy.  For most normal environments, this is not necessary.
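For orientation, that load-balancer mapping lives in the plugin's .csplugin file. An illustrative sketch of the relevant entry (element names follow the SDK's built-in plugins; the actual file in the zip may differ slightly):

```xml
<?xml version="1.0" ?>
<RoleModule xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <Startup>
    <Task commandLine="EnableWebAdmin.cmd" executionContext="elevated" taskType="simple" />
  </Startup>
  <Endpoints>
    <!-- change port="8172" here to remap the public VIP port (e.g. to 443) -->
    <InputEndpoint name="WebDeployService" protocol="tcp" port="8172" localPort="8172" />
  </Endpoints>
</RoleModule>
```

The localPort stays at 8172 (where the IIS Management Service listens); only the public port on the load balancer changes.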

Next, we have the proper WebDeploy installation command using WebPI command line.

@echo off
ECHO "Starting WebDeploy Installation" >> log.txt
"%~dp0webpicmd\WebPICmdLine.exe" /Products: WDeploy /xml: /log:webdeploy.txt
ECHO "Completed WebDeploy Installation" >> log.txt
net stop wmsvc
net start wmsvc

You will notice one oddity here - namely, we are using the WebPI 2.0 feed to install an older version of WebDeploy (v1.1).  If you leave this off, it will default to the 3.0 version of WebPI that uses the Web Deploy 2.0 RC.  In my testing, Web Deploy 2.0 RC only works sporadically and usually not at all.  The 1.1 version is well integrated in VS tooling, so I would suggest this one instead until 2.0 works better.

Also, you will notice that I am stopping and restarting the IIS Management Service here as well.  In my testing, WebDeploy was unreliable on the Windows Server 2008 R2 family (osFamily=2).  Stopping and restarting the service seems to fix it.

Final Warning

Please bear in mind that this method is only for prototyping and rapid development.  Since the changes are not persisted, they will disappear the next time your instance is healed by the fabric controller.  Eventually, the Visual Studio team will release their own plugin that does essentially the same thing and you can stop using this.  In the meantime, have fun!

Tuesday, 30 November 2010

Using Windows Azure MMC and Cmdlet with Windows Azure SDK 1.3

If you haven't already read it on the official blog, Windows Azure SDK 1.3 was released (along with Visual Studio tooling).  Rather than rehash what it contains, go read that blog post if you want to know what is included (and there are lots of great features).  Even better, go watch the videos on the specific features.

If you are a user of the Windows Azure MMC or the Windows Azure Service Management Cmdlets, you might notice that things break on new installs.  The underlying issue is that the MMC snap-in was written against the 1.0 version of the Storage Client.  With the 1.3 SDK, the Storage Client is now at 1.1.  This means you can fix it in two ways:

  1. Copy an older 1.2 SDK version of the Storage Client into the MMC's "release" directory.  Of course, you need to either have the .dll handy or go find it.  If you have the MMC installed prior to SDK 1.3, you probably already have it in the release directory and you are good to go.
  2. Use Assembly Redirection to allow the MMC or Powershell to call into the 1.1 client instead of the 1.0 client.

To use Assembly Redirection, just create a "mmc.exe.config" file for MMC (or "Powershell.exe.config" for cmdlets) and place it in the %windir%\system32 directory.  Inside the .config file, just use the following xml:

      <configuration>
        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
              <assemblyIdentity name="Microsoft.WindowsAzure.StorageClient"
                                publicKeyToken="31bf3856ad364e35"
                                culture="neutral" />
              <bindingRedirect oldVersion="1.0.0.0" newVersion="1.1.0.0" />
              <publisherPolicy apply="no" />
            </dependentAssembly>
          </assemblyBinding>
        </runtime>
      </configuration>

Tuesday, 16 November 2010

Back from PDC and TechEd Europe

I am back from TechEd EU (and PDC)!  It was a long and exciting few weeks, first getting ready for PDC and then flying to Germany to deliver a couple of sessions.  It was a great, if hectic, experience.  For PDC, I had a part to play in the keynote demos as well as the PDC Workshop on the following Saturday.  Those two things together took more time than I would care to admit.  Once PDC was a wrap, I got to start building my talks for TechEd EU.

You can find both of my talks on the TechEd Online Site and embedded below.  The embedding doesn't work great on the blog because of layout issues I need to fix, but RSS readers should have a better time.

Thanks to everyone who attended these sessions in person, and for all the nice comments.  I have almost completely recovered my voice at this point - my cold lasted just the duration of the trip, of course!




Friday, 15 October 2010

Load Testing on Windows Azure

For the 30th Cloud Cover show, Steve and I talked about coordinating work.  It is not an entirely new topic, as there have been a couple of variations on it before.  However, the solution I coded is different from both of those.  I chose to simply iterate through all the available instances and fire a WCF message to each one.  I could have done this in parallel or used multi-casting (probably more elegant), but for simplicity's sake: it just works.

The reason I needed to coordinate work was in context of load testing a site.  Someone had asked me about load testing in Windows Azure.  The cloud is a perfect tool to use for load testing.   Where else can you get a lot of capacity for a short period of time?  With that thought in mind, I searched around to find a way to generate a lot of traffic quickly.

I built a simple 2 role solution where a controller could send commands to a worker and the worker in turn would spawn a load testing tool.  For this sample, I chose a tool called ApacheBench.  You should watch the show to understand why I chose ApacheBench instead of the more powerful WCAT.


Get the source - It's Hammer Time!


PS: Big props go to Steve for AJAX'ing the source upside the head.

Thursday, 14 October 2010

Sticky HTTP Session Routing in Windows Azure

I have been meaning to update my blog for some time with the routing solution I presented in Cloud Cover 24.  One failed laptop hard drive later and, well, I lost my original post and didn't get around to replacing it until now.  Anyhow, enough of my sad story.

One of the many great benefits of using Windows Azure is that networking is taken care of for you.  You get connected to a load balancer for all your input endpoints and we automatically round-robin the requests to all your instances that are declared to be part of the role.  Visually, it is pretty easy to see:


However, this introduces a slight issue common to all web farm scenarios: state doesn't work very well.  If your service relies on communicating with exactly the same machine as last time, you are in trouble.  In fact, this is why you will always hear an overarching theme of designing to be stateless in the cloud (or in any scale-out architecture).

There are inevitably scenarios where sticky sessions (routing when possible to the same instance) will benefit a given architecture.  We talked a lot about this on the show.  In order to produce sticky sessions in Windows Azure, you need a mechanism to route to a given instance.  For the show, I talked about how to do this with a simple socket listener.



We introduce a router between the load balancer and the web roles in this scenario.  The router peeks at (1) the incoming headers to determine, via cookies or other logic, where to route the request.  It then forwards the traffic over sockets to the appropriate server (2).  For return responses, the router optionally peeks at that stream as well to determine whether to inject the cookie (3).  Finally, if necessary, the router injects a cookie that states where the user should be routed on subsequent requests (4).

I should note that during the show we talked about how this cookie injection could work, but the demo did not use it.  For this post, I went ahead and updated the code to actually perform the cookie injection.  This allows me to have a stateless router as well.  In the demo on the show, I had a router that simply used a dictionary lookup of SessionIDs to Endpoints.  However, we pointed out that this was problematic without externalizing that state somewhere.  A simple solution is to have the client deliver to the router via a cookie where it was last connected.  This way the state is carried on each request and is no longer the router's responsibility.
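To make the idea concrete, here is a rough sketch of the cookie check on the request side (the cookie name and helper are mine, not from the Code Gallery sample):

```csharp
// Sketch: peek the raw HTTP headers and pull out the instance the
// client was last routed to. Cookie name "RouterInstance" is hypothetical.
static string GetRoutedInstance(string rawHeaders)
{
    const string cookieName = "RouterInstance=";
    foreach (var line in rawHeaders.Split(new[] { "\r\n" }, StringSplitOptions.RemoveEmptyEntries))
    {
        if (!line.StartsWith("Cookie:", StringComparison.OrdinalIgnoreCase))
            continue;
        var idx = line.IndexOf(cookieName, StringComparison.Ordinal);
        if (idx >= 0)
        {
            var value = line.Substring(idx + cookieName.Length);
            var end = value.IndexOf(';');
            return (end >= 0 ? value.Substring(0, end) : value).Trim();
        }
    }
    return null;  // no cookie yet - pick an instance and inject the cookie on the response
}
```

A null result means a first-time client: the router picks an instance (round-robin or otherwise) and sets the cookie on the way back, which is what keeps the router itself stateless.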

You can download the code for this from Code Gallery.  Please bear in mind that this was only lightly tested and I make no guarantees about production worthiness.

Friday, 16 July 2010

Getting the Content-Type from your Registry

For Episode 19 of the Cloud Cover show, Steve and I discussed the importance of setting the Content-Type on your blobs in Windows Azure blob storage.  This was especially important for Silverlight clients.  I mentioned that there is a way to look up a Content-Type from your registry as opposed to hardcoding a list.  The code is actually pretty simple.  I pulled this from some code I had lying around that does uploads.

Here it is:

private static string GetContentType(string file)
{
    string contentType = "application/octet-stream";
    string fileExt = System.IO.Path.GetExtension(file).ToLowerInvariant();
    RegistryKey fileExtKey = Registry.ClassesRoot.OpenSubKey(fileExt);
    if (fileExtKey != null && fileExtKey.GetValue("Content Type") != null)
    {
        contentType = fileExtKey.GetValue("Content Type").ToString();
    }
    return contentType;
}

Friday, 11 June 2010

PowerScripting Podcast

Last week, I had the opportunity to talk with Hal and Jonathan on the PowerScripting podcast about Windows Azure.  It was a fun chat - lots on Windows Azure, a bit on the WASM cmdlets and MMC, and it revealed my favorite comic book character.

Listen to it now.

Friday, 28 May 2010

Hosting WCF in Windows Azure

This post is a bit overdue:  Steve threatened to blog it himself, so I figured I should get moving.  In one of our Cloud Cover episodes, we covered how to host WCF services in Windows Azure.  I showed how to host both publicly accessible services as well as internal WCF services that are only visible within a hosted service.

In order to host an internal WCF service, you need to set up an internal endpoint and use inter-role communication.  The difference between this and hosting an external WCF service on an input endpoint is mainly that internal endpoints are not load-balanced, while input endpoints are hooked to the load balancer.
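For reference, declaring the internal endpoint is a one-line addition to the service definition. The role and endpoint names below match the sample code in this post:

```xml
<WorkerRole name="WorkerHost">
  <Endpoints>
    <!-- no port: the fabric assigns the IP and port at runtime -->
    <InternalEndpoint name="EchoService" protocol="tcp" />
  </Endpoints>
</WorkerRole>
```

Because no port is specified, the code has to ask the RoleEnvironment for the assigned IPEndpoint at runtime.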

Hosting an Internal WCF Service

Here you can see how simple it is to actually get the internal WCF service up and listening.  Notice that the only thing that is different is that the base address I pass to my ServiceHost contains the internal endpoint I created.  Since the port and IP address I am running on is not known until runtime, you have to create the host and pass this information in dynamically.

public override bool OnStart()
{
    // Set the maximum number of concurrent connections
    ServicePointManager.DefaultConnectionLimit = 12;

    // For information on handling configuration changes
    // see the MSDN topic at
    RoleEnvironment.Changing += RoleEnvironmentChanging;

    StartWCFService();

    return base.OnStart();
}

private void StartWCFService()
{
    // the internal endpoint's IP and port are only known at runtime
    var ip = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["EchoService"].IPEndpoint;
    var baseAddress = String.Format("net.tcp://{0}", ip);

    var host = new ServiceHost(typeof(EchoService), new Uri(baseAddress));
    host.AddServiceEndpoint(typeof(IEchoService), new NetTcpBinding(SecurityMode.None), "echo");
    host.Open();
}

Consuming the Internal WCF Service

From another role in my hosted service, I want to actually consume this service.  From my code-behind, this was all the code I needed to actually call the service.

protected void Button1_Click(object sender, EventArgs e)
{
    var factory = new ChannelFactory<WorkerHost.IEchoService>(new NetTcpBinding(SecurityMode.None));
    var channel = factory.CreateChannel(GetRandomEndpoint());
    Label1.Text = channel.Echo(TextBox1.Text);
}

private EndpointAddress GetRandomEndpoint()
{
    var endpoints = RoleEnvironment.Roles["WorkerHost"].Instances
        .Select(i => i.InstanceEndpoints["EchoService"])
        .ToArray();
    var r = new Random(DateTime.Now.Millisecond);
    return new EndpointAddress(
        String.Format("net.tcp://{0}/echo",
            endpoints[r.Next(endpoints.Length)].IPEndpoint));
}

The only bit of magic here was querying the fabric to determine all the endpoints in the WorkerHost role that implemented the EchoService endpoint and routing a request to one of them randomly.  You don't have to route requests randomly per se, but I did this because internal endpoints are not load-balanced.  I wanted to distribute the load evenly over each of my WorkerHost instances.

One tip that I found out is that there is no need to cache the IPEndpoint information you find.  It is already cached in the API call.  However, you may want to cache your ChannelFactory according to best practices (unlike me).
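For completeness, a quick sketch of what caching the factory would look like (hypothetical helper, following general WCF guidance rather than the show's sample):

```csharp
// Sketch: cache the ChannelFactory (expensive to build) and hand out
// cheap per-call channels, closing each one when done.
static readonly ChannelFactory<WorkerHost.IEchoService> Factory =
    new ChannelFactory<WorkerHost.IEchoService>(new NetTcpBinding(SecurityMode.None));

string Echo(string text)
{
    var channel = Factory.CreateChannel(GetRandomEndpoint());
    try
    {
        return channel.Echo(text);
    }
    finally
    {
        ((IClientChannel)channel).Close();
    }
}
```

The factory is safe to share across calls; only the channels are per-request.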

Hosting Public WCF Services

This is all pretty easy as well.  The only trick is that you need to apply a new behavior that knows how to deal with the load balancer for proper MEX endpoint generation.  Additionally, you need to include a class attribute on your service to deal with an address filter mismatch issue.  This is pretty well documented, along with links to download the QFE that contains the behavior patch, in the Known Issues section of the WCF Azure Samples project on Code Gallery.  Jim Nakashima also posted about this in detail on his blog, so I won't dig into it again here.

Lastly, if you just want the code from the show, have at it!

Monday, 10 May 2010

Windows Azure MMC v2 Released

I am happy to announce the public release of the Windows Azure MMC - May Release.  It is a very significant upgrade to the previous version on Code Gallery.  So much, in fact, I tend to unofficially call it v2 (it has been called the May Release on Code Gallery).  In addition to all-new and faster storage browsing capabilities, we have added service management as well as diagnostics support.  We have also rebuilt the tool from the ground up to support extensibility.  You can replace or supplement our table viewers, log viewers, and diagnostics tooling with your own creation.

This update has been in the pipeline for a very long time.  It was actually finished and ready to go in late January.  Given the amount of code we had to invest to produce this tool, however, we had to go through a lengthy legal review and produce a new EULA.  As such, you may notice that we are no longer offering the source code to the MMC snap-in itself in this release.  Included in this release is the source for the WASM cmdlets, but not for the MMC or the default plugins.  In the future, we hope to be able to release the source code in its entirety.


Features At A Glance:


  • Hosted Services - Upload / configure / control / upgrade / swap / remove Windows Azure application deployments
  • Diagnostics - Configure instrumentation for Windows Azure applications per source (perf counters, file-based logs, app logs, infrastructure logs, event logs).  Transfer the diagnostic data on-demand or on a schedule.  View / analyze / export to Excel and clear instrumentation results.
  • Certificates - Upload / manage certificates for Windows Azure applications
  • Storage Services - Configure storage services for Windows Azure applications
  • BLOBs and Containers - Add / upload / download / remove BLOBs and containers and connect to multiple storage accounts
  • Queues - Add / purge / delete Windows Azure queues
  • Tables - Query and delete Windows Azure tables
  • Extensibility - Create plugins for rich diagnostics data visualization (e.g. add your own visualizer for performance counters).  Create plugins for table viewers and editors, or add completely new modules!  The plugin engine uses MEF (the Managed Extensibility Framework) to easily add functionality.
  • PowerShell-based backend - The backend is based on PowerShell cmdlets.  If you don't like our UI, you can still use the underlying cmdlets and script out anything we do.


How To Get Started:

There are so many features and updates in this release that I have prepared a quick 15-minute screencast showing how to get started managing your services and diagnostics in Windows Azure today!

Friday, 05 March 2010

Calculating the Size of Your SQL Azure Database

In Episode 3 of Cloud Cover, I mentioned that the tip of the week was how to measure your database size in SQL Azure.  Here are the exact queries you can run to do it:

select sum(reserved_page_count) * 8.0 / 1024
from sys.dm_db_partition_stats

select sys.objects.name, sum(reserved_page_count) * 8.0 / 1024
from sys.dm_db_partition_stats, sys.objects
where sys.dm_db_partition_stats.object_id = sys.objects.object_id
group by sys.objects.name

The first one will give you the size of your database in MB and the second one will do the same, but break it out for each object in your database.
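If you want to run the size query from a script instead of SSMS, here is a quick PowerShell sketch using ADO.NET (the server, database, and credential values below are placeholders - substitute your own):

```powershell
# Hypothetical connection details - replace with your own SQL Azure values
$connStr = "Server=tcp:yourserver.database.windows.net;Database=yourdb;" +
           "User ID=youruser@yourserver;Password=yourpassword;Encrypt=True;"

$conn = New-Object System.Data.SqlClient.SqlConnection($connStr)
$conn.Open()

$cmd = $conn.CreateCommand()
$cmd.CommandText = "select sum(reserved_page_count) * 8.0 / 1024 from sys.dm_db_partition_stats"

# ExecuteScalar returns the single value - the database size in MB
$sizeMB = $cmd.ExecuteScalar()
$conn.Close()

"Database size: {0:N2} MB" -f $sizeMB
```

This only requires .NET's built-in SqlClient, so it works from any machine that can reach your SQL Azure firewall.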

Hat tip to David Robinson and Tony Petrossian on the SQL Azure team for the query.

Wednesday, 17 February 2010

WASM Cmdlets Updated


I am happy to announce the updated release of the Windows Azure Service Management (WASM) Cmdlets for PowerShell today. With these cmdlets you can effectively automate and manage all your services in Windows Azure. Specifically,

  • Deploy new Hosted Services
    • Automatically upload your packages from the file system to blob storage.
  • Upgrade your services
    • Choose between automatic or manual rolling upgrades
    • Swap between staging and production environments
  • Remove your Hosted Services
    • Automatically pull down your services at the end of the day to stop billing. This is a critical need for test and development environments.
  • Manage your Storage accounts
    • Retrieve or regenerate your storage keys
  • Manage your Certificates
    • Deploy certificates from your Windows store or the local filesystem
  • Configure your Diagnostics
    • Remotely configure the event sources you wish to monitor (Event Logs, Tracing, IIS Logs, Performance Counters and more)
  • Transfer your Diagnostics Information
    • Schedule your transfers or Transfer on Demand.


Why did we build this?

The WASM cmdlets were built to unblock adoption for many of our customers as well as serve as a common underpinning to our labs and internal tooling. There was an immediate demand for an automation API that would fit into the standard toolset for IT Pros. Given the adoption and penetration of PowerShell, we determined that cmdlets focused on this core audience would be the most effective way forward. Furthermore, since PowerShell is a full scripting language with complete access to .NET, this allows these cmdlets to be used as the basis for very complicated deployment and automation scripts as part of the application lifecycle.

How can you use them?

Every call to the Service Management API requires an X509 certificate and the subscription ID for the account. To get started, you need to upload a valid certificate to the portal and have it installed locally to your workstation. If you are unfamiliar with how to do this, you can follow the procedure outlined on the Windows Azure Channel9 Learning Center here.
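If you do not already have a management certificate, one quick option (a sketch - the subject name here is arbitrary) is to create a self-signed certificate with makecert from the SDK tools and then look up its thumbprint:

```powershell
# Create a self-signed management certificate in the CurrentUser\My store
# (run from a Visual Studio command prompt so makecert.exe is on the path)
makecert -r -pe -n "CN=AzureMgmt" -ss My -sky exchange

# List certificates so you can copy the thumbprint for use with the cmdlets
Get-ChildItem cert:\CurrentUser\My | Format-Table Thumbprint, Subject
```

You would then export the public portion (.cer) of that certificate and upload it on the portal so the Service Management API will accept it.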

Here are a few examples of how to use the cmdlets for a variety of common tasks:

Common Setup

Each script referenced below will refer to the following variables:

Add-PSSnapin AzureManagementToolsSnapIn

#get your local certificate for authentication

$cert = Get-Item cert:\CurrentUser\My\<YourThumbPrint>

#subID from portal

$sub = 'c9f9b345-7ff5-4eba-9d58-0cea5793050c'

#your service name (without the .cloudapp.net suffix)

$service = 'yourservice'

#path to package (can also be http: address in blob storage)

$package = "D:\deploy\MyPackage.cspkg"

#configuration file

$config = "D:\deploy\ServiceConfiguration.cscfg"


Listing My Hosted Services

Get-HostedServices -SubscriptionId $sub -Certificate $cert


View Production Service Status

Get-HostedService $service -SubscriptionId $sub -Certificate $cert |

Get-Deployment 'Production' |

select RoleInstanceList -ExpandProperty RoleInstanceList |

ft InstanceName, InstanceStatus -GroupBy RoleName


Creating a new deployment

#Create a new Deployment

Get-HostedService $service -SubscriptionId $sub -Certificate $cert |

New-Deployment -Slot Production -Package $package -Configuration $config -Label 'v1' |

Get-OperationStatus -WaitToComplete

#Set the service to 'Running'

Get-HostedService $service -SubscriptionId $sub -Certificate $cert |

Get-Deployment 'Production'|

Set-DeploymentStatus 'Running' |

Get-OperationStatus -WaitToComplete


Removing a deployment

#Ensure that the service is first in Suspended mode

Get-HostedService $service -SubscriptionId $sub -Certificate $cert |

Get-Deployment 'Production'|

Set-DeploymentStatus 'Suspended' |

Get-OperationStatus -WaitToComplete

#Remove the deployment

Get-HostedService $service -SubscriptionId $sub -Certificate $cert |

Get-Deployment 'Production'|

Remove-Deployment



Upgrading a single Role

Get-HostedService $service -Certificate $cert -SubscriptionId $sub |

Get-Deployment -Slot Production |

Set-Deployment -mode Auto -roleName 'WebRole1' -package $package -label 'v1.2' |

Get-OperationStatus -WaitToComplete


Adding a local certificate

$deploycert = Get-Item cert:\CurrentUser\My\CBF145B628EA06685419AEDBB1EEE78805B135A2

Get-HostedService $service -SubscriptionId $sub -Certificate $cert |

Add-Certificate -CertificateToDeploy $deploycert |

Get-OperationStatus -WaitToComplete


Configuring Diagnostics - Adding a Performance Counter to All Running Instances

#get storage account name and key

$storage = "yourstorageaccount"

$key = (Get-StorageKeys -ServiceName $storage -Certificate $cert `

    -SubscriptionId $sub).Primary


$deployId = (Get-HostedService $service -SubscriptionId $sub `

    -Certificate $cert | Get-Deployment Production).DeploymentId


$counter = '\Processor(_Total)\% Processor Time'

$rate = [TimeSpan]::FromSeconds(5)


Get-DiagnosticAwareRoles -StorageAccountName $storage -StorageAccountKey $key `

-DeploymentId $deployId |

foreach {

    $role = $_

    Get-DiagnosticAwareRoleInstances $role -DeploymentId $deployId `

    -StorageAccountName $storage -StorageAccountKey $key |

    foreach {

        $instance = $_

        $config = Get-DiagnosticConfiguration -RoleName $role -InstanceId $_ `

            -StorageAccountName $storage -StorageAccountKey $key `

            -BufferName PerformanceCounters -DeploymentId $deployId

        $perf = New-Object Microsoft.WindowsAzure.Diagnostics.PerformanceCounterConfiguration `

            -Property @{CounterSpecifier=$counter; SampleRate=$rate}


        #add the new counter to the existing configuration

        $config.DataSources.Add($perf)

        $config.DataSources |

         foreach {

             Set-PerformanceCounter -PerformanceCounters $_ -RoleName $role `

             -InstanceId $instance -DeploymentId $deployId -StorageAccountName $storage `

             -StorageAccountKey $key

         }

    }

}




More Examples

You can find more examples and documentation on these cmdlets by typing 'Get-Help <cmdlet> -full' from the PowerShell cmd prompt.

If you have any questions or feedback, please send them directly to me through the blog (look at the right hand navigation pane for the Contact Ryan link).

Friday, 12 February 2010

Sharing Blobs in Windows Azure

Windows Azure storage makes use of a symmetric key authentication system.  Essentially, we take a 256-bit key and sign each HTTP request to the storage subsystem.  In order to access storage, you have to prove you know the key.  What this means, of course, is that you need to protect that key well.  It is an all-or-nothing scenario: if you have the key, you can do anything; if you don't possess the key, you can do nothing*.

The natural question for most folks when they understand this model is: how can I grant access to someone without compromising my master key?  The solution turns out to be something called Shared Access Signatures (SAS) for Windows Azure.  SAS works by specifying a few query string parameters, canonicalizing those parameters, hashing them, and including the signed hash in the query string.  This creates a unique URL that embeds not only the required access, but the proof that it was created by someone who knew the master key.  The parameters on the query string are:

  • st - this is the start time of when the signature becomes valid.  It is optional.  If not supplied, then now is implied.
  • se - this is the expiration date and time.  All signatures are time-bound and this parameter is required.
  • sr - this is the resource that the signature applies to and will be either (b)lob or (c)ontainer.  This is required.
  • sp - this is the permission set that you are granting - (r)ead, (w)rite, (d)elete, and (l)ist.  This is required.
  • si - this is a signed identifier or a named policy that can incorporate any of the previous elements. Optional.
  • sig - this is the signed hash of the querystring and URI that proves it was created with the master key.  It is required.
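To make the signing step concrete, here is a rough sketch of computing the signature yourself in PowerShell.  The account name, key, container, blob, and the exact string-to-sign layout are illustrative assumptions for the storage API of this era - consult the storage documentation for the authoritative format:

```powershell
# Illustrative account name and key - not real credentials
$account = "youraccount"
$key = [Convert]::FromBase64String("BASE64ACCOUNTKEY==")

# Assumed string-to-sign: permissions, start, expiry, canonicalized
# resource, and signed identifier, each separated by a newline
# (empty optional values still contribute their newline)
$stringToSign = "r`n" +                                  # sp: read only
                "`n" +                                   # st: omitted
                "2010-01-01T00:00:00Z`n" +               # se: expiry
                "/$account/mycontainer/myblob.txt`n" +   # resource
                ""                                       # si: no policy

# HMAC-SHA256 over the string-to-sign, keyed with the account key
$hmac = New-Object System.Security.Cryptography.HMACSHA256(,$key)
$sig = [Convert]::ToBase64String(
    $hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))

# The signature is URL-encoded and appended as the sig parameter
$url = "https://$account.blob.core.windows.net/mycontainer/myblob.txt" +
       "?se=2010-01-01T00:00:00Z&sr=b&sp=r&sig=" +
       [Uri]::EscapeDataString($sig)
```

The key point is that the service can recompute the same HMAC from the query string parameters, so a valid sig proves the URL was minted by a key holder.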

There is one other caveat that is important to mention here.  Unless you use a signed identifier - what I refer to as a policy - there is no way to create a signature that has a lifetime longer than an hour.  This is for good reason.  A SAS URL that was mistakenly created with an extremely long lifetime and without using the signed identifier (policy) could not be revoked without changing the master key on the storage account.  If that URL were to leak, your signed resource would be open to abuse for a potentially long time.  By making the longest lifetime of a signature only an hour, we have limited the window in which you are exposed.

If you want to create a longer-lived SAS, you must create a policy.  The policy is very interesting.  Because the policy can contain any of the parameters mentioned above and is stored at the service, it means that we can revoke those permissions or completely change them instantly.
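Programmatically, a policy is just a named entry in the container's permissions.  A rough sketch with the v1 StorageClient library from the SDK follows - the SDK path, account details, and container name are placeholders, and the exact class names should be checked against your SDK version:

```powershell
# Assumes the StorageClient assembly from the Windows Azure SDK
Add-Type -Path "C:\Program Files\Windows Azure SDK\v1.0\bin\Microsoft.WindowsAzure.StorageClient.dll"

$creds = New-Object Microsoft.WindowsAzure.StorageCredentialsAccountAndKey(
    "youraccount", "BASE64ACCOUNTKEY==")
$acct = New-Object Microsoft.WindowsAzure.CloudStorageAccount($creds, $true)
$client = New-Object Microsoft.WindowsAzure.StorageClient.CloudBlobClient(
    $acct.BlobEndpoint, $creds)
$container = $client.GetContainerReference("mycontainer")

# Define a read-only policy that expires in six months
$policy = New-Object Microsoft.WindowsAzure.StorageClient.SharedAccessPolicy
$policy.Permissions = [Microsoft.WindowsAzure.StorageClient.SharedAccessPermissions]::Read
$policy.SharedAccessExpiryTime = [DateTime]::UtcNow.AddMonths(6)

# Store it on the container as the signed identifier 'read'
$perms = $container.GetPermissions()
$perms.SharedAccessPolicies.Add("read", $policy)
$container.SetPermissions($perms)

# Later, removing the policy instantly invalidates every URL signed with it:
# $perms.SharedAccessPolicies.Remove("read")
# $container.SetPermissions($perms)
```

Because the policy lives on the container rather than in the URL, deleting or editing it is the revocation mechanism.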

Let's walk through an example where creating a SAS is trivial.  Once I log in to the site, I am going to select the BLOBs option from the navigation tabs on top.  Here I will see a list of all my containers.  I can select a container's Actions menu and click Manage Policies.


Next, I am going to create two policies (signed identifiers), called Read and Write.  These will have different expirations dates and permission sets.  Notice I am not specifying a Start Date, so they are immediately valid.



Next, I am going to select the Actions menu for one of the blobs under this container and click Share.  I am going to apply one of the policies (signed identifiers) that I just created by selecting it from the dropdown.


You will notice that the values I set in the policy fill in the values in the Share BLOB UI and prevent you from changing them.  The reason is that a policy is like a template.  If the policy doesn't set a value, then you can supply it (or must supply it, if it is required).  However, if the policy states one of the parameters, you cannot supply that parameter.  In this case, the 'read' policy that was created specified the expiration (se) and permissions (sp).  It is implied from the dialog selection that the resource (sr) is a (b)lob.  The only value that could be supplied here outside of the policy is the start time (st), which I am not supplying as it is optional.

When I click the Get URL button, I get back a URL that looks like this:


Now, if that URL were to leak and I no longer wanted to provide read access to the blob, I could simply delete the 'read' policy or change the expiration date.  It would instantly be invalidated.  Compare this to the same signature created without a policy:


This signature could not be revoked until it either expired or I regenerated the master key.

If you want to see how the SAS feature works or easily share blobs or containers in your Windows Azure storage account, give it a try and see how easy it is to do.


* assuming the key holder has not marked the container as blob- or container-level public access already, in which case it is public read-only.

Wednesday, 10 February 2010

Do you Incarnate?

It wasn't too long ago when Karsten Januszewski came to my office looking for a Windows Azure token (back when we were in CTP).  I wondered what cool shenanigans the MIX team had going.  Turns out it was for the Incarnate project (explained here).  In short, this service finds all the different avatars you might be using across popular sites* and allows you to select an existing one instead of having to upload one again and again.


You will note that there is another 'dunnry' somewhere on the interwebs, stealing my exclusive trademark.  I have conveniently crossed them out for reference. ;)

Since the entire Incarnate service is running in Windows Azure, I was interested in Karsten's experience:

We chose Windows Azure to host Incarnate because there was a lot of uncertainty in traffic.  We didn't know how popular the service would be and knowing that we could scale to any load was a big factor in choosing it.

I asked him how the experience was, developing for Windows Azure

There is a ton of great documentation and samples.  I relied heavily on the Windows Azure Platform Kit as well as the samples in the SDK to get started.  Once I understood how the development environment worked and how the deployment model functioned, I was off and running. I'd definitely recommend those two resources as well as the videos from the PDC for people who are getting started.

I love it when I hear that.  Karsten was able to get the Incarnate service up and running on Windows Azure easily and now he is scale-proof in addition to the management goodness that is baked into Windows Azure.

Check out more about Incarnate and Karsten's Windows Azure learning on the MIX blog.


*turns out you can extend this to add a provider for any site (not just the ones that ship in source).

Wednesday, 03 February 2010

How Do I Stop the Billing Meter in Windows Azure?

This might come as a surprise to some folks, but in Windows Azure you are billed when you deploy, not when you run.  That means we don't care about CPU hours - we care about deployed hours.  Your meter starts the second you deploy, irrespective of the state of the application.  This means that even if you 'Suspend' your service so it is not reachable (and consumes no CPU), the meter is still running.

Visually, here is the meter still running:


Here is when the meter is stopped:


Right now, there is a 'free' offering of Windows Azure that includes a limited number of hours per month.  If you are using MSDN benefits for Windows Azure, there is another offer that includes some bucket of 'free' hours.  Any overage and you start to pay.

Now, if you are like me and have a fair number of hosted services squirreled away, you might forget to go to the portal and delete the deployments when you are done.  Or, you might simply wish to automate the removal of your deployments at the end of the day.  There are lots of reasons to remove your deployments, but the primary one is to turn the meter off.  Given that re-deploying your services is very simple (and can also be automated), removing a deployment when you are done is not a huge burden.

Automatic Service Removal

For those folks that wish an automated solution, it turns out that this is amazingly simple when using the Service Management API and the Azure cmdlets.  Here is the complete, deployment-nuking script:

$cert = Get-Item cert:\CurrentUser\My\<cert thumbprint>
$sub = 'CCCEA07B. your sub ID'

$services = Get-HostedServices -Certificate $cert -SubscriptionId $sub

$services | Get-Deployment -Slot Production | Set-DeploymentStatus 'Suspended' | Get-OperationStatus -WaitToComplete
$services | Get-Deployment -Slot Staging | Set-DeploymentStatus 'Suspended' | Get-OperationStatus -WaitToComplete

$services | Get-Deployment -Slot Production | Remove-Deployment
$services | Get-Deployment -Slot Staging | Remove-Deployment

That's it - just 6 lines of PowerShell.  BE CAREFUL.  This script will iterate through all the services in your subscription, stop any deployed service, and then remove it.  After it runs, every hosted service will be gone and the billing meter will have stopped (for hosted services, anyway).
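To actually run this at the end of every day, you could save the script to a file and schedule it with the Windows Task Scheduler.  The task name and script path below are hypothetical:

```powershell
# Register a daily task at 7 PM that runs the removal script
# (adjust the path to wherever you saved the script)
schtasks /create /tn "NukeAzureDeployments" `
    /tr "powershell.exe -ExecutionPolicy Bypass -File C:\scripts\nuke-deployments.ps1" `
    /sc daily /st 19:00
```

One caveat: the scheduled task runs under your user account, so the management certificate must be in that user's CurrentUser\My store.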

Monday, 25 January 2010

Supporting Basic Auth Proxies

A few customers have asked how they can use tools like wazt, the Windows Azure MMC, the Azure cmdlets, etc. when they are behind proxies at work that require basic authentication.  The tools themselves don't directly support this type of proxy.  What we are doing is simply relying on the fact that the underlying HttpWebRequest object will pick up your IE's default proxy configuration.  Most of the time, this just works.

However, if you are in an environment where you are prompted for your username and password, you might be on a basic auth proxy and the tools might not work.  To work around this, you can actually implement a very simple proxy handler yourself and inject it into the application.

Here is one that I wrote to support wazt.  To use this, add the following to your app.config and drop the output assembly from this project into your execution directory.  Note, this would work with any tool in .NET that uses HttpWebRequest under the covers (like csmanage for instance).


<!-- basic auth proxy section declaration area -->
<!-- proxyHostAddress="Auto" : use Internet Explorer configuration for the name of the proxy -->
<configSections>
  <sectionGroup name="proxyGroup">
    <section name="basicProxy"
             type="Proxy.Configuration.CustomProxySection, Proxy" />
  </sectionGroup>
</configSections>

<system.net>
  <defaultProxy enabled="true" useDefaultCredentials="false">
    <module type="Proxy.CustomProxy, Proxy"/>
  </defaultProxy>
</system.net>

<proxyGroup>
  <basicProxy proxyHostAddress="Auto" proxyUserName="MyName" proxyUserPassword="MyPassword" />
</proxyGroup>


Download the source here.

Neat Windows Azure Storage Tool

I love elegant software.  I have known about CloudXplorer from ClumsyLeaf for some time, but I hadn't used it recently because the Windows Azure MMC has been all I need for storage for a while.  Also, I have a private tool that I wrote a while back to generate Shared Access Signatures for files I want to share.

I decided to check out the progress on this tool and noticed in the change log that support for Shared Access signatures is now included.  Nice!  So far, this is the only tool* that I have seen handle Shared Access signatures in such an elegant and complete manner.  Nicely done!

Definitely a recommended tool to keep on your shortlist.


*My tool is complete, but not nearly as elegant.

Monday, 11 January 2010

LINQPad supports SQL Azure

Some time back, I put in a request to LINQPad's feature request page to support SQL Azure.  I love using LINQPad for basically all my quick demo programs and prototypes.  Since all I work with these days is the Windows Azure platform, it was killing me to have to go to SSMS to do anything with SQL Azure.

Well, my request was granted!  Today, you can use the beta version of LINQPad against SQL Azure and get the full LINQ experience.  Behold:


In this case, I am querying the firewall rules on my database using LINQ.  Hot damn.  Nice work, Joe!  If you pay a few bucks, you get the intellisense version of the tool too, which is well worth it.  This tool has completely replaced SnippetCompiler for me and continues to get better and better.  Now, if only Joe would add F# support.

LINQPad Beta

Monday, 23 November 2009

PDC 2009 Windows Azure Resources

For those of you that made PDC, this will serve as a reminder and for those of you that missed PDC this year (too bad!), this will serve as a guide to some great content.

PDC Sessions for the Windows Azure Platform

Getting Started

Windows Azure

Codename "Dallas"

SQL Azure


Customer & Partner Showcases

Channel 9 Learning Centers

Coinciding with PDC, we have released the first wave of learning content on Channel 9. The new Ch9 learning centers feature content for the Windows Azure Platform as well as a course specifically designed for the identity developer. The content on both of these sites will continue to be developed by the team over the coming weeks and months. Watch for updates and additions.

Downloadable Training Kits

To complement the learning centers on Ch9, we continue to maintain the training kits on the Microsoft download center, which allow you to download and consume the content offline. You can download the Windows Azure Platform training kit here, and the Identity training kit here. The next update is planned for mid-December.

Monday, 26 October 2009

Windows Azure Service Management CmdLets

As I write this, I am sitting on a plane headed back to the US from a wonderful visit over to our UK office.  While there, I got to meet a number of customers working with Windows Azure.  It was clear from the interaction that these folks were looking for a way to simplify how to manage their deployments and build it into an automated process.

With the release of the Service Management API, this is now possible.  As of today, you can download some Powershell cmdlets that wrap this API and make managing your Windows Azure applications simple from script.  With these cmdlets, you can script your deploys, upgrades, and scaling operations very easily.


The cmdlets mirror the API quite closely, but since it is Powershell, we support piping which cuts down quite a bit on the things you need to type.  As an example, here is how we can take an existing deployment, stop it, remove it, create a new deployment, and start it:

Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot Production |
    Set-DeploymentStatus 'Suspended' |
    Get-OperationStatus -WaitToComplete

Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot Production |
    Remove-Deployment |
    Get-OperationStatus -WaitToComplete

Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    New-Deployment Production $package $config -Label 'v.Next' |
    Get-OperationStatus -WaitToComplete

Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot Production |
    Set-DeploymentStatus 'Running' |
    Get-OperationStatus -WaitToComplete

Notice that in each case, we are first getting our service by passing in the certificate and our subscription ID.  Again, since this is Powershell, we can get the certificate quite easily:

$cert = Get-Item cert:\CurrentUser\My\D6BE55AC428FAC6CDEBAFF432BDC0780F1BD00CF

You will find your Subscription ID on the portal under the 'Account' tab.  Note that we are breaking up the steps by using the Get-OperationStatus cmdlet and having it block until it completes.  This is because the Service Management API is an asynchronous model.

Similarly, here is a script that will upgrade a single role or the entire deployment depending on the arguments passed to it:

$label = 'nolabel'
$role = ''

if ($args.Length -eq 2)
{
    $role = $args[0]
    $label = $args[1]
}

if ($args.Length -eq 1)
{
    $label = $args[0]
}

if ($role -ne '')
{
    Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot Production |
    Set-Deployment -mode Auto -roleName $role -package $package -label $label |
    Get-OperationStatus -WaitToComplete
}
else
{
    Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot Production |
    Set-Deployment -mode Auto -package $package -label $label |
    Get-OperationStatus -WaitToComplete
}

Download the cmdlets from Code Gallery and leave me some feedback if you like them or if they are not working for you.

Tuesday, 06 October 2009


Things are getting crazy here at Microsoft getting ready for PDC.  I haven't had much time to blog or tweet for that matter.  However, I am taking a break from the grind to announce something I am really excited about - a sample TableBrowser service we are hosting for developers.

We built this service using ASP.NET MVC with a rich AJAX interface.  The goal of this service was to provide developers an easy way to create, query, and manage their Windows Azure tables.  What better way to host it than on a scalable compute platform like Windows Azure?

Create and Delete Tables

If you need to create or manage your tables, you get a nice big list of the ones you have.


Create, Edit, and Clone your Entities

I love being able to edit my table data on the fly.  Since we can clone the entity, it makes it trivial to copy large entities around and just apply updates.


Query Entities

Of course, no browser application would be complete without being able to query your data as well.  Since the ADO.NET Data Services syntax can be a little unfamiliar at first, we decided to go for a more natural syntax route.  Using simple predicates along with OR, AND, and NOT operations, you can easily test your queries.


Display Data

Lastly, we have tried to make showing data in Windows Azure as convenient as possible.  Since data in Windows Azure tables is not necessarily rectangular in nature, we have given you some options: first, you can choose the attributes to display in columns by partition; next, you can expand the individual entity to show each attribute.


Please note:  during login you will need to supply your storage account name and key.  We do not store this key.  It is kept in an encrypted cookie and passed back and forth on each request.  Furthermore, we have SSL enabled to protect the channel.

The service is open for business right now and will run at least until PDC (and hopefully longer).  Enjoy and let me know through the blog any feedback you have or issues you run into.

Friday, 18 September 2009

Upgrading Your Service in Windows Azure

Windows Azure has been in CTP since PDC 08 in October of last year.  Since that time, we have had a fairly simple, yet powerful concept for how to upgrade your application.  Essentially, we have two environments: staging and production.


The difference between these two environments is only in the URI that points to any web-exposed services.  In staging, we give you an opaque GUID-like URI (e.g. <guidvalue>.cloudapp.net) that is hard to publicly discover, and in production, we give you the URI that you chose when you created the hosted service (e.g. <yourservice>.cloudapp.net).

VIP Swaps, Deploys, and Upgrades

When you wanted to upgrade your service, you needed to deploy the updated service package containing all your roles into one of the environments.  Typically, this was in the staging environment.  Whenever you were ready, you would then click the big button in the middle to swap environments.  This re-programmed the load balancers and suddenly staging was production and vice versa.  If anything went wrong in your upgrade, you could hit the button again and you were back to the original deployment in seconds.  We called this model a "VIP Swap" and it is easy to understand and very powerful.

We heard from some customers that they wanted more flexibility to upgrade an individual role without redeploying the entire service.  Typically, this can be because there might be some state or caching going on in one of the other roles that a VIP swap would cause to be lost.

The good news is that now you can upgrade individual roles (or even the whole service) using the in-place upgrade.  When you click the new 'Upgrade' button on the portal, you will see a screen very similar to the 'Deploy' screen that you are used to from before, but this time you have two new options.


Upgrade Domains

The first new option allows you to choose whether you want the upgrade to be 'Automatic' or 'Manual' across the upgrade domains.  To understand this option, you first need to understand what an 'Upgrade Domain' is.  You can think of upgrade domains as vertical slices of your application, crossing roles.  So, if I had a service with a single web role using 10 instances and 2 worker roles, each with 4 instances, then with 2 upgrade domains I would have 5 web role instances and 2 + 2 worker role instances in each upgrade domain.  Illustrated:


If I choose 'Automatic', it simply means that each upgrade domain will sequentially be brought down and upgraded in turn.  If I choose 'Manual', then I need to click another button between each upgrade domain update in order to proceed.

Note:  in the CTP today, 2 upgrade domains are automatically defined and set.  In the future, you will be able to specify how many upgrade domains you would like to have.

Role Upgrades

Next, we have a radio button that specifies whether you want to update the whole service or a specific role within the service.  Most folks will likely use the role-specific update.

It is important to note that these upgrades are for services where the topology has not changed.  That is, you cannot update the Service Definition (e.g. adding or removing roles or configuration options).  If you want to change the topology, you need to use the more familiar VIP swap model.

Once you click Deploy, the selected role will be upgraded according to the upgrade mode you specified.

More information about in-place upgrades and update domains can be found here.  Lastly, you can of course eschew the portal and perform all of these actions using the new Service Management API.  Happy upgrading!

Wednesday, 02 September 2009

Instant Windows Azure Tokens

I am on vacation right now, but I read this over at Steve's blog and I just had to make sure everyone knows about it.  Right now, when you register at Microsoft Connect for Windows Azure, you will get an instant token.  No more 1 or 2 days wait!

Register for Windows Azure

Thursday, 27 August 2009

Deploying Applications on Windows Azure

There has been a bit of interest in an application called 'myTODO' that we built for the World Partner Conference (WPC) event back in July.  It is a simple, yet useful application.  The application allows you to create and share lists very easily.  It integrates with Twitter, so if you decide to share your lists, it will tweet them and their updates automatically.  You can also subscribe to lists using standard RSS if Twitter isn't your thing.


The funny thing is that we only built this app because we wanted something more interesting than the standard "Hello World" application.  The entire purpose of the app was to show how easily you can deploy an application (in just minutes) on Windows Azure.

You can learn more about this application in 3 ways:

  1. Get the deployment package and deploy this yourself using our demo script from the Windows Azure Platform Training Kit.  You will find it in the Demos section and called "Deploying Windows Azure Services".
  2. Watch me show how the app works and how to deploy it by watching my screencast.
  3. Download the source code and see how we built the rich dynamic UI and how we modeled the data using tables.

Tuesday, 04 August 2009

Federation on Windows Azure

My teammate Vittorio has put out some new guidance and a great new toolkit that shows how to use federation today with Windows Identity Foundation (WIF or Geneva) on Windows Azure.  I know this has been a very common request, especially as services move outside of the private datacenters and into the cloud and as vendors try to build true SaaS applications that need to integrate seamlessly into the customer's experience.

As the technologies evolve, the guidance will be kept up to date.  For now, this is a great piece of work that gets us past some of the early roadblocks we encountered.

Monday, 20 July 2009

Windows Azure SDK update for July

You can download the new SDK and Windows Azure Tools for Visual Studio right now.  The big feature in this release is the support for multiple roles per deployment (instead of a single web and worker role).  This is surfaced as a new wizard UI in the tools.  Additionally, there is new support for associating any type of web app with the project.  Previously this was limited to a specific project type (but now it supports MVC directly!).

Download Windows Azure Tools for Visual Studio (includes SDK)
Download Windows Azure SDK

Monday, 01 June 2009

Windows Azure June Round-up

Ahh, it is the start of a glorious June and here in Washington the weather is starting to really get nice.  The previous cold and rainy spring must have made the product group more productive as Windows Azure has continued adding features since the last major update at MIX.

Given that Windows Azure is a service and not a boxed product, you can expect updates and features to roll-out over the coming months.  In this round-up, we have a number of features that have gone live in the last month or two.

New Feature: Geo-Location support


Starting in May, a new option was added to the portal to support geo-locating your code and data. In order to use this most effectively, the idea of an 'Affinity Group' was created. This allows you to associate various services under an umbrella label for the location.

Read more about this feature here and see a complete provisioning walk-through.

New Feature: Storage API updates

I briefly mentioned this last week, but on Thursday (5/28), new features were released to the cloud for Windows Azure storage. The long-awaited batch transaction capability for tables was released, along with a new blob copying capability. Additionally, the GetBlockList API was updated to return both committed and uncommitted blocks in blob storage.

One more significant change of note is that a new versioning mechanism has been added. New features will be versioned by a new header ("x-ms-version"). This versioning header must be present to opt-in to new features. This mechanism is in place to prevent breaking changes from impacting existing clients in the future. It is recommended that you start including this header in all authenticated API calls.
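To make the opt-in concrete, here is a small Python sketch of the common headers you would attach to each authenticated call. The helper name is mine, and the version string is illustrative - use the value given in the announcement:

```python
# Sketch of opting in to the new storage features by sending the
# versioning header on every authenticated request. The helper and the
# default version string are illustrative assumptions, not SDK code.
import datetime

def storage_headers(version="2009-04-14"):
    """Build the common headers for a Windows Azure storage REST call.

    The 'x-ms-version' header opts the caller in to new features
    (batch transactions, Copy Blob, the updated GetBlockList).
    """
    return {
        # RFC 1123 date, required when signing the request
        "x-ms-date": datetime.datetime.utcnow().strftime(
            "%a, %d %b %Y %H:%M:%S GMT"),
        # Opt-in versioning header described above
        "x-ms-version": version,
    }
```

Including these headers on every call means a future service version cannot silently change behavior under you.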

Rounding out these updates were changes to how property names are stored in table storage and to the size limits for Partition and Row keys: property names now support Unicode characters, and keys can be up to 1 KB in size. Finally, the timeout values for various storage operations were updated as well.

For more details, please read the announcement.

Please note: There currently is no SDK support for these new storage features. The local developer fabric does NOT currently support these features and the StorageClient SDK sample has not been updated yet. At this point, you need to use the samples provided on Steve Marx's blog. A later SDK update will add these features officially.

Windows Azure SDK Update

The May CTP SDK update has been released to the download center. While this release does NOT support the new storage features, it does add a few new capabilities that will be of interest to the Visual Studio 2010 beta testers. Specifically:

  • Support for Visual Studio 2010 Beta 1 (templates, local dev fabric, etc.)
  • Updated support for Visual Studio 2008 - you can now configure settings through the UI instead of munging XML files.
  • Improved reliability of the local dev fabric for debugging
  • Enhanced robustness and stability (aka bug fixes).

Download the Windows Azure Tools for Visual Studio (includes both SDK and tools).

New Windows Azure Applications and Demos

Windows Azure Management Tool (MMC)

The Windows Azure Management Tool was created to manage your storage accounts in Windows Azure. Developed as a managed MMC, the tool allows you to create and manage both blobs and queues. Easily create and manage containers, blobs, and permissions. Add and remove queues, inspect or add messages, and empty queues.

Bid Now Sample

Bid Now is an online auction site designed to demonstrate how you can build highly scalable consumer applications. This sample is built using Windows Azure and uses Windows Azure Storage. Auctions are processed using Windows Azure Queues and Worker Roles. Authentication is provided via Live Id.

If you know of new and interesting Windows Azure content that would be of broad interest, please let me know and I will feature it in later updates.


Thursday, 28 May 2009

New Windows Azure Storage Features

Some new features related to blob and table storage were announced on the forum today.  The key features announced were:

  1. Transactions support for table operations (batching)
  2. Copy Blob API (self-explanatory)
  3. Updated GetBlockList API now returns both committed and uncommitted block lists

There are some more details around bug fixes/changes and timeout updates as well.  Refer to the announcement for more details.

The biggest impact to developers at this point is that to get these new features, you will need to include a new versioning header in your call to storage.  Additionally, the StorageClient library has not been updated yet to reflect these new APIs, so you will need to wait for some examples (coming from Steve) or an update to the SDK.  You can also refer to the MSDN documentation for more details on the API and roll your own in the meantime.
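As an illustration of what "rolling your own" looks like, here is a hedged Python sketch of the pieces of a Copy Blob request. The helper and all the names in it are made up for illustration, and request signing is omitted entirely:

```python
# Hand-rolled sketch of a Copy Blob call: a PUT against the destination
# blob naming the source in a header. Account/container/blob names are
# placeholders and SharedKey signing is not shown.
def copy_blob_request(account, src_container, src_blob,
                      dst_container, dst_blob, version="2009-04-14"):
    """Return (method, url, headers) for a Copy Blob call."""
    url = f"http://{account}.blob.core.windows.net/{dst_container}/{dst_blob}"
    headers = {
        # Names the blob to copy from, relative to the account
        "x-ms-copy-source": f"/{account}/{src_container}/{src_blob}",
        # The new features require the versioning header
        "x-ms-version": version,
    }
    return ("PUT", url, headers)
```

Once the StorageClient library catches up, a single method call will replace this, but the shape of the request will be the same.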

Friday, 22 May 2009

Create and Deploy your Windows Azure Service in 5 Steps

Step 1:  Obtaining a token

Sign up to get a token for the service.  Turnaround is pretty quick, so you should have one in about a day or so.  If you don't see it in a day, make sure you check your spam folder to ensure that the message we send you is not trapped in purgatory there.

Step 2:  Redeeming the token

Navigate to the Windows Azure Portal and sign in with the LiveID that you would like to use to manage your Windows Azure applications.  Today, in the CTP, only a single LiveID can manage your Windows Azure project, and we cannot reassociate the token with another LiveID once redeemed.  As such, make sure you use the LiveID that you want to manage the solution long term.

The first time you login to the portal, you will be asked to associate your LiveID.


Click 'I Agree' to continue and once the association has been successful click 'Continue' again.  At this point, you will be presented with an option to redeem a token.  Here is where you input the token you received in Step 1.


Enter the token and click 'Next'.  You will then be presented with some EULAs to peruse.  Read them carefully (or don't) and then click 'Accept'.  You will get another confirmation screen, so click 'Continue'.

Step 3:  Create your Hosted Service

At this point, you can now create your first hosted solution.  You should be on the main Project page and you should see 2 and 1 project(s) remaining for Storage Account and Hosted Services, respectively.


Click 'Hosted Services' and provide a Project label and description.


Click 'Next' and provide a globally unique name that will be the basis for your public URL.  Click 'Check Availability' and ensure that the name hasn't been taken.  Next, you will need to create an Affinity Group in order to later get your storage account co-located with your hosted service.


Click the 'Yes' radio button and the option to create a new Affinity Group.  Give the Affinity Group a name and select a region where you would like this Affinity Group located.  Click 'Create' to finish the process.


Step 4:  Create your Storage account

Click the 'New Project' link near the left corner of the portal and this time, select the Storage Account option.


Similar to before, give a project label and description and click Next.


Enter a globally unique name for your storage account and click 'Check Availability'.  Since performance is best when the data is near the compute, you should make sure that you opt to have your storage co-located with your service.  We can do this with the Affinity Group we created earlier.  Click the 'Yes' radio button and use the existing Affinity group.  Click 'Create' to finish.


At this time, you should note that your endpoints have been created for your storage account and two access keys are provided for you to use for authentication.

Step 5:  Deploying an application.

At this point, I will skip past the minor detail of actually building an application for this tutorial and assume you have one (or you choose to use one from the SDK).  For Visual Studio users, you would want to right click your project and select 'Publish' to generate the service package you need to upload.  Alternatively, you can use the cspack.exe tool to create your package.


Using either method you will eventually have two files:  the actual service package (cspkg) and a configuration file (cscfg).


From the portal, select the hosted service project you just created in Step 3 and click the 'Deploy' button.


Browse to the cspkg location and upload both the package and the configuration settings file (cscfg).  Click 'Deploy'.


Over the next minute or so, you will see that your package is deploying.  Hold tight and you will see it come back with the status of "Allocated".  At this point, click the 'Configure' button**.


In order to use your storage account you created in Step 4, you will need to update this information from the local development fabric settings to your storage account settings.  An easy way to get this is to right click your Storage Account project link in the left hand navigation tabs and open it in a new tab.  With the storage settings in one tab and the configuration in another, you can easily switch between the two and cut & paste what you need.

Inside the XML configuration, replace the 'AccountName' setting with your storage account name.  If you are confused, the account name is the one that is part of the unique global URL, i.e. <youraccountname>  Enter the 'AccountSharedKey' using the primary access key found on the storage project page (in your new tab).  Update the endpoints from the loop-back addresses to the cloud settings:,, respectively.  Note that the endpoints here do not include your account name and we are using https.  Set the 'allowInsecureRemoteEndpoints' to either false or just delete that XML element.

Finally, update the 'Instances' element to 2 (the limit in the CTP today).  It is strongly recommended that you run at least 2 instances at all times.  This ensures that you always have at least one instance running if something fails or we need to do updates (we update by fault zones, and your instances are automatically placed across fault zones).  Click 'Save' and you will see that your package is listed as 'Package is updating'.
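Pulled together, the edited section of the cscfg ends up looking something like the fragment below. This is a sketch only: the setting names follow the StorageClient sample conventions mentioned above, and the account name and key are placeholders you must replace with your own values from the portal.

```xml
<!-- Illustrative cscfg fragment; setting names follow the StorageClient
     sample. Replace 'youraccountname' and the key with your own values. -->
<Role name="WebRole">
  <Instances count="2" />
  <ConfigurationSettings>
    <Setting name="AccountName" value="youraccountname" />
    <Setting name="AccountSharedKey" value="YOUR-PRIMARY-ACCESS-KEY" />
    <Setting name="BlobStorageEndpoint" value="https://blob.core.windows.net" />
    <Setting name="QueueStorageEndpoint" value="https://queue.core.windows.net" />
    <Setting name="TableStorageEndpoint" value="https://table.core.windows.net" />
    <!-- Either set this to false or delete the element entirely -->
    <Setting name="allowInsecureRemoteEndpoints" value="false" />
  </ConfigurationSettings>
</Role>
```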

When your package is back in the 'Allocated' state (a minute or so later), click the 'Run' button.  Your service will then go to the 'Initializing' state and you will need to wait a few minutes while it gets your instances up and running.  Eventually, your app will have a 'Started' status.  Congratulations, your app is deployed in the Staging environment.


Once deployed to staging, you should click the staging hyperlink for your app (the one with the GUID in the DNS name) and test your app.  If you get a hostname not found error, wait a few more seconds and try again - it is likely that you are waiting for DNS to update and propagate your GUID hostname. When you are comfortable, click the big circular button in between the two environments (it has two arrows on it) and promote your application to the 'production' URL.

Congratulations, you have deployed your first app in Windows Azure.

** Note, this part of the process might be optional for you.  If you have already configured your developer environment to run against cloud storage or you are not using the StorageClient sample at all, you might not need to do this as the uploaded configuration file will already include the appropriate cloud settings.  Of course, if you are not using these options, you are already likely a savvy user and this tutorial is unnecessary for you.

Thursday, 14 May 2009

Windows Azure MMC


Available immediately from Code Gallery, download the Windows Azure MMC. The Windows Azure Management Tool was created to manage your storage accounts in Windows Azure.


Developed as a managed MMC, the tool allows you to create and manage both blobs and queues. Easily create and manage containers, blobs, and permissions. Add and remove queues, inspect or add messages, and empty queues.


  • Manage multiple storage accounts
  • Easily switch between remote and local storage services
  • Manage your blobs
    • Create containers and manage permissions
    • Upload files or even entire folders
    • Read metadata or preview the blob contents
  • Manage your queues
    • Create new queues
    • Monitor queues
    • Read (peek) messages
    • Post new messages to the queue
    • Purge queues

Known Issues

The current release does not work with Windows 7 due to a bug in the underlying PowerShell version. All other OS versions should be unaffected.  We will get this fixed as soon as possible.

PHP SDK for Windows Azure


I tweeted this earlier (never really thought I would say that).  Anyhow, the first release of the PHP SDK for Windows Azure is now available on CodePlex.  Right now, it works with blobs and helps with the authentication pieces, but if you look at the roadmap you will see both queue and table support coming later this year.


This is a great resource if you are a PHP developer looking to host your app in Windows Azure and leverage the native capabilities of the service (namely, storage).

PHP Azure Project Site

Monday, 27 April 2009

Overlooking the Obvious

I was trying to troubleshoot a bug in my worker role in Windows Azure the other day.  To do this, I have a very cool tool (soon to be released) that lets me peek messages.  The problem was that I couldn't get hold of any messages; it was as if they were disappearing right from under my nose.  I would see them in the count, but couldn't get a single one to open.  I was thinking that I must have a bug in the tool.

Suddenly, the flash of insight came: something was actually popping the messages.  While I remembered to shut down my local development fabric, I forgot all about the version I had running in the cloud in the staging environment.  Since I have been developing against cloud storage, it is actually a shared environment now.  My staging workers were popping the messages, trying to process them and failing (it was an older version).  More frustrating, the messages were eventually showing up again, but getting picked up before I could see them in the tool.

So, what is the lesson here:  when staging, use a different account than your development accounts.  In fact, this is one of the primary reasons we have the cscfg file:  don't forget about it.

Thursday, 16 April 2009

Why does Windows Azure use a cscfg file?

Or, put another way:  why don't we just use the web.config file?  This question was recently posed to me and I thought I would share an answer more broadly since it is a fairly common question.

First, the web.config is part of the deployed package that gets loaded into a diff disk, which in turn is started for your app.  Remember, we are using Hypervisor VM technology to run your code.  This means any change to this file would require a new diff disk and a restart.  We don't update the actual diff disk (that would be hard with many instances) without a redeploy.

Second, the web.config is special only because IIS knows to look for it and restarts when it changes.  If you did that with a standard app.config (think worker roles), it wouldn't work.  We want a mechanism that works for any type of role.

Finally, we have both staging and production.  If you wanted to hold different settings (like a different storage account for test vs. production), you would need to hold these values outside the diff disk again or you would not be able to promote between environments.

The cscfg file is 'special' and is held outside the diff disk so it can be updated.  The fabric code (RoleManager and its ilk) knows when this file is updated and will trigger the app restart for you.  There is a short period of time between when you update this file and when the fabric notices.  Only the RoleManager and fabric are aware of this file - notice that you can't get to the values by reading it from disk, for instance.

And that is why in a nutshell, we have the cscfg file and we don't use web.config or app.config files.

All that being said, I don't want to give you the impression that you can't use web.config application settings in Windows Azure - you definitely can.  However, any update to the web.config file will require a redeploy, so just keep that in mind and choose the cscfg file when you can.

Thursday, 09 April 2009

Azure Training Kit and Tools Update

Are you looking for more information about Azure Services (Windows Azure, SQL Services, .NET Services, etc.)?  How about some demos and presentations?  What about a great little MMC tool to manage all your .NET Services?  Great news - it's here:

Azure Services Training Kit - April Update

Today we released an updated version of the Azure Services Training Kit.   The first Azure Services Training Kit was released during the week of PDC and it contained all of the PDC hands-on labs.   Since then, the Azure Services Evangelism team has been creating new content covering new features in the platform.

The Azure Services Training Kit April update now includes the following content covering Windows Azure, .NET Services, SQL Services, and Live Services:

  • 11 hands-on labs - including new hands-on labs for PHP and Native Code on Windows Azure.
  • 18 demo scripts - designed to provide detailed walkthroughs of key features so that someone can easily give a demo of a service.
  • 9 presentations - the presentations used for our 3-day training workshops, including speaker notes.

The training kit is available as an installable package on the Microsoft Download Center. You can download it from

Azure Services Management Tools - April Update

The Azure Services Management Tools include an MMC SnapIn and Windows PowerShell cmdlets that enable a user to configure and manage several Azure Services including .NET Access Control Services, and the .NET Workflow Service. These tools can be helpful when developing and testing applications that use Azure Services. For instance, using these tools you can view and change .NET Access Control Rules, and deploy and view workflows.

You can download the latest management tools from

Thursday, 02 April 2009

Windows Azure and Geneva

I don't like to basically re-blog what others have written, but I will make a minor exception today, as it is important enough to repeat.  My friend and colleague, Vittorio has explained the current status of the Geneva framework running on Windows Azure today.

The short and sweet is that we know it is not working 100% today and we are working on a solution.  This is actually the main reason that you do not see a Windows Azure version of Azure Issue Tracker today.

Thursday, 19 March 2009

Quickly put PHP on Windows Azure without Visual Studio

The purpose of this post is to show you the command line options you have to first get PHP running locally on IIS7 with fastCGI and the to get it packaged and running in the Windows Azure local development fabric.

First, download the Windows Azure SDK and the latest version of PHP (or whatever version you wish).  I would recommend getting the .zip version and simply extracting to "C:\PHP" or something like that.  Now, configure your php.ini file according to best practices.

Next, create a new file called ServiceDefinition.csdef and copy the following into it:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="MyPHPApp" xmlns="">
<WebRole name="WebRole" enableNativeCodeExecution="true">
<!-- Must use port 80 for http and port 443 for https when running in the cloud -->
<InputEndpoint name="HttpIn" protocol="http" port="80" />
</WebRole>
</ServiceDefinition>

You will need this file in order to describe your application in the cloud.  This definition models what your application should look like in the cloud.  For this example, it is a simple web role listening to port 80 over HTTP.  Notice as well that we have the 'enableNativeCodeExecution' attribute set as well.  This is required in order to use fastCGI.

Then, create a .bat file for enabling fastCGI running locally.  I called mine PrepPHP.bat:

@echo off

set phpinstall=%1
set appname=%2
set phpappdir=%3

echo Removing existing virtual directory...
"%windir%\system32\inetsrv\appcmd.exe" delete app "Default Web Site/%appname%"

echo Creating new virtual directory...
"%windir%\system32\inetsrv\appcmd.exe" add app /site.name:"Default Web Site" /path:"/%appname%" /physicalPath:"%phpappdir%"

echo Updating applicationHost.config file with recommended settings...
"%windir%\system32\inetsrv\appcmd.exe" clear config -section:fastCgi
"%windir%\system32\inetsrv\appcmd.exe" set config -section:fastCgi /+"[fullPath='%phpinstall%\php-cgi.exe']"

echo Setting PHP handler for application
"%windir%\system32\inetsrv\appcmd.exe" clear config "Default Web Site/%appname%" -section:system.webServer/handlers
"%windir%\system32\inetsrv\appcmd.exe" set config "Default Web Site/%appname%" -section:system.webServer/handlers /+[name='PHP_via_FastCGI',path='*.php',verb='*',modules='FastCgiModule',scriptProcessor='%phpinstall%\php-cgi.exe',resourceType='Unspecified']

echo Setting Default Document to index.php
"%windir%\system32\inetsrv\appcmd.exe" clear config "Default Web Site/%appname%" -section:defaultDocument
"%windir%\system32\inetsrv\appcmd.exe" set config "Default Web Site/%appname%" -section:defaultDocument /enabled:true /+files.[@start,value='index.php']

echo Done...

Create one more .bat file to enable PHP running in the local dev fabric:

@echo off

set phpinstall=%1
set appname=%2
set phpappdir=%3

echo Setting PHP handler for application
"%windir%\system32\inetsrv\appcmd.exe" set config "Default Web Site/%appname%" -section:system.webServer/handlers /[name='PHP_via_FastCGI'].scriptProcessor:%%RoleRoot%%\php\php-cgi.exe

echo Outputting the web.roleconfig file
del "%phpappdir%\web.roleconfig" /q

echo ^<?xml version="1.0" encoding="utf-8" ?^> > "%phpappdir%\web.roleconfig"
echo ^<configuration^> >> "%phpappdir%\web.roleconfig"
echo ^<system.webServer^> >> "%phpappdir%\web.roleconfig"
echo ^<fastCgi^> >> "%phpappdir%\web.roleconfig"
echo ^<application fullPath="%%RoleRoot%%\php\php-cgi.exe" /^> >> "%phpappdir%\web.roleconfig"
echo ^</fastCgi^> >> "%phpappdir%\web.roleconfig"
echo ^</system.webServer^> >> "%phpappdir%\web.roleconfig"
echo ^</configuration^> >> "%phpappdir%\web.roleconfig"

echo Copying php assemblies and starting fabric

md %appname%_WebRole
md %appname%_WebRole\bin

robocopy %phpappdir% %appname%_WebRole\bin /E
robocopy %phpinstall% %appname%_WebRole\bin\php /E

"%programfiles%\windows azure sdk\v1.0\bin\cspack.exe" "%~dp0ServiceDefinition.csdef" /role:WebRole;"%~dp0%appname%_WebRole\bin" /copyOnly /generateConfigurationFile:"%~dp0ServiceDefinition.csx\ServiceConfig.cscfg"
"%programfiles%\windows azure sdk\v1.0\bin\csrun.exe" "%~dp0ServiceDefinition.csx" "%~dp0ServiceDefinition.csx\ServiceConfig.cscfg" /launchBrowser

echo Done...

Open a command prompt as an Administrator and make sure that the ServiceDefinition.csdef file you created is in the same directory as the .bat files you created.

From the command line, type:

PrepPHP.bat "path to php binaries directory" "myiis7appname" "path to my php app"

If I had installed PHP to "c:\php" and my PHP application was located at "c:\webroot\myphpapp", I would type:

prepphp.bat "c:\php" "myphpapp" "c:\webroot\myphpapp"

Now, I can launch IE and type in:  http://localhost/myphpapp and the index.php page will launch.

To get this running in Windows Azure local dev fabric, you would type:

prepforazure.bat "c:\php" "myphpapp" "c:\webroot\myphpapp"

The dev fabric will launch, as well as IE, and you will be looking at your PHP application running in the dev fabric.  If you wish to deploy to the cloud, simply change the command line call for cspack.exe to remove the '/copyOnly' option.  Next, comment out the csrun.exe call and you have a package ready to upload to the cloud.  When you deploy at the portal, make sure you update the number of instances to at least 2 in order to get fault tolerance.

Changes to SDS Announced

Well, it is a bit of old news at this point:  the SDS team has broken the silence and announced the change from SOAP/REST to a full relational model over TDS.  First, I wanted to say I am super-excited about this change (and I really mean 'super', not in the usual Microsoft sense).  I believe that this will be a net-positive change for the majority of our customers.  I can't tell you how many times I have heard customers say, "Well, SDS is great, but I have this application over here that I want to move to the cloud and I don't want to re-architect to ACE."  Or, "My DBAs understand SQL - why can't you just give me SQL?"  We listen, really.  The feedback was loud and clear.

It may be a tiny bit contentious that this change comes with the removal of the SOAP and REST interfaces.  However, if you really love that flex model, you can get it in Windows Azure tables.

I know I have been pinged a few times to ask if the announcement is the reason for my change from SDS to Windows Azure.  The honest answer is: not really.  The real reason for my change is that my group had a small re-org and the former Windows Azure tech evangelist moved up and took on higher level responsibilities (he is now my manager).  That, combined with the changes to SDS, made it a natural transition point to move.  Zach Owens has now moved into my group and is looking after SDS - as the former SQL Server evangelist, it makes perfect sense for Zach to take this role now that SDS is SQL Server in the cloud.

I would expect to see close collaboration between Windows Azure and SDS as this is the killer combination for so many applications.  If you want to know more about the changes and specific details, I would try to catch Nigel Ellis' talk at MIX09 this year or watch it online afterwards.  I will update this post with a specific link once Nigel gives his talk.

Updated:  Nigel's talk is here.

Sunday, 01 March 2009

Changing My Focus

I am officially back from parental leave and it was a wonderful time.  I had thought that I would have a lot more time to get some side projects done that I have had simmering for some time now.  However, that really didn't happen.  What can I say?  My daughter took a lot more of my time than I had planned.  I don't regret it in the least.

Shortly after I started my leave, I discovered that my group was undergoing a small re-organization (nothing to do with the recent layoffs at Microsoft, incidentally).  As part of this re-org, I have a new manager these days and a new role.  I am now the technical evangelist for Windows Azure as opposed to SQL Services.  As such, the content of this blog and my demos will obviously start to change to reflect my new role.

Part of me is sad to leave the SDS fold.  There is some great work happening there and I would have liked to see it through.  However, another part of me is happy to take on new challenges.  Windows Azure is certainly a key part of the overall Azure Services Platform and there is an incredible body of work happening in this area.

Incidentally, this reunites me with Steve Marx.  Steve and I used to work together on web evangelism (AJAX specifically) - he's an entertaining guy, so it should be fun to work with him again.

Thursday, 22 January 2009

Azure Issue Tracker Released

I am happy to announce the immediate release of the Azure Issue Tracker sample application.  I know, I know: the name is wildly creative.  This sample application is a simple issue tracking service and website that pulls together a couple of the Azure services:  SQL Data Services and the .NET Access Control Service.

This demo is meant to show a realistic SaaS scenario.  As such, it features federation, claims-based authorization, and scalable data storage.

In this post, I will briefly walk through the application such that you can see how it is meant to work and how you might implement something similar.  Over the coming weeks, I will dig into features more deeply to demonstrate how they work.  I would also encourage you to follow Eugenio, who will be speaking more about this particular demo and the 'Enterprise' version over the coming weeks.

Let's get started:

  1. Make sure you have a .NET Services account (register for one here)
  2. Download and extract the 'Standard' version from the Codeplex project page
  3. Run the StartMe.bat file.  This file is the bootstrap for the provisioning process that is necessary to get the .NET Services solution configured as well as things like websites, certificates, and SDS storage.
  4. Open the 'Readme.txt' from the setup directory and follow along for the other bits

Once you have the sample installed and configured, navigate to your IssueTracker.Web site and you will see something like this:


Click 'Join Now' and then select the 'Standard' edition.  You will be taken to the 'Create Standard Tenant' page.  This is how we register our initial tenant and get them provisioned in the system.  The Windows LiveID and company name put into this page are what will be used for provisioning (the other information is not used right now).


Once you click 'Save', the provisioning service will be called and rules will be inserted into the Access Control service.  You can view the rules in the .NET Services portal by viewing the Access Control Service (use the Advanced View) and selecting the 'http://localhost/IssueTracker.Web' scope.


We make heavy use of the forward chaining aspect of the rules transformation here.  Notice that Admin will be assigned the role of both Reader and Contributor.  Those roles have many more operations assigned to them.  The net effect will be that an Admin will have all the operations assigned to them when forward chaining is invoked.

Notice as well that we have 3 new rules created in the Access Control service (ACS).  We have a claim mapping that sets the Windows LiveID to the Admin role output claim, another that sets an email mapping claim, and finally one that sets a tenant mapping claim.  Since we don't have many input claims to work with (only the LiveID, really), there is not too much we can do here in the Standard edition.  This is where the Enterprise edition, which can get claims from AD or any other identity provider, offers a much richer experience.
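If the forward chaining above seems abstract, here is a small Python sketch of the idea: rules fire until a fixed point is reached, so the Admin input claim picks up everything Reader and Contributor map to. The rule set below is a made-up reduction for illustration, not the actual rules the provisioning service writes to ACS:

```python
# Hedged sketch of ACS-style forward chaining: output claims from one
# rule feed back in as input claims until nothing new is produced.
# These rules are illustrative, not the sample's real rule set.
RULES = [
    ("Admin", "Reader"),         # Admin is also a Reader
    ("Admin", "Contributor"),    # ...and a Contributor
    ("Reader", "ReadIssue"),     # roles map to operations
    ("Contributor", "EditIssue"),
]

def expand_claims(input_claims):
    """Apply the rules repeatedly until a fixed point is reached."""
    claims = set(input_claims)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            if lhs in claims and rhs not in claims:
                claims.add(rhs)
                changed = True
    return claims
```

Running `expand_claims({"Admin"})` ends up holding the operations of both roles, which is exactly the net effect described above.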

Once you have created the tenant and provisioned the rules, you will be able to navigate to a specific URL now for this tenant:  http://localhost/IssueTracker.Web/{tenant}

You will be prompted to login with your LiveID and once you are authenticated with Windows LiveID, you will be redirected back to your project page.  Under the covers, we federated with LiveID, got a token from them, sent the token to ACS, transformed the claims, and sent back a token containing the claims (all signed by the ACS).  I will speak more about this later, but for now, we will stick to the visible effects.


From the Project page, you should click 'New Project' and create a new project.

  • Give it a name and invite another Windows LiveID user (that you can login as later).  Make sure you invite a user with a different Windows LiveID than the one you used to initially provision the tenant (i.e. what you are logged in as currently).
  • Choose the 'Reader' role for the invited user and click the Invite button.
  • Add any custom fields you feel like using.  We will likely expand this functionality later to give you more choices for types and UI representation.
  • Click Save (make sure you added at least 1 user to invite!)

Once you click Save, additional rules are provisioned out to the ACS to handle the invited users.  From the newly created project, create an issue.  Notice that any additional fields you specified are present now in the UI for you to use for your 'custom' issue.


Once you have saved a new issue, go ahead and try the edit functionality for the Issue.  Add some comments, move it through the workflow by changing the status, etc.

Next, open a new instance of IE (not a new tab or same IE) and browse to the tenant home page (i.e. http://localhost/IssueTracker.Web/{tenant}).  This time however, login as the invited Windows LiveID user.  This user was assigned the 'Reader' role.  Notice the new browser window can read the projects for this tenant and can read any issue, but they cannot use the Edit functionality.


Now, I will make two comments about this.  First, so what?  We are checking claims here on the website and showing you a 'Not Authorized' screen.  While we could have just turned off the UI to not show the 'Edit' functionality, we did this intentionally in the demo so you can see how this claim checking works.  In this case, we are checking claims at the website using a MVC route filter (called ClaimAuthorizationRouteFilterAttribute).
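To make the route-filter idea concrete, here is a minimal sketch of the kind of claim check that ClaimAuthorizationRouteFilterAttribute performs.  This is illustrative only - the demo's actual attribute, its claim types, and the WIF plumbing are not shown, and the types below are simplified stand-ins:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified stand-in for a claim (the demo uses WIF's claim types;
// this minimal class is an assumption for illustration).
public class Claim
{
    public string Type { get; set; }
    public string Value { get; set; }
}

// A filter in the spirit of ClaimAuthorizationRouteFilterAttribute:
// it demands a specific claim before the action may run.
public class ClaimAuthorizationFilter
{
    private readonly string _type;
    private readonly string _value;

    public ClaimAuthorizationFilter(string type, string value)
    {
        _type = type;
        _value = value;
    }

    // Returns true when the principal carries the required claim; the real
    // attribute would short-circuit to a 'Not Authorized' view instead.
    public bool IsAuthorized(IEnumerable<Claim> claims)
    {
        return claims.Any(c => c.Type == _type && c.Value == _value);
    }
}
```

A principal carrying only the 'Reader' role claim fails a check that demands 'Admin', which is exactly the 'Not Authorized' behavior described above.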

One key point of this demo is that the website is just a client to the actual Issue Tracker service.  What if we tried to hit the service directly?

Let's check.  I am going to intentionally neuter the web site claims checking capabilities:


By commenting out the required claims that the Reader won't have, I can get past the website check.  Here is what I get:


The Issue Tracker service is checking claims too - whoops.  There is no way to trick this one from a client.  So, what are the implications of this design?  I will let you noodle on that one for a bit.  You should also be asking yourself, how did we get those claims from the client (the website) to the service?  That is actually not entirely obvious, so I will save it for another post.  Enjoy for now!

Wednesday, 17 December 2008

SQL Down Under Podcast

I recently had the opportunity to sit down (virtually) with Greg Low from SQL Down Under fame and record a podcast with him.  I have to thank Greg for inviting me to ramble on about cloud services, data services like SQL Data Services and Microsoft's Azure services in particular.  It is about an hour's worth of content, but it seemed a lot faster than that to me.  I speak at a relatively high level on what we have done with SQL Services and Azure services and how to think about cloud services in general.  There are some interesting challenges to cloud services - both in the sense of what challenges they solve as well as new challenges they introduce.

My interview is show 42, linked from the Previous Shows page.  Thanks Greg!

Friday, 14 November 2008

Fixing the SDS HOL from Azure Training Kit

If you downloaded the Azure Services Training Kit (which you should), you would find a compilation error on some of the SQL Data Services HOLs.


The error is somewhat self-explanatory:  the solution is missing the AjaxControlToolkit.  The reason that this file is missing is not because we forgot it, but rather our automated packaging tool was trying to be helpful.  You see, we have a tool that cleans up the solutions by deleting the 'bin' and 'obj' folders and any .pdb files in the solution before packaging.  In this case, it killed the bin directory where the AjaxControlToolkit.dll was deployed.

To fix this error, you just need to visit the AjaxControlToolkit project on CodePlex and download it again.  The easiest way is to download the latest release, extract the 'Bin' directory, and copy it into the root of the solution you are trying to use.

Sorry about that - we will fix it for our next release.

(updated: added link)

Thursday, 13 November 2008

Azure Services Training Kit – PDC Preview

The Azure Services Training kit contains the hands on labs used at PDC along with presentations and demos.  We will continue to update this training kit with more demos, samples, and labs as they are built out.  This kit is a great way to try out the technologies in the Azure Services Platform at your own pace through the hands on labs.


You will need a token in order to run a few of the labs (specifically the .NET Services labs, the SQL Data Services labs, and one of the Live Services labs).  The Windows Azure labs use the local dev fabric so no token is necessary.  Once you install the kit, it will launch the browser with a navigation UI to find all the content within.  If you need an account, simply click the large blue box labeled 'Try it now' and follow the links to register at Microsoft Connect.

Happy Cloud Services.

Tuesday, 11 November 2008

Refreshed REST library for SQL Data Services

I finally got around to doing a quick refresh on the SSDS REST Library.  It should now be called the SDS REST library, of course, but I doubt I will change the name since that would break the URL on Code Gallery.

I am calling this one a 'Refresh' release because I am not adding any features.  The purpose of this release was to fix the serialization such that it runs in partial trust.  Partial trust support is desirable because that means you can use this library in Windows Azure projects.

I found out an interesting fact about the XmlSerializer while working on this.  Serializing a generic type, in this case SsdsEntity<T>, works just fine in partial trust.  However, deserializing that exact same type will not work without full trust.  To fix it, I had to remove any and all code that tried to do it.  Instead, I deserialize the T in SsdsEntity<T> and manually create the SsdsEntity part.  You can see those updates in the SsdsEntitySerializer class as well as in the setter of the SsdsEntity<T> Attributes property.  I don't see any problems with my solution, and in fact, it may end up being more efficient.
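The workaround looks roughly like this (a simplified sketch: SsdsEntity<T> is reduced to a bare wrapper here, and the real class's metadata and Attributes handling are omitted):

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class Foo
{
    public string Name { get; set; }
}

// Reduced stand-in for SsdsEntity<T> (the real class also carries Kind,
// Version, and a flexible attribute bag).
public class SsdsEntity<T> where T : class
{
    public string Id { get; set; }
    public T Entity { get; set; }
}

public static class SsdsEntitySerializer
{
    // Deserializing SsdsEntity<T> directly requires full trust, so we
    // deserialize only the inner T and construct the wrapper by hand.
    public static SsdsEntity<T> Deserialize<T>(string innerXml, string id)
        where T : class
    {
        var serializer = new XmlSerializer(typeof(T));
        using (var reader = new StringReader(innerXml))
        {
            var inner = (T)serializer.Deserialize(reader);
            return new SsdsEntity<T> { Id = id, Entity = inner };
        }
    }
}
```

Serializing SsdsEntity<T> can still go through the XmlSerializer as before; only deserialization takes this manual path.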

Remaining work to do:  implement the JOIN, OrderBy, and TOP operations (if I find the time).

Get it here:  SDS REST Library

Tuesday, 04 November 2008

NeoGeo on Building with SDS

During PDC, I was lucky enough to film a short video with Marc Hoeppner, one of the Regional Directors from Germany and the Managing Director of NeoGeo New Media GmbH.  Marc was involved very early with SQL Data Services and has provided some valuable feedback to us on features and direction.

I managed to get Marc to show me his company's media asset management product, neoMediaCenter.NET.  What struck me during this interview was how his team took a hybrid approach to cloud services.  That is, their original product uses SQL Server on the backend for storage and querying.  Instead of forcing customers to make an either/or decision, they took the approach of offering both.  You can move your data seamlessly between the cloud and the on-premises database.

There are some real advantages to using SQL Data Services for this product: namely, with a click, you can move the data to the cloud where it can essentially be archived forever, but still available for consumption.  We like to term this 'cold storage'.  Imagine the model where you have thousands and thousands of digital assets.  For the assets that are temporally relevant, you can store them in the local on-premises database for the least latency.  However, as the data ages, it tends to be used less and less frequently.  Today, companies either invest in a bunch of new storage, archive it off to tape, or just delete the content once it gets to a certain age.  Instead of forcing customers to make one of these choices, Marc has added the capability to move this data out of the on-premises store and out to the cloud seamlessly.  It still appears in the application, but is served from the cloud.  This makes accessing this data simple (unlike tape or deleting it) as well as relatively inexpensive (unlike buying more disk space yourself).

Once we have multiple datacenters up and operational, you also get the geo-location aspect of this for free.  It may be the case that for certain sets of distributed customers, using the geo-located data is in fact faster than accessing the data on-premises as well.

This is a very cool demo.  If you watch towards the end, Marc shows a CIFS provider for SDS that allows you to mount SDS just like a mapped network drive.  Marc mentions it in the video, but he managed to build all this functionality in just a week!  It is interesting to note that Marc's team also made use of the SSDS REST library that provided the LINQ and strongly typed abstraction for querying and working with SDS (it was named before SDS, hence SSDS still).  I am happy to see that of course since I had a bit to do with that library. :)

Watch it here

Sunday, 02 November 2008

Using SDS with Azure Access Control Service

It might not be entirely obvious to some folks how the integration with SQL Data Services and Azure Access Control Service works.  I thought I would walk you through a simple example.

First, let me set the stage that at this point the integration between the services is a bit nascent.  We are working towards a more fully featured authorization model with SDS and Access Control, but it is not there today.


I will group the authentication today into two forms: the basic authentication used by SDS directly and the authentication used by Access Control.  While the two may look similar in the case of username and password, they are, in fact, not.  The idea is that the direct authentication to SDS using basic authentication (username/pwd) will eventually go away; only authentication via Access Control will survive going forward.  For most folks, this is not a big change to the application.  While we don't have the REST story baked yet in the current CTP, we have support today in SOAP to show you how this looks.

Preparing your Azure .NET Services Solution

In order to use any of these methods, you must of course have provisioned an account for the CTP of the Azure Services Platform and .NET Services in particular.  To do this, you must register and work through Microsoft Connect to get an invitation code.  PDC attendees likely already have this code if they registered on Connect (using the LiveID they registered for PDC with).  Other folks should still register, but won't get the code as fast as PDC attendees.  Once you have the invitation code and have created and provisioned a solution, you need to click the Solution Credentials link and associate a personal card for CardSpace or a certificate for certificate authentication.



Once you have credentials associated with your Azure Service Platform solution, you can prepare your code to use them.

Adding the Service Reference

Here is how you add a service reference to your project to get the SOAP proxy and endpoints necessary to use Access Control.  First, right click your project and choose Add Service Reference.


In the address, use the SDS SOAP endpoint for the service.  Note the trailing '/' in the URL.  Next, name the proxy - I called mine 'SdsProxy'.

Click the Advanced button and choose System.Collections.Generic.List from the collection type dropdown list.


Once you have clicked OK a few times, you will get an app.config in your project that contains a number of bindings and endpoints.  Take a moment to see the new endpoints:


There are 3 bindings right now: basicHttpBinding (used for basic authentication directly against SDS), as well as customBinding and wsHttpBinding, which are used with the Access Control service.  There are also 4 endpoints added for the client:

  1. BasicAuthEndpoint used for basic authentication with SDS directly.
  2. UsernameTokenEndpoint used for authentication against Access Control (happens to be same username and password as #1 however).
  3. CertificateTokenEndpoint used for authentication against Access Control via certificate.
  4. CardSpaceTokenEndpoint used for authentication against Access Control via CardSpace.

Use the Access Control Service

At this point, you just need to actually use the service in code.  Here is a simple example of how to do it.  I am going to create a simple sample that does nothing but query my authority, and I will do it with all three supported Access Control authentication methods (2-4 above).

Solution and Password

To use the username/password combination you simply do it exactly like the basic authentication you are used to, but use the 'UsernameTokenEndpoint' for the SOAP proxy.  It looks like this:

var authority = "yourauthority";

var proxy = new SitkaSoapServiceClient("UsernameTokenEndpoint");
proxy.ClientCredentials.UserName.UserName = "solutionname";
proxy.ClientCredentials.UserName.Password = "solutionpassword";

var scope = new Scope() { AuthorityId = authority };

//return first 500 containers
var results = proxy.Query(scope, "from e in entities select e");

Console.WriteLine("Containers via Username/Password:");
foreach (var item in results)
    Console.WriteLine(item.Id); //print each container Id



CardSpace

CardSpace has two tricks to get it working once you set the proxy to the 'CardSpaceTokenEndpoint'.  First, you must use the DisplayInitializationUI method on the proxy to trigger the CardSpace prompt.  Next, you must explicitly open the proxy by calling Open.  It looks like this:

//create a new one for CardSpace
proxy = new SitkaSoapServiceClient("CardSpaceTokenEndpoint");
proxy.DisplayInitializationUI(); //trigger the cardspace login

//need to explicitly open for CardSpace
proxy.Open();

//return first 500 containers
results = proxy.Query(scope, "from e in entities select e");

Console.WriteLine("Containers via CardSpace:");
foreach (var item in results)
    Console.WriteLine(item.Id); //print each container Id




Certificates

Once you have created a certificate (with private key) and installed it on your machine, you can use the certificate endpoint.  I used the local machine store in the Personal (or 'My') container, and I also put the self-generated certificate's public key in the Trusted People store on the local machine so that it validates.

//create a new one for Certificates
proxy = new SitkaSoapServiceClient("CertificateTokenEndpoint");

//set the certificate explicitly (can also set in config);
//'mycertificate' here is a placeholder for your cert's subject name
proxy.ClientCredentials.ClientCertificate.SetCertificate(
    StoreLocation.LocalMachine, StoreName.My,
    X509FindType.FindBySubjectName, "mycertificate");

//return first 500 containers
results = proxy.Query(scope, "from e in entities select e");

Console.WriteLine("Containers via Certificates:");
foreach (var item in results)
    Console.WriteLine(item.Id); //print each container Id



The code is very similar, but I am explicitly setting the certificate here in the proxy credentials.  You can also configure the certificate through config.

And there you have it: using the Azure Access Control service with SQL Data Services.  As I mentioned earlier, while this is nascent integration today, you can expect that going forward the Access Control service will be used to expose a very rich authorization model in conjunction with SDS.

Download the VS2008 Sample Project

Wednesday, 29 October 2008

Azure Services Platform Management Console

In case anyone was looking for a feature-rich management tool for their Azure Services (.NET Services and SQL Services), we have posted a management console out on Code Gallery.  It is amazingly cool, and I am proud to say I had a little input into the SDS portions.  It manages your cloud-based workflows, identity and access control rules, and SQL Data Services data.


In later iterations, I imagine we will beef up the editors and keep the tools in sync with the live services as they evolve.  For pure utility, however, this tool is better than the Azure portal today.

Download it here

Monday, 27 October 2008

Ruby on SQL Data Services SDK

A few months back, I embarked on a mission to get these Ruby samples produced as I felt it was important to show the flexibility and open nature of SDS.  The problem was that I have no practical experience with Ruby.  With this in mind, I looked to my friend, former co-worker, and Ruby pro, James Avery to get this done.  He did a terrific job as the developer and I am happy to present this today.

With the announcement today at PDC of the Azure Services Platform, we are releasing a set of Ruby samples for SQL Data Services (SDS), formerly called SSDS.  We are putting the source on GitHub, and the samples will be available as gems from RubyForge.

The samples really consist of a number of moving parts:


At the core of the samples is a Ruby REST library for SDS.  It performs the main plumbing to using the service.  Next, we have two providers for building applications: an ActiveRecord provider and an ActiveResource provider.  These providers make use of the Ruby REST library for SDS.

Finally, we have two samples that make use of the providers.  We have built a version of RadiantCMS that uses the ActiveRecord provider and a simple task list sample that shows how to use the ActiveResource provider.

To get started:

  1. Visit GitHub and download sds-tasks or sds-radiant.
  2. View the Readme file in the sample, as it will direct you to download Rails and some other prerequisites.
  3. Use the gem installers to download the sds-rest library.
  4. Set some configuration with your SDS username/password (now called Azure Solution name and password when you provision).
  5. That's it!  Just run the Ruby script server and see how REST and Ruby work with SDS.

NOTE:  PDC folks can get provisioned for SQL Services and .NET Services by visiting the labs and getting a provisioning code.  This is exclusive to PDC attendees until the beta.

I will look to James to provide a few posts and details on how he built it.  Thanks again James!

What's new in SQL Data Services for Developers?

I dropped in on a couple of the developers in the SQL Data Services team (formerly SSDS) to chat and film a video about some of the new features being released with the PDC build of the service.  Jason Hunter and Jeff Currier are two of the senior developers on the SDS team and with a few days warning that I would be stopping by, they managed to put together a couple cool demos for us.

This is a longer video, with lots of code and deep content - so set aside some time and watch Jason and Jeff walk us through the new relational features as well as blob support.

What's new in SQL Data Services for Developers?

Thursday, 04 September 2008

PhluffyFotos v2 Released

We have just released an updated version of the SSDS sample application called 'PhluffyFotos'.  Clouds are fluffy and this is a cloud services application - get it?  This sample is an ASP.NET MVC and Windows Mobile application showing how to build a photo tagging and sharing site using our cloud data service, SSDS.  For this update:

  • Updated to MVC Preview 4.  We have removed hardcoded links and used the new filtering capability for authorization.  Of course, Preview 5 was just (and I mean just) released as we were putting this out the door.  I might update this to Preview 5 later, but it will not be a big deal to do so.
  • Updated to add thumbnail support.  Originally, we just downloaded the entire image and resized to thumbnail size.  This drags down performance in larger data sizes, so we fixed it for this release.
  • Updated to use the SSDS blob support.  Blob support was recently added with the latest sprint.  Previously, we were using the 'base64Binary' attributes to store the picture data.  With the new blob support, you supply a content type and content disposition, which will be streamed back to you on request. 
  • Updated to use the latest SSDS REST library.  This library gives us the ability to use and persist CLR objects to the service and use a LINQ-like query syntax.  This library saved us a ton of time and effort in building the actual application.  All the blob work, querying, and data access was done using this library.

The sample is available for download on CodePlex, and a live version is available to play with.  I am opening this one up to the public to upload photos.  Maybe I am playing with fire here, so we will see how well it goes.  Keep in mind that this is a sample site and I will periodically blow away the data.  The live version has an added feature of integrating a source code viewer directly into the application.

SSDS REST Library v2 Released

I have just updated the Code Gallery page to reflect the new version of the REST-based library for SSDS.  This is a fairly major update to the library, adding a ton of new features to make working with SSDS even easier than it already is for the .NET developer.  Added in this release:

  • Concurrency support via Etags and If-Match, If-None-Match headers.  To get a basic understanding of how this works, refer here.
  • Blob support.  The library introduces a new type called SsdsBlobEntity that encapsulates working with blobs in SSDS.  Overloads are available for both synchronous as well as async support.
  • Parallelization support via extension methods.  The jury is still out on this one and I would like to hear some feedback on it (both the technique as well as the methods).  Instead of using an interface, factory methods, etc., we are using extension methods supplied in a separate assembly to support parallel operations.  Since there are many different techniques to parallelize your code, this allows us to offer more than one option.  Each additional assembly can also take dependencies that the entire library might not want to take as well.  Imagine that we get providers for Parallel Extensions, CCR, or perhaps other home-baked remedies.  A very simple provider using Parallel Extensions is included.
  • Bug fixes.  Hard to believe, but yes, I did have a few bugs in my code.  This release cleans up a few of the ones found in the LINQ expression syntax parser as well as a few oversights in handling date/times.
  • Better test coverage.  Lots more tests included to not only prove out that stuff works, but also to show how to use it.
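The extension-method approach to parallelization can be sketched generically - an extension method, shipped in a separate assembly, that parallelizes work over a sequence.  This is only an illustration of the technique using Parallel Extensions, not the library's actual provider; the names below are mine:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

// The parallel support lives in extension methods in a separate assembly,
// so the core library takes no dependency on Parallel Extensions itself.
public static class ParallelExtensionsSample
{
    // Applies 'work' to each item in parallel and collects the results
    // into a thread-safe bag (order is not preserved).
    public static ConcurrentBag<TResult> ParallelMap<T, TResult>(
        this IEnumerable<T> source, Func<T, TResult> work)
    {
        var results = new ConcurrentBag<TResult>();
        Parallel.ForEach(source, item => results.Add(work(item)));
        return results;
    }
}
```

Because the parallelism lives behind an extension method, a CCR-based or home-baked alternative could ship as a sibling assembly without touching the core library - which is the design point of the release.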

If you just want to see this library in action, refer to the photo sharing and tagging application called 'PhluffyFotos' that pulls it all together (sans parallelization, I suppose).  You can use the integrated source viewer to see how the library works on a 'real' application (or a real sample application at least).

Tuesday, 26 August 2008

Concurrency with SSDS via REST

Eugenio already covered concurrency via the SOAP interface in his latest post.  The idea is exactly the same in REST, but the mechanics are slightly different.  For REST, you specify an ETag value and either the If-Match or If-None-Match headers.

Here is a simplified client that does a PUT/POST operation on SSDS:

internal void Send(Uri scope, string etag, string method, string data,
    Action<string, WebHeaderCollection> action, Action<WebException> exception)
{
    using (var client = new WebClient { Credentials = _credentials })
    {
        client.Headers.Add(HttpRequestHeader.ContentType, "application/x-ssds+xml");

        if (etag != null)
            client.Headers.Add(HttpRequestHeader.IfMatch, etag);

        client.UploadStringCompleted += (sender, e) =>
        {
            if (e.Error != null && exception != null)
                exception((WebException)e.Error);
            else if (action != null)
                action(e.Result, client.ResponseHeaders);
        };

        client.UploadStringAsync(scope, method, data);
    }
}


All this does is add the If-Match header and the Etag (which corresponds to the Flexible Entity Version system attribute).  This instructs the system to only update if the version held in SSDS matches the version specified in the Etag with the If-Match header.

Failure of this condition will result in a 412 'Precondition Failed' error ("A precondition, such as Version, could not be met").  You simply need to handle this exception and move on.
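In practice, handling it means inspecting the WebException for the 412 status.  A small helper along these lines (the helper name is mine, not part of the library; the Send method surfaces the WebException through its exception callback):

```csharp
using System;
using System.Net;

public static class PreconditionHelper
{
    // Returns true when the failure was a concurrency conflict (412),
    // i.e. someone else updated the entity since we read our ETag.
    public static bool IsPreconditionFailed(WebException ex)
    {
        var response = ex.Response as HttpWebResponse;
        return response != null
            && response.StatusCode == HttpStatusCode.PreconditionFailed;
    }
}
```

On a conflict you would typically re-GET the entity, re-apply your change, and retry the PUT with the fresh ETag.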

Next, there are times when you have a large blob or a largish flexible entity.  You only want to perform the GET if you don't have the latest version.  In this case, you specify the Etag again with the If-None-Match header.

Here is a simplified client that shows how the GET would work:

public void Get(Uri scope, string etag,
    Action<string, WebHeaderCollection> action, Action<WebException> exception)
{
    using (var client = new WebClient { Credentials = _credentials })
    {
        client.Headers.Add(HttpRequestHeader.ContentType, "application/x-ssds+xml");

        if (etag != null)
            client.Headers.Add(HttpRequestHeader.IfNoneMatch, etag);

        client.DownloadStringCompleted += (sender, e) =>
        {
            if (e.Error != null && exception != null)
                exception((WebException)e.Error);
            else if (action != null)
                action(e.Result, client.ResponseHeaders);
        };

        client.DownloadStringAsync(scope);
    }
}



When you add the ETag and this header, you will receive a 304 'Not Modified' response if the content has NOT changed since the ETag value you sent.

I am attaching a small Visual Studio sample that includes this code and demonstrates these techniques.

Friday, 25 July 2008

Rendering POX for SSDS in Internet Explorer

With the release of Sprint 3 bits, you might have noticed that you are prompted to download now when you hit the service directly from IE.  Because the content type changed from 'application/xml' to 'application/x-ssds+xml', IE just doesn't know how to render the resulting response.

This is simple to fix.  Copy the following to a .reg file and merge it into your registry (the CLSID below is the XML MIME viewer IE already uses for application/xml).

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\MIME\Database\Content Type\application/x-ssds+xml]
"CLSID"="{48123BC4-99D9-11D1-A6B3-00C04FD91555}"
"Encoding"=hex:08,00,00,00

Now, you should be back to the behavior you are used to.

Wednesday, 02 July 2008

Working with Objects in SSDS Part 3

Here is my last installment in this series of working with objects in SQL Server Data Services.  For background, readers should read the following:

Serialization in SSDS

Working with Objects in SSDS Part 1

Working with Objects in SSDS Part 2

Last time, we concluded with a class called SsdsEntity<T> that became an all-purpose wrapper or veneer around our CLR objects.  This made it simple to take our existing classes and serialize them as entities in SSDS.

In this post, I want to discuss how the querying in the REST library works.  First a simple example:

var ctx = new SsdsContext(/* solution credentials and authority elided */);

var container = ctx.OpenContainer("foo");
var foo = new Foo { IsPublic = false, Name = "MyFoo", Size = 12 };

//insert it with unique id guid string
container.Insert(foo, Guid.NewGuid().ToString());

//now query for it
var results = container.Query<Foo>(e => e.Entity.IsPublic == false && e.Entity.Size > 2);

//Query<T> returns IEnumerable<SsdsEntity<T>>, so foreach over it
foreach (var item in results)
    Console.WriteLine(item.Entity.Name);
I glossed over it in my previous posts with this library, but I have a class called SsdsContext that acts as my credential store and as a factory to create SsdsContainer objects where I perform my operations.  Here, I have opened a container called 'foo', which corresponds to a container URI under the authority I passed in the SsdsContext constructor arguments.

I created an instance of my Foo class (see this post if you want to see what a Foo looks like) and inserted it.  We know that under the covers we have an XmlSerializer doing the work to serialize that to the proper POX wire format.  So far, so good.  Now, I want to retrieve that same entity back from SSDS.  The key line here is the container.Query<T>() call.  It accepts an Expression<Func<SsdsEntity<T>, bool>> argument that represents a strongly typed query.

For the uninitiated, the Expression<TDelegate> is a way to represent lambda expressions in an abstract syntax tree.  We can think of them as a way to model what the expression does without generating the bits of code necessary to actually do it.  We can inspect the Expression and create new ones based on it until finally we can call Compile and actually convert the representation of the lambda into something that can execute.
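As a quick illustration of that inspect-then-Compile workflow (generic C#, independent of the SSDS library):

```csharp
using System;
using System.Linq.Expressions;

public static class ExpressionDemo
{
    public static void Run()
    {
        // The lambda is captured as data - a tree we can walk and inspect.
        Expression<Func<int, bool>> expr = x => x > 5;
        var body = (BinaryExpression)expr.Body;
        Console.WriteLine(body.NodeType);   // GreaterThan

        // It only becomes executable code once we call Compile.
        Func<int, bool> predicate = expr.Compile();
        Console.WriteLine(predicate(10));   // True
    }
}
```

The library's query translator walks trees exactly like this one, except it emits the SSDS LINQ query string rather than calling Compile.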

The Func<SsdsEntity<T>, bool> represents a delegate that accepts a SsdsEntity<T> as an argument and returns a boolean.  This effectively represents the WHERE clause in the SSDS LINQ query syntax.  Since SsdsEntity<T> contains an actual type T in the Entity property, you can query directly against it in a strongly typed fashion!

What about those flexible properties that I added to support attributes outside of our T?  I mentioned that I wanted to keep the PropertyBucket (a Dictionary<string, object>) property public for querying.  To use the flexible properties that you add, you simply query them in a weakly typed manner:

var results = container.Query<Foo>(e => e.PropertyBucket["MyFlexProp"] > 10);

As you can see, any boolean expression that you can think of in the string-based SSDS LINQ query syntax can now be expressed in a strongly-typed manner using the Func<SsdsEntity<T>, bool> lambda syntax.

How it works

Since I have the expression tree of what your query looks like in strongly-typed terms, it is a simple matter to take that and convert it to the SSDS LINQ query syntax that looks like "from e in entities where [....] select e" that is appended to the query string in the REST interface.  I should say it is a simple matter because Matt Warren did a lot of the heavy lifting for us and provided the abstract expression visitor (ExpressionVisitor) as well as the expression visitor that partially evaluates the tree to evaluate constants (SubTreeEvaluator).  This last part is important because it allows us to write this:

int i = 10;
string name = "MyFoo";

var results = container.Query<Foo>(e => e.Entity.Name == name && e.Entity.Size > i);

Without the partial tree evaluation, you would not be able to express the right-hand side of those comparisons.  All I had to do was implement an expression visitor that correctly evaluated the lambda expression and converted it to the LINQ syntax that SSDS expects (SsdsExpressionVisitor).  It would be a trivial matter to implement the IQueryProvider and IQueryable interfaces to make the whole thing work inside LINQ to Objects.
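The core trick of that constant evaluation can be sketched in a few lines (a minimal sketch, not the actual SubTreeEvaluator implementation): a subtree that references no lambda parameters can be wrapped in a parameterless lambda, compiled, and invoked to produce its constant value.

```csharp
using System;
using System.Linq.Expressions;

public static class SubTreeDemo
{
    // Evaluates a subtree containing no parameters by wrapping it in a
    // zero-argument lambda, compiling it, and invoking the delegate.
    public static object EvaluateConstantSubtree(Expression subtree)
    {
        var lambda = Expression.Lambda(subtree);
        return lambda.Compile().DynamicInvoke();
    }
}
```

In the example above, the captured variable i on the right-hand side of e.Entity.Size > i is such a subtree; evaluating it yields the constant 10, which the visitor can then emit directly into the query string.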

Originally, I did supply the IQueryProvider for this implementation but after consideration I have decided that using methods from the SsdsContainer class instead of the standard LINQ syntax is the best way to proceed.  Mainly, this has to do with the fact that I want to make it more explicit to the developer what will happen under the covers rather than using the standard Where() extension method.

Querying data

The main interaction to return data is via the Query<T> method.  This method is smart enough to add the Kind into the query for you based on the T supplied.  So, if you write something like:

var results = container.Query<Foo>(e => e.Entity.Size > 2);

This is actually translated to "from e in entities where e["Size"] > 2 && e.Kind == "Foo" select e".  The addition of the kind is important because we want to limit the results as much as possible.  If there happened to be many kinds in the container that had the flexible property "Size", it would actually return those as well in the wire response.

Of course, what if you want that to happen?  What if you want to return other kinds that have the "Size" property?  To do this, I have introduced a class called SsdsEntityBucket.  It is exactly what it sounds like.  To use it, you simply specify a query that uses additional types with either the Query<T,U,V> or Query<T,U> methods.  Here is an example:

var foo = new Foo
{
    IsPublic = true,
    MyCheese = new Cheese { LastModified = DateTime.Now, Name = "MyCheese" },
    Name = "FooMaster",
    Size = 10
};

container.Insert(foo, foo.Name);
container.Insert(foo.MyCheese, foo.MyCheese.Name);

//query for bucket...
var bucket = container.Query<Foo, Cheese>(
    (f, c) => f.Entity.Name == "FooMaster" || c.Entity.Name == "MyCheese");

var f1 = bucket.GetEntities<Foo>().Single();
var c1 = bucket.GetEntities<Cheese>().Single();

The calls to GetEntities<T> return IEnumerable<SsdsEntity<T>> again.  However, this was done in a single call to SSDS instead of one call per T.


Paging

As I mentioned earlier, I wanted the developer to understand what they were doing when they called each method, so I decided to make paging explicit.  If I had potentially millions of entities in SSDS, it would be a bad mistake to allow a developer to issue a simple query that seamlessly paged the items back - especially if the query was something like e => e.Id != "".  Here is how I handled paging:

var container = ctx.OpenContainer("paging");

List<Foo> items = new List<Foo>();
int i = 1;

container.PagedQuery<Foo>(
    e => e.Entity.Size != 0,
    c =>
    {
        Console.WriteLine("Got Page {0}", i++);
        items.AddRange(c.Select(s => s.Entity));
    });


The PagedQuery<T> method takes two arguments.  One is the standard Expression<Func<SsdsEntity<T>, bool>> that you use to specify the WHERE clause for SSDS, and the other is Action<IEnumerable<SsdsEntity<T>>> which represents a delegate that takes an IEnumerable<SsdsEntity<T>> and has a void return.  This is a delegate you provide that does something with the 500 entities returned per page (it gets called once per page).  Here, I am just adding them into a List<T>, but I could easily be doing anything else here.  Under the covers, this is adding the paging term dynamically into the expression tree that is evaluated.
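That last step - adding the paging term dynamically into the expression tree - could be sketched roughly like this.  This is my own illustration, not the library's actual code; SwapVisitor and AddPagingTerm are hypothetical names:

```csharp
using System;
using System.Linq.Expressions;

public static class PagingSketch
{
    //rewrites references to one lambda's parameter so that both
    //predicate bodies share a single parameter instance
    class SwapVisitor : ExpressionVisitor
    {
        readonly ParameterExpression _from, _to;
        public SwapVisitor(ParameterExpression from, ParameterExpression to)
        {
            _from = from;
            _to = to;
        }
        protected override Expression VisitParameter(ParameterExpression node)
        {
            return node == _from ? _to : base.VisitParameter(node);
        }
    }

    //combines the user's WHERE clause with an extra paging term using AndAlso
    public static Expression<Func<T, bool>> AddPagingTerm<T>(
        Expression<Func<T, bool>> predicate,
        Expression<Func<T, bool>> pagingTerm)
    {
        var param = predicate.Parameters[0];
        var rebound = new SwapVisitor(pagingTerm.Parameters[0], param).Visit(pagingTerm.Body);
        return Expression.Lambda<Func<T, bool>>(
            Expression.AndAlso(predicate.Body, rebound), param);
    }
}
```

Each page would then re-run the query with an appended term along the lines of e => e.Id > lastSeenId to pick up where the previous page left off.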

What's next

This is a good head start on using the REST API with SSDS today.  However, there are a number of optimizations that could be made to the model: additional overloads, perhaps some extension methods for common operations, etc.

As new features are added, I will endeavor to update this as well (blob support comes to mind here).  Additionally, I have a few optimizations planned around concurrency for CRUD operations. 

I have published this out to Code Gallery and I welcome feedback and bug fixes.  Linked here.

Thursday, 26 June 2008

Working with Objects in SSDS Part 2

This is the second post in my series on working with SQL Server Data Services (SSDS) and objects.  For background, you should read my post on Serializing Objects in SSDS and the first post in this series.

Last time I showed how to create a general purpose serializer for SSDS using the standard XmlSerializer class in .NET.  I created a shell entity or a 'thin veneer' for objects called SsdsEntity<T>, where T was any POCO (plain old C#/CLR object).  This allowed me to abstract away the metadata properties required for SSDS without changing my actual POCO object (which, I noted was lame to do).

If we decide to use SSDS to interact with POCO T, an interesting situation arises.  Namely, once we have defined T, we have in fact defined a schema - albeit one enforced only in the code you write and not by the SSDS service itself.  One of the advantages of using something like SSDS is that you have a lot of flexibility to store entities (hence the term 'flexible entity') without conforming to a schema.  Since I want to preserve this flexibility, I need a way to support not only the schema implied by T, but also any additional, arbitrary properties a user might want.

Some may wonder why we need this flexibility:  after all, why not just change T to support whatever we like?  The issue comes up most often with code you do not control.  If you already have an existing codebase with objects that you would like to store in SSDS, it might not be practical or even possible to change the T to add additional schema.

Even if you completely control the codebase, expressing relationships between CLR objects and expressing relationships between things in your data are two different ideas - sometimes this problem has been termed 'impedance mismatch'.

In the CLR, if two objects are related, they are often part of a collection, or they refer to an instance on another object.  This is easy to express in the CLR (e.g. Instance.ChildrenCollection["key"]).  In your typical datasource, this same relationship is done using foreign keys to refer to other entities.

Consider the following classes:

public class Employee
{
    public string EmployeeId { get; set; }
    public string Name { get; set; }
    public DateTime HireDate { get; set; }
    public Employee Manager { get; set; }
    public Project[] Projects { get; set; }
}

public class Project
{
    public string ProjectId { get; set; }
    public string Name { get; set; }
    public string BillCode { get; set; }
}

Here we see that the Employee class refers to itself as well as contains a collection of related projects (Project class) that the employee works on.  SSDS only supports simple scalar types and no arrays or nested objects today, so we cannot directly express this in SSDS.  However, we can decompose this class and store the bits separately and then reassemble later.  First, let's see what that looks like and then we can see how it was done:

var projects = new Project[]
{
    new Project { BillCode = "123", Name = "TPS Slave", ProjectId = "PID01"},
    new Project { BillCode = "124", Name = "Programmer", ProjectId = "PID02" }
};

var bill = new Employee
{
    EmployeeId = "EMP01",
    HireDate = DateTime.Now.AddMonths(-1),
    Manager = null,
    Name = "Bill Lumbergh",
    Projects = new Project[] {}
};

var peter = new Employee
{
    EmployeeId = "EMP02",
    HireDate = DateTime.Now,
    Manager = bill,
    Name = "Peter Gibbons",
    Projects = projects
};

var cloudpeter = new SsdsEntity<Employee>
{
    Entity = peter,
    Id = peter.EmployeeId
};

var cloudbill = new SsdsEntity<Employee>
{
    Entity = bill,
    Id = bill.EmployeeId
};

//here is how we add flexible props
cloudpeter.Add<string>("ManagerId", peter.Manager.EmployeeId);

var table = _context.OpenContainer("initech");

//ToArray() forces evaluation so the metadata added below is not
//lost to deferred execution
var cloudprojects = peter.Projects
    .Select(s => new SsdsEntity<Project>
    {
        Entity = s,
        Id = Guid.NewGuid().ToString()
    })
    .ToArray();

//add some metadata to track the project to employee
foreach (var proj in cloudprojects)
    proj.Add<string>("RelatedEmployee", peter.EmployeeId);
All this code does is create two employees and two projects and set the relationships between them.  Using the Add<K> method, I can insert any primitive type to go along for the ride with the POCO.  If we query the container now, this is what we see:

<Project>
  <ProjectId xsi:type="x:string">PID01</ProjectId>
  <Name xsi:type="x:string">TPS Slave</Name>
  <BillCode xsi:type="x:string">123</BillCode>
  <RelatedEmployee xsi:type="x:string">EMP02</RelatedEmployee>
</Project>
<Project>
  <ProjectId xsi:type="x:string">PID02</ProjectId>
  <Name xsi:type="x:string">Programmer</Name>
  <BillCode xsi:type="x:string">124</BillCode>
  <RelatedEmployee xsi:type="x:string">EMP02</RelatedEmployee>
</Project>
<Employee>
  <EmployeeId xsi:type="x:string">EMP01</EmployeeId>
  <Name xsi:type="x:string">Bill Lumbergh</Name>
  <HireDate xsi:type="x:dateTime">2008-05-25T23:59:49</HireDate>
</Employee>
<Employee>
  <EmployeeId xsi:type="x:string">EMP02</EmployeeId>
  <Name xsi:type="x:string">Peter Gibbons</Name>
  <HireDate xsi:type="x:dateTime">2008-06-25T23:59:49</HireDate>
  <ManagerId xsi:type="x:string">EMP01</ManagerId>
</Employee>

As you can see, I have stored extra data in my 'flexible' entity with the ManagerId property (on one entity) and RelatedEmployee property on the Project kinds.  This allows me to figure out later what objects are related to each other since we can't model the CLR objects relationships directly.  Let's see how this was done.

public class SsdsEntity<T> where T : class
{
    Dictionary<string, object> _propertyBucket = new Dictionary<string, object>();

    public SsdsEntity() { }

    public Dictionary<string, object> PropertyBucket
    {
        get { return _propertyBucket; }
    }

    [XmlAnyElement]
    public XElement[] Attributes
    {
        get
        {
            //using XElement is much easier than XmlElement to build
            //take all properties on object instance and build XElement
            var props = from prop in typeof(T).GetProperties()
                        let val = prop.GetValue(this.Entity, null)
                        where prop.GetSetMethod() != null
                        && allowableTypes.Contains(prop.PropertyType)
                        && val != null
                        select new XElement(prop.Name,
                            new XAttribute(Constants.xsi + "type", xsdTypes[prop.PropertyType]),
                            EncodeValue(val));

            //Then stuff in any extra stuff you want
            var extra = _propertyBucket.Select(
                e =>
                     new XElement(e.Key,
                        new XAttribute(Constants.xsi + "type", xsdTypes[e.Value.GetType()]),
                        EncodeValue(e.Value)));

            return props.Union(extra).ToArray();
        }
        set
        {
            //wrap the XElement[] with the name of the type
            var xml = new XElement(typeof(T).Name, value);

            var xs = new XmlSerializer(typeof(T));

            //xml.CreateReader() cannot be used as it won't support base64 content
            XmlTextReader reader = new XmlTextReader(new StringReader(xml.ToString()));

            this.Entity = (T)xs.Deserialize(reader);

            //now deserialize the other stuff left over into the property bucket...
            var stuff = from v in value.AsEnumerable()
                        let props = typeof(T).GetProperties().Select(s => s.Name)
                        where !props.Contains(v.Name.ToString())
                        select v;

            foreach (var item in stuff)
                _propertyBucket[item.Name.ToString()] =
                    DecodeValue(
                        item.Attribute(Constants.xsi + "type").Value,
                        item.Value);
        }
    }

    public void Add<K>(string key, K value)
    {
        if (!allowableTypes.Contains(typeof(K)))
            throw new ArgumentException(String.Format(
                "Type {0} not supported in SsdsEntity", typeof(K).Name));

        if (!_propertyBucket.ContainsKey(key))
            _propertyBucket.Add(key, value);
        else
            _propertyBucket[key] = value; //replace the value
    }
}

I have omitted the parts of SsdsEntity<T> from the first post that didn't change.  The only other addition you don't see here is a helper method called DecodeValue, which as you might guess, interprets the string value in XML and attempts to cast it to a CLR type based on the xsi:type that comes back.
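For reference, a DecodeValue implementation might look something like this.  This is my own approximation - the post does not show the actual helper:

```csharp
using System;
using System.Globalization;
using System.Xml;

static class Decoding
{
    //maps an xsi:type string and its text content back to a CLR value
    public static object DecodeValue(string xsiType, string value)
    {
        switch (xsiType)
        {
            case "x:decimal":
                return Decimal.Parse(value, CultureInfo.InvariantCulture);
            case "x:dateTime":
                return XmlConvert.ToDateTime(value, XmlDateTimeSerializationMode.RoundtripKind);
            case "x:boolean":
                return XmlConvert.ToBoolean(value);
            case "x:base64Binary":
                return Convert.FromBase64String(value);
            default:
                return value; //x:string and anything unrecognized
        }
    }
}
```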

All we did here was add a Dictionary<string, object> property called PropertyBucket that holds the extra data we want to associate with our T instance.  Then, in the getter and setter for the XElement[] property called Attributes, we add those values into our array of XElement on serialization and pull them back out into the Dictionary on deserialization.  With this simple addition, we have fixed our inflexibility problem.  We are still limited to simple scalar types, but in many cases you can work around that by decomposing objects far enough to be able to recreate them later.

The Add<K> method is a convenience only as we could operate directly against the Dictionary.  I also could have chosen to keep the Dictionary property bucket private and not expose it.  That would have worked just fine for serialization, but I wanted to also be able to query it later.
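Being able to query the bucket is what makes the recomposition step possible.  A minimal, self-contained sketch of that step, using stand-in types (FlexEntity and ProjectsFor are my own names, not part of the library):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

//stand-in for a flexible entity: a kind name plus a bag of properties
class FlexEntity
{
    public string Kind;
    public Dictionary<string, object> Properties = new Dictionary<string, object>();
}

static class RecomposeSketch
{
    //stitch the decomposed Project entities back to their Employee using the
    //RelatedEmployee value we stored in the property bucket
    public static FlexEntity[] ProjectsFor(IEnumerable<FlexEntity> container, string employeeId)
    {
        return container
            .Where(e => e.Kind == "Project"
                && e.Properties.ContainsKey("RelatedEmployee")
                && (string)e.Properties["RelatedEmployee"] == employeeId)
            .ToArray();
    }
}
```

The real library would express the same filter server-side where possible, but the idea is the same: the flexible property carries the relationship that the CLR object graph could not.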

In my last post, I said I would introduce a library where all this code is coming from, but I didn't realize at the time how long this post would be and that I still need to cover querying.  So... next time, I will finish up this series by explaining how the strongly typed query model works and how all these pieces fit together to recompose the data back into objects (and release the library).

Tuesday, 17 June 2008

Working with Objects in SSDS Part 1

Last time we talked about SQL Server Data Services and serializing objects, we discussed how easy it was to use the XmlSerializer to deserialize objects using the REST interface.  The problem was that when we serialized objects using the XmlSerializer, it left out the xsi type declarations that we needed.  I gave two possible solutions to this problem - one that used the XmlSerializer and 'fixed' the output after the fact, and the other built the XML that we needed using XLINQ and Reflection.

Today, I am going to talk about a third technique that I have been using lately that I like better.  It uses some of the previous techniques and leverages a few tricks with XmlSerializer to get what I want.  First, let's start with a POCO (plain ol' C# object) class that we would like to use with SSDS.

public class Foo
{
    public string Name { get; set; }
    public int Size { get; set; }
    public bool IsPublic { get; set; }
}

In its correctly serialized form, it looks like this on the wire:

<Foo xmlns:s="">
  <s:Id>myfoo</s:Id>
  <s:Version>1</s:Version>
  <Name xsi:type="x:string">My Foo</Name>
  <Size xsi:type="x:decimal">10</Size>
  <IsPublic xsi:type="x:boolean">false</IsPublic>
</Foo>

You'll notice that we have the additional system metadata attributes "Id" and "Version" in the markup.  We can account for the metadata attributes by doing something cheesy like deriving from a base class:

public abstract class Cheese
{
    public string Id { get; set; }
    public int Version { get; set; }
}

However this is very unnatural as our classes would all have to derive from our "Cheese" abstract base class (ABC).

public class Foo : Cheese
{
    public string Name { get; set; }
    public int Size { get; set; }
    public bool IsPublic { get; set; }
}

Developers familiar with remoting in .NET should be cringing right now as they remember the hassles associated with deriving from MarshalByRefObject.  In a world without multiple inheritance, this can be painful.  I want a model where I can use arbitrary POCO objects (redundant, yes I know) and not be forced to derive from anything or do what I would otherwise term unnatural acts.

What if instead, we derived a generic entity that could contain any other entity?

public class SsdsEntity<T> where T : class
{
    string _kind;

    public SsdsEntity() { }

    [XmlElement(Namespace = @"")]
    public string Id { get; set; }

    public string Kind
    {
        get
        {
            if (String.IsNullOrEmpty(_kind))
                _kind = typeof(T).Name;
            return _kind;
        }
        set { _kind = value; }
    }

    [XmlElement(Namespace = @"")]
    public int Version { get; set; }

    public T Entity { get; set; }
}

In this case, we have simply wrapped the POCO that we care about in a class that knows about the specifics of the SSDS wire format (or more accurately could serialize down to the wire format).

This SsdsEntity<T> is easy to use and provides access to the strongly typed object via the Entity property.
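For example, with condensed copies of the Foo and SsdsEntity<T> classes from above (the values here are hypothetical):

```csharp
using System;

public class Foo
{
    public string Name { get; set; }
    public int Size { get; set; }
    public bool IsPublic { get; set; }
}

public class SsdsEntity<T> where T : class
{
    string _kind;
    public string Id { get; set; }
    public int Version { get; set; }
    public T Entity { get; set; }
    public string Kind
    {
        get
        {
            if (String.IsNullOrEmpty(_kind))
                _kind = typeof(T).Name;
            return _kind;
        }
        set { _kind = value; }
    }
}

class Demo
{
    static void Main()
    {
        //wrap a Foo instance for the trip to SSDS
        var entity = new SsdsEntity<Foo>
        {
            Id = "myfoo",
            Entity = new Foo { Name = "My Foo", Size = 10, IsPublic = false }
        };

        Console.WriteLine(entity.Entity.Name); //strongly typed access
        Console.WriteLine(entity.Kind);        //"Foo" - inferred from T
    }
}
```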


Now, we just have to figure out how to serialize the SsdsEntity<Foo> object and we know that the metadata attributes are taken care of and our original POCO object that we care about is included.  I call it wrapping POCOs in a thin SSDS veneer.

The trick to this is to add a bucket of XElement objects on the SsdsEntity<T> class that will hold our public properties on our class T (i.e. 'Foo' class).  It looks something like this:

[XmlAnyElement]
public XElement[] Attributes
{
    get
    {
        //using XElement is much easier than XmlElement to build
        //take all properties on object instance and build XElement
        var props = from prop in typeof(T).GetProperties()
                    let val = prop.GetValue(this.Entity, null)
                    where prop.GetSetMethod() != null
                    && allowableTypes.Contains(prop.PropertyType)
                    && val != null
                    select new XElement(prop.Name,
                        new XAttribute(Constants.xsi + "type", xsdTypes[prop.PropertyType]),
                        EncodeValue(val));

        return props.ToArray();
    }
    set
    {
        //wrap the XElement[] with the name of the type
        var xml = new XElement(typeof(T).Name, value);

        var xs = new XmlSerializer(typeof(T));

        //xml.CreateReader() cannot be used as it won't support base64 content
        XmlTextReader reader = new XmlTextReader(new StringReader(xml.ToString()));

        this.Entity = (T)xs.Deserialize(reader);
    }
}

In the getter, we use Reflection and pull back a list of all the public properties on the T object and build an array of XElement.  This is the same technique I used in my first post on serialization.  The 'allowableTypes' object is a HashSet<Type> that we use to figure out which property types we can support in the service (DateTime, numeric, string, boolean, and byte[]).  When this property serializes, the XElements are simply added to the markup.

The EncodeValue method shown is a simple helper method that correctly encodes string values, boolean, dates, integers, and byte[] values for the attribute.  Finally, we are using a helper method that returns from a Dictionary<Type,string> the correct xsi type for the required attribute (as determined from the property type).
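The post doesn't show EncodeValue itself, but a sketch of it might look like this (my own approximation of the helper described above):

```csharp
using System;
using System.Globalization;
using System.Xml;

static class EncodingHelpers
{
    //produces the wire text for a supported CLR value
    public static string EncodeValue(object val)
    {
        if (val is byte[])
            return Convert.ToBase64String((byte[])val);
        if (val is bool)
            return XmlConvert.ToString((bool)val); //"true"/"false", not "True"
        if (val is DateTime)
            return XmlConvert.ToString((DateTime)val, XmlDateTimeSerializationMode.RoundtripKind);
        return Convert.ToString(val, CultureInfo.InvariantCulture);
    }
}
```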

For deserialization, the [XmlAnyElement] attribute causes all unmapped elements (in this case, all non-system-metadata properties) to be collected into a collection of XElement.  When we deserialize, we simply wrap an enclosing element around this XElement collection, which is exactly what we need to deserialize T.  This is shown in the setter implementation.

It might look a little complicated, but now simple serialization will just work via the XmlSerializer.  Here is one such implementation:

public string Serialize(SsdsEntity<T> entity)
{
    //add a bunch of namespaces and override the default ones too
    XmlSerializerNamespaces namespaces = new XmlSerializerNamespaces();
    namespaces.Add("s", Constants.ns.NamespaceName);
    namespaces.Add("x", Constants.x.NamespaceName);
    namespaces.Add("xsi", Constants.xsi.NamespaceName);

    var xs = new XmlSerializer(typeof(SsdsEntity<T>),
        new XmlRootAttribute(typeof(T).Name));

    XmlWriterSettings xws = new XmlWriterSettings();
    xws.Indent = true;
    xws.OmitXmlDeclaration = true;

    using (var ms = new MemoryStream())
    {
        using (XmlWriter writer = XmlWriter.Create(ms, xws))
        {
            xs.Serialize(writer, entity, namespaces);
            ms.Position = 0; //reset to beginning

            using (var sr = new StreamReader(ms))
            {
                return sr.ReadToEnd();
            }
        }
    }
}

Deserialization is even easier since we are starting with the XML representation and don't have to build a Stream in memory.

public SsdsEntity<T> Deserialize(XElement node)
{
    var xs = new XmlSerializer(typeof(SsdsEntity<T>),
        new XmlRootAttribute(typeof(T).Name));

    //xml.CreateReader() cannot be used as it won't support base64 content
    XmlTextReader reader = new XmlTextReader(new StringReader(node.ToString()));
    return (SsdsEntity<T>)xs.Deserialize(reader);
}

If you notice, I am using an XmlTextReader to pass to the XmlSerializer.  Unfortunately, the XmlReader from XLINQ does not support handling of base64 content, so this workaround is necessary.

At this point, we have a working serializer/deserializer that can handle arbitrary POCOs.  There are some limitations of course:

  • We are limited to the same datatypes that SSDS supports.  This also means nested objects and arrays are not directly supported.
  • We have lost a little of the 'flexible' in the Flexible Entity (the E in the ACE model).  We now have a rigid schema defined by SSDS metadata and T public properties and enforced on our objects.

In my next post, I will attempt to address some of those limitations and I will introduce a library that handles most of this for you.

Sunday, 01 June 2008

Where has all the bandwidth gone?

According to a report released by Akamai (of CDN fame), Washington is the slowest state in the union when it comes to broadband.  More appalling however is the USA's positioning with respect to the rest of the world.  For the country that invented the internet (sorry Al), we are sure a long way behind our European and Asian counterparts, ranking only 24th for greater than 2Mbps broadband penetration.

What is the problem here?  Why are we so far behind?  More troubling, why has the cost of communication services like internet access gone up year over year with no appreciable increase in either service quality or speed?  The same cable companies that begged to be released from regulation - citing that a free market would increase competition and lower prices - have instead consolidated, reduced choice, and increased rates.

Let's take a look at what speeds you typically get from your crappy cable provider.  For an astonishing ~$50/month, you can order from your local cable provider a 6Mbps/768Kbps package.  In some markets, you can order higher speed packages for significantly higher rates.  Of course, in the markets that offer those higher speed tiers you will likely run into a cap:

An MSO talking 100 Mbit/s out of one side of its mouth and usage caps out the other is like a bi-polar buffet restaurateur. They continue adding more entrees to an all-you-can-eat spread, and then reduce the size of the plates and tell diners they only have 10 minutes to chow. It's a recipe for dissatisfaction. The buffet looks bigger and tastier – so the patron's hunger grows – and then they are asked to practice portion control. [source]

Most people I know that have cable (all of them, in fact) use the standard plan because the higher speed plans are so much more expensive.  The plans have to be - if they were cheap, people would buy them, and then the cable companies could never meet the SLAs for their customers.  So, let's deal with the average for now.

The part that gets most people is that the bandwidth numbers provided by the cable companies don't mean much for everyday surfing and downloading.  Quick: how does 768Kbps of upload speed translate to your Bittorrent client on Comcast?[1]  How fast can you download the latest SP3 for XP on your 6Mbps connection?  Without doing the math, no one really knows - we are talking about two different units here.
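The conversion itself is trivial once you see it: divide the advertised (kilo)bits per second by 8 to get the kilobytes per second a download dialog reports.  A quick sanity check:

```csharp
using System;

class Bandwidth
{
    //advertised speeds are in (kilo)bits per second; download dialogs show KB/s
    public static double KBps(double kbps)
    {
        return kbps / 8.0;
    }

    static void Main()
    {
        Console.WriteLine(KBps(6 * 1024)); //6Mbps download tops out at 768 KB/s
        Console.WriteLine(KBps(768));      //768Kbps upload tops out at 96 KB/s
    }
}
```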

What most folks are used to seeing is the browser download progress window:


This measurement (usually in KB/s) is what most people can identify with to truly understand how fast they can download something.  On a 6Mbps connection, your theoretical max download speed is only 768KB/s.  Of course, no one but no one gets that max speed.  It might burst in some markets for a bit, but with a huge pipe coming down (like Microsoft Download Center), you are lucky if you get 400KB/s max on most cable systems.

An astute reader will notice that I am not using a normal cable provider here in the screen shot.  In fact, at 1.24 MB/s, I am getting roughly 3x faster speeds than 'normal' cable.  This is because after suffering Comcast's incredibly slow and laggy connection long enough, I bit the bullet and ordered FiOS from Verizon.  I chose the 15Mbps/15Mbps package for roughly $70/month.  I am a heavy internet user (both up and down) and so far it has been worth every penny.

Here I am uploading all my MP3 collection to my backup provider (Mozy).  Previously on cable, I never dared doing it because I didn't want to tie up my connection for a few weeks at the measly 40KB/s.  With FiOS, I can do this easily.


So what is the problem with the status quo?  Unless hordes of users start to migrate to FiOS, cloud services in general will suffer for years to come.  The value of a cloud service like Mozy is directly proportional to the bandwidth that users can access to get to it.  Today, backing up only 30GB worth of data to cloud storage at 40KB/s would take over 9 days!  It is impractical for most people to leave a machine on for 9 days and especially tie up all the bandwidth in the house for the same period of time.  I know my parents would never do it.

If we look at some of the more interesting cloud services that could be offered, we see again that bandwidth (especially up) constrains the value of the service.  Unfortunately, I don't hold out a lot of hope for things to improve soon.  Certain classes of applications will be fine in the cloud (web sites, some services, and simple download-only services), while truly interactive, rich media, or game-changing services (imagine the Network PC, but for real now) will be hobbled for years.  We'll see...

What do you think?

[1] Trick question: Comcast blocks Bittorrent, so it is likely 0 KB/s instead of the theoretical 96 KB/s (max 40-50 KB/s in real use).

Wednesday, 28 May 2008

Serialization in SSDS

SQL Server Data Services returns data in POX (plain ol' XML) format.  If you look carefully at the way the data is returned, you can see that individual flex entities look somewhat familiar to what is produced from the XmlSerializer.  I say 'somewhat' because we have the data wrapped in this 'EntitySet' tag.

<s:EntitySet xmlns:s="">
  <PictureTag>
    <PictureId xsi:type="x:string">3a1714bc-8771-4f6c-8d16-93238f126d9f</PictureId>
    <TagId xsi:type="x:string">ab696b85-1bdc-4bed-8824-dfbf9b67b5cc</TagId>
  </PictureTag>
</s:EntitySet>

I am using the PictureTag from the PhluffyFotos sample application, but this could be any flexible entity.  If we extract the PictureTag element and children from the surrounding EntitySet, we can very easily deserialize this into a class.

Given a class 'PictureTag':

public class PictureTag
{
    public string Id { get; set; }
    [XmlElement(Namespace = "")]
    public int Version { get; set; }
    public string PictureId { get; set; }
    public string TagId { get; set; }
}

We can deserialize this class in just 3 lines of code:

string xml = @"<s:EntitySet xmlns:s="""">
                <PictureTag>
                  <PictureId xsi:type=""x:string"">3a1714bc-8771-4f6c-8d16-93238f126d9f</PictureId>
                  <TagId xsi:type=""x:string"">ab696b85-1bdc-4bed-8824-dfbf9b67b5cc</TagId>
                </PictureTag>
              </s:EntitySet>";

var xmlTag = XElement.Parse(xml).Element("PictureTag");

XmlSerializer xs = new XmlSerializer(typeof(PictureTag));
var tag = (PictureTag)xs.Deserialize(xmlTag.CreateReader());

Now, the 'tag' variable is a PictureTag instance.  As you can see, deserialization is a snap.  What about serialization, however?

If I reverse the process using the following code, you will notice that something has changed:

using (var ms = new MemoryStream())
{
    //add a bunch of namespaces and override the default ones too
    XmlSerializerNamespaces namespaces = new XmlSerializerNamespaces();
    namespaces.Add("s", @"");
    namespaces.Add("x", @"");
    namespaces.Add("xsi", @"");

    XmlWriterSettings xws = new XmlWriterSettings();
    xws.Indent = true;
    xws.OmitXmlDeclaration = true;

    using (XmlWriter writer = XmlWriter.Create(ms, xws))
    {
        xs.Serialize(writer, tag, namespaces);
        ms.Position = 0; //reset to beginning

        using (var sr = new StreamReader(ms))
        {
            xmlTag = XElement.Parse(sr.ReadToEnd());
        }
    }
}

If I look in the 'xmlTag' XElement, I get somewhat different XML back:

<PictureTag>
  <PictureId>3a1714bc-8771-4f6c-8d16-93238f126d9f</PictureId>
  <TagId>ab696b85-1bdc-4bed-8824-dfbf9b67b5cc</TagId>
</PictureTag>

I lost the 'xsi:type' attributes that I need in order to signal to SSDS how to treat each type.  Bummer.

We can manually add the attributes (fix-up) after the serialization.  Let's see how that would work:

XNamespace xsi = @"";
XNamespace ns = @"";

//xmlTag is the XElement holding our serialized entity
var nodes = xmlTag.Descendants();
foreach (var node in nodes)
{
    if (node.Name != (ns + "Id") && node.Name != (ns + "Version"))
    {
        node.Add(
            new XAttribute(
                xsi + "type",
                GetAttributeType(node.Name.LocalName.ToString(), typeof(PictureTag))));
    }
}

We need to loop through each node and set the 'xsi:type' attribute appropriately.  Here is my quick and dirty implementation:

static Dictionary<Type, string> xsdTypes = new Dictionary<Type, string>()
{
    {typeof(string), "x:string"},
    {typeof(int), "x:decimal"},
    {typeof(long), "x:decimal"},
    {typeof(float), "x:decimal"},
    {typeof(decimal), "x:decimal"},
    {typeof(short), "x:decimal"},
    {typeof(DateTime), "x:dateTime"},
    {typeof(bool), "x:boolean"},
    {typeof(byte[]), "x:base64Binary"}
};

private static string GetAttributeType(string name, Type type)
{
    var prop = type.GetProperty(name);

    if (prop != null)
    {
        if (xsdTypes.ContainsKey(prop.PropertyType))
            return xsdTypes[prop.PropertyType];
    }

    return xsdTypes[typeof(string)];
}

When all is said and done, I am back to what I need:

<PictureTag>
  <PictureId xsi:type="x:string">3a1714bc-8771-4f6c-8d16-93238f126d9f</PictureId>
  <TagId xsi:type="x:string">ab696b85-1bdc-4bed-8824-dfbf9b67b5cc</TagId>
</PictureTag>

However, I am not sure I really like this technique.  It seems like that if I am going to be using Reflection to 'fix-up' the XML from the XmlSerializer, I might as well just use it to build the entire thing.  With that in mind, here is the next implementation of SSDS Serialization:

public static XElement CreateEntity<T>(T instance, string id) where T : class, new()
{
    XNamespace ns = @"";
    XNamespace xsi = @"";
    XNamespace x = @"";

    if (instance == null)
        return null;

    if (String.IsNullOrEmpty(id))
        throw new ArgumentNullException("id");

    Type type = typeof(T);

    // Create an element for each non-system, non-binary property on the class
    var properties =
        from p in type.GetProperties()
        where xsdTypes.ContainsKey(p.PropertyType) &&
              p.Name != "Id" &&
              p.Name != "Version" &&
              !p.PropertyType.Equals(typeof(byte[])) &&
              p.GetValue(instance, null) != null
        select new XElement(p.Name,
                   new XAttribute(xsi + "type", xsdTypes[p.PropertyType]),
                   p.GetValue(instance, null));

    // Binary properties are special, since they must be serialized as Base-64
    var binaryProperties =
        from p in type.GetProperties()
        where p.PropertyType.Equals(typeof(byte[])) && (p.GetValue(instance, null) != null)
        select new XElement(p.Name,
                   new XAttribute(xsi + "type", xsdTypes[p.PropertyType]),
                   Convert.ToBase64String((byte[])p.GetValue(instance, null)));

    // Construct the Xml
    var xml = new XElement(type.Name,
        new XElement(ns + "Id", id), //here is the Id element
        new XAttribute(XNamespace.Xmlns + "s", ns),
        new XAttribute(XNamespace.Xmlns + "xsi", xsi),
        new XAttribute(XNamespace.Xmlns + "x", x),
        properties,
        binaryProperties);

    return xml;
}

In this case, we are using Reflection to build a list of Properties in the object and depending on the type (byte[] array is special), we build the XElement ourselves and assemble the entity by hand.  We can use it like this:

XElement entity = CreateEntity<PictureTag>(tag, tag.Id);

Of course, there are a number of other techniques that I am not covering in this already very long post.  Perhaps in my next post we will look at a few others.

Wednesday, 07 May 2008

The Business Value of SQL Server Data Services

In my second video of a planned series (final number yet to be determined), I interviewed Tudor Toma (Group Program Manager) and Soumitra Sengupta (SSDS Architect) about the business value of SSDS.

Watch the Video

One point that I want to emphasize is that from a business perspective, SSDS tackles two of the biggest pain points companies deal with today when creating new IT systems: capital expenditures (CapEx) and operational expenditures (OpEx).  Beyond what the software costs to create, you need to think about how much hardware or hosting will cost.  You have to plan capacity and guess the load the system will generate.  Next, you have to figure out how many support personnel will be required to maintain the system.  Someone has to maintain the software, repair machines when they go down, defrag, patch, update, etc.

One of the greatest business values (as opposed to technical values) you get from SSDS (and cloud services in general) is the ability to redeploy the capital to other resources.  In the case of CapEx, you can deploy this perhaps to marketing or content creation.  In the case of OpEx, those expenses can be redeployed to more useful tasks in the enterprise (creating new systems, upgrading, and expanding operations perhaps?).  While there are opportunities to just save this money, I think more realistically the money is going to get spent on the other things that you wished you had time and resources for.

I think the value is easy to understand from a startup's perspective, but even enterprise users should be thinking about this.  Big-budget IT programs could easily be switched out to cloud services at a fraction of the cost of building out the support and infrastructure.  Money that would otherwise be spent on disaster recovery (you do DR planning, right?) can be re-purposed or saved.  The days of multi-million dollar IT investments are numbered in a lot of cases, in my opinion.  I can still see a few cases where they would be necessary depending on a company's core value proposition, but for the majority, it just doesn't make sense.

Wednesday, 09 April 2008

PhluffyFotos Sample Available

I just posted the first version of PhluffyFotos, our SQL Server Data Services (SSDS) sample app to CodePlex.  PhluffyFotos is a photo sharing site that allows users to upload photos and metadata (tags, description) to SSDS for storage.  As the service gets more features and is updated, the sample will be rev'd as well.

Points of interest that will likely also be blog posts in themselves:

  • This sample has a LINQ-to-SSDS provider in it.  You will notice we don't use any strings for queries, but rather lambda expressions.  I had a lot of fun writing the first version of this and I would expect that there are a few more revisions here to go.  Of course, Matt Warren should get a ton of credit here for providing the base implementation.
  • This sample also uses a very simplistic ASP.NET Role provider for SSDS.  Likely updates here will include encryption and hashing support.
  • We have a number of PowerShell cmdlets included for managing authorities and containers.

I have many other ideas for this app as time progresses, so you should check back from time to time to see the updates.

In case anyone was wondering about the name: clouds are fluffy... get it?

You need to have SSDS credentials to run this sample.  If you don't have credentials yet, you can view an online version in the meantime.

Even if you don't have SSDS credentials yet, the code is worth a look.

Monday, 31 March 2008

Interested in SQL Server Data Services?

Are you a Ruby on Rails (RoR) shop?  PHP shop?  Java shop? Are you a web startup using open source technologies to build your services?

Great news: we have a limited number of seats available for folks like you to get firsthand exposure to our new HTTP-based (SOAP and REST) data service:  SQL Server Data Services (SSDS).  You will get early access to the service and the chance to influence the service itself.

We are interested in getting some feedback from people like you that don't necessarily use Microsoft technologies.  If you can connect using HTTP, you can use SSDS, so there are very few client limitations here.
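To make "if you can connect using HTTP, you can use SSDS" concrete, here is a rough sketch in Python of addressing a resource and preparing a GET request. The host, authority, container, and entity names below are placeholders of my own, not real SSDS endpoints; any stack (Ruby, PHP, Java) would do the same thing with its own HTTP client.

```python
from urllib.request import Request

# Placeholder names -- substitute whatever the service provisions for you.
AUTHORITY = "myauthority"
CONTAINER = "books"
ENTITY_ID = "book-123"

# Hypothetical resource URI following the authority/container/entity pattern.
uri = f"https://{AUTHORITY}.example-ssds.net/v1/{CONTAINER}/{ENTITY_ID}"

# Build (but don't send) the GET request; this is all plain HTTP.
req = Request(uri, method="GET")

print(req.get_method())   # GET
print(req.full_url)
```

Nothing here is .NET-specific: the entire client surface is a URI and an HTTP verb.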

How to get involved

If you are one of those non-.NET developers who sees value in utility storage and query processing, send an email to dpesdr (AT)

In your email, please tell me about your company and about your product where you think SSDS might fit.  We will review your email and follow up with more information.


When: April 24-25th, 2008
Where:  Microsoft Silicon Valley Campus 


This is a free event but seating is limited - you must have a confirmed reservation to attend this event.  Attendees are responsible for any transportation or lodging costs.  Microsoft will provide breakfast, lunch, and light snacks during the event.  You are responsible for your own dinner expenses.

About SQL Server Data Services

SQL Server Data Services (SSDS) is a highly scalable, web-facing data storage and query processing utility. Built on robust SQL Server database technology, these services provide high availability and security and support standards-based web protocols and interfaces (SOAP, REST) for rapid provisioning and ease of programming. Businesses can store and access all types of data from birth to archival, and users can access information on any device, from the desktop to a mobile device.

Key Features and Solution Benefits

Application Agility for quick deployment
•    Internet standard protocols and Interfaces (REST, SOAP).
•    Flexible data model with no schema required.
•    Simple text-based query model.
•    Easy to program to from any programming environment.
On-Demand Scalability
•    Easy storage and access. Pay-as-you-grow model.
•    Scales as data grows.
•    Web services for provisioning, deployment, and monitoring.
Business-Ready SLA
•    Built on robust Microsoft SQL Server database and Windows server technologies.
•    Store and manage multiple copies of the data for reliability and availability.
•    Back up data stored in each data cluster. Geo-redundant data copies to ensure business continuity.
•    Secure data access to help provide business confidentiality and privacy.

Thursday, 13 March 2008

SSDS Query Model

Note: The query model is subject to change based on feedback; this is how it stands today.  You can pre-register for the beta at the SSDS home page.

Design Decisions

In this post, I am going to cover how the query model works in SQL Server Data Services (SSDS) today and some of the design goals of SSDS in general.

The first thing to understand is that the SSDS team made a conscious decision to start simple and expand functionality later.  The reasoning behind this is simple:

  • Simple services that lower the bar to get started are easier to adopt.  We want to offer the smallest useful feature set that developers can start using to build applications.
  • At the same time, we want to make sure that every feature that is available will scale appropriately at internet-level.

As such, the team chose the right model:  start simple and expose richer and richer functionality as we prove out the scale and developers' needs.  The team is committed to short (8 week) development cycles that prioritize features based on feedback.

The Query Model

Now that I have covered the design decisions, let's take a look at how the query model actually operates.  From my last post, you see that I already showed you the syntax that begins the query operation (?q=).  What is important to understand is the following:

  • The widest scope for any search is the Container.

Well... almost, but I will get to that.  To put this another way: you cannot retrieve entity objects today by searching at the authority level.  That's right:  there is no cross-container search.  This might change in the future, but that is how it is today.  For developers familiar with LDAP terminology, this roughly equates to a SearchScope.OneLevel operation.  The syntax again is:


It is important to note that there is no trailing forward slash after the container Id in that URI.

Now, what did I mean by "almost"?  If you saw my last post, I showed the following:


This would imply query capabilities, right?  Well, it turns out that you can query at the Authority level, but you can only query Container objects (not the entities contained in them).  Since a Container is a simple entity with only Id and Version today, it is of limited usefulness to query at this level.  However, if we were to add customizable metadata support to attach to a Container, then query might become much more interesting (e.g. find all containers where attribute "foo" equals "bar").


The SSDS team decided to adopt a LINQ-like syntax that is carried on the query string.  It has the basic format of the following:

from e in entities {where clause} select e

Only the part inside the {}'s is modifiable*.  We can infer the following from this syntax:  One, you can perform only simple selects today.  There are no joins, group by, ordering, etc.  Two, there is no projection today.  There is only the return of the complete entity as the entity is both the unit of storage (POST and PUT) as well as the unit of return (GET).

Now, let's inspect the "{where clause}".  The syntax here in more detail is:

where {property} {operator} {constant}


The '{property}' part of the expression can operate over both system properties (Id, Kind) as well as flexible properties (anything you added).  The syntax is slightly different depending on which it is.  For example:


In the case of the system properties, we use the dot notation and the name of the system property.  The custom flex properties are addressed using the bracket [] syntax in a weakly-typed way.  This makes sense, of course, as there is no way we could know the shape of a schema-less entity ahead of time.


The operators ({operator}) that are supported are: ==, <, >, >=, <=, !=


Finally, for the '{constant}', we have just that - a constant.  We do not currently support other expressions or other properties here.  As an example, the following is invalid:

e["Pages"] > e["AvgPages"]

while, this would be perfectly valid:

e["Pages"] > 300

The type of the constant is inferred from the syntax.  Using the wrong type may simply net you zero results, so keep this in mind.  Here are some simple examples that show how to format the constant:

e["xsi_decimal_type"] > 300
e["xsi_string_type"] != "String"
e["xsi_boolean_type"] == true
e["xsi_dateTime_type"] == DateTime("2008-02-09T07:45:23.855")
e["xsi_base64Binary_type"] == Binary("AB34CD")
e.Kind == "String"
e.Id == "String"
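A small helper makes the type-to-literal mapping above concrete. This is an illustrative sketch, not an official client library; in particular, the hex encoding I use for the Binary(...) literal is my reading of the "AB34CD" example above and is an assumption.

```python
from datetime import datetime

def format_constant(value):
    """Render a Python value as an SSDS query constant (illustrative only)."""
    if isinstance(value, bool):            # must check bool before int
        return "true" if value else "false"
    if isinstance(value, (int, float)):    # maps to xsi decimal
        return str(value)
    if isinstance(value, str):             # string constants are quoted
        return '"%s"' % value
    if isinstance(value, datetime):        # DateTime("...") literal
        return 'DateTime("%s")' % value.isoformat()
    if isinstance(value, bytes):           # Binary("...") literal (hex assumed)
        return 'Binary("%s")' % value.hex().upper()
    raise TypeError("unsupported constant type: %r" % type(value))
```

With this in hand, `format_constant(300)` yields `300` and `format_constant("String")` yields `"String"`, matching the examples above.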

The last point here in this syntax is that we can tie together multiple expressions using the && (AND), || (OR), and ! (NOT) logical operators.  Precedence can be set using parentheses ().


Results are limited to 500 per GET operation.  This is an arbitrary number right now, so don't depend on it.  The EntitySet that is returned is implicitly ordered by Id, so you would need to use some simple loop logic to page through larger sets of data.  Something to the effect of:

from e in entities where e.Id > "last ID seen" select e
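That paging loop can be sketched as follows. Since we cannot hit the live service here, `fetch_page` simulates the server-side query against an in-memory list; the shape of the loop (track the last Id seen, re-query, stop on an empty page) is the part that carries over.

```python
def fetch_page(all_entities, last_id, page_size=500):
    """Stand-in for GET ?q=from e in entities where e.Id > "last" select e.
    The real service evaluates the query server-side; here we simulate it."""
    matching = [e for e in sorted(all_entities, key=lambda e: e["Id"])
                if e["Id"] > last_id]
    return matching[:page_size]

def fetch_all(all_entities, page_size=500):
    """Page through the whole set using the implicit ordering by Id."""
    results, last_id = [], ""
    while True:
        page = fetch_page(all_entities, last_id, page_size)
        if not page:
            break
        results.extend(page)
        last_id = page[-1]["Id"]   # remember the last Id seen
    return results
```

The same loop works unchanged whether a page holds 500 entities or, as in a test, 3.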

Pulling It Together

Since the query is submitted on the querystring of the URL, we need to encode it using a URL encoding function.  It is actually easiest to use a UriTemplate from .NET 3.5, which does all the right things for you.

Here is a properly formatted GET query (the authority and container portion of the URL is omitted here) that you can type into a browser to retrieve a set of entities:

?q=from+e+in+entities+where+e["Pages"]+>+300+&&+e.Kind=="Bookv2"+select+e

In this case, I am asking for Entities of the Kind "Bookv2" that have more than 300 (numeric) pages (from "Pages" flex property).  Simple, right?
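On a non-.NET stack, the equivalent of that helper is a standard URL-encoding call. Here is a sketch in Python; the host name is a placeholder of my own, not a real endpoint.

```python
from urllib.parse import quote

# The raw query, exactly as the SSDS grammar describes it.
query = 'from e in entities where e["Pages"] > 300 && e.Kind == "Bookv2" select e'

# Percent-encode everything: spaces, quotes, brackets, and '&&' all need escaping.
encoded = quote(query, safe="")

# Placeholder authority/container URL -- substitute your own provisioned names.
url = "https://myauthority.example-ssds.net/v1/books?q=" + encoded
print(url)
```

Any language with a URL-encoding routine (Ruby's `CGI.escape`, PHP's `rawurlencode`) can produce the same request.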

The actual format that is returned is POX today (support for JSON and APP coming).  It would look something like this:

<s:EntitySet xmlns:s="">
  <Bookv2>
    <Name xsi:type="x:string">My Special Book 423</Name>
    <ISBN xsi:type="x:string">ISBN 423</ISBN>
    <PublishDate xsi:type="x:dateTime">2008-02-09T07:43:51.13625</PublishDate>
    <Pages xsi:type="x:decimal">400</Pages>
  </Bookv2>
</s:EntitySet>

The EntitySet tag wraps around one or many (in this case one) XML nodes that are of Kind "Bookv2".  As a developer, I need to interpret this XML and read back the results.
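Reading the POX payload back is ordinary XML parsing in any language. Here is a sketch using Python's ElementTree against a simplified stand-in payload (the real response carries SSDS namespaces and xsi:type attributes, which I omit for brevity):

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for the POX response above (namespaces omitted).
pox = """<EntitySet>
  <Bookv2>
    <Name>My Special Book 423</Name>
    <ISBN>ISBN 423</ISBN>
    <Pages>400</Pages>
  </Bookv2>
</EntitySet>"""

root = ET.fromstring(pox)

# Each child of EntitySet is one entity; its tag is the Kind,
# and its children are the flex properties.
books = [{child.tag: child.text for child in entity} for entity in root]

print(books[0]["Name"])   # My Special Book 423
```

In a real client you would also read the xsi:type attribute on each property to deserialize it into the right type.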

The last point here is that this is the entire entity.  As a consumer of this service, I need to think about how my entities will be used.  Since there is no projection and only a full entity is returned, it may make sense to break apart larger entities with commonly queried properties and leave larger binary properties (and later blobs) in separate entities.  You don't want to pay for the transfer cost or the performance hit of transferring multi-MB (or GB) entities when you only want to read a single flex property from it.  I envision people will start to make very light and composable entities to deal with this.

To anyone wondering what "Sitka" is in the namespaces or the URI... it is the old code name for SSDS.  That will more than likely change with the next refresh.

Limitations Today

The queries are fairly simple.  There is no capability for cross-container queries or joins of any type today.  There is no group-by or order-by functionality, and no "LIKE", Contains, BeginsWith, or EndsWith support.

I have to stress however, that this is a starting point for the SSDS query API, not the final functionality.  I will of course update this blog with the new functionality as it rolls into the service.  Again, the team decided that it was better to put a simple and approachable service out there today and gather feedback on what works and what doesn't for specific scenarios than to sit back and code a bunch of functionality that might not be necessary or meet the user's needs.  I think this was a good decision and there is an amazing variety of applications that you can build using just this API.

* - not quite... turns out you can change the 'e' to anything you like as long as you are consistent in reference, but that hardly counts as changing the query.

Monday, 10 March 2008

SQL Server Data Services Interview

My interview with Istvan Cseri from MIX08 is online now.  Istvan covers why we should care about SSDS and how it will impact developers.  Check it out.

Thursday, 06 March 2008

Entities, Containers, and Authorities

SQL Server Data Services exposes what we call the 'ACE' concept.  This stands for Authority, Container, and Entity.  These basic concepts are the building blocks with which you build your applications using SSDS.

You will notice that I intentionally switched the order in my blog post title.  The reasoning is that I believe it is easier to understand the model when you learn the entity first.

Flexible Entities

At the core of it all is the idea of a flexible entity.  SSDS does not impose a schema on the shape of your data.  You are free to define whatever attributes you like with the model and choose some simple types to help you query that data later (string, number, boolean, datetime, or byte[]).  Consider the following C# class and how it will be represented on the wire (using the REST API).

public class Book
{
    public string Name { get; set; }
    public string ISBN { get; set; }
    public DateTime PublishDate { get; set; }
    public int Pages { get; set; }
    public byte[] Image { get; set; }
}
In order to store an instance of this Book class, we would need to serialize it on the wire like so (again this is for REST, not SOAP):
<Book xmlns:s="">
  <Name xsi:type="x:string">Some Book</Name>
  <ISBN xsi:type="x:string">1234567</ISBN>
  <PublishDate xsi:type="x:dateTime">2008-03-06T12:56:53.122-08:00</PublishDate>
  <Pages xsi:type="x:decimal">350</Pages>
  <Image xsi:type="x:base64Binary">CgsMDQ==</Image>
</Book>

A couple of things to notice about this XML: there is an attribute called Id.  This is one of the required system-level attributes.  Along with the container and authority Id, this Id will form the basis of the unique URI that represents this resource.  The other system attributes are Kind and Version.  We control the Id and Kind, but the Version is maintained by SSDS (which is why we don't need to send it initially).  For the REST API, the Kind equates to the root XML element for the entity (in this case, the Kind is "Book").

Flexible means flexible

I believe that most developers will use SSDS with a strongly-typed backing class like Book.  The serialization and subsequent deserialization of the instance is up to you as a developer, however.

Nothing forces me to actually use a backing class for this service, though.  It can be helpful for developers to think of the data they store in SSDS in terms of objects or even rows in a database, but that is not the only way to think about it.  More accurately, we can think of the data as a sparse property bag.

I can just as easily store another Book object with the following:

<Book xmlns:s="">
  <Name xsi:type="x:string">Some Book</Name>
  <ISBN xsi:type="x:string">1234567</ISBN>
  <PublishDate xsi:type="x:string">January 2008</PublishDate>
</Book>

Notice in this example that I have removed some properties and then changed the PublishDate property type.  This is completely legal when using SSDS and no error will occur for different properties on the same Kind.  It is up to you as a developer to figure out the shape of the data coming back and deserialize it appropriately.

As you can see, a flexible entity is really about having different shapes to your data (and no schema).
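The sparse property bag idea can be made concrete with a sketch that serializes a plain dictionary into a Book-style fragment. The type mapping and the omission of namespaces here are simplifications of my own, not the actual wire contract:

```python
def to_pox(kind, props):
    """Serialize a property bag to a POX fragment (illustrative only).
    The Kind becomes the root element; each dict entry becomes a flex
    property with an inferred xsi:type."""
    xsi = {str: "x:string", int: "x:decimal", float: "x:decimal"}
    lines = ["<%s>" % kind]
    for name, value in props.items():
        t = xsi.get(type(value), "x:string")
        lines.append('  <%s xsi:type="%s">%s</%s>' % (name, t, value, name))
    lines.append("</%s>" % kind)
    return "\n".join(lines)

# Two entities of the same Kind with different shapes -- perfectly legal.
print(to_pox("Book", {"Name": "Some Book", "Pages": 350}))
print(to_pox("Book", {"Name": "Some Book", "PublishDate": "January 2008"}))
```

Notice that nothing constrains the two dictionaries to share properties or types; that is exactly the flexibility the entity model provides.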



Containers

Containers are collections of entities.  We like to say that the container is the unit of consistency for our data.  Containers are also the broadest domain of a query or search.  As such, a container must hold a complete copy of all the entities within it (that is, they must be co-located on the same node).  This further means that there must be some practical limit to the size of a container (as measured by space on disk) at some point, because we are constrained by physical storage (and to some extent memory and CPU).

We don't have a hard limit at this point except to say it must exist, but eventually if your container gets big enough, you should consider partitioning it for resource efficiency reasons.
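A partitioning scheme can be as simple as hashing the entity Id to pick a container. This is a sketch of the idea only (the container names are placeholders); keep in mind that cross-container queries are not supported, so you should partition on a key you never need to query across.

```python
import hashlib

def pick_container(entity_id, containers):
    """Deterministically map an entity to one of several containers
    by hashing its Id.  The same Id always lands in the same container,
    so point lookups stay cheap while data spreads evenly."""
    digest = hashlib.md5(entity_id.encode("utf-8")).hexdigest()
    return containers[int(digest, 16) % len(containers)]

containers = ["books-0", "books-1", "books-2"]
print(pick_container("book-123", containers))
```

Because the mapping is deterministic, a reader can recompute the target container from the Id alone without any lookup table.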


Because containers are another type of entity (albeit a somewhat special one), they also have the system attribute Id.  If we look at what a container looks like, you will see that it is a simple XML structure with no custom properties (though that could change).

<s:Container xmlns:s=""
In this case, the container here is called "Books".  The Version attribute comes back from SSDS to tell me what version of the entity I am looking at.  It can be used for simple concurrency (more on that in another post).
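The Version-based concurrency idea can be sketched with an in-memory stand-in for the service. This is illustrative only; the exact mechanics SSDS exposes may differ, but the check-then-bump pattern is the essence of optimistic concurrency.

```python
class ConcurrencyError(Exception):
    pass

# In-memory stand-in for stored entities, keyed by Id.
store = {"books": {"Id": "books", "Version": 1}}

def update_entity(entity_id, expected_version, changes):
    """Apply an update only if the stored Version still matches the one
    the caller read earlier -- otherwise someone else changed it first."""
    current = store[entity_id]
    if current["Version"] != expected_version:
        raise ConcurrencyError("entity changed since it was read")
    current.update(changes)
    current["Version"] += 1   # the service bumps Version on every write
    return current
```

A client that reads Version 1, then tries to write after another writer has bumped it to 2, gets a conflict instead of silently clobbering the other update.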


Authorities

Finally, we have the idea of an authority.  This is a collection of containers co-located in a specific datacenter today (though this might not always remain true).  The authority is analogous to a namespace for .NET developers in the sense that it scopes the containers within it.  There is a DNS name attached to the authority that we use when we address a resource using REST.  This DNS name is provisioned for a particular user and added as a subdomain off the main SSDS domain name.



Pulling it together

Now that we have covered what ACE means, let's pull it together and see how it affects our URI that we build to address a resource.


This builds a stable URI for us given an authority name, container Id, and entity Id.

I don't always need all three identifiers to address a resource; I can also query to find resources.  For instance, I can query the contents of an authority (the containers) by targeting the authority with a GET request:


More commonly, I can query the contents of a container (the entities) by targeting the container URI with a GET request:


In this model, you will notice the ?q= portion of the URI.  This is the indicator that we want a query.  I am not specifying one here, so it acts more like a SELECT *.  In a later post, I will cover the query model in more detail.
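The addressing patterns in this post can be collected into a few helper functions. The host suffix below is a placeholder of my own for whatever DNS name the service actually provisions; note that the container query form has no trailing slash before ?q=, per the query post.

```python
def authority_uri(authority):
    """GET here returns the containers in the authority."""
    return "https://%s.example-ssds.net/v1/" % authority

def container_uri(authority, container_id):
    """GET here with a query returns entities; no trailing slash before ?q=."""
    return authority_uri(authority) + container_id + "?q="

def entity_uri(authority, container_id, entity_id):
    """Stable URI addressing a single entity."""
    return authority_uri(authority) + container_id + "/" + entity_id

print(entity_uri("myauthority", "books", "book-123"))
```

The nesting mirrors the ACE model directly: authority scopes container, container scopes entity.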

Wednesday, 05 March 2008

Introducing SQL Server Data Services

Finally!  I can start to talk about what I have been working on for the last three months.  Today, Ray Ozzie announced the release of SQL Server Data Services (SSDS).  SSDS is our cloud-based data storage and processing service.  Exposed over HTTP endpoints (REST or SOAP), SSDS delivers your data anywhere in a pay-as-you-go manner.  What makes SSDS fundamentally different from other storage services available today is that it is built on the SQL platform.  This allows us, over time, to expose richer and richer capabilities of the underlying platform.  Today, we have a set of simple query capabilities that still allow us to build quite sophisticated applications in a 'scale-free' manner.

Over the coming weeks and months, I will be blogging more about the shape of the service as well as introducing some very cool samples that show off the power of SSDS.

For more SSDS coverage, checkout the SSDS Team Blog as well.

MIX Sessions

If you are attending MIX08, Nigel Ellis will be delivering a session on SSDS on Thursday at 8:30am in the Delfino 4005 Room. The session recording as well as slides will be available within 24 hours.

Additionally, we will have a few Open Space sessions available for more information:

  • Thursday 12pm – Soumitra Sengupta will talk about the business value of SSDS at Area 1 of Open Space.
  • Thursday 2pm – Jeff Currier and Jason Hunter will talk about developing with SSDS in the Theatre area at Open Space.
  • Friday 11:30am – Istvan Cseri will talk about architecting for SSDS at Area 1 of Open Space.

Beta Opportunity

A limited beta for SSDS will be starting soon.  If you are interested in participating, I encourage you to sign up.