Tuesday, January 29, 2013
On 1/18 we quietly released a version of our scalable task scheduler (creatively named 'Scheduler' for right now) to the Windows Azure Store. If you missed it, you can see it in this post by Scott Guthrie. The service allows you to schedule recurring tasks using the well-known cron syntax. Today, we support a simple GET webhook that will notify you each time your cron fires. However, you can be sure that we are expanding support to more choices, including (authenticated) POST hooks, Windows Azure Queues, and Service Bus Queues, to name a few.
In this post, I want to share a bit about how we designed the service to support many tenants and potentially millions of tasks. Let's start with a simplified, but accurate overall picture:

We have three main subsystems in our service (REST API façade, CRON Engine, and Task Engine), plus several shared subsystems (not pictured), such as Monitoring/Auditing and Billing/Usage, that span other services as well. Each one can be scaled independently depending on load and overall system demand. We knew we needed to decouple the subsystems so that they did not depend on each other, could scale independently, and could be developed in isolation without affecting the others. As such, our subsystems do not communicate with each other directly; they only share a common messaging schema. All communication is done asynchronously over queues.
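To make that concrete, here is a rough sketch (C#, using the storage client) of what a command flowing between two subsystems might look like. The message shape and queue name are illustrative only - not our actual schema:
using System;
using System.Web.Script.Serialization;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class RunTaskCommand
{
    // Hypothetical shared message contract between the CRON Engine and the Task Engine.
    public Guid JobId { get; set; }
    public string TenantId { get; set; }
    public string Url { get; set; }                 // the GET webhook to invoke
    public DateTime ScheduledTimeUtc { get; set; }
}

public static class CommandBus
{
    // All cross-subsystem communication is just a serialized message dropped on a queue.
    public static void Send(RunTaskCommand command, CloudStorageAccount account)
    {
        var client = account.CreateCloudQueueClient();
        var queue = client.GetQueueReference("taskengine-commands");   // illustrative queue name
        queue.CreateIfNotExist();

        var json = new JavaScriptSerializer().Serialize(command);
        queue.AddMessage(new CloudQueueMessage(json));
    }
}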
REST API
This is the layer that end users communicate with and the only way to interact with the system (even our portal acts as a client). We use a shared secret key authentication mechanism where you sign your requests and we validate them as they enter our pipeline. We implemented this REST API using Web API. When you interact with the REST API, you are viewing fast, lightweight views of your scheduled task setup that reflect what is stored in our Job Repository. However, we never query the Job Repository directly; that keeps it responsive to its real job - providing the source data for the CRON Engine.
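The exact canonicalization is specific to the service, but the general shared-key pattern is the familiar one: build a canonical string from the request, HMAC it with your secret key, and send the result in a header. A sketch (the string-to-sign and header names here are illustrative only):
using System;
using System.Net;
using System.Security.Cryptography;
using System.Text;

static class RequestSigner
{
    public static void Sign(HttpWebRequest request, string keyId, byte[] secretKey)
    {
        // Illustrative canonical string: verb, path, and a timestamp.
        var timestamp = DateTime.UtcNow.ToString("R");
        var stringToSign = request.Method + "\n" + request.RequestUri.AbsolutePath + "\n" + timestamp;

        using (var hmac = new HMACSHA256(secretKey))
        {
            var signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
            request.Headers["x-scheduler-date"] = timestamp;                          // hypothetical header name
            request.Headers["Authorization"] = "SharedKey " + keyId + ":" + signature;
        }
    }
}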
CRON Engine
This subsystem was designed to do as little as possible and farm out the work to the Task Engine. When you have an engine that evaluates cron expressions and fire times, it cannot get bogged down trying to actually do the work. This is a potentially IO-intensive role in the system that is constantly evaluating when to fire a particular cron job. In order to support many tenants, it must be able to run continuously without getting bogged down in execution. As such, this role only evaluates when a particular cron job must run and then fires a command to the Task Engine to actually execute the potentially long-running job.
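In other words, the engine's inner loop is just "compute the next fire time, and if it is due, enqueue a command." A sketch, reusing the RunTaskCommand/CommandBus types from above (ICronSchedule stands in for whatever cron parser you prefer):
using System;
using Microsoft.WindowsAzure;

public interface ICronSchedule
{
    // Stand-in for a cron parser; returns the next fire time after the given instant.
    DateTime GetNextOccurrence(DateTime afterUtc);
}

public class CronDispatcher
{
    private readonly CloudStorageAccount _account;

    public CronDispatcher(CloudStorageAccount account) { _account = account; }

    public void Evaluate(Guid jobId, string tenantId, string url, ICronSchedule schedule, DateTime lastFiredUtc)
    {
        var next = schedule.GetNextOccurrence(lastFiredUtc);
        if (next <= DateTime.UtcNow)
        {
            // Hand the (potentially long-running) work to the Task Engine and move on.
            CommandBus.Send(new RunTaskCommand
            {
                JobId = jobId,
                TenantId = tenantId,
                Url = url,
                ScheduledTimeUtc = next
            }, _account);
        }
    }
}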
Task Engine
The Task Engine is the grunt of the service and it performs the actual work. It is the layer that will be scaled most dramatically depending on system load. Commands from the CRON Engine for work are accepted and performed at this layer. Subsequently, when the work is done it emits an event that other interested subsystems (like Audit and Billing) can subscribe to downstream. The emitted event contains details about the outcome of the task performed and is subsequently denormalized into views that the REST API can query to provide back to a tenant. This is how we can tell you your job history and report back any errors in execution. The beauty of the Task Engine emitting events (instead of directly acting) is that we can subscribe many different listeners for a particular event at any time in the future. In fact, we can orchestrate very complex workflows throughout the system as we communicate to unrelated, but vital subsystems. This keeps our system decoupled and allows us to develop those other subsystems in isolation.
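As an illustration only (again, not our actual schema), the emitted event is just another message, carrying enough outcome detail for Audit, Billing, and the job history views to do their jobs:
using System;

public class TaskCompletedEvent
{
    // Illustrative event contract published by the Task Engine when a job finishes.
    public Guid JobId { get; set; }
    public string TenantId { get; set; }
    public DateTime StartedUtc { get; set; }
    public DateTime CompletedUtc { get; set; }
    public int HttpStatusCode { get; set; }   // outcome of the GET webhook
    public string Error { get; set; }         // populated only on failure
}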
Future Enhancements
Today we are in a beta mode, intended to give us feedback about the types of jobs, frequency of execution, and what our system baseline performance should look like. In the future, we know we will support additional types of scheduled tasks, more views into your tasks, and more complex orchestrations. Additionally, we have set up our infrastructure such that we can deploy to multiple datacenters for resiliency (and even multiple clouds). Give us a try today and let us know about your experience.
Thursday, December 20, 2012
This is a quick post today that might save folks the same trouble I had to go through when upgrading my Windows Identity Foundation (WIF) enabled MVC website to the latest version of .NET. The scenario is that you might want to enrich the claims coming from your STS with additional claims of your choosing. To do this, there is a common technique of creating a class that derives from ClaimsAuthenticationManager and overrides the Authenticate method. Consider this sample ClaimsAuthenticationManager:
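Something along these lines (ITenantRepository, its lookup method, and the claim type are placeholders for your own types):
using System.Security.Claims;

public class CustomClaimsAuthenticationManager : ClaimsAuthenticationManager
{
    private readonly ITenantRepository _tenants;

    public CustomClaimsAuthenticationManager(ITenantRepository tenants)
    {
        _tenants = tenants;
    }

    public override ClaimsPrincipal Authenticate(string resourceName, ClaimsPrincipal incomingPrincipal)
    {
        if (incomingPrincipal != null && incomingPrincipal.Identity.IsAuthenticated)
        {
            // Enrich the incoming principal with tenant-specific claims.
            var identity = (ClaimsIdentity)incomingPrincipal.Identity;
            var tenantId = _tenants.GetTenantId(identity.Name);   // hypothetical lookup
            identity.AddClaim(new Claim("http://schemas.example.com/claims/tenantid", tenantId));
        }

        return incomingPrincipal;
    }
}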
The issue we have is that we need to provide an implementation of ITenantRepository here in order to lookup the data for the additional claims we are adding. If you are lucky enough to find the article on MSDN, it will show you how to wire in a custom ClaimsAuthenticationManager using the web.config. I don't want to hardcode references to an implementation of my TenantRepository, so using config is not a great option for me.
In the older WIF model (Microsoft.IdentityModel) for .NET <= 4.0, you hooked the ServiceConfigurationCreated event:
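Roughly like this (CustomClaimsAuthenticationManager and TenantRepository are the placeholder types from the sample above):
// Global.asax.cs - pre-4.5 WIF (Microsoft.IdentityModel.Web)
protected void Application_Start()
{
    FederatedAuthentication.ServiceConfigurationCreated += (sender, e) =>
    {
        // Inject the custom manager (and its dependencies) when WIF builds its configuration.
        e.ServiceConfiguration.ClaimsAuthenticationManager =
            new CustomClaimsAuthenticationManager(new TenantRepository());
    };
}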
But, in .NET 4.5, all of the namespaces and a lot of the classes are updated (System.IdentityModel). It took me a long time in Reflector to figure out how to hook the configuration being created again. Turns out you need to reference System.IdentityModel.Services and find the FederatedAuthentication class. Here you go:
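(As before, CustomClaimsAuthenticationManager and TenantRepository are the placeholder types from the sample above.)
// Global.asax.cs - .NET 4.5 (System.IdentityModel.Services)
protected void Application_Start()
{
    FederatedAuthentication.FederationConfigurationCreated += (sender, e) =>
    {
        e.FederationConfiguration.IdentityConfiguration.ClaimsAuthenticationManager =
            new CustomClaimsAuthenticationManager(new TenantRepository());
    };
}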
Happy WIF-ing.
Friday, May 25, 2012
At this point in our diagnostics saga, we have our instances busily pumping out the data we need to manage and monitor our services. However, it is simply putting the raw data in our storage account(s). What we really want to do is query and analyze that data to figure out what is happening.
The Basics
Here I am going to show you the basic code for querying your data. For this, I am going to be using LINQPad. It is a tool that is invaluable for ad hoc querying and prototyping. You can cut and paste the following script (hit F4 and add references and namespaces for Microsoft.WindowsAzure.StorageClient.dll and System.Data.Services.Client.dll as well).
void Main()
{
var connectionString = "DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourkey";
var account = CloudStorageAccount.Parse(connectionString);
var client = account.CreateCloudTableClient();
var ctx = client.GetDataServiceContext();
var deploymentId = new Guid("25d676fb-f031-42b4-aae1-039191156d1a").ToString("N").Dump();
var q = ctx.CreateQuery<PerfCounter>("WADPerformanceCountersTable")
.Where(f => f.RowKey.CompareTo(deploymentId) > 0 && f.RowKey.CompareTo(deploymentId + "__|") < 0)
.Where(f => f.PartitionKey.CompareTo(DateTime.Now.AddHours(-2).GetTicks()) > 0)
//.Take(1)
.AsTableServiceQuery()
.Dump();
//(q as DataServiceQuery<PerfCounter>).RequestUri.AbsoluteUri.Dump();
//(q as CloudTableQuery<PerfCounter>).Expression.Dump();
}
static class Funcs
{
public static string GetTicks(this DateTime dt)
{
return dt.Ticks.ToString("d19");
}
}
[System.Data.Services.Common.DataServiceKey("PartitionKey", "RowKey")]
class PerfCounter
{
public string PartitionKey { get; set; }
public string RowKey { get; set; }
public DateTime Timestamp { get; set; }
public long EventTickCount { get; set; }
public string Role { get; set; }
public string DeploymentId { get; set; }
public string RoleInstance { get; set; }
public string CounterName { get; set; }
public string CounterValue { get; set; }
public int Level { get; set; }
public int EventId { get; set; }
public string Message { get; set; }
}
What I have done here is set up a simple script that allows me to query the table storage location for performance counters. There are two big (and one little) things to note here:
- Notice how I am filtering down to the deployment ID (also called Private ID) of the deployment I am interested in seeing. If you use the same storage account for multiple deployments, this is critical.
- Also, see how I have properly formatted the DateTime such that I can select a time range from the Partition Key appropriately. In this example, I am retrieving the last 2 hours of data for all roles in the selected deployment.
- I have also commented out some useful checks you can use to test your filters. If you uncomment the DataServiceQuery<T> line, you also should comment out the .AsTableServiceQuery() line.
Using the Data
If you haven't set absurd sample rates, you might actually get this data back in a reasonable time. If you have lots of performance counters to monitor and/or you have high sample rates, be prepared to sit and wait for a while. Each counter sample is a single row in table storage, and you can return at most 1000 rows in a single IO operation. It can take a very long time if you ask for large time ranges or have lots of data.
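One way to keep these queries snappy is to bound the partition key on both ends so the scan covers only a fixed window. A small variation on the script above (same GetTicks helper and PerfCounter class):
var windowStart = DateTime.Now.AddHours(-2).GetTicks();
var windowEnd = DateTime.Now.GetTicks();
var window = ctx.CreateQuery<PerfCounter>("WADPerformanceCountersTable")
    .Where(f => f.RowKey.CompareTo(deploymentId) > 0 && f.RowKey.CompareTo(deploymentId + "__|") < 0)
    .Where(f => f.PartitionKey.CompareTo(windowStart) > 0 && f.PartitionKey.CompareTo(windowEnd) <= 0)
    .AsTableServiceQuery()
    .Dump();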
Once you have the query returned, you can actually export it into Excel using LINQPad and go about setting up graphs and pivot tables, etc. This is all very doable, but also tedious. I would not recommend this for long term management, but rather some simple point in time reporting perhaps.
For AzureOps.com, we went a bit further. We collect the raw data, compress it, and index it for highly efficient searches by time. We also scale the data for the time range; otherwise, you can have a very hard time graphing 20,000 data points. This makes it very easy to view both recent data (e.g. the last few hours) as well as data over months. The value of the longer-term data cannot be overstated.
Anyone that really wants to know what their service has been doing will likely need to invest in monitoring tools or services (e.g. AzureOps.com). It is simply impractical to pull more than a few hours of data by querying the WADPerformanceCountersTable directly. It is way too slow and way too much data for longer-term analysis.
The Importance of Long Running Data
For lots of operations, you can just look at the last 2 hours of your data and see how your service has been doing. We put that view as the default view you see when charting your performance counters in AzureOps.com. However, you really should back out the data from time to time and observe larger trends. Here is an example:

This is actual data we had last year during our early development phase of the backend engine that processes all the data. This is the Average CPU over 8 hours and it doesn't look too bad. We really can't infer anything from this graph other than we are using about 15-35% of our CPU most of the time.
However, if we back that data out a bit:

This picture tells a whole different story. We realized that we were slowly doing more and more work with our CPU that did not correlate with the load. This was not a sudden shift that happened in a few hours; it manifested itself over weeks. Very slowly, for the same number of operations, we were using more CPU. A quick check on memory told us that we were also chewing up more memory:

We eventually figured out the issue and fixed it (serialization issue, btw) - can you tell where?

Eventually, we determined what our threshold CPU usage should be under certain loads by observing long term trends. Now, we know that if our CPU spikes above 45% for more than 10 mins, it means something is amiss. We now alert ourselves when we detect high CPU usage:

Similarly, we do this for many other counters as well. There is no magic threshold to choose, but if you have enough data you will be able to easily pick out the threshold values for counters in your own application.
In the next post, I will talk about how we pull this data together with analyzers and notifications, and automatically scale to meet demand.
Shameless plug: Interested in getting your own data from Windows Azure and monitoring, alerting, and scaling? Try AzureOps.com for free!
Wednesday, August 24, 2011
I spent the last few hours debugging an issue where a query in Windows Azure table storage was not returning any results, even though I knew that data was there. It didn't start that way of course. Rather, stuff that should have been working and previously was working, just stopped working. Tracing through the code and debugging showed me it was a case of a method not returning data when it should have.
Now, I have known for quite some time that you must handle continuation tokens and you can never assume that a query will return data always (Steve talks about it waaaay back when here). However, what I did not know was that different methods of enumeration will give you different results. Let me explain by showing the code.
var q = this.CreateQuery()
.Where(filter)
.Where(f => f.PartitionKey.CompareTo(start.GetTicks()) > 0)
.Take(1)
.AsTableServiceQuery();
var first = q.FirstOrDefault();
if (first != null)
{
return new DateTime(long.Parse(first.PartitionKey));
}
In this scenario, you would assume that you have continuation tokens nailed because you have the magical AsTableServiceQuery extension method in use. It will magically chase the tokens until conclusion for you. However, this code does not work! It will actually return null in cases where you do not hit the partition server that holds your query results on the first try.
I could easily reproduce the query in LINQPad:
var q = ctx.CreateQuery<Foo>("WADPerformanceCountersTable")
.Where(f => f.RowKey.CompareTo("9232a4ca79344adf9b1a942d37deb44a") > 0 && f.RowKey.CompareTo("9232a4ca79344adf9b1a942d37deb44a__|") < 0)
.Where(f => f.PartitionKey.CompareTo(DateTime.Now.AddDays(-30).GetTicks()) > 0)
.Take(1)
.AsTableServiceQuery()
.Dump();
Yet, this query worked perfectly. I got exactly 1 result as I expected. I was pretty stumped for a bit, then I realized what was happening. You see FirstOrDefault will not trigger the enumeration required to generate the necessary two round-trips to table storage (first one gets continuation token, second gets results). It just will not force the continuation token to be chased. Pretty simple fix it turns out:
var first = q.AsEnumerable().SingleOrDefault();
Hours wasted for that one simple line fix. Hope this saves someone the pain I just went through.
Thursday, July 14, 2011
I was working on a Windows Azure website solution the other day and suddenly started getting this error when I tried to run the site with a debugger:

This error is one of the hardest to diagnose. Typically, it means that there is something crashing in your website before the debugger can attach. A good candidate to check is your global.asax to see if you have changed anything there. I knew that the global.asax had not been changed, so it was puzzling. Naturally, I took the normal course of action:
- Ran the website without debugging inside the emulator.
- Ran the website with and without debugging outside the emulator.
- Tried it on another machine.
None of these methods gave me any clue what the issue was, as they all worked perfectly fine. It was killing me that it only happened when debugging inside the emulator and only on one machine (the one I really wanted to work). I was desperately looking for a solution that did not involve rebuilding the machine. I turned on Sysinternals DebugView to see if there were any debug messages telling me what the error was. I saw an interesting number of things, but nothing that really stood out as the source of the error. However, I did notice the process ID that appeared to be reporting errors:

Looking at Process Explorer, I found this was for DFAgent.exe (the Dev Fabric Agent). I could see that it was starting with an environment variable, so I took a look at where that was happening:

That gave me a direction to start looking. I opened the %UserProfile%\AppData\Local\Temp directory and found a conveniently named file there called Visual Studio Web Debugger.log.

A quick look at it showed it to be HTML, so one rename later and voila!

One of our developers had overridden the <httpErrors> setting in web.config, which was disallowed on my one machine. I opened my applicationHost.config in Notepad (running as Administrator) and sure enough:

So, the moral of the story is next time, just take a look at this log file and you might find the issue. I suspect the reason that this only happened on debug and not when running without the debugger was that for some reason the debugger is looking for a file called debugattach.aspx. Since this file does not exist on my machine, it throws a 404, which in turn tries to access the <httpErrors> setting, which culminates in the 500.19 server error. I hope this saves someone the many hours I spent finding it and I hope it prevents you from rebuilding your machine as I almost did.
Tuesday, May 17, 2011
One of the common patterns I have noticed across many customers is the desire to download a resource from the web and execute it as part of the bootup process for a Windows Azure web or worker role. The resource could be any number of things (e.g. exe, zip, msi, cmd, etc.), but typically is something that the customer does not want to package directly in the deployment package (cspkg) for size or update/maintainability reasons.
While the pattern is pretty common, the ways to approach the problem can certainly vary. A more complete solution will need to deal with the following issues:
- Downloading from arbitrary http(s) sources or Windows Azure blob storage
- Logging
- Parsing configuration from the RoleEnvironment or app.config
- Interacting with the RoleEnvironment to get ports, DIP addresses, and Local Resource paths
- Unzipping resources
- Launching processes
- Ensuring that resources are only installed once (or downloaded and unzipped once)
With these goals in mind, we built the Windows Azure Bootstrapper. It is a pretty simple tool to use and requires only packaging of the .exe and the .config file itself in your role. With these two items in place, you can then script out fairly complicated installations. For example, you could prepare your roles with MVC3 using a command like this:
bootstrapper.exe -get http://download.microsoft.com/download/F/3/1/F31EF055-3C46-4E35-AB7B-3261A303A3B6/AspNetMVC3ToolsUpdateSetup.exe -lr $lr(temp) -run $lr(temp)\AspNetMVC3ToolsUpdateSetup.exe -args /q
Check out the project page for more examples, but the possibilities are pretty endless here. One customer uses the Bootstrapper to download agents and drivers from their blob storage account to install at startup for their web and worker roles. Other folks use it to simply copy files out of blob storage and lay them out correctly on disk.
Of course, none of this would be available in the community if not for the great guys working at National Instruments. They allowed us to take this code written for them and turn it over to the community.
Enjoy and let us know your feedback or any bugs you find.
Tuesday, January 11, 2011
As some of you may know, I recently left Microsoft after almost 4 years. It was a wonderful experience and I met and worked with some truly great people there. I was lucky enough to join at the time when the cloud was just getting started at Microsoft. Code words like CloudDB, Strata, Sitka, Red Dog, and others I won't mention here flew around like crazy.
I started as the SQL Server Data Services (SSDS) evangelist and later became the Windows Azure Evangelist (after SSDS eventually became SQL Azure). For the last 2 years, Windows Azure has been my life and it was great seeing a technology very rapidly grow from CTP to a full featured platform in such a short time.
Well, I am not moving far from Windows Azure - I still strongly believe in what Microsoft has done here. I recently joined Cumulux as Director of Cloud Services. I will be driving product strategy, some key customer engagements, and take part in the leadership team there. Cumulux is a close Microsoft partner with a Windows Azure focus, so it is a great fit for me for both personal as well as professional reasons.
While I will miss the great folks I got to work with at Microsoft and being in the thick of everything, I am very excited to begin this new career path with Cumulux.
Sunday, December 19, 2010
Update: Looks like Wade also blogged about this, showing a similar way with some early scripts I wrote. However, there are some important differences around versioning of WebDeploy and creating users that you should pay attention to here. Also, I am using plugins, which are much easier to consume.
One of the features announced at PDC was the coming ability to use the standard IIS7 WebDeploy capability with Windows Azure. This is exciting news, but it comes with an important caveat. This feature is strictly for development purposes only. That is, you should not expect anything you deploy using WebDeploy to persist long-term on your running Windows Azure instances. If you are familiar with Windows Azure, you know the reason: any changes to the OS post-startup are not durable.
So, the use case for WebDeploy in Windows Azure is in the case of a single instance of a role during development. The idea is that you would:
- Deploy your hosted service (using the .cspkg upload and deploy) with a single instance with WebDeploy enabled.
- Use WebDeploy on subsequent deploys for instant feedback
- Deploy the final version again using .cspkg (without WebDeploy enabled) so the results are durable with at least 2 instances.
The Visual Studio team will shortly be supplying some tooling to make this scenario simple. However, in the interim, it is relatively straightforward to implement this yourself and do what the tooling will eventually do for you.
If you look at the Windows Azure Training Kit, you will find the Advanced Web and Worker Lab Exercise 3 and it will show you the main things to get this done. You simply need to extrapolate from this to get the whole thing working.
We will be using Startup Tasks to perform a little bit of bootstrapping when the role instance (note the singular) starts in order to use Web Deploy. This will be implemented using the Role Plugin feature to make this a snap to consume. Right now, the plugins are undocumented, but if you peruse your SDK bin folder, it won't be too hard to figure out how they work.
The nice thing about using a plugin is that you don't need to litter your service definition with anything in order to use the feature. You simply need to include a single "Import" element and you are done!
Install the Plugin
In order to use this feature, simply extract the contents of the zip file into "%programfiles%\Windows Azure SDK\v1.3\bin\plugins\WebDeploy". You might have to extract the files locally in your profile first and copy them into this location if you run UAC. At the end of the day, you need a folder called "WebDeploy" in this location with all the files in this zip in it.
Use the Plugin
To use the plugin you simply need to add one line to your Service Definition:
<Imports>
<Import moduleName="RemoteAccess" />
<Import moduleName="RemoteForwarder" />
<Import moduleName="WebDeploy"/>
</Imports>
Notice, in my example, I am also including the Remote Access and Remote Forwarder plugins as well. You must have RemoteAccess enabled to use this plugin as we will rely on the user created here. The Remote Forwarder is required on one role in your solution.
Next, you should hit publish on the Windows Azure project (not the Web Site) and set up an RDP user. We will be using this same user later in order to deploy, because by default the RDP user is also an admin with permission to use Web Deploy. Alternatively, you could create an admin user with a Startup Task, but this method is easier (and you get RDP). If you have not set up the certificates for RDP access before, it is a one-time process outlined here.


Now, you publish this to Windows Azure using the normal Windows Azure publishing process:

That's it. Your running instance has WebDeploy and the IIS Management Service running now. All you had to do was import the WebDeploy plugin and make sure you also used RDP (which most developers will enable during development anyway).
At this point, it is a pretty simple matter to publish using WebDeploy. Just right click the Web Site (not the Cloud Project) and hit publish:

You will need to type in the name of your .cloudapp.net project in the Service URL. By default, it will assume port 8172; if you have chosen a different public VIP port (by editing the plugin - see the *Hint below), here is where you need to update it (using the full https:// syntax).
Next, you need to update the Site/Application. The format for this is ROLENAME_IN_0_SITENAME, so you need to look in your Service Definition to find this:

Notice, in my example, my ROLENAME is "WebUX" and the Site name is "Web". This will differ for each Web Role potentially, so make sure you check this carefully.
Finally, check the "Allow untrusted certificate" option and use the same RDP username and password you created on deploy. That's all. It should be possible for you to use Web Deploy now with your running instance: Hit Publish and Done!
How it works
If you check the plugin, you will see that we use 2 scripts and WebPI to do this. Actually, this is the command line version of WebPI, so it will run without prompts. You can also download it, but it is packaged already in the plugin.
The first script, called EnableWebAdmin.cmd simply enables the IIS Management service and gets it running. By default, this sets up the service to run and listen on port 8172. If you check the .csplugin file, you will notice we have opened that port on the Load Balancer as well.
start /w ocsetup IIS-ManagementService
reg add HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\WebManagement\Server /v EnableRemoteManagement /t REG_DWORD /d 1 /f
net start wmsvc
sc config WMSVC start= auto
exit /b 0
*Hint: if you work at a place like Microsoft that prevents ports other than 443 to host SSL traffic, here would be a good place to map 443 to 8172 in order to deploy. For most normal environments, this is not necessary.
Next, we have the proper WebDeploy installation command using WebPI command line.
@echo off
ECHO "Starting WebDeploy Installation" >> log.txt
"%~dp0webpicmd\WebPICmdLine.exe" /Products: WDeploy /xml:https://www.microsoft.com/web/webpi/2.0/RTM/WebProductList.xml /log:webdeploy.txt
ECHO "Completed WebDeploy Installation" >> log.txt
net stop wmsvc
net start wmsvc
You will notice one oddity here - namely, we are using the WebPI 2.0 feed to install an older version of WebDeploy (v1.1). If you leave this off, it will default to the 3.0 version of WebPI that uses the Web Deploy 2.0 RC. In my testing, Web Deploy 2.0 RC only works sporadically and usually not at all. The 1.1 version is well integrated in VS tooling, so I would suggest this one instead until 2.0 works better.
Also, you will notice that I am stopping and restarting the IIS Management Service here as well. In my testing, WebDeploy was unreliable on the Windows Server 2008 R2 family (osFamily=2). Stopping and restarting the service seems to fix it.
Final Warning
Please bear in mind that this method is only for prototyping and rapid development. Since the changes are not persisted, they will disappear the next time you are healed by the fabric controller. Eventually, the Visual Studio team will release their own plugin that does essentially the same thing and you can stop using this. In the meantime, have fun!
Tuesday, November 30, 2010
If you haven't already read it on the official blog, Windows Azure SDK 1.3 was released (along with Visual Studio tooling). Rather than rehash what it contains, go read that blog post if you want to know what is included (and there are lots of great features). Even better, go to microsoftpdc.com and watch the videos on the specific features.
If you are a user of the Windows Azure MMC or the Windows Azure Service Management Cmdlets, you might notice that stuff gets broken on new installs. The underlying issue is that the MMC snap-in was written against the 1.0 version of the Storage Client. With the 1.3 SDK, the Storage Client is now at 1.1. This means you can fix it in one of two ways:
- Copy an older 1.2 SDK version of the Storage Client into the MMC's "release" directory. Of course, you need to either have the .dll handy or go find it. If you have the MMC installed prior to SDK 1.3, you probably already have it in the release directory and you are good to go.
- Use Assembly Redirection to allow the MMC or Powershell to call into the 1.1 client instead of the 1.0 client.
To use Assembly Redirection, just create a "mmc.exe.config" file for MMC (or "Powershell.exe.config" for cmdlets) and place it in the %windir%\system32 directory. Inside the .config file, just use the following xml:
<configuration>
<runtime>
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<dependentAssembly>
<assemblyIdentity name="Microsoft.WindowsAzure.StorageClient"
publicKeyToken="31bf3856ad364e35"
culture="neutral" />
<bindingRedirect oldVersion="1.0.0.0"
newVersion="1.1.0.0"/>
</dependentAssembly>
<dependentAssembly>
<assemblyIdentity name="Microsoft.WindowsAzure.StorageClient"
publicKeyToken="31bf3856ad364e35"
culture="neutral" />
<publisherPolicy apply="no" />
</dependentAssembly>
</assemblyBinding>
</runtime>
</configuration>
Tuesday, November 16, 2010
I am back from TechEd EU (and PDC)! It was a long and exciting few weeks, first getting ready for PDC and then flying to Germany to deliver a couple of sessions. It was a great, if hectic, experience. For PDC, I had a part to play in the keynote demos as well as the PDC Workshop on the following Saturday. Those two things together took more time than I would care to admit. Once PDC was a wrap, I got to start building my talks for TechEd EU.
You can find both of my talks on the TechEd Online Site and embedded below. The embedding doesn't work great on the blog because of layout issues I need to fix, but RSS readers should have a better time.
Thanks to everyone who attended these sessions in person and for all the nice comments. I have almost completely recovered my voice at this point - my cold lasted just the duration of the trip, of course!
Friday, October 15, 2010
For the 30th Cloud Cover show, Steve and I talked about coordinating work. It is not an entirely new topic, as there have been a couple of variations on it before. However, the solution I coded is actually different from both of those. I chose to simply iterate through all the available instances and fire a WCF message to each one. I could have done this in parallel or used multi-casting (probably more elegant), but for simplicity's sake: it just works.
The reason I needed to coordinate work was in context of load testing a site. Someone had asked me about load testing in Windows Azure. The cloud is a perfect tool to use for load testing. Where else can you get a lot of capacity for a short period of time? With that thought in mind, I searched around to find a way to generate a lot of traffic quickly.
I built a simple 2 role solution where a controller could send commands to a worker and the worker in turn would spawn a load testing tool. For this sample, I chose a tool called ApacheBench. You should watch the show to understand why I chose ApacheBench instead of the more powerful WCAT.

Get the source - It's Hammer Time!
PS: Big props go to Steve for AJAX'ing the source up side the head.
Thursday, October 14, 2010
I have been meaning to update my blog for some time with the routing solution I presented in Cloud Cover 24. One failed laptop hard drive later and, well... I lost my original post and didn't get around to replacing it until now. Anyhow, enough of my sad story.
One of the many great benefits of using Windows Azure is that networking is taken care of for you. You get connected to a load balancer for all your input endpoints and we automatically round-robin the requests to all your instances that are declared to be part of the role. Visually, it is pretty easy to see:

However, this introduces a slight issue common to all web farm scenarios: state doesn't work very well. If your service relies on the fact that it will communicate with exactly the same machine as last time, you are in trouble. In fact, this is why you will always hear an overarching theme of designing to be stateless in the cloud (or any scale-out architecture).
There are inevitably scenarios where sticky sessions (routing when possible to the same instance) will benefit a given architecture. We talked a lot about this on the show. In order to produce sticky sessions in Windows Azure, you need a mechanism to route to a given instance. For the show, I talked about how to do this with a simple socket listener.

We introduce a router between the Load Balancer and the Web Roles in this scenario. The router will peek (1) the incoming headers to determine via cookies or other logic where to route the request. It then forwards the traffic over sockets to the appropriate server (2). For all return responses, the router will optionally peek that stream as well to determine whether to inject the cookie (3). Finally, if necessary, the router will inject a cookie that states where the user should be routed to in subsequent requests (4).
I should note that during the show we talked about how this cookie injection could work, but the demo did not use it. For this post, I went ahead and updated the code to actually perform the cookie injection. This allows me to have a stateless router as well. In the demo on the show, I had a router that simply used a dictionary lookup of SessionIDs to Endpoints. However, we pointed out that this was problematic without externalizing that state somewhere. A simple solution is to have the client tell the router, via a cookie, where it was last connected. This way the state is carried on each request and is no longer the router's responsibility.
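To give a flavor of the peek step, here is a hedged fragment of what the header inspection might look like - the cookie name and buffer handling are simplified, and the real code is in the download below:
using System;
using System.Net.Sockets;
using System.Text;
using System.Text.RegularExpressions;

static class RoutingPeek
{
    const string RoutingCookie = "X-Instance";   // hypothetical cookie name

    public static string PeekForInstance(Socket client, byte[] buffer)
    {
        // Peek leaves the bytes in the socket buffer so they can still be
        // forwarded untouched to whichever backend instance we choose.
        int read = client.Receive(buffer, 0, buffer.Length, SocketFlags.Peek);
        string headers = Encoding.ASCII.GetString(buffer, 0, read);

        var match = Regex.Match(headers, RoutingCookie + @"=(?<id>[^;\r\n]+)");
        return match.Success ? match.Groups["id"].Value : null;   // null => no affinity yet, pick any instance
    }
}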
You can download the code for this from Code Gallery. Please bear in mind that this was only lightly tested and I make no guarantees about production worthiness.
Friday, July 16, 2010
For Episode 19 of the Cloud Cover show, Steve and I discussed the importance of setting the Content-Type on your blobs in Windows Azure blob storage. This was especially important for Silverlight clients. I mentioned that there was a way to look up a Content Type from your registry as opposed to hardcoding a list. The code is actually pretty simple. I pulled this from some code I had lying around that does uploads.
Here it is:
// Requires: using Microsoft.Win32;
private static string GetContentType(string file)
{
string contentType = "application/octet-stream";
string fileExt = System.IO.Path.GetExtension(file).ToLowerInvariant();
RegistryKey fileExtKey = Registry.ClassesRoot.OpenSubKey(fileExt);
if (fileExtKey != null && fileExtKey.GetValue("Content Type") != null)
{
contentType = fileExtKey.GetValue("Content Type").ToString();
}
return contentType;
}
Friday, June 11, 2010
Last week, I had the opportunity to talk with Hal and Jonathan on the PowerScripting podcast about Windows Azure. It was a fun chat - lots on Windows Azure, a bit on the WASM cmdlets and MMC, and it revealed my favorite comic book character.
Listen to it now.
Friday, May 28, 2010
This post is a bit overdue: Steve threatened to blog it himself, so I figured I should get moving. In one of our Cloud Cover episodes, we covered how to host WCF services in Windows Azure. I showed how to host both publically accessible ones as well as how to host internal WCF services that are only visible within a hosted service.
In order to host an internal WCF Service, you need to setup an internal endpoint and use inter-role communication. The difference between doing this and hosting an external WCF service on an input endpoint is mainly in the fact that internal endpoints are not load-balanced, while input endpoints are hooked to the load-balancer.
Hosting an Internal WCF Service
Here you can see how simple it is to actually get the internal WCF service up and listening. Notice that the only thing that is different is that the base address I pass to my ServiceHost contains the internal endpoint I created. Since the port and IP address I am running on is not known until runtime, you have to create the host and pass this information in dynamically.
public override bool OnStart()
{
// Set the maximum number of concurrent connections
ServicePointManager.DefaultConnectionLimit = 12;
DiagnosticMonitor.Start("DiagnosticsConnectionString");
// For information on handling configuration changes
// see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.
RoleEnvironment.Changing += RoleEnvironmentChanging;
StartWCFService();
return base.OnStart();
}
private void StartWCFService()
{
var baseAddress = String.Format(
"net.tcp://{0}",
RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["EchoService"].IPEndpoint
);
var host = new ServiceHost(typeof(EchoService), new Uri(baseAddress));
host.AddServiceEndpoint(typeof(IEchoService), new NetTcpBinding(SecurityMode.None), "echo");
host.Open();
}
Consuming the Internal WCF Service
From another role in my hosted service, I want to actually consume this service. From my code-behind, this was all the code I needed to actually call the service.
protected void Button1_Click(object sender, EventArgs e)
{
var factory = new ChannelFactory<WorkerHost.IEchoService>(new NetTcpBinding(SecurityMode.None));
var channel = factory.CreateChannel(GetRandomEndpoint());
Label1.Text = channel.Echo(TextBox1.Text);
}
private EndpointAddress GetRandomEndpoint()
{
var endpoints = RoleEnvironment.Roles["WorkerHost"].Instances
.Select(i => i.InstanceEndpoints["EchoService"])
.ToArray();
var r = new Random(DateTime.Now.Millisecond);
return new EndpointAddress(
String.Format(
"net.tcp://{0}/echo",
endpoints[r.Next(endpoints.Length)].IPEndpoint)
);
}
The only bit of magic here was querying the fabric to determine all the endpoints in the WorkerHost role that implemented the EchoService endpoint and routing a request to one of them randomly. You don't have to route requests randomly per se, but I did this because internal endpoints are not load-balanced. I wanted to distribute the load evenly over each of my WorkerHost instances.
One tip that I found out is that there is no need to cache the IPEndpoint information you find. It is already cached in the API call. However, you may want to cache your ChannelFactory according to best practices (unlike me).
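If you want to follow that best practice, a minimal sketch of a cached factory might look like this (reusing the IEchoService contract from the code above):
using System;
using System.ServiceModel;

static class EchoClient
{
    // ChannelFactory construction is relatively expensive; channels created from it are cheap.
    private static readonly ChannelFactory<WorkerHost.IEchoService> Factory =
        new ChannelFactory<WorkerHost.IEchoService>(new NetTcpBinding(SecurityMode.None));

    public static string Echo(EndpointAddress address, string text)
    {
        var channel = Factory.CreateChannel(address);
        try
        {
            return channel.Echo(text);
        }
        finally
        {
            // (Production code would Abort() a faulted channel instead of calling Close().)
            ((IClientChannel)channel).Close();
        }
    }
}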
Hosting Public WCF Services
This is all pretty easy as well. The only trick is that you need to apply a new behavior that knows how to deal with the load balancer for proper MEX endpoint generation. Additionally, you need to include a class attribute on your service to deal with an address filter mismatch issue. This is pretty well documented, along with links to download the QFE that contains the behavior patch, on the WCF Azure Samples project on Code Gallery under Known Issues. Jim Nakashima also posted about this in detail on his blog the other day, so I won't dig into it again here.
Lastly, if you just want the code from the show, have at it!
Monday, May 10, 2010
I am happy to announce the public release of the Windows Azure MMC - May Release. It is a very significant upgrade to the previous version on Code Gallery. So much so, in fact, that I tend to unofficially call it v2 (it has been called the May Release on Code Gallery). In addition to all-new and faster storage browsing capabilities, we have added service management as well as diagnostics support. We have also rebuilt the tool from the ground up to support extensibility. You can replace or supplement our table viewers, log viewers, and diagnostics tooling with your own creations.
This update has been in the pipeline for a very long time. It was actually finished and ready to go in late January. Given the amount of code we had to invest to produce this tool, however, we had to go through a lengthy legal review and produce a new EULA. As such, you may notice that we are no longer offering the source code to the MMC snap-in itself in this release. Included in this release is the source for the WASM cmdlets, but not for the MMC or the default plugins. In the future, we hope to be able to release the source code in its entirety.
Features At A Glance:
| Hosted Services | Upload / configure / control / upgrade / swap / remove Windows Azure application deployments |
| Diagnostics | Configure instrumentation for Windows Azure applications (diagnostics) per source (perf counters, file based, app logs, infrastructure logs, event logs). Transfer the diagnostic data on-demand or scheduled. View / analyze / export to Excel and clear instrumentation results. |
| Certificates | Upload / manage certificates for Windows Azure applications |
| Storage Services | Configure Storage Services for Windows Azure applications |
| BLOBs and Containers | Add / upload / download / remove BLOBs and containers and connect to multiple storage accounts |
| Queues | Add / purge / delete Windows Azure Queues |
| Tables | Query and delete Windows Azure Tables |
| Extensibility | Create plugins for rich diagnostics data visualization (e.g. add your own visualizer for performance counters). Create plugins for table viewers and editors or add completely new modules! The plugin engine uses MEF (extensibility framework) to easily add functionality. |
| PowerShell-based backend | The backend is based on PowerShell cmdlets. If you don't like our UI, you can still use the underlying cmdlets and script out anything we do |
How To Get Started:
There are so many features and updates in this release that I have prepared a very quick 15-min screencast on the features and how to get started managing your services and diagnostics in Windows Azure today!
Friday, March 5, 2010
In Episode 3 of Cloud Cover, I mentioned the tip of the week was how to measure your database size in SQL Azure. Here are the exact queries you can run to do it:
select
sum(reserved_page_count) * 8.0 / 1024
from
sys.dm_db_partition_stats
GO
select
sys.objects.name, sum(reserved_page_count) * 8.0 / 1024
from
sys.dm_db_partition_stats, sys.objects
where
sys.dm_db_partition_stats.object_id = sys.objects.object_id
group by sys.objects.name
The first one will give you the size of your database in MB and the second one will do the same, but break it out for each object in your database.
Hat tip to David Robinson and Tony Petrossian on the SQL Azure team for the query.
Wednesday, February 17, 2010
I am happy to announce the updated release of the Windows Azure Service Management (WASM) Cmdlets for PowerShell today. With these cmdlets you can effectively automate and manage all your services in Windows Azure. Specifically,
- Deploy new Hosted Services
  - Automatically upload your packages from the file system to blob storage.
- Upgrade your services
  - Choose between automatic or manual rolling upgrades
  - Swap between staging and production environments
- Remove your Hosted Services
  - Automatically pull down your services at the end of the day to stop billing. This is a critical need for test and development environments.
- Manage your Storage accounts
  - Retrieve or regenerate your storage keys
- Manage your Certificates
  - Deploy certificates from your Windows store or the local filesystem
- Configure your Diagnostics
  - Remotely configure the event sources you wish to monitor (Event Logs, Tracing, IIS Logs, Performance Counters and more)
- Transfer your Diagnostics Information
  - Schedule your transfers or transfer on demand.

Why did we build this?
The WASM cmdlets were built to unblock adoption for many of our customers as well as serve as a common underpinning to our labs and internal tooling. There was an immediate demand for an automation API that would fit into the standard toolset for IT Pros. Given the adoption and penetration of PowerShell, we determined that cmdlets focused on this core audience would be the most effective way forward. Furthermore, since PowerShell is a full scripting language with complete access to .NET, this allows these cmdlets to be used as the basis for very complicated deployment and automation scripts as part of the application lifecycle.
How can you use them?
Every call to the Service Management API requires an X509 certificate and the subscription ID for the account. To get started, you need to upload a valid certificate to the portal and have it installed locally to your workstation. If you are unfamiliar with how to do this, you can follow the procedure outlined on the Windows Azure Channel9 Learning Center here.
Here are a few examples of how to use the cmdlets for a variety of common tasks:
Common Setup
Each script referenced below will refer to the following variables:
Add-PSSnapin AzureManagementToolsSnapIn
#get your local certificate for authentication
$cert = Get-Item cert:\CurrentUser\My\<YourThumbPrint>
#subID from portal
$sub = 'c9f9b345-7ff5-4eba-9d58-0cea5793050c'
#your service name (without .cloudapp.net)
$service = 'yourservice'
#path to package (can also be http: address in blob storage)
$package = "D:\deploy\MyPackage.cspkg"
#configuration file
$config = "D:\deploy\ServiceConfiguration.cscfg"
Listing My Hosted Services
Get-HostedServices -SubscriptionId $sub -Certificate $cert
View Production Service Status
Get-HostedService $service -SubscriptionId $sub -Certificate $cert |
Get-Deployment 'Production' |
select RoleInstanceList -ExpandProperty RoleInstanceList |
ft InstanceName, InstanceStatus -GroupBy RoleName
Creating a new deployment
#Create a new Deployment
Get-HostedService $service -SubscriptionId $sub -Certificate $cert |
New-Deployment -Slot Production -Package $package -Configuration $config -Label 'v1' |
Get-OperationStatus -WaitToComplete
#Set the service to 'Running'
Get-HostedService $service -SubscriptionId $sub -Certificate $cert |
Get-Deployment 'Production'|
Set-DeploymentStatus 'Running' |
Get-OperationStatus -WaitToComplete
Removing a deployment
#Ensure that the service is first in Suspended mode
Get-HostedService $service -SubscriptionId $sub -Certificate $cert |
Get-Deployment 'Production'|
Set-DeploymentStatus 'Suspended' |
Get-OperationStatus -WaitToComplete
#Remove the deployment
Get-HostedService $service -SubscriptionId $sub -Certificate $cert |
Get-Deployment 'Production'|
Remove-Deployment
Upgrading a single Role
Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
Get-Deployment -Slot Production |
Set-Deployment -mode Auto -roleName 'WebRole1' -package $package -label 'v1.2' |
Get-OperationStatus -WaitToComplete
Adding a local certificate
$deploycert = Get-Item cert:\CurrentUser\My\CBF145B628EA06685419AEDBB1EEE78805B135A2
Get-HostedService $service -SubscriptionId $sub -Certificate $cert |
Add-Certificate -CertificateToDeploy $deploycert |
Get-OperationStatus -WaitToComplete
Configuring Diagnostics - Adding a Performance Counter to All Running Instances
#get storage account name and key
$storage = "yourstorageaccount"
$key = (Get-StorageKeys -ServiceName $storage -Certificate $cert `
-SubscriptionId $sub).Primary
$deployId = (Get-HostedService $service -SubscriptionId $sub `
-Certificate $cert | Get-Deployment Production).DeploymentId
$counter = '\Processor\(_Total)\% Processor Time'
$rate = [TimeSpan]::FromSeconds(5)
Get-DiagnosticAwareRoles -StorageAccountName $storage -StorageAccountKey $key `
-DeploymentId $deployId |
foreach {
$role = $_
Get-DiagnosticAwareRoleInstances $role -DeploymentId $deployId `
-StorageAccountName $storage -StorageAccountKey $key |
foreach {
$instance = $_
$config = Get-DiagnosticConfiguration -RoleName $role -InstanceId $_ `
-StorageAccountName $storage -StorageAccountKey $key `
-BufferName PerformanceCounters -DeploymentId $deployId
$perf = New-Object Microsoft.WindowsAzure.Diagnostics.PerformanceCounterConfiguration `
-Property @{CounterSpecifier=$counter; SampleRate=$rate}
$config.DataSources.Add($perf)
$config.DataSources |
foreach {
Set-PerformanceCounter -PerformanceCounters $_ -RoleName $role `
-InstanceId $instance -DeploymentId $deployId -StorageAccountName $storage `
-StorageAccountKey $key
}
}
}
More Examples
You can find more examples and documentation on these cmdlets by typing 'Get-Help <cmdlet> -full' from the PowerShell cmd prompt.
If you have any questions or feedback, please send it directly to me through the blog (look at the right-hand navigation pane for the Contact Ryan link).
Wednesday, February 10, 2010
It wasn't too long ago when Karsten Januszewski came to my office looking for a Windows Azure token (back when we were in CTP). I wondered what cool shenanigans the MIX team had going. Turns out it was for the Incarnate project (explained here). In short, this service finds all the different avatars you might be using across popular sites* and allows you to select an existing one instead of having to upload one again and again.
You will note that there is another 'dunnry' somewhere on the interwebs, stealing my exclusive trademark. I have conveniently crossed them out for reference. ;)
Since the entire Incarnate service is running in Windows Azure, I was interested in Karsten's experience:
We chose Windows Azure to host Incarnate because there was a lot of uncertainty in traffic. We didn't know how popular the service would be and knowing that we could scale to any load was a big factor in choosing it.
I asked him how the experience of developing for Windows Azure was:
There is a ton of great documentation and samples. I relied heavily on the Windows Azure Platform Kit as well as the samples in the SDK to get started. Once I understood how the development environment worked and how the deployment model functioned, I was off and running. I'd definitely recommend those two resources as well as the videos from the PDC for people who are getting started.
I love it when I hear that. Karsten was able to get the Incarnate service up and running on Windows Azure easily and now he is scale-proof in addition to the management goodness that is baked into Windows Azure.
Check out more about Incarnate and Karsten's Windows Azure learning on the MIX blog.
*turns out you can extend this to add a provider for any site (not just the ones that ship in source).
Wednesday, February 3, 2010
This might come as a surprise to some folks, but in Windows Azure you are billed when you deploy, not when you run. That means we don't care about CPU hours - we care about deployed hours. Your meter starts the second you deploy, irrespective of the state of the application. This means that even if you 'Suspend' your service so it is not reachable (and consumes no CPU), the meter is still running.
Visually, here is the meter still running:
Here is when the meter is stopped:
Right now, there is a 'free' offering of Windows Azure that includes a limited number of hours per month. If you are using MSDN benefits for Windows Azure, there is another offer that includes a bucket of 'free' hours. Any overage and you start to pay.
Now, if you are like me and have a fair number of hosted services squirreled around, you might forget to go to the portal and delete the deployments when you are done. Or, you might simply wish to automate the removal of your deployments at the end of the day. There are lots of reasons to remove your deployments, but the primary one is to turn the meter off. Given that re-deploying your services is very simple (and can also be automated), removing the deployment when you are done is not a huge issue.
Automatic Service Removal
For those folks that wish an automated solution, it turns out that this is amazingly simple when using the Service Management API and the Azure cmdlets. Here is the complete, deployment-nuking script:
$cert = Get-Item cert:\CurrentUser\My\<cert thumbprint>
$sub = 'CCCEA07B. your sub ID'
$services = Get-HostedServices -Certificate $cert -SubscriptionId $sub
$services | Get-Deployment -Slot Production | Set-DeploymentStatus 'Suspended' | Get-OperationStatus -WaitToComplete
$services | Get-Deployment -Slot Staging | Set-DeploymentStatus 'Suspended' | Get-OperationStatus -WaitToComplete
$services | Get-Deployment -Slot Production | Remove-Deployment
$services | Get-Deployment -Slot Staging | Remove-Deployment
That's it - just six lines of PowerShell. BE CAREFUL. This script will iterate through all the services in your subscription ID, stop any deployed service, and then remove it. After this runs, every hosted service will be gone and the billing meter will have stopped (for hosted services anyway).
Monday, January 25, 2010
A few customers have asked how they can use tools like wazt, the Windows Azure MMC, the Azure Cmdlets, etc. when they are behind proxies at work that require basic authentication. The tools themselves don't directly support this type of proxy. We simply rely on the fact that the underlying HttpWebRequest object will pick up IE's default proxy configuration. Most of the time, this just works.
However, if you are in an environment where you are prompted for your username and password, you might be on a basic auth proxy and the tools might not work. To work around this, you can actually implement a very simple proxy handler yourself and inject it into the application.
Here is one that I wrote to support wazt. To use this, add the following to your app.config and drop the output assembly from this project into your execution directory. Note, this would work with any tool in .NET that uses HttpWebRequest under the covers (like csmanage for instance).
<!-- basic auth proxy section declaration area-->
<!-- proxyHostAddress="Auto" : use Internet explorer configuration for name of the proxy -->
<configSections>
<sectionGroup name="proxyGroup">
<section name="basicProxy"
type="Proxy.Configuration.CustomProxySection, Proxy" />
</sectionGroup>
</configSections>
<system.net>
<defaultProxy enabled="true" useDefaultCredentials="false">
<module type="Proxy.CustomProxy, Proxy"/>
</defaultProxy>
</system.net>
<proxyGroup>
<basicProxy proxyHostAddress="Auto" proxyUserName="MyName" proxyUserPassword="MyPassword" />
</proxyGroup>
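For reference, a module registered under <defaultProxy> this way is just a class that implements System.Net.IWebProxy and exposes a parameterless constructor. A minimal sketch follows; the hardcoded values stand in for what the real module reads from the basicProxy section above, and the actual implementation is in the download below:
using System;
using System.Net;

namespace Proxy
{
    public class CustomProxy : IWebProxy
    {
        // In the real module, the address and credentials come from the basicProxy config section.
        private readonly Uri _proxyUri = new Uri("http://myproxy:8080");   // placeholder address

        public CustomProxy()
        {
            Credentials = new NetworkCredential("MyName", "MyPassword");   // placeholder credentials
        }

        public ICredentials Credentials { get; set; }

        public Uri GetProxy(Uri destination)
        {
            return _proxyUri;
        }

        public bool IsBypassed(Uri host)
        {
            return false;   // always route through the proxy
        }
    }
}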
Download the source here.
I love elegant software. I have known about CloudXplorer from ClumsyLeaf for some time, but I hadn't used it for a while because the Windows Azure MMC and MyAzureStorage.com have been all I need for storage. Also, I have a private tool that I wrote a while back to generate Shared Access signatures for files I want to share.
I decided to check out the progress on this tool and noticed in the change log that support for Shared Access signatures is now included. Nice! So far, this is the only tool* that I have seen handle Shared Access signatures in such an elegant and complete manner. Nicely done!
Definitely a recommended tool to keep on your shortlist.
*My tool is complete, but not nearly as elegant.
Monday, November 23, 2009
For those of you that made PDC, this will serve as a reminder and for those of you that missed PDC this year (too bad!), this will serve as a guide to some great content.
PDC Sessions for the Windows Azure Platform
Getting Started
Windows Azure
Codename "Dallas"
SQL Azure
Identity
Customer & Partner Showcases
Channel 9 Learning Centers
Coinciding with PDC, we have released the first wave of learning content on Channel 9. The new Ch9 learning centers feature content for the Windows Azure Platform as well as a course specifically designed for the identity developer. The content on both of these sites will continue to be developed by the team over the coming weeks and months. Watch out for updates and additions.
Downloadable Training Kits
To complement the learning centers on Ch9, we still continue to maintain the training kits on the Microsoft download center, which allows you to download and consume the content offline. You can download the Windows Azure Platform training kit here, and the Identity training kit here. The next update is planned for mid-December.
Monday, October 26, 2009
As I write this, I am sitting on a plane headed back to the US from a wonderful visit over to our UK office. While there, I got to meet a number of customers working with Windows Azure. It was clear from the interaction that these folks were looking for a way to simplify how to manage their deployments and build it into an automated process.
With the release of the Service Management API, this is now possible. As of today, you can download some Powershell cmdlets that wrap this API and make managing your Windows Azure applications simple from script. With these cmdlets, you can script your deploys, upgrades, and scaling operations very easily.

The cmdlets mirror the API quite closely, but since it is Powershell, we support piping which cuts down quite a bit on the things you need to type. As an example, here is how we can take an existing deployment, stop it, remove it, create a new deployment, and start it:
Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot Production |
    Set-DeploymentStatus 'Suspended' |
    Get-OperationStatus -WaitToComplete

Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot Production |
    Remove-Deployment |
    Get-OperationStatus -WaitToComplete

Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    New-Deployment Production $package $config -Label 'v.Next' |
    Get-OperationStatus -WaitToComplete

Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot Production |
    Set-DeploymentStatus 'Running' |
    Get-OperationStatus -WaitToComplete
Notice that in each case, we first get our hosted service by passing in the certificate and our subscription ID. Again, since this is PowerShell, we can get the certificate quite easily:
$cert = Get-Item cert:\CurrentUser\My\D6BE55AC428FAC6CDEBAFF432BDC0780F1BD00CF
You will find your Subscription ID on the portal under the 'Account' tab. Note that we break up the steps by using the Get-OperationStatus cmdlet and having it block until each operation completes. This is because the Service Management API is an asynchronous model.
Similarly, here is a script that will upgrade a single role or the entire deployment depending on the arguments passed to it:
$label = 'nolabel'
$role = ''

if ($args.Length -eq 2)
{
    $role = $args[0]
    $label = $args[1]
}

if ($args.Length -eq 1)
{
    $label = $args[0]
}

if ($role -ne '')
{
    Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
        Get-Deployment -Slot Production |
        Set-Deployment -mode Auto -roleName $role -package $package -label $label |
        Get-OperationStatus -WaitToComplete
}
else
{
    Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
        Get-Deployment -Slot Production |
        Set-Deployment -mode Auto -package $package -label $label |
        Get-OperationStatus -WaitToComplete
}
Download the cmdlets from Code Gallery and leave me some feedback if you like them or if they are not working for you.
Tuesday, October 6, 2009
Things are getting crazy here at Microsoft getting ready for PDC. I haven't had much time to blog or tweet for that matter. However, I am taking a break from the grind to announce something I am really excited about - a sample TableBrowser service we are hosting for developers at MyAzureStorage.com.
We built this service using ASP.NET MVC with a rich AJAX interface. The goal of this service was to give developers an easy way to create, query, and manage their Windows Azure tables. What better way to host it than on a scalable compute platform like Windows Azure?
Create and Delete Tables
If you need to create or manage your tables, you get a nice big list of the ones you have.
Create, Edit, and Clone your Entities
I love being able to edit my table data on the fly. Since we can clone the entity, it makes it trivial to copy large entities around and just apply updates.
Query Entities
Of course, no browser application would be complete without being able to query your data as well. Since the ADO.NET Data Services syntax can be a little unfamiliar at first, we decided to go with a more natural syntax. Using simple predicates along with OR, AND, and NOT operations, you can easily test your queries.
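To make the contrast concrete, here is a small, hypothetical C# sketch of the raw ADO.NET Data Services ($filter) form that a simple predicate boils down to when you query the table storage REST endpoint yourself. The account, table, and property names are invented for illustration, and a real request would also need the SharedKeyLite authorization header:

using System;

class TableQuerySketch
{
    static void Main()
    {
        // Invented values for illustration only.
        string account = "myaccount";   // storage account name
        string table = "Customers";     // table name

        // A natural predicate like: Status = 'Active' AND Age > 30
        // maps to this ADO.NET Data Services filter expression:
        string filter = "Status eq 'Active' and Age gt 30";

        string queryUri = string.Format(
            "http://{0}.table.core.windows.net/{1}()?$filter={2}",
            account, table, Uri.EscapeDataString(filter));

        // The TableBrowser builds (and signs) requests like this for you.
        Console.WriteLine(queryUri);
    }
}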
Display Data
Lastly, we have tried to make showing data in Windows Azure as convenient as possible. Since data in Windows Azure tables is not necessarily rectangular in nature, we have given you some options: first, you can choose the attributes to display in columns by partition; next, you can expand an individual entity to show each of its attributes.
Please note: during login you will need to supply your storage account name and key. We do not store this key; it is kept in an encrypted cookie and passed back and forth on each request. Furthermore, SSL is enabled to protect the channel.
The service is open for business right now and will run at least until PDC (and hopefully longer). Enjoy and let me know through the blog any feedback you have or issues you run into.
Friday, September 18, 2009
Windows Azure has been in CTP since PDC 08 in October of last year. Since that time, we have had a fairly simple, yet powerful concept for how to upgrade your application. Essentially, we have two environments: staging and production.
The only difference between these two environments is the URI that points to any web-exposed services. In staging, we give you an opaque, GUID-like URI (e.g. <guidvalue>.cloudapp.net) that is hard to publicly discover, and in production, we give you the URI that you chose when you created the hosted service (e.g. <yourservice>.cloudapp.net).
VIP Swaps, Deploys, and Upgrades
When you wanted to upgrade your service, you needed to deploy the updated service package containing all your roles into one of the environments. Typically, this was in the staging environment. Whenever you were ready, you would then click the big button in the middle to swap environments. This re-programmed the load balancers and suddenly staging was production and vice versa. If anything went wrong in your upgrade, you could hit the button again and you were back to the original deployment in seconds. We called this model a "VIP Swap" and it is easy to understand and very powerful.
We heard from some customers that they wanted more flexibility to upgrade an individual role without redeploying the entire service. Typically, this is because there might be state or caching in one of the other roles that a VIP swap would cause to be lost.
The good news is that you can now upgrade individual roles (or even the whole service) using an in-place upgrade. When you click the new 'Upgrade' button on the portal, you will see a screen very similar to the 'Deploy' screen you are used to, but this time with two new options.
Upgrade Domains
The first new option allows you to choose whether the upgrade should be 'Automatic' or 'Manual' across upgrade domains. To understand this option, you first need to understand what an 'Upgrade Domain' is. You can think of upgrade domains as vertical slices of your application, crossing roles. So, if I had a service with a single web role using 10 instances and 2 worker roles, each with 4 instances, then with 2 upgrade domains I would have 5 web role instances and 2 + 2 worker role instances in each upgrade domain.
If I choose 'Automatic', each upgrade domain is simply brought down and upgraded in turn. If I choose 'Manual', I need to click another button between each upgrade domain update in order to proceed.
Note: in the CTP today, 2 upgrade domains are automatically defined and set. In the future, you will be able to specify how many upgrade domains you would like to have.
Role Upgrades
Next, we have a radio button that specifies whether you want to update the whole service or a specific role within the service. Most folks will likely use the role-specific update.
It is important to note that these upgrades are only for services where the topology has not changed. That is, you cannot update the Service Definition (e.g. by adding or removing roles or configuration options). If you want to change the topology, you need to use the more familiar VIP swap model.
Once you click Deploy, the selected role will be upgraded according to the upgrade mode you specified.
More information about in-place upgrades and update domains can be found here. Lastly, you can of course eschew the portal and perform all of these actions using the new Service Management API. Happy upgrading!
Wednesday, September 2, 2009
I am on vacation right now, but I read this over at Steve's blog and I just had to make sure everyone knows about it. Right now, when you register at Microsoft Connect for Windows Azure, you will get an instant token. No more one- or two-day wait!
Register for Windows Azure
Thursday, August 27, 2009
There has been a bit of interest in an application called 'myTODO' that we built for the World Partner Conference (WPC) event back in July. It is a simple, yet useful application. The application allows you to create and share lists very easily. It integrates with Twitter, so if you decide to share your lists, it will tweet them and their updates automatically. You can also subscribe to lists using standard RSS if Twitter isn't your thing.
The funny thing is that we only built this app because we wanted something more interesting than the standard "Hello World" application. The entire purpose of the app was to show how easily you can deploy an application (in just minutes) on Windows Azure.
You can learn more about this application in 3 ways:
- Get the deployment package and deploy it yourself using our demo script from the Windows Azure Platform Training Kit. You will find it in the Demos section, called "Deploying Windows Azure Services".
- Watch me show how the app works and how to deploy it by watching my screencast.
- Download the source code and see how we built the rich dynamic UI and how we modeled the data using tables.
Tuesday, August 4, 2009
My teammate Vittorio has put out some new guidance and a great new toolkit that shows how to use federation today with Windows Identity Foundation (WIF or Geneva) on Windows Azure. I know this has been a very common request, especially as services move outside of the private datacenters and into the cloud and as vendors try to build true SaaS applications that need to integrate seamlessly into the customer's experience.
As the technologies evolve, the guidance will be kept up to date. For now, this is a great piece of work that gets us past some of the early roadblocks we encountered.
Monday, July 20, 2009
You can download the new SDK and Windows Azure Tools for Visual Studio right now. The big feature in this release is the support for multiple roles per deployment (instead of a single web and worker role). This is surfaced as a new wizard UI in the tools. Additionally, there is new support for any type of web app to be associated to the project. Previously this was limited to a specific project type (but now it supports MVC directly!).
Download Windows Azure Tools for Visual Studio (includes SDK)
Download Windows Azure SDK
Monday, June 1, 2009
Ahh, it is the start of a glorious June and here in Washington the weather is starting to really get nice. The previous cold and rainy spring must have made the product group more productive as Windows Azure has continued adding features since the last major update at MIX.
Given that Windows Azure is a service and not a boxed product, you can expect updates and features to roll out over the coming months. In this round-up, we cover a number of features that have gone live in the last month or two.
New Feature: Geo-Location support

Starting in May, a new option was added to the portal to support geo-locating your code and data. To use this most effectively, the idea of an 'Affinity Group' was introduced. An Affinity Group lets you associate various services under an umbrella label so they end up in the same location.
Read more about this feature here and see a complete provisioning walk-through.
New Feature: Storage API updates
I briefly mentioned this last week, but on Thursday (5/28) new features were released to the cloud for Windows Azure storage. The long-awaited batch transaction capability for tables was released, along with a new blob copy capability. Additionally, the GetBlockList API was updated to return both committed and uncommitted blocks in blob storage.
One more significant change of note is that a new versioning mechanism has been added. New features will be versioned by a new header ("x-ms-version"). This versioning header must be present to opt in to new features; the mechanism is in place to prevent breaking changes from impacting existing clients in the future. It is recommended that you start including this header in all authenticated API calls.
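As a rough illustration (not official sample code), this is where the header goes if you are rolling your own calls with HttpWebRequest. The account and container names are invented, the version value is a placeholder you should take from the announcement, and a real request still needs the SharedKey authorization header:

using System;
using System.Net;

class VersionHeaderSketch
{
    static void Main()
    {
        // Invented account/container; this targets the List Blobs operation.
        var request = (HttpWebRequest)WebRequest.Create(
            "http://myaccount.blob.core.windows.net/mycontainer?comp=list");
        request.Method = "GET";

        // Opt in to the new features by versioning the call. The date below is a
        // placeholder - use the version value given in the storage announcement.
        request.Headers.Add("x-ms-version", "2009-04-14");

        // A real call also needs the SharedKey Authorization header computed over
        // the canonicalized request (normally handled by the StorageClient sample).
        Console.WriteLine("{0} {1}", request.Method, request.RequestUri);
    }
}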
Rounding out these updates were some changes to how property names are stored in table storage (Unicode characters are now supported) and to the size of partition and row keys (up to 1 KB). Finally, the timeout values for various storage operations were updated as well.
For more details, please read the announcement.
Please note: there is currently no SDK support for these new storage features. The local developer fabric does NOT support them yet, and the StorageClient SDK sample has not been updated. For now, you will need to use the samples provided on Steve Marx's blog. A later SDK update will add these features officially.
Windows Azure SDK Update
The May CTP SDK update has been released to the download center. While this release does NOT support the new storage features, it does add a few new capabilities that will be of interest to the Visual Studio 2010 beta testers. Specifically:
- Support for Visual Studio 2010 Beta 1 (templates, local dev fabric, etc.)
- Updated support for Visual Studio 2008 - you can now configure settings through the UI instead of munging XML files.
- Improved reliability of the local dev fabric for debugging
- Enhanced robustness and stability (aka bug fixes).
Download the Windows Azure Tools for Visual Studio (includes both SDK and tools).
New Windows Azure Applications and Demos
Windows Azure Management Tool (MMC)
The Windows Azure Management Tool was created to manage your storage accounts in Windows Azure. Developed as a managed MMC, the tool allows you to create and manage both blobs and queues. Easily create and manage containers, blobs, and permissions. Add and remove queues, inspect or add messages, and empty queues as well.
Bid Now Sample
Bid Now is an online auction site designed to demonstrate how you can build highly scalable consumer applications. This sample is built using Windows Azure and uses Windows Azure Storage. Auctions are processed using Windows Azure Queues and Worker Roles. Authentication is provided via Live Id.
If you know of new and interesting Windows Azure content that would be of broad interest, please let me know and I will feature it in later updates.
Relevant Blog Postings
http://blog.smarx.com/posts/sample-code-for-new-windows-azure-blob-features
http://blogs.msdn.com/jnak/archive/2009/05/28/may-ctp-of-the-windows-azure-tools-and-sdk-now-supports-visual-studio-2010-beta-1.aspx
http://blogs.msdn.com/jnak/archive/2009/04/30/windows-azure-geo-location-is-live.aspx
http://dunnry.com/blog/CreateAndDeployYourWindowsAzureServiceIn5Steps.aspx
Thursday, May 28, 2009
Some new features related to blob and table storage were announced on the forum today. The key features announced were:
- Transactions support for table operations (batching)
- Copy Blob API (self-explanatory)
- Updated GetBlockList API now returns both committed and uncommitted block lists
There are some more details around bug fixes/changes and timeout updates as well. Refer to the announcement for more details.
The biggest impact on developers at this point is that, to get these new features, you will need to include a new versioning header in your calls to storage. Additionally, the StorageClient library has not been updated yet to reflect these new APIs, so you will need to wait for some examples (coming from Steve) or an update to the SDK. You can also refer to the MSDN documentation for more details on the API and roll your own in the meantime.
Friday, May 22, 2009
Step 1: Obtaining a token
Sign up through http://dev.windowsazure.com to get a token for the service. Turnaround is pretty quick, so you should have one in about a day or so. If you don't see it in a day, make sure you check your SPAM folder to ensure that the message we send you is not trapped in purgatory there.
Step 2: Redeeming the token
Navigate to the Windows Azure Portal and sign in with the LiveID that you would like to use to manage your Windows Azure applications. Today, in the CTP, only a single LiveID can manage your Windows Azure project, and we cannot re-associate the token with another LiveID once it has been redeemed. As such, make sure you use the LiveID that you want to manage the solution long term.
The first time you log in to the portal, you will be asked to associate your LiveID.
Click 'I Agree' to continue and once the association has been successful click 'Continue' again. At this point, you will be presented with an option to redeem a token. Here is where you input the token you received in Step 1.
Enter the token and click 'Next'. You will then be presented with some EULAs to peruse. Read them carefully (or don't) and then click 'Accept'. You will get another confirmation screen, so click 'Continue'.
Step 3: Create your Hosted Service
At this point, you can now create your first hosted solution. You should be on the main Project page and you should see 2 and 1 project(s) remaining for Storage Account and Hosted Services, respectively.
Click 'Hosted Services' and provide a Project label and description.
Click 'Next' and provide a globally unique name that will be the basis for your public URL. Click 'Check Availability' and ensure that the name hasn't been taken. Next, you will need to create an Affinity Group in order to later get your storage account co-located with your hosted service.
Click the 'Yes' radio button and the option to create a new Affinity Group. Give the Affinity Group a name and select a region where you would like this Affinity Group located. Click 'Create' to finish the process.
Step 4: Create your Storage account
Click the 'New Project' link near the left corner of the portal and this time, select the Storage Account option.
Similar to before, give a project label and description and click Next.
Enter a globally unique name for your storage account and click 'Check Availability'. Since performance is best when the data is near the compute, you should make sure that you opt to have your storage co-located with your service. We can do this with the Affinity Group we created earlier. Click the 'Yes' radio button and use the existing Affinity group. Click 'Create' to finish.
At this time, you should note that your endpoints have been created for your storage account and two access keys are provided for you to use for authentication.
Step 5: Deploying an application
At this point, I will skip past the minor detail of actually building an application for this tutorial and assume you have one (or you choose to use one from the SDK). For Visual Studio users, you would want to right click your project and select 'Publish' to generate the service package you need to upload. Alternatively, you can use the cspack.exe tool to create your package.
Using either method you will eventually have two files: the actual service package (cspkg) and a configuration file (cscfg).
From the portal, select the hosted service project you just created in Step 3 and click the 'Deploy' button.
Browse to the cspkg location and upload both the package and the configuration settings file (cscfg). Click 'Deploy'.
Over the next minute or so, you will see that your package is deploying. Hold tight and you will see it come back with the status of "Allocated". At this point, click the 'Configure' button**.
In order to use the storage account you created in Step 4, you will need to update this information from the local development fabric settings to your storage account settings. An easy way to get this information is to right-click your Storage Account project link in the left-hand navigation tabs and open it in a new tab. With the storage settings in one tab and the configuration in another, you can easily switch between the two and cut and paste what you need.
Inside the XML configuration, replace the 'AccountName' setting with your storage account name. If you are confused, the account name is the part of the unique global URL, i.e. <youraccountname>.blob.core.windows.net. Enter the 'AccountSharedKey' using the primary access key found on the storage project page (in your new tab). Update the endpoints from the loop-back addresses to the cloud settings: https://blob.core.windows.net, https://queue.core.windows.net, and https://table.core.windows.net, respectively. Note that the endpoints here do not include your account name and that we are using https. Set 'allowInsecureRemoteEndpoints' to false, or just delete that XML element.
Finally, update the 'Instances' element to 2 (the limit in the CTP today). It is strongly recommended that you run at least 2 instances at all times. This ensures that at least one instance is always running if something fails or we need to do updates (we update by fault zones, and your instances are automatically placed across fault zones). Click 'Save' and you will see that your package is listed as 'Package is updating'.
When your package is back in the 'Allocated' state (a minute or so later), click the 'Run' button. Your service will then go to the 'Initializing' state, and you will need to wait a few minutes while it gets your instances up and running. Eventually, your app will have a 'Started' status. Congratulations, your app is deployed in the Staging environment.
Once deployed to staging, you should click the staging hyperlink for your app (the one with the GUID in the DNS name) and test it. If you get a hostname-not-found error, wait a few more seconds and try again - it is likely that you are waiting for DNS to update and propagate your GUID hostname. When you are comfortable, click the big circular button between the two environments (it has two arrows on it) and promote your application to the 'production' URL.
Congratulations, you have deployed your first app in Windows Azure.
** Note, this part of the process might be optional for you. If you have already configured your developer environment to run against cloud storage or you are not using the StorageClient sample at all, you might not need to do this as the uploaded configuration file will already include the appropriate cloud settings. Of course, if you are not using these options, you are already likely a savvy user and this tutorial is unnecessary for you.
Thursday, May 14, 2009
Available immediately from Code Gallery, download the Windows Azure MMC. The Windows Azure Management Tool was created to manage your storage accounts in Windows Azure.

Developed as a managed MMC, the tool allows you to create and manage both blobs and queues. Easily create and manage containers, blobs, and permissions. Add and remove queues, inspect or add messages, and empty queues as well.
Features
- Manage multiple storage accounts
- Easily switch between remote and local storage services
- Manage your blobs
- Create containers and manage permissions
- Upload files or even entire folders
- Read metadata or preview the blob contents
- Manage your queues
- Create new queues
- Monitor queues
- Read (peek) messages
- Post new messages to the queue
- Purge queues
Known Issues
The current release does not work with Windows 7 due to a bug in the underlying PowerShell version. All other OS versions should be unaffected. We will get this fixed as soon as possible.

I tweeted this earlier (I really never thought I would say that). Anyhow, the first release of the PHP SDK for Windows Azure has been published on CodePlex. Right now, it works with blobs and helps with the authentication pieces, but if you look at the roadmap you will see both queue support and table support coming later this year.

This is a great resource to use if you are a PHP developer looking to host your app in Windows Azure and leverage the native capabilities of the service (namely, storage).
PHP Azure Project Site
Monday, April 27, 2009
I was trying to troubleshoot a bug in my worker role in Windows Azure the other day. To do this, I have a very cool tool (soon to be released) that lets me peek at messages. The problem was that I couldn't get hold of any messages; it was like they were disappearing right from under my nose. I would see them in the count, but I couldn't get a single one to open. I was starting to think I must have a bug in the tool.
Suddenly, the flash of insight came: something was actually popping the messages. While I remembered to shut down my local development fabric, I forgot all about the version I had running in the cloud in the staging environment. Since I have been developing against cloud storage, it is effectively a shared environment now. My staging workers were popping the messages, trying to process them, and failing (it was an older version). More frustrating, the messages were eventually showing up again, but getting picked up before I could see them in the tool.
So, what is the lesson here? When staging, use a different storage account than your development account. In fact, this is one of the primary reasons we have the cscfg file - don't forget about it.
Thursday, April 16, 2009
Or, put another way: why don't we just use the web.config file? This question was recently posed to me, and I thought I would share the answer more broadly since it is a fairly common one.
First, the web.config is part of the deployed package that gets loaded into a diff disk, which in turn is started for your app. Remember, we are using hypervisor VM technology to run your code. This means any change to this file would require a new diff disk and a restart. We don't update the actual diff disk (that would be hard with many instances) without a redeploy.
Second, the web.config is special only because IIS knows to look for it and restart when it changes. Doing that with a standard app.config (think worker roles) doesn't work. We want a mechanism that works for any type of role.
Finally, we have both staging and production. If you wanted to hold different settings (like a different storage account for test vs. production), you would need to hold these values outside the diff disk again or you would not be able to promote between environments.
The cscfg file is 'special' and is held outside the diff disk so it can be updated. The fabric code (RoleManager and its ilk) knows when this file is updated and will trigger the app restart for you. There is a short period of time between when you update this file and when the fabric notices. Only the RoleManager and the fabric are aware of this file - notice that you can't get to its values by reading it from disk, for instance.
And that, in a nutshell, is why we have the cscfg file and don't use web.config or app.config files.
All that being said, I don't want to give you the impression that you can't use web.config application settings in Windows Azure - you definitely can. However, any update to the web.config file will require a redeploy, so just keep that in mind and choose the cscfg file when you can.
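To make that concrete, here is a minimal sketch of reading a cscfg value from code, assuming the CTP-era SDK where the RoleManager type lives in Microsoft.ServiceHosting.ServiceRuntime; the setting name is made up for illustration and would have to be declared in your ServiceDefinition.csdef:

using Microsoft.ServiceHosting.ServiceRuntime;

public class ConfigReaderSketch
{
    public static string GetQueueAccount()
    {
        // 'QueueStorageAccount' is a hypothetical setting name; it must be
        // declared in ServiceDefinition.csdef and given a value in the cscfg.
        // Because the cscfg lives outside the diff disk, you can change the value
        // from the portal and the fabric restarts the role with the new setting -
        // no redeploy required.
        return RoleManager.GetConfigurationSetting("QueueStorageAccount");
    }
}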
Thursday, April 2, 2009
I don't like to simply re-blog what others have written, but I will make a minor exception today, as it is important enough to repeat. My friend and colleague Vittorio has explained the current status of the Geneva framework running on Windows Azure today.
The short and sweet is that we know it is not working 100% today and we are working on a solution. This is actually the main reason that you do not see a Windows Azure version of Azure Issue Tracker today.
Thursday, March 19, 2009
The purpose of this post is to show you the command line options you have to first get PHP running locally on IIS7 with fastCGI, and then to get it packaged and running in the Windows Azure local development fabric.
First, download the Windows Azure SDK and the latest version of PHP (or whatever version you wish). I would recommend getting the .zip version and simply extracting to "C:\PHP" or something like that. Now, configure your php.ini file according to best practices.
Next, create a new file called ServiceDefinition.csdef and copy the following into it:
<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="MyPHPApp" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole" enableNativeCodeExecution="true">
    <InputEndpoints>
      <!-- Must use port 80 for http and port 443 for https when running in the cloud -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
  </WebRole>
</ServiceDefinition>
You will need this file in order to describe your application in the cloud. This definition models what your application should look like in the cloud; for this example, it is a simple web role listening on port 80 over HTTP. Notice as well that the 'enableNativeCodeExecution' attribute is set. This is required in order to use fastCGI.
Then, create a .bat file for enabling fastCGI running locally. I called mine PrepPHP.bat:
@echo off
set phpinstall=%1
set appname=%2
set phpappdir=%3
echo Removing existing virtual directory...
"%windir%\system32\inetsrv\appcmd.exe" delete app "Default Web Site/%appname%"
echo.
echo Creating new virtual directory...
"%windir%\system32\inetsrv\appcmd.exe" add app /site.name:"Default Web Site" /path:"/%appname%" /physicalPath:"%phpappdir%"
echo.
echo Updating applicationHost.config file with recommended settings...
"%windir%\system32\inetsrv\appcmd.exe" clear config -section:fastCGI
"%windir%\system32\inetsrv\appcmd.exe" set config -section:fastCgi /+"[fullPath='%phpinstall%\php-cgi.exe']
echo.
echo Setting PHP handler for application
"%windir%\system32\inetsrv\appcmd.exe" clear config "Default Web Site/%appname%" -section:system.webServer/handlers
"%windir%\system32\inetsrv\appcmd.exe" set config "Default Web Site/%appname%" -section:system.webServer/handlers /+[name='PHP_via_FastCGI',path='*.php',verb='*',modules='FastCgiModule',scriptProcessor='%phpinstall%\php-cgi.exe',resourceType='Unspecified']
echo.
echo Setting Default Document to index.php
"%windir%\system32\inetsrv\appcmd.exe" clear config "Default Web Site/%appname%" -section:defaultDocument
"%windir%\system32\inetsrv\appcmd.exe" set config "Default Web Site/%appname%" -section:defaultDocument /enabled:true /+files.[@start,value='index.php']
echo.
echo Done...
Create one more .bat file to enable PHP running in the local dev fabric:
@echo off
set phpinstall=%1
set appname=%2
set phpappdir=%3
echo.
echo Setting PHP handler for application
"%windir%\system32\inetsrv\appcmd.exe" set config "Default Web Site/%appname%" -section:system.webServer/handlers /[name='PHP_via_FastCGI'].scriptProcessor:%%RoleRoot%%\php\php-cgi.exe
echo.
echo Outputting the web.roleconfig file
del "%phpappdir%\web.roleconfig" /q
echo ^<?xml version="1.0" encoding="utf-8" ?^> > "%phpappdir%\web.roleconfig"
echo ^<configuration^> >> "%phpappdir%\web.roleconfig"
echo ^<system.webServer^> >> "%phpappdir%\web.roleconfig"
echo ^<fastCgi^> >> "%phpappdir%\web.roleconfig"
echo ^<application fullPath="%%RoleRoot%%\php\php-cgi.exe" /^> >> "%phpappdir%\web.roleconfig"
echo ^</fastCgi^> >> "%phpappdir%\web.roleconfig"
echo ^</system.webServer^> >> "%phpappdir%\web.roleconfig"
echo ^</configuration^> >> "%phpappdir%\web.roleconfig"
echo Copying php assemblies and starting fabric
md %appname%_WebRole
md %appname%_WebRole\bin
robocopy %phpappdir% %appname%_WebRole\bin /E
robocopy %phpinstall% %appname%_WebRole\bin\php /E
"%programfiles%\windows azure sdk\v1.0\bin\cspack.exe" "%~dp0ServiceDefinition.csdef" /role:WebRole;"%~dp0%appname%_WebRole\bin" /copyOnly /generateConfigurationFile:"%~dp0ServiceDefinition.csx\ServiceConfig.cscfg"
"%programfiles%\windows azure sdk\v1.0\bin\csrun.exe" "%~dp0ServiceDefinition.csx" "%~dp0ServiceDefinition.csx\ServiceConfig.cscfg" /launchBrowser
echo.
echo Done...
Open a command prompt as an Administrator and make sure that the ServiceDefinition.csdef file you created is in the same directory as the .bat files you created.
From the command line, type:
PrepPHP.bat "path to php binaries directory" "myiis7appname" "path to my php app"
If I had installed PHP to "c:\php" and my PHP application was located at "c:\webroot\myphpapp", I would type:
prepphp.bat "c:\php" "myphpapp" "c:\webroot\myphpapp"
Now, I can launch IE and type in: http://localhost/myphpapp and the index.php page will launch.
To get this running in Windows Azure local dev fabric, you would type:
prepforazure.bat "c:\php" "myphpapp" "c:\webroot\myphpapp"
The dev fabric will launch along with IE, and you will be looking at your PHP application running in the dev fabric. If you wish to deploy to the cloud, simply change the command line call to cspack.exe to remove the '/copyOnly' option. Next, comment out the csrun.exe call, and you have a package ready to upload to the cloud. When you deploy at the portal, make sure you set the number of instances to at least 2 in order to get fault tolerance.
Sunday, March 1, 2009
I am officially back from parental leave and it was a wonderful time. I had thought that I would have a lot more time to get some side projects done that I have had simmering for some time now. However, that really didn't happen. What can I say? My daughter took a lot more of my time than I had planned. I don't regret it in the least.
Shortly after I started my leave, I discovered that my group was undergoing a small re-organization (nothing to do with the recent layoffs at Microsoft, incidentally). As part of this re-org, I have a new manager these days and a new role. I am now the technical evangelist for Windows Azure as opposed to SQL Services. As such, the content of this blog and my demos will obviously start to change to reflect my new role.
Part of me is sad to leave the SDS fold. There is some great work happening there and I would have liked to see it through. However, another part of me is happy to take on new challenges. Windows Azure is certainly a key part of the overall Azure Services Platform and there is an incredible body of work happening in this area.
Incidentally, this reunites me with Steve Marx. Steve and I used to work together on web evangelism (AJAX specifically) - he's an entertaining guy, so it should be fun to work with him again.