Wednesday, 04 June 2014

Brewmaster Template Definition

In order to author a Brewmaster template, you will need to have a basic understanding of how the template is structured.  In this blog post, I will cover all the basic terminology.

Methodology Explained

A central tenet of the Brewmaster template is that a template represents the goal state of how you want your deployment to be configured.  This has some interesting implications.  Namely, it means that if the state of your deployment is already as described, we skip it.  For example, if you describe 3 VMs and we only find 2 VMs in an existing deployment, we will deploy 1 VM.  We won't try to deploy 3 additional VMs in this example.  If you describe a Storage Account, we will make sure it exists.  Same thing for the Affinity Group.

Since we also view the template as your goal state, it means you can have the reverse scenario with a slight caveat:  my template describes 2 VMs and Brewmaster finds 3 VMs already deployed.  In this case and with your explicit configuration, we will remove the unreferenced VM.  The caveat here is that you need to explicitly tell Brewmaster to be destructive and remove VMs.

There is one more caveat to this 'goal state' idea and that is around Network settings.  Given that Network settings are global in nature to the subscription, we didn't want you to have to describe every possible Virtual Network and Subnet for every deployment (even the ones not done in Brewmaster).  If we were strict about a 'goal state', we would have to do so.  Instead, because of the global nature of these things and the monolithic description and configuration, we have taken an additive-only approach here.  We will only attempt to add additional Virtual Networks, Subnets, DNS Servers, etc., and we will never delete them if they become unreferenced or are not fully described.  One consequence of this is that if you describe Virtual Network A with a single subnet called Subnet AA and Brewmaster finds Virtual Network A with existing Subnet BB, we will not remove Subnet BB, and instead you will be left with both Subnet AA and Subnet BB.
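To make these rules concrete, here is a rough C# sketch of the reconciliation decisions described above.  This is illustrative pseudologic only - the names are made up and this is not Brewmaster's actual implementation (the real template is JSON, not code).

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative only - not the Brewmaster engine.
class GoalStateSketch
{
    static void Main()
    {
        var desiredVms = new[] { "web1", "web2", "web3" };              // described in the template
        var existingVms = new List<string> { "web1", "web2", "web4" };  // found in the deployment
        bool allowDestructive = false;                                  // must be explicitly enabled

        // Additive: create only what is missing (3 described, 2 matches => deploy web3).
        foreach (var vm in desiredVms.Except(existingVms))
            Console.WriteLine("create " + vm);

        // Destructive: an unreferenced VM (web4) is removed only when explicitly allowed.
        foreach (var vm in existingVms.Except(desiredVms))
            Console.WriteLine(allowDestructive ? "remove " + vm : "leave " + vm + " alone");

        // Network settings are additive-only: subnets, DNS servers, etc. are added, never deleted.
    }
}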

Now that you have the general methodology we follow, let's look at the possible configuration options.

Template Schema

At the highest level, you have 9 basic sections inside a template:

  • Parameters
  • Network
  • AffinityGroup
  • StorageAccounts
  • CloudServices
  • DeploymentGroups
  • Credentials
  • ConfigSets
  • Configurations

Parameters

This section contains replaceable params used throughout the template.  Parameter values are either supplied at deployment time or rely upon a described default.  Template authors can include optional metadata in the parameter definition for things like description, type hints, and even validation logic.  A parameter can be of type String, DateTime, Boolean, or Number.  Brewmaster's website will generate a dynamic UI and perform validation based on the values declared in this section.

We recommend that template authors use the type hints and validation capabilities to prevent easily avoided misconfigurations due to data entry mistakes.
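As a hypothetical illustration of what a typed, validated parameter buys you, the sketch below shows the kind of check that runs before a deployment ever starts.  The names and the regex are made up for this example; consult the template documentation for the real JSON schema.

using System;
using System.Text.RegularExpressions;

// Hypothetical illustration only - not the actual Brewmaster parameter schema.
class ParameterValidationSketch
{
    static void Main()
    {
        string description   = "Number of SQL nodes";   // metadata shown in the generated UI
        string type          = "Number";                // String, DateTime, Boolean, or Number
        string defaultValue  = "2";
        string suppliedValue = null;                     // nothing entered => use the default

        string value = suppliedValue ?? defaultValue;

        // Type hint: the value must parse as the declared type.
        double parsed;
        if (type == "Number" && !double.TryParse(value, out parsed))
            throw new ArgumentException(description + " must be numeric.");

        // Validation logic: e.g. a regex or range the template author supplies.
        if (!Regex.IsMatch(value, "^[1-9]$"))
            throw new ArgumentException(description + " must be between 1 and 9.");

        Console.WriteLine(description + " = " + value);
    }
}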

Network

The network settings described in here allow you to declare things like Virtual Networks, subnets, DNS servers, and Local Sites.  Keeping in mind the caveat that Networks are an additive-only approach, you need only describe the portions of the Networks settings in Azure that directly apply to your intended deployment.

AffinityGroup

A Brewmaster template is scoped to a single Affinity Group (AG).  Early on, we supported multiple AGs, but soon realized that things became far too complicated for a user to keep the definitions straight inside a single template.  In the IaaS world, it turns out that you really need an AG to do most anything interesting.  Virtual Networks must exist in an AG, for instance.  If you want your IaaS VMs to be able to communicate, you have to have one.  The implication of having a required AG definition is two-fold:  1.) we enforce the best practice of keeping all your resources in an AG, and 2.) you can only deploy to a single datacenter (region) per deployment right now.  If you need a multi-datacenter (region) deployment, you would just deploy the same template multiple times.

CloudServices

Each described Cloud Service (CS) can also describe a single VmDeployment within it.  That deployment can in turn describe multiple VMs (up to Azure's limit).  All the supported VM settings (Disks, Endpoints, etc.) are described within a VM.  Each VM can be associated with one or more ConfigSets.  I will describe this in more detail below, but ultimately, this is how you control what configuration is done on each VM.  Each VM can also be assigned a single Deployment Group (more below).  If one is not assigned, it belongs to the 'default' Deployment Group.

If the CS does not exist, Brewmaster will create it.  If the CS already exists and there is a deployment in the Production slot, Brewmaster will attempt to reconcile what is in the template with what has been found in the existing deployments.  Each machine name is unique within a CS, so if we find a matching VM name in the template configuration, Brewmaster will attempt to configure the machine as described.

DeploymentGroups

Deployment Groups (DGs) are a mechanism by which you can control the dependencies between machines.  By default, all VMs are in a single DG aptly called 'default'.  If you specify additional DGs and VMs have been assigned to them, those machines will be configured completely (end to end) before any other VMs are configured.  The order of the described DGs is also the order in which everything is deployed and configured.  Most templates will not use this capability - the default DG is just fine.  However, a classic example of needing this capability is something like Active Directory.  You would want your domain controllers to fully deploy and be configured and ready before additional VMs were spun up and configured, if those VMs were to also join the domain.
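The ordering rule is simple enough to sketch in a few lines of C#.  Again, this is illustrative only - the group and VM names are made up and this is not the actual Brewmaster engine.

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative ordering only - not the Brewmaster engine.
class DeploymentGroupSketch
{
    static void Main()
    {
        // VM name -> deployment group; unassigned VMs fall into 'default'.
        var vms = new Dictionary<string, string>
        {
            { "dc1",  "domain-controllers" },
            { "dc2",  "domain-controllers" },
            { "sql1", "default" },
            { "web1", "default" },
        };

        // Groups are processed in the order they are described; the 'default' VMs
        // are not touched until every earlier group is fully configured.
        var describedOrder = new[] { "domain-controllers", "default" };

        foreach (var group in describedOrder)
        {
            var members = vms.Where(kv => kv.Value == group).Select(kv => kv.Key);
            Console.WriteLine(group + ": configure " + string.Join(", ", members) + " end to end");
        }
    }
}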

Credentials

Instead of littering the template with a bunch of usernames and passwords, the template allows you to describe the credentials once and then refer to them everywhere.  We strongly suggest parameterizing the credentials here and not hardcoding them.  This will maximize the re-usability of your template.

ConfigSets

ConfigSets provide a convenient way to group VMs together for common configurations.  Inside of a ConfigSet, you can configure Endpoints that will be applied to all VMs within the ConfigSet.  Additionally, this is where you associate a particular Configuration (1 or more) to a set of VMs.

At deployment time, the common Endpoints will be expanded and added into the Endpoint configuration for each VM found to reference that ConfigSet.  In general, you will have a single ConfigSet per type of VM within your template definition.  So, if I have a SQL Server Always On template, I might have 1 ConfigSet for my SQL nodes, 1 for my Quorum nodes, and 1 for my Active Directory nodes.  If you find that you have common configuration between ConfigSets (e.g. each ConfigSet needs to format data disks), you can factor those operations out and apply them as a reusable Configuration in each ConfigSet.
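The endpoint expansion is easy to picture with a small sketch.  The ConfigSet names, VM names, and endpoints below are made up; this is just the idea, not the real engine or schema.

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative expansion only - every VM referencing a ConfigSet inherits its endpoints.
class ConfigSetExpansionSketch
{
    static void Main()
    {
        var configSetEndpoints = new Dictionary<string, string[]>
        {
            { "SqlNodes", new[] { "TCP/1433" } },
            { "AdNodes",  new[] { "TCP/53", "UDP/53" } },
        };

        var vmConfigSets = new Dictionary<string, string[]>
        {
            { "sql1", new[] { "SqlNodes" } },
            { "sql2", new[] { "SqlNodes" } },
            { "dc1",  new[] { "AdNodes" } },
        };

        foreach (var vm in vmConfigSets)
        {
            var endpoints = vm.Value.SelectMany(cs => configSetEndpoints[cs]).Distinct();
            Console.WriteLine(vm.Key + ": " + string.Join(", ", endpoints));
        }
    }
}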

Configurations

A Configuration is a set of DSC resources (or Powershell scripts) that will be applied to a VM.  You can factor your DSC resources into smaller, more re-usable sets, and reference them from the ConfigSet.  For instance, suppose I build out Active Directory and SQL Server in two ConfigSets.  I could have 2 different configurations - one for configuring Active Directory and one for configuring SQL Server.  However, if I also had some common configuration (e.g. formatting attached disks), I could create a 3rd Configuration and simply include that Configuration as a reference in each ConfigSet.  This way, we keep to the DRY principle and you only need to reference the reusable Configurations amongst one or more ConfigSets.

As a best practice, we recommend building out any custom actions as a DSC resource.  Long term, this ends up being a much more manageable practice than attempting to manage configuration as simple Powershell scripts.

Wrap Up

Hopefully this gives you a quick tour around the Brewmaster template definition and philosophy.  As we update things to support new features in Azure, you will see the schema change slightly.  Make sure to check out our documentation to keep up.

Wednesday, 28 May 2014

How Does Brewmaster Work?

In this post, I will cover at a high level what Brewmaster does behind the scenes to deploy your templates in Microsoft Azure.  There are around 8 major steps that we orchestrate for each template deployment.

First, let's start with an architectural overview:

image

We built Brewmaster as an API first service.  Other than registration itself, there is nothing the portal does that cannot also be done via the API.  That is, once you are registered with us, you have a subscription and a secret key that can be used to deploy.  In some subsequent posts, I will detail how to use the API.  The key components of Brewmaster are the API layer that everyone interacts with, our Template Validation logic that helps ensure you don't have common deployment errors, and our Workflow Engine that handles failures and retries and allows us to efficiently scale our resources.

All of the actions that we take on your behalf in Azure are recorded and presented back to the user in our View repositories.  You could drive your own dashboard with this data or simply use ours.

Template Registration

image

Brewmaster templates are stored in Git repositories.  Today, we support both Bitbucket and Github public repositories.  There is a known layout and convention for creating templates.  When you first come to Brewmaster, you will register your Git repository for your template, or choose one of our pre-existing templates (hosted at Github).  We chose Git because we wanted to be able to version the templates and always have a repeatable deployment.  This has the effect that you can have a single template with different branches for each deployment type (Production, Development, etc.)

Once you register your Git repository with us, it becomes available as a deployment option:

image

When you click the Deploy button, we will pull that template from Git, parse it for any parameters you have defined and generate a UI to accept those params.  Once you submit the parameters, we will combine the template and params together, expand the template into a full deployment schema and then run our Validation logic.  As a template author, you have control over validation logic as well for any parameters coming into the template.  Once the template is validated, it is passed to our Workflow engine to start the execution.

Azure Provisioning

Brewmaster templates are scoped to a single Azure Affinity Group (AG).  We require that any Azure assets you want to either use or create must reside in the same AG within a single template.  While that sounds restrictive, it really isn't.  There is not much you can do that is interesting in the IaaS world without your resources in a single AG.  As such, this also means that if you want to deploy to multiple AGs, you must create multiple templates.

Today, we support the provisioning of IaaS VMs, Cloud Services, Storage Accounts, and Virtual Networks through Brewmaster.  We may expand that to other Azure resources (e.g. Hadoop, Service Bus, CDN, etc.) as demand dictates.

 

image

In the simple (80%) cases, Brewmaster will deploy your entire set of Azure resources in a single go.  However, for more complicated scenarios where there are dependencies between Azure resources, you can actually stage them using the concept of Deployment Groups.  This allows some sets of Azure resources to be provisioned and configured before other sets.  That is a more advanced topic that I will delve into more later.

Bootstrap the VM

In order to communicate with the Windows machines and allow them to be configured, we must enable a few things on each node.  For instance, Powershell Remoting is required, along with Powershell 4.  The node might be rebooted during this time if required.

Brewmaster is dependent on Desired State Configuration (DSC).  In a nutshell, DSC is a declarative Powershell syntax where you define 'what to do' and leave the 'how to do it' up to DSC.  This fits perfectly with our own declarative template model.  As such, we support DSC completely.  That means you can use not only any of the built-in DSC resources, but also any other DSC resources - community released, or subsequently released "Wave" resources.  Don't see a DSC resource you need?  No worries, we also give you access to the 'Script' DSC resource, which will allow you to call into arbitrary Powershell code (or your existing scripts).

Once we have ensured that DSC is available, Brewmaster dynamically creates a DSC configuration script from the template and pushes it onto each node.

During this process, we also push down to each node a number of scripts and modules that the node will later execute locally.  Once your nodes have been configured with DSC and PS Remoting, we are ready to download your template package.

Pulling the Template Package

We refer to the Brewmaster template as the JSON file that describes your Azure and VM topology.  However, we refer to the template package as both the template JSON file and any other supporting assets required for configuring the VM.  Inside the package you can include things like custom scripts or DSC resources, installers, certificates, or other file assets.  The package has a known layout that will be used by Brewmaster.

Brewmaster instructs the node to pull the template package from the Git repository.  The node must pull the package from Git as opposed to Brewmaster pushing it to the node.  The main reason for this is quality of service and protection against unintentional (or intentional) denial of service.  Theoretically, the template package could be very big and we would not want to lock up our Brewmaster workers while trying to push a large package to each node.

Running the DSC Configuration

At this point, your node has all the assets from the template package and the DSC configuration that describes how to use them all.  Brewmaster now instructs the local node to compile the DSC into the appropriate MOF file and start a staged execution.

Brewmaster periodically connects to the node to discover information about the running DSC configuration (error, status, etc.).  It pulls what it finds back into Brewmaster and exposes that data to the user at the website (or via API).  This is how you can track status of a running configuration and know when something is done.

Wrap up

Once a deployment runs to its conclusion (either failure or success), Brewmaster records the details and presents them back to the user.  In this way, the user has a complete picture of what was deployed and any logs or errors that occurred.  Should something go wrong during a deployment, the user can simply fix the template definition (or an asset in the template package) and hit the 'Deploy' button again.  Brewmaster will re-execute the deployment, but this time it will skip the things it has already done (like provisioning in Azure or full bootstrapping) and will re-execute the DSC configurations.  Given the efficient nature of DSC, it will also only re-execute resources that are not in the 'desired state'.  As such, subsequent re-deploys can be orders of magnitude faster than the initial deployment.

Next up

In the coming installments, I will talk more about the key Brewmaster concepts, what each part of the schema does, and how to troubleshoot and debug your templates.

Thursday, 22 May 2014

Introducing Brewmaster Template SDK

I am excited to announce the release of the Brewmaster Template SDK.  With our new SDK, you can author your own Brewmaster templates that can deploy almost anything into Microsoft Azure.  Over the next few weeks, I will be updating my blog with more background on how Brewmaster templates work and how to easily author them.

First, a bit of backstory:  when we first built Brewmaster 6 months ago, we released with 5 supported templates.  The templates were for popular workloads that users had difficulty deploying easily (e.g.  SQL Server Always On).  The idea was that we would allow the user to get something bootstrapped into Azure quickly and then allow the user to simply RDP into the machine to update settings that were not exactly to their liking.  Even then, we knew that it would simply not be possible to offer enough combinations of options to satisfy every possible user and desired configuration.

The situation with static templates was that they were close(ish) for everyone, but perfect for no one.  We wanted to change that and to support arbitrarily complex deployment topologies.  With the release of our Template SDK, we have done just that.  With our new Template SDK you can:

  • Define and deploy any Windows IaaS topology in Azure.  You tell us what you want and we manage the creation in Azure.  We support any number of cloud services, VMs, networking, or storage account configurations.
  • Configure any set(s) of VMs that you deploy.  Install any software, configure any roles, manage firewalls, etc.  You name it.  We support Microsoft's Desired State Configuration (DSC) technology and we support it completely.  This means you can use any of Microsoft's many DSC configurations or author your own.  Don't have DSC resources for what you want yet?  We also support arbitrary Powershell scripts, so don't worry.
  • Clone, fork, or build any template.  We are open-sourcing all of our templates that were previously released as static templates.  We support public Git deployment from both GitHub and Bitbucket, so this means you can version any template you choose or deploy any revision.  This also means you can branch a template (e.g. DevTest, Production, Feature-AB) and deploy any or all of them at runtime.
  • Easily author your templates.  While the template itself is JSON-based, we are also shipping C# fluent syntax builders that give you the Intellisense you need to build out any configuration.  For more advanced configurations, we also support the Liquid template syntax.  This means you can put complex if/then, looping, filtering, and other control logic directly in your templates should you desire.

Brewmaster itself was built API first.  What this means is that you can script out any interaction with Brewmaster.  Want to deploy daily/weekly/hourly as part of your CI build?  With Brewmaster, you can do this as well.

One other thing:  Did I mention this was free?  Everything we are talking about here is being released for free.  Deploy for free, build for free,  and share your templates for free.

Visit http://brewmaster.aditicloud.com to register and get started today!

Tuesday, 29 January 2013

Anatomy of a Scalable Task Scheduler

On 1/18 we quietly released a version of our scalable task scheduler (creatively named 'Scheduler' for right now) to the Windows Azure Store.  If you missed it, you can see it in this post by Scott Guthrie.  The service allows you to schedule recurring tasks using the well-known cron syntax.  Today, we support a simple GET webhook that will notify you each time your cron expression fires.  However, you can be sure that we are expanding support to more choices, including (authenticated) POST hooks, Windows Azure Queues, and Service Bus Queues to name a few.

In this post, I want to share a bit about how we designed the service to support many tenants and potentially millions of tasks.  Let's start with a simplified, but accurate overall picture:

image

We have several main subsystems in our service (REST API façade, CRON Engine, and Task Engine) and additionally several shared subsystems across additional services (not pictured) such as Monitoring/Auditing and Billing/Usage.  Each one can be scaled independently depending on our load and overall system demand.  We knew that we needed to decouple our subsystems such that they did not depend on each other and could scale independently.  We also wanted to be able to develop each subsystem potentially in isolation without affecting the other subsystems in use.  As such, our systems do not communicate with each other directly, but only share a common messaging schema.  All communication is done over queues and asynchronously.

REST API

This is the layer that end users communicate with and the only way to interact with the system (even our portal acts as a client).  We use a shared secret key authentication mechanism where you sign your requests and we validate them as they enter our pipeline.  We implemented this REST API using Web API.  When you interact with the REST API, you are viewing fast, lightweight views of your scheduled task setup that reflect what is stored in our Job Repository.  However, we never query the Job Repository directly, to keep it responsive to its real job - providing the source data for the CRON Engine.
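For those curious what shared-secret signing looks like in general, here is a minimal sketch.  The canonical string and the header names below are illustrative only and are not the actual contract of our API - treat this as the shape of the pattern, nothing more.

using System;
using System.Security.Cryptography;
using System.Text;

// Generic shared-secret request signing sketch - not the actual Scheduler API contract.
class SignedRequestSketch
{
    static void Main()
    {
        byte[] secretKey = Convert.FromBase64String("c2VjcmV0LWtleS1nb2VzLWhlcmU=");
        string body = "{ \"cron\": \"0 * * * *\", \"url\": \"http://example.com/hook\" }";
        string timestamp = DateTime.UtcNow.ToString("o");

        // Sign something the server can recompute from the request itself.
        byte[] payload = Encoding.UTF8.GetBytes(timestamp + "\n" + body);

        string signature;
        using (var hmac = new HMACSHA256(secretKey))
            signature = Convert.ToBase64String(hmac.ComputeHash(payload));

        // The client sends these along with the body; the server recomputes the HMAC
        // with its copy of the key and rejects the request if the values don't match.
        Console.WriteLine("x-timestamp: " + timestamp);
        Console.WriteLine("x-signature: " + signature);
    }
}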

CRON Engine

This subsystem was designed to do as little as possible and farm out the work to the Task Engine.  When you have an engine that evaluates cron expressions and fire times, it cannot get bogged down trying to actually do the work.  This is a potentially IO-intensive role in the subsystem that is constantly evaluating when to fire a particular cron job.  In order to support many tenants, it must be able to run continuously without bogging down in execution.  As such, this role only evaluates when a particular cron job must run and then fires a command to the Task Engine to actually execute the potentially long running job.
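A rough sketch of that division of labor is below: evaluate the next fire time (here using the open-source NCrontab library, purely as an example) and, when it arrives, drop a command message on a queue for the Task Engine instead of doing the work inline.  The queue name and message format are made up, and our actual engine may well use different libraries and plumbing.

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;
using NCrontab; // example cron-parsing library, used here for illustration

// Sketch only: evaluate fire times and hand the actual work off via a queue.
class CronEngineSketch
{
    static void Main()
    {
        var schedule = CrontabSchedule.Parse("*/5 * * * *");   // every 5 minutes
        var account = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourkey");
        var queue = account.CreateCloudQueueClient().GetQueueReference("taskengine-commands");
        queue.CreateIfNotExist();

        var next = schedule.GetNextOccurrence(DateTime.UtcNow);
        while (true)
        {
            if (DateTime.UtcNow >= next)
            {
                // Fire-and-forget: the Task Engine picks this up and runs the (possibly long) job.
                queue.AddMessage(new CloudQueueMessage(
                    "{ \"jobId\": \"1234\", \"fireTime\": \"" + next.ToString("o") + "\" }"));
                next = schedule.GetNextOccurrence(DateTime.UtcNow);
            }
            Thread.Sleep(TimeSpan.FromSeconds(1));   // keep the engine itself lightweight
        }
    }
}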

Task Engine

The Task Engine is the grunt of the service and it performs the actual work.  It is the layer that will be scaled most dramatically depending on system load.  Commands from the CRON Engine for work are accepted and performed at this layer.  Subsequently, when the work is done it emits an event that other interested subsystems (like Audit and Billing) can subscribe to downstream.  The emitted event contains details about the outcome of the task performed and is subsequently denormalized into views that the REST API can query to provide back to a tenant.  This is how we can tell you your job history and report back any errors in execution.  The beauty of the Task Engine emitting events (instead of directly acting) is that we can subscribe many different listeners for a particular event at any time in the future.  In fact, we can orchestrate very complex workflows throughout the system as we communicate to unrelated, but vital subsystems.  This keeps our system decoupled and allows us to develop those other subsystems in isolation.
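The fan-out idea is easy to see in miniature.  The sketch below uses in-process delegates just to keep it short - in the real service this happens over queues, and the event fields shown are illustrative, not our actual message schema.

using System;
using System.Collections.Generic;

// Sketch of the idea: the Task Engine only emits an event; any number of downstream
// subscribers (audit, billing, the view builder the REST API reads) react to it.
class TaskCompleted
{
    public string JobId;
    public DateTime FiredAt;
    public bool Succeeded;
    public string Error;
}

class EventFanOutSketch
{
    static readonly List<Action<TaskCompleted>> Subscribers = new List<Action<TaskCompleted>>
    {
        e => Console.WriteLine("audit: record outcome of " + e.JobId),
        e => Console.WriteLine("billing: charge for " + e.JobId),
        e => Console.WriteLine("views: write history row for " + e.JobId),
    };

    static void Main()
    {
        var evt = new TaskCompleted { JobId = "1234", FiredAt = DateTime.UtcNow, Succeeded = true };
        foreach (var handler in Subscribers) handler(evt);   // new listeners can be added later
    }
}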

Future Enhancements

Today we are in a beta mode, intended to give us feedback about the type of jobs, frequency of execution, and what our system baseline performance should look like.  In the future, we know we will support additional types of scheduled tasks, more views into your tasks, and more complex orchestrations.  Additionally, we have set up our infrastructure such that we can deploy to multiple datacenters for resiliency (and even multiple clouds).  Give us a try today and let us know about your experience.

Wednesday, 10 October 2012

Next Stop: Aditi Technologies

I am excited to announce that I have officially joined Aditi Technologies as Director of Product Services.  Taking what I have learned building large scale solutions in Windows Azure, I will be responsible for building Aditi's own portfolio of SaaS services and IP/frameworks.  We have a number of exciting projects underway and I hope to blog more about what we are building soon.

Along with this move, I get to rejoin Wade (now my boss!) and Steve as well as some of my former Cumulux colleagues.  I took this role because I see a great opportunity to build software and services in the 'cloud' and I am convinced that Aditi has been making the right investments.  It doesn't hurt at all that I get to work with top-notch technologists either.

Along the way, I plan to build a team to deliver on these cloud services.  If you think you have what it takes to build great software, send me a note and your resume.  Thanks!

 

image

Friday, 25 May 2012

Interpreting Diagnostics Data and Making Adjustments

At this point in our diagnostics saga, we have our instances busily pumping out the data we need to manage and monitor our services.  However, it is simply putting the raw data in our storage account(s).  What we really want to do is query and analyze that data to figure out what is happening.

The Basics

Here I am going to show you the basic code for querying your data.  For this, I am going to be using LINQPad.  It is a tool that is invaluable for ad hoc querying and prototyping.  You can cut & paste the following script (hit F4 and add references and namespaces for Microsoft.WindowsAzure.StorageClient.dll and System.Data.Services.Client.dll as well).

void Main()
{

    var connectionString = "DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourkey";
    var account = CloudStorageAccount.Parse(connectionString);
    var client = account.CreateCloudTableClient();
     
    var ctx = client.GetDataServiceContext();
    
    var deploymentId = new Guid("25d676fb-f031-42b4-aae1-039191156d1a").ToString("N").Dump();
    
    var q = ctx.CreateQuery<PerfCounter>("WADPerformanceCountersTable")
        .Where(f => f.RowKey.CompareTo(deploymentId) > 0 && f.RowKey.CompareTo(deploymentId + "__|") < 0)
        .Where(f => f.PartitionKey.CompareTo(DateTime.Now.AddHours(-2).GetTicks()) > 0)
        //.Take(1)
        .AsTableServiceQuery()
        .Dump();

    //(q as DataServiceQuery<PerfCounter>).RequestUri.AbsoluteUri.Dump(); 
    //(q as CloudTableQuery<PerfCounter>).Expression.Dump();
}

static class Funcs
{
    public static string GetTicks(this DateTime dt)
    {
        return dt.Ticks.ToString("d19");
    }
}

[System.Data.Services.Common.DataServiceKey("PartitionKey", "RowKey")]
class PerfCounter
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public DateTime Timestamp { get; set; }
    public long EventTickCount { get; set; }
    public string Role { get; set; }
    public string DeploymentId { get; set; }
    public string RoleInstance { get; set; }
    public string CounterName { get; set; }
    public string CounterValue { get; set; }
    public int Level { get; set; }
    public int EventId { get; set; }
    public string Message { get; set; }
}

What I have done here is setup a simple script that allows me to query the table storage location for performance counters.  There are two big (and 1 little) things to note here:

  1. Notice how I am filtering down to the deployment ID (also called Private ID) of the deployment I am interested in seeing.  If you use the same storage account for multiple deployments, this is critical.
  2. Also, see how I have properly formatted the DateTime such that I can select a time range from the Partition Key appropriately.  In this example, I am retrieving the last 2 hours of data for all roles in the selected deployment.
  3. I have also commented out some useful checks you can use to test your filters.  If you uncomment the DataServiceQuery<T> line, you also should comment out the .AsTableServiceQuery() line.

Using the Data

If you haven't set absurd sample rates, you might actually get this data back in a reasonable time.  If you have lots of performance counters to monitor and/or you have high sample rates, be prepared to sit and wait for awhile.  Each tick is a single row in table storage.  You can return 1000 rows in a single IO operation.  It can take a very long time if you ask for large time ranges or have lots of data.

Once you have the query returned, you can actually export it into Excel using LINQPad and go about setting up graphs and pivot tables, etc.  This is all very doable, but also tedious.  I would not recommend this for long term management, but rather some simple point in time reporting perhaps.

For AzureOps.com, we went a bit further.  We collect the raw data, compress, and index it for highly efficient searches by time.  We also scale the data for the time range, otherwise you can have a very hard time graphing 20,000 data points.  This makes it very easy to view both recent data (e.g. last few hours) as well as data over months.  The value of the longer term data cannot be overstated.
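"Scaling the data" here just means downsampling.  Something along these lines keeps a chart readable regardless of the time range - this is a naive sketch, not our actual pipeline.

using System;
using System.Collections.Generic;
using System.Linq;

// Naive downsampling sketch: bucket raw samples by time and average each bucket
// so that any time range renders as roughly the same number of points.
class DownsampleSketch
{
    static IEnumerable<Tuple<DateTime, double>> Downsample(
        IList<Tuple<DateTime, double>> samples, int targetPoints)
    {
        var span = samples.Last().Item1 - samples.First().Item1;
        var bucket = TimeSpan.FromTicks(Math.Max(1, span.Ticks / targetPoints));

        return samples
            .GroupBy(s => new DateTime((s.Item1.Ticks / bucket.Ticks) * bucket.Ticks))
            .OrderBy(g => g.Key)
            .Select(g => Tuple.Create(g.Key, g.Average(s => s.Item2)));
    }

    static void Main()
    {
        var rnd = new Random();
        var raw = Enumerable.Range(0, 20000)   // e.g. 20,000 counter samples
            .Select(i => Tuple.Create(DateTime.UtcNow.AddSeconds(i * 120), (double)rnd.Next(15, 35)))
            .ToList();

        foreach (var point in Downsample(raw, 200))   // ~200 points is plenty for a chart
            Console.WriteLine(point.Item1 + " " + point.Item2);
    }
}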

Anyone that really wants to know what their service has been doing will likely need to invest in monitoring tools or services (e.g. AzureOps.com).  It is simply impractical to pull more than a few hours of data by querying the WADPerformanceCountersTable directly.  It is way too slow and way too much data for longer term analysis.

The Importance of Long Running Data

For lots of operations, you can just look at the last 2 hours of your data and see how your service has been doing.  We put that view as the default view you see when charting your performance counters in AzureOps.com.  However, you really should back out the data from time to time and observe larger trends.  Here is an example:

image

This is actual data we had last year during our early development phase of the backend engine that processes all the data.  This is the Average CPU over 8 hours and it doesn't look too bad.  We really can't infer anything from this graph other than we are using about 15-35% of our CPU most of the time.

However, if we back that data out a bit:

image

This picture tells a whole different story.  We realized that we were slowly doing more and more work with our CPU that did not correlate with the load.  This was not a sudden shift that happened in a few hours.  This was manifesting itself over weeks.  Very slowly, for the same amount of operations, we were using more CPU.  A quick check on memory told us that we were also chewing up more memory:

image

We eventually figured out the issue and fixed it (serialization issue, btw) - can you tell where?

image

Eventually, we determined what our threshold CPU usage should be under certain loads by observing long term trends.  Now, we know that if our CPU spikes above 45% for more than 10 mins, it means something is amiss.  We now alert ourselves when we detect high CPU usage:

image

Similarly, we do this for many other counters as well.  There is no magic threshold to choose, but if you have enough data you will be able to easily pick out the threshold values for counters in your own application.

In the next post, I will talk about how we pull this data together with analyzers and notifications, and automatically scale to meet demand.

Shameless plug:  Interested in getting your own data from Windows Azure and monitoring, alerting, and scaling?  Try AzureOps.com for free!

Monday, 16 April 2012

Getting Diagnostics Data From Windows Azure

Assuming you know what to monitor and you have configured your deployments to start monitoring, now you need to actually get the data and do something with it.

First, let's briefly recap how the Diagnostics Manager (DM) stores data.  Once it has been configured, the DM will start to buffer data to disk locally on the VM using the temporary scratch disk*.  It will buffer it using the quota policy found in configuration.  By default, this allocates 4GB of local disk space to hold diagnostics data.  You can change the quota with a little more work if you need to hold more, but most folks should be served just fine with the default.  Data is buffered as FIFO (first in, first out) in order to age out the oldest data first.
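If you do need more than the default, the "little more work" looks roughly like the sketch below, run from your role's startup code.  This is a sketch of the standard diagnostics API, not a drop-in recipe; in particular, you generally also need to make sure the role's DiagnosticStore local resource (declared in the service definition) is large enough to back the bigger quota.

using System;
using Microsoft.WindowsAzure.Diagnostics;

// Sketch: raise the local diagnostics buffer quota from role startup (e.g. OnStart).
class DiagnosticsQuotaSketch
{
    public static void ConfigureDiagnostics()
    {
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
        config.OverallQuotaInMB = 8192;   // default buffer is 4GB; bump to 8GB

        // Data still ages out FIFO once the quota is hit.
        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
    }
}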

Scheduled versus OnDemand

Once the data is buffering locally on the VM, you need to somehow transfer the data from the VM to your cloud storage account.  You can do this by either setting a Scheduled or OnDemand transfer.  In practice, I tend to recommend always using Scheduled transfers and ignoring the OnDemand option (it ends up being a lot easier). 

But, for completeness, here is an example of setting an OnDemand transfer:

void Main()
{
    var account = new CloudStorageAccount(
        new StorageCredentialsAccountAndKey("dunnry", "yourkey"),
        true
        );
        
    var mgr = new DeploymentDiagnosticManager(account, "6468a8b749a54c3...");
    
    foreach (string role in mgr.GetRoleNames())
    {
        var ridm = mgr.GetRoleInstanceDiagnosticManagersForRole(role);
        
        var options = new OnDemandTransferOptions()
        {
            From = DateTime.UtcNow - TimeSpan.FromMinutes(10),
            To = DateTime.UtcNow,
            NotificationQueueName = "pollme"
        };
        
        var qc = account.CreateCloudQueueClient();
        var q = qc.GetQueueReference("pollme");
        q.CreateIfNotExist();
        
        foreach (var i in ridm)
        {
            //cancel all pending transfers
            foreach (var pt in i.GetActiveTransfers())
            {
                i.CancelOnDemandTransfers(pt.Key);
            }
            
            var key = i.BeginOnDemandTransfer(DataBufferName.Logs, options);
            //poll here... why bother...
        }
    }
}

It's not exactly straightforward, but essentially, you need to specify the time range to transfer and optionally a queue to notify when completed.  You must ensure that all outstanding OnDemand transfers are canceled and then you can begin the transfer and ideally you should also cancel the transfer when it is completed.  In theory, this gives you some flexibility on what you want transferred.

As with most things in life, there are some gotchas to using this code.  Most of the time, folks forget to cancel the transfer after it completes.  When that happens, it prevents any updates to the affected data source.  This can impact you when you try to set new performance counters and see an error about an OnDemand transfer for instance.  As such, you end up writing a lot of code to detect and cancel pending transfers first before doing anything else in the API.

Using Scheduled transfers ends up being easier in the long run because you end up getting the same amount of data, but without having the pain of remembering to cancel pending transfers and all that.  Here is similar code (you should adapt for each data source you need to transfer):

void Main()
{
    var account = new CloudStorageAccount(
        new StorageCredentialsAccountAndKey("dunnry", "yourkey"),
        true
        );
        
    var mgr = new DeploymentDiagnosticManager(account, "6468a8b749a54c3...");
    
    foreach (string role in mgr.GetRoleNames())
    {
        var ridm = mgr.GetRoleInstanceDiagnosticManagersForRole(role);
        
        foreach (var idm in ridm)
        {
            var config = idm.GetCurrentConfiguration()
                ?? DiagnosticMonitor.GetDefaultInitialConfiguration();
            config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);
            
            //set other scheduled intervals here...

            idm.SetCurrentConfiguration(config);
        }
    }
}

This ends up being the technique we use for AzureOps.com.  When you set up your subscription with us, we detect the diagnostics connection string and allow you to change your data source settings.  For Performance Counters, we currently force the transfer period to 5 minutes (a good compromise) and allow you to choose the interval for other sources (i.e. Traces, Windows Event Logs).  When you use a provider like AzureOps, it is usually best to stream the data in relatively small chunks as opposed to, say, transferring once an hour.  Firstly, we won't be able to do anything with your data until we see it, and you probably want to be notified sooner than once an hour.  Secondly, when you set long transfer period times, there is a risk that you exceed the buffer quota and start to lose data that was never transferred.  In practice, we have not observed any noticeable overhead from transferring more often.  When in doubt, pick 5 mins.

Whew!  If you have made it this far, you now have a reasonable set of performance counters and trace information that is both being collected on your VMs in Windows Azure as well as being persisted to your storage account.  So, essentially, you need to now figure out what to do with that data.  That will be the subject of the next post in this series.

 

*if you are interested, RDP into an instance and check the resource drive (usually C:) under /Resources/Directory/<roleuniquename>/Monitor to see buffered data.

Sunday, 19 February 2012

Choosing What To Monitor In Windows Azure

One of the first questions I often get when onboarding a customer is "What should I be monitoring?".  There is no definitive list, but there are certainly some things that tend to be more useful.  I recommend the following Performance Counters in Windows Azure for all role types at bare minimum:

  • \Processor(_Total)\% Processor Time
  • \Memory\Available Bytes
  • \Memory\Committed Bytes
  • \.NET CLR Memory(_Global_)\% Time in GC

You would be surprised how much these 4 counters tell someone without any other input.  You can see trends over time very clearly when monitoring over weeks, and they will tell you what a 'normal' range for your application is.  If you start to see any of these counters spike (or drop, in the case of Available Bytes), this should be an indicator to you that something is going on that you should care about.

Web Roles

For ASP.NET applications, there are some additional counters that tend to be pretty useful:

  • \ASP.NET Applications(__Total__)\Requests Total
  • \ASP.NET Applications(__Total__)\Requests/Sec
  • \ASP.NET Applications(__Total__)\Requests Not Authorized
  • \ASP.NET Applications(__Total__)\Requests Timed Out
  • \ASP.NET Applications(__Total__)\Requests Not Found
  • \ASP.NET Applications(__Total__)\Request Error Events Raised
  • \Network Interface(*)\Bytes Sent/sec

If you are using something other than the latest version of .NET, you might need to choose the version specific instances of these counters.  By default, these are going to only work for .NET 4 ASP.NET apps.  If you are using .NET 2 CLR apps (including .NET 3.5), you will want to choose the version specific counters.

The last counter you see in this list is somewhat special as it includes a wildcard instance (*).  This is important to use in Windows Azure as the name of the actual network adapter instance can (and tends to) change over time and across deployments.  Sometimes it is "Local Area Connection* 12", sometimes it is "Microsoft Virtual Machine Bus Network Adapter".  The latter tends to be the one you see most often with data, but just to be sure, I would include them all.  Note, this is not an exhaustive list - if you have custom counters or additional system counters that are meaningful, by all means include them.  In AzureOps, we can set these remotely on your instances using the property page for your deployment.

image

Choosing a Sample Rate

You should not need to sample any counter faster than 30 seconds.  Period.  In fact, in 99% of all cases, I would actually recommend 120 seconds (that is the default we recommend in AzureOps).  This might seem like you are losing too much data or that you are going to miss something.  However, experience has shown that this sample rate is more than sufficient to monitor the system over days, weeks, and months with enough resolution to know what is happening in your application.  The difference between 30 seconds and 120 seconds is 4 times as much data.  When you sample at 1 and 5 second sample rates, you are talking about 120x and 24x the amount of data.  That is per instance, by the way.  If you have more than 1 instance, multiply that by the number of instances.  It will quickly approach absurd quantities of data that costs you money in transactions and storage, adds no additional value to your analysis, and is a lot more painful to keep.  Resist the urge to use 1, 5, or even 10 seconds - try 120 seconds to start and tune down only if you really need to.
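Wiring those counters up in code looks something like the sketch below, using the standard Windows Azure Diagnostics API from role startup.  Adjust to however you actually configure diagnostics (in AzureOps we set these remotely, as mentioned above).

using System;
using Microsoft.WindowsAzure.Diagnostics;

// Sketch: add the recommended counters with a 120-second sample rate and a scheduled transfer.
class CounterConfigSketch
{
    public static void ConfigureCounters()
    {
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        var specifiers = new[]
        {
            @"\Processor(_Total)\% Processor Time",
            @"\Memory\Available Bytes",
            @"\Memory\Committed Bytes",
            @"\.NET CLR Memory(_Global_)\% Time in GC",
            @"\Network Interface(*)\Bytes Sent/sec",   // wildcard - adapter names change
        };

        foreach (var specifier in specifiers)
        {
            config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
            {
                CounterSpecifier = specifier,
                SampleRate = TimeSpan.FromSeconds(120)   // start here; tune down only if needed
            });
        }

        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
    }
}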

Tracing

The other thing I recommend for our customers is to use tracing in their application.  If you only use the built-in Trace.TraceInformation (and similar), you are ahead of the game.  There is an excellent article in MSDN about how to setup more advanced tracing with TraceSources that I recommend as well.

I recommend using tracing for a variety of reasons.  First, it will definitely help you when your app is running in the cloud and you want to gain insight into issues you see.  If you had logged exceptions to Trace or critical code paths to Trace, then you now have potential insight into your system.  Additionally, you can use this as a type of metric in the system to be mined later.  For instance, you can log the length of time a particular request or operation is taking.  Later, you can pull those logs and analyze what the bottleneck was in your running application.  Within AzureOps, we can parse trace messages in a variety of ways (including semantically).  We use this functionality to alert ourselves when something strange is happening (more on this in a later post).

The biggest obstacle I see with new customers is remembering to turn on transfer for their trace messages.  Luckily, within AzureOps, this is again easy to do.  Simply set a Filter Level and a Transfer Interval (I recommend 5 mins).

image

The Filter Level will depend a bit on how you use the filtering in your own traces.  I have seen folks that trace rarely, so lower filters are fine.  However, I have also seen customers trace upwards of 500 traces/sec.  As a point of reference, at that level of tracing, you are talking about 2GB of data on the wire each minute if you transfer at that verbosity.  Heavy tracers, beware!  I usually recommend Verbose for light tracers and Warning for heavy tracers that instrument each method, for instance.  You can always change this setting later, so don't worry too much right now.
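If you set these two knobs yourself in code rather than through AzureOps, it looks roughly like this (a sketch against the standard diagnostics API, meant to be combined with the rest of your diagnostics configuration):

using System;
using Microsoft.WindowsAzure.Diagnostics;

// Sketch: transfer trace logs every 5 minutes, filtered to Warning and above
// (use Verbose if you trace lightly).
class TraceTransferSketch
{
    public static void ConfigureTraceTransfer(DiagnosticMonitorConfiguration config)
    {
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Warning;
    }
}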

Coming Up

In the next post, I will walk you through how to setup your diagnostics in Windows Azure and point out some common pitfalls that I see.

 

 

Monitoring in Windows Azure

For the last year since leaving Microsoft, I have been deeply involved in building a world class SaaS monitoring service called AzureOps.  During this time, it was inevitable that I would see not only how best to monitor services running in Windows Azure, but also the common pitfalls amongst our beta users.  It is one thing to be a Technical Evangelist like I was and occasionally use a service for a demo or two, and quite another to attempt to build a business on it.

Monitoring a running service in Windows Azure can actually be daunting if you have not worked with it before.  In this coming series, I will attempt to share the knowledge we have gained building AzureOps and from our customers.  The series will be grounded in these areas:

  1. Choosing what to monitor in Windows Azure.
  2. Getting the diagnostics data from Windows Azure
  3. Interpreting diagnostics data and making adjustments
  4. Maintaining your service in Windows Azure.

Each one of these areas will be a post in the series and I will update this post to keep a link to the latest.  I will use AzureOps as an example in some cases to highlight both what we learned as well as the approach we take now due to this experience.

If you are interested in monitoring your own services in Windows Azure, grab an invite and get started today!

Wednesday, 24 August 2011

Handling Continuation Tokens in Windows Azure - Gotcha

I spent the last few hours debugging an issue where a query in Windows Azure table storage was not returning any results, even though I knew that data was there.  It didn't start that way of course.  Rather, stuff that should have been working and previously was working, just stopped working.  Tracing through the code and debugging showed me it was a case of a method not returning data when it should have.

Now, I have known for quite some time that you must handle continuation tokens and you can never assume that a query will return data always (Steve talks about it waaaay back when here).  However, what I did not know was that different methods of enumeration will give you different results.  Let me explain by showing the code.

var q = this.CreateQuery()
    .Where(filter)
    .Where(f => f.PartitionKey.CompareTo(start.GetTicks()) > 0)
    .Take(1)
    .AsTableServiceQuery();
var first = q.FirstOrDefault();
if (first != null)
{
    return new DateTime(long.Parse(first.PartitionKey));
}

In this scenario, you would assume that you have continuation tokens nailed because you have the magical AsTableServiceQuery extension method in use.  It will magically chase the tokens until conclusion for you.  However, this code does not work!  It will actually return null in cases where you do not hit the partition server that holds your query results on the first try.

I could easily reproduce the query in LINQPad:

var q = ctx.CreateQuery<Foo>("WADPerformanceCountersTable")
    .Where(f => f.RowKey.CompareTo("9232a4ca79344adf9b1a942d37deb44a") > 0 && f.RowKey.CompareTo("9232a4ca79344adf9b1a942d37deb44a__|") < 0)
    .Where(f => f.PartitionKey.CompareTo(DateTime.Now.AddDays(-30).GetTicks()) > 0)
    .Take(1)
    .AsTableServiceQuery()
    .Dump();    

Yet, this query worked perfectly.  I got exactly 1 result as I expected.  I was pretty stumped for a bit, then I realized what was happening.  You see FirstOrDefault will not trigger the enumeration required to generate the necessary two round-trips to table storage (first one gets continuation token, second gets results).  It just will not force the continuation token to be chased.  Pretty simple fix it turns out:

var first = q.AsEnumerable().SingleOrDefault();

Hours wasted for that one simple line fix.  Hope this saves someone the pain I just went through.

Thursday, 14 July 2011

How to Diagnose Windows Azure Error Attaching Debugger Errors

I was working on a Windows Azure website solution the other day and suddenly started getting this error when I tried to run the site with a debugger:

image

This error is one of the hardest to diagnose.  Typically, it means that there is something crashing in your website before the debugger can attach.  A good candidate to check is your global.asax to see if you have changed anything there.  I knew that the global.asax had not been changed, so it was puzzling.  Naturally, I took the normal course of action:

  1. Run the website without debug inside the emulator.
  2. Run the website with and without debugging outside the emulator.
  3. Tried it on another machine

None of these methods gave me any clue what the issue was as they all worked perfectly fine.  It was killing me that it only happened when debugging inside the emulator and only on 1 machine (the one I really needed to work).  I was desperately looking for a solution that did not involve rebuilding the machine.  I turned on Sysinternals' DebugView to see if there were some debug messages telling me what the problem was.  I saw an interesting number of things, but nothing that really stood out as the source of the error.  However, I did notice the process ID of what appeared to be reporting errors:

image

Looking at Process Explorer, I found this was for DFAgent.exe (the Dev Fabric Agent).  I could see that it was starting with an environment variable, so I took a look at where that was happening:

image

That gave me a direction to start looking.  I opened the %UserProfile%\AppData\Local\Temp directory and found a conveniently named file there called Visual Studio Web Debugger.log. 

image

A quick look at it showed it to be HTML, so one rename later and voilà!

image

One of our developers had overridden the <httpErrors> setting in web.config, which was disallowed on my 1 machine.  I opened my applicationHost.config using an administrative (elevated) Notepad and sure enough:

image

So, the moral of the story is next time, just take a look at this log file and you might find the issue.  I suspect the reason that this only happened on debug and not when running without the debugger was that for some reason the debugger is looking for a file called debugattach.aspx.  Since this file does not exist on my machine, it throws a 404, which in turn tries to access the <httpErrors> setting, which culminates in the 500.19 server error.  I hope this saves someone the many hours I spent finding it and I hope it prevents you from rebuilding your machine as I almost did.

Tuesday, 17 May 2011

Windows Azure Bootstrapper

One of the common patterns I have noticed across many customers is the desire to download a resource from the web and execute it as part of the bootup process for a Windows Azure web or worker role.  The resource could be any number of things (e.g. exe, zip, msi, cmd, etc.), but typically is something that the customer does not want to package directly in the deployment package (cspkg) for size or update/maintainability reasons.

While the pattern is pretty common, the ways to approach the problem can certainly vary.  A more complete solution will need to deal with the following issues:

  • Downloading from arbitrary http(s) sources or Windows Azure blob storage
  • Logging
  • Parsing configuration from the RoleEnvironment or app.config
  • Interacting with the RoleEnvironment to get ports, DIP addresses, and Local Resource paths
  • Unzipping resources
  • Launching processes
  • Ensuring that resources are only installed once (or downloaded and unzipped once)

With these goals in mind, we built the Windows Azure Bootstrapper.  It is a pretty simple tool to use and requires only packaging of the .exe and the .config file itself in your role.  With these two items in place, you can then script out fairly complicated installations.  For example, you could prepare your roles with MVC3 using a command like this:

bootstrapper.exe -get http://download.microsoft.com/download/F/3/1/F31EF055-3C46-4E35-AB7B-3261A303A3B6/AspNetMVC3ToolsUpdateSetup.exe -lr $lr(temp) -run $lr(temp)\AspNetMVC3ToolsUpdateSetup.exe -args /q

Check out the project page for more examples, but the possibilities are pretty endless here.  One customer uses the Bootstrapper to download agents and drivers from their blob storage account to install at startup for their web and worker roles.  Other folks use it to simply copy files out of blob storage and lay them out correctly on disk.

Of course, none of this would be available in the community if not for the great guys working at National Instruments.  They allowed us to take this code written for them and turn it over to the community.

Enjoy and let us know your feedback or any bugs you find.

Tuesday, 11 January 2011

A new Career Path

As some of you may know, I recently left Microsoft after almost 4 years.  It was a wonderful experience and I met and worked with some truly great people there.  I was lucky enough to join at the time when the cloud was just getting started at Microsoft.  Code words like CloudDB, Strata, Sitka, Red Dog, and others I won't mention here flew around like crazy.

I started as the SQL Server Data Services (SSDS) evangelist and later became the Windows Azure Evangelist (after SSDS eventually became SQL Azure).  For the last 2 years, Windows Azure has been my life and it was great seeing a technology very rapidly grow from CTP to a full featured platform in such a short time.

Well, I am not moving far from Windows Azure - I still strongly believe in what Microsoft has done here.  I recently joined Cumulux as Director of Cloud Services.  I will be driving product strategy, some key customer engagements, and take part in the leadership team there.  Cumulux is a close Microsoft partner with a Windows Azure focus, so it is a great fit for me for both personal as well as professional reasons.

While I will miss the great folks I got to work with at Microsoft and being in the thick of everything, I am very excited to begin this new career path with Cumulux.

Sunday, 19 December 2010

Using WebDeploy with Windows Azure

Update:  Looks like Wade also blogged about this, showing a similar way with some early scripts I wrote.  However, there are some important differences around versioning of WebDeploy and creating users that you should pay attention to here.  Also, I am using plugins, which are much easier to consume.

One of the features announced at PDC was the coming ability to use the standard IIS7 WebDeploy capability with Windows Azure.  This is exciting news, but it comes with an important caveat.  This feature is strictly for development purposes only.  That is, you should not expect anything you deploy using WebDeploy to persist long term on your running Windows Azure instances.  If you are familiar with Windows Azure, you know the reason is that any changes to the OS post-startup are not durable.

So, the use case for WebDeploy in Windows Azure is in the case of a single instance of a role during development.  The idea is that you would:

  1. Deploy your hosted service (using the .cspkg upload and deploy) with a single instance with WebDeploy enabled.
  2. Use WebDeploy on subsequent deploys for instant feedback
  3. Deploy the final version again using .cspkg (without WebDeploy enabled) so the results are durable with at least 2 instances.

The Visual Studio team will shortly be supplying some tooling to make this scenario simple.  However, in the interim, it is relatively straightforward to implement this yourself and do what the tooling will eventually do for you.

If you look at the Windows Azure Training Kit, you will find the Advanced Web and Worker Lab Exercise 3 and it will show you the main things to get this done.  You simply need to extrapolate from this to get the whole thing working.

We will be using Startup Tasks to perform a little bit of bootstrapping when the role instance (note the singular) starts in order to use Web Deploy.  This will be implemented using the Role Plugin feature to make this a snap to consume.  Right now, the plugins are undocumented, but if you peruse your SDK bin folder, it won't be too hard to figure out how they work.

The nice thing about using a plugin is that you don't need to litter your service definition with anything in order to use the feature.  You simply need to include a single "Import" element and you are done!

Install the Plugin

In order to use this feature, simply extract the contents of the zip file into "%programfiles%\Windows Azure SDK\v1.3\bin\plugins\WebDeploy".  You might have to extract the files locally in your profile first and copy them into this location if you run UAC.  At the end of the day, you need a folder called "WebDeploy" in this location with all the files in this zip in it.

Use the Plugin

To use the plugin you simply need to add one line to your Service Definition:

  <Imports>
    <Import moduleName="RemoteAccess" />
    <Import moduleName="RemoteForwarder" />
    <Import moduleName="WebDeploy" />
  </Imports>

Notice, in my example, I am also including the Remote Access and Remote Forwarder plugins as well.  You must have RemoteAccess enabled to use this plugin as we will rely on the user created here.  The Remote Forwarder is required on one role in your solution.

Next, you should hit publish on the Windows Azure project (not Web Site) and setup an RDP user.  We will be using this same user later in order to deploy because by default the RDP user is also an Admin with permission to use Web Deploy.  Alternatively, you could create an admin user with a Startup Task, but this method is easier (and you get RDP).  If you have not setup the certificates for RDP access before, it is a one time process outlined here.

 

image

 

image

Now, you publish this to Windows Azure using the normal Windows Azure publishing process:

image

That's it.  Your running instance has WebDeploy and the IIS Management Service running now.  All you had to do was import the WebDeploy plugin and make sure you also used RDP (which most developers will enable during development anyway).

At this point, it is a pretty simple matter to publish using WebDeploy.  Just right click the Web Site (not the Cloud Project) and hit publish:

image

You will need to type in the name of your .cloudapp.net project in the Service URL.  By default, it will assume port 8172; if you have chosen a different public VIP port (by editing the plugin -see *hint below), here is where you need to update it (using the full https:// syntax).

Next, you need to update the Site/Application.  The format for this is ROLENAME_IN_0_SITENAME, so you need to look in your Service Definition to find this:

image

Notice that in my example my ROLENAME is "WebUX" and the Site name is "Web", so the value to enter is WebUX_IN_0_Web.  This can differ for each Web Role, so make sure you check it carefully.

Finally, check the "Allow untrusted certificate" option and use the same RDP username and password you created on deploy.  That's all.  It should be possible for you to use Web Deploy now with your running instance:  Hit Publish and Done!

How it works

If you check the plugin, you will see that we use 2 scripts and WebPI to do this.  Actually, this is the command line version of WebPI, so it will run without prompts.  You can also download it, but it is packaged already in the plugin.

The first script, called EnableWebAdmin.cmd, simply enables the IIS Management Service and gets it running.  By default, this sets up the service to run and listen on port 8172.  If you check the .csplugin file, you will notice we have opened that port on the Load Balancer as well.

start /w ocsetup IIS-ManagementService
reg add HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\WebManagement\Server /v EnableRemoteManagement /t REG_DWORD /d 1 /f
net start wmsvc
sc config WMSVC start= auto
exit /b 0

*Hint: if you work at a place like Microsoft that prevents ports other than 443 from hosting SSL traffic, here would be a good place to map 443 to 8172 in order to deploy.  For most normal environments, this is not necessary.

Next, we have the proper WebDeploy installation command using WebPI command line.

@echo off
ECHO "Starting WebDeploy Installation" >> log.txt
"%~dp0webpicmd\WebPICmdLine.exe" /Products: WDeploy /xml:https://www.microsoft.com/web/webpi/2.0/RTM/WebProductList.xml /log:webdeploy.txt
ECHO "Completed WebDeploy Installation" >> log.txt
net stop wmsvc
net start wmsvc

You will notice one oddity here - namely, we are using the WebPI 2.0 feed to install an older version of WebDeploy (v1.1).  If you leave this off, it will default to the 3.0 version of WebPI that uses the Web Deploy 2.0 RC.  In my testing, Web Deploy 2.0 RC only works sporadically and usually not at all.  The 1.1 version is well integrated in VS tooling, so I would suggest this one instead until 2.0 works better.

Also, you will notice that I am stopping and restarting the IIS Management Service here as well.  In my testing, WebDeploy was unreliable on the Windows Server 2008 R2 family (osFamily=2).  Starting and stopping the service seems to fix it.

Final Warning

Please bear in mind that this method is only for prototyping and rapid development.  Since the changes are not persisted, they will disappear the next time your instance is healed by the fabric controller.  Eventually, the Visual Studio team will release their own plugin that does essentially the same thing and you can stop using this.  In the meantime, have fun!

Tuesday, 30 November 2010

Using Windows Azure MMC and Cmdlet with Windows Azure SDK 1.3

If you haven't already read it on the official blog, Windows Azure SDK 1.3 was released (along with Visual Studio tooling).  Rather than rehash what it contains, go read that blog post if you want to know what is included (and there are lots of great features).  Even better, go to microsoftpdc.com and watch the videos on the specific features.

If you are a user of the Windows Azure MMC or the Windows Azure Service Management Cmdlets, you might notice that things break on new installs.  The underlying issue is that the MMC snap-in was written against the 1.0 version of the Storage Client.  With the 1.3 SDK, the Storage Client is now at 1.1.  This means you can fix it in one of two ways:

  1. Copy the older Storage Client from the 1.2 SDK into the MMC's "release" directory.  Of course, you need to either have the .dll handy or go find it.  If you had the MMC installed prior to SDK 1.3, you probably already have it in the release directory and you are good to go.
  2. Use Assembly Redirection to allow the MMC or Powershell to call into the 1.1 client instead of the 1.0 client.

To use Assembly Redirection, just create a "mmc.exe.config" file for MMC (or "Powershell.exe.config" for cmdlets) and place it in the %windir%\system32 directory.  Inside the .config file, just use the following xml:

<configuration>
   <runtime>
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
         <dependentAssembly>
            <assemblyIdentity name="Microsoft.WindowsAzure.StorageClient"
                              publicKeyToken="31bf3856ad364e35"
                              culture="neutral" />
            <bindingRedirect oldVersion="1.0.0.0"
                             newVersion="1.1.0.0" />
         </dependentAssembly>
         <dependentAssembly>
            <assemblyIdentity name="Microsoft.WindowsAzure.StorageClient"
                              publicKeyToken="31bf3856ad364e35"
                              culture="neutral" />
            <publisherPolicy apply="no" />
         </dependentAssembly>
      </assemblyBinding>
   </runtime>
</configuration>

Tuesday, 16 November 2010

Back from PDC and TechEd Europe

I am back from TechEd EU (and PDC)!  It was a long and exciting few weeks, first getting ready for PDC and then flying to Germany to deliver a couple of sessions - a great, if hectic, experience.  For PDC, I had a part to play in the keynote demos as well as the PDC Workshop on the following Saturday.  Those two things together took more time than I would care to admit.  Once PDC was a wrap, I got to start building my talks for TechEd EU.

You can find both of my talks on the TechEd Online site.

Thanks to everyone who attended these sessions in person and for all the nice comments.  I have almost completely recovered my voice at this point - my cold lasted just the duration of the trip, of course!

 

 


Friday, 15 October 2010

Load Testing on Windows Azure

For the 30th Cloud Cover show, Steve and I talked about coordinating work.  It is not an entirely new topic, as there have been a couple of variations on it before.  However, the solution I coded is actually different than both of those.  I chose to simply iterate through all the available instances and fire a WCF message to each one.  I could have done this in parallel or used multicasting (probably more elegant), but for simplicity's sake: it just works.

The reason I needed to coordinate work was in context of load testing a site.  Someone had asked me about load testing in Windows Azure.  The cloud is a perfect tool to use for load testing.   Where else can you get a lot of capacity for a short period of time?  With that thought in mind, I searched around to find a way to generate a lot of traffic quickly.

I built a simple 2 role solution where a controller could send commands to a worker and the worker in turn would spawn a load testing tool.  For this sample, I chose a tool called ApacheBench.  You should watch the show to understand why I chose ApacheBench instead of the more powerful WCAT.
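
To make the fan-out concrete, here is a minimal sketch (not the actual code from the show) of how a controller role could iterate the worker instances and fire a WCF command at each one over an internal endpoint.  The role name "LoadWorker", the endpoint name "Command", and the ILoadRunner contract are all made up for illustration:

// Hypothetical contract and names - see the note above.
using System;
using System.ServiceModel;
using Microsoft.WindowsAzure.ServiceRuntime;

[ServiceContract]
public interface ILoadRunner
{
    [OperationContract]
    void StartTest(string targetUrl, int requests, int concurrency);
}

public static class LoadTestController
{
    public static void StartOnAllWorkers(string targetUrl, int requests, int concurrency)
    {
        var factory = new ChannelFactory<ILoadRunner>(new NetTcpBinding(SecurityMode.None));

        // Iterate every instance of the worker role and send it the command.
        foreach (var instance in RoleEnvironment.Roles["LoadWorker"].Instances)
        {
            var endpoint = instance.InstanceEndpoints["Command"].IPEndpoint;
            var channel = factory.CreateChannel(
                new EndpointAddress(String.Format("net.tcp://{0}/runner", endpoint)));

            channel.StartTest(targetUrl, requests, concurrency);
            ((ICommunicationObject)channel).Close();
        }
    }
}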


Get the source - It's Hammer Time!

 

PS: Big props go to Steve for AJAX'ing the source up side the head.

Thursday, 14 October 2010

Sticky HTTP Session Routing in Windows Azure

I have been meaning to update my blog for some time with the routing solution I presented in Cloud Cover 24.  One failed laptop hard drive later and, well, I lost my original post and didn't get around to replacing it until now.  Anyhow, enough of my sad story.

One of the many great benefits of using Windows Azure is that networking is taken care of for you.  You get connected to a load balancer for all your input endpoints and we automatically round-robin the requests to all your instances that are declared to be part of the role.  Visually, it is pretty easy to see:

image

However, this introduces a slight issue common to all web farm scenarios:  state doesn't work very well.  If your service relies on communicating with exactly the same machine as last time, you are in trouble.  In fact, this is why you will always hear an overarching theme of designing to be stateless in the cloud (or any scale-out architecture).

There are inevitably scenarios where sticky sessions (routing when possible to the same instance) will benefit a given architecture.  We talked a lot about this on the show.  In order to produce sticky sessions in Windows Azure, you need a mechanism to route to a given instance.  For the show, I talked about how to do this with a simple socket listener.

 

image

We introduce a router between the Load Balancer and the Web Roles in this scenario.  The router peeks at the incoming headers (1) to determine, via cookies or other logic, where to route the request.  It then forwards the traffic over sockets to the appropriate server (2).  For all return responses, the router optionally peeks at that stream as well to determine whether to inject the cookie (3).  Finally, if necessary, the router injects a cookie that states where the user should be routed on subsequent requests (4).

I should note that during the show we talked about how this cookie injection could work, but the demo did not use it.  For this post, I went ahead and updated the code to actually perform the cookie injection, which allows me to have a stateless router as well.  In the demo on the show, I had a router that simply used a dictionary lookup of SessionIDs to Endpoints.  However, we pointed out that this was problematic without externalizing that state somewhere.  A simple solution is to have the client deliver, via a cookie, the instance it was last connected to.  This way the state is carried on each request and is no longer the router's responsibility.
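
To give a flavor of the cookie-based lookup, here is a simplified sketch of just the routing decision (the real sample on Code Gallery also does the raw socket forwarding and response rewriting).  The role name "WebUX", the internal endpoint name "Http", and the cookie name "backendInstance" are assumptions for illustration:

// Simplified routing decision only - not the Code Gallery sample itself.
using System;
using System.Linq;
using System.Net;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class StickyRouter
{
    private const string CookieName = "backendInstance";

    // Given the peeked request headers, pick the backend instance to forward to.
    public static IPEndPoint ChooseBackend(string rawHeaders, out string setCookieHeader)
    {
        var instances = RoleEnvironment.Roles["WebUX"].Instances;

        // 1. If the client already carries our routing cookie, honor it.
        var sticky = instances.FirstOrDefault(i => rawHeaders.Contains(CookieName + "=" + i.Id));
        if (sticky != null)
        {
            setCookieHeader = null;
            return sticky.InstanceEndpoints["Http"].IPEndpoint;
        }

        // 2. Otherwise pick an instance and remember to inject the cookie into
        //    the response so subsequent requests stick to the same instance.
        var chosen = instances[new Random().Next(instances.Count)];
        setCookieHeader = String.Format("Set-Cookie: {0}={1}; path=/", CookieName, chosen.Id);
        return chosen.InstanceEndpoints["Http"].IPEndpoint;
    }
}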

You can download the code for this from Code Gallery.  Please bear in mind that this was only lightly tested and I make no guarantees about production worthiness.

Friday, 16 July 2010

Getting the Content-Type from your Registry

For Episode 19 of the Cloud Cover show, Steve and I discussed the importance of setting the Content-Type on your blobs in Windows Azure blob storage.  This is especially important for Silverlight clients.  I mentioned that there is a way to look up a Content-Type from your registry as opposed to hardcoding a list.  The code is actually pretty simple.  I pulled this from some code I had lying around that does uploads.

Here it is:

private static string GetContentType(string file)
{
	string contentType = "application/octet-stream";
	string fileExt = System.IO.Path.GetExtension(file).ToLowerInvariant();
	RegistryKey fileExtKey = Registry.ClassesRoot.OpenSubKey(fileExt);
	if (fileExtKey != null && fileExtKey.GetValue("Content Type") != null)
	{
		contentType = fileExtKey.GetValue("Content Type").ToString();
	}
	return contentType;
}
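
(The snippet above also needs a "using Microsoft.Win32;" for RegistryKey.)  For context, here is a hedged sketch of how you might use it when uploading with the v1.x StorageClient library - the connection string and container name are placeholders:

using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class BlobUploader
{
    public static void Upload(string connectionString, string filePath)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference("uploads");
        container.CreateIfNotExist();

        var blob = container.GetBlobReference(Path.GetFileName(filePath));

        // GetContentType is the registry helper shown above (assumed to live in this class).
        blob.Properties.ContentType = GetContentType(filePath);
        blob.UploadFile(filePath);
    }
}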

Friday, 11 June 2010

PowerScripting Podcast

Last week, I had the opportunity to talk with Hal and Jonathan on the PowerScripting podcast about Windows Azure.  It was a fun chat - lots on Windows Azure, a bit on the WASM cmdlets and MMC, and it revealed my favorite comic book character.

Listen to it now.

Friday, 28 May 2010

Hosting WCF in Windows Azure

This post is a bit overdue:  Steve threatened to blog it himself, so I figured I should get moving.  In one of our Cloud Cover episodes, we covered how to host WCF services in Windows Azure.  I showed how to host publicly accessible services as well as internal WCF services that are only visible within a hosted service.

In order to host an internal WCF service, you need to set up an internal endpoint and use inter-role communication.  The main difference between doing this and hosting an external WCF service on an input endpoint is that internal endpoints are not load-balanced, while input endpoints are hooked to the load balancer.

Hosting an Internal WCF Service

Here you can see how simple it is to actually get the internal WCF service up and listening.  Notice that the only thing that is different is that the base address I pass to my ServiceHost contains the internal endpoint I created.  Since the port and IP address I am running on are not known until runtime, you have to create the host and pass this information in dynamically.

public override bool OnStart()
{
    // Set the maximum number of concurrent connections 
    ServicePointManager.DefaultConnectionLimit = 12;
    DiagnosticMonitor.Start("DiagnosticsConnectionString");
    // For information on handling configuration changes
    // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.
    RoleEnvironment.Changing += RoleEnvironmentChanging;
    StartWCFService();
    return base.OnStart();
}
private void StartWCFService()
{
    var baseAddress = String.Format(
        "net.tcp://{0}",
        RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["EchoService"].IPEndpoint
        );
    var host = new ServiceHost(typeof(EchoService), new Uri(baseAddress));
    host.AddServiceEndpoint(typeof(IEchoService), new NetTcpBinding(SecurityMode.None), "echo");
    host.Open();
}

Consuming the Internal WCF Service

From another role in my hosted service, I want to actually consume this service.  From my code-behind, this was all the code I needed to actually call the service.

protected void Button1_Click(object sender, EventArgs e)
{
    var factory = new ChannelFactory<WorkerHost.IEchoService>(new NetTcpBinding(SecurityMode.None));
    var channel = factory.CreateChannel(GetRandomEndpoint());
    Label1.Text = channel.Echo(TextBox1.Text);
}
private EndpointAddress GetRandomEndpoint()
{
    var endpoints = RoleEnvironment.Roles["WorkerHost"].Instances
        .Select(i => i.InstanceEndpoints["EchoService"])
        .ToArray();
    var r = new Random(DateTime.Now.Millisecond);
    return new EndpointAddress(
        String.Format(
            "net.tcp://{0}/echo",
            endpoints[r.Next(endpoints.Length)].IPEndpoint)
            );
}

The only bit of magic here was querying the fabric to determine all the endpoints in the WorkerHost role that implemented the EchoService endpoint and routing a request to one of them randomly.  You don't have to route requests randomly per se, but I did this because internal endpoints are not load-balanced.  I wanted to distribute the load evenly over each of my WorkerHost instances.

One tip that I found out is that there is no need to cache the IPEndpoint information you find.  It is already cached in the API call.  However, you may want to cache your ChannelFactory according to best practices (unlike me).
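
For completeness, here is a minimal sketch of what caching the ChannelFactory could look like (creating the factory is the expensive part; the channels themselves are cheap).  It reuses the IEchoService contract and binding from the sample above:

using System.ServiceModel;

public static class EchoClient
{
    // Created once and reused for every request.
    private static readonly ChannelFactory<WorkerHost.IEchoService> Factory =
        new ChannelFactory<WorkerHost.IEchoService>(new NetTcpBinding(SecurityMode.None));

    public static string Echo(EndpointAddress address, string text)
    {
        var channel = Factory.CreateChannel(address);
        try
        {
            var result = channel.Echo(text);
            ((ICommunicationObject)channel).Close();
            return result;
        }
        catch
        {
            // Abort the channel on failure rather than Close, per the usual WCF pattern.
            ((ICommunicationObject)channel).Abort();
            throw;
        }
    }
}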

Hosting Public WCF Services

This is all pretty easy as well.  The only trick is that you need to apply a new behavior that knows how to deal with the load balancer for proper MEX endpoint generation.  Additionally, you need to include a class attribute on your service to deal with an address filter mismatch issue.  This is pretty well documented, along with links to download the QFE that contains the behavior patch, on the WCF Azure Samples project on Code Gallery under Known Issues.  Jim Nakashima also posted about this in detail on his blog the other day, so I won't dig into it again here.

Lastly, if you just want the code from the show, have at it!

Monday, 10 May 2010

Windows Azure MMC v2 Released

I am happy to announce the public release of the Windows Azure MMC - May Release.  It is a very significant upgrade to the previous version on Code Gallery.  So much, in fact, I tend to unofficially call it v2 (it has been called the May Release on Code Gallery).  In addition to all-new and faster storage browsing capabilities, we have added service management as well as diagnostics support.  We have also rebuilt the tool from the ground up to support extensibility.  You can replace or supplement our table viewers, log viewers, and diagnostics tooling with your own creation.

This update has been in the pipeline for a very long time.  It was actually finished and ready to go in late January.  Given the amount of code, however, that we had to invest to produce this tool, we had to go through a lengthy legal review and produce a new EULA.  As such, you may notice that we are no longer offering the source code to the MMC snap-in itself in this release.  Included in this release is the source for the WASM cmdlets, but not for the MMC or the default plugins.  In the future, we hope to be able to release the source code in its entirety.

 

Features At A Glance:

 

  • Hosted Services - Upload / configure / control / upgrade / swap / remove Windows Azure application deployments
  • Diagnostics - Configure instrumentation for Windows Azure applications (diagnostics) per source (perf counters, file based, app logs, infrastructure logs, event logs).  Transfer the diagnostic data on-demand or scheduled.  View / analyze / export to Excel and clear instrumentation results.
  • Certificates - Upload / manage certificates for Windows Azure applications
  • Storage Services - Configure Storage Services for Windows Azure applications
  • BLOBs and Containers - Add / upload / download / remove BLOBs and Containers and connect to multiple storage accounts
  • Queues - Add / purge / delete Windows Azure Queues
  • Tables - Query and delete Windows Azure Tables
  • Extensibility - Create plugins for rich diagnostics data visualization (e.g. add your own visualizer for performance counters).  Create plugins for table viewers and editors or add completely new modules!  The plugin engine uses MEF (the Managed Extensibility Framework) to easily add functionality.
  • PowerShell-based backend - The backend is based on PowerShell cmdlets.  If you don't like our UI, you can still use the underlying cmdlets and script out anything we do.

 

How To Get Started:

There are so many features and updates in this release that I have prepared a very quick 15-min screencast on the features and how to get started managing your services and diagnostics in Windows Azure today!

Friday, 05 March 2010

Calculating the Size of Your SQL Azure Database

In Episode 3 of Cloud Cover, I mentioned that the tip of the week was how to measure your database size in SQL Azure.  Here are the exact queries you can run to do it:

select
      sum(reserved_page_count) * 8.0 / 1024
from
      sys.dm_db_partition_stats
GO

select
      sys.objects.name, sum(reserved_page_count) * 8.0 / 1024
from
      sys.dm_db_partition_stats, sys.objects
where
      sys.dm_db_partition_stats.object_id = sys.objects.object_id
group by
      sys.objects.name

The first one will give you the size of your database in MB and the second one will do the same, but break it out for each object in your database.
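
If you want to grab the number from code rather than a query window, a plain ADO.NET call works fine.  This is just a sketch - the server, database, and credentials below are placeholders:

using System;
using System.Data.SqlClient;

class DbSize
{
    static void Main()
    {
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net;Database=yourdb;" +
            "User ID=youruser@yourserver;Password=yourpassword;Encrypt=True;";

        const string sql =
            "select sum(reserved_page_count) * 8.0 / 1024 from sys.dm_db_partition_stats";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();

            // ExecuteScalar returns the single value produced by the first query above.
            var sizeInMb = Convert.ToDecimal(command.ExecuteScalar());
            Console.WriteLine("Database size: {0:N2} MB", sizeInMb);
        }
    }
}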

Hat tip to David Robinson and Tony Petrossian on the SQL Azure team for the query.

Wednesday, 17 February 2010

WASM Cmdlets Updated


I am happy to announce the updated release of the Windows Azure Service Management (WASM) Cmdlets for PowerShell today. With these cmdlets you can effectively automate and manage all your services in Windows Azure. Specifically,

  • Deploy new Hosted Services
    • Automatically upload your packages from the file system to blob storage.
  • Upgrade your services
    • Choose between automatic or manual rolling upgrades
    • Swap between staging and production environments
  • Remove your Hosted Services
    • Automatically pull down your services at the end of the day to stop billing. This is a critical need for test and development environments.
  • Manage your Storage accounts
    • Retrieve or regenerate your storage keys
  • Manage your Certificates
    • Deploy certificates from your Windows store or the local filesystem
  • Configure your Diagnostics
    • Remotely configure the event sources you wish to monitor (Event Logs, Tracing, IIS Logs, Performance Counters and more)
  • Transfer your Diagnostics Information
    • Schedule your transfers or Transfer on Demand.


Why did we build this?

The WASM cmdlets were built to unblock adoption for many of our customers as well as serve as a common underpinning to our labs and internal tooling. There was an immediate demand for an automation API that would fit into the standard toolset for IT Pros. Given the adoption and penetration of PowerShell, we determined that cmdlets focused on this core audience would be the most effective way forward. Furthermore, since PowerShell is a full scripting language with complete access to .NET, this allows these cmdlets to be used as the basis for very complicated deployment and automation scripts as part of the application lifecycle.

How can you use them?

Every call to the Service Management API requires an X509 certificate and the subscription ID for the account. To get started, you need to upload a valid certificate to the portal and have it installed locally to your workstation. If you are unfamiliar with how to do this, you can follow the procedure outlined on the Windows Azure Channel9 Learning Center here.

Here are a few examples of how to use the cmdlets for a variety of common tasks:

Common Setup

Each script referenced below will refer to the following variables:

Add-PSSnapin AzureManagementToolsSnapIn

#get your local certificate for authentication
$cert = Get-Item cert:\CurrentUser\My\<YourThumbPrint>

#subID from portal
$sub = 'c9f9b345-7ff5-4eba-9d58-0cea5793050c'

#your service name (without .cloudapp.net)
$service = 'yourservice'

#path to package (can also be http: address in blob storage)
$package = "D:\deploy\MyPackage.cspkg"

#configuration file
$config = "D:\deploy\ServiceConfiguration.cscfg"

 

Listing My Hosted Services

Get-HostedServices -SubscriptionId $sub -Certificate $cert

 

View Production Service Status

Get-HostedService $service -SubscriptionId $sub -Certificate $cert |
    Get-Deployment 'Production' |
    select RoleInstanceList -ExpandProperty RoleInstanceList |
    ft InstanceName, InstanceStatus -GroupBy RoleName

 

Creating a new deployment

#Create a new Deployment
Get-HostedService $service -SubscriptionId $sub -Certificate $cert |
    New-Deployment -Slot Production -Package $package -Configuration $config -Label 'v1' |
    Get-OperationStatus -WaitToComplete

#Set the service to 'Running'
Get-HostedService $service -SubscriptionId $sub -Certificate $cert |
    Get-Deployment 'Production' |
    Set-DeploymentStatus 'Running' |
    Get-OperationStatus -WaitToComplete

 

Removing a deployment

#Ensure that the service is first in Suspended mode
Get-HostedService $service -SubscriptionId $sub -Certificate $cert |
    Get-Deployment 'Production' |
    Set-DeploymentStatus 'Suspended' |
    Get-OperationStatus -WaitToComplete

#Remove the deployment
Get-HostedService $service -SubscriptionId $sub -Certificate $cert |
    Get-Deployment 'Production' |
    Remove-Deployment

 

Upgrading a single Role

Get-HostedService $service -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot Production |
    Set-Deployment -mode Auto -roleName 'WebRole1' -package $package -label 'v1.2' |
    Get-OperationStatus -WaitToComplete

 

Adding a local certificate

$deploycert = Get-Item cert:\CurrentUser\My\CBF145B628EA06685419AEDBB1EEE78805B135A2

Get-HostedService $service -SubscriptionId $sub -Certificate $cert |
    Add-Certificate -CertificateToDeploy $deploycert |
    Get-OperationStatus -WaitToComplete

 

Configuring Diagnostics - Adding a Performance Counter to All Running Instances

#get storage account name and key
$storage = "yourstorageaccount"
$key = (Get-StorageKeys -ServiceName $storage -Certificate $cert `
    -SubscriptionId $sub).Primary

$deployId = (Get-HostedService $service -SubscriptionId $sub `
    -Certificate $cert | Get-Deployment Production).DeploymentId

$counter = '\Processor(_Total)\% Processor Time'
$rate = [TimeSpan]::FromSeconds(5)

Get-DiagnosticAwareRoles -StorageAccountName $storage -StorageAccountKey $key `
    -DeploymentId $deployId |
foreach {
    $role = $_
    Get-DiagnosticAwareRoleInstances $role -DeploymentId $deployId `
        -StorageAccountName $storage -StorageAccountKey $key |
    foreach {
        $instance = $_
        $config = Get-DiagnosticConfiguration -RoleName $role -InstanceId $_ `
            -StorageAccountName $storage -StorageAccountKey $key `
            -BufferName PerformanceCounters -DeploymentId $deployId
        $perf = New-Object Microsoft.WindowsAzure.Diagnostics.PerformanceCounterConfiguration `
            -Property @{CounterSpecifier=$counter; SampleRate=$rate}
        $config.DataSources.Add($perf)
        $config.DataSources |
        foreach {
            Set-PerformanceCounter -PerformanceCounters $_ -RoleName $role `
                -InstanceId $instance -DeploymentId $deployId -StorageAccountName $storage `
                -StorageAccountKey $key
        }
    }
}

More Examples

You can find more examples and documentation on these cmdlets by typing 'Get-Help <cmdlet> -full' from the PowerShell cmd prompt.

If you have any questions or feedback, please send it directly to me through the blog (look at the right-hand navigation pane for the Contact Ryan link).

Friday, 12 February 2010

Sharing Blobs in Windows Azure

Windows Azure storage makes use of a symmetric key authentication system.  Essentially, we take a 256-bit key and sign each HTTP request to the storage subsystem.  In order to access storage, you have to prove you know the key.  What this means, of course, is that you need to protect that key well.  It is an all-or-nothing scenario:  if you have the key, you can do anything.  If you don't possess the key, you can do nothing*.

The natural question for most folks when they understand this model is, how can I grant access to someone without compromising my master key?  The solution turns out to be something called Shared Access Signatures (SAS) for Windows Azure.  SAS works by specifying a few query string parameters, canonicalizing those parameters, hashing them and signing the hash in the query string.  This creates a unique URL that embeds not only the required access, but the proof that it was created by someone that knew the master key.  The parameters on the query string are:

  • st - this is the start time of when the signature becomes valid.  It is optional.  If not supplied, then now is implied.
  • se - this is the expiration date and time.  All signatures are timebound and this parameter is required.
  • sr - this is the resource that the signature applies to and will be either (b)lob or (c)ontainer.  This is required.
  • sp - this is the permission set that you are granting - (r)ead, (w)rite, (d)elete, and (l)ist.  This is required.
  • si - this is a signed identifier or a named policy that can incorporate any of the previous elements. Optional.
  • sig - this is the signed hash of the querystring and URI that proves it was created with the master key.  It is required.

There is one other caveat that is important to mention here.  Unless you use a signed identifier - what I refer to as a policy - there is no way to create a signature that has a lifetime longer than an hour.  This is for good reason.  A SAS URL that was mistakenly created with an extremely long lifetime and without using the signed identifier (policy) could not be revoked without changing the master key on the storage account.  If that URL were to leak, your signed resource would be open to abuse for a potentially long time.  By making the longest lifetime of a signature only an hour, we have limited the window in which you are exposed.

If you want to create a longer-lived SAS, you must create a policy.  The policy is very interesting:  because it can contain any of the parameters mentioned above and is stored at the service, we can revoke those permissions or completely change them instantly.
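
If you would rather create the policy and signature from code than use a browser tool, here is a minimal sketch using the v1.x StorageClient library.  The account credentials, container, and blob names are placeholders:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class SasSample
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=http;AccountName=youraccount;AccountKey=yourkey");
        var container = account.CreateCloudBlobClient().GetContainerReference("shared");

        // Store a named policy (signed identifier) on the container.
        var permissions = container.GetPermissions();
        permissions.SharedAccessPolicies.Add("read", new SharedAccessPolicy
        {
            Permissions = SharedAccessPermissions.Read,
            SharedAccessExpiryTime = DateTime.UtcNow.AddMonths(1)
        });
        container.SetPermissions(permissions);

        // Generate a SAS for a blob that references the policy by name, so it can be
        // revoked later simply by deleting or editing the 'read' policy.
        var blob = container.GetBlobReference("blobname");
        var sas = blob.GetSharedAccessSignature(new SharedAccessPolicy(), "read");
        Console.WriteLine(blob.Uri.AbsoluteUri + sas);
    }
}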

Let's walk through an example using MyAzureStorage.com, where it is trivial to create a SAS.  Once I login to the site, I am going to select the BLOBs option from the navigation tabs on top.  Here I will see a list of all my containers.  I can select a container's Actions menu and click Manage Policies.

image 

Next, I am going to create two policies (signed identifiers), called Read and Write.  These will have different expiration dates and permission sets.  Notice I am not specifying a Start Date, so they are immediately valid.

 

image

Next, I am going to select the Actions menu for one of the blobs under this container and click Share.  I am going to apply one of the policies (signed identifiers) that I just created by selecting it from the dropdown.

image

You will notice that the values I set in the policy fill in the corresponding values in the Share BLOB UI and prevent you from changing them.  The reason is that a policy is like a template:  if the policy doesn't set a value, then you can set it (or must set it, if it is required); however, if the policy states one of the parameters, you cannot supply that parameter yourself.  In this case, the 'read' policy specified the expiration (se) and permissions (sp).  It is implied from the dialog selection that the resource (sr) is a (b)lob.  The only value that could be supplied here outside of the policy is the start time (st), which I am not supplying since it is optional.

When I click the Get URL button, I get back a URL that looks like this:

http://<account>.blob.core.windows.net/shared/blobname?sr=b&si=read&sig=hIbD%.%3D

Now, if that URL were to leak and I no longer wanted to provide read access to the blob, I could simply delete the 'read' policy or change its expiration date.  The signature would instantly be invalidated.  Compare this to the same signature created without a policy:

http://<account>.blob.core.windows.net/shared/blobname?se=2010-02-13T02%3A17%3A46Z&sr=b&sp=r&sig=bYfBBb1yf.%3D

This signature could not be revoked until it either expired or I regenerated the master key.

If you want to see how the SAS feature works or easily share blobs or containers in your Windows Azure storage account, give it a try at MyAzureStorage.com and see how easy it is to do.

 

* assuming the key holder has not marked the container as blob- or container-level public access already, in which case it is public read-only.

Wednesday, 10 February 2010

Do you Incarnate?

It wasn't too long ago when Karsten Januszewski came to my office looking for a Windows Azure token (back when we were in CTP).  I wondered what cool shenanigans the MIX team had going.  Turns out it was for the Incarnate project (explained here).  In short, this service finds all the different avatars you might be using across popular sites* and allows you to select an existing one instead of having to upload one again and again.

image

You will note that there is another 'dunnry' somewhere on the interwebs, stealing my exclusive trademark.  I have conveniently crossed them out for reference. ;)

Since the entire Incarnate service is running in Windows Azure, I was interested in Karsten's experience:

We chose Windows Azure to host Incarnate because there was a lot of uncertainty in traffic.  We didn't know how popular the service would be and knowing that we could scale to any load was a big factor in choosing it.

I asked him how the experience was developing for Windows Azure:

There is a ton of great documentation and samples.  I relied heavily on the Windows Azure Platform Kit as well as the samples in the SDK to get started.  Once I understood how the development environment worked and how the deployment model functioned, I was off and running. I'd definitely recommend those two resources as well as the videos from the PDC for people who are getting started.

I love it when I hear that.  Karsten was able to get the Incarnate service up and running on Windows Azure easily and now he is scale-proof in addition to the management goodness that is baked into Windows Azure.

Check out more about Incarnate and Karsten's Windows Azure learning on the MIX blog.

 

*turns out you can extend this to add a provider for any site (not just the ones that ship in source).

Wednesday, 03 February 2010

How Do I Stop the Billing Meter in Windows Azure?

This might come as a surprise to some folks, but in Windows Azure you are billed when you deploy, not when you run.  That means we don't care about CPU hours - we care about deployed hours.  Your meter starts the second you deploy, irrespective of the state of the application.  This means that even if you 'Suspend' your service so it is not reachable (and consumes no CPU), the meter is still running.

Visually, here is the meter still running:

image

Here is when the meter is stopped:

image

Right now, there is a 'free' offering of Windows Azure that includes a limited number of hours per month.  If you are using MSDN benefits for Windows Azure, there is another offer that includes a bucket of 'free' hours.  Any overage and you start to pay.

Now, if you are like me and have a fair number of hosted services squirreled around, you might forget to go to the portal and delete the deployments when you are done.  Or, you might simply wish to automate the removal of your deployments at the end of the day.  There are lots of reasons to remove your deployments, but the primary one is to turn the meter off.  Given that re-deploying your services is very simple (and can also be automated), removing a deployment when you are done is not a huge hassle.

Automatic Service Removal

For those folks who want an automated solution, it turns out that this is amazingly simple using the Service Management API and the Azure cmdlets.  Here is the complete, deployment-nuking script:

$cert = Get-Item cert:\CurrentUser\My\<cert thumbprint>
$sub = 'CCCEA07B. your sub ID'

$services = Get-HostedServices -Certificate $cert -SubscriptionId $sub

$services | Get-Deployment -Slot Production | Set-DeploymentStatus 'Suspended' | Get-OperationStatus -WaitToComplete
$services | Get-Deployment -Slot Staging | Set-DeploymentStatus 'Suspended' | Get-OperationStatus -WaitToComplete

$services | Get-Deployment -Slot Production | Remove-Deployment
$services | Get-Deployment -Slot Staging | Remove-Deployment

That's it - just a few lines of PowerShell.  BE CAREFUL.  This script will iterate through all the services in your subscription ID, stop any deployed service, and then remove it.  After it runs, every hosted service will be gone and the billing meter will have stopped (for hosted services, anyway).

Monday, 25 January 2010

Supporting Basic Auth Proxies

A few customers have asked how they can use tools like wazt, the Windows Azure MMC, the Azure cmdlets, etc. when they are behind proxies at work that require basic authentication.  The tools themselves don't directly support this type of proxy.  What we are doing is simply relying on the fact that the underlying HttpWebRequest object will pick up your IE default proxy configuration.  Most of the time, this just works.

However, if you are in an environment where you are prompted for your username and password, you might be on a basic auth proxy and the tools might not work.  To work around this, you can actually implement a very simple proxy handler yourself and inject it into the application.

Here is one that I wrote to support wazt.  To use this, add the following to your app.config and drop the output assembly from this project into your execution directory.  Note, this would work with any tool in .NET that uses HttpWebRequest under the covers (like csmanage for instance).

 

<!-- basic auth proxy section declaration area-->
<!-- proxyHostAddress="Auto" : use Internet explorer configuration for name of the proxy -->
<configSections>
  <sectionGroup name="proxyGroup">
    <section name="basicProxy"
             type="Proxy.Configuration.CustomProxySection, Proxy" />
  </sectionGroup>
</configSections>

<system.net>
  <defaultProxy enabled="true" useDefaultCredentials="false">
    <module type="Proxy.CustomProxy, Proxy"/>
  </defaultProxy>
</system.net>

<proxyGroup>   
  <basicProxy proxyHostAddress="Auto" proxyUserName="MyName" proxyUserPassword="MyPassword" />
</proxyGroup>
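
For reference, the module element just points at a public type with a default constructor that implements IWebProxy.  The downloadable assembly reads its settings from the <basicProxy> section (including the "Auto" host lookup); the sketch below hard-codes the values purely to show the shape of such a class:

using System;
using System.Net;

namespace Proxy
{
    // Simplified stand-in for the real Proxy.CustomProxy - see the note above.
    public class CustomProxy : IWebProxy
    {
        private readonly Uri proxyUri = new Uri("http://yourproxy:8080");

        public ICredentials Credentials
        {
            get { return new NetworkCredential("MyName", "MyPassword"); }
            set { }
        }

        // Route every request through the basic-auth proxy...
        public Uri GetProxy(Uri destination)
        {
            return proxyUri;
        }

        // ...and bypass nothing in this simplified version.
        public bool IsBypassed(Uri host)
        {
            return false;
        }
    }
}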

 

Download the source here.

Neat Windows Azure Storage Tool

I love elegant software.  I have known about CloudXplorer from ClumsyLeaf for some time, but I hadn't used it in a while because the Windows Azure MMC and MyAzureStorage.com have been all I need for storage.  Also, I have a private tool that I wrote a while back to generate Shared Access signatures for files I want to share.

I decided to check out the progress on this tool and noticed in the change log that support for Shared Access signatures is now included.  Nice!  So far, this is the only tool* that I have seen handle Shared Access signatures in such an elegant and complete manner.  Nicely done!

Definitely a recommended tool to keep on your shortlist.

image

*My tool is complete, but not nearly as elegant.

Monday, 11 January 2010

LINQPad supports SQL Azure

Some time back, I put in a request to LINQPad's feature request page to support SQL Azure.  I love using LINQPad for basically all my quick demo programs and prototypes.  Since all I work with these days is the Windows Azure platform, it was killing me to have to go to SSMS to do anything with SQL Azure.

Well, my request was granted!  Today, you can use the beta version of LINQPad against SQL Azure and get the full LINQ experience.  Behold:

image

In this case, I am querying the firewall rules on my database using LINQ.  Hot damn.  Nice work Joe!  If you pay a few bucks, you get the intellisense version of the tool too, which is well worth it.  This tool has completely replaced SnippetCompiler for me and continues to get better and better.  Now, if Joe would add F# support.

LINQPad Beta

Monday, 23 November 2009

PDC 2009 Windows Azure Resources

For those of you that made PDC, this will serve as a reminder and for those of you that missed PDC this year (too bad!), this will serve as a guide to some great content.

PDC Sessions for the Windows Azure Platform

Getting Started

Windows Azure

Codename "Dallas"

SQL Azure

Identity

Customer & Partner Showcases

Channel 9 Learning Centers

Coinciding with PDC, we have released the first wave of learning content on Channel 9.  The new Ch9 learning centers feature content for the Windows Azure Platform as well as a course specifically designed for the identity developer.  The content on both of these sites will continue to be developed by the team over the coming weeks and months.  Watch out for updates and additions.

Downloadable Training Kits

To complement the learning centers on Ch9, we continue to maintain the training kits on the Microsoft download center, which allows you to download and consume the content offline.  You can download the Windows Azure Platform training kit here, and the Identity training kit here.  The next update is planned for mid-December.

Monday, 26 October 2009

Windows Azure Service Management CmdLets

As I write this, I am sitting on a plane headed back to the US from a wonderful visit over to our UK office.  While there, I got to meet a number of customers working with Windows Azure.  It was clear from the interaction that these folks were looking for a way to simplify how to manage their deployments and build it into an automated process.

With the release of the Service Management API, this is now possible.  As of today, you can download some Powershell cmdlets that wrap this API and make managing your Windows Azure applications simple from script.  With these cmdlets, you can script your deploys, upgrades, and scaling operations very easily.


The cmdlets mirror the API quite closely, but since this is PowerShell, we support piping, which cuts down quite a bit on the amount you need to type.  As an example, here is how we can take an existing deployment, stop it, remove it, create a new deployment, and start it:

Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot Production |
    Set-DeploymentStatus 'Suspended' |
    Get-OperationStatus -WaitToComplete

Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot Production |
    Remove-Deployment |
    Get-OperationStatus -WaitToComplete

Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    New-Deployment Production $package $config -Label 'v.Next' |
    Get-OperationStatus -WaitToComplete

Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot Production |
    Set-DeploymentStatus 'Running' |
    Get-OperationStatus -WaitToComplete

Notice that in each case, we are first getting our service by passing in the certificate and our subscription ID.  Again, since this is Powershell, we can get the certificate quite easily:

$cert = Get-Item cert:\CurrentUser\My\D6BE55AC428FAC6CDEBAFF432BDC0780F1BD00CF

You will find your Subscription ID on the portal under the 'Account' tab.  Note that we are breaking up the steps by using the Get-OperationStatus cmdlet and having it block until it completes.  This is because the Service Management API is an asynchronous model.

Similarly, here is a script that will upgrade a single role or the entire deployment depending on the arguments passed to it:

$label = 'nolabel'
$role = ''

if ($args.Length -eq 2)
{
    $role = $args[0]
    $label = $args[1]
}

if ($args.Length -eq 1)
{
    $label = $args[0]
}

if ($role -ne '')
{
    Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot Production |
    Set-Deployment -mode Auto -roleName $role -package $package -label $label |
    Get-OperationStatus -WaitToComplete
}
else
{
    Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot Production |
    Set-Deployment -mode Auto -package $package -label $label |
    Get-OperationStatus -WaitToComplete
}

Download the cmdlets from Code Gallery and leave me some feedback if you like them or if they are not working for you.

Tuesday, 06 October 2009

Launching MyAzureStorage.com

Things are getting crazy here at Microsoft getting ready for PDC.  I haven't had much time to blog or tweet for that matter.  However, I am taking a break from the grind to announce something I am really excited about - a sample TableBrowser service we are hosting for developers at MyAzureStorage.com.

We built this service using ASP.NET MVC with a rich AJAX interface.  The goal of this service was to provide developers with an easy way to create, query, and manage their Windows Azure tables.  What better way to host it than on a scalable compute platform like Windows Azure?

Create and Delete Tables

If you need to create or manage your tables, you get a nice big list of the ones you have.

image

Create, Edit, and Clone your Entities

I love being able to edit my table data on the fly.  Since you can clone an entity, it is trivial to copy large entities around and just apply updates.

image

Query Entities

Of course, no browser application would be complete without being able to query your data as well.  Since the ADO.NET Data Services syntax can be a little unfamiliar at first, we decided to go for a more natural syntax.  Using simple predicates along with OR, AND, and NOT operations, you can easily test your queries.

image

Display Data

Lastly, we have tried to make showing data in Windows Azure as convenient as possible.  Since data is not necessarily rectangular in nature in Windows Azure tables, we have given you some options:  First, you can choose the attributes to display in columns by partition.  Next, you expand the individual entity to show each attribute.

image

Please note:  during login you will need to supply your storage account name and key.  We do not store this key.  It is kept in an encrypted cookie and passed back and forth on each request.  Furthermore, we have SSL enabled to protect the channel.

The service is open for business right now and will run at least until PDC (and hopefully longer).  Enjoy and let me know through the blog any feedback you have or issues you run into.

Friday, 18 September 2009

Upgrading Your Service in Windows Azure

Windows Azure has been in CTP since PDC 08 in October of last year.  Since that time, we have had a fairly simple, yet powerful concept for how to upgrade your application.  Essentially, we have two environments: staging and production.

image 

The difference between these two environments is only in the URI that points to any web-exposed services.  In staging, we give you an opaque, GUID-like URI (e.g. <guidvalue>.cloudapp.net) that is hard to publicly discover, and in production, we give you the URI that you chose when you created the hosted service (e.g. <yourservice>.cloudapp.net).

VIP Swaps, Deploys, and Upgrades

When you wanted to upgrade your service, you needed to deploy the updated service package containing all your roles into one of the environments.  Typically, this was in the staging environment.  Whenever you were ready, you would then click the big button in the middle to swap environments.  This re-programmed the load balancers and suddenly staging was production and vice versa.  If anything went wrong in your upgrade, you could hit the button again and you were back to the original deployment in seconds.  We called this model a "VIP Swap" and it is easy to understand and very powerful.

We heard from some customers that they wanted more flexibility to upgrade an individual role without redeploying the entire service.  Typically, this can be because there might be some state or caching going on in one of the other roles that a VIP swap would cause to be lost.

The good news is that now you can upgrade individual roles (or even the whole service) using the in place upgrade.  When you click the new 'Upgrade' button on the portal, you will see a screen very similar to the 'Deploy' screen that you would be used to from before, but this time you have two new options.

image 

Upgrade Domains

The first new option allows you to choose whether you want the upgrade to be 'Automatic' or 'Manual' across the upgrade domains.  To understand this option, you first need to understand what an 'Upgrade Domain' is.  You can think of upgrade domains as vertical slices of your application, crossing roles.  So, if I had a service with a single web role of 10 instances and 2 worker roles of 4 instances each, then with 2 upgrade domains, each upgrade domain would contain 5 web role instances and 2 instances of each worker role.  Illustrated:

image

If I choose 'Automatic', it simply means that each upgrade domain will be brought down and upgraded sequentially, in turn.  If I choose 'Manual', then I need to click another button between each upgrade domain update in order to proceed.

Note:  in the CTP today, 2 upgrade domains are automatically defined and set.  In the future, you will be able to specify how many upgrade domains you would like to have.

Role Upgrades

Next, we have a radio button that specifies whether you want to update the whole service or a specific role within the service.  Most folks will likely use the role-specific update.

It is important to note that these upgrades are for the services where the topology has not changed.  That is, you cannot update the Service Definition (e.g. adding, removing roles or configuration options).  If you want to change the topology, you would need to use the more familiar VIP swap model.

Once you click Deploy, the selected role will be upgraded according to the upgrade mode you specified.

More information about in-place upgrades and update domains can be found here.  Lastly, you can of course eschew the portal and perform all of these actions using the new Service Management API.  Happy upgrading!

Wednesday, 02 September 2009

Instant Windows Azure Tokens

I am on vacation right now, but I read this over at Steve's blog and I just had to make sure everyone knows about it.  Right now, when you register at Microsoft Connect for Windows Azure, you will get an instant token.  No more 1 or 2 days wait!

Register for Windows Azure

Thursday, 27 August 2009

Deploying Applications on Windows Azure

There has been a bit of interest in an application called 'myTODO' that we built for the World Partner Conference (WPC) event back in July.  It is a simple, yet useful application.  The application allows you to create and share lists very easily.  It integrates with Twitter, so if you decide to share your lists, it will tweet them and their updates automatically.  You can also subscribe to lists using standard RSS if Twitter isn't your thing.

image

The funny thing is that we only built this app because we wanted something more interesting than the standard "Hello World" application.  The entire purpose of the app was to show how easily you can deploy an application (in just mins) on Windows Azure.

You can learn more about this application in 3 ways:

  1. Get the deployment package and deploy this yourself using our demo script from the Windows Azure Platform Training Kit.  You will find it in the Demos section and called "Deploying Windows Azure Services".
  2. Watch me show how the app works and how to deploy it by watching my screencast.
  3. Download the source code and see how we built the rich dynamic UI and how we modeled the data using tables.

Tuesday, 04 August 2009

Federation on Windows Azure

My teammate Vittorio has put out some new guidance and a great new toolkit that shows how to use federation today with Windows Identity Foundation (WIF or Geneva) on Windows Azure.  I know this has been a very common request, especially as services move outside of the private datacenters and into the cloud and as vendors try to build true SaaS applications that need to integrate seamlessly into the customer's experience.

As the technologies evolve, the guidance will be kept up to date.  For now, this is a great piece of work that gets us past some of the early roadblocks we encountered.

Monday, 20 July 2009

Windows Azure SDK update for July

You can download the new SDK and Windows Azure Tools for Visual Studio right now.  The big feature in this release is support for multiple roles per deployment (instead of a single web and worker role).  This is surfaced as a new wizard UI in the tools.  Additionally, there is new support for associating any type of web app with the project.  Previously this was limited to a specific project type (but now it supports MVC directly!).

Download Windows Azure Tools for Visual Studio (includes SDK)
Download Windows Azure SDK

Monday, 01 June 2009

Windows Azure June Round-up

Ahh, it is the start of a glorious June and here in Washington the weather is starting to really get nice.  The previous cold and rainy spring must have made the product group more productive as Windows Azure has continued adding features since the last major update at MIX.

Given that Windows Azure is a service and not a boxed product, you can expect updates and features to roll-out over the coming months.  In this round-up, we have a number of features that have gone live in the last month or two.

New Feature: Geo-Location support


Starting in May, a new option was added to the portal to support geo-locating your code and data. In order to use this most effectively, the idea of an 'Affinity Group' was created. This allows you to associate various services under an umbrella label for the location.

Read more about this feature here and see a complete provisioning walk-through.

New Feature: Storage API updates

I briefly mentioned this last week, but on Thursday (5/28), new features were released to the cloud for Windows Azure storage. The long awaited batch transaction capability for tables as well as a new blob copying capability were released. Additionally, the GetBlockList API was updated to return both committed and uncommitted blocks in blob storage.

One more significant change of note is that a new versioning mechanism has been added. New features will be versioned by a new header ("x-ms-version"). This versioning header must be present to opt-in to new features. This mechanism is in place to prevent breaking changes from impacting existing clients in the future. It is recommended that you start including this header in all authenticated API calls.
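
If you are calling the REST API directly (which is currently the only option for these features), opting in is just a matter of adding the header to each authenticated request.  A hedged sketch - confirm the exact version string against the announcement:

using System.Net;

class VersionHeaderSample
{
    static HttpWebRequest CreateRequest(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);

        // Opt in to the new storage features ("2009-04-14" is the version string
        // associated with this update; check the announcement to be sure).
        request.Headers["x-ms-version"] = "2009-04-14";

        // ... add the Date header and SharedKey Authorization signature as usual ...
        return request;
    }
}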

Rounding out these updates were changes to how property names are stored in table storage (Unicode characters are now supported) as well as to the size of Partition and Row keys (now up to 1 KB).  Finally, the timeout values for various storage operations were updated as well.

For more details, please read the announcement.

Please note: There currently is no SDK support for these new storage features. The local developer fabric does NOT currently support these features and the StorageClient SDK sample has not been updated yet. At this point, you need to use the samples provided on Steve Marx's blog. A later SDK update will add these features officially.

Windows Azure SDK Update

The May CTP SDK update has been released to the download center. While this release does NOT support the new storage features, it does add a few new capabilities that will be of interest to the Visual Studio 2010 beta testers. Specifically:

  • Support for Visual Studio 2010 Beta 1 (templates, local dev fabric, etc.)
  • Updated support for Visual Studio 2008 - you can now configure settings through the UI instead of munging XML files.
  • Improved reliability of the local dev fabric for debugging
  • Enhanced robustness and stability (aka bug fixes).

Download the Windows Azure Tools for Visual Studio (includes both SDK and tools).

New Windows Azure Applications and Demos

Windows Azure Management Tool (MMC)

The Windows Azure Management Tool was created to manage your storage accounts in Windows Azure. Developed as a managed MMC, the tool allows you to create and manage both blobs and queues. Easily create and manage containers, blobs, and permissions. Add and remove queues, inspect or add messages or empty queues as well.

Bid Now Sample

Bid Now is an online auction site designed to demonstrate how you can build highly scalable consumer applications. This sample is built using Windows Azure and uses Windows Azure Storage. Auctions are processed using Windows Azure Queues and Worker Roles. Authentication is provided via Live Id.

If you know of new and interesting Windows Azure content that would be of broad interest, please let me know and I will feature it in later updates.

Relevant Blog Postings

http://blog.smarx.com/posts/sample-code-for-new-windows-azure-blob-features

http://blogs.msdn.com/jnak/archive/2009/05/28/may-ctp-of-the-windows-azure-tools-and-sdk-now-supports-visual-studio-2010-beta-1.aspx

http://blogs.msdn.com/jnak/archive/2009/04/30/windows-azure-geo-location-is-live.aspx

http://dunnry.com/blog/CreateAndDeployYourWindowsAzureServiceIn5Steps.aspx

Thursday, 28 May 2009

New Windows Azure Storage Features

Some new features related to blob and table storage were announced on the forum today.  The key features announced were:

  1. Transactions support for table operations (batching)
  2. Copy Blob API (self explanatory)
  3. Updated GetBlockList API now returns both committed and uncommitted block lists

There are some more details around bug fixes/changes and timeout updates as well.  Refer to the announcement for more details.

The biggest impact to developers at this point is that to get these new features, you will need to include a new versioning header in your call to storage.  Additionally, the StorageClient library has not been updated yet to reflect these new APIs, so you will need to wait for some examples (coming from Steve) or an update to the SDK.  You can also refer to the MSDN documentation for more details on the API and roll your own in the meantime.

Friday, 22 May 2009

Create and Deploy your Windows Azure Service in 5 Steps

Step 1:  Obtaining a token

Sign up through http://dev.windowsazure.com to get a token for the service.  Turnaround is pretty quick, so you should have one in about a day or so.  If you don't see it in a day, make sure you check your SPAM folder to ensure that the message we send you is not trapped in purgatory there.

Step 2:  Redeeming the token

Navigate to the Windows Azure Portal and sign-in with the LiveID that you would like to use to manage your Windows Azure applications.  Today, in the CTP, only a single LiveID can manage your Windows Azure project and we cannot reassociate the token with another LiveID once redeemed.  As such, make sure you use the LiveID that you want long term to manage the solution.

The first time you log in to the portal, you will be asked to associate your LiveID.

image

Click 'I Agree' to continue and once the association has been successful click 'Continue' again.  At this point, you will be presented with an option to redeem a token.  Here is where you input the token you received in Step 1.

image

Enter the token and click 'Next'.  You will then be presented with some EULAs to peruse.  Read them carefully (or don't) and then click 'Accept'.  You will get another confirmation screen, so click 'Continue'.

Step 3:  Create your Hosted Service

At this point, you can now create your first hosted solution.  You should be on the main Project page and you should see 2 and 1 project(s) remaining for Storage Account and Hosted Services, respectively.

image

Click 'Hosted Services' and provide a Project label and description.

image

Click 'Next' and provide a globally unique name that will be the basis for your public URL.  Click 'Check Availability' and ensure that the name hasn't been taken.  Next, you will need to create an Affinity Group in order to later get your storage account co-located with your hosted service.

image

Click the 'Yes' radio button and select the option to create a new Affinity Group.  Give the Affinity Group a name and select a region where you would like this Affinity Group located.  Click 'Create' to finish the process.

image

Step 4:  Create your Storage account

Click the 'New Project' link near the left corner of the portal and this time, select the Storage Account option.

image

Similar to before, give a project label and description and click Next.

image

Enter a globally unique name for your storage account and click 'Check Availability'.  Since performance is best when the data is near the compute, you should make sure that you opt to have your storage co-located with your service.  We can do this with the Affinity Group we created earlier.  Click the 'Yes' radio button and use the existing Affinity group.  Click 'Create' to finish.

image

At this time, you should note that your endpoints have been created for your storage account and two access keys are provided for you to use for authentication.

Step 5:  Deploying an application

At this point, I will skip past the minor detail of actually building an application for this tutorial and assume you have one (or you choose to use one from the SDK).  For Visual Studio users, you would want to right-click your project and select 'Publish' to generate the service package you need to upload.  Alternatively, you can use the cspack.exe tool to create your package.

image

Using either method you will eventually have two files:  the actual service package (cspkg) and a configuration file (cscfg).

image

From the portal, select the hosted service project you just created in Step 3 and click the 'Deploy' button.

image

Browse to the cspkg location and upload both the package and the configuration settings file (cscfg).  Click 'Deploy'.

image

Over the next minute or so, you will see that your package is deploying.  Hold tight and you will see it come back with the status of "Allocated".  At this point, click the 'Configure' button**.

image

In order to use your storage account you created in Step 4, you will need to update this information from the local development fabric settings to your storage account settings.  An easy way to get this is to right click your Storage Account project link in the left hand navigation tabs and open it in a new tab.  With the storage settings in one tab and the configuration in another, you can easily switch between the two and cut & paste what you need.

Inside the XML configuration, replace the 'AccountName' setting with your storage account name.  If you are confused, the account name is the one that is part of your unique global URL, i.e. <youraccountname>.blob.core.windows.net.  Enter the 'AccountSharedKey' using the primary access key found on the storage project page (in your new tab).  Update the endpoints from the loop-back addresses to the cloud settings:  https://blob.core.windows.net, https://queue.core.windows.net, and https://table.core.windows.net, respectively.  Note that the endpoints here do not include your account name and we are using https.  Set 'allowInsecureRemoteEndpoints' to false or just delete that XML element.

Finally, update the 'Instances' element to 2 (the limit in the CTP today).  It is strongly recommended that you run at least 2 instances at all times.  This ensures that you always have at least one instance running if something fails or we need to do updates (we update by fault zones and your instances are automatically placed across fault zones).  Click 'Save' and you will see that your package is listed as 'Package is updating'.
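
As a rough sketch only - assuming the setting names used by the StorageClient sample, and with a made-up service name - the edited configuration might end up looking something like this.  Your role and setting names may differ:

<?xml version="1.0"?>
<ServiceConfiguration serviceName="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="AccountName" value="youraccountname" />
      <Setting name="AccountSharedKey" value="your-primary-access-key" />
      <Setting name="BlobStorageEndpoint" value="https://blob.core.windows.net" />
      <Setting name="QueueStorageEndpoint" value="https://queue.core.windows.net" />
      <Setting name="TableStorageEndpoint" value="https://table.core.windows.net" />
      <!-- allowInsecureRemoteEndpoints removed (or set to false), per the note above -->
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>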

When your package is back in the 'Allocated' state (a minute or so later), click the 'Run' button.  Your service will then go to the 'Initializing' state and you will need to wait a few minutes while it gets your instances up and running.  Eventually, your app will have a 'Started' status.  Congratulations, your app is deployed in the Staging environment.

image

Once deployed to staging, you should click the staging hyperlink for your app (the one with the GUID in the DNS name) and test your app.  If you get a hostname not found error, wait a few more seconds and try again - it is likely that you are waiting for DNS to update and propagate your GUID hostname. When you are comfortable, click the big circular button in between the two environments (it has two arrows on it) and promote your application to the 'production' URL.

Congratulations, you have deployed your first app in Windows Azure.

** Note, this part of the process might be optional for you.  If you have already configured your developer environment to run against cloud storage or you are not using the StorageClient sample at all, you might not need to do this as the uploaded configuration file will already include the appropriate cloud settings.  Of course, if you are not using these options, you are already likely a savvy user and this tutorial is unnecessary for you.

Thursday, 14 May 2009

Windows Azure MMC

image

Available immediately from Code Gallery, download the Windows Azure MMC. The Windows Azure Management Tool was created to manage your storage accounts in Windows Azure.

clip_image003

Developed as a managed MMC, the tool allows you to create and manage both blobs and queues. Easily create and manage containers, blobs, and permissions. Add and remove queues, inspect or add messages, or empty queues as well.

Features

  • Manage multiple storage accounts
  • Easily switch between remote and local storage services
  • Manage your blobs
    • Create containers and manage permissions
    • Upload files or even entire folders
    • Read metadata or preview the blob contents
  • Manage your queues
    • Create new queues
    • Monitor queues
    • Read (peek) messages
    • Post new messages to the queue
    • Purge queues

Known Issues

The current release does not work with Windows 7 due to a bug in the underlying PowerShell version. All other OS versions should be unaffected.  We will get this fixed as soon as possible.

PHP SDK for Windows Azure

image

I tweeted this earlier (I really never thought I would say that).  Anyhow, a first release of the PHP SDK for Windows Azure has been released on CodePlex.  Right now, it works with blobs and helps with the authentication pieces, but if you look at the roadmap you will see both queue support and table support coming down the road this year.

image

This is a great resource to use if you are a PHP developer looking to host your app in Windows Azure and leverage the native capabilities of the service (namely storage).

PHP Azure Project Site

Monday, 27 April 2009

Overlooking the Obvious

I was trying to troubleshoot a bug in my worker role in Windows Azure the other day.  To do this, I have a very cool tool (soon to be released) that lets me peek messages.  The problem was that I couldn't get hold of any messages; it was like they were disappearing right from under my nose.  I would see them in the count, but couldn't get a single one to open.  I was thinking that I must have a bug in the tool.

Suddenly, the flash of insight came: something was actually popping the messages.  While I remembered to shut down my local development fabric, I forgot all about the version I had running in the cloud in the staging environment.  Since I have been developing against cloud storage, it is actually a shared environment now.  My staging workers were popping the messages, trying to process them and failing (it was an older version).  More frustrating, the messages were eventually showing up again, but getting picked up before I could see them in the tool.

So, what is the lesson here:  when staging, use a different storage account from the one you use for development.  In fact, this is one of the primary reasons we have the cscfg file:  don't forget about it.

Thursday, 16 April 2009

Why does Windows Azure use a cscfg file?

Or, put another way:  why don't we just use the web.config file?  This question was recently posed to me and I thought I would share an answer more broadly since it is a fairly common question.

First, the web.config is part of the deployed package that gets loaded into a diff disk that in turn is started for your app.  Remember, we are using Hypervisor VM technology to run your code.  This means any change to this file would require a new diff disk and a restart.  We don't update the actual diff disk (that would be hard with many instances) without a redeploy.

Second, the web.config is special only because IIS knows to look for it and restart when it changes.  If you tried the same with a standard app.config (think worker roles), it wouldn't work.  We want a mechanism that works for any type of role.

Finally, we have both staging and production.  If you wanted to hold different settings (like a different storage account for test vs. production), you would need to hold these values outside the diff disk again or you would not be able to promote between environments.

The cscfg file is 'special' and is held outside the diff disk so it can be updated.  The fabric code (RoleManager and its ilk) knows when this file is updated, so it will trigger the app restart for you.  There is a short period of time between when you update this file and when the fabric notices.  Only the RoleManager and fabric are aware of this file - notice that you can't get to the values by reading it from disk, for instance.

And that, in a nutshell, is why we have the cscfg file and don't use web.config or app.config files.

All that being said, I don't want to give you the impression that you can't use web.config application settings in Windows Azure - you definitely can.  However, any update to the web.config file will require a redeploy, so just keep that in mind and choose the cscfg file when you can.
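
As a small illustration (a sketch only, using the CTP-era RoleManager API mentioned above and a made-up setting name), reading a value from the cscfg versus from web.config looks roughly like this:

using System.Configuration;
using Microsoft.ServiceHosting.ServiceRuntime;  // CTP-era SDK assembly

public static class SettingsSketch
{
    // Reads from the cscfg.  The setting must be declared in ServiceDefinition.csdef
    // and given a value in the cscfg; 'StorageAccountName' is a hypothetical name.
    public static string FromServiceConfiguration()
    {
        return RoleManager.GetConfigurationSetting("StorageAccountName");
    }

    // Reads from web.config appSettings.  This works too, but any change to
    // web.config means a new package and a redeploy rather than a config update.
    public static string FromWebConfig()
    {
        return ConfigurationManager.AppSettings["StorageAccountName"];
    }
}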

Thursday, 09 April 2009

Azure Training Kit and Tools Update

Are you looking for more information about Azure Services (Windows Azure, SQL Services, .NET Services, etc.)?  How about some demos and presentations?  What about a great little MMC tool to manage all your .NET Services?  Great news - it's here:

Azure Services Training Kit - April Update

Today we released an updated version of the Azure Services Training Kit.   The first Azure Services Training Kit was released during the week of PDC and it contained all of the PDC hands-on labs.   Since then, the Azure Services Evangelism team has been creating new content covering new features in the platform.

The Azure Services Training Kit April update now includes the following content covering Windows Azure, .NET Services, SQL Services, and Live Services:

  • 11 hands-on labs - including new hands-on labs for PHP and Native Code on Windows Azure.
  • 18 demo scripts - designed to provide detailed walkthroughs of key features so that someone can easily give a demo of a service.
  • 9 presentations - the presentations used for our 3-day training workshops, including speaker notes.

The training kit is available as an installable package on the Microsoft Download Center. You can download it from http://go.microsoft.com/fwlink/?LinkID=130354

Azure Services Management Tools - April Update

The Azure Services Management Tools include an MMC SnapIn and Windows PowerShell cmdlets that enable a user to configure and manage several Azure Services, including the .NET Access Control Service and the .NET Workflow Service. These tools can be helpful when developing and testing applications that use Azure Services. For instance, using these tools you can view and change .NET Access Control rules, and deploy and view workflows.

You can download the latest management tools from http://code.msdn.microsoft.com/AzureManagementTools.

Thursday, 02 April 2009

Windows Azure and Geneva

I don't like to basically re-blog what others have written, but I will make a minor exception today, as it is important enough to repeat.  My friend and colleague Vittorio has explained the current status of the Geneva framework running on Windows Azure today.

The short and sweet is that we know it is not working 100% today and we are working on a solution.  This is actually the main reason that you do not see a Windows Azure version of Azure Issue Tracker today.

Thursday, 19 March 2009

Quickly put PHP on Windows Azure without Visual Studio

The purpose of this post is to show you the command line options you have to first get PHP running locally on IIS7 with fastCGI and then to get it packaged and running in the Windows Azure local development fabric.

First, download the Windows Azure SDK and the latest version of PHP (or whatever version you wish).  I would recommend getting the .zip version and simply extracting to "C:\PHP" or something like that.  Now, configure your php.ini file according to best practices.

Next, create a new file called ServiceDefinition.csdef and copy the following into it:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="MyPHPApp" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole" enableNativeCodeExecution="true">
    <InputEndpoints>
      <!-- Must use port 80 for http and port 443 for https when running in the cloud -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
  </WebRole>
</ServiceDefinition>

You will need this file in order to describe your application in the cloud.  This definition models what your application should look like in the cloud.  For this example, it is a simple web role listening on port 80 over HTTP.  Notice also that we have the 'enableNativeCodeExecution' attribute set.  This is required in order to use fastCGI.

Then, create a .bat file for enabling fastCGI running locally.  I called mine PrepPHP.bat:

@echo off

set phpinstall=%1
set appname=%2
set phpappdir=%3

echo Removing existing virtual directory...
"%windir%\system32\inetsrv\appcmd.exe" delete app "Default Web Site/%appname%"

echo.
echo Creating new virtual directory...
"%windir%\system32\inetsrv\appcmd.exe" add app /site.name:"Default Web Site" /path:"/%appname%" /physicalPath:"%phpappdir%"

echo.
echo Updating applicationHost.config file with recommended settings...
"%windir%\system32\inetsrv\appcmd.exe" clear config -section:fastCGI
"%windir%\system32\inetsrv\appcmd.exe" set config -section:fastCgi /+"[fullPath='%phpinstall%\php-cgi.exe']"

echo.
echo Setting PHP handler for application
"%windir%\system32\inetsrv\appcmd.exe" clear config "Default Web Site/%appname%" -section:system.webServer/handlers
"%windir%\system32\inetsrv\appcmd.exe" set config "Default Web Site/%appname%" -section:system.webServer/handlers /+[name='PHP_via_FastCGI',path='*.php',verb='*',modules='FastCgiModule',scriptProcessor='%phpinstall%\php-cgi.exe',resourceType='Unspecified']

echo.
echo Setting Default Document to index.php
"%windir%\system32\inetsrv\appcmd.exe" clear config "Default Web Site/%appname%" -section:defaultDocument
"%windir%\system32\inetsrv\appcmd.exe" set config "Default Web Site/%appname%" -section:defaultDocument /enabled:true /+files.[@start,value='index.php']

echo.
echo Done...

Create one more .bat file to enable PHP running in the local dev fabric:

@echo off

set phpinstall=%1
set appname=%2
set phpappdir=%3

echo.
echo Setting PHP handler for application
"%windir%\system32\inetsrv\appcmd.exe" set config "Default Web Site/%appname%" -section:system.webServer/handlers /[name='PHP_via_FastCGI'].scriptProcessor:%%RoleRoot%%\php\php-cgi.exe

echo.
echo Outputting the web.roleconfig file
del "%phpappdir%\web.roleconfig" /q

echo ^<?xml version="1.0" encoding="utf-8" ?^> > "%phpappdir%\web.roleconfig"
echo ^<configuration^> >> "%phpappdir%\web.roleconfig"
echo ^<system.webServer^> >> "%phpappdir%\web.roleconfig"
echo ^<fastCgi^> >> "%phpappdir%\web.roleconfig"
echo ^<application fullPath="%%RoleRoot%%\php\php-cgi.exe" /^> >> "%phpappdir%\web.roleconfig"
echo ^</fastCgi^> >> "%phpappdir%\web.roleconfig"
echo ^</system.webServer^> >> "%phpappdir%\web.roleconfig"
echo ^</configuration^> >> "%phpappdir%\web.roleconfig"

echo Copying php assemblies and starting fabric

md %appname%_WebRole
md %appname%_WebRole\bin

robocopy %phpappdir% %appname%_WebRole\bin /E
robocopy %phpinstall% %appname%_WebRole\bin\php /E

"%programfiles%\windows azure sdk\v1.0\bin\cspack.exe" "%~dp0ServiceDefinition.csdef" /role:WebRole;"%~dp0%appname%_WebRole\bin" /copyOnly /generateConfigurationFile:"%~dp0ServiceDefinition.csx\ServiceConfig.cscfg"
"%programfiles%\windows azure sdk\v1.0\bin\csrun.exe" "%~dp0ServiceDefinition.csx" "%~dp0ServiceDefinition.csx\ServiceConfig.cscfg" /launchBrowser

echo.
echo Done...

Open a command prompt as an Administrator and make sure that the ServiceDefinition.csdef file you created is in the same directory as the .bat files you created.

From the command line, type:

PrepPHP.bat "path to php binaries directory" "myiis7appname" "path to my php app"

If I had installed PHP to "c:\php" and my PHP application was located at "c:\webroot\myphpapp", I would type:

prepphp.bat "c:\php" "myphpapp" "c:\webroot\myphpapp"

Now, I can launch IE and type in:  http://localhost/myphpapp and the index.php page will launch.

To get this running in Windows Azure local dev fabric, you would type:

prepforazure.bat "c:\php" "myphpapp" "c:\webroot\myphpapp"

The dev fabric will launch along with IE, and you will be looking at your PHP application running in the dev fabric.  If you wish to deploy to the cloud, simply change the command line call for cspack.exe to remove the 'copyOnly' option.  Next, comment out the csrun.exe call and you have a package ready to upload to the cloud.  When you deploy at the portal, make sure you update the number of instances to at least 2 in order to get fault tolerance.
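
For reference, here is a rough, untested sketch of how the tail of the second .bat file might change for a cloud deployment (same variables as above; the /out switch is my assumption of the flag name, so check cspack.exe /? for the exact syntax):

rem Drop /copyOnly so cspack emits a real package instead of a .csx folder.
"%programfiles%\windows azure sdk\v1.0\bin\cspack.exe" "%~dp0ServiceDefinition.csdef" /role:WebRole;"%~dp0%appname%_WebRole\bin" /out:"%~dp0%appname%.cspkg" /generateConfigurationFile:"%~dp0ServiceConfig.cscfg"

rem csrun.exe only targets the local dev fabric, so skip it and upload the
rem .cspkg and .cscfg files at the portal instead.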

Changes to SDS Announced

Well, it is a bit of old news at this point:  the SDS team has broken the silence and announced the change from SOAP/REST to a full relational model over TDS.  First, I wanted to say I am super-excited about this change (and I really mean 'super', not in the usual Microsoft sense).  I believe that this will be a net-positive change for the majority of our customers.  I can't tell you how many times I have heard customers say, "well, SDS is great, but I have this application over here that I want to move to the cloud and I don't want to re-architect it to ACE".  Or, "my DBAs understand SQL - why can't you just give me SQL?".  We listen, really.  The feedback was loud and clear.

It may be a tiny bit contentious that this change comes with the removal of the SOAP and REST interfaces.  However, if you really love that flex model, you can get it in Windows Azure tables.

I know I have been pinged a few times to ask if the announcement is the reason for my change from SDS to Windows Azure.  The honest answer is not really.  The real reason for my change is that my group had a small re-org and the former Windows Azure tech evangelist moved up and took on higher level responsibilities (he is now my manager).  That, combined with the changes to SDS, made it a natural transition point to move.  Zach Owens has now moved into my group and is looking after SDS - as the former SQL Server evangelist, it makes perfect sense for Zach to take this role now that SDS is SQL Server in the cloud.

I would expect to see close collaboration between Windows Azure and SDS as this is the killer combination for so many applications.  If you want to know more about the changes and specific details, I would try to catch Nigel Ellis' talk at MIX09 this year or watch it online afterwards.  I will update this post with a specific link once Nigel gives his talk.

Updated:  Nigel's talk is here.

Thursday, 22 January 2009

Azure Issue Tracker Released

I am happy to announce the immediate release of the Azure Issue Tracker sample application.  I know, I know - the name is wildly creative.  This sample application is a simple issue tracking service and website that pulls together a couple of the Azure services:  SQL Data Services and the .NET Access Control Service.

This demo is meant to show a realistic SaaS scenario.  As such, it features federation, claims-based authorization, and scalable data storage.

In this post, I will briefly walk through the application such that you can see how it is meant to work and how you might implement something similar.  Over the coming weeks, I will dig into features more deeply to demonstrate how they work.  I would also encourage you to follow Eugenio, who will be speaking more about this particular demo and the 'Enterprise' version over the coming weeks.

Let's get started:

  1. Make sure you have a .NET Services account (register for one here)
  2. Download and extract the 'Standard' version from the Codeplex project page
  3. Run the StartMe.bat file.  This file is the bootstrap for the provisioning process that is necessary to get the .NET Services solution configured as well as things like websites, certificates, and SDS storage.
  4. Open the 'Readme.txt' from the setup directory and follow along for the other bits

Once you have the sample installed and configured, navigate to your IssueTracker.Web site and you will see something like this:

image

Click 'Join Now' and then select the 'Standard' edition.  You will be taken to the 'Create Standard Tenant' page.  This is how we register our initial tenant and get them provisioned in the system.  The Windows LiveID and company name put into this page are what will be used for provisioning (the other information is not used right now).

image

Once you click 'Save', the provisioning service will be called and rules will be inserted into the Access Control service.  You can view the rules in the .NET Services portal by viewing the Access Control Service (use the Advanced View) and selecting the 'http://localhost/IssueTracker.Web' scope.

image

We make heavy use of the forward chaining aspect of the rules transformation here.  Notice that Admin will be assigned the role of both Reader and Contributor.  Those roles have many more operations assigned to them.  The net effect will be that an Admin will have all the operations assigned to them when forward chaining is invoked.

Notice as well that we have 3 new rules created in the Access Control service (ACS).  We have a claim mapping that sets the Windows LiveID to the Admin role output claim, we have another that sets an email mapping claim, and finally one that sets a tenant mapping claim.  Since we don't have many input claims to work with (only the LiveID, really), there is not too much we can do here in the Standard edition.  This is where the Enterprise edition, which can get claims from AD or any other identity provider, is a much richer experience.

Once you have created the tenant and provisioned the rules, you will be able to navigate to a specific URL now for this tenant:  http://localhost/IssueTracker.Web/{tenant}

You will be prompted to log in with your LiveID and once you are authenticated with Windows LiveID, you will be redirected back to your project page.  Under the covers, we federated with LiveID, got a token from them, sent the token to ACS, transformed the claims, and sent back a token containing the claims (all signed by the ACS).  I will speak more about this later, but for now, we will stick to the visible effects.

image

From the Project page, you should click 'New Project' and create a new project.

  • Give it a name and invite another Windows LiveID user (that you can log in as later).  Make sure you invite a user with a different Windows LiveID than the one you used to initially provision the tenant (i.e. what you are logged in as currently).
  • Choose the 'Reader' role for the invited user and click the Invite button.
  • Add any custom fields you feel like using.  We will likely expand this functionality later to give you more choices for types and UI representation.
  • Click Save (make sure you added at least 1 user to invite!)

Once you click Save, additional rules are provisioned out to the ACS to handle the invited users.  From the newly created project, create an issue.  Notice that any additional fields you specified are present now in the UI for you to use for your 'custom' issue.

image

Once you have saved a new issue, go ahead and try the edit functionality for the Issue.  Add some comments, move it through the workflow by changing the status, etc.

Next, open a new instance of IE (not a new tab or the same IE window) and browse to the tenant home page (i.e. http://localhost/IssueTracker.Web/{tenant}).  This time, however, log in as the invited Windows LiveID user.  This user was assigned the 'Reader' role.  Notice that the new browser window can read the projects for this tenant and can read any issue, but it cannot use the Edit functionality.

image

Now, I will make two comments about this.  First, so what?  We are checking claims here on the website and showing you a 'Not Authorized' screen.  While we could have just turned off the UI to not show the 'Edit' functionality, we did this intentionally in the demo so you can see how this claim checking works.  In this case, we are checking claims at the website using a MVC route filter (called ClaimAuthorizationRouteFilterAttribute).
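
To illustrate the idea only - this is a hypothetical sketch, not the sample's actual ClaimAuthorizationRouteFilterAttribute, and the ClaimStore helper is invented - a claim-checking MVC filter could look roughly like this:

using System.Web.Mvc;

// Hypothetical attribute illustrating claim checks at the website layer.
public class RequireClaimAttribute : ActionFilterAttribute
{
    private readonly string claimType;
    private readonly string claimValue;

    public RequireClaimAttribute(string claimType, string claimValue)
    {
        this.claimType = claimType;
        this.claimValue = claimValue;
    }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // How the ACS-issued claims are surfaced is application-specific; here we
        // assume a helper that exposes the claims extracted from the token.
        if (!ClaimStore.CurrentUserHasClaim(claimType, claimValue))
        {
            filterContext.Result = new ViewResult { ViewName = "NotAuthorized" };
        }
    }
}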

One key point of this demo is that the website is just a client to the actual Issue Tracker service.  What if we tried to hit the service directly?

Let's check.  I am going to intentionally neuter the website's claims-checking capabilities:

image

By commenting out the required claims that the Reader won't have, I can get by the website.  Here is what I get:

image 

The Issue Tracker service is checking claims too - whoops.  No way to trick this one from a client.  So, what are the implications of this design?  I will let you noodle on that one for a bit.  You should also be asking yourself, how did we get those claims from the client (the website) to the service?  That is actually not entirely obvious and so I will save that for another post.  Enjoy for now!

Wednesday, 17 December 2008

SQL Down Under Podcast

I recently had the opportunity to sit down (virtually) with Greg Low of SQL Down Under fame and record a podcast with him.  I have to thank Greg for inviting me to ramble on about cloud services, data services like SQL Data Services, and Microsoft's Azure services in particular.  It is about an hour's worth of content, but it seemed a lot faster than that to me.  I speak at a relatively high level on what we have done with SQL Services and Azure services and how to think about cloud services in general.  There are some interesting challenges to cloud services - both in the sense of what challenges they solve as well as new challenges they introduce.

I am show 42, linked from the Previous Shows page.  Thanks Greg!

Friday, 14 November 2008

Fixing the SDS HOL from Azure Training Kit

If you downloaded the Azure Services Training Kit (which you should), you would find a compilation error on some of the SQL Data Services HOLs.

image

The error is somewhat self-explanatory:  the solution is missing the AjaxControlToolkit.  The reason that this file is missing is not because we forgot it, but rather our automated packaging tool was trying to be helpful.  You see, we have a tool that cleans up the solutions by deleting the 'bin' and 'obj' folders and any .pdb files in the solution before packaging.  In this case, it killed the bin directory where the AjaxControlToolkit.dll was deployed.

To fix this error, you just need to visit the AjaxControlToolkit project on CodePlex and download it again.  The easiest way is to download AjaxControlToolkit-Framework3.5Sp1-DllOnly.zip, extract the 'Bin' directory, and copy it into the root of the solution you are trying to use.

Sorry about that - we will fix it for our next release.

(updated: added link)

Thursday, 13 November 2008

Azure Services Training Kit – PDC Preview

The Azure Services Training Kit contains the hands-on labs used at PDC along with presentations and demos.  We will continue to update this training kit with more demos, samples, and labs as they are built out.  This kit is a great way to try out the technologies in the Azure Services Platform at your own pace through the hands-on labs.

image

You will need a token in order to run a few of the labs (specifically the .NET Services labs, the SQL Data Services labs, and one of the Live Services labs).  The Windows Azure labs use the local dev fabric so no token is necessary.  Once you install the kit, it will launch the browser with a navigation UI to find all the content within.  If you need an account, simply click the large blue box labeled 'Try it now' and follow the links to register at Microsoft Connect.

Happy Cloud Services.

Sunday, 02 November 2008

Using SDS with Azure Access Control Service

It might not be entirely obvious to some folks how the integration between SQL Data Services and the Azure Access Control Service works.  I thought I would walk you through a simple example.

First, let me set the stage that at this point the integration between the services is a bit nascent.  We are working towards a more fully featured authorization model with SDS and Access Control, but it is not there today.

Authentication

I will group the authentication today into two forms:  the basic authentication used by SDS directly and the authentication used by Access Control.  While the two may look similar in the case of username and password, they are, in fact, not.  The idea is that the direct authentication to SDS using basic authentication (username/pwd) will eventually go away.  Only authentication via Access Control will survive going forward.  For most folks, this is not a super big change in the application.  While we don't have the REST story baked yet in the current CTP, we have support today in SOAP to show you how this looks.

Preparing your Azure .NET Services Solution

In order to use any of these methods, you must of course have provisioned an account for the CTP of Azure Services Platform and .NET Services in particular.  To do this, you must register at http://www.azure.com and work through Microsoft Connect to get an invitation code.  PDC attendees likely already have this code if they registered on Connect (using the LiveID they registered for PDC with).  Other folks should still register, but won't get the code as fast as PDC attendees.  Once you have the invitation code and have created and provisioned a solution, you need to click the Solution Credentials link and associate a personal card for CardSpace or a certificate for certificate authentication.

 

image image

Once you have credentials associated with your Azure Service Platform solution, you can prepare your code to use them.

Adding the Service Reference

Here is how you add a service reference to your project to get the SOAP proxy and endpoints necessary to use Access Control.  First, right click your project and choose Add Service Reference.

image

In the address, use https://database.windows.net/soap/v1/.  Note the trailing '/' in the URL.  Also, notice it is not using 'data.database.windows.net', but just 'database.windows.net'.  Next, name the proxy - I called mine 'SdsProxy'.

Click the Advanced button and choose System.Collections.Generic.List from the collection type dropdown list.

image 

Once you have clicked OK a few times, you will get an app.config in your project that contains a number of bindings and endpoints.  Take a moment to see the new endpoints:

image 

There are 3 bindings right now: basicHttpBinding (used for basic authentication directly against SDS), as well as customBinding and wsHttpBinding, which are used with the Access Control service.  There are also 4 endpoints added for the client:

  1. BasicAuthEndpoint used for basic authentication with SDS directly.
  2. UsernameTokenEndpoint used for authentication against Access Control (this happens to be the same username and password as #1, however).
  3. CertificateTokenEndpoint used for authentication against Access Control via a certificate.
  4. CardSpaceTokenEndpoint used for authentication against Access Control via CardSpace.

Use the Access Control Service

At this point, you just need to actually use the service in code.  Here is a simple example of how to do it.  I am going to create a simple service that does nothing but query my authority, and I will do it with all three supported Access Control authentication methods (2-4 above).

Solution and Password

To use the username/password combination you simply do it exactly like the basic authentication you are used to, but use the 'UsernameTokenEndpoint' for the SOAP proxy.  It looks like this:

var authority = "yourauthority";

//Username/Pwd
var proxy = new SitkaSoapServiceClient("UsernameTokenEndpoint");
proxy.ClientCredentials.UserName.UserName = "solutionname";
proxy.ClientCredentials.UserName.Password = "solutionpassword";

var scope = new Scope() { AuthorityId = authority };

//return first 500 containers
var results = proxy.Query(scope, "from e in entities select e");

Console.WriteLine("Containers via Username/Password:");
Console.WriteLine("================");
foreach (var item in results)
{
    Console.WriteLine(item.Id);
}
Console.WriteLine("================");
proxy.Close();

 

CardSpace

CardSpace has two tricks to get it working once you set the proxy to the 'CardSpaceTokenEndpoint'.  First, you must use the DisplayInitializationUI method on the proxy to trigger the CardSpace prompt.  Next, you must explicitly open the proxy by calling Open.  It looks like this:

//CardSpace
//create a new one for CardSpace
proxy = new SitkaSoapServiceClient("CardSpaceTokenEndpoint");
proxy.DisplayInitializationUI(); //trigger the cardspace login


//need to explicitly open for CardSpace
proxy.Open();

//return first 500 containers
results = proxy.Query(scope, "from e in entities select e");

Console.WriteLine("Containers via CardSpace:");
Console.WriteLine("================");
foreach (var item in results)
{
    Console.WriteLine(item.Id);
}
Console.WriteLine("================");

proxy.Close();

 

Certificates

Once you have created a certificate (with a private key) and installed it somewhere on your machine, you can use it for authentication.  I used the local machine store, in the Personal (or 'My') container.  I have also put the self-generated certificate's public key in the Trusted People store on the local machine so that it validates.

//Certificates
//create a new one for Certificates
proxy = new SitkaSoapServiceClient("CertificateTokenEndpoint");

//can also set in config
proxy.ClientCredentials.ClientCertificate.SetCertificate(
    "CN=localhost",
    StoreLocation.LocalMachine,
    StoreName.My
    );

//return first 500 containers
results = proxy.Query(scope, "from e in entities select e");

Console.WriteLine("Containers via Certificates:");
Console.WriteLine("================");
foreach (var item in results)
{
    Console.WriteLine(item.Id);
}
Console.WriteLine("================");

proxy.Close();

 

The code is very similar, but I am explicitly setting the certificate here in the proxy credentials.  You can also configure the certificate through config.

And there you have it:  using the Azure Access Control service with SQL Data Services.  As I mentioned earlier, while this is nascent integration today, you can expect going forward that the Access Control Service will be used to expose a very rich authorization model in conjunction with SDS.

Download the VS2008 Sample Project