Wednesday, 04 June 2014

Brewmaster Template Definition

To author a Brewmaster template, you will need a basic understanding of how the template is structured.  In this blog post, I will cover all the basic terminology.

Methodology Explained

A central tenet of the Brewmaster template is that a template represents the goal state of how you want your deployment to be configured.  This has some interesting implications.  Namely, if the state of your deployment already matches the description, we skip it.  For example, if you describe 3 VMs and we find only 2 VMs in an existing deployment, we will deploy 1 VM - not 3 additional VMs.  If you describe a Storage Account, we will make sure it exists.  The same goes for the Affinity Group.

Since the template is your goal state, the reverse scenario also holds, with a slight caveat: if my template describes 2 VMs and Brewmaster finds 3 VMs already deployed, we will remove the unreferenced VM.  The caveat is that you must explicitly tell Brewmaster to be destructive and remove VMs.

There is one more caveat to this 'goal state' idea, and that is around Network settings.  Given that Network settings are global to the subscription, we didn't want you to have to describe every possible Virtual Network and Subnet for every deployment (even the ones not done in Brewmaster) - yet a strict 'goal state' would require exactly that.  Instead, because these settings are global and are described and configured monolithically, we have taken an additive-only approach.  We will only attempt to add Virtual Networks, Subnets, DNS Servers, etc., and we will never delete them if they become unreferenced or are not fully described.  One consequence: if you describe Virtual Network A with a single subnet called Subnet AA and Brewmaster finds Virtual Network A with an existing Subnet BB, we will not remove Subnet BB - you will be left with both Subnet AA and Subnet BB.

Now that you have the general methodology we follow, let's look at the possible configuration options.

Template Schema

At the highest level, you have 9 basic sections inside a template:

  • Parameters
  • Network
  • AffinityGroup
  • StorageAccounts
  • CloudServices
  • DeploymentGroups
  • Credentials
  • ConfigSets
  • Configurations


Parameters

This section contains replaceable parameters used throughout the template.  Parameter values are either supplied at deployment time or fall back to a described default.  Template authors can include optional metadata in the parameter definition for things like description, type hints, and even validation logic.  A parameter can be of type String, DateTime, Boolean, or Number.  Brewmaster's website will generate a dynamic UI and perform validation based on the values declared in this section.

We recommend that template authors use the type hints and validation capabilities to prevent easily avoided misconfigurations due to data entry mistakes.
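Purely as an illustration of the idea - the property names below are hypothetical and not the authoritative Brewmaster schema - a parameter section with type hints and defaults might look something like this:

```json
{
  "Parameters": [
    {
      "Name": "AdminUser",
      "Type": "String",
      "Description": "Administrator account for each VM",
      "Default": "brewadmin"
    },
    {
      "Name": "VmCount",
      "Type": "Number",
      "Description": "Number of VMs to deploy",
      "Default": 2
    }
  ]
}
```

At deployment time, Brewmaster's website would render one input per parameter, pre-populated with the default and validated against the declared type before the deployment is submitted.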


Network

The network settings described here allow you to declare things like Virtual Networks, subnets, DNS servers, and Local Sites.  Keeping in mind that network settings are additive-only, you need only describe the portions of the network settings in Azure that directly apply to your intended deployment.


AffinityGroup

A Brewmaster template is scoped to a single Affinity Group (AG).  Early on, we supported multiple AGs, but soon realized that things became far too complicated for a user to keep the definitions straight inside a single template.  In the IaaS world, it turns out that you really need an AG to do most anything interesting.  Virtual Networks must exist in an AG, for instance, and if you want your IaaS VMs to be able to communicate, you have to have one.  The implication of having a required AG definition is two-fold:  1.) we enforce the best practice that all your resources are in an AG, and 2.) each deployment can currently target only a single datacenter.  If you need a multi-datacenter (region) deployment, you would simply deploy the same template multiple times.


CloudServices

Each described Cloud Service (CS) can also describe a single VmDeployment within it.  That deployment can in turn describe multiple VMs (up to Azure's limit).  All the supported VM settings are described within a VM (Disks, Endpoints, etc.).  Each VM can be associated with one or more ConfigSets.  I will describe this in more detail below, but ultimately, this is how you control what configuration is done on each VM.  Each VM can also be assigned a single Deployment Group (more below).  If one is not assigned, it belongs to the 'default' Deployment Group.

If the CS does not exist, Brewmaster will create it.  If the CS already exists and there is a deployment in the Production slot, Brewmaster will attempt to reconcile what is in the template with what has been found in the existing deployments.  Each machine name is unique within a CS, so if we find a matching VM name in the template configuration, Brewmaster will attempt to configure the machine as described.


DeploymentGroups

Deployment Groups (DGs) are a mechanism by which you can control the dependencies between machines.  By default, all VMs are in a single DG aptly called 'default'.  If you specify additional DGs and assign VMs to them, those machines will be configured completely (end to end) before any other VMs are configured.  The order of the described DGs is also the order in which everything is deployed and configured.  Most templates will not use this capability - the default DG is just fine.  However, a classic example of needing it is something like Active Directory.  You would want your domain controllers to be fully deployed, configured, and ready before additional VMs that will join the domain are spun up and configured.
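To make the ordering concrete, here is a sketch of how Deployment Groups might appear in a template (illustrative only; the property names are hypothetical, not the authoritative schema):

```json
{
  "DeploymentGroups": [
    { "Name": "domain-controllers" },
    { "Name": "default" }
  ]
}
```

Each domain-controller VM would then reference the "domain-controllers" group, so those machines are fully configured before any VM in the 'default' group is touched.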


Credentials

Instead of littering the template with a bunch of usernames and passwords, the template allows you to describe the credentials once and then refer to them everywhere.  We strongly suggest parameterizing the credentials here rather than hardcoding them.  This will maximize the re-usability of your template.


ConfigSets

ConfigSets provide a convenient way to group VMs together for common configurations.  Inside a ConfigSet, you can configure Endpoints that will be applied to all VMs within the ConfigSet.  Additionally, this is where you associate one or more Configurations with a set of VMs.

At deployment time, the common Endpoints will be expanded and added into the Endpoint configuration for each VM found to reference that ConfigSet.  In general, you will have a single ConfigSet per type of VM within your template definition.  So, if I have a SQL Server Always On template, I might have 1 ConfigSet for my SQL nodes, 1 for my Quorum nodes, and 1 for my Active Directory nodes.  If you find that you have common configuration between ConfigSets (e.g. each ConfigSet needs to format data disks), you can factor those operations out and apply them as a reusable Configuration in each ConfigSet.
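As an illustrative sketch (again with hypothetical property names rather than the authoritative schema), a pair of ConfigSets sharing a common Configuration might look like:

```json
{
  "ConfigSets": [
    {
      "Name": "SqlNodes",
      "Endpoints": [
        { "Name": "SQL", "LocalPort": 1433, "Protocol": "tcp" }
      ],
      "Configurations": [ "FormatDataDisks", "InstallSql" ]
    },
    {
      "Name": "AdNodes",
      "Configurations": [ "FormatDataDisks", "ConfigureAd" ]
    }
  ]
}
```

Every VM that references SqlNodes would get the SQL endpoint expanded into its own endpoint list, and both ConfigSets reuse the common FormatDataDisks Configuration rather than duplicating it.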


Configurations

A Configuration is a set of DSC resources (or Powershell scripts) that will be applied to a VM.  You can factor your DSC resources into smaller, more re-usable sets and reference them from the ConfigSet.  For instance, suppose I build out Active Directory and SQL Server in two ConfigSets.  I could have 2 different Configurations - one for configuring Active Directory and one for configuring SQL Server.  However, if I also had some common configuration (e.g. formatting attached disks), I could create a 3rd Configuration and simply include it as a reference in each ConfigSet.  This keeps to the DRY principle: you define a reusable Configuration once and reference it from one or more ConfigSets.

As a best practice, we recommend building out any custom actions as a DSC resource.  Long term, this ends up being a much more manageable practice than attempting to manage configuration as simple Powershell scripts.

Wrap Up

Hopefully this gives you a quick tour around the Brewmaster template definition and philosophy.  As we update things to support new features in Azure, you will see the schema change slightly.  Make sure to check out our documentation to keep up.

Wednesday, 28 May 2014

How Does Brewmaster Work?

In this post, I will cover at a high level what Brewmaster does behind the scenes to deploy your templates in Microsoft Azure.  There are roughly 8 major steps we orchestrate for each template deployment.

First, let's start with an architectural overview:


We built Brewmaster as an API-first service.  Other than registration itself, there is nothing the portal can do that cannot also be done via the API.  That is, once you are registered with us, you have a subscription and a secret key that can be used to deploy.  In some subsequent posts, I will detail how to use the API.  The key components of Brewmaster are the API layer that everyone interacts with, our Template Validation logic that helps ensure you don't hit common deployment errors, and our Workflow Engine that handles failures and retries and allows us to efficiently scale our resources.

All of the actions that we take on your behalf in Azure are recorded and presented back to the user in our View repositories.  You could drive your own dashboard with this data or simply use ours.

Template Registration


Brewmaster templates are stored in Git repositories.  Today, we support both Bitbucket and Github public repositories.  There is a known layout and convention for creating templates.  When you first come to Brewmaster, you will register the Git repository for your template, or choose one of our pre-existing templates (hosted at Github).  We chose Git because we wanted to be able to version the templates and always have a repeatable deployment.  One nice effect is that you can have a single template with different branches for each deployment type (Production, Development, etc.).

Once you register your Git repository with us, it becomes available as a deployment option:


When you click the Deploy button, we will pull that template from Git, parse it for any parameters you have defined, and generate a UI to accept those parameters.  Once you submit the parameters, we combine the template and parameters, expand the template into a full deployment schema, and then run our Validation logic.  As a template author, you also have control over validation logic for any parameters coming into the template.  Once the template is validated, it is passed to our Workflow engine to start the execution.

Azure Provisioning

Brewmaster templates are scoped to a single Azure Affinity Group (AG).  We require that any Azure assets you want to use or create reside in the same AG within a single template.  While that sounds restrictive, it really isn't.  There is not much you can do that is interesting in the IaaS world without your resources in a single AG.  This also means that if you want to deploy to multiple AGs, you must create multiple templates.

Today, we support the provisioning of IaaS VMs, Cloud Services, Storage Accounts, and Virtual Networks through Brewmaster.  We may expand that to other Azure resources (e.g. Hadoop, Service Bus, CDN, etc.) as demand dictates.



In the simple (80%) cases, Brewmaster will deploy your entire set of Azure resources in a single go.  However, for more complicated scenarios where there are dependencies between Azure resources, you can stage them using the concept of Deployment Groups.  This allows some sets of Azure resources to be provisioned and configured before others.  That is a more advanced topic that I will delve into later.

Bootstrap the VM

In order to communicate with the Windows machines and allow them to be configured, we must enable a few things on each node.  For instance, Powershell Remoting is required, along with Powershell 4.  The node might be rebooted during this time if required.

Brewmaster is dependent on Desired State Configuration (DSC).  In a nutshell, DSC is a declarative Powershell syntax where you define 'what to do' and leave the 'how to do' up to DSC.  This fits perfectly with our own declarative template model.  As such, we support DSC completely.  That means you can use not only the built-in DSC resources, but also any other DSC resources - community-released, or subsequently released "Wave" resources.  Don't see a DSC resource you need?  No worries, we also give you access to the 'Script' DSC resource, which allows you to call into arbitrary Powershell code (or your existing scripts).
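To give a flavor of the declarative style (this is standard DSC syntax rather than anything Brewmaster-specific), a configuration that ensures IIS is present states only the 'what':

```powershell
# Standard DSC example: declare the desired state and let the Local
# Configuration Manager work out how to reach it (a no-op if already true).
Configuration WebServer {
    Node 'localhost' {
        WindowsFeature IIS {
            Ensure = 'Present'    # ensure the feature is installed
            Name   = 'Web-Server' # the IIS server role
        }
    }
}

# Compiling the configuration produces the MOF file that DSC then applies.
WebServer -OutputPath 'C:\DSC\WebServer'
```

Note that nowhere do you script the install steps themselves; DSC resolves the 'how' and skips any resource already in its desired state.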

Once we have ensured that DSC is available, Brewmaster dynamically creates a DSC configuration script from the template and pushes it onto each node.

During this process, we also push down to each node a number of scripts and modules that the node will later execute locally.  Once your nodes have been configured with DSC and PS Remoting, we are ready to download your template package.

Pulling the Template Package

The Brewmaster template is a JSON file that describes your Azure and VM topology.  The template package is the template JSON file together with any other supporting assets required for configuring the VMs.  Inside the package you can include things like custom scripts or DSC resources, installers, certificates, or other file assets.  The package has a known layout that Brewmaster relies on.

Brewmaster instructs the node to pull the template package from the Git repository.  The node pulls the package from Git, as opposed to Brewmaster pushing it.  The main reason for this is quality of service: it guards against unintentional (or intentional) denial of service.  The template package could theoretically be very big, and we do not want to tie up our Brewmaster workers pushing a large package to each node.

Running the DSC Configuration

At this point, your node has all the assets from the template package and the DSC configuration that describes how to use them all.  Brewmaster now instructs the local node to compile the DSC into the appropriate MOF file and start a staged execution.

Brewmaster periodically connects to the node to discover information about the running DSC configuration (error, status, etc.).  It pulls what it finds back into Brewmaster and exposes that data to the user at the website (or via API).  This is how you can track status of a running configuration and know when something is done.

Wrap up

Once a deployment runs to its conclusion (either failure or success), Brewmaster records the details and presents them back to the user.  In this way the user has a complete picture of what was deployed and any logs or errors that occurred.  Should something go wrong during a deployment, the user can simply fix the template definition (or an asset in the template package) and hit the 'Deploy' button again.  Brewmaster will re-execute the deployment, but this time it will skip the things it has already done (like provisioning in Azure or full bootstrapping) and will re-execute the DSC configurations.  Given the efficient nature of DSC, it will also only re-execute resources that are not in the 'desired state'.  As such, subsequent re-deploys can be orders of magnitude faster than the initial deployment.

Next up

In the coming installments, I will talk more about the key Brewmaster concepts and what each part of the schema does and about how to troubleshoot your templates and debug them.

Thursday, 22 May 2014

Introducing Brewmaster Template SDK

I am excited to announce the release of the Brewmaster Template SDK.  With our new SDK, you can author your own Brewmaster templates that can deploy almost anything into Microsoft Azure.  Over the next few weeks, I will be updating my blog with more background on how Brewmaster templates work and how to easily author them.

First, a bit of backstory:  when we first built Brewmaster 6 months ago, we released with 5 supported templates.  The templates were for popular workloads that users had difficulty deploying easily (e.g.  SQL Server Always On).  The idea was that we would allow the user to get something bootstrapped into Azure quickly and then allow the user to simply RDP into the machine to update settings that were not exactly to their liking.  Even then, we knew that it would simply not be possible to offer enough combinations of options to satisfy every possible user and desired configuration.

The situation with static templates was that they were close(ish) for everyone, but perfect for no one.  We wanted to change that and to support arbitrarily complex deployment topologies.  With the release of our Template SDK, we have done just that.  With our new Template SDK you can:

  • Define and deploy any Windows IaaS topology in Azure.  You tell us what you want and we manage the creation in Azure.  We support any number of cloud services, VMs, networking, or storage account configurations.
  • Configure any set(s) of VMs that you deploy.  Install any software, configure any roles, manage firewalls, etc.  You name it.  We support Microsoft's Desired State Configuration (DSC) technology, and we support it completely.  This means you can use any of Microsoft's many DSC resources or author your own.  Don't have DSC resources for what you want yet?  We also support arbitrary Powershell scripts, so don't worry.
  • Clone, fork, or build any template.  We are open-sourcing all of our templates that were previously released as static templates.  We support public Git deployment from both GitHub and Bitbucket, so this means you can version any template you choose or deploy any revision.  This also means you can branch a template (e.g. DevTest, Production, Feature-AB) and deploy any or all of them at runtime.
  • Easily author your templates.  While the template itself is JSON-based, we are also shipping C# fluent syntax builders that give you the Intellisense you need to build out any configuration.  For more advanced configurations, we also support the Liquid template syntax.  This means you can put complex if/then, looping, filtering, and other control logic directly in your templates should you desire.

Brewmaster itself was built API first.  What this means is that you can script out any interaction with Brewmaster.  Want to deploy daily/weekly/hourly as part of your CI build?  With Brewmaster, you can do this as well.

One other thing:  Did I mention this was free?  Everything we are talking about here is being released for free.  Deploy for free, build for free, and share your templates for free.

Visit to register and get started today!

Friday, 11 June 2010

PowerScripting Podcast

Last week, I had the opportunity to talk with Hal and Jonathan on the PowerScripting podcast about Windows Azure.  It was a fun chat - lots on Windows Azure, a bit on the WASM cmdlets and MMC, and it revealed my favorite comic book character.

Listen to it now.

Wednesday, 17 February 2010

WASM Cmdlets Updated


I am happy to announce the updated release of the Windows Azure Service Management (WASM) Cmdlets for PowerShell today. With these cmdlets you can effectively automate and manage all your services in Windows Azure. Specifically,

  • Deploy new Hosted Services
    • Automatically upload your packages from the file system to blob storage.
  • Upgrade your services
    • Choose between automatic or manual rolling upgrades
    • Swap between staging and production environments
  • Remove your Hosted Services
    • Automatically pull down your services at the end of the day to stop billing. This is a critical need for test and development environments.
  • Manage your Storage accounts
    • Retrieve or regenerate your storage keys
  • Manage your Certificates
    • Deploy certificates from your Windows store or the local filesystem
  • Configure your Diagnostics
    • Remotely configure the event sources you wish to monitor (Event Logs, Tracing, IIS Logs, Performance Counters and more)
  • Transfer your Diagnostics Information
    • Schedule your transfers or Transfer on Demand.


Why did we build this?

The WASM cmdlets were built to unblock adoption for many of our customers as well as serve as a common underpinning to our labs and internal tooling. There was an immediate demand for an automation API that would fit into the standard toolset for IT Pros. Given the adoption and penetration of PowerShell, we determined that cmdlets focused on this core audience would be the most effective way forward. Furthermore, since PowerShell is a full scripting language with complete access to .NET, this allows these cmdlets to be used as the basis for very complicated deployment and automation scripts as part of the application lifecycle.

How can you use them?

Every call to the Service Management API requires an X509 certificate and the subscription ID for the account. To get started, you need to upload a valid certificate to the portal and have it installed locally to your workstation. If you are unfamiliar with how to do this, you can follow the procedure outlined on the Windows Azure Channel9 Learning Center here.
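If you are unsure which thumbprint to use, you can list the certificates installed in your user store directly from PowerShell (standard PowerShell; nothing specific to the WASM cmdlets):

```powershell
# Show each certificate in the current user's personal store,
# along with the thumbprint the cmdlets expect
Get-ChildItem cert:\CurrentUser\My | Format-Table Thumbprint, Subject
```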

Here are a few examples of how to use the cmdlets for a variety of common tasks:

Common Setup

Each script referenced below will refer to the following variables:

Add-PSSnapin AzureManagementToolsSnapIn

#get your local certificate for authentication
$cert = Get-Item cert:\CurrentUser\My\<YourThumbPrint>

#subID from portal
$sub = 'c9f9b345-7ff5-4eba-9d58-0cea5793050c'

#your service name
$service = 'yourservice'

#path to package (can also be http: address in blob storage)
$package = "D:\deploy\MyPackage.cspkg"

#configuration file
$config = "D:\deploy\ServiceConfiguration.cscfg"


Listing My Hosted Services

Get-HostedServices -SubscriptionId $sub -Certificate $cert


View Production Service Status

Get-HostedService $service -SubscriptionId $sub -Certificate $cert |
Get-Deployment 'Production' |
select RoleInstanceList -ExpandProperty RoleInstanceList |
ft InstanceName, InstanceStatus -GroupBy RoleName


Creating a new deployment

#Create a new Deployment
Get-HostedService $service -SubscriptionId $sub -Certificate $cert |
New-Deployment -Slot Production -Package $package -Configuration $config -Label 'v1' |
Get-OperationStatus -WaitToComplete

#Set the service to 'Running'
Get-HostedService $service -SubscriptionId $sub -Certificate $cert |
Get-Deployment 'Production' |
Set-DeploymentStatus 'Running' |
Get-OperationStatus -WaitToComplete


Removing a deployment

#Ensure that the service is first in Suspended mode
Get-HostedService $service -SubscriptionId $sub -Certificate $cert |
Get-Deployment 'Production' |
Set-DeploymentStatus 'Suspended' |
Get-OperationStatus -WaitToComplete

#Remove the deployment
Get-HostedService $service -SubscriptionId $sub -Certificate $cert |
Get-Deployment 'Production' |
Remove-Deployment



Upgrading a single Role

Get-HostedService $service -Certificate $cert -SubscriptionId $sub |
Get-Deployment -Slot Production |
Set-Deployment -mode Auto -roleName 'WebRole1' -package $package -label 'v1.2' |
Get-OperationStatus -WaitToComplete


Adding a local certificate

$deploycert = Get-Item cert:\CurrentUser\My\CBF145B628EA06685419AEDBB1EEE78805B135A2

Get-HostedService $service -SubscriptionId $sub -Certificate $cert |
Add-Certificate -CertificateToDeploy $deploycert |
Get-OperationStatus -WaitToComplete


Configuring Diagnostics - Adding a Performance Counter to All Running Instances

#get storage account name and key
$storage = "yourstorageaccount"
$key = (Get-StorageKeys -ServiceName $storage -Certificate $cert `
    -SubscriptionId $sub).Primary

$deployId = (Get-HostedService $service -SubscriptionId $sub `
    -Certificate $cert | Get-Deployment Production).DeploymentId

$counter = '\Processor(_Total)\% Processor Time'
$rate = [TimeSpan]::FromSeconds(5)

Get-DiagnosticAwareRoles -StorageAccountName $storage -StorageAccountKey $key `
    -DeploymentId $deployId |
foreach {
    $role = $_
    Get-DiagnosticAwareRoleInstances $role -DeploymentId $deployId `
        -StorageAccountName $storage -StorageAccountKey $key |
    foreach {
        $instance = $_
        $config = Get-DiagnosticConfiguration -RoleName $role -InstanceId $_ `
            -StorageAccountName $storage -StorageAccountKey $key `
            -BufferName PerformanceCounters -DeploymentId $deployId
        $perf = New-Object Microsoft.WindowsAzure.Diagnostics.PerformanceCounterConfiguration `
            -Property @{CounterSpecifier=$counter; SampleRate=$rate}
        #add the new counter to the existing data sources
        $config.DataSources.Add($perf)
        $config.DataSources |
        foreach {
            Set-PerformanceCounter -PerformanceCounters $_ -RoleName $role `
                -InstanceId $instance -DeploymentId $deployId -StorageAccountName $storage `
                -StorageAccountKey $key
        }
    }
}




More Examples

You can find more examples and documentation on these cmdlets by typing 'Get-Help <cmdlet> -full' from the PowerShell cmd prompt.

If you have any questions or feedback, please send it directly to me through the blog (see the Contact Ryan link in the right-hand navigation pane).