Thursday, 02 April 2009

Windows Azure and Geneva

I don't like to simply re-blog what others have written, but I will make a minor exception today, as it is important enough to repeat.  My friend and colleague Vittorio has explained the current status of running the Geneva framework on Windows Azure.

The short and sweet of it is that we know it is not working 100% today, and we are working on a solution.  This is actually the main reason you do not see a Windows Azure version of the Azure Issue Tracker today.

Thursday, 22 January 2009

Azure Issue Tracker Released

I am happy to announce the immediate release of the Azure Issue Tracker sample application.  I know, I know: the name is wildly creative.  This sample application is a simple issue-tracking service and website that pulls together a couple of the Azure services:  SQL Data Services and the .NET Access Control Service.

This demo is meant to show a realistic SaaS scenario.  As such, it features federation, claims-based authorization, and scalable data storage.

In this post, I will briefly walk through the application so that you can see how it is meant to work and how you might implement something similar.  Over the coming weeks, I will dig into the features more deeply to demonstrate how they work.  I would also encourage you to follow Eugenio, who will be speaking more about this particular demo and its 'Enterprise' version.

Let's get started:

  1. Make sure you have a .NET Services account (register for one here)
  2. Download and extract the 'Standard' version from the Codeplex project page
  3. Run the StartMe.bat file.  This file is the bootstrap for the provisioning process that is necessary to get the .NET Services solution configured, as well as things like websites, certificates, and SDS storage.
  4. Open the 'Readme.txt' from the setup directory and follow along for the other bits

Once you have the sample installed and configured, navigate to your IssueTracker.Web site and you will see something like this:


Click 'Join Now' and then select the 'Standard' edition.  You will be taken to the 'Create Standard Tenant' page.  This is how we register our initial tenant and get them provisioned in the system.  The Windows LiveID and company name entered on this page are what will be used for provisioning (the other information is not used right now).


Once you click 'Save', the provisioning service will be called and rules will be inserted into the Access Control service.  You can view the rules by opening the .NET Services portal, viewing the Access Control Service (use the Advanced View), and selecting the 'http://localhost/IssueTracker.Web' scope.


We make heavy use of the forward chaining aspect of the rules transformation here.  Notice that Admin will be assigned the role of both Reader and Contributor.  Those roles have many more operations assigned to them.  The net effect will be that an Admin will have all the operations assigned to them when forward chaining is invoked.

Notice as well that we have 3 new rules created in the Access Control service (ACS).  We have a claim mapping that sets the Windows LiveID to the Admin role output claim, another that sets an email mapping claim, and finally one that sets a tenant mapping claim.  Since we don't have many input claims to work with (only the LiveID, really), there is not too much we can do here in the Standard edition.  This is where the Enterprise edition, which can get claims from AD or any other identity provider, is a much richer experience.
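The forward chaining described above can be sketched with a tiny model.  This is illustrative Python, not the actual ACS rule engine, and the claim types and values are made up for the example:

```python
# Minimal model of forward-chained claim transformation rules.
# Each rule maps one (claim_type, claim_value) input to one output claim.
# The rule set and claim names below are illustrative, not the real ACS rules.
rules = [
    (("liveid", "admin@example.com"), ("role", "Admin")),
    (("role", "Admin"), ("role", "Reader")),
    (("role", "Admin"), ("role", "Contributor")),
    (("role", "Reader"), ("operation", "ReadIssue")),
    (("role", "Contributor"), ("operation", "CreateIssue")),
    (("role", "Contributor"), ("operation", "EditIssue")),
]

def transform(input_claims):
    """Apply the rules repeatedly until no new claims appear (forward chaining)."""
    claims = set(input_claims)
    changed = True
    while changed:
        changed = False
        for condition, output in rules:
            if condition in claims and output not in claims:
                claims.add(output)
                changed = True
    return claims

# The Admin input claim chains into Reader and Contributor, and from there
# into every operation those roles carry.
out = transform({("liveid", "admin@example.com")})
```

The net effect matches what the portal shows: one input claim fans out into the full set of role and operation claims.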

Once you have created the tenant and provisioned the rules, you will be able to navigate to a tenant-specific URL:  http://localhost/IssueTracker.Web/{tenant}

You will be prompted to log in with your LiveID, and once you are authenticated with Windows LiveID, you will be redirected back to your project page.  Under the covers, we federated with LiveID, got a token from them, sent the token to ACS, transformed the claims, and got back a token containing the claims (all signed by the ACS).  I will speak more about this later, but for now, we will stick to the visible effects.


From the Project page, you should click 'New Project' and create a new project.

  • Give it a name and invite another Windows LiveID user (that you can login as later).  Make sure you invite a user with a different Windows LiveID than the one you used to initially provision the tenant (i.e. what you are logged in as currently).
  • Choose the 'Reader' role for the invited user and click the Invite button.
  • Add any custom fields you feel like using.  We will likely expand this functionality later to give you more choices for types and UI representation.
  • Click Save (make sure you added at least one user to invite!)

Once you click Save, additional rules are provisioned out to the ACS to handle the invited users.  From the newly created project, create an issue.  Notice that any additional fields you specified are present now in the UI for you to use for your 'custom' issue.


Once you have saved a new issue, go ahead and try the edit functionality for the Issue.  Add some comments, move it through the workflow by changing the status, etc.

Next, open a new instance of IE (not a new tab in the same IE instance) and browse to the tenant home page (i.e. http://localhost/IssueTracker.Web/{tenant}).  This time, however, log in as the invited Windows LiveID user.  This user was assigned the 'Reader' role.  Notice that in the new browser window this user can read the projects for this tenant and can read any issue, but cannot use the Edit functionality.


Now, I will make two comments about this.  First, so what?  We are checking claims here on the website and showing you a 'Not Authorized' screen.  While we could have just hidden the 'Edit' functionality in the UI, we did this intentionally in the demo so you can see how the claim checking works.  In this case, we are checking claims at the website using an MVC route filter (called ClaimAuthorizationRouteFilterAttribute).
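The idea behind that route filter can be sketched as a small guard around an action.  This is a conceptual Python sketch of claims-based authorization; the real filter is a C#/ASP.NET MVC attribute, and the claim names and functions here are made up:

```python
# Conceptual sketch of a claim-checking filter.  The real implementation is
# a C# MVC route filter; the claim types and values below are illustrative.
class NotAuthorized(Exception):
    """Raised when the caller lacks a required claim (the 'Not Authorized' screen)."""
    pass

def require_claims(*required):
    """Decorator: allow the action only if the caller holds every required claim."""
    def wrap(action):
        def guarded(user_claims, *args, **kwargs):
            if not all(claim in user_claims for claim in required):
                raise NotAuthorized("missing required claim")
            return action(user_claims, *args, **kwargs)
        return guarded
    return wrap

@require_claims(("operation", "EditIssue"))
def edit_issue(user_claims, issue_id):
    return f"issue {issue_id} edited"

# Claims as they might arrive in the token after ACS transformation.
admin_claims = {("operation", "ReadIssue"), ("operation", "EditIssue")}
reader_claims = {("operation", "ReadIssue")}
```

A caller holding only the Reader's claims trips the guard, which is exactly the behavior you see in the second browser window.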

One key point of this demo is that the website is just a client to the actual Issue Tracker service.  What if we tried to hit the service directly?

Let's check.  I am going to intentionally neuter the web site claims checking capabilities:


By commenting out the required claims that the Reader won't have, I can get past the website's check.  Here is what I get:


The Issue Tracker service is checking claims too.  Whoops.  There is no way to trick this one from a client.  So, what are the implications of this design?  I will let you noodle on that one for a bit.  You should also be asking yourself: how did we get those claims from the client (the website) to the service?  That is actually not entirely obvious, so I will save it for another post.  Enjoy for now!

Friday, 09 March 2007

HD DVD Commentary

I have been following the saga of the folks working on decrypting the AACS protections found on Blu-ray and HD-DVD discs.  In my mind, I wish them the best of luck.  I know I won't bite on either HD format until I know that I can copy, transcode, or move the content to any format or device of my choosing.  The entire concept of DRM is pretty nonsensical if you think about it.  We have encrypted content, and the keys needed to decrypt it are either in the media player or embedded in the content itself.  Anyone spot a problem here?

This is just Security 101:  You can *never* secure your content if you also have to distribute the keys that decrypt it to the same parties you want to keep it hidden from.  The logical fallacy of the scheme is really stunning to consider.

Anyhow, I just read this post over on the DVDFile website.  At first, I thought it was an obvious parody, but then I realized that the author really believes that people who hack AACS are terrorists!  Wow.  He believes that HD video is now at risk because of a few people who believe in their right to use the media they purchase in any device or manner they choose.

In a follow-up to the hate mail he received, he gives an analogy: he is not allowed to drive a sports car 150 mph (it's against the law and could hurt others), so hackers should not expect to be able to use their HD media on non-HDCP-capable devices (because now the studios might revoke the media for others).  Yeah, I am still scratching my head on that one - the analogy sucks.

If we must keep with crappy car analogies, perhaps a better one is that you have bought an expensive sports car (your computer, monitor, HD player, TV, etc.) and also paid for the private use of a high-speed race track anytime you so choose (the media).  Only, you find out later that unless you completely replace your car (new monitor, trusted OS, new "secure" player, etc.), you either cannot drive on the track you paid for (unauthorized players!) or your car has to be fitted with a governor to keep you from exceeding 5 mph (ICT or downrezzing).

The best thing that ever happened to consumers was the day DVD protection was broken.  Now you can copy your DVDs to any format of your choosing and play them on any device anywhere (phone, iPod, etc.).  That day would never have come if the CSS protection had not been broken; your only option for getting content in the form or on the device you wanted would have been to purchase it again.  History tells me that AACS being broken is a good thing for everyone.

Thursday, 18 January 2007

OpenVPN on your WRT54G

One of the coolest things to come out of the SOHO router market has been the ability to take a few of the Linux-based routers and significantly upgrade their capabilities using community-driven third-party firmware.  The most popular of these, of course, are the WRT54G(S) varieties, since they can be had for under $50 pretty easily.  Unfortunately, Linksys (or Cisco) decided that they didn't appreciate the competition, so newer WRT54G-based routers no longer have as much memory or even run Linux anymore, making them much more difficult to upgrade.  Instead, they now offer a more expensive WRT54GL (where L stands for Linux, I guess) model that is essentially what the older models were and is still easily upgradeable.  Of course, Asus and Buffalo make decent and affordable routers that can be upgraded as well, so you needn't worry too much if you can't find an affordable Linksys version.

I have previously mentioned OpenVPN on this blog and sung its praises as an extremely capable SSL VPN solution.  In the past, I was running a VPN server on my home computer and forwarding the port through the WRT54G so that my client laptop could connect from anywhere to my home network.  This is very useful when you face very restrictive firewall or web proxy policies you don't feel like obeying.

I use DD-WRT firmware on my WRT54GS router.  I initially looked at using Sveasoft, but found their business model to be a little disturbing and hypocritical.  The DD-WRT firmware, however, is top notch, well maintained, and free.  The other day I was checking out what progress had been made on new features, and found that in addition to working as an OpenVPN client, the latest release of the DD-WRT firmware also allows the router to work as a server.  This is huge.  It means that I can now remove the VPN server from my home box and locate it on the router, which allows me to reach each and every computer on my network easily instead of just one.

Setting up everything appears intimidating, but it really isn't.  Here is how to perform this simple task and get your own SSL VPN.  Assuming you have a capable router, just follow these easy steps:


  1. Download the DD-WRT firmware from here.  Install the one with VPN support built into the image.  When I did this, it was v.23 SP2 VPN (dd-wrt.v23_vpn_wrt54gsv4.bin). However, you should read the Wiki to make sure you are installing the correct version for your model (mine was a v4 GS version).
  2. Download the OpenVPN GUI or OpenVPN Admin software and install it on your client.  For Vista users, first install the latest OpenVPN (version 2.0.9 or later) and then install one of the clients on top (GUI only).  Vista will choke on installing a new TAP adapter if you don't use a later release (I used version 2.1 RC1 with no problems).  Since the GUIs usually package a slightly older OpenVPN release, you may not be able to download and use an all-in-one installer on Vista.
  3. Follow the directions here on how to create SSL certificates for your server and for each client machine that will be connecting.  Note, I was unable to get OpenSSL to sign my certificate request from Vista (even running as an admin); when I took the files to an XP machine, I had no problems.  Technically, you don't even need to use certs and can use a static key instead.  However, I like the idea of using certificates, as you can get even fancier later with your own CA and automatic enrollment if you would like.
  4. Use this script here in the section entitled "Server Mode with Certificates" to install the server certificates and startup script on your router.  Reboot your router.  You just need to cut and paste your information into the script.
  5. Create your client configuration file and start it!  This is very easy to do using OpenVPN Admin and pretty easy to do using the sample config file from your OpenVPN installation or if you used OpenVPN GUI.  You now have a secure VPN connection back to your home network and should find that you can ping any machine on your network as if you were sitting behind the router.  See screenshots below using OpenVPN Admin for config params.  Note, if you are at a place that has a web proxy, use the Proxy tab to tunnel right on through.

A couple of final notes:  If you are using a web proxy, you must use TCP instead of UDP.  The server is already set up using TCP, so your client should be set up that way as well.  Additionally, you can use a TLS handshake initially for even more security.  I did not do this in my router install, but had it working on my home server installation.  I also modified the scripts in steps #4 and #5 to use port 443 instead of the default 1194.  The reason is that certain locations will typically block all ports but 80 and 443, so it is easiest to tunnel through one of those.
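For reference, a client configuration along the lines described above might look roughly like this.  This is a sketch only; the host names, proxy address, and certificate file names are placeholders, so substitute the values from your own setup:

```
# client.ovpn - illustrative sketch; names below are placeholders
client
dev tap
proto tcp-client              # TCP so the tunnel can traverse a web proxy
remote home.example.com 443   # port 443 instead of the default 1194
http-proxy proxy.example.com 8080   # only needed when behind a web proxy
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client.crt
key client.key
verb 3
```

With OpenVPN GUI or OpenVPN Admin, these same settings are entered through the UI rather than a hand-edited file.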

So, with a couple hours' effort (mostly to read the Wiki initially) and a $30 hardware investment, I now have an extremely capable and resilient solution that allows me to securely access my home network from virtually any place that has an internet connection.

Tuesday, 31 October 2006

Configuring Kerberos Delegation

One of the challenges to using something like System.DirectoryServices with web apps is managing the security context.  By default, your web application runs with the security context of a local account (often ASPNET or NETWORK SERVICE).  Those accounts are not domain accounts (unless you did something stupid like install IIS on a domain controller), so naturally any code that attempts to read or write to Active Directory is going to fail when the security context is unknown.

First of all, there are many approaches to getting your code in web apps to work.  I am not going to go into all of them; that would make for far too long a post.  Whenever possible, I tell people to use a trusted subsystem model, where the application runs with the desired security context.  When that is not possible, or the application requires fine-grained ACL control (or auditing), you need to actually get the client's security context down through the layers, which means we have to use delegation.

We can break down how to configure delegation based on the deployment of your IIS server.  I am talking about IIS6 here, but this would apply to some degree to IIS5 as well (sans App Pool).  I am also going to make the assumption that your IIS server is a member of a domain with appropriate trust relationships if necessary.  Kerberos delegation doesn't really work without this.

Step 1:  Determine the current security context of your IIS application

You should be able to rattle this off (it is that important to know), but if you are unsure, just open your IIS MMC (Start > Run > 'inetmgr'), find your application and check which application pool it is in (Properties > Virtual Directory > Application Pool).

Next, you should find this application pool in the MMC and check the identity (Properties > Identity).  It might be set to a predefined local account (e.g. NETWORK SERVICE), or it might be set explicitly to some other account.

This is your unmanaged security context.  Now, depending on the type of account this is, we will configure Kerberos delegation differently.  The next two steps, 2 and 2a, work slightly differently depending on what you determined your current security context to be.

Step 2: Is your current security context a local account (i.e. non-domain account like NETWORK SERVICE)?

If you are running IIS as a local account, you should understand that your application will still have an Active Directory identity on the network.  It will be that of the IIS machine account.  Keith Brown has a good write-up on this if you need more clarification why this happens.  Here are the steps to allow the current IIS server to delegate.

  1. IIS server must be a member of the domain and <identity impersonate="true"/> should be in web.config
  2. Set IIS server computer account in AD Users & Computers MMC as "Trusted for Delegation"
  3. IIS Server must be rebooted for this policy to take effect.
  4. Integrated Windows Authentication must be the only authentication method selected for the site / virtual directory
  5. IIS must not have NTLM only set as the authentication method (this is usually not a problem; NEGOTIATE is the default, so unless you specifically ran a script to change this, don't worry about it).
  6. The IIS server name must exactly match the computer account name in AD; otherwise, the SetSPN tool should be used in cases where the IIS site is reached by an alternative name (e.g. the website uses a DNS alias that differs from the server's machine name).

Step 2a: Is your current security context a domain account?

  1. IIS server must be a member of the domain and <identity impersonate="true"/> should be in web.config
  2. Set Domain's App Pool Identity account in AD Users & Computers MMC as "Trusted for Delegation"
  3. SetSPN must be used to add the site's HTTP SPN to the domain service account (the SPN takes the form "HTTP/" followed by the site's host name).
  4. Integrated Windows Authentication must be the only authentication method selected for the site / virtual directory
  5. IIS must not have NTLM only set as the authentication method (this is usually not a problem; NEGOTIATE is the default, so unless you specifically ran a script to change this, don't worry about it).
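The SetSPN portion of steps 2 and 2a can be sketched as follows.  The domain, site, and account names are placeholders; substitute your own:

```
rem Register the site's HTTP SPN against the app pool's domain account
rem (all names below are placeholders)
setspn -A HTTP/www.example.com MYDOMAIN\AppPoolAccount
setspn -A HTTP/www MYDOMAIN\AppPoolAccount

rem List the SPNs registered to the account to verify
setspn -L MYDOMAIN\AppPoolAccount
```

Registering both the fully qualified name and the short name covers clients that browse to the site either way.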

Step 3:  Make sure your clients are configured to use Kerberos

The final step to getting this all to work is to make sure your clients are configured such that they will attempt Kerberos.  This requires a few items to check.

  1. Client must be using IE 5.x+. If client is running IE 6, ensure that "Enable Integrated Windows Authentication (requires restart)" is selected from Tools > Internet Options > Advanced.
  2. Web site MUST be recognized as Local Intranet (not Internet Zone) or Trusted site to client. Since Kerberos is not really considered an internet protocol, IE does not even try Kerberos if the current zone is determined to be Internet.  If necessary, specifically add this to Local Intranet sites list.
  3. Client account must not be marked as "Sensitive, Do not Delegate" in AD Users and Computers MMC.  Smart administrators often mark their admin accounts with this option to prevent them from being abused.  Delegation will fail if you (the client) are trying to delegate one of these types of credentials.


The service principal name (SPN) can be a tripping point for a lot of applications.  Load-balanced IIS servers add additional complications as well.  I would really recommend reading Keith's explanation of Kerberos and how it works.  It should make clear what the SPN is and why it is necessary.

Finally, Microsoft has a support article out there that is fairly decent.  Check it out if you have problems.

Friday, 08 September 2006

Does Microsoft really care about security?

Bruce Schneier makes a pretty strong case about Microsoft’s true motivation for ‘security’ patches.

Saturday, 31 December 2005

WMF Exploit Firsthand

The last few days have been pretty crappy with regard to the computer situation.  On Wednesday (the 27th), I was trying to recover some files from a portable hard drive that had decided to go tits-up.  Well, the short story is that I was browsing the web using Google cache, and all of a sudden my AV package and MS Antispyware went crazy.  Something was dropping trojans onto my machine at an alarming rate.  This was before all the hubbub broke out the next day describing the issue.

I spent an inordinate amount of time trying to undo all the hooks that had been put into my XP SP2 (fully patched) box.  First, I found that my firewall had been disabled, group policy had been applied to prevent me from accessing the task manager, and a bunch of stuff was injected into my startup portions in the registry.

Using Autoruns and Process Explorer, I also discovered that my Explorer.exe had been replaced and shelled out by another program that was trying to prevent me from removing the trojan(s).  A number of unknown services were installed and all of my browser settings were hijacked (no help from Antispyware there for some reason).  A static HTML page was inserted as my homepage that told me that my computer was at risk from spyware.  I suspect the idiots that wrote the malware expected me to take them up on an offer to remove it.

I was able to undo all the hooks and set the system in order, but I was not comfortable that I got every last thing.  As such, I had to back up a few files I was working on and restore a backup image I luckily had.  This took me a few days with everything else I had going on.

The points that bother me are:

  • I don’t think running as a lower-privilege user would have helped (yes, I run as an Admin, bad Ryan).  It appears that this is using a buffer overflow and RevertToSelf() to get to the SYSTEM account.  Perhaps I am wrong on this one, but I have not read anything to contravene this viewpoint, as it appears all XP machines are vulnerable regardless of setup.  I got this from Google’s cache, not by clicking or running any files.
  • MS Antispyware was easily defeated.  It did notify me that a trojan was installed and tried to let me remove it (which it said it did, but did not).  It did not protect me from any of my settings being hijacked.  That was a big miss.  Not only that, but I could not restore my old settings since the trojan wiped them as well.
  • My AV package did not help either.  It was nice enough to let me know that trojans were being installed, but did not appear to prevent it either.  What was the point of that?

This is a really bad one folks.  This is the very first time I have ever been infected or compromised.  I shudder to think how easily it occurred.  Make sure you patch up.  There is an unofficial patch you can use right now to help.


Friday, 04 November 2005

Sony DRM Rootkit has an Actual Use

Following on the tail of my previous post on this subject, it turns out that there is actually a beneficial use to the Sony rootkit installed as part of their DRM scheme: cheating on WoW.  Yes, that’s right, it turns out that Blizzard’s anti-cheat software cannot detect the cheating programs when they take advantage of the “$sys$” naming format that turns files invisible with the rootkit’s help.  Full details can be found here.

Monday, 31 October 2005

Sony DRM Rootkit

I read this post from Mark Russinovich’s blog today and it really struck me how far record companies have taken DRM schemes.  This particular DRM infestation was especially nasty and very few people would know how to remove it or detect it (except by the errors it can cause).

It is this type of unauthorized and potentially malicious software installation for which other authors have been sued.  I would have been furious had this software been installed without my permission or knowledge, especially when no way to uninstall it exists.


Monday, 12 September 2005

A VPN for Road Warriors

As a traveling consultant, I often find myself working for clients that have very restrictive internet access policies (to say the least).  I have worked with clients that keep things so locked down that I actually have difficulty doing my job as a developer.  One client in particular was causing me grief because of the following restrictions:

  • No outside laptops on the network (they issued me one when I got there).  Their intent is to stop outsiders from introducing worms or viruses, but in reality it is just a pain for the hundreds of contractors that are engaged there and an added expense in needing to provide a laptop to everyone.  I was lucky enough to get local admin rights to my laptop, but there are plenty that don’t have that.  With local admin access, I could at least install the tools I needed to get my job done.
  • No outside mail on the network – including all types of web mail.  This is also intended to stop the spread of any mail bound viruses.  I would think they would make an exception for work email, but that is not the case.  This means that I am totally disconnected from the mothership except for my smartphone during the day.  This is pretty hard to deal with as I get plenty of mail from work during the day and ignoring it until I can check it from the hotel is usually not a good option.
  • Extremely restrictive web filtering:  I am talking no blogs, no Google groups, nothing remotely related to the word ‘download’.  This hurts a bit since I tend to use Google groups and blogs quite a bit for .NET development.  I never realized what a handicap this would be until I tried to go without it.  I think the intent here is to keep people from looking at anything considered ‘social’.

I respect the intent of the restrictions, no matter the methods used.  I decided to keep to the spirit of this and just terminal into my home machine when I needed to check my work mail and look at Google groups for answers.  I would leave the local drives disconnected so as not to introduce anything from their network to mine or vice versa.  This would keep their network free of the viruses they were afraid of, but allow me to be able to function.  It initially worked pretty well, and I was able to just keep a remote window open and reference it as needed during the day.

Then something changed… I could no longer access my machine using RDS.  I managed to RDS to another machine and then RDS from there back to my machine.  I changed the port my server listened on from the default 3389 and got it working again.  This told me that they had decided to block 3389.  Things continued fine for a few more weeks, when all of a sudden it stopped working again.  This time, it looked like they had put a filter on the firewall to block all RDS traffic, because changing the port had no effect.  I could see this was an escalating arms race, so I decided to look at getting a personal VPN solution.

In my research, I found that there are a few consumer router models on the market that can act as VPN end-points.  I also knew that I could buy a Linksys WRT54G and use something like Sveasoft to get a cheap but sophisticated VPN solution.  However, the one nagging issue with all of these was that they relied on either IPSec or PPTP.  These can easily be blocked and cannot traverse an HTTP Proxy.  I wanted something that would tunnel through an HTTP Proxy – which meant SSL VPN.

If you are not aware, buying an SSL VPN device is not a cheap proposition (they start in the low $10K range).  Some of them are a bit of a misnomer as well, as they just web-ify certain applications but do not actually provide a VPN.  In my searching, I finally found OpenVPN.  It is a free, flexible, and powerful software SSL VPN solution.  All I had to do was set up the VPN client on my home machine and my laptop, create a couple of certificates (easy to do), and twiddle the config slightly, and I had an encrypted connection back to my home server to use as I please.  I also had to use port forwarding to send the OpenVPN traffic through my router’s firewall to my home machine’s VPN server.  If I really wanted to, I could even route my internet traffic over the SSL VPN and evade their proxy server filtering completely.  However, as I mentioned earlier, it is not my intent to do anything that violates the spirit of why they have these restrictions.  So, I am typing this in an RDS session that is being tunneled through the proxy server over an encrypted SSL VPN connection.  I have launched the latest salvo in the arms race; I wonder what will be next…

Wednesday, 17 August 2005

.NET versus Java Security

Oh boy… I can’t wait to see all the rebuttals (and the likely Slashdotting):

.NET versus Java Security

Zealots… arise!

Thursday, 23 June 2005

A quick reminder on Dispose and foreach

Keith points out today that the foreach loop does not actually call Dispose() on the items being enumerated – only on the enumerator itself.  That is actually news to me – I was always under the assumption that Dispose() would be called on the iterated object when it left scope.

Why should you care?  For anyone programming System.DirectoryServices, this actually has a big impact.  Consider the following, very common code:

        DirectoryEntry entry = new DirectoryEntry("LDAP://...");  // path elided in the original
        using (entry)
        {
            foreach (DirectoryEntry child in entry.Children)
            {
                // do something
            }
        }

Whoops - we potentially have many undisposed DirectoryEntry objects here.  Given that the SDS namespace has a couple of problems with leaking memory, we always recommend you Dispose() your objects in SDS.  The foreach loop will not do it for you (and no, disposing 'entry' will not call Dispose() on its children here either).

In general, always explicitly call Dispose() on the following object types:

  • DirectoryEntry
  • SearchResultCollection (from .FindAll())
  • DirectorySearcher (if you have not explicitly set a SearchRoot)

Keep this in mind now with your foreach loops as well.