Thursday, 20 December 2012

Setting ClaimsAuthenticationManager Programmatically in .NET 4.5

This is a quick post today that might save folks the same trouble I had to go through when upgrading my Windows Identity Foundation (WIF)-enabled MVC website to the latest version of .NET.  The scenario is that you might want to enrich the claims coming from your STS with additional claims of your choosing.  To do this, there is a common technique of creating a class that derives from ClaimsAuthenticationManager and overrides the Authenticate method.  Consider this sample ClaimsAuthenticationManager:
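The original code snippet did not survive here, so the following is a sketch of what such a class looks like in .NET 4.5.  The ITenantRepository dependency is the one discussed below; its GetTenantFor method and the claim type URI are illustrative names, not the real implementation:

```csharp
using System.Security.Claims;

public class CustomClaimsAuthenticationManager : ClaimsAuthenticationManager
{
    private readonly ITenantRepository _repository;

    public CustomClaimsAuthenticationManager(ITenantRepository repository)
    {
        _repository = repository;
    }

    public override ClaimsPrincipal Authenticate(string resourceName,
        ClaimsPrincipal incomingPrincipal)
    {
        if (incomingPrincipal != null && incomingPrincipal.Identity.IsAuthenticated)
        {
            // Enrich the incoming principal with claims looked up from our
            // own store (GetTenantFor is a hypothetical repository method).
            var tenant = _repository.GetTenantFor(incomingPrincipal.Identity.Name);
            ((ClaimsIdentity)incomingPrincipal.Identity)
                .AddClaim(new Claim("http://myclaims/tenant", tenant));
        }

        return base.Authenticate(resourceName, incomingPrincipal);
    }
}
```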

The issue we have is that we need to provide an implementation of ITenantRepository here in order to look up the data for the additional claims we are adding.  If you are lucky enough to find the article on MSDN, it will show you how to wire in a custom ClaimsAuthenticationManager using the web.config.  I don't want to hardcode references to an implementation of my TenantRepository, so using config is not a great option for me.
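For reference, the config-based wiring looks roughly like this (the type name is a placeholder).  Note that this route requires a type the runtime can construct for you, which is exactly why it doesn't fit the repository-injection scenario:

```xml
<system.identityModel>
  <identityConfiguration>
    <claimsAuthenticationManager
        type="MyApp.CustomClaimsAuthenticationManager, MyApp" />
  </identityConfiguration>
</system.identityModel>
```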

In the older WIF model (Microsoft.IdentityModel) for .NET <= 4.0, you hooked the ServiceConfigurationCreated event:
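That snippet is also missing here; a sketch of the pre-4.5 hook, typically placed in Application_Start (the repository instantiation is illustrative):

```csharp
// Microsoft.IdentityModel.Web (WIF 1.0, .NET <= 4.0)
void Application_Start()
{
    FederatedAuthentication.ServiceConfigurationCreated += (sender, e) =>
    {
        // Swap in our manager, passing whatever dependencies we like.
        e.ServiceConfiguration.ClaimsAuthenticationManager =
            new CustomClaimsAuthenticationManager(new TenantRepository());
    };
}
```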

But, in .NET 4.5, all of the namespaces and a lot of the classes are updated (System.IdentityModel).  It took me a long time in Reflector to figure out how to hook the configuration being created again.  Turns out you need to reference System.IdentityModel.Services and find the FederatedAuthentication class.  Here you go:
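A sketch of the .NET 4.5 equivalent, using the FederationConfigurationCreated event on that class (again, the repository wiring is illustrative):

```csharp
// System.IdentityModel.Services (.NET 4.5)
void Application_Start()
{
    FederatedAuthentication.FederationConfigurationCreated += (sender, e) =>
    {
        // The identity configuration now hangs off the federation configuration.
        e.FederationConfiguration.IdentityConfiguration.ClaimsAuthenticationManager =
            new CustomClaimsAuthenticationManager(new TenantRepository());
    };
}
```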

Happy WIF-ing.

Thursday, 14 July 2011

How to Diagnose Windows Azure Error Attaching Debugger Errors

I was working on a Windows Azure website solution the other day and suddenly started getting this error when I tried to run the site with a debugger:

[image: the "error attaching the debugger" dialog]

This error is one of the hardest to diagnose.  Typically, it means that there is something crashing in your website before the debugger can attach.  A good candidate to check is your global.asax to see if you have changed anything there.  I knew that the global.asax had not been changed, so it was puzzling.  Naturally, I took the normal course of action:

  1. Ran the website without debugging inside the emulator.
  2. Ran the website with and without debugging outside the emulator.
  3. Tried it on another machine.

None of these methods gave me any clue what the issue was, as they all worked perfectly fine.  It was killing me that it only happened when debugging inside the emulator, and only on one machine (the one I really wanted to work).  I was desperately looking for a solution that did not involve rebuilding the machine.  I turned on Sysinternals' DebugView to see if there were any debug messages telling me what the problem was.  I saw a number of interesting things, but nothing that really stood out as the source of the error.  However, I did notice the process ID of what appeared to be reporting errors:

[image: DebugView output showing the process ID reporting errors]

Looking at Process Explorer, I found this was for DFAgent.exe (the Dev Fabric Agent).  I could see that it was starting with an environment variable, so I took a look at where that was happening:

[image: Process Explorer showing the environment variable on DFAgent.exe]

That gave me a direction to start looking.  I opened the %UserProfile%\AppData\Local\Temp directory and found a conveniently named file there called Visual Studio Web Debugger.log. 

[image: Visual Studio Web Debugger.log in the Temp directory]

A quick look showed it to be HTML, so one rename later and voilà!

[image: the rendered log file revealing a 500.19 configuration error]

One of our developers had overridden the <httpErrors> setting in web.config, which was disallowed on my one machine.  I opened my applicationHost.config in an elevated (administrative) Notepad and, sure enough:

[image: the locked-down httpErrors section in applicationHost.config]

So, the moral of the story is: next time, just take a look at this log file and you might find the issue.  I suspect the reason this only happened when debugging, and not when running without the debugger, is that the debugger requests a file called debugattach.aspx.  Since this file does not exist on my machine, it throws a 404, which in turn consults the <httpErrors> setting, which culminates in the 500.19 server error.  I hope this saves someone the many hours I spent finding it, and I hope it keeps you from rebuilding your machine, as I almost did.
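For the curious, the lock that produces this flavor of 500.19 lives in applicationHost.config.  A sketch of the relevant line (the exact attributes on your machine may differ):

```xml
<!-- %windir%\System32\inetsrv\config\applicationHost.config -->
<!-- When overrideModeDefault is "Deny", any site web.config that sets
     <httpErrors> produces a 500.19 configuration error at that path. -->
<section name="httpErrors" overrideModeDefault="Deny" />
```

Changing it to "Allow" (or removing the <httpErrors> override from the site's web.config) clears the error.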

Thursday, 04 September 2008

PhluffyFotos v2 Released

We have just released an updated version of the SSDS sample application called 'PhluffyFotos'.  Clouds are fluffy and this is a cloud services application - get it?  This sample application is an ASP.NET MVC and Windows Mobile application showing how to build a photo tagging and sharing site using our cloud data service, SSDS.  For this update:

  • Updated to MVC Preview 4.  We have removed hardcoded links and used the new filtering capability for authorization.  Of course, Preview 5 was just (and I mean just) released as we were putting this out the door.  I might update this to Preview 5 later, but it will not be a big deal to do so. 
  • Updated to add thumbnail support.  Originally, we just downloaded the entire image and resized to thumbnail size.  This drags down performance in larger data sizes, so we fixed it for this release.
  • Updated to use the SSDS blob support.  Blob support was recently added with the latest sprint.  Previously, we were using the 'base64Binary' attributes to store the picture data.  With the new blob support, you supply a content type and content disposition, which will be streamed back to you on request. 
  • Updated to use the latest SSDS REST library.  This library gives us the ability to use and persist CLR objects to the service and use a LINQ-like query syntax.  This library saved us a ton of time and effort in building the actual application.  All the blob work, querying, and data access was done using this library.

The sample is available for download at CodePlex, and a live version is available to play with at PhluffyFotos.com.  I am opening this one up to the public to upload photos.  Maybe I am playing with fire here, so we will see how well it goes.  Keep in mind that this is a sample site and I will periodically blow away the data.  The live version has an added feature of integrating a source code viewer directly into the application.

SSDS REST Library v2 Released

I have just updated the Code Gallery page to reflect the new version of the REST-based library for SSDS.  This is a fairly major update to the library and adds a ton of new features to make working with SSDS even easier than it already is for the .NET developer.  Added in this release:

  • Concurrency support via Etags and If-Match, If-None-Match headers.  To get a basic understanding of how this works, refer here.
  • Blob support.  The library introduces a new type called SsdsBlobEntity that encapsulates working with blobs in SSDS.  Overloads are available for both synchronous as well as async support.
  • Parallelization support via extension methods.  The jury is still out on this one and I would like to hear some feedback on it (both the technique as well as the methods).  Instead of using an interface, factory methods, etc., we are using extension methods supplied in a separate assembly to support parallel operations.  Since there are many different techniques to parallelize your code, this allows us to offer more than one option.  Each additional assembly can also take dependencies that the entire library might not want to take as well.  Imagine that we get providers for Parallel Extensions, CCR, or perhaps other home-baked remedies.  A very simple provider using Parallel Extensions is included.
  • Bug fixes.  Hard to believe, but yes, I did have a few bugs in my code.  This release cleans up a few of the ones found in the LINQ expression syntax parser as well as a few oversights in handling date/times.
  • Better test coverage.  Lots more tests included to not only prove out that stuff works, but also to show how to use it.

If you just want to see this library in action, refer to the photo sharing and tagging application called 'PhluffyFotos' that pulls it all together (sans parallelization, I suppose).  You can use the integrated source viewer to see how the library works on a 'real' application (or a real sample application at least).

Tuesday, 26 August 2008

Concurrency with SSDS via REST

Eugenio already covered concurrency via the SOAP interface in his latest post.  The idea is exactly the same in REST, but the mechanics are slightly different.  For REST, you specify an ETag value with either the If-Match or If-None-Match header.

Here is a simplified client that does a PUT/POST operation on SSDS:

internal void Send(Uri scope, string etag, string method, string data,
    Action<string, WebHeaderCollection> action, Action<WebException> exception)
{
    // No using block here: the client must stay alive until the async
    // callback fires, so it is disposed in the handler instead.
    var client = new WebClient { Credentials = _credentials };

    client.Headers.Add(HttpRequestHeader.ContentType, "application/x-ssds+xml");

    if (etag != null)
        client.Headers.Add(HttpRequestHeader.IfMatch, etag);

    client.UploadStringCompleted += (sender, e) =>
    {
        if (e.Error != null)
        {
            if (exception != null)
                exception((WebException)e.Error);
        }
        else if (action != null)
        {
            action(e.Result, client.ResponseHeaders);
        }

        client.Dispose();
    };

    client.UploadStringAsync(scope, method, data);
}

All this does is add the If-Match header with the ETag (which corresponds to the flexible entity's Version system attribute).  This instructs the system to update the entity only if the version held in SSDS matches the version specified in the ETag via the If-Match header.

Failure of this condition results in a 412 'Precondition Failed' error ("A precondition, such as Version, could not be met").  You simply need to handle this response and move on.
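Handling it amounts to inspecting the WebException in the failure callback.  A sketch, assuming the Send method above is exposed on a client instance called ssds and that payload holds the serialized entity:

```csharp
ssds.Send(scope, etag, "POST", payload,
    (body, headers) =>
    {
        // Success: capture the new ETag for the next conditional update.
        etag = headers[HttpResponseHeader.ETag];
    },
    ex =>
    {
        var response = ex.Response as HttpWebResponse;
        if (response != null &&
            response.StatusCode == HttpStatusCode.PreconditionFailed)
        {
            // 412: the entity changed since we last read it.
            // Re-fetch the latest version and decide whether to retry.
        }
    });
```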

Next, there are times when you have a large blob or a largish flexible entity and only want to perform the GET if you don't already have the latest version.  In this case, you specify the ETag again, this time with the If-None-Match header.

Here is a simplified client that shows how the GET would work:

public void Get(Uri scope, string etag,
    Action<string, WebHeaderCollection> action, Action<WebException> exception)
{
    // As in Send, the client is disposed in the callback rather than a
    // using block so that it survives until the async download completes.
    var client = new WebClient { Credentials = _credentials };

    client.Headers.Add(HttpRequestHeader.ContentType, "application/x-ssds+xml");

    if (etag != null)
        client.Headers.Add(HttpRequestHeader.IfNoneMatch, etag);

    client.DownloadStringCompleted += (sender, e) =>
    {
        if (e.Error != null)
        {
            if (exception != null)
                exception((WebException)e.Error);
        }
        else if (action != null)
        {
            action(e.Result, client.ResponseHeaders);
        }

        client.Dispose();
    };

    client.DownloadStringAsync(scope);
}
When you add the ETag with this header, you will receive a 304 'Not Modified' response if the content has NOT changed since the ETag value you sent.
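In practice that means treating the 304 in the failure callback as "use what you already have".  A sketch (cachedEtag and cachedBody are assumed locals holding the previously fetched copy):

```csharp
ssds.Get(scope, cachedEtag,
    (body, headers) =>
    {
        // Fresh content: update the local cache and the stored ETag.
        cachedBody = body;
        cachedEtag = headers[HttpResponseHeader.ETag];
    },
    ex =>
    {
        var response = ex.Response as HttpWebResponse;
        if (response != null &&
            response.StatusCode == HttpStatusCode.NotModified)
        {
            // 304: our cached copy is still current; nothing to download.
        }
    });
```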

I am attaching a small Visual Studio sample that includes this code and demonstrates these techniques.

Wednesday, 02 July 2008

Working with Objects in SSDS Part 3

Here is my last installment in this series of working with objects in SQL Server Data Services.  For background, readers should read the following:

Serialization in SSDS

Working with Objects in SSDS Part 1

Working with Objects in SSDS Part 2

Last time, we concluded with a class called SsdsEntity<T> that became an all-purpose wrapper or veneer around our CLR objects.  This made it simple to take our existing classes and serialize them as entities in SSDS.

In this post, I want to discuss how the querying in the REST library works.  First a simple example:

var ctx = new SsdsContext(
    "authority=http://dunnry.data.beta.mssds.com/v1/;username=dunnry;password=secret"
    );

var container = ctx.OpenContainer("foo");
var foo = new Foo { IsPublic = false, Name = "MyFoo", Size = 12 };

//insert it with unique id guid string
container.Insert(foo, Guid.NewGuid().ToString());

//now query for it
var results = container.Query<Foo>(e => e.Entity.IsPublic == false && e.Entity.Size > 2);

//Query<T> returns IEnumerable<SsdsEntity<T>>, so foreach over it
foreach (var item in results)
{
    Console.WriteLine(item.Entity.Name);
}

I glossed over it in my previous posts with this library, but I have a class called SsdsContext that acts as my credential store and factory to create SsdsContainer objects where I perform my operations.  Here, I have opened a container called 'foo', which would relate to the URI (http://dunnry.data.beta.mssds.com/v1/foo) according to the authority name I passed on the SsdsContext constructor arguments.

I created an instance of my Foo class (see this post if you want to see what a Foo looks like) and inserted it.  We know that under the covers an XmlSerializer does the work of serializing it to the proper POX wire format.  So far, so good.  Now, I want to retrieve that same entity back from SSDS.  The key line here is the container.Query<T>() call.  It accepts an Expression<Func<SsdsEntity<T>, bool>> argument that represents a strongly typed query.

For the uninitiated, the Expression<TDelegate> is a way to represent lambda expressions in an abstract syntax tree.  We can think of them as a way to model what the expression does without generating the bits of code necessary to actually do it.  We can inspect the Expression and create new ones based on it until finally we can call Compile and actually convert the representation of the lambda into something that can execute.
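A tiny, self-contained illustration of the difference (nothing SSDS-specific here):

```csharp
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // An expression tree is data describing code, not compiled code.
        Expression<Func<int, bool>> expr = n => n > 5;

        // The tree can be inspected (and translated to something else,
        // like a query string) without ever executing it.
        Console.WriteLine(expr.Body);    // prints "(n > 5)"

        // Compile turns the representation into an executable delegate.
        Func<int, bool> compiled = expr.Compile();
        Console.WriteLine(compiled(10)); // True
    }
}
```

This inspect-then-translate ability is exactly what the REST library exploits to turn the lambda into the SSDS query string.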

The Func<SsdsEntity<T>, bool> represents a delegate that accepts a SsdsEntity<T> as an argument and returns a boolean.  This effectively represents the WHERE clause in the SSDS LINQ query syntax.  Since SsdsEntity<T> contains an actual type T in the Entity property, you can query directly against it in a strongly typed fashion!

What about those flexible properties that I added to support flexible attributes outside of our T?  I mentioned that I wanted to keep the PropertyBucket (a Dictionary<string, object>) property public for querying.  In order to use the flexible properties that you add, you simply use it in a weakly typed manner:

var results = container.Query<Foo>(e => e.PropertyBucket["MyFlexProp"] > 10);

As you can see, any boolean expression that you can think of in the string-based SSDS LINQ query syntax can now be expressed in a strongly-typed manner using the Func<SsdsEntity<T>, bool> lambda syntax.

How it works

Since I have the expression tree of what your query looks like in strongly-typed terms, it is a simple matter to take that and convert it to the SSDS LINQ query syntax that looks like "from e in entities where [....] select e" that is appended to the query string in the REST interface.  I should say it is a simple matter because Matt Warren did a lot of the heavy lifting for us and provided the abstract expression visitor (ExpressionVisitor) as well as the expression visitor that partially evaluates the tree to evaluate constants (SubTreeEvaluator).  This last part is important because it allows us to write this:

int i = 10;
string name = "MyFoo";

var results = container.Query<Foo>(e => e.Entity.Name == name && e.Entity.Size > i);

Without the partial tree evaluation, you would not be able to express the right-hand side of those comparisons.  All I had to do was implement an expression visitor that correctly evaluated the lambda expression and converted it to the LINQ syntax that SSDS expects (SsdsExpressionVisitor).  It would be a trivial matter to actually implement the IQueryProvider and IQueryable interfaces to make the whole thing work inside LINQ to Objects.

Originally, I did supply the IQueryProvider for this implementation but after consideration I have decided that using methods from the SsdsContainer class instead of the standard LINQ syntax is the best way to proceed.  Mainly, this has to do with the fact that I want to make it more explicit to the developer what will happen under the covers rather than using the standard Where() extension method.

Querying data

The main interaction to return data is via the Query<T> method.  This method is smart enough to add the Kind into the query for you based on the T supplied.  So, if you write something like:

var results = container.Query<Foo>(e => e.Entity.Size > 2);

This is actually translated to "from e in entities where e["Size"] > 2 && e.Kind == "Foo" select e".  The addition of the kind is important because we want to limit the results as much as possible.  If there happened to be many kinds in the container that had the flexible property "Size", it would actually return those as well in the wire response.

Of course, what about if you want that to happen?  What if you want to return other kinds that have the "Size" property?  To do this, I have introduced a class called SsdsEntityBucket.  It is exactly what it sounds like.  To use it, you simply specify a query that uses additional types with either the Query<T,U,V> or Query<T,U> methods.  Here is an example:

var foo = new Foo
{
    IsPublic = true,
    MyCheese = new Cheese { LastModified = DateTime.Now, Name = "MyCheese" },
    Name = "FooMaster",
    Size = 10
};

container.Insert(foo, foo.Name);
container.Insert(foo.MyCheese, foo.MyCheese.Name);

//query for bucket...
var bucket = container.Query<Foo, Cheese>(
    (f, c) => f.Entity.Name == "FooMaster" || c.Entity.Name == "MyCheese"
    );

var f1 = bucket.GetEntities<Foo>().Single();
var c1 = bucket.GetEntities<Cheese>().Single();

The calls to GetEntities<T> returns IEnumerable<SsdsEntity<T>> again.  However, this was done in a single call to SSDS instead of multiple calls per T.

Paging

As I mentioned earlier, I wanted the developer to understand what they were doing when they called each method, so I decided to make paging explicit.  If I had potentially millions of entities in SSDS, it would be a bad mistake to allow a developer to issue a simple query that seamlessly paged the items back - especially if the query was something like e => e.Id != "".  Here is how I handled paging:

var container = ctx.OpenContainer("paging");

List<Foo> items = new List<Foo>();
int i = 1;

container.PagedQuery<Foo>(
    e => e.Entity.Size != 0,
    c =>
    {
        Console.WriteLine("Got Page {0}", i++);
        items.AddRange(c.Select(s => s.Entity));
    }
);

Console.WriteLine(items.Count);

The PagedQuery<T> method takes two arguments.  One is the standard Expression<Func<SsdsEntity<T>, bool>> that you use to specify the WHERE clause for SSDS, and the other is Action<IEnumerable<SsdsEntity<T>>> which represents a delegate that takes an IEnumerable<SsdsEntity<T>> and has a void return.  This is a delegate you provide that does something with the 500 entities returned per page (it gets called once per page).  Here, I am just adding them into a List<T>, but I could easily be doing anything else here.  Under the covers, this is adding the paging term dynamically into the expression tree that is evaluated.

What's next

This is a good head start on using the REST API with SSDS today.  However, there are a number of optimizations that could be made to the model: additional overloads, perhaps some extension methods for common operations, etc.

As new features are added, I will endeavor to update this as well (blob support comes to mind here).  Additionally, I have a few optimizations planned around concurrency for CRUD operations. 

I have published this out to Code Gallery and I welcome feedback and bug fixes.  Linked here.

Thursday, 26 June 2008

Working with Objects in SSDS Part 2

This is the second post in my series on working with SQL Server Data Service (SSDS) and objects.  For background, you should read my post on Serializing Objects in SSDS and the first post in this series.

Last time I showed how to create a general purpose serializer for SSDS using the standard XmlSerializer class in .NET.  I created a shell entity or a 'thin veneer' for objects called SsdsEntity<T>, where T was any POCO (plain old C#/CLR object).  This allowed me to abstract away the metadata properties required for SSDS without changing my actual POCO object (which, I noted was lame to do).

If we decide that we will use SSDS to interact with POCO T, an interesting situation arises.  Namely, once we have defined T, we have in fact defined a schema - albeit one only enforced in code you write and not by the SSDS service itself.  One of the advantages of using something like SSDS is that you have a lot of flexibility in storing entities (hence the term 'flexible entity') without conforming to schema.  Since I want to support this flexibility, I need a way to support not only the schema implied by T, but also additional, arbitrary properties that a user might want to add.

Some may wonder why we need this flexibility:  after all, why not just change T to support whatever we like?  The issue comes up most often with code you do not control.  If you already have an existing codebase with objects that you would like to store in SSDS, it might not be practical or even possible to change the T to add additional schema.

Even if you completely control the codebase, expressing relationships between CLR objects and expressing relationships between things in your data are two different ideas - sometimes this problem has been termed 'impedance mismatch'.

In the CLR, if two objects are related, they are often part of a collection, or they refer to an instance on another object.  This is easy to express in the CLR (e.g. Instance.ChildrenCollection["key"]).  In your typical datasource, this same relationship is done using foreign keys to refer to other entities.

Consider the following classes:

public class Employee
{
    public string EmployeeId { get; set; }
    public string Name { get; set; }
    public DateTime HireDate { get; set; }
    public Employee Manager { get; set; }
    public Project[] Projects { get; set; }
}

public class Project
{
    public string ProjectId { get; set; }
    public string Name { get; set; }
    public string BillCode { get; set; }
}

Here we see that the Employee class refers to itself as well as contains a collection of related projects (Project class) that the employee works on.  SSDS only supports simple scalar types and no arrays or nested objects today, so we cannot directly express this in SSDS.  However, we can decompose this class and store the bits separately and then reassemble later.  First, let's see what that looks like and then we can see how it was done:

var projects = new Project[]
{
    new Project { BillCode = "123", Name = "TPS Slave", ProjectId = "PID01"},
    new Project { BillCode = "124", Name = "Programmer", ProjectId = "PID02" }
};

var bill = new Employee
{
    EmployeeId = "EMP01",
    HireDate = DateTime.Now.AddMonths(-1),
    Manager = null,
    Name = "Bill Lumbergh",
    Projects = new Project[] {}
};

var peter  = new Employee
{
    EmployeeId = "EMP02",
    HireDate = DateTime.Now,
    Manager = bill,
    Name = "Peter Gibbons",
    Projects = projects
};

var cloudpeter = new SsdsEntity<Employee>
{
    Entity = peter,
    Id = peter.EmployeeId
};

var cloudbill = new SsdsEntity<Employee>
{
    Entity = bill,
    Id = bill.EmployeeId
};

//here is how we add flexible props
cloudpeter.Add<string>("ManagerId", peter.Manager.EmployeeId);

var table = _context.OpenContainer("initech");
table.Insert(cloudpeter);
table.Insert(cloudbill);

var cloudprojects = peter.Projects
    .Select(s => new SsdsEntity<Project>
    { 
        Entity = s,
        Id = Guid.NewGuid().ToString()
    });

//add some metadata to track the project to employee
foreach (var proj in cloudprojects)
{
    proj.Add<string>("RelatedEmployee", peter.EmployeeId);
    table.Insert(proj);
}

All this code does is create two employees and two projects and set the relationships between them.  Using the Add<K> method, I can insert any primitive type to go along for the ride with the POCO.  If we query the container now, this is what we see:

<s:EntitySet 
    xmlns:s="http://schemas.microsoft.com/sitka/2008/03/" 
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
    xmlns:x="http://www.w3.org/2001/XMLSchema">
  <Project>
    <s:Id>2ffd7a92-2a3b-4cd8-a5f7-55f40c3ba2b0</s:Id>
    <s:Version>1</s:Version>
    <ProjectId xsi:type="x:string">PID01</ProjectId>
    <Name xsi:type="x:string">TPS Slave</Name>
    <BillCode xsi:type="x:string">123</BillCode>
    <RelatedEmployee xsi:type="x:string">EMP02</RelatedEmployee>
  </Project>
  <Project>
    <s:Id>892dbb1e-ba47-4c87-80e6-64fbb46da935</s:Id>
    <s:Version>1</s:Version>
    <ProjectId xsi:type="x:string">PID02</ProjectId>
    <Name xsi:type="x:string">Programmer</Name>
    <BillCode xsi:type="x:string">124</BillCode>
    <RelatedEmployee xsi:type="x:string">EMP02</RelatedEmployee>
  </Project>
  <Employee>
    <s:Id>EMP01</s:Id>
    <s:Version>1</s:Version>
    <EmployeeId xsi:type="x:string">EMP01</EmployeeId>
    <Name xsi:type="x:string">Bill Lumbergh</Name>
    <HireDate xsi:type="x:dateTime">2008-05-25T23:59:49</HireDate>
  </Employee>
  <Employee>
    <s:Id>EMP02</s:Id>
    <s:Version>1</s:Version>
    <EmployeeId xsi:type="x:string">EMP02</EmployeeId>
    <Name xsi:type="x:string">Peter Gibbons</Name>
    <HireDate xsi:type="x:dateTime">2008-06-25T23:59:49</HireDate>
    <ManagerId xsi:type="x:string">EMP01</ManagerId>
  </Employee>
</s:EntitySet>

As you can see, I have stored extra data in my 'flexible' entity with the ManagerId property (on one entity) and RelatedEmployee property on the Project kinds.  This allows me to figure out later what objects are related to each other since we can't model the CLR objects relationships directly.  Let's see how this was done.

public class SsdsEntity<T> where T: class
{
    Dictionary<string, object> _propertyBucket = new Dictionary<string, object>();

    public SsdsEntity() { }

    [XmlIgnore]
    public Dictionary<string, object> PropertyBucket
    {
        get { return _propertyBucket; }
    }

    [XmlAnyElement]
    public XElement[] Attributes
    {
        get
        {
            //using XElement is much easier than XmlElement to build
            //take all properties on object instance and build XElement
            var props =  from prop in typeof(T).GetProperties()
                         let val = prop.GetValue(this.Entity, null)
                         where prop.GetSetMethod() != null
                         && allowableTypes.Contains(prop.PropertyType) 
                         && val != null
                         select new XElement(prop.Name,
                             new XAttribute(Constants.xsi + "type",
                                 XsdTypeResolver.Solve(prop.PropertyType)),
                             EncodeValue(val)
                             );

            //Then stuff in any extra stuff you want
            var extra = _propertyBucket.Select(
                e =>
                     new XElement(e.Key,
                        new XAttribute(Constants.xsi + "type",
                             XsdTypeResolver.Solve(e.Value.GetType())),
                            EncodeValue(e.Value)
                            )
                );

            return props.Union(extra).ToArray();
        }
        set
        {
            //wrap the XElement[] with the name of the type
            var xml = new XElement(typeof(T).Name, value);

            var xs = new XmlSerializer(typeof(T));

            //xml.CreateReader() cannot be used as it won't support base64 content
            XmlTextReader reader = new XmlTextReader(
                xml.ToString(),
                XmlNodeType.Document,
                null
                );

            this.Entity = (T)xs.Deserialize(reader);

            //now deserialize the other stuff left over into the property bucket...
            var stuff = from v in value.AsEnumerable()
                        let props = typeof(T).GetProperties().Select(s => s.Name)
                        where !props.Contains(v.Name.ToString())
                        select v;

            foreach (var item in stuff)
            {
                _propertyBucket.Add(
                    item.Name.ToString(),
                    DecodeValue(
                        item.Attribute(Constants.xsi + "type").Value,
                        item.Value)
                    );
            }
        }
    }

    public void Add<K>(string key, K value)
    {
        if (!allowableTypes.Contains(typeof(K)))
            throw new ArgumentException(
                String.Format(
                    "Type {0} not supported in SsdsEntity",
                    typeof(K).Name)
                );

        if (!_propertyBucket.ContainsKey(key))
        {
            _propertyBucket.Add(key, value);
        }
        else
        {
            //replace the value
            _propertyBucket.Remove(key);
            _propertyBucket.Add(key, value);
        }
    }
}

I have omitted the parts of SsdsEntity<T> from the first post that didn't change.  The only other addition you don't see here is a helper method called DecodeValue, which as you might guess, interprets the string value in XML and attempts to cast it to a CLR type based on the xsi:type that comes back.
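Since DecodeValue is not shown, here is a purely hypothetical reconstruction based on the behavior described and the xsi types visible in the wire format above; the real library's mapping may differ:

```csharp
// Hypothetical reconstruction of the omitted helper: interpret the
// string value from the XML and cast it to a CLR type based on the
// xsi:type attribute that comes back.
static object DecodeValue(string xsiType, string value)
{
    switch (xsiType)
    {
        case "x:string":
            return value;
        case "x:decimal":
            return Decimal.Parse(value, CultureInfo.InvariantCulture);
        case "x:boolean":
            return Boolean.Parse(value);
        case "x:dateTime":
            return DateTime.Parse(value, CultureInfo.InvariantCulture,
                DateTimeStyles.RoundtripKind);
        case "x:base64Binary":
            return Convert.FromBase64String(value);
        default:
            throw new NotSupportedException(xsiType);
    }
}
```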

All we did here was add a Dictionary<string, object> property called PropertyBucket that holds the extra stuff we want to associate with our T instance.  Then, in the getter and setter for the XElement[] property called Attributes, we add those extras into our array of XElement, and on deserialization we pull them back out and stuff them into the Dictionary.  With this simple addition, we have fixed our inflexibility problem.  We are still limited to the simple scalar types, but as you can see, you can work around this in a lot of cases by decomposing the objects down enough to be able to recreate them later.

The Add<K> method is a convenience only as we could operate directly against the Dictionary.  I also could have chosen to keep the Dictionary property bucket private and not expose it.  That would have worked just fine for serialization, but I wanted to also be able to query it later.

In my last post, I said I would introduce a library where all this code is coming from, but I didn't realize at the time how long this post would be and that I still need to cover querying.  So... next time, I will finish up this series by explaining how the strongly typed query model works and how all these pieces fit together to recompose the data back into objects (and release the library).

Tuesday, 17 June 2008

Working with Objects in SSDS Part 1

Last time we talked about SQL Server Data Services and serializing objects, we discussed how easy it was to use the XmlSerializer to deserialize objects using the REST interface.  The problem was that when we serialized objects using the XmlSerializer, it left out the xsi type declarations that we needed.  I gave two possible solutions to this problem - one that used the XmlSerializer and 'fixed' the output after the fact, and the other built the XML that we needed using XLINQ and Reflection.

Today, I am going to talk about a third technique that I have been using lately that I like better.  It uses some of the previous techniques and leverages a few tricks with XmlSerializer to get what I want.  First, let's start with a POCO (plain ol' C# object) class that we would like to use with SSDS.

public class Foo
{
    public string Name { get; set; }
    public int Size { get; set; }
    public bool IsPublic { get; set; }
}

In its correctly serialized form, it looks like this on the wire:

<Foo xmlns:s="http://schemas.microsoft.com/sitka/2008/03/"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:x="http://www.w3.org/2001/XMLSchema">
  <s:Id>someid</s:Id>
  <s:Version>1</s:Version>
  <Name xsi:type="x:string">My Foo</Name>
  <Size xsi:type="x:decimal">10</Size>
  <IsPublic xsi:type="x:boolean">false</IsPublic>
</Foo>

You'll notice that we have the additional system metadata attributes "Id" and "Version" in the markup.  We can account for the metadata attributes by doing something cheesy like deriving from a base class:

public abstract class Cheese
{
    public string Id { get; set; }
    public int Version { get; set; }
}

However, this is very unnatural, as our classes would all have to derive from our "Cheese" abstract base class (ABC).

public class Foo : Cheese
{
    public string Name { get; set; }
    public int Size { get; set; }
    public bool IsPublic { get; set; }
}

Developers familiar with remoting in .NET should be cringing right now as they remember the hassles associated with deriving from MarshalByRefObject.  In a world without multiple inheritance, this can be painful.  I want a model where I can use arbitrary POCO objects (redundant, yes I know) and not be forced to derive from anything or do what I would otherwise term unnatural acts.

What if instead, we derived a generic entity that could contain any other entity?

public class SsdsEntity<T> where T: class
{
    string _kind;

    public SsdsEntity() { }

    [XmlElement(Namespace = @"http://schemas.microsoft.com/sitka/2008/03/")]
    public string Id { get; set; }

    [XmlIgnore]
    public string Kind
    {
        get
        {
            if (String.IsNullOrEmpty(_kind))
            {
                _kind = typeof(T).Name;
            }
            return _kind;
        }
        set
        {
            _kind = value;
        }
    }

    [XmlElement(Namespace = @"http://schemas.microsoft.com/sitka/2008/03/")]
    public int Version { get; set; }

    [XmlIgnore]
    public T Entity { get; set; }
}

In this case, we have simply wrapped the POCO that we care about in a class that knows about the specifics of the SSDS wire format (or more accurately could serialize down to the wire format).

This SsdsEntity<T> is easy to use and provides access to the strongly typed object via the Entity property.


Now, we just have to figure out how to serialize the SsdsEntity<Foo> object and we know that the metadata attributes are taken care of and our original POCO object that we care about is included.  I call it wrapping POCOs in a thin SSDS veneer.

The trick to this is to add a bucket of XElement objects on the SsdsEntity<T> class that will hold our public properties on our class T (i.e. 'Foo' class).  It looks something like this:

[XmlAnyElement]
public XElement[] Attributes
{
    get
    {
        //using XElement is much easier than XmlElement to build
        //take all properties on object instance and build XElement
        var props =  from prop in typeof(T).GetProperties()
                     let val = prop.GetValue(this.Entity, null)
                     where prop.GetSetMethod() != null
                     && allowableTypes.Contains(prop.PropertyType)
                     && val != null
                     select new XElement(prop.Name,
                         new XAttribute(Constants.xsi + "type",
                            XsdTypeResolver.Solve(prop.PropertyType)),
                         EncodeValue(val)
                         );

        return props.ToArray();
    }
    set
    {
        //wrap the XElement[] with the name of the type
        var xml = new XElement(typeof(T).Name, value);

        var xs = new XmlSerializer(typeof(T));

        //xml.CreateReader() cannot be used as it won't support base64 content
        XmlTextReader reader = new XmlTextReader(
            xml.ToString(),
            XmlNodeType.Document,
            null);

        this.Entity = (T)xs.Deserialize(reader);
    }
}

In the getter, we use Reflection and pull back a list of all the public properties on the T object and build an array of XElement.  This is the same technique I used in my first post on serialization.  The 'allowableTypes' object is a HashSet<Type> that we use to figure out which property types we can support in the service (DateTime, numeric, string, boolean, and byte[]).  When this property serializes, the XElements are simply added to the markup.

The EncodeValue method shown is a simple helper method that correctly encodes string values, boolean, dates, integers, and byte[] values for the attribute.  Finally, we are using a helper method that returns from a Dictionary<Type,string> the correct xsi type for the required attribute (as determined from the property type).
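Neither helper is shown in the post; here is a minimal sketch of what they might look like.  The names XsdTypeResolver.Solve and EncodeValue come from the code above, but the bodies below are my own assumption, not the actual library implementation:

```csharp
using System;
using System.Collections.Generic;
using System.Xml;

//Sketch only - the bodies below are assumptions about what the
//referenced helpers do, based on the SSDS wire format shown earlier.
static class XsdTypeResolver
{
    static readonly Dictionary<Type, string> _typeMap =
        new Dictionary<Type, string>
        {
            { typeof(string), "x:string" },
            { typeof(int), "x:decimal" },      //SSDS numerics map to xs:decimal
            { typeof(decimal), "x:decimal" },
            { typeof(bool), "x:boolean" },
            { typeof(DateTime), "x:dateTime" },
            { typeof(byte[]), "x:base64Binary" }
        };

    public static string Solve(Type type)
    {
        return _typeMap[type];
    }
}

static class SsdsEncoding
{
    public static object EncodeValue(object val)
    {
        if (val is DateTime)
            return XmlConvert.ToString(
                (DateTime)val, XmlDateTimeSerializationMode.RoundtripKind);
        if (val is byte[])
            return Convert.ToBase64String((byte[])val);
        if (val is bool)
            return XmlConvert.ToString((bool)val); //lowercase "true"/"false"
        return val; //strings and numerics pass through unchanged
    }
}
```

For instance, `SsdsEncoding.EncodeValue(true)` yields `"true"` rather than the "True" that ToString() would produce, which matters because xsi-typed boolean content must be lowercase.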

For deserialization, what happens is that the [XmlAnyElement] attribute causes all unmapped elements (in this case, all the non-system-metadata attributes) to be collected into an array of XElement.  When we deserialize, if we simply wrap an enclosing element around this XElement collection, it is exactly what the XmlSerializer needs to deserialize T.  This is shown in the setter implementation.

It might look a little complicated, but now simple serialization will just work via the XmlSerializer.  Here is one such implementation:

public string Serialize(SsdsEntity<T> entity)
{
    //add a bunch of namespaces and override the default ones too
    XmlSerializerNamespaces namespaces = new XmlSerializerNamespaces();
    namespaces.Add("s", Constants.ns.NamespaceName);
    namespaces.Add("x", Constants.x.NamespaceName);
    namespaces.Add("xsi", Constants.xsi.NamespaceName);

    var xs = new XmlSerializer(
        entity.GetType(),
        new XmlRootAttribute(typeof(T).Name)
        );

    XmlWriterSettings xws = new XmlWriterSettings();
    xws.Indent = true;
    xws.OmitXmlDeclaration = true;

    using (var ms = new MemoryStream())
    {
        using (XmlWriter writer = XmlWriter.Create(ms, xws))
        {
            xs.Serialize(writer, entity, namespaces);
            ms.Position = 0; //reset to beginning

            using (var sr = new StreamReader(ms))
            {
                return sr.ReadToEnd();
            }
        }
    }
}

Deserialization is even easier since we are starting with the XML representation and don't have to build a Stream in memory.

public SsdsEntity<T> Deserialize(XElement node)
{
    var xs = new XmlSerializer(
        typeof(SsdsEntity<T>),
        new XmlRootAttribute(typeof(T).Name)
        );

    //xml.CreateReader() cannot be used as it won't support base64 content
    XmlTextReader reader = new XmlTextReader(
        node.ToString(),
        XmlNodeType.Document,
        null);
    
    return (SsdsEntity<T>)xs.Deserialize(reader);
}

If you notice, I am using an XmlTextReader to pass to the XmlSerializer.  Unfortunately, the XmlReader from XLINQ does not support handling of base64 content, so this workaround is necessary.

At this point, we have a working serializer/deserializer that can handle arbitrary POCOs.  There are some limitations of course:

  • We are limited to the same datatypes that SSDS supports.  This also means nested objects and arrays are not directly supported.
  • We have lost a little of the 'flexible' in the Flexible Entity (the E in the ACE model).  We now have a rigid schema defined by SSDS metadata and T public properties and enforced on our objects.

In my next post, I will attempt to address some of those limitations and I will introduce a library that handles most of this for you.

Wednesday, 11 June 2008

LINQPad - not just for LINQ

I officially love LINQPad.  Joe Albahari has done a great job of introducing a lightweight tool that is great for learning and prototyping LINQ queries.  From what I gather, Joe and Ben Albahari built this tool as part of their book offering.  It was so useful, it has taken on a life of its own.

It may not be entirely obvious, but it turns out you don't have to use LINQPad solely for LINQ queries.  You can actually prototype any snippet of code.  I have been using it now instead of SnippetCompiler (another great quick-snippet tool).

As an example, here is how to use System.DirectoryServices snippets inside of LINQPad:

Hit F4 to bring up the Advanced Query Properties Window


Add the System.DirectoryServices.dll reference in the Additional References window, and then add "System.DirectoryServices" in the Additional Namespace Imports window.

Now, just type your code normally and hit F5 when you are done:

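For example, with the reference and import in place, a snippet like this runs directly with F5 in "C# Statements" mode (the LDAP path here is made up; point it at your own domain):

```csharp
//hypothetical example - adjust the LDAP path to your environment
string ldapPath = "LDAP://dc=yourdomain,dc=com";

using (var entry = new DirectoryEntry(ldapPath))
using (var searcher = new DirectorySearcher(entry, "(sn=Dunn)"))
{
    foreach (SearchResult result in searcher.FindAll())
    {
        Console.WriteLine(result.Path);
    }
}
```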

This is a great little tool to have as you can query databases, build LINQ expressions, and visually inspect the results that come back pretty easily.  Now, as you can see you can also execute arbitrary code snippets as well.  Highly recommended.

Thursday, 05 June 2008

Paged Asynchronous LDAP Searches Revisited

A member in the book's forum mentioned some code I had originally posted here in the blog for asynchronous, paged searches in System.DirectoryServices.Protocols (SDS.P).  He questioned whether or not it was thread safe.  I honestly don't know - it might not be as I didn't test it extensively.

Regardless, I had actually moved on from that code and started using anonymous delegates for callbacks instead of events.  I liked this pattern a bit better because it also got rid of the shared resources.

After reading Stephen Toub's article on asynchronous stream processing, I learned about the AsyncOperationManager which was something I was missing in my implementation.  I have been doing a lot lately with .NET 3.5, LINQ, and lambda expressions, so I also decided to rewrite the anonymous delegates to lambda expressions.  That is not as big a change, but it is more concise.

I actively investigated using async iterators, but ultimately decided closures were more intuitive for me.  I might revisit this at some point and change my mind.  Here is the outcome:

public class AsyncSearcher
{
    LdapConnection _connect;

    public AsyncSearcher(LdapConnection connection)
    {
        this._connect = connection;
        this._connect.AutoBind = true; //will bind on first search
    }

    public void BeginPagedSearch(
            string baseDN,
            string filter,
            string[] attribs,
            int pageSize,
            Action<SearchResponse> page,
            Action<Exception> completed                
            )
    {
        if (page == null)
            throw new ArgumentNullException("page");

        AsyncOperation asyncOp = AsyncOperationManager.CreateOperation(null);

        Action<Exception> done = e =>
            {
                if (completed != null) asyncOp.Post(delegate
                {
                    completed(e);
                }, null);
            };

        SearchRequest request = new SearchRequest(
            baseDN,
            filter,
            System.DirectoryServices.Protocols.SearchScope.Subtree,
            attribs
            );

        PageResultRequestControl prc = new PageResultRequestControl(pageSize);

        //add the paging control
        request.Controls.Add(prc);

        AsyncCallback rc = null;

        rc = readResult =>
            {
                try
                {
                    var response = (SearchResponse)_connect.EndSendRequest(readResult);
                    
                    //let current thread handle results
                    asyncOp.Post(delegate
                    {
                        page(response);
                    }, null);

                    var cookie = response.Controls
                        .Where(c => c is PageResultResponseControl)
                        .Select(s => ((PageResultResponseControl)s).Cookie)
                        .Single();

                    if (cookie != null && cookie.Length != 0)
                    {
                        prc.Cookie = cookie;
                        _connect.BeginSendRequest(
                            request,
                            PartialResultProcessing.NoPartialResultSupport,
                            rc,
                            null
                            );
                    }
                    else done(null); //signal complete
                }
                catch (Exception ex) { done(ex); }
            };


        //kick off async
        try
        {
            _connect.BeginSendRequest(
                request,
                PartialResultProcessing.NoPartialResultSupport,
                rc,
                null
                );
        }
        catch (Exception ex) { done(ex); }
    }

}

It can be consumed very easily using something like this:

class Program
{
    static ManualResetEvent _resetEvent = new ManualResetEvent(false);
    
    static void Main(string[] args)
    {
        //set these to your environment
        string servername = "server.yourdomain.com";
        string baseDN = "dc=yourdomain,dc=com";

        using (LdapConnection connection = CreateConnection(servername))
        {
            AsyncSearcher searcher = new AsyncSearcher(connection);

            searcher.BeginPagedSearch(
                baseDN,
                "(sn=Dunn)",
                null,
                100,
                f => //runs per page
                {
                    foreach (var item in f.Entries)
                    {
                        var entry = item as SearchResultEntry;

                        if (entry != null)
                        {
                            Console.WriteLine(entry.DistinguishedName);
                        }
                    }

                },
                c => //runs on error or when done
                {
                    if (c != null) Console.WriteLine(c.ToString());
                    Console.WriteLine("Done");
                    _resetEvent.Set();
                }
            );

            _resetEvent.WaitOne();
            
        }

        Console.WriteLine();
        Console.WriteLine("Finished.... Press Enter to Continue.");
        Console.ReadLine();
    }

    static LdapConnection CreateConnection(string server)
    {
        LdapConnection connect = new LdapConnection(
            new LdapDirectoryIdentifier(server),
            null,
            AuthType.Negotiate
            );

        connect.SessionOptions.ProtocolVersion = 3;
        connect.SessionOptions.ReferralChasing = ReferralChasingOptions.None;

        connect.SessionOptions.Sealing = true;
        connect.SessionOptions.Signing = true;

        return connect;
    }
}

 

The important thing to note is that because everything is running asynchronously, it is totally possible for the end delegate to be invoked before the paging delegate has a chance to finish processing results (depending on how complicated your code is).  You would need to compensate for this yourself.

This client is a console application, so I am using a ManualResetEvent just to prevent it from closing before finishing.  You wouldn't need to do this in a WinForms or WPF app.

I am sure there are other optimizations you could make to pass in parameters or even other directory controls.  However, the general pattern should apply.

Wednesday, 09 April 2008

PhluffyFotos Sample Available

I just posted the first version of PhluffyFotos, our SQL Server Data Services (SSDS) sample app to CodePlex.  PhluffyFotos is a photo sharing site that allows users to upload photos and metadata (tags, description) to SSDS for storage.  As the service gets more features and is updated, the sample will be rev'd as well.

Points of interest that will likely also be blog posts in themselves:

  • This sample has a LINQ-to-SSDS provider in it.  You will notice we don't use any strings for queries, but rather lambda expressions.  I had a lot of fun writing the first version of this and I would expect that there are a few more revisions here to go.  Of course, Matt Warren should get a ton of credit here for providing the base implementation.
  • This sample also uses a very simplistic ASP.NET Role provider for SSDS.  Likely updates here will include encryption and hashing support.
  • We have a number of Powershell cmdlets included for managing authorities and containers.

I have many other ideas for this app as time progresses, so you should check back from time to time to see the updates.

In case anyone was wondering about the name: clouds are fluffy... get it?

You need to have SSDS credentials to run this sample.  If you don't have credentials yet, you can check out the online version at http://www.phluffyfotos.com in the meantime.

Even if you don't have access to SSDS credentials yet, the code is worth a look.

Monday, 03 December 2007

.NET 3.5 VPC and Resources Available

If you are interested in learning more about Visual Studio 2008, make sure you check out the Visual Studio 2008 Training Kit.  Weighing in at roughly 120MB compressed, it contains, "a full 5-days of technical content including 20 hands-on labs, 28 presentations, and 20 scripted demos.   The technologies covered in the kit include:  LINQ, C# 3.0, VB 9, WCF, WF, WPF, Windows CardSpace, Silverlight, ASP.NET Ajax, .NET Compact Framework 3.5, VSTO 3.0, Visual Studio Team System, and Team Foundation Server".

Naturally, you will want to have a machine setup to run all these labs and samples... so, thanks to the hard work of David and James, you can now download a VPC with Vista, Visual Studio 2008 (trial), and the .NET 3.5 framework pre-loaded and ready to run.  Get it here.

You want more?  Ok, how about 17 training videos describing the technologies and running through a number of demos?  Get those here.

These are truly some great resources to get you jumpstarted!

Tuesday, 30 October 2007

Implementing Change Notifications in .NET

There are three ways of figuring out things that have changed in Active Directory (or ADAM).  These have been documented for some time over at MSDN in the aptly titled "Overview of Change Tracking Techniques".  In summary:

  1. Polling for Changes using uSNChanged. This technique checks the 'highestCommittedUSN' value to start and then performs searches for 'uSNChanged' values that are higher subsequently.  The 'uSNChanged' attribute is not replicated between domain controllers, so you must go back to the same domain controller each time for consistency.  Essentially, you perform a search looking for the highest 'uSNChanged' value + 1 and then read in the results tracking them in any way you wish.
    • Benefits
      • This is the most compatible way.  All languages and all versions of .NET support this way since it is a simple search.
    • Disadvantages
      • There is a lot here for the developer to take care of.  You get the entire object back, and you must determine what has changed on the object (and if you care about that change).
      • Dealing with deleted objects is a pain.
      • This is a polling technique, so it is only as real-time as how often you query.  This can be a good thing depending on the application. Note, intermediate values are not tracked here either.
  2. Polling for Changes Using the DirSync Control.  This technique uses the ADS_SEARCHPREF_DIRSYNC option in ADSI and the LDAP_SERVER_DIRSYNC_OID control under the covers.  Simply make an initial search, store the cookie, and then later search again and send the cookie.  It will return only the objects that have changed.
    • Benefits
      • This is an easy model to follow.  Both System.DirectoryServices and System.DirectoryServices.Protocols support this option.
      • Filtering can reduce what you need to bother with.  As an example, if my initial search is for all users "(objectClass=user)", I can subsequently filter on polling with "(sn=dunn)" and only get back the combination of both filters, instead of having to deal with everything from the initial filter.
      • Windows 2003+ option removes the administrative limitation for using this option (object security).
      • Windows 2003+ option will also give you the ability to return only the incremental values that have changed in large multi-valued attributes.  This is a really nice feature.
      • Deals well with deleted objects.
    • Disadvantages
      • This is a .NET 2.0 or later only option.  Users of .NET 1.1 will need to use uSNChanged tracking.  Scripting languages cannot use this method.
      • You can only scope the search to a partition.  If you want to track only a particular OU or object, you must sort out those results yourself later.
      • Using this with non-Windows 2003 mode domains comes with the restriction that you must have the "Replicating Directory Changes" permission (by default, only administrators have it).
      • This is a polling technique, so it does not track intermediate values.  If an object you want to track changes multiple times between searches, you will only get the last change.  This can be an advantage depending on the application.
  3. Change Notifications in Active Directory.  This technique registers a search on a separate thread that will receive notifications when any object changes that matches the filter.  You can register up to 5 notifications per async connection.
    • Benefits
      • Instant notification.  The other techniques require polling.
      • Because this is a notification, you will get all changes, even the intermediate ones that would have been lost in the other two techniques.
    • Disadvantages
      • Relatively resource intensive.  You don't want to do a whole ton of these, as it could cause scalability issues on your domain controller.
      • This only tells you if the object has changed, but it does not tell you what the change was.  You need to figure out if the attribute you care about has changed or not.  That being said, it is pretty easy to tell if the object has been deleted (easier than uSNChanged polling at least).
      • You can only do this in unmanaged code or with System.DirectoryServices.Protocols.
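To make option #1 concrete, here is a rough sketch of what uSNChanged polling might look like in System.DirectoryServices.  The method and parameter names are my own, and error handling and deleted-object handling are omitted:

```csharp
using System;
using System.DirectoryServices;

//sketch of uSNChanged polling - always go back to the same DC,
//since uSNChanged is not replicated between domain controllers
static long PollForChanges(string server, string partitionDn, long lastUsn)
{
    using (var root = new DirectoryEntry("LDAP://" + server + "/RootDSE"))
    {
        //remember this watermark for the next polling cycle
        long highestUsn = Int64.Parse(
            root.Properties["highestCommittedUSN"].Value.ToString());

        string ldapPath = String.Format("LDAP://{0}/{1}", server, partitionDn);

        using (var partition = new DirectoryEntry(ldapPath))
        using (var searcher = new DirectorySearcher(
            partition,
            String.Format("(uSNChanged>={0})", lastUsn + 1)))
        {
            foreach (SearchResult result in searcher.FindAll())
            {
                //you get the whole object back - diffing it against
                //your stored copy is up to you
                Console.WriteLine(result.Path);
            }
        }

        return highestUsn;
    }
}
```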

For the most part, I have found that DirSync has fit the bill for me in virtually every situation.  I never bothered to try any of the other techniques.  However, a reader asked if there was a way to do the change notifications in .NET.  I figured it was possible using SDS.P, but had never tried it.  Turns out, it is possible and actually not too hard to do.

My first thought on writing this was to use the sample code found on MSDN (and referenced from option #3) and simply convert this to System.DirectoryServices.Protocols.  This turned out to be a dead end.  The way you do it in SDS.P and the way the sample code works are different enough that it is of no help.  Here is the solution I came up with:

public class ChangeNotifier : IDisposable
{
    LdapConnection _connection;
    HashSet<IAsyncResult> _results = new HashSet<IAsyncResult>();
    
    public ChangeNotifier(LdapConnection connection)
    {
        _connection = connection;
        _connection.AutoBind = true;
    }
 
    public void Register(string dn, SearchScope scope)
    {
        SearchRequest request = new SearchRequest(
            dn, //root the search here
            "(objectClass=*)", //very inclusive
            scope, //any scope works
            null //we are interested in all attributes
            );
 
        //register our search
        request.Controls.Add(new DirectoryNotificationControl());
 
        //we will send this async and register our callback
        //note how we would like to have partial results
        IAsyncResult result = _connection.BeginSendRequest(
            request,
            TimeSpan.FromDays(1), //set timeout to a day...
            PartialResultProcessing.ReturnPartialResultsAndNotifyCallback,
            Notify,
            request
            );
 
        //store the hash for disposal later
        _results.Add(result);
    }
 
    private void Notify(IAsyncResult result)
    {
        //since our search is long running, we don't want to use EndSendRequest
        PartialResultsCollection prc = _connection.GetPartialResults(result);
 
        foreach (SearchResultEntry entry in prc)
        {
            OnObjectChanged(new ObjectChangedEventArgs(entry));
        }
    }
 
    private void OnObjectChanged(ObjectChangedEventArgs args)
    {
        if (ObjectChanged != null)
        {
            ObjectChanged(this, args);
        }
    }
 
    public event EventHandler<ObjectChangedEventArgs> ObjectChanged;
 
    #region IDisposable Members
 
    public void Dispose()
    {
        foreach (var result in _results)
        {
            //end each async search
            _connection.Abort(result);
        }
    }
 
    #endregion
}
 
public class ObjectChangedEventArgs : EventArgs
{
    public ObjectChangedEventArgs(SearchResultEntry entry)
    {
        Result = entry;
    }
 
    public SearchResultEntry Result { get; set;}
}

It is a relatively simple class that you can use to register searches.  The trick is using the GetPartialResults method in the callback method to get only the change that has just occurred.  I have also included the very simplified EventArgs class I am using to pass results back.  Note, I am not doing anything about threading here and I don't have any error handling (this is just a sample).  You can consume this class like so:

static void Main(string[] args)
{
    using (LdapConnection connect = CreateConnection("localhost"))
    {
        using (ChangeNotifier notifier = new ChangeNotifier(connect))
        {
            //register some objects for notifications (limit 5)
            notifier.Register("dc=dunnry,dc=net", SearchScope.OneLevel);
            notifier.Register("cn=testuser1,ou=users,dc=dunnry,dc=net", SearchScope.Base);
 
            notifier.ObjectChanged += new EventHandler<ObjectChangedEventArgs>(notifier_ObjectChanged);
 
            Console.WriteLine("Waiting for changes...");
            Console.WriteLine();
            Console.ReadLine();
        }
    }
}
 
static void notifier_ObjectChanged(object sender, ObjectChangedEventArgs e)
{
    Console.WriteLine(e.Result.DistinguishedName);
    foreach (string attrib in e.Result.Attributes.AttributeNames)
    {
        foreach (var item in e.Result.Attributes[attrib].GetValues(typeof(string)))
        {
            Console.WriteLine("\t{0}: {1}", attrib, item);
        }
    }
    Console.WriteLine();
    Console.WriteLine("====================");
    Console.WriteLine();
}

And there you have it... change notifications in .NET.  You can also download my project file for Visual Studio 2008.

Friday, 14 September 2007

Using SQLDependency objects with LINQ

A question came up the other day on how to get LINQ to SQL to participate in SQL Server query notifications.  Of course, I didn't know, but Mike Pizzo from the ADO.NET team was kind enough to answer.  I figured it must be possible, and sure enough, it is.  Essentially, you have to create a SQL dependency context, which is very similar to a transaction context.  Any code that participates within that context will automatically be associated with the SqlDependency.  Create the dependency first, before any LINQ (or other data access) code runs.  Here is the relevant code (note: this code is not optimized, so you might want to do things like make the SqlDependency static or pass it in so it won't be garbage collected).

static class GlobalNotifications
{
    public static event OnChangeEventHandler OnChange;

    public static void InitializeNotifications(string connectString)
    {
        // Initialize notifications
        SqlDependency.Start(connectString);

        // Create and register a new dependency
        SqlDependency dependency = new SqlDependency();
        dependency.OnChange += new OnChangeEventHandler(NotificationCallback);

        System.Runtime.Remoting.Messaging.CallContext.SetData(
            "MS.SqlDependencyCookie", dependency.Id);
    }

    internal static void NotificationCallback(object o, SqlNotificationEventArgs args)
    {
        OnChange.Invoke(o, args);
    }
}

There is also one major caveat to using this with LINQ: beware of the complex queries that LINQ can so easily generate.  The query processor will invalidate such a command and fire an error event saying it was too complex.  Since generating complex queries is part of LINQ's power, you need to be cognizant of this limitation.
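To tie it together, usage might look like the sketch below.  NorthwindDataContext and the connection string are hypothetical stand-ins for your own LINQ to SQL context and database; the important part is the ordering, i.e. initializing the dependency context before any query runs:

```csharp
using System;
using System.Linq;

//hypothetical usage - NorthwindDataContext is a stand-in for your own
//LINQ to SQL DataContext
string connectString =
    "Data Source=.;Initial Catalog=Northwind;Integrated Security=true";

//set up the dependency context *before* any data access runs
GlobalNotifications.InitializeNotifications(connectString);
GlobalNotifications.OnChange += (sender, e) =>
    Console.WriteLine("Change detected: {0}", e.Info);

using (var db = new NorthwindDataContext(connectString))
{
    //this query now participates in the notification automatically;
    //keep it simple, or the query processor will reject it as too complex
    var londonCustomers = (from c in db.Customers
                           where c.City == "London"
                           select c).ToList();
}
```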

Friday, 10 August 2007

Range Retrieval using System.DirectoryServices.Protocols

Link pair attributes in Active Directory and ADAM can be quite big.  I don't know the official limit, but needless to say, for practical purposes you can assume they are quite large indeed.  By default, AD and ADAM will not return the entire attribute if it contains more than a certain number of values (1000 for Windows 2000 and 1500 for Windows 2003+ by default).  As such, if you truly want robust code, you need to always use what is called range retrieval for link value paired attributes.

Range retrieval is a process similar to paging in directory services, whereby you ask the directory for a certain range of a particular attribute.  You know that you are using range retrieval when you see the attribute being requested in the following format:

"[attribute];range=[start]-[end]"

As an example, in the case of the 'member' attribute, you might ask for the first 1500 values like so:

"member;range=0-1499"

Notice that it is zero-based, so you need to take this into account.  The general algorithm is:

  1. Ask for as much as you can get.  This means using "*" for the ending range to ask for it all.
  2. The directory will respond with either the actual max value (some integer) as the ending range, or with a "*" indicating you got everything.  If you got everything, you are done.
  3. If not, using the returned max value as your step size, repeatedly ask for the next range inside a loop until the directory responds with a "*" as the end range.
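The steps above boil down to generating a series of ranged attribute names.  This little helper (my own illustration, not from the book's code) shows the progression, assuming the server caps pages at 1500 values:

```csharp
using System;

//builds the ranged attribute name the algorithm would request
static string RangeAttribute(string attribute, int start, object end)
{
    return String.Format("{0};range={1}-{2}", attribute, start, end);
}

//first request asks for everything:   member;range=0-*
//the server answers with              member;range=0-1499
//so the following requests become    member;range=1500-2999,
//                                    member;range=3000-4499, ...
//until the server's answer ends in "*"
```

For example, `RangeAttribute("member", 0, "*")` yields `"member;range=0-*"`, and `RangeAttribute("member", 1500, 2999)` yields `"member;range=1500-2999"`.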

We covered how to use range retrieval in SDS in our book, and you can download sample code that shows how from the book's website.  What we didn't cover was how to do it in SDS.P.

SDS.P is a layer closer to the metal than our ADSI-based System.DirectoryServices (SDS).  As such, if you are expected to do range retrieval in SDS, you can be assured that you need to do it in SDS.P as well.  Adapting the SDS code, you get something like this:

static List<string> RangeRetrieve(
    LdapConnection connect, string dn, string attribute)
{
    int idx = 0;
    int step = 0;
    List<string> list = new List<string>();

    string range = String.Format(
        "{0};range={{0}}-{{1}}",
        attribute
        );

    string currentRange = String.Format(range, idx, "*");

    SearchRequest request = new SearchRequest(
        dn,
        String.Format("({0}=*)", attribute),
        SearchScope.Base,
        new string[] { currentRange }
        );

    SearchResultEntry entry = null;
    bool lastSearch = false;

    while (true)
    {
        SearchResponse response =
            (SearchResponse)connect.SendRequest(request);

        if (response.Entries.Count == 1) //should only be one
        {
            entry = response.Entries[0];

            //this might be optimized to find full step or just use
            //1000 for compromise
            foreach (string attrib in entry.Attributes.AttributeNames)
            {
                currentRange = attrib;
                lastSearch = currentRange.IndexOf("*", 0) > 0;
                step = entry.Attributes[currentRange].Count;
            }

            foreach (string member in
                entry.Attributes[currentRange].GetValues(typeof(string)))
            {
                list.Add(member);
                idx++;
            }

            if (lastSearch)
                break;

            currentRange = String.Format(range, idx, (idx + step));

            request.Attributes.Clear();
            request.Attributes.Add(currentRange);
        }
        else
            break;
    }
    return list;
}

Happy coding... of course, if you are clever you will realize you can avoid all this range retrieval mess by using an attribute scope query (ASQ). :)

*edit: tried to fix the style for code to render in Google Reader correctly

Wednesday, 01 August 2007

Getting Active Directory Group Membership in .NET 3.5

I have previously covered pretty extensively the options for getting a user's group membership in Active Directory or ADAM (soon to be Active Directory Lightweight Directory Services (AD LDS)) here on the blog, in the forum, and in the book.  However, there is a new option for users of .NET 3.5 that should be of interest.

The Directory Services group at Microsoft has released in beta form a new API for dealing with a lot of the common things we need to do with users, groups, and computers in Active Directory, ADAM, and the local machine.  This API is called System.DirectoryServices.AccountManagement (or SDS.AM).  Here is a simple example of how to get a user's groups (including nested and primary):

static void Main(string[] args)
{
    PrincipalContext ctx = new PrincipalContext(ContextType.Domain);

    using (ctx)
    {
        Principal p = Principal.FindByIdentity(ctx, "ryandunn");

        using (p)
        {
            var groups = p.GetGroups();
            using (groups)
            {
                foreach (Principal group in groups)
                {
                    Console.WriteLine(group.SamAccountName + "-" + group.DisplayName);
                }
            }
        }

    }
    Console.ReadLine();
}

That's not too bad - in fact, it looks worse than it is because I am trying to make sure everything is wrapped in a 'using' statement where necessary.  The equivalent code to do this would be many times longer (using DsCrackNames or LDAP searches) and would yield far less information (just the DN in most cases).

Over the next few weeks and months, I intend to dig more deeply into this namespace and put some samples up here for everyone.  This is just a taste for now, but it should show you how powerful this namespace really is.

 *Updated to fix CSS renderings in Google Reader

Friday, 27 July 2007

Discovering IIS7 Schema

One of the major changes introduced with IIS7 is the removal of the metabase.  This probably comes as a great relief to many of the admins and developers that struggled with coordinating configuration across multiple machines.  Instead of the metabase, IIS7 has chosen to schematize all the web server settings and host them centrally in a new XML file called applicationHost.config located at '%windir%\system32\inetsrv\config'.

The schema itself can be found in the 'schema' folder right below the 'config' directory.  This new centralized configuration file holds all the global default values for IIS7 as well as defines what sites, applications, virtual directories, app pools, etc. are on the server.

Let's take a look at just a couple of items I found in my applicationHost.config:

<applicationPools>
    <add name="DefaultAppPool" />
    <add name="Classic .NET AppPool" managedPipelineMode="Classic" />
    <applicationPoolDefaults>
        <processModel identityType="NetworkService" />
    </applicationPoolDefaults>
</applicationPools>

Here is the configuration of the application pools on my IIS7 server.  Let's see what happens if we just edit this directly and add a new application pool.

AppPool

If we open our new IIS Manager (Start > Run > inetmgr), we can see that this is all there is to it.

AppPoolsInManager

Further poking around will yield lists of sites, applications, and virtual directories as well - all described neatly in XML format.  Now, how did the IIS team put all this together?  Were they using their own black box implementation?  If we inspect the schema, we will find there is no magic here and the IIS team is leveraging its own schema system for IIS itself.  Opening the "IIS_Schema.xml" file found in the 'schema' directory shows us the exact settings available for 'applicationPools'.

  <sectionSchema name="system.applicationHost/applicationPools">
    <collection addElement="add" defaultElement="applicationPoolDefaults">
      <attribute name="name" type="string" required="true" isUniqueKey="true" validationType="applicationPoolName" />
      <attribute name="queueLength" type="uint" defaultValue="1000" validationType="integerRange" validationParameter="10,65535"/>
      <attribute name="autoStart" type="bool" defaultValue="true" />
      <attribute name="enable32BitAppOnWin64" type="bool" defaultValue="false" />
      <attribute name="managedRuntimeVersion" type="string" defaultValue="v2.0" />
      <attribute name="managedPipelineMode" type="enum" defaultValue="Integrated">
        <enum name="Integrated" value="0" />
        <enum name="Classic" value="1" />
      </attribute>
      <attribute name="passAnonymousToken" type="bool" defaultValue="true" />
      <element name="processModel">
        <attribute name="identityType" type="enum" defaultValue="NetworkService">
          <enum name="LocalSystem" value="0"/>
          <enum name="LocalService" value="1"/>
          <enum name="NetworkService" value="2"/>
          <enum name="SpecificUser" value="3"/>
        </attribute>
            <...SNIP...>
  </sectionSchema>

As we can see here, there are some basic typed attributes that describe our application pool.  They have things like types, default values, and even enumerations with possible values (managedPipelineMode is one such).  There really is no magic here.  Someone could very easily write a program that inspected the schema of IIS7 fully and presented all possible options just by inspecting this schema.
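Since the schema is plain XML, a short sketch shows how easily it can be inspected programmatically.  This is a rough illustration, not anything the IIS team ships: the class and method names are mine, and the embedded string is a trimmed copy of the applicationPools section shown above.

```csharp
using System;
using System.Collections.Generic;
using System.Xml.Linq;

public static class SchemaDump
{
    // A trimmed copy of the applicationPools section from IIS_Schema.xml
    public const string SampleSchema = @"
<sectionSchema name='system.applicationHost/applicationPools'>
  <collection addElement='add' defaultElement='applicationPoolDefaults'>
    <attribute name='name' type='string' required='true' isUniqueKey='true' />
    <attribute name='queueLength' type='uint' defaultValue='1000' />
    <attribute name='managedPipelineMode' type='enum' defaultValue='Integrated'>
      <enum name='Integrated' value='0' />
      <enum name='Classic' value='1' />
    </attribute>
  </collection>
</sectionSchema>";

    // Returns one line per attribute: name, type, and default value (if any)
    public static List<string> DescribeAttributes(string schemaXml)
    {
        List<string> lines = new List<string>();
        XElement section = XElement.Parse(schemaXml);

        foreach (XElement attr in section.Descendants("attribute"))
        {
            string def = (string)attr.Attribute("defaultValue") ?? "<none>";
            lines.Add(String.Format("{0} ({1}) default={2}",
                (string)attr.Attribute("name"),
                (string)attr.Attribute("type"),
                def));
        }
        return lines;
    }

    public static void Main()
    {
        foreach (string line in DescribeAttributes(SampleSchema))
            Console.WriteLine(line);
    }
}
```

Pointing the same parsing at the real files under '%windir%\system32\inetsrv\config\schema' would enumerate every setting the server understands.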

What should immediately come to mind as a developer is, "How can I leverage this system?"  If there is one word that describes IIS7, it is "Extensible".  In my next post, I will show you how to leverage this powerful schema system that IIS7 introduces for your own application purposes without code.

Thursday, 28 June 2007

Directory Services Samples Milestone

I was checking on the book's website the other day and noticed that we broke the 54,000 mark for downloads of our sample code.  That really surprised me.  I just didn't think that there were that many people in the world working on these types of scenarios.

Now, I happen to know that we did not sell that many copies of the book, so this means a lot of people are downloading just the samples.  For the record, that is perfectly fine and we encourage people to look at the samples.  Admittedly, some of the samples might be head-scratchers without the book for context, but hey, it's free and so is our time on the forums.

Joe and I log a lot of time here and in other forums helping people and we love to hear your feedback on what works and what doesn't.  Give us a shout when samples don't make sense, or when a scenario seems to be overly painful.  Some people have asked how they can repay us for our time (I have had offers for beer, wine, and lodging so far).  A simple thanks is enough for both of us, however, and perhaps a recommendation to others.  If you really feel the need to contribute beyond nice words - it's simple: just buy the book!

Friday, 01 June 2007

Working with Large Amounts of Data in Directory Services

I almost missed this one from Tomek, but he has a good analysis of what happens when you have many results to return from the directory and a nice comparison of how the different stacks (System.DirectoryServices vs. System.DirectoryServices.Protocols) handle it.  The moral of the story:  if you have a ton of results coming back, it might be in your interest to pursue using the Protocols stack.

Sunday, 15 April 2007

Introduction to System.DirectoryServices.Protocols (S.DS.P)

Fellow MVP, Ethan Wilansky has a new article on MSDN outlining the System.DirectoryServices.Protocols stack.  I haven't had a chance to read every last word in it yet (it's a huge article!), but it appears to show roughly 80% of everything you might want to do with SDS.P.  Check it out.

Link to Introduction to System.DirectoryServices.Protocols (S.DS.P)

Thursday, 05 April 2007

Transitive Link Value Filter for SP1

If hot-LDAP-filter-action is your thing, but you were let down in my last post since it required SP2 and Longhorn, then this should get you all hot and bothered again:  Hotfix for SP1.

I know, it's a hotfix - which means you have to contact Microsoft to get it.  But if you want to take advantage of the new LDAP_MATCHING_RULE_IN_CHAIN without upgrading to SP2 or Longhorn, then this is it.

Tuesday, 03 April 2007

Expression Web and Blend on MSDN soon

Straight from Somasegar.  This is good news for developers trying to fit in with their visual counterparts.  Initially these were targeted at the professional designer instead of the developer.  I personally think that those two overlap quite a bit.  It was a good move to put this into developer's hands.  You can be sure that they will get used now...

Friday, 30 March 2007

IIS7 and 404.3 Error

Here is something that I am sure developers will run into as they start to create web sites on their local Vista boxes using IIS7.  I thought I would just put this out here to save people some time.  If you are like me, you will go to the 'Turn Windows features on or off' selection and simply check the IIS7 option like so:

If done correctly, this installs the ASP.NET environment and all the appropriate stuff.  If not, you get an interesting error when you browse to the first .aspx page on your site.  Specifically, you will get a nice 404.3 error similar to this:

The part that should tip you off that something is completely misconfigured is the part of the error message (error messages are sooo much nicer these days) where the module is being reported as the "StaticFileModule".  Static files are things like .html files, jpegs, etc. where we are not performing any server-side logic.  We know that our .aspx files need to be processed by the .NET runtime, so we should see something else as the handler there.

Now, here comes the confusing part.  Bring up the IIS Manager (Windows Key > "Inetmgr"), and view the Handler Mappings for your site and you will see something similar to this:

Notice that all of the ASP.NET extensions are missing?  There are no handlers defined for the .aspx, .asmx, .ashx or any other .NET extensions.  If you bring up the handlers installed on the machine however, you will see that they are all just fine:

At this point, you may be like me and scratching your head asking, "why are my handlers not being inherited correctly?"  If you are like me, then you will probably try to delete your web app and then add it back.  When that fails, you will try to uninstall and reinstall IIS7.  When that doesn't work, you will try to reinstall ASP.NET manually from the command line using "aspnet_regiis -i" or something similar.  When that fails, you will spend hours on Google trying to see what other people have done.  It might also cross your mind to just add the handlers manually to your configuration.  If you are like me, however, that seems dirty and you will keep trying.

To save you some time, it actually turns out it is pretty easy to fix.  Simply DELETE the "Default Web Site", and then ADD it back.  The Handlers will be re-applied correctly and your virtual directory or web site will have all the correct handlers installed.  It is confusing why this even happens with a fresh install, but it appears to be a frequent occurrence according to Google searches.  Coming from IIS 5.1 and 6, it might be counterintuitive to delete a site, but this is actually not a big deal in Vista since you can have as many of these as you like (no artificial limits anymore).  I hope this saves someone some time.

Note to the IIS7 team: man, it would really be nice if you had a button that said something like "apply inherited handlers".

Tuesday, 20 March 2007

Transitive Link Value Filter Evaluation

I was speaking with Eric Fleischman at the MVP summit this year and he told me about a neat feature you will find in Windows Server 2003 SP2 and Longhorn server.  It is a new type of matching rule ID filter that allows for transitive link value evaluation.  This is one filter type that is incredibly useful and you will want to know about.

First some background: a matching rule ID filter is a special syntax filter that allows for an arbitrary search behavior as defined by the matching rule.  Active Directory and ADAM only shipped with two matching rules until recently: LDAP_MATCHING_RULE_BIT_AND (1.2.840.113556.1.4.803) and LDAP_MATCHING_RULE_BIT_OR (1.2.840.113556.1.4.804).  We commonly used these matching rules to check our bitwise flag values.  For instance, here is the key portion of the filter that specifies an account is disabled:

(userAccountControl:1.2.840.113556.1.4.803:=2)

Notice that we have the attribute name we are searching on, the rule OID we want to use (the AND rule), and the value to check (in decimal).  Pretty simple, right?

The new matching rule is called LDAP_MATCHING_RULE_IN_CHAIN and it has an OID of 1.2.840.113556.1.4.1941.  This new rule allows us to search across all DN-syntax attributes recursively and evaluate the entire tree of relationships (hence the transitive name).

Evaluating Group Membership

Where we will typically see this used is in group membership evaluation.  Specifically, it answers two questions that are constantly asked by developers:  What groups are my user a member of? and What users are in this group?

It is really simple to use this filter, so here we go.  The first question is, what groups is my user a member of?

(member:1.2.840.113556.1.4.1941:=CN=User1,OU=X,DC=domain,DC=com)

The next question, what users are in this group?

(memberOf:1.2.840.113556.1.4.1941:=CN=A Group,OU=Y,DC=domain,DC=com)

If we base this search on the main partition and use a subtree search, it will return all the matches across the domain.  However, if we scope the second search to a specific user object and use a base search, it is a quick and dirty way of telling us whether the user is a member of the group.  Hence, this would also work as a type of IsInRole() function:

public bool IsUserMember(DirectoryEntry user, string groupDN)
{
    string filter = String.Format(
        "(memberOf:1.2.840.113556.1.4.1941:={0})", groupDN);

    DirectorySearcher ds = new DirectorySearcher(
        user,
        filter,
        null,
        SearchScope.Base);

    return (ds.FindOne() != null);
}

Now, I want to also point out that this sort of code also makes me cringe a little bit thinking about the abuse that can occur...  Remember, it is performing a search each and every time you want to check group membership.  It is still a better idea to build the entire group membership of a user and store it in one of the IPrincipal classes and use the .IsInRole() functionality to keep network access to a minimum.

Creating an Org Chart

The other area where we will find this filter being pretty handy is when we want to find all the users that directly and indirectly report to a single person.  This is the typical situation when building org charts or trying to find the users for a mailing list.  Here is one such example:

public static void GetOrgChart(DirectoryEntry entry, string bossDN)
{
    string filter = String.Format(
        "(&(mail=*)(manager:1.2.840.113556.1.4.1941:={0}))",
        bossDN);

    DirectorySearcher ds = new DirectorySearcher(
        entry,
        filter
        );

    using (SearchResultCollection src = ds.FindAll())
    {
        foreach (SearchResult sr in src)
        {
            Console.WriteLine(sr.Properties["mail"][0]);
        }
    }
}
  

Performance

The next question we will want to answer for this new filter type is the performance relative to the code required to process this recursively.  I will use the code I presented here to test the recursive case and compare it to a simple filter of:

(memberOf:1.2.840.113556.1.4.1941:=CN=BigNestedGroup,OU=X,DC=Y)

So, to cut to the chase, how does the new filter compare to recursively chasing DN link pairs yourself?  Short answer... it doesn't.  The code I wrote for the book and here on the blog blows it away by a factor of 10.  I have to admit that before I ran the tests, I expected the new filter to run circles around my code and I was pretty shocked when the reverse was actually true.  I tested this using 3 nested groups each with 100 members and running both the recursive search and the transitive search 100 times and averaging the results.  To expand the lead group with 300 direct and indirect members took roughly half a second using a transitive filter (409ms) as compared to only (41ms) using the recursive search.  This would have a big impact on server apps serving many of these searches, but would probably not be a huge factor in client side apps or where a smallish number of these searches are performed.

Summary

The transitive filter is a great new addition and can greatly simplify the code that you have to write.  This new filter is not without pitfalls however.  Make sure you are cognizant of the performance tradeoffs that are inherent in choosing this filter as it is considerably slower to use. 

Monday, 12 March 2007

A New Look

I have been playing with a new look for the site. I think this particular one is a bit cleaner looking than the previous one that shipped with DasBlog. I know that there are some bugs to be worked out, but I like it well enough that I think I will go ahead and put the site live with the new look and tweak stuff later (it is mostly the admin UI).

What do you think?

Friday, 09 February 2007

ASP.NET ObjectDataSource and LDAP

One of the neater abstractions we get using ASP.NET 2.0 is the ObjectDataSource. Essentially it allows us to specify our own objects or some arbitrary object graph as a datasource for databinding operations. Combined with the new GridView class it can be very powerful. Built in to both of these objects are methods for Selecting, Updating, Deleting, and Inserting. This allows you to plug-in your own code to manage each of these operations.

If we want to adapt this to support reading and updating (including adds and deletes) from AD or ADAM, we just need to put a few methods in place and hook them into the ObjectDataSource abstraction. The nice thing here is that it includes parameter support pretty easily. This allows us to add new values or update existing values. By specifying a data key (DataKeyNames) on the Gridview, it also allows us to uniquely index which object we are modifying.

I chose for this example to show a simple Select, Update, and Delete using a GridView and the ObjectDataSource. It only required me to configure 3 methods (one for each) and declaratively setup the parameters I would be using. I chose to use the 'objectGuid' for the key name since this is guaranteed to be unique and will always point me to the right object. However, I could also have used the DN as the key value in this case as well. If I had done that, it would have saved me a little bit of complication due to the need to rebind when using the GUID value for certain operations.

Keep in mind that this is just a simple sample of how this can be done and is not something that should be put into production as-is. The whole point is to show how the GridView gives us easy updating and reading of values, while the ObjectDataSource is used as our abstraction for operations on data. In the past, this would not have worked well because the DataGrid was so geared towards relational data like SQL. It would have been much more convoluted to get the same functionality. Being able to write only 3 methods and have a working ASP.NET viewing and editing application is pretty neat.
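As a rough sketch of the shape this takes (this is not the downloadable sample - the class, method, and container names here are hypothetical, and it requires connectivity to a directory), the three methods an ObjectDataSource points at might look like this:

```csharp
using System;
using System.Collections.Generic;
using System.DirectoryServices;

// A hypothetical user row for binding; names are my own, not from the sample.
public class LdapUser
{
    public Guid ObjectGuid { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
}

// Hypothetical data-access class an ObjectDataSource could point at, e.g.:
//   <asp:ObjectDataSource ID="ods" runat="server" TypeName="LdapUserStore"
//       SelectMethod="GetUsers" UpdateMethod="UpdateUser"
//       DeleteMethod="DeleteUser" />
public class LdapUserStore
{
    const string Root = "LDAP://OU=Users,DC=domain,DC=com"; //assumed container

    // Binds directly to an object by its objectGuid (the GridView data key)
    public static string GuidPath(Guid objectGuid)
    {
        return "LDAP://<GUID=" + objectGuid.ToString() + ">";
    }

    public List<LdapUser> GetUsers()
    {
        List<LdapUser> users = new List<LdapUser>();

        using (DirectoryEntry root = new DirectoryEntry(Root))
        using (DirectorySearcher ds = new DirectorySearcher(
            root,
            "(&(objectCategory=person)(objectClass=user))",
            new string[] { "objectGuid", "sAMAccountName", "description" }))
        using (SearchResultCollection src = ds.FindAll())
        {
            foreach (SearchResult sr in src)
            {
                LdapUser user = new LdapUser();
                user.ObjectGuid = new Guid((byte[])sr.Properties["objectGuid"][0]);
                user.Name = (string)sr.Properties["sAMAccountName"][0];

                if (sr.Properties.Contains("description"))
                    user.Description = (string)sr.Properties["description"][0];

                users.Add(user);
            }
        }
        return users;
    }

    // Parameter names must match the bound field names on the GridView
    public void UpdateUser(Guid objectGuid, string description)
    {
        using (DirectoryEntry user = new DirectoryEntry(GuidPath(objectGuid)))
        {
            user.Properties["description"].Value = description;
            user.CommitChanges();
        }
    }

    public void DeleteUser(Guid objectGuid)
    {
        using (DirectoryEntry user = new DirectoryEntry(GuidPath(objectGuid)))
        {
            user.DeleteTree();
        }
    }
}
```

The "LDAP://&lt;GUID=...&gt;" binding syntax is what makes the objectGuid data key convenient: it lets the update and delete methods bind straight to the object without a search.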

You can download the sample here.

Wednesday, 24 January 2007

Getting the NETBIOS domain name

The task of figuring out your Active Directory NETBIOS domain name comes up now and again.  This is the process of turning something like "DC=yourdomain,DC=com" into something like "YOURDOMAIN".  This is important if you want to do things like prefix this name on the 'sAMAccountName' of a user for instance.  There are three ways of doing this (perhaps more).  I will cover each broadly.

Guess

Yes, that's right.  You can pretty much guess what it is by parsing the DN of the defaultNamingContext.  If you had something like "DC=yourdomain,DC=com", it seems a pretty good guess that the NETBIOS name is "YOURDOMAIN".  However, this method of course is not foolproof.  For whatever reason, your Active Directory admins might have decided to do something stupid - like say add a big dash (-) and some other random nonsense into the NETBIOS name.  Who knows why they do this, but hey, they can and do (I suspect they hate their users).  So, in this case, "DC=yourdomain,DC=com" might have a NETBIOS name of "YOURDOMAIN-CORP" for instance (it also means you have to type this every damn time you need to supply credentials as well in NT4 domain\user format).
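The guess itself is a one-liner; here is a sketch (the class name is my own, and as noted above, it breaks whenever the admins picked a NETBIOS name that does not match the first DC component):

```csharp
using System;

public static class NetbiosGuess
{
    // Naive guess: take the value of the first 'DC=' component and upper-case it.
    public static string Guess(string defaultNamingContext)
    {
        string first = defaultNamingContext.Split(',')[0]; // e.g. "DC=yourdomain"
        return first.Substring(first.IndexOf('=') + 1).ToUpperInvariant();
    }

    public static void Main()
    {
        Console.WriteLine(Guess("DC=yourdomain,DC=com")); // YOURDOMAIN
    }
}
```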

DsCrackNames

You can simply use DsCrackNames to convert from DN format to NT4 format and it will work fine.  Pass the DN (e.g. "DC=yourdomain,DC=com") and you will get back the NETBIOS name.  This includes IADsNameTranslate if that is up your alley too.  Since there is no easy way to use either from .NET unless you build your own, this might not be your first choice.

LDAP Query

Finally, we get to the last method.  First, we connect to the RootDSE of the domain and inspect the following two properties:  "configurationNamingContext" and "defaultNamingContext".  Then we bind to the configuration partition (using the first of these properties) and perform the following search:  (&(objectCategory=crossRef)(nCName={0})) where {0} is the value you just got from the RootDSE for the 'defaultNamingContext'.  You should get one result.  Now, just read back the 'nETBIOSName' attribute and there you have it.
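The steps above can be sketched as follows.  This is a minimal sketch with names of my own choosing; it requires a reachable domain controller, and I am binding to the whole configuration partition with a subtree search (the crossRef objects actually live under its Partitions container):

```csharp
using System;
using System.DirectoryServices;

public static class NetbiosLookup
{
    // Builds the crossRef filter from the defaultNamingContext DN
    public static string BuildFilter(string defaultNamingContext)
    {
        return String.Format(
            "(&(objectCategory=crossRef)(nCName={0}))",
            defaultNamingContext);
    }

    // Performs the RootDSE + configuration partition lookup described above
    public static string GetNetbiosName()
    {
        using (DirectoryEntry rootDse = new DirectoryEntry("LDAP://RootDSE"))
        {
            string configNC =
                (string)rootDse.Properties["configurationNamingContext"].Value;
            string defaultNC =
                (string)rootDse.Properties["defaultNamingContext"].Value;

            using (DirectoryEntry config = new DirectoryEntry("LDAP://" + configNC))
            using (DirectorySearcher ds = new DirectorySearcher(
                config,
                BuildFilter(defaultNC),
                new string[] { "nETBIOSName" },
                SearchScope.Subtree))
            {
                SearchResult result = ds.FindOne(); //should be exactly one match
                return (string)result.Properties["nETBIOSName"][0];
            }
        }
    }
}
```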

If you are shipping software that needs to do this, obviously you should use one of the last two methods.  Guessing probably will just not cut it for anything that is just not quick and dirty.

Monday, 30 October 2006

Asynchronous LDAP Searching with System.DirectoryServices.Protocols

I have been hanging on to this post for some time now, never getting around to polishing it up and putting it out.  Eric Fleischman’s recent posting on SDS.P has inspired me to get some more content out there, however.  For whatever reason, this is a poorly documented topic that probably deserves a series of postings.  We mostly neglected this topic in the book because we had concerns about exceeding a certain # of pages on an already pretty dense topic.  I wonder if there would be any demand for a book on this subject?

Previously, I demonstrated how you need to perform paging in SDS.P using the cookie and the paging control.  There is a lot more to take care of here than how it is done with ADSI and System.DirectoryServices (SDS).  Now, what if we really wanted to perform this type of search asynchronously?  We actually wrote a sample in the book to show how asynchronous searching is done in SDS.P, but to keep it simple I left out the whole problem of paging.  Let’s be honest, any useful searching is probably going to need to handle paging as well.

SDS.P supports asynchronous searching directly, unlike SDS.  It is entirely possible to emulate this behavior in SDS by leveraging a general asynchronous pattern supported in .NET (e.g. by way of delegates, background thread workers, or the thread pool), but there is nothing in SDS that supports asynchronous searching directly.  Hold on, you say!  What about .NET 2.0 and the fancy new property on the DirectorySearcher called Asynchronous?  That is what we call a very misleading property.  While it actually sets the underlying ADS_SEARCHPREF_ASYNCHRONOUS option on the IDirectorySearch interface, the SDS model consumes the results synchronously, netting you exactly nothing.  I honestly wish they (the SDS .NET team) had just not exposed that one if you can't really use it.

Let’s look at the most basic pattern for how this can be accomplished:

  1. Create a connection to the LDAP directory (LdapConnection).
  2. Create a searching request operation (SearchRequest)
  3. Add a paging control to the SearchRequest to control paging
  4. Invoke the method asynchronously using LdapConnection.BeginSendRequest
  5. Provide a callback method to handle the results.

Simple, no?  It actually looks harder than it is.

When I set out creating this sample, I had a particular use in mind.  Specifically, I wanted to create an easy searching object - something where I wouldn't have to worry about paging and that could also handle performing multiple searches on the same instance.  That last point, as you will see, makes for additional complication.  I started with my usage model - how I visualized that I wanted to use it:

using (LdapConnection connection = CreateConnection(servername))
{
    AsyncSearcher searcher = CreateSearcher(connection);
 
    //this call is asynch, so we need to keep this main
    //thread alive in order to see anything
    //we can use the same searcher for multiple requests - we just have to track which one
    //is which, so we can interpret the results later in our events.
    _firstSearch = searcher.BeginPagedSearch(baseDN, "(sn=d*)", null, 500);
    _secondSearch = searcher.BeginPagedSearch(baseDN, "(sn=f*)", null, 500);
 
    //we will use a reset event to signal when we are done (using Sleep() on
    //current thread would work too...)
    _resetEvent.WaitOne(); //wait for signal;
}

I also knew that I would want to be notified of a couple key events that would occur with my object.  Specifically, whenever a page was returned and when the final search was complete:

static AsyncSearcher CreateSearcher(LdapConnection connection)
{
    AsyncSearcher searcher = new AsyncSearcher(connection);
 
    //assign some handlers for our events
    searcher.PageCompleted += new EventHandler<AsyncEventArgs>(searcher_PageCompleted);
    searcher.SearchCompleted += new EventHandler<AsyncEventArgs>(searcher_SearchCompleted);
 
    return searcher;
}

This would allow me to hook up a couple handlers that would be invoked each time a page came back or my search was finished.

static void searcher_SearchCompleted(object sender, AsyncEventArgs e)
{
    //this is volatile, so we need to check it first or another thread
    //could change this from under us
    bool lastSearch = (((AsyncSearcher)sender).PendingSearches == 0);
 
    Console.WriteLine(
        "{0} Search Complete on thread {1}",
        e.RequestID.Equals(_firstSearch) ? "First" : "Second",
        Thread.CurrentThread.ManagedThreadId
        );
 
    if (lastSearch)
        _resetEvent.Set();
}
 
static void searcher_PageCompleted(object sender, AsyncEventArgs e)
{
    //or do something with e.Results here...
    Console.WriteLine(
        "Found {0} results on thread {1} for {2} search",
        e.Results.Count,
        Thread.CurrentThread.ManagedThreadId,
        e.RequestID.Equals(_firstSearch) ? "first" : "second"
        );
}

The complication with this model has to do with the fact that I wanted to be able to use the same object to search multiple times.  I could have limited my AsyncSearcher class to disallow more than one search at a time, but I thought it would be lame to spin up a new instance and set of handlers for each search I wanted to perform.  Since I could only register for one paging and one completion event and yet multiple searches could be firing them, I needed a way to distinguish one search from another.  I decided to return a unique Guid for each search that could be matched up later to determine which search was activating the event.  With more time, I suppose I could have wrapped the Guid in something a little prettier or easier to use, but it suffices for this sample.

I also decided that I wanted to return results by pages as I got them as well as the huge block of them on completion of the search.  To do this, I created a simple class to hold the results called AsyncEventArgs:

/// <summary>
/// Just a simple class to hold some results
/// </summary>
public class AsyncEventArgs : EventArgs
{
    Guid _id;
    List<SearchResultEntry> _entries;
 
    public AsyncEventArgs(List<SearchResultEntry> entries, string requestID)
    {
        _entries = entries;
        _id = new Guid(requestID);
    }
 
    public List<SearchResultEntry> Results
    {
        get
        {
            return _entries;
        }
    }
 
    public Guid RequestID
    {
        get
        {
            return _id;
        }
    }
}

I could have created a different EventArg class depending on the type of event being fired, but I didn't think it was worth it at this point.  Now that I had how I wanted to use the object nailed down a bit, I created the actual implementation.  The logic goes something like this:

  1. Set up the initial search and kick it off (steps 1–5 above), registering a callback and passing my SearchRequest as state.
  2. Callback now picks up request and unwraps the SearchRequest (which also tells me which unique search I am after here) and calls LdapConnection.EndSendRequest()
  3. Pull the results from the resulting SearchResponse and fire the PageCompleted event – passing them as EventArgs
  4. Next, determine if the SearchResponse has a cookie and if it does, call BeginSendRequest again with updated paging cookie and point it back to my original callback (#2).  This means there is a lot of 2 through 4 going on here as each page gets processed and the PageCompleted event fires.
  5. If the cookie in #4 is empty, then I am done and I fire the SearchCompleted event and pass as EventArgs all the results I have been collecting internally in a hashtable keyed to the original request Guid.

Here is what that class looks like:

public class AsyncSearcher
{
    LdapConnection _connect;
    Hashtable _results = new Hashtable();
 
    public event EventHandler<AsyncEventArgs> SearchCompleted;
    public event EventHandler<AsyncEventArgs> PageCompleted;
 
    public AsyncSearcher(LdapConnection connection)
    {
        this._connect = connection;
        this._connect.AutoBind = true; //will bind on first search
    }
 
    /// <summary>
    /// Volatile count of outstanding searches in process
    /// </summary>
    public int PendingSearches
    {
        get
        {
            return _results.Count;
        }
    }
 
    private void InternalCallback(IAsyncResult result)
    {
        SearchResponse response = this._connect.EndSendRequest(result) as SearchResponse;
 
        ProcessResponse(response, ((SearchRequest)result.AsyncState).RequestId);
 
        //find the returned page response control
        foreach (DirectoryControl control in response.Controls)
        {
            if (control is PageResultResponseControl)
            {
                //call paged search again
                NextPage((SearchRequest)result.AsyncState, ((PageResultResponseControl)control).Cookie);
                break;
            }
        }
    }
 
    private void ProcessResponse(SearchResponse response, string guid)
    {
        //only 1 thread at a time gets here...
        List<SearchResultEntry> entries = new List<SearchResultEntry>();
 
        foreach (SearchResultEntry entry in response.Entries)
        {
            entries.Add(entry);
        }
 
        //signal our caller that we have a page
        EventHandler<AsyncEventArgs> OnPage = PageCompleted;
        if (OnPage != null)
        {
            OnPage(
                this,
                new AsyncEventArgs(entries, guid)
                );
        }
 
        //add to the main collection
        ((List<SearchResultEntry>)_results[guid]).AddRange(entries);
    }
 
    public Guid BeginPagedSearch(
        string baseDN,
        string filter,
        string[] attribs,
        int pageSize
        )
    {
        Guid guid = Guid.NewGuid();
 
        SearchRequest request = new SearchRequest(
            baseDN,
            filter,
            System.DirectoryServices.Protocols.SearchScope.Subtree,
            attribs
            );
 
        PageResultRequestControl prc = new PageResultRequestControl(pageSize);
 
        //add the paging control
        request.Controls.Add(prc);
 
        //we will use this to distinguish multiple searches.
        request.RequestId = guid.ToString();
 
        //create a temporary placeholder for the results
        _results.Add(request.RequestId, new List<SearchResultEntry>());
 
        //kick off async
        IAsyncResult result = this._connect.BeginSendRequest(
            request,
            PartialResultProcessing.NoPartialResultSupport,
            new AsyncCallback(InternalCallback),
            request
            );
 
        return guid;
    }
 
    private void NextPage(SearchRequest request, byte[] cookie)
    {
        //our last page is when the cookie is empty
        if (cookie != null && cookie.Length != 0)
        {
            //update the cookie and preserve page size
            foreach (DirectoryControl control in request.Controls)
            {
                if (control is PageResultRequestControl)
                {
                    ((PageResultRequestControl)control).Cookie = cookie;
                    break;
                }
            }
 
            //call it again to get next page
            IAsyncResult result = this._connect.BeginSendRequest(
                request,
                PartialResultProcessing.NoPartialResultSupport,
                new AsyncCallback(InternalCallback),
                request
                );
        }
        else
        {
            List<SearchResultEntry> results = (List<SearchResultEntry>)_results[request.RequestId];
 
            //remove this search from the collection now that it is complete
            _results.Remove(request.RequestId);
 
            //we have finished, signal the caller
            EventHandler<AsyncEventArgs> OnComplete = SearchCompleted;
            if (OnComplete != null)
            {
                OnComplete(
                    this,
                    new AsyncEventArgs(results, request.RequestId)
                    );
            }
        }
    }
}
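The termination rule buried in NextPage is worth calling out: the server returns an opaque, non-empty cookie while more pages remain, and a null or zero-length cookie marks the final page. A minimal sketch of that check pulled out as a pure helper (the class above inlines the same test):

```csharp
using System;

public static class PagingHelper
{
    // The server hands back a non-empty, opaque cookie while more
    // pages remain; a null or zero-length cookie means the last page.
    public static bool HasMorePages(byte[] cookie)
    {
        return cookie != null && cookie.Length != 0;
    }
}
```

A caller would keep re-issuing the request (with the updated cookie) while this returns true, which is exactly what NextPage does.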

I originally toyed with the idea of having the AsyncSearcher manage the connection to the directory, but ultimately decided that it was a bad idea for two main reasons.  First, not everyone will construct the connection the same.  Some people will use SSPI, others SSL, perhaps different ports, even certificates or otherwise.  The other reason is that the lifetime of the connection is harder to control from inside another class.  Wrapping it in a “using” statement won’t work because the connection must be open the whole time during the callbacks.  Cleaning it up in a Dispose() method is sloppy as well because it leads to the client needing to know that they should keep this object around until all their events fire.  By pulling the LdapConnection out, I am implicitly telling clients they need to manage the connection themselves.

You can download this class along with a sample client as an attachment to this post.  Next time, perhaps we will delve into retrieving partial results.

 File Attachment: AsynchClient.zip (5 KB).

Thursday, 12 October 2006

XSD Object Generation for .NET 2.0

One of my co-workers forwarded this link to me today.  Essentially, it is XSDObjectGen for 2.0 with support for things like partial classes, generics, etc.  We tend to use this as a basis for our service contracts and to help serialize outward facing DTO objects.  I was looking for something similar (or the source to XSDObjectGen to do it myself) a few months back.  Very cool.

Get it here:

http://devauthority.com/blogs/ram_marappan/archive/2006/10/03/4755.aspx

Wednesday, 04 October 2006

MVP Again

I was fortunate enough to receive the MVP award again for the 2007 year.  I would like to thank Microsoft and all the program supporters that make this possible.  I enjoy helping others and that is why I do it – the MVP privileges however are a very nice perk.

Sunday, 20 August 2006

I'm a Dork

Joe Richards, of Active Directory, 3rd Edition fame, was nice enough to label me here as one of his ‘dorks’.  His book, by the way, is one of the best (if not the best) references to Active Directory.  Thanks, Joe.

Friday, 04 August 2006

Fast Concurrent Binding in SDS.P

So, this is really a lesson learned about putting together a book and code samples.  Namely, refactoring your code just before the final cut is generally not a good idea.  Or perhaps I should say, refactoring your code and not thoroughly testing it is not a good idea.

In Chapter 12 of the book, we had a number of examples for how to perform authentication.  One of them was using System.DirectoryServices.Protocols (SDS.P).  The sample tried a number of techniques – first a secure SSL bind using Fast Concurrent Binding (FCB), then it tried either a secure SPNEGO bind or a Digest bind (if ADAM).  Well, initially these were all different samples.  I thought it might be nice to tie them all together a bit more comprehensively – hence the refactoring.  I figured that a bigger sample that did more in a practical manner was more useful than a few line snippets that showed each one.

Anyhow, what ended up happening is that I broke the FCB authentication during the refactoring.  Because of an unforeseen testing-environment meltdown a week earlier, I did not have the proper Win2k3 clients to test against (it used to work, really!).  So… I borked it because the FCB code never got tested again.

One of my Avanade co-workers was actually implementing something like this and asked why it was not working.  At first I chalked it up to an environment thing, but after a closer inspection I noticed what the issue was.  Namely, in my attempt to bring all the samples together, I had attempted to reuse the same connection for authentication as for the bootstrapping.  Well, you can’t do that with FCB – you have to enable it before you bind, and you cannot turn it off until you close the connection.

The good news is that it is a fairly simple fix and I have already refactored (yet again) to support it.  I will be posting that code in another week or so when I get back from vacation.  Then poor Joe gets to convert it yet again to VB.NET.  Mea Culpa…

More SDS Bloggers

My co-author Joe Kaplan is finally online and blogging now.  I have asked him for more Wix content since he knows a bunch more than me on it.  More LDAP blogging goodness to come I am sure…

Thursday, 27 July 2006

msDS-User-Account-Control-Computed Not So Spiffy

You learn something new every day.  I had been under the mistaken belief that the new ‘msDS-User-Account-Control-Computed’ attribute was a sexier and more accurate version of the older, non-constructed ‘userAccountControl’.  In fact, I believe it might have been me that wrote words to that effect in the book.

Yeah… well… whoops.  An errata post is coming.

If we recap what we did not like about ‘userAccountControl’, it was that it did not accurately reflect all the user flags (PasswordExpired, PasswordCannotChange, and AccountLockout specifically) for the LDAP provider.  It would seem natural that the ‘msDS-User-Account-Control-Computed’ attribute would be identical to ‘userAccountControl’, but also accurately reflect those 3 flags.  At least for Active Directory, this is just not true.  It turns out that this sucker will be zero for everything except AccountLockout and PasswordExpired.  Boo, hiss…

So, what seemed like a promising replacement for an all-in-one user flags smorgasbord, is in fact a bit of an anorexic turd.  Why MS chose this particular behavior is beyond me…

The moral of the story?  You are still stuck with using both attributes – and even then you can’t get an accurate PasswordCannotChange flag.  Bummer.
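The moral above can be turned into a tiny merge routine: trust the stored ‘userAccountControl’ for everything except the two flags the constructed attribute actually reports, and overlay those from ‘msDS-User-Account-Control-Computed’.  A sketch, using the documented ADS_UF_* bit values (reading the two attributes off a DirectoryEntry is left out):

```csharp
using System;

[Flags]
public enum UserFlags
{
    None            = 0x0,
    AccountDisabled = 0x2,       // ADS_UF_ACCOUNTDISABLE
    Lockout         = 0x10,      // ADS_UF_LOCKOUT
    PasswordExpired = 0x800000   // ADS_UF_PASSWORD_EXPIRED
}

public static class UacMerge
{
    // Overlay the two flags the constructed attribute actually reports
    // (Lockout and PasswordExpired) onto the stored userAccountControl,
    // discarding any stale copies of those bits in the stored value.
    public static UserFlags Merge(int userAccountControl, int computed)
    {
        const int overlay = (int)(UserFlags.Lockout | UserFlags.PasswordExpired);
        return (UserFlags)((userAccountControl & ~overlay) | (computed & overlay));
    }
}
```

Note that even after the merge, PasswordCannotChange still cannot be read this way – that one requires inspecting security descriptors.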

Monday, 24 July 2006

A Recursive Pattern For DN Syntax Attributes Part 2

I am revisiting this particular topic once again to finish it up.  Last time, we established a general pattern for searching any DN syntax attribute in Active Directory or ADAM and chasing down all the nested results in either direction (i.e. forward link to back link or vice versa).  The solution worked well with one caveat:  intermediate values.  We often do not want to capture the intermediate values, but only the end results.  As an example, if we were to expand the group membership for a group object (the ‘member’ attribute) to discover the users, we would not want to include the other nested groups as values in our results, but we would only want to include the users in those nested groups.  In other words, we would want to exclude the intermediate values.  This is different than another example, say of discovering an org chart by expanding the ‘directReports’ attribute where we would clearly want to know all the intermediate reports.

For the specific example of expanding a group object to get membership, I posted an example in the book that used recursion and specifically excluded the result if the ‘objectClass’ was ‘group’.  I posed the question, “was this necessary?”.  Or more specifically, “can we create a general solution that will deal with both cases for intermediate values?”

The answer, of course, is yes.  We can pretty easily create a solution that will allow us to keep the intermediate values or discard them if we want.  In the general case, we do not need to know the object types.  In our example in the book, I think it was clearer what was happening by encoding knowledge of the object type.  However, it also could have been solved with a simple boolean and no knowledge of the objects themselves.  Here is the revised solution.  I am omitting the IAdsPathname interface this time, but you can easily pull it from the last post.

public class RecursiveLinkPair
{
    DirectoryEntry entry;
    ArrayList members;
    Hashtable processed;
    string attrib;
    bool includeAll;

    public RecursiveLinkPair(DirectoryEntry entry, string attrib, bool includeIntermediate)
    {
        if (entry == null)
            throw new ArgumentNullException("entry");

        if (String.IsNullOrEmpty(attrib))
            throw new ArgumentException("attrib");

        this.includeAll = includeIntermediate;
        this.attrib = attrib;
        this.entry = entry;
        this.processed = new Hashtable();
        this.processed.Add(
            this.entry.Properties[
                "distinguishedName"][0].ToString(),
            null
            );

        this.members = Expand(this.entry);
    }

    public ArrayList Members
    {
        get { return this.members; }
    }

    private ArrayList Expand(DirectoryEntry group)
    {
        ArrayList al = new ArrayList(5000);

        DirectorySearcher ds = new DirectorySearcher(
            entry,
            "(objectClass=*)",
            new string[] {
                this.attrib,
                "distinguishedName",
                "objectClass" },
            SearchScope.Base
            );

        ds.AttributeScopeQuery = this.attrib;
        ds.PageSize = 1000;

        using (SearchResultCollection src = ds.FindAll())
        {
            string dn = null;
            foreach (SearchResult sr in src)
            {
                dn = (string)
                    sr.Properties["distinguishedName"][0];

                if (!this.processed.ContainsKey(dn))
                {
                    this.processed.Add(dn, null);

                    if (sr.Properties.Contains(this.attrib))
                    {
                        if (this.includeAll)
                            al.Add(dn);

                        SetNewPath(this.entry, dn);
                        al.AddRange(Expand(this.entry));
                    }
                    else
                        al.Add(dn);
                }
            }
        }
        return al;
    }

    //we will use the IAdsPathname utility interface instead
    //of parsing string values.  This particular function
    //allows us to replace only the DN portion of a path
    //and leave the server and port information intact
    private void SetNewPath(DirectoryEntry entry, string dn)
    {
        IAdsPathname pathCracker = (IAdsPathname)new Pathname();

        pathCracker.Set(entry.Path, 1);
        pathCracker.Set(dn, 4);

        entry.Path = pathCracker.Retrieve(5);
    }
}

This simple class uses the AttributeScopeQuery option to enumerate the attribute and then uses the utility interface IADsPathname to reset the entry as we recurse the results.  So, we now finally have a generic solution that will work on all DN syntax attributes in both directions and with or without intermediate values.

Wednesday, 31 May 2006

A Recursive Pattern For DN-Syntax Attributes

A common task that any Active Directory developer will face is to expand group membership for a given group.  This is a trivially simple task if we are only interested in finding direct group membership.  Things can get much more complicated when nested or indirect group membership is also involved.  Consider the following:

Group A: members – “Timmy”, “Alice”, “Bob”, “Group B”

Group B: members – “Jake”, “Jimmy”, “Nelson”

If we wanted to fully expand the group membership of Group A, we would expect to see 6 users in this case.  The issue, of course, is that the membership for a group is held on the ‘member’ attribute.  If we were to inspect this attribute directly, we would see only 4 members, with one of them being ‘Group B’.  The problem is that we cannot tell just by looking at the ‘member’ attribute which one is a group and which ones are the object types we are interested in (in this case, user objects).  Short of a naming convention to indicate which is a group, we would have to bind or search for each object and determine whether it was a group in order to continue the search.  This is how it is done today using .NET 1.x.  In fact, you can find an example of this by reading Chapter 11 and specifically viewing Listing 11.7.  I am also glossing over a couple of details for 1.x regarding large DN-syntax attributes – but you can read about that, of course, in Chapter 11.  I also won’t post the code for expanding group membership using recursion in 2.0, because that code is already available for download from the book’s companion website (Listing 11.6).  However, I will talk about this as a basis to discover a more general pattern.

Introduced with Windows 2003 and available to .NET 2.0 users is a new type of search called Attribute Scoped Query (ASQ).  An ASQ allows us to scope our searches to any DN-syntax attribute.  This powerful feature is ideal for things like membership expansion, or indeed, any DN-syntax attribute that should be expanded.  By scoping our search on the DN, we can easily overcome some of the limitations in other methods (such as range retrieval), which leads to a simpler conceptual model.

Another example where we would want to expand DN-syntax attributes occurs when we are checking for employees managed directly and indirectly by a particular individual.  For instance, we might have an employee that manages 1 employee that in turn manages 15 other employees.  It is fair to say in most organizations that the first employee really manages 16 employees rather than just one.  We have to use recursion however to fully expand this relationship.  The employee carries their manager’s DN on the attribute ‘manager’, while the manager has a backlink to the employee on the ‘directReports’ attribute.  This is a similar class of problem as the group membership expansion.

If we look at these two examples, we should notice some similarities and some key differences.  In both cases, we have the DN of another object to describe the relationship.  In both cases, we need to chase that DN to see if it in turn references other DNs.  However, there are two key differences.  First, in the case of group membership, we don’t care to see the intermediate results (e.g. Group B), but only the fully expanded members.  Next, for group membership, we are chasing a multi-valued forward link (the ‘member’ attribute), while in the employee example, we are chasing the backlink (the ‘directReports’ attribute).  The question we should consider is whether these differences will change our pattern.  They would be almost identical problems if the employee was trying to figure out all their bosses, or the user was trying to determine all their group memberships.
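Stripped of the directory plumbing, both examples reduce to the same graph walk: follow the linked DNs, and track visited nodes so that circular references terminate.  A directory-free sketch of that core pattern over an in-memory link map (the dictionary stands in for whichever DN-syntax attribute you are chasing):

```csharp
using System;
using System.Collections.Generic;

public static class LinkWalker
{
    // Expand all values reachable from 'start' through 'links',
    // skipping nodes already seen so that cycles terminate.
    public static List<string> Expand(
        Dictionary<string, List<string>> links, string start)
    {
        var results = new List<string>();
        var seen = new HashSet<string> { start };
        Walk(links, start, seen, results);
        return results;
    }

    static void Walk(
        Dictionary<string, List<string>> links, string current,
        HashSet<string> seen, List<string> results)
    {
        List<string> values;
        if (!links.TryGetValue(current, out values)) return;

        foreach (string dn in values)
        {
            if (!seen.Add(dn)) continue;   // already processed
            results.Add(dn);
            Walk(links, dn, seen, results);
        }
    }
}
```

With Group A linking to {Timmy, Alice, Bob, Group B} and Group B linking to {Jake, Jimmy, Nelson}, expanding from Group A yields all six users plus the intermediate Group B – which is precisely the intermediate-value problem discussed above.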

<Jeopardy music playing> So, do the two differences matter?  What if the forward link is single-valued (‘manager’ as opposed to ‘member’ for example)? </Jeopardy music playing>

The answer is yes and no.  The problem with excluding intermediate results certainly must be accounted for, but whether or not it is the backlink or forward link actually has no bearing on the problem.  It turns out that ASQ searches work just fine with DN-syntax attributes of either type – even when the forward link is single-valued.

Ignoring the intermediate value issue for now, we can produce a generalized solution for any DN-syntax recursion:

public class RecursiveLinkPair
{   
    DirectoryEntry entry;
    ArrayList members;
    Hashtable processed;
    string attrib;
 
    public RecursiveLinkPair(DirectoryEntry entry, string attrib)
    {
        if (entry == null)
            throw new ArgumentNullException("entry");
 
        if (String.IsNullOrEmpty(attrib))
            throw new ArgumentException("attrib");
 
        this.attrib = attrib;
        this.entry = entry;
        this.processed = new Hashtable();
        this.processed.Add(
            this.entry.Properties[
                "distinguishedName"][0].ToString(),
            null
            );
 
        this.members = Expand(this.entry);
    }
 
    public ArrayList Members
    {
        get { return this.members; }
    }
 
    private ArrayList Expand(DirectoryEntry group)
    {
        ArrayList al = new ArrayList(5000);
 
        DirectorySearcher ds = new DirectorySearcher(
            entry,
            "(objectClass=*)",
            new string[] {
                this.attrib,
                "distinguishedName"
                },
            SearchScope.Base
            );
 
        ds.AttributeScopeQuery = this.attrib;
        ds.PageSize = 1000;
 
        using (SearchResultCollection src = ds.FindAll())
        {
            string dn = null;
            foreach (SearchResult sr in src)
            {
                dn = (string)
                    sr.Properties["distinguishedName"][0];
 
                if (!this.processed.ContainsKey(dn))
                {
                    this.processed.Add(dn, null);
                    al.Add(dn);
 
                    if (sr.Properties.Contains(this.attrib))
                    {
                        SetNewPath(this.entry, dn);
                        al.AddRange(Expand(this.entry));
                    }
                }
            }
        }
        return al;
    }
 
    //we will use IADsPathName utility function instead
    //of parsing string values.  This particular function
    //allows us to replace only the DN portion of a path
    //and leave the server and port information intact
    private void SetNewPath(DirectoryEntry entry, string dn)
    {
        IAdsPathname pathCracker = (IAdsPathname)new Pathname();
 
        pathCracker.Set(entry.Path, 1);
        pathCracker.Set(dn, 4);
 
        entry.Path = pathCracker.Retrieve(5);
    }
}
 
[ComImport, Guid("D592AED4-F420-11D0-A36E-00C04FB950DC"), InterfaceType(ComInterfaceType.InterfaceIsDual)]
internal interface IAdsPathname
{
    [SuppressUnmanagedCodeSecurity]
    int Set([In, MarshalAs(UnmanagedType.BStr)] string bstrADsPath, [In, MarshalAs(UnmanagedType.U4)] int lnSetType);
    int SetDisplayType([In, MarshalAs(UnmanagedType.U4)] int lnDisplayType);
    [return: MarshalAs(UnmanagedType.BStr)]
    [SuppressUnmanagedCodeSecurity]
    string Retrieve([In, MarshalAs(UnmanagedType.U4)] int lnFormatType);
    [return: MarshalAs(UnmanagedType.U4)]
    int GetNumElements();
    [return: MarshalAs(UnmanagedType.BStr)]
    string GetElement([In, MarshalAs(UnmanagedType.U4)] int lnElementIndex);
    void AddLeafElement([In, MarshalAs(UnmanagedType.BStr)] string bstrLeafElement);
    void RemoveLeafElement();
    [return: MarshalAs(UnmanagedType.Interface)]
    object CopyPath();
    [return: MarshalAs(UnmanagedType.BStr)]
    [SuppressUnmanagedCodeSecurity]
    string GetEscapedElement([In, MarshalAs(UnmanagedType.U4)] int lnReserved, [In, MarshalAs(UnmanagedType.BStr)] string bstrInStr);
    int EscapedMode { get; [SuppressUnmanagedCodeSecurity] set; }
}
 
[ComImport, Guid("080d0d78-f421-11d0-a36e-00c04fb950dc")]
internal class Pathname
{
}

Notice I am using the IAdsPathname utility interface here.  Since recursion requires us to re-base our search each time, we need to robustly update our DirectoryEntry instance’s .Path each time.  This interface does a much better job than trying to write your own string-parsing routines – especially when you need to preserve server and port information for things like ADAM.  Whenever you see interfaces like this, check to make sure someone hasn’t already done the setup work for you.  You can inspect the .NET Framework or the ActiveDs interop assembly using Reflector and oftentimes find the declaration already there.

The final issue we must deal with now is the problem of intermediate values – specifically how to exclude them.  I am going to leave that exercise to the reader with just a hint:

  • I only included 2 attributes in each search.  If you have already seen my solution for Group Expansion in the book (Listing 11.6), you will notice that I make use of a 3rd attribute (objectClass).  Did I need to?

Well, this is a long enough post for now.  Signing off…

Thursday, 11 May 2006

DotNetDevGuide to Directory Services Companion Site Launched

With very little fanfare, I am announcing that the companion site for ‘The .NET Developer’s Guide to Directory Services Programming’ is available now.  We have managed to snag a fairly relevant domain name for it and rushed to put out a Community Server based site (a review on this tool later perhaps).  You can find it here:

Directory Programming .NET

Our first order of business is to get the code samples out there.  Right now, we have released what I term the ‘raw’ samples as they are verbatim from the book (available here).  The samples are still a great start even though they are slightly truncated in some cases.  The truncation will only affect the more sophisticated examples we have in the book that would have taken pages of code to print in entirety.

My next order of business is to finish converting all the book samples into an easier-to-consume test-harness release.  I say ‘test harness’ loosely, as the TDD guys will probably not be satisfied.  I have put together ad-hoc test cases for each of the non-trivial samples in the book and modified them to work with configuration (so you don’t have to hardcode your domain over and over again, for instance).  This is taking some time, but should be done shortly.  Once this is done, you can use TestDriven.NET or NUnit to run these guys pretty easily.

Future plans for the site include more and more samples (not from the book necessarily) as well as a place for us to publish errata and updates as we find them.  We have enabled the forums feature as of now and will gauge its usefulness as we go.

My co-author Joe is on the hook to convert all the samples to VB.NET.  I don’t particularly envy him – we didn’t realize how many frickin samples we had until we tried to put them all together.  He has his work cut out for him.

As for when you can get your hands on a copy of the book: the answer is… now.  It is in stock at Addison-Wesley and Amazon, and other retailers should have it stocked any time now.  Brick-and-mortar stores will be last, but should have it in the next week or so.  Of course, purchasing it through my Amazon link on this site or the companion site would be appreciated, since we are running the companion site out of our own pockets.

Friday, 05 May 2006

Self Service Updates in Active Directory

Sometimes AD Administrators would like to allow end users to update and maintain their own information in the directory.  Of course, this is not without risk – more adventurous users will change their title to “Chief Code Monkey” or something perhaps even less professional.  I was an ‘Executive Vice-President’ myself for some period of time and it made for some interesting phone conversations with co-workers that did not know me personally (it turns out people are pretty fearful of talking directly to EVPs).

However, by default, users have permission over a property set in the schema that allows them to update attributes on their own user object.  If these permissions have not been changed from the default, the user will generally have free rein to update their own personal information.  Which attributes, you ask?  Earlier I demonstrated a way to determine which attributes are available on a given class.  That involved finding the object we wanted in the directory and programmatically inspecting the schema.  It turns out there is another way to achieve the same thing, and with a twist – it allows us to see which attributes we can update ourselves.

Active Directory (and ADAM) contain a couple of attributes called ‘allowedAttributes’ and ‘allowedAttributesEffective’ that tell us, respectively, all of the attributes on a given object and all of the attributes on that object that we are allowed to update.  The first one, ‘allowedAttributes’, produces exactly the same output as inspecting the OptionalProperties and MandatoryProperties together (without the distinction, however).  It gets more interesting with the second one, because it opens the possibility that we can easily generate a dynamic UI allowing the user to update any attributes their permissions allow.

Here is one such example:

<%@ Assembly Name="System.DirectoryServices, Version=2.0.0000.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" %>
<%@ Import Namespace="System.DirectoryServices" %>
<%@ Import Namespace="System.Text" %>
<html>
<head>

    <script language="c#" runat="server">

        static string adsPath = "LDAP://dc=yourdomain,dc=com";

        private void Page_Load(object sender, System.EventArgs e)
        {
            if (!Page.IsPostBack)
            {
                SearchResult sr = FindCurrentUser(new string[] { "allowedAttributesEffective" });

                if (sr == null)
                {
                    msg.Text = "User not found...";
                    return;
                }

                int count = sr.Properties["allowedAttributesEffective"].Count;

                if (count > 0)
                {
                    int i = 0;
                    string[] effectiveAttributes = new string[count];

                    foreach (string attrib in sr.Properties["allowedAttributesEffective"])
                    {
                        effectiveAttributes[i++] = attrib;
                    }

                    sr = FindCurrentUser(effectiveAttributes);

                    foreach (string key in effectiveAttributes)
                    {
                        string val = String.Empty;

                        if (sr.Properties.Contains(key))
                        {
                            val = sr.Properties[key][0].ToString();
                        }

                        GenerateControls(key, val, parent);
                    }
                }
            }
            else
            {
                UpdateControls();
            }
        }

        private SearchResult FindCurrentUser(string[] attribsToLoad)
        {
            //parse the current user's logon name as search key
            string sFilter = String.Format(
                "(&(objectClass=user)(objectCategory=person)(sAMAccountName={0}))",
                User.Identity.Name.Split(new char[] { '\\' })[1]
                );

            DirectoryEntry searchRoot = new DirectoryEntry(
                adsPath,
                null,
                null,
                AuthenticationTypes.Secure
                );

            using (searchRoot)
            {
                DirectorySearcher ds = new DirectorySearcher(
                    searchRoot,
                    sFilter,
                    attribsToLoad,
                    SearchScope.Subtree
                    );

                ds.SizeLimit = 1;

                return ds.FindOne();
            }
        }

        private void GenerateControls(string attrib, string val, Control parent)
        {
            parent.Controls.Add(new LiteralControl("<div>"));

            TextBox t = new TextBox();
            t.ID = "c_" + attrib;
            t.Text = val;
            t.CssClass = "txt";

            Label l = new Label();
            l.Text = attrib;
            l.AssociatedControlID = t.ID;
            l.CssClass = "lbl";

            parent.Controls.Add(l);
            parent.Controls.Add(t);
            parent.Controls.Add(new LiteralControl("</div>"));
        }

        private void UpdateControls()
        {
            SearchResult sr = FindCurrentUser(new string[] { "cn" });

            if (sr != null)
            {
                using (DirectoryEntry user = sr.GetDirectoryEntry())
                {
                    foreach (string key in Request.Form.AllKeys)
                    {
                        if (key.StartsWith("c_"))
                        {
                            string attrib = key.Split(new char[] { '_' })[1];
                            string val = Request.Form[key];

                            if (!String.IsNullOrEmpty(val))
                            {
                                Response.Output.Write("Updating {0} to {1}<br>", attrib, val);
                                user.Properties[attrib].Value = val;
                            }
                        }
                    }
                    user.CommitChanges();
                }
            }

            btnSubmit.Visible = false;
            Response.Output.Write("<br><br><a href=\"{0}\">&lt;&nbsp;Back</a>", Request.Url);
        }

    </script>

    <style>

    .lbl
    {
        margin-left: 25px;
        clear: left;
        width: 250px;
    }

    .txt
    {
        width: 250px;
    }
</style>
</head>
<body>
    <form id="main" runat="server">
        Data for user:
        <%=User.Identity.Name%>
        <br>
        <br>
        <asp:Label ID="msg" runat="server" />
        <asp:Panel ID="parent" runat="server" />
        <asp:Button ID="btnSubmit" runat="server" Text="Update" />
    </form>
</body>
</html>

Pretty easy, eh?  Sure, this one was whacked together in about 20 minutes, but you could create something similar that takes care of the single- vs. multi-value attribute treatment and make it a whole lot prettier with a little more effort.

This should be fairly obvious – but it bears mentioning:  This requires you to use Integrated Windows Authentication with impersonation and your IIS server must be set for delegation.  The whole point of this exercise was to allow the user to update their own information using their own credentials.  Using a service account will show you what attributes the service account has permission to update on the object.
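For reference, the impersonation half of that setup is just two elements in web.config (the IIS delegation settings are environment-specific and not shown here):

```xml
<configuration>
  <system.web>
    <!-- run the page under the caller's Windows identity -->
    <authentication mode="Windows" />
    <identity impersonate="true" />
  </system.web>
</configuration>
```

Without `impersonate="true"`, the LDAP calls go out under the worker-process account, and ‘allowedAttributesEffective’ reflects that account’s permissions instead of the user’s.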

(thanks Paul for the css – yeah, I am teh suck on UI)

 

Wednesday, 26 April 2006

Sudoku for Tablet PC

A little old, but I found this article on MSDN.  What a sweet looking application for the Tablet PC.  Anything Steven Toub writes seems to be pure gold (check out his Fun with DVR-MS article as well).  It makes me want to get one just to try it.  I have only played Sudoku a couple times on the plane when I found a puzzle in the in-flight magazines.  I was surprised at how addictive it was – it kept me entertained almost the whole trip (yeah, I was slow the first time).

 

Monday, 24 April 2006

The next IIS provider

Scott Guthrie has some good news for the ADSI folks that have struggled so much with the IIS provider until now.  These new, managed APIs for IIS should replace all the spackle that was crusted onto ADSI to wedge the IIS provider into that model.  Granted, this is not really available yet – but at least relief is in sight.

Friday, 07 April 2006

What a small world

I was reading a post by my friend (and ex-coworker) James Avery today that made me want to check out the status of the NHibernate project.  I had heard of this project a long time ago, but never looked into it at all.  So, browsing the first page I see a name I am pretty sure I recognize – Mike Doerfler.  He is listed as one of the main contributors to the project on the homepage.

The funny part is that Mike and I used to teach classic ASP together back in the day when we both worked at Ernst & Young.  I think the market had slowed or something and we needed to keep busy, so we took the job as an internal project to teach the basics of ASP programming.  I can still remember how easy it was to crash the NT4 servers when students either put their database or UI code into infinite loops.  Having 20 generally horrible developers work on a single IIS server and one database server is like locking a couple sheep into the same room with a bunch of wolves... the poor things never stood a chance.

So, I am pretty sure Mike left in 1999 or 2000 and we slowly lost touch.  I have a ton of MSN Messenger contacts and Mike has been on there forever though the chats have been fewer and fewer until non-existent.  To be fair, I almost never chat with anyone anymore.  First, I am hardly ever online with Messenger, and second, I just don’t have a lot to chat about.

Anyhow, I had to send Mike an IM to see if it was the same ‘Doerfler’.  Turns out it was!  Now, I wonder who else is on my IM list that has gone on to fame and fortune…  I really need to do a better job of keeping in touch with people.

Friday, 31 March 2006

DEC 2006 Wrap Up and Presentation Material

Joe and I had a great time on Wednesday presenting our talk on Directory Services Programming in .NET.  Being the first time that we have ever presented together, we did not know exactly how it would work out.  All in all, we felt it went pretty well.  Thanks to everyone that attended!

One of the cool parts for us was that our publisher sent over some draft copies of the book.  We knew our publisher was sending something, but we had thought it was a sample chapter or two.  It was quite surprising to open the box and find the full text!  Seeing your hard work in print for the first time almost gives you goose bumps.

As promised, I am attaching the presentation material to this post.  It includes the updated presentation that we used as well as all the samples that we used during it.  We actually did not get to one of the samples called Dirsync, which allows us to poll the directory periodically for updates, so check it out.

If anyone has questions on these samples or the topic itself, Joe and I can be reached through the MS newsgroups or the ASP.NET Forums.  I would also suggest checking out what I consider useful System.DirectoryServices resources.  I will try to keep that updated as I find more.

DEC 2006 thoughts

I had a great time in Las Vegas this last week at the DEC 2006 conference.  It was great connecting with so many of the participants as well as the personalities involved in the presentations.  There were some really good presentations that I attended (e.g. Guido, Dean & Joe, and Stuart’s), and getting to speak one-on-one with so many of the product managers of the involved technologies was great.  The conference was big, but not TechEd big, so the interactions were of much better quality in my opinion.

I was surprised to run into only a couple people from Avanade there, however.  Being a large Microsoft consulting company, I would have thought we would have more folks from our infrastructure practice there learning, if not speaking themselves.

The last day of the conference was fun.  We had a nice dinner with a bunch of folks – Katherine Coombs, Joe Richards, Dean Wells, Paul Williams, Don Wells (Dean’s dad), and Eric.  Listening to the different accents and slang (Dutch, Welsh, English, and Aussie/English mix) was a trip.  On a side note – if Paul Williams is any indicator – Welshmen eat entire cakes for dinner and prefer their cow-flesh raw.  When asked, “what temperature” he would like his burger cooked, Paul looked around quizzically and said, “warm?”.  This was after, of course, he had devoured a 10 lb. brownie (not joking here) for lunch – we are talking a dense pile of chocolate that could cause serious injury if dropped on a man’s foot.

I did not have a hotel room for the last night before my 7am flight back to Cleveland.  Initially, I had planned to stay up the night and just go to the airport nice and early.  However, after Gil, Katherine, Dean, Jorge, Paul, and Ulf decided one after another to go to bed, I would have been left alone with the hardcore gamblers/insomniacs/general freaks in Vegas at that hour.  Ulf was kind enough to let me have his extra bed for 3 invigorating hours of sleep before departing for the airport.

If Gil would have me back next year, I would definitely do it again.

Friday, 17 March 2006

Paging in System.DirectoryServices.Protocols

One of the new features of .NET 2.0 is the System.DirectoryServices.Protocols (SDS.P) namespace that gives us access to the native LDAP API without relying on ADSI.  We can accomplish pretty much anything now in LDAP using managed code and we get around some of the wonkiness of ADSI.

Performing a simple search in SDS.P is pretty straightforward actually.  We just create a connection to the directory using the LdapConnection class, bind to the connection and then make request/response pairs on the connection.  One of the request/response pairs is SearchRequest and SearchResponse.  Without going into too many details, using these two classes in a synchronous manner is pretty easy.  What might be surprising however is that developers will often run into the following error:

System.DirectoryServices.Protocols.DirectoryOperationException : The size limit was exceeded

This departs from System.DirectoryServices (SDS) and ADSI where we do not get errors for simple searches even if we have not enabled paging.  In SDS, a similar query with DirectorySearcher would just return the MaxPageSize for the directory (usually 1000).  We know from SDS that to get more than the MaxPageSize, we need to use paging.  That is as simple as setting DirectorySearcher.PageSize > 0.
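For reference, the SDS version looks roughly like this – a quick sketch, where the LDAP path and filter are placeholders for your own environment:

```csharp
// Sketch only: the equivalent paged search via SDS/ADSI.
// The LDAP path and filter below are placeholders.
DirectoryEntry root = new DirectoryEntry("LDAP://DC=mydomain,DC=local");

using (root)
{
    DirectorySearcher ds = new DirectorySearcher(
        root,
        "(objectClass=user)"
        );

    //any value > 0 enables paging; ADSI manages the paging cookie for us
    ds.PageSize = 1000;

    using (SearchResultCollection src = ds.FindAll())
    {
        Console.WriteLine("Returned {0} results", src.Count);
    }
}
```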

So, what about SDS.P?  It turns out that one of the really nice things ADSI did for us was make paging seamless.  If we wanted to use it, we simply asked for a specific page size and it just worked.  Now, since we are at a lower level, we need to take care of these details ourselves (native LDAP programmers already know this, of course).

How do we do this in SDS.P?  Here is some sample code that implements paging and returns a collection of SearchResultEntry objects.

private List<SearchResultEntry> PerformPagedSearch(
    LdapConnection connection,
    string baseDN,
    string filter,
    string[] attribs)
{
    List<SearchResultEntry> results = new List<SearchResultEntry>();

    SearchRequest request = new SearchRequest(
        baseDN,
        filter,
        System.DirectoryServices.Protocols.SearchScope.Subtree,
        attribs
        );

    PageResultRequestControl prc = new PageResultRequestControl(500);

    //add the paging control
    request.Controls.Add(prc);

    while (true)
    {
        SearchResponse response = connection.SendRequest(request) as SearchResponse;

        //find the returned page response control
        foreach (DirectoryControl control in response.Controls)
        {
            if (control is PageResultResponseControl)
            {
                //update the cookie for the next set
                prc.Cookie = ((PageResultResponseControl)control).Cookie;
                break;
            }
        }

        //add them to our collection
        foreach (SearchResultEntry sre in response.Entries)
        {
            results.Add(sre);
        }

        //our exit condition is when our cookie is empty
        if (prc.Cookie.Length == 0)
            break;
    }

    return results;
}
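Calling it is then just a matter of binding a connection first.  Here is a hypothetical usage sketch – the server name, base DN, and filter are all placeholders:

```csharp
// Hypothetical values – substitute your own server and naming context.
LdapConnection connection = new LdapConnection("dc1.mydomain.local");
connection.SessionOptions.ProtocolVersion = 3;
connection.Bind(); //bind with the current credentials

List<SearchResultEntry> entries = PerformPagedSearch(
    connection,
    "DC=mydomain,DC=local",
    "(objectClass=user)",
    new string[] { "cn", "sAMAccountName" }
    );

Console.WriteLine("Found {0} entries", entries.Count);
```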

Friday, 10 March 2006

Four things...

Fellow Avanade’r Nino seems to have ‘tagged’ me for this four things game.  Well… here goes:

Four Jobs I’ve had…

  • Target Store stock boy
  • MS Access Developer (during college)
  • Ford Intern for IT department
  • Management Consultant

Four movies I can watch over and over…

  • The Big Lebowski
  • Office Space
  • Fight Club
  • Groundhog Day

Four TV shows I love to watch…

  • Scrubs
  • Mythbusters
  • Battlestar Galactica
  • Modern Marvels

Four places I’ve been on vacation…

  • Maui, Kauai, HI
  • Australia (all over), New Zealand (all over)
  • Greece (Hydra, Mykonos, Santorini)
  • Grand Cayman Island

Four favorite dishes…

  • Cutthroat Salmon (Shucker’s)
  • Porterhouse Steak (Hyde Park or Metropolitan Grill)
  • Duck (Wild Ginger)
  • Sushi (best is in Seattle and Hawaii)

Four websites I visit daily:

Four places I’d rather be…

  • On vacation
  • At a bar
  • In an electronics store
  • Relaxing with my wife anywhere

Four bloggers I’m tagging

  • Keith (security guy extraordinaire)
  • Cole (general super stud)
  • Joe (besides being an LDAP/AD guru, he’s got a lot of opinions… which I love reading)
  • Ulf (another AD guru and all around nice guy)

Thursday, 09 March 2006

Goodbye Messenger BETA

Like other suckers, I installed the new MSN Messenger BETA 8 (why beta is in all caps I don’t know, perhaps to emphasize its inherent bugginess).  After using it for several weeks, I have given it the boot.  Why?

  • I am not a fan of 80+ MB memory footprints for I-frickin-M clients.
  • It crashes in the middle of chats constantly (yeah, I know it’s BETA with a capital B, but I wasn’t using fancy features here, folks).
  • Waaaay too busy an interface for me.  I don’t dig adverts everywhere and anywhere, and all the pop-ups and distractions are frankly just annoying.  It is not easily evident how to turn all that crap off and I am not about to spend an hour (let alone 10 minutes) testing each and every option to figure out what it means.

I don’t like the way this beta product is leaning.  Part of the reason I used to like Messenger was because it was light, had a clean interface, and was easy to use (contrast that to the horrible AOL client).  It had file transfer, voice and video capabilities, and the chat client worked well.  I don’t need games, winks, farts, tweaks, tickles, or whatever other stupid term is coined for annoying crap bouncing on your screen.

I am back to the older client and I will hang onto it for dear life until Messenger goes back to the basics:  Improve voice and video, file transfer, and mobile options for chat.  Stop the clutter and useless features (or at least let me not install the crap).  Enough ranting… back to work.

Monday, 13 February 2006

Copyedit Complete!

A major milestone was reached today.  My co-author, Joe Kaplan, and I have finally finished the copyedit process for our book.  This has been a very long book in the making and we are both pretty excited that the end is near.  We still have a few more rounds to go with proofs and what not, but for the most part, it is all done.  The next big focus for us will be to get the companion web site in order.  I won’t share the URL for it yet, but it should have some great code that we developed for this book to share.

If anyone was wondering why I have not been in the forums as much as normal these last few weeks, it’s because we have been heads down trying to get this all wrapped up.

Thursday, 26 January 2006

Strange DasBlog Error Fixed

Previously I was getting an intermittent error with DasBlog that would cause the entire site to freeze up.  Scott Hanselman was kind enough to point out the bug and suggest that I upgrade the binaries to fix it:

“This happens if, IMMEDIATELY FOLLOWING AN APP RECYCLE, a category page is hit, rather than a page in the root. The HttpModule thinks that the AppDomain is starting up with the category page as it's root and chaos ensues.”

I have been waiting a few weeks to ensure that this patch indeed fixed it, and it appears to be golden now.

Thanks Scott!

Monday, 16 January 2006

Back online

Well, it was bound to happen.  I was using free .NET 2.0 beta hosting and the account was expiring.  I had to switch servers and providers in order to keep my mail and site intact.  Now that that transition has occurred, hopefully I will get around to posting some more interesting tidbits.

 

Saturday, 31 December 2005

WMF Exploit Firsthand

The last few days have been pretty crappy with regards to the computer situation.  On Wednesday (the 27th), I was trying to recover some files from a portable hard drive that had decided to go tits-up.  Well, the short story is that I was browsing the web using Google cache and all of a sudden my AV package and MS Antispyware went crazy.  Something was dropping trojans onto my machine at an alarming rate.  This was before all the hubbub broke out the next day describing the issue.

I spent an inordinate amount of time trying to undo all the hooks that had been put into my XP SP2 (fully patched) box.  First, I found that my firewall had been disabled, group policy had been applied to prevent me from accessing the task manager, and a bunch of stuff was injected into my startup portions in the registry.

Using Autoruns and Process Explorer, I also discovered that my Explorer.exe had been replaced and shelled out by another program that was trying to prevent me from removing the trojan(s).  A number of unknown services were installed and all of my browser settings were hijacked (no help from Antispyware there for some reason).  A static HTML page was inserted as my homepage that told me that my computer was at risk from spyware.  I suspect the idiots that wrote the malware expected me to take them up on an offer to remove it.

I was able to undo all the hooks and set the system in order, but I was not comfortable that I got every last thing.  As such, I had to back up a few files I was working on and restore a backup image I luckily had.  This took me a few days with everything else I had going on.

The points that bother me are:

  • I don’t think running as a lower-privilege user would have helped (yes, I run as an Admin, bad Ryan).  It appears that this is using a buffer overflow and RevertToSelf() to get to the SYSTEM account.  Perhaps I am wrong on this one, but I have not read anything to contravene this viewpoint, as it appears all XP machines are vulnerable regardless of setup.  I got this from Google’s cache, not by clicking or running any files.
  • MS Antispyware was easily defeated.  It did notify me that a trojan was installed and tried to let me remove it (which it said it did, but did not).  It did not protect me from any of my settings being hijacked.  That was a big miss.  Not only that, but I could not restore my old settings since the trojan wiped them as well.
  • My AV package did not help either.  It was nice enough to let me know that trojans were being installed, but did not appear to prevent it either.  What was the point of that?

This is a really bad one folks.  This is the very first time I have ever been infected or compromised.  I shudder to think how easily it occurred.  Make sure you patch up.  There is an unofficial patch you can use right now to help.

 

Thursday, 22 December 2005

Resurrecting Tombstones in Active Directory or ADAM

I did not get to put all the material I wanted to into the book, so I figure some of the stuff we have left out will make good material for a blog entry here and there.

Active Directory and ADAM both have a special container called “CN=Deleted Objects” that can only be viewed by administrators by default.  When we delete objects in either of these directories, the entries are typically moved there and most of their attributes are stripped off.  This is done such that other replication partners know which items were deleted.  These objects are now referred to as ‘tombstones’.  The system will hold these objects for a set period of time (60 days or so IIRC) before purging them completely.

Now, occasionally we run into situations where we want to resurrect one of the tombstones.  This is typically when we need an object to have the identical GUID or SID it had while alive as simply recreating the object would generate new values for these attributes.  There is an example on MSDN on how to resurrect these objects, but it is in C++.  I thought it would be fun to show how to do this in System.DirectoryServices.Protocols as it has the capabilities to do this while System.DirectoryServices (ADSI) does not.  We have to use the LDAP replace method in this case which is supported in the new Protocols namespace.

Instead of just posting a link to my code, I thought I would post it in whole here.  Notice that my Lazarus (resurrected, get it?) class is using Well-known GUID binding.  You should have your RootDSE return a defaultNamingContext for ADAM if you have not already done so.

class Lazarus : IDisposable
{
    const string WK_TOMBSTONE = "18e2ea80684f11d2b9aa00c04f79f805";
    string _defaultNC;
    string _server;
 
    DirectoryEntry _entry;
 
    public Lazarus()
        : this(null)
    {
    }
 
    public Lazarus(string server)
    {
        _server = server;
 
        DirectoryEntry root = new DirectoryEntry(
            String.Format(
                "LDAP://{0}RootDSE",
                _server != null ? _server + "/" : String.Empty
                )
            );
 
        using (root)
        {
            _defaultNC = root.Properties["defaultNamingContext"][0].ToString();
        }
 
        _entry = new DirectoryEntry(
            String.Format(
                "LDAP://{0}<WKGUID={1},{2}>",
                _server != null ? _server + "/" : String.Empty,
                WK_TOMBSTONE,
                _defaultNC
                ),
            null,
            null,
            AuthenticationTypes.Secure
            | AuthenticationTypes.FastBind
            );
    }
 
    public void ShowAllTombStones()
    {
 
        DirectorySearcher ds = new DirectorySearcher(
            _entry,
            "(isDeleted=TRUE)",
            null,
            System.DirectoryServices.SearchScope.OneLevel
            );
 
        ds.PageSize = 1000;
        ds.Tombstone = true;
 
        using (SearchResultCollection src = ds.FindAll())
        {
            foreach (SearchResult sr in src)
            {
                foreach (string key in sr.Properties.PropertyNames)
                {
                    foreach (object o in sr.Properties[key])
                    {
                        Console.WriteLine("{0}: {1}", key, o);
                    }
                }
                Console.WriteLine("====================");
            }
        }
 
    }
 
    private SearchResult GetTombstone(string name)
    {
        DirectorySearcher ds = new DirectorySearcher(
            _entry,
            String.Format("(&(isDeleted=TRUE)(name={0}*))", name),
            new string[] { "cn", "lastKnownParent", "distinguishedName" },
            System.DirectoryServices.SearchScope.OneLevel
            );
 
        ds.Tombstone = true;
 
        return ds.FindOne();
    }
 
    public void RaiseFromTheDead(string name)
    {
        SearchResult deadGuy = GetTombstone(name);
 
        if (deadGuy == null)
            throw new ArgumentException("No object exists");
 
        LdapConnection conn = new LdapConnection(
            new LdapDirectoryIdentifier(_server),
            System.Net.CredentialCache.DefaultNetworkCredentials,
            AuthType.Negotiate
            );
 
        using (conn)
        {
            conn.Bind();
            conn.SessionOptions.ProtocolVersion = 3;
 
            //we have to remove the isDELETED attribute
            DirectoryAttributeModification dam = new DirectoryAttributeModification();
            dam.Name = "isDELETED";
            dam.Operation = DirectoryAttributeOperation.Delete;
 
            //and set a new dn string - a bit of a copout because
            //I am always assuming a CN prefix.
            string newDN = String.Format(
                "CN={0},{1}",
                deadGuy.Properties["cn"][0].ToString().Split(new char[]{'\n'})[0],
                deadGuy.Properties["lastKnownParent"][0]
                );
 
            //word of warning... 'lastKnownParent' is only guaranteed good
            //on Windows Server 2003 and ADAM.
 
            DirectoryAttributeModification dam2 = new DirectoryAttributeModification();
            dam2.Name = "distinguishedName";
            dam2.Operation = DirectoryAttributeOperation.Replace;
            dam2.Add(newDN);
 
            //kinda bizarre that a collection is not used
            ModifyRequest mr = new ModifyRequest(
                deadGuy.Properties["distinguishedName"][0].ToString(),
                new DirectoryAttributeModification[] { dam, dam2 }
                );
 
            //we need to have the ShowDeletedControl to find this DN
            mr.Controls.Add(new ShowDeletedControl());
 
            //optionally, we must put any required attributes on the object
            //as well.  For user/computer, this might be 'sAMAccountName'
 
            ModifyResponse resp = (ModifyResponse)conn.SendRequest(mr);
 
            if (resp.ResultCode != ResultCode.Success)
            {
                //we should also check to see if it was already
                //existing here (same DN).
                Console.WriteLine(resp.ErrorMessage);
            }
        }
    }
 
    #region IDisposable Members
 
    public void Dispose()
    {
        if (_entry != null)
            _entry.Dispose();
    }
 
    #endregion
}

This is definitely sample code, but it should give a working idea of how to accomplish this.  In using the .Protocols namespace, I ran into some usability issues that I question; however, it is definitely a neat addition.  I decided to mix System.DirectoryServices with .Protocols because I wanted to show how to do a tombstone search using SDS as well.

Finally, this was tested on ADAM, but should work fine on AD.  The usual warnings apply:  This is not production code, don’t expect it to be and don’t cry to me if it breaks your machine horribly or your dog dies.

Enjoy, your comments are always welcome.

The .NET Developer's Guide To Directory Services Programming

Keith beat me to it, but Joe and I have finally submitted the final manuscript.  We have some copy editing left, but the book is mostly baked.  It has been a long time in the making and we are very fortunate to have had some great reviewers like Keith giving us advice along the way.

There were a lot of things that we wished we could have covered (perhaps in another book) more in depth.  It was a balancing act between trying to dump everything in our brains and not publishing a 900 page book.  The final page count is around 450 pages right now, which is big, but not too intimidating.

All in all, this was a good experience.  I did not know Joe very well when we started, but needless to say, we know each other pretty well at this point and are good friends.  I think the partnership worked pretty darn well for two first-time authors collaborating mostly over email and wiki.  Joe knows his Directory Services very well and it would not nearly be the same book without him.  Keith was slightly off in his description (it was over a year ago), but I knew about Joe from the newsgroups (where he was dropping some serious knowledge) and had suggested that Keith contact him as the second author of this book.  It was a great decision and I am glad Joe took it.

So, what is in the book?  Pretty much all the basics for System.DirectoryServices, like binding, searching, and reading and writing attributes.  We cover the more advanced searches and data type marshaling as well as schema considerations and a discussion of security.  Finally, we take a more scenario-based, or ‘cookbook’, approach for the more popular topics like user and group management as well as authentication.  We know what problems most users have and we try to address them in the scenarios.

Most of the samples in the book are going to be using System.DirectoryServices, though we do cover in places how to do things using the .Protocols namespace.  Additionally, we have one chapter that gives developers a view of what is there in the .ActiveDirectory namespace that they might use (it is mostly for administrators).

It is important to know that this is truly a guide and we could not cover every single scenario.  As part of the book’s website (or this blog), we will be adding new scenarios and how they can be accomplished as well.  Additionally, we are going to release some very cool code that developers will be able to use in their own applications.

As always, we can be found in either the forums (http://forums.asp.net) or the newsgroups (microsoft.public.adsi.general).  I am not usually on the newsgroups, but Joe camps there.

 

Saturday, 10 December 2005

Strange DasBlog Error

This blog site which has been quite neglected while I work on the book occasionally gets a bizarre error.  I have not had a chance to look into it at all and it is not reproducible in any way that I know. It just seems to occur randomly and usually after a long period of time.  Here is the stack trace:

[DirectoryNotFoundException: Could not find a part of the path 'C:\Inetpub\wwwroot\dunnry.com\wwwroot\dasblogce\category\SiteConfig\blockedips.config'.]
   System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) +1885372
   System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy) +916
   System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options) +115
   System.IO.StreamReader..ctor(String path, Encoding encoding, Boolean detectEncodingFromByteOrderMarks, Int32 bufferSize) +85
   System.IO.StreamReader..ctor(String path) +117
   newtelligence.DasBlog.Web.Core.IPBlackList.GetBlockedIPs(String configPath) in C:\dev\DasBlog CE\source\newtelligence.DasBlog.Web.Core\IPBlackList.cs:62
   newtelligence.DasBlog.Web.Core.IPBlackList.GetBlockedIPs(HttpContext context) in C:\dev\DasBlog CE\source\newtelligence.DasBlog.Web.Core\IPBlackList.cs:37
   newtelligence.DasBlog.Web.Core.IPBlackList.HandleBeginRequest(Object sender, EventArgs evargs) in C:\dev\DasBlog CE\source\newtelligence.DasBlog.Web.Core\IPBlackList.cs:102
   System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +92
   System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +64

If anyone has seen this and knows what’s up, shoot me an email.

Tuesday, 01 November 2005

A solution for loquacious co-workers

I subscribe to some internal mailing lists at work.  Oftentimes they are informative, sometimes amusing, but lately they have just become increasingly annoying.  I think I finally found the solution.  Essentially, it is a Do-Not-Disturb sign on the inane blatherings of a few mildly annoying individuals.

I am going to give it a shot as soon as Omar releases it to see how it works.  Right away I can think of another feature I would like it to have:

  • I want the ability to mark messages not only by thread, but by individual.  Certain individuals seem to have nothing better to do than spam the mailing lists and annoy everyone.  This way the thread could be killed regardless of subject if it was originated by someone.
Unfortunately for me, most of my email access during work hours is using OWA, so this won't help me there.  At least my hours in the evening will hopefully be less painful when using the full client.

Wednesday, 19 October 2005

What attributes are available on my Active Directory Object?

The question arises occasionally on how to dynamically determine what attributes are available for a given object in Active Directory (typically the user object).  This is a different question than what attributes are actually populated with a value on an instance of the object.  We can find what attributes are defined for an object by looking into the schema for the object.  This can be done by inspecting the schema partition and searching for the class, or by taking an instance of the object and accessing its schema.

I will present the latter method in which we have an instance of an object and we wish to determine the available attributes (again, not necessarily populated).

using System;
using System.Collections;
using System.DirectoryServices;
using System.Reflection;
 
 
DirectoryEntry searchRoot = new DirectoryEntry(
    adsPath,
    null,
    null,
    AuthenticationTypes.Secure
    );
 
using (searchRoot)
{
    DirectorySearcher ds = new DirectorySearcher(
        searchRoot,
        "(sAMAccountName=Ryan_Dunn)"
        );
 
    ds.SizeLimit = 1;
 
    SearchResult sr = null;
 
    using (SearchResultCollection src = ds.FindAll())
    {
        if (src.Count > 0)
            sr = src[0];
    }
 
    if (sr != null)
    {
        using (DirectoryEntry user = sr.GetDirectoryEntry())
        using (DirectoryEntry schema = user.SchemaEntry)
        {
            Type t = schema.NativeObject.GetType();
 
            object optional = t.InvokeMember(
                "OptionalProperties",
                BindingFlags.Public | BindingFlags.GetProperty,
                null,
                schema.NativeObject,
                null
                );
 
            if (optional is ICollection)
            {
                foreach (string s in ((ICollection) optional))
                {
                    Console.WriteLine("Optional: {0}", s);
                }
            }
 
            object mand = t.InvokeMember(
                "MandatoryProperties",
                BindingFlags.Public | BindingFlags.GetProperty,
                null,
                schema.NativeObject,
                null
                );
 
            if (mand is ICollection)
            {
                foreach (string s in ((ICollection) mand))
                {
                    Console.WriteLine("Mandatory: {0}", s);
                }
            }
        }
    }
}

This code will take the user object I find and then iterate through all the attributes defined for this object type (in this case the user object).

 

Tuesday, 04 October 2005

MVP Conference Wrap-Up

I spent the better part of last week in Redmond and surrounding communities for the MVP Conference.  It was nice to be able to put a face to so many online personas.  For the most part it appears that a lot of the content was also presented at PDC, with a few exceptions.  My overall thoughts are that there are a number of exciting technologies – I suppose the only frustration is that there is little indication in so many cases of when the stuff will actually be shipped.

Things I am looking forward to:

  • Windows Workflow Foundation (WWF) – I think this has snuck under the radar a bit, but should present some great new capabilities to enterprise developers.  I can think of a project right now that could desperately use this.
  • Indigo (WCF) –  I hate the new name, but WCF looks very promising.
  • LINQ – I know that Frans is not a big fan of DLINQ, but I am looking forward to learning more about this particular technology.  Unfortunately, since I did not attend this session I will admit to a paucity of knowledge here.

 

Tuesday, 20 September 2005

Expanding Group Membership in .NET 2.0

We have some new options available to us in .NET 2.0 to discover a user’s group membership.  I ran into an entry on Dominick’s blog about expanding group membership using the new IdentityReference class.  This technique assumes you can get a WindowsIdentity for the user you wish to expand.  I previously covered two other techniques here and here.

I use a third, similar technique in the book that takes the ‘tokenGroups’ attribute for any user in AD and expands the membership using the IdentityReference.  It is the most elegant of the three methods, IMO.

One note on Dominick’s code: a way to further optimize this is to use .Translate on the IdentityReferenceCollection so that the call is batched under the hood.
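For illustration, here is a minimal sketch of that batched approach, assuming you can obtain a WindowsIdentity for the user (the class and method names here are mine, not Dominick's):

```csharp
using System;
using System.Security.Principal;

class GroupExpander
{
    // Prints the friendly account names for every group SID on the
    // current Windows token.  Calling Translate() on the collection
    // (rather than on each SID) batches the lookups under the hood.
    static void Main()
    {
        WindowsIdentity identity = WindowsIdentity.GetCurrent();

        // identity.Groups is an IdentityReferenceCollection of SecurityIdentifiers
        IdentityReferenceCollection names =
            identity.Groups.Translate(typeof(NTAccount));

        foreach (IdentityReference group in names)
        {
            Console.WriteLine(group.Value);
        }
    }
}
```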

Wednesday, 17 August 2005

On Installing DasBlog

Heeding Scott’s advice, I decided to install DasBlog last weekend.  I had expected the installation to take some time, so I was not really prepared when I had DasBlog up and running in approximately 10 mins.  With the help of DotText2DasBlog, most of the content was moved over automatically.  I say most, because the tool was unable to move all of the content, and I did not feel like digging into the source to figure out why.

Why did I move?  I wanted a change and an update from my very old version of .Text.  However, I was not interested in installing all of CS including the forums and picture gallery.  I also wanted to break the SQL Server dependency.

All in all, DasBlog appears to be a very finely tuned and specific app for a single user blog.  I love the XCOPY deployment and the management is a snap.  I was blown away by how fast it was to get up and running.  Great job guys.

 

.NET versus Java Security

Oh boy… I can’t wait to see all the rebuttals (and the likely Slashdot):

.NET versus Java Security

Zealots… arise!

Monday, 25 July 2005

Data Source Control Series in ASP.NET

Nikhil has wrapped up his excellent series on creating Data Source controls.  A very well-written and informative guide.  Check it out.

 

Friday, 22 July 2005

Useful System.DirectoryServices Resources

(Updated 12/11/06)

A common question from new .NET developers is: what resources are available to learn how to program against Active Directory or other LDAP sources?

There are a number of resources available for .NET developers:

General Resources

  • Directory Programming .NET - contains all the sample code from DotNetDevGuide to DS Programming and useful tools.  This is probably your best bet for any .NET related questions.  Check out the forums where you can get in touch with both Joe and me.  Tons of sample code for .NET can be found here as well in both C# and VB.NET.
  • microsoft.public.adsi.general – This is a great resource and well trafficked.  This is also where you should post your non-.NET related questions.  C++, scripting, and *blech* VB are all fair game here.  Newsgroups might not be your bag for posting… so read on.
  • ADSI Yahoo Group – This discussion group has slowed down quite a bit, but is still a good avenue to find some help for ADSI and LDAP related questions.  The focus tends to be on .NET, but 3rd party LDAP and other technologies are fair game.

Other Resources

Books

Tools

  • It takes a little getting used to, but ldp.exe is probably the most useful tool for working with AD or ADAM.  It is a no-frills and ugly tool, but definitely powerful.  You can find this on most Windows 2003 servers or with the AdminPak.msi.  Probably an even easier way to get it is to download ADAM and just install the tools.  I rely on this tool to test my LDAP queries and bind operations first.
  • Softerra makes a nice LDAP browser for free that is useful.  I have not tried the commercial version that allows you to edit things, so I can’t comment on that.  I wish it would support more types of binds so we can bind with our current credentials, but it works well otherwise.  It does not use ADSI at all and might not support paging correctly, but it has some slick features like the ability to export objects to LDIF files with a click.
  • Wireshark - formerly Ethereal - this is an awesome tool to use to sniff the underlying traffic when you just don't know what is going on.  It does a great job of decoding the Kerberos and LDAP traffic into human-readable format.  Highly recommended when other troubleshooting steps fail.
  • Microsoft’s Err.exe tool.  Used in conjunction with ldp.exe, you can very easily pinpoint the true error.  No more COMException: An unknown exception has occurred.
  • Beavertail – An open-source LDAP browser written in C# by Marc Scheuner.  A strange name perhaps for a browser, but definitely worth a look if you want to see C# and LDAP in action.
  • ADSI Browser – Another LDAP Browser, this time written in Delphi by Marc Scheuner.  This one has a few more features than the Beavertail offering, but it is not open-source.
  • Joe Richards has a number of free tools available that are worth checking out.  In particular, ADFind is a must-see for the command line junkie in all of us.

Friday, 15 July 2005

Determining if an account is locked out in .NET revisited.

There is an interesting email conversation going on over on the ADSI listgroup regarding the best way to determine if an account is locked out.  One of the contributors has pointed out that Microsoft has updated the schema in ADAM and Windows 2003 to include a new constructed attribute called ‘msDS-User-Account-Control-Computed’.  This attribute can accurately reflect the UF_LOCKOUT flag, unlike the standard ‘userAccountControl’ using the LDAP provider.  The question becomes: which is the better method?

In my last post on this topic, I introduced a method of determining if an account was locked out when we only have the user’s DirectoryEntry.  This method works on all platforms, including Windows 2000.  It was correctly pointed out that this particular code also had some shortcomings.  Chief among these was that it used recursion to find the ‘lockoutDuration’ for the domain.  I never liked that bit of code and originally hesitated to use it.  I did it anyway just so users could see where it was located.  Now, I am going to revisit this and perhaps show a better way of finding locked accounts that works on all platforms.

//get this from RootDSE or other...
string adsPath = _defaultNamingSyntax;

//Explicitly create our SearchRoot
DirectoryEntry searchRoot = new DirectoryEntry(
    adsPath,
    null,
    null,
    AuthenticationTypes.Secure
    );

//default for when accounts stay locked indefinitely
string qry = "(lockoutTime>=1)";

if (searchRoot.Properties.Contains("lockoutDuration"))
{
    //'lockoutDuration' is stored as a negative interval, so adding
    //it to Now effectively subtracts the duration
    long lockoutDuration = LongFromLargeInteger(
        searchRoot.Properties["lockoutDuration"].Value
        );

    DateTime lockoutThreshold = DateTime.Now.AddTicks(lockoutDuration);

    Console.WriteLine(
        "Threshold Lockout: {0}",
        lockoutThreshold.ToString()
        );

    qry = String.Format(
        "(lockoutTime>={0})",
        lockoutThreshold.ToFileTime()
        );
}

using (searchRoot)
{
    DirectorySearcher ds = new DirectorySearcher(
        searchRoot,
        qry
        );

    using (SearchResultCollection src = ds.FindAll())
    {
        Console.WriteLine(qry);

        foreach (SearchResult sr in src)
        {
            Console.WriteLine(
                "{0} locked out at {1}",
                sr.Properties["name"][0],
                DateTime.FromFileTime((long)sr.Properties["lockoutTime"][0])
                );
        }
    }
}

 

//decodes IADsLargeInteger objects into a FileTime format (long)
private long LongFromLargeInteger(object largeInteger)
{
    System.Type type = largeInteger.GetType();
    int highPart = (int)type.InvokeMember("HighPart", BindingFlags.GetProperty, null, largeInteger, null);
    int lowPart = (int)type.InvokeMember("LowPart", BindingFlags.GetProperty, null, largeInteger, null);

    return (long)highPart << 32 | (uint)lowPart;
}

This is just sample code here, but it should give you an idea of what to do.  I put in a lot of .WriteLine statements to hopefully make clear what is occurring.  This dynamically determines the lockout duration policy and performs a simple search to determine which users are still locked out.  This is done in only one call to the directory and is pretty efficient (add more indices to the filter and it gets better).  If you were looking for only one user, you could obviously add that to the query, and depending on whether or not you got a result back, you would know if they were locked out!

Here are the caveats:

  • Domain time skew can throw you off.  If you need millisecond precision this might not be for you.  If you can live with a few minutes drift depending on the local time where the user was locked out (any more and Kerberos would have some issues as would replication) then this is darn accurate.
  • I have not tested this against ADAM – assuming that ADAM keeps its ‘lockoutDuration’ on the root of the application partition and you have set the defaultNamingContext (or manually point to it) it should work fine.

Now, what about using the ‘msDS-User-Account-Control-Computed’ attribute?  It works great when you have the DirectoryEntry and you know that you are using ADAM or Windows 2003.  It is also decisive, i.e. if the bit is flipped you are locked out - no questions.  It has the following limitations however:

  • Limited Platform support (no Windows 2000)
  • Cannot be searched for directly since it is a constructed attribute (bummer!)
  • Requires a second call to the directory in a .RefreshCache() for each object to inspect since it is constructed.  If you are checking lockout status a lot this adds up quickly.

So… which one should you use?  That is completely up to you.  Keep in mind the limitations of each and just pick one.
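For comparison, here is a minimal sketch of the constructed-attribute approach, assuming a Windows 2003 or ADAM directory and a DirectoryEntry you have already bound to:

```csharp
using System;
using System.DirectoryServices;

class LockoutCheck
{
    const int UF_LOCKOUT = 0x0010;

    // Returns true if the account is currently locked out, using the
    // constructed 'msDS-User-Account-Control-Computed' attribute
    // (Windows 2003 / ADAM only).
    static bool IsLockedOut(DirectoryEntry user)
    {
        // constructed attributes are not returned by default;
        // they require an explicit RefreshCache call
        user.RefreshCache(new string[] { "msDS-User-Account-Control-Computed" });

        int flags = (int)user.Properties["msDS-User-Account-Control-Computed"].Value;
        return (flags & UF_LOCKOUT) != 0;
    }
}
```

Note the RefreshCache call — that is the extra round trip to the directory mentioned in the limitations above.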

Standard Disclaimer:  In the event this does not work for you… or utterly destroys your machine, I take no responsibility.  This is sample code and not production ready.  It has been through only limited testing, so test it yourself as well.

Monday, 11 July 2005

DsCrackNames in .NET

As I alluded to some time ago in my previous post, entitled “Enumerating Token Groups (tokenGroups) in .NET” there is another method to converting the collection of SIDs obtained from the ‘tokenGroups’ attribute.

An API is available to us that can conveniently convert all the SIDs in one call to a number of different formats for us.  There is a bit of pre-work involved to define the signature, setup some structures and whatnot, but it is very slick once you have it working.
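To give a feel for that pre-work, here is a rough sketch of the core P/Invoke declarations (the DS_NAME_RESULT structure marshaling and the DsBind call that produces hDS are omitted; the enum values shown are from ntdsapi.h):

```csharp
using System;
using System.Runtime.InteropServices;

class NativeMethods
{
    // DS_NAME_FORMAT values from ntdsapi.h
    internal const int DS_NT4_ACCOUNT_NAME = 2;        // DOMAIN\user
    internal const int DS_SID_OR_SID_HISTORY_NAME = 11; // string-form SID

    // Converts a batch of names (here, SDDL SIDs) to another format
    // in a single call.  hDS comes from a prior DsBind.
    [DllImport("ntdsapi.dll", CharSet = CharSet.Unicode)]
    internal static extern int DsCrackNames(
        IntPtr hDS,
        int flags,           // DS_NAME_FLAGS; 0 for no flags
        int formatOffered,   // e.g. DS_SID_OR_SID_HISTORY_NAME
        int formatDesired,   // e.g. DS_NT4_ACCOUNT_NAME
        int cNames,
        string[] rpNames,
        out IntPtr ppResult); // DS_NAME_RESULT*; marshal manually, then free

    [DllImport("ntdsapi.dll")]
    internal static extern void DsFreeNameResult(IntPtr pResult);
}
```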

I decided that a sample would be in order to demonstrate this one.  So here it is.

The usual caveats apply – this is not production code and I am an embarrassingly bad WinUI designer so give me a break.  The point of this exercise is to show you how to use this particular API in a somewhat practical sample.

Enjoy.  Feedback is welcomed.

 

Thursday, 23 June 2005

A quick reminder on Dispose and foreach

Keith points out today that the foreach loop does not actually call Dispose() on the items being enumerated – only on the enumerator itself.  That is actually news to me – I was always under the assumption that Dispose() would be called on the iterated object when it left scope.

Why should you care?  For anyone programming System.DirectoryServices, this actually has a big impact.  Consider the following, very common code:

        DirectoryEntry entry = new DirectoryEntry(
            "LDAP://dc=mydomain,dc=com",
            null,
            null,
            AuthenticationTypes.Secure
            );
 
        using (entry)
        {
            foreach(DirectoryEntry child in entry.Children)
            {
                //do something
            }
        }

Whoops, we have potentially many undisposed DirectoryEntry objects here.  Given that the SDS namespace has a couple of problems with leaking memory, we always recommend you Dispose() your objects in SDS.  This will not do it for you (and no, the ‘entry’ will not call Dispose() for its children here either).
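One way to plug the leak is to take ownership of each child inside the loop.  Here is a sketch of the same enumeration with explicit disposal (the adsPath parameter is a placeholder):

```csharp
using System.DirectoryServices;

class ChildWalker
{
    // Same enumeration as above, but each child DirectoryEntry is
    // disposed explicitly -- even if the loop body throws -- so the
    // underlying COM objects are released promptly.
    static void WalkChildren(string adsPath)
    {
        using (DirectoryEntry entry = new DirectoryEntry(
            adsPath, null, null, AuthenticationTypes.Secure))
        {
            foreach (DirectoryEntry child in entry.Children)
            {
                using (child)
                {
                    //do something with child
                }
            }
        }
    }
}
```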

In general, always explicitly call Dispose() on the following object types:

  • DirectoryEntry
  • SearchResultCollection (from .FindAll())
  • DirectorySearcher (if you have not explicitly set a SearchRoot)

Keep this in mind now with foreach loops as well.

 

 

Wednesday, 08 June 2005

TechEd Podcasts

Similar to Mr. Avery, I was not particularly enamored with a bunch of people running through a session, interrupting it, and taking everyone off track.  If I were a speaker, I would have told them to bugger off, but Don Box was particularly gracious with the interruption.

 

Tuesday, 07 June 2005

TechEd 2005 to this point

I have been attending TechEd 2005 this week and spending the vast majority of my time in the Server Infrastructure Cabana lounge.  I am there answering AD/MIIS questions as best as I can.  It has been an interesting experience as I will readily admit that I know relatively little about technical infrastructure as it relates to DNS and AD.  I happen to know quite a bit about programming against Active Directory, but the majority of users to this point have been asking more questions related to Active Directory topology and replication errors they are seeing than anything related to using ADSI and/or System.DirectoryServices.  When this occurs, I hand them off to some really amazing resources working there that seem able to troubleshoot just about anything in AD.  I have learned quite a bit listening to them in this way myself – just fascinating.

In terms of sessions so far, I have attended a session on ADFS (Active Directory Federation Services), which is being pitched as essentially a web single sign-on (SSO) solution that will be released with 2003 R2 (in beta now).  I previewed this technology a few months back and it has had only minor changes since.  It seems to be an interesting solution.  It does not provide all the functionality of some of the vendor options available on the market today like Netegrity, Cleartrust, Oblix, etc., but it certainly fills out a nice portion of that functionality and at a price point that is hard to argue with (it’s free with Windows 2003).  I would expect this technology to be rapidly adopted within 6 months of release – it is just that compelling.  It will be released in phases with more functionality scheduled for later.  Expect only web SSO at first, moving to smart clients and SOAP next, and, I am speculating here, even more complicated transports in the future (Citrix, RDS, anyone?).

Next, I attended Clemens Vasters’s session on asynchronous design.  He had some really interesting things to say about why and how to design for asynchronous transport.  The only complaint I had was that the demonstrations were much too quick to see what was really going on.  Luckily, he is going to be posting his sample code on his blog so I can inspect it later.  He is releasing a fairly substantial and full featured MSMQ Listener he created that greatly simplifies using ASMX and WSE providers with MSMQ in a pretty seamless and easy manner.  Good stuff, and I intend to dig deeper later this week.

Lastly, I have been very busy to this point and the situation has only been complicated by the utterly abysmal wireless access here this week.  I have yet to be to an MS event where wireless could stand up to the thousands of attendees.  It is almost completely unusable.  My hotel is actually competing with the convention this year to see which can provide crappier service – so far it is neck and neck.  You don’t realize how much you need the Internet until it is gone…

 

Thursday, 02 June 2005

Inspecting your Trusts in .NET 2.0

One of the great things about .NET 2.0 is the new System.DirectoryServices.ActiveDirectory namespace.  It contains a plethora of previously either obscure or difficult things to do using Ds* or NetApi* API calls.  I was troubleshooting a trust relationship at a client and was digging how easy this was to do in 2.0:

    public static void ShowAllTrusts()
    {
        Domain domain = Domain.GetCurrentDomain();
        int i = 0;
 
        foreach (TrustRelationshipInformation tri in domain.GetAllTrustRelationships())
        {
            Console.WriteLine("Trust #{0}", ++i);
            Console.WriteLine("=============================================");
            Console.WriteLine(
                "Source Name: {0}",
                tri.SourceName
                );
 
            Console.WriteLine(
                "Target Name: {0}",
                tri.TargetName
                );
 
            Console.WriteLine(
                "Trust Direction: {0}",
                tri.TrustDirection
                );
 
            Console.WriteLine(
                "Trust Type: {0}",
                tri.TrustType
                );
 
 
            string verifiedMsg = "true";
 
            try
            {
                domain.VerifyTrustRelationship(
                    Domain.GetDomain(
                        new DirectoryContext(
                            DirectoryContextType.Domain,
                            tri.TargetName
                            )
                        ),
                    tri.TrustDirection
                    );
            }
            catch (Exception ex)
            {
                verifiedMsg = ex.Message;
            }
 
            Console.WriteLine(
                "Verified: {0}",
                verifiedMsg        
                );
 
            Console.WriteLine("=============================================");
            Console.WriteLine();
        }
    }

That took all of 5 minutes to code and test using SnippetCompiler.  Very cool.

Friday, 13 May 2005

Limited Team System Foundation Server for MSDN Universal

Good news for all the folks that really pushed hard on Microsoft to make some form of this software available for Universal subscribers.

http://blogs.msdn.com/rickla/archive/2005/05/12/416994.aspx

Friday, 01 April 2005

Enumerating a user's groups in .NET

I have seen a number of techniques for enumerating a user's group in Active Directory and .NET.  Here is one that seems to crop up every now and then (note, I cleaned this up so at least we were calling Dispose() on our DirectoryEntrys):

        ArrayList al = new ArrayList();
        using (DirectoryEntry user = new DirectoryEntry(_adsPath, _username, _password, AuthenticationTypes.Secure))
        {
            object adsGroups = user.Invoke("Groups");
 
            foreach (object adsGroup in (IEnumerable)adsGroups)
            {
                using (DirectoryEntry group = new DirectoryEntry(adsGroup))
                {
                    al.Add(group.Name);
                }
            }
        }

As we can see, this one uses Reflection to grab the native IADsMembers interface.  We then enumerate the groups by placing each native object into a DirectoryEntry and consuming it as we wish.

What's wrong with this method?  At first glance, nothing really.  Sure, it seems a little more complex than just retrieving the 'memberOf' property from the user to begin with.  We also know that this will not expose the nested group relationships like enumerating the 'tokenGroups' attribute will.

No, the real issue with this is that we are at risk for a memory leak.  What would happen if an error occurred right after we .Invoke() and before the foreach loop finishes?  We would have left a bunch of native IADs objects in limbo.  Only the DirectoryEntry has the .Dispose() pattern that calls the appropriate Marshal.ReleaseComObject() on the native IADs object.

Developers make mistakes and this one is fraught with peril.  Don't use it.  Instead, enumerate the 'memberOf' attribute or 'tokenGroups' and avoid the issue.
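For completeness, here is a minimal sketch of the 'memberOf' alternative (the path parameter is a placeholder; note this returns only direct group memberships, not nested or primary groups):

```csharp
using System;
using System.DirectoryServices;

class MemberOfReader
{
    // Reads the direct group DNs straight from the 'memberOf'
    // attribute -- no Invoke, no native IADs objects left to leak.
    static void PrintGroups(string userAdsPath)
    {
        using (DirectoryEntry user = new DirectoryEntry(
            userAdsPath, null, null, AuthenticationTypes.Secure))
        {
            // each value is the distinguished name of a group
            foreach (object dn in user.Properties["memberOf"])
            {
                Console.WriteLine(dn);
            }
        }
    }
}
```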

Tuesday, 22 March 2005

MSDN is pushing condoms...

Let me first say this: this is not a comment on the quality of Paul DiLascia's code. I have not personally evaluated it, and likely won't (it sounds too dirty). But, you too can play with *ahem*... 'ManWrap', here on MSDN:

http://msdn.microsoft.com/msdnmag/issues/05/04/C/default.aspx

What's next... the STD/IUD library (STanDard Interface User Design)?

Thursday, 17 March 2005

NetUserChangePassword implementation in C#

As an example for someone on the newsgroups, I put together an example of how to use the NetUserChangePassword API. I thought I would share it here. It is not too terribly difficult, but still... someone might find it useful.

Download here

It includes one unit test to show how it works.

X509Certificate2 or as you should know it: X509CertificateTheOneIShouldUse

Straight from Shawn Farkas:

http://blogs.msdn.com/shawnfa/archive/2005/03/16/397154.aspx

My reason for posting this is not so much that you should care that the class has been renamed, as much as you should care that this is the class you should use for X509 Certificates going forward. If you have ever done any CryptoAPI work in v1.1, you will be glad to know that they have finally put the important bits that were missing in X509Certificate into X509Certificate2. I don't know the full details of why they did not just update the original class instead of the kinda ugly 2 - but I would have to guess something about way too many breaking changes.

I had to develop a digital signing framework that used X509 Certificates. It was immediately clear that v.1.1 versions were not going to cut it. Since it was all .NET, CAPICOM was out as well. After long hours of reading MSDN CryptoAPI documentation, it became clear that I would need to write a lot of p/invoke code to get things like Public and Private keys from the Certificate as well as interact with a vendor CSP. In a way, I am glad that this X509Certificate2 was not available, since I would never have had a chance to really dig in and learn the underlying CryptoAPI. However, from a typical developer standpoint, this new class should save you at least a couple weeks of trying to figure out exactly how to interact with a CSP and get Private and Public keys.
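As a small taste of what the new class gives you for free, here is a sketch (the file path and password are placeholders):

```csharp
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

class CertKeys
{
    // Loads a certificate from a PFX file and pulls out the key pair --
    // the kind of thing that needed raw CryptoAPI p/invoke in v1.1.
    static void ShowKeys(string pfxPath, string password)
    {
        X509Certificate2 cert = new X509Certificate2(pfxPath, password);

        // the public key is always available...
        AsymmetricAlgorithm publicKey = cert.PublicKey.Key;
        Console.WriteLine("Public key: {0}", publicKey.KeySize);

        // ...the private key only if the PFX actually contained one
        if (cert.HasPrivateKey)
        {
            AsymmetricAlgorithm privateKey = cert.PrivateKey;
            Console.WriteLine("Private key algorithm: {0}",
                privateKey.KeyExchangeAlgorithm);
        }
    }
}
```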

Thursday, 10 March 2005

MessageDialog Goodness

Jeff Kay, creator of SnippetCompiler and other useful tidbits, has recently released a customizable version of the MessageBox called MessageDialog

Providing for customizable text, sizes and # of buttons, it looks to be quite useful as usual.

Wednesday, 09 March 2005

Enumerating Token Groups (tokenGroups) in .NET

The 'tokenGroups' attribute is a calculated attribute (we must use .RefreshCache() to get it) that exists for all users in Active Directory.  It contains a collection of SIDs for each security group that the user is a member of.  The advantage of this collection is that it only contains security groups, and it contains all security groups including nested and primary groups.  The disadvantage is that it is a little bit more complicated to do anything with this attribute.

There are two methods of enumerating the tokenGroups and returning the security groups for a user.  The first method is to use DsCrackNames on the collection of SIDs and have the Win32 API return the groups in your choice of name formats.  This is a powerful and fast method, but you will need to rely on p/invoke and set up some structures.  The other method is to build an LDAP query filter and then use the DirectorySearcher to find all the groups.  This method returns a SearchResult for each group, which means you could additionally retrieve more information about each group, and it does not require any p/invoke code, so it is usually more palatable for users.

Here are the steps we would take to enumerate the groups:

1. Create a DirectoryEntry to serve as the SearchRoot for our DirectorySearcher
2. Bind to our user object with another DirectoryEntry and pull the 'tokenGroups' attribute
3. Iterate over each SID in the tokenGroups and build the LDAP filter (formatting the SID bytes correctly)
4. Search the directory with the constructed filter
5. Iterate each returned SearchResult for your information
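Steps 2 and 3 above can be sketched roughly like this (a simplified fragment, not the full sample from the download):

```csharp
using System.DirectoryServices;
using System.Text;

class TokenGroupFilter
{
    // Builds an OR filter of escaped SID bytes from 'tokenGroups';
    // the caller then runs it with a DirectorySearcher rooted at
    // the domain to get a SearchResult per security group.
    static string BuildFilter(DirectoryEntry user)
    {
        // 'tokenGroups' is constructed, so it must be loaded explicitly
        user.RefreshCache(new string[] { "tokenGroups" });

        StringBuilder sb = new StringBuilder("(|");

        foreach (byte[] sid in user.Properties["tokenGroups"])
        {
            sb.Append("(objectSid=");
            foreach (byte b in sid)
            {
                // each SID byte must be escaped as \XX for the LDAP filter
                sb.AppendFormat("\\{0:x2}", b);
            }
            sb.Append(")");
        }

        return sb.Append(")").ToString();
    }
}
```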

Here is a sample VS.NET solution in C# that demonstrates this:  Enumerate Token Groups

The code for how to do this using DsCrackNames I will leave for another post. (UPDATE: See here for DsCrackNames)


Updated 3/16/2005 - I realize that not everyone feels like digging around to find their GUID to use this sample (I initially created it for someone that only had their GUID), so I revisited this so it also accepts the user's login name as well. Download the updated example

Updated 7/13/2005 – Reader “Daniel” was kind enough to point out some errors that I had in my code.  I seem to have deleted my original code at some point and was recreating it from scratch… so I apparently forgot some very key things in the code.  My bad!  I should have tested it better.  These errors have been corrected.

Friday, 28 January 2005

Determining if a user is locked out using LDAP in .NET

Experienced ADSI users might know the following fun fact about determining if a user is locked out in Active Directory: the 'userAccountControl' attribute is not the place to look. The 'userAccountControl' is a bunch of flags that determine a lot of settings on the user (or related class type) object. Here are a list of flags associated with it:

const int UF_SCRIPT = 0x0001;
const int UF_ACCOUNTDISABLE = 0x0002;
const int UF_HOMEDIR_REQUIRED = 0x0008;
const int UF_LOCKOUT = 0x0010;
const int UF_PASSWD_NOTREQD = 0x0020;
const int UF_PASSWD_CANT_CHANGE = 0x0040;
const int UF_TEMP_DUPLICATE_ACCOUNT = 0x0100;
const int UF_NORMAL_ACCOUNT = 0x0200;
const int UF_INTERDOMAIN_TRUST_ACCOUNT = 0x0800;
const int UF_WORKSTATION_TRUST_ACCOUNT = 0x1000;
const int UF_SERVER_TRUST_ACCOUNT = 0x2000;
const int UF_DONT_EXPIRE_PASSWD = 0x10000;
const int UF_MNS_LOGON_ACCOUNT = 0x20000;

It would seem intuitive that you should use the UF_LOCKOUT and check to see if that flag was set on the 'userAccountControl'. Of course, that only works if you are using the WinNT provider. The next logical solution would be to .Invoke the ADSI 'AccountIsLocked' property using reflection. However, again, that won't work with the LDAP provider for the same reasons - internally it is using the UF_LOCKOUT flag. So the question becomes, how do I figure out if the account is locked out using the LDAP provider?

You can do this with a small calculation. You first need to inspect the user object's 'lockoutTime' attribute and determine whether it is not '0' (a value of '0' means it was locked out at some point, but has since been reset). If it is not '0', then you need to convert that value to a DateTime object, and then inspect the 'lockoutDuration' attribute on the 'domainDNS' class to find how long must pass before the account is automatically unlocked. Compare that to DateTime.Now and you can see if the account is still in the lockout period. Here is a sample:

private bool IsAccountLocked( DirectoryEntry user )
{
    //if they have a lockoutTime
    if (user.Properties.Contains("lockoutTime"))
    {
        long fileTicks = LongFromLargeInteger(user.Properties["lockoutTime"].Value);

        //check to see if it's not already unlocked
        if (fileTicks != 0)
        {
            //now check to see if it was automatically unlocked
            DateTime lockoutTime = DateTime.FromFileTime(fileTicks);

            DirectoryEntry parent = user.Parent;
            while (parent.SchemaClassName != "domainDNS")
                parent = parent.Parent;

            long durationTicks = LongFromLargeInteger(parent.Properties["lockoutDuration"].Value);
            
            return (DateTime.Now.CompareTo(lockoutTime.AddTicks(-durationTicks)) < 0);
        }
    }
    return false;
}

//decodes IADsLargeInteger objects into a FileTime format (long)
private long LongFromLargeInteger( object largeInteger )
{
    System.Type type = largeInteger.GetType();
    int highPart = (int)type.InvokeMember("HighPart",BindingFlags.GetProperty, null, largeInteger, null);
    int lowPart = (int)type.InvokeMember("LowPart",BindingFlags.GetProperty, null, largeInteger, null);

    return (long)highPart << 32 | (uint)lowPart;
}

The tricky part comes in when you need to basically recurse back to the domain object to find the 'lockoutDuration'. Of course, if you knew the 'lockoutDuration', and knew it would not change, you could skip this step and make the comparison directly.
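As an aside, one way to avoid the recursion entirely is to bind to RootDSE and read 'defaultNamingContext' to reach the domain head in one step (a sketch, assuming the user object lives in the default domain partition and serverless binding works from your machine):

```csharp
using System.DirectoryServices;

class DomainRoot
{
    // Instead of walking .Parent until we hit 'domainDNS', ask
    // RootDSE for the default naming context and bind directly.
    static DirectoryEntry GetDomainEntry()
    {
        using (DirectoryEntry rootDse = new DirectoryEntry("LDAP://RootDSE"))
        {
            string dnc = (string)rootDse.Properties["defaultNamingContext"].Value;
            return new DirectoryEntry("LDAP://" + dnc);
        }
    }
}
```

The caller owns (and should Dispose) the returned DirectoryEntry, whose 'lockoutDuration' property can then be read directly.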

Tuesday, 18 January 2005

Determining your Primary Group in Active Directory using .NET

The Primary Group in Active Directory is an interesting concept. You are assigned the 'Domain Users' group by default when your account is initially created. The Primary Group is somewhat treated differently than any other type of group in the domain.

Due to Windows 2000 Active Directory limitations, a group can hold up to 5000 members. This stems from the fact that membership is accounted for with the 'member' attribute on an AD group. Given that multi-valued attributes (MVAs) have a maximum limit of 5000 ('member' is a MVA), we can see why you typically cannot put more than 5000 users into a group. Since large domains can easily have over 5000 members, we can see that treating the Primary Group like any other type of group would not work. We would quickly run to the limit of the 'member' attribute for the Primary Group.

It was decided at some point that they would have to change how AD managed members for the Primary Group. Instead of keeping the membership on the group object itself, they would store an identifier on the user object that made it calculatable to figure out the Primary Group.

Because of the inconsistency of how the Primary Group was treated, it made it somewhat challenging to figure out what a user's Primary Group was. The WinNT: provider would enumerate all groups, but not tell you which one was the primary group. Just to be contrary, the LDAP: provider would enumerate all groups except the Primary Group. Instead, you could enumerate the 'tokenGroups' attribute, but this would only enumerate the security groups and still not tell you which one was the Primary Group either.

To help ADSI developers, Microsoft released a couple support articles that outlined 3 ways of figuring out the Primary Group.
http://support.microsoft.com/default.aspx?scid=kb;en-us;321360
and
http://support.microsoft.com/default.aspx?scid=kb;en-us;297951

These methods are still valid, but need to be adapted somewhat for .NET. The solution I will present here will be an all LDAP solution. The steps are quite simple:

1. Retrieve the user's SID in byte array format
2. Retrieve the user's Primary Group RID (Relative Identifier)
3. Overwrite the user's RID on their SID with the Primary Group RID
4. Construct a binding ADsPath and bind to the Directory.

In order:
1. Retrieve the user's SID in byte array format:
byte[] objectSid = user.Properties["objectSid"].Value as byte[];

2. Retrieve the user's Primary Group RID
//calculate the primaryGroupId
user.RefreshCache(new string[]{"primaryGroupId"});

3. Overwrite the RID portion of the user's SID with the Primary Group RID:
        private byte[] CreatePrimaryGroupSID(byte[] userSid, int primaryGroupID)
        {
            //the RID occupies the last 4 bytes of the SID in little-endian
            //byte order - which is exactly what BitConverter.GetBytes
            //produces on x86
            byte[] rid = BitConverter.GetBytes(primaryGroupID);

            //overwrite the user's RID with the Primary Group RID
            for (int i = 0; i < rid.Length; i++)
            {
                userSid[userSid.Length - rid.Length + i] = rid[i];
            }

            return userSid;
        }

4. Finally, construct a binding string from the resulting byte array and bind:
byte[] sidBytes = CreatePrimaryGroupSID(objectSid, primaryGroupId);
string adPath = String.Format("LDAP://<SID={0}>", BuildOctetString(sidBytes));
DirectoryEntry de = new DirectoryEntry(adPath, null, null, AuthenticationTypes.Secure);

        private string BuildOctetString(byte[] bytes)
        {
            StringBuilder sb = new StringBuilder();

            for(int i=0; i < bytes.Length; i++)
            {
                sb.Append(bytes[i].ToString("X2"));
            }
            return sb.ToString();
        }
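To see the byte surgery in action without touching a domain, here is a self-contained sketch using a made-up SID (S-1-5-21-1-2-3-1111) and the well-known Domain Users RID of 513. The two helpers are the same logic as above; the SID byte layout in the comments is from the binary SID format.

```csharp
using System;
using System.Text;

class PrimaryGroupSidDemo
{
    //same logic as the CreatePrimaryGroupSID helper above
    static byte[] CreatePrimaryGroupSID(byte[] userSid, int primaryGroupID)
    {
        byte[] rid = BitConverter.GetBytes(primaryGroupID);
        for (int i = 0; i < rid.Length; i++)
        {
            userSid[userSid.Length - rid.Length + i] = rid[i];
        }
        return userSid;
    }

    //same logic as the BuildOctetString helper above
    static string BuildOctetString(byte[] bytes)
    {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < bytes.Length; i++)
        {
            sb.Append(bytes[i].ToString("X2"));
        }
        return sb.ToString();
    }

    static void Main()
    {
        //made-up user SID: S-1-5-21-1-2-3-1111
        //layout: revision, sub-authority count, 6-byte authority,
        //then each 32-bit sub-authority in little-endian order
        byte[] userSid = new byte[]
        {
            0x01, 0x05,                         //revision 1, 5 sub-authorities
            0x00, 0x00, 0x00, 0x00, 0x00, 0x05, //NT authority (5)
            0x15, 0x00, 0x00, 0x00,             //21
            0x01, 0x00, 0x00, 0x00,             //1
            0x02, 0x00, 0x00, 0x00,             //2
            0x03, 0x00, 0x00, 0x00,             //3
            0x57, 0x04, 0x00, 0x00              //user RID 1111 (0x457)
        };

        //swap in the Domain Users RID (513 = 0x201); the last 8 hex
        //digits of the output become 01020000 - 513, little-endian
        byte[] groupSid = CreatePrimaryGroupSID(userSid, 513);
        Console.WriteLine("LDAP://<SID={0}>", BuildOctetString(groupSid));
    }
}
```

Binding to the resulting path gives you the Primary Group's DirectoryEntry directly, with no searching required.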

I have created a small VS.NET 2003 project to demonstrate this. You can download the sample here

Monday, 03 January 2005

When does my Password Expire?

Here is a quick and dirty way to figure out when your password (or anyone else's) will expire on the domain.

Prerequisites:
1.  You must be running this from a computer joined to the domain.
2.  You must be running this with valid domain credentials.

How does it work:

There is no attribute that directly holds when your password expires.  It is calculated from two factors: 1. when you last set your password (the 'pwdLastSet' attribute on the user), and 2. the domain's maximum password age policy (the 'maxPwdAge' attribute on the domain root).
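The arithmetic is easy to get backwards because 'maxPwdAge' is stored as a negative interval (in 100-nanosecond ticks). A tiny standalone sketch with made-up dates shows the shape of the calculation:

```csharp
using System;

class ExpiryMathDemo
{
    static void Main()
    {
        //pretend the domain policy is a 42-day maximum password age;
        //AD stores this as a negative interval
        TimeSpan maxPwdAge = TimeSpan.FromDays(-42);

        //made-up dates: password last set on 3 Jan 2005,
        //and "today" is 2 Feb 2005
        DateTime pwdLastSet = new DateTime(2005, 1, 3);
        DateTime now = new DateTime(2005, 2, 2);

        //subtracting the negative interval adds it on
        DateTime expires = pwdLastSet.Subtract(maxPwdAge);   //14 Feb 2005

        Console.WriteLine("Expires: {0}", expires);
        Console.WriteLine("Days left: {0}", expires.Subtract(now).Days); //12
    }
}
```

The real code below does exactly this, except that both values come from the directory instead of being hardcoded.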

Here is a simple command line app to demonstrate how this is done:

using System;
using System.DirectoryServices;
using System.Reflection;
class Invoker
{
    [STAThread]
    static void Main(string[] args)
    {
        try
        {
            if (args.Length != 1)
            {
                Console.WriteLine("Usage: {0} username", Environment.GetCommandLineArgs()[0]);
                return;
            }
            PasswordExpires pe = new PasswordExpires();
            Console.WriteLine("Password Policy: {0} days", 0 - pe.PasswordAge.Days);
            TimeSpan t = pe.WhenExpires(args[0]);
            if(t == TimeSpan.MaxValue)
                Console.WriteLine("{0}: Password Never Expires", args[0]);
            else if(t == TimeSpan.MinValue)
                Console.WriteLine("{0}: Password Expired", args[0]);
            else
                Console.WriteLine("Password for {0} expires in {1} days at {2}", args[0], t.Days, DateTime.Now.Add(t));
        }
        catch(Exception ex)
        {
            Console.WriteLine(ex.ToString()); //debugging info
        }
    }
}
class PasswordExpires
{
    DirectoryEntry _domain;
    TimeSpan _passwordAge = TimeSpan.MinValue;
    const int UF_DONT_EXPIRE_PASSWD = 0x10000;
    public PasswordExpires()
    {
        //bind with current credentials
        using (DirectoryEntry root = new DirectoryEntry("LDAP://rootDSE", null, null, AuthenticationTypes.Secure))
        {
            string adsPath = String.Format("LDAP://{0}", root.Properties["defaultNamingContext"][0]);
            _domain = new DirectoryEntry(adsPath, null, null, AuthenticationTypes.Secure);
        }
    }
    public TimeSpan PasswordAge
    {
        get
        {
            if(_passwordAge == TimeSpan.MinValue)
            {
                long ldate = LongFromLargeInteger(_domain.Properties["maxPwdAge"][0]);
                _passwordAge =  TimeSpan.FromTicks(ldate);
            }
            
            return _passwordAge;
        }
    }
    public TimeSpan WhenExpires(string username)
    {
        DirectorySearcher ds = new DirectorySearcher(_domain);
        ds.Filter = String.Format("(&(objectClass=user)(objectCategory=person)(sAMAccountName={0}))", username);
        SearchResult sr = FindOne(ds);
        int flags = (int)sr.Properties["userAccountControl"][0];
        if(Convert.ToBoolean(flags & UF_DONT_EXPIRE_PASSWD))
        {
            return TimeSpan.MaxValue; //password never expires
        }
        //get when they last set their password
        DateTime pwdLastSet = DateTime.FromFileTime((long)sr.Properties["pwdLastSet"][0]);
        
        if(pwdLastSet.Subtract(PasswordAge).CompareTo(DateTime.Now) > 0)
        {
            return pwdLastSet.Subtract(PasswordAge).Subtract(DateTime.Now);
        }
        else
            return TimeSpan.MinValue;  //already expired
    }
    private long LongFromLargeInteger(object largeInteger)
    {
        System.Type type = largeInteger.GetType();
        int highPart = (int)type.InvokeMember("HighPart",BindingFlags.GetProperty, null, largeInteger, null);
        int lowPart  = (int)type.InvokeMember("LowPart",BindingFlags.GetProperty, null, largeInteger, null);
            
        return (long)highPart << 32 | (uint)lowPart;
    }
    private SearchResult FindOne(DirectorySearcher searcher)
    {
        SearchResult sr = null;
        using (SearchResultCollection src = searcher.FindAll())    
        {
            if(src.Count>0)
            {
                sr = src[0];
            }
        }
        return sr;
    }
}

Wednesday, 22 December 2004

Memory Leak using DirectorySearcher .FindOne()?

I remember Joe Kaplan telling me one time that there was a memory leak when using the .FindOne() method of the DirectorySearcher. I also believe Max Vaughn from MS told me the same thing, now that I think about it. I decided to dig a little deeper and figure out what was actually wrong. It turns out that this bug will only really affect you when you are doing a lot of searches and not finding any results. That's right, I said not finding any results. A quick look in Reflector shows why:

            public SearchResult FindOne()
            {
                SearchResultCollection collection1 = this.FindAll(false);
                foreach (SearchResult result1 in collection1)
                {
                    collection1.Dispose();
                    return result1;
                }
                return null;
            }

As you can see, the .Dispose() is only called when at least one SearchResult is found. If no SearchResult is found, then the SearchResultCollection hangs onto its unmanaged resource. What's the fix? It is a pretty simple fix actually - just make sure you always call .Dispose() on your SearchResultCollection by using the .FindAll() method instead:

        public static SearchResult FindOne( DirectorySearcher ds )
        {
            SearchResult sr = null;

            using (SearchResultCollection src = ds.FindAll())
            {
                if (src.Count > 0)
                {
                    sr = src[0];
                }
            }
            return sr;
        }


Note, for those unfamiliar with C# syntax, the 'using' statement will call .Dispose() in a try/finally manner at the last brace.