Building a Desired State Configuration Infrastructure


This is the kickoff in a series of posts about building a Desired State Configuration (DSC) infrastructure. I'll be leveraging concepts I've been working on as I've built out our DSC deployment at Stack Exchange.

The High Points

I'm starting today with a general overview of what I'm trying to accomplish and why. The what and why are critical in determining the how.

The Overview


All systems have basic and general purpose roles configured and monitored for drift via Desired State Configuration.


System configuration is one of the silent killers for sysadmins (yes, I prefer sysadmin to IT Pro - deal with it). Where deployments are not automated, each system is unique, a snowflake that results from our fallibility as humans.
The more steps that require human intervention, the more potential failure points. Yes, if I make a mistake in my automation, that mistake can be replicated out. But as Deming teaches with the cycle of continuous improvement (Plan, Do, Check, Act), we can't correct a process problem until we have a stable process.

Deming Cycle

Every intervention by a human adds instability to the equation, so first we need to make the process consistent. We do that by standardizing the location(s) of human intervention.  Those touch points become the areas that we can tweak to further optimize the system.  I'm getting a bit ahead of myself though.
Let's continue to look at how organizations tend to deploy systems.  Organizations have varying levels of flexibility in how systems are built and provided for use.  The three main categories I see are:

  • Automated provisioning from a purpose built image
  • Install and configure from checklist
  • Install and configure on demand

The size of an organization has usually indicated how far its deployments are automated; larger organizations tend to have more customized and automated deployments, mainly as a matter of scale.  That is less true today.  With virtualization and (please forgive me) cloud infrastructures, even smaller organizations can have ever-increasing numbers of servers to manage, with admin-to-server ratios of 1 to hundreds being common and with the number of servers starting to overtake the client OS count.
If we aren't in a fully automated deployment environment, each server has the potential to be subtly (or not so subtly) unique.  Checklists and scripts can reduce how varied our initial configurations start out, but each server is still like a unique piece of art (or a snowflake).

Try to make more than one of me...

That's kind of appealing to sysadmins who like to think of themselves as crafters of solutions.  However, in terms of maintainability, it is a nightmare.  Every possible deviation in settings can cause problems or irregularities in operations that can be difficult to track down.  It's also much more work overall.
What we want our servers to be is like components fresh off the assembly line.

Keeping it consistent

Each server should be consistently stamped out, with minimal deviations, so that troubleshooting across like servers is more predictable.  Or, even more exciting, if you are experiencing some local problem, refreshing the OS and configuration to a known good state becomes trivial.  Building the assembly line and work centers can be time-consuming up front, but it pays off in the long haul.

My Situation

At Stack Exchange, we are a mix of these categories.  All of our OS deployments are driven by PXE boot.  For our Linux systems, we fall into the first group: we can deploy an OS and add it to our Puppet system, which configures the box for its designated purpose.  For our Windows systems, we operate out of the second and third groups.  We have a basic checklist (about 30-some items) that details the standards our systems should be configured with, but once we get to configuring a server for a specific role, it's been a bit more chaotic.  As we've migrated to Server 2012 for our web farm and SQL servers, we've begun to script out the installations for those roles, so they were somewhat automated, but only as one-time runs.
Given where we stood with our Windows deployments and the experience we had with Puppet, we looked at using Puppet with our Windows systems (like Paul Stack - podcast, video) and decided not to go that route (why is probably worthy of another post at another time).  That was around the time that DSC was starting to poke its head out from under the covers of the Server 2012 R2 preview.  Long story short, we decided to use DSC to standardize our Windows deployments and bring us parity with our Linux infrastructure in terms of configuration management.

Proposed Solution: Desired State Configuration

DSC offers us a pattern for building idempotent scripts (contained in DSC resources) and provides an engine for marshaling parameters from an external source (in my case a DSC pull server, but it could be a tool like Chef or some other configuration management product) to be executed on the local machine, as well as coordinating the availability of extra functionality (custom resources).  I'm building an environment where a deployed server can request its configuration from the pull server, reducing the number of touch points to improve consistency and velocity in server deployments.
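To make that concrete, here's a minimal sketch of what such a configuration looks like.  This is not our actual Stack Exchange configuration - the node name, role, and output path are placeholders - and it uses only the WindowsFeature and Service resources that ship in the box:

```powershell
# Minimal DSC configuration sketch; node name, feature, and output
# path are illustrative placeholders, not a real production config.
Configuration WebServerBase
{
    Node 'SERVER01'
    {
        # Built-in resource: ensure the IIS role is installed
        WindowsFeature IIS
        {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }

        # Built-in resource: keep the web service running,
        # but only after the role is in place
        Service W3SVC
        {
            Name      = 'W3SVC'
            State     = 'Running'
            DependsOn = '[WindowsFeature]IIS'
        }
    }
}

# Compile to a MOF document that the Local Configuration Manager
# can apply directly, or that a pull server can hand out to nodes
WebServerBase -OutputPath 'C:\DSC\Configs'
```

Each resource knows how to test its current state and only acts when the system has drifted, which is what makes the configuration safe to re-apply on a schedule.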
Next up, I'll talk about how I've configured my pull server, including step-by-step instructions to set one up on Server 2012 R2.

13 Responses to "Building a Desired State Configuration Infrastructure"



  3. Rich Siegel says:

    I would love to hear more about the choice of DSC ("probably worthy of another post at another time"). We are running Puppet on 100s of Windows servers, and I've built a good number of providers to do nearly all the tasks needed. We haven't had scalability or speed problems; most iterative runs take several seconds to a minute, and the full catalog on a new build takes about 7 minutes of the total build time [~35 on bare metal]. The other thing is we have really clean patterns in Puppet for Windows and Linux code living harmoniously, which has been published in some GitHub presos.
    On another point: we had talked about clustering, and I have a really naive and incomplete Ruby provider for it. I initially wrote portions of it with COM calls from Ruby, but have since moved it toward PowerShell. I would love to explore using DSC as the API and have Puppet/DSC/Chef/whatever use that common approach to managing it. There is a lot to instantiating a cluster, its storage, and its resources. I had seen your work here - interesting stuff!
    Let me know your thoughts!



