Why "Puppet vs. DSC" isn't Even a Thing

After all the DSC-related excitement this week, there have been a few online and Twitter-based discussions including Chef, Puppet, and similar solutions. Many of these discussions start off with a tone I suppose I should be used to: fanboy dissing. "Puppet already does this and is cross-platform! Why should I bother with DSC?" Those people, sadly, miss the point about as entirely as it's possible to do.

Point 1: Coolness

First, what Microsoft has accomplished with DSC is cool. Star Wars Episode V was also cool. These facts do not prevent previous things - Puppet/Chef/etc and Episode IV - from being cool as well. Something new being cool does not make other things less cool. This shouldn't be a discussion of, "Puppet did this first, so nothing else can possibly be interesting at the same time." As IT professionals, we should be looking at everything with an eye toward what it does, and what new ideas it might offer that can be applied to existing approaches.

Point 2: Switching

Have you seen the magazine ads suggesting you ditch Puppet and start using DSC? No, you have not - and you will not. If Puppet/Chef/etc is meeting your needs, keep using it. The fact that Microsoft has introduced a technology that accomplishes similar things (make no mistake, they're not the same and aren't intended to be), doesn't mean Microsoft is trying to convince you to change.

I know where people get confused on this, because in the past that's exactly what Microsoft intended to do. They're not, this time. And I'll explain why in a minute.

Point 3: DSC on Linux

Snover demonstrated a DSC Local Configuration Manager running on Linux, consuming a standard DSC MOF file, being used to set up an Apache website on the server. The underlying DSC resources were native Linux code.

This is not an attempt to convince Linux people to switch to Windows, nor is it an attempt to convince them to use DSC. Saying so is like saying, "Microsoft made PowerShell accept forward slashes as path separators in an attempt to convert Linux people.... but we're too smart for that, hahahahah!" It's idiotic. Microsoft knows you're not going to suddenly break down and switch operating systems. They may be a giant corporation that sometimes makes silly moves, but they're not dumb.

No, DSC on Linux is for Windows admins who choose to use DSC, and who want to extend that skill set to other platforms they have to manage. People who aren't, in other words, faced with a "switch" decision.

Point 4: Puppet/Chef/etc Should Use DSC

Linux is, in many many ways, a simpler OS than Windows. And I mean that in a very good way, not as a dig. Most config information comes from text files, and text files are ridiculously easy to edit. Getting a solution like Puppet to work on Linux is, from a purely technical perspective, pretty straightforward. Windows, on the other hand, is built around an enormous set of disparate APIs, meaning getting something like Chef/DSC/whatever working on Windows is not only harder, it's essentially a never-ending task.

Microsoft is pouring time and money into creating DSC resources that can, through a very simple and consistent interface, configure huge swaths of the OS. The coverage provided by DSC resources will continue to grow - exponentially, I suspect. That means Microsoft is doing a lot of work that you don't have to.
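To make that "simple and consistent interface" concrete, here's a minimal sketch of a DSC configuration. The WindowsFeature and File resources are built-in, but the node name, feature, and paths are made up for illustration:

```powershell
# A minimal DSC configuration sketch. You declare the desired state;
# the Local Configuration Manager figures out how to get there.
Configuration WebServerBaseline
{
    Node 'SERVER01'  # illustrative node name
    {
        WindowsFeature IIS
        {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }

        File WebContent
        {
            SourcePath      = '\\fileserver\webcontent'  # illustrative path
            DestinationPath = 'C:\inetpub\wwwroot'
            Recurse         = $true
            Ensure          = 'Present'
        }
    }
}

# Running the configuration compiles it - producing one MOF file per node:
WebServerBaseline -OutputPath C:\DSC
```

Note that every resource takes the same declarative shape - a name, some properties, and an Ensure - regardless of which part of the OS it touches.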

Even if you're using Puppet/Chef/etc instead of DSC, you can still piggyback on all the completely open and human-readable code that actually makes DSC work. Your recipes and modules can simply call those DSC resources directly. You're not "using" DSC, but you're snarfing its code, so that you don't have to re-invent that wheel yourself. This should make Puppet/Chef people super-happy, because their lives got easier. Yes, you'll doubtless have to write some custom stuff still, but "save me some work" should always be a good thing.
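For a sense of what that open, human-readable code looks like: a DSC resource is ultimately just a PowerShell module exporting three functions - Get, Test, and Set. A hedged skeleton follows; the "HostsEntry" resource and its logic are invented for illustration, but real resources follow exactly this three-function shape:

```powershell
# Hypothetical DSC resource skeleton (HostsEntry is an invented example).

function Get-TargetResource {
    param ([Parameter(Mandatory)] [string] $HostName)
    # Report the current state of the resource.
    $found = Select-String -Path "$env:windir\System32\drivers\etc\hosts" `
                           -Pattern $HostName -SimpleMatch -Quiet
    @{ HostName = $HostName; Ensure = if ($found) { 'Present' } else { 'Absent' } }
}

function Test-TargetResource {
    param (
        [Parameter(Mandatory)] [string] $HostName,
        [ValidateSet('Present','Absent')] [string] $Ensure = 'Present'
    )
    # Compare actual state to desired state; make no changes here.
    (Get-TargetResource -HostName $HostName).Ensure -eq $Ensure
}

function Set-TargetResource {
    param (
        [Parameter(Mandatory)] [string] $HostName,
        [ValidateSet('Present','Absent')] [string] $Ensure = 'Present'
    )
    # Only invoked when Test-TargetResource returns $false - idempotency
    # lives in that contract, not in this function.
    if ($Ensure -eq 'Present') {
        Add-Content -Path "$env:windir\System32\drivers\etc\hosts" `
                    -Value "127.0.0.1 $HostName"  # illustrative entry
    }
    else {
        # Removal logic elided for brevity.
    }
}
```

A Puppet provider or Chef recipe could call into the same resource rather than re-implementing the Windows-specific logic itself - which is exactly the piggybacking described above.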

Point 5: Tool vs. Platform

Another thing that sidetracks these discussions is folks not understanding that Puppet/Chef/etc each provide a complete solution stack. They are a management console, they are a domain-specific language, and they are a platform-level implementation. When you adopt Puppet, you adopt it from top to bottom.

DSC isn't like that.

DSC only provides the platform-level implementation. It doesn't come with the management tools you actually need in a large environment, or even in many medium-sized environments. I completely expect tools like System Center Configuration Manager, or something, to provide the management-level tooling on top of DSC at some point - but we aren't discussing System Center.

So arguing "Puppet vs. DSC" is a lot like arguing "Toyota vs. 6-cylinder engine." The argument doesn't make sense. Yes, at the end of the day, Puppet/Chef/etc and DSC are meant to accomplish very similar things, but DSC is only a piece of the picture, which leads to the most important point.

Point 6: Microsoft Did Something Neat

You can't take your Puppet scripts and push them to a Chef agent, nor can you do the reverse. Puppet/Chef/etc are, as I mentioned, fully integrated stacks - and they're proprietary stacks. "Proprietary" is not the same as "closed-source," and I realize that the languages used by these products aren't specifically proprietary. But the Puppet agent only knows how to handle Puppet scripts, and the Chef agent only knows how to read Chef scripts. That's not a dig at those products - being an integrated, proprietary stack isn't a bad thing at all.

But it's interesting that Microsoft took a different approach. Interesting in part because they're usually the ones making fully-integrated stacks, where you can only use their technology if you fully embrace their entire product line. This time, Microsoft bucked the trend and didn't go fully-integrated, proprietary stack. Microsoft did this, and the simple fact that they did is important, even if you don't want to use any of their products.

From the top down - that is, from the management side down - Microsoft isn't forcing you to use PowerShell. They're not forcing you to use Microsoft technology at all, in fact. The configuration file that goes to a managed node is a static MOF file. That's a plain-text file, as in "Managed Object Format," as in developed by the Distributed Management Task Force (DMTF). A vendor-neutral standard, in other words.
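As a hedged illustration of just how vendor-neutral that file is, here is roughly what a fragment of a compiled node MOF looks like - the instance below is typical of what the compiler emits for a built-in resource, though exact property names vary by resource and DSC version:

```
instance of MSFT_RoleResource as $MSFT_RoleResource1ref
{
    ResourceID = "[WindowsFeature]IIS";
    Name = "Web-Server";
    Ensure = "Present";
    ModuleName = "PSDesiredStateConfiguration";
};
```

Nothing in that file is PowerShell. Any tool that can emit this text can drive the DSC agent - on Windows you'd hand it to the agent with Start-DscConfiguration, but how the text got produced is entirely up to the tooling layer.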

See, Microsoft isn't pushing DSC as a fully integrated stack. DSC is just the bottom layer that accepts a configuration and implements it. Puppet Labs could absolutely design their product to turn Puppet scripts into the MOF file that DSC needs. You'd be able to completely leverage the OS-native, built-in configuration agent and all its resources, right from Puppet.

Frankly, de-coupling the administrative tooling from the underlying API should make people happy. If we're having a really professional, non-fanboy discussion about declarative configuration, I think you have to admit that Microsoft has kinda done the right thing. In a perfect world, the Puppet/Chef/etc administrative tools would let you write your configuration scripts in their domain-specific language, and then compile those to a MOF. Everyone's agents would accept the same kind of MOF, and execute the MOF using local, native resources. That approach means any OS could be managed by any tool. That's cross-platform. You'd be free to switch tools anytime you wanted, because the underlying agents would all accept the same incoming language - MOF.

I'm not saying Puppet/Chef/etc should do that. But if you're going to make an argument about cross-platform and vendor-agnostic tooling, Microsoft's approach is the right one. They've implemented a service that accepts vendor-neutral configurations (MOF), and implements them using local, native resources. You can swap out the tooling layer anytime you want to. You don't need to write PowerShell; you just need to produce a MOF.

At the End of the Day

I think the folks behind Puppet/Chef/etc totally "get" all this. I think you're probably going to see them taking steps to better leverage the work MS is doing on DSC, simply because it saves them, and their users, work. And I don't think you're going to see Microsoft suggesting you ditch Puppet in favor of DSC. That's a complete non-argument, and nobody at Microsoft even understands why people think the company wants that.

I fully recognize that there's a lot of "Microsoft vs. Linux" animosity in the world - the so-called "OS religions." I've never understood that, and I certainly am not trying to convince anyone of the relative worth of one OS over another. PowerShell.org - a community dedicated to a Microsoft product - runs on a CentOS virtual machine, which should tell you something about my total lack of loyalty when it comes to choosing the right tool for a job. If you're similarly "non-religious" about operating systems, I think DSC is worth taking a look at just to take a look at it. What's it do differently? How can you leverage that in your existing world? Are there any approaches that might be worth considering?

Part of my frustration about the whole "Puppet vs DSC" meme is that it smacks of, "my toys are shinier than your toys," which is just... well, literally childish. And it worries me that people are missing some of the above, very important, points - mainly, that Microsoft is trying really damn hard to play nicely with the other kids in the sandbox for a change. Encourage that attitude, because it benefits everyone.

Once More...

And again, I don't think Microsoft is trying to convince you to use DSC, or any other MS product, here. I'm certainly not trying to do so. I think DSC presents an opportunity for folks who already have a declarative configuration management system, strictly in terms of saving you some work in custom module authoring. And I think for folks that don't have a declarative configuration management solution, and who already have an investment in Microsoft's platform, DSC is going to be an exceptionally critical technology to master. That doesn't in any way diminish the accomplishment of the folks behind Puppet/Chef/etc. In fact, if nothing else, it further validates those products' goals. And I think it's massively interesting that Microsoft took an approach that is open to be used by those other products, rather than trying to make their own top-to-bottom stack. It's a shift in Microsoft's strategic thinking, if nothing else, and an explicit acknowledgement that the world is bigger than Redmond.

Let's at least "cheers" for that shift in attitude.

About the Author

Don Jones

Don Jones is a Windows PowerShell MVP, author of several Windows PowerShell books (and other IT books), Co-founder and President/CEO of PowerShell.org, PowerShell columnist for Microsoft TechNet Magazine, PowerShell educator, and designer/author of several Windows PowerShell courses (including Microsoft's). Power to the shell!

7 Comments

  1. Great article! As a provider author for windows, a chocolatey maintainer and a systems architect, I can tell you a few things that I have learned about configuration management. I am also probably one of the first windows devops guys. I do think that Microsoft is learning, but still far behind. Perhaps it is a communications problem or a result of conflicting visions from within their different groups. This was a common theme at a recent MS TechStravaganza we had here in NYC.

    At the heart of a good puppet or chef or ansible implementation is something that is not part of any of them. This is version control. A good configuration management system needs to leverage version control to create an immutable timeline of all configurations that ever existed. This is multifold, because it increases transparency/visibility into the configuration across a wider group of teams (enter devops). To contrast with a database-backed configuration design: a db is a horrendous place to store system configurations, even from the mere chicken-egg problem - how do I build my database server itself? It becomes a bootstrap nightmare. Version control also affords the ability to roll back the current configuration to any historic configuration. And through a VC tool like git, which makes branching cheap and easy, it allows one to create a test_new_feature branch. This feature branch keeps all my existing config the same and contains only the new changes. I can run this branch safely against a test environment, then qa, and if it passes, I can merge it into production. This dynamic-environment capability is a well-known pattern. With a DVCS I can then fork and clone as needed to bring new technical staff into the fold and work from anywhere.

    So with version control out of the way, you then need a pipeline. The pipeline is how people work: how they submit changes into the pipeline, and how those changes are reviewed, tested, and subsequently deployed. The vision with DSC is lost because it feels like Microsoft isn't sure how you should do this, or doesn't want to enter this space. The emphasis is on deployment. Except for maybe a handful of people (like Steve @ StackOverflow), I hope people aren't deploying so much, because they probably are doing so at high risk.

    Github has helped a lot around this, particularly with their service hooks and repo forking, but how testing tools like Travis, Jenkins, TeamCity, and now AppVeyor for windows fit in is still ambiguous. It isn't clear how you write tests around powershell DSC code and know it's good before pushing to 100s or 1000s of nodes. How do you lint it? How do you unwind a bad push/pull? Ruby has a strong TDD community spirit, and Powershell has Pester (which I have improved), but it has not received a nod as "the go to framework". We do use it for chocolatey. Phabricator and Gerrit have become key tools for peer review and create "safety stops" to prevent bad code from moving into prod. Testing is the second most important part of config management after version control, and people need to be engaged in how testing is conducted.

    The communities, particularly around puppet and chef have assembled these types of pipelines to solve configuration management, because one tool cannot do the job nor should it. Note I am not endorsing or pushing any of the tools, I am a firm believer of picking the best tool for your environment. And part of devops is being informed about tool selection.

    > But if you're going to make an argument about cross-platform and vendor-agnostic tooling, Microsoft's approach is the right one

    This is not proven. Ruby is a cross-platform language; PowerShell is not. A tool like puppet organizes the domain language and puts it into a catalog, which is a directed acyclic graph for ordered and non-ordered dependencies. The agent then knows how to implement the catalog through the use of providers. These providers know what to do on each system. The package provider I'm writing for One-Get differs from the provider I wrote for chocolatey, which is different from yum or apt-get, which pertain to Linux OS's.

    Providers are akin to DSC modules. And this is what I think a lot of people who haven't done the development don't get. If I am building a cluster on windows and I want to manage a cluster resource as present or absent, three pieces need to be crafted: Add-ClusterResource, Remove-ClusterResource, and Get-ClusterResource. There is no getting around making the powershell calls via a provider, a module, or whatever other nomenclature people use. It goes back to idempotency: you should never call Add-ClusterResource if Get-ClusterResource shows the resource already exists, and likewise for the inverse.

    I applaud Microsoft for getting into this space, but while they solidify the base on this declarative end of configuration management, a whole swarm of people have figured out containerization with docker, making a lot of configuration management obsolete. Shipping applications in containers like ServerAppV, but represented as code instead of sequenced, makes application delivery and its related configuration a much more trivial exercise to then move, scale, shrink or deploy. The vision is immutable servers, or as many people say, cattle not pets. ServerAppV is a topic for another thread as well, but this is a major gap in the MS stack.

    Anyway, I have to run. I enjoy your blog, speaking and articles. Good stuff and love the devops windows story and people's opinions.

  2. To quickly summarize: while Microsoft may have done something neat, the technology was already there to manage windows declaratively through already-open configuration management tools. This focus was, imho, misplaced, as now gaps exist throughout the whole pipeline, there is confusion between tooling like SCCM and DSC, and people who want containers on windows are years behind what the rest of the world is already doing en masse.

  3. Another perspective: Declarative idempotent administration isn't something you buy. It is something you achieve (and by degrees at that).
    By analogy single sign on is not something you buy. It is something you achieve. MSFT investments in Windows integration w/ Kerberos, SAML, etc. made it easier for everyone in this space (admins, identity stack vendors, etc.) to *achieve* single sign on with Windows and across platforms. MSFT investments in platform specific DSC resources and open MOF format should similarly make it easier for everyone playing in this space (admins, management stack vendors, etc.) to *achieve* increasingly declarative idempotent administration with reduced effort in the future.

  4. Rich - I enjoyed reading your post and its insights. You wrote, "I applaud Microsoft for getting into this space, but while they solidify the base on this declarative end of configuration management, a whole swarm of people have figured out containerization with docker, making a lot of configuration management obsolete". I too see this trend towards containers and wondered about how it would change configuration management (it does, fundamentally, I think). That said, what % of workloads will be able to be containerized over the next 5-10 years? I'm not sure it will be a majority--it might, but there is still a need to configure the hosts running containers, at the very least. And I suspect the ability to manage deployments of containers themselves may provide opportunities for tools. Regardless, I'm no Puppet/Chef expert, but I do know Windows. When I've looked into it in the past, I've observed that both Puppet and Chef seem relatively weak as Windows configuration solutions. Yes, they support it, but that support is generic and, as Don alludes to, tends to ignore the complexity (or richness) of a Windows system. I think the DSC integration work that Chef showed almost a year ago could provide the best of both worlds. I'm big on the right tool for the right job, and DSC could be part of that equation. As has been stated, I don't think it's an either/or decision.
    Darren

  5. "See, Microsoft isn't pushing DSC as a fully integrated stack. DSC is just the bottom layer that accepts a configuration and implements it. Puppet Labs could absolutely design their product to turn Puppet scripts into the MOF file that DSC needs. You'd be able to completely leverage the OS-native, built-in configuration agent and all its resources, right from Puppet."

    That is completely backwards! Puppet, Chef, etc. are systems which help manage entire groups of systems. For example, a chef server can be queried by a client to see what the IP address is for the SQL Server in the environment. MOFs are just descriptors for CIM objects. DSC keeps track of the state of those CIM objects. That is relatively simple. The MOF/DSC represent units of work and the tracking of the state. Chef and Puppet should make calls into the DSC system to check the state of an object and act accordingly. This would be similar to native Chef and Puppet providers. Chef and Puppet should make direct API calls into DSC without PowerShell.

    You're right that this level of abstraction is important on Windows. This is because it is a pain to write a chef script that calls into Windows objects. Since the code backing MOFs can be Powershell scripts, that means they can easily make calls into the Windows system. On Linux/Unix most configuration is done via straight text files that are easy to turn into templates. No API call is needed. This has been the way of Unix/Linux for ages. But RedHat is also working to build out CIM-based objects for configuring a Linux system. The MOFs are under the OpenLMI project.

    I find it sort of strange that Microsoft has released DSC code for Linux that starts from scratch. Instead of using LMI_Account, they implemented MSFT_nxUserResource. I am not a CIM coder, but it appears that Microsoft's object is based on 'OMI_BaseResource' and not 'CIM_Account'. If I'm reading that right, extending the CIM_Account is the more standard way to access account objects.

    Also, Microsoft does all of this over WMI, not WBEM. WBEM is the standard way to access CIM objects over a network. WMI is not a standard. It is just an interface to objects defined by a standard. So until Microsoft starts using WBEM, they should not claim that DSC is "standard".

    To sum things up: DSC is NOT standard (WMI, instead of WBEM). But DSC will *absolutely* be a requirement for sane management of Windows systems via Chef, Puppet, etc.

    • Just to clarify: WMI is CIM objects accessed over DCOM. WBEM is CIM objects accessed over WS-MAN.

      As of recent releases (Win 8 at least, maybe longer?), Windows supports WS-MAN. WS-MAN is the primary remote management technology used by PowerShell and DSC. WMI is still supported for some things, but there are also variants that use WS-MAN (see Get-WmiObject vs Get-CimInstance PowerShell cmdlets).

      So with support for WS-MAN, DSC is actually standards compliant with WBEM (at least, to my knowledge).