This past Friday, Microsoft posted a DSC Update that’s worth your attention – and some commentary.
This follows up on a previous announcement about “DSC Core,” a term which the company has wisely stopped using. I do think the original use of “DSC Core” was well-intentioned: “Core” had come to represent the company’s cross-platform efforts, a la .NET Core and PowerShell Core. But the “new DSC” has nothing to do with either of those, and so the use of “Core” was confusing.
The “new DSC” does mean that “old DSC” is going away. But, as this most recent update reveals, very little of your existing work or knowledge investment will go to waste.
The “new DSC” will consist of a rewritten Local Configuration Manager, or LCM. This should still duplicate much of the functionality of the existing LCM, but will be written in C++, enabling it to be compiled for any operating system. So, gone are the days of a .NET-based LCM for Windows and a distinct one written for Unix/Linux – we’ll have one code base.
The big announcement, for me, was that the LCM will rely on a “provider model” for running DSC resources. While the word provider is a little overused in PowerShell, this is a huge and positive step forward. The announcement indicates that the first provider will allow the new, C++-based LCM to run DSC resources written for PowerShell – which covers almost everything out there today. A future provider will enable resources written in cross-platform PowerShell Core (which, as discussed previously here, is a distinct product from Windows PowerShell), and other future providers will support resources written in C++ and Python. This is a huge deal, as it offers the potential for a much wider array of community-based resources.
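To make that concrete, here’s a minimal sketch of the kind of existing, PowerShell-based DSC configuration that first provider is meant to keep running unchanged. The configuration name, paths, and resource choices are my own illustration, not anything from the announcement:

```powershell
# A plain Windows PowerShell DSC configuration using built-in resources -
# the sort of existing investment the PowerShell provider should preserve.
Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
        File ContentDirectory {
            DestinationPath = 'C:\inetpub\wwwroot'
            Type            = 'Directory'
            Ensure          = 'Present'
        }
    }
}

# Compiling produces the same MOF documents the LCM consumes today.
WebServerBaseline -OutputPath .\Output
```

Nothing about authoring this changes; the new LCM’s provider simply becomes the thing that executes it.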
I was also impressed with the humility in Microsoft’s open source direction for the LCM. I think the company has learned from its open sourcing of PowerShell Core that “open source” means a great deal more than just publishing your code in GitHub. You’ve got to be responsive to issue posts, responsive to pull requests, and more. So the team is taking a more gradual approach to open sourcing the new LCM, which should help ensure they do it right.
The announcement mentions that, “[the new LCM] will need to be installed on systems where the current DSC platform exists today, and we will need to offer conflict detection…” and I find that notable. It means the company is still thinking about the best ways to deploy this new LCM, and they’re working through the math on, “what if you have the old one and new one installed – is it going to be raccoons in a bag clawing at each other, or no?” We’ll have to see what they come up with, but I’d personally be fine if the new LCM simply disabled itself whenever the old, original LCM was active. That would make it easy to pre-stage the new LCM without fear, and “flip it on” one day by disabling the old LCM.
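For what it’s worth, that self-disabling behavior could be as simple as a check like the following. Get-DscLocalConfigurationManager is the real cmdlet for inspecting today’s LCM; everything else here is pure speculation about a product that doesn’t exist yet:

```powershell
# Hypothetical startup check for the new LCM: stand down if the
# original LCM is present and hasn't been switched off.
$oldLcm = Get-DscLocalConfigurationManager -ErrorAction SilentlyContinue
if ($oldLcm -and $oldLcm.RefreshMode -ne 'Disabled') {
    Write-Verbose 'Original LCM is still active; new LCM standing down.'
    return
}
# ...otherwise, the new LCM would take over configuration duty here.
```

The nice property of a check like this is that “flipping on” the new LCM is just setting the old one’s RefreshMode to Disabled.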
I think by the time the new LCM is formally released as “General Availability” (as opposed to the inevitable beta releases), we’ll be looking at feature parity with the old LCM. That means I think we can expect to see it support both pull and push modes. Given that the LCM pretty much is DSC – that is, the LCM has all the brains of the technology – that should make the transition pretty easy and seamless. I think it’s important to follow the beta builds, so that if you’re seeing feature parity go awry, you can provide Microsoft with the feedback they’ll need to course-correct in time. Don’t wait until GA to bitch and moan; get in there and play with this new thing the moment you can.
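Today, push versus pull is a choice you make in the LCM’s meta-configuration, so feature parity presumably means the new LCM honors that same choice. A quick reminder of what that looks like now – the server URL and registration key are placeholders, not real values:

```powershell
# Current-style LCM meta-configuration selecting pull mode.
# RefreshMode = 'Push' is the default if you omit all of this.
[DSCLocalConfigurationManager()]
Configuration LcmPullMode {
    Node 'localhost' {
        Settings {
            RefreshMode = 'Pull'
        }
        ConfigurationRepositoryWeb PullServer {
            ServerURL       = 'https://pull.example.com:8080/PSDSCPullServer.svc'
            RegistrationKey = '(key issued by your pull server)'
        }
    }
}
```

If the beta builds can’t round-trip a meta-configuration like this, that’s exactly the kind of parity gap worth reporting early.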
There are, of course, questions not answered in this post, but I’ll hazard my own guesses. There’s no mention of Pull Server. Now that the team has enabled SQL Server for the “native” Pull Server, I honestly think we can expect to see them stop investing in that product. Their direction, as with much of Microsoft, is Azure; the pricing for running DSC in Azure Automation is so low that many organizations can, and should, simply do that. Those companies whose servers must run in a totally disconnected environment should plan to look outside of Microsoft – such as Tug, an open-source pull server replacement that’s got a ton more flexibility. Today’s Microsoft can’t be all things to all people, and if you have edge-case working conditions – like no Internet for your servers – then you’re going to find yourself a bit more on your own. And, if I’m being honest, for some of those edge cases (and even for some more mainstream cases), a full configuration management platform like Chef or Puppet may be your best bet.
The real upshot here, though, is that everything you know about DSC is still valid. Microsoft is going to be doing some heavy lifting, programming-wise, to broaden the applicability of your DSC knowledge across platforms, but I suspect you’re not going to have to do much work to “keep up” for this first phase. We’ll doubtless see some asked-for new features creep in along the way, which will be great, and hopefully we’ll see some of the minor architectural improvements that the community has been requesting for a few years now.
Overall, I think this is a bright and positive announcement for DSC fans, and I’m looking forward to seeing where the company goes next!