Want to see what a real-world, functional, production-grade DevOps environment looks like?
Look no further than Amazon Web Services' Elastic Beanstalk. It's a neat combination of their EC2 IaaS product, S3 storage, and some DevOps magic. From a working perspective, it goes something like this:
- Developer checks code into Git. A portion of this code is actually a set of Elastic Beanstalk directives, outlining changes that need to be made to the base operating environment. This can include things like setting environment variables, installing packages, and so on.
- Someone indicates that what's in the repository is ready for release. You can do this by pushing a button in your AWS console, or by making a call to AWS' REST APIs. It's pretty easy to automate this step (see the sketch after this list).
- AWS spins up virtual machines and reads the Elastic Beanstalk directives to get that environment configured the way it's supposed to be. The code is loaded from Git into the VMs. The VMs are registered with AWS' load balancer, and whatever old VMs were running are de-registered and destroyed. Poof, your app is up and running.
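To make that "push a button or call an API" step concrete, here's a minimal sketch using the AWS Tools for PowerShell module. The application, environment, bucket, and bundle names are all hypothetical; your build process would produce the real bundle.

```powershell
# A minimal sketch of the release step, assuming the AWS Tools for PowerShell
# module is installed and credentials are already configured.
Import-Module AWSPowerShell

$version = "build-{0:yyyyMMdd-HHmm}" -f (Get-Date)

# Push the zipped source bundle (code plus the Elastic Beanstalk directives) to S3
Write-S3Object -BucketName 'my-deploy-bucket' -Key "$version.zip" -File '.\app-bundle.zip'

# Register it as a new application version...
New-EBApplicationVersion -ApplicationName 'my-app' -VersionLabel $version `
    -SourceBundle_S3Bucket 'my-deploy-bucket' -SourceBundle_S3Key "$version.zip"

# ...and tell the test environment to deploy it; Elastic Beanstalk handles the VMs
Update-EBEnvironment -EnvironmentName 'my-app-test' -VersionLabel $version
```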
This model accomplishes the basic goal of DevOps, which is to shorten the path between developers and users. So where's the "Ops" role in all this? Amazon did it. Their contribution to ops was to create all the automation necessary to make these steps happen. And the beauty of this model is that it supports tiered environments. For example, the above three steps might serve to spin up a testing environment, where you then run automated tests to validate the code. If the code validates, it's pushed into a production tier - all automatically - running on a separate Elastic Beanstalk application. So the whole path from check-in to in-production is entirely automated, and the process can be performed consistently every single time.
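That "if the code validates" gate is just as scriptable. A hedged sketch, assuming Pester for the automated tests and the same hypothetical names as above:

```powershell
# Run the automated test suite against the test tier (Pester is an assumption;
# any test framework that reports pass/fail works the same way).
$version = 'build-20150322-1030'    # the version label registered in the previous sketch
$results = Invoke-Pester -Path '.\Tests' -PassThru

if ($results.FailedCount -eq 0) {
    # Promote the already-registered application version to the production environment
    Update-EBEnvironment -EnvironmentName 'my-app-prod' -VersionLabel $version
}
else {
    Write-Warning "Validation failed; $($results.FailedCount) test(s) did not pass."
}
```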
Now... what would this look like in a Windows world?
In Step 1, imagine that instead of a set of Elastic Beanstalk configuration directives - which are just text files - your developers create DSC configurations. Yes, the developers. After all, they're the ones who are coding for the environment, so the DSC configuration documents what they need the environment to look like. You might have a second DSC configuration that documents corporate standards for security, manageability, and so on. Whatever.
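For illustration, here's a minimal sketch of what such a developer-authored configuration might look like; the node parameter, Windows feature, environment variable, and folder are hypothetical stand-ins for whatever the app actually needs.

```powershell
# A sketch of a developer-owned configuration checked in alongside the code.
Configuration MyAppEnvironment {
    param([string[]]$NodeName = 'localhost')

    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node $NodeName {
        # The app needs IIS...
        WindowsFeature WebServer {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }

        # ...an environment variable it reads at runtime...
        Environment AppTier {
            Name   = 'APP_TIER'
            Value  = 'Test'
            Ensure = 'Present'
        }

        # ...and a folder to deploy into.
        File AppRoot {
            DestinationPath = 'C:\inetpub\MyApp'
            Type            = 'Directory'
            Ensure          = 'Present'
        }
    }
}
```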
Step 3 might be Microsoft Azure Pack or System Center Virtual Machine Manager, told - perhaps via an SMA automation script - to spin up the new VMs from a base OS image. The DSC configurations are run to produce a MOF, which is injected into the new VM. The developer's code is deployed to the VM. The VM is registered with DNS and perhaps a load balancer, which provide access to it.
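Here's a rough sketch of the tail end of that automation, assuming the new VM is reachable over WinRM and pushing the MOF rather than injecting it, just to keep the example simple. The file, node, and path names are hypothetical, and MyAppEnvironment is the developer configuration sketched earlier.

```powershell
# Hypothetical names throughout; assumes the provisioning step already captured
# the new VM's name and that WinRM is available.
$vmName = 'APP-SRV-001'

# Load the developer's checked-in configuration and compile a MOF for that node
. .\MyAppEnvironment.ps1
MyAppEnvironment -NodeName $vmName -OutputPath .\Mof

# Push the MOF to the new VM and apply it
Start-DscConfiguration -Path .\Mof -ComputerName $vmName -Wait -Verbose

# Deploy the developer's code (a bare-bones copy; real pipelines do more)
Copy-Item -Path .\Build\* -Destination "\\$vmName\c$\inetpub\MyApp" -Recurse -Force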
There are a couple of important details that I've glossed over a bit. Jeffrey Snover is fond of saying, "treat servers like cattle, not pets." But servers by their nature have to have a few unique pieces of information, right? Well... yes and no. For all I know, cows make up names for themselves. I just don't care. Take IP addresses, for example. You shouldn't be assigning static IP addresses to servers; your DHCP system should be highly available, fault tolerant, and set up to handle servers. As you spin up a new VM, you can obviously have it register itself with DNS, so the IP address is mapped to a hostname. And speaking of that hostname - you as a human never need to know it. Or you shouldn't. Windows will make up a host name for itself as the VM spins up, and you can - through your automation scripts - capture that host name. That lets you set up DNS CNAME records, a load balancer, or whatever else. The point is that while the server may have made up a name for itself, you don't care. Nobody will ever address that server by its host name - they'll use an abstraction, like a load-balanced name, or a CNAME, or something else. Your automation scripts handle the mapping for you. When a VM is spun down, automation de-registers the dying host's name from DNS, the load balancer, and anything else, closing the lifecycle loop.
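To make the name-mapping idea concrete, here's a small sketch assuming the DnsServer module's cmdlets; the zone, alias, server, and host names are all made up.

```powershell
# Hypothetical names; the point is that humans only ever see the stable alias.
$zone      = 'corp.example.com'
$alias     = 'myapp'          # the name people and apps actually use
$dnsServer = 'DNS01'
$newHost   = 'APP-SRV-001'    # whatever name the new VM made up for itself
$oldHost   = 'APP-SRV-000'    # the VM being retired

# Repoint the friendly alias: drop the old CNAME, then map it to the new host
Remove-DnsServerResourceRecord -ComputerName $dnsServer -ZoneName $zone `
    -RRType 'CName' -Name $alias -Force -ErrorAction SilentlyContinue
Add-DnsServerResourceRecordCName -ComputerName $dnsServer -ZoneName $zone `
    -Name $alias -HostNameAlias "$newHost.$zone"

# Clean up the dying host's own record to close the lifecycle loop
Remove-DnsServerResourceRecord -ComputerName $dnsServer -ZoneName $zone `
    -RRType 'A' -Name $oldHost -Force -ErrorAction SilentlyContinue
```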
Interestingly, you could probably do this exact model, today, with a huge number of applications in your environment. Why bother? I mean, this model makes sense in web apps where you're constantly spinning up and destroying VMs, but what about the majority of your apps that just run all the time without change? Well, this same model could spin them up in a disaster recovery scenario. Or in testing environments, which are constantly re-created to provide "clean" tests. Yes, it's a lot of investment up front to make it all work, but once it's set up it just runs itself.
And that's what DevOps looks like.
Sounds like a great plan.
For implementing it, this guy seems to have a reasonably good way to fit it into Release Management, PowerShell, and TFS build processes:
http://writeabout.net/2015/03/22/packaging-dsc-configurations-for-visual-studio-tfs-release-management-vnext/
Is Microsoft working on any extra tooling for DSC in their traditional tools, like they do for DACPAC and standard solution building?
Thom, I’ve no idea what Microsoft may or may not be working on in particular. I’d suggest that part of “DevOps” is developers stretching their muscles a bit and learning to work with the technologies that are available, rather than necessarily needing more abstractions. For example, I don’t think it’s terribly much to ask for devs to learn PowerShell’s Configuration Script syntax, solely for the purpose of specifying the environmental needs their app might have. That wouldn’t necessarily need to be a complete config – just a portion that is applicable to the application. That portion could be easily merged (via automation) into a whole-machine config (a feature Elastic Beanstalk notably lacks), used to produce a MOF, and then deployed to the target machine.
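As a very rough sketch of that merge-and-compile idea - the fragment file names and node name are hypothetical, and the fragments themselves are just DSC resource blocks stored as plain text:

```powershell
# A hedged sketch: the app fragment and the corporate baseline are both plain
# text, and the build automation stitches them into one whole-machine
# configuration before compiling the MOF.
$appFragment  = Get-Content -Raw .\MyApp.DscFragment.ps1        # what the app needs
$corpFragment = Get-Content -Raw .\CorpBaseline.DscFragment.ps1 # security/management standards

$wholeConfig = @"
Configuration WholeMachine {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node 'APP-SRV-001' {
$appFragment
$corpFragment
    }
}
"@

# Define the merged configuration in this session, then produce the MOF
Invoke-Expression $wholeConfig
WholeMachine -OutputPath .\Mof
```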
On Twitter, you’d asked about T4 support. I suppose you could certainly use a T4 template to produce that partial PowerShell script, which is what T4 is for – producing template-based code. I suppose you could use T4 to produce a MOF directly, although that’d get harder to validate and merge with other MOF bits; it’d be easier to assemble a composite configuration script and then let PowerShell produce the MOF.
But the point is that everything we actually need already exists. Microsoft’s less and less in the business of tooling and more and more in the business of platform technologies; the tooling your organization needs will probably be different than what mine needs. Part of DevOps is learning to produce your own tooling that fits your own needs – that tooling becomes part of your organization’s competitive advantages, and so it can’t exactly be off-the-shelf.
To your original point, the fellow who wrote the article is definitely working up some great tooling and methodology.