DSC ConfigurationData Blocks in a World of Cattle

As you may know, Jeffrey Snover and I have, for some time, been on a "servers are cattle, not pets" kick. Meaning, servers shouldn't be special, individualized snowflakes. They should be, in many regards, appliances. One dies, you eat it and make another. They don't have names - that you know of. They don't have IP addresses - that you know of. Oh, I mean, they have them, but you don't know them and don't care.

Anyway, one thing that came up in a recent conversation related to DSC's ConfigurationData blocks. Have a look at the MSDN documentation and tell me what you see.

Go on, I'll wait.

You see NodeName. But dammit, if servers are cattle and cattle don't have (known) names, what the deuce is NodeName all about?

Well, for one, it was a poor choice on the team's part. I'd have called it - and this is giving away the punchline - NodeRole. Imagine that your "NodeName" was "SalesAppWebServerRole." When you run your configuration script, you get a MOF named SalesAppWebServerRole.mof, right? Which you then checksum and load onto a pull server. And when you're spinning up a new server to host that role, you tell its LCM to grab the ConfigurationName "SalesAppWebServerRole."
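To make that concrete, here's a minimal sketch of a ConfigurationData block treating "NodeName" as a role. The role name, website path, and setting names are illustrative, not from the original post:

```powershell
# Hypothetical sketch: "NodeName" holds a role identifier, not a hostname.
$configData = @{
    AllNodes = @(
        @{
            NodeName    = 'SalesAppWebServerRole'   # a role, not a machine name
            WebsitePath = 'C:\inetpub\salesapp'     # an assumed role-level setting
        }
    )
}

Configuration SalesAppWebServerRole {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node $AllNodes.NodeName {
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}

# Produces .\SalesAppWebServerRole\SalesAppWebServerRole.mof
SalesAppWebServerRole -ConfigurationData $configData

# Generate the checksum file the pull server expects alongside the MOF
New-DscChecksum -Path .\SalesAppWebServerRole
```

Because the "NodeName" is the role, the compiled MOF carries the role's name, and any number of machines can pull it.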

The server, when spinning up, makes up a name for itself. Charming, right? Cows think they have names. Sweet. Don't care. It gets an IP address for itself, partially from DHCP of course, and partially by making up the other necessary IPv6 stuff (oh, and IPv6 is a thing now, so get on board).

Then, presumably, it runs to the pull server, grabs the MOF, and starts a consistency check. During which, presumably, it registers some known name with DNS or a load balancer or something. Now you know its "name"! Or the name you want to call it by, at least. Also presumably, your load balancer knows to remove or suspend the entry if the host stops responding, and to periodically scavenge stale records (remember, the node's own LCM will make sure its entry gets put back on the next consistency check run). So if the node dies and you spin up a new one, the rest of the affected infrastructure - DNS, load balancers, what have you - cleans itself up automatically (and DSC could be involved in that process, too).
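The "tell its LCM to grab that ConfigurationName" step might look something like this meta-configuration sketch, assuming a WMF 5-style pull server; the URL and registration key are placeholders:

```powershell
# A minimal LCM meta-configuration sketch. ServerURL and RegistrationKey
# are placeholders - substitute your own pull server's values.
[DSCLocalConfigurationManager()]
Configuration PullClientLCM {
    Node 'localhost' {
        Settings {
            RefreshMode       = 'Pull'
            ConfigurationMode = 'ApplyAndAutoCorrect'
        }
        ConfigurationRepositoryWeb PullServer {
            ServerURL          = 'https://pull.example.com:8080/PSDSCPullServer.svc'
            RegistrationKey    = '00000000-0000-0000-0000-000000000000'
            ConfigurationNames = @('SalesAppWebServerRole')
        }
    }
}

PullClientLCM
Set-DscLocalConfigurationManager -Path .\PullClientLCM -Verbose
```

Notice the node never states its own name anywhere here - it only declares which role's configuration it wants.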

Anyway... the point is that ConfigurationData blocks can absolutely be used for cattle farms, not just for pet shops. "NodeName" is a misleading setting, but if you think of it as a role, which could be applied to multiple actual machines, then it makes a lot more sense.

About the Author

Don Jones


Don Jones is a Windows PowerShell MVP, author of several Windows PowerShell books (and other IT books), Co-founder and President/CEO of PowerShell.org, PowerShell columnist for Microsoft TechNet Magazine, PowerShell educator, and designer/author of several Windows PowerShell courses (including Microsoft's). Power to the shell!


  1. So how does this work with certificates?
    I'm currently using a shared certificate for credential encryption, but in The DSC Book you say that we shouldn't share certificates and should list every cert thumbprint for each Node.

    So which is it?
    Sharing one cert seems to be the more "cattle-minded" approach, since you don't have to define every node that you want to configure in a single document.

    • It's a tough one. I personally have unique certs, but I also generate my allnodes doc.

      I have a "roleInformation" document that contains my cattle-based info. I merge it with a list retrieved from AD, where part of the name shows the role, and presto - a rather large AllNodes doc where servers get information listed against their role rather than their unique name.

      Gives me ultimate flexibility and keeps things rather simple.

      In my DSC configs, I don't use the NodeName property; instead, I use 'Role'.
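One way to sketch the commenter's approach: keep real node names (and per-node cert thumbprints) in AllNodes, but key the configuration off a Role property via `.Where()`. The node names, role values, and thumbprint placeholders below are all hypothetical:

```powershell
# Hypothetical merged AllNodes doc: unique nodes, unique thumbprints,
# but configuration logic keyed on Role instead of NodeName.
$configData = @{
    AllNodes = @(
        @{ NodeName = 'Web01'; Role = 'SalesAppWebServer'; Thumbprint = '<cert-thumbprint>' }
        @{ NodeName = 'Web02'; Role = 'SalesAppWebServer'; Thumbprint = '<cert-thumbprint>' }
        @{ NodeName = 'Sql01'; Role = 'SalesAppDatabase';  Thumbprint = '<cert-thumbprint>' }
    )
}

Configuration SalesApp {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    # Select nodes by role; each still compiles to its own MOF,
    # so each can keep its own encryption certificate.
    Node $AllNodes.Where({ $_.Role -eq 'SalesAppWebServer' }).NodeName {
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}
```

This keeps per-node certs (the approach recommended in The DSC Book) while still writing the configuration itself in role-shaped, cattle-minded terms.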