Category Archives: PowerShell for Admins

Community Brainstorming: PowerShell Security versus malicious code


A couple weeks ago, some malicious PowerShell code was discovered in the wild, dubbed the “Power Worm” in the Trend Micro article that originally publicised the malware. Matt Graeber has done a great analysis of the code on his blog. In the comments for that blog post, we started discussing some options for locking down PowerShell, preventing it from being used in this type of attack. (Unfortunately, Execution Policy is currently no barrier at all; PowerShell.exe can simply be launched with the -Command or -EncodedCommand parameters, bypassing ExecutionPolicy entirely.) Matt’s idea is to have PowerShell run in Constrained Language mode by default, similar to how it works on the Windows RT platform.
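To make the parenthetical concrete: Execution Policy governs the loading of script files, not commands handed directly to the engine, so either of these harmless examples runs regardless of the policy setting (which is exactly the hole malware exploits):

```powershell
# Execution Policy is not consulted for inline commands...
powershell.exe -Command "Get-Process"

# ...and even for script files it can simply be overridden per-invocation.
powershell.exe -ExecutionPolicy Bypass -File .\script.ps1
```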

This post is to engage some community discussion on the topic; please do so in this thread: http://powershell.org/wp/forums/topic/discussion-community-brainstorming-powershell-security-versus-malicious-code/

What are your thoughts on this topic? Do you feel that PowerShell.exe needs some additional security features to try to prevent it from being used in this sort of malware? What impact would these ideas have on your normal, legitimate uses of PowerShell (if any)? How would you suggest minimizing that impact while still making it more difficult for PowerShell malware to execute?

Cmdlets or Advanced Functions?


I’ve posted a poll to get a feel for the PowerShell community’s preference on this. Compiled Cmdlets offer much better performance than the equivalent PowerShell advanced function, which can be a very valuable thing if you need to process large sets of data in your scripts. The other side of that coin is that in order to make changes or review the code of a Cmdlet, you need to start working with C# (and likely Visual Studio as well.) Which do you feel is more important when downloading a module: performance, or sticking with familiar PowerShell code?

The poll is over on my blog.

New tools in my toolbox!


Much like a top mechanic, I keep a well-organized toolbox filled with up-to-date, high quality tools, ready to tackle any management, troubleshooting or automation project of the day. With the recent release of SAPIEN Technologies' new 2014 lineup, I have some new tools and updates.

Wanna see what's in my toolbox? I'm happy to show you around on one condition: I'm interested in what you have in your toolbox, so share and we can discuss!

As an admin working heavily with products such as Microsoft Exchange, Exchange Online, System Center and others, I find that I need many different tools depending on the management task at hand. My Windows 8.1 desktop has everything neatly arranged on the taskbar so the right tool is always a click away, but I also have my own secret aliases to launch a variety of my tools from the Windows PowerShell Console – including Microsoft Word just in case I feel the urge to write a blog about something new I’ve learned. So, here are some of the tools in my toolbox and how I use them.
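As a sketch of what those console shortcuts might look like (these profile entries are hypothetical illustrations, not my actual aliases):

```powershell
# Hypothetical profile entries: launch common tools straight from the Console.
function word { Start-Process winword.exe }            # the "urge to blog" shortcut
function bing { Start-Process iexplore.exe 'http://www.bing.com' }
Set-Alias np notepad.exe                               # quick-and-dirty editor
```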

Windows PowerShell Console

I have friends, colleagues, and several students who laugh when I open the Windows PowerShell Console to launch my browser. I admit that placing a shortcut on my desktop or taskbar is probably a better approach, however I'm almost always in the Console. I have the Console open to “check” on things, run reports, manage or troubleshoot a problem, basically using the Console as the interactive tool it was designed to be. I can easily get help on a cmdlet, launch additional tools, and run my scripts. So, when you see me open the Console and type:

PS> Start iexplore www.bing.com

It’s “ok” to laugh; it’s the habit of always having the Console open and at my fingertips. But the Console is not the only tool in my toolbox. I need to automate and build reusable solutions, and the Console just isn’t the place for that.

Notepad

Yes, I use Notepad from time to time. Normally it’s for something simple and short, like modifying my profile or quickly copying a one-liner from the Console. For me, this is an old habit from years of scripting, when I would walk up to a computer and there was nothing else installed. Many of us that started out with PowerShell 1.0 continued to hone our “Notepad” skills. For new admins starting with PowerShell, Notepad is probably not something you need in your toolbox; we have the built-in ISE now, and it makes Notepad look like an old rusty screwdriver.

Windows PowerShell ISE

Let’s face it, the ISE is great and I use it all the time. Syntax highlighting, a GUI console, easy-to-use cut-and-paste operations to make a new script – who wouldn’t use this over Notepad? The ISE is a great tool for working out an idea, learning and creating a new one-liner, building a function, or writing out some automation. Everyone needs this as part of their toolbox.

But wait…there’s more…

My toolbox doesn’t stop with the ISE. Many of my projects require working on modules of cmdlets and longer, more complicated automation scripts, and sometimes I need features (like packaging and deployment) that the ISE doesn’t provide. Does this make the ISE bad? NO! The ISE is great for what it is designed to do; that’s why I have a toolbox of many tools and not just a hammer.

Many of my friends are professional developers and rely on professional development packages like Microsoft Visual Studio and Apple Xcode. The reason is the added tools and features that help them accomplish their tasks. As a PowerShell admin working on more and more complicated solutions, I too need additional tools. While I may not need Visual Studio yet, I could use the help from a professional development package.

SAPIEN PrimalScript 2014

Disclaimer: I’ve used PrimalScript for a very long time and I admit I’m a fan of SAPIEN Technologies. They have earned my respect over the years by providing me with high quality tools that make my life easier. It’s that simple. PrimalScript supports 50 different parsing languages – so whether it’s a PowerShell script, Perl, AutoIt, or Java, I’m equipped. You might not need to work with a variety of different languages, but there is much more to this product that may fit your needs, such as easy-to-use script signing, .exe creation, and packaging tools.

I’ve been using the new release version, and I personally think that SAPIEN has hit the mark once again. It costs much less than Visual Studio, and the tools are designed for many administrative tasks, which means it fits me. I could spend pages explaining why I use PrimalScript, but is it right for you? I don’t know – that’s why you should check out the webpage and see. Remember, if you decide to try out the evaluation version, let me know what you think and whether you decide to add it to your toolbox.

SAPIEN PowerShell Studio 2014

From time to time I’ve found myself needing to create a graphical tool to help support an admin or helpdesk. Often these are simple graphical tools, manipulating some user properties of a mailbox or in Active Directory. While I encourage helpdesk folks to learn PowerShell, let’s face it: in most cases it’s just much easier for them to use a graphical tool. This is why PowerShell Studio is in my toolbox. It’s really like having the great graphical creation tools from Visual Studio in an affordable development environment using PowerShell.

Again, this may or may not be a business challenge you face, so take a look at the website for more information and take the evaluation out for a spin. If the only language you will need at your office is PowerShell, then this might be the perfect tool for you. I’m curious so let me know.

The Glue in my toolbox – SAPIEN VersionRecall 2014

I’ve written here in the past about VersionRecall, a quick, simple version control program that keeps versions of your scripts for easy comparison and recovery. I don’t have the hardware/software for Microsoft Team Foundation Server, and I really don’t need the features that TFS provides. It’s a great product, and if you have TFS or something like it, then you already have your solution, but I always wanted something that could provide automatic version control and was super simple and easy to use. That’s why it’s the glue in my toolbox, keeping all my scripts versioned and backed up, so the next time I’m wondering what change broke my function, I can easily make a comparison.

Again, you can always try it out for yourself on their website.

Microsoft Visual Studio

This tool is in my toolbox for an entirely different reason than the others: education. When students or colleagues notice I have Visual Studio, they usually ask if I’m a developer. Well, to get better at automation with PowerShell, I’m becoming one. No, I’m not leaving PowerShell for the lands of C# – it’s simply that I’m learning more about the development process, skills, and deeper knowledge to help improve my abilities. These improvements directly impact my job capabilities, so the investment in Visual Studio has been worth it.

I do want to mention that if you are a developer who wants to work with PowerShell and you have Visual Studio – which I’m sure you do – check out PowerShell MVP Adam Driscoll’s PowerShell Tools for Visual Studio. I have them loaded into my copy of VS and I really like what he has done. You might find it helpful, so check out his page here.

Closing my toolbox

So that’s the short tour of my toolbox and how I use the tools inside. So, what’s in your toolbox? What do you like to have at your fingertips? We all have slightly different jobs with different requirements, but the discussion can help us learn from each other. There is no right or wrong answer!

Cheers!

Jason

We Want Your DSC Resource Wish List!


What sorts of things would you want to configure via DSC that don’t already have a resource?

NB: Focusing on the core Windows OS and its components only; Exchange, SharePoint, SQL Server, and other products are off the table for this discussion.

For example, I want a “log file rotator” resource, that lets me specify a log file folder, an archive folder, and a pair of dates. Files older than one date are moved from the log folder to the archive folder; archived files older than the second date are deleted.
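In plain PowerShell (before anyone wraps it as a proper resource), that rotation logic might look something like this; the paths and retention windows are placeholders:

```powershell
# Hypothetical sketch of the log-rotator logic described above.
$archiveAfter = (Get-Date).AddDays(-7)    # move logs older than this
$deleteAfter  = (Get-Date).AddDays(-30)   # delete archives older than this

# Move aging logs from the log folder to the archive folder...
Get-ChildItem -Path C:\Logs -File |
    Where-Object { $_.LastWriteTime -lt $archiveAfter } |
    Move-Item -Destination C:\LogArchive

# ...and purge archived files past the second cutoff.
Get-ChildItem -Path C:\LogArchive -File |
    Where-Object { $_.LastWriteTime -lt $deleteAfter } |
    Remove-Item
```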

I’d also like a File Permissions resource. Specify a folder or file, optional recursion, and a set of access control entries (in plain English terms), and it’ll make sure the permissions stay that way.

Maybe also a User Home Folder resource, which would (a) ensure a folder exists for a given set of user accounts, and (b) ensures a set of “template” permissions, so that each individual user has the rights to their folder, plus rights given to global users like admins.

What resources would YOU like to have to ease configuration and maintenance in YOUR environment? Drop a comment!

My DSC Demo-Class Setup Routine


I think I’ve gotten my DSC classroom and demo setup ready. Understand that this isn’t meant to be production-friendly – it doesn’t automate some stuff because I want to cover that stuff in class by walking through it. But, I thought I’d share.

I’ve basically made an ISO that I can carry into class, attach to a Win2012R2 VM and a Win81 VM, and run students through. The server VM is a DC in “company.pri” domain, and the client VM belongs to that domain.

In the root of the ISO are these scripts: ISO_Root (unzip that). Students basically just open PowerShell, set the execution policy to RemoteSigned or Unrestricted, and then run SetupLab -DVD D:, replacing “D:” with the drive letter of the VM’s optical drive. The script isn’t super-intelligent since I demo it at the same time; it needs the colon after the drive letter.

In a folder called DSC_Modules, I add the following DSC modules (unzipped): xActiveDirectory, xComputerManagement, xDscDiagnostics, xDscResourceDesigner, xNetworking, xPSDesiredStateConfiguration_1.1, xSmbShare, xSqlPs, xWebAdministration.

In a folder called DSC_Pull_Examples, I include these scripts: DSC_Pull_Examples (unzip that).

In a folder called eBooks, I include these files: eBooks (unzip that). Those get used in a lot of the demos I do, so I have the lab setup scripts copy over some script modules.

In a folder called Help, I have a file called Help.zip. This contains everything downloaded by the Save-Help command in PowerShell. The Setup script unzips this into the VM and then runs Update-Help against it, so the VM doesn’t need to be Internet-connected.
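The two commands behind that step look roughly like this (the paths are placeholders for wherever you stage the help files):

```powershell
# On an Internet-connected machine: download updatable help for later use.
Save-Help -DestinationPath C:\Help

# Inside the (offline) VM, after unzipping Help.zip to the same layout:
Update-Help -SourcePath C:\Help -Force
```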

In a folder called Hotfix, I have the Windows8.1-KB2883200-x64.msu hot fix installer. I include the 32-bit version also, just in case, but my script doesn’t use it.

In a folder called Installers, I have installers for PrimalScript, PowerShell Studio, and SQL Server Express with Advanced Services. Again, those get used a lot in my classes, but the setup script doesn’t rely on them.

Finally, in a folder called sxs, I have the contents of the Windows 8.1 installation media’s \Sources\sxs folder. Some of the things my setup script does – like adding .NET Framework 3.5 so SQL Server 2012 will work – rely on features that aren’t in a Win8.1 VM, normally. Because I don’t want to rely on the Internet, I include this source so I can install new features from it.

This is all pretty specific to the way I run classes, but if there’s any use you can make of it, feel free.

PowerShell and System.Nullable<T>


While helping to answer a question about Exchange cmdlets today, I came across something interesting, which doesn’t seem to be very well documented.

A little background, first: in the .NET Framework (starting in Version 2.0), there’s a Generic type called System.Nullable<T>. The purpose of this type is to allow you to assign a value of null to Value types (structs, integers, booleans, etc), which are normally not allowed to be null in a .NET application. The Nullable structure consists of two properties: HasValue (a Boolean), and Value (the underlying value type, such as an integer, struct, etc).

A C# method which accepts a Nullable type might look something like this:

int? Multiply(int? operand1, int? operand2)
{
    if (!operand1.HasValue || !operand2.HasValue) { return null; }

    return operand1.Value * operand2.Value;
}

("int?" is C# shorthand for System.Nullable<int> .)

PowerShell appears to do something helpful, though potentially unexpected, when it comes across an instance of System.Nullable: it evaluates to either $null or an object of the underlying type for you, without the need (or the ability) to ever access the HasValue or the Value properties of the Nullable structure yourself:

$variable = [Nullable[int]] 10

$variable.GetType().FullName   # System.Int32

If you assign $null to the Nullable variable instead, the $variable.GetType() line will produce a “You cannot call a method on a null-valued expression” error. You never see the actual System.Nullable structure in your PowerShell code.

What does this have to do with Exchange? Some of the Exchange cmdlets return objects that have public Nullable properties, such as MoveRequestStatistics.BytesTransferred. Going through the MSDN documentation on these classes, you might expect to have to do something like $_.BytesTransferred.Value.ToMB() to get at the ByteQuantifiedSize.ToMB() method, but that won’t work. $_.BytesTransferred will either be $null, or it will be an instance of the ByteQuantifiedSize structure; there is no “Value” property in either case. After checking for $null, you’d just do this: $_.BytesTransferred.ToMB()
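Putting that together, a pipeline like the following (a hypothetical sketch that assumes the Exchange cmdlets are loaded) shows the pattern:

```powershell
# PowerShell has already unwrapped the Nullable for us; just check for $null.
Get-MoveRequest | Get-MoveRequestStatistics | ForEach-Object {
    if ($_.BytesTransferred -ne $null) {
        $_.BytesTransferred.ToMB()   # works directly; there is no .Value property
    }
}
```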

Building Desired State Configuration Custom Resources


Now that we’ve suitably rested, let’s get back to working with Desired State Configuration. There are some basic features to work with that ship by default, and the PowerShell team has been blogging some additional resources, but in order to do some really interesting things with DSC, we’ll need to create our own resources.

The High Points

The DSC Resource Structure

At its most basic, a DSC resource is a PowerShell module. These modules are augmented by a schema.mof file (we’ll get into that more in a minute or two), and they expose three main functions: Get-TargetResource, Set-TargetResource, and Test-TargetResource. All three functions should share the same set of parameters.

Test-TargetResource

Test-TargetResource validates whether your resource is currently in the desired state based on the parameters provided.  This function returns a boolean, $true if the resource is in the state described or $false if not.

Set-TargetResource

Set-TargetResource is the workhorse in this module.  This is what will get things into the correct state.  The convention is to support one parameter called Ensure that can take two values, “Present” or “Absent” to describe whether or not a resource should be applied or removed as described.

(Here’s a little trick: if you break your Test-TargetResource logic into discrete functions, you can use those functions to run only the portions of Set-TargetResource that you need to!)

Get-TargetResource

This is currently the least useful of the commands, but if experience has taught me anything, it’ll likely have a growing use case over time.

Get-TargetResource returns the current state of the resource, returning a hash table of properties matching the parameters supplied to the command.
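A minimal skeleton of the three functions might look like this; the Name and Ensure parameters are illustrative, but the function names and return types are fixed by DSC:

```powershell
# Sketch of a resource module (a .psm1 under DSCResources).
function Get-TargetResource {
    param ([Parameter(Mandatory)] [string] $Name)
    # Return a hashtable describing the resource's current state.
    return @{ Name = $Name; Ensure = 'Present' }
}

function Test-TargetResource {
    param (
        [Parameter(Mandatory)] [string] $Name,
        [ValidateSet('Present','Absent')] [string] $Ensure = 'Present'
    )
    # Return $true only if the resource already matches the desired state.
    return $false
}

function Set-TargetResource {
    param (
        [Parameter(Mandatory)] [string] $Name,
        [ValidateSet('Present','Absent')] [string] $Ensure = 'Present'
    )
    # Do the work: configure when Ensure is 'Present', remove when 'Absent'.
}

Export-ModuleMember -Function *-TargetResource
```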

Exporting Commands

This module should explicitly export these commands via either Export-ModuleMember or a module manifest.  If you don’t, Import-DscResource will have trouble loading the resources when you try to generate a configuration (it’s not a problem for running a configuration, just the generation part).

The Managed Object Format (MOF) Schema

The last piece of the DSC Resource is a schema file that maps the parameters for the command to a CIM class that can be registered in WMI.  This allows us to serialize the configuration parameters to a standards-based format and allows the Local Configuration Manager to marshal the parameters back to call the PowerShell functions for the phase that the LCM is in.  This file is named modulename.schema.mof.

There is no real reason to write a schema.mof file by hand; both the DSC Resource Designer and my New-MofFile function can help generate that file. The one key thing to be aware of in the schema.mof is that there is an attribute at the top of each MOF class that denotes a friendly name, which is the identifier you will use in a configuration to specify a resource.

[ClassVersion("1.0.0"), FriendlyName("Pagefile")]
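For context, that attribute sits atop a class whose properties mirror the function parameters; a hypothetical Pagefile schema (the class and property names here are illustrative) might look like:

```
[ClassVersion("1.0.0"), FriendlyName("Pagefile")]
class StackExchange_Pagefile : OMI_BaseResource
{
    [Key] string Name;
    [Write, ValueMap{"Present","Absent"}, Values{"Present","Absent"}] string Ensure;
};
```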

How To Structure a Module With Resources

To get a good idea of the resource structure, we can look at the StackExchangeResources module in the PowerShell.Org GitHub repository. There is a base module – StackExchangeResources – which has a module metadata file (required; you’ll see why in a minute). In that module, we need a folder named DSCResources. Our custom resource will be placed under that folder.

The reason we need a module metadata file for the base module is that, when resources from that module are used in a configuration, the generated configuration MOF files will reference the version of the base module (and that specific version is required on the node where the resource will be applied).
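On disk, that layout looks roughly like this (using the hypothetical Pagefile resource as the example):

```
StackExchangeResources\
    StackExchangeResources.psd1          # module manifest (required)
    DSCResources\
        StackExchange_Pagefile\
            StackExchange_Pagefile.psm1
            StackExchange_Pagefile.schema.mof
```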

Next up, we’ll talk about how we package our resources to be distributed by a pull server.

The DSC Conversation Continues


Some lovely conversation on DSC over on Reddit… with some comments I wanted to offer an opinion on. From what I’ve seen, these are very common sentiments, and they definitely deserve… not argument or disagreement, but perhaps an alternate viewpoint. I’m not suggesting the commenters are wrong – but that maybe they’re not considering the entire picture.

Certainly if you work with a superset of MS OSs (i.e. you do Linux also), then Puppet or something like it seems like a no brainer. In fact, that is what we’re doing now. Puppet has powershell modules you can install for instance. Personally, I still feel like Powershell is overrated except for small snippets of that’s how something is exposed. Puppet can run powershell commands. AutoIT can run powershell commands… I just don’t see value in Powershell today.

The point is that, until PowerShell, there were no PowerShell commands. Microsoft was incredibly inconsistent about providing automation-friendly commands of any kind. They could have gone down the path of building command-line tools for Cmd.exe; they didn’t. The point of PowerShell is that Microsoft forced themselves to build commands. Now, if you run those from AutoIt, or Puppet, or whatever else – that’s cool. PowerShell is an API, not a tool. Whatever tool you use to access that API is just dandy. Without the API, the tools are useless.

As to DSC – I’m really confused. Why is this separate from Group Policy again? Why is it better? Or is MS giving up on Group Policy as needing a total re-write?

The advantage of Group Policy over DSC, today, is that GP has richer ability to target computers based on OU membership, WMI criteria, etc. Today, DSC targeting isn’t that flexible. On the other hand, GP is extremely difficult to extend, since client extensions are native code. GP was built to manage the registry, although it’s been extended to do more. DSC is built to do whatever PowerShell (and, via CIM, native code) can touch. My opinion? Yeah, DSC will obviate GP over time. Not instantly.

Specifically, as I’ve been rolling out Puppet across Windows and Linux, I see that in some ways, it brings the computer GPO aspect to Linux, and duplicates it a bit on Windows.
Anyway, I won’t be surprised to see someone start writing DSC modules in Puppet, because you’ll want your config management to work across your platforms. And MS is kind of late to the game here – many many people have lots of knowledge already in Puppet, Chef etc…

The guys on the PowerShell team love Chef and Puppet. I think you’re confusing “api” and “tool.” There are two pieces to DSC: Piece one is the ability of PowerShell to read a configuration script and produce a MOF. Piece two is the ability of a Windows computer to receive that MOF and reconfigure itself accordingly. Any tool can do piece one. Use Puppet to produce the MOF. Use Puppet to control which MOFs get sent where. That’s the intent. But Microsoft takes a big burden off the Puppet developers by having Windows know what to do with the MOF. Yeah, MS is late to the game. No question. But they’re joining the game, not reinventing it. What they’re doing works with what everyone else is already doing.

I would personally carry the sentiment even further and say that investing the bulk of your effort in DSC over something like Puppet would be needlessly tying your own hands. Why focus on something that’s platform specific when there is a good cross-platform alternative. Don’t put all your eggs in one basket as it were.

Wrong. It isn’t an either-or thing. DSC’s introduction at TechEd 2013 included a demo of Puppet (or was it Chef?) being used to send configurations to Windows – much more easily, because with DSC, Windows natively knew what to do with them. If you’ve got tooling like Puppet, use it. DSC is just making Windows work better with it. The whole point of DSC is that it plays the cross-platform game everyone else has already been playing. 

Purely on the Windows side, the need to focus on DSC is more about developing the DSC resources you need, so that you can send a MOF (from Puppet, say) to a Windows computer, and that Windows computer will know how to configure everything you need configured. Microsoft will continue to produce resources for core OS and server application stuff; any LOB stuff is what you’d be focusing on.

Heck, even in a pure-Windows environment, with cross-platform off the table, Puppet provides tooling that DSC does not. You’re going to need those tools, whether it’s Puppet, some future System Center thing, or whatever. DSC is a mid-level API, not a tool.

Configuration managment does seem to be the future — I just don’t agree completely with the author’s point of a view that it will have to be DSC.

On Windows, DSC will be the underlying API that your configuration management tool talks to. DSC isn’t a configuration management tool. DSC bridges the gap between a text-based MOF and the bajillion proprietary protocols MS uses internally in their products. Remember, on Linux, it’s easier – everything already lives in a text file of some kind, right (oversimplifying, I know, but still)? In Windows, config information lives everyplace; DSC’s main job is to bridge the gap. DSC doesn’t provide management of what configuration goes where; it just provides the implementation mechanism. In PowerShell, there’s a primitive ability to write configurations, because MS has to give you something, but yeah… I think most organizations would benefit from good tooling atop that.

I think this entire discussion is why more people need to start learning (not necessarily using) DSC if you have Windows in your environment. Find out what it is, what it isn’t, and how it’ll play into the other efforts you’ve got underway. There’s a ton of misconception about what it is and where it’s meant to fit in. When I say, “if you’re not learning DSC, you’re screwed,” I don’t mean, “if you’re not using DSC.” I mean learning. Because if you’re not learning it, you’re going to be subject to the same misconceptions about it. You end up spending a lot of time reinventing what it’s willing to do – and what it’s willing to do in conjunction with your existing tools.


The DSC Opportunity for ISVs


Desired State Configuration offers a number of immediate opportunities for independent software vendors (ISVs) who are smart enough to jump on board now. DSC currently suffers from a marked lack of tooling. That’s partially deliberate; MS obviously needs to deliver the functionality, and they may well rely on third parties or the System Center team to build tools on top of that functionality. But let’s explore some of the immediate opportunities.

Change Control and Versioning. This should be pretty easy. We basically need a way to “check in” a new DSC configuration, possibly have it go through an approvals workflow, and then deploy it. In more detail, I’d want to be able to submit a configuration script to this tool. It would run the config, generate a MOF, and deploy it to a “lab” pull server location. I could then verify its functionality, and “approve” it to deploy the MOF to a production pull server. Deployment would include creating the necessary checksum file. Obviously, rollback capability to a previous version would be nice.
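For reference, the “run the config, generate a MOF” step such a tool would automate is a one-liner today (the configuration, node name, and output path here are placeholders):

```powershell
# Define a configuration, then run it to emit a per-node MOF.
Configuration LabConfig {
    Node 'SERVER01' {
        WindowsFeature Backup {
            Name   = 'Windows-Server-Backup'
            Ensure = 'Present'
        }
    }
}

LabConfig -OutputPath C:\DSC\Staging   # writes C:\DSC\Staging\SERVER01.mof
```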

Configuration Consolidation. Natively, DSC requires me to specify the nodes I want to push a configuration to. I’d like to see a tool that lets me create server lists somewhat graphically, organizing things so that a single server might appear in a “domain controllers” list, a “New York servers” list, and a “Win2012R2” list. I could target configurations at each list, and the tool would combine those configurations to create the appropriate one for each node based on its “folder memberships.” That might be done through composite resources. This makes DSC work a bit like GPO, with this tool doing the work of combining configurations into a single one per node.

DSC Studio. Using the underlying DSC Resource Kit and Resource Designer for functionality, give me an IDE that lets me graphically design a resource (specify properties) and then spit out the schema MOF and skeleton PSM1 file. This could probably be a very simple PowerShell ISE add-on, in fact.

Node management. In a pull server environment, give me a tool that lets me group servers. The tool should modify the LCM on each group, so that each member of the group has the same DSC configuration ID. That way, they’re all pulling the correct MOF from the pull server. Otherwise, managing GUIDs gets out of hand pretty quickly – I can see a lot of Excel spreadsheets.
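The per-node piece such a tool would manage is the LCM meta-configuration; in the current (WMF 4.0-era) syntax that looks roughly like this, with the GUID and URL as placeholders:

```powershell
# Point a node's LCM at a pull server using a configuration ID.
Configuration PullClient {
    Node 'localhost' {
        LocalConfigurationManager {
            ConfigurationID           = '11111111-2222-3333-4444-555555555555'
            RefreshMode               = 'Pull'
            DownloadManagerName       = 'WebDownloadManager'
            DownloadManagerCustomData = @{
                ServerUrl               = 'http://pullserver:8080/PSDSCPullServer.svc'
                AllowUnsecureConnection = 'True'
            }
        }
    }
}

PullClient -OutputPath C:\DSC\LCM
Set-DscLocalConfigurationManager -Path C:\DSC\LCM
```

A grouping tool would simply stamp the same ConfigurationID onto every member of a group, which is exactly the GUID bookkeeping that otherwise ends up in Excel.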

Resources. There are obviously a ton of resources to be written. This might be a bit of a bad call for an ISV, as you never know what MS is going to release resources for. Now that MS has built so many PowerShell cmdlets, building resources on top of them gets pretty straightforward. They’ve pumped out two waves of resources pretty fast already.

In short, I think there’s a big opportunity for a smart company. It’s a matter of seeing the “holes” in the technology, which currently focus mainly on management, and filling them in.


DSC: Must-Have or Just Nice-To-Have?


On a recent PowerScripting Podcast episode, I went off on a bit of a career-oriented rant, and amongst other things mentioned something to the effect of, “if you’re not learning DSC now, you’re screwed.” It hopefully goes without saying that my comment applies to folks working in environments that use Microsoft server products; obviously, in an all-Linux shop you’re pretty safe not knowing Microsoft’s technologies :).

Some discussion on Twitter ensued, a place I hate for discussions because 140 characters leaves you enough room to be misunderstood and paraphrased, but not enough room to articulate your perspective. I wanted to follow up on the rant a bit and, by doing so here, hopefully engender a more detailed discussion.

One comment – and this is a nice, succinct one to start with: “Is it a useful tool? Yes; is it the tool that makes or breaks a sysadmin? No.” Couldn’t disagree more. Maybe it won’t make or break you today, but in a few years – absolutely. Unless you’re stuck in a company that’s going to just run Win2008 forever. So if it’s going to be an inevitable part of your future, then you are, in fact, more and more screwed the longer you ignore it. It’s like the poor NetWare guys who ignored TCP/IP. They were screwed, in the end, and had to hustle to catch up. I hate playing catch-up; in my mind “screwed” is what you are whenever you’re playing “catch up.” So maybe knowing my definition of “screwed” will help the discussion a bit!

Another comment – and a good one – was, “[PowerShell] is a must… but I live in a multi-platform world where it is just a part, not a definer, of the whole.” Excellent point, but if you must manage Microsoft technologies, then DSC is going to be a part of your life. Perhaps it’ll be DSC “as managed by ___” cross-platform solution, but DSC is going to be the underlying API. If you’re comfortable being insulated from underlying APIs by tools, fine – but you’ll never be as effective as you would be if you knew those tools. Point being, in a multi-platform environment, DSC is not all you need to know, but you must know it (or begin to) if that environment includes Microsoft server products. Could you manage your Microsoft elements without using DSC? Sure. You can also drive a car using mind control, I’m told, but it’s not the most effective way of doing so. Folks are quite welcome to disagree, but I do firmly believe that any environment would benefit from DSC. Time will tell if I’m right or wrong there, but personally – and this is very much a “this is how I proceed with my life” thing – I would rather be on the forefront of something than turn around in 5 years and realize I should have been there.

Keep in mind that, 6 years ago, folks felt free to ignore PowerShell. Many now wish they hadn’t. It was a lot easier to get into PowerShell in v1, and then “keep up” with new versions, than to dive in now.

Why do I think DSC will be the same? Because DSC is the ultimate outcome of PowerShell. DSC is what PowerShell has been building toward. I think this is perhaps a perspective that other folks don’t share. To them, DSC is “just a tool.” It isn’t doing anything they couldn’t have done all along.

But understand something about DSC: This is something Snover planned almost a decade ago. It was the ultimate destination of his “Monad Manifesto.” DSC is exactly what PowerShell has been building up to. DSC is the main reason, in many ways, for PowerShell. If you really think about it, DSC removes much of the need for you to learn PowerShell. 

That’s a bold statement. Let me explain.

There’s no question that PowerShell can be difficult to learn. It’s programming, and not everyone has an aptitude for that. There are literally thousands of commands, and that’s just from Microsoft product teams. It’s a huge product; like any language, it has idiosyncrasies, and you can come at it from a half-dozen different directions. Writing scripts that configure or re-configure computers, or even that report on current configurations, can be complex. Yes, it’s faster than doing the work manually – but it’s not zero effort.

DSC abstracts all of that. To create a DSC configuration, you don’t need to know how to program, yet you can potentially leverage all the PowerShell investment Microsoft has been making. You can use PowerShell, and all it can do, without having to really touch much of PowerShell. Sure, there’s a middle layer of folks writing DSC resources (which use PowerShell commands as their API), but that’s going to be a small subset of folks. A highly-paid subset, I suspect.
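To see what that abstraction looks like in practice, here’s a minimal DSC configuration. It reads more like a declaration than a script – note that the node name, output path, and the choice of IIS here are just placeholder examples, not anything specific to this discussion:

```powershell
Configuration WebServerConfig
{
    # The node name is a placeholder; substitute your own server.
    Node 'SERVER01'
    {
        # Declare the desired state; the WindowsFeature resource
        # figures out how to get there (and does nothing if it's
        # already true).
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}

# "Running" the configuration doesn't configure anything - it compiles
# a MOF file for each node, which the Local Configuration Manager then enacts.
WebServerConfig -OutputPath 'C:\DSC\WebServerConfig'
```

Notice there’s no if/then logic, no error handling, no loops – you state what you want, and the resource (written in PowerShell by someone else) handles the how.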

If Microsoft had had infinite time, money, and people, they’d have just given us DSC and not mentioned PowerShell at all. PowerShell v1, v2, and v3 were building blocks toward what DSC gives us. DSC was the point, all along. We’re just seeing the tippy top of that, now. There’s a glacier underneath.

Now, you may be thinking, “bullshit. I can’t use DSC to do everything that my job involves, even if I just think about my Microsoft assets.” True. Today. But folks, you need to have a little vision. We’re dealing with DSC 1.0. Kindergarten DSC. Literally, what you’re seeing now is the simplest possible expression of something that the world’s largest software company took seven years to deliver. Seven years. Most of Microsoft’s PowerShell investment, going forward, is going to be in DSC – I guarantee it. They’ve done the lower-level building blocks already.

“Can I use DSC to generate configuration reports?” Maybe not today. But have you noticed that a DSC pull server can have a “compliance server” component? Have you looked at its service definition? It’s basically a way for servers to report in on the state of their configuration compliance. That’s reporting. And that’s my point: DSC has a super long way to go. It is going to be everything for an administrator – and that’s going to happen fast. Looking at DSC today, that may be tough to imagine. So was PowerShell, in 2006.

And we haven’t even seen the tooling that will be layered on top of DSC yet, because it’s all so new. The tool where you click through a wizard to add a user… and the tool goes and rewrites four dozen server configuration files, causing the user to exist in AD, in your accounting system, as a home directory on a file server, and so on. Yeah, that’ll all happen. Eventually, you won’t touch servers anymore – you’ll touch their configuration files, and they’ll reconfigure themselves appropriately. That’s why this is such a big deal. It’s not a tool. It’s the administrative interface.

So when I say, “if you’re not learning DSC right now, you’re screwed,” it’s because I personally believe that to be true. My experience in the industry and my familiarity with how Microsoft pursues these things informs that opinion. You are going to fall behind the curve so fast you won’t even realize it’s a curve anymore. Today, people look at Infant DSC and see a basic configuration tool. I see Teenager DSC, and Young Adult DSC, coming around the corner, and they are going to absolutely change the way you are required to manage Microsoft products. Yeah, I personally want to be on board with that right now.

“What about a small shop? Isn’t DSC meant for large scale?” No, large enterprises just have the most obvious advantage from DSC. It’s less obvious to small shops.

You know how Exchange 2007 really impressed everyone, because the GUI was just running PowerShell under the hood? That meant a small shop could still get the GUI, but you could always drop down to PowerShell when you needed to. It also meant that not everything went into the GUI, and sometimes you had to drop into PowerShell anyway. I predict DSC will do the same thing. GUIs won’t run PowerShell commands anymore – they’ll modify DSC configurations. Those configurations will then be implemented on the affected servers. Your cross-platform management tools? If they’re smart, they’ll be doing the same thing.

Think about that. DSC isn’t going to be “just a tool.” It’s going to be the entire interface by which you interact with Microsoft server products. It’s as important as the mouse or the keyboard. I truly think people aren’t seeing the end-game when it comes to this technology.

You know those admins who only know what the GUI shows them? They don’t know much about what’s happening underneath, and as a result, they’re not very good at planning, architecture, troubleshooting, or anything else that requires a deeper knowledge. That’s where you stand with DSC. You either ride that bus, or get run over by it. Eventually.

Do you want to risk not knowing this thing? You might. Perhaps in your job position you know it’s not going to affect you. For me, I won’t risk it. So that’s where my perspective comes from. In my world, this thing is a must-have. And yes, that’s an enterprise-class world, with large, cross-platform environments. But it’s also a perspective from my experience in SMB – I’d have killed for DSC, given the minuscule budgets and staff I worked with in those environments, and given my colleagues’ distaste for scripting.

Anyway, that’s how I feel about it – in more detail than 140 characters allowed ;). If you have a different perspective, please feel free to share it. I can’t promise that you’ll change my mind (and I’m not really out to change yours), but it’s good for the world in general to see different perspectives, so that folks can make informed decisions about their own career directions.

Up Next: Nick Howell from NetApp talking about the software-defined datacenter


This Thursday, Feb 27, 2014, join us with guest Nick Howell (@that1guynick) from NetApp to discuss the software-defined datacenter. See you at 9:30 PM EST.


PowerShell Gotcha: UNC paths and Providers


PowerShell’s behavior can be a little bit funny when you pass a UNC path to certain cmdlets. PowerShell doesn’t recognize these paths as “rooted” because they’re not on a PSDrive; as such, whatever provider is associated with PowerShell’s current location will attempt to handle them. For example:

Set-Location C:
Get-ChildItem -Path \\$env:COMPUTERNAME\c$

Set-Location HKLM:
Get-ChildItem -Path \\$env:COMPUTERNAME\c$

The first command works fine (assuming you have a c$ share enabled and are able to access it), and the second command gives a “Cannot find path” error, because the Registry provider tried to work with the UNC path instead of the FileSystem provider. You can get around this problem by prefixing the UNC path with “FileSystem::”, which will make PowerShell use that provider regardless of your current location.
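To make the workaround concrete, here’s the failing example again with the provider prefix added (this assumes, as above, that the administrative c$ share is enabled and accessible):

```powershell
Set-Location HKLM:

# Fails: the Registry provider tries to interpret the UNC path.
Get-ChildItem -Path \\$env:COMPUTERNAME\c$

# Works: the prefix forces the FileSystem provider,
# regardless of the current location.
Get-ChildItem -Path FileSystem::\\$env:COMPUTERNAME\c$
```

The same `FileSystem::` prefix works with other provider-aware cmdlets such as Get-Item, Get-Content, and Test-Path.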

On top of that, commands like Resolve-Path and $PSCmdlet.GetUnresolvedProviderPathFromPSPath() don’t normalize UNC paths properly, even when the FileSystem provider handles them. This annoyed me, so I spent some time investigating different options to get around the quirky behavior. The result is the Get-NormalizedFileSystemPath function, which can be downloaded from the TechNet Gallery. In addition to making UNC paths behave, this had the side effect of also resolving 8.3 short file names to long paths (something else that Resolve-Path doesn’t do.)

The function has an “-IncludeProviderPrefix” switch which tells it to include the “FileSystem::” prefix, if desired (so you can reliably use cmdlets like Get-Item, Get-Content, Test-Path, etc., regardless of your current location or whether the path is UNC.) For example:

$path = "\\$env:COMPUTERNAME\c$\SomeFolder\..\.\Whatever\..\PROGRA~1" 
 
$path = Get-NormalizedFileSystemPath -Path $path -IncludeProviderPrefix 
 
$path 
 
Set-Location HKLM: 
Get-ChildItem -Path $path | Select-Object -First 1 

<# 
Output: 
 
FileSystem::\\MYCOMPUTERNAME\c$\Program Files 
 
    Directory: \\MYCOMPUTERNAME\c$\Program Files 
 
 
Mode                LastWriteTime     Length Name 
----                -------------     ------ ---- 
d----         7/30/2013  10:54 AM            7-Zip 
 
#>

Free eBook from Microsoft’s Scripting Guy: Windows PowerShell Networking Guide


Ed Wilson, Microsoft’s Scripting Guy, has created a free ebook, Windows PowerShell Networking Guide. It’s designed to provide a super-quick PowerShell crash course, and then show you how to manage various networking scenarios by using the shell.

And it’s free! Just click the link to get your copy – and please, tell a friend!

PoshNetworking.pdf

Episode 255 – PowerScripting Podcast – Steve Roberts from Amazon on AWS and PowerShell


A Podcast about Windows PowerShell.
Listen:

In This Episode

Tonight on the PowerScripting Podcast, we talk to Steve Roberts from Amazon on Amazon Web Services and PowerShell.

News

Interview

Guest – Steve Roberts

Links

 

Chatroom Highlights:

[21:55:58] <Brian___> http://amzn.com/1430264519

[21:56:13] <Brian___> Pro PowerShell for Amazon Web Services

[21:56:33] <Brian___> Steve (speaking) was a big help with the book

[21:56:43] <Brian___> his team was great

<ScriptingWIfe> http://powershell.org/wp/community-events/summit/

<alevyinroc> http://www.panasonic.com/business/toughpad/us/7-inch-tablet-fz-m1.asp

<halr9000> http://aws.amazon.com/powershell/

<halr9000> http://docs.aws.amazon.com/powershell/latest/reference/Index.html

<halr9000> http://aws.amazon.com/

<Brian___> http://amzn.com/1430264519

<halr9000> http://docs.aws.amazon.com/powershell/latest/reference/Index.html

<halr9000> http://aws.amazon.com/net/

<wade> http://blogs.aws.amazon.com/net

<alevyinroc> http://www.musicradar.com/us/news/guitars/trent-reznor-talks-johnny-cash-168199

<ScriptingWIfe> https://scontent-a-iad.xx.fbcdn.net/hphotos-ash3/1607005_10202465193703988_1046463679_n.jpg

<Stuwee> ## what does AWS stand for again?

<JonWalz> DexterPOSh, please add ## before your questions so they are easier for us to pick out

<DexterPOSh> @JonWalz …got it ##

<Stuwee> ## can you give a quick/small example of the differences between AWS and Azure?

<DexterPOSh> ## Can I extend my local Lab to include machines from AWS ?

<Stuwee> ## does he have a blog

The Question – Hero/Power

  • Thor