PowerShell Great Debate: The Purity Laws


This should be interesting.
During The Scripting Games, I observed (and in some cases made) a great many comments that I'm lumping under the name "Purity Laws."

  • You shouldn't use a command-line utility like Robocopy in a PowerShell script.
  • You shouldn't use .NET classes in a PowerShell script.
  • You should map a drive using New-PSDrive, not net use.

And so on. You see where I'm going: there are folks out there who feel as if the only thing that belongs in a PowerShell script is Pure PowerShell. Which is odd, because it isn't an approach the product team itself ever pushed. They spent extra time making sure the shell could use .NET and could run external utilities - why not use them, if they work and get the job done?
A counterargument involves maintenance and readability. External commands, for example, are harder to read, may not be well-documented, and don't work consistently with the rest of PowerShell. .NET classes are hard to discover, and force you into a very "programmer-y" approach. Some environments might not want the extra overhead - even if it means giving up functionality.
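To make the contrast concrete, the drive-mapping item from the list above reads quite differently in each style (the server and share names here are hypothetical):

```powershell
# External utility: terse, but its options live outside PowerShell's help system
net use Z: \\server01\share

# Native cmdlet: discoverable with Get-Help, and it returns an object
New-PSDrive -Name Z -PSProvider FileSystem -Root '\\server01\share'
```

Both get you a Z: drive; the difference is consistency and discoverability, not capability.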
So where do you come down on this debate? I'd really love some detailed recommendations. What's right for your environment, and most importantly why? Are there any facts or situations that would sway you to the other side of the argument?

12 Responses to "PowerShell Great Debate: The Purity Laws"

  1. Art Beane says:

    Until all the comments around this year’s games, I never gave “purity” much thought. Sure, I use native PowerShell most of the time, but there are some things that are not possible (so I use .NET to manipulate remote registry entries using alternate credentials) or just lots easier (I do use RoboCopy to copy/move/update nested folders). I truly believe that there’s no wrong way to get a correct answer from PowerShell.
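A sketch of the .NET remote-registry route Art mentions (the machine name and key path are hypothetical; alternate credentials have to be established separately, since OpenRemoteBaseKey itself runs as the current user and requires the Remote Registry service on the target):

```powershell
# Microsoft.Win32.RegistryKey can open a hive on a remote machine directly
$hive = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', 'Server01')
$key  = $hive.OpenSubKey('SOFTWARE\Microsoft\Windows NT\CurrentVersion')
$key.GetValue('ProductName')
```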

  2. Matt Tilford says:

    I stick to PowerShell's built-in commands mainly out of habit. I've not delved into using native .NET because it's not an area I know much about. I know that .NET gives you access to vast amounts of power over a modern computer, but as Don points out, I can't discover that power.
    As for command-line utilities, I believe they can also suffer from discoverability problems. Robocopy is such an excellent tool that Microsoft started bundling it in the OS instead of making us download it, and the documentation is pretty solid. But in comparison, how many SharePoint admins want to keep using stsadm instead of PowerShell?
    But those two are pretty much performing actions with little or no output. External commands that give us information do so as lines of text, usually formatted to be prettified in a command shell. Don't want that header? Too bad - the text you want is below it, which means you probably have to loop through each line to reach it, then work out where it ends.
    Personally, I work like this:
    PowerShell commands where I can, unless there is an obvious functional deficiency which is covered by an…
    external command, which is usually one I've used or heard about before. Failing that, I'll have to search to find a…
    .NET class. These I mostly discover by finding them in someone else's script that I can then… repurpose.

  3. I don’t know .Net to use any of it, but I have used Robocopy within the last two hours. (if it isn’t broken why fix it?) – how many ways can you cut/copy paste something. therefore I have no strong leanings to any method. However I do tend to lean towards the PowerShell way for the reasons stated above.
    However, on the flip side, why do we have the built in aliases that are there?
    These are old style DOS type commands that sort of do what we know. (to a point)

  4. Joel Reed says:

    I have always been a little frustrated by the “purity” issue. I think you can also add COM to that list. It's largely considered “inappropriate” to use, but it still has great support in PowerShell. The New-Object cmdlet has always had a –ComObject parameter, and I've yet to hear of plans for it being deprecated. I started with PowerShell v1 and came from a VBScript background. For a long time, that was where I crossed the purity police.
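That -ComObject door is indeed still open; for example, the stock Shell.Application COM object works fine from PowerShell (the folder path is just an illustration):

```powershell
# COM is still a first-class citizen via New-Object -ComObject
$shell = New-Object -ComObject Shell.Application
$shell.NameSpace('C:\Windows').Items() |
    Select-Object -First 5 |
    ForEach-Object { $_.Name }
```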
    I think it is easy to forget nowadays just how much PowerShell gives us out of the box. Many Microsoft services, stacks, and roles come with full suites of cmdlets; some even rely solely on PowerShell. It is really quite staggering how pervasive PowerShell has become. Maybe someday every single product team will have 100% cmdlet coverage, every vendor will provide cmdlets, and all in-house developers will become fully PowerShell supportive. Until that time, I think PowerShell solutions that are well documented and follow generally accepted best practices - verb-noun naming, comment-based help, parameter validation, etc. - are the way to go. Whatever gets solutions into use within your organization's work patterns and workflows is best.
    As a general rule I always try to use cmdlets first. We seemingly know the “core” ones but there are many that kind of lie on the fringes. I discovered ConvertFrom-StringData a few weeks ago. It replaced some Get-Content and string manipulation code with a single one liner. Not exactly avoiding native executables or .NET but the idea is the same. Don’t reinvent the wheel is sound advice to try and live by most of the time.
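For the curious, that fringe cmdlet turns simple key = value text into a hashtable in one step (the file name and contents here are hypothetical):

```powershell
# settings.txt contains lines such as:
#   Name = Server01
#   Role = Web
$settings = ConvertFrom-StringData (Get-Content -Raw .\settings.txt)

# the resulting hashtable makes each key directly addressable
$settings.Role
```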
    If I have come to a point where an off-the-shelf cmdlet cannot get me the solution that I need, .NET as well as COM are my next steps. PowerShell v1 was a really amazing tool for me. It was that crossroad to the future. I had a tremendous amount of VBScript code and knowledge built up at that point, and PowerShell was the “scriptable” .NET of the future. The future wasn't there yet, but the New-Object cmdlet was amazing. I could call my familiar COM objects from PowerShell, as well as the full .NET base class library and just about everything on top of it. I really think this was a deliberate set of doors opened by the PowerShell team, and I see no signs of them being closed off. So feel free to use these in your solutions. However, try to be responsible. Wrap your usage in advanced functions. Use parameter validation and comment-based help. If you solved a legitimate gap, package it so your future self and those who come after you can use it and re-use it.
    Native commands are the ones I always try to avoid, but maddeningly I always have to handle some link or integration that I desperately need access to. There are just some applications out there that only have a command-line tool into them, for whatever reason. I always try to avoid having to parse the output, because that just seems to be a big bucket of fail. Instead I pay attention to process exit codes and then validate, outside of the standard output, that A resulted in B. This is not always possible, of course. As with .NET, try to wrap the executable as best you can with an advanced function, then use that cmdlet in your “main” code path. This way, when something better comes along, it's sufficiently abstracted away. It's a one-line-for-one-line replacement, or thereabouts, when that day comes.
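The exit-code approach described above can be sketched like this (a hypothetical wrapper, with ping.exe standing in for any command-line-only tool):

```powershell
function Test-HostOnline {
    [CmdletBinding()]
    param([Parameter(Mandatory)][string]$ComputerName)

    # Discard the text output entirely; the exit code is the contract
    ping.exe -n 1 $ComputerName | Out-Null

    # ping.exe sets 0 on success and nonzero on failure
    $LASTEXITCODE -eq 0
}
```

The “main” code path then calls Test-HostOnline like any other cmdlet, and the executable behind it can be swapped out later without touching the callers.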
    I think that PowerShell is the best automation tool on the planet. It should not be about how “pure” something is. Does the solution you create do what you need it to do? Does it adhere to the larger generally accepted best practices, conventions, and idioms? I think if you can answer yes to that then purity doesn’t really mean much. I have automated some pretty disparate, proprietary, and legacy systems over the years. The real question for me was how seamless did it appear. PowerShell is glue, use it but don’t abuse it.

  5. Chris Hunt says:

    If you have a problem, are you going to half-solve it because you want to maintain “pure PowerShell”? No. On top of that, PowerShell actually helps to document the features of classic EXEs that you may use. If you write a function to wrap Robocopy, you can map function parameters to Robocopy switches and provide default values and help that are meaningful to you and your organization. I think there is value in sticking to pure PowerShell over .NET when you can; however, you can't ignore the tens of thousands of classes .NET offers.
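A minimal sketch of such a Robocopy wrapper (the function name and its parameter-to-switch mapping are illustrative, not exhaustive):

```powershell
function Copy-Tree {
    <#
    .SYNOPSIS
    Copies a folder tree using robocopy.exe, with friendlier parameter names.
    #>
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)][string]$Path,
        [Parameter(Mandatory)][string]$Destination,
        [switch]$Mirror,        # maps to robocopy's /MIR
        [int]$RetryCount = 2    # maps to robocopy's /R:n
    )

    # Build the switch list from the PowerShell-style parameters
    $switches = if ($Mirror) { @('/MIR') } else { @('/E') }
    $switches += "/R:$RetryCount"

    robocopy.exe $Path $Destination @switches
}
```

Now Get-Help Copy-Tree documents the tool in your organization's own terms.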
    We're all trying to solve business problems. Without command-line utilities and .NET classes, I would just have to do more by hand, and that's stepping backwards. Should they be outlawed in the Games? It would make for a more interesting competition, but less relevant to real life. I think the point of the Games is to learn, so it should remain open to all options.

  6. I think there are two sides to this question:
    1) What would I do in my production environment?
    2) What should the ideal answer be in the Scripting Games?
    Number 1 is easy - whatever it takes to get the job done. I'll use cmdlets, PowerShell scripting, .NET, old-style utilities, COM, Script Host objects (also COM), WMI, web services, and whatever else I can get to do the job.
    My usual approach is to look for a PowerShell answer first - if that works, great. I'll then look for a COM or .NET solution, and finally a utility, UNLESS I know that something further down my list gives me exactly what I'm looking for.
    Thinking about automating AD for a minute: with PowerShell v1 we got the [adsi] accelerator. It was about on a par with using VBScript (in some areas a bit more difficult), but it got the job done. If you wanted to search AD, though, you had to use .NET. With v2 we got the [adsisearcher] accelerator and improvements to [adsi]. The Quest cmdlets were available if you could use them.
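Those accelerators are still handy where the AD cmdlets aren't available; a quick sketch (the LDAP filter here is hypothetical):

```powershell
# [adsisearcher] wraps System.DirectoryServices.DirectorySearcher
$searcher = [adsisearcher]'(&(objectCategory=user)(name=smith*))'

# each result exposes the matching AD attributes via its Properties collection
$searcher.FindAll() | ForEach-Object { $_.Properties['samaccountname'] }
```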
    With Windows 2008 R2 we got PowerShell cmdlets from Microsoft which made things a lot easier.
    The point is that PowerShell is still evolving and growing. Using other technologies to fill the gaps is necessary to get the job done – however as new versions of PowerShell, Windows and other products are released the breadth of PowerShell cmdlet support increases.
    Cmdlets are usually the easiest way – if you haven’t got cmdlets use whatever will get the job done in your environment.
    The second side regards the Scripting Games. The Games are an opportunity to practice using PowerShell against a real live scenario. My personal take is that the solutions should use as much PowerShell as possible. If you don't have access to the latest and greatest cmdlets, say so and provide an answer that works; but if you have access to a “pure” PowerShell solution, then I think that's the way you should go.

  7. mikeshepard says:

    I agree with most of the comments so far. I try to stick to native PowerShell cmdlets when I can, but there are times when there’s nothing that does quite what you need. One thing I’ve tried to do is to wrap these “impure” calls with PowerShell functions that have the parameters I need. That way if I ever find a way to replace them with cmdlets I can do so easily.
    I’m very happy with the range of functionality provided by cmdlets and want to use them as much as possible. Given that PowerShell is obviously the “going forward” technology for automating administration, that’s where I’m investing the most of my time and effort. And since most administrators don’t want to spend the time to learn COM objects or the depths of the .NET libraries, providing them with a PowerShell-ish way to solve their problems seems like a good approach.
    In terms of Robocopy, though, I don’t see a lot of reason not to use it. I’ve seen some attempts to wrap/rewrite it in PowerShell and haven’t put any effort into investigating them. As powerful and fully-featured as Robocopy is, it would be difficult to completely replace.

  8. imfrancisd says:

    My scripting background can be described as, “use whatever is in front of you”, so worrying about something as abstract as “purity” seems very impractical to me.
    To convince me, “purity” would have to be replaced with something more concrete.
    If “purity” here means “use cmdlets” then I think a good reason for purity would be to become more proficient with PowerShell. After all, the cmdlet is the mechanism that PowerShell uses to expose basic functionality like sorting or working with dates and files.
    Avoiding cmdlets in PowerShell would be like a C++ programmer avoiding the C++ standard library. Yes, he can use the C library in C++, but he will miss out on a big part of C++ programming. I think it’s the same with PowerShell and cmdlets. In other words, you can’t get better at using a particular tool if you don’t use it.

  9. James Edwards says:

    I think “purity” has a lot of merit, but so does having the best solution. Perhaps 2 awards could be given: Best Solution and Best “Pure” Solution.

  10. Cameron Ove says:

    Where are the PowerShell ‘Purity Laws’ published? Reading the gradings in the Scripting Games, it appeared to me that everyone had a different Purity Law book. If PowerShell provides a facility for me to use .NET, why am I impure if I use it? And if I'm a .NET advocate, maybe everyone who doesn't use .NET is impure. My point is that these purity laws seem to be very subjective (as in EVERYone has their own opinion).
    POWERShell is powerful because it gives us so much facility. I use built-in cmdlets, .Net objects, COM objects, third-party .net libraries, “DOS” commands, Perl scripts, functions, advanced functions, third-party cmdlets. I’ve wrapped functions around DOS executables, I use WMI. I can go on and on and on and on. The creators of PowerShell gave us all of that to use. So use it.
    If you make a script or function that does what it is supposed to do, then why complain about whether it is pure or not? If you don't have to reinvent the wheel but can extend PowerShell by wrapping functions around another tool, then more power to you - why should anyone complain? If you want to write a cmdlet from scratch using .NET, then great. If you want to write cmdlets in C#, then even more power to you. Can you imagine Superman not using his X-ray vision when he needed it, just because the ‘institutions’ determined that it wasn't the best way to see through things? If you have the power, use it.
    If there are any purity laws then I think that Jeffrey Snover, Bruce Payette, Lee Holmes, James Truher, or someone on the PowerShell design team should publish it. If they have one written could someone point me to it?
    That said, there is some prudence in writing code that is somehow regular when writing shared code and functions. In those cases, creating a set of rules on syntax (e.g., whether to use an alias or not, the level of commenting in the code, etc.), or around security risks, or to use advanced functions, or to include help, might be necessary to maintain readability, reusability, and uniformity among the shared code. In a sense, those rules would become ‘purity laws’.
    However, creating a general set of purity laws for an entire community across diverse applications is like government creating laws to take away our inherent freedoms. Like Bloomberg ‘telling’ ALL NYC residents how much soda they can drink and how many candy bars they can eat.
    I don’t think the scripting games should be judged on what someone thinks is pure when there is no standard ‘pure’. I think extra points should be given when someone shows innovation and thinks outside the box. Actually Seth Godin wrote, “We need you to poke the system and see what happens, to learn from it, to adjust and to repeat…’Poke the Box’…demands that you stop waiting for a road map and start drawing one instead.” PowerShell inherently lets you ‘Poke the Box’. So poke it…

    • Don Jones says:

      Mind you, the question isn’t wholly about the games, but about working with powershell in general. But thanks for the input! Every viewpoint helps!

      • Cameron Ove says:

        Yes, I completely agree with you, Don. I regret that my post seemed to address only the Scripting Games when it was intended as a general view. I wrote the two paragraphs regarding the Scripting Games just to tie it back to the subject in your original post. However, my post is my general view of so-called ‘Purity Laws’, regardless of why I'm writing PowerShell code.