Pipeline or Script? That is the Question

When I teach PowerShell classes, I often start by assuring students that, with the shell, you can accomplish a great deal without ever writing a script. And it's true - you can. Unlike predecessor technologies like VBScript, PowerShell lets you pack a lot of goodness into a one-liner - or even into several lines run manually in the console.

What I never say is that you can accomplish anything without ever writing a script. That isn't true. I see folks struggle all the time to squeeze something into a one-liner pipeline, when life would be so much easier if they switched to a script-style, procedural approach.

So what's the tipping point?

Actually, it's really easy to spot. You should be writing a script if:

  • You need to take different actions based on some condition - like sending an e-mail if there's data to send, but sending nothing if there's no data.
  • You need to do more than one discrete task. Yeah, you can sometimes jam multiple actions into a one-liner using things like -PassThru, but that isn't consistently available, and the command becomes dreadfully difficult to read and debug.
  • You need to run a command repeatedly over time, and each time some of its values will change (scripts offer declarative parameters). The sketch just below this list touches all three points.
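
To make that concrete, here's a minimal sketch of a script that hits all three points (the recipient, sender, and SMTP server values are placeholders, not recommendations):

    # ServiceReport.ps1 - an illustrative sketch, not a production script
    param(
        [string]$To = 'ops@example.com'    # declarative parameter: change it per run
    )

    # Discrete task #1: collect the data
    $stopped = Get-Service | Where-Object { $_.Status -eq 'Stopped' }

    # Conditional action: only send e-mail if there's data to send
    if ($stopped) {
        # Discrete task #2: report it
        Send-MailMessage -To $To -From 'monitor@example.com' `
                         -Subject 'Stopped services' `
                         -Body ($stopped | Out-String) `
                         -SmtpServer 'mail.example.com'
    }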

Many smart folks start in the console to test a command, and then paste it into a script they're working on (I do that, too). And there are other reasons to switch from "running a command in the console" to "banging out a script in the ISE [or editor of choice]." What tips would you offer to a PowerShell newbie to help them get the most from the command line... but know when it's time to move into a script-based approach?

About the Author

Don Jones

Don Jones is a Windows PowerShell MVP, author of several Windows PowerShell books (and other IT books), Co-founder and President/CEO of PowerShell.org, PowerShell columnist for Microsoft TechNet Magazine, PowerShell educator, and designer/author of several Windows PowerShell courses (including Microsoft's). Power to the shell!

6 Comments

  1. This is similar to the question the Supreme Court addressed on porn: you know it when you see it.

    I normally have a two- or three-pipe limit on my one-liners. If I need more than that, I script.
    That, or if I need more than one command inside a For/While loop.
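
    For instance, something like this still fits the budget (the cmdlets here are just for illustration):

        # Three pipes - still comfortable as a one-liner:
        Get-Process | Sort-Object CPU -Descending | Select-Object -First 5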

    Writing a powerful one-liner is cool, but like an amazingly complex regex, it's hard to read and understand. You'll struggle to remember what you did later, and anyone else looking at it will struggle, too.
    I think scripts are good for readability and cross-training.

    Plus, you normally want to save your code anyway. So you're either going to save it in a PS1 file, or create a text file with all of your one-liners that you're going to copy and paste from anyway.

  2. I start wondering if it might be time to refactor when the pipeline has more than one Foreach-Object, or when it contains blocking cmdlets and is handling a large data set.
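
    For example, with nothing but built-in cmdlets (the path is a placeholder):

        # Where-Object streams: each file flows through as it's found.
        # Sort-Object blocks: it must collect ALL of its input before it
        # can emit the first sorted object, so memory grows with the data.
        Get-ChildItem C:\Logs -Recurse |
            Where-Object { $_.Length -gt 1MB } |    # streaming
            Sort-Object Length -Descending |        # blocking
            Select-Object -First 10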

  3. I'm not quite sure what you mean by "pipeline" and "script". For example, if I want to use Sort-Object, I have to use a pipeline regardless of whether I'm in the shell or writing a script. And that's true for other cmdlets like Select-Object, Where-Object, Get-Member, Add-Member, etc. If I want to use those cmdlets together, a pipeline is a natural way of putting them together.

    The thing I look for in a pipeline is how easily I can figure out the inputs and outputs of each command in the pipeline, and what each command is supposed to do. For example, if I see " | %{$_ - 3} | ?{$_ -lt 5}" I can figure out that those commands deal with numbers. However, if the script is dealing with some kind of special numbers, and the body of % and ? consists of many lines, I would rather see something like " | subtract 3 | getLessThan 5". That way, I can still easily figure out what goes in each command, what comes out of each command, and what the command does with its input.
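
    That readable version is buildable, by the way - here's a quick sketch using PowerShell filter functions (the names just mirror the hypothetical subtract/getLessThan above; they're not standard cmdlets):

        # A filter runs its body once per pipeline object, with $_ bound.
        filter subtract([int]$n)    { $_ - $n }
        filter getLessThan([int]$n) { if ($_ -lt $n) { $_ } }

        10, 4, 7, 2 | subtract 3 | getLessThan 5    # outputs 1, 4, -1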

    I mention this because some cmdlets are tricky to figure out when used in a pipeline. Take Get-ChildItem for example. The output of Get-ChildItem is predictable, but its pipeline input is just weird. A line like "$folder | Get-ChildItem" has wildly different outputs depending on what's in $folder. If $folder contains a string, then a character like '[' inside that string can change the result in a big way, but the mistake is difficult to notice. On the other hand, piping a DirectoryInfo or FileInfo object to Get-ChildItem just works, even if the name contains a character like '['. But that just adds to the mystery, because the only pipeline parameters of Get-ChildItem are strings called "Path" and "LiteralPath", and DirectoryInfo and FileInfo objects don't have properties called "Path" and "LiteralPath". You have to dig really deep to find out that the "LiteralPath" parameter of Get-ChildItem has an alias called "PSPath", which PowerShell adds as an extended property to DirectoryInfo and FileInfo objects.
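
    A quick way to see that quirk in action (the folder name here is contrived to contain '['):

        # Create a folder whose name includes wildcard metacharacters.
        New-Item -ItemType Directory -Force -Path 'C:\temp\data[1]' | Out-Null

        # The string binds to -Path, which treats '[1]' as a wildcard set
        # matching the single character '1' - so it looks for 'data1' and
        # misses the real folder:
        'C:\temp\data[1]' | Get-ChildItem

        # The DirectoryInfo object binds to -LiteralPath through its PSPath
        # alias, so this just works:
        Get-Item -LiteralPath 'C:\temp\data[1]' | Get-ChildItem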

    So I would be okay with using a command like Get-ChildItem at the beginning of a pipeline, because I can easily predict its output; but, if possible, I would avoid using a command like Get-ChildItem in the middle of a pipeline, because there are just a lot of subtle quirks to deal with when piping stuff to that command.

    In short, I'm okay with a long pipeline if I can easily figure out what's happening to the data as it flows through the pipeline.

  4. Use one-liners only for one action. You will learn more when you write a bunch of statements in a script! That's my thought! I have been scripting since DOS, and I still love writing batch scripts since there are no dependencies for them to work!

  5. I agree with imfrancisd - I don't see exactly where you draw the line between a script and a "one-liner pipeline". Is a one-liner saved in a .ps1 file a script or a one-liner?

    Also agree with Mark Keisling, you know it when you see it. 🙂

    I usually do not manage things interactively, and I always use PowerGUI to write and test my scripts. But my rule of thumb is: if it is a simple one-time action that is unlikely to repeat in the near future -> write a one-liner, and send it by email to back it up.

    In other situations, I write a proper script: test the commands in the embedded console, create a function from them if it suits the needs, test the function, paste it into the script, and test it with the script. Then I forget about how the function works inside, because it does not matter anymore.

    >>> You need to take different actions based on some condition, like send an e-mail if there's data to send, but send nothing if there's no data. <<< I don't think this is a good candidate for a rule; ForEach-Object suits this purpose very well: Get-ADUser | foreach { Send-MailMessage -To $_.email -Body "Your account expires in 10 days." etc. } ... you get the picture.
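
    Filled out a little (the AD property names and the SMTP details here are assumptions for illustration):

        # If the filter matches nothing, nothing reaches ForEach-Object and
        # no mail goes out - the "send nothing" condition is implicit.
        Get-ADUser -Filter * -Properties EmailAddress, AccountExpirationDate |
            Where-Object { $_.EmailAddress -and $_.AccountExpirationDate -and
                           $_.AccountExpirationDate -lt (Get-Date).AddDays(10) } |
            ForEach-Object {
                Send-MailMessage -To $_.EmailAddress -From 'it@example.com' `
                                 -Subject 'Account expiring' `
                                 -Body 'Your account expires in 10 days.' `
                                 -SmtpServer 'mail.example.com'
            }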

  6. I tend to do more one-liners so that I'm more familiar with the cmdlets, and then I'm faster at throwing together something new. Put 2 or 3 lines in a script, and then you remember the script name, not what's in the script.

    I also don't want to have 300 scripts of a few lines each, and to have to remember exactly what each one does and what it's limited to. I'm much more flexible and productive writing slightly different one-liners throughout the day than searching through my scripts for the one that does what I want - or is close to what I want - and editing it.

    I primarily use PowerCLI for daily management of our VMware environment. I have written some scripts that the group uses for building VMs and the like, and some that mostly just I use, but 90% of what I type throughout the day is the same 2-3 scripts/functions I wrote, plus the base cmdlets strung together. Sometimes the one-liners get ugly, especially if it's a report, but I almost never need to do the exact same thing twice.

    As always, tmtowtdi.