Category Archives: Great Debates

Great Debate: The Conclusion


All this Summer, we’ve been encouraging your feedback in a series of Great Debate posts. Most of the topics came from the 2013 Scripting Games, where we definitely saw people coming down on both sides of these issues. My goal was to pull everyone’s thoughts together into a kind of community consensus, and to offer a living book of community-accepted practices for PowerShell. This’ll be a never-ending story, likely adapting and growing to include more topics as the years wind on.

But here’s the start: DRAFT-2013Sep_Practices is the first draft, officially a Request For Comments, based on the comments you’ve all contributed to the Great Debate posts over these past few weeks. I tried to capture consensus where I saw it, and to outline both sides of the great back-and-forth we’ve seen.

NOTE: The cover image in this draft is just a placeholder; this book is NOT dedicated to error handling. Its working title is correctly shown on the page following the cover image.

I’m going to leave this post in place until October 1st. Please drop any comments you’d like to offer to the final first edition of this ebook, and let me know if there are any topics you’d like to see debated in the future. After October 1st, I’ll publish the final edition of this Practices guide as one of PowerShell.org’s free ebooks. The final first edition will also become part of the next iteration of The Scripting Games, as its official “best practices” guide. In fact, you’ll notice in this draft that there are a couple of Games-specific comments, since the Games sometimes have different drivers than a production environment.

Thanks again to everyone who participated!

Great Debate: Pipeline Coding Styles


A good programmer or scripter should always try to produce code that is easy to read and understand. Choosing and consistently applying a strategy for indentation and bracing will make your life, and possibly your co-workers’ lives, much easier. (Unless, of course, you’re that guy who prides himself on writing code that looks like a cat walked across his keyboard, and cackles any time someone tries to understand it.)

Much of PowerShell’s syntax can borrow coding conventions from C-style languages, but the pipeline is something new. Below are two pipelines formatted a few different ways. Which styles do you find to be the easiest to read? In the second pipeline, which contains an embedded multi-line script block, would your choice be any different if the script block were much longer – for instance, if you couldn’t see both the beginning and the end of the pipeline without scrolling? Do you have another way of formatting these bits of code that you like better?

Remember, there is no right or wrong answer here. The idea is just to generate discussion, and perhaps to help people produce more readable code.

# A simple pipeline with each command fitting on a single line.
# Line breaks after the pipe character.

Get-ChildItem -Path $home\Documents -File |
Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-120) } |
Select-Object -ExpandProperty Name

Next-Command

# The same pipeline, this time indenting each line after the first one.

Get-ChildItem -Path $home\Documents -File |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-120) } |
    Select-Object -ExpandProperty Name

Next-Command

# This time, using backticks at the end of each line, and placing the 
# pipe character at the beginning of the next line.

Get-ChildItem -Path $home\Documents -File `
| Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-120) } `
| Select-Object -ExpandProperty Name

Next-Command

# And again, with indentation

Get-ChildItem -Path $home\Documents -File `
    | Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-120) } `
    | Select-Object -ExpandProperty Name

Next-Command

# A slightly more complex pipeline involving an embedded script block
# passed to ForEach-Object.  Does the pipe symbol after a closing brace,
# potentially much farther down in the code from where the pipeline
# started, change your opinion on what's the easiest to read?

Import-Csv -Path $inputCsvPath |
ForEach-Object {
    # Transform objects in some way
} |
Export-Csv $outputCsvPath -NoTypeInformation

Next-Command

# The same three variations of style involving indentation and backticks:

Import-Csv -Path $inputCsvPath |
    ForEach-Object {
        # Transform objects in some way
    } |
    Export-Csv $outputCsvPath -NoTypeInformation

Next-Command

Import-Csv -Path $inputCsvPath `
| ForEach-Object {
    # Transform objects in some way
} `
| Export-Csv $outputCsvPath -NoTypeInformation

Next-Command

Import-Csv -Path $inputCsvPath `
    | ForEach-Object {
        # Transform objects in some way
    } `
    | Export-Csv $outputCsvPath -NoTypeInformation

Next-Command

PowerShell Great Debate: “Fixing” Output


When should a script (or more likely, function) output raw data, and when should it “massage” its output?

The classic example is something like disk space. You’re querying WMI, and it’s giving you disk space in bytes. Nobody cares about bytes. Should your function output bytes anyway, or output megabytes or gigabytes?

If you output raw data, how would you expect a user to get a more-useful version? Would you expect someone running your command to use Select-Object on their own to do the math, or would you perhaps provide a default formatting view (a la what Get-Process does) that manages the math?

The “Microsoft Way” is to use a default view – again, it’s what Get-Process does. But views are separate files, and they’re only really practical (many say) when they’re part of a module that can auto-load them.
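To make the “caller does the math” option concrete, here’s a minimal sketch (not from the original post) that uses a calculated property to convert raw WMI byte counts at display time:

```powershell
# Sketch: the underlying objects still carry raw bytes; the conversion
# happens only in this Select-Object, at the caller's discretion.
Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3" |
    Select-Object -Property DeviceID,
        @{ Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB, 2) } },
        @{ Name = 'FreeGB'; Expression = { [math]::Round($_.FreeSpace / 1GB, 2) } }
```

Skip the Select-Object and you get the raw data back – which is exactly the trade-off under debate.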

What do you think?


PowerShell Great Debate: What’s Write-Verbose For?


This was a fascinating thing to see throughout The Scripting Games this year: When exactly should you use Write-Verbose, and why? The same question applies to Write-Debug.

  • “I use Write-Debug to provide developer-level comments in my scripts, since I can turn it on with -Debug to see variable contents.”
  • “I use Write-Verbose to provide developer-level comments in my scripts, since I can turn it on with -Verbose to see variable contents.”

See what I mean? Some folks will suggest that Verbose is for “user-friendly status messages;” others eschew Debug entirely and prefer PSBreakpoints for that functionality.

What guidance would you offer for using Write-Verbose and Write-Debug in a script?
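For what it’s worth, one hedged compromise treats the two streams as aimed at different audiences – Verbose for the person running the tool, Debug for the person maintaining it. The function and names here are purely illustrative:

```powershell
function Get-InventoryData {
    [CmdletBinding()]
    param([string]$ComputerName)

    # User-facing progress: visible with -Verbose
    Write-Verbose "Querying $ComputerName"

    $os = Get-WmiObject Win32_OperatingSystem -ComputerName $ComputerName

    # Developer detail: visible with -Debug (which also pauses in Windows PowerShell)
    Write-Debug "Raw Version property: $($os.Version)"

    $os
}

Get-InventoryData -ComputerName localhost -Verbose
```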


PowerShell Great Debate: Can You Have Too Much Help?


In The Scripting Games this year, more than a few folks took the time to write detailed comment-based help. Awesome. No debating it – comment-based help is a good thing. 

But some folks felt that others took it too far. There were definitely scripts where the authors used, for example, the .NOTES section to explain their thinking and approach. Some commenters felt it was excessive, while others pointed out, “wow, what if every programmer gave us some idea what the heck he/she was thinking at the time?” Some felt these extensive comments were just an attempt to get a better score by “convincing” the reviewer of an approach or tactic; others felt, “so what?”

So let’s leave the Games out of this debate – in a production environment, where do you come down on extensive notes in a script? When is it not enough, and when is it going too far? Where’s the value, and where’s the annoyance?
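For reference, here’s roughly what the middle ground might look like – a short, factual .NOTES section in comment-based help. The function is hypothetical:

```powershell
function Get-Widget {
    <#
    .SYNOPSIS
        Retrieves widget inventory data. (Hypothetical example.)
    .NOTES
        Rationale: WMI was chosen over the remoting approach because the
        target servers don't all have PSRemoting enabled.
        A sentence or two like this is where many draw the line;
        a multi-page essay is where the "too far" camp objects.
    #>
    [CmdletBinding()]
    param()
}
```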


PowerShell Great Debate: Script or Function?


One of the most frequent comments in The Scripting Games this year was along the lines of, “you should have submitted this as a function, not a script.” Of course, the second-most frequent comment was something like, “you shouldn’t have submitted this as a function.”

Let’s be clear: if an assignment explicitly asks for a function, you should write one. What we’re debating are the pros and cons of a single tool being written one way or another. Read that again: a single tool. If you’re writing a library of tools, it’s obvious that writing them as functions for inclusion in a single file (like a script module) is beneficial.

Some argue that any tool is potentially going to be included in a function… so why not write it that way to begin with? Others argue that functions are a smidge harder to test, so why not just write a script?

This is a debate I don’t personally have a strong stake in. I mean, we’re literally talking about a single keyword. Take any script, add the function keyword, a function name, and a couple of curly brackets, and you’ve got a function. This really shouldn’t be a criterion when you’re looking at a contest entry… or even when you’re looking at something a colleague offered to you.
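To illustrate just how small that single-keyword difference is (the names here are illustrative):

```powershell
# As a script file, Get-Stuff.ps1:
param([string]$ComputerName)
Get-WmiObject Win32_BIOS -ComputerName $ComputerName

# The same tool as a function -- same param block, same body,
# just wrapped in the function keyword and curly brackets:
function Get-Stuff {
    param([string]$ComputerName)
    Get-WmiObject Win32_BIOS -ComputerName $ComputerName
}
```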

Or should it?


PowerShell Great Debate: PowerShell Versions?


Today’s Great Debate is a bonus, offered by former team member June Blender. Take it away, June!

Like several of the excellent debates in our Great Debate series, this issue arose during the 2013 Scripting Games, when different judges used different selection criteria to evaluate entries.

Some judges, like me, wanted to see evidence that the scripter had studied all features of the newest version of the Windows PowerShell language and selected the best approach for their solution. Other judges wanted the solutions to work on as many computers as possible.

Outside of the Scripting Games, this issue is very practical and very important. If you’re writing a script to work on particular computers in your enterprise, you know which versions of Windows PowerShell are installed and which features you can use. But when you write a shared script or functions for a module, your scripts/functions can run in any environment.

What’s the version best practice?

I think we can all agree that a #Requires statement should appear in any shared script.

 #Requires -Version <N>[.<n>]

In fact, maybe we need a version property of commands that can be queried by using Get-Command, like the PowerShellVersion property of modules?

But, beyond that, should you restrict yourself to features in the oldest supported version of Windows PowerShell, or the most common version, or can you use features in the newest version, even if your scripts don’t run on all computers in all enterprises?

Sometimes, the answers are trivial. The simplified syntax in Windows PowerShell 3.0 that omits curly braces {} and “$_.” is just syntactic sugar for the original syntax. We might decide that it’s best to avoid it unless you’re sure that all computers are running at least 3.0.

At the other extreme are features that don’t have any equivalent in a previous version. What if your module would benefit from using scheduled jobs, CIM commands, or workflows? Must you avoid them?

In the middle are cases where you can use a somewhat equivalent feature. Can you use Get-CimInstance, or are we forever tied to Get-WmiObject? Can you use PSCustomObject or are you committed to Add-Member? Do you need to write Types.ps1xml files when dynamic type data would suffice?
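One middle-ground pattern that comes up in these discussions – sketched here as an option, not a recommendation from this post – is to detect the newer cmdlet at runtime and fall back to the older equivalent:

```powershell
# Prefer Get-CimInstance (v3+) when available; otherwise use Get-WmiObject.
if (Get-Command -Name Get-CimInstance -ErrorAction SilentlyContinue) {
    $os = Get-CimInstance -ClassName Win32_OperatingSystem
} else {
    $os = Get-WmiObject -Class Win32_OperatingSystem
}
```

The cost, of course, is that you now maintain two code paths – which is part of why this question has no easy answer.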

PowerShell Great Debate: The Purity Laws


This should be interesting.

During The Scripting Games, I observed (and in some cases made) a great many comments that I’m lumping under the name “Purity Laws.”

  • You shouldn’t use a command-line utility like Robocopy in a PowerShell script.
  • You shouldn’t use .NET classes in a PowerShell script.
  • You should map a drive using New-PSDrive, not net use.

And so on. You see where I’m going: there are folks out there who feel as if the only thing that goes into a PowerShell script is Pure PowerShell. Which is odd, because the product team never took that approach themselves. They spent extra time making sure the shell could use .NET, and could run external utilities – why not use them, if they work and get the job done?

A counterargument involves maintenance and readability. External commands, for example, are harder to read, may not be well-documented, and don’t work consistently with the rest of PowerShell. .NET classes are hard to discover, and force you into a very “programmer-y” approach. Some environments might not want the extra overhead – even if it means giving up functionality.
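As a concrete (and hedged) example of the mixed approach: an external utility can still participate in PowerShell-style error handling if you check its exit code. The paths here are placeholders, and Robocopy’s documented convention is that exit codes of 8 or higher indicate at least one failure:

```powershell
robocopy C:\Source D:\Dest /MIR
if ($LASTEXITCODE -ge 8) {
    throw "Robocopy reported a failure (exit code $LASTEXITCODE)."
}
```

That exit-code check is exactly the kind of glue the “maintenance” camp points to – it’s extra work PowerShell-native commands don’t require.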

So where do you come down on this debate? I’d really love some detailed recommendations. What’s right for your environment, and most importantly why? Are there any facts or situations that would sway you to the other side of the argument?

Go.


PowerShell Great Debate: Credentials


Credentials suck.

You obviously don’t want to hardcode domain credentials into a script – and PowerShell actually makes it a bit difficult to do so, for good reason. On the other hand, you sometimes need a script to do something using alternate credentials, and you don’t necessarily want the runner of the script to know those credentials.

So how do you deal with it?

Let’s be clear: This is not a wish list. Comments like, “I wish PowerShell could do ____” aren’t valid. What do you do using the technology as it exists today? Do you prompt for a credential and assume the script user will have it? Do you try to hardcode it? Do you set up a constrained endpoint? What?
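One frequently cited answer – offered here as a sketch, with a placeholder path – is to prompt once and cache the credential with Export-Clixml. The password is DPAPI-protected, so the resulting file is only usable by the same user on the same machine:

```powershell
# One-time setup: prompt and cache the credential to disk.
Get-Credential | Export-Clixml -Path C:\Scripts\admin.cred.xml

# Later runs rehydrate it without prompting:
$cred = Import-Clixml -Path C:\Scripts\admin.cred.xml
Get-WmiObject Win32_BIOS -ComputerName SERVER01 -Credential $cred
```

Note the limitation baked into this approach: the cache doesn’t transfer to another user or machine, which is a feature or a bug depending on your scenario.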


PowerShell Great Debate: Piping in a Script


Take a look at this:

# version 1
Get-Content computers.txt |
ForEach-Object {
  $os = Get-WmiObject Win32_OperatingSystem -comp $_
  $bios = Get-WmiObject Win32_BIOS -comp $_
  $props = @{computername=$_;
             osversion=$os.version;
             biosserial=$bios.serialnumber}
  New-Object PSObject -Prop $props
}

# version 2
$computers = Get-Content computers.txt
foreach ($computer in $computers) {
  $os = Get-WmiObject Win32_OperatingSystem -comp $computer
  $bios = Get-WmiObject Win32_BIOS -comp $computer
  $props = @{computername=$computer;
             osversion=$os.version;
             biosserial=$bios.serialnumber}
  New-Object PSObject -Prop $props
}

These two snippets do the same thing. The first uses a more “pipeline” style approach, and I’ve personally never felt the urge to do that in a script. Probably habit – I come from the VBScript world, so a construct like foreach ($x in $y) is natural for me. I’ve seen folks adopt that “pipeline” approach inside a script and get into trouble, so if I’m scripting I often prefer the more formal, structured approach of the version 2 snippet.

What’re your thoughts? For me, version 1 has some downsides – forcing yourself into that pipeline structure can be limiting, and I find the approach in version 2 to be more readable and a bit easier to follow. Frankly, I’m never a fan of having to mentally track what’s in $_.

(Which brings up a sidebar: I tend to evaluate a script’s goodness based on how well I can understand what it does without running it. That’s a common criterion, in fact, and one I personally think aids in debugging as well as maintaining scripts.)

Anyway… discuss!


PowerShell Great Debate: Backticks


Here’s an age-old debate that we can finally, perhaps, put an end to: The backtick character for line continuation.

The basic concept looks like this:

Get-WmiObject -Class Win32_BIOS `
              -ComputerName whatever `
              -Filter "something='else'"

This trick relies on the fact that the backtick (grave accent) is PowerShell’s escape character. In this case, it’s escaping the carriage return, turning it from a logical end-of-line marker into a literal carriage return. It makes commands with a lot of parameters easier to read, since you can line up the parameters as I’ve done.

My personal beefs with this:

  • The character is visually hard to distinguish. On-screen, it’s just a couple of pixels; in a book, it looks like stray ink or toner.
  • If you put any whitespace after the backtick, it escapes that character instead of the carriage return, and everything breaks.
  • On some non-US keyboards, it’s a difficult character to get to.

In many cases, you can achieve nice formatting without the backtick.

Do-Something -Parameter this |
  Get-Something -Parameter those -Parm these |
  Something-Else -This that -Foo bar

This works because a carriage return after a pipe, semicolon, or comma is treated as a simple line break, not as the logical end of the command. Of course, some argue that you can make that command prettier by using the backtick:

Do-Something -Param this `
| Something-Else -this that -foo bar `
| Invoke-Those -these those

Here, the pipes line up on the front, making the command into a kind of visual block – but you have to rely on the backticks. You could then argue that a combination of splatting and careful formatting could be nicer, without the backticks:

$do_something     = @{parameter = $this;
                      foo       = $bar}
$invoke_something = @{param     = $these;
                      other     = $those}

Do-Something     @do_something     |
Invoke-Something @invoke_something |
Something-Else

Visually blocked-out, but no backticks.

And the debate rages on. Your thoughts? Pros? Cons? Why?


PowerShell Great Debate: Formatting Constructs


Here’s an easy, low-stakes debate: How do you like to format your scripting constructs? And, more importantly, why do you like your method?

For example, I tend to do this:

If ($this -eq $that) {
 # do this 
} else {
 # do this
}

I do so out of long habit with C-like syntax, and because when I’m teaching this helps me keep more information on the screen. However, some folks prefer this:

if ($this -eq $that)
{
  # do this 
}
else 
{ 
  # do this
}

Because of my own long habits, I find that hard to read, but it does make it easier to see if your squigglies are lining up properly. It takes up a ton of room, though, and I personally don’t follow this as easily as the previous example.

But what’s your preference? Why? 


PowerShell Great Debate: To Accelerate, or Not?


At his Birds of a Feather session at TechEd 2013, Glenn Sizemore and I briefly debated something that I’d like to make the topic of today’s Great Debate. It has to do with how you create new, custom objects. For example, one approach – which I used to favor, but now think is too long-form:

$obj = New-Object -Type PSObject
$obj | Add-Member NoteProperty Foo $bar
$obj | Add-Member NoteProperty This $that

We saw some variants in The Scripting Games, including this one:

$obj = New-Object PSObject
Add-Member -InputObject $obj -Name Foo -MemberType NoteProperty -Value $bar

I generally don’t like any syntax that explicitly uses -InputObject like that; the parameter is designed to catch pipeline input, and using it explicitly strikes me as overly wordy, and doesn’t really leverage the shell.

Glenn and I both felt that, these days, a hashtable was the preferred approach:

$props = @{This=$that;
           Foo=$bar;
           These=$those}

The semicolons are optional when you type the construct that way, but I tend to use them out of habits that come from other languages. The point of our debate was that Glenn would use the hashtable like this:

$obj = [pscustomobject]$props

Because he feels it’s more concise, and because he puts a high value on quick readability. I personally prefer (and teach) a somewhat longer version:

$obj = New-Object -Type PSObject -Prop $props

Because, I argued, type accelerators like [pscustomobject] aren’t documented or discoverable. Someone running across your script can’t use the shell’s help system to figure out WTF is going on; with New-Object, on the other hand, they’ve got a help file and examples to rely on.

(BTW, I never worry about ordered hashtables; if I need the output in a specific order, I’ll use a custom view, a Format cmdlet, or Select-Object. A developer once explained to me that unordered hashtables are more memory-efficient for .NET, so I go with them).
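As a side note for those who do want property order preserved, PowerShell 3.0 adds the [ordered] accelerator – shown here as an option, not as the author’s recommendation:

```powershell
# [ordered] (v3+) keeps keys in declaration order; it only works
# when applied directly to a hashtable literal like this.
$props = [ordered]@{ This = $that; Foo = $bar; These = $those }
$obj = [pscustomobject]$props   # properties emerge in declaration order
```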

But the big question on the table here is “to use type accelerators, or no?” You see this in many instances:

[void](Do-Something)

# vs.

Do-Something | Out-Null

Same end effect of course, but I’ve always argued that the latter is more discoverable, while Glenn (and many others) prefer the brevity of the former.

So we’ll make today’s Great Debate two-pronged. What approach do you favor for creating custom objects? And, do you tend to prefer type accelerators, or no?


PowerShell Great Debate: Capturing Errors


Hot on the heels of our last Great Debate, let’s take the discussion to the next logical step and talk about how you like to capture errors when they occur.

The first technique is to use -ErrorVariable:

Try {
  Get-WmiObject Win32_BIOS -comp nothing -ea stop -ev mine
} Catch {
  # use $mine for error
}

Another is to use the $Error collection:

Try {
  Get-WmiObject Win32_BIOS -comp badname -ea stop
} Catch {
  # use $error[0]
}

And a third is to use $_:

Try {
  Get-WmiObject Win32_BIOS -comp snoopy -ea stop
} Catch {
  # use $_
}

Personally, I’ve always disliked the last approach, because people don’t realize that in some situations $_ can get “hijacked.” For example:

Get-Content names.txt |
ForEach-Object { 
  Try {
    Get-WmiObject Win32_BIOS -Comp $_ -EA Stop
  } Catch {
    # is $_ an error or a computer name?
  }
}

Now, I’m a big not-fan of using pipelines like this in a script, but that’s another debate (it’s on my list). The point is really that I can’t universally, 100% rely on $_… and when someone uses $_ without realizing what’s happening, they back themselves into a tricky corner that’s difficult to diagnose. Since my big focus is on learning and teaching, I tend to want to teach techniques that are universal and always work the same way.
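A common workaround – my sketch, not part of the original examples – is to copy $_ into a named variable before entering Try, so the computer name survives inside Catch, where $_ becomes the error record:

```powershell
Get-Content names.txt |
ForEach-Object {
  $computer = $_   # capture before $_ gets "hijacked" in Catch
  Try {
    Get-WmiObject Win32_BIOS -ComputerName $computer -ErrorAction Stop
  } Catch {
    Write-Warning "Failed on ${computer}: $_"
  }
}
```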

That said, $error[0] and the -ErrorVariable (-EV) technique return slightly different objects, meaning you have to work with them somewhat differently.
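To make that difference concrete (a sketch): -ErrorVariable fills a collection of ErrorRecord objects, while $error[0] is a single ErrorRecord – so you index into one but not the other:

```powershell
Try {
  Get-WmiObject Win32_BIOS -ComputerName badname -ErrorAction Stop -ErrorVariable mine
} Catch {
  $mine[0].Exception.Message    # -EV variable: a collection; index in
  $error[0].Exception.Message   # $error[0]: already a single ErrorRecord
}
```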

So what’s your preference? Why? Which of these don’t you like so much… and why?


PowerShell Great Debate: Error Trapping


In the aftermath of The Scripting Games, it’s clear we need to have several community discussions – thus, I present to you, The Great Debates. These will be a series of posts wherein I’ll outline the basic situation, and you’re encouraged to debate and discuss in the comments section.

The general gist is that, during the Games, we saw different people voting “up” and “down” for the exact same techniques. So… which one is right? Neither! But all approaches have pros and cons… so that’s what we’ll discuss and debate. In the end, I’ll take the discussion into a community-owned (free) ebook on patterns and practices for PowerShell.

Today’s Debate: Error Trapping

There are a few different approaches folks take to trapping an error (I’m not discussing capturing the error, just knowing that one occurred).

Hopefully the Trap construct is familiar to everyone; I’ve always believed it’s awkward and outdated. The product team has said as much; it was just the best they could do in v1 given time constraints. Its use of scope makes it especially tricky sometimes.
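For reference, a minimal Trap sketch – note how the construct sits apart from the statements it protects, which is part of what makes its scoping feel awkward:

```powershell
trap {
    Write-Warning "Trapped: $_"
    continue    # suppress the error and resume at the next statement in this scope
}

Get-WmiObject Win32_BIOS -ComputerName badname -ErrorAction Stop
'This line still runs, because the trap said continue.'
```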

Try…Catch…Finally seems to be what a lot of people prefer. It’s procedural and structured, and it works against any terminating exception. You do have to remember to make errors into terminating exceptions (-EA Stop on a cmdlet, for example), but it’s a very programmatic approach.

I see folks sometimes use $?:

Do-Something
If (-not $?) {
 # deal with the failure
}

A “con” of this approach is that $? doesn’t indicate an error. It indicates whether or not the previous command thinks it completed successfully. It’s reliable with most cmdlets – but I’ve seen it fail for a lot of external utilities. Given that it isn’t 100% reliable as an indicator, I tend to shy away from it. I’d rather learn one way that always works, and that’s been Try/Catch for me.

Try/Catch also makes it easy to catch different exceptions differently. I don’t always need to do so… but again, I’d rather learn one way to do things that always works and provides more flexibility. I don’t want to use $? sometimes, and then use something else other times, because that’s more to remember, teach, learn, etc.

Some folks will do an $error.Clear(), clearing the error collection, and then run a command. They’ll then check $error.Count to see if it’s nonzero. I don’t like that as much because it looks messy to me, and again – it doesn’t let me handle different exceptions as easily as Try/Catch does.
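Sketched out, with Do-Something as a placeholder command, that pattern looks like this:

```powershell
$error.Clear()      # empty the session-wide error collection
Do-Something
if ($error.Count -gt 0) {
    # at least one error occurred; $error[0] is the most recent
}
```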

Ok… your thoughts?
