
Going Deeper on DSC Resources

Desired State Configuration is a very new technology, and declarative configuration management is still a very young space.  We (Microsoft and the community) are still figuring out the best structure for resources, composite configurations, and other structures.

That said, there are certain viewpoints that I’ve come to, either from hands-on experience or from watching how other communities (like the Puppet and Chef communities) handle similar problems.

How Granular Should I Get?

There is no absolute answer.

Very, Very Granular

Resources should be very granular in the abstract, but in practice, you may need to make concessions to improve the user experience.

For example, when I configure an IP address for a network interface, I can supply a default gateway. A default gateway is a route, which is separate from the interface and IP address, but in practice they tend to be configured together. In this case, it might make sense to offer a resource that can configure both the IP address and the default gateway.

I tend to think resources should be very granular. We can use composite resources to offer higher level views of the configuration. If I were implementing a resource to configure a network adapter’s IP and gateway, I would have a route resource, an IP address resource, and probably a DNS server setting resource. I would then also have a composite resource to deal with the default use case of configuring a network adapter’s IP address, gateway, and DNS servers together.

The benefit of doing it this way is that I still have very discrete, flexible primitives (the IP address resource, the route resource, and the DNS server resource). I can then leverage the route resource to create static routes, or use them directly to more discretely configure the individual elements.
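As a sketch, a composite resource wrapping those primitives might look something like the following. The resource and property names here (cIPAddress, cDefaultGatewayAddress, cDnsServerAddress, and the containing cNetworking module) are illustrative, loosely modeled on the community networking resources rather than an exact copy of any published schema:

```powershell
# Hypothetical composite resource, saved as cNetworkAdapter.schema.psm1 inside
# a DSCResources\cNetworkAdapter folder of a resource module.
Configuration cNetworkAdapter
{
    param
    (
        [Parameter(Mandatory)] [string]   $InterfaceAlias,
        [Parameter(Mandatory)] [string]   $IPAddress,
        [string]   $DefaultGateway,
        [string[]] $DnsServers
    )

    # Pull in the granular resources this composite builds on.
    Import-DscResource -ModuleName cNetworking

    cIPAddress IP
    {
        InterfaceAlias = $InterfaceAlias
        IPAddress      = $IPAddress
    }

    cDefaultGatewayAddress Gateway
    {
        InterfaceAlias = $InterfaceAlias
        Address        = $DefaultGateway
    }

    cDnsServerAddress Dns
    {
        InterfaceAlias = $InterfaceAlias
        Address        = $DnsServers
    }
}
```

The caller gets the convenient "configure the whole adapter" experience, while the route, IP, and DNS primitives remain available on their own.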


Sometimes, though, you have flow control that needs to happen based on the state of the client or the environment.  Since your configuration is statically generated and declarative, there are no flow control statements in the configuration MOF document.  That means that any logic that needs to occur at application time has to live inside a custom resource.

Unfortunately, this leads to the need to re-implement common functionality.  For example, if I have a service whose binary I need to update (not via an MSI), I basically have to re-implement parts of the File and Service resources.  This use case requires a custom resource because I need to stop the service before I can replace the binary, but I don’t want to stop the service on every consistency check if I don’t need to replace the file.

This scenario begs for a better way to leverage existing resources in a cross-resource scenario (kind of like RequiredModules in module metadata), but there isn’t a clean way to do this that I’ve found (though I’m still looking!).

My Recommendation

So for most cases, I would try to use existing resources or build very granular custom resources.  If I need to offer a higher level of abstraction, I’d escalate to putting a composite resource on top of those granular resources.  Finally, if I need some flow control or logic for a multistep process, I’d implement a more comprehensive resource.

What Should I Validate?

Now that we are seeing more resources in the community repository (especially thanks to the waves of resources from the PowerShell Team!), we are seeing a variety of levels of validation being performed.

I think that the Test-TargetResource function should validate all the values and states that Set-TargetResource can set.

An example of where this isn’t happening currently is in the cNetworking resource for PSHOrg_cIPAddress.  I’m going to pick on this resource a bit, since it was the catalyst for this discussion.

The resource offers a way to set a default gateway as well as the IP address.  So what happens if after setting the IP and default gateway, someone changes the default gateway to point to another router?

In this case, the validation is only checking that the IP address is correct.  DSC will never re-correct the gateway and our DSC configuration document (the MOF file) is no longer an accurate representation of the system state, despite the fact that the Local Configuration Manager (LCM) will report that everything matches.

This is BAD!!  If a resource offers an option to configure a setting, that setting should be validated by Test-TargetResource, otherwise that setting should be removed from the resource.  The intent of DSC is to control configuration, including changes over time and return a system to the desired state.  If we ignore certain settings, we weaken our trust in the underlying infrastructure of DSC.
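As a sketch, a Test-TargetResource that honors this principle checks every declared setting before reporting compliance. Get-NetIPAddress and Get-NetRoute below are the real NetTCPIP cmdlets (Windows 8 / Server 2012 and later); the parameter set is a simplified, hypothetical version of what a cIPAddress-style resource would accept:

```powershell
function Test-TargetResource
{
    param
    (
        [Parameter(Mandatory)] [string] $InterfaceAlias,
        [Parameter(Mandatory)] [string] $IPAddress,
        [string] $DefaultGateway
    )

    # Validate the IP address assigned to the interface.
    $currentIP = Get-NetIPAddress -InterfaceAlias $InterfaceAlias -AddressFamily IPv4 -ErrorAction SilentlyContinue
    if ($currentIP.IPAddress -notcontains $IPAddress) { return $false }

    if ($DefaultGateway)
    {
        # The gateway is part of the declared configuration, so drift here must
        # also report $false; otherwise the LCM will never correct it.
        $route = Get-NetRoute -DestinationPrefix '0.0.0.0/0' -InterfaceAlias $InterfaceAlias -ErrorAction SilentlyContinue
        if ($route.NextHop -notcontains $DefaultGateway) { return $false }
    }

    return $true
}
```

Every parameter that Set-TargetResource acts on gets a corresponding check; nothing the resource can change is allowed to drift silently.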

What should I return?

The last element I’m going to tackle today is what should be returned from Get-TargetResource.  I’ve been on the fence about this one.  Like with Test-TargetResource, there are a number of implementation examples that vary in how they come up with the return values.

Currently, I don’t see a ton of use for Get-TargetResource and it doesn’t impact the Test and Set phases of the LCM, so it’s been easy to ignore.  This is bad practice (shame on me).

Here are my thoughts on Get-TargetResource: it should return the currently configured state of the machine.  Directly returning the parameters passed in is misleading.

Going back to the PSHOrg_cIPAddress from the earlier example, it directly returns the default gateway from the parameter, regardless of the configured gateway.  This wouldn’t be so bad if the resource actually checked the gateway during processing and could correct it if it drifted.  But it does not check the gateway, so Get-TargetResource could be lying to you.  The most consistent result of Get-TargetResource would be retrieving the currently configured settings.
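A Get-TargetResource built on that principle queries the machine rather than echoing its inputs. As in the earlier sketch, the cmdlets are real NetTCPIP cmdlets, while the parameter set is a simplified, hypothetical one:

```powershell
function Get-TargetResource
{
    param
    (
        [Parameter(Mandatory)] [string] $InterfaceAlias,
        [Parameter(Mandatory)] [string] $IPAddress,
        [string] $DefaultGateway
    )

    # Query the actual state of the interface and default route.
    $currentIP = Get-NetIPAddress -InterfaceAlias $InterfaceAlias -AddressFamily IPv4 -ErrorAction SilentlyContinue
    $route     = Get-NetRoute -DestinationPrefix '0.0.0.0/0' -InterfaceAlias $InterfaceAlias -ErrorAction SilentlyContinue

    # The parameters only identify the resource instance; the returned values
    # come from the live system, never from the parameters themselves.
    return @{
        InterfaceAlias = $InterfaceAlias
        IPAddress      = $currentIP.IPAddress
        DefaultGateway = $route.NextHop
    }
}
```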

What’s left?

What other burning questions do you have around DSC?  Let’s keep talking them through either in the forums or in the comments here.

PowerShell and System.Nullable<T>

While helping to answer a question about Exchange cmdlets today, I came across something interesting, which doesn’t seem to be very well documented.

A little background, first: in the .NET Framework (starting in Version 2.0), there’s a Generic type called System.Nullable<T>. The purpose of this type is to allow you to assign a value of null to Value types (structs, integers, booleans, etc), which are normally not allowed to be null in a .NET application. The Nullable structure consists of two properties: HasValue (a Boolean), and Value (the underlying value type, such as an integer, struct, etc).

A C# method which accepts a Nullable type might look something like this:

int? Multiply(int? operand1, int? operand2)
{
    if (!operand1.HasValue || !operand2.HasValue) { return null; }

    return operand1.Value * operand2.Value;
}

("int?" is C# shorthand for System.Nullable<int> .)

PowerShell appears to do something helpful, though potentially unexpected, when it comes across an instance of System.Nullable: it evaluates to either $null or an object of the underlying type for you, without the need (or the ability) to ever access the HasValue or the Value properties of the Nullable structure yourself:

$variable = [Nullable[int]] 10

$variable.GetType().FullName   # System.Int32

If you assign $null to the Nullable variable instead, the $variable.GetType() line will produce a “You cannot call a method on a null-valued expression” error. You never see the actual System.Nullable structure in your PowerShell code.

What does this have to do with Exchange? Some of the Exchange cmdlets return objects that have public Nullable properties, such as MoveRequestStatistics.BytesTransferred. Going through the MSDN documentation on these classes, you might expect to have to do something like $_.BytesTransferred.Value.ToMB() to get at the ByteQuantifiedSize.ToMB() method, but that won’t work. $_.BytesTransferred will either be $null, or it will be an instance of the ByteQuantifiedSize structure; there is no “Value” property in either case. After checking for $null, you’d just do this: $_.BytesTransferred.ToMB()

PowerShell Gotcha: UNC paths and Providers

PowerShell’s behavior can be a little bit funny when you pass a UNC path to certain cmdlets. PowerShell doesn’t recognize these paths as “rooted” because they’re not on a PSDrive; as such, whatever provider is associated with PowerShell’s current location will attempt to handle them. For example:

Set-Location C:
Get-ChildItem -Path \\$env:COMPUTERNAME\c$

Set-Location HKLM:
Get-ChildItem -Path \\$env:COMPUTERNAME\c$

The first command works fine (assuming you have a c$ share enabled and are able to access it), and the second command gives a “Cannot find path” error, because the Registry provider tried to work with the UNC path instead of the FileSystem provider. You can get around this problem by prefixing the UNC path with “FileSystem::”, which will make PowerShell use that provider regardless of your current location.
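The prefix trick looks like this in practice. The UNC form assumes the administrative c$ share is reachable; the second line uses an ordinary local path, since the prefix works for any path the FileSystem provider owns:

```powershell
Set-Location HKLM:

# Without the prefix, the Registry provider (our current location) would try to
# interpret these paths.  "FileSystem::" forces the FileSystem provider
# regardless of the current location.
Get-ChildItem -Path "FileSystem::\\$env:COMPUTERNAME\c$"   # UNC path
$items = Get-ChildItem -Path "FileSystem::$env:windir"     # any path works

Set-Location C:
```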

On top of that, commands like Resolve-Path and $PSCmdlet.GetUnresolvedProviderPathFromPSPath() don’t normalize UNC paths properly, even when the FileSystem provider handles them. This annoyed me, so I spent some time investigating different options to get around the quirky behavior. The result is the Get-NormalizedFileSystemPath function, which can be downloaded from the TechNet Gallery. In addition to making UNC paths behave, this had the side effect of also resolving 8.3 short file names to long paths (something else that Resolve-Path doesn’t do.)

The function has an “-IncludeProviderPrefix” switch which tells it to include the “FileSystem::” prefix, if desired (so you can reliably use cmdlets like Get-Item, Get-Content, Test-Path, etc., regardless of your current location or whether the path is UNC.) For example:

$path = "\\$env:COMPUTERNAME\c$\SomeFolder\..\.\Whatever\..\PROGRA~1" 
$path = Get-NormalizedFileSystemPath -Path $path -IncludeProviderPrefix 
Set-Location HKLM: 
Get-ChildItem -Path $path | Select-Object -First 1 

FileSystem::\\MYCOMPUTERNAME\c$\Program Files 
    Directory: \\MYCOMPUTERNAME\c$\Program Files 
Mode                LastWriteTime     Length Name 
----                -------------     ------ ---- 
d----         7/30/2013  10:54 AM            7-Zip 

Revisited: PowerShell and Encryption

Back in November, I made a post about saving passwords for your PowerShell scripts. As I mentioned in that article, the ConvertFrom-SecureString cmdlet uses the Data Protection API to create an encrypted copy of the SecureString’s contents. DPAPI uses master encryption keys that are saved in the user’s profile; unless you enable either Roaming Profiles or Credential Roaming, you’ll only be able to decrypt that value on the same computer where the encryption took place. Even if you do enable Credential Roaming, only the same user account who originally encrypted the data will be able to read it.

So, what do you do if you want to encrypt some data that can be decrypted by other user accounts?

The ConvertFrom-SecureString and ConvertTo-SecureString cmdlets have a pair of parameters (-Key and -SecureKey) that allow you to specify your own encryption key instead of using DPAPI. When you do this, the SecureString’s contents are encrypted using AES. Anyone who knows the AES encryption key will be able to read the data.

That’s the easy part. Encrypting data is simple; making sure your encryption keys don’t get exposed is the trick. If you’ve hard-coded the keys in your script, you may as well have just stored the password in plain text, for all the good the encryption will do. There are several ways you can try to save and protect your AES key; you could place it in a file with strong NTFS permissions, or in a database with strict access control, for example. In this post, however, I’m going to focus on another technique: encrypting your AES key with RSA certificates.

If you have an RSA certificate (even a self-signed one), you can encrypt your AES key using the RSA public key. At that point, only someone who has the certificate’s private key will be able to retrieve the AES key and read your data. Instead of trying to protect encryption keys yourself, we’re back to letting the OS handle the heavy lifting; if it protects your RSA private keys well, then your AES key is also safe. Here’s a brief example of creating a SecureString, saving it with a new random 32-byte AES key, and then using an RSA certificate to encrypt the key itself:

    $secureString = 'This is my password.  There are many like it, but this one is mine.' | 
                    ConvertTo-SecureString -AsPlainText -Force

    # Generate our new 32-byte AES key.  I don't recommend using Get-Random for this; the System.Security.Cryptography namespace
    # offers a much more secure random number generator.

    $key = New-Object byte[](32)
    $rng = [System.Security.Cryptography.RNGCryptoServiceProvider]::Create()
    $rng.GetBytes($key)

    try
    {
        $encryptedString = ConvertFrom-SecureString -SecureString $secureString -Key $key

        # This is the thumbprint of a certificate on my test system where I have the private key installed.

        $thumbprint = 'B210C54BF75E201BA77A55A0A023B3AE12CD26FA'
        $cert = Get-Item -Path Cert:\CurrentUser\My\$thumbprint -ErrorAction Stop

        $encryptedKey = $cert.PublicKey.Key.Encrypt($key, $true)

        $object = New-Object psobject -Property @{
            Key     = $encryptedKey
            Payload = $encryptedString
        }

        $object | Export-Clixml .\encryptionTest.xml
    }
    finally
    {
        if ($null -ne $key) { [array]::Clear($key, 0, $key.Length) }
    }

Notice the use of try/finally and [array]::Clear() on the AES key’s byte array. It’s a good habit to make sure you’re not leaving the sensitive data lying around in memory longer than absolutely necessary. (This is the same reason you get a warning if you use ConvertTo-SecureString -AsPlainText without the -Force switch; .NET doesn’t allow you to zero out the memory occupied by a String.)

Any user who has the certificate installed, including its private key, will be able to load up the XML file and obtain the original SecureString as follows:

    $object = Import-Clixml -Path .\encryptionTest.xml

    $thumbprint = 'B210C54BF75E201BA77A55A0A023B3AE12CD26FA'
    $cert = Get-Item -Path Cert:\CurrentUser\My\$thumbprint -ErrorAction Stop

    try
    {
        $key = $cert.PrivateKey.Decrypt($object.Key, $true)
        $secureString = $object.Payload | ConvertTo-SecureString -Key $key
    }
    finally
    {
        if ($null -ne $key) { [array]::Clear($key, 0, $key.Length) }
    }

Using RSA certificates to protect your AES encryption keys is as simple as that: Get-Item, $cert.PublicKey.Key.Encrypt() , and $cert.PrivateKey.Decrypt() . You can even make multiple copies of the AES key with different RSA certificates, so that more than one person/certificate can decrypt the data.
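A sketch of that multi-recipient idea, reusing $key and $encryptedString from the example above (the thumbprints are placeholders for certificates you actually hold):

```powershell
$thumbprints = '<thumbprint1>', '<thumbprint2>'   # placeholder thumbprints

# Encrypt the same AES key once per certificate; any matching private key can
# later recover it.
$keyCopies = foreach ($thumbprint in $thumbprints)
{
    $cert = Get-Item -Path Cert:\CurrentUser\My\$thumbprint -ErrorAction Stop
    New-Object psobject -Property @{
        Thumbprint = $thumbprint
        Key        = $cert.PublicKey.Key.Encrypt($key, $true)
    }
}

New-Object psobject -Property @{
    Keys    = $keyCopies
    Payload = $encryptedString
} | Export-Clixml .\encryptionTest.xml
```

A reader picks the Keys entry whose thumbprint matches a certificate they hold, decrypts that copy with its private key, and proceeds exactly as before.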

I’ve posted several examples of data encryption techniques in PowerShell on the TechNet Gallery. Some are based on SecureStrings, like the code above, and others use .NET’s CryptoStream class to encrypt basically anything (in this case, an entire file on disk.)

Revisited: Script Modules and Variable Scopes

Last week, I demonstrated that functions exported from Script Modules do not inherit their caller’s variable scopes, and how you could get around this by using the method $PSCmdlet.GetVariableValue().

It didn’t take me long to decide it was very tedious to include this type of code in every function, particularly when considering the number of preference variables that PowerShell has. (Check out about_Preference_Variables some time; there are quite a few.) I’ve just converted this approach into a function that can be called with a single line, and supports all PowerShell preference variables. For example, the Test-ScriptModuleFunction from the original post can be written as:

function Test-ScriptModuleFunction
{
    [CmdletBinding()]
    param ( )

    Get-CallerPreference -Cmdlet $PSCmdlet -SessionState $ExecutionContext.SessionState

    Write-Host "Module Function Effective VerbosePreference: $VerbosePreference"
    Write-Verbose "Something verbose."
}
You can download the Get-CallerPreference function from the TechNet Gallery. It has been tested on PowerShell 2.0 and 4.0.

Getting your Script Module functions to inherit “Preference” variables from the caller

Edit: While this first blog post demonstrates a viable workaround, it requires a fair bit of code in your function if you want to inherit multiple preference variables. After this post was made, I discovered a way to get this working in a function, so your code only requires a single line to call it. Check out Revisited: Script Modules and Variable Scopes for more information and a download link.

One thing I’ve noticed about PowerShell is that, for some reason, functions defined in script modules do not inherit the variable scope of the caller. From the function’s perspective, the inherited scopes look like this: Local (Function), Script (Module), Global. If the caller sets, for example, $VerbosePreference in any scope other than Global, the script module’s functions won’t behave according to that change. The following test code demonstrates this:

# File TestModule.psm1:

function Test-ScriptModuleFunction
{
    [CmdletBinding()]
    param ( )

    Write-Host "Module Function Effective VerbosePreference: $VerbosePreference"
    Write-Verbose "Something verbose."
}

# File Test.ps1:

Import-Module -Name .\TestModule.psm1 -Force

$VerbosePreference = 'Continue'

Write-Host "Global VerbosePreference: $global:VerbosePreference"
Write-Host "Test.ps1 Script VerbosePreference: $script:VerbosePreference"

Test-ScriptModuleFunction

Executing test.ps1 produces the following output, assuming that you’ve left the global VerbosePreference value to its default of “SilentlyContinue”:

Global VerbosePreference: SilentlyContinue
Test.ps1 Script VerbosePreference: Continue
Module Function Effective VerbosePreference: SilentlyContinue

There is a way for the script module’s Advanced Function to access variables in the caller’s scope; it just doesn’t happen automatically. You can use one of the following two methods in your function’s begin block:

$VerbosePreference = $PSCmdlet.GetVariableValue('VerbosePreference')
$VerbosePreference = $PSCmdlet.SessionState.PSVariable.GetValue('VerbosePreference')

$PSCmdlet.GetVariableValue() is just a shortcut for accessing the methods in the SessionState object. Both methods will return the same value that the caller would get if they just typed $VerbosePreference (using the local scope if the variable exists here, then checking the caller’s parent scope, and so on until it reaches Global.)

Let’s give it a try, modifying our TestModule.psm1 file as follows:

function Test-ScriptModuleFunction
{
    [CmdletBinding()]
    param ( )

    if (-not $PSBoundParameters.ContainsKey('Verbose'))
    {
        $VerbosePreference = $PSCmdlet.GetVariableValue('VerbosePreference')
    }

    Write-Host "Module Function Effective VerbosePreference: $VerbosePreference"
    Write-Verbose "Something verbose."
}

Now, when executing test.ps1, we get the result we were after:

Global VerbosePreference: SilentlyContinue
Test.ps1 Script VerbosePreference: Continue
Module Function Effective VerbosePreference: Continue
VERBOSE: Something verbose.

Keep in mind that the $PSCmdlet variable is only available to Advanced Functions (see about_Functions_Advanced). Make sure you’ve got a param block with the [CmdletBinding()] and/or [Parameter()] attributes in the function, or you’ll get an error, because $PSCmdlet will be null.

Saving Passwords (and preventing other processes from decrypting them)

This question is nothing new: “How do I save credentials in PowerShell so I don’t have to enter a password every time the script runs?” An answer to that question has been in PowerShell for a very long time: you use the ConvertFrom-SecureString cmdlet to encrypt your password, save the resulting encrypted string to disk, and then later reverse the process with ConvertTo-SecureString. (Alternatively, you can use Export-CliXml, which encrypts the SecureString the same way.) For example:

# Prompt the user to enter a password
$secureString = Read-Host -AsSecureString "Enter a secret password"

$secureString | ConvertFrom-SecureString | Out-File -FilePath .\storedPassword.txt

# Later, read the password back in.
$secureString = Get-Content -Path .\storedPassword.txt | ConvertTo-SecureString

The ConvertFrom-SecureString and ConvertTo-SecureString cmdlets, when you don’t use their -Key, -SecureKey, or -AsPlainText switches, use DPAPI to encrypt / decrypt your secret data. When it comes to storing secrets with software alone (and without requiring a user to enter a password), DPAPI’s security is about as good as it gets. It’s not unbreakable – all of the encryption keys are there to be compromised by someone with Administrator access to the computer – but it’s pretty good.

You can see the details on how DPAPI works in the linked article, but here’s the long and short of it: By default, only the same user account (and on the same computer) is able to decrypt the protected data. However, there’s a catch: any process running under that user account can freely decrypt the data. DPAPI addresses this by allowing you to send it some optional, secondary entropy information to be used in the encryption and decryption process. This is like a second key that is specific to your program or script; so long as other processes don’t know what that entropy value is, they can’t read your data. (In theory, now you have a problem with protecting your entropy value, but this at least adds an extra layer that a malicious program needs to get around.) Here’s an excerpt from the article:

A small drawback to using the logon password is that all applications running under the same user can access any protected data that they know about. Of course, because applications must store their own protected data, gaining access to the data could be somewhat difficult for other applications, but certainly not impossible. To counteract this, DPAPI allows an application to use an additional secret when protecting data. This additional secret is then required to unprotect the data.

Technically, this “secret” should be called secondary entropy. It is secondary because, while it doesn’t strengthen the key used to encrypt the data, it does increase the difficulty of one application, running under the same user, to compromise another application’s encryption key. Applications should be careful about how they use and store this entropy. If it is simply saved to a file unprotected, then adversaries could access the entropy and use it to unprotect an application’s data.

Unfortunately, the ConvertFrom-SecureString and ConvertTo-SecureString cmdlets don’t allow you to specify this secondary entropy; they always pass in a Null value. For that reason, I’m putting together a small PowerShell script module to add this functionality. It can be downloaded from the TechNet repository. It adds an optional -Entropy parameter to both ConvertFrom-SecureString and ConvertTo-SecureString.

Since I was tinkering with those commands anyway, I also added an -AsPlainText switch to the ConvertFrom-SecureString command, in case you want to get the plain text back. This saves you the couple of extra commands to set up a PSCredential object and call the GetNetworkCredential() method, or make the necessary calls to the Marshal class yourself.

Import-Module .\SecureStringFunctions.psm1
$secureString = Read-Host -AsSecureString "Enter a secret password." 

# You can pass basically anything as the Entropy value, but I'd recommend sticking to simple value types (including strings),
# or arrays of those types, to make sure that the binary serialization of your entropy object doesn't change between
# script executions.  Here, we'll use Pi.  Edit:  The latest version of the code enforces the use of the recommended simple
# types, unless you also use the -Force switch.

$secureString | ConvertFrom-SecureString -Entropy ([Math]::PI) | Out-File .\storedPassword.txt

# Simulating another program trying to read your value using the normal ConvertTo-SecureString cmdlet (with null entropy).  This will produce an error. 
$newSecureString = Get-Content -Path .\storedPassword.txt | ConvertTo-SecureString 
# When your program wants to read the value, it can do so by passing the same Entropy value. 
$newSecureString = Get-Content -Path .\storedPassword.txt | ConvertTo-SecureString -Entropy ([Math]::PI)

#Display the plain text of the new SecureString object, verifying that it was decrypted correctly.
$newSecureString | ConvertFrom-SecureString -AsPlainText -Force

PowerShell Performance: Filtering Collections

Depending on what version of Windows PowerShell you are running, there are several different methods and syntax available for filtering collections. The differences are not just aesthetic; each one has different capabilities and performance characteristics. If you find yourself needing to optimize the performance of a script that processes a large amount of data, it helps to know what your options are. I ran some tests on the performance of various filtering techniques, and here are the results:

Where-Object (-FilterScript)

Get-Process | Where-Object { $_.Name -eq 'powershell_ise' }

Pros:
  • Runs on any version of Windows PowerShell.
  • Streams objects via the pipeline, keeping memory utilization to a minimum.
  • Any complex logic can be implemented inside the script block.
Cons:
  • Slowest execution time of all the available options.

PowerShell 3.0 simplified Where-Object syntax

Get-Process | Where Name -eq 'powershell_ise'

Pros:
  • Executes slightly faster than the -FilterScript option.
  • Streams objects via the pipeline, keeping memory utilization to a minimum.
Cons:
  • Limited to very simple comparisons of a single property on the piped objects. No compound expressions or data transformations are allowed.
  • Only works with PowerShell 3.0 or later.

PowerShell 4.0 .Where() method syntax

(Get-Process).Where({ $_.Name -eq 'powershell_ise' })

Pros:
  • Much faster than both of the Where-Object cmdlet versions (about 2x the speed.)
  • Any complex logic can be implemented inside the script block.
Cons:
  • Only usable on collections that are completely stored in memory; cannot stream objects via the pipeline.
  • Only works with PowerShell 4.0 or later.

PowerShell filter

filter isISE { if ($_.Name -eq 'powershell_ise') { $_ } }

Get-Process | isISE

Pros:
  • Runs on any version of Windows PowerShell.
  • Streams objects via the pipeline, keeping memory utilization to a minimum.
  • Any complex logic can be implemented inside the script block.
  • Faster than all of the Where-Object and .Where() options.
Cons:
  • Not as easy to read and debug; the user must scroll to wherever the filter is defined to see the actual logic.

foreach loop with embedded conditionals

foreach ($process in (Get-Process)) {
    if ($process.Name -eq 'powershell_ise') {
        # Do something with $process
    }
}

Pros:
  • Faster than any of the previously mentioned options.
  • Any complex logic can be implemented inside the loop / conditional.
Cons:
  • Only usable on collections that are completely stored in memory; cannot stream objects via the pipeline.

To test the execution speed of each option, I ran this code:

$loop = 1000

$v2 = (Measure-Command {
    for ($i = 0; $i -lt $loop; $i++) {
        Get-Process | Where-Object { $_.Name -eq 'powershell_ise' }
    }
}).TotalMilliseconds

$v3 = (Measure-Command {
    for ($i = 0; $i -lt $loop; $i++) {
        Get-Process | Where Name -eq 'powershell_ise'
    }
}).TotalMilliseconds

$v4 = (Measure-Command {
    for ($i = 0; $i -lt $loop; $i++) {
        (Get-Process).Where({ $_.Name -eq 'powershell_ise' })
    }
}).TotalMilliseconds

$filter = (Measure-Command {
    filter isISE { if ($_.Name -eq 'powershell_ise') { $_ } }
    for ($i = 0; $i -lt $loop; $i++) {
        Get-Process | isISE
    }
}).TotalMilliseconds

$foreachLoop = (Measure-Command {
    for ($i = 0; $i -lt $loop; $i++) {
        foreach ($process in (Get-Process)) {
            if ($process.Name -eq 'powershell_ise') {
                # Do something with $process
            }
        }
    }
}).TotalMilliseconds

Write-Host ('Where-Object -FilterScript:  {0:f2} ms' -f $v2)
Write-Host ('Simplified Where syntax:     {0:f2} ms' -f $v3)
Write-Host ('.Where() method:             {0:f2} ms' -f $v4)
Write-Host ('Using a filter:              {0:f2} ms' -f $filter)
Write-Host ('Conditional in foreach loop: {0:f2} ms' -f $foreachLoop)


The output on my test system:

Where-Object -FilterScript:  3035.69 ms
Simplified Where syntax:     2855.33 ms
.Where() method:             1445.21 ms
Using a filter:              1281.13 ms
Conditional in foreach loop: 1073.14 ms

Comparison Operators, Collections, and Conditionals (Oh, My!)

Earlier today, I came across this bit of code in a forum post (modified slightly for clarity):

Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = True" |
Where-Object { $_.IPAddress | ForEach-Object { $_ -match '^192\.168\.1\.' } }

At first glance, it seems reasonable. Win32_NetworkAdapterConfiguration.IPAddress is a multi-valued property, so the author included a foreach loop in the Where-Object script block. If the NIC has an IP address matching the regular expression, the Where-Object block should evaluate to True, giving the desired result, right?

Not quite. This code works fine if all of your network adapters have only a single IP address, but this Where-Object clause actually evaluates to True for any adapter that has more than one IP, regardless of whether any of them match the pattern. For instance, when I change the pattern to something nonsensical and run it on my PC, here’s what I get:

Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = True" |
Where-Object { $_.IPAddress | ForEach-Object { $_ -match 'Jabberwocky' } }

DHCPEnabled      : True
IPAddress        : {, fe80::8dfb:e1df:bea4:3e78}
DefaultIPGateway : {}
DNSDomain        : 
ServiceName      : e1cexpress
Description      : Intel(R) 82579V Gigabit Network Connection
Index            : 7

DHCPEnabled      : False
IPAddress        : {, fe80::ad35:fe41:f21b:ee0b}
DefaultIPGateway : 
DNSDomain        : 
ServiceName      : VBoxNetAdp
Description      : VirtualBox Host-Only Ethernet Adapter
Index            : 13

Here’s why:

When you use the Where-Object cmdlet, PowerShell executes the script block and coerces the result to type [bool]. When a network adapter contains more than one IP address, the block { $_.IPAddress | ForEach-Object { $_ -match 'Jabberwocky' } } produces a multi-element array of Boolean values (which, in this case, are all False). When PowerShell casts an array to a Boolean, there are three possible results:

  • If the array is empty, it evaluates to False.
  • If the array contains one element, that element is cast to Boolean.
  • If the array contains two or more elements, it evaluates to True.

You can see this for yourself by entering these test commands in a PowerShell console:

[bool] @()  # False
[bool] @($false)   # False
[bool] @($true)   # True
[bool] @($false, $false)   # True

This doesn’t just affect the Where-Object cmdlet; the same rules apply in an “if” statement, or the condition of a while / do loop. So how do we fix it?

As always, there are multiple ways to fix the problem. In this post, I’m going to focus on how the comparison operators behave when the left operand is a collection. Instead of returning a Boolean value, they act as a sort of filter themselves, returning only the elements from the collection that meet the criteria of the operator. For example:

$array = @(0,1,2,3,4,5,6,7,"dog","cat","doghouse")

$array -eq 5   # This returns a single-element array containing 5
$array -lt 5   # This returns an array containing 0,1,2,3,4
$array -like 'do*'  # This returns an array containing "dog","doghouse"
$array -eq 47   # This returns an empty array.  (note: NOT $null)

When used in a conditional, the previous rules apply: 0 matches (an empty array) means False, 2 or more matches means True, and a single match depends on what you were searching for. For example, ($array -eq 0) evaluates to False with the above array, not because 0 wasn’t found, but because the single matching element, 0, happens to convert to False. (On a side note, it would be better to use the -contains operator in this situation rather than -eq, for that reason.)
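A quick demonstration of that -eq vs. -contains difference:

```powershell
$array = @(0, 1, 2, 3, 4, 5, 6, 7, 'dog', 'cat', 'doghouse')

# -eq returns the matching elements; here that's the single element 0,
# which then casts to $false in a Boolean context.
[bool] ($array -eq 0)   # False

# -contains answers the membership question directly.
$array -contains 0      # True
```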

Going back to the initial example with network adapters, an easy solution is simply to get rid of the ForEach loop:

Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = True" |
Where-Object { $_.IPAddress -match '^192\.168\.1\.' }

Now, what happens?

  • If zero IPs in the list match the pattern, you get an empty array (which evaluates to False).
  • If one IP matches, you get back a single-element array containing that IP address string, and that non-empty string will always evaluate to True when cast to a Boolean.
  • If more than one IP matches, you get a multi-element array, which also evaluates to True.

Exactly the results you wanted for that Where-Object filter.

The “about_Comparison_Operators” help file contains the details on how these operators behave for both scalar (single) values and collections. When you’re writing your script’s logic, be sure to ask yourself: “Can this value I’m evaluating sometimes be a collection?”, and hopefully, this type of bug will not bite you.

Why Get-Content Ain’t Yer Friend

Well, it isn’t your enemy, of course, but it’s definitely a tricky little beast.

Get-Content is quickly becoming my nemesis, because it’s sucking a lot of PowerShell newcomers into its insidious little trap. Actually, the real problem is that most newcomers don’t really understand that PowerShell is an object-oriented, rather than a text-oriented shell; they’re trying to treat Get-Content like the old Type command (and why not? type is an alias to Get-Content in PowerShell, isn’t it?), and failing.

Worse, PowerShell has just enough under-the-hood smarts to make some things work, but not everything. 

For example, this works to replace all instances of “t” with “x” in the file test.txt, outputting the result to new.txt:

$x = Get-Content test.txt
$x -replace "t","x" | Out-File new.txt

Sadly, this reinforces – for newcomers – the notion that Get-Content is just reading in the text file as a big chunk o’ text.


You see, in reality, Get-Content reads each line of the file individually, and returns a collection of System.String objects. It “loses” the carriage returns from the file at the same time. But you’d never know that, because when PowerShell displays a collection of strings, it displays them one object per line and inserts carriage returns. So if you do this, it’ll look like you’re dealing with a big hunk o’ text:

$x = Get-Content test.txt

But you’re not. $x, in that example, is a collection of objects, not a single string.
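You can verify this at the console. Given a three-line test.txt (created here on the fly), note what $x actually is:

```powershell
# Make a quick three-line file to experiment with
"one","two","three" | Out-File test.txt

$x = Get-Content test.txt

$x.GetType().Name   # Object[] -- an array of strings, not one big string
$x.Count            # 3 -- one System.String object per line
$x[0]               # one -- index into it like any other collection
```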

Never fear – you can make sense of this. First, if you use the -Raw parameter of Get-Content (available in v3+), it does in fact read the entire file as a big ol’ string, preserving carriage returns instead of using them to separate the file into single-line string objects. In v2, you can achieve something similar by using Out-String:

$x = Get-Content test.txt | Out-String

So if you just need to work with a big ol’ string, you can. Alternately, you might find that some operations are quicker when you actually do work line-by-line. For example, asking PowerShell to do a regex replace on a huge string can consume a ton of memory; working with one line at a time is often quicker. Just use a foreach:

ForEach ($line in (Get-Content test.txt)) {
  $line -replace "\d","x" | Out-File new.txt -Append
}

Of course, don’t assume it’ll be quicker – Measure-Command lets you test different approaches, so you can see which one is actually quicker.

You should also consider not using Get-Content at all, especially with very large files. That's because it wants to read the entire file into memory at once, and that can take a lot of memory – not to mention a bit more processor power, swap file space, or whatever else. Instead, read your file from disk one line at a time, work with each line, and then (if that's your intent) write each line back out to disk. That way, you only ever hold one line in RAM instead of caching the whole file.

$file = New-Object System.IO.StreamReader -Arg "test.txt"
while (($line = $file.ReadLine()) -ne $null) {
  # $line has your line
}
$file.Close()

Or at least something like that. Yeah, welcome to .NET Framework. Other options available in the Framework include reading a text file in chunks – again, to help conserve memory and improve processing speed, but without necessarily making you read line-by-line.
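As a sketch of that chunked approach (again assuming a file named test.txt, and PowerShell v5 or later for the [string]::new() syntax), StreamReader's ReadBlock method fills a char buffer one piece at a time:

```powershell
# Read test.txt in 4KB chunks instead of line-by-line or all at once
$reader = New-Object System.IO.StreamReader -Arg "test.txt"
$buffer = New-Object char[] 4096
while (($read = $reader.ReadBlock($buffer, 0, $buffer.Length)) -gt 0) {
    # Use only the characters actually read into the buffer this pass
    $chunk = [string]::new($buffer, 0, $read)
    # ... do something with $chunk ...
}
$reader.Close()
```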

Whatever approach you choose, just remember that, by default, Get-Content isn’t just reading a stream of text all at once. You’ll be getting, and need to be prepared to deal with, a collection of objects. Those will often require that you enumerate them (line by line, in other words) using a foreach construct, and with large files the act of reading the entire file might negatively impact performance and system resources.

Knowing is half the battle!

Desired State Configuration – General Availability Changes

PowerShell DSC, along with Windows Server 2012 R2, has reached General Availability!  Yay!

However, there is (at least one so far) breaking change in Desired State Configuration (DSC).

Fortunately, the change is in an area I haven't blogged about yet: creating custom resources.  Unfortunately, it does mean I'll have to update the GitHub repository and all my internal content (should be done by early next week).

The short version is that DSC resources are now resources inside modules, rather than each resource being independent modules.  The benefit of this is that now DSC resources won’t pollute the module scope, each resource won’t need its own psd1 file (the source module will require one though), and it provides an easier way to group resources, which wasn’t really possible before.

So, with GA, resources should go under the module root in a folder DSCResources.  You can have one or more resources in one PowerShell module.  The PowerShell module version is what will be used for the resource version number, so if you have several resources, a version number bump affects all the resources in the module.
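As an illustration, a module containing two resources might be laid out like this (the module and resource names here are made up):

```
cMyModule\
    cMyModule.psd1              # module manifest; its version applies to every resource
    DSCResources\
        cFirstResource\
            cFirstResource.psm1
            cFirstResource.schema.mof
        cSecondResource\
            cSecondResource.psm1
            cSecondResource.schema.mof
```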

I’ll be picking back up with the DSC series next week with how to configure DSC clients, so stay tuned.

PowerShell Gotcha: Relative Paths and .NET Methods

When you’re calling PowerShell cmdlets, you can almost always specify a relative path beginning with “.\”. However, if you use a path beginning with “.\” when calling a .NET method directly, it probably won’t work as you intended. This is because PowerShell’s idea of your current location is different than what the operating system and the .NET Framework see as the current working directory:

PS C:\Source\temp> [System.Environment]::CurrentDirectory
C:\Users\dwyatt

As an example, take the following code that tries to make a change to an XML file:

$xmlDoc = [xml](Get-Content .\MyXmlFile.xml)

$xmlDoc.someRootElement.someChildElement.Value = "New Value"

$xmlDoc.Save(".\MyXmlFile.xml")


The first line to load the XML file would work fine; Get-Content will replace “.\” with the current PowerShell location (C:\Source\temp\ , in my example) before calling any of the underlying .NET Framework methods to read the file.

The call to $xmlDoc.Save(), however, would save a copy of the file to “C:\Users\dwyatt\MyXmlFile.xml”, because [System.Environment]::CurrentDirectory is currently set to “C:\Users\dwyatt”.

There’s an easy way to work around this problem. Instead of a period, the automatic PowerShell variable $PWD can be embedded in a string to specify a relative path when calling a .NET method:

PS C:\Source\temp> "$PWD"
C:\Source\temp
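Applied to the earlier XML example, the workaround looks like this:

```powershell
$xmlDoc = [xml](Get-Content .\MyXmlFile.xml)
$xmlDoc.someRootElement.someChildElement.Value = "New Value"

# "$PWD\MyXmlFile.xml" expands to a full path, so the .NET Save() method
# writes to PowerShell's current location, not [Environment]::CurrentDirectory
$xmlDoc.Save("$PWD\MyXmlFile.xml")
```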


PowerShell Performance: The += Operator (and When to Avoid It)

In PowerShell, there are always many different ways to accomplish a given task. Sometimes these different options offer trade-offs in performance and code clarity: faster execution at the expense of higher memory usage (or vice versa), or better performance at the expense of code that isn’t as easy to read. Depending on how much data you need to process, the differences between options may not really matter, and you can pick whatever is most aesthetically pleasing. However, if your script needs to scale well with large data sets, you’ll want to know how to make sure your script isn’t wasting a lot of CPU time or memory. This article, possibly the first in a series, touches on one such performance “gotcha”: using the += operator on strings or arrays.

Every so often, I see a blog post or script posted online containing code that looks something like this:

$outputString = ""
$array = @()

for ($i = 0; $i -lt 10; $i++)
{
    $outputString += "Line $i`r`n"
    $array += "Array Element $i"
}

They may not be both appending to a string and to an array in the same block, but this illustrates both ideas at once. When the loop only executes 10 times (or even 1000 times), the performance of this block of code isn't so bad. It runs in less than 100 milliseconds on my computer when I change the loop limit to 1,000. When I bump it up to 10,000, though, it takes over 5 seconds to run (and it just gets worse from there: 12.5 seconds for 15000 elements, 26 seconds for 20000 elements, and so on). The increase in execution time is quadratic, not linear: doubling the number of elements roughly quadruples the time. In other words, this code does not scale well at all.

The reason for this is that Arrays and Strings in the .NET Framework are fixed in size once created; they cannot be resized or appended to in place. Every one of those += operators causes .NET to create a new Array or String, copy the contents of the original over (plus its one new line or array element), and discard the original. As the size of the string or array grows, each new copy takes longer and longer to complete.

The .NET Framework offers classes to address both of these performance problems. Instead of appending to Strings directly, there is System.Text.StringBuilder. As an alternative to arrays, you can use either System.Collections.ArrayList or System.Collections.Generic.List. In a PowerShell script, the difference usually doesn’t matter; in the next example, I’ll use List. This requires me to specify the type of elements that will be contained in the list, but will perform better than ArrayList in some situations (and since this is a Performance post, I may as well use the best option.)

Here’s how you can test the performance of the original example code, and compare it to the performance of StringBuilder and List:

Write-Host "Using += operators:"

$outputString = ""
$array = @()

Measure-Command {
    for ($i = 0; $i -lt 20000; $i++)
    {
        $outputString += "Line $i`r`n"
        $array += "Array Element $i"
    }
}

Write-Host "Using StringBuilder and List:"

$stringBuilder = New-Object System.Text.StringBuilder
$list = New-Object System.Collections.Generic.List[System.String]

Measure-Command {
    for ($i = 0; $i -lt 1000000; $i++)
    {
        # Notice that I'm assigning the result of $stringBuilder.Append to $null,
        # to avoid sending any unwanted data down the pipeline.

        $null = $stringBuilder.Append("Line $i`r`n")
        $list.Add("Array Element $i")
    }

    # These lines show you how to convert your StringBuilder and List objects back to String and Array types for later use.

    $outputString = $stringBuilder.ToString()
    $array = $list.ToArray()
}

In this case, the code clarity hasn’t suffered at all, in my opinion. $list.Add and $stringBuilder.Append are both very clear in their meaning, just as easy to read as the += operator.

Notice that I snuck in a difference in scale, there. The “+=” block only had to process 20,000 elements, and the StringBuilder / List block was cranked up to a million. The results?

Using += operators:
TotalMilliseconds : 26024.1599

Using StringBuilder and List:
TotalMilliseconds : 8334.3011

Even though they had to process 50 times more data, the StringBuilder and List classes did the job in less than one third the time.
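One more option worth knowing for the array case: assign the loop itself to a variable, and PowerShell collects the loop's output into an array in a single pass, with no += copying at all:

```powershell
# Assigning the loop's output builds the array once, at the end --
# no += and no repeated copying along the way
$array = for ($i = 0; $i -lt 20000; $i++) {
    "Array Element $i"
}

$array.Count   # 20000
```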

Regular Expressions are a -replace’s best friend

Are you familiar with PowerShell’s -replace operator?

"John Jones" -replace "Jones","Smith"

Most folks are aware of it, and rely on it for straightforward string replacements like this one. But not very many people know that -replace also does some amazing stuff using regular expressions.

"10.45.13.12" -replace "\.\d{2}\.","10"

That'd change the input string to "101013.12", replacing the two digits between the periods (along with the periods themselves) with 10. The trailing 12 would be skipped because it isn't followed by a period, as specified in the pattern. Note that every occurrence of the pattern is replaced, in keeping with the usual operation of -replace.

The operator can also do capturing expressions, and this is where it gets really neat-o.

"Don Jones" -replace "([a-z]+)\s([a-z]+)",'$2, $1'

Here, I've specified two capturing expressions in parentheses, with a space character between them. PowerShell will capture the first to $1, and the second to $2. Those aren't actually variables, which is important. In my replacement string, I put $2 first, followed by a comma, a space, and $1. The resulting string will be "Jones, Don". It's important that my replacement string be in single quotes. In double quotes, the shell will try to treat $1 and $2 as variables, instead of using them as captured regex placeholders. I kinda wish they'd used something other than a $ for the captured placeholders, so that they didn't look like variables, but the syntax is in keeping with regex standards.

I think it’s cool to see all the places a regex can be put to use. The -split operator also supports regex syntax as a way of specifying the separator that will be used to break a string into components, so you’re not limited to splitting just on a single character like a comma.
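For example, a regex separator lets -split treat any run of digits as the delimiter:

```powershell
# \d+ matches one or more digits, so each run of digits acts as one separator
"one1two22three333four" -split '\d+'   # one, two, three, four
```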

Apart from the well-known -match operator and the Select-String command, where else have you used a regex in PowerShell?

Why Doesn’t My ValidateScript() Work Correctly?

I’ve received a few comments from folks after my observations on the Scripting Games Event 1. In those observations, I noted how much I loved:

[ValidateScript({Test-Path $_})][string]$path

As a way of testing to make sure your -Path parameter got a valid value, I love this. I’d never thought of it, and I plan to use it in classes. I may write a book about it someday, or maybe even an ode. Seriously good logic. But… I also bemoaned some scripts that provided an additional Test-Path, in the script’s main body of code. Why have a redundant check?

So, first, thanks for the e-mails you all sent. Second… please understand that I can’t respond to you all. I’ve got this full-time job thing, and I’ve got to do it or the grocery store will stop taking our checks. You’re welcome to drop comments here, and I really appreciate when you say stuff like, “can you explain ___ in a future post?” because it gives me ideas to write about. I just can’t get into private e-mail based education for a dozen folks. Teaching is kinda what I do for my job, so most of my time has to go to that.

But – there’s a great teaching point here. Let’s take this example:
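Here's a sketch of the kind of script in question (my reconstruction, since the original example didn't survive; the key parts are the ValidateScript() attribute paired with a default value that's actually valid on the system):

```powershell
[CmdletBinding()]
param (
    # Validation runs on any value that's passed in
    [ValidateScript({Test-Path $_})]
    [string]$path = 'C:\'
)

Get-ChildItem -Path $path
```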


This works as you would hopefully expect. When given a valid path, it’s fine. When allowed to use a valid default, it’s fine. When given an invalid path, it barfs in the ValidateScript. Now look at the next example – which more closely approximates what people have been seeing in their Scripting Games scripts:
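And a sketch of that second case (again my reconstruction; C:\NotOnYourBox stands in for whatever invalid default the Games scenario supplied):

```powershell
[CmdletBinding()]
param (
    # The validation never sees this default -- see below for why
    [ValidateScript({Test-Path $_})]
    [string]$path = 'C:\NotOnYourBox'
)

Get-ChildItem -Path $path
```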


In the Games, you were given a default path that wasn’t valid on your computer. So folks allowed their script to run with that default, and got errors, and were annoyed that ValidateScript() didn’t catch the problem.

It never will.

When you run a command, PowerShell goes through a process called parameter binding, wherein it attaches values to parameters and runs any declarative validation – like ValidateScript(). That validation will always catch invalid incoming data that’s been manually specified or sent in via the pipeline (for parameters that accept pipeline input). Because my -Path parameter wasn’t declared as mandatory, the validation routine will let me run the script and not specify -path.

Then the shell actually runs my code – and that’s when it assigns the default value to $path if one wasn’t specified on -path. Validation is over by this point, so an invalid default value will sneak by. The assumption by the shell is that you’re providing the default value, so you’re smart enough to provide a valid one. If you don’t, it’s your problem.

So do you just add a second, in-code check for the parameter? I’d still say no. I really dislike redundancy. If you know, because of your situation, that you can’t rely on ValidateScript(), then don’t use it at all – one check should suffice, and if it needs to be in-code instead of declarative, that’s fine. What’d be nice is if there was a declarative way of specifying a default, like [Default('whatever')] that ran before the validation checks, but such a thing doesn’t exist. Frankly, you could probably argue that if you can’t guarantee the validity of a default, then you shouldn’t provide one – and I’d probably buy into that argument, and subscribe to your newsletter.
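If you do go the in-code route, a single check near the top covers both explicit input and the default (C:\NotOnYourBox is again a hypothetical bad default):

```powershell
[CmdletBinding()]
param (
    [string]$path = 'C:\NotOnYourBox'
)

# One in-code check, which runs after the default has been assigned --
# so it catches a bad default as well as bad input
if (-not (Test-Path $path)) {
    throw "Path '$path' does not exist."
}

Get-ChildItem -Path $path
```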

In this case, the problem is entirely artificial. The default path value given to you in the Games scenario is valid in the context of the Games; it’s just when you test it on your system, outside that context, that a problem crops up.

Hopefully this helps explain how the ValidateXXX() attributes work, and how they interact with other features, like a default value.

Now explain why this will never assign C:\ as a default value:

[Parameter(Mandatory=$True)][string]$path = 'c:\'