Category Archives: Tips and Tricks

Proxy Functions for Cmdlets with Dynamic Parameters


I came across an interesting problem today while working on the Pester module: how do you create a proxy function for a cmdlet that has dynamic parameters? I needed a solution that would automatically reproduce the original cmdlet’s dynamic parameters inside a PowerShell function, and that would work on PowerShell 2.0 if at all possible. The full post and solution can be found on my blog at http://davewyatt.wordpress.com/2014/09/01/proxy-functions-for-cmdlets-with-dynamic-parameters/.
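As a rough sketch of the general idea (this is a simplified illustration, not the full solution from the post), a dynamicparam block can ask PowerShell for the wrapped cmdlet’s dynamic parameters and pass them through unchanged; Get-ChildItem is used here purely as an example target:

function Get-ChildItemProxy
{
    [CmdletBinding()]
    param
    (
        [Parameter(Position = 0)]
        [string] $Path = '.'
    )

    dynamicparam
    {
        # Ask PowerShell for Get-ChildItem's parameter metadata and copy any
        # parameters flagged as dynamic into a RuntimeDefinedParameterDictionary.
        $dictionary = New-Object System.Management.Automation.RuntimeDefinedParameterDictionary
        $command = Get-Command -Name Get-ChildItem -CommandType Cmdlet

        foreach ($entry in $command.Parameters.GetEnumerator())
        {
            if ($entry.Value.IsDynamic)
            {
                $parameter = New-Object System.Management.Automation.RuntimeDefinedParameter(
                    $entry.Key, $entry.Value.ParameterType, $entry.Value.Attributes)
                $dictionary.Add($entry.Key, $parameter)
            }
        }

        return $dictionary
    }

    end
    {
        # Splat whatever the caller actually bound straight through to the real cmdlet.
        Get-ChildItem @PSBoundParameters
    }
}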

DSC Pull Server on Windows Server 2008 R2


Recently on the PowerShell.org forums, a community member mentioned that they were having trouble setting up a Server 2008 R2 machine as a DSC pull server. It turns out, this is possible, but you have to install all the prerequisites yourself, since the Add-WindowsFeature DSC-Service command doesn’t do it for you on the older operating system.

Refer to this blog post for the checklist.

Tracking down commands that are polluting your pipeline


In a recent forum post, someone was having trouble with a function that was outputting more values than he expected. We’ve all been there. He was struggling to debug it, so I decided to see if I could find a way to narrow down the search in an automated fashion, rather than having to step through the code by hand.

The full article and code are up on my blog at http://davewyatt.wordpress.com/2014/06/05/tracking-down-commands-that-are-polluting-your-pipeline/
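As a quick illustration of the kind of pollution in question (this shows the classic culprit, not the diagnostic technique from the post), any method call or expression whose value isn’t captured or suppressed quietly ends up in the function’s output stream:

function Get-Stuff
{
    $list = New-Object System.Collections.ArrayList

    # ArrayList.Add() returns the index of the item it just added; without the
    # [void] cast (or piping to Out-Null), that index would be written to the
    # pipeline right alongside $list.
    [void]$list.Add('the value I actually want')

    return $list
}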

Patterns for Implementing a DSC Pull Server Environment


My Patterns for Implementing a DSC Pull Server Environment talk from the PowerShell Summit is now online.

Enjoy!

Building Scalable Configurations With DSC


My Building Scalable Configurations with DSC talk from the PowerShell Summit is now online.

Enjoy!

Installing PowerShell v5? Be a Little Careful, OK?


I’m getting a lot of questions from folks, via Twitter and other venues, regarding Windows Management Framework 5.0 – which is where PowerShell v5 comes from. It’s awesome that people are installing v5 and kicking the tires – however, please help spread the word:

  • v5 is a preview. It isn’t done, and it isn’t guaranteed bug-free. It shouldn’t be installed on production computers until it’s officially released.
  • v5 doesn’t install ‘side by side’ with v3 or v4. You can’t run it with “-version 3” to “downgrade.” Now, v5 shouldn’t break anything – something that runs in v3 or v4 should still work fine – but there are no guarantees as it’s a preview and not released code at this stage.
  • Server software (Exchange, SharePoint, etc) often has a hard dependency on a specific version of PowerShell. You need to look into that before you install v5.
  • After installing v5, you might not be able to cleanly uninstall and revert to a prior version.

Generally speaking, v5 should be installed in a test virtual machine at the very least, not on a production computer. It’s great to play with it, and you should absolutely log bugs and suggestions to http://connect.microsoft.com.

This situation will be true for any pre-release preview of PowerShell or WMF going forward. “Preview” is the new Microsoft-speak for “beta,” and you should treat it as such. Play with it, yes – that’s the whole point, and it’s how we get a stable, clean release in the end. But play with caution, and never on production computers.

Going Deeper on DSC Resources


Desired State Configuration is a very new technology, and declarative configuration management is still a young space.  We (Microsoft and the community) are still figuring out the best structure for resources, composite configurations, and other building blocks.

That said, there are certain viewpoints that I’ve come to, either from hands on experience or in watching how other communities (like the Puppet community or Chef community) handle similar problems.

How Granular Should I Get?

There is no absolute answer.

Very, Very Granular

Resources should be very granular in the abstract, but in practice, you may need to make concessions to improve the user experience.

For example, when I configure an IP address for a network interface, I can supply a default gateway. A default gateway is a route, which is separate from the interface and IP address, but in practice they tend to be configured together. In this case, it might make sense to offer a resource that can configure both the IP address and the default gateway.

I tend to think resources should be very granular. We can use composite resources to offer higher level views of the configuration. If I were implementing a resource to configure a network adapter’s IP and gateway, I would have a route resource, an IP address resource, and probably a DNS server setting resource. I would then also have a composite resource to deal with the default use case of configuring a network adapter’s IP address, gateway, and DNS servers together.

The benefit of doing it this way is that I still have very discrete, flexible primitives (the IP address resource, the route resource, and the DNS server resource). I can then leverage the route resource to create static routes, or use them directly to more discretely configure the individual elements.
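Here’s a hedged sketch of what that composite resource might look like, saved as something like cNetworkConfig.schema.psm1. The cIPAddress, cRoute, and cDnsServerAddress resource names and their properties are hypothetical stand-ins, not references to real modules:

Configuration cNetworkConfig
{
    param
    (
        [Parameter(Mandatory)]
        [string] $InterfaceAlias,

        [Parameter(Mandatory)]
        [string] $IPAddress,

        [string] $DefaultGateway,

        [string[]] $DnsServers
    )

    # Each setting is still owned by its own granular resource; the composite
    # simply bundles the common "configure a network adapter" use case.
    cIPAddress Address
    {
        InterfaceAlias = $InterfaceAlias
        IPAddress      = $IPAddress
    }

    cRoute DefaultGateway
    {
        InterfaceAlias    = $InterfaceAlias
        DestinationPrefix = '0.0.0.0/0'
        NextHop           = $DefaultGateway
    }

    cDnsServerAddress DnsServers
    {
        InterfaceAlias = $InterfaceAlias
        Address        = $DnsServers
    }
}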

Unless…

You need some flow control to happen based on the state of the client or the environment.  Since your configuration is statically generated and is declarative, there are no flow control statements in the configuration MOF document.  That means that any logic that needs to occur at application time has to be implemented inside a resource.

Unfortunately, this leads to the need to re-implement common functionality.  For example, if I have a service whose binary I need to update (not via an MSI), I basically have to re-implement parts of the File and Service resources.  This use case requires a custom resource because I need to stop the service before I can replace the binary, but I don’t want to stop the service on every consistency check if I don’t need to replace the file.

This scenario begs for a better way to leverage existing resources in a cross resource scenario (kind of like RequiredModules in module metadata), but there isn’t a clean way to do this that I’ve found (but I’m still looking!).

My Recommendation

So for most cases, I would try to use existing resources or build very granular custom resources.  If I need to offer a higher level of abstraction, I’d escalate to putting a composite resource on top of those granular resources.  Finally, if I need some flow control or logic for a multistep process, I’d implement a more comprehensive resource.

What Should I Validate?

Now that we are seeing more resources in the community repository (especially thanks to the waves of resources from the PowerShell Team!), we are seeing a variety of levels of validation being performed.

I think that the Test-TargetResource function should validate all the values and states that Set-TargetResource can set.

An example of where this isn’t happening currently is in the cNetworking resource for PSHOrg_cIPAddress.  I’m going to pick on this resource a bit, since it was the catalyst for this discussion.

The resource offers a way to set a default gateway as well as the IP address.  So what happens if, after setting the IP and default gateway, someone changes the default gateway to point to another router?

In this case, the validation is only checking that the IP address is correct.  DSC will never re-correct the gateway and our DSC configuration document (the MOF file) is no longer an accurate representation of the system state, despite the fact that the Local Configuration Manager (LCM) will report that everything matches.

This is BAD!!  If a resource offers an option to configure a setting, that setting should be validated by Test-TargetResource; otherwise, that setting should be removed from the resource.  The intent of DSC is to control configuration, including changes over time, and to return a system to the desired state.  If we ignore certain settings, we weaken our trust in the underlying infrastructure of DSC.
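To make the principle concrete, here’s a hedged sketch (not the actual PSHOrg_cIPAddress code, and assuming the Get-NetIPAddress / Get-NetRoute cmdlets that ship with Windows 8 / Server 2012 and later) of a Test-TargetResource that fails the check whenever any configurable setting has drifted, including the gateway:

function Test-TargetResource
{
    [OutputType([bool])]
    param
    (
        [Parameter(Mandatory)]
        [string] $IPAddress,

        [Parameter(Mandatory)]
        [string] $InterfaceAlias,

        [string] $DefaultGateway
    )

    # Check the IP address itself.
    $currentIPs = (Get-NetIPAddress -InterfaceAlias $InterfaceAlias -ErrorAction SilentlyContinue).IPAddress
    if ($currentIPs -notcontains $IPAddress) { return $false }

    # If the resource exposes a DefaultGateway setting, it has to be validated too;
    # otherwise the LCM will keep reporting that everything matches after drift.
    if ($DefaultGateway)
    {
        $currentGateway = (Get-NetRoute -InterfaceAlias $InterfaceAlias -DestinationPrefix '0.0.0.0/0' -ErrorAction SilentlyContinue).NextHop
        if ($currentGateway -ne $DefaultGateway) { return $false }
    }

    return $true
}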

What should I return?

The last element I’m going to tackle today is what should be returned from Get-TargetResource.  I’ve been on the fence about this one.  Like with Test-TargetResource, there are a number of implementation examples that vary in how they come up with the return values.

Currently, I don’t see a ton of use for Get-TargetResource and it doesn’t impact the Test and Set phases of the LCM, so it’s been easy to ignore.  This is bad practice (shame on me).

Here are my thoughts on Get-TargetResource.  It should return the currently configured state of the machine.  Directly returning the parameters that were passed in is misleading.

Going back to the PSHOrg_cIPAddress from the earlier example, it directly returns the default gateway from the parameter, regardless of the configured gateway.  This wouldn’t be so bad if the resource actually checked the gateway during processing and could correct it if it drifted.  But it does not check the gateway, so Get-TargetResource could be lying to you.

The most consistent result of Get-TargetResource would be retrieving the currently configured settings.
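Here’s a hedged sketch of that approach (again assuming the Get-NetIPAddress / Get-NetRoute cmdlets, and again not the actual PSHOrg_cIPAddress implementation); every value in the returned hashtable is read from the machine rather than echoed back from the parameters:

function Get-TargetResource
{
    [OutputType([System.Collections.Hashtable])]
    param
    (
        [Parameter(Mandatory)]
        [string] $IPAddress,

        [Parameter(Mandatory)]
        [string] $InterfaceAlias,

        [string] $DefaultGateway
    )

    # Report what is actually configured on the interface right now.
    $currentIP      = (Get-NetIPAddress -InterfaceAlias $InterfaceAlias -ErrorAction SilentlyContinue).IPAddress
    $currentGateway = (Get-NetRoute -InterfaceAlias $InterfaceAlias -DestinationPrefix '0.0.0.0/0' -ErrorAction SilentlyContinue).NextHop

    return @{
        IPAddress      = $currentIP
        InterfaceAlias = $InterfaceAlias
        DefaultGateway = $currentGateway
    }
}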

What’s left?

What other burning questions do you have around DSC?  Let’s keep talking them through either in the forums or in the comments here.

PowerShell and System.Nullable<T>


While helping to answer a question about Exchange cmdlets today, I came across something interesting, which doesn’t seem to be very well documented.

A little background, first: in the .NET Framework (starting in Version 2.0), there’s a Generic type called System.Nullable<T>. The purpose of this type is to allow you to assign a value of null to Value types (structs, integers, booleans, etc), which are normally not allowed to be null in a .NET application. The Nullable structure consists of two properties: HasValue (a Boolean), and Value (the underlying value type, such as an integer, struct, etc).

A C# method which accepts a Nullable type might look something like this:

int? Multiply(int? operand1, int? operand2)
{
    if (!operand1.HasValue || !operand2.HasValue) { return null; }

    return operand1.Value * operand2.Value;
}

("int?" is C# shorthand for System.Nullable<int> .)

PowerShell appears to do something helpful, though potentially unexpected, when it comes across an instance of System.Nullable: it evaluates to either $null or an object of the underlying type for you, without the need (or the ability) to ever access the HasValue or the Value properties of the Nullable structure yourself:

$variable = [Nullable[int]] 10

$variable.GetType().FullName   # System.Int32

If you assign $null to the Nullable variable instead, the $variable.GetType() line will produce a “You cannot call a method on a null-valued expression” error. You never see the actual System.Nullable structure in your PowerShell code.

What does this have to do with Exchange? Some of the Exchange cmdlets return objects that have public Nullable properties, such as MoveRequestStatistics.BytesTransferred. Going through the MSDN documentation on these classes, you might expect to have to do something like $_.BytesTransferred.Value.ToMB() to get at the ByteQuantifiedSize.ToMB() method, but that won’t work. $_.BytesTransferred will either be $null, or it will be an instance of the ByteQuantifiedSize structure; there is no “Value” property in either case. After checking for $null, you’d just do this: $_.BytesTransferred.ToMB()
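Here’s a hedged illustration of that pattern in a pipeline (it assumes the Exchange cmdlets are loaded, and 'SomeUser' is a placeholder identity with an active move request):

Get-MoveRequestStatistics -Identity 'SomeUser' |
    ForEach-Object {
        if ($null -ne $_.BytesTransferred)
        {
            # BytesTransferred behaves as a ByteQuantifiedSize here, not as a Nullable wrapper.
            '{0} MB transferred' -f $_.BytesTransferred.ToMB()
        }
    }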

PowerShell Gotcha: UNC paths and Providers


PowerShell’s behavior can be a little bit funny when you pass a UNC path to certain cmdlets. PowerShell doesn’t recognize these paths as “rooted” because they’re not on a PSDrive; as such, whatever provider is associated with PowerShell’s current location will attempt to handle them. For example:

Set-Location C:
Get-ChildItem -Path \\$env:COMPUTERNAME\c$

Set-Location HKLM:
Get-ChildItem -Path \\$env:COMPUTERNAME\c$

The first command works fine (assuming you have a c$ share enabled and are able to access it), and the second command gives a “Cannot find path” error, because the Registry provider tried to work with the UNC path instead of the FileSystem provider. You can get around this problem by prefixing the UNC path with “FileSystem::”, which will make PowerShell use that provider regardless of your current location.
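For example, prefixing the same path makes the second case succeed even though the current location is still on the Registry provider:

Set-Location HKLM:
Get-ChildItem -Path FileSystem::\\$env:COMPUTERNAME\c$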

On top of that, commands like Resolve-Path and $PSCmdlet.GetUnresolvedProviderPathFromPSPath() don’t normalize UNC paths properly, even when the FileSystem provider handles them. This annoyed me, so I spent some time investigating different options to get around the quirky behavior. The result is the Get-NormalizedFileSystemPath function, which can be downloaded from the TechNet Gallery. In addition to making UNC paths behave, this had the side effect of also resolving 8.3 short file names to long paths (something else that Resolve-Path doesn’t do.)

The function has an “-IncludeProviderPrefix” switch which tells it to include the “FileSystem::” prefix, if desired (so you can reliably use cmdlets like Get-Item, Get-Content, Test-Path, etc., regardless of your current location or whether the path is UNC.) For example:

$path = "\\$env:COMPUTERNAME\c$\SomeFolder\..\.\Whatever\..\PROGRA~1" 
 
$path = Get-NormalizedFileSystemPath -Path $path -IncludeProviderPrefix 
 
$path 
 
Set-Location HKLM: 
Get-ChildItem -Path $path | Select-Object -First 1 

<# 
Output: 
 
FileSystem::\\MYCOMPUTERNAME\c$\Program Files 
 
    Directory: \\MYCOMPUTERNAME\c$\Program Files 
 
 
Mode                LastWriteTime     Length Name 
----                -------------     ------ ---- 
d----         7/30/2013  10:54 AM            7-Zip 
 
#>

Revisited: PowerShell and Encryption


Back in November, I made a post about saving passwords for your PowerShell scripts. As I mentioned in that article, the ConvertFrom-SecureString cmdlet uses the Data Protection API to create an encrypted copy of the SecureString’s contents. DPAPI uses master encryption keys that are saved in the user’s profile; unless you enable either Roaming Profiles or Credential Roaming, you’ll only be able to decrypt that value on the same computer where the encryption took place. Even if you do enable Credential Roaming, only the same user account who originally encrypted the data will be able to read it.

So, what do you do if you want to encrypt some data that can be decrypted by other user accounts?

The ConvertFrom-SecureString and ConvertTo-SecureString cmdlets have a pair of parameters (-Key and -SecureKey) that allow you to specify your own encryption key instead of using DPAPI. When you do this, the SecureString’s contents are encrypted using AES. Anyone who knows the AES encryption key will be able to read the data.

That’s the easy part. Encrypting data is simple; making sure your encryption keys don’t get exposed is the trick. If you’ve hard-coded the keys in your script, you may as well have just stored the password in plain text, for all the good the encryption will do. There are several ways you can try to save and protect your AES key; you could place it in a file with strong NTFS permissions, or in a database with strict access control, for example. In this post, however, I’m going to focus on another technique: encrypting your AES key with RSA certificates.

If you have an RSA certificate (even a self-signed one), you can encrypt your AES key using the RSA public key. At that point, only someone who has the certificate’s private key will be able to retrieve the AES key and read your data. Instead of trying to protect encryption keys yourself, we’re back to letting the OS handle the heavy lifting; if it protects your RSA private keys well, then your AES key is also safe. Here’s a brief example of creating a SecureString, saving it with a new random 32-byte AES key, and then using an RSA certificate to encrypt the key itself:

try
{
    $secureString = 'This is my password.  There are many like it, but this one is mine.' | 
                    ConvertTo-SecureString -AsPlainText -Force

    # Generate our new 32-byte AES key.  I don't recommend using Get-Random for this; the System.Security.Cryptography namespace
    # offers a much more secure random number generator.

    $key = New-Object byte[](32)
    $rng = [System.Security.Cryptography.RNGCryptoServiceProvider]::Create()

    $rng.GetBytes($key)

    $encryptedString = ConvertFrom-SecureString -SecureString $secureString -Key $key

    # This is the thumbprint of a certificate on my test system where I have the private key installed.

    $thumbprint = 'B210C54BF75E201BA77A55A0A023B3AE12CD26FA'
    $cert = Get-Item -Path Cert:\CurrentUser\My\$thumbprint -ErrorAction Stop

    $encryptedKey = $cert.PublicKey.Key.Encrypt($key, $true)

    $object = New-Object psobject -Property @{
        Key = $encryptedKey
        Payload = $encryptedString
    }

    $object | Export-Clixml .\encryptionTest.xml

}
finally
{
    if ($null -ne $key) { [array]::Clear($key, 0, $key.Length) }
}

Notice the use of try/finally and [array]::Clear() on the AES key’s byte array. It’s a good habit to make sure you’re not leaving the sensitive data lying around in memory longer than absolutely necessary. (This is the same reason you get a warning if you use ConvertTo-SecureString -AsPlainText without the -Force switch; .NET doesn’t allow you to zero out the memory occupied by a String.)

Any user who has the certificate installed, including its private key, will be able to load up the XML file and obtain the original SecureString as follows:

try
{
    $object = Import-Clixml -Path .\encryptionTest.xml

    $thumbprint = 'B210C54BF75E201BA77A55A0A023B3AE12CD26FA'
    $cert = Get-Item -Path Cert:\CurrentUser\My\$thumbprint -ErrorAction Stop

    $key = $cert.PrivateKey.Decrypt($object.Key, $true)

    $secureString = $object.Payload | ConvertTo-SecureString -Key $key
}
finally
{
    if ($null -ne $key) { [array]::Clear($key, 0, $key.Length) }
}

Using RSA certificates to protect your AES encryption keys is as simple as that: Get-Item, $cert.PublicKey.Key.Encrypt(), and $cert.PrivateKey.Decrypt(). You can even make multiple copies of the AES key with different RSA certificates, so that more than one person/certificate can decrypt the data.
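Here’s a hedged sketch of that multi-recipient idea, building on the example above (the second thumbprint is just a placeholder): encrypt the same AES key once per certificate and store every encrypted copy alongside the payload. Each reader then decrypts whichever key entry corresponds to a certificate they hold.

# Substitute certificates whose private keys belong to the people (or service
# accounts) who should be able to decrypt the data; these thumbprints are placeholders.
$thumbprints = 'B210C54BF75E201BA77A55A0A023B3AE12CD26FA',
               '1111111111111111111111111111111111111111'

$encryptedKeys = foreach ($thumbprint in $thumbprints)
{
    $cert = Get-Item -Path Cert:\CurrentUser\My\$thumbprint -ErrorAction Stop
    $cert.PublicKey.Key.Encrypt($key, $true)
}

$object = New-Object psobject -Property @{
    Keys    = $encryptedKeys
    Payload = $encryptedString
}

$object | Export-Clixml .\encryptionTest.xml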

I’ve posted several examples of data encryption techniques in PowerShell on the TechNet Gallery. Some are based on SecureStrings, like the code above, and others use .NET’s CryptoStream class to encrypt basically anything (in this case, an entire file on disk.)

Revisited: Script Modules and Variable Scopes


Last week, I demonstrated that functions exported from Script Modules do not inherit their caller’s variable scopes, and how you could get around this by using the method $PSCmdlet.GetVariableValue().

It didn’t take me long to decide it was very tedious to include this type of code in every function, particularly when considering the number of preference variables that PowerShell has. (Check out about_Preference_Variables some time; there are quite a few.) I’ve just converted this approach into a function that can be called with a single line, and supports all PowerShell preference variables. For example, the Test-ScriptModuleFunction from the original post can be written as:

function Test-ScriptModuleFunction
{
    [CmdletBinding()]
    param ( )

    Get-CallerPreference -Cmdlet $PSCmdlet -SessionState $ExecutionContext.SessionState

    Write-Host "Module Function Effective VerbosePreference: $VerbosePreference"
    Write-Verbose "Something verbose."
}

You can download the Get-CallerPreference function from the TechNet Gallery. It has been tested on PowerShell 2.0 and 4.0.

Getting your Script Module functions to inherit “Preference” variables from the caller


Edit: While this first blog post demonstrates a viable workaround, it requires a fair bit of code in your function if you want to inherit multiple preference variables. After this post was made, I discovered a way to get this working in a function, so your code only requires a single line to call it. Check out Revisited: Script Modules and Variable Scopes for more information and a download link.

One thing I’ve noticed about PowerShell is that, for some reason, functions defined in script modules do not inherit the variable scope of the caller. From the function’s perspective, the inherited scopes look like this: Local (Function), Script (Module), Global. If the caller sets, for example, $VerbosePreference in any scope other than Global, the script module’s functions won’t behave according to that change. The following test code demonstrates this:

# File TestModule.psm1:
function Test-ScriptModuleFunction
{
    [CmdletBinding()]
    param ( )

    Write-Host "Module Function Effective VerbosePreference: $VerbosePreference"
    Write-Verbose "Something verbose."
}

# File Test.ps1:
Import-Module -Name .\TestModule.psm1 -Force

$VerbosePreference = 'Continue'

Write-Host "Global VerbosePreference: $global:VerbosePreference"
Write-Host "Test.ps1 Script VerbosePreference: $script:VerbosePreference"

Test-ScriptModuleFunction

Executing test.ps1 produces the following output, assuming that you’ve left the global VerbosePreference value at its default of “SilentlyContinue”:

Global VerbosePreference: SilentlyContinue
Test.ps1 Script VerbosePreference: Continue
Module Function Effective VerbosePreference: SilentlyContinue

There is a way for the script module’s Advanced Function to access variables in the caller’s scope; it just doesn’t happen automatically. You can use one of the following two methods in your function’s begin block:

$PSCmdlet.SessionState.PSVariable.GetValue('VerbosePreference')
$PSCmdlet.GetVariableValue('VerbosePreference')

$PSCmdlet.GetVariableValue() is just a shortcut for accessing the methods in the SessionState object. Both methods will return the same value that the caller would get if they just typed $VerbosePreference (using the local scope if the variable exists here, then checking the caller’s parent scope, and so on until it reaches Global.)

Let’s give it a try, modifying our TestModule.psm1 file as follows:

function Test-ScriptModuleFunction
{
    [CmdletBinding()]
    param ( )

    if (-not $PSBoundParameters.ContainsKey('Verbose'))
    {
        $VerbosePreference = $PSCmdlet.GetVariableValue('VerbosePreference')
    }

    Write-Host "Module Function Effective VerbosePreference: $VerbosePreference"
    Write-Verbose "Something verbose."
}

Now, when executing test.ps1, we get the result we were after:

Global VerbosePreference: SilentlyContinue
Test.ps1 Script VerbosePreference: Continue
Module Function Effective VerbosePreference: Continue
VERBOSE: Something verbose.

Keep in mind that the $PSCmdlet variable is only available to Advanced Functions (see about_Functions_Advanced). Make sure you’ve got a param block with the [CmdletBinding()] and/or [Parameter()] attributes in the function, or you’ll get an error, because $PSCmdlet will be null.

Saving Passwords (and preventing other processes from decrypting them)


This question is nothing new: “How do I save credentials in PowerShell so I don’t have to enter a password every time the script runs?” An answer to that question has been in PowerShell for a very long time: you use the ConvertFrom-SecureString cmdlet to encrypt your password, save the resulting encrypted string to disk, and then later reverse the process with ConvertTo-SecureString. (Alternatively, you can use Export-CliXml, which encrypts the SecureString the same way.) For example:

# Prompt the user to enter a password
$secureString = Read-Host -AsSecureString "Enter a secret password"

$secureString | ConvertFrom-SecureString | Out-File -FilePath .\storedPassword.txt

# Later, read the password back in.
$secureString = Get-Content -Path .\storedPassword.txt | ConvertTo-SecureString

The ConvertFrom-SecureString and ConvertTo-SecureString cmdlets, when you don’t use their -Key, -SecureKey, or -AsPlainText switches, use DPAPI to encrypt / decrypt your secret data. When it comes to storing secrets with software alone (and without requiring a user to enter a password), DPAPI’s security is about as good as it gets. It’s not unbreakable – all of the encryption keys are there to be compromised by someone with Administrator access to the computer – but it’s pretty good.

You can see the details on how DPAPI works in the linked article, but here’s the long and short of it: By default, only the same user account (and on the same computer) is able to decrypt the protected data. However, there’s a catch: any process running under that user account can freely decrypt the data. DPAPI addresses this by allowing you to send it some optional, secondary entropy information to be used in the encryption and decryption process. This is like a second key that is specific to your program or script; so long as other processes don’t know what that entropy value is, they can’t read your data. (In theory, now you have a problem with protecting your entropy value, but this at least adds an extra layer that a malicious program needs to get around.) Here’s an excerpt from the article:

A small drawback to using the logon password is that all applications running under the same user can access any protected data that they know about. Of course, because applications must store their own protected data, gaining access to the data could be somewhat difficult for other applications, but certainly not impossible. To counteract this, DPAPI allows an application to use an additional secret when protecting data. This additional secret is then required to unprotect the data.

Technically, this “secret” should be called secondary entropy. It is secondary because, while it doesn’t strengthen the key used to encrypt the data, it does increase the difficulty of one application, running under the same user, to compromise another application’s encryption key. Applications should be careful about how they use and store this entropy. If it is simply saved to a file unprotected, then adversaries could access the entropy and use it to unprotect an application’s data.

Unfortunately, the ConvertFrom-SecureString and ConvertTo-SecureString cmdlets don’t allow you to specify this secondary entropy; they always pass in a Null value. For that reason, I’m putting together a small PowerShell script module to add this functionality. It can be downloaded from the TechNet repository. It adds an optional -Entropy parameter to both ConvertFrom-SecureString and ConvertTo-SecureString.
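The same secondary-entropy concept is exposed in .NET through the ProtectedData class. Here’s a minimal sketch of that raw DPAPI call, independent of the module (the secret and entropy strings are just example values):

Add-Type -AssemblyName System.Security

$data    = [System.Text.Encoding]::UTF8.GetBytes('my secret')
$entropy = [System.Text.Encoding]::UTF8.GetBytes('SomeScriptSpecificEntropy')

# Encrypt with DPAPI, mixing in the secondary entropy.
$protected = [System.Security.Cryptography.ProtectedData]::Protect(
    $data, $entropy, [System.Security.Cryptography.DataProtectionScope]::CurrentUser)

# Decryption only succeeds when the same entropy value is supplied.
$unprotected = [System.Security.Cryptography.ProtectedData]::Unprotect(
    $protected, $entropy, [System.Security.Cryptography.DataProtectionScope]::CurrentUser)

[System.Text.Encoding]::UTF8.GetString($unprotected)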

Since I was tinkering with those commands anyway, I also added an -AsPlainText switch to the ConvertFrom-SecureString command, in case you want to get the plain text back. This saves you the couple of extra commands to set up a PSCredential object and call the GetNetworkCredential() method, or make the necessary calls to the Marshal class yourself.

Import-Module .\SecureStringFunctions.psm1
$secureString = Read-Host -AsSecureString "Enter a secret password." 

# You can pass basically anything as the Entropy value, but I'd recommend sticking to simple value types (including strings),
# or arrays of those types, to make sure that the binary serialization of your entropy object doesn't change between
# script executions.  Here, we'll use Pi.  Edit:  The latest version of the code enforces the use of the recommended simple
# types, unless you also use the -Force switch.

$secureString | ConvertFrom-SecureString -Entropy ([Math]::PI) | Out-File .\storedPassword.txt

# Simulating another program trying to read your value using the normal ConvertTo-SecureString cmdlet (with null entropy).  This will produce an error. 
$newSecureString = Get-Content -Path .\storedPassword.txt | ConvertTo-SecureString 
 
# When your program wants to read the value, it can do so by passing the same Entropy value. 
$newSecureString = Get-Content -Path .\storedPassword.txt | ConvertTo-SecureString -Entropy ([Math]::PI)

#Display the plain text of the new SecureString object, verifying that it was decrypted correctly.
$newSecureString | ConvertFrom-SecureString -AsPlainText -Force

PowerShell Performance: Filtering Collections


Depending on what version of Windows PowerShell you are running, there are several different methods and syntax available for filtering collections. The differences are not just aesthetic; each one has different capabilities and performance characteristics. If you find yourself needing to optimize the performance of a script that processes a large amount of data, it helps to know what your options are. I ran some tests on the performance of various filtering techniques; here is a summary of each option, followed by the test code and timing results:

Where-Object (-FilterScript)

Get-Process | Where-Object { $_.Name -eq 'powershell_ise' }

Pros:
  • Runs on any version of Windows PowerShell.
  • Streams objects via the pipeline, keeping memory utilization to a minimum.
  • Any complex logic can be implemented inside the script block.

Cons:
  • Slowest execution time of all the available options.

PowerShell 3.0 Simplified Where-Object syntax

Get-Process | Where Name -eq 'powershell_ise'

Pros:
  • Executes slightly faster than the -FilterScript option.
  • Streams objects via the pipeline, keeping memory utilization to a minimum.

Cons:
  • Limited to very simple comparisons of a single property on the piped objects. No compound expressions or data transformations are allowed.
  • Only works with PowerShell 3.0 or later.

PowerShell 4.0 .Where() method syntax

(Get-Process).Where({ $_.Name -eq 'powershell_ise' })

Pros:
  • Much faster than both of the Where-Object cmdlet versions (about 2x the speed).
  • Any complex logic can be implemented inside the script block.

Cons:
  • Only usable on collections that are completely stored in memory; cannot stream objects via the pipeline.
  • Only works with PowerShell 4.0 or later.

PowerShell filter

filter isISE { if ($_.Name -eq 'powershell_ise') { $_ } }

Get-Process | isISE

Pros:
  • Runs on any version of Windows PowerShell.
  • Streams objects via the pipeline, keeping memory utilization to a minimum.
  • Any complex logic can be implemented inside the script block.
  • Faster than all of the Where-Object and .Where() options.

Cons:
  • Not as easy to read and debug; the user must scroll to wherever the filter is defined to see the actual logic.

foreach loop with embedded conditionals

foreach ($process in (Get-Process)) {
    if ($process.Name -eq 'powershell_ise') {
        $process
    }
}

Pros:
  • Faster than any of the previously mentioned options.
  • Any complex logic can be implemented inside the loop / conditional.

Cons:
  • Only usable on collections that are completely stored in memory; cannot stream objects via the pipeline.

To test the execution speed of each option, I ran this code:

$loop = 1000

$v2 = (Measure-Command {
    for ($i = 0; $i -lt $loop; $i++)
    {
        Get-Process | Where-Object { $_.Name -eq 'powershell_ise' }
    }
}).TotalMilliseconds

$v3 = (Measure-Command {
    for ($i = 0; $i -lt $loop; $i++)
    {
        Get-Process | Where Name -eq 'powershell_ise'
    }
}).TotalMilliseconds

$v4 = (Measure-Command {
    for ($i = 0; $i -lt $loop; $i++)
    {
        (Get-Process).Where({ $_.Name -eq 'powershell_ise' })
    }
}).TotalMilliseconds

$filter = (Measure-Command {
    filter isISE { if ($_.Name -eq 'powershell_ise') { $_ } }
    
    for ($i = 0; $i -lt $loop; $i++)
    {
        Get-Process | isISE
    }
}).TotalMilliseconds

$foreachLoop = (Measure-Command {
    for ($i = 0; $i -lt $loop; $i++)
    {
        foreach ($process in (Get-Process))
        {
            if ($process.Name -eq 'powershell_ise')
            {
                # Do something with $process
                $process
            }
        }
    }
}).TotalMilliseconds

Write-Host ('Where-Object -FilterScript:  {0:f2} ms' -f $v2)
Write-Host ('Simplified Where syntax:     {0:f2} ms' -f $v3)
Write-Host ('.Where() method:             {0:f2} ms' -f $v4)
Write-Host ('Using a filter:              {0:f2} ms' -f $filter)
Write-Host ('Conditional in foreach loop: {0:f2} ms' -f $foreachLoop)

<#
Results:

Where-Object -FilterScript:  3035.69 ms
Simplified Where syntax:     2855.33 ms
.Where() method:             1445.21 ms
Using a filter:              1281.13 ms
Conditional in foreach loop: 1073.14 ms
#>