Author Archives: Dave Wyatt

About Dave Wyatt

Dave Wyatt is an IT professional with 15 years of experience and a member of PowerShell.org's Board of Directors (as of June 2014). In January and April 2014, he received PowerShell.org's "PowerShell Hero" award and Microsoft's MVP (PowerShell) award, respectively.

DSC Pull Server on Windows Server 2008 R2


Recently on the PowerShell.org forums, a community member mentioned that they were having trouble setting up a Server 2008 R2 machine as a DSC pull server. It turns out this is possible, but you have to install all the prerequisites yourself, since the Add-WindowsFeature DSC-Service command doesn’t do it for you on the older operating system.

Refer to this blog post for the checklist.
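For reference, here’s a rough sketch of what the manual setup involves. The linked checklist is the authoritative list; the feature names and KB number below are my assumptions, so verify them against it:

# On 2008 R2, WMF 4.0 (KB2819745) must be installed by hand, and
# Add-WindowsFeature DSC-Service won't pull in the IIS bits for you:
Import-Module ServerManager
Add-WindowsFeature Web-Server, Web-Mgmt-Console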

Tracking down commands that are polluting your pipeline


In a recent forum post, someone was having trouble with a function that was outputting more values than he expected. We’ve all been there. Stepping through code by hand to find the culprit is tedious, so I decided to see whether I could narrow down the search in an automated fashion instead.
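
His situation was specific to his own code, of course, but here’s a minimal illustration (my example, not his) of the most common culprit: uncaptured return values from .NET method calls become part of a function’s output.

function Get-Numbers
{
    $list = New-Object System.Collections.ArrayList

    $null = $list.Add(1)    # Fine: the returned index is suppressed.
    $list.Add(2)            # Bug: Add() returns the new index, which leaks into the pipeline.

    return $list
}

Get-Numbers    # Outputs 1, 1, 2 -- the leaked index followed by the list's contents.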

The full article and code are up on my blog at http://davewyatt.wordpress.com/2014/06/05/tracking-down-commands-that-are-polluting-your-pipeline/

Working on a new PowerShell module: ProtectedData


I’m working on a new module intended to make it easier to encrypt secret data, and share it among multiple users and computers in a secure fashion. It’s not quite ready for “release” yet, but I’ve made it public on GitHub anyway, so I can start to get feedback early.

Check out my original blog post (link) for details. The GitHub repository is here.

Community Brainstorming: PowerShell Security versus malicious code


A couple weeks ago, some malicious PowerShell code was discovered in the wild, dubbed the “Power Worm” in the Trend Micro article that originally publicised the malware. Matt Graeber has done a great analysis of the code on his blog. In the comments for that blog post, we started discussing some options for locking down PowerShell, preventing it from being used in this type of attack. (Unfortunately, Execution Policy is currently no barrier at all; PowerShell.exe can simply be launched with the -Command or -EncodedCommand parameters, bypassing ExecutionPolicy entirely.) Matt’s idea is to have PowerShell run in Constrained Language mode by default, similar to how it works on the Windows RT platform.
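
To illustrate just how little of a barrier Execution Policy is: both of the following run fine even when Get-ExecutionPolicy reports Restricted, because the policy only governs script files, not commands passed on the command line.

# -Command executes regardless of the execution policy setting:
powershell.exe -Command "Write-Host 'This ran anyway.'"

# -EncodedCommand takes the same thing as a Base64-encoded Unicode string:
$bytes = [System.Text.Encoding]::Unicode.GetBytes("Write-Host 'This ran anyway.'")
powershell.exe -EncodedCommand ([Convert]::ToBase64String($bytes))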

This post is to engage some community discussion on the topic; please do so in this thread: http://powershell.org/wp/forums/topic/discussion-community-brainstorming-powershell-security-versus-malicious-code/

What are your thoughts on this topic? Do you feel that PowerShell.exe needs some additional security features to try to prevent it from being used in this sort of malware? What impact would these ideas have on your normal, legitimate uses of PowerShell (if any)? How would you suggest minimizing that impact while still making it more difficult for PowerShell malware to execute?

Cmdlets or Advanced Functions?


I’ve posted a poll to get a feel for the PowerShell community’s preference on this. Compiled Cmdlets offer much better performance than the equivalent PowerShell advanced function, which can be very valuable if you need to process large sets of data in your scripts. The other side of that coin is that in order to make changes to or review the code of a Cmdlet, you need to start working with C# (and likely Visual Studio as well). Which do you feel is more important when downloading a module: performance, or sticking with familiar PowerShell code?

The poll is over on my blog.

Thread Synchronization (lock statement) in PowerShell


While reading today’s “Hey, Scripting Guy!” blog post by Boe Prox, I learned that it is possible to have multiple threads / runspaces accessing the same live objects in a PowerShell session, something I had previously thought was not possible. Seeing that, I decided to write a “lock” statement for PowerShell, which can be necessary in some circumstances. See the original blog post for the function and more information.
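
As a taste of the idea, here’s a minimal sketch (the Lock-Object name and the details here are from my head, not the full version in the blog post). It wraps System.Threading.Monitor the same way C#’s lock statement does:

function Lock-Object
{
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true)]
        [object] $InputObject,    # Must be a reference type; don't lock strings or value types.

        [Parameter(Mandatory = $true)]
        [scriptblock] $ScriptBlock
    )

    $lockTaken = $false
    try
    {
        [System.Threading.Monitor]::Enter($InputObject)
        $lockTaken = $true

        & $ScriptBlock
    }
    finally
    {
        # Only release the lock if we actually acquired it.
        if ($lockTaken) { [System.Threading.Monitor]::Exit($InputObject) }
    }
}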

PowerShell and System.Nullable<T>


While helping to answer a question about Exchange cmdlets today, I came across something interesting, which doesn’t seem to be very well documented.

A little background, first: in the .NET Framework (starting in Version 2.0), there’s a Generic type called System.Nullable<T>. The purpose of this type is to allow you to assign a value of null to Value types (structs, integers, booleans, etc), which are normally not allowed to be null in a .NET application. The Nullable structure consists of two properties: HasValue (a Boolean), and Value (the underlying value type, such as an integer, struct, etc).

A C# method which accepts a Nullable type might look something like this:

int? Multiply(int? operand1, int? operand2)
{
    if (!operand1.HasValue || !operand2.HasValue) { return null; }

    return operand1.Value * operand2.Value;
}

("int?" is C# shorthand for System.Nullable<int> .)

PowerShell appears to do something helpful, though potentially unexpected, when it comes across an instance of System.Nullable: it evaluates to either $null or an object of the underlying type for you, without the need (or the ability) to ever access the HasValue or the Value properties of the Nullable structure yourself:

$variable = [Nullable[int]] 10

$variable.GetType().FullName   # System.Int32

If you assign $null to the Nullable variable instead, the $variable.GetType() line will produce a “You cannot call a method on a null-valued expression” error. You never see the actual System.Nullable structure in your PowerShell code.
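
To see that other half of the behavior:

$variable = [Nullable[int]] $null

$null -eq $variable      # True; the cast simply produced $null
$variable.GetType()      # Error: You cannot call a method on a null-valued expression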

What does this have to do with Exchange? Some of the Exchange cmdlets return objects that have public Nullable properties, such as MoveRequestStatistics.BytesTransferred. Going through the MSDN documentation on these classes, you might expect to have to do something like $_.BytesTransferred.Value.ToMB() to get at the ByteQuantifiedSize.ToMB() method, but that won’t work. $_.BytesTransferred will either be $null, or it will be an instance of the ByteQuantifiedSize structure; there is no “Value” property in either case. After checking for $null, you’d just do this: $_.BytesTransferred.ToMB()
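
Something along these lines should work (a sketch from memory rather than code tested against a live Exchange server):

# BytesTransferred unwraps to either $null or a ByteQuantifiedSize instance,
# so test for $null first, then call the struct's methods directly.
Get-MoveRequest | Get-MoveRequestStatistics | ForEach-Object {
    if ($null -ne $_.BytesTransferred)
    {
        '{0}: {1} MB' -f $_.DisplayName, $_.BytesTransferred.ToMB()
    }
}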

PowerShell Gotcha: UNC paths and Providers


PowerShell’s behavior can be a little bit funny when you pass a UNC path to certain cmdlets. PowerShell doesn’t recognize these paths as “rooted” because they’re not on a PSDrive; as such, whatever provider is associated with PowerShell’s current location will attempt to handle them. For example:

Set-Location C:
Get-ChildItem -Path \\$env:COMPUTERNAME\c$

Set-Location HKLM:
Get-ChildItem -Path \\$env:COMPUTERNAME\c$

The first command works fine (assuming you have a c$ share enabled and are able to access it), and the second command gives a “Cannot find path” error, because the Registry provider tried to work with the UNC path instead of the FileSystem provider. You can get around this problem by prefixing the UNC path with “FileSystem::”, which will make PowerShell use that provider regardless of your current location.
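
For example, this variation on the failing command above works no matter what your current location is:

Set-Location HKLM:

# The FileSystem:: prefix forces the FileSystem provider, so the Registry
# provider never gets a chance to mishandle the UNC path.
Get-ChildItem -Path FileSystem::\\$env:COMPUTERNAME\c$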

On top of that, commands like Resolve-Path and $PSCmdlet.GetUnresolvedProviderPathFromPSPath() don’t normalize UNC paths properly, even when the FileSystem provider handles them. This annoyed me, so I spent some time investigating different options to get around the quirky behavior. The result is the Get-NormalizedFileSystemPath function, which can be downloaded from the TechNet Gallery. In addition to making UNC paths behave, this had the side effect of also resolving 8.3 short file names to long paths (something else that Resolve-Path doesn’t do.)

The function has an “-IncludeProviderPrefix” switch which tells it to include the “FileSystem::” prefix, if desired (so you can reliably use cmdlets like Get-Item, Get-Content, Test-Path, etc., regardless of your current location or whether the path is UNC.) For example:

$path = "\\$env:COMPUTERNAME\c$\SomeFolder\..\.\Whatever\..\PROGRA~1" 
 
$path = Get-NormalizedFileSystemPath -Path $path -IncludeProviderPrefix 
 
$path 
 
Set-Location HKLM: 
Get-ChildItem -Path $path | Select-Object -First 1 

<# 
Output: 
 
FileSystem::\\MYCOMPUTERNAME\c$\Program Files 
 
    Directory: \\MYCOMPUTERNAME\c$\Program Files 
 
 
Mode                LastWriteTime     Length Name 
----                -------------     ------ ---- 
d----         7/30/2013  10:54 AM            7-Zip 
 
#>

Revisited: PowerShell and Encryption


Back in November, I made a post about saving passwords for your PowerShell scripts. As I mentioned in that article, the ConvertFrom-SecureString cmdlet uses the Data Protection API to create an encrypted copy of the SecureString’s contents. DPAPI uses master encryption keys that are saved in the user’s profile; unless you enable either Roaming Profiles or Credential Roaming, you’ll only be able to decrypt that value on the same computer where the encryption took place. Even if you do enable Credential Roaming, only the same user account who originally encrypted the data will be able to read it.

So, what do you do if you want to encrypt some data that can be decrypted by other user accounts?

The ConvertFrom-SecureString and ConvertTo-SecureString cmdlets have a pair of parameters (-Key and -SecureKey) that allow you to specify your own encryption key instead of using DPAPI. When you do this, the SecureString’s contents are encrypted using AES. Anyone who knows the AES encryption key will be able to read the data.

That’s the easy part. Encrypting data is simple; making sure your encryption keys don’t get exposed is the trick. If you’ve hard-coded the keys in your script, you may as well have just stored the password in plain text, for all the good the encryption will do. There are several ways you can try to save and protect your AES key; you could place it in a file with strong NTFS permissions, or in a database with strict access control, for example. In this post, however, I’m going to focus on another technique: encrypting your AES key with RSA certificates.

If you have an RSA certificate (even a self-signed one), you can encrypt your AES key using the RSA public key. At that point, only someone who has the certificate’s private key will be able to retrieve the AES key and read your data. Instead of trying to protect encryption keys yourself, we’re back to letting the OS handle the heavy lifting; if it protects your RSA private keys well, then your AES key is also safe. Here’s a brief example of creating a SecureString, saving it with a new random 32-byte AES key, and then using an RSA certificate to encrypt the key itself:

try
{
    $secureString = 'This is my password.  There are many like it, but this one is mine.' | 
                    ConvertTo-SecureString -AsPlainText -Force

    # Generate our new 32-byte AES key.  I don't recommend using Get-Random for this; the System.Security.Cryptography namespace
    # offers a much more secure random number generator.

    $key = New-Object byte[](32)
    $rng = [System.Security.Cryptography.RNGCryptoServiceProvider]::Create()

    $rng.GetBytes($key)

    $encryptedString = ConvertFrom-SecureString -SecureString $secureString -Key $key

    # This is the thumbprint of a certificate on my test system where I have the private key installed.

    $thumbprint = 'B210C54BF75E201BA77A55A0A023B3AE12CD26FA'
    $cert = Get-Item -Path Cert:\CurrentUser\My\$thumbprint -ErrorAction Stop

    $encryptedKey = $cert.PublicKey.Key.Encrypt($key, $true)

    $object = New-Object psobject -Property @{
        Key = $encryptedKey
        Payload = $encryptedString
    }

    $object | Export-Clixml .\encryptionTest.xml

}
finally
{
    if ($null -ne $key) { [array]::Clear($key, 0, $key.Length) }
}

Notice the use of try/finally and [array]::Clear() on the AES key’s byte array. It’s a good habit to make sure you’re not leaving the sensitive data lying around in memory longer than absolutely necessary. (This is the same reason you get a warning if you use ConvertTo-SecureString -AsPlainText without the -Force switch; .NET doesn’t allow you to zero out the memory occupied by a String.)

Any user who has the certificate installed, including its private key, will be able to load up the XML file and obtain the original SecureString as follows:

try
{
    $object = Import-Clixml -Path .\encryptionTest.xml

    $thumbprint = 'B210C54BF75E201BA77A55A0A023B3AE12CD26FA'
    $cert = Get-Item -Path Cert:\CurrentUser\My\$thumbprint -ErrorAction Stop

    $key = $cert.PrivateKey.Decrypt($object.Key, $true)

    $secureString = $object.Payload | ConvertTo-SecureString -Key $key
}
finally
{
    if ($null -ne $key) { [array]::Clear($key, 0, $key.Length) }
}

Using RSA certificates to protect your AES encryption keys is as simple as that: Get-Item, $cert.PublicKey.Key.Encrypt(), and $cert.PrivateKey.Decrypt(). You can even make multiple copies of the AES key with different RSA certificates, so that more than one person/certificate can decrypt the data.
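
Here’s a sketch of that multi-recipient idea, reusing $key and $encryptedString from the example above (the certificate filter and property names are my own choices, not a fixed format):

# Encrypt the same AES key once per RSA certificate; anyone holding one of the
# matching private keys can recover the key and decrypt the payload.
$recipientCerts = Get-ChildItem -Path Cert:\CurrentUser\My |
                  Where-Object { $_.PublicKey.Key -is [System.Security.Cryptography.RSACryptoServiceProvider] }

$object = New-Object psobject -Property @{
    Keys = $recipientCerts | ForEach-Object {
        New-Object psobject -Property @{
            Thumbprint = $_.Thumbprint
            Key        = $_.PublicKey.Key.Encrypt($key, $true)
        }
    }
    Payload = $encryptedString
}

$object | Export-Clixml .\encryptionTest.xml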

I’ve posted several examples of data encryption techniques in PowerShell on the TechNet Gallery. Some are based on SecureStrings, like the code above, and others use .NET’s CryptoStream class to encrypt just about anything (in this case, an entire file on disk.)

Error Handling draft available


There was quite a bit of interest in the upcoming free Error Handling ebook during the PowerScripting Podcast on Thursday. It’s still in a very early draft form, but I’ve posted it to the GitHub repository for anyone who wants to brave the unedited waters and get a preview of the content.

Feedback is welcome, particularly on the technical content. Don’t worry about the presentation / organization so much, as those are likely to change once we go through a review and edit process anyway. You can post comments on GitHub, or contact me directly at [email protected].

Automatic formatting of code for posting on PowerShell.org forums


Hello everyone,

As you may know from past experience or from reading the Forums Tips and Guidelines sticky, there are a couple of quirks with the forum software we’re using here, related to CODE tags and backtick characters. The forum software treats backticks as special characters, which is quite annoying if your code contains any of them, and you’ll find that PRE tags give much better results than CODE tags, particularly if someone tries to copy and paste your code into another window.

I’ve added a function to my PowerShell ISE profile which avoids these headaches by taking the code in the ISE’s active window, escaping it in such a way that the forum will display it properly, wrapping it in PRE tags (and adding a blank comment line to the beginning and end of the code, to make copy / paste easier), and piping the results to the clipboard. All I have to do is press F7 and then paste into the forum window. The function was even used to post itself here, which is about as good a success indicator as I can think of.
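
The shape of it looks something like this (a hypothetical sketch only; the backtick escaping shown is an assumption on my part, and the downloadable version linked below handles the forum’s actual rules):

function Copy-IseCodeForForum
{
    # Grab whatever is in the ISE's active editor window.
    $code = $psISE.CurrentFile.Editor.Text

    # Escape backticks so the forum doesn't eat them.  (Assumption: doubling
    # them; the real function does whatever the forum actually requires.)
    $escaped = $code -replace '`', '``'

    # Wrap in PRE tags with a blank comment line at each end, then send the
    # result to the clipboard.
    "<pre>#`r`n$escaped`r`n#</pre>" | clip.exe
}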

The lines of this function may wrap in the blog post; there is a downloadable copy on the TechNet Gallery at http://gallery.technet.microsoft.com/scriptcenter/Format-PowerShell-code-for-68481ec4.

Edit: Looks like the handling of code formatting / HTML escaping on the blog posts has changed since it was originally posted, and the function posted here was displaying all sorts of double-escaped characters that it shouldn’t have. Please use the TechNet download link to find the proper code.

Revisited: Script Modules and Variable Scopes


Last week, I demonstrated that functions exported from Script Modules do not inherit their caller’s variable scopes, and how you could get around this by using the method $PSCmdlet.GetVariableValue().

It didn’t take me long to decide it was very tedious to include this type of code in every function, particularly when considering the number of preference variables that PowerShell has. (Check out about_Preference_Variables some time; there are quite a few.) I’ve just converted this approach into a function that can be called with a single line, and supports all PowerShell preference variables. For example, the Test-ScriptModuleFunction from the original post can be written as:

function Test-ScriptModuleFunction
{
    [CmdletBinding()]
    param ( )

    Get-CallerPreference -Cmdlet $PSCmdlet -SessionState $ExecutionContext.SessionState

    Write-Host "Module Function Effective VerbosePreference: $VerbosePreference"
    Write-Verbose "Something verbose."
}

You can download the Get-CallerPreference function from the TechNet Gallery. It has been tested on PowerShell 2.0 and 4.0.

How to make your submissions look good in the Scripting Games site’s File window


In the course of participating in the practice event, and working on event 1, I’ve noticed a couple of things about the way files are displayed in the File window on the Scripting Games site:

  • The site treats all files as ASCII; when you upload a Unicode file, syntax highlighting doesn’t work and you wind up with what looks like an extra blank line between every line of the original file.
  • Transcripts taken with PowerShell’s Start-Transcript command frequently contain lone Carriage Return or Line Feed characters, whereas the Scripting Games site appears to only like the CRLF pair. For example, lines that are separated by only LF (such as those in PowerShell’s error or warning output), appear as a single line in the File window of the Games site.

I’ve written a function to clean up the encoding and content of files before uploading them to the Games site. It takes advantage of the fact that Get-Content has no problem reading files with any combination of EOL characters, and that Set-Content always injects CRLF pairs between its input strings. It also searches the file for multi-byte Unicode characters; if none are found, it converts the file encoding to ASCII for a nicer display on the Games site.

#requires -Version 3.0

function Convert-GamesFile
{
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)]
        [ValidateScript({ Test-Path -Path $_ -PathType Leaf })]
        [string]
        $Path,

        [switch]
        $BackupOriginal
    )

    $contents = Get-Content -LiteralPath $Path -ErrorAction Stop

    # Check for Unicode characters

    $encoding = [Microsoft.PowerShell.Commands.FileSystemCmdletProviderEncoding]::Ascii

    foreach ($line in $contents)
    {
        if ([System.Text.Encoding]::UTF8.GetByteCount($line) -ne $line.Length)
        {
            Write-Warning "File '$Path' contains multi-byte characters."
            Write-Warning "File encoding will be Unicode, though this doesn't display as well on the Scripting Games site."
            
            $encoding = [Microsoft.PowerShell.Commands.FileSystemCmdletProviderEncoding]::Unicode
            break
        }
    }

    # Perform updates

    if ($BackupOriginal)
    {
        Copy-Item -LiteralPath $Path -Destination ("$Path.bak") -ErrorAction Stop
    }

    $contents | Set-Content -LiteralPath $Path -Encoding $encoding -ErrorAction Stop
}

Getting your Script Module functions to inherit “Preference” variables from the caller


Edit: While this first blog post demonstrates a viable workaround, it requires a fair bit of code in your function if you want to inherit multiple preference variables. After this post was made, I discovered a way to get this working in a function, so your code only requires a single line to call it. Check out Revisited: Script Modules and Variable Scopes for more information and a download link.

One thing I’ve noticed about PowerShell is that, for some reason, functions defined in script modules do not inherit the variable scope of the caller. From the function’s perspective, the inherited scopes look like this: Local (Function), Script (Module), Global. If the caller sets, for example, $VerbosePreference in any scope other than Global, the script module’s functions won’t behave according to that change. The following test code demonstrates this:

# File TestModule.psm1:
function Test-ScriptModuleFunction
{
    [CmdletBinding()]
    param ( )

    Write-Host "Module Function Effective VerbosePreference: $VerbosePreference"
    Write-Verbose "Something verbose."
}

# File Test.ps1:
Import-Module -Name .\TestModule.psm1 -Force

$VerbosePreference = 'Continue'

Write-Host "Global VerbosePreference: $global:VerbosePreference"
Write-Host "Test.ps1 Script VerbosePreference: $script:VerbosePreference"

Test-ScriptModuleFunction

Executing test.ps1 produces the following output, assuming that you’ve left the global VerbosePreference value at its default of “SilentlyContinue”:

Global VerbosePreference: SilentlyContinue
Test.ps1 Script VerbosePreference: Continue
Module Function Effective VerbosePreference: SilentlyContinue

There is a way for the script module’s Advanced Function to access variables in the caller’s scope; it just doesn’t happen automatically. You can use one of the following two methods in your function’s begin block:

$PSCmdlet.SessionState.PSVariable.GetValue('VerbosePreference')
$PSCmdlet.GetVariableValue('VerbosePreference')

$PSCmdlet.GetVariableValue() is just a shortcut for accessing the methods in the SessionState object. Both methods return the same value that the caller would get if they just typed $VerbosePreference (using the caller’s local scope if the variable exists there, then checking that scope’s parent, and so on up to Global.)

Let’s give it a try, modifying our TestModule.psm1 file as follows:

function Test-ScriptModuleFunction
{
    [CmdletBinding()]
    param ( )

    if (-not $PSBoundParameters.ContainsKey('Verbose'))
    {
        $VerbosePreference = $PSCmdlet.GetVariableValue('VerbosePreference')
    }

    Write-Host "Module Function Effective VerbosePreference: $VerbosePreference"
    Write-Verbose "Something verbose."
}

Now, when executing test.ps1, we get the result we were after:

Global VerbosePreference: SilentlyContinue
Test.ps1 Script VerbosePreference: Continue
Module Function Effective VerbosePreference: Continue
VERBOSE: Something verbose.

Keep in mind that the $PSCmdlet variable is only available to Advanced Functions (see about_Functions_Advanced). Make sure you’ve got a param block with the [CmdletBinding()] and/or [Parameter()] attributes in the function, or you’ll get an error, because $PSCmdlet will be null.

Saving Passwords (and preventing other processes from decrypting them)


This question is nothing new: “How do I save credentials in PowerShell so I don’t have to enter a password every time the script runs?” An answer to that question has been in PowerShell for a very long time: you use the ConvertFrom-SecureString cmdlet to encrypt your password, save the resulting encrypted string to disk, and then later reverse the process with ConvertTo-SecureString. (Alternatively, you can use Export-CliXml, which encrypts the SecureString the same way.) For example:

# Prompt the user to enter a password
$secureString = Read-Host -AsSecureString "Enter a secret password"

$secureString | ConvertFrom-SecureString | Out-File -FilePath .\storedPassword.txt

# Later, read the password back in.
$secureString = Get-Content -Path .\storedPassword.txt | ConvertTo-SecureString

The ConvertFrom-SecureString and ConvertTo-SecureString cmdlets, when you don’t use their -Key, -SecureKey, or -AsPlainText parameters, use DPAPI to encrypt / decrypt your secret data. When it comes to storing secrets with software alone (and without requiring a user to enter a password), DPAPI’s security is about as good as it gets. It’s not unbreakable – all of the encryption keys are there to be compromised by someone with Administrator access to the computer – but it’s pretty good.

You can see the details on how DPAPI works in the linked article, but here’s the long and short of it: By default, only the same user account (and on the same computer) is able to decrypt the protected data. However, there’s a catch: any process running under that user account can freely decrypt the data. DPAPI addresses this by allowing you to send it some optional, secondary entropy information to be used in the encryption and decryption process. This is like a second key that is specific to your program or script; so long as other processes don’t know what that entropy value is, they can’t read your data. (In theory, now you have a problem with protecting your entropy value, but this at least adds an extra layer that a malicious program needs to get around.) Here’s an excerpt from the article:

A small drawback to using the logon password is that all applications running under the same user can access any protected data that they know about. Of course, because applications must store their own protected data, gaining access to the data could be somewhat difficult for other applications, but certainly not impossible. To counteract this, DPAPI allows an application to use an additional secret when protecting data. This additional secret is then required to unprotect the data.

Technically, this “secret” should be called secondary entropy. It is secondary because, while it doesn’t strengthen the key used to encrypt the data, it does increase the difficulty of one application, running under the same user, to compromise another application’s encryption key. Applications should be careful about how they use and store this entropy. If it is simply saved to a file unprotected, then adversaries could access the entropy and use it to unprotect an application’s data.
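
For the curious, the underlying .NET API that accepts this optional entropy is System.Security.Cryptography.ProtectedData. Here’s a minimal sketch of the raw calls (this is what the module wraps, not the module’s own code):

Add-Type -AssemblyName System.Security

$data    = [System.Text.Encoding]::UTF8.GetBytes('secret')
$entropy = [System.Text.Encoding]::UTF8.GetBytes('my-script-specific value')

# Protect / Unprotect only round-trip when the same entropy bytes are supplied.
$protected   = [System.Security.Cryptography.ProtectedData]::Protect(
                   $data, $entropy, [System.Security.Cryptography.DataProtectionScope]::CurrentUser)
$unprotected = [System.Security.Cryptography.ProtectedData]::Unprotect(
                   $protected, $entropy, [System.Security.Cryptography.DataProtectionScope]::CurrentUser)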

Unfortunately, the ConvertFrom-SecureString and ConvertTo-SecureString cmdlets don’t allow you to specify this secondary entropy; they always pass in a Null value. For that reason, I’m putting together a small PowerShell script module to add this functionality. It can be downloaded from the TechNet repository. It adds an optional -Entropy parameter to both ConvertFrom-SecureString and ConvertTo-SecureString.

Since I was tinkering with those commands anyway, I also added an -AsPlainText switch to the ConvertFrom-SecureString command, in case you want to get the plain text back. This saves you the couple of extra commands to set up a PSCredential object and call the GetNetworkCredential() method, or make the necessary calls to the Marshal class yourself.

Import-Module .\SecureStringFunctions.psm1
$secureString = Read-Host -AsSecureString "Enter a secret password." 

# You can pass basically anything as the Entropy value, but I'd recommend sticking to simple value types (including strings),
# or arrays of those types, to make sure that the binary serialization of your entropy object doesn't change between
# script executions.  Here, we'll use Pi.  Edit:  The latest version of the code enforces the use of the recommended simple
# types, unless you also use the -Force switch.

$secureString | ConvertFrom-SecureString -Entropy ([Math]::PI) | Out-File .\storedPassword.txt

# Simulating another program trying to read your value using the normal ConvertTo-SecureString cmdlet (with null entropy).  This will produce an error. 
$newSecureString = Get-Content -Path .\storedPassword.txt | ConvertTo-SecureString 
 
# When your program wants to read the value, it can do so by passing the same Entropy value. 
$newSecureString = Get-Content -Path .\storedPassword.txt | ConvertTo-SecureString -Entropy ([Math]::PI)

#Display the plain text of the new SecureString object, verifying that it was decrypted correctly.
$newSecureString | ConvertFrom-SecureString -AsPlainText -Force