Category Archives: Tips and Tricks

The Monad Manifesto Annotation Project


Richard’s log, stardate 2457164.5

Today’s destination is the Monad Manifesto Annotation Project.

The idea behind this project is to keep the manifesto intact somewhere on the internet, and to give the community the ability to annotate the various topics in the manifesto. The idea came from Pluralsight author Tim Warner, with the initial annotations being made by Don Jones. Jeffrey Snover gave his permission for this project, but with a big warning: the content can only be shared on the original source page on Penflip, and cannot be hosted anywhere else.

I am already in the process of putting all the chapters of the Manifesto into Penflip and applying the proper formatting. The plan is to finish this in the coming days; after that, the actual annotation can begin.

For more information, check the project page on Penflip:

https://www.penflip.com/powershellorg/monad-manifesto-annotated

Till the next time, live long and prosper.


PowerShell… An exciting frontier…


PowerShell… An exciting frontier…

These are the voyages of a PowerShell adventurer.

Its continuing mission:

To explore strange new cmdlets…

To seek out new modules; new parameters…

To boldly go where no one has gone before!

Richard’s log, stardate 2457163.

Our destination for today is my very first post on PowerShell.org. As you can see from the opening lines, I approach my journey into PowerShell as an exploration of the unknown, just like the crew of Star Trek: The Next Generation did. So far my journey has been a pleasant one, because, you know, exploring PowerShell is a lot of fun! Your exploration should be a lot of fun too, and for that reason I want to share my discoveries and experiences with you. These will, I hope, help you to boldly go where no one has gone before!

About Me, And A Statement

My name is Richard Diphoorn, and I’m an IT professional based in the Netherlands. I have been working in IT for around 14 years now. My daily work consists mostly of automating, scripting, provisioning new servers, and working with System Center, Azure Pack and SMA. In short, if it can be automated, I’m working on it. I believe in automation; in my opinion it’s the mindset every IT professional should have.

When I started working in IT, it was not called IT in the Netherlands; it was called ‘automatisering’, which in English means ‘automation’. And there you have it: the job I do was always meant to be about automation. Yet I still see a lot of ‘click-next admins’ around in the field. This term was coined by Jeffrey Snover, and it refers to administrators who click their way through provisioning and configuration tasks instead of trying to automate them.

It’s my personal quest to give click-next admins the motivation to learn and use PowerShell. I strive for a transformational change in the admin’s life by offering new perspectives on how to ‘do’ things.

I certainly don’t claim to possess all the knowledge, but I do want to share my passion and the knowledge I have built up so far with the people who are open to it. And with that, I invite you to join me in this exploration of ‘strange new cmdlets’. 😉

A Small Introduction

So, with this personal statement I kick off this first post. Our first mission is to explore what this thing called ‘PowerShell’ actually is, what kind of advantages it brings you, and why it’s here to stay.

I assume ‘level 200’ as the baseline for my audience, so I will not go into the very basics of how to operate a Windows operating system. I try to avoid unclear technobabble as much as possible, but I don’t want to oversimplify things either. I try to make it as simple as possible, but not simpler (where have we heard that before… hmmm…).

Monad, A Brief History Of Time.

If you are like me, you probably bought a little book called ‘Monad’, written by Andy Oakley, quite some years back. If I remember correctly, I bought it somewhere in late December 2005, after seeing it on a bookshelf in Waterstone’s in London. I bought the book because I had heard of MSH, and I wanted to learn more about it.

I was hooked. 100%

I still encourage people to read this book, because a lot of the information in it is still relevant in terms of concepts. Topics like the pipeline, Verb-Noun syntax, cmdlets, repeatability and consistency have not changed since the first version of Monad that the PowerShell team released. This is also the reason you still see ‘C:\windows\system32\WindowsPowerShell\v1.0’ on every Windows OS to this day: the underlying architecture has not changed. As we continue to explore, you will see what I mean.
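To make those concepts concrete, here is a small example of my own (not from the book) that shows the Verb-Noun syntax and the pipeline working together:

# Each command follows the Verb-Noun pattern, and objects (not text) flow
# from one cmdlet to the next through the pipeline.
Get-Process |
    Sort-Object -Property WorkingSet -Descending |
    Select-Object -First 5 -Property Name, Id, WorkingSet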

This book explains the very first basic concepts, but to really dig into the details, I encourage you to read the Monad Manifesto, written by Monad’s architect, Jeffrey Snover. This manifesto explains the long-term vision behind Monad and describes many elements that still exist in PowerShell today. It is a comprehensive document on how Jeffrey first saw the big problems in the way administrators did their work.

He explains new approaches to different kinds of models, and how existing problems can be solved with these approaches. This document will also shape your thinking the ‘DevOps’ way, because many of its concepts map directly onto how you should ‘do’ DevOps. For example, Jeffrey talks about ‘prayer-based parsing’, which is in direct conflict with the predictability required in DevOps scenarios.

You need to be able to predict what will happen when you go from testing to production. Deus ex machina situations must be prevented in all cases; you always need to know what is happening and why. In my opinion, DevOps is nothing more than being really good at automating things, and PowerShell gives you that possibility.

So, what is PowerShell, and how do I benefit from it?

PowerShell is basically a shell (a black box in which you can type 😛) that lets you interact with every aspect of the operating system in either an interactive or a programmatic manner.

You type commands in a specific format in this window, and magic happens. This is the simple explanation. Now the rest…

The concept of a shell in which you can manipulate the whole Windows system interactively or through scripts, with a common syntax and semantics, was a really powerful and inspiring idea to me. This new shell helped me work more efficiently, more effectively and with more fun. It enabled me to explore the whole OS, to boldly go where I had never gone before!

This concept is not new to mainframe operators and *nix admins; it’s something they have been used to for a long time. If you doubt whether working on a command line is a good thing, go talk to the *nix admins in your company. They will happily show you how fast they can work, I’m sure!

So for you, as a Windows administrator, what kind of benefits do you get from learning PowerShell? There are obvious ones, like getting things done more quickly, and doing them the same way every time so that you never make that one mistake again. A less obvious benefit is that you get to know the OS and applications very well, because sometimes you dig really deep into the system. This level of knowledge can and will benefit you in understanding how a system works and how to resolve problems. It contributes hugely to your success in your career, because you become the top-notch engineer. You will be the Geordi La Forge of your company, so to speak. :)

PowerShell is dead, long live PowerShell!

PowerShell is here to stay, rest assured. Almost all products made by Microsoft can be managed with PowerShell in one way or another, either through a direct API to the product itself or through a REST interface. A lot of third-party products also support PowerShell, such as those from VMware, NetApp and Citrix. PowerShell is really becoming (or already is) a commodity; I actually advise customers to only buy products that can be managed with PowerShell.

Be honest: if a product cannot be automated, how does it contribute to the WHOLE business? A business thrives on efficient processes, and if all IT processes are efficient, the business profits hugely from that.

Every company I have worked for so far in my IT career uses Microsoft software. I believe in the best tool for the job, and PowerShell is such a tool. It’s duct tape, WD-40 and a Swiss Army knife in one: whatever you want to do, PowerShell can do it (and do it better).

PowerShell is here to stay, my fellow IT pros; embrace it fully and enjoy the voyage!

I want to thank the crew at PowerShell.org for giving me the opportunity to blog on this site!

Till next time, when we meet again.

Proxy Functions for Cmdlets with Dynamic Parameters


I came across an interesting problem today while working on the Pester module: how do you create a proxy function for a Cmdlet which has dynamic parameters? I needed a solution which would automatically reproduce the original cmdlet’s dynamic parameters inside a PowerShell function, and which would work on PowerShell 2.0 if at all possible. The full post and solution can be found on my blog at http://davewyatt.wordpress.com/2014/09/01/proxy-functions-for-cmdlets-with-dynamic-parameters/.
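For background, here is roughly how a static proxy function body is normally generated (a small sketch using the built-in ProxyCommand class; the linked post covers the extra work needed to reproduce dynamic parameters on top of this):

# Generate the text of a proxy function that wraps Get-ChildItem.
$command  = Get-Command Get-ChildItem
$metadata = New-Object System.Management.Automation.CommandMetadata($command)

# Returns the source of a function (param block plus begin/process/end blocks)
# that forwards calls to the original cmdlet via a steppable pipeline.
[System.Management.Automation.ProxyCommand]::Create($metadata)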

DSC Pull Server on Windows Server 2008 R2


Recently on the PowerShell.org forums, a community member mentioned that they were having trouble setting up a Server 2008 R2 machine as a DSC pull server. It turns out this is possible, but you have to install all the prerequisites yourself, since the Add-WindowsFeature DSC-Service command doesn’t do it for you on the older operating system.

Refer to this blog post for the checklist.

Tracking down commands that are polluting your pipeline


In a recent forum post, someone was having trouble with a function that was outputting more values than he expected. We’ve all been there. He was having trouble debugging this, and I decided to see if I could find a way to narrow down the search in an automated fashion, rather than having to step through the code by hand.

The full article and code are up on my blog at http://davewyatt.wordpress.com/2014/06/05/tracking-down-commands-that-are-polluting-your-pipeline/
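As a quick illustration of what “polluting the pipeline” means (my own minimal example, not the code from that thread), any statement whose output isn’t captured or suppressed ends up in the function’s output:

function Get-Sum
{
    param ([int[]] $Numbers)

    # ArrayList.Add() returns the index of the new item; because that return
    # value isn't captured, it leaks into the function's output stream.
    $list = New-Object System.Collections.ArrayList
    foreach ($n in $Numbers) { $list.Add($n) }

    ($list | Measure-Object -Sum).Sum
}

Get-Sum -Numbers 1, 2, 3   # outputs 0, 1, 2 and then 6, instead of just 6

# Suppressing the unwanted output fixes it:  $null = $list.Add($n)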

Patterns for Implementing a DSC Pull Server Environment


My Patterns for Implementing a DSC Pull Server Environment talk from the PowerShell Summit is now online.

Enjoy!

Building Scalable Configurations With DSC


My Building Scalable Configurations with DSC talk from the PowerShell Summit is now online.

Enjoy!

Installing PowerShell v5? Be a Little Careful, OK?


I’m getting a lot of questions from folks, via Twitter and other venues, regarding Windows Management Framework 5.0 – which is where PowerShell v5 comes from. It’s awesome that people are installing v5 and kicking the tires – however, please help spread the word:

  • v5 is a preview. It isn’t done, and it isn’t guaranteed bug-free. It shouldn’t be installed on production computers until it’s officially released.
  • v5 doesn’t install ‘side by side’ with v3 or v4. You can’t run it with “-version 3” to “downgrade.” Now, v5 shouldn’t break anything – something that runs in v3 or v4 should still work fine – but there are no guarantees, as it’s a preview and not released code at this stage.
  • Server software (Exchange, SharePoint, etc) often has a hard dependency on a specific version of PowerShell. You need to look into that before you install v5.
  • After installing v5, you might not be able to cleanly uninstall and revert to a prior version.

Generally speaking, v5 should be installed in a test virtual machine at the very least, not on a production computer. It’s great to play with it, and you should absolutely log bugs and suggestions to http://connect.microsoft.com.

This situation will be true for any pre-release preview of PowerShell or WMF going forward. “Preview” is the new Microsoft-speak for “beta,” and you should treat it as such. Play with it, yes – that’s the whole point, and it’s how we get a stable, clean release in the end. But play with caution, and never on production computers.

Going Deeper on DSC Resources


Desired State Configuration is a very new technology, and declarative configuration management is still a very young space. We (Microsoft and the community) are still figuring out the best structure for resources, composite configurations, and other constructs.

That said, there are certain viewpoints that I’ve come to, either from hands-on experience or from watching how other communities (like the Puppet and Chef communities) handle similar problems.

How Granular Should I Get?

There is no absolute answer.

Very, Very Granular

Resources should be very granular in the abstract, but in practice, you may need to make concessions to improve the user experience.

For example, when I configure an IP address for a network interface, I can supply a default gateway. A default gateway is a route, which is separate from the interface and IP address, but in practice they tend to be configured together. In this case, it might make sense to offer a resource that can configure both the IP address and the default gateway.

I tend to think resources should be very granular. We can use composite resources to offer higher level views of the configuration. If I were implementing a resource to configure a network adapter’s IP and gateway, I would have a route resource, an IP address resource, and probably a DNS server setting resource. I would then also have a composite resource to deal with the default use case of configuring a network adapter’s IP address, gateway, and DNS servers together.

The benefit of doing it this way is that I still have very discrete, flexible primitives (the IP address resource, the route resource, and the DNS server resource). I can then leverage the route resource to create static routes, or use them directly to more discretely configure the individual elements.
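As a rough sketch of what that composite layer might look like (the resource and parameter names here are hypothetical; the granular resources are assumed to already exist), a composite resource is just a parameterized configuration saved as a .schema.psm1 file:

# cNetworkAdapter.schema.psm1 -- a hypothetical composite resource
Configuration cNetworkAdapter
{
    param
    (
        [Parameter(Mandatory)] [string]   $InterfaceAlias,
        [Parameter(Mandatory)] [string]   $IPAddress,
        [Parameter(Mandatory)] [string]   $DefaultGateway,
        [string[]] $DnsServers
    )

    # In a real composite resource, the granular resources below would be
    # imported from their containing module.  Each setting is delegated to
    # a discrete, granular resource.
    cIPAddress IP
    {
        InterfaceAlias = $InterfaceAlias
        IPAddress      = $IPAddress
    }

    cRoute DefaultRoute
    {
        InterfaceAlias    = $InterfaceAlias
        DestinationPrefix = '0.0.0.0/0'
        NextHop           = $DefaultGateway
    }

    cDnsServerAddress Dns
    {
        InterfaceAlias = $InterfaceAlias
        Address        = $DnsServers
    }
}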

Unless…

You have some flow control that needs to happen based on the state of the client or the environment.  Since your configuration is statically generated and declarative, there are no flow control statements in the configuration MOF document.  That means that any logic that needs to occur at application time has to be implemented inside a resource.

Unfortunately, this leads to the need to re-implement common functionality.  For example, if I have a service whose binary I need to be able to update (not via an MSI), I basically need to re-implement parts of the File and Service resources.  This use case requires a custom resource because I need to stop the service before I can replace the binary, but I don’t want to stop the service on every consistency check if I don’t need to replace the file.

This scenario begs for a better way to leverage existing resources in a cross-resource scenario (kind of like RequiredModules in module metadata), but there isn’t a clean way to do this that I’ve found (but I’m still looking!).

My Recommendation

So for most cases, I would try to use existing resources or build very granular custom resources.  If I need to offer a higher level of abstraction, I’d escalate to putting a composite resource on top of those granular resources.  Finally, if I need some flow control or logic for a multistep process, I’d implement a more comprehensive resource.

What Should I Validate?

Now that we are seeing more resources in the community repository (especially thanks to the waves of resources from the PowerShell Team!), we are seeing a variety of levels of validation being performed.

I think that the Test-TargetResource function should validate all the values and states that Set-TargetResource can set.

An example of where this isn’t happening currently is in the cNetworking resource for PSHOrg_cIPAddress.  I’m going to pick on this resource a bit, since it was the catalyst for this discussion.

The resource offers a way to set a default gateway as well as the IP address.  So what happens if after setting the IP and default gateway, someone changes the default gateway to point to another router?

In this case, the validation is only checking that the IP address is correct.  DSC will never re-correct the gateway and our DSC configuration document (the MOF file) is no longer an accurate representation of the system state, despite the fact that the Local Configuration Manager (LCM) will report that everything matches.

This is BAD!!  If a resource offers an option to configure a setting, that setting should be validated by Test-TargetResource; otherwise, that setting should be removed from the resource.  The intent of DSC is to control configuration, including changes over time, and to return a system to the desired state.  If we ignore certain settings, we weaken our trust in the underlying infrastructure of DSC.
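To make that concrete, here is a sketch of what a more complete Test-TargetResource might look like for a resource that sets both an IP address and a default gateway (hypothetical parameter names, assuming the Get-NetIPAddress and Get-NetRoute cmdlets are available on the target node):

function Test-TargetResource
{
    [OutputType([bool])]
    param
    (
        [Parameter(Mandatory)] [string] $InterfaceAlias,
        [Parameter(Mandatory)] [string] $IPAddress,
        [string] $DefaultGateway
    )

    # Check the IP address actually bound to the interface.
    $currentIP = Get-NetIPAddress -InterfaceAlias $InterfaceAlias -AddressFamily IPv4 -ErrorAction SilentlyContinue
    if ($currentIP.IPAddress -notcontains $IPAddress) { return $false }

    # If the resource can set a gateway, it must also test the gateway.
    if ($DefaultGateway)
    {
        $route = Get-NetRoute -DestinationPrefix '0.0.0.0/0' -InterfaceAlias $InterfaceAlias -ErrorAction SilentlyContinue
        if ($route.NextHop -notcontains $DefaultGateway) { return $false }
    }

    return $true
}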

What should I return?

The last element I’m going to tackle today is what should be returned from Get-TargetResource.  I’ve been on the fence about this one.  Like with Test-TargetResource, there are a number of implementation examples that vary in how they come up with the return values.

Currently, I don’t see a ton of use for Get-TargetResource and it doesn’t impact the Test and Set phases of the LCM, so it’s been easy to ignore.  This is bad practice (shame on me).

Here are my thoughts on Get-TargetResource.  It should return the currently configured state of the machine.  Directly returning the parameters passed in is misleading.

Going back to the PSHOrg_cIPAddress from the earlier example, it directly returns the default gateway from the parameter, regardless of the configured gateway.  This wouldn’t be so bad if the resource actually checked the gateway during processing and could correct it if it drifted.  But it does not check the gateway, so Get-TargetResource could be lying to you.  The most consistent behavior for Get-TargetResource is to retrieve the currently configured settings.
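In the same spirit, a Get-TargetResource for that resource might look roughly like this (again a sketch with hypothetical parameter names), querying the live configuration rather than echoing the parameters back:

function Get-TargetResource
{
    [OutputType([System.Collections.Hashtable])]
    param
    (
        [Parameter(Mandatory)] [string] $InterfaceAlias,
        [Parameter(Mandatory)] [string] $IPAddress
    )

    $currentIP = Get-NetIPAddress -InterfaceAlias $InterfaceAlias -AddressFamily IPv4 -ErrorAction SilentlyContinue
    $route     = Get-NetRoute -DestinationPrefix '0.0.0.0/0' -InterfaceAlias $InterfaceAlias -ErrorAction SilentlyContinue

    # Report what is actually configured on the machine, not what was requested.
    return @{
        InterfaceAlias = $InterfaceAlias
        IPAddress      = $currentIP.IPAddress
        DefaultGateway = $route.NextHop
    }
}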

What’s left?

What other burning questions do you have around DSC?  Let’s keep talking them through either in the forums or in the comments here.

PowerShell and System.Nullable<T>


While helping to answer a question about Exchange cmdlets today, I came across something interesting, which doesn’t seem to be very well documented.

A little background, first: in the .NET Framework (starting in Version 2.0), there’s a Generic type called System.Nullable<T>. The purpose of this type is to allow you to assign a value of null to Value types (structs, integers, booleans, etc), which are normally not allowed to be null in a .NET application. The Nullable structure consists of two properties: HasValue (a Boolean), and Value (the underlying value type, such as an integer, struct, etc).

A C# method which accepts a Nullable type might look something like this:

int? Multiply(int? operand1, int? operand2)
{
    if (!operand1.HasValue || !operand2.HasValue) { return null; }

    return operand1.Value * operand2.Value;
}

("int?" is C# shorthand for System.Nullable<int> .)

PowerShell appears to do something helpful, though potentially unexpected, when it comes across an instance of System.Nullable: it evaluates to either $null or an object of the underlying type for you, without the need (or the ability) to ever access the HasValue or the Value properties of the Nullable structure yourself:

$variable = [Nullable[int]] 10

$variable.GetType().FullName   # System.Int32

If you assign $null to the Nullable variable instead, the $variable.GetType() line will produce a “You cannot call a method on a null-valued expression” error. You never see the actual System.Nullable structure in your PowerShell code.

What does this have to do with Exchange? Some of the Exchange cmdlets return objects that have public Nullable properties, such as MoveRequestStatistics.BytesTransferred. Going through the MSDN documentation on these classes, you might expect to have to do something like $_.BytesTransferred.Value.ToMB() to get at the ByteQuantifiedSize.ToMB() method, but that won’t work. $_.BytesTransferred will either be $null, or it will be an instance of the ByteQuantifiedSize structure; there is no “Value” property in either case. After checking for $null, you’d just do this: $_.BytesTransferred.ToMB()
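Putting that together, the pattern looks something like this (a sketch; it assumes you’re in an Exchange session where Get-MoveRequest and Get-MoveRequestStatistics are available):

Get-MoveRequest | Get-MoveRequestStatistics | ForEach-Object {
    if ($null -ne $_.BytesTransferred)
    {
        # BytesTransferred unwraps straight to a ByteQuantifiedSize instance.
        '{0}: {1} MB' -f $_.DisplayName, $_.BytesTransferred.ToMB()
    }
}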

PowerShell Gotcha: UNC paths and Providers


PowerShell’s behavior can be a little bit funny when you pass a UNC path to certain cmdlets. PowerShell doesn’t recognize these paths as “rooted” because they’re not on a PSDrive; as such, whatever provider is associated with PowerShell’s current location will attempt to handle them. For example:

Set-Location C:
Get-ChildItem -Path \\$env:COMPUTERNAME\c$

Set-Location HKLM:
Get-ChildItem -Path \\$env:COMPUTERNAME\c$

The first command works fine (assuming you have a c$ share enabled and are able to access it), and the second command gives a “Cannot find path” error, because the Registry provider tried to work with the UNC path instead of the FileSystem provider. You can get around this problem by prefixing the UNC path with “FileSystem::”, which will make PowerShell use that provider regardless of your current location.
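For example, re-running the failing case with the prefix (same UNC path as above):

Set-Location HKLM:
Get-ChildItem -Path FileSystem::\\$env:COMPUTERNAME\c$   # now handled by the FileSystem provider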

On top of that, commands like Resolve-Path and $PSCmdlet.GetUnresolvedProviderPathFromPSPath() don’t normalize UNC paths properly, even when the FileSystem provider handles them. This annoyed me, so I spent some time investigating different options to get around the quirky behavior. The result is the Get-NormalizedFileSystemPath function, which can be downloaded from the TechNet Gallery. In addition to making UNC paths behave, this had the side effect of also resolving 8.3 short file names to long paths (something else that Resolve-Path doesn’t do.)

The function has an “-IncludeProviderPrefix” switch which tells it to include the “FileSystem::” prefix, if desired (so you can reliably use cmdlets like Get-Item, Get-Content, Test-Path, etc., regardless of your current location or whether the path is UNC.) For example:

$path = "\\$env:COMPUTERNAME\c$\SomeFolder\..\.\Whatever\..\PROGRA~1" 
 
$path = Get-NormalizedFileSystemPath -Path $path -IncludeProviderPrefix 
 
$path 
 
Set-Location HKLM: 
Get-ChildItem -Path $path | Select-Object -First 1 

<# 
Output: 
 
FileSystem::\\MYCOMPUTERNAME\c$\Program Files 
 
    Directory: \\MYCOMPUTERNAME\c$\Program Files 
 
 
Mode                LastWriteTime     Length Name 
----                -------------     ------ ---- 
d----         7/30/2013  10:54 AM            7-Zip 
 
#>

Revisited: PowerShell and Encryption


Back in November, I made a post about saving passwords for your PowerShell scripts. As I mentioned in that article, the ConvertFrom-SecureString cmdlet uses the Data Protection API to create an encrypted copy of the SecureString’s contents. DPAPI uses master encryption keys that are saved in the user’s profile; unless you enable either Roaming Profiles or Credential Roaming, you’ll only be able to decrypt that value on the same computer where the encryption took place. Even if you do enable Credential Roaming, only the same user account who originally encrypted the data will be able to read it.

So, what do you do if you want to encrypt some data that can be decrypted by other user accounts?

The ConvertFrom-SecureString and ConvertTo-SecureString cmdlets have a pair of parameters (-Key and -SecureKey) that allow you to specify your own encryption key instead of using DPAPI. When you do this, the SecureString’s contents are encrypted using AES. Anyone who knows the AES encryption key will be able to read the data.

That’s the easy part. Encrypting data is simple; making sure your encryption keys don’t get exposed is the trick. If you’ve hard-coded the keys in your script, you may as well have just stored the password in plain text, for all the good the encryption will do. There are several ways you can try to save and protect your AES key; you could place it in a file with strong NTFS permissions, or in a database with strict access control, for example. In this post, however, I’m going to focus on another technique: encrypting your AES key with RSA certificates.

If you have an RSA certificate (even a self-signed one), you can encrypt your AES key using the RSA public key. At that point, only someone who has the certificate’s private key will be able to retrieve the AES key and read your data. Instead of trying to protect encryption keys yourself, we’re back to letting the OS handle the heavy lifting; if it protects your RSA private keys well, then your AES key is also safe. Here’s a brief example of creating a SecureString, saving it with a new random 32-byte AES key, and then using an RSA certificate to encrypt the key itself:

try
{
    $secureString = 'This is my password.  There are many like it, but this one is mine.' | 
                    ConvertTo-SecureString -AsPlainText -Force

    # Generate our new 32-byte AES key.  I don't recommend using Get-Random for this; the System.Security.Cryptography namespace
    # offers a much more secure random number generator.

    $key = New-Object byte[](32)
    $rng = [System.Security.Cryptography.RNGCryptoServiceProvider]::Create()

    $rng.GetBytes($key)

    $encryptedString = ConvertFrom-SecureString -SecureString $secureString -Key $key

    # This is the thumbprint of a certificate on my test system where I have the private key installed.

    $thumbprint = 'B210C54BF75E201BA77A55A0A023B3AE12CD26FA'
    $cert = Get-Item -Path Cert:\CurrentUser\My\$thumbprint -ErrorAction Stop

    $encryptedKey = $cert.PublicKey.Key.Encrypt($key, $true)

    $object = New-Object psobject -Property @{
        Key = $encryptedKey
        Payload = $encryptedString
    }

    $object | Export-Clixml .\encryptionTest.xml

}
finally
{
    if ($null -ne $key) { [array]::Clear($key, 0, $key.Length) }
}

Notice the use of try/finally and [array]::Clear() on the AES key’s byte array. It’s a good habit to make sure you’re not leaving the sensitive data lying around in memory longer than absolutely necessary. (This is the same reason you get a warning if you use ConvertTo-SecureString -AsPlainText without the -Force switch; .NET doesn’t allow you to zero out the memory occupied by a String.)

Any user who has the certificate installed, including its private key, will be able to load up the XML file and obtain the original SecureString as follows:

try
{
    $object = Import-Clixml -Path .\encryptionTest.xml

    $thumbprint = 'B210C54BF75E201BA77A55A0A023B3AE12CD26FA'
    $cert = Get-Item -Path Cert:\CurrentUser\My\$thumbprint -ErrorAction Stop

    $key = $cert.PrivateKey.Decrypt($object.Key, $true)

    $secureString = $object.Payload | ConvertTo-SecureString -Key $key
}
finally
{
    if ($null -ne $key) { [array]::Clear($key, 0, $key.Length) }
}

Using RSA certificates to protect your AES encryption keys is as simple as that: Get-Item, $cert.PublicKey.Key.Encrypt() , and $cert.PrivateKey.Decrypt() . You can even make multiple copies of the AES key with different RSA certificates, so that more than one person/certificate can decrypt the data.

I’ve posted several examples of data encryption techniques in PowerShell on the TechNet Gallery. Some are based on SecureStrings, like the code above, and others use .NET’s CryptoStream class to encrypt basically anything (in this case, an entire file on disk.)

Revisited: Script Modules and Variable Scopes


Last week, I demonstrated that functions exported from Script Modules do not inherit their caller’s variable scopes, and how you could get around this by using the method $PSCmdlet.GetVariableValue().

It didn’t take me long to decide it was very tedious to include this type of code in every function, particularly when considering the number of preference variables that PowerShell has. (Check out about_Preference_Variables some time; there are quite a few.) I’ve just converted this approach into a function that can be called with a single line, and supports all PowerShell preference variables. For example, the Test-ScriptModuleFunction from the original post can be written as:

function Test-ScriptModuleFunction
{
    [CmdletBinding()]
    param ( )

    Get-CallerPreference -Cmdlet $PSCmdlet -SessionState $ExecutionContext.SessionState

    Write-Host "Module Function Effective VerbosePreference: $VerbosePreference"
    Write-Verbose "Something verbose."
}

You can download the Get-CallerPreference function from the TechNet Gallery. It has been tested on PowerShell 2.0 and 4.0.

Getting your Script Module functions to inherit “Preference” variables from the caller


Edit: While this first blog post demonstrates a viable workaround, it requires a fair bit of code in your function if you want to inherit multiple preference variables. After this post was made, I discovered a way to get this working in a function, so your code only requires a single line to call it. Check out Revisited: Script Modules and Variable Scopes for more information and a download link.

One thing I’ve noticed about PowerShell is that, for some reason, functions defined in script modules do not inherit the variable scope of the caller. From the function’s perspective, the inherited scopes look like this: Local (Function), Script (Module), Global. If the caller sets, for example, $VerbosePreference in any scope other than Global, the script module’s functions won’t behave according to that change. The following test code demonstrates this:

# File TestModule.psm1:
function Test-ScriptModuleFunction
{
    [CmdletBinding()]
    param ( )

    Write-Host "Module Function Effective VerbosePreference: $VerbosePreference"
    Write-Verbose "Something verbose."
}

# File Test.ps1:
Import-Module -Name .\TestModule.psm1 -Force

$VerbosePreference = 'Continue'

Write-Host "Global VerbosePreference: $global:VerbosePreference"
Write-Host "Test.ps1 Script VerbosePreference: $script:VerbosePreference"

Test-ScriptModuleFunction

Executing test.ps1 produces the following output, assuming that you’ve left the global VerbosePreference value at its default of “SilentlyContinue”:

Global VerbosePreference: SilentlyContinue
Test.ps1 Script VerbosePreference: Continue
Module Function Effective VerbosePreference: SilentlyContinue

There is a way for the script module’s Advanced Function to access variables in the caller’s scope; it just doesn’t happen automatically. You can use one of the following two methods in your function’s begin block:

$PSCmdlet.SessionState.PSVariable.GetValue('VerbosePreference')
$PSCmdlet.GetVariableValue('VerbosePreference')

$PSCmdlet.GetVariableValue() is just a shortcut for accessing the methods in the SessionState object. Both methods will return the same value that the caller would get if they just typed $VerbosePreference (using the local scope if the variable exists here, then checking the caller’s parent scope, and so on until it reaches Global.)

Let’s give it a try, modifying our TestModule.psm1 file as follows:

function Test-ScriptModuleFunction
{
    [CmdletBinding()]
    param ( )

    if (-not $PSBoundParameters.ContainsKey('Verbose'))
    {
        $VerbosePreference = $PSCmdlet.GetVariableValue('VerbosePreference')
    }

    Write-Host "Module Function Effective VerbosePreference: $VerbosePreference"
    Write-Verbose "Something verbose."
}

Now, when executing test.ps1, we get the result we were after:

Global VerbosePreference: SilentlyContinue
Test.ps1 Script VerbosePreference: Continue
Module Function Effective VerbosePreference: Continue
VERBOSE: Something verbose.

Keep in mind that the $PSCmdlet variable is only available to Advanced Functions (see about_Functions_Advanced). Make sure you’ve got a param block with the [CmdletBinding()] and/or [Parameter()] attributes in the function, or you’ll get an error, because $PSCmdlet will be null.