
Using Azure Desired State Configuration - Part II

Today we're going to be talking about adding configurations to your Azure Automation Account.  In this article, we'll discuss special considerations to take into account when uploading our configurations.  Then we'll talk about compiling the configurations into Managed Object Format (.mof) files, which we can then assign to our systems.

Things to Consider

When building configurations for Azure DSC (or any scenario where pre-created .mof files are pulled from a central location), there are some things that we need to keep in mind.

Don't embed PowerShell scripts in your configurations. - I spent a lot of time cleaning up my own configurations when learning Azure Automation DSC.  When configurations are compiled, it happens on a virtual machine hidden under the covers, which can cause some unexpected behaviours.  Some of the issues that I ran into are below; a short sketch of the first pitfall follows the list:

  • Using environment variables like $env:COMPUTERNAME - This caused me a lot of headaches when I started building systems that were being joined to a domain.  The name of the instance that compiles the .mof is used for $env:COMPUTERNAME instead of the target computer name, and you'll be banging your head on the table wondering what happened.  Some of the resources published in the gallery, such as xActiveDirectory, have been updated to accept a 'localhost' option as a computer name input.  This takes care of a lot of those headaches.
  • Using parenthetical commands to establish values - Using something like Get-NetAdapter in a parenthetical argument to get a network adapter of your target system and pass the needed values on to your DSC resource providers won't work, for the same reason as above: the command runs on the compilation machine, not the target.  In this instance, I received a vague error indicating that I was passing an invalid property, and it took a little while before I understood what was going on.
  • I also ran into an issue compiling a configuration because I had been using Set-Item to raise the WSMan maxEnvelopeSize in my configs, since they can get really big.  The error that I received was that WSMan wasn't installed on the machine.  It took me a bit to realize that this was because the machine compiling the .mof didn't have WSMan running, so the call blew up during compilation.
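To make the first pitfall concrete, here's a minimal sketch using the built-in File resource (the configuration name and paths are purely illustrative):

Configuration EnvVarPitfall
{
    # BAD: this line runs on Azure Automation's hidden compilation VM,
    # so $compileHost captures that machine's name, not the target node's.
    $compileHost = $env:COMPUTERNAME

    Node 'webServer'
    {
        File Marker
        {
            DestinationPath = 'C:\Temp\owner.txt'
            Contents        = $compileHost   # the wrong name gets baked into the .mof
        }
    }
}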

Instead, if you need to run PowerShell scripts ahead of your deployment, you can use the custom script extension to perform those tasks in Azure, or just put the script into your image on-prem.  There is one exception to this, and that's what we'll be talking about next.

Leverage Azure Automation Credential storage where possible - Passing credentials in as a parameter can cause all kinds of issues.

  • First and foremost, anyone who is building or deploying those configurations will know those credentials.
  • Second, it introduces the possibility of someone tripping over the keyboard and entering a credential improperly.

Allowing Azure Automation to tap the credential store during .mof compilation allows the credentials to stay in a secured bubble through the entire process.  To pass a credential from Azure Automation to your config, you need to modify the configuration: simply assign the result of Get-AutomationPSCredential to a variable inside your configuration, and then use that variable wherever those credentials are required.  Like so:

    # Retrieve the credential from the Automation Account's credential store
    $AdminCreds = Get-AutomationPSCredential -Name $AdminName

    Node ($AllNodes.Where{$_.Role -eq "WebServer"}).NodeName
    {
        JoinDomain DomainJoin
        {
            DependsOn        = "[WindowsFeature]RemoveUI"
            DomainName       = $DomainName
            AdminCreds       = $AdminCreds
            RetryCount       = 20
            RetryIntervalSec = 60
        }
    }

Under the covers, Azure Automation will authenticate to the credential store with the Run As account, and then pass those credentials as a PSCredential object to your DSC resource provider.

Stop Using localhost (or a specific computer name) as the Node Name - Azure Automation DSC allows you to assign generic but meaningful node names in configurations instead of just assigning everything to localhost.  So now you can use webServer, or domainController, or something else that describes the role instead of a machine name.  This makes it much easier to decide which configuration should go to which machine.
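As a minimal sketch (WindowsFeature is a built-in resource; the node name is whatever role label you choose):

Configuration RoleBasedNaming
{
    # Rather than Node 'localhost', name the node after its role. Azure
    # Automation DSC compiles this into RoleBasedNaming.webServer, which
    # can then be assigned to any machine that should act as a web server.
    Node 'webServer'
    {
        WindowsFeature IIS
        {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}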

Upload The Configuration

So much like in my previous series on Azure Automation and OMS, we're going to upload our DSC resources to our Automation Account's modules directory.  This requires getting the automation account, zipping up our local module files, sending them to a blob store, and importing those modules from the blob store.  I've sectioned out the code into different regions to better break it down for your own purposes.

#region GetAutomationAccount

$AutoResGrp = Get-AzureRmResourceGroup -Name 'mms-eus'
$AutoAcct = Get-AzureRmAutomationAccount -ResourceGroupName $AutoResGrp.ResourceGroupName

#endregion

#region compress configurations

Set-Location C:\Scripts\Presentations\AzureAutomationDSC\ResourcesToUpload
$Modules = Get-ChildItem -Directory

ForEach ($Mod in $Modules){

    Compress-Archive -Path $Mod.PSPath -DestinationPath ((Get-Location).Path + '\' + $Mod.Name + '.zip') -Force

}

#endregion

#region Access blob container

$StorAcct = Get-AzureRmStorageAccount -ResourceGroupName $AutoAcct.ResourceGroupName

# $Sub is assumed to already hold the target subscription object
Add-AzureAccount
$AzureSubscription = ((Get-AzureSubscription).where({$PSItem.SubscriptionName -eq $Sub.Name}))
Select-AzureSubscription -SubscriptionName $AzureSubscription.SubscriptionName -Current
$StorKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $StorAcct.ResourceGroupName -Name $StorAcct.StorageAccountName).where({$PSItem.KeyName -eq 'key1'})
$StorContext = New-AzureStorageContext -StorageAccountName $StorAcct.StorageAccountName -StorageAccountKey $StorKey.Value
$Container = Get-AzureStorageContainer -Name ('modules') -Context $StorContext

#endregion

#region upload zip files

$ModulesToUpload = Get-ChildItem -Filter "*.zip"

ForEach ($Mod in $ModulesToUpload){

    # Upload the zipped module to the blob container
    $Blob = Set-AzureStorageBlobContent -Context $StorContext -Container $Container.Name -File $Mod.FullName -Force

    # Import the module into the Automation Account from the blob URI
    New-AzureRmAutomationModule -ResourceGroupName $AutoAcct.ResourceGroupName -AutomationAccountName $AutoAcct.AutomationAccountName -Name ($Mod.Name).Replace('.zip','') -ContentLink $Blob.ICloudBlob.Uri.AbsoluteUri

}

#endregion

Once we've uploaded our files, we can monitor them to ensure that they've imported successfully via the UI, or by using the Get-AzureRmAutomationModule command.

PS C:\Scripts\Presentations\AzureAutomationDSC\ResourcesToUpload> Get-AzureRmAutomationModule -Name LWINConfigs -ResourceGroupName $AutoAcct.ResourceGroupName -AutomationAccountName $AutoAcct.AutomationAccountName


ResourceGroupName     : mms-eus
AutomationAccountName : testautoaccteastus2
Name                  : LWINConfigs
IsGlobal              : False
Version               : 1.0.0.0
SizeInBytes           : 5035
ActivityCount         : 1
CreationTime          : 9/13/2017 9:56:10 AM -04:00
LastModifiedTime      : 9/13/2017 9:57:26 AM -04:00
ProvisioningState     : Succeeded

Compile the Configuration

Once we've uploaded our modules, we can then upload and compile our configuration.  For this, we'll use the Import-AzureRmAutomationDscConfiguration command.  But before we do, there are two things to note when formatting a configuration for deployment to Azure Automation DSC.

  • The configuration name has to match the name of the configuration file.  So if your configuration is called SqlServerConfig, your config file has to be called SqlServerConfig.ps1.
  • The -SourcePath parameter errors out with an 'invalid argument specified' error if you use a plain string path.  It works if you use (Get-Item).FullName instead.

We'll be assigning this command's output to a variable, as we'll use it later on when we compile the configuration.  You'll also want to use the -Published parameter to publish the configuration after import, and if you're overwriting an existing configuration you'll want to leverage the -Force parameter.

$Config = Import-AzureRmAutomationDscConfiguration -SourcePath (Get-Item C:\Scripts\Presentations\AzureAutomationDSC\TestConfig.ps1).FullName -AutomationAccountName $AutoAcct.AutomationAccountName -ResourceGroupName $AutoAcct.ResourceGroupName -Description DemoConfiguration -Published -Force

Now that our configuration is published, we can compile it.  So let's add our parameters and configuration data:

$Parameters = @{

    'DomainName'            = 'lwinerd.local'
    'ResourceGroupName'     = $AutoAcct.ResourceGroupName
    'AutomationAccountName' = $AutoAcct.AutomationAccountName
    'AdminName'             = 'lwinadmin'

}

$ConfigData =
@{
    AllNodes =
    @(
        @{
            NodeName = "*"
            PSDscAllowPlainTextPassword = $true
        },

        @{
            NodeName = "webServer"
            Role     = "WebServer"
        },

        @{
            NodeName = "domainController"
            Role     = "domaincontroller"
        }
    )
}

You'll notice that I have PSDscAllowPlainTextPassword set to true for all of my nodes.  This allows the PowerShell instance on the compilation node to compile the configuration with the credentials being passed into it.  That instance isn't aware that once the .mof is compiled, Azure Automation encrypts it before storing it in the Automation Account.

Now that we have our parameters and configuration data set, we can pass this to our Start-AzureRmAutomationDscCompilationJob command to kick off the .mof compilation.

$DSCComp = Start-AzureRmAutomationDscCompilationJob -AutomationAccountName $AutoAcct.AutomationAccountName -ConfigurationName $Config.Name -ConfigurationData $ConfigData -Parameters $Parameters -ResourceGroupName $AutoAcct.ResourceGroupName

And now we can use the Get-AzureRmAutomationDscCompilationJob command to check the status of the compilation, or check through the UI.

Get-AzureRmAutomationDscCompilationJob -Id $DSCComp.Id -ResourceGroupName $AutoAcct.ResourceGroupName -AutomationAccountName $AutoAcct.AutomationAccountName

The compilation itself can take around five minutes, so grab yourself a cup of coffee.  Once it returns as complete, we can get to registering our endpoints and delivering our configurations to them.  Join us next week as we do just that!

Using Azure Desired State Configuration - Part I

I've been wanting to do this series for a while, and with some of the recent changes in Azure Automation DSC, I feel like we can now do a truly complete series.  So let's get started!

Compliance is hard as it is.  And as companies move more workloads into the cloud, they struggle with compliance even more.  Many organizations are moving to Infrastructure-as-a-Service for a multitude of reasons (both good and bad).  As these workloads become more numerous, IT departments struggle to keep up with auditing and management needs.  Desired State Configuration, as we all know, can provide a path not only to configuring your environments as new workloads deploy, but also to maintaining compliance and giving you rich reporting.

Yes.  Rich reporting from Desired State Configuration, out of the box.  You read that right.  You can get rich graphical reporting out of Azure Automation Desired State Configuration out of the box.  And you can even use it on-prem!

In this series, we're going to be discussing the push and pull methods for Desired State Configuration in Azure.  We'll be going over some of the 'gotchas' that you have to keep in mind while deploying your configurations in the Azure environment.  And we'll be talking about how we can use hybrid workers to manage systems on-prem using the same tools.

Push vs. Pull

Desired State Configuration in Azure, like a datacenter implementation, can be handled via a push or pull method.  The push method in Azure does not give you reporting, but it lets you deploy your configurations to a new or existing environment.  These configurations, and the modules necessary to apply them, are stored in a private blob that you create, and the Azure Desired State Configuration extension can be assigned that package.  The package is then downloaded to the target machine and decompressed, the modules are installed, and the configuration .mof file is generated locally on the system.

Pull method fully uses the capabilities of the Azure Automation Account for storing modules, configurations, and .mof compilations to deploy to systems.  The target DSC nodes are registered and monitored through the Azure Automation Account and reporting is generated and made available through the UI.  This reporting can also be forwarded to OMS Log Analytics for dashboarding and alerting purposes (which, as we discussed in my previous series, can be used with Azure Automation Runbooks for auto-remediation).

Pros and Cons to Each

So let's talk about some of the upsides and downsides to each method.  These may affect your decisions as you architect your DSC solution.

  • Pricing - Azure DSC is essentially free.  Azure Automation DSC is free for Azure nodes, while there is a cost associated with managed on-prem nodes.  This is charged per month and depends on how often the machines are checking in.  You can get more information on the particulars here.
  • Reporting - If you're looking for rich reporting, Azure Automation DSC is definitely the way to go.  You can still get statuses from your Azure DSC nodes via PowerShell, but this leaves the onus on you to format that data and make it look pretty.  We'll be taking a look at how we can do this a bit later.
  • Flexibility - Azure Automation DSC allows you to use modules stored in your Azure Automation Account.  If you wish to use a new module, you simply add that module, update your configuration file, and recompile.  With Azure DSC, you need to repackage your configuration with all of the modules, re-publish them, and re-push them to your target machines.
  • Side-by-Side Module Versioning Tolerance - Currently, Azure DSC actually has an advantage over Azure Automation DSC in this respect: you cannot currently have multiple versions of a module in your Automation Account's module repository.  So if you're using Automation DSC and calling the same DSC resources in multiple configs, they all need to use the same module version.
  • On-Prem Management Capabilities - Azure Automation DSC has the ability to manage on-prem virtual machines, either directly or via Hybrid Workers.  This gives you the ability to manage all of your virtual machines and monitor their configuration status from a single pane of glass.  Azure DSC does not have this capability.
  • Managing Systems in AWS - Yes.  You can also manage your virtual machines in AWS using the AWS DSC Toolkit via Azure Automation DSC!

So that's the overview of what we're going to be talking about through this series.  Tomorrow, we'll be getting into how to add configurations into Azure Automation DSC and compiling your configs.

Setting up the PowerShell.org DSC tools from Github

I have created a short blog series about how to set up the DSC tooling from the PowerShell.org DSC repository, with the mindset of contributing changes back.

  1. Test-HomeLab -InputObject ‘The Plan’
  2. Get-Posh-Git | Test-Lab
  3. Get-DSCFramework | Test-Lab
  4. Invoke-DscBuild | Test-Lab
  5. Test-Lab | Update-GitHub

-David Jones

 

Episode 288 - PowerScripting Podcast - Hal and Jon talk about Splunk and DSC troubleshooting


In This Episode

Tonight on the PowerScripting Podcast, Hal and Jon talk about Splunk and troubleshooting DSC

Links

<migreene> http://aka.ms/dscmp

<alevyinroc> here's my screen. https://www.dropbox.com/s/5bz3jqbghjsh2lx/Screenshot%202014-10-23%2021.42.33.png?dl=0

<halr9000> https://apps.splunk.com/app/1477/

<halr9000> wave 7 shipped https://gallery.technet.microsoft.com/scriptcenter/DSC-Resource-Kit-All-c449312d

<halr9000> wrong link: this one: https://gallery.technet.microsoft.com/xExchange-PowerShell-1dd18388/

<halr9000> http://blogs.citrix.com/2014/10/09/tech-preview-xendesktop-desired-state-configuration-resource-provider/

<rcookiemonster> They did a series on it a short while back - http://blogs.citrix.com/author/brianeh/ has them I think

<halr9000> http://blogs.msdn.com/b/powershell/archive/2014/01/03/using-event-logs-to-diagnose-errors-in-desired-state-configuration.aspx

<halr9000> http://www.ravichaganti.com/blog/portfolio/book-windows-powershell-desired-state-configuration-revealed/

<migreene> https://gallery.technet.microsoft.com/scriptcenter/xDscDiagnostics-PowerShell-abb6bcaa

<JonWalz> https://gallery.technet.microsoft.com/scriptcenter/xDscResourceDesigne-Module-22eddb29

<halr9000> http://blogs.citrix.com/author/brianeh/

Chatroom Highlights:

<Schlauge> ### input into splunk    key/value pairs?    custom objects?

<Schlauge> ### @JonWalz   did you say you have a resource that pushes your PowerShell profile to a remote computer?

The current and future state of the Windows Management Framework

On the 2nd of October, Lee Holmes gave a presentation about the current and future state of the Windows Management Framework (WMF) during the Dutch PowerShell User Group (DuPSUG) at the Microsoft headquarters in The Netherlands.

The slide decks and recorded videos will be made available soon, but this is what was discussed:

The release cycle of the Windows Management Framework (WMF)

Preview versions are being released in faster, incremental cycles.  This rapid cadence means that companies that need specific new functionality to tackle current problems don't have to wait as long as they did in the past.

Everyone should keep in mind that documentation for preview versions can be more limited, and should still read the release notes carefully.  They contain descriptions of some of the improvements that are discussed in this blog post, but also cover other things that aren't discussed here.  Also be sure to take a look at What's New in Windows PowerShell on TechNet.

A request from the audience was to include more helpful real-life examples until documentation is fully up-to-date.

 

Desired State Configuration (DSC) partial/split configurations

With DSC partial/split configurations it is possible to combine multiple separate DSC configurations into a single desired state.  This can be useful when a company has different people or departments that are each responsible for a specific part of the configuration (for example Windows, database, applications).

 

OneGet

OneGet is a Package Manager Manager (it manages package managers). It enables companies to find, get, install and uninstall packages from both internal and public sources. Public repositories can contain harmful files and should be treated accordingly.

Besides the OneGet module included in the Windows Management Framework Preview, updated versions are continuously being uploaded to https://github.com/OneGet/oneget by Microsoft. These can include bug fixes and new functionality like support for more provider types.

While in the past it seemed that Nuget was required, during the PowerShell Summit it was demonstrated that a file share can be used as well.

From the audience a question was raised whether BITS (Background Intelligent Transfer Service) could be used.  This is currently not the case, and there are no plans yet to implement it.

 

PowerShellGet

PowerShellGet is a module manager which should make it easier to find the many great modules that are already available, but are not very discoverable because they're fragmented on numerous websites across the Internet.

Microsoft is currently hosting a gallery of modules. The modules that are available in there are currently being controlled by Microsoft, but this might change in the future.

It is possible to create an internal module source and the save location for modules can be specified as well.
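As a quick sketch with current PowerShellGet versions (the share path is hypothetical):

# Register an internal file share as a trusted module source
Register-PSRepository -Name 'Internal' -SourceLocation '\\server\PSRepo' -InstallationPolicy Trusted

# Save a module to a folder of your choosing instead of installing it
Save-Module -Name PSReadLine -Path 'C:\SavedModules'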

 

PSReadLine

PSReadLine is a bash-inspired readline implementation for PowerShell that improves the command-line editing experience in the PowerShell.exe console.  It includes syntax coloring and CTRL+C/CTRL+V support; for more information about other improvements, view the project's website.

PSReadLine is one of the modules that can be installed using PowerShellGet:
Find-Module PsReadLine | Install-Module

 

Security

  • Always be careful when running scripts that include Invoke-Expression or its alias iex because it might run harmful code.
    • For a non harmful example, take a look at this blog post by Lee Holmes.
  • Many people in the security community are adopting PowerShell.
  • PowerShell activity happens in memory and is therefore volatile. To improve security the following enhancements were introduced:
    • Transcript improvements
      • Transcript support was added to the engine so it can be used everywhere, including the Integrated Scripting Environment (ISE).
      • A transcript file name automatically includes the computer name.
      • Transcript logging can be enforced to be redirected to another system.
      • Transcription can be enforced by default.
  • Group Policy
    • An ADMX file is currently not available to configure it on all platforms, but it can be found in the technical preview versions of Windows 10 and Windows Server under: Administrative Templates -> Windows Components -> Windows PowerShell
  • More advanced Scriptblock logging
    • Enable ScriptBlockLogging through GPO (in later Windows versions) or by registry by setting EnableScriptBlockLogging to 1 (REG_DWORD) in: HKLM:\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging
    • The additional logging will show you what code was run and can be found in event viewer under Applications and Services Logs\Microsoft\Windows\PowerShell\Operational.
    • Scriptblocks can be split across multiple event log entries due to size limitations.
    • Using Get-WinEvent -FilterHashTable it is possible to get the related events, extract the information, and combine it (see the sketch after this list).
    • Since attackers would want to remove these registry settings and clear event logs, consider using Windows Event Forwarding/SCOM ACS to store this information on another server. Also consider enabling cmdlet logging.
  • Just Enough Admin (JEA)
    • JEA enables organizations to provide operators with only the amount of access required to perform their tasks.
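To show what that scriptblock reassembly can look like, here's a hedged sketch using Get-WinEvent (event ID 4104 carries the scriptblock text; the property positions shown are how those events are commonly laid out):

# Pull the scriptblock-logging events (ID 4104) from the Operational log
$events = Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-PowerShell/Operational'
    Id      = 4104
}

# Group the pieces by ScriptBlockId and rejoin the text in part order
# (properties: 0 = part number, 2 = text fragment, 3 = scriptblock ID)
$events |
    Group-Object { $_.Properties[3].Value } |
    ForEach-Object {
        ($_.Group |
            Sort-Object { $_.Properties[0].Value } |
            ForEach-Object { $_.Properties[2].Value }) -join ''
    }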

 

New and improved functionality and cmdlets

 

Manage .zip files using Expand-Archive and Compress-Archive

.zip files can be managed using Compress-Archive and Expand-Archive. Other archive types like .rar are not currently supported, but this might be added in future versions.
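A quick sketch, assuming a hypothetical C:\Temp\Logs folder:

# Zip a folder, then unpack the archive somewhere else
Compress-Archive -Path 'C:\Temp\Logs' -DestinationPath 'C:\Temp\Logs.zip'
Expand-Archive -Path 'C:\Temp\Logs.zip' -DestinationPath 'C:\Temp\LogsCopy'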

 

New-Item

It is no longer necessary to specify the item type. To create a new item, simply run
New-Item foo.txt

 

Get-ItemPropertyValue

This makes it easier to get a property value of a file or registry key:

  • Get-ItemPropertyValue -Path $Env:windir\system32\calc.exe -Name VersionInfo
  • Get-ItemPropertyValue -Path HKLM:\SOFTWARE\Microsoft\PowerShell\1\ShellIds\ScriptedDiagnostics -Name ExecutionPolicy

 

Symbolic links support for New-Item, Remove-Item and Get-ChildItem

Symbolic link files and directories can now be created using:

  • New-Item -ItemType SymbolicLink -Path C:\Temp\MySymLinkFile.txt -Value $pshome\profile.ps1
  • New-Item -ItemType SymbolicLink -Path C:\Temp\MySymLinkDir -Value $pshome

Junctions cannot currently be created, but this might also be added in a later version.

 

Debugging using Enter-PSHostProcess and Exit-PSHostProcess

These cmdlets let you debug Windows PowerShell scripts in processes separate from the current process running in the Windows PowerShell console (for example, long-running or looping code). Run Enter-PSHostProcess to enter, or attach to, a specific process ID, and then run Get-Runspace to return the active runspaces within the process. Run Exit-PSHostProcess to detach from the process when you are finished debugging the script within the process.
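The flow looks roughly like this (the process ID is hypothetical; Debug-Runspace, also part of WMF 5, does the actual breaking-in):

Enter-PSHostProcess -Id 4816   # attach to the target PowerShell process
Get-Runspace                   # list the active runspaces in that process
Debug-Runspace -Id 1           # break into the script running in runspace 1
Exit-PSHostProcess             # detach when you're done debugging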

 

Use Psedit to edit files in a remote session directly in ISE

Simply open a new PSSession to a remote computer and type PSEdit <path to a file>.

 

Classes and other user-defined types

    • The goal is to enable a wider range of use cases, simplify development of Windows PowerShell artifacts (such as DSC resources), and accelerate coverage of management surfaces.
    • Classes are useful for structured data. Think, for example, of custom objects that you need to change afterwards.
    • Name of the class and the constructor must be the same.
    • Code is case insensitive.
    • In classes, variables are lexically scoped (matching braces) instead of dynamically scoped.
    • Every return must be explicit.
    • Sample code:

Class MyClass
{
    MyClass($int1, $int2)
    {
        # Class methods don't write to the pipeline, so use Write-Host
        # (or an assignment) if you want the message to be visible.
        Write-Host "In the constructor"
    }
    [int]$Property1
    [DateTime]$Property2
    [int] MyHelper($param1)
    {
        return 42
    }
}

PhillyPoSH 06/05/2014 meeting summary and presentation materials


 

Patterns for Implementing a DSC Pull Server Environment

My Patterns for Implementing a DSC Pull Server Environment talk from the PowerShell Summit is now online.

Enjoy!

Building Scalable Configurations With DSC

My Building Scalable Configurations with DSC talk from the PowerShell Summit is now online.

Enjoy!

Life and Times of a DSC Resource

My Life and Times of a DSC Resource talk from the PowerShell Summit is now online.

Enjoy!

 

Going Deeper on DSC Resources

Desired State Configuration is a very new technology, and declarative configuration management is still a very young space.  We (Microsoft and the community) are still figuring out the best structure for resources, composite configurations, and other structures.

That said, there are certain viewpoints that I've come to, either from hands-on experience or from watching how other communities (like the Puppet community or Chef community) handle similar problems.

How Granular Should I Get?

There is no absolute answer.

Very, Very Granular

Resources should be very granular in the abstract, but in practice, you may need to make concessions to improve the user experience.

For example, when I configure an IP address for a network interface, I can supply a default gateway. A default gateway is a route, which is separate from the interface and IP address, but in practice they tend to be configured together. In this case, it might make sense to offer a resource that can configure both the IP address and the default gateway.

I tend to think resources should be very granular. We can use composite resources to offer higher level views of the configuration. If I were implementing a resource to configure a network adapter's IP and gateway, I would have a route resource, an IP address resource, and probably a DNS server setting resource. I would then also have a composite resource to deal with the default use case of configuring a network adapter's IP address, gateway, and DNS servers together.

The benefit of doing it this way is that I still have very discrete, flexible primitives (the IP address resource, the route resource, and the DNS server resource). I can then leverage the route resource to create static routes, or use them directly to more discretely configure the individual elements.
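To make that concrete, here's a hedged sketch of the layering. cIPAddress, cRoute, and cDnsServerAddress stand in for the granular primitives, and the composite would be saved as NetworkAdapter.schema.psm1 inside a resource module:

# Composite resource: one friendly abstraction over three granular primitives
Configuration NetworkAdapter
{
    param
    (
        [Parameter(Mandatory)] [string] $InterfaceAlias,
        [Parameter(Mandatory)] [string] $IPAddress,
        [Parameter(Mandatory)] [string] $DefaultGateway,
        [string[]] $DnsServers
    )

    # Each primitive stays independently usable for discrete configuration
    cIPAddress IP
    {
        InterfaceAlias = $InterfaceAlias
        IPAddress      = $IPAddress
    }

    cRoute Gateway
    {
        InterfaceAlias = $InterfaceAlias
        NextHop        = $DefaultGateway
        DependsOn      = '[cIPAddress]IP'
    }

    cDnsServerAddress Dns
    {
        InterfaceAlias = $InterfaceAlias
        Address        = $DnsServers
    }
}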

Unless...

You have some flow control that needs to happen based on the state of the client or the environment.  Since your configuration is statically generated and is declarative, there are no flow control statements in the configuration MOF document.  That means that any logic that needs to occur at application time has to be implemented inside a resource.

Unfortunately, this leads to the need to re-implement common functionality.  For example, if I have a service that I need to be able to update the binary (not via an MSI), I need to basically re-implement parts of the file and service resource.  This use case requires a custom resource because I need to stop the service before I can replace the binary, but I don't want to stop the service with every consistency check if I don't need to replace the file.

This scenario begs for a better way to leverage existing resources in a cross resource scenario (kind of like RequiredModules in module metadata), but there isn't a clean way to do this that I've found (but I'm still looking!).

My Recommendation

So for most cases, I would try to use existing resources or build very granular custom resources.  If I need to offer a higher level of abstraction, I'd escalate to putting a composite resource on top of those granular resources.  Finally, if I need some flow control or logic for a multistep process, I'd implement a more comprehensive resource.

What Should I Validate?

Now that we are seeing more resources in the community repository (especially thanks to the waves of resources from the PowerShell Team!), we are seeing a variety of levels of validation being performed.

I think that the Test-TargetResource function should validate all the values and states that Set-TargetResource can set.

An example of where this isn't happening currently is in the cNetworking resource for PSHOrg_cIPAddress.  I'm going to pick on this resource a bit, since it was the catalyst for this discussion.

The resource offers a way to set a default gateway as well as the IP address.  So what happens if after setting the IP and default gateway, someone changes the default gateway to point to another router?

In this case, the validation is only checking that the IP address is correct.  DSC will never re-correct the gateway and our DSC configuration document (the MOF file) is no longer an accurate representation of the system state, despite the fact that the Local Configuration Manager (LCM) will report that everything matches.

This is BAD!!  If a resource offers an option to configure a setting, that setting should be validated by Test-TargetResource, otherwise that setting should be removed from the resource.  The intent of DSC is to control configuration, including changes over time and return a system to the desired state.  If we ignore certain settings, we weaken our trust in the underlying infrastructure of DSC.
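Here's a hedged sketch of that principle (illustrative parameter names and cmdlets, not the actual PSHOrg_cIPAddress code): Test-TargetResource checks every setting the resource is able to set.

function Test-TargetResource
{
    param
    (
        [Parameter(Mandatory)] [string] $InterfaceAlias,
        [Parameter(Mandatory)] [string] $IPAddress,
        [string] $DefaultGateway
    )

    # Check the IP address
    $currentIP = (Get-NetIPAddress -InterfaceAlias $InterfaceAlias -AddressFamily IPv4).IPAddress
    if ($currentIP -ne $IPAddress) { return $false }

    # Check the default gateway too, so drift here is detected and corrected
    if ($DefaultGateway)
    {
        $currentGateway = (Get-NetRoute -InterfaceAlias $InterfaceAlias -DestinationPrefix '0.0.0.0/0' -ErrorAction SilentlyContinue).NextHop
        if ($currentGateway -ne $DefaultGateway) { return $false }
    }

    return $true
}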

What should I return?

The last element I'm going to tackle today is what should be returned from Get-TargetResource.  I've been on the fence about this one.  Like with Test-TargetResource, there are a number of implementation examples that vary in how they come up with the return values.

Currently, I don't see a ton of use for Get-TargetResource and it doesn't impact the Test and Set phases of the LCM, so it's been easy to ignore.  This is bad practice (shame on me).

Here are my thoughts around Get-TargetResource: it should return the currently configured state of the machine.  Directly returning the parameters passed in is misleading.

Going back to the PSHOrg_cIPAddress from the earlier example, it directly returns the default gateway from the parameter, regardless of the configured gateway.  This wouldn't be so bad if the resource actually checked the gateway during processing and could correct it if it drifted.  But it does not check the gateway, so Get-TargetResource could be lying to you.  The most consistent result of Get-TargetResource would be retrieving the currently configured settings.
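A matching sketch for Get-TargetResource (again with illustrative cmdlets): query the machine and report what's actually there, rather than echoing the parameters back.

function Get-TargetResource
{
    param
    (
        [Parameter(Mandatory)] [string] $InterfaceAlias
    )

    # Report what is actually configured on the box right now
    return @{
        InterfaceAlias = $InterfaceAlias
        IPAddress      = (Get-NetIPAddress -InterfaceAlias $InterfaceAlias -AddressFamily IPv4).IPAddress
        DefaultGateway = (Get-NetRoute -InterfaceAlias $InterfaceAlias -DestinationPrefix '0.0.0.0/0' -ErrorAction SilentlyContinue).NextHop
    }
}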

What's left?

What other burning questions do you have around DSC?  Let's keep talking them through either in the forums or in the comments here.
