Category Archives: Judges’ Notes

Scripting Games 2013: Event 6 Notes

We have finally hit the final event of the 2013 Scripting Games! The past six weeks have given us many amazing scripts, and some that were in need of extra work. Regardless, for those of you who have finished all six scripts in your respective track, I say congratulations! You hit the finish line sprinting hard to the end! Now you can sit back, know that you made it, and (hopefully) appreciate some great things you learned along the way. Remember, not only have you learned some new techniques, but the techniques you used have also taught others how to write better scripts!

Check out the rest of my notes on my blog here!

Last events: my notes and scripts.

Oops! Looks like I totally forgot about posting what I did over here. Sorry!
In order of appearance:

Event 5 – script
Event 5 – notes
Event 6 – script
Event 6 – notes

This is the last event, and I would like to thank everybody who took part in these games. Thank you all for the great ideas, inspiration, and feedback… It was a really educational experience for me (as it was in the past), and I hope it was educational for you too. And congratulations to all the winners. :)

Notes for Event 6

When I read the instructions for Event 6, I thought, "here's a tough one." A lot of competitors won't have access to a test environment with Windows Server 2012 and virtual machines that they can actually work with. So I expected that many of the entries wouldn't get tested, and I intended to forgive minor errors that would have shown up in testing.

Well, there was one thing that really surprised me. The instructions were quite clear about minimizing "Are you sure?" queries to the user, yet you can count on one hand the number of entries that included -Confirm:$false. This is just an example of why it's so important to read the problem statement very carefully and extract the solution requirements. Then, after creating the solution, go back and verify that the requirements have all been met. Many of the entries called out this requirement in the comments, but then didn't account for it in the script.
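As a quick sketch of what that looks like (the cmdlet and VM name here are just stand-ins for whatever destructive operations an entry performed, and whether a given cmdlet prompts depends on its ConfirmImpact):

```powershell
# -Confirm:$false suppresses the "Are you sure?" prompt that a
# high-impact operation would otherwise raise.
Stop-VM -Name 'Web01' -TurnOff -Confirm:$false
```

One switch, and the script runs unattended instead of stalling on a prompt.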

I had mentioned in a previous blog entry that, particularly in the advanced entries, authors were working too hard. Sometimes this means putting more emphasis on "completeness" than on solving the problem. Here's an example of wasted effort. A few entries used the [ValidateNotNullOrEmpty()] test on the "Server" parameter, which already had a default value. Because the parameter has a default value, it won't be null or empty, making the test unnecessary. Here, give this a try:

function Test-NullOrEmpty {
    [CmdletBinding()]
    Param (
        [ValidateNotNullOrEmpty()]
        $Name = "Server"
    )
    "Got $Name"
}
Test-NullOrEmpty

Note that calling the function without the parameter just assigns the default value. To make it fail, you have to deliberately call the function with an empty value (Test-NullOrEmpty -Name ''), which is not going to happen in the real world.
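To see the validation actually fire, you have to go out of your way:

```powershell
Test-NullOrEmpty            # no argument: the default kicks in, prints "Got Server"
Test-NullOrEmpty -Name ''   # deliberate empty value: ValidateNotNullOrEmpty throws
```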

I know that these are just nit-picks, and if these are examples of the nits in the Event 6 entries, then CONGRATULATIONS!! Y'all did a mighty fine job of solving the problem. Calling out these issues is just intended as a learning opportunity. There are lots and lots of correct ways to write PowerShell solutions; it's just that some are more efficient or take less typing than others. Learning about them is one of the important results of participating in the games.

Thanks to all of you for your efforts!

Scripting Games 2013: Event 5 Notes

With week 5 in the books, I can see that everyone just continues to grow and show some great submissions. Of course, nothing is perfect, and there are always areas for improvement, but trust me, you are all doing an excellent job!

I was hoping to have this article completed prior to now, but between a flight to Tech Ed and forgetting my power cord for the laptop, I am just now getting this accomplished. Better late than never :).

With that, head over to my blog to check out my notes on Event 5 here.

Scripting Games Week 5

I loved this week’s challenge, as it had just the right wiggle room to bring out the best in our participants.  Of course, this is also the point in the games when we start to get everyone’s “A” game.  By now even our new competitors are all warmed up and in the zone, and let me tell you, the entries this week show it!   I want to start with the beginners, as I actually ran almost every entry this week.  Honestly, everyone fell into one of three buckets: Select-String, Import-Csv, or ForEach.  Let me explain: there were three primary means to solve this problem.  Use Select-String and some basic text parsing to get the IP addresses, and then Select-Object to filter.  Convert the logs to objects with Import-Csv and use Where-Object to filter.  Or use ForEach and a combination of if and where.

All three are correct, so how does one judge one from another?  As this is a competition, I used speed as the determining gauge.  For a long time I was convinced that the following was about perfect.  Quick, simple, and accurate.

Select-String -Path C:\Reporting\LogFiles\*\*.log -Pattern "(\b\d{1,3}\.){3}\d{1,3}\b" -AllMatches |
Select-Object -Unique @{Label="IP";Expression={$_.Matches[1]}}

I was particularly drawn to this approach because it only used two cmdlets; if that’s not PowerShell, I don’t know what is. At first I was convinced that converting the logs to objects was a waste.  Let me explain.  Over the course of this past month you’ve heard us rant and rave about objects, and how PowerShell is not text but rich .NET objects.  For the most part that is an iron law, but it’s a law with an exception.  There is one place where text is just text: log files!  That’s why I loved this event.  This is the exception where all the old tricks still apply, and where we found out which of you really know your regular expressions.  However, in this one instance, since we had a well-formed log, converting it to CSV objects was actually faster.   I wasn’t expecting that, but consider: my gold-standard example takes about 10 seconds on my PC.   The following finishes in 3!

$LogFilePath = 'C:\Reporting\LogFiles' 
$header = 'date','time','s-ip','cs-method','cs-uri-stem','cs-uri-query','s-port','cs-username','c-ip','cs(User-Agent)','sc-status','sc-substatus','sc-win32-status','time-taken' 
Import-Csv -Path $(Get-ChildItem -Path $LogFilePath -File -Recurse).FullName -Header $header -Delimiter ' ' | 
# if the contents of 'c-ip' can be converted to an IP address then it is a valid IP 
Select-Object @{n='ClientIP';e={if ([IPAddress]$_.'c-ip'){ $_.'c-ip' }}} | 
Sort-Object -Property 'ClientIP' -Unique

Now, I’m not crazy about that entry: it’s hard to follow and will always return a blank string. But if you really look, what makes it work is that the author is offloading the IP filtering to the [IPAddress] type accelerator.  That is brilliant, and it's about 5x faster than a regular expression, which really adds up when you’re performing over 6,000 comparisons.   I know the general consensus is to leave the .NET stuff alone, but I have no religion when it comes to this stuff. If it’s better, it’s better, and in this instance it was better.
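The trick is easy to see in isolation (the addresses below are made up). The cast succeeds only for text that parses as an address, so it doubles as a validity test; note that inside a Select-Object expression block a failed cast is silently swallowed, whereas at the prompt you would catch it explicitly:

```powershell
[IPAddress]'10.1.2.3'                               # parses into a System.Net.IPAddress object
try { [IPAddress]'server01' } catch { 'not an IP' } # the cast throws on non-address text
```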

But that’s not the end of the story. While sorting through the entries I found the following solution.

Get-ChildItem -File C:\Reporting\LogFiles -Recurse | Get-Content |
    # Selecting "GET /" gives us only the lines we want from the files.
    Select-String -Pattern "GET /"  |
    # Split the remaining lines into an array and write element 8, the IP, to a file.
    ForEach-Object { $_.Line.Split(' ')[8] } | Select-Object -Unique @{Name="Source Address"; Expression={$_}}

Now that’s an old-school PowerShell solution if I’ve ever seen one, and you know what, it’s fast as hell!  There’s no validation of any kind, it will only work with the provided source files, and it’s absolutely perfect!  You see, the goal is to get the job done.  We don’t always have to author a tool that can be used by the world.  There is nothing wrong with leveraging your brain and cheating a little!

As for the advanced entries, I think they’ve been adequately covered by my fellow judges.  In general, my feedback would be to start a slow clap for the group.  They're not perfect, but as a group you’ve learned from the feedback over this past month, and man, does it show! Heading into the final stretch, I encourage you all to treat this last event as your victory lap, because you’ve all already won.



Notes on Event 5

Into the home stretch, and the entries just keep getting better! The only advice I’d like to offer this time is to read the instructions carefully. They specified the folder where the files were located, and I noticed several misinterpretations in the scripts. Some included a mandatory Path parameter, and others had a default Path that was not the specified folder. Including an optional Path with the correct default would certainly be acceptable, but not those variations.

The instructions also included some ambiguity about what the log file actually contains. Was the client IP address in the first column (as specified in the instructions) or in a different column (as presented in the example logs)? A number of entries just searched the logs for IP addresses and returned all of them. This approach can’t distinguish between the client and server addresses, which would give a wrong answer. Another approach searched for the “c-ip” column, but this would only work if the log files were as in the samples. Another method, selecting the second IP address on a line, would also only work on the sample log style. There weren’t many entries that supported both file types, but one of them did it in a very concise manner, checking the first and ninth columns for an IP address and selecting the correct one.

Most of the entries used Sort-Object -Unique or Select-Object -Unique to eliminate duplicates, which was the first approach that I thought of. There were several entries, however, that used alternate methods that I thought were quite clever applications of PowerShell technology: hash tables with the IP address as the key, and Group-Object on the IP address. Both options provided a fairly simple way to also report the instance count for each address.
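Both alternatives are easy to sketch (the addresses here are hypothetical sample data standing in for the parsed client-IP column):

```powershell
$clientIps = '10.0.0.5', '10.0.0.9', '10.0.0.5', '10.0.0.5'

# Group-Object: unique addresses plus an instance count in one step
$clientIps | Group-Object | Select-Object Name, Count

# Hash table: same idea, counting occurrences by key
$count = @{}
foreach ($ip in $clientIps) { $count[$ip]++ }
$count
```

Either way, you get de-duplication and the per-address hit count for free, where Sort-Object -Unique gives you only the former.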

Returning an instance count sounds like an interesting option, but after thinking about it some more, I’m not so sure. Counts of the number of sessions and the hits per session would be much more interesting than the raw hit count. But that’s way, way beyond the scope of this event.

Anyway, just one more event to go. I’m expecting a spectacular finish!

Event 4: My notes…

Active Directory is one of those things I just love to work with. That’s why I was really looking forward to this event. As always, I learned a few things, but I still saw some mistakes that I would like to highlight. As always, you can read about them in both Polish and English. Enjoy!

Scripting Games Week 4

Again, if you’re participating in the games this year, you’ve already won!  If you’re not, and you’re reading this post, what are you doing?!  I’ve watched authors step their game up over the past month, and I can tell you from personal experience the games will make you better at your real job.  It’s like sharpening an axe, an axe made of super juice that can automate the world :)

Well that’s clever!
I came across this script this morning.

$prop = Write-Output Name,Title,Department,LastLogonDate,PasswordLastSet,LockedOut,Enabled
Get-ADUser -Filter * -Properties $prop | 
    Get-Random -Count 20 | Select-Object $prop |
        ConvertTo-Html -Title "Active Directory Audit" -PostContent "<hr>$(Get-Date)" | Out-File C:\adresult.html

Well formatted, simple, concise: all around a very clean approach to the problem.  However, the use of Write-Output threw me for a second.  I actually had to run it to see what was happening; for a moment I thought maybe there was yet another way to create a custom object in PowerShell.  Alas, no: our intrepid author has simply deduced a way to avoid having to put quotes around the text.  Consider the following: $prop1 and $prop2 are identical, but it’s one less character using Write-Output.

$prop1 = Write-Output Name,Title,Department,LastLogonDate,PasswordLastSet,LockedOut,Enabled
$prop2 = 'Name','Title','Department','LastLogonDate','PasswordLastSet','LockedOut','Enabled'

I’m not saying we should start using Write-Output instead of quotes, if for no other reason than that the syntax highlighting flags it as incorrect. However, this one time it’s forgiven, and I’m tipping my hat to you, sir. Well done.

Don’t put spaces or dashes in your property names.
I’ve seen this on and off throughout the games, and I’ll admit this one isn’t a slam dunk, but that said, don’t do it. You’re writing a script; camel case is the established standard in place of spaces. Yes, the spaces do make the output slightly easier to read, but at the cost of eliminating reuse of the code.
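A minimal illustration of the cost (the object and values here are made up):

```powershell
# A space in the property name forces quoting at every downstream reference:
$hit = [PSCustomObject]@{ 'Source Address' = '10.0.0.5' }
$hit.'Source Address'

# Camel case dot-references cleanly, so the output plugs into other code:
$hit2 = [PSCustomObject]@{ SourceAddress = '10.0.0.5' }
$hit2.SourceAddress
```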

Oh the Humanity.
Seriously, read the damn help already. I could fill this post with examples of simple mistakes that could have been avoided. Using the wrong cmdlet is one thing, but take the following.

Get-Process | Sort-Object {Get-Random} | select -First 5

What’s wrong with that picture? Well, nothing, except that it’s horribly inefficient, since the Get-Random cmdlet has a -Count parameter!

Get-Process | Get-Random -Count 5

To the author (you know who you are) and everyone else: read the help, people!

Light week this week, but I will say I am super excited about next week’s offering. It’s a problem that tickles my kind of fancy, and I hope you all have as much fun solving it as I did.


Scripting Games 2013: Event 4 Notes

It is all downhill from here, folks! Event 4 is in the books, and we only have two more to go! Everyone has been doing an outstanding job with their submissions, and it is becoming clear that people are learning new things and showing some great techniques in their code.

Of course, this doesn’t mean there isn’t room for improvement: some submissions could be even better, and some simple mistakes could be cleaned up to turn average submissions into amazing ones. With that, it’s time to dive into my notes… You can check out the rest of this article here.

Event 4 Notes

Loved seeing [OutputType([PSObject])] in an entry this morning… that helps the help system document what your script produces. It’s a shame it doesn’t work well with custom type names (since those are a bit of a fake-out on the object), but it’s an attention to detail I appreciate.

I am seeing a few misunderstandings. Keep in mind that the lastLogonTimestamp attribute in AD is the one that replicates, although there can be a long delay in that replication. The other “last logged on” attributes don’t replicate, so you can’t rely on them unless you’re querying every DC (pretty inefficient).

Hey, one thing to think about: sometimes simpler is better. For example, instead of adding a dozen lines to check and see if a module exists and can be loaded, just add a #requires comment for that module. Let the shell do that work and spew an error if the module isn’t present. It’ll even force-load the module into memory. Saves lots of steps.
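A sketch of what that looks like at the top of a script (the Get-ADUser call is just a placeholder for the script body):

```powershell
#Requires -Modules ActiveDirectory

# If the module is missing, the shell refuses to run the script and emits a
# clear error; if present (PowerShell 3+), it is loaded automatically.
# No hand-rolled existence checks needed.
Get-ADUser -Filter * | Select-Object -First 5 -Property Name
```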

Hey, don’t declare functions as global:Do-This. It’s a neat trick, but you’re polluting the shell’s global scope. Plan to write in-scope functions and make them a script module, so they can be loaded and unloaded. From the Games perspective, “whatever,” but in the real world, don’t pollute the global scope.
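A minimal sketch of the module alternative (MyTools.psm1 and Do-This are hypothetical names, echoing the Do-This example above):

```powershell
# MyTools.psm1 -- a hypothetical module file containing:
#   function Do-This { 'doing it' }
#   Export-ModuleMember -Function Do-This

# Caller: load, use, unload; nothing lingers in the global scope afterward.
Import-Module .\MyTools.psm1
Do-This
Remove-Module MyTools
```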

A comment I saw: “You should check to make sure the module isn’t loaded before loading it again.” Disagree. The shell does this for you when you use Import-Module. But doc your module dependency in a #requires, and you won’t have to worry about the module at all. In fact, the whole theme of “checking to see if the AD module is loaded” appears to be a major point of commenting. I’m a fan of “easier,” and a one-line #Requires -Modules ActiveDirectory is far easier to write and maintain than, say, an entire function designed specifically to load the ActiveDirectory module.

Judge notes for Event 4

Wow! That’s the only word I can think of to describe the submissions this time. I’m really impressed with the approaches taken to solve this problem. The only thing that could have been better was the handling when the ActiveDirectory module or the Quest snapin wasn’t found. I chalked that up to not having experience with an actual audit, where “no answer” is not acceptable, so I didn’t count it against the scripts when evaluating them. But on this point, kudos to the one script that tested for the AD module, then the Quest snapin, and fell back to the [ADSI] accelerator if neither was found.

Beginner entries

For me, the best entries were those that had the shortest pipelines. Those of you who used Get-Random -Count 20 -InputObject (Get-ADUser…) | Select … | ConvertTo-Html | Out-File had the shortest. And those who used Get-ADUser | Get-Random -Count 20 were a close second.

A couple of entries had something that at first I thought was silly. But, instead, it offers a learning opportunity. Here’s the code fragment: Get-ADUser -Filter {ObjectClass -eq 'User'}. Paying attention to what the cmdlet does saves a lot of typing, not only here, where the filter is redundant, but also when entering other parameters. For example, a similar extra effort occurs when default properties are explicitly listed in a -Properties parameter.
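Side by side, the redundant filter buys nothing:

```powershell
# Get-ADUser already restricts its search to user objects,
# so these two commands return the same set:
Get-ADUser -Filter {ObjectClass -eq 'User'}
Get-ADUser -Filter *
```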

Advanced entries

As mentioned, the best entries were those that fell back to the [ADSI] accelerator when the AD module and the Quest snapin weren’t found. Making this kind of check and fallback is pretty important when responding to audit requests. This reminds me of a case where I actually had to respond to an audit request for the actual last logon date in a domain with mixed W2K3, W2K8, and W2K8R2 domain controllers. The default choice was to use the AD module, but since we had to check each domain controller (there were 72 of them), it turned out to be a real pain determining which method to use on each one. In the end, we decided to install the Quest tools on the audit server and just avoid the issue.

There were several different methods used to verify the presence of the AD module before trying to load it. Most of them were actually more work than really necessary. The reason is that the Import-Module cmdlet does not return an error if the module has already been loaded. Thus, the easiest test would be:

Try {
    Import-Module ActiveDirectory -ErrorAction Stop
    $Users = Get-ADUser ...
}
Catch {
    Write-Error "AD Module not available"
    # Fall back to ADSI to get User data
}

The same is true for Add-PSSnapin in PowerShell 3, but in V2 it generates an error with “because it is already added” in $Error[0].Exception.Message. So you can use something similar to check for that.
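A hedged V2-style sketch (the snapin name is assumed to be the Quest AD snapin; adjust to whatever your environment registers):

```powershell
Add-PSSnapin Quest.ActiveRoles.ADManagement -ErrorAction SilentlyContinue
# Succeeded, or failed only because the snapin was already loaded: either way it's usable.
if (-not $? -and $Error[0].Exception.Message -notmatch 'because it is already added') {
    Write-Error 'Quest snapin not available'
    # fall back to another method here
}
```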

To close out this set of comments, here’s something to think about. The topic is embedded, or local, functions in a master function. Question 1: should you even go through the trouble of writing a local function if it’s only going to be used once? Question 2: since the local function will execute in a controlled environment, does it need to be an advanced function with comments and parameter validation, or would a simple function make more sense?
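For the sake of discussion, here is a hypothetical sketch of the simple-function answer (all names here are made up):

```powershell
function Get-AuditReport {
    [CmdletBinding()]
    Param ($Path)

    # Simple local helper: it only ever runs inside this controlled scope,
    # so parameter validation and comment-based help add little value here.
    function FormatRow ($Line) { $Line.Trim() }

    Get-Content -Path $Path | ForEach-Object { FormatRow $_ }
}
```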

Until next time: keep up the great work!!