Category Archives: PowerShell for Admins


Series – Microsoft Certification the PoSh way – 70-411 – Part 1


Richard’s Log, Stardate 2457170.5

Our destination for today is an introduction to Microsoft Certification, the PoSh way.

As an IT professional, I try to keep myself constantly up to date, and with that comes certification. Currently I'm working on my MCSE: Server Infrastructure certification, and specifically on the preparation for exam 70-411.

All of the Microsoft exams have several objectives, and each objective consists of several tasks you should be able to execute. The exams are heavily GUI-driven, with only the occasional shell question. But as a PowerShell geek, I also want to know how to do the tasks measured in the exam with PowerShell. Is that necessary to pass the exam? No, but it's fun to learn a bit beyond the horizon, right? :)

I am going to write a series of articles, and each article will zoom in on the tasks included in one objective and how to administer them with PowerShell. Hopefully this will not only give you an understanding of the tasks themselves, it will also teach you how to use the existing PowerShell functionality in your daily work.

To start the series, I will write articles on the following objective of exam 70-411:

Deploy, Manage and Maintain Servers

which consists of the following sub-objectives and tasks:

  • Deploy and manage server images
    • Install the Windows Deployment Services (WDS) role; configure and manage boot, install, and discover images; update images with patches, hotfixes, and drivers; install features for offline images; configure driver groups and packages
  • Implement patch management
    • Install and configure the Windows Server Update Services (WSUS) role, configure group policies for updates, configure client-side targeting, configure WSUS synchronization, configure WSUS groups, manage patch management in mixed environments
  • Monitor servers
    • Configure Data Collector Sets (DCS), configure alerts, monitor real-time performance, monitor virtual machines (VMs), monitor events, configure event subscriptions, configure network monitoring, schedule performance monitoring

My purpose is to write one article per sub-objective, to keep the topics tightly scoped. I will need to see how this works out in terms of time, but I will do my best to cover all of the objectives in the exam.

Watch out for the first post in this series, on the sub-objective Deploy and manage server images. In that first article I will zoom in on using the Server Manager cmdlets, the Deployment Services cmdlets, and the Deployment Image Servicing and Management (DISM) cmdlets.
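
As a little teaser, here is a minimal sketch of those three cmdlet families working together (the image path is illustrative, and this assumes Windows Server 2012 R2 with the role and management tools available):

    # Server Manager cmdlets: install the WDS role
    Install-WindowsFeature -Name WDS -IncludeManagementTools

    # Deployment Services cmdlets: list the install images on the WDS server
    Get-WdsInstallImage | Select-Object -Property ImageName, Architecture

    # DISM cmdlets: inspect the images inside an offline WIM file
    Get-WindowsImage -ImagePath 'D:\Images\install.wim'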

Disclaimer: I will also show a lot of screenshots, in which you will see me using ISESteroids. That's just because I love the product. :)

Till next time, and live long and prosper!

New PS Module for working with F5’s LTM REST API


If you use F5's BIG‑IP Local Traffic Manager (LTM) for load balancing, you may find the new PS module I've written helpful. The module uses the REST API in version 11.6 of the LTM to query and manipulate an F5 LTM device. You can add and remove members from a pool, enable and disable them, and find out what pools a member is in, among other things.

I’ve made the module files available here. I welcome all comments.

A few notes: since the module uses the Invoke-WebRequest cmdlet, PowerShell 3 or higher is required. Also, since some F5s use self-signed certificates, and Invoke-WebRequest is unhappy if part of the certificate chain isn't trusted, I've included a dependency on Jaykul's PS module TunableSSLValidator, which allows for temporarily ignoring certificate errors. If you're using a trusted certificate chain, you don't need the TunableSSLValidator module and can remove the -Insecure flags from the Invoke-WebRequest calls.
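
To illustrate the pattern (the host name and REST endpoint below are just examples, and the -Insecure switch is added to Invoke-WebRequest by the TunableSSLValidator module, not by PowerShell itself):

    # Query the LTM pools on a device with a self-signed certificate
    Import-Module TunableSSLValidator
    $cred = Get-Credential
    Invoke-WebRequest -Uri 'https://bigip.example.com/mgmt/tm/ltm/pool' -Credential $cred -Insecure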

Cheers,
Joel

6th Dutch PowerShell User Group Event 2015 Recap


Richard’s Log, Stardate 2457169.5

Today’s destination is the Recap for the 6th Dutch PowerShell User Group 2015.

Now in my hotel room near Schiphol, I'm rethinking all the awesome content I saw today, and the great talks from IT professionals like Jeff Wouters, Bartek Bielawski, Ben Gelens, and Stefan Stranger.

And last but not least, the absolute IT rockstar:

  • Jeffrey P. Snover, Distinguished Engineer and Lead Architect for the Windows Server Division, inventor of PowerShell @ Microsoft – ( Twitter / Personal Website )

The day kicked off with a short keynote from Jeff Wouters. He was allowed to give away some prizes from the sponsors, like a discount on software from Sapien, and a free license for the Sapien software suite. He also gave away a license for ISESteroids from Tobias Weltner.

After this it was time for the first session of the day, which was all about the cool stuff you can do with OneGet (now officially called PowerShell Package Manager), a 'package management manager'. It basically allows you to do what has been possible on *nix systems for years: grabbing and installing software packages from either external or internal repositories. For a deep dive on package management I recommend this great MVA course: Package Management and Workflow Automation
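
For a quick taste of what those cmdlets look like (the package name is purely illustrative):

    # Exploring the PackageManagement (OneGet) cmdlets
    Get-PackageProvider                   # list the available package providers
    Find-Package -Name 'SomePackage'      # search the configured repositories
    Install-Package -Name 'SomePackage'   # grab and install the package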

Next up was a session from Bartek Bielawski on making WMF4 act like WMF5, in terms of partial configurations. He showed us a solution based on the AST (Abstract Syntax Tree). Bartek wrote a nice article on the AST a while back; check it out on the Hey, Scripting Guy! Blog. Bartek may share the code he showed us, depending on his company's permission. Keep an eye on this, highly advanced stuff!

After this, Ben Gelens went crazy on the topic of 'Lessons Learned: Introducing DSC Configuration into an existing environment'. He went deep into the security considerations of setting up a secure pull server, with very sleek demos. He showed us how to secure the complete communication between the LCM and the pull server, based on certificates. I think this session was recorded, so hopefully we get the chance to see it online somewhere.

Stefan Stranger went up next, on the topic of Azure extensions, and the DSC extension in particular. This was a nice session on how to use DSC on Azure IaaS VMs. Quite easy and straightforward. I recommend you read more about this in this blog post: Introducing the Azure PowerShell DSC (Desired State Configuration) extension.

Next was a session from Jeff Wouters on a solution he built himself, called 'MONK'. I'm not sure if I can elaborate on this, but it's a very interesting solution for consultants. I recommend you contact Jeff for more about it.

Jeff also showed us a nice demo of ISESteroids, a must-have tool for any serious PowerShell professional! It provides great features like auto-indenting, refactoring, a function browser, and much, much more. Check out Tobias's website: ISESteroids

For the last session of the afternoon we had Bartek again, on using DSC on Linux. Man, what a superb session this was! Bartek showed us how to install the OMI package, in order to be able to talk to a pull server. I recommend you start with this blog post from Bartek: PowerShell DSC for Linux Released

In the meanwhile, Jeffrey Snover also entered the building. I have to say, that was an exciting moment for me, to meet the man in person. Jeffrey is an idol of mine and an inspiring influence on me; the man just does a fantastic job at Microsoft. During dinner we talked a bit about the adoption of PowerShell in general. The point Jeffrey made was that a SysAdmin either learns how to use PowerShell, or gets into the lumber business. 😉 He also had a nice argument on security in PSRemoting: why would you not trust PSRemoting, when you do trust other management products? Something to think about, the next time you have a discussion about security in PSRemoting.

After dinner it was time for Jeffrey to deliver a session. He did it in a Q&A format, which I think works really well in small groups. He fired up PowerShell ISE as his 'notebook' (what else 😉 ), started with the question "who has a question?", and jotted down all the questions that came from the audience.

Jeffrey then elaborated on the questions and wove them into a nice, comprehensive story. Jeffrey can present, for sure! If you want the answers he gave, I'm sorry, but I didn't note them down. One big takeaway from this session: when you buy new server hardware, make sure the NIC is RDMA-enabled! This will greatly improve the overall speed of the hardware.

Last but not least, we got some nice stickers from Jeffrey:


Jeffrey also mentioned the project we are doing to put the PowerShell Manifesto up on Penflip. I uploaded the whole manifesto, and soon you will see the annotations coming from Don Jones and Timothy Warner. You can also make your own contributions: just go to the Penflip project page, Monad PowerShell Annotated, and help us extend this document with explanations and links!

I recommend you go to the next DUPSUG meeting, which should be somewhere in Q4 this year.

That’s it for now folks! Live long and prosper!

Episode 299 – PowerScripting Podcast – Warren Frame on Invoke-Parallel and PoshRSJob


Listen:

In This Episode

Tonight on the PowerScripting Podcast, we talk to Warren Frame

Interview

Chatroom Highlights

<proxb> https://github.com/proxb/PoshRSJob

<MikeFRobbins> PowerShell Function to Create CimSessions to Remote Computers with Fallback to Dcom http://mikefrobbins.com/2014/08/28/powershell-function-to-create-cimsessions-to-remote-computers-with-fallback-to-dcom/

<MikeFRobbins> Targeting Down Level Clients with the Get-CimInstance PowerShell Cmdlet http://mikefrobbins.com/2012/09/20/targeting-down-level-clients-with-the-get-ciminstance-powershell-cmdlet/

<migreene> AppVeyor badge on DSC modules (Pester tests) – https://github.com/PowerShell/xAdcsDeployment

<rcookiemonster> https://t.co/B2cQqErHM7

<MikeFRobbins> Is it Sergei Vorobev http://twitter.com/xvorsx ?

<sp> <pscookiemonster> https://www.crowdcast.io/e/vipug-2015-05

<migreene> http://stevenmurawski.com/powershell/2015/4/youll-pry-the-gui-from-my-cold-dead-hands-1

<migreene> StarWars trailer (whoa! I did not know!)  https://www.youtube.com/watch?v=ngElkyQ6Rhs

<rcookiemonster> http://ramblingcookiemonster.github.io/

<migreene> GitHub pages – https://pages.github.com/

Question

  • Superhero: Omniglot

The Monad Manifesto Annotation Project


Richard’s log, stardate 2457164.5

Today’s destination is the Monad Manifesto Annotation Project.

The idea behind this project is to keep the manifesto intact somewhere on the internet, and to give the community the ability to annotate the various topics in the manifesto. The idea came from Pluralsight author Tim Warner, with the initial annotations being made by Don Jones. Jeffrey Snover gave his permission for this project, but with a big warning: the content can only be shared on the original source page on Penflip, and cannot be hosted anywhere else.

I am already in the process of putting all the chapters from the manifesto into Penflip and applying the right formatting. The idea is to finish this in the coming days. After that, the actual annotation can start.

For more information, check the project page on Penflip:

https://www.penflip.com/powershellorg/monad-manifesto-annotated

Till the next time, live long and prosper.


PowerShell… An exciting frontier…


PowerShell… An exciting frontier…

These are the voyages of a PowerShell adventurer.

Its continuing mission:

To explore strange new cmdlets…

To seek out new modules; new parameters…

To boldly go where no one has gone before!

Richard’s log, stardate 2457163.

Our destination for today is my very first post on PowerShell.org. As you can see from the opening lines, I approach my journey in PowerShell as an exploration into the unknown, just like the crew of Star Trek: The Next Generation did. So far my journey has been a pleasant one, because, you know, exploring PowerShell is a lot of fun! Your exploration should be a lot of fun too, and for that reason I want to share my discoveries and experiences with you. These will help you, I hope, to boldly go where no one has gone before!

About Me, And A Statement

My name is Richard Diphoorn, and I'm an IT professional based in the Netherlands. I have worked in IT for around 14 years now. My daily work consists mostly of automating, scripting, provisioning new servers, and working with System Center, Azure Pack, and SMA. Basically, anything that can be automated is what I work on. I believe in automation; in my opinion it's the mindset every IT professional should have.

When I started working in IT, it was not called IT in the Netherlands; it was called 'automatisering', which is English for 'automation'. And there you have it: the job I'm doing was always meant to be automation. But I still see a lot of 'click-next-admins' around in the field. This term was coined by Jeffrey Snover, and it means administrators who click their way through provisioning and configuration tasks, instead of trying to automate them.

It's my personal quest to get click-next-admins to learn and use PowerShell. I strive for a transitional change in the admin's life, by giving them new perspectives on how to 'do' things.

For sure I am not the person who possesses all the knowledge, but at least I want to share my passion, and the knowledge I have built up so far, with the people who are open to it. And with this I invite you to join me in this exploration into 'strange new cmdlets'. 😉

A Small Introduction

So, with this personal statement I kick off this first post. Our first mission is to explore what this thing 'PowerShell' actually is, what kind of advantages it brings you, and why it's here to stay.

I assume 'level 200' as the baseline for my audience, so I will not go into the very basics of how to operate a Windows operating system. I try to avoid unclear technobabble as much as possible, but I don't want to oversimplify things either. I try to make it as simple as possible, but not simpler (where did we hear that before… hmmm…).

Monad, A Brief History Of Time.

If you are like me, you probably bought a little book called 'Monad', written by Andy Oakley, quite some years back (if I remember correctly, I bought it in late December 2005). I saw this little book on a bookshelf in Waterstones in London, and I bought it because I had heard of MSH and wanted to learn more about it.

I was hooked. 100%

I still encourage people to read this book, because a lot of the information in it is still relevant in terms of concepts. Topics like the pipeline, the Verb-Noun syntax, cmdlets, repeatability, and consistency have not changed since the first version of Monad that the PowerShell team released. This is also the reason why you still see 'C:\windows\system32\WindowsPowerShell\v1.0' on all Windows OSes to date: the underlying architecture did not change. As we continue to explore, you will see what I mean.
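
If those concepts are new to you, here is a quick illustration of the Verb-Noun syntax and the object pipeline, the two ideas that have stayed constant from Monad until today:

    # Every command is a Verb-Noun pair, and the pipeline passes objects, not text
    Get-Service |
        Where-Object Status -eq 'Running' |
        Sort-Object DisplayName |
        Select-Object -First 5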

This book explains the very first basic concepts, but to really get into the dirt, I encourage you to read the Monad Manifesto, written by Monad's architect, Jeffrey Snover. This manifesto explains the long-term vision behind Monad, and describes many elements that persist in PowerShell today. It is a really comprehensive document on how Jeffrey first saw the big problems in the way administrators did their work.

He explains new approaches to different kinds of models, and how existing problems can be solved with these new approaches. This document will also contribute to your thinking the 'DevOps' way, because many concepts in it contribute directly to the way you should 'do' DevOps. For example, Jeffrey talks about 'prayer-based parsing', which is in direct conflict with predictability in certain DevOps scenarios.

You need to be able to predict what happens when you go from testing to production. In all cases, deus-ex-machina situations need to be prevented; you always need to know what is happening and why. In my opinion, DevOps is nothing more than being really good at automating stuff, and PowerShell gives you that possibility.

So, what is PowerShell, and how do I benefit from it?

PowerShell is basically a shell (a black box in which you can type 😛 ), in which you can interact with every aspect of the operating system in either an interactive or a programmable manner.

You type commands in a specific format in this window, and magic happens. This is the simple explanation. Now the rest…

The concept of a shell in which you can manipulate the whole Windows system, interactively or through scripts, with common syntax and semantics, was for me a really powerful and inspiring idea. This new shell helped me work more efficiently, more effectively, and with more fun. It enabled me to explore the whole OS, to boldly go where I had never gone before!

This concept is not new to mainframe operators and *nix admins; it's something they have been used to for a long time. If you doubt that working on a command line is a good thing, go and talk to the *nix admins in your company. They will happily show you how fast they can work, I'm sure!

So, as a Windows administrator, what kind of benefits do you get from learning PowerShell? There are obvious benefits, like getting stuff done more quickly, and doing it the same way every time so that you never make that one mistake again. A less obvious benefit is that you get to know the OS and apps very well, because sometimes you really dig into the system, really deep. This level of knowledge can and will benefit you in understanding how a system works and how to resolve problems. It hugely contributes to your personal success in your career, because you are the top-notch engineer. You will be the Geordi La Forge of your company, so to say. :)

PowerShell is dead, long live PowerShell!

PowerShell is here to stay, rest assured. Almost all products made by Microsoft can be manipulated with PowerShell in one way or another, either through a direct API to the product itself or through a REST interface. A lot of products from third-party suppliers also support PowerShell, like VMware, NetApp, and Citrix. PowerShell is really becoming (or already is) a commodity; I actually advise customers to only buy products that can be manipulated with PowerShell.

Be honest here: if a product cannot be automated, how does it contribute to the WHOLE business? The business thrives on efficient processes, and if all IT processes are efficient, the business profits hugely from that.

Every company I have worked at so far in my IT career makes use of Microsoft software. I believe in the best tools for the job, and PowerShell is such a tool. It's duct tape, WD-40, and a Swiss Army knife in one: whatever you want to do, PowerShell can do it (and better).

PowerShell is here to stay, my fellow IT pros; embrace it fully and enjoy the voyage!

I want to thank the crew at PowerShell.org for giving me the opportunity to blog on this site!

Till next time, when we meet again.

Survey Results: Source Control for the IT Professional


First off – thank you to everyone who participated in the version control survey!

We've had a fun few weeks – somehow the PowerShell Summit, Build, and Ignite were scheduled back-to-back-to-back. Among a host of other announcements and tidbits, we found out that Microsoft has open sourced the DSC resources on GitHub and that Pester will be included in Windows, and we saw a cool demonstration from Steven Murawski on using Test Kitchen to test DSC resources.

These and other solutions and technologies are starting to assume you know how to use source control, and many require having a source control solution in place – how do you automate testing and deployment on a commit, if you have nothing to commit to?

Source control has long been an important component of IT, but it seems IT professionals, particularly those in Microsoft environments, aren’t consistently using it.

You might expect a gap between IT professionals and developers, but less than 50% of IT pro respondents used source control as a team.

Breaking down the IT professional population by environment, we see that Microsoft environments are even further behind. Many PowerShell aficionados work on teams that aren’t using version control.

Long story short? IT professionals, management, and vendors have work to do; these new tools and ideas that rely on source control are great, but we need to work on finding a horse for the cart. The rest of my rambling analysis can be found here.

If you want to get up and running quickly, consider using GitHub for your PowerShell projects. You can start with the easy-to-use GUI client, and drop into the command line when you want to get your hands dirty. It’s a great way to start learning about source control, and to get involved in the community.
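
If the command-line part sounds intimidating, a handful of commands covers most day-to-day use. Here's a minimal sketch from a PowerShell prompt (the file name and commit message are illustrative, and 'master' was GitHub's default branch name at the time of writing):

    git init                               # start tracking a project
    git add .\Get-Inventory.ps1            # stage a new script
    git commit -m 'Add inventory script'   # record the change
    git push origin master                 # publish it to GitHub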

Do you have any suggestions on how we can get to a place where using source control is common place for IT professionals? Is this a worthwhile goal? Sound off in the comments!

Survey: Source Control for the IT Professional [Results in]


Edit: The results are in.

I was watching Don and Jeffrey’s PowerShell Unplugged session from Ignite the other day, and something stood out.

At 30 minutes in, Don asked the crowd whether they were using source control. Based on the video, the crowd wasn’t big on source control.

I work in IT. If I asked that same question at work, I would likely get a similar response. Why is that? Source control is incredibly important and can drive a number of other processes, yet it seems to be an afterthought for many IT professionals.

I drafted a quick, informal survey on source control for IT professionals. If you have a moment, I would love to see your responses. Stay tuned for a rough analysis and write-up of the results [Edit: Results are in].

Cheers!

Setting up the PowerShell.org DSC tools from Github


I have created a short blog series about how to set up the DSC tooling from the PowerShell.org DSC repository, with the mindset of contributing changes.

  1. Test-HomeLab -InputObject ‘The Plan’
  2. Get-Posh-Git | Test-Lab
  3. Get-DSCFramework | Test-Lab
  4. Invoke-DscBuild | Test-Lab
  5. Test-Lab | Update-GitHub

-David Jones

Dealing with the Click-Next-Admin


I had a good deal of yard work to do this weekend; I see yard work the way a click-next-admin sees Windows PowerShell: I want no part of it. So I wrote a quick bit on how we can deal with the click-next-admin.

Jeffrey Snover recently gave a TechDays Online session where he candidly asked us to “make today the last day you hire a click next admin.”


This is a fantastic goal, but how do we get there? There’s no set answer, but I listed out some of the major challenges I see.

Would love to hear your feedback and ideas – flip through the post and stop back here to discuss!

If you’d like to have some fun, share your click-next-admin stories on twitter with the #ClickNextAdmin tag.


Aside: Thank you for the invite to contribute here, it’s an honor.

Cheers!

Why is Remoting Enabled by Default on Windows Server?


There was a brief and lively discussion on Twitter recently stemming from someone asking for advice on how to convince management to turn on Remoting.

“Fire Management, if they have to ask” was apparently not an option, although it should have been. I mean, at this stage, you either know the value of PowerShell and its Remoting technology, or you’re being willfully ignorant.

But that wasn’t where the discussion got lively.


Management Information: The OMI/CIM/WMI/MI/DMTF Dictionary


Not too long ago, over on DonJones.com, I wrote an article that tried to clear up some of the confusion around Microsoft's world of management instrumentation – e.g., WMI, OMI, CIM, and a bunch of other acronyms. I glossed over some of the finer details, and this article is intended to provide more specificity and accuracy – thanks to Microsoft's Keith Bankston for helping me sort things out.

CIM and the DMTF

Let us begin with CIM. CIM stands for Common Information Model, and it is not a tangible thing. It isn’t even software. It’s a set of standards that describe how management information can be represented in software, and it was created by the Distributed Management Task Force (DMTF), an industry working group that Microsoft is a member of.

Old WMI, DCOM, and RPC

Back in the day – we’re talking Windows NT 4.0 timeframe – Microsoft created Windows Management Instrumentation, or WMI. This was a server component (technically, a background service, and it ran on Workstation as well as Server) that delivered up management information in the CIM format. Now, at the time, the CIM standards were pretty early in their life, and WMI complied with what existed at the time. But the standards themselves were silent on quite a few things, like what network communications protocol you’d use to actually talk to a server. Microsoft opted for Distributed Component Object Model, or DCOM, which was a very mainstream thing for them at the time. DCOM talks by using Remote Procedure Calls, or RPCs, also a very standard thing for Windows in those days.

New WMI, WS-MAN, and WinRM

Fast forward a bit to 2012. With Windows Management Framework 3, Microsoft releases a new version of WMI. They fail to give it a unique name, which causes a lot of confusion, but it complies with all the latest CIM specifications. There’s still a server-side component, but this “new WMI” talks over WS-Management (Web Services for Management, often written as WS-MAN) instead of DCOM/RPC. Microsoft’s implementation of WS-MAN lives in the Windows Remote Management (WinRM) service. The PowerShell cmdlets that talk this new kind of WMI all use CIM as part of the noun, giving us Get-CimInstance, Get-CimClass, Invoke-CimMethod, and so on. But make no mistake – these things aren’t “talking CIM,” because CIM isn’t a protocol. They’re talking WS-MAN, which is what the new CIM standard specifies.
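
For example (the computer name is illustrative), this one-liner travels over WS-MAN to the WinRM service on the remote machine:

    # 'New WMI' in action: a CIM cmdlet talking WS-MAN by default
    Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName SERVER01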

Sidebar: From a naming perspective, Microsoft was pretty much screwed with the new cmdlets’ names, no matter what they called them. “Cim” is a terrible part of the noun. After all, the “old WMI” was compliant with the CIM of its day, but it didn’t get to be called CIM. The new cmdlets don’t use any technology called “Cim,” they’re merely compliant with the newest CIM standards. Maybe they should have been called something like Get-Wmi2Instance, or Invoke-NewWmiMethod, but that wasn’t going to make anyone happy, either. So, Cim it is.

OMI

Now, at some point, folks noticed that implementing a full WMI/DCOM/RPC stack wasn’t ever going to happen on anything but Windows. It was too big, too “heavy,” and frankly too outdated by the time anyone noticed. But there was a big desire to have all this CIM-flavored stuff running elsewhere, like on routers, switches, Linux boxes, you name it. So Microsoft wrote Open Management Instrumentation, or OMI. This is basically a CIM-compliant server that speaks WS-MAN, just like the “new WMI.” But it’s really teeny-tiny, taking up just a few megabytes of storage and a wee amount of RAM. That makes it suitable for running on devices with constrained compute capacity, like routers and switches and whatnot. Microsoft open-sourced their OMI server code, making it a good reference item that other people could adopt, build on, and implement.

Under the Hood: Provider APIs

Time to dig under the hood a bit. “Old WMI” got its information from something called the WMI Repository. The Repository, in turn, was populated by many different WMI Providers. These Providers are written in native code (e.g., C++) and only run on Windows. They’re what create the classes – Win32_OperatingSystem, Win32_BIOS, and so on – that we IT ops people are used to querying.

As Microsoft started looking at OMI, and at updating WMI to the newer CIM standards, they realized these old-school Providers weren't hot stuff. First, they were kinda hard to write, which didn't encourage developers to jump on board. They were also kinda huge, relatively speaking, making them less suitable for constrained environments like routers and switches.

So Microsoft came up with a new Application Programming Interface (API) for writing providers, calling it simply Management Instrumentation, or MI. MI providers are easier to write, and a lot smaller. MI providers, at an API level, work under the “new WMI” as well as under OMI. So if you’re getting a router hooked up to all this CIM stuff, you’re going to implement the teeny OMI server, and underneath it you’re going to write one or more MI providers to provide information to the OMI server. MI providers don’t necessarily need a repository, meaning they provide information “live” to the server component. That helps save storage space.

MI providers are also written in native code, which is nice because lots of developers who work with low-level system stuff greatly prefer native code. The client and server APIs are (on Windows, at least) available in native or managed (.NET) versions, so both kinds of developers get access. Providers, though, are always native code.

As an IT ops person, you’ll probably never care what kind of provider you’re using. The “new WMI” on Windows supports both old-style WMI Providers and new-style MI Providers, so developers can pick and choose. Also, Microsoft doesn’t need to go re-do all the work they already did writing providers for “old WMI,” because “new WMI” can continue to use it.

PowerShell Cmdlets

When you’re using Get-CimInstance in PowerShell, by default you’re using “new WMI,” meaning you’re talking WS-MAN to the remote machine. Those commands also have the ability to talk DCOM/RPC, mainly for backward compatibility with machines that either aren’t running WMF3 or later, or that haven’t enabled WinRM (remember, WinRM is what “listens” for the incoming WS-MAN traffic).
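
In practice, that fallback looks something like this (SERVER01 stands in for a legacy machine that only speaks DCOM/RPC):

    # Build a CIM session that talks DCOM/RPC instead of WS-MAN
    $option  = New-CimSessionOption -Protocol Dcom
    $session = New-CimSession -ComputerName SERVER01 -SessionOption $option
    Get-CimInstance -CimSession $session -ClassName Win32_BIOS
    Remove-CimSession $session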

Client API Differences: This Matters

It's massively important that you understand the inherent differences between DCOM/RPC and WS-MAN. Under DCOM, you were basically connected to a "live" object on the remote machine. That meant you could get a WMI instance, execute methods, change properties in some cases, and generally treat it as functioning code. The RPC protocol was designed for that kind of continuous back-and-forth, although it wasn't terribly network- or memory-efficient, because of the "live connection" concept.

WS-MAN, on the other hand, is basically like talking to a web server. Heck, it uses HTTP, even. So when you run Get-CimInstance, your data is generated on the remote machine, serialized into XML, transmitted back in an HTTP stream, and then deserialized into objects on your computer. Those aren't "live" objects; they're not "connected" to anything. That's why they don't have methods. To execute a method, you have to send another WS-MAN request to the machine, which will execute the method and send you any results – which is what Invoke-CimMethod does. The entire relationship between you and the remote machine is essentially stateless, just like the relationship between a web browser and a web server.

So your coding technique has to change a bit as you move from "old WMI" to "new WMI." The good news is that the new, web-style approach is a lot lighter-touch on the server, requiring less network and memory, so it becomes a lot more scalable.
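
A minimal sketch of the difference, using notepad.exe as the guinea pig (this assumes it is actually running on the target machine):

    # Old WMI: a 'live' object, so you can call its methods directly
    $proc = Get-WmiObject -Class Win32_Process -Filter "Name='notepad.exe'"
    $proc.Terminate()

    # New WMI: a deserialized object with no methods; send another WS-MAN request instead
    $proc = Get-CimInstance -ClassName Win32_Process -Filter "Name='notepad.exe'"
    Invoke-CimMethod -InputObject $proc -MethodName Terminate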

Versions

Anything running WMF3 or later (Win2008R2 and later, Win7 and later) has “new WMI.” Microsoft continues to include “old WMI” for backward compatibility, although on newer versions of Windows (I’m playing with Win2012R2), the ports for DCOM/RPC may not be open, while the ports for WS-MAN are, by default. So we’re clearly moving forward.

Enabling WinRM, CIM Remoting, and "New WMI"

Oh, and as a complete side note, a LOT of us in the industry will say stuff like "enable PowerShell Remoting" when we refer to enabling WS-MAN. Technically, that's not accurate. Enabling Remoting, if you do it right, enables WinRM and allows WinRM to pass traffic to PowerShell. It'll also enable most of the other cool stuff we use WS-MAN for, including PowerShell Workflow, the "new WMI" communications for the CIM cmdlets, and so on. But you could also enable the "new WMI" stuff without turning on PowerShell Remoting. At the end of the day, though, turning on Remoting is just the Right Thing To Do, so why not make life easy and turn it all on at once?
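
Which, for the record, is a one-liner from an elevated prompt; Test-WSMan then confirms the listener is answering:

    Enable-PSRemoting -Force   # enables the WinRM service and registers the PowerShell endpoints
    Test-WSMan                 # verifies that the WS-MAN listener responds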

Summary

OLD WMI: Uses DCOM/RPC. Uses old-style native code providers and a repository. Available only on Windows. More or less deprecated, meaning it’s not a focus area for further improvement or development. You’re connected to “live” objects and can play with them.

NEW WMI: Uses WS-MAN (via WinRM service). Supports old-style native code providers and a repository, as well as new-style MI providers. Available only on Windows. The way forward. If something can talk to “NEW WMI” it should be able to talk to OMI, also. You’re not connected to “live” objects, and have an essentially stateless relationship with the remote machine.

OMI: Uses WS-MAN (OMI code includes the protocol stack). Supports only new-style MI providers. Available on any implementing platform. Also the way forward. If something can talk to OMI, it should be able to talk to “NEW WMI” also.

CIM: Defines the standard. Created by the DMTF. Early versions were implemented as "OLD WMI" by Microsoft; the newest version is implemented in both "NEW WMI" and OMI, by Microsoft and others.

And if you prefer summaries by layer:

SERVER (or, the bit that serves up the info, which could technically be a client device like a laptop) uses PROVIDERS (either old-style WMI, new-style MI, or both) to generate management information. If the SERVER is a non-Windows device, it would run OMI and only support new-style MI providers.

CLIENT (the machine doing the querying) uses either old-style WMI (DCOM/RPC) or new-style (WS-MAN) to send requests to SERVER and to receive the results. CLIENT doesn’t care what API was used to write the providers running on the server, because the server makes the information all look the same. If CLIENT queries a SERVER that only supports WS-MAN, then CLIENT must obviously use WS-MAN.

Hope that helps.