Ever wonder what it’s like to attend PowerShell Summit? Attendee Tommy Maynard blogged about his entire experience – including the build-up anticipation prior to the event – and it’s a great set of reads. Check it out.
There was a brief and lively discussion on Twitter recently stemming from someone asking for advice on how to convince management to turn on Remoting.
“Fire Management, if they have to ask” was apparently not an option, although it should have been. I mean, at this stage, you either know the value of PowerShell and its Remoting technology, or you’re being willfully ignorant.
But that wasn’t where the discussion got lively.
Not too long ago, over on DonJones.com, I wrote an article that tried to clear up some of the confusion surrounding Microsoft’s world of management instrumentation – WMI, OMI, CIM, and a bunch of other acronyms. I glossed over some of the finer details, and this article is intended to provide more specificity and accuracy – thanks to Microsoft’s Keith Bankston for helping me sort things out.
CIM and the DMTF
Let us begin with CIM. CIM stands for Common Information Model, and it is not a tangible thing. It isn’t even software. It’s a set of standards that describe how management information can be represented in software, and it was created by the Distributed Management Task Force (DMTF), an industry working group that Microsoft is a member of.
Old WMI, DCOM, and RPC
Back in the day – we’re talking Windows NT 4.0 timeframe – Microsoft created Windows Management Instrumentation, or WMI. This was a server component (technically, a background service, and it ran on Workstation as well as Server) that delivered up management information in the CIM format. Now, at the time, the CIM standards were pretty early in their life, and WMI complied with what existed at the time. But the standards themselves were silent on quite a few things, like what network communications protocol you’d use to actually talk to a server. Microsoft opted for Distributed Component Object Model, or DCOM, which was a very mainstream thing for them at the time. DCOM talks by using Remote Procedure Calls, or RPCs, also a very standard thing for Windows in those days.
New WMI, WS-MAN, and WINRM
Fast forward a bit to 2012. With Windows Management Framework 3, Microsoft releases a new version of WMI. They fail to give it a unique name, which causes a lot of confusion, but it complies with all the latest CIM specifications. There’s still a server-side component, but this “new WMI” talks over WS-Management (Web Services for Management, often written as WS-MAN) instead of DCOM/RPC. Microsoft’s implementation of WS-MAN lives in the Windows Remote Management (WinRM) service. The PowerShell cmdlets that talk this new kind of WMI all use CIM as part of the noun, giving us Get-CimInstance, Get-CimClass, Invoke-CimMethod, and so on. But make no mistake – these things aren’t “talking CIM,” because CIM isn’t a protocol. They’re talking WS-MAN, which is what the new CIM standard specifies.
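As a quick sketch of what those CIM cmdlets look like in practice (SERVER01 is a placeholder name, and the remote call assumes WinRM is enabled on the target):

```powershell
# Query the local machine's OS info via the CIM cmdlets ("new WMI")
Get-CimInstance -ClassName Win32_OperatingSystem |
    Select-Object Caption, Version, LastBootUpTime

# Against a remote machine, the same call travels over WS-MAN by default
Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName SERVER01
```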
Sidebar: From a naming perspective, Microsoft was pretty much screwed with the new cmdlets’ names, no matter what they called them. “Cim” is a terrible part of the noun. After all, the “old WMI” was compliant with the CIM of its day, but it didn’t get to be called CIM. The new cmdlets don’t use any technology called “Cim,” they’re merely compliant with the newest CIM standards. Maybe they should have been called something like Get-Wmi2Instance, or Invoke-NewWmiMethod, but that wasn’t going to make anyone happy, either. So, Cim it is.
Now, at some point, folks noticed that implementing a full WMI/DCOM/RPC stack wasn’t ever going to happen on anything but Windows. It was too big, too “heavy,” and frankly too outdated by the time anyone noticed. But there was a big desire to have all this CIM-flavored stuff running elsewhere, like on routers, switches, Linux boxes, you name it. So Microsoft wrote Open Management Infrastructure, or OMI. This is basically a CIM-compliant server that speaks WS-MAN, just like the “new WMI.” But it’s really teeny-tiny, taking up just a few megabytes of storage and a wee amount of RAM. That makes it suitable for running on devices with constrained compute capacity, like routers and switches and whatnot. Microsoft open-sourced their OMI server code, making it a good reference item that other people could adopt, build on, and implement.
Under the Hood: Provider APIs
Time to dig under the hood a bit. “Old WMI” got its information from something called the WMI Repository. The Repository, in turn, was populated by many different WMI Providers. These Providers are written in native code (e.g., C++) and only run on Windows. They’re what create the classes – Win32_OperatingSystem, Win32_BIOS, and so on – that we IT ops people are used to querying.
As Microsoft started looking at OMI, and at updating WMI to the newer CIM standards, they realized these old-school Providers weren’t hot stuff. First, they were kinda hard to write, which didn’t encourage developers to jump on board. They were also kinda huge, relatively speaking, making them less suitable for constrained environments like routers and switches.
So Microsoft came up with a new Application Programming Interface (API) for writing providers, calling it simply Management Instrumentation, or MI. MI providers are easier to write, and a lot smaller. MI providers, at an API level, work under the “new WMI” as well as under OMI. So if you’re getting a router hooked up to all this CIM stuff, you’re going to implement the teeny OMI server, and underneath it you’re going to write one or more MI providers to provide information to the OMI server. MI providers don’t necessarily need a repository, meaning they provide information “live” to the server component. That helps save storage space.
MI providers are also written in native code, which is nice because lots of developers who work with low-level system stuff greatly prefer native code. The client and server APIs are (on Windows, at least) available in native or managed (.NET) versions, so both kinds of developers get access. Providers, though, are always native code.
As an IT ops person, you’ll probably never care what kind of provider you’re using. The “new WMI” on Windows supports both old-style WMI Providers and new-style MI Providers, so developers can pick and choose. Also, Microsoft doesn’t need to go re-do all the work they already did writing providers for “old WMI,” because “new WMI” can continue to use them.
When you’re using Get-CimInstance in PowerShell, by default you’re using “new WMI,” meaning you’re talking WS-MAN to the remote machine. Those commands also have the ability to talk DCOM/RPC, mainly for backward compatibility with machines that either aren’t running WMF3 or later, or that haven’t enabled WinRM (remember, WinRM is what “listens” for the incoming WS-MAN traffic).
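Here’s a hedged sketch of that DCOM/RPC fallback in action, using a CIM session (SERVER01 is a placeholder for an older machine that doesn’t have WinRM listening):

```powershell
# By default, CIM sessions talk WS-MAN; for older machines you can
# explicitly fall back to DCOM/RPC with a session option
$dcom    = New-CimSessionOption -Protocol Dcom
$session = New-CimSession -ComputerName SERVER01 -SessionOption $dcom

# Any CIM cmdlet can then reuse that session
Get-CimInstance -ClassName Win32_BIOS -CimSession $session

Remove-CimSession $session
```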
Client API Differences: This Matters
It’s massively important that you understand the inherent differences between DCOM/RPC and WS-MAN. Under DCOM, you were basically connected to a “live” object on the remote machine. That meant you could get a WMI instance, execute methods, change properties in some cases, and generally treat it as functioning code. The RPC protocol was designed for that kind of continuous back-and-forth, although it wasn’t terribly network- or memory-efficient, because of the “live connection” concept.
WS-MAN, on the other hand, is basically like talking to a web server. Heck, it uses HTTP, even. So when you run Get-CimInstance, your data is generated on the remote machine, serialized into XML, transmitted back in an HTTP stream, and then deserialized into objects on your computer. Those aren’t “live” objects; they’re not “connected” to anything. That’s why they don’t have methods. To execute a method, you have to send another WS-MAN request to the machine, which will execute the method and send you any results – which is what Invoke-CimMethod does. The entire relationship between you and the remote machine is essentially stateless, just like the relationship between a web browser and a web server. So your coding technique has to change a bit as you move from “old WMI” to “new WMI.” The good news is that the new, web-style approach is a lot lighter-touch on the server, requiring less network and memory, so it becomes a lot more scalable.
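To make that coding-technique change concrete, here’s a side-by-side sketch using the BITS service as an example (run locally; both halves assume Windows):

```powershell
# "Old WMI": the object is live, so you can call its methods directly
$svc = Get-WmiObject -Class Win32_Service -Filter "Name='BITS'"
$svc.StartService()

# "New WMI": the object is deserialized and method-less, so you send
# a second WS-MAN request instead - that's what Invoke-CimMethod is for
$svc = Get-CimInstance -ClassName Win32_Service -Filter "Name='BITS'"
Invoke-CimMethod -InputObject $svc -MethodName StartService
```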
Anything running WMF3 or later (Win2008R2 and later, Win7 and later) has “new WMI.” Microsoft continues to include “old WMI” for backward compatibility, although on newer versions of Windows (I’m playing with Win2012R2), the ports for DCOM/RPC may not be open, while the ports for WS-MAN are, by default. So we’re clearly moving forward.
WinRM, Remoting, and “New WMI”
Oh, and as a complete side note, a LOT of us in the industry will say stuff like “enable PowerShell Remoting” when we refer to enabling WS-MAN. Technically, that’s not accurate. Enabling Remoting, if you do it right, enables WinRM, and enables WinRM to pass traffic to PowerShell. It’ll also enable most of the other cool stuff we use WS-MAN for, including PowerShell Workflow, the “new WMI” communications for CIM cmdlets, and so on. But you could also enable the “new WMI” stuff without also turning on PowerShell Remoting. At the end of the day, though, turning on Remoting is just the Right Thing To Do, so why not make life easy and turn it all on at once?
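Turning it all on at once is a one-liner (run from an elevated prompt on the machine you want to manage):

```powershell
# Configures the WinRM service, creates the WS-MAN listener,
# and sets up the firewall exceptions in one go
Enable-PSRemoting -Force

# Quick sanity check that WS-MAN is answering
Test-WSMan -ComputerName localhost
```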
OLD WMI: Uses DCOM/RPC. Uses old-style native code providers and a repository. Available only on Windows. More or less deprecated, meaning it’s not a focus area for further improvement or development. You’re connected to “live” objects and can play with them.
NEW WMI: Uses WS-MAN (via WinRM service). Supports old-style native code providers and a repository, as well as new-style MI providers. Available only on Windows. The way forward. If something can talk to “NEW WMI” it should be able to talk to OMI, also. You’re not connected to “live” objects, and have an essentially stateless relationship with the remote machine.
OMI: Uses WS-MAN (OMI code includes the protocol stack). Supports only new-style MI providers. Available on any implementing platform. Also the way forward. If something can talk to OMI, it should be able to talk to “NEW WMI” also.
CIM: Defines the standard. Created by DMTF. Early versions were implemented as “OLD WMI” by Microsoft, newest version implemented both in “NEW WMI” and OMI by Microsoft and others.
And if you prefer summaries by layer:
SERVER (or, the bit that serves up the info, which could technically be a client device like a laptop) uses PROVIDERS (either old-style WMI, new-style MI, or both) to generate management information. If the SERVER is a non-Windows device, it would run OMI and only support new-style MI providers.
CLIENT (the machine doing the querying) uses either old-style WMI (DCOM/RPC) or new-style (WS-MAN) to send requests to SERVER and to receive the results. CLIENT doesn’t care what API was used to write the providers running on the server, because the server makes the information all look the same. If CLIENT queries a SERVER that only supports WS-MAN, then CLIENT must obviously use WS-MAN.
Hope that helps.
We offered our first in-person, proctored VERIFIED EFFECTIVE exam at PowerShell Summit in April 2015, located in Charlotte, NC. While the exam is not intended as a diagnostic or learning tool, there are definitely some observations I can share from glancing through some of the submissions so far.
First, the exam isn’t easy. 31 people signed up to take it (our room capacity; more would have if we’d had space), and only 12 turned in submissions. Of those, fewer than 5 are probably going to pass by the end of the grading process. With that in mind, some observations:
- If you don’t know what [CmdletBinding(SupportsShouldProcess=$True)] does, then you shouldn’t be using it. It should never be used in a cmdlet that merely queries information and doesn’t make changes to the system. It isn’t boilerplate that should be included in every function, and it has nothing to do with the PROCESS script block.
- If you don’t understand ValueFromPipeline and ValueFromPipelineByPropertyName, then you need to learn.
- If you’re using aliases like % in a function, you’re not creating a readable, maintainable script. Avoid aliases, especially ones that don’t immediately communicate the task being completed. Dir might be acceptable; ? not so much.
- If you’re not neatly indenting your constructs, your script is not going to be readable.
- Creating a parameter that accepts a limited set of values (say, “foo” and “bar”) doesn’t create internal variables with those names (e.g., $foo and $bar). Don’t confuse parameter names with their values.
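To illustrate the first two points above, here’s a minimal sketch of how SupportsShouldProcess and pipeline binding are meant to work together (the function and parameter names are made up for the example):

```powershell
function Stop-AppService {
    # SupportsShouldProcess belongs here because this function CHANGES state;
    # it enables -WhatIf and -Confirm, and you must call ShouldProcess yourself.
    # A query-only function would omit it entirely.
    [CmdletBinding(SupportsShouldProcess=$True)]
    param(
        # Accepts strings directly from the pipeline (ValueFromPipeline),
        # or objects whose ComputerName property matches by name
        # (ValueFromPipelineByPropertyName)
        [Parameter(Mandatory=$True,
                   ValueFromPipeline=$True,
                   ValueFromPipelineByPropertyName=$True)]
        [string[]]$ComputerName
    )
    PROCESS {
        foreach ($computer in $ComputerName) {
            if ($PSCmdlet.ShouldProcess($computer, 'Stop service')) {
                # ... make the change here ...
            }
        }
    }
}
```

Run with -WhatIf and nothing changes; the shell just reports what would happen – which is the whole point of wiring up ShouldProcess correctly.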
In the end analysis, there’s a difference between being able to hack out a working script, and being able to create a professional, maintainable tool that complies with PowerShell’s native practices and patterns. If you’re to the point where you’re able to hack out a working script, take a next step by reading something like The Community Book of PowerShell Practices (available for free), or solidify your skills and understanding through a book like (gratuitous plug) Learn PowerShell Toolmaking in a Month of Lunches.
Most of the non-passing submissions we’re seeing have simple mistakes – for example, including a static computer name in a verbose message, rather than inserting the name of the currently-processing computer. Or creating a CIMSession, but then not using it (forcing a later command to spin up a second session). In other instances, we saw poor practices (like globally and unnecessarily setting $ErrorActionPreference, suggesting a lack of understanding about the more specific -ErrorAction). There were also a few instances where a lack of attention to detail – or perhaps simply running out of time – was a problem, such as failing to define a needed parameter, or defining a ValidateSet() with incorrect values.
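The CIMSession mistake is worth spelling out: create the session once and hand it to every subsequent command, rather than letting each command open its own connection (SERVER01 is a placeholder):

```powershell
# One session, reused by every CIM command that needs it
$session = New-CimSession -ComputerName SERVER01

Get-CimInstance -ClassName Win32_OperatingSystem -CimSession $session
Get-CimInstance -ClassName Win32_BIOS            -CimSession $session

# Clean up when you're done
Remove-CimSession $session
```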
We’re going to be removing one of our VERIFIED EFFECTIVE exam scenarios from production use, and turning that into an “example scenario” that you can use to self-assess your toolmaking skills. Look for that in the next few weeks. We’ll continue offering in-person proctored exams at PowerShell Summit, with Europe 2015 in Stockholm being our next go. In 2016, look for us to expand the program with more capacity (so more people can sit the exam), and for us to eventually offer a DSC-related exam.
In the meantime, anyone with a VERIFIED EFFECTIVE certificate has indeed completed a challenging, practical exam that shows they are definitely effective toolmakers, capable of building professional-grade tools that are consistent with PowerShell’s native use patterns. Thus far, fewer than 20 certificates have been earned.
Doug Finke has written an awesome article – complete with a module! – to help get data into Excel spreadsheets.
When Microsoft first released the DSC Resource Kit (in Wave 10 as of this writing), they opened the door to community contributions. Our own PowerShell.org GitHub repo consists partly of DSC resources that used Microsoft’s code as a baseline, and then corrected problems or expanded capabilities.
What we never had was a way for Microsoft to circle back, pick up those enhancements, and include them as part of an official future Resource Kit Wave. Now, we do.
We’re announcing a venue change for PowerShell Summit Europe 2015. Although we’re very appreciative to Microsoft for offering the use of their office in Kista, our registration velocity warrants a larger venue, and gives us the opportunity for a more central location.
The dates have not changed. We will be at the Scandic Klara hotel, which is near the HTL Kungsgatan; both have sleeping rooms available as of this writing. Both are as close as we can get to Stockholm Central station, and both are near a tram line.
We are recommending that attendees reserve sleeping rooms immediately. A government congress at the waterfront convention center has made room inventory tight. Our registration website has been updated with the additional attendee capacity.
Because e-mail these days is actually unreliable, what with spam filters and all, please know that we’re relying on you to keep yourself informed on Summit updates. Following the Summit category on PowerShell.org, and watching the @PSHSummit Twitter account, are the reliable means of doing so.
First: Summit Europe is happening. There was some confusion because a draft blog post from a month ago got resurrected somehow, but the Summit is on.
Second: We’re almost sold out. I think we literally have 2 or 3 seats left. There was a rush over this past weekend.
Third: We’re exploring other venues in Stockholm and Kista, which would afford us more room. I expect to have this pinned down no later than mid-May. The dates will not change, and the Kista area will probably not change. But pay attention so you’re not going to the wrong building. Watching the Summit category and @PSHSummit Twitter page is vital, especially closer-in.
Fourth: Hotel inventory in central Stockholm is dicey because there’s some giant conference at the waterfront conference center. There are rooms available just outside the central area, as well as in Kista. So long as you’re close to a tram line or Metro stop, you’re good to go – the Metro will be able to get you to whatever venue we select (we’re ensuring that).
Fifth: That is all. Have a good week :).
Registration for PowerShell Summit Europe will commence on February 27th, 2015 at roughly 12:01am server time (I believe the server is in a Pacific time Azure datacenter). We will be limited to roughly 100 attendees.
I want everyone to understand the basic rules of engagement for this. Setting up and running this event involves significant financial risk. While in this case the event venue, a Microsoft office in Kista (near Stockholm), Sweden, isn’t charging us huge fees and requiring us to commit to hotel rooms and the like, there is still risk. Most of that risk is borne not by PowerShell.org, but by me personally. Our speakers also commit to covering their own travel expenses (something we’re hoping to offset this year). In addition, PowerShell team members are taking time away from the product to attend, which is a huge logistical commitment because it’s such a relatively small team.
For the Europe 2014 event, we had very poor registration numbers almost until the last minute. We also had to work very hard to drum up topic submissions from European speakers. Those two facts worry us a lot, because it suggests that there isn’t a strong and engaged community interested in this event. If that’s the case, we don’t want to barge in and run the event at all. As a result, we’re going to be taking a pretty risk-averse approach this time, and I wanted to be up-front and forthright about it.
So: We’re going to evaluate the registration numbers and velocity in mid-April. By then, we need to see at least 20-30 registrations. (We usually achieve that in the first week of registrations for the North American event.) If we’re not hitting that level, then the event is subject to cancellation (and everyone will naturally get a full and complete refund).
Also know that, should we make it past that point, registration will end by August 15th 2015 or when we fill the available space, whichever comes first. In other words, last-minute registration won’t be a thing.
The success of this event depends on the European members of the overall PowerShell community. You need to help get the word out. We aren’t going to be advertising, soliciting Microsoft’s help, or other techniques. This isn’t a commercial conference; it’s being done by the community and for the community – and if the community can’t make it happen, then it won’t happen.
Our agenda will be going online shortly, and you should head to http://PowerShellSummit.org to find the registration links (after reading the introductory material, click “Europe 2015” for details). We’ll get it all posted and ready for February 27th – it won’t be live until then. Help us get the word out. Tell co-workers. Use Twitter, Google+, and Facebook. Attend user group meetings and spread the word. We’ve got about 6 weeks to get 20-30 people signed up to make sure we’re covering base expenses and making this happen.
We have some folks working on the next Scripting Games… but we want some feedback from the community to make sure we’re offering something of value.
The current plan is to run a series of events, with both Beginner and Intermediate tracks. There will be no “advanced” track; the feeling is that, if you’re advanced, you should be helping out by judging ;). Events will be constructed as a combination of puzzles and real-world tasks, meaning some things will simply test your PowerShell skills, while others will test them in a more production-applicable way.
What we need from the community is some sense of what you want to get from the Games. However, before you reply, understand what is NOT on the table: we will not be running an event where every entry gets personal commentary or feedback from an expert judge. It simply isn’t practical – everyone doing the judging has a full-time job, and offering personal feedback just isn’t feasible.
What COULD be on the table is offering a numeric score from a judge, based on the completeness of your entry and what the judge thinks of it. However, if it’s a low score, you’re not going to be told why (“no commentary,” see above). So we’re not sure that numeric scores are useful.
One proposal has been to post the events, and have judges select both good ones and less-good ones to write about. In other words, provide commentary on the outstanding entries, but not EVERY entry. Individual entries wouldn’t receive a score, but you could certainly compare what you did to the outstanding ones that did receive commentary. The idea here is to give you a task on which to test your skills, and to provide some educational feedback on some representative entries. The fact is that, in any given task, we tend to see a lot of similar-looking entries anyway, so hopefully taking some of them and commenting (both positively and constructively) will help everyone “judge” their own entries and improve their skills.
After trying numerous approaches to the Games over the past years, and after listening closely to people’s feedback, we’re trying to come up with something that is both useful and do-able.
What do you think of that proposal? Or, would you offer another proposal for us to build the Games around? Keep in mind – any proposal that suggests “expert commentary on every entry” will simply have to be turned down outright. After major discussion, we simply can’t commit to it. We’ll leave this open for the month of February 2015 – discuss away!
Add to the discussion in the Forums. Login required; not accepting comments on this post.
Over the past few weeks, Matt Penny has been busy moving our free eBooks into their new home on Penflip. Code, when available, is located in our GitHub repo, and modules will soon be available in the PowerShell Gallery for downloading via Install-Module.
Penflip is a Markdown-based editing system backed by GitHub. This means anyone can contribute corrections, additional material, and so on – which will make it easier to maintain these great books over time. You can download ebooks directly from Penflip in a variety of e-book formats. We’re now focused on electronic formats, rather than traditional page-based layout, although PDF is still an available download option if you want to make a hardcopy.
The conversion from Word to Markdown was challenging and largely manual, so if you run across formatting problems (especially with code), we absolutely appreciate your help in fixing those. Simply “branch” the book, creating your own copy of the project. Make corrections, and then submit those back to the master branch. Approvals are manual, so give us a few days to review what you’ve done and merge it into the master.
Massive thanks to Matt for all the long hours making this conversion happen, and to the folks who’ve submitted cover art for the new books.
We’re in the process of migrating our free ebook collection over to Penflip, an online, Git-based collaborative authoring and publishing tool. Matt Penny has taken the lead in converting our Word documents to the Markdown syntax used by Penflip, and as you can see on our ebooks page, most of the titles now have an initial version in Penflip.
One neat thing about Penflip is that anyone can register for a free account, fork one of our projects, and make their own modifications. You can then submit your changes back to the master branch, so we can incorporate your changes into the ebook. This will make it easy for everyone in the community to suggest new content, offer corrections, and so on. I encourage you to help out – right now, you may simply notice some flaws from the semi-automated and fully hellish Markdown conversion, and we’d love your assistance in correcting those.
Penflip also supports on-demand downloads of each ebook in a variety of common formats, including EPUB, PDF, and more. That means you’ll always be able to grab the latest version of your favorite ebook. We’ve not yet migrated the source code that goes with some of the ebooks; the plan is to move those into our GitHub repo over the next week.
Penflip will be enabling the next generation of our ebooks, including a massive new DSC title I plan to begin working on in 2015.
Thanks for any help you can provide, and I hope you continue to find the ebooks helpful!
The folks at Smarterer have agreed to let us – that’s all of us, as in “The PowerShell Community” – build a sort of “exam” for people to prove their PowerShell Proficiency. And I need your help to do it!
Step 1, you need to be pretty decent with PowerShell yourself. Not Level 12 Guru Level, mind you, but you should be working with it daily. Most of this book should make sense to you.
Step 2, you need to download my Quiz Question Writing Guide (it’s all of one page) and Topic List. PowerShell Quiz Guidelines is the download. Go on, I’ll wait.
Step 3, you need to sign up, using your e-mail address, and let me know you’re interested in helping. What you’re volunteering to do is, over the course of February 2015, write at least 20 questions. That’s about 2 questions per category. You’re also agreeing to help peer-review the questions other folks write, so we can spot the stinkers. Signups are due by January 20th 2015.
Go here to register!
BTW, 20 questions total is only about 1 per day. You could totally do 5 per day if you made an effort. Think about PowerShell questions you’d ask during a job interview, to tell if someone knew their stuff or was merely a poser. We cannot have too many good questions.
Now for the good news: there are prizes! Pluralsight is offering prizes to the top net question contributors (“net contributor” means the number of questions you write that survive peer review and are accepted by the Quiz Captain).
- 1st place: $200 Amazon gift card and 6 months of access to the entire Pluralsight library
- 2nd place: $100 Amazon gift card and 3 months of access to the entire Pluralsight library
- 3rd place: $50 Amazon gift card and 1 month of access to the entire Pluralsight library
We’re also looking for a Quiz Captain, so when you register, indicate if you’re willing to take on that role. There’s only one, and you’re exempt from the prize (that’s what you get for stepping up). You’re in charge of final acceptance on all questions that go into the final pool – not so much for technical accuracy, but for being well-written.
Disclosures: You’ll be using an online authoring tool called Flock, which means your registration e-mail address (which you provide) will be provided to Smarterer, so they can load you into the tool and send you an access invite via e-mail. Your e-mail will also be used to contact you about the project, and regarding any prizes you may earn.
WHY? Well, the idea is that we’re all getting to a point where we’ll need to hire PowerShell sk1llz. Rather than us all concocting our own job interviews, this’ll act as a kind of central, crowdsourced job interview you could direct a job candidate to. Yes, some of you will also ask for a more in-depth interview, perhaps offering a coding challenge or something – that’s awesome. This is just the first stage you could use. The exam will be available free of charge to anyone who wants to take it, anytime, ever. And it can be updated and evolved as the technology, and our business needs, evolve.
Fancy yourself a graphics person? Just like to doodle?
We’re holding a contest to create new covers for our various ebooks. Winners will receive absolutely nothing, other than a cover credit within the text (hey, we’ll also give you a full set of the ebooks for free, what the heck).
- Covers must include the book title, and should include the PowerShell.org logo. The logo is below.
- Don’t include author names in the artwork. Authors are credited on the book’s “About” page.
- Images must be 8.5″ wide by 11″ high, preferably at 300dpi, in PNG or JPG format (see these specifications if you need that sizing in pixels).
- Don’t include art, photos, or any other elements that you yanked off the Internet, including Microsoft imagery, unless you can provide us with written permission from the copyright holder to use it.
You can submit a series for all the books, or just covers for the book or books you like best.
Be serious. Have fun. Whatever! Send submissions via e-mail to Admin, right here at PowerShell.org. We’ll let you submit until the end of January 2015, and we’ll pick the best selections we have at the time.
As of this post, PowerShell Summit North America 2015 is full, and registration has been cut off. We’re taking some time to confirm our numbers and venue capacity; if we’re able to open additional seats, that will happen in January 2015. We will allow any additional capacity to be registered until one month prior to the Summit, or until it sells out, whichever comes first. We do not maintain a waiting list; please check here and on the @PSHSummit Twitter feed for any announcements.
For those already registered, we do not have any official hotel recommendations. You’re welcome to use the Summit Forum to see where others are staying, or to arrange for carpooling or other stuff. We certainly encourage all attendees to check the Forum for Q&A and other discussion – it’s never too early to start getting involved. On the hotel front, just look for hotels in downtown Charlotte, or near Microsoft Charlotte, based on your preferences. The reason there’s no official hotel is that there are numerous business-class hotels nearby, and after a close call last year we didn’t want to take the financial risk of booking out a room block.
Our intent at this time is to book the venue to fire code capacity, which is why we may be able to open additional slots after we confirm everything. That means both venue rooms will be full at all times. You will not be permitted to stand or sit in the aisles, back of the room, or block the doorways. If the session you hoped to attend is full, you’ll need to go to the other one. Keep in mind we’re recording everything, so you won’t miss out entirely.
The last time slot on each of the three days will have only a single session. We’ll position the speaker in one of the two rooms, and we’ll live-stream to the other room. This is where we plan to put Jeffrey Snover’s talks, both to accommodate what has historically been high interest in his sessions, and to accommodate his total inability to do a session in only 45 minutes :). If you don’t get a chair in the “live” room, you’ll need to join from the “overflow” room.
The two rooms are actually in different buildings, separated from each other by a driveway/courtyard arrangement. We’re suggesting that you not bring your ginormous 21″ laptop, since it’ll just drag you down moving between sessions. Maybe stick with a Surface if you want to take notes and stuff. Although we’re recording everything, so… you know. Maybe just enjoy the session.
Lunches will be taken in the session rooms, with buffet setups in the hallways just outside each room.
Stay tuned for further details, and please use the Summit forum to ask questions.