
PowerShell Summit Recordings

We’re nearing the end of our IndieGoGo fundraising campaign, and are pleased to announce that we’ve met our funding goal. Including offline donations, we will be able to purchase enough equipment to record the PowerShell Summit sessions and post those recordings to YouTube. We’ll begin with the Europe event in September!

This is a truly humbling and momentous milestone for PowerShell.org. The generosity of the PowerShell community has simply been amazing. We’ve had five donations of $1,000, which helped push us over our goal incredibly quickly. Combined with numerous smaller donations, you’ve made it possible to record and preserve the unique content offered at the Summit.

The campaign officially closes on June 1st; additional contributions will go toward ancillary equipment like shipping cases, upgrading our microphone plans, and so on.

You can see the list of contributors here. Our thanks to all of them!

Fundraising: PowerShell People Kick Butt, Take Names

Our IndieGoGo campaign is off to an amazing start, raising over $6,300 (including some offline donations) toward our ultimate $9,000 goal. So far, we’ve raised enough to ensure we can record two tracks of Summit content – capturing speakers’ laptop screens and voices, and posting the videos on YouTube for free. Meeting our full $9,000 goal will enable three tracks of recordings, which is what the North American show currently produces.

The equipment we’re investing in will also support, should we choose to add it, an analog camera input with automatic picture-in-picture, meaning we can later add video of the speaker(s) alongside what’s on their laptop.

This equipment also meets an important set of goals for us: It requires no software on speaker laptops (often problematic), and it’s operated – literally – by a single big, red, lighted button. Meaning, it’s easy to use and shouldn’t interfere with the live audience’s experience.

I’m personally humbled by the generosity of our community. While larger donations are being counted as “share purchases” in PowerShell.org, Inc., these contributors are essentially getting nothing in return for their money – but they’re making something possible that will benefit everyone. Making this content permanently available, for free, will create a treasure trove of valuable information. I can’t express my gratitude enough.

Tell a colleague, tell a friend: Every donation helps, no matter how small. And thank you, thank you, thank you.

PowerShell Summit N.A. 2014 – Budget

As part of our commitment to being a transparent, community-owned organization, I wanted to share the basic budget for the upcoming Summit. Now that registration is cut off, we have most of our final numbers. Keep in mind that, at live events, things “on the ground” can change quickly – so these are, at present, only our expectations “going in.”

  • $113,833.51 in net registration fees. This is after paying credit card transaction fees.
  • -$398.00 for event insurance (already paid)
  • -$76,466.04 for the venue, which includes A/V, F&B, room rental, etc. (already paid)
  • -$9,335.01 for speaker lodging (hotel)
  • -$3,000 for professional event management (including travel for the event manager)
  • -$1,490 for our registration web site (already paid)
  • -$1,710.51 for deposit on the European Summit
  • -$7,500 for speaker reimbursement

That last number is presently the big question; we have some speakers who paid for their registration, and we need to reimburse them. That’s probably about $4,000. We have another $2,500 in promised travel-offset fees for speakers doing three sessions. We’re also trying to reimburse additional travel expenses for other speakers so they’re not totally out of pocket; the final number may be more than $7,500.

Right now, that puts us at an event profit of roughly $13,933.95. Again, some of that may end up going to additional speaker reimbursement; the rest will help fund ongoing activities (like Azure hosting and so forth; I’ll share a full annual operating budget in June, but it’s about $17,000 per year). We have about $20k in payments coming up for the European Summit.
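For anyone who wants to check the arithmetic, the line items above do total out to that figure; here’s a quick sketch (amounts copied straight from the list, nothing official):

```python
# Reproduce the rough event-profit figure from the budget line items above.
income = 113_833.51  # net registration fees, after credit card transaction fees

expenses = {
    "event insurance": 398.00,
    "venue (A/V, F&B, room rental, etc.)": 76_466.04,
    "speaker lodging": 9_335.01,
    "professional event management": 3_000.00,
    "registration web site": 1_490.00,
    "European Summit deposit": 1_710.51,
    "speaker reimbursement (estimated)": 7_500.00,
}

profit = income - sum(expenses.values())
print(f"Estimated event profit: ${profit:,.2f}")  # about $13,933.95
```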

We have approximately $92,000 on-hand; much of that will go to the expenses above that are still pending. We should end April with around $65,000 on-hand – a lot of that comes from earning back a $40,000 pre-payment for the N.A. Summit that we made in fiscal 2013-2014. We’ll use some of that $65k to cover the remaining $20k fees on the European Summit; the rest of our cash-on-hand will help provide deposits for the 2015 N.A. Summit, and to fund ongoing operations for 2014-2015. We’re in good financial shape – we’re making a bit more than we need, but not very much – which is right where we want to be.

The good news is that, between the Summits and our generous corporate sponsors, we’re on track to actually fund the $17k wish-list budget we’ve put together (which we’re still researching and tweaking; as stated, I’ll share the full thing in June). That means we’ll be able to start spinning up services like the VERIFIED EFFECTIVE program, monthly TechSession webinars, and so on.

State of the Org, ending 2013

I wanted to take a moment and wish everyone a very happy new year, and to do a sort of wrap-up of 2013 from PowerShell.org’s perspective.

We started 2013 with a bang, including our first-ever PowerShell Summit North America, held on-campus at Microsoft in Redmond. We’ll be returning to the Seattle area in April 2014 for PowerShell Summit North America 2014, and are planning the first PowerShell Summit Europe 2014 in Amsterdam in September. For the N.A. show, we need about 50 more Summit attendees to break even, and can accommodate about 100 more than we’ve currently got registered.

We ran a very successful Scripting Games that kicked off just as the Summit was ending. Thousands participated, tens of thousands of dollars in prizes were handed out, and most importantly the Games made the transition from being a much-loved child of the Microsoft Scripting Guys to being a community-owned event that can hopefully continue forever. We’ve got the first Winter Scripting Games in a loooong time starting in just a few days, in fact.

In the wake of The Scripting Games, we ran a summer-long series of Great Debates, and your comments on those informed the first-ever Community Book of PowerShell Practices, now offered as a free ebook.

PowerShell.org, Inc. closed its first fiscal year at the end of June 2013, and financially we lost just a bit of money. Don’t worry – that was always more or less the intent; we’re not running the corporation to make a buck, but rather to more-or-less break even. At the moment, we have $29,988.25 in our checking account, most of which is earmarked for Summit 2014 expenses.

We’re now providing hosting services for about 17 local and regional user groups, giving them a spot to post upcoming meeting dates, post-meeting file attachments, and other details. We’re hoping this helps raise awareness of the efforts they’re all making to have a strong local PowerShell support system in place.

2013 also saw the PowerScripting Podcast become a welcome part of PowerShell.org. Host Jon Walz also got his first MVP Award, a long-awaited and well-deserved honor that he now shares with co-host Hal Rottenberg. Everyone appreciates the hard work they do, and we at PowerShell.org wanted to make sure they had the resources to keep doing it (equipment ain’t free), so we offered to help out when they needed, and they graciously accepted. We’re delighted to be working with them.

PowerShell.org played an important role in developing Microsoft’s official entry-level PowerShell training, course 10961, by giving the authors (e.g., me) a place to survey folks about topics, level of coverage, and more, and to solicit feedback on the “A” and “B” revs while updating the course for PowerShell v4. This site (and all of you) also played an important role in selecting topics for the advanced-level training, course 10962, which will be developed in 2014. Finally, you all helped provide feedback for Microsoft Courseware Marketplace course 55039, which covers PowerShell scripting and toolmaking. When you see a survey posted here, jump in – it makes a very real difference in some very important projects!

2013 was also the year we moved to Azure, spinning up an Azure-hosted CentOS VM that’s now running the site. It’s gotten faster, is a bit easier to maintain, and is a heck of a lot more highly available thanks to Microsoft’s cloud hosting.

I’m extremely proud to have had so many folks jump in and help out this year. Dave Wyatt, Matt Penny, Matt Johnson, Mike Shepard, and Nicholas Getchell have all taken on curator roles for the free ebooks we offer on PowerShell.org. They’re doing a wonderful job of making sure those titles stay updated – so much so that we’re now just linking to the books’ GitHub repository, where you can download the DOC files directly. Dave Wyatt has also been posting some incredibly detailed and informative blog posts that I hope you’re reading – I really appreciate his contributions here.

I also want to thank Matt Tilford, Chris Hunt, and Mark Keisling, who have taken on editorial duties for the TechLetter newsletter. Our aim is to put out a solid, informative, technically deep monthly offering, and these guys are absolutely on the job. I hope you’re subscribed, because if you aren’t, you’re missing out.

Finally, MVP Steven Murawski has made PowerShell.org his home for Desired State Configuration (DSC) blogs and code, and he’s been prolific. His employer, StackExchange, has been an early adopter of the DSC technology, and Steven’s been sharing pretty much everything he’s learned.

We’ve had some transitions in 2013. Board member and co-founder Kirk Munro has had to step away from day-to-day duties with PowerShell.org, although he remains a member of the board. Board member Jason Helmick has stepped into a second-in-command position, and is more or less running the North America Summit from an operational perspective. Jason earned his first MVP Award this year, giving us an all-MVP board that also includes myself, Jeffery Hicks, and Richard Siddaway.

I’m extremely proud of everything we’ve accomplished. I’m delighted that so many folks are jumping into the forums and offering answers to questions – it’s a massive relief on my own workload, and there are some damn smart folks offering their help to the community for free. In fact, we plan to recognize some of them in our first-ever PowerShell Heroes award, scheduled for January 2014. We’re also going to make good on a promise I made when we started this site: our above-and-beyond contributors are going to become part-owners of this community with an award of stock in PowerShell.org, Inc. That’ll give them some concrete control over the community they’re helping to build. Look for that mid-2014, when we near the end of our fiscal year.

For 2014, I’d like to thank our returning sponsors, SAPIEN Technologies and Interface Technical Training. These folks give a lot, financially, to help make this site work. Please show them your appreciation in every way you can. In 2014, my company, Concentrated Tech, is also coming aboard as a sponsor, and I’ll be offering my first-ever public PowerShell training.

I think 2014 should be a great year, both for PowerShell.org and for the broader PowerShell community that we’re trying to serve. If you’re new here, or you’ve just been lurking, please jump in and help. Write an article about something you learned, answer a question in the forums, or volunteer to help out. We’re all in this together, and the stronger a community we all make together, the more we’ll be able to support each other when needs arise.

I look forward to serving you in 2014!

Don Jones
President and CEO

PowerShell.org’s Azure Journey, Part 4: Incoming Advice and Fun Facts

Had an opportunity to speak with some folks on the Azure team yesterday – Mark Russinovich was kind enough to make a contact for me.

First of all, fun fact: Azure only charges you for used pages in VHDs. That is, if you create a 100GB VHD and load 1GB of data on it, you’re paying for 1GB of data. Very clever. So it’s charging you as if it was a dynamically expanding VHD, but of course it’s a fixed VHD with all of the related performance improvements. Nice.

Second, they basically confirmed something I’d suspected. Azure’s “website model” tends to appeal more to smaller businesses or personal Web sites; most “serious” players (my word) are using the IaaS model, meaning they’re hosting VMs in the cloud, not just hosting a Web site. Having a full VM under your control obviously has advantages in terms of management, along with the ability to run things like in-memory caching software, load additional Web extensions, and so on. IaaS is absolutely the right model for PowerShell.org for many of those reasons.

That said, they also confirmed that the Web site model and the IaaS model cost about the same, at least as you get started. So it’s really – for a smaller Web site – a matter of what you want to do. Again, there are specifics about the IaaS model that work well for us, so that’s what we’re looking to do.

Azure also costs about the same, in an apples-to-apples comparison, as Amazon Web Services. That’s probably somewhat deliberate on Microsoft’s part, but Azure has advantages. For one, their virtualization layer has been approved by the various Microsoft product teams, so if you’re running SharePoint or SQL Server in an Azure VM, the team will support you. Not the case with AWS. Also, I frankly found Azure’s presentation of the costs easier to grok.

Fifth (I love numbered lists, sorry), I confirmed that the IaaS option charges you for (a) the VMs you’re running, by the minute; (b) the storage used by all VM VHDs’ used pages; and (c) outbound bandwidth. This can potentially make IaaS more expensive than the “website” model, because Azure won’t spin down an IaaS VM – you run 24×7 unless you’re manually deallocating. With a website, Azure only spins up worker processes when they’re needed, so your site isn’t “running” 24×7, and you might pay less if it’s not being “hit” 24×7. Again, though, the website model offers us less control and flexibility.
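To make those three billing components concrete, here’s a toy cost model. The per-hour and per-GB rates are made-up placeholders, not actual Azure prices – only the structure (VM-minutes plus used storage plus outbound bandwidth) reflects the billing model described above:

```python
# Rough IaaS cost model: VMs billed by the minute (shown per-hour here),
# storage billed on *used* VHD pages only, plus outbound bandwidth.
# All rates below are illustrative placeholders, not real Azure pricing.
def monthly_iaas_cost(vm_rates_per_hour, used_storage_gb, egress_gb,
                      storage_rate_per_gb=0.07, egress_rate_per_gb=0.12,
                      hours=24 * 30):
    vm_cost = sum(rate * hours for rate in vm_rates_per_hour)  # 24x7 unless deallocated
    storage_cost = used_storage_gb * storage_rate_per_gb       # only used pages count
    egress_cost = egress_gb * egress_rate_per_gb               # inbound is free
    return vm_cost + storage_cost + egress_cost

# Two small instances, ~10GB of used VHD pages, ~250GB outbound per month:
print(f"${monthly_iaas_cost([0.09, 0.09], 10, 250):,.2f}")
```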

Just thought you’d enjoy some of those details!

PowerShell.org’s Azure Journey, Part 3: Load Testing [UPDATED]

So, I’ve gotten a two-VM version of PowerShell.org running in Azure. Yay, me! My *nix skills are unaccountably rusty (go fig), but it didn’t take too long. Restoring the WordPress installation was the toughest part, as a number of settings had to be tweaked since the site is no longer under the same URL (on the test site, that is).


I ran a load test against the existing production site yesterday; the results are posted online. The test simulated a 50-person concurrent load from three US locations and one UK location, which approximates our real-world traffic. The results are what they are; we’re looking for the delta between these and the Azure-based system. In this test, the green line is the number of concurrent connections, and the blue is the time it took each page to load. The test ran for 10 minutes total, with each simulated user hitting three different pages on the site (home page, a forums topic, and a blog post).

A key fact is that the site currently runs under a shared hosting plan; I don’t have any details on how much RAM, how much CPU, or what kind of bandwidth exists for the site. It’s also important to note that the production Web site uses a Content Delivery Network, or CDN, which offloads a good amount of traffic from the site proper. Because that costs money, we didn’t implement a CDN for the test site. I’d therefore expect the test site to be somewhat slower.
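For the curious, the shape of such a test – N simulated users, each cycling through a few pages while latencies are recorded – can be sketched in a few lines. This toy harness is not what LoadImpact does (their service drives traffic from multiple geographic locations at scale); the page paths and fetcher are placeholders:

```python
import time
import threading

# Toy load generator in the spirit of the test above: each simulated user
# cycles through a fixed set of pages for a fixed duration, and we record
# per-request latency. `fetch` is pluggable (e.g. a real HTTP GET).
def run_load_test(fetch, pages, users=50, duration_s=10.0):
    latencies = []
    lock = threading.Lock()

    def user():
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            for url in pages:
                start = time.monotonic()
                fetch(url)  # swap in urllib.request.urlopen(url).read() for real traffic
                with lock:
                    latencies.append(time.monotonic() - start)

    threads = [threading.Thread(target=user) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

# Example run with a stand-in fetcher (placeholder page paths):
lat = run_load_test(lambda url: time.sleep(0.01),
                    ["/", "/forum-topic", "/blog-post"], users=5, duration_s=0.5)
print(f"{len(lat)} simulated requests, avg {sum(lat) / len(lat) * 1000:.1f}ms")
```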

Azure 1: XS+XS

The first Azure test’s results are posted online as well. This test uses an extra-small instance for both the Web server and the database server (separate VMs; that reflects the fact that the current site runs the DB on a separate shared server). As you can see, the results weren’t promising. By around 40 users, page load times exceeded 3 minutes, at which point requests started timing out. So the test clearly overwhelmed the instance. That wasn’t unexpected; an XS instance runs on a shared core with 768MB of RAM. That ain’t much. I think it’s also powered by a 9-volt battery. But I wanted a baseline; XS instances are super-cheap.

(As an aside, scaling out the Web tier of isn’t trivial, due mainly to the presence of user uploads. We’d need to make some tweaks to have all uploads sent to, and downloaded from, a single server; if we just scale-out by load-balancing in a second Web server, user-uploaded content won’t work correctly. Also, doubling the instance size – e.g., from XS to S – costs the same as adding a second XS instance. Scale-out isn’t off the table, but since it’s more complicated to set up, I’m not testing it right now.)

Azure 2: S+XS

The third test moved the Web server to a Small instance, which offers a dedicated core and 1.75GB of RAM. The DB server remained at an XS instance size. It’s super-cool that you can upsize the instances whenever you want. You pay by the minute based on instance size, and the Azure Price Calculator rolls that up into a monthly estimate based on 24×7 usage. One thing I’ve learned is that when the Azure Web console says it’s done with something, like starting a VM, you really still need to wait a few minutes before all the bits and bobs are in place to make the Web site work. Another PITA is that, when you shut down a VM, you lose both your public IP (no problem, since they handle DNS for you) and private IP (a bit of a pain since there’s no DNS for it, so I had to re-point the Web server at the database server’s new private IP).

(As another aside, Azure also offers the option of just moving the Web site and the database into the cloud, using PaaS rather than IaaS. We get to select the kind of instance our site runs on, but it’s potentially shared with other sites. MySQL gets outsourced to ClearDB. There’s some more complexity in that model from the perspective of getting the site working, and having our own VMs gives us some additional performance-improving abilities, like in-memory opcode caching. Either model costs about the same, so we’re playing with the VM model at present.)

Anyway, the third test results are posted online as well. I’ll mention that the S instance size allows a lot more room for opcode caching, which can help tremendously, as well as providing more RAM and CPU for handling the concurrent requests. Because the simulated users are all asking for the same pages, the caching should go a long way toward helping. For this test, response times held pretty well under 20s for the majority of the test, excluding some spikes (likely due to cached items expiring and being re-generated). Things started to get dicey at 40 concurrent users, but still held about the same average performance that the current production site offers. Using the test site interactively while this load test was underway was slow, but not utterly painful.

(Real-world note: We disable a number of caching mechanisms for logged-in site users, because we don’t want to serve a cached page from a logged-in user to an anonymous user. So logged-in users will get somewhat different results. For the purposes of this test, we’re comparing apples to apples with anonymous simulated users.)

Azure 3: S+S

Now the database server has also been upgraded to a small instance, featuring a dedicated CPU core and 1.75GB of RAM. Having to update the database server’s private IP address each time it restarts is a PITA. I need to find out if there’s any way to use a DNS name for that instead – something Azure updates for me when it reassigns the IP. I don’t want to use the public IP/DNS, because I’d pay for bandwidth – with the internal IP, the traffic stays inside the Azure datacenter, so I don’t pay for it.
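Re-pointing WordPress at the database server’s new private IP is a one-line change to the DB_HOST define in wp-config.php, so a small script can automate the edit after each restart. The path and addresses below are hypothetical examples:

```python
import re
from pathlib import Path

# Update DB_HOST in wp-config.php after the database VM comes back with a new
# private IP. Path and IP addresses here are hypothetical examples.
def repoint_db_host(config_path, new_ip):
    config = Path(config_path)
    text = config.read_text()
    # Matches e.g.: define('DB_HOST', '10.0.0.5');  (single or double quotes)
    updated, count = re.subn(
        r"(define\(\s*['\"]DB_HOST['\"]\s*,\s*['\"])[^'\"]*(['\"]\s*\))",
        rf"\g<1>{new_ip}\g<2>",
        text,
    )
    if count != 1:
        raise ValueError(f"expected exactly one DB_HOST define, found {count}")
    config.write_text(updated)

# Example (hypothetical path and IP):
# repoint_db_host("/var/www/html/wp-config.php", "10.0.0.12")
```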

Anyway, this test’s results are posted online as well. Can I tell you how much I love LoadImpact for doing these tests? Set up the test once, run it over and over against different configurations. Awesome.

As you’re comparing the charts, pay close attention to the scale on the sides. They’re not necessarily the same – you actually have to look at the numbers, not just the height of the blue line. This time, although the blue line climbed high, it was actually under one minute for the entire test. That’s a marked improvement over the XS+XS test! In addition, an S+S configuration is pretty affordable: about $180/mo in VMs, plus about $35 for storage and estimated bandwidth. That’s less than two dedicated rackmount servers would cost, for sure.


I need to do a bit of analysis – LoadImpact lets me download CSVs, which will let me make some direct-comparison charts – but Azure’s looking like a good option for us, especially in the S+S configuration. I may also run a Medium+Small test (I have one credit left with LoadImpact for the month, so why not) just to see the difference. UPDATE: I did; the M+S test results are online as well.

PowerShell.org’s Azure Journey: Part 2

I had no idea Azure gives MSDN subscribers a huge free monthly credit – $200 for the first month, and then, on the Ultimate subscription level (which is what I get as an MVP), $175 per month thereafter. That starts to really justify the MSDN pricing. You want a lab in the cloud? Free Azure!

Given the free-ness of it, I decided to set up a PowerShell.org in the sky to see how it went. Configuring dual CentOS VMs was a bit of an all-day affair; I have less experience with RHEL (which is what CentOS is based on), and it took me a while to figure out that the built-in firewall was causing all my grief. Fixed now.

Microsoft publishes some pretty good guides for getting a LAMP stack running on CentOS in Azure. Not great guides, but good. They lack a decent guide on getting passive FTP working – and it’s a PITA because Azure only lets you configure incoming ports on a one-at-a-time basis (not ranges), and you can only have 25. So that’s kind of a pain. But I got it working, got MySQL installed and working, and I’m presently waiting on VaultPress to smush up our latest site backup and spew it onto the Azure server. Remember: you don’t pay for bandwidth going into Azure, so I can load the backup in as many times as I want without incurring bandwidth charges.

This VaultPress thing is neat, if it works. It continually pulls changes from our WordPress installation and backs them up, timestamped, a la Apple Time Machine. Allegedly, if you give them the FTP info for your new server, and you have a base WordPress install working on it, they can “push” your whole site down to the new server. Given my fits and starts with FTP on CentOS today, we’ll see how well it works, but I’m optimistic. Dunno. It’s been saying “Testing Connection” for a long time now. Sigh.

Anyway, I’m starting both VMs in extra-small instances. Part of what I want to play with is whether or not I can upgrade those to bigger instances without breaking the universe. Depends on how CentOS behaves when it suddenly finds itself running on “new hardware.” We shall see! If it works, then it’ll truly be killer in terms of scaling. I also want to see if we get more “juice” running two load-balanced extra-small instances vs. a small instance (which is technically twice as big as an extra-small). Common logic suggests that more, smaller servers are better – a la every web farm, ever. But it’ll be fun to test.

Question: anyone have any Web site load-testing software they’re fond of? Mac or Windows is fine, or even both. I’ll enlist some folks to help with that, since I know my DSL line’s upstream side will chokepoint long before the Azure server does. Ooo, maybe we can have a botnet that I could control… bwaa haa haa!

Meantime, Eric Courville, our new volunteer Webmaster, is setting up a similar Azure-based VM set with his own MSDN subscription. In addition to documenting the setup process, we’re going to try to do some load-testing and see what kind of instances we need to run to get solid performance out of the site. PowerShell.org currently peaks at around 50-60 concurrent connections (and even that day was a rare peak), so we’ll load test to that number.

Stay tuned!

PowerShell.org’s Azure Journey, Part 1

When we started PowerShell.org, my company (Concentrated Tech) donated shared hosting space to get the site up and running. We knew it wouldn’t be a permanent solution, but it let us start out for free. We’re coming to the point where a move to dedicated hosting will be desirable, and we’re looking at the options. Azure and Amazon Web Services are priced roughly the same for what we need, so as a Microsoft-centric community Azure’s obviously the way to go.

Azure Technical Fellow Mark Russinovich is having someone on his team connect with me to discuss some of the models in which we could use Azure. What makes the discussion interesting is that PowerShell.org runs on a LAMP (Linux, Apache, MySQL, and PHP) stack. We’re not looking to change that; WordPress requires PHP, and the Windows builds of PHP typically lack some of the key PHP extensions we use. I’m not interested in compiling my own PHP build, either – I want off-the-shelf. WordPress more or less requires MySQL; while there’s a SQL Server adapter available, it can’t handle plugins that don’t use WordPress’ database abstraction layer, and I just don’t want to take the chance of needing such a plugin at some point and not being able to use it.

What’s neat about Azure is that it doesn’t care. I adore Microsoft for selling a service and not caring what I do with it. Azure runs Linux just fine. Huzzah!

So, we’ve got two basic models that could work for us. Model 1 is to just buy virtual machines in Azure. We’re planning one for the database and another for the Web site itself, so that we can scale-out the Web end if we want to in the future. We’re not going to do an availability set; that means we risk some short downtime if Azure experiences hardware problems and needs to move our VM, but we’re fine with that because right now we can’t afford better availability. We’d probably build CentOS machines using Azure’s provided base image (again, adore Microsoft for making this easy for Linux hosting and not just Windows). We know we tend to top out at 250GB of bandwidth a month, and that we need about 1GB of disk space for the Web site. 500MB of space for the database will last us a long time, but we’d probably get 1GB for that, too. It’s only like $3 a month. We could probably start with Small VM instances and upgrade later if needed. All-in, we’re probably looking at about $125/mo, less any prepay discounts.

Model 2 is to just run a Website. We still get to pick the kind of instance that hosts our site, so if we went with Small and a single instance, we’d be at about $110 including bandwidth and storage. That doesn’t include MySQL, though. Interestingly, Microsoft doesn’t host MySQL themselves as they do with SQL Azure. Instead, they outsource it to ClearDB, which provides an Azure-like service for hosted MySQL. Unfortunately, the Azure price calculator doesn’t cover the resold ClearDB service. Looking at ClearDB’s own pricing, it’d probably push us to about $120-$125 a month – or about the same as having our own virtual machines. The difference is that, with Model 2, Microsoft can float our Web site to whatever virtual hosts they need to at the time to balance performance; with Model 1, they can potentially move our entire VM – although they’re unlikely to do so routinely, since it’d involve taking us offline for a brief period. A super-neat part of this model is its integration with Git: I can run a local test version of the site, and as I make changes and commit them to our GitHub repository, Azure can execute a pull and get the latest version of the site code right from Git. Awesome and automated. I love automated.

An appeal of Model 1 is that I can build out the proposed CentOS environment on my own Hyper-V server, hit it with some test traffic loads, and size the machine appropriately. I can then deploy the VHDs right to Azure, knowing that the instance size I picked will be suitable for the traffic we need to handle. It also gives me an opportunity to validate that a dedicated VM will be faster than our current shared hosting system, and to play around with the more advanced caching and optimization options available on a dedicated VM. I can get everything dialed in perfectly, and then deploy.

Azure has other usage models, but these are the two applicable to us. I think it’s great that we get these options, and that the pricing is more or less the same regardless. And again, I think it’s pure genius that Azure’s in the business of making money for Microsoft, and that they’re happy to do so running whatever OS I want them to.

I’ll continue this series of posts as we move through the process, just for the benefit of anyone who’s interested in seeing Azure-ification from start to finish. Let me know if you have any questions or feedback!

New Visual Design Draft, Pt 2

Spoke too soon in the morning’s updates; my designer buddies worked last night and took their first stab at the forums pages. They also changed their minds about the big black boxes, which I appreciate ;). The forums material is denser now, meaning more info per page, which should please some folks.

Samples below – and comments welcome. Just keep in mind these folks aren’t being paid, so be nice ;).

[Design samples: forum list, single topic, topic list, article, article comments, and front page]


Whatcha think? They said they’re tweaking the smaller-screen version still, but I’ll update this post and add those examples once they’re ready. I know getting the forums working on a smartphone is something people have kvetched about, but it’s fairly tricky. They said they might just end up not making a smartphone version, but instead focus on dropping unnecessary elements and letting the phone scale the page. The text input box is apparently giving them a lot of grief when it’s sized too small. Anyway…

State of the Org: Website, Games, Summit, and More

I wanted to share a quick update on PowerShell.org, Inc.

First, a couple of Web designer friends of mine have volunteered to do a visual re-theme of the site. Below is some of their early work, and you’re welcome to comment; I’ll just remind you that they’re volunteers and doing this as a favor. So be nice! You’ll notice that one of these reflects the layout a smartphone would use, which trims much of the “chrome” in favor of the content. They haven’t tackled the forums yet – that’s harder, and will probably come last.

[Design samples: three page mockups]


Second, in the last quarter of the year we’re planning a move from our current shared hosting plan (my company is actually hosting the site for free) to a more dedicated plan – likely in Azure, since that offers us redundancy without the need to actually pay for two servers. We’ll set up a 2-server system with one server dedicated to the database, and the other the Web site, which reflects what we have now under the shared plan. We’ll remain on the current LAMP stack, just running inside Azure. That takes a lot of work to set up and test, and the schedule will depend largely on our volunteers’ time, but it’s in the works. The move should help a bit with some of the performance. It’s crazy expensive compared to “free” (around $3600/year max, although obviously it’s based on usage so that’s kind of a worst-case guess), but we’re growing to the point where we need it and it isn’t any more expensive than a dedicated server. I love that the Azure folks are smart enough to offer a LAMP stack. Own the back end, who cares what people do with it!

Third, we’ve disabled a few site features that were really eating up page load times. Most you won’t notice, but the “badges” functionality is presently turned off. We haven’t deleted any data, so we can bring that back, but for right now it’s unavailable.

Fourth… and off of the Web site… the PowerShell Summit North America 2014 is about 12% sold out. As of today, our 2013 alumni and shareholders no longer have a reserved block; our TechLetter subscribers still have a reserved block through September 15th, at which point everything goes on sale to the public. The velocity of sales has been good, and we should be able to hit our next scheduled payment to the event venue. We are still holding back about 50 slots for 2014Q1, for those of you who can’t register until next year. But I wouldn’t hold out for those if you don’t have to. It does not look, at present, like we’ll have many (if any) additional discounted memberships – in order to hit our numbers, it’s likely everything will hold to full price. If we do offer any discounts, it’ll be absolutely last-minute. Also, our team is getting going on content, and you should see a Call for Topics real soon now.

Fifth, the PowerShell Summit Europe 2014 is coming along, but not really going anywhere. Ha! By that, I mean we’re simply too far out (more than a year) for venues to be able to talk to us. So we’re holding tight until September and October of this year, when we can start checking pricing and availability. Madrid snuck onto our short-list of cities – along with Munich, Milan, and Amsterdam – due to the presence of a large MS conference facility there. If anyone lives in Europe, speaks Spanish, and wants to be our liaison to MS Madrid, please contact me (via the Site Info menu above). It’d be nice to have someone local who can contact the office and see what we can do there, or at least put us in touch with an evangelist over there who could work on our behalf.

Sixth, don’t forget that Mark Schill has announced PowerShell Saturday 005 for Atlanta. Mark has also been tasked with helping one or two other organizations put on their own PowerShell Saturday, so if you think you’d be interested, please contact him. Having done this four times already, he’s got a good grip on how to go about it.

Seventh, we’ve got some great new guys acting as editors for the TechLetter, and the September issue will be their first go at it. Wish them luck and give them your support! We’re also looking to launch free online TechSession webinars next month. I’ll probably run the first one; there will be a required (and free) registration process, and it may be bumpy, but we’re going to try to do these monthly. They’ll supplement the new MVA offerings from MS, and get back to the days when TechNet did a whole series of different free webinars. Once we start, please help spread the word – if we’re not getting good attendance or recording views, we won’t keep doing it.

Eighth, I’m unsure if we’ll be doing a Winter Games event or not. We had someone volunteer to coordinate it, but I haven’t heard any details from them, and I’m kinda getting overbooked on my end, which will make it tough to do up whatever Web site they might need. We’re going to play this one by ear.

Ninth… and before I make it to a full strike… I want to express my deep gratitude for everyone that’s helping make this community work. The Forums are obviously a big piece, and it’s been fantastic to see so many of you jumping in and volunteering your time to help answer questions. Truly, I feel that this whole thing is finally taking off and that it’s a real community. Along those lines, in Q4 this year, we’re going to announce (so start thinking about it) a PowerShell Heroes award. This will be for folks who have not already received some kind of recognition (like MVP) for helping out in the community, so that we can formally offer them a thank-you. Awards will be by nomination, and will carry no benefits whatsoever (grin). But start thinking of who you’d like to thank, and why.

OK – that’s probably enough for the morning. Thanks for coming along for the ride, and have a great rest of the week!


Now Accepting Nominations for PowerShell.org, Inc. Board of Directors

At our first annual Shareholders Meeting (shareholders will receive an e-mail from me later this week about that meeting), we will be voting on our Board of Directors. Our corporate articles permit our existing Board members to serve indefinitely, and so all are automatically re-nominated. The current Board includes:

  • Myself (Don Jones)
  • Kirk Munro
  • Richard Siddaway
  • Jason Helmick
  • Jeffery Hicks

The Board is responsible for appointing a CEO (currently myself) to run the company; the CEO then appoints other officers as needed to conduct the corporation’s business. I’ll reiterate that PowerShell.org, Inc. is a not-for-profit business, meaning our goal is to more or less break even. We obviously have expenses – Web site hosting, running the Summit, and so on – and the corporation provides a place where the needed funds can be managed, without running through anyone’s personal checking account.

If you would like to nominate someone for the Board, please e-mail president/at/PowerShell.org no later than May 15th, 2013. Provide the person’s name and e-mail address. You are welcome to nominate yourself.

Each shareholder will receive 5 votes per share owned, and can distribute those votes however they like amongst the nominees. The top 5 vote-earning nominees will comprise our new Board. They will then elect their Chairman, who presides over Board meetings, and either reconfirm the existing CEO or appoint a new one.
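The cumulative-voting mechanics described above (5 votes per share, distributed however the shareholder likes, with the top 5 vote-earners taking Board seats) can be sketched in a few lines. This is purely illustrative – the ballot format, function name, and example names are all hypothetical, not part of any actual process:

```python
from collections import Counter

def board_from_ballots(ballots, seats=5, votes_per_share=5):
    """Tally cumulative votes and return the top vote-earning nominees.

    ballots: list of (shares_owned, allocation) pairs, where allocation
    maps nominee name -> votes that shareholder assigns to that nominee.
    Each shareholder gets votes_per_share * shares_owned total votes and
    may split them among nominees however they like.
    """
    tally = Counter()
    for shares, allocation in ballots:
        budget = votes_per_share * shares
        if sum(allocation.values()) > budget:
            raise ValueError("shareholder cast more votes than allowed")
        tally.update(allocation)
    # The top `seats` nominees by total votes comprise the new Board.
    return [name for name, _ in tally.most_common(seats)]

# Hypothetical ballots: a 2-share holder has 10 votes to distribute.
ballots = [
    (2, {"Alice": 6, "Bob": 4}),
    (1, {"Bob": 5}),
    (1, {"Carol": 3, "Dave": 2}),
]
print(board_from_ballots(ballots, seats=3))  # → ['Bob', 'Alice', 'Carol']
```

The point of the sketch is simply that votes accumulate across ballots, so a shareholder can concentrate all of their votes on a single nominee or spread them thin.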

If you are interested in becoming a shareholder, please see this post. Note that shares must be purchased before May 1st, 2013, in order to be eligible to vote in the upcoming cycle. We are also nearing the end of our capital campaign, so time is running out to own a piece of PowerShell.org.

Own a Piece of the Community: Buy Shares in PowerShell.org, Inc.!

When Kirk Munro and I set this site up and started redirecting traffic from the old site, one of our main goals was to make this a truly community-owned resource. We wanted it hosted independently (my company, Concentrated Tech, is being paid to host the site, so we get pretty good service and total control). We didn’t want to be beholden to anyone’s commercial interests or whims (companies do get distracted by their real jobs from time to time, after all).

When we started talking to Microsoft about holding a PowerShell Summit, we wanted that to be community-owned too, and not tied to a commercial interest – in part so that we could keep the price low, but also so that Microsoft would be able to support us without getting into any possible conflicts of interest with any of its ISV partners.

Today, our intention becomes legally realized. PowerShell.org, Inc., a Nevada corporation, is born – and we’re offering ownership shares to help raise capital. This capital will be used to pay for necessities like bookkeeping, and also to help bootstrap the Summit event. Shareholders are legal owners of the corporation, and will vote for its Board of Directors – who in turn appoint the Officers that make things happen. Our first Board will consist of myself, Kirk, Jeffery Hicks, Richard Siddaway, and Jason Helmick.

Want to become a community owner? You’ll want to start with our “Shareholder Brochure,” which is available in the new “PowerShell.org, Inc.” forum on this site. That forum will also get you our Bylaws and Articles of Incorporation; the Brochure outlines the purpose of the corporation and explains what it means to be a shareholder. The forum also contains the Share Purchase Order form, which you can use to purchase shares, along with documents that outline our initial Board of Directors and Officer lineup and other important details.

Cool tip: Shareholders get access to a special forum on PowerShell.org to discuss company business, are eligible for an e-mail address, and may receive a discount to the PowerShell Summit North America 2013. In fact, if you’re planning to attend, you can add $100 worth of stock to your event registration for just $75 (plus card fees), instantly giving you your $25 discount!

We hope you’ll give serious consideration to supporting this community effort, and to finally – about six years after PowerShell’s introduction – helping us realize our dream of creating a truly community-owned online resource, educational event, and more. We have created a set of forums on PowerShell.org for discussion and Q&A about this corporation, so if you have any questions, we encourage you to turn there for answers.

Although the corporation will not be publicly-traded in the sense of appearing on a stock market, we do intend to make as much of its business as possible completely open and transparent. To that end, we’ll use this blog to periodically announce the availability of public documents (as we create them), along with shareholder meetings and other important events. Just look for items in the “Inc.” category of the blog. We’ll also use the Forums as a repository for various documents, so that you can always find them easily.

What do you get by being an owner? Well, a vote (one per share owned) on the makeup of the Board of Directors. The aforementioned $25 discount to the PowerShell Summit. An e-mail address or forwarding alias, if you want one. And a chance to help us create a truly independent, group-driven entity that’s owned not by any one person, but by all of us together.

Thanks for joining.