When you look at your scorecards for these Games (which will happen after judging closes, about a week after the event itself is done), we want you to know what to expect.
Each event scenario will include a fairly detailed bullet list of what the judges are looking for. Judges will have a corresponding scorecard, which you'll be able to see once judging is done. That scorecard determines your team score.
Some criteria are optional, even though the event scenarios present everything as merely "desired." For those items, you'll receive points for doing them, and simply receive a 0 on that item if you don't.
Some criteria, mainly those which reflect best practices, will earn you negative points if you don't do them. For example, using aliases will net you a -1, whereas not using them will earn you a +1. Positive reinforcement, and all.
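To make the alias criterion concrete, here's a hedged sketch of the kind of difference a judge would look at (the pipeline itself is just an example, not from any actual scenario):

```powershell
# Alias-heavy one-liner: terse, but harder to read (the kind of thing that costs a point)
gps | ? { $_.CPU -gt 100 } | select Name, CPU

# Full cmdlet and parameter names: the best practice
Get-Process |
    Where-Object { $_.CPU -gt 100 } |
    Select-Object -Property Name, CPU
```

Both produce the same output; only the second one reads clearly to someone who didn't write it.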
Some criteria are multi-point. For example, if you're asked to provide help for all functions, and you don't provide any, you might get -2. Providing some help might earn you a +1, whereas providing comprehensive help would get you a +2.
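"Comprehensive help" in PowerShell generally means comment-based help. A minimal sketch (the function and its parameters are hypothetical) of what would earn the higher mark:

```powershell
function Get-DiskInfo {
    <#
    .SYNOPSIS
        Retrieves logical disk information from a computer.
    .DESCRIPTION
        Uses CIM to query logical disk size and free space.
    .PARAMETER ComputerName
        The computer to query. Defaults to the local machine.
    .EXAMPLE
        Get-DiskInfo -ComputerName SERVER1
    #>
    param(
        [string]$ComputerName = $env:COMPUTERNAME
    )
    Get-CimInstance -ClassName Win32_LogicalDisk -ComputerName $ComputerName
}
```

With that block in place, `Get-Help Get-DiskInfo -Full` works just like it does for built-in cmdlets.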
Sometimes, a bad practice will knock off more points. For example, providing parameter validation might be +1, and not providing it would be -1 if the scenario asked for it. However, if you wrote your own validation routines instead of using validation attributes, you might get -2, because you wasted a lot of effort.
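Here's what that validation distinction looks like in practice (the function names and range are made up for the illustration):

```powershell
# Hand-rolled validation: extra code to write, test, and maintain
function Set-VolumeManual {
    param([int]$Level)
    if ($Level -lt 0 -or $Level -gt 10) {
        throw "Level must be between 0 and 10."
    }
    # ...do the work...
}

# Built-in validation attribute: less code, clearer intent, shell does the work
function Set-Volume {
    param(
        [ValidateRange(0, 10)]
        [int]$Level
    )
    # ...do the work...
}
```

The attribute version also gives users a standard, consistent error message for free.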
Some criteria reflect above-and-beyond effort vs. not applicable. For example, if you come up with a solution that requires remote access, and you take the time to test connectivity and fall back to another protocol, you might earn +2. Not testing or falling back might be -1. But you might also get 0 for that criterion if it simply wasn't applicable.
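A sketch of what "test and fall back" might look like, assuming a CIM-based solution (the computer name is a placeholder, and this is one possible approach, not the required one):

```powershell
$ComputerName = 'SERVER1'   # example target

# Try the newer WS-Management protocol first
if (Test-WSMan -ComputerName $ComputerName -ErrorAction SilentlyContinue) {
    $session = New-CimSession -ComputerName $ComputerName
}
else {
    # Fall back to the older DCOM protocol if WS-Man isn't answering
    $opt = New-CimSessionOption -Protocol Dcom
    $session = New-CimSession -ComputerName $ComputerName -SessionOption $opt
}

Get-CimInstance -ClassName Win32_OperatingSystem -CimSession $session
```

The point being judged isn't the specific protocols; it's that the script anticipates failure instead of assuming connectivity.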
Building routines into a module might earn you +2, whereas modularizing into separate scripts only a +1. But that's better than not modularizing at all, which might earn you a 0 for that criterion.
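For the module criterion, the higher-scoring shape is a `.psm1` file that groups related functions and exports only the public ones. A hypothetical example (file and function names invented):

```powershell
# MyTools.psm1 -- a script module grouping related functions

function Get-ServerUptime {
    param([string]$ComputerName = $env:COMPUTERNAME)
    (Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName $ComputerName).LastBootUpTime
}

function Get-ServerDisk {
    param([string]$ComputerName = $env:COMPUTERNAME)
    Get-CimInstance -ClassName Win32_LogicalDisk -ComputerName $ComputerName
}

# Expose only the public functions to module consumers
Export-ModuleMember -Function Get-ServerUptime, Get-ServerDisk
```

A caller then just runs `Import-Module .\MyTools.psm1` and gets both commands, instead of dot-sourcing a pile of separate scripts.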
So you see, there's a lot of variety to consider. The practice event alone has almost 20 scoring items, including subjective "overall style" points and other criteria. You'll always be told the main things the judges are looking for; expect some disagreement between judges over whether what you implemented meets the goal. That's why we have a panel of judges, so that individual opinions tend to even out. There's no arguing with the judges, though - they're the experts. Sometimes, you may get a point or two less overall simply because a judge feels a different approach would have been better. That doesn't make you wrong globally, but it does make you wrong in that judge's eyes. That kind of subjectivity is very much present in the real world, too.
Also, remember that the Practice Event is when we'll be testing the scorecards for the first time. Not every team may get a practice event score, and the display of the scores and scorecards may change once we start playing with the system. Some scores may be completely off-kilter, as we do plan to test various scoring scenarios that have no reflection whatsoever on your actual entry. In other words, you might get a negative score simply so we can make sure the system handles it, not because your entry sucked. We appreciate your patience!
Good luck in the Games!