Our readers predicted a close, tough loss to Oregon. But we didn't predict that we would lose six more games.
When we first asked our readers how they thought we'd fare in the 2010 season, few of us thought we'd wrap up the year with a bowl-eliminating loss to Washington. In fact, the average prediction was a solid 8 wins and 4 losses. Even the oldest and bluest of the Old Blues thought we would end up in a bowl game this season. With the pain and anguish of 2010 reduced to a dull ache (and a glimmer of optimism from a top-10 recruiting class for 2011), let's take a journey into the past and see just how wrong we were.
If you can remember all the way back to August, when we collected our last set of season predictions, we asked our readers to give us a 0.00-1.00 likelihood that Cal would defeat each opponent. Comparing the average predictions to how Cal actually fared, our predictions weren't too bad for the first half of the season (except for the obvious trap game at Nevada).
Once we lost Kevin Riley for the season and the already-sputtering offense ground to a halt, our initial predictions fell off track. None of us anticipated we'd lose our senior signal-caller just prior to the most important stretch of the season, so our inaccuracy is forgivable.
After the fold we'll break this down further and hand out some awards to those with the most and least accurate predictions.
If we add in our predictions from each previous week's report card, the cycle of predicted wins turning into losses becomes comical. How many times did we have an impressive performance, start pumping the sunshine, and follow that with a loss and predictions of subsequent doom and gloom? Answer: many, many times. After each solid performance, we overinflated our expectations for the following game (the Nevada, USC, and Stanford predictions are great examples). Naturally, we'd then lose and woefully underestimate our performance in the next game (the Arizona, ASU, and Oregon predictions stand out). The longer I stare, the more entertaining it becomes! We certainly are a predictable bunch.
For those curious about how I'm coming up with the number for how Cal actually fared, I'm using the percentage of points Cal scored relative to the total number of points scored in the game. I believed it to be the best approximation for our predictions, but if any of y'all have a better benchmark, I'd be happy to hear it.
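A minimal sketch of that benchmark (the function name and the 31-24 score below are my own hypothetical examples, not from the post):

```python
def actual_outcome(cal_points, opp_points):
    # A game's "actual" result: the percentage of points Cal scored
    # relative to the total points scored in the game
    return cal_points / (cal_points + opp_points)

# Hypothetical example: a 31-24 Cal win maps to roughly 0.56
print(round(actual_outcome(31, 24), 2))  # → 0.56
```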
You've seen that as a group we didn't do so badly, but we didn't predict particularly well either. So let's break it down to the individual level and see who among us can foresee the future and who among us got their predictions from a bowling ball instead of a magic 8-ball.
We have two sets of awards here: first is the Ursadamus award for those who most accurately predicted the season and last (and definitely least) is the Miss Cleo award for those with the most inaccurate predictions. If you're interested in how exactly I computed these scores, see the "methods" section at the end of the post. If you completed our fall predictions and you're really curious about how you fared, ask in the comments and I'll give you your score. Be warned, though, you might have embarrassed yourself.
Ursadamus Award

| Name | Net Score | Percentage |
| --- | --- | --- |
| 1. Gustav Nikolai | | |
| 4. Hurt Locker | | |
| 5. Spazzy McGee | | |
Miss Cleo Award

| Name | Net Score | Percentage |
| --- | --- | --- |
| 2. itrublu | 5.66 | 47.16% |
We've seen the best and the worst, but how did your fearless leaders do?
| Name | Net Score | Percentage | Overall Rank |
| --- | --- | --- | --- |
| Ohio Bear | 7.95 | 66.23% | 29th |
HydroTech was the least accurate?! And I was the most accurate?! THE BALANCE OF POWER AT CGB WILL BE FOREVER CHANGED!!!
You can think of the scores as percentages, with the grades that go along with them. That means only two of you passed and the rest will be retaking the course. See you in fall of 2011!
Methods: You thought I made it all up, didn't you? After all, I was the most accurate mod and in the 90th percentile! Read on for how I calculated everyone's scores.
To generate your scores, I took everyone's predictions and, for each game, found the absolute difference between the prediction and the actual outcome. This gives us 12 scores for each of the 300+ submissions. I wanted to penalize predictions that were far from the actual outcome, so I took the square root of each score. So if you guessed .5 and the outcome was .34, your score of .16 was turned into .4. Likewise, if you guessed .39, your score of .05 was turned into .22. So the farther off you were, the more you were penalized (caveat: this is technically not true for large errors, but such errors were uncommon enough that this is not a big issue). I also computed the scores without penalties, but I felt this was a better method for assessing one's accuracy.
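The per-game penalty works out like this (a sketch in Python; the second guess, 0.39, is chosen so its error against a 0.34 outcome comes to 0.05):

```python
import math

def game_penalty(prediction, outcome):
    # Absolute error between prediction and outcome, square-rooted
    # to apply the penalty described above
    return math.sqrt(abs(prediction - outcome))

print(round(game_penalty(0.5, 0.34), 2))   # error 0.16 → 0.4
print(round(game_penalty(0.39, 0.34), 2))  # error 0.05 → 0.22
```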
Since there were 12 games, I summed each person's 12 square-rooted scores to produce a net score. The lower your score, the more accurate you were. In theory, the most you could possibly be off in a game is 1 (in which case Cal shut out or was shut out by its opponent and you predicted, respectively, a certain defeat or a guaranteed win). That gives a maximum possible net score of 12.
I took your total scores and subtracted them from 12, so that higher scores are now associated with higher accuracy, then divided that number by 12 to get a percentage (because everyone loves percentages). Here's a look at the distribution of percentages (a.k.a. how you all failed to achieve passing scores). Better luck next time!
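Putting the whole method together, here's a rough sketch (the function name is my own, and the perfect-prediction example at the end is hypothetical):

```python
import math

def accuracy_percentage(predictions, outcomes):
    # Sum the square-rooted errors over the 12 games to get a net score,
    # then subtract from 12 and divide by 12 so higher numbers mean
    # higher accuracy
    net_score = sum(math.sqrt(abs(p - o)) for p, o in zip(predictions, outcomes))
    return (12 - net_score) / 12 * 100

# A hypothetical perfect prognosticator scores 100%
print(accuracy_percentage([0.5] * 12, [0.5] * 12))  # → 100.0
```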