Cal Football Season in Review: Did the Bears Meet Expectations?

Opponent        Preseason prediction   Pregame prediction   Result
Fresno St.      70.5%                  70.5%                Win
Presbyterian    98.1%                  93.9%                Win
Colorado        71.0%                  80.4%                Win
Washington      54.1%                  57.5%                Loss
Oregon          25.5%                  16.8%                Loss
USC             38.1%                  38.2%                Loss
Utah            53.3%                  50.3%                Win
UCLA            68.4%                  83.9% (lol)          FAIL
Washington St   82.5%                  37.8%                Win
Oregon St       58.0%                  64.1%                Win
Stanford        40.3%                  33.2%                Loss
Arizona St      53.0%                  68.3%                Win
Total           7.128 wins             6.949 wins           7-5

As the bombardment of recruiting and coaching rumors reminds us, the offseason thrives on unchecked speculation and wild predictions. Like watching a multi-car pileup on the freeway (or the defenses in that Baylor-Washington game), we cannot help but have a morbid yet insatiable desire to gobble up these predictions and speculation.

I am certainly not immune to the temptation to offer some speculation of my own (nor are you readers, as evidenced by the hundreds and hundreds of responses we get when we solicit preseason predictions).

So let's take a look at how accurate our predictions were. In the table above, the preseason predictions were tallied in mid-August, while the pregame predictions were collected about a week before each game.
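Since each prediction is a win probability, the "Total" row is simply an expected-win count: sum the twelve per-game probabilities. A quick sketch of that arithmetic, using the preseason column from the table:

```python
# Preseason win probabilities, one per game, in schedule order.
preseason = [0.705, 0.981, 0.710, 0.541, 0.255, 0.381,
             0.533, 0.684, 0.825, 0.580, 0.403, 0.530]

# The expected number of wins is just the sum of the probabilities.
expected_wins = sum(preseason)
print(f"{expected_wins:.3f} wins")  # 7.128 wins
```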

At first glance, it looks like we generally won games we were supposed to win (except that UCLA debacle) and lost the games we were supposed to lose. We roughly broke even on the toss-ups.

Of course, predicting the outcome in terms of wins and losses is interesting, but rather simple. What happens if we match up our predictions with the final score? Are we still fairly accurate or is this just a gilded curtain hiding our woeful inaccuracy? Let's break it down.

First, let's figure out how I relate the outcome on the field to our predictions. The way I compute "reality" in the following graphs is pretty simple. I take the number of points Cal scored and divide it by the total number of points scored in the game. The reasoning behind that is this: to the extent that Cal and its opponent are evenly matched, we would expect the score to be close to even. If Cal would win 7 times out of 10, we would expect Cal to outscore the opponent 7:3 on average. Following this logic, we can relate our predictions to the final score.
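As a concrete sketch, here is that "reality" computation in Python (the 28-21 score below is hypothetical, purely for illustration):

```python
def scoring_share(cal_points, opp_points):
    """Cal's share of the total points scored in a game.

    This is the "reality" value each prediction is compared against:
    a team that would win 7 times out of 10 should, on average,
    outscore its opponent 7:3.
    """
    return cal_points / (cal_points + opp_points)

# Hypothetical 28-21 Cal win: Cal took 4/7 of the points.
print(scoring_share(28, 21))  # 0.5714285714285714
```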

I used this method to compute "reality" when revisiting last year's predictions and there were some differing opinions on how else I should have computed it. I am more than happy to hear suggestions for a better method to compute reality.

Now let's get our hands dirty with the results:

[Chart: our pregame predictions plotted against the actual result for each game]

Overall, we did a pretty good job. Other than that USC-Utah-UCLA stretch, we were very close in most cases.

In fact, looking at this chart can explain our collective swings of opinion towards the team as the season wore on. There was some grumbling as we performed slightly below expectations against Fresno St, Colorado, Presbyterian (105-0!), and UW. We did as expected against Oregon, so most of us were not too disappointed. Then things got weird...

We underperformed in the turnover-fest against USC and then seemed to turn a corner when we posted an overachieving victory against Utah. Of course, then the UCLA fiasco happened and we all thought it was the end of the world.

We righted the ship against Wazzu and then exceeded expectations in our final three games.

So the season can be summarized as follows: mild disappointment, utter confusion, then a series of uplifting performances. That all sounds accurate enough.

Just for the lulz, let's compare our preseason predictions to the highly volatile predictions we made in the week before each game.

[Chart: preseason vs. pregame predictions for each game]

How predictable: after each win we start pumping the sunshine with overconfident predictions and after each loss we expect nothing but DOOOOM in the following game. The swing from the Utah win to the UCLA loss is most comical. We were walking on sunshine after blowing out Utah while watching the Bruins crumble in Arizona on national television. After the UCLA loss we didn't even think we had a 40% chance of winning against Wazzu (we were terrified of the Lobbster!).

To summarize, we pretty much broke even this season. We performed about as well as expected in most of our games except for that mid-season stretch. It's tough not to feel some disappointment after losing to Texas in the bowl game, but overall we met expectations. I'm sure we will all admit that our expectations were a bit lower than usual this season, as we were coming off a losing season and breaking in a new quarterback. The big challenge will be meeting next season's expectations, which will undoubtedly be raised to 8 or 9 wins.


Of course, we can't revisit our predictions without handing out some awards. Just as we do with the weekly report cards, we tallied the results and put together a list of individuals worthy of recognition (for better or for worse). First up is the Ursadamus Award, which goes to those with unparalleled predictive abilities. After that is the Miss Cleo Award, which goes to those whose predictions were furthest from reality. If you're really interested in how I computed these numbers (you don't trust my judgment? Well, I never!), let me know in the comments and I'll explain it. It's a little complicated, but I stand by their reliability and accuracy.
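The exact scoring formula isn't disclosed above, so purely as an illustration, here is one plausible scheme it could resemble (this is an assumption, not the actual method): sum, over all games, the absolute gap between a commenter's predicted win probability and Cal's actual scoring share, so a smaller overall deviation means sharper predictions.

```python
def overall_deviation(predictions, scoring_shares):
    """Hypothetical deviation metric (an assumption, not the author's
    actual formula): the sum of per-game absolute errors between a
    predicted win probability and Cal's share of the points scored."""
    return sum(abs(p - s) for p, s in zip(predictions, scoring_shares))

# Illustrative three-game example with made-up numbers.
preds  = [0.70, 0.50, 0.30]   # predicted win probabilities
actual = [0.60, 0.45, 0.55]   # actual scoring shares
print(round(overall_deviation(preds, actual), 3))  # 0.4
```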

Ursadamus Award
Name             Overall Deviation   Grade
the beer         3.391               71.7
eltripper        3.420               71.5
Ohio Bear        3.523               70.6
nate             3.578               70.2
no               3.580               70.2
Spazzy McGee     3.583               70.1
xpotster         3.777               68.5
calbeers05753    3.807               68.3
BobbyRozay10     3.825               68.1
jabber           3.836               68.0

the beer is our winner this year, with the smallest overall deviation from the actual results. eltripper finished second, with Ohio Bear claiming the final spot in the top three! "Spazzy Mcgee 2010 Regular Commenter Season Prediction Champion And Form Fucker-Upper" didn't let his Ursadamus Award from last season go to his head, as he posted another top-10 finish.

Nice job all around! the beer and calbeers05753 owe y'all some beers!

Miss Cleo Award
Name                Overall Deviation   Grade
Balls               6.934               42.2
bobsyeruncle        6.595               45.0
Texashaterforlife   6.274               47.7
BenBear             6.231               48.1
Yleexotee           6.066               49.5
shahofCAL           6.058               49.5
gobearsjr           6.019               49.8
Redonkulous Bear    5.862               51.2
LAFILMMAKER         5.835               51.4
Seanthom2004        5.831               51.4

At the other end of the spectrum we have those whose predictions were the absolute furthest from reality. Don't let this stop you from submitting your predictions next time!

And finally we have the predictions from your fearless leaders! Are we worthy of our impressive titles? Sort of: we didn't do too badly overall (our ranks are out of 252 participants).

Rank   Name             Overall Deviation   Grade
3      Ohio Bear        3.523               70.6
23     Berkelium97      4.005               66.6
34     HydroTech        4.126               65.6
41     Kodiak           4.174               65.2
60     Cugel            4.289               65.3
65     atomsareenough   4.330               63.9
110    norcalnick       4.504               62.5

Thanks for participating, everyone! We'll see you in a couple months for our first round of 2012 season predictions!