A little over a year ago, the Justin Wilcox era began at Cal, with fanfare increasing as he hired a tremendous staff of assistants. While many of us envisioned a bright future for Cal football, uncertainty remained about how long the turnaround would take. As such, expectations were modest heading into 2017. At the beginning of the season CGB readers predicted 5.76 wins for the Bears. We obviously fell a little short of that, yet many of us were quite pleased with the first year under Wilcox. In fact, according to the scoreboard the team overachieved as well. So how is it possible that Cal both underachieved and overachieved this season?
Expectations vs. Outcomes
First, let’s take a look back at our predictions last season and the outcome for each game. In August I asked you all to predict the Bears’ chances of defeating each opponent. I list those preseason predictions in the table below, along with the final outcome for each game.
Opponent | Win Chance | Outcome |
---|---|---|
at North Carolina (3-9) | 44.8% | W (35-30) |
Weber State (11-3) | 93.0% | W (33-20) |
Ole Miss (6-6) | 49.6% | W (27-16) |
USC (11-3) | 23.0% | L (20-30) |
at Oregon (7-6) | 39.8% | L (24-45) |
at Washington (10-3) | 20.9% | L (7-38) |
Washington State (9-4) | 49.4% | W (37-3) |
Arizona (7-6) | 66.3% | L (44-45) |
at Colorado (5-7) | 42.6% | L (28-44) |
Oregon State (1-11) | 64.0% | W (37-23) |
at Leland Stanford Junior University (9-5) | 40.1% | L (14-17) |
at UCLA (6-7) | 42.1% | L (27-30) |
Total | 5.76 wins | 5 wins |
Heading into the season we only favored the Bears in three games: Weber State, Oregon State, and Arizona (which everyone expected to be mediocre until Khalil Tate briefly made the Wildcats look like Pac-12 title contenders). Cal won an uncomfortably close game against Weber State, took care of business against the Beavers, and fell a two-point conversion short of beating a much-tougher-than-anticipated Arizona team. Adding up our preseason win probabilities gives us an expected total of 2.23 wins in those favored games. Winning only 2 is disappointing, but not much of a departure from what should have happened.
Seven games were in the 40-60% toss-up range: Cal won 3 of these, compared to an expected win total of 3.08. That’s pretty close to meeting expectations, so we shouldn’t be too disappointed with those outcomes.
Finally, the USC and UW games looked unwinnable prior to the start of the season. On average, the Bears would only be projected to win 0.44 games against those two. Cal played a pretty good first half against USC, so that’s close enough to meeting expectations.
Across these three sets of games (should win, might win, and won't win), we came close to matching our preseason predictions. Each category fell a fraction of a win short, and those shortfalls add up to the 0.76 wins by which we missed expectations. While this review is somewhat insightful, comparing our predictions to win-loss outcomes obscures the fact that some wins are much better than others, and the same goes for losses. Destroying Wazzu was obviously a much more satisfying win than that nail-biter against Weber State. Likewise, the Big Game loss felt better than that head-scratching loss in Boulder. In the next section we take a more nuanced approach to comparing outcomes against our predictions.
Expectations vs. Reality
The last section was easy. Outcomes are indisputable measures of reality. No sane person will look at the Cal-UCLA game and say that Cal actually won it. However, a reasonable person may suggest that the close final score means that if Cal and UCLA were to play again, Cal would have a decent shot at winning. It follows, then, that if we gave Cal a 55% chance to win and they won 28-24, the Bears would have roughly met expectations. If we gave Cal a 55% chance to win and they won 55-17, they would have greatly exceeded expectations. Yet both games show up identically in the win column. Saying that we met expectations in both cases simply because we won ignores the fact that the final score reflects how well the team performed. By incorporating the score into our assessment, we can get a more nuanced understanding of how well the team met expectations.
How do we measure reality? Objectively defining reality was difficult enough before the rise of alternative facts, fake news, and Herm Edwards' tenure at ASU. Nevertheless, we persist in defining it, and we're using the Pythagorean Projection to do so. Following the line of reasoning outlined above, the Pythagorean Projection uses the final score to estimate the winner's chance of winning again if the two teams had a rematch.
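The article doesn't spell out the formula, but the standard Pythagorean expectation is points_for^x divided by (points_for^x + points_against^x). The exponent is an assumption on my part; the commonly cited football value of 2.37 happens to reproduce the "Reality" percentages in the table further down. A minimal sketch:

```python
def pythagorean_projection(points_for: float, points_against: float,
                           exponent: float = 2.37) -> float:
    """Estimated chance the team scoring `points_for` wins a rematch.

    The 2.37 exponent is a commonly cited value for football; the
    article doesn't state which exponent was used, but 2.37 matches
    its "Reality" column.
    """
    pf = points_for ** exponent
    pa = points_against ** exponent
    return pf / (pf + pa)

# Cal 35, North Carolina 30: roughly a 59% chance in a rematch.
print(round(pythagorean_projection(35, 30) * 100, 1))  # 59.0
# The 37-3 rout of Washington State: near-certainty.
print(round(pythagorean_projection(37, 3) * 100, 1))   # 99.7
```

Note how nonlinear the projection is: a five-point win over UNC translates to a modest 59% rematch edge, while the 34-point Wazzu blowout rates as a 99.7% lock.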
I calculated the Pythagorean Projection for each game of the 2017 football season. Below I’ve plotted our preseason predictions (blue) against our win likelihood based on the final score (gray). Based on the above calculation, any measure of reality above 50% is a win while anything below is a loss. Where reality is higher than expectations (i.e. the gray point is above the blue point), the team exceeded expectations; where our predictions are higher than reality, the team failed to meet expectations.
We opened the season with an overachieving win over the Tar Heels and followed that up with a bizarrely underachieving win over Weber State. An overachieving win over the Ackbars Rebels capped our 3-0 start. Cal roughly met expectations against USC before underperforming on the road against the Ducks and Huskies. A wildly overachieving win over Wazzu breathed life into our hopes of attaining bowl eligibility. Alas, underperformances against Arizona and Colorado put those bowl hopes in doubt until an overachieving win against the Beavers. The Bears almost perfectly met expectations in the Big Game and against UCLA. Unfortunately, that meant falling painfully short against both rivals.
Overall, the Bears overachieved four times, underachieved five times, and met expectations three times. That all balances out to a team that roughly met expectations. As another reminder of how heartbreakingly close we were to bowl eligibility, the sum of our reality scores suggests that we should have won, on average, 5.93 games. That means that according to the scoreboard, the team overachieved relative to our preseason expectations. It’s a shame that wasn’t reflected in the final record, where we managed to underachieve.
Pregame Predictions vs. Preseason Predictions vs. Reality
One shortcoming of our preseason predictions is that they cannot be updated with new information that develops over the course of the season, such as Khalil Tate terrorizing the Pac-12 or the wave of injuries that wiped out the Bears. To remedy that, I’ve included our predictions from our weekly report card series. In each of the report cards I ask readers to tell us how likely the Bears are to defeat the next opponent. Below I’ve updated the image from above with our pregame predictions in dark blue.
One amusing feature of our pregame predictions is that they are incredibly volatile. Our win probabilities skyrocket after big wins like those over Ole Miss and Wazzu, and they plummet after woeful losses like the Colorado debacle.
Comparing predictions to reality we again find that the Bears overachieved in 5 games (UNC, Ole Miss, Wazzu, OSU, UCLA), underachieved in 6 (Weber St., USC, Oregon, Arizona, Colorado, LSJU), and met expectations once (UW). Adding up the win probabilities from our pre-game predictions puts us at 6.31 wins. According to our pre-game predictions, we definitely underachieved this season. However, I’m always rather skeptical of those predictions because of that volatility.
In Which Las Vegas Lost An Incredible Amount of Money
Finally, we examine the predictions implied by Las Vegas’ moneylines. In theory, oddsmakers aim to set lines that attract roughly equal money on both sides of a game. As a result, Vegas is usually quite accurate in its predictions about game outcomes. When Las Vegas gave Cal an over/under of 3.5 wins this season, many of us were rather surprised and concerned that perhaps we were a little too optimistic heading into 2017. Fortunately, Vegas was very, very, very wrong.
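If you've never worked with moneylines, converting one into an implied win probability is straightforward. This is a sketch; the lines below are hypothetical examples, not the actual 2017 numbers:

```python
def implied_win_probability(moneyline: int) -> float:
    """Convert an American moneyline to its implied win probability.

    Raw implied probabilities include the bookmaker's margin (the
    "vig"), so the two sides of a game sum to slightly more than 100%.
    """
    if moneyline < 0:
        # Favorite: risk |moneyline| to win 100.
        return -moneyline / (-moneyline + 100)
    # Underdog: risk 100 to win `moneyline`.
    return 100 / (moneyline + 100)

# Hypothetical lines: a +250 underdog carries an implied ~28.6% chance,
# while a -300 favorite carries an implied 75%.
print(round(implied_win_probability(250) * 100, 1))   # 28.6
print(round(implied_win_probability(-300) * 100, 1))  # 75.0
```

Because of the vig, a careful analyst would normalize the two sides of each game so they sum to 100% before calling the result a prediction; the raw conversion above slightly overstates both teams' chances.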
According to Vegas, the Bears overachieved a whopping eight times this season, underperformed twice, and met expectations twice. Vegas consistently underrated the Bears over the course of the season, from the opener in Chapel Hill to the finale in Pasadena. Adding up Vegas’ win probabilities gives Cal a meager 3.68 wins. Even after Cal started the season 3-0, Vegas still underrated the Bears on a consistent basis. I hope several of you Cal fans made some good money at the oddsmakers’ expense this past season.
If you dismiss charts as eye candy for numerically illiterate heathens, then you’re 1) in luck and 2) oddly passionate about data presentation. Below I have a table with our various predictions compared to reality.
Opponent | Outcome | Reality | CGB Preseason Predictions | CGB Pregame Predictions | Vegas Predictions |
---|---|---|---|---|---|
UNC | W 35-30 | 59.0% | 44.9% | 44.9% | 17.6% |
Weber St. | W 33-20 | 76.6% | 93.0% | 97.5% | 96.0% |
Ole Miss | W 27-16 | 77.6% | 49.6% | 49.2% | 27.9% |
USC | L 20-30 | 27.7% | 23.0% | 64.1% | 11.9% |
Oregon | L 24-45 | 18.4% | 39.8% | 56.9% | 12.4% |
UW | L 7-38 | 1.8% | 20.9% | 7.4% | 2.9% |
WSU | W 37-3 | 99.7% | 49.4% | 36.5% | 13.6% |
Arizona | L 44-45 | 48.7% | 66.3% | 77.3% | 33.7% |
Colorado | L 28-44 | 25.5% | 42.6% | 69.1% | 36.7% |
OSU | W 37-23 | 75.5% | 64.0% | 43.3% | 72.1% |
LSJU | L 14-17 | 38.7% | 40.1% | 54.3% | 16.1% |
UCLA | L 27-30 | 43.8% | 42.1% | 30.5% | 27.2% |
AWARDS!
Finally, we’re contractually bound by our SBNation Overlords to hand out awards whenever we evaluate predictions, so here they are.
Ursadamus
The first award goes to those whose preseason predictions were closest to the outcomes on the field. In case you’re curious (which you’re probably not unless you’re a big ol’ stats nerd), I calculated the following metrics by taking the sum of each reader’s squared differences from reality and dividing that sum by 12 (i.e., a mean squared error). If you’re not a stats nerd and you’re still reading this paragraph, you have my deepest condolences.
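For the stats nerds still here, a sketch of that calculation. The article doesn't state the exact scaling behind the leaderboard numbers below, so treat this as the shape of the metric rather than a reproduction of those values:

```python
def deviation_from_reality(predictions, reality):
    """Average squared gap between a reader's game-by-game predictions
    and the Pythagorean "reality" values, both as probabilities."""
    assert len(predictions) == len(reality)
    return sum((p - r) ** 2
               for p, r in zip(predictions, reality)) / len(predictions)

# A perfect prophet scores 0.0; confident wrongness is punished hardest.
# E.g., predicting 100% and 0% for two games that were really coin flips:
print(deviation_from_reality([1.0, 0.0], [0.5, 0.5]))  # 0.25
```

Squaring the differences means one wildly wrong pick (say, giving Wazzu no chance before the 37-3 rout) hurts your score far more than several small misses.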
Username | Deviation from Reality |
---|---|
1. CalBear91 | 0.3718 |
2. 12345 | 0.3811 |
3. turtle | 0.3896 |
4. Alex Ghenis | 0.4288 |
5. Mitchgobears | 0.4697 |
6. Calarchitect | 0.4716 |
7. Nate58 | 0.4831 |
8. rare bear | 0.4985 |
9. Blungld | 0.5195 |
10. AAHQ | 0.5226 |
CalBear91 leads the way, followed by 12345 and turtle. None of the CGB writers finished in the top 10 because we’re all woefully lacking in editorial integrity.
Miss Cleo
Next we have those whose predictions were furthest from reality.
Username | Deviation from Reality |
---|---|
1. IAmJustinsLustyTongue | 3.9966 |
1. SanBernardinoBear | 3.9966 |
3. BerdoBear | 3.9845 |
4. StanfordEnvy | 3.8564 |
5. Old Bear 71 | 2.4597 |
6. chazhorn11 | 2.4038 |
7. OCBEAR1983 | 2.2963 |
8. Oski Disciple | 2.2893 |
9. Dgh | 2.2770 |
10. The Ghost of Joe Roth | 2.0069 |
I’m not sure what’s in the water in San Bernardino, but it clearly induces delusion. You had better get that taken care of before we run our first round of 2018(!) season predictions in a few months. Until then, Go Bears!
Poll
How did Cal perform compared to your expectations?
This poll is closed.

- Much worse than expected: 0%
- Worse than expected: 0%
- Slightly worse than expected: 7%
- About as expected: 23%
- Slightly better than expected: 32%
- Better than expected: 30%
- Much better than expected: 4%