Maybe I missed it, but is there a page where you initially explained what these rankings were intended to show?
I see lots of different rankings, like the quantitative methods designed to measure achievement up to this point, which fairly describes most computer ranking systems, and then the human pollsters, who seem to start the season predicting the finish and slowly through the season blend in the respective achievements. What I’m asking then is are you casting your ballots on the impression that at the end of the season Vanderbilt will be 21st in the country? Or are you casting your ballots based on the impression that up until now, Vanderbilt has accomplished more than all but 20 other teams? And so on through the rankings...
Although simply calling our Top 25 a 'Power Poll' wouldn't be entirely inaccurate, I felt that this question deserved a deeper, more thorough answer. There are several ways of ranking college football teams, and each is intended to show a different relationship amongst the top teams in the country. If we don't first establish what relationship it is we're trying to show, however, it's hard to say that such a ranking has any value at all. Take, for example, the following ranking of breakfast cereals:
1. Honey Nut Cheerios
2. Lucky Charms
3. Cocoa Puffs
4. Cinnamon Toast Crunch
Now, I'll bet most of you looked at my rankings and immediately disagreed with them. "How could you include Cereal X," you say, "and forget about Cereal Y?" "This order is all wrong. You know nothing about breakfast cereals!" Perhaps. Still, until you know how I'm ranking these cereals, arguing with me is rather pointless. I could be ranking them on tastiness, or on how fresh they stay in milk. I could rank them on price, or on some sort of tastiness/price value scale. I could simply throw all of that out and rank them on the entertainment value of the packaging and any prizes that might be inside. Each of these would be a valid ranking, though some would clearly be more useful than others. The smart shopper chooses the ranking system that most closely approximates what he or she is trying to accomplish. Want to get healthy? Look into Grape Nuts, or Total. Trying to keep the kids happy? Stick with the Lucky Charms.
And football teams? There are plenty of ways to rank them, too.
Fortunately for me and my ability to have time to watch lots of Futurama reruns, Sunday Morning Quarterback has already written on this subject. Let's quote him (but by all means, read the full article -- I would have quoted the whole thing if logistics and/or ethics allowed):
First up, the Power Poll, or as SMQ terms it, the 'Holistic Method':
The apparently preferred method, which asks simply, "Who's better?" or "Who would beat who on a neutral field?" or something like that. No measurables, just a human brain sorting information as it sees fit - a kind of almost metaphysical effort to determine the "essence" of a team in its current incarnation. If you're a voter and haven't given much thought to your overriding method, this is almost definitely what you're doing.
You can really sense the disdain, can't you? SMQ is not fond of this method, but I think he sells it short. Just because a voter isn't using a mathematical model to construct their ordering doesn't mean their reasoning lacks internal logic or well-thought-out arguments.
Still, he makes a good point; a voter could assemble any haphazard hodgepodge of teams and call it a 'Power Poll' using whatever justification they liked, to the point where those arbitrary justifications needn't even be internally consistent. If we're going to use this method, we need to guard against lazily throwing together ballots that contradict themselves.
Next up, Résumé Ranking:
A method that attempts to rank based strictly on the measurable: if each team had a resume for this season and this season only, and its name at the top was blacked out, how would the voter rank those resumes? Takes into account only games played to date this season - these are folks who always complain about polls that come out and distort reality before October. SMQ's preferred method all year, and seemingly the default method for most end-of-season rankings.
A noble effort, I think. Résumé voters attempt to completely ignore whatever reputation or conference affiliation a team may possess in assembling their rankings, judging teams solely by what they've accomplished on the field. Ideally, this is what teams *should* be judged on, and not some sort of wishy-washy 'perceived strength' or anything, but in practice, it can be difficult to vote completely blindly.
My problem with this method, especially early in the season, is that even when you avoid making judgments about a team, you still have to make judgments about how 'good' their wins were. And that means making judgments about how good their opponents were. Or their opponents' opponents. In effect, a résumé voter hasn't eliminated arbitrary judgments of teams; they've simply shifted them to a different level.
Some polls attempt to approximate the football season's Predicted Finish:
At the other pole, it's the mock stock approach - explicitly embraced most weeks by Orson and ripped off at least once by Gameday - of "buying" and "selling" (or "holding") teams based on where they're going to end up at year's end. These are the people who have West Virginia at two, or, weirder, one, based on the Mountaineers' softy schedule. It's not about what you've done, or how "good" you are - it's only about where you wind up.
I dislike these polls, not merely because they give too much credit to teams with soft schedules, but because in doing so, they reward teams not for quality of wins, but merely for quantity. Going undefeated is great, and should be applauded, even rewarded; however, having the fewest losses doesn't necessarily prove to me that a team is actually the best. Oftentimes, it merely demonstrates the schedule-maker's ability to line up a string of patsies behind a couple of ESPN Thursday Night specials.
And finally, we get Computer-Generated Rankings:
Like the "Resume" method, eliminates speculation and abstractions like perception and previous history to the extreme by running cold, hard numbers to reach a conclusion most bordering as closely as possible to scientific fact. The much-maligned computer guys.
It seems like this is what résumé-rankers aspire to: a complete analysis of the football season under scrutiny, with a total disregard for any team's reputation. While there are weaknesses with this method (I'm sure many of you have looked at the computer rankings used by the BCS and wondered, "How in the world did they come up with that?"), it's not really worth discussing here, because I am not a computer. Perhaps someday I'll devise my own computer ranking method; we can talk then.
So, then, what method are we using to rank these teams? Well, I shall not presume to speak for either Yellow Fever or CBKWit, but for my own part, I would call my efforts a blend of the first two methods. Call it a Power Poll that leans heavily, though by no means exclusively, on teams' résumés. What? Let me explain:
In constructing my ballot, I attempt to provide a snapshot of who the Top 25 teams are 'right now'. Not at the end of the season, and not necessarily how good they were in Week 1, either. I do try to ask myself questions like, "Who would win on a neutral field?" but I try not to get too caught up in looks. A team's résumé is very important - who they've beaten, where they beat them, and by how much. Just as important is who they lost to, and how they lost. The past is not forgotten (hence why Oregon State was never considered for this week's ballot), yet it may offer diminishing influence; if OSU runs off a number of wins, demonstrating that last Thursday's toppling of USC was "improvement" and not merely a fluke, they shall rise in my rankings accordingly.
More than anything else, I'm looking for "evidence", of whatever kind I can get my hands on, that Team A is better than Team B. Head-to-head matchups are great, but common opponents and relative scoring margins are also useful. Anything I can point to and say, "This happened, and so it is likely to happen again." Often, however, this isn't enough. To rank two teams with dissimilar but relatively strong résumés, one must often look to wishy-washy human judgments such as "offensive dominance" or "has a better defense". Especially early in the season, we just don't have enough data points to offer strong evidence for ordering any team over any other team; Oklahoma is almost certainly better than Louisiana-Monroe, but can you tell me whether or not Kansas is better than Utah?
In the coming weeks, I shall attempt to further elucidate my method. For now, however, here is this week's Top 25. Normally, I offer a few comments on the rankings, attempting to explain or justify various positions taken within. Today, I offer it without comment, and invite you, dear reader, to examine our poll and determine if indeed our method is at least internally consistent. Does it show an up-to-date power relationship amongst the top teams in college football? Are teams with better résumés ranked higher? Judge us, and I shall attempt to explain and defend our position, or, failing that, capitulate to your superior judgment.
After all, is Vanderbilt *really* one of the 20 best teams in college football?