There’s been a lot of talk lately about the Associated Press college football poll being biased toward SEC teams. Apparently the AP has been taking that criticism to heart. They recently released a “study” which, they say, showed there was no bias toward the SEC in their college football poll. In their study, they took the results of several hundred games, totaled them together, got the results they preferred, and called it good. We can do a little better than that.
The criteria they used were fairly simple. Every time a ranked team in a major conference played another team in its conference from 2009 to 2013, they took the number of positions the team moved in the AP poll and added it to a running total. Separate sums were kept for movement after a win and movement after a loss, further separated by the conference each team was in. When finished, they divided each total by the number of games behind it to find the average change in the poll for each conference after a win and after a loss. They then did the same for 2014 games played through October 18, when the article was published. Unranked teams were counted as being in the 26th position. The released results were as follows:
Average rise in poll spots of ranked teams after beating a conference opponent from 2009-2013
ACC — +2.0
Big Ten — +1.9
Big 12 — +1.8
Pac-12 — +1.6
Southeastern — +1.5
2014 (through games of Oct. 18)
Big 12 — +3.1
SEC — +2.8
Pac-12 — +2.3
Big Ten — +1.7
ACC — +0.4
Average drop in poll spots of ranked power-five teams after losing to a conference opponent from 2009-2013
Pac-12 — -5.3
Southeastern — -5.5
Big 12 — -6.0
Big Ten — -6.0
ACC — -6.6
2014 (through games of Oct. 18)
ACC — -4.3
Big Ten — -5.5
Big 12 — -7.0
Pac-12 — -7.5
SEC — -7.5
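The AP’s method, as described above, can be sketched in a few lines. Unranked teams count as position 26, and the per-conference figure is just total movement divided by number of games. The game data below is illustrative, not the AP’s actual dataset:

```python
# Sketch of the AP's averaging method: unranked counts as position 26,
# and the conference average is total poll movement divided by games.
# The sample games below are made up for illustration.

UNRANKED = 26  # the AP treated unranked teams as 26th

def poll_change(rank_before, rank_after):
    """Positions climbed (positive) or dropped (negative); None = unranked."""
    before = rank_before if rank_before is not None else UNRANKED
    after = rank_after if rank_after is not None else UNRANKED
    return before - after  # moving from #10 to #7 is a climb of 3

def average_change(games):
    """Average poll movement over a list of (before, after) rank pairs."""
    return sum(poll_change(b, a) for b, a in games) / len(games)

# Illustrative wins: (rank before, rank after)
sample_wins = [(1, 1), (10, 7), (22, 18)]
print(round(average_change(sample_wins), 3))  # 2.333
```

Note that a team already at #1 contributes a 0 to the total no matter how impressive the win, which is exactly the problem examined below.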
While this is interesting, having run these kinds of numbers for years, I know this method is useless for what they are trying to measure. Even people who haven’t run the numbers could spot the obvious problems. The most glaring issue is that teams at the top of the poll have nowhere to rise and teams at the bottom have nowhere to fall, and that’s just the most obvious one. A large number of factors affect how far a team moves in the polls: the quality of the opponent, the margin of victory, what other teams in the poll do, and the time of year the game took place. I’m going to focus on just one of them: where a team started in the poll.
The clearest example of how a team’s starting position affects the totals above is a breakdown of the average 4.3 positions the ACC fell after a loss in the 2014 season. If we are to believe the AP study, we have to say the ACC is getting the most favorable treatment after a loss, because 4.3 is a smaller average drop than any other conference’s. Looking at the three games that make up that average: Louisville dropped from #21 to unranked after losing to Virginia, a drop of 5; Clemson dropped from #22 to unranked after losing to Florida State, a drop of 4; and Georgia Tech dropped from #22 to unranked after losing to Duke, also a drop of 4. In every case the ACC team dropped the maximum number of positions it could, and yet the ACC ended up being the conference the poll was most biased toward, if we take the AP’s reading. That alone should tell you the peril of using a raw average. With that in mind, let’s examine how the AP concluded there was “no evidence of SEC bias” in their poll.
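The three ACC losses can be checked directly. With unranked treated as position 26, a team’s drop is capped at 26 minus its pre-game rank, and every one of these teams hit that cap:

```python
# The three ACC losses from the 2014 sample. Unranked counts as 26,
# so a team ranked #22 can fall at most 4 spots no matter how badly
# it loses -- the cap, not the voters, drives the "small" average.
UNRANKED = 26

acc_losses = {
    "Louisville":   21,  # lost to Virginia, fell to unranked
    "Clemson":      22,  # lost to Florida State, fell to unranked
    "Georgia Tech": 22,  # lost to Duke, fell to unranked
}

drops = {team: UNRANKED - rank for team, rank in acc_losses.items()}
print(drops)  # each drop is the maximum possible from that rank
print(round(sum(drops.values()) / len(drops), 1))  # 4.3, the AP's figure
```

The “best” average drop in the study is therefore made entirely of worst-case results.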
From 2009 to 2013, there were 177 games in which an AP-ranked SEC team beat another SEC team. In those 177 games, the winning team climbed a total of 264 positions, an average of 1.492 positions per game. That is indeed the lowest “average” of any of the current major conferences, but what positions were those 177 teams in before winning their games? The following chart lists each position in the AP poll, followed by the number of the 177 games played from that position.
SEC AP GAME RANK WHEN PLAYING A CONFERENCE TEAM 2009-2013
What stands out is that a full 31 of those 177 games were won by an SEC team ranked #1. The Pac-12 had the second-most conference games won at #1, with 6. That means over 17% of the games in the SEC’s average came from games in which the SEC team could not climb at all, because it was already at the top. In fact, positions 1 through 3 account for the three highest game totals in the entire study, 57 games in all, almost a third of the games that went into the SEC’s average. Those teams climbed a total of 7 positions in those 57 games, an average of 0.123 positions per game. Remove teams in the top 5 from the totals and the SEC’s average rise per game jumps to 2.300, the highest of any major conference. The SEC had 67 conference wins by top-5 teams; the Pac-12 had the second most, with 28. It would be a shock to see the SEC as anything but last once all those games are folded into the totals. I find it difficult to believe the AP didn’t realize this before releasing their findings claiming there was no bias in their poll.
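The arithmetic above can be checked from the quoted figures alone. The one number not stated directly, the total climb contributed by the 67 top-5 wins, follows from the 2.300 average for the remaining games:

```python
# Reproducing the arithmetic on the SEC figures quoted above.
total_games, total_climb = 177, 264    # all ranked SEC conference wins
top3_games, top3_climb = 57, 7         # wins by teams ranked #1-#3
top5_games = 67                        # wins by teams ranked #1-#5

print(round(total_climb / total_games, 3))  # 1.492, the AP's "low" SEC average
print(round(top3_climb / top3_games, 3))    # 0.123 -- top-3 teams barely climb

# A 2.300 average over the remaining 110 games implies the 67 top-5
# wins contributed only 264 - 2.300 * 110 = 11 total positions.
rest_games = total_games - top5_games
top5_climb = total_climb - 2.300 * rest_games
print(round(top5_climb, 1))                 # 11.0
print(round((total_climb - top5_climb) / rest_games, 3))  # 2.3
```

In other words, more than a third of the SEC’s wins contributed almost nothing to its total, dragging the raw average down regardless of how voters treated the conference.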
By now, I hope you understand why the numbers the AP put out are unreliable for detecting bias in the poll. Now I want to show how each conference should have scored, based on how many games each conference played at each position, and compare that to the five-conference average rise or fall at that position. The following chart shows the average rise or fall that teams from the five conferences had when ranked at each position in the AP poll.
It comes as no surprise that as you move toward the bottom of the chart, the average climb in the poll gets larger. Next, for each conference, I multiply the number of games played at each position by the five-conference average at that position to find what each conference should have scored if it had climbed the average amount from every position. The following chart shows the average positions per game each conference actually climbed per conference win, compared with the average it would have climbed if it had risen the average amount at each position:
Calculating what each conference should have done, we see the SEC got the highest rise above expectation. If the SEC had risen the average amount at each position, it would have risen only 1.3373 positions per win. The amount it actually climbed was 0.1542 positions above that: an 11.5% rise over what it should have averaged given the number of games the conference played at each position. I have charts showing that if you calculate drops after losses, the SEC got the biggest break in the polls there as well. A decent study by the AP should have shown the opposite of the results they came up with, and I suspect they know it.
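The position-weighted comparison works like this. The per-position distribution below is an illustrative stand-in, since the article’s chart isn’t reproduced here, but the final SEC figures are the ones quoted above:

```python
# The weighting idea: expected climb = sum over positions of
# (games played at that position) * (five-conference average climb
# at that position), divided by total games. The per-position
# numbers here are hypothetical stand-ins for the article's chart.
games_at_position = {1: 31, 10: 50, 20: 96}
avg_climb_at_position = {1: 0.1, 10: 1.2, 20: 2.5}

n = sum(games_at_position.values())
expected_demo = sum(g * avg_climb_at_position[p]
                    for p, g in games_at_position.items()) / n

# The article's quoted SEC result, using the real totals:
actual = 264 / 177           # 1.4915 positions per win
quoted_expected = 1.3373     # position-weighted expectation from the chart
excess = actual - quoted_expected
print(round(excess, 4))                          # 0.1542
print(round(100 * excess / quoted_expected, 1))  # 11.5 percent above expected
```

The point of the weighting is that it compares each conference only against how teams in the same poll positions moved, removing the ceiling effect that sank the SEC’s raw average.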
Now, this is far from conclusive, but it is a far better indication than the “study” the AP released. A weighted study of the numbers indicates that the SEC does get bigger breaks in the polls, as people already suspected. That is not out of line with the history of polling, either. Teams that have had success in the past, or that come from stronger conferences, have a history of preferential treatment in the polls. It’s not hard to justify, either, because teams playing tougher schedules might be better than their records indicate. In recent years, the SEC has won a higher percentage of its quality non-conference games than the other conferences have, and it has shown in the polling results. That might have been a better thing for the AP to study to justify the preferential treatment, instead of denying it.