
History shows Florida State's main BCS competition is August, not Oregon

As the final four weeks of the regular season are upon us, Florida State fans find themselves making the case for their team to be included in the BCS title game. Every aspect of FSU's resume is dissected and compared to those of Oregon and Alabama. Innumerable chickens have been counted before they've hatched, assuming no major November upsets. But is it possible that the decision for the title game was made before the season ever began? Let's take a look at BCS outcomes in the past...

John David Mercer-USA TODAY Sports

Longtime readers of Tomahawk Nation will remember the 2010 offseason with fondness. The Noles were undergoing one of the most significant changes in their history, and Tomahawk Nation was about to go mainstream.  In preparation for the upcoming season, the first under new head coach Jimbo Fisher, TN published its first Preseason Preview magazine.  In this magazine, I took a statistical look at strength of schedule and BCS ranking. Before we begin our look at this season, allow me to quote a few sections of that article.

The correlation between final Harris (AP) and USA Today poll ranking and the strength of schedule rankings of Anderson & Hester and the Colley Matrix was calculated and the results are seen in Table 1.  The further a correlation coefficient is from zero, and the closer it is to -1 or +1, the stronger the relationship between the two items is.  As you can see, there is a very weak relationship between the strength of schedule ratings and the final placement in each of the top 25 rankings.  These particular strength of schedule ratings, components of the BCS formula, actually have very little to do with a team’s final overall rank.  The correlation between the number of wins and losses for top 25 teams and the final ranking in the Harris (AP) and USA Today polls was also calculated and included in Table 1.  Here we see a much stronger relationship.  Human polls are not swayed as much by strength of schedule as they are by the number of wins, or losses, a team has.

Table 1: Are the Human Rankings More Associated With Strength of Schedule or Winning?

    Comparison                          Correlation Coefficient
    A&H SoS with Harris Ranking         -0.00248
    CM SoS with Harris Ranking           0.060925
    A&H SoS with USA Today Ranking      -0.00698
    CM SoS with USA Today Ranking        0.053518
    Harris Ranking with Wins             0.783107
    USA Today Ranking with Wins          0.779179
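For readers who want to see how a coefficient like the ones above is computed, here is a minimal Pearson correlation sketch using only the standard library. The rank/win numbers below are made up for illustration; they are not the original Harris or SoS dataset from the magazine.

```python
# Pearson correlation sketch (standard library only).
# The data below are hypothetical top-25-style numbers, not the
# original Harris/SoS dataset from the preseason magazine.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Final poll rank (1 = best) vs. number of wins: a strong negative r,
# mirroring the ~0.78 magnitude in Table 1 (the sign just depends on
# whether rank is coded 1-is-best or inverted).
ranks = [1, 2, 3, 4, 5, 6, 7, 8]
wins  = [13, 12, 12, 11, 11, 10, 10, 9]
print(round(pearson_r(ranks, wins), 3))
```

Note the sign convention: because rank 1 is best, "more wins" shows up as a strongly *negative* coefficient when correlated against raw rank numbers, which is the same relationship Table 1 reports in magnitude.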

Seemingly, the entire debate between FSU and Oregon has come down to who has the most difficult remaining schedule.  Brad Edwards famously called FSU's number 2 ranking in the initial BCS standings a "mirage," implying that the difficulty of Oregon's remaining schedule would clearly put them at number two.  However, past BCS history has shown (in the quoted section above), that the voters who represent two-thirds of the BCS formula have rarely, if ever, placed a significant value on strength of schedule, instead being swayed more by the number of victories a team has.  Over the next month, I'm sure national media will attempt to frame the debate between FSU and Oregon as the Ducks having a more difficult overall schedule, but that has almost never been a motivating factor in the human component of the BCS standings, and will quite possibly be a red herring argument this year as well.

First, we should determine whether or not the computer rankings are highly affected by their strength of schedule components.  The correlation between each poll’s strength of schedule ranking and team ranking is calculated and shown in Table 3.  Both the Anderson & Hester and Colley Matrix appear to have a weak relationship between strength of schedule and actual ranking.  However, a statistical test shows that each of these two coefficients is significant at the .05 level.  There is a significant relationship between strength of schedule and ranking within each of these computer polls.  However, does this significance mean that strength of schedule has an overall effect on a team’s BCS ranking?  The correlation between each of the SoS measures and BCS ranking is calculated and included in Table 3.  Each of these relationships is very small and not significant according to the statistical test.  The two individual computer rankings are somewhat dependent on strength of schedule.  However, a team’s overall BCS ranking has been historically independent of these strength of schedule measures.

Table 3: Relationship Between SoS Rating and Overall Rating in the Computer Polls

    Comparison                     Correlation Coefficient
    A&H SoS with A&H Ranking       0.25122
    CM SoS with CM Ranking         0.14389
    A&H SoS with BCS Ranking       0.076812
    CM SoS with BCS Ranking        0.02078

The individual computer ratings were significantly correlated with their strength of schedule ratings.  However, strength of schedule rankings and overall BCS rating showed no significant relationship.  Once again we see that historically strength of schedule has had very little to do with BCS rating.  So what does the BCS rating depend on?
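The significance test mentioned above can be sketched as follows: a correlation r computed from n observations is tested with the statistic t = r·sqrt((n−2)/(1−r²)), which follows a t distribution with n−2 degrees of freedom when the true correlation is zero. The sample size below is an assumption for illustration; the article does not state how many team-seasons went into each coefficient.

```python
# Significance-test sketch for a correlation coefficient.
# t = r * sqrt((n - 2) / (1 - r^2)) has a t distribution with
# n - 2 degrees of freedom under the null hypothesis of zero correlation.
from math import sqrt

def t_statistic(r, n):
    """t statistic for testing whether a Pearson r differs from zero."""
    return r * sqrt((n - 2) / (1 - r * r))

# A&H SoS vs. A&H ranking from Table 3: r = 0.25122.
# With a hypothetical n = 75 (say, 25 teams over 3 seasons), t is about
# 2.2, which exceeds the two-tailed .05 critical value of roughly 1.99,
# consistent with the article calling this coefficient significant.
print(round(t_statistic(0.25122, 75), 2))
```

This also shows why the much smaller BCS-ranking coefficients (0.076812 and 0.02078) fail the same test: at any plausible n of this size, their t statistics fall well short of the critical value.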

Further, it should be pointed out that in spite of the significant relationship between strength of schedule and the computer poll rankings, the computer polls are much more strongly related to other factors.  Table 4 shows the correlation coefficients between the Anderson & Hester and Colley Matrix ratings and the number of wins a team has.  A team’s ranking in the computer polls is much more strongly associated with the number of wins that team has as opposed to the quality of schedule that team faced.  Table 4 also shows the correlation between overall BCS ranking and the number of wins a team has; there is a very strong relationship that as the number of wins increases, the team’s rank gets closer and closer to number 1 overall.

Table 4: Relationship Between Computer Rating and Winning

    Comparison                  Correlation Coefficient
    A&H Ranking with Wins       -0.645286
    CM Ranking with Wins        -0.72226
    BCS Ranking with Wins       -0.765974

In a word?  Winning.  As a team wins more games, its BCS rating gets closer and closer to the top.  The true debate this season will not be about the number of wins (if we're assuming no upsets, then both FSU and Oregon will be 13-0 at the end of the season).  If history is any indication, it will not be about strength of schedule, margin of victory, or any of the other soft factors that will inevitably be used by media members to argue the case for their team of choice.  Over the past 15 seasons of the BCS era, there has been a significant predictor of the final teams selected for the championship game that is independent of any of these factors.

Confirmation bias.

Confirmation bias is the tendency of people to believe information that supports what they already believed anyway.  It's our innate human desire to say "I told you so."  Historically, the human voters in the BCS have had an overwhelming urge to prove their ability to predict the upcoming football season.

Since 1998, when Florida State played Tennessee in the inaugural BCS championship game, there have been 38 teams with records identical to or better than that of the number 2 rated BCS team that were left out of the title game.  The teams and their preseason rankings are given below.

1998- Florida State(2) selected over: Kansas State(6), Ohio State(1), UCLA(7), Arizona(24), Wisconsin(20), Tulane(Unranked)

1999- None

2000- Florida State(2) selected over: Miami(5), Washington(13), Virginia Tech(11), Oregon State(Unranked)

2001- Nebraska(4) selected over: Oregon(7), Maryland(Unranked), Illinois(Unranked)

2002- None

2003- LSU(14) selected over: USC(8)

2004- Oklahoma(2) selected over: Auburn(18)

2005- None

2006- Florida(7) selected over: Michigan(14)

2007- LSU(2) selected over: Virginia Tech (9), Oklahoma(8), Georgia(13), Missouri(Unranked), USC(1), Kansas(Unranked)

2008- Florida(5) selected over: Texas(10), Alabama(Unranked), USC(2), Utah(Unranked), Texas Tech(14), Penn State(22)

2009- Texas(2) selected over: Cincinnati(Unranked), TCU(17), Boise State(16)

2010- Oregon(11) selected over: TCU(7)

2011- Alabama(2) selected over: Oklahoma State(8), Stanford(6), Boise State(7)

2012- Alabama(2) selected over: Oregon(5), Florida(23), Kansas State(21)

Out of these 38 teams in 15 years, there have only been 5 instances where a team was left out of the title game in favor of a school that was ranked below them to start the season.

1998: Ohio State was left out for FSU  (Preseason 1 vs. Preseason 2)
2003: USC was left out for LSU (Preseason 8 vs. Preseason 14)
2007: USC was left out for LSU (Preseason 1 vs. Preseason 2)
2008: USC was left out for Florida (Preseason 2 vs. Preseason 5)
2010: TCU was left out for Oregon (Preseason 7 vs. Preseason 11)

Let's have a closer look at those five seasons.

1998: Florida State lost in week 2, as the number 2 team, to North Carolina State.  Ohio State lost in week 9 as number 1, when Florida State was number 6.  OSU fell to number 7 in week 10 and FSU moved up to number 5, and FSU never trailed OSU again.  FSU ended its season ranked 4th, but Kansas State and UCLA lost on the same day, moving FSU up to 2nd and OSU to 3rd.

2003: "USC had lost a triple overtime thriller at California on September 27, LSU lost at home to Florida on October 11, and Oklahoma, which had been #1 in every BCS rating, AP and Coaches' Poll of the season, lost to Kansas State in the Big 12 Championship Game, 35-7 on December 6. Although USC, then 11-1, finished ranked #1 in both the AP and Coaches' Polls, with LSU (12-1) ranked #2 and Oklahoma (12-1) #3, Oklahoma surpassed both USC and LSU on several BCS computer factors. Oklahoma's schedule strength was ranked 11th to LSU's 29th and USC's 37th. Oklahoma's schedule rank was 0.44 to LSU's 1.16 and USC's 1.48. As such, despite the timing of Oklahoma's loss affecting the human voters, the computers kept Oklahoma at #1 in the BCS poll. LSU was ranked #2 by the BCS based on its #2 ranking in the AP Poll, Coaches Poll, 6 of 7 computer rankings (with the remaining one ranking them #1), and strength of schedule calculations. USC's #3 BCS ranking resulted from it being ranked #1 the AP and Coaches Poll, but #3 in 5 of 7 computer rankings (with the 2 remaining computer rankings at #1 and #4) and schedule strength, though separated by only 0.16 points." - Wikipedia

2007: USC lost before LSU did, falling to 7th while LSU took over 1st.  When LSU lost, they fell only to 5th, while USC moved down that same week to a tie for 9th.  The very next week, LSU jumped OU into 3rd while USC lost again and fell to 15th.  When LSU took their second loss, they were 1st while USC was 12th, leaving them at 7th and 9th respectively.  In the final week of the season, LSU's win in the SEC championship game allowed them to vault all the way to number 2.  They jumped West Virginia and Missouri, who lost, but also jumped Va Tech, Georgia, and Kansas (all of whom LSU started the season ranked ahead of).

2008: USC was number 1 and Florida was number 4 when they both lost (to Oregon State and Ole Miss, respectively).  USC fell to 9th, Florida to 13th.  They were at 5 and 10 in the first BCS standings, 5 and 8 in the second, and 7 and 5 in the third.  In week 10, Florida jumped 3 places while USC fell 2 (Florida beat number 6 Georgia by 39 points while USC blew out unranked Washington).  At the end, there were tons of 1-loss teams.  Florida got in because they beat Alabama head to head, Oklahoma got in because they beat Texas head to head (and Texas Tech didn't receive national respect), Utah was a non-BCS team, and USC found themselves behind all of these teams.

2010: One team was in a BCS conference, the other wasn't.  End of story.

There are two main takeaways from this:
  • The old Pac 10 got zero respect from the voters while the SEC has gotten love ever since 2004
  • 67% of the time, the team that was selected had started out higher ranked than the team that was left out.  In order to buck that trend you apparently have to be either from the SEC or competing with a team from a non-respected conference.
The voters have always liked to prove themselves right.

Strength of schedule, margin of victory, conference affiliation, "eye tests", and any number of other soft factors have been used to prop up the voters' position, but the majority of the time the team ranked higher to start the season will be ranked higher to end the season.  The four times this was not true (let's face it, TCU was never getting in over a BCS school in 2010), it involved one or more losses, the timing of those losses, and ridiculous conference tiebreakers.  When the debate was undefeated team vs. undefeated team, the team with the preseason advantage went on to play in the BCS title game.  Historically, voters have simply not deviated from the "survive and advance" voting model.

Unfortunately for Seminoles fans, the case of an undefeated Oregon vs. an undefeated FSU was decided in mid-August before a single game was ever played.  That being said...

GEAUX TREE!