The RPI has long been a source of frustration for college basketball fans. Yours truly wrote extensively over the years about how easy it was to manipulate the index, leading to incomplete results at best.
So, when the NCAA announced earlier this year it would be RIP for the RPI, shouts of rejoice rang out across the country. Finally (it seemed), the governing body most known for its inconsistency and out-of-touch leaders would be moving into the analytics age. The NCAA Tournament, already a fantastic annual spectacle, would be made even better as the 68 teams would be selected and seeded more appropriately. No one knew exactly what to expect—most just assumed they’d take a composite ranking of several existing advanced rating systems—but it was universally believed that whatever was rolled out this fall would be an improvement on the RPI.
And then the NET rankings were released.
Debuting on November 24th, the NET rankings immediately drew attention...for all the wrong reasons. Within hours of their release (which featured Ohio State as the No. 1 team), respected analytical minds were calling them a “train-wreck” and “the worst rankings I’ve ever seen in any sport, ever.”:
> The big problem seems to be that only 1 of the 5 components (not sure why there are 5 components instead o 1 cohesive rating) considers strength of schedule. So it rewards team with good records against very poor schedules. Probably won't get much better as the year goes along. pic.twitter.com/WTzqcaW6Qr
>
> — Nate Silver (@NateSilver538) November 27, 2018
What exactly are the NET rankings and why are they so bad? Glad you asked.
Here are the five components of the NET, as listed by the NCAA. Let’s break them down one-by-one (apologies for not using proper A-2-D format, but I want to reduce confusion by using exactly what the NCAA has):
- Team Value Index — We honestly don’t know much about this because—surprise, surprise—the NCAA won’t say exactly what it’s using. The claim is that it’s an algorithm rewarding teams for beating other good teams, basing this “results-oriented component” on game results, opponent, and location. What that actually means? Your guess is as good as mine.
- Net efficiency — Efficiency metrics (adjusted for tempo and the relative strength of your opponent) are the new gold standard in college hoops. This is what most people assumed would make whatever replaced the RPI a major improvement. Unfortunately, instead of utilizing one of the existing, well-refined efficiency-based rankings (such as KenPom), the NCAA decided to use the raw difference between your offensive and defensive efficiency. Meaning it doesn’t matter if you’re playing the Duke Blue Devils or the Central Connecticut Blue Devils: if you blow a team out, you’re going to get a big boost in net efficiency.
- Winning percentage — Exactly as it sounds. Seems redundant with component 1, but whatever.
- Adjusted winning percentage — Well lookie here. The RPI isn’t dead and gone after all. The 1.4/1.0/0.6 credit split for road/neutral/home wins is pulled directly from the RPI, which means the NET can be manipulated the same way. Going to play Stetson? Play them on the road, where you’ll still win easily while getting more than double the credit you’d receive for beating them at home.
- Scoring margin — This is as it sounds, except the margin is capped at 10 points. That seems far too low to accurately capture the flow of a game. Imagine a back-and-forth game with a five-point margin and 90 seconds left. The trailing team fouls, and the leading team makes its free throws to stretch the lead to 7, then to 9, and finally to 10. You’re telling me that’s the same as a 19-point blowout that was a double-digit margin the entire second half? Of course not. On top of all that, this component largely overlaps with net efficiency, since that efficiency isn’t adjusted for opponent strength either (it just isn’t capped at all...).
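To make the scheduling incentive concrete, the two components that are easiest to pin down—the adjusted winning percentage and the capped scoring margin—can be sketched in a few lines of Python. This is a hypothetical illustration, not the NCAA’s actual formula: the only numbers taken from the descriptions above are the 1.4/1.0/0.6 road/neutral/home win credits and the 10-point cap. The function names and sample schedules are made up for demonstration.

```python
# Hypothetical sketch of two NET components as described above.
# The NCAA has not published its formulas or weights, so everything
# here beyond the 1.4/1.0/0.6 win credits and the 10-point cap is
# an assumption made for illustration.

WIN_CREDIT = {"road": 1.4, "neutral": 1.0, "home": 0.6}

def adjusted_win_pct(results):
    """results: list of (won, site) tuples, site in WIN_CREDIT.
    Wins earn site-weighted credit; losses earn none."""
    credit = sum(WIN_CREDIT[site] for won, site in results if won)
    return credit / len(results)

def capped_margin(points_for, points_against, cap=10):
    """Scoring margin clipped to +/- cap, so a 10-point win and a
    30-point win are treated identically."""
    return max(-cap, min(cap, points_for - points_against))

# A team that moves two easy wins from home to the road more than
# doubles its credit for the same results:
print(adjusted_win_pct([(True, "home"), (True, "home")]))  # 0.6
print(adjusted_win_pct([(True, "road"), (True, "road")]))  # 1.4

# And the 19-point blowout counts the same as the late-foul 10-point win:
print(capped_margin(85, 66))  # 10
print(capped_margin(75, 65))  # 10
```

Even this toy version shows why scheduling Stetson on the road is the smart play under the NET: the result is identical, but the credit isn’t.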
Basically, instead of just using a composite of many proven models, the NCAA tried to re-invent the wheel and ended up creating a slip-and-slide. It’s a mishmash of components with varying degrees of redundancy, weighted in a way the NCAA won’t explain, and (worst of all) it actually incentivizes teams to schedule a bunch of scrubs and run up the score. Don’t believe me? Check out some of these numbers:
A. Belmont, out of the OVC, is ranked 9th in the most up-to-date NET Rankings. As of that date they were 6-0 (they have since lost to Wisconsin-Green Bay), with one of those wins coming over a non-D1 team. On KenPom (prior to the loss), Belmont was ranked 73rd. Why such a large gap? KenPom adjusts for strength of schedule, and Belmont’s schedule ranks 213th, while NET just uses raw margins. That means Belmont’s 91-53 road win over Kennesaw State (337th in KenPom) is a massive boost, as is their 92-73 win over Middle Tennessee (204th), and potentially their 104-50 win over Trevecca Nazarene (the NCAA won’t say if D3 wins are counted).
2. Loyola Marymount is 25th in NET, yet just 122nd on KenPom. The middle-of-the-pack WCC team is 8-0 with victories over powerhouses FAMU, Cal St. Northridge, Central Connecticut, and two D3 teams. In fact, the highest-ranked KenPom team they’ve played is Georgetown (98th), and they’ve played two sub-300 teams. Therefore, it should come as no surprise that their KenPom SOS checks in at 334th out of 353 teams.
d. Florida State is 37th in the most recent NET ranks, up from 41st in the initial poll. That’s a good bit lower than the Seminoles’ 15th place rank on KenPom. FSU scores points with NET for having two neutral-site wins, along with a true road win. However, a tough early season slate has resulted in just one true blowout—a 93-61 shellacking of Canisius. This is likely holding the ’Noles back when it comes to their raw efficiency margin. KenPom recognizes that FSU has faced the 19th toughest schedule, giving them much more credit for close wins over teams like Purdue and LSU.
Now, some outlets have urged a wait-and-see approach to NET criticism, suggesting the rankings will self-correct as the season progresses. However, I’m not convinced. Sure, they should get slightly better with more data points, but a student improving his class grade from a 35 to a 45 is still failing.
Ultimately, it’s hard to know, as the NCAA refuses to release the specific weights given to each component. Nonetheless, when I look at the components used, and the rankings being generated, I’m reminded of a scene from the classic film, Austin Powers: The Spy Who Shagged Me. Austin complains to his director, Basil, that the office coffee smells like poop (a different, four-letter word is used). Knowing that what Austin poured is actually Fat Bastard’s excrement, Basil replies that it smells that way because it is indeed poop. Austin, relieved, exclaims, “Oh good, it’s not just me then,” and proceeds to take a sip...noting that it’s a bit nutty.
The point being, these results look like crap because they’re the product of a crap ranking system. My only hope is that when the rankings look so bad come March, the committee is forced to ignore them completely, instead relying on the other data points listed on each team’s “Team Sheet”, such as KenPom and BPI. Then again, relying on the NCAA to be logical often proves to be an exercise in futility.