
Rating the Best College Basketball Teams Since 2002

A complete season-by-season ranking of all D1 basketball teams.

[Photo: Jesse Johnson-USA TODAY Sports]

Recently, FiveThirtyEight wrote an article outlining a method to rank every NBA team for the entire history of professional basketball. While we here at The Daily Gopher are equally ambitious, we do not possess the same data sources as a fully sourced ESPN operation. On the other hand, we probably drink better bourbon. Here are the rankings for every college basketball team since 2002.

Methodology

So what is an Elo Rating, and why am I using something a little different? As a review (or an introduction), Elo Ratings are an algorithm for ranking players or teams in head-to-head competition; the system is named after its inventor, Arpad Elo. To determine an Elo rating, we need to know who played whom and the final score. Rather self-explanatory, but no head-to-head system can determine a victor without that information. Two properties of Elo matter here. First, in basketball, unlike chess, the location of the game may also have an effect. Second, Elo is zero-sum: winning teams earn points and losing teams lose them. Upset wins are awarded more points, and, as you'll see, beating Grambling State garners very few.
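The zero-sum update described above can be sketched in a few lines. This is a generic Elo update, not the exact constants used for these rankings; the K-factor and home bonus below are illustrative assumptions.

```python
def elo_update(r_winner, r_loser, k=20, home_bonus=0.0):
    """One zero-sum Elo update after a game.

    k and home_bonus are illustrative values, not the ones
    used for the rankings in this article. home_bonus shifts
    the winner's effective rating if the winner was at home.
    """
    # Expected score for the winner, given the rating gap.
    expected = 1.0 / (1.0 + 10 ** ((r_loser - (r_winner + home_bonus)) / 400))
    delta = k * (1.0 - expected)  # points gained by the winner
    # Zero-sum: the loser drops exactly what the winner gains.
    return r_winner + delta, r_loser - delta

# An upset (lower-rated team wins) moves far more points than a
# heavy favorite beating, say, a Grambling State-level opponent:
print(elo_update(2000, 2400))  # big swing toward the upset winner
print(elo_update(2400, 2000))  # tiny swing for the favorite
```

Note how the expected score, not the raw margin, drives the point exchange: the more surprising the result, the bigger the transfer.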

An issue with an Elo rating--and, spoiler, why I'm using something different--is that it claims the same precision for every team, regardless of a team's inactivity, game results, or strength of schedule. To put it another way, an Elo rating is a pure point estimate, without any measure of uncertainty. FiveThirtyEight experiments with different ways of regressing ratings back to the mean, but these still share the problem of reporting only a point estimate.

Instead of Elo Ratings, I'm using Glicko Ratings. Named after their creator, Mark Glickman, Glicko Ratings extend Elo Ratings by adding a parameter that measures the deviation of a rating. Mathematically, the deviation acts like a standard error: a band of roughly two deviations around the rating behaves like a 95% confidence interval for the point estimate. The specific formula is complex, but if you're interested in more information I recommend reading Mark Glickman's explanation.

Finally, a word of caution regarding interpretation. In my specification, the starting Glicko rating is 2200 with a starting rating deviation of 300. Therefore, if you see a team you've never heard of with a huge rating, it's not necessarily because the model thinks they are good. It more likely means they are a new D1 program that has not played many games in a given season. The rating simply does not have enough information behind it, which shows up as a large rating deviation. There is also some minor rating inflation in this iteration; future iterations will correct for this.
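To make the "huge deviation means we know little" point concrete, here is the approximate 95% interval implied by a rating and its deviation, using the starting values above (the established-team deviation of 50 is an illustrative number, not from the rankings):

```python
def confidence_interval(rating, rd):
    """Approximate 95% interval: rating plus or minus two deviations."""
    return rating - 2 * rd, rating + 2 * rd

# A brand-new D1 team at the starting values of 2200 and 300:
print(confidence_interval(2200, 300))  # (1600, 2800): almost no information
# An established team whose deviation has shrunk through play
# (50 is an illustrative deviation, not taken from the article):
print(confidence_interval(2200, 50))   # (2100, 2300): a much firmer estimate
```

Two teams with identical ratings can therefore mean very different things; always read the rating alongside its band.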

Data

Using several sources, primarily Kenpom.com, I gathered data on every game played between two Division 1 programs between the 2002 and 2015 seasons, inclusive. I used seasons as the rating periods. Each team plays more than the minimum of 10-15 games per rating period that Glickman suggests for a valid metric.

How Do I Read The Charts In This Post?


To put the ratings in context, from 2002 to 2015:

  • A team with a 2460 rating will not sweat Selection Sunday
  • The best teams are rated 2500 and above
  • The worst teams are rated 2000 and below
  • The median team is around 2175
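The thresholds above can be turned into a quick lookup. The cutoffs between the named tiers are my own rounding for illustration; only the 2460, 2500, 2000, and 2175 figures come from the list above.

```python
def describe(rating):
    """Bucket a 2002-2015 Glicko rating using the thresholds above.

    The boundaries between the middle tiers are illustrative
    rounding, not cutoffs taken from the article.
    """
    if rating >= 2500:
        return "elite"
    if rating >= 2460:
        return "safe NCAA Tournament team"
    if rating <= 2000:
        return "among the worst in D1"
    if rating <= 2200:
        return "roughly median"
    return "above average"

print(describe(2550))  # elite
print(describe(2175))  # roughly median
```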

Highlights

Since this is a Gopher blog, I'll start with Minnesota's. The graph below plots Minnesota's Glicko rating over time. The shaded region shows the deviation parameter over time.

Minnesota is currently 61st in the country. The Gophers under Dan Monson made the NCAA Tournament once, in 2005, before cratering in 2007. The hiring of Tubby Smith brought the Gophers back to respectability. As the graph shows, most of the team's improvement came during his tenure: Smith's teams posted higher ratings and better year-over-year improvement than Monson's. Under Richard Pitino, the Gophers achieved their highest Glicko rating in 2014 by winning the NIT, making this last season even more disappointing.

Contrast Minnesota's graph with Kentucky's.

Smith's 2005 season was the apex of his Kentucky teams. The Cats regressed in his final years before dropping perilously close to early-2000s Minnesota levels under Billy Gillispie. That sound you hear is Big Blue Nation sighing at the mention of the Gillispie years. Calipari's tenure has been incredible for the Wildcats, who sit second behind Duke over the full period. One part of this graph that interests me is the drop between the national championship in 2012 and the "rebuild" that followed. It's an open question whether Calipari's 2016 team will have a similar stumble after losing so many key players.

From near the pinnacle to the very bottom. Here's Grambling State's:

Remember that even at their worst last season, the Gophers were almost 300 points better than Grambling State's best season over the last 14 years. The team managed to go winless in the 2012-2013 season, joining seven other teams in the history of D1 basketball. From a mathematical viewpoint, this graph also gives a good indication of how the rating deviation parameter works. Grambling State is objectively awful, and the confidence interval is very tight around that point estimate. With that said, since it's an interval, it's possible that Grambling was "actually" a sub-1700 team at the end of this season, almost 500 points below the median team.

Where Do We Go From Here?

There are a lot of ways we as writers, fans, and a community can go with these rankings. I intend to break down the top conferences and coaches in upcoming posts. I'm also interested in hearing about other topics you'd like to see TDG tackle with this data or what you yourself might choose to do with it in a FanPost!

Make sure to share any questions or thoughts in the comments!