Wednesday, July 29, 2009

Golden Rankings, 2009 changes

Just like the BCS, our goal here with the modified Golden rankings is to keep tinkering with the system year after year to try to correct the previous year's mistakes, hoping we're not opening new holes in the process.

Midway through the bowl season, we noticed that the rankings were doing a particularly poor job of predicting the bowl results; for the year we ended up 13-21. When I looked at the losses in detail, I found that in 11 of the 21, the team that won had a higher schedule rating than the team the rankings predicted to win. Was it possible that we weren't giving strength of schedule enough weight?

I started to think about this in relation to our basketball rankings, and in particular to the way the NCAA calculates the RPI for basketball (and other sports). The RPI weights a team's own record at 25%, its opponents' records at 50%, and its opponents' opponents' records at 25%. So I took a look at two somewhat average teams in last year's rankings: Baylor, which finished 4-8 with an overall rating just above -14, and Houston, which finished 8-5 with just under 16 points. When I looked at where each team earned its points, I found them to be remarkably consistent. Each team got under 14% of its points from its opponents' opponents' records, over 50% from its opponents' records, and over 30% from its own record plus bonuses and penalties for margin of victory or defeat, road wins, and home losses.
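The RPI weighting described above can be sketched directly. This is just an illustration of the weighted sum, with hypothetical winning-percentage inputs, not the NCAA's actual implementation:

```python
def rpi(own_wp, opp_wp, opp_opp_wp):
    """NCAA RPI weighting: 25% own winning percentage,
    50% opponents', 25% opponents' opponents'."""
    return 0.25 * own_wp + 0.50 * opp_wp + 0.25 * opp_opp_wp

# A team that wins 75% of its games, whose opponents and
# opponents' opponents each win half of theirs:
print(rpi(0.75, 0.50, 0.50))  # 0.5625
```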



Looking at this, I tweaked the overall formula to double the value of the opponents' opponents' records, so that each win or loss by an opponent's opponent counts for 0.04 points instead of just 0.02. This had the following effect on our two average teams:



Immediately I saw percentages more in line with how the RPI is calculated in other sports. Additionally, when I applied these changes to all teams, I found that our bowl predictions flip-flopped in 10 of our 21 losses. That is to say, with this system in place last year, we would have correctly predicted 23 of the 34 games!
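To see why doubling the per-game value moves the percentages toward the RPI split, here is a small sketch. The component point totals are made up to roughly match the ~30% / ~50% / under-14% pattern described above; they are not Baylor's or Houston's actual numbers:

```python
def component_shares(own, opp, opp_opp):
    """Share of a team's rating coming from each component
    (absolute values, so it also works for negative ratings)."""
    total = abs(own) + abs(opp) + abs(opp_opp)
    return [abs(x) / total for x in (own, opp, opp_opp)]

# Illustrative split: own record + bonuses, opponents' records,
# opponents' opponents' records.
before = component_shares(5.0, 8.0, 2.0)  # opp-opp share ~13%
# Doubling the opponents'-opponents' component (0.02 -> 0.04 per game):
after = component_shares(5.0, 8.0, 4.0)   # opp-opp share ~24%
```

With the 0.02 per-game value the opponents'-opponents' share sits near 13%; doubling that one component pushes it toward the RPI's 25% without touching the other inputs.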

At the start of last season, we looked at the way we counted wins and losses for and against FCS teams, and decided to weight those games based on the winning percentage of FBS teams vs. FCS teams (97.62%, or 82 out of 84 games) measured against the winning percentage of FBS teams against other FBS teams (50% by definition). Last year that meant that a win over an FCS team counted slightly more than half as much as a win over an FBS team, and a loss to an FCS team counted twice as much as a loss to an FBS team.

While I like the methodology, and the fact that it can change year to year based on the performance of the FCS teams, there was a slight hole in our logic. We compared the FBS vs. FCS winning percentage to 50%, when we should have compared it to the winning percentage of FBS home teams, since all 84 FBS/FCS games took place on the home field of the FBS team. That is what we are going to do this year. While this will boost FCS teams in the rankings, that boost will be counteracted by the extra weight given to schedule strength through opponents' opponents' records. More than likely, this change will decrease the penalty FBS teams incur for scheduling FCS teams.
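One plausible reading of the ratios described above ("slightly more than half" and "twice as much" are consistent with simple baseline-to-actual ratios) can be sketched as follows. The post doesn't give the exact formula, and the FBS home-team winning percentage isn't stated, so the 58% figure below is a placeholder:

```python
def fcs_game_weights(fbs_vs_fcs_wp, baseline_wp):
    """Weight FCS games by comparing the FBS-vs-FCS winning
    percentage to a baseline: wins scaled down by baseline/actual,
    losses scaled up by actual/baseline."""
    return baseline_wp / fbs_vs_fcs_wp, fbs_vs_fcs_wp / baseline_wp

# Old method: baseline of 50% gives a win weight of ~0.512
# ("slightly more than half") and a loss weight of ~1.952
# ("twice as much"), matching the 82-of-84 figure from the post.
old_win, old_loss = fcs_game_weights(82 / 84, 0.50)

# New method: baseline is the FBS home-team winning percentage
# (placeholder value; the real figure would be measured each year).
new_win, new_loss = fcs_game_weights(82 / 84, 0.58)
```

Raising the baseline increases the win weight and decreases the loss weight, which is consistent with the prediction that the penalty for scheduling FCS teams shrinks.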