How the rating system was developed (updated from a post two years ago).
I am using an algorithm similar to what you find in USA Today for the Sagarin ratings. Only these are high school kids that are sometimes more unpredictable and might play over their heads one game and lose by 40 the next.
Here's a little history of how I developed these rankings. I also have a 2024 St. Louis Ranking over at https://gramps9850.blogspot.com and a 2018 Missouri High School Football Ranking over at https://gramps2018.blogspot.com.
A few years ago, my grandson's select 8th grade team, D1 United (now Larry Hughes Basketball Academy), played in and won a C4 freshman fall league. I started taking game statistics and developing a ratings power index like Sagarin or RPI. I used D1 United as the anchor at 100 and gave their opponents ratings based on the point spread: a team that D1 United beat by 20 received an 80.0 rating, and a team that beat D1 United by 5 received a 105.0 rating.
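The anchoring step can be sketched in a few lines of Python. This is just an illustration of the idea described above; the function name and the helper itself are mine, not something from the original spreadsheet.

```python
# A minimal sketch of the anchoring idea: the anchor team is pinned
# at 100 and each opponent's rating is anchor rating plus the margin
# from the opponent's point of view.
ANCHOR_RATING = 100.0

def opponent_rating(margin_vs_anchor):
    """Rating of a team based on its point spread against the anchor.

    margin_vs_anchor is positive if the opponent beat the anchor team,
    negative if the anchor team (D1 United) won.
    """
    return ANCHOR_RATING + margin_vs_anchor

# A team D1 United beat by 20 gets an 80.0 rating;
# a team that beat D1 United by 5 gets a 105.0 rating.
print(opponent_rating(-20))  # 80.0
print(opponent_rating(+5))   # 105.0
```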
By late fall of 2016, D1 United had joined up with Larry Hughes Basketball Academy and was their top team. I added some KC teams after the Boys & Girls Club of KC tournament in November, and then found some Kansas and Omaha tournaments that KC teams played in. Those included some Iowa and Oklahoma teams, which then led to adding Minnesota, Wisconsin, Texas and Arkansas teams.
One thing led to another and I was picking up NYBL and IndiHoops tournaments with teams from all over the country. These ratings were all anchored to LHBA in St. Louis, so teams like Vegas Elite, New World, etc. were pumped up to 125+ ratings, or a 25 point advantage over LHBA, still anchored at 100. I rated over 4000 teams at https://gramps8851.blogspot.com
One could argue, "hey, we beat New World by 2 at XYZ tournament", but as I added more tournaments, I had up to 50 comparative scores for some top teams and the rating was based on their season results, not one game. So your team might have upset New World once, but on average, your team was posting lower quality wins against other teams, including opponents of New World that were blasting those same teams.
So how does the algorithm work?
Well, I used an Excel spreadsheet with comparative formulae for each game score. For instance, at Indi Worlds 2 in San Diego, Seattle Rotary had some tight games with Vegas Elite so their RPI for that tournament would look like this:
Seattle Rotary = ($D$211-3+$D$213+10+$D$211-1+$D$213+11+$D$211-12)/5 = 103.7
where location $D$211 is Vegas Elite and $D$213 is New York Dragons.
So this formula indicates that Seattle Rotary lost to Vegas Elite by 3, 1 and 12 points and beat New York Dragons by 10 and 11 points. Since I used the comparative formula for Rotary, I cannot use the same formula for Vegas Elite or New York Dragons, or Excel will flag a circular reference, so instead I add the point spread to the other team's rating.
Vegas Elite = (130.7+3 + 130.7+1 + 130.7+12)/3 = 136.0 rating
That's an over-simplification, because I can use a formula for other teams Vegas Elite beat, so that all teams have some comparative formulae. That way, if one team drops in the ratings, it can drag its previous opponents down, or if it improves significantly, it gives its opponents a little boost. You can see how one bad tournament can drag your rating down even after you have beaten New World (btw, I don't know if anyone ever beat 2021 New World, LOL).
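The spreadsheet formulas above amount to a fixed-point calculation: every team's rating is the average, over its games, of (opponent rating + point margin), with the anchor team held at 100. Here is a minimal Python sketch of that idea. The margins come from the Rotary example above (lost to Vegas Elite by 3, 1 and 12; beat the Dragons by 10 and 11), and the Vegas Elite vs. LHBA game is an assumed placeholder that just reflects the "25 point advantage over LHBA" mentioned earlier, not a real result.

```python
from collections import defaultdict

# Each game is (team, opponent, margin), margin positive if team won.
# Margins are taken from the example in the post; the LHBA game is an
# illustrative assumption to connect the graph to the anchor.
games = [
    ("Seattle Rotary", "Vegas Elite", -3),
    ("Seattle Rotary", "Vegas Elite", -1),
    ("Seattle Rotary", "Vegas Elite", -12),
    ("Seattle Rotary", "NY Dragons", +10),
    ("Seattle Rotary", "NY Dragons", +11),
    ("Vegas Elite", "LHBA", +25),
]

ANCHOR, ANCHOR_RATING = "LHBA", 100.0

# Build each team's list of (opponent, margin), both directions.
schedule = defaultdict(list)
for a, b, m in games:
    schedule[a].append((b, m))
    schedule[b].append((a, -m))

ratings = {t: ANCHOR_RATING for t in schedule}
for _ in range(1000):  # iterate until the ratings settle
    new = {}
    for team, opps in schedule.items():
        if team == ANCHOR:
            new[team] = ANCHOR_RATING  # anchor stays fixed at 100
        else:
            new[team] = sum(ratings[o] + m for o, m in opps) / len(opps)
    done = all(abs(new[t] - ratings[t]) < 1e-9 for t in new)
    ratings = new
    if done:
        break

for team, r in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{team}: {r:.1f}")
```

This is also why one team's movement ripples through the table: each update re-averages every team against its opponents' current ratings, so a team that slides pulls its past opponents down a little on the next pass.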
So, back to November 2018: I am taking the final rating from the 2017-2018 Missouri high school boys rankings and adding results for November games to each team's formula. Some teams have not played a game yet, so the early ranking I posted is somewhat meaningless until we get more game results. One might argue that I should start over from scratch, but teams, depending on how much experience comes back year after year, tend to settle into a ratings level where school size, community, feeder programs, coaching and other factors matter more than team turnover. Good programs continue to feed good players into the high school program.