
BosskOnASegway Ballot for 2017 Week 6

Ballot Type: Computer

Submitted: Oct. 3, 2017, 8:50 a.m.

Overall Rationale: This week my model introduces one new model into the existing blend and reduces the weighting of the preseason model. The preseason model now makes up just 20% of the weighting.

Our Score is determined by taking the distance from #1 for each ranking. This means #1 in a category is worth 1 and #208 is worth 0, with every ranking in between evenly spaced. Next we take the weighted average of these values with a weighting of 1 for MLE, 1 for PageRank, 1 for QElo, 1 for Margin Rank, and 1 for Preseason (rough code sketches of this scoring and of the individual models appear further down in this rationale). As new models are added, Preseason is weighted less; next week it will be removed entirely.

Each of these models is a creation of my own design based on various maths and ideas I have had over the years. You can find versions of many of them in my post history along with very detailed explanations, so I will keep the descriptions a little more terse this week.

The first model is **MLE**, the Most Likely Elo model. I created this model to address what I saw as two major shortcomings in traditional Elo. First, playing a team early in the year can be worth a vastly different amount than playing the same team later in the year, even if their quality of play has not changed. Additionally, traditional Elo is strictly win, loss, or draw. To address this I created a game value: each game uses the scores and margin of victory to allocate win shares using a cubic function fitted to historical data to maximize retrodictive accuracy. From there we use these game values as probabilities of each team winning the game if it were replayed. Once we have a cluster of probabilities, we combine them with the initial rating for each team to find the Elo that maximizes the log likelihood of those probabilities occurring. We take several passes to let these ratings stabilize and arrive at the "true" or "most likely" Elo for each team. This Elo represents where a team's performance is actually most likely to place them.

The next model is **Modified PageRank**. PageRank is an algorithm created at Google to rank search results: pages are ranked based on the links to them and the pages they link to, and links out from sites with many links in are worth more than those from sites with few. With a slight mental shift we can see losses as links out from a website and wins as links in, and use PageRank to get the value of a team's wins. Unfortunately, this model does not penalize losses much at all. To account for this, we can reverse the direction of our links (losses in and wins out), which gives us a cost value from our losses. Taking the difference of the two gives a performance value that reflects what I see as a team's true potential.

The third in-season model is **QElo**, my Quality Elo model and one of my earlier creations. It generates two Elo ratings, one running forward in time and one running backward, and averages them to get a team's Elo. Rather than using the games' wins and losses as firm outcomes, we use the Pythagorean wins formula with an exponent of 2.37 to assign part of each win to each team when calculating the Elos.

Next we have our **NEW model, Margin Rank**. This model works on a simple premise: it uses light statistics and logistic regression to determine how much the model would favor each team over another based on their margin of victory in each game. It is a pretty straightforward model conceptually, though a bit complicated in execution.
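Since that description is terse, here is a toy sketch of the kind of margin-based logistic fit Margin Rank is gesturing at. Everything below (the average-margin feature, the made-up games, the scikit-learn call) is purely illustrative and not the ballot's actual code:

```python
# Toy sketch of the Margin Rank idea: use average scoring margin to predict
# head-to-head results with a logistic regression, then read off how much
# the fit would favor one team over another. Data and features are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical season-to-date average scoring margins per team.
avg_margin = {"A": 17.5, "B": 3.0, "C": -4.0, "D": -12.5}

# Hypothetical games: (team1, team2, 1 if team1 won else 0).
games = [("A", "B", 1), ("B", "C", 1), ("C", "D", 1), ("A", "C", 1), ("D", "B", 0)]

# Feature: gap in average margin; label: did the first team win?
X = np.array([[avg_margin[t1] - avg_margin[t2]] for t1, t2, _ in games])
y = np.array([w for _, _, w in games])

clf = LogisticRegression().fit(X, y)

# How strongly the fit favors A over D, given their margin gap.
gap = avg_margin["A"] - avg_margin["D"]
print(clf.predict_proba([[gap]])[0, 1])
```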
Finally, we have our **Preseason** model. This model is based on the votes from the simulated season during the offseason. During the offseason we ran a simulated season where people voted on who they thought would win; those votes were treated as win probabilities, and we randomly chose winners based on those probabilities. The premise of votes as probabilities appealed to me and seemed like an ideal fit for a modification of my MLE model: rather than calculating a game value, we can take the votes as the probabilities and generate Elo scores for each team. Those rankings served as my preseason votes for the season and have remained my preseason weightings and seeds for my various models.
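The step shared by the MLE and Preseason models is: given a probability for each game (from game values or from offseason votes), find the ratings that make those probabilities most likely. Here is a stripped-down sketch of that idea, assuming the standard Elo expected-score curve and made-up probabilities; the real model seeds from prior ratings and takes several stabilizing passes:

```python
# Toy sketch of the "most likely Elo" step: given per-game probabilities,
# find ratings that maximize the log likelihood under the Elo curve.
# Illustrative only; probabilities and team names are hypothetical.
import numpy as np
from scipy.optimize import minimize

teams = ["A", "B", "C"]
# (i, j, probability that team i would win a replay against team j)
games = [(0, 1, 0.80), (1, 2, 0.65), (0, 2, 0.90)]

def neg_log_likelihood(ratings):
    ll = 0.0
    for i, j, p in games:
        # Standard Elo expected score for team i against team j.
        e = 1.0 / (1.0 + 10.0 ** (-(ratings[i] - ratings[j]) / 400.0))
        e = min(max(e, 1e-9), 1 - 1e-9)
        ll += p * np.log(e) + (1.0 - p) * np.log(1.0 - e)
    return -ll

start = np.full(len(teams), 1500.0)
best = minimize(neg_log_likelihood, start, method="Nelder-Mead")
print(dict(zip(teams, np.round(best.x, 1))))
```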
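The Modified PageRank idea fits in a few lines as well. This sketch assumes an unweighted results graph and the off-the-shelf networkx PageRank; the real model's edge handling may differ:

```python
# Toy sketch of the modified PageRank: an edge loser -> winner treats a win
# as a "link in" to the winner. PageRank on the reversed graph scores the
# cost of losses instead; the difference is the team's performance value.
import networkx as nx

results = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("B", "D")]  # (winner, loser)

G = nx.DiGraph()
G.add_edges_from((loser, winner) for winner, loser in results)

win_value = nx.pagerank(G)            # credit for wins
loss_cost = nx.pagerank(G.reverse())  # penalty for losses

rating = {t: win_value[t] - loss_cost[t] for t in G.nodes}
print(sorted(rating.items(), key=lambda kv: kv[1], reverse=True))
```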
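And the Pythagorean win share that QElo uses in place of a hard win/loss is a one-liner with the 2.37 exponent noted above (the forward and backward Elo passes are omitted here):

```python
# Pythagorean share of a game credited to a team that scored `pf` points
# and allowed `pa`, with the 2.37 exponent mentioned above. QElo uses this
# in place of a hard 1/0 result when updating its Elo ratings.
def pythagorean_share(pf: float, pa: float, exponent: float = 2.37) -> float:
    return pf ** exponent / (pf ** exponent + pa ** exponent)

print(pythagorean_share(31, 17))  # a comfortable win, credited as ~0.81 of a win
```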
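Lastly, the overall score from the top of this rationale is just a weighted average of evenly spaced rank values. A small sketch with hypothetical component ranks; the equal weights follow from the note that Preseason is 20% of the total, and the tuple order is my assumption:

```python
# Toy sketch of the overall score: map each component rank so that #1 is
# worth 1.0 and #208 is worth 0.0 (evenly spaced), then take a weighted
# average. Example ranks are hypothetical; weights assume the 20% note.
N_TEAMS = 208

def rank_value(rank: int) -> float:
    return (N_TEAMS - rank) / (N_TEAMS - 1)

# Component ranks for a hypothetical team, order assumed to be
# (MLE, PageRank, QElo, Margin Rank, Preseason).
component_ranks = [12, 7, 20, 15, 30]
weights = [1, 1, 1, 1, 1]  # equal weights => Preseason is 20% of the total

score = sum(w * rank_value(r) for w, r in zip(weights, component_ranks)) / sum(weights)
print(round(score, 4))
```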

| Rank | Team | Reason |
|------|------|--------|
| 1 | Alabama Crimson Tide | Score: 0.9703 (6, 9, 3, 3, 5) |
| 2 | Clemson Tigers | Score: 0.9703 (8, 2, 5, 7, 9) |
| 3 | Oklahoma Sooners | Score: 0.9675 (4, 7, 7, 5, 11) |
| 4 | Michigan Wolverines | Score: 0.9656 (7, 3, 6, 17, 3) |
| 5 | Georgia Bulldogs | Score: 0.9598 (10, 4, 3, 9, 16) |
| 6 | TCU Horned Frogs | Score: 0.9541 (3, 1, 1, 4, 39) |
| 7 | Miami Hurricanes | Score: 0.9502 (2, 11, 4, 12, 23) |
| 8 | Washington Huskies | Score: 0.9483 (5, 12, 9, 15, 13) |
| 9 | Penn State Nittany Lions | Score: 0.9426 (15, 10, 17, 10, 8) |
| 10 | UCF Knights | Score: 0.9407 (1, 5, 2, 2, 52) |
| 11 | Ohio State Buckeyes | Score: 0.9368 (11, 22, 16, 11, 6) |
| 12 | USF Bulls | Score: 0.9330 (9, 13, 14, 19, 15) |
| 13 | Oklahoma State Cowboys | Score: 0.9321 (19, 19, 19, 13, 1) |
| 14 | Wisconsin Badgers | Score: 0.9263 (17, 15, 27, 14, 4) |
| 15 | Washington State Cougars | Score: 0.9196 (22, 6, 12, 18, 26) |
| 16 | Auburn Tigers | Score: 0.9158 (14, 34, 22, 6, 12) |
| 17 | USC Trojans | Score: 0.9158 (23, 18, 10, 27, 10) |
| 18 | Notre Dame Fighting Irish | Score: 0.9053 (12, 20, 13, 8, 46) |
| 19 | Louisville Cardinals | Score: 0.8880 (22, 33, 20, 25, 19) |
| 20 | Virginia Tech Hokies | Score: 0.8852 (30, 26, 34, 16, 14) |
| 21 | Florida Gators | Score: 0.8593 (26, 17, 31, 55, 18) |
| 22 | Navy Midshipmen | Score: 0.8545 (27, 14, 37, 42, 32) |
| 23 | Arkansas Razorbacks | Score: 0.8517 (25, 28, 28, 36, 38) |
| 24 | SMU Mustangs | Score: 0.8478 (16, 36, 33, 20, 54) |
| 25 | Iowa Hawkeyes | Score: 0.8450 (32, 47, 21, 31, 30) |
