Earlier this year, HPN unveiled a new model named PERT, built on player ratings rather than the team ratings most other models use. And it’s landed with a splash, currently sitting atop the models leaderboard on 74 correct tips.
It’s doing less well on Bits* and MAE*, which is a little suspicious, since those metrics tend to be better indicators of underlying model accuracy. But still! It’s enough to suggest there might be something in this crazy idea of considering who’s actually taking to the field.
So I’m hopping aboard. Starting this week, Squiggle’s in-house model considers selected teams and adjusts tips accordingly.
The difference that team selections make to each tip can be seen in the TIPS section of Live Squiggle.
In most cases, team selections will shift the Squiggle tip by only a few points; it remains focused on team-based ratings. The adjustment is derived from a simple comparison of scores from AFL Player Ratings, so it will only swing a tip that’s already close to a 50/50 proposition.
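To make that concrete, here’s a minimal sketch of the general idea. Everything in it is hypothetical — the function, the toy rating numbers, and especially the scaling factor are illustrations, not Squiggle’s actual code:

```python
def selection_adjustment(home_ratings, away_ratings, scale=0.5):
    """Convert the gap in the selected players' AFL Player Ratings
    scores into a small margin adjustment, in points.
    `scale` is a made-up constant; a real one would be fitted."""
    rating_gap = sum(home_ratings) - sum(away_ratings)
    return rating_gap * scale

# Toy example: a team-based tip of home by 2, where the away side
# has named the stronger line-up, flips to a narrow away tip.
team_based_margin = 2.0
adjustment = selection_adjustment(
    home_ratings=[12.1, 10.4, 9.8],   # toy numbers, not real ratings
    away_ratings=[14.0, 13.2, 11.5],
)
print(team_based_margin + adjustment)  # ≈ -1.2: tip swings to away
```

A big rating gap between the selected sides could move the tip further, but as above, it takes a close team-based margin for the sign to flip.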
Over the last six years, this seems to deliver roughly a 0.40-point improvement in MAE. Naturally, though, 2018 will be the year it all goes to hell.
* “Bits”: Models score points based on how confidently they predicted the correct winner. Confident & correct = gain points, unsure = no points, confident & wrong = lose points.
* “MAE”: Mean absolute error, which is the average difference between predicted and actual margins, regardless of whether the correct winner was tipped.
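For the curious, here’s roughly how those two metrics work in code. The bits formula below is the standard one from the Monash probabilistic tipping competition — I’m assuming that’s what the leaderboard uses — and the function names are just mine:

```python
import math

def bits(prob_tipped_winner, tipped_team_won, drawn=False):
    """Bits scored for one game. `prob_tipped_winner` is the
    probability the model assigned to its tipped team."""
    p = prob_tipped_winner
    if drawn:
        return 1 + 0.5 * math.log2(p * (1 - p))
    return 1 + math.log2(p if tipped_team_won else 1 - p)

def mae(predicted_margins, actual_margins):
    """Mean absolute error: the average gap between predicted and
    actual margins, ignoring who was tipped to win."""
    return sum(abs(p - a)
               for p, a in zip(predicted_margins, actual_margins)
               ) / len(predicted_margins)

# Confident & correct gains points; a 50/50 tip scores zero
# either way; confident & wrong loses points:
print(bits(0.80, True))    # ≈  0.678
print(bits(0.50, True))    #    0.0
print(bits(0.80, False))   # ≈ -1.322
```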