AFL Prediction & Analysis

When You Should Bet

I have an AFL analytics site, and since football and betting have become so intertwined it can be hard to tell where one stops and the other begins, people sometimes ask me whether they should bet.

I’ve found it hard to give a short answer, so instead I prepared this handy guide.

Here are circumstances in which I think it’s a good idea to bet:

  • You have a terminal disease and nothing better to do with your money. The Great Beyond looms; you have no energy for rock-climbing or international travel; you hate your children; so maybe you can, even for a few moments, feel a connection to the spark at the heart of the universe by plonking down some cash on Martin to kick the first goal. If you lose, well, you were in a bad mood anyway. Because of the terminal disease.
  • You have inside information. I think it’s definitely easier to have inside information than be consistently smarter than everyone else in the market, so this strikes me as a good way to go. You do have to be careful not to, you know, get mixed up in organized crime. I’m not saying it’s completely risk-free. But if you can get yourself a steady source of exclusive information that’s timely, relevant and reliable, you should be able to beat the market more often than not. So if that’s your situation, I say go for it.
  • You enjoy losing money. I’m not sure this is a common scenario. But I want this to be a comprehensive document. If you bet for long enough, you are all but guaranteed to lose money, so this would be a good strategy for you.
  • You have located a stupid bookmaker. You shouldn’t expect this to last long, because other market players will also locate this stupid bookmaker and begin to exploit their stupid practices, thus sending the stupid bookmaker a strong signal to cease being stupid, e.g. by bankrupting them. But it certainly is possible to be profitable through information arbitrage, where you may know nothing about the actual sport but are able to efficiently exploit asymmetries in how different bookmakers engage with the market.
  • You are extremely smart, highly informed, equipped with an objective, evidence-based strategy that you intelligently supplement with personal insight, capable of dealing with extended periods of losses both financially and emotionally, and equipped with a thorough understanding of the myriad ways in which your human psyche will attempt to trick you into making bad decisions at the worst possible time. Congratulations, by the way. That’s really impressive. I do kind of wonder why you don’t apply yourself to something with a better risk/reward ratio, like, why are you gambling with your mortgage instead of pulling down high six figures from Citibank? But that’s your call. If you tick all these boxes, you should probably bet. You do have to tick them all, though. If you miss one, it will eat you.

On the flip side, here are some circumstances in which I think it’s a bad idea to bet:

  • You believe life is more predictable than it really is. This is everyone, by the way. It is something like 99% of people. If this surprises you, you are definitely one of those people. The human brain is excellent at detecting patterns in random noise, which used to be super helpful back when we were being stalked by jaguars, but today means we tend to wildly overrate the significance of Sydney losing their last three games at home. The universe is full of randomness that humans are bad at identifying. We don’t even like to acknowledge it’s there; we gravitate to stories with logical cause and effect. Those are illusions. It’s chaos out there.
  • Your emotional state depends on whether you win or lose bets. Holy shit, man. Get out of there. You know what you’re doing to yourself.
  • You struggle for status and respect in a heartless world and earn some by bragging about betting wins. I considered putting this in the “good reasons to bet” section, since at least you’re getting something out of it. Not money, obviously. But something of value to you. But it’s here because it’s still ultimately self-destructive and more costly than it first appears, like gambling in general.
  • You hear people talking up wins but not mentioning serious losses and think this means betting is a generally profitable pastime. It is not. You’re competing in one of the most ruthlessly efficient markets on Earth against people and companies who have been doing it for longer than you, with more capital, resources and information. The playing field is also tilted so that to be profitable you must outperform the average wagered dollar — not the average bettor, mind you, the average dollar, bearing in mind that the market effectively strips out stupid dollars and inflates smart ones — and must also outperform it by a wide enough margin to beat the vig.
  • Similarly, you hear a lot of talk about betting and get the idea that everyone is doing it, at least occasionally. This is an illusion created by betting companies who wedge sponsorship dollars into every visible crevice — dollars they have because they take them from bettors. I’ve been approached three times in two years about putting betting material on this site, including once by an outfit that offered to pay me only if I didn’t reveal I was being paid. Betting is not common; the vast majority of it is done by a minority of people.

So that’s my view. It mostly stems from respect for two powerful systems: the free market, which can be extraordinarily efficient, and human psychology, which is impressively terrible at discerning objective reality.

That’s a high bar to clear before you can be good at betting — as good as you are at, say, driving a garbage collection truck, which reliably generates $26 per hour.

So I don’t bet. I don’t like the feeling of losing money (or even risking it); I don’t receive much emotional gratification from winning it, since I’m very aware of how big a factor luck is; I suspect I would lose my nerve after a period of heavy losses; I’m primarily interested in learning how football works, not beating bookmakers; and, most of all, I don’t think it can be done reliably over the long-term without an oversized investment of time and energy. And maybe not even then.

You may be different, in which case, I wish you all the best! You go live your life your way. But I’ll be buying index funds.

Who Won the Round?

When you come off a good win, you don’t just want to analyze how great you were compared to the other team; you want to see how great you were compared to ALL the other teams.

Sadly, it’s hard to establish objectively how much better (or worse) Richmond’s defeat of Hawthorn was than Collingwood’s thumping of St Kilda, for example, or any of the round’s other games.

Until now! Squiggle now offers an algorithmic ranking of who had the best round. Using data from the aggregate Projected Ladder, which brings together the predictions of many excellent AFL prediction models, it measures how the weekend’s results affected each team by comparing how their predicted ladder finishes changed.

This is all based on pre-round expectations, so an upset win can be hugely meaningful for a team, radically improving its prospects of finishing higher on the ladder. Equally, a shock loss can be catastrophic, as the cold-hearted computer models begin shaving down its finals chances.

The importance of “eight-point games” is clearly visible, too, where teams that defeat an opponent competing for the same ladder spots are recognized both for advancing their own position and damaging their competitor’s.

To have an outstanding weekend outside of “eight-point games,” teams need to rely on other results falling fortuitously, so that teams around them lose, while teams too far above or below to matter win.

The current algorithm is a bit experimental, since it applies a weighting to decide the relative importance of changes in predicted ranks vs wins vs percentage. It also applies its own ideas in determining how much to scale these based on the predicted “closeness” of teams, and therefore who is competing with whom for which spots. So it’s currently in beta.
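For the curious, the underlying comparison can be sketched in a few lines of Python. This is not Squiggle’s actual algorithm, just a minimal illustration of the idea; the weights and the example numbers are all invented, and the real version also scales things by how tightly packed nearby teams are:

```python
# Minimal sketch: compare each team's aggregate projected ladder before
# and after the round, and combine the changes with (invented) weights.

RANK_WEIGHT = 1.0   # value of moving one predicted ladder spot (assumed)
WIN_WEIGHT = 0.5    # value of one extra predicted win (assumed)
PCT_WEIGHT = 0.02   # value of one point of predicted percentage (assumed)

def round_impact(before, after):
    """Score each team by how the round changed its projected finish.

    `before` and `after` map team name -> (predicted_rank, predicted_wins,
    predicted_percentage). Higher scores mean a better weekend.
    """
    scores = {}
    for team, (rank0, wins0, pct0) in before.items():
        rank1, wins1, pct1 = after[team]
        scores[team] = (
            RANK_WEIGHT * (rank0 - rank1)  # moving up the ladder is good
            + WIN_WEIGHT * (wins1 - wins0)
            + PCT_WEIGHT * (pct1 - pct0)
        )
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example numbers, entirely made up:
before = {"Richmond": (3, 14.1, 118.0), "St Kilda": (13, 9.6, 92.0)}
after = {"Richmond": (1, 14.8, 121.5), "St Kilda": (14, 9.2, 90.5)}
print(round_impact(before, after))
```

An upset win shows up naturally here: a big jump in predicted rank and wins produces a big positive score, while a shock loss goes sharply negative.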

But I think it offers a pretty good map of the round, allowing a peek into the changing fortunes of each team, as prognosticated by the internet’s finest models.

The Aggregate Projected Ladder

Just as Squiggle Dials aggregate predictions from the internet’s best AFL computer models, so now does the new auto-updating aggregate Projected Ladder!

As I write (post-Round 6), it looks like this:

There are some funny quirks to projected ladders, which are quite a bit weirder than they first appear. You can read some discussion of that at the bottom of that page, but the fundamental question is: What are we trying to predict? It’s not at all clear how we should rate the accuracy of a ladder prediction — for example, is it more valuable to correctly tip who finishes 1st than who finishes 12th? How much better? How do you score a ladder that gets the ranks right but had the number of wins all wrong, compared to one that was very close on wins but had some incorrect ranks?

It’s worth noting also that a ladder prediction is not the best way to answer questions like, “What are the chances that my team makes finals?” You can find those kinds of estimates from many Squiggle-friendly models, including FMI’s Swarms, Graft’s Projection Boxes and PlusSixOne’s Finishing Positions. They aren’t aggregated here, but are better targeted to those kinds of questions.

In the background, the Projected Ladder is recording the ladder predictions of each contributing model, so in the future it should be possible to go back and see how they evolved. We could even score them on how accurate they were — once we establish what it is, exactly, that we want to score.

Speaking for myself, I’m pretty sure that my Live Squiggle ladder predictions are quite a lot less intelligent than my game tips, simply because there isn’t a clear way to rate them, which makes them more difficult to refine and improve. A standard metric of some kind would help.
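To make the problem concrete, here’s one entirely hypothetical candidate metric, sketched in Python: it weights rank errors by how high on the ladder the team actually finished, then blends in the predicted-wins error. The weighting scheme and the rank-vs-wins trade-off are exactly the open questions discussed above:

```python
# A hypothetical accuracy metric for a ladder prediction: penalise rank
# errors more heavily near the top of the ladder, and add a separate
# penalty for predicted-win errors. All weightings here are assumptions.

def ladder_error(predicted, actual, rank_vs_wins=0.5):
    """Lower is better. `predicted` and `actual` map team -> (rank, wins);
    rank 1 is the top of the ladder."""
    n = len(actual)
    rank_pen = 0.0
    win_pen = 0.0
    for team, (pred_rank, pred_wins) in predicted.items():
        act_rank, act_wins = actual[team]
        # Weight errors about 1st place more than errors about last place.
        importance = (n - act_rank + 1) / n
        rank_pen += importance * abs(pred_rank - act_rank)
        win_pen += abs(pred_wins - act_wins)
    return rank_vs_wins * rank_pen + (1 - rank_vs_wins) * win_pen

# A perfect prediction scores zero; swapping the top two teams does not:
actual = {"Richmond": (1, 17), "Melbourne": (2, 16)}
pred = {"Richmond": (2, 16), "Melbourne": (1, 17)}
print(ladder_error(actual, actual))
print(ladder_error(pred, actual))
```

The `rank_vs_wins` knob is doing a lot of work there, which is rather the point: until someone agrees on where to set it, any two models can be “more accurate” than each other.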

If you’d like to play around with this data, it’s available in a machine-readable format via the Squiggle API!
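For example, here’s a minimal Python sketch of pulling ladder data from the API. The `q=ladder` query and the response fields shown reflect my reading of the API’s conventions, so check the documentation at api.squiggle.com.au before relying on them:

```python
# Sketch of fetching projected-ladder data from the Squiggle API.
# Query parameters and response fields are assumptions; consult the
# API documentation for the current interface.
import json
import urllib.request

API = "https://api.squiggle.com.au/"

def fetch_ladder(year):
    # The API asks callers to identify themselves with a User-Agent.
    req = urllib.request.Request(
        f"{API}?q=ladder;year={year}",
        headers={"User-Agent": "example-script"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_ladder(resp.read().decode("utf-8"))

def parse_ladder(raw):
    """Return (team, mean_wins) pairs sorted by projected wins."""
    rows = json.loads(raw)["ladder"]
    rows.sort(key=lambda r: r["wins"], reverse=True)
    return [(r["team"], r["wins"]) for r in rows]

# Offline demonstration with a response shaped like the API's output:
sample = ('{"ladder": [{"team": "St Kilda", "wins": 9.5},'
          ' {"team": "Richmond", "wins": 14.8}]}')
print(parse_ladder(sample))
```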

Podcast: Chilling With Charlie

There’s a terrific new podcast on sports analytics available from Robert Nguyen, author of the site Analysis of AFL and co-creator of the very popular R data package fitzRoy.

All the episodes are worth your time, but this one features me talking about the torment of Richmond fans and the genesis of Squiggle:

You can find it on iTunes Podcasts by searching for “Chilling With Charlie,” or via this link.

Introducing Fat Stats

One more model sneaks in ahead of the season! It’s Fat Stats, with a machine learning-based player metric model incorporating Elo.

That brings the number of new models to four, and the total field to 16 this year, including Punters, which is our aggregate of bookies.

That’s a lot of models! It’s double the number from only two years ago, and many (most?) are now player-aware, which means they take into account who’s actually taking the field each week, rather than modeling teams as a single entity.

Rise of the Machines

Squiggle will track two new machine learning AFL models in 2019: one from AFL Gains and another from AFL Lab.

I believe these are the first public models to lay claim to a machine learning heritage, so this is a good opportunity to see how they go in action, at least until the inevitable robopocalypse when they destroy us all.

In related news, most of 2018’s models are already up and running for the new year, including Massey Ratings, Swinburne University and reigning champ Live Ladders!

It’s 2019!

Here is a January 1 ladder prediction:

                     WINS
 1. RICHMOND         14.8
 2. MELBOURNE        14.8
 3. GWS              14.3
 4. WEST COAST       14.1
 5. GEELONG          13.6
 6. COLLINGWOOD      13.2
 7. ESSENDON         13.0
 8. ADELAIDE         12.8
 9. Hawthorn         11.9
10. Port Adelaide    11.6
11. North Melbourne  10.6
12. Brisbane          9.5
13. St Kilda          9.5
14. Western Bulldogs  8.7
15. Sydney            8.3
16. Fremantle         7.3
17. Carlton           4.8
18. Gold Coast        3.6

Some notes:

  • Teams are ranked by average wins from 100,000 simulated seasons.
  • Unlike a regular ladder, it doesn’t round off wins to whole numbers and tie-break on percentage. That’s why it’s different to the quick-and-dirty Live Squiggle Ladder Predictor you may see on the right. This way is better.
  • That said! Historically, season-long predictions aren’t much more accurate than tipping everyone to win the same number of games as they did last year. So, you know.
  • This takes into account off-season list changes.
  • Predictions will continue to evolve as Squiggle is able to factor in pre-season results, major off-season injuries, and Round 1 team selections.
  • Sydney are that low because they tailed off badly in 2018 despite being able to put out something close to a full-strength team most weeks, and did not significantly bolster their team in the trading period.
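The simulation approach in the first bullet can be sketched like so. The fixture, win probabilities and simulation count here are all invented for illustration; the real model derives its probabilities from team ratings and runs far more games:

```python
# Bare-bones "average wins from simulated seasons": give every game a
# home-win probability, simulate the season many times, and rank teams
# by mean wins. Probabilities below are made up for demonstration.
import random

def simulate_mean_wins(fixture, n_sims=10000, seed=1):
    """`fixture` is a list of (home, away, p_home_win) tuples."""
    rng = random.Random(seed)
    totals = {}
    for home, away, _ in fixture:
        totals.setdefault(home, 0)
        totals.setdefault(away, 0)
    for _ in range(n_sims):
        for home, away, p in fixture:
            if rng.random() < p:
                totals[home] += 1
            else:
                totals[away] += 1
    mean = {team: w / n_sims for team, w in totals.items()}
    return sorted(mean.items(), key=lambda kv: kv[1], reverse=True)

# A toy three-team fixture:
fixture = [
    ("Richmond", "Carlton", 0.80),
    ("Richmond", "Melbourne", 0.55),
    ("Melbourne", "Carlton", 0.75),
]
print(simulate_mean_wins(fixture))
```

This is also why the averages come out fractional: Richmond here lands near 1.35 wins, not 1 or 2, which is exactly the property that makes the averaged ladder different from a rounded-off one.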

How the Fixture Screwed St Kilda

It’s hard to prove the fixture affects any team’s finishing position. To say that, you need to find games that were so close that the result could easily have been different, and establish that they were scheduled in a lopsided way.

The only really clear examples of this tend to be self-inflicted wounds, where a club sells a home game and loses it narrowly — such as Richmond’s famous after-the-siren loss in Cairns to the Gold Coast in 2012, or St Kilda’s 3-point defeat by Brisbane in New Zealand in 2014.

These cases are nice and clear: Computer modeling can tell us with a high degree of certainty that Richmond’s home ground advantage at the M.C.G. is worth more than 2 points, and therefore they should have won the game if it had been played there. Likewise, St Kilda should have defeated Brisbane if they’d stuck to their regular home ground in the Docklands. Of course, you can point to many things that could have changed a close result, but one of them is the venue.

Otherwise, though, the picture is muddier. You can establish that a team had a favourable fixture — weaker opponents than average, or games at friendlier venues — but you can’t really say it was worth a certain number of wins. When Fremantle played Gold Coast “away” in Perth this year, due to the unavailability of Carrara, that was certainly unbalanced fixturing… but the Dockers won the game by 28 points, so would the result have been different in Queensland? Modeling suggests probably not.

However, you can say one thing for sure: St Kilda got screwed.

St Kilda (2018)
Net Benefit from Fixture: -123 points (ranked 18th)
Opposition: 18th | Timing: 12th | Home Ground Advantage: 18th

St Kilda is no stranger to scheduling screwings. It had the AFL’s worst fixture in 2014, the worst in 2015, the 9th best in 2016, and the worst in 2017. This year, it was the worst again, and that was also the worst fixture of any team in five years.

This analysis rests on three key factors: who you play (Opposition), when you play them (Timing), and where you play (HGA).

(Two factors that come up sometimes in these kinds of discussions, and which aren’t included because they don’t matter, are Six-Day Breaks and The Cumulative Effect of Travel. There is more on these at the end of the article.)

Opposition

The simplest factor is the strength of opposition. All clubs face each other at least once, of course, and the AFL attempts to equalize the competition by ensuring that Bottom 6 teams have no more than one double-up game against a Top 6 opponent.

This is fine in theory, but since this year’s performance isn’t a reliable guide to next year’s, it often works out less well in practice.

St Kilda was a Middle 6 team in 2017, finishing 11th. They were duly given a 2/2/1 split of double-up games: That’s two matches against Top 6 opponents, two against Middle 6 opponents, and one against a Bottom 6 opponent.

This was already a touch on the mean side, since two other Middle 6 teams were assigned the more favourable 1/2/2 split.

But the Saints’ real misfortune came from how their double-up opponents — Richmond, GWS, Melbourne, Hawthorn, and North Melbourne — performed unexpectedly well. This turned their 2/2/1 split into 4/1/0: four double-up games against this year’s Top 6, one against the Middle 6, and none against the Bottom 6.

And that put the Saints into a whopping 95-point hole for the season.

You can also see that in practice there isn’t a whole lot of equalization going on here. Many of 2017’s stronger teams had weaker than average double-up opponents (Melbourne, Port Adelaide, GWS, Hawthorn, Richmond), while many of last year’s lower teams faced harder than average opposition (Gold Coast, Brisbane, Fremantle, Western Bulldogs, St Kilda).

At least North Melbourne, the recipient of this year’s most beatable opponents, had a generous fixture by design. After finishing 15th in 2017, the Kangaroos were given a 1/2/2 split, which turned out to be 0/1/4, with repeat games against Brisbane, Gold Coast, Western Bulldogs, St Kilda, and Sydney.

Timing

There’s a problem with considering strength of opposition like this: It assumes that teams are equally strong all season long. That’s clearly not the case. Team strength fluctuates both on a weekly basis, as star players get injured and miss games, and over the medium and long term, as a club gets stronger or weaker for any number of reasons: fundamental gameplan changes (Essendon), deciding the season is lost and looking to the future (Carlton), ramping up toward finals (West Coast, Melbourne), or simply having the wheels fall off for no discernible cause (Port Adelaide).

Each club will happen to play some opponents around their weakest point and meet others near their peak. Lucky clubs will meet more opponents at times of relative weakness; unlucky clubs will run into more at times of relative strength. It should average out, but doesn’t always. And since team form can rise and fall quite a lot, it can make a real difference.

Essendon are the clearest example. After a stirring Round 1 victory over Adelaide, the Bombers lurched from one unconvincing performance to another, culminating in their Round 8 defeat at the hands of Carlton. This led to a very public dissection, staff changes, and a very different-looking team for the remainder of the season.

As such, it was quite a lot better to face Essendon early in the year. In fact, it’s possible to identify the worst possible time to play Essendon: Round 21. This is when the Bombers were performing well as a team, but just before they lost Orazio Fantasia (Round 22) and Tom Bellchambers (Round 23). As it happened, the team they played in Round 21 was St Kilda.

(Note: This is a naive rating, which means it rates the apparent strength of a team each round before they played, in order that it remain uncontaminated by the performance of the team they played against. It means there’s often a real difference between how beatable a team appeared to be and how they performed on the day. Essendon provide a fine example of this, too, in Round 9, when, after looking abysmal against the Blues and then losing Hurley and Parish, they upset Geelong.)

In truth, though, St Kilda weren’t particularly screwed by timing; not this year. They rank around the middle of the league on that metric, receiving a mix of good luck (Geelong and North Melbourne early in the season) and bad (Adelaide early, Essendon and Hawthorn late).

The worst timing belongs to Fremantle, who managed to encounter a string of opponents near their peak: Collingwood (Round 23), Hawthorn (Round 19), Essendon (Round 18), Port Adelaide (Round 17), Sydney (Round 9), Carlton (Round 13), and Gold Coast (Round 3).

The Blues had the most fortunate timing, thanks to repeatedly running into teams who were losing key players to injury — although perhaps this is less to do with good fortune than their opponents seizing the opportunity to rest players. The Blues also had early-season games against teams who would dominate the season later: West Coast, Collingwood, Richmond, and Melbourne.

Home Ground Advantage

But back to the screwing. In theory, home ground advantage (HGA) is balanced: every team receives roughly the same advantage from its home games that it must face in its away games.

West Coast, for example, play ten home games against opponents traveling from interstate, and ten games to which they must travel, plus two local derbies. There’s no real HGA in a derby, and the benefit the Eagles receive from playing interstate sides at home is neatly counter-balanced by the penalty of traveling interstate to play away.

A Melbourne-based team such as Collingwood, by contrast, plays many games against other local opponents at the relatively neutral M.C.G. and Docklands, while hosting interstate sides about five times and traveling away about the same number.

Either way, the net benefit is roughly zero.

But there are exceptions. Sometimes teams give up a home venue they’re entitled to, such as Melbourne playing in the Northern Territory. Hawthorn and North Melbourne occasionally drag a Melbourne-based team to Tasmania, creating more venue-based advantage than there would otherwise be. And occasionally there are weird situations like the Commonwealth Games depriving Gold Coast of a home ground, sending the Suns to play a home game against Fremantle in Perth.

Also, sometimes a team is given unbalanced travel.

Now, HGA is hard to define, and various models use different methods. You can get a reasonable approximation simply by assigning 10 or 12 points of HGA whenever a team hosts an opponent visiting from out of state. A more sophisticated, but also more fragile, strategy is to attempt to divine particular teams’ affinity for particular venues from the historical record of how they under- or over-perform there.
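As a concrete illustration, that flat interstate rule is only a few lines of Python. The state assignments and the 10-point figure below are illustrative, not Squiggle’s actual values:

```python
# The "reasonable approximation" described above: a flat HGA bonus
# whenever the visiting team comes from another state. Team-to-state
# mapping and the points value are assumptions for demonstration.

TEAM_STATE = {
    "St Kilda": "VIC", "Richmond": "VIC", "Geelong": "VIC",
    "West Coast": "WA", "Fremantle": "WA", "Sydney": "NSW",
    "Adelaide": "SA", "Brisbane": "QLD",
}

def simple_hga(home, away, points=10):
    """Points of home ground advantage under the flat interstate rule."""
    if TEAM_STATE[home] != TEAM_STATE[away]:
        return points
    return 0  # derby or all-Victorian game: call it neutral

print(simple_hga("West Coast", "St Kilda"))  # interstate visitor
print(simple_hga("Richmond", "Geelong"))     # both Victorian
```

Note the blunt edge: the flat rule calls every all-Victorian game neutral, including trips to Kardinia Park, which is part of what a familiarity-based approach tries to capture instead.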

I employ something in between, which is essentially a Ground Familiarity model. This awards HGA based on how familiar a team is with the venue and location; in practice, it serves as a proxy for an array of factors that probably comprise HGA in the real world, including crowd noise, travel time, and psychological disruption.

There’s a fair argument that Sydney wasn’t actually the second-most HGA advantaged team this year, because Sydney didn’t play its home ground very well. Similarly, many believe that Richmond received more advantage from M.C.G. games this year than the average M.C.G. tenant. Such ideas are popular, but tend to be transient and based on few data points. For example, for me, Richmond’s much-discussed four interstate losses are more easily explained by the fact that those were its four hardest games. So there is no attempt here to model that kind of theory.

Then again, a Ground Familiarity model has quirks of its own. Much of the reason Sydney scores well on this measure is that the Swans played 18 matches at just three grounds: the S.C.G., the M.C.G., and Docklands. They traveled to Western Australia just once and South Australia not at all. This means the Swans frequently play away at grounds they’re reasonably familiar with, while their opponents don’t have the same experience at the S.C.G.

This small but persistent imbalance affects Docklands tenants in reverse: They are almost always less familiar with their opponents’ home grounds than their opponents are with theirs. For example, the Saints are relatively familiar with Perth for a Victorian team, being dispatched there at least once a year, and four times in the last two years. But when they last met the Eagles in Melbourne, that was the Eagles’ fifth trip to Docklands that year alone. The venue still offered St Kilda an advantage (especially compared to Perth), but it was a little less than it might have been.

Therefore, under a Ground Familiarity model, being based at the Docklands is the worst option. You probably don’t even get to play home finals there.

But the real culprit behind St Kilda’s poor rating on home advantage is their persistent travel imbalance:

St Kilda’s Games vs Non-Melbourne Opponents

Year     Home  Away  Difference
2014        5     7          -2
2015        5     7          -2
2016        5     6          -1
2017        6     6           0
2018        4     7          -3
Average   5.0   6.6        -1.6

Having to travel to your opponents more often than they travel to you is a clear source of home ground advantage disparity. This is rare for non-Victorian teams, who almost always have a 10/2/10 split, but common for those in Melbourne: They will often benefit a little more or a little less from travel than their opponents. For the Saints, almost every year, it’s less.

This year’s version was the most extreme yet. St Kilda enjoyed home advantage from hosting travelling teams only four times (Brisbane, Sydney, GWS, and Adelaide), while facing disadvantage from travelling interstate five times (GWS, West Coast, Port Adelaide, Fremantle, and Gold Coast), as well as having to visit Geelong at Kardinia Park (with no return home match), and Hawthorn in Tasmania.

The Saints have actually been sent to play North Melbourne or Hawthorn in Tasmania for five years in a row now, each time turning what would be a neutral game at Docklands or the M.C.G. into one where the venue favours the opposition.

Three extra games of significant disadvantage is quite a lot. The ratio is eyebrow-raising, too; the equivalent of a non-Victorian team playing 8 home games and 14 away.

Sooner or later, the law of averages will ensure that St Kilda get lucky with their double-up opponents, or else their timing. But their unbalanced travel is an enduring, deliberately fixed anchor. It ensures that whenever fortune smiles on the Saints, it’s more of a smirk.

St Kilda’s Fixture: A History

Year  Fixture Rank  Opposition  Timing  HGA
2014  18th          12th        11th    18th
2015  18th          12th        17th    18th
2016  9th           2nd         10th    14th
2017  18th          13th        16th    14th
2018  18th          18th        12th    18th
