AFL Prediction & Analysis

Fixture Analysis 2022

First, the headlines: Geelong had the easiest fixture, GWS the hardest. But before we go any further, an important disclaimer: the Cats were so comfortably in front of everyone else, not even the league’s hardest fixture would have kept them from the minor premiership.

Home advantage is important, but not that important. If home advantage was as important as people say, all the left-column circles would be green and all the right ones would be red:

Games Won & Lost in 2022 (incl. finals)

Each team’s results run left to right from games with significant home advantage, through neutral-ish games, to games with significant away disadvantage:

 1. Geelong           WWWWLWWWWWWWWWWWWWLLWWWWL
 2. Sydney            LWWWWWLWWWWWWWWLWLLWLWWWL
 3. Brisbane          WWWLLWWWWWWWWLWWLWLWWWLLL
 4. Collingwood       WLWWWWLWWWWWLLWWWWLLLWWWL
 5. Fremantle         LWWLWLLWWWWWWWLLWWWDLWWW
 6. Melbourne         WLLLWLWLWWWWLLWWWWWWLWWW
 7. Richmond          WWWWDWWWLWWLWLLWLLLWLWL
 8. Bulldogs          WLWWLLWWLWWWLWWWLLWLLLL
 9. Carlton           WWWWLWWLWLWWWLLLWLWLLL
10. St Kilda          WLLLWLLWLWWWLLWLWLWWLW
11. Port Adelaide     LLWWWWLWWLWLWLLLLLLWLW
12. Gold Coast        LLWWWLWWLWWLLLLLLWLLWW
13. Hawthorn          WWWWLWWLLLLLLLLLLLWLWL
14. Adelaide          LLLLWLWLWWWLLLLWWLLLLW
15. Essendon          WWWLLLLWWLLLLLLLWLLLLW
16. GWS               LLLLLLWWWWLLLLLLLLWWLL
17. West Coast        LLLLWLLLLLLLLWLLLLLLLL
18. North Melbourne   LLLLLWLLLLLLLLWLLLLLLL

There is a bias there – home advantage is worth something – but it’s not a guaranteed ride to the Top Eight, or even a single extra win. You still actually have to be a good team.

(In the above table, “Significant Home Advantage” means games between interstate teams at a home ground, Geelong playing anyone at Kardinia Park, and Hawthorn or North dragging anyone off to Tassie.)

Of course, there are different degrees of home advantage. In Round 19 alone, we had:

  • West Coast vs St Kilda @ Perth Stadium (WA) – an interstate game with fervent crowd support for the home team – about as extreme a case as you’ll find, and good for 13.3 pts by Squiggle’s model, which calibrates home advantage largely to the level of crowd support
  • Carlton vs GWS @ Docklands (Victoria) – an interstate game with good home crowd support, at a venue frequented fairly often by the away team – that’s 7.9 pts
  • Brisbane vs Gold Coast @ the Gabba (Queensland) – two teams with smaller fan bases from the same state at one’s home ground – 2.8 pts
  • Collingwood vs Essendon @ MCG (Victoria) – an extremely well-supported team hosts a very well-supported team at the Magpies’ home ground – 2.6 pts
  • North Melbourne vs Hawthorn @ Bellerive Oval (Tasmania) – two teams in their secondary state, at a ground more often played by the Kangaroos – 2.0 pts
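Squiggle’s actual calibration is more involved, but the basic shape of the idea – stacking interstate travel, crowd support, and venue familiarity into a single points adjustment – can be sketched in a few lines. Everything below is illustrative: the factor weights are invented for the example, not Squiggle’s real numbers.

```python
# A minimal sketch of venue-based home advantage. The weights are
# invented for illustration; they are NOT Squiggle's actual calibration.

def home_advantage_pts(interstate: bool,
                       crowd_support: float,
                       away_familiarity: float) -> float:
    """Estimate home advantage in points.

    crowd_support:    0.0 (no home crowd edge) to 1.0 (fervent cauldron)
    away_familiarity: 0.0 (away team never plays here) to 1.0
                      (away team plays here all the time)
    """
    pts = 0.0
    if interstate:
        pts += 6.0                     # travel and unfamiliar conditions
    pts += 8.0 * crowd_support         # crowd-driven advantage
    pts -= 4.0 * away_familiarity      # familiarity erodes the edge
    return max(pts, 0.0)

# The Round 19 spread of scenarios, roughly:
print(home_advantage_pts(True, 0.95, 0.0))    # WCE v STK in Perth: large
print(home_advantage_pts(True, 0.70, 0.60))   # CAR v GWS at Docklands: middling
print(home_advantage_pts(False, 0.35, 0.50))  # BRI v GCS at the Gabba: small
```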

There’s a real hodge-podge of scenarios, which over the season shake out a bit like this:

Don’t stare at that too long, though; there’s not much to be gleaned from it. The Squiggle model considers Collingwood and Richmond to enjoy many games of mild home advantage, by virtue of their large crowds at MCG games. The South Australian & West Australian teams usually have 10 games of extreme home advantage but fewer games of extreme disadvantage, as they revisit the same venues repeatedly (especially Docklands). NSW and Queensland teams essentially never create the same level of home advantage as the rest of the league, due to their lack of fan-filled cauldrons. And the Cats have a cauldron as well as warm fan support at many of their away games, which is a pretty handy setup.

Let’s now throw in Opposition Strength, because that’s the other big piece of the puzzle. As you know, each year the AFL carefully divides the previous year’s ladder into blocks of 6 teams, and assigns double-up games based on an equalisation strategy, so that weaker teams receive gentler match-ups.

Ha ha! We know that never works, since it only takes a couple of teams to shoot up or down the ladder to throw the whole thing out. But it may never have worked worse than this year, with Geelong, the eventual premier (and last year’s preliminary finalist) receiving quite gentle double-up games, while back-to-back wooden spooners North Melbourne faced a much sterner test. To some extent, this happens because teams can’t play themselves – you can’t fixture the wooden spooner against the wooden spooner – but still, things have not gone well when the premier has double-up games against the bottom 2 teams (representing 4 wins combined), while the bottom team faces both Grand Finalists, who have 34 wins.
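If you want to put a number on a set of double-up games, the crudest possible metric – hypothetical here, not anything the AFL or Squiggle actually computes – is the combined prior-season wins of the double-up opponents:

```python
# Crude, hypothetical fixture-difficulty metric: sum the prior-season
# wins of a team's double-up opponents.

def double_up_difficulty(opponent_wins: list[int]) -> int:
    """Combined prior-season wins of the double-up opponents."""
    return sum(opponent_wins)

# Geelong's double-ups against the bottom two teams (2 wins each)
# versus North Melbourne's against both Grand Finalists (18 and 16):
print(double_up_difficulty([2, 2]))    # -> 4
print(double_up_difficulty([18, 16]))  # -> 34
```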

Overall, Adelaide did well out of the 2022 fixture – which, as a bottom-6 team, was at least to plan. Gold Coast, also lowly ranked in 2021, received a terrific set of double-up games, but lost it all to home advantage, as they hosted interstate teams at Carrara only 8 times while flying out 10 times themselves – and not just to familiar Docklands; the Suns were dispatched to every state plus the Northern Territory (twice), and even country Victoria.

St Kilda had terrible everything, as usual; St Kilda always have a terrible fixture, to the point where I’m starting to think it must be written into the AFL constitution. They hosted just 4 interstate teams (at Docklands, which their opponents visit often) while taking 6 interstate trips, including two to Perth, plus a bus to Kardinia. Their five double-up games – which should have been mild, as a middle-6 team – included both Grand Finalists, a Preliminary Finalist, and a Semi-Finalist. This combination of bad luck and bad design is very St Kilda, as was the Round 7 home game the Saints sold to play in Cairns and subsequently lost by a single point: a rare sighting of the case where a team’s unfair fixture really did cost them the match.

GWS also had four finalists in its five double-up games, and its fifth opponent was Carlton, who missed finals by a point. That’s enough for the Giants to take the booby prize for the worst set of match-ups.

Geelong’s bounty, while appreciated, I’m sure, was mostly wasted, since they finished two wins and percentage clear on top of the ladder, and were decidedly the best team in finals as well as the second half of the year in general (after Melbourne’s slide). It’s unlikely their fixture affected anything, and the Cats almost had a case for being dudded, escaping by 3 points against the Tigers in a home game played at the MCG, and by a goal against Collingwood in a home final at the same venue.

The 2023 AFL fixture will be released in the near future, and I have some thoughts. Chief among them: We are not actually achieving much equalisation when we focus on the 6-6-6 system – which is obviously flawed and often produces the opposite effect – while ignoring systemic, completely predictable imbalances, such as:

  • Poor teams sell home games.
  • Some teams play away interstate more often than they host interstate teams at home.
  • Some teams have many more games at the Grand Final ground – which doesn’t matter if you don’t make it, but can matter quite a lot if you do.
  • Teams with smaller fan bases generate less home advantage.
  • Geelong generate more home advantage playing any Melbourne-based team at Kardinia Park than they give away in the reverse match-up.
  • Some teams have home games shifted to their opponents’ home ground.

To be fair, the fixture-makers do seem to be aware of most of the above, and I think they make some effort to avoid any of them becoming too egregious. But the priority is clearly the double-up games, which are the least predictable part of the equation. The result is that Docklands teams – especially St Kilda! – are almost guaranteed a bottom-4 fixture every year.

And maybe we can’t fix that; maybe the world isn’t ready for a fixture that provides kinder fixtures to poor teams with smaller fan bases. But it should be part of the conversation. Today, any talk of fixture fairness quickly shifts to how many times each team should play each other, and stops there, as if that’s the whole problem. It’s not: a 17-round fixture (or 34 rounds) won’t stop teams selling games, or being shifted to the MCG to face Richmond and Collingwood, or being sent to country Victoria; or, for that matter, being lucky enough to play a team when they have a bunch of outs versus when they don’t.

It’s a grab-bag of factors, and there’s no way to smooth them all out. Teams will inevitably have good fixtures and bad fixtures. But we can do better if we don’t rest the whole thing on 6-6-6 and the clearly wrong assumption that next year’s ladder will look just the same as today’s.

The Squigglies 2022: Ladder Predictions

Oh sure, now, everyone looks back on the preseason ladders and mocks how wrong they were. “Essendon to make finals,” they say, shaking their heads. “Not even close.”

But no-one was close, of course; everyone’s ladder has a howler or two. If you picked Essendon to fall, you probably didn’t also pick Collingwood to rise, or Port Adelaide to miss.

That doesn’t mean they’re all equally bad, though. Here at Squiggle, we value the signal in the noise, even if there’s still a lot of noise. And ladder predictions that were less wrong than everyone else’s are to be celebrated.

Every 2022 Expert Ladder Prediction Rated

Best Ladder: Peter Ryan

This is a heck of a good one, and it’s no flash in the pan:

Ryan’s ladder managed to get 7/8 finalists, which is fantastic given that three of them finished last year in 11th, 12th, and 17th. (His tip of Fremantle for 6th — a single rung too low — was especially good.) Like everyone else, he missed Collingwood, but correctly foresaw exits by Port Adelaide, Essendon and GWS. He also resisted the popular urge to push Geelong down the ladder, and wisely slotted the Eagles into the bottom 4.

Damian Barrett also registered a good ladder this year, with 6/8 finalists and three teams in the exact right spot. There was a fair gap from these two to Jake Niall in third.

Runner-Up: Damian Barrett

Best Ladder by a Model: Squiggle (6th overall)

Squiggle nudged out other models with some optimism on Sydney and pessimism on Port Adelaide, but not enough of the former on Collingwood and not enough of the latter on GWS and the Bulldogs.

Honourable Mention: The Cruncher (11th overall)

Long-Term Performance Award: Peter Ryan

Not everyone publishes a ladder prediction every year — it’s a little shocking how frequently journalists come and go from the industry — so although I always have a bag of 40 or 50 experts and models to rank, only half appear in all four of the years I’ve been doing this. Of those, Peter Ryan has the best record, finishing 19th (out of 45), 9th (/56), 3rd (/42) and 1st (/45). That’s an average rank of 8th, making him the only one to outperform Squiggle over the same period.

Honourable Mention: Squiggle (5th, 20th, 9th, 6th)

Live Running Predictions

Squiggle pipped AFLalytics and Wheelo Ratings on the Ladder Scoreboard this year, mostly thanks to some solid returns in the early rounds.

Throughout the year — but especially early — the teams that models most overrated were GWS and Hawthorn, while Collingwood and Fremantle were the most underrated.

Introducing Power Rankings

Last week, in the Squiggle models group chat – of course there’s a group chat – Rory had a good idea:

Rory’s idea

It turned out that everybody had data on hand for this, because if you have a model, you also have a rating system. So I began collecting this, and now there’s a page to view it.

There’s also a widget here on the site, to the right of this post, or else above it.

On the main page, you can see how ratings change over time, and compare ratings from different models.

Power Rankings measure team strength at a point in time. They ignore the fixture, home ground advantage, and all the other factors that go into predicting the outcome of a match or a season. Instead, they’re a simple answer to Rory’s question: Which teams are actually good?
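The rating systems under the hood differ from model to model, but the simplest well-known member of the family is Elo, where a rating moves after each game according to how the result compared with expectation – and, true to the definition above, it knows nothing about fixtures or venues. A bare-bones sketch, using conventional default parameters rather than any particular Squiggle model’s:

```python
# Bare-bones Elo-style team ratings: one illustrative example of the
# kind of rating system every prediction model carries around. K and
# SCALE are conventional defaults, not any Squiggle model's parameters.

K = 32          # update speed: how far one result moves a rating
SCALE = 400.0   # rating gap corresponding to ~10:1 win odds

def expected_win(rating_a: float, rating_b: float) -> float:
    """Probability that team A beats team B, given current ratings."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / SCALE))

def update(rating_a: float, rating_b: float, a_result: float):
    """Return new ratings. a_result: 1.0 win, 0.0 loss, 0.5 draw."""
    delta = K * (a_result - expected_win(rating_a, rating_b))
    return rating_a + delta, rating_b - delta

# An upset moves ratings much further than an expected result:
print(update(1600, 1400, 1.0))  # favourite wins: ratings barely move
print(update(1600, 1400, 0.0))  # upset: big correction both ways
```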

The Curse of the Curse

I enjoy a useless AFL stat as much as the next person, but this kind of thing tests me:

“Curse” is a bit of a tell in footy. It usually means “coincidence.” If it were a real effect, we’d have a decent theory about why – and people love to invent theories; there’s no effect we won’t try to pair with a cause, no matter how thin the evidence. So when there’s an effect and nobody can even invent a plausible cause, I tend to doubt it’s due to the spooky unseen hand of an unnamed force.

Usually a “curse” is an odd stat that, at first glance, seems like it can’t be the result of random chance, but that’s only because we don’t understand randomness. Our gut tells us that flipping five heads in a row is basically impossible, for example, when in fact true randomness tends to contain a lot more natural variation than people think. (You can flip ten heads in a row, if you’re willing to toss coins for a few hours, and people will think you’re a magician.)
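Don’t trust your gut on that? Simulate it. The sketch below estimates how often a run of five heads turns up somewhere in a couple of hundred flips:

```python
import random

# How common are "impossible" streaks? Estimate the probability that
# 200 fair coin flips contain at least one run of 5 heads.

def has_streak(flips: list, length: int = 5) -> bool:
    run = 0
    for heads in flips:
        run = run + 1 if heads else 0
        if run >= length:
            return True
    return False

TRIALS = 20_000
hits = sum(
    has_streak([random.random() < 0.5 for _ in range(200)])
    for _ in range(TRIALS)
)
print(f"P(5 heads in a row somewhere in 200 flips) ≈ {hits / TRIALS:.2f}")
# ≈ 0.97: the "basically impossible" streak is a near-certainty.
```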

Here’s the 0-2 stat:

I have a few problems with this.

First, I have to point out it’s technically wrong, because we’ve had nine finalists from 0-2, counting Carlton in 2013, who were elevated from ninth after Essendon’s disqualification.

But more importantly, the underlying effect sounds suspiciously like “It’s harder to make finals if you lose games.” And we knew that already. Is there anything magical about the first two games? Because if not, it’s just saying that dropping games hurts your finals chances.

Then there are two snipes – two suspiciously precise choices: the starting point (2010), and the number of games (2). If there’s a genuinely interesting effect here, and not a coincidence, we should expect to see not-quite-as-dramatic-but-still-suggestive numbers when those key numbers are varied a little.

Instead, it vanishes pretty abruptly. If you look at a longer time period, you see about 20% of 0-2 teams making finals, and if you look at 0-1 or 0-3 or 0-4 teams, the numbers again are about what you’d expect: about one-third of 0-1 teams make it, about one-in-ten 0-3 teams, and only Sydney 2017 has made it from 0-4 this century. So the more games you lose, the harder it is to make finals, in a steady and predictable way.
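This is an easy check to run yourself, given any archive of season results. A sketch – the data structure and the tiny sample below are invented stand-ins for a real results archive:

```python
# Parameter-sensitivity check for the 0-2 "curse". The `seasons` data
# here is a tiny invented stand-in; in practice you'd load a full
# results archive. Results are 'W'/'L'/'D' in round order.

seasons = {
    2021: [
        ("Melbourne", list("WWWWW"), True),
        ("Sydney", list("LWWWW"), True),
        ("St Kilda", list("LLWWL"), False),
        ("Collingwood", list("LLLWL"), False),
    ],
    # ... more seasons ...
}

def finals_rate_after_losing_start(losses: int) -> float:
    """Share of teams starting 0-<losses> that went on to make finals."""
    started, made_it = 0, 0
    for teams in seasons.values():
        for _, results, made_finals in teams:
            if results[:losses] == ["L"] * losses:
                started += 1
                made_it += int(made_finals)
    return made_it / started if started else float("nan")

# If the effect is real and not a snipe, the rate should fade smoothly
# as the losing start lengthens (roughly 1/3, 1/5, 1/10, ...) and should
# survive shifting the starting year. It does the former, not the latter.
for k in (1, 2, 3, 4):
    print(f"0-{k}: {finals_rate_after_losing_start(k):.2f}")
```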

Because what actually happened here – the whole reason this stat became popular – is that between 2008 and 2016, there was a patch where only two 0-2 teams made finals (Carlton 2013 and Sydney 2014). This hit rate was quite a bit lower than the years before and after, although not wildly so:

Year   Finalists from 0-2
2000   2 out of 5
2001   1 out of 5
2002   1 out of 5
2003   1 out of 3
2004   2 out of 4
2005   0 out of 3
2006   2 out of 3
2007   1 out of 4
2008   0 out of 4
2009   0 out of 4
2010   0 out of 6
2011   0 out of 4
2012   0 out of 5
2013   1 out of 7
2014   1 out of 6
2015   0 out of 5
2016   0 out of 4
2017   1 out of 8
2018   1 out of 4
2019   1 out of 5
2020   1 out of 4
2021   3 out of 5

Eyeballing that, you might notice something else about the middle years: There are more 0-2 teams. And indeed we had a number of clubs at historical lows in this period, including two teams who were introduced to the league. Fourteen of those 0-2 non-finalists from 2008-2016 are actually just four clubs failing over and over: the two expansion teams plus Melbourne and Richmond.

So this always looked a fair bit like random variation plus an unusually weak bottom end of the comp. But somehow it gave birth to a “curse” that meant flag contenders couldn’t afford to drop their second game.

And now that regular service has resumed – implying that there was never much to see in the first place – “a new trend is emerging.”

Ladders of Future Past

You can now use the ladder predictor on seasons as far back as 2000. Relatedly, the Squiggle API now serves fixture info on games dating back to 2000, and you can also use it to get a list of which teams were playing in any of those years.
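For instance, fetching the 2000 fixture and team list looks something like this – query formats as I understand them from the Squiggle API docs, so double-check against api.squiggle.com.au:

```python
import json
import urllib.request

# Pull the 2000 fixture and team list from the Squiggle API. Query
# formats per my reading of the API docs (api.squiggle.com.au), which
# also ask that clients identify themselves via User-Agent.

BASE = "https://api.squiggle.com.au/"

def fetch(query: str) -> dict:
    req = urllib.request.Request(
        BASE + "?q=" + query,
        headers={"User-Agent": "example-fixture-script"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

games = fetch("games;year=2000")["games"]  # every game from the 2000 season
teams = fetch("teams;year=2000")["teams"]  # teams in the league that year

print(f"{len(games)} games between {len(teams)} teams in 2000")
```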

You might be wondering why you’d ever want to predict past ladders. To be honest, I’m not sure. I just know that people write in sometimes asking if the site can let them do that.

This particular addition was triggered by Jake, who emailed me to say he’d been in iso for a month, and he kept busy by re-entering past seasons into the predictor one game at a time to see how the ladder changed. Jake had done this for 2011-2022, but wanted to go back further.

So now you can. I am all about football as a mental escape from reality, Jake. That’s the best possible use of football.

The Squigglies 2021: Pre-Season Ladders

Heading into 2021, there was a bit of hive mind syndrome going around:

So everybody had Richmond way too high, and Melbourne, Sydney and Essendon too low. Collingwood were generally tipped for somewhere around mid-table, often pushing into the Eight, as were St Kilda.

This same-same field of predictions delivered neither a spectacularly good nor spectacularly bad ladder. Instead, everyone was just kind of okay. The average was better than just tipping a repeat of 2020, but not by much.

Every 2021 Expert Preseason Ladder Rated

Best Ladder: Daniel Cherny

All year long, the Western Bulldogs looked a deserving top 2 team. Then they plunged from 1st to 5th in the final three rounds, upending a lot of ladder predictions along the way. A beneficiary was Daniel Cherny, who’d tipped them for 6th, and suddenly had the best projection out of anyone. He had 6 of the Top 8, missing Sydney & Essendon for Richmond & St Kilda, and half the Top 4. He also wisely tipped Collingwood to fall further than most (although not as far as they actually did).

Runner-Up: Sarah Black

Best Ladder by a Model: The Flag (6th overall)

After coming second in this category last year, this was a great performance by The Flag, nailing three out of the Top 4, with Richmond the only miss.

Honourable Mention: AFLalytics (8th overall)

Lifetime Achievement Award: Peter Ryan

Of the 26 experts and models I’ve tracked for three consecutive years, Peter has the best record, averaging 65.03 points across that period. He’s been getting better, too, finishing 19th in 2019, 9th in 2020, and 3rd this year.

Honourable Mention: Squiggle (5th in 2019, 20th in 2020, 9th in 2021)

Mid-Season Predictions

If you’re interested in how models predicted the final ladder during the season, head on over to the Ladder Scoreboard. New model Glicko Ratings scored best this year, while as usual all models significantly outperformed the actual ladder.

Ninety-nine percent

via maxbarry.com

If you do one thing each day that has a 99% survival rate, you’ll likely be dead in under ten weeks. If boarding a plane had a 99% survival rate, a typical flight would end by carting off at least one passenger in a body bag, perhaps two or three. Ninety-nine sounds close enough to 100, but anything with a 99% survival rate is incomprehensibly dangerous.

Go sky-diving, and you’re over two thousand times safer than if you were doing something with a 99% survival rate. Driving, the most dangerous everyday activity, requires you to clock up almost a million miles of travel before you’re only 99% likely to survive. Even base jumping, perhaps the single most dangerous thing you can do without actively wanting to die, is twenty-five times safer than anything that carries a 99% survival rate.

Ninety-nine bananas is essentially one hundred bananas. Ninety-nine days is practically a hundred days. But 99% is often not even remotely close to 100%. It feels like similar numbers should lead to similar outcomes, but the difference in life expectancy between 99% and 100% survivable daily routines isn’t one percent: It’s ten weeks versus immortality.

It’s simple enough to calculate the probability of more than one thing happening: You just multiply the individual probabilities together. The likelihood of surviving for three days, for example, while doing one thing per day with a 99% survival rate, is 0.99 x 0.99 x 0.99 = 0.9703, or 97.03%.
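The “dead in under ten weeks” line at the top comes from exactly this arithmetic, compounded day after day:

```python
import math

# Compound a 99% daily survival rate over time. "Dead in under ten
# weeks" is just the day your cumulative survival odds drop below 50%.

DAILY = 0.99

def survival(days: int) -> float:
    return DAILY ** days

print(survival(3))    # 0.9703 -- the three-day example above
print(survival(365))  # ~0.026 -- a ~97% chance of dying within the year

# Days until you're more likely dead than alive:
print(math.log(0.5) / math.log(DAILY))  # ~68.97 days: just under ten weeks
```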

But we find this deeply counter-intuitive. We prefer to think in categories, where everything can be labeled: good or bad, safe or dangerous, likely or unlikely. If we have an appointment and need to catch both a train and a bus, each of which has a 70% chance of running on time, we tend to consider both events as likely, and therefore conclude that we’ll make it. The actual likelihood that both services run on time is 0.70 x 0.70 = 0.49, or only 49%: We’ll probably be late.

We also prioritize feelings over numbers. Here’s a game: Pick a number between 1 and 100, and I’ll try to guess it. If I’m wrong, I’ll give you a million dollars. If I’m right, I’ll shoot you dead. Would you like to play?*

Most people won’t play this game, because the thought of being shot dead is too scary. It’s shocking and visceral, so when you weigh up the decision, both potential outcomes balloon in your mind until they feel roughly equal, as if the odds were 50/50, rather than one being 99 times more likely than the other.

But put the same game in a mundane context — if instead of being shot, you get COVID, and instead of a million dollars, you just go to work as usual — and we tend to return to categorical thinking, where the dangerous-but-unlikely outcome is filed away as too improbable to be worth thinking about. As if close to 100% is close enough.

Between 99% and 100% lies infinity. It spans the distance between something that happens half a dozen times a year and something that hasn’t happened once in the history of the universe. With each step we take beyond 99%, we cover less distance than before: 1-in-200 gets us to 99.50%, then 1-in-300 to 99.67%, then 1-in-400 only to 99.75%. We’ve quadrupled our steps, but only covered three-quarters of the remaining distance. We can keep forging ahead forever, to 1-in-a-thousand and 1-in-a-million and beyond, and still there will be an endless ocean between us and 100%.

You have to watch out for 99%. You have to respect the territory it conceals.

* I pick 73.