LADDER

Squiggle

Pre-season 2020.

Rank  Team              Wins  %
 1    Richmond          16.2  140%
 2    Hawthorn          13.6  118%
 3    Geelong           13.5  119%
 4    Western Bulldogs  13.5  116%
 5    Collingwood       12.7  113%
 6    Port Adelaide     12.2  108%
 7    Brisbane Lions    12.0  107%
 8    GWS               11.8  106%
 9    West Coast        11.6  104%
10    Fremantle         11.3  103%
11    North Melbourne   10.5   98%
12    St Kilda          10.1   96%
13    Sydney             9.2   90%
14    Melbourne          9.1   90%
15    Carlton            8.9   89%
16    Essendon           8.4   87%
17    Adelaide           7.9   83%
18    Gold Coast         5.5   71%


This ladder is a weighted average of all models.

About ladder projections

Ladder Projections are weirder than they first appear! It's natural to think that there are many possible ladders and we should simply pick the most likely one. The problem is there are far more possible ladders than you can imagine—more than the number of atoms in the universe. Even ignoring the number of wins and simply trying to rank teams in order, there are 6.4 quadrillion possible combinations.
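The 6.4 quadrillion figure is simply 18 factorial, the number of distinct ways to order 18 teams, which a one-liner confirms:

```python
import math

# Number of distinct orderings (permutations) of the 18 AFL teams
orderings = math.factorial(18)
print(f"{orderings:,}")  # 6,402,373,705,728,000 ≈ 6.4 quadrillion
```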

Until there are only one or two rounds left, even the most likely ladder has no chance of actually occurring — the likelihood is so close to zero, it may as well be.

So we need to abandon any hope of being able to actually pick the final ladder. Instead, what we're attempting to do is to minimize the amount of error between our prediction and reality.

This raises a few subjective questions, such as: What is the most important part of a ladder? Is it the ranking or the number of wins? Because we can't always minimize the error in both at once.

This ladder cares most about ranks, because that determines finals positions. But ranks are derived from wins, and wins are more predictable — a good model should get the number of wins about right, but can easily miss the mark on ranks, since those can change sharply from only a few unexpected results, particularly around the middle of the ladder.

Projected ladders, then, are a series of compromises. They won't get everything right, and they often can't make one thing more right without making something else more wrong. Such quirks are the unavoidable result of attempting to distill a wide range of possible futures down into one single ladder — which will definitely be wrong, but is hopefully close.

Common Questions

Q. Why is the season tipped to be so close?

Most early-season projections will predict an unusually close year, tipping too few wins for the top team and too many for the wooden spooner. This isn't an error; it's because "How many wins will the top team have, whoever they turn out to be?" is actually a different question to "How many wins will Geelong have?" — even if we expect Geelong to finish on top!

It's highly likely that the minor premier will turn out to be a team that wins more games than people expected. But we can't know who this will be, since we can't predict who will be better than we predict.

It's the same as tossing a coin 10 times: If I'm predicting a ladder, I'd say heads and tails will each go 5-5, since this is their long-term average. I know that whichever one finishes on top will probably score 6 or more flips (75% chance), but I can't say which one that will be. I have to choose which question I want to get right: the number of wins for each team, or the number of wins of whoever finishes on top. If I choose to answer the first question, I will get a more accurate forecast of wins per team, but at the price of sacrificing some accuracy from the shape of the overall win distribution.
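A quick Monte Carlo check (a sketch, not Squiggle's actual code) reproduces that 75% figure: whichever side finishes "on top" of the 10 tosses scores max(heads, tails), which is 6 or more in every case except an exact 5–5 split.

```python
import random

random.seed(1)
TRIALS = 100_000
top_six_plus = 0
for _ in range(TRIALS):
    heads = sum(random.random() < 0.5 for _ in range(10))
    # The side that "finishes on top" scores max(heads, tails);
    # that's >= 6 unless the split is exactly 5-5.
    if max(heads, 10 - heads) >= 6:
        top_six_plus += 1

print(top_six_plus / TRIALS)  # ~0.754; exactly 1 - C(10,5)/2**10 = 0.7539...
```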

Q. Why is a team ranked lower even though it has more projected wins?

Usually because of percentage. When a team is projected to have "12.2 wins," this really means: "a bit more likely to have over 12 wins than under." At the end of the season, everyone will have whole numbers of wins (plus draws), and at this point, their ladder position may be determined by their percentage. Ladder projections based on tens of thousands of simulations can tally up the number of times this occurs and figure out which may matter more: an edge in projected wins or a lead in percentage.
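As an illustrative sketch (the teams, win probabilities, and tie-break rule here are invented, not taken from any real model): simulate two teams' whole-number win totals over a 22-game season and let percentage decide ties. The team with slightly higher projected wins can still finish behind more often than not.

```python
import random

random.seed(7)
SIMS = 50_000
a_ahead = 0
for _ in range(SIMS):
    # Hypothetical teams: A wins each of 22 games with probability 0.56
    # (~12.3 projected wins); B wins with probability 0.55 (~12.1 projected
    # wins) but, we assume, holds a clearly better percentage.
    a_wins = sum(random.random() < 0.56 for _ in range(22))
    b_wins = sum(random.random() < 0.55 for _ in range(22))
    # Final ladders use whole-number wins; ties on wins go to B on percentage.
    if a_wins > b_wins:
        a_ahead += 1

print(a_ahead / SIMS)  # fraction of simulated seasons with A ranked above B
```

Despite A's higher projected wins, exact ties on wins are common over a 22-game season, and in this sketch they all fall B's way, so B finishes ahead in a majority of the simulated seasons.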

It's also possible for this to happen because this is an aggregate Projected Ladder, combining the predictions of many different sources, and it prioritizes predicted ranks over anything else. So if, on average, the sources rank Port Adelaide higher than Richmond, that's where the Projected Ladder will place them, too. It will do this even if, on average, Richmond have both higher predicted wins and higher predicted percentage — which is possible because different models have different ideas about how wins might be distributed. Fundamentally, the Projected Ladder trusts each source to interpret its own data and reach sensible conclusions about what this means in terms of likely finishing position.

It's also worth noting that there's a little more sophistication than meets the eye in how "average rank" is calculated. For sources that provide finish position estimates — those colourful bars that show how likely each team is to finish in each position — the Projected Ladder calculates true average finishing ranks, rather than relying on simple ordinals. For example, if a model says that West Coast have a 90% chance of finishing 1st and a 10% chance of finishing 2nd, their simple ordinal finishing rank is 1, while their true average finishing rank is 1.1. If they have a 60% chance of finishing 1st and a 40% chance of finishing 2nd, their simple ordinal rank is still 1, while their true average finishing rank is 1.4 — much closer to the midpoint of 1st and 2nd. In this way, the Projected Ladder is able to distinguish more precisely between teams predicted to finish close together.
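The arithmetic in that example is just a probability-weighted mean of finishing positions:

```python
def average_finishing_rank(finish_probs):
    """True average finishing rank from a {position: probability} mapping."""
    return sum(pos * p for pos, p in finish_probs.items())

# The West Coast examples from the text:
print(average_finishing_rank({1: 0.9, 2: 0.1}))  # 0.9*1 + 0.1*2 = 1.1
print(average_finishing_rank({1: 0.6, 2: 0.4}))  # 0.6*1 + 0.4*2 = 1.4
```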

Q. Why doesn't the ladder make logical sense?

Sometimes ladders don't add up to the exact right number of wins, or predict teams to finish in logically impossible places, such as tipping a team to finish 4th with one round to go when it must finish either 3rd or 5th. The natural human response is to think that if they can't get even such basic things right, they shouldn't be trusted at all. In fact, the ladder is probably prioritizing error minimization, and is likely to be more accurate, at least by that measure, than a ladder that insists on being logically possible.

PROJECTIONS

Pre-season 2020.

     Team              W  L  D  Proj. Wins  Proj. Finish
 1   Adelaide          0  0  0    +7.9      17
 2   Brisbane Lions    0  0  0   +12.0       7
 3   Carlton           0  0  0    +8.9      15
 4   Collingwood       0  0  0   +12.7       5
 5   Essendon          0  0  0    +8.4      16
 6   Fremantle         0  0  0   +11.3      10
 7   Geelong           0  0  0   +13.5       3
 8   Gold Coast        0  0  0    +5.5      18
 9   GWS               0  0  0   +11.8       8
10   Hawthorn          0  0  0   +13.6       2
11   Melbourne         0  0  0    +9.1      14
12   North Melbourne   0  0  0   +10.5      11
13   Port Adelaide     0  0  0   +12.2       6
14   Richmond          0  0  0   +16.2       1
15   St Kilda          0  0  0   +10.1      12
16   Sydney            0  0  0    +9.2      13
17   West Coast        0  0  0   +11.6       9
18   Western Bulldogs  0  0  0   +13.5       4