Fixing the ICC ratings
Russell Degnan

In my post on the fundamental sameness of ratings I implied some criticism of the ICC ratings. Most choices about how to construct a ratings system are either a design choice - home advantage doesn't matter with a large sample and an even schedule - or relate to what the rating is trying to achieve. The decay rate will be different if a rating is supposed to reflect the last two months rather than the previous two years.

The ICC ratings decide a championship trophy and should therefore reflect the previous 12 months, but with scheduling so uneven that is nearly impossible, and different choices have been made in order to keep the system relatively simple.

As discussed in a previous post, however, the ICC ratings have some genuine problems. The choice to cap the implied probability at 90% means that for a large number of matches the ratings are a poor reflection of the relative quality of the sides. Similarly, the choice of a decay that first reduces and then drops previous results causes other issues, because the quality of the opposition has already been accounted for in those results.

Both of these issues are relatively easy to fix, and this post discusses the benefits of doing so, particularly in a new world where nations with wildly different abilities must all be included in the ratings - as opposed to the full-member-oriented system where all teams were broadly at the same level.

Changing the implied probability

As noted, the basic issue with the ICC ratings' implied probability is that once teams are more than 40 ranking points apart the ratings assume that the stronger side will win 90% of matches. This pushes the ratings apart - particularly when one side is significantly weaker than their opponents. It also means that the points on offer for wins over strong sides are lower for bad sides than good ones - which limits the ability of the ratings to adapt to changes in ability.

As the graph above shows (the blue ICC lines), once the gap between teams gets above 40 points, the points gained relative to their current rating remain the same. The value of a win therefore declines as the probability of winning decreases. At its most extreme, when sides are rated more than 180 points apart, a strong side will get more points for losing a match than the weaker team will get for winning it.
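To make that crossover concrete, here is a minimal sketch of the existing award rules - the formulas are set out in full in the table below; the function name and the example ratings are my own, chosen only to illustrate the 180-point crossover described above.

```python
# A minimal sketch of the existing ICC award rules as set out in the table
# below (the function name and the example ratings are mine, purely
# illustrative).

def icc_points(own: float, opp: float, won: bool) -> float:
    """Points credited to a team for a single match under the current system."""
    if abs(own - opp) < 40:
        return opp + 50 if won else opp - 50
    if own > opp:                                # stronger team
        return own + 10 if won else own - 90
    return own + 90 if won else own - 10         # weaker team

# Once the gap passes 180 points, the stronger side banks more for a loss
# than the weaker side does for a win.
strong, weak = 200.0, 10.0
assert icc_points(strong, weak, won=False) > icc_points(weak, strong, won=True)   # 110 > 100
```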

The solution is to adjust the points on offer in proportion to the ratings gap between the two teams, as per the red lines in the graph, which eventually settle on the stronger side receiving no additional points for a win (i.e. just their current rating) - an implied probability of 100% - and the weaker team receiving half the ratings gap plus 80 above their own rating in the unlikely event they win.

The formulas would therefore be as follows:

Ratings gap: 0-40
    ICC formula, stronger team:       Win: OppRat + 50    Loss: OppRat - 50
    ICC formula, weaker team:         Win: OppRat + 50    Loss: OppRat - 50
    Proposed formula, stronger team:  Win: OppRat + 50    Loss: OppRat - 50
    Proposed formula, weaker team:    Win: OppRat + 50    Loss: OppRat - 50

Ratings gap: 40-90
    ICC formula, stronger team:       Win: OwnRat + 10    Loss: OwnRat - 90
    ICC formula, weaker team:         Win: OwnRat + 90    Loss: OwnRat - 10
    Proposed formula, stronger team:  Win: 0.1 * OppRat + 0.9 * OwnRat + 14    Loss: 0.6 * OppRat + 0.4 * OwnRat - 66
    Proposed formula, weaker team:    Win: 0.6 * OppRat + 0.4 * OwnRat + 66    Loss: 0.1 * OppRat + 0.9 * OwnRat - 14

Ratings gap: 90-180
    ICC formula, stronger team:       Win: OwnRat + 10    Loss: OwnRat - 90
    ICC formula, weaker team:         Win: OwnRat + 90    Loss: OwnRat - 10
    Proposed formula, stronger team:  Win: 0.05 * OppRat + 0.95 * OwnRat + 9    Loss: 0.55 * OppRat + 0.45 * OwnRat - 71
    Proposed formula, weaker team:    Win: 0.55 * OppRat + 0.45 * OwnRat + 71   Loss: 0.05 * OppRat + 0.95 * OwnRat - 9

Ratings gap: 180 plus
    ICC formula, stronger team:       Win: OwnRat + 10    Loss: OwnRat - 90
    ICC formula, weaker team:         Win: OwnRat + 90    Loss: OwnRat - 10
    Proposed formula, stronger team:  Win: OwnRat    Loss: 0.5 * OppRat + 0.5 * OwnRat - 80
    Proposed formula, weaker team:    Win: 0.5 * OppRat + 0.5 * OwnRat + 80    Loss: OwnRat

They look more complicated than they are. The existing ICC ratings use either a team's own rating or the opposition's; blending the two allows the much more gradual increase in points shown above (optimally the 0-40 region would also be curved, but I have chosen to leave it as is).
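For those who prefer code to tables, a sketch of the proposed formulas follows - the function name and structure are mine, the coefficients are lifted straight from the "Proposed Formula" rows above.

```python
# A sketch of the proposed award formulas from the table above; the banding
# and coefficients are the post's, the function itself is mine.

def proposed_points(own: float, opp: float, won: bool) -> float:
    """Points credited for a single match under the proposed blended formula."""
    gap = abs(own - opp)
    stronger = own > opp
    if gap < 40:
        return opp + 50 if won else opp - 50
    if gap < 90:
        if stronger:
            return (0.1 * opp + 0.9 * own + 14) if won else (0.6 * opp + 0.4 * own - 66)
        return (0.6 * opp + 0.4 * own + 66) if won else (0.1 * opp + 0.9 * own - 14)
    if gap < 180:
        if stronger:
            return (0.05 * opp + 0.95 * own + 9) if won else (0.55 * opp + 0.45 * own - 71)
        return (0.55 * opp + 0.45 * own + 71) if won else (0.05 * opp + 0.95 * own - 9)
    if stronger:
        return own if won else (0.5 * opp + 0.5 * own - 80)
    return (0.5 * opp + 0.5 * own + 80) if won else own
```

A side rated 120 that beats one rated 20 (a 100-point gap) is credited 124 points for the match, while the weaker side would be credited 146 for a win - well above either side's current rating, so the upset actually moves the numbers.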

The changed implied probability shows the benefits of this approach:

Whereas previously teams were treated as either closely matched or a 90% chance of victory, their approximate chance of victory can now be read off across the full range of ratings gaps.
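The curve can also be checked directly: the implied probability for any pair of win/loss awards is the break-even probability at which a match is expected to leave the rating where it is. A sketch, with illustrative numbers of my own:

```python
# Back out the win probability implied by a pair of award formulas: the
# break-even p satisfies p * win_pts + (1 - p) * loss_pts = rating.

def implied_probability(rating: float, win_pts: float, loss_pts: float) -> float:
    return (rating - loss_pts) / (win_pts - loss_pts)

# Existing system, any gap of 40 or more: win = OwnRat + 10, loss = OwnRat - 90,
# which is where the 90% cap comes from.
print(implied_probability(100, 110, 10))                 # 0.9

# Proposed system, stronger team at a 120-point gap (ratings of 130 and 10,
# purely illustrative).
own, opp = 130.0, 10.0
win_pts = 0.05 * opp + 0.95 * own + 9
loss_pts = 0.55 * opp + 0.45 * own - 71
print(implied_probability(own, win_pts, loss_pts))       # roughly 0.98
```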

This change would only subtly alter the ratings. Bangladesh's improvement a few years ago would have given them a more rapid (and noticeable) boost, reflecting their actual ability rather than their long period of tepid performances. The odd associate upset would have been better reflected in their ratings - when they are included. But as these results are rare, the broader outline of the ratings would be the same. The more important change is to the decay rate.

Changing the decay rate

As a matter of basic maths, if points were to accumulate indefinitely then new matches would have a decreasing effect on the ratings. The ICC works around this in the simplest way - by reducing the previous two years by 50% and excluding anything before that. But this has an unfortunate side effect: at each exclusion date the ratings jump, sometimes substantially, and often in strange directions.
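Sketched in code - my own simplification of that description, not the ICC's exact schedule - the effect looks like this:

```python
# A sketch of the year-end adjustment as described above: the two previous
# years count at half weight and anything older is excluded. Each year's
# results are a (points, matches) pair, newest last; treat the numbers as
# illustrative rather than the ICC's real bookkeeping.

def icc_rating(years: list[tuple[float, float]]) -> float:
    recent = years[-3:]                              # current year plus the two before it
    weights = [0.5] * (len(recent) - 1) + [1.0]      # older years halved, current year in full
    points = sum(w * p for w, (p, _) in zip(weights, recent))
    matches = sum(w * m for w, (_, m) in zip(weights, recent))
    return points / matches if matches else 0.0

history = [(1000.0, 10.0), (900.0, 10.0), (1300.0, 10.0)]
print(icc_rating(history))                    # 112.5
# Opening a new (empty) year shifts the window and the weights, so the rating
# jumps to 110.0 without a ball being bowled.
print(icc_rating(history + [(0.0, 0.0)]))     # 110.0
```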

The effect of this approach can be seen in a simple example. Here a team plays (and wins or loses matches) at different levels over the course of several years. The team's true rating in each year - which, nominally, the ratings should reflect - is as follows: 100, 80, 100, 120, 120, 120, 100. The graph shows this shift (at the start of each year) and the impact of the ICC decay formula (at the end of each year).

Notice that, because the just-completed year is reduced to 50% in preparation for the new one, the rating shifts away from the true rating at the end of the second and third years, as older results are effectively re-weighted upwards relative to the most recent year. The ICC rating eventually meets the true rating only if the team has maintained the same level for two years; otherwise it is often a long way from correct.

The oddity of this simple choice of decay is that it is also unnecessary. The "natural" way to ensure old results do not unduly impact the rating, without unseemly jumps, is to merely divide both the points accumulated and the number of matches played by some amount. In the graph above this was 3, effectively reducing the impact of old results to a third each year (and to a ninth the year after).
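As a sketch, assuming the divide-by-3 used for the graph:

```python
# A sketch of the proposed decay, assuming the factor of 3 used in the graph:
# at year end both the accumulated points and the matches played are divided
# by the same amount, so the rating itself does not move until the next match
# is played, but new results then carry roughly three times the weight.

def year_end_decay(points: float, matches: float, factor: float = 3.0) -> tuple[float, float]:
    return points / factor, matches / factor

def rating(points: float, matches: float) -> float:
    return points / matches if matches else 0.0

points, matches = 1440.0, 12.0
assert rating(points, matches) == rating(*year_end_decay(points, matches))   # still 120.0
```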

The proposed system never quite matches the yellow line - though arguably nor should it - but it is consistently closer than the ICC and gradually gets closer the longer a team stays at the same level (in the third year of ratings at 120 it reaches 119).

More importantly, there are no jumps. As both points and weights are declined by the same amount, a team stays on the same rating until they play. Which is exactly how it should be.

Cricket - Analysis 23rd October, 2018 23:33:04
