A Change of Methodology
Russell Degnan

A brief respite after the conclusion of the summer schedule has given me a chance to consider better ways of calculating the ratings, and to address one of their longest-standing (and deepest) flaws. As the ratings were calculated, teams would improve their rating with a win, and regress with a loss, with a draw slightly penalising the better team.

There were several factors in calculating the change (in declining order of importance): the ratings difference between the two teams, adjusted for a home team advantage (expectation); the margin of victory or loss (result); the number of tests in the series (significance); and whether the series was alive, meaning it could still be drawn.
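As a rough illustration, those factors might combine into an Elo-style update along the lines of the sketch below. The logistic expectation curve, the constant k, the dead-rubber discount, and the per-factor weightings are my assumptions, not the ratings' actual formula.

    HOME_ADVANTAGE = 100   # rating points credited to the home side

    def expected_score(ratings_diff):
        """Expected result on a 0..1 scale (1 = home win, 0.5 = draw),
        using a standard Elo logistic curve as a stand-in."""
        return 1 / (1 + 10 ** (-ratings_diff / 400))

    def old_update(home, away, result, margin_weight, tests, alive, k=32):
        """Old win/loss-based change to the home side's rating.

        result:        1.0 win, 0.5 draw, 0.0 loss (home perspective)
        margin_weight: >1 for big wins/losses, <1 for narrow ones
        tests:         number of tests in the series
        alive:         whether the series result was still open
        """
        diff = (home + HOME_ADVANTAGE) - away
        change = k * (result - expected_score(diff))  # expectation vs result
        change *= margin_weight                       # margin of victory/loss
        change /= tests                               # series significance
        if not alive:
            change *= 0.5     # dead rubbers assumed to count half
        return change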

Two problems have persisted: an over-sensitivity to unexpected results, and a lag in the decline of very good teams. The latter could be seen most clearly in the Australian team of 1948-1954. Although that side is generally considered to have peaked in 1948, they continued to win for several years afterwards, and therefore continued to improve their rating.

Old win/loss based ratings: the mid 20th century.

Although great teams will continue to win for a while after they lose their great players, it would be nice if the ratings reflected the declining quality of the side before it starts to record losses, and, vice versa, if losing teams (New Zealand during the same period, for instance) started recording improvements before they begin to win.

A change has therefore been made to adjust not on the result itself, but on the difference between the actual and expected margins. Setting this was complex. There are essentially two linear fits: a noisy win line of about 0.6 runs per point of ratings difference, and a flat line of draws. Because a draw is still a result of sorts - excluding those caused by prolonged rain - the expected margin has been set at 0.35 runs per point of ratings difference, with the home advantage still worth 100 points. These values are always relative, and therefore marginal, but it is nice to have them accurate.
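In code, the expected margin is a one-liner; a minimal sketch using the two constants quoted above (the function name is mine):

    RUNS_PER_POINT = 0.35   # expected runs per point of ratings difference
    HOME_ADVANTAGE = 100    # home side still credited 100 rating points

    def expected_margin(home_rating, away_rating):
        """Expected margin in runs, from the home side's perspective."""
        return RUNS_PER_POINT * ((home_rating + HOME_ADVANTAGE) - away_rating)

So, for example, a home side rated 100 points above its opponent would be expected to win by 0.35 * (100 + 100) = 70 runs.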

The other factors above have been retained, as has the result itself, because it should matter whether a team actually won. A modifier of 2/3 is therefore applied to both draws (which can be merely unlucky) and victories by margins smaller than expected.
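A sketch of how that modifier might enter the margin-based adjustment; the scaling constant and the treatment of a draw's margin (zero here) are my assumptions:

    K = 0.1  # assumed scaling: rating points per run of difference

    def margin_update(actual_margin, expected, won, drawn):
        """Margin-based change to the home side's rating.

        actual_margin is in runs, negative for a loss; a draw is
        treated as a margin of zero, which is itself an assumption.
        """
        change = K * (actual_margin - expected)
        # Draws can be merely unlucky, and winning should still count:
        # damp the adjustment by 2/3 for draws and for wins that fall
        # short of the expected margin.
        if drawn or (won and actual_margin < expected):
            change *= 2 / 3
        return change

Note that a win by less than the expected margin still produces a negative change, just a smaller one: a good side cannot coast on narrow victories, which is what drives the earlier decline of great teams.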

The effect of the change can be seen by comparing the past two decades of ratings. Whereas before, the Australian team was marked by continual improvement with a few significant drops in 2001 and 2005, the closeness of the losses they have suffered and the size of their victories mean they have been fairly stable since about 2002-03, with perhaps just the hint of a drop in the last two series. Overall the change has smoothed ratings out, with the best side more clearly delineated and, in most cases, sides having to show consistent quality to improve their ranking.

Old win/loss based ratings: the past 16 years.

New margin based ratings: the past 16 years.

At some point I should blog more extensively on the historical rankings. For now though, here are the new rankings for each team prior to the Australia-West Indies and New Zealand-England series.

Australia (1st) 1514.39
South Africa (2nd) 1243.67
England (3rd) 1193.81
India (4th) 1188.82
Sri Lanka (5th) 1138.14
Pakistan (6th) 1104.19
New Zealand (7th) 1037.93
West Indies (8th) 882.56
Zimbabwe (9th) 497.45
Bangladesh (10th) 353.05

Cricket - Ratings - Test 12th August, 2008 17:22:20