They look more complicated than they are. The existing ICC ratings use either a team's own rating or the opposition's. The combination allows the much more gradual increase in points shown above (optimally the area between 0 and 40 would also be curved, but I have chosen to leave it as is).
The changed implied probability shows the benefits of this approach:
Whereas previously teams were either closely matched or a 90% chance of victory, now their approximate chance of victory can be determined across a full range of ratings gaps.
This change would make only subtle differences to the ratings. Bangladesh's improvement a few years ago would have given them a more rapid (and noticeable) boost, reflecting their actual ability rather than their long period of tepid performances. The odd associate upset would have been better reflected in their ratings - when they are included. But as these results are rare, the broader outline of the ratings would be the same. The more important change is to the decay rate.
Changing the decay rate
As a matter of basic maths, if points were to accumulate indefinitely then new matches would have a decreasing effect on the ratings. The ICC works around this in the simplest way - by reducing the weight of the previous two years by 50% and excluding anything before that. But this has an unfortunate side effect: at each exclusion date the ratings jump, sometimes substantially, and often in strange directions.
The effect of this change can be seen in a simple example. Here a team plays (and wins or loses matches) at different levels over the course of several years. The true rating of the team in each year (which, nominally, the ratings should reflect) is as follows: 100, 80, 100, 120, 120, 120, 100. The graph shows this shift (at the start of each year) and the impact of the ICC decay formula (at the end of each year).
Notice that, because the previous year is reduced to 50% in preparation for a new year, the rating shifts away from the true rating at the end of the second and third years, as old results are re-weighted upwards relative to the past year. The ICC rating eventually meets the true rating only if the team has maintained the same level for two years; otherwise it is often substantially off.
The oddity of this simple choice of decay is that it is also unnecessary. The "natural" way to ensure old results do not impact the rating, without unseemly jumps, is to divide both the points accumulated and the number of matches by the same amount. In the graph above that divisor was 3, effectively reducing the impact of old results to a third each year (and to a ninth the year after).
The proposed system never quite matches the yellow line - though arguably nor should it - but it is consistently closer than the ICC and gradually gets closer the longer a team stays at the same level (in the third year of ratings at 120 it reaches 119).
More importantly, there are no jumps. As both points and weights decline by the same amount, a team stays on the same rating until they play. Which is exactly how it should be.
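The proposed decay can be sketched in a few lines. The decay factor of 3 is the one used in the graph above; the rest of the scaffolding (points-and-weight totals, a one-match weight) is assumed for illustration, not taken from any actual ratings code:

```python
# A sketch of the proposed decay: ratings are points / weight; at each
# year-end both are divided by the same factor, so the rating itself
# never jumps at a cutoff date.

DECAY = 3  # the divisor used in the graph above

def decay(points, weight, factor=DECAY):
    """Scale down old results without changing the current rating."""
    return points / factor, weight / factor

def add_match(points, weight, match_rating, match_weight=1.0):
    """Blend a new result into the running totals."""
    return points + match_rating * match_weight, weight + match_weight

# A team rated 120 (10 matches averaging 120) crosses a year-end:
points, weight = 1200.0, 10.0
points, weight = decay(points, weight)
assert abs(points / weight - 120.0) < 1e-9  # rating unchanged until they play
```

Old results still fade: a match played three years ago carries 1/27th of its original weight, without any discontinuity along the way.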
The fundamental sameness of ratings
The distribution of opportunities to take a measurement is similar, but because it takes more successful attempts to generate higher opportunities, it is shifted slightly across, and centred around 4 (or n/2). The breakdown also demonstrates the key to the problem: if four opportunities are to be had, the attempts will be distributed in such a way that the average success rate is 50%. But the only way to generate 7 opportunities is to have succeeded in each of the first 7 measurements. The percentage will be either 6/7 (83.33%) or 7/7 (100%). And as a consequence, the average of multiple strings of measurements ought to sit not at 50% (the middle of the opportunity distribution), but at the centre of the instances of measurement distribution (plus a term for the two extras) - around 45% for strings of length 8.
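The finite-string effect described above can be checked by brute force. This sketch (my own, not taken from the studies in question) enumerates every equally likely string of 8 fair coin flips and averages, per string, the success rate on flips that follow a success; the average comes out below 50% even though each flip is fair:

```python
from itertools import product

def mean_success_after_success(n=8):
    """Enumerate all equally likely strings of n fair coin flips and
    average, per string, the success rate on flips that follow a success.
    Strings with no qualifying flip are excluded from the average."""
    rates = []
    for flips in product([0, 1], repeat=n):
        opportunities = successes = 0
        for prev, nxt in zip(flips, flips[1:]):
            if prev == 1:              # a success creates an opportunity
                opportunities += 1
                successes += nxt
        if opportunities:
            rates.append(successes / opportunities)
    return sum(rates) / len(rates)

avg = mean_success_after_success(8)
assert avg < 0.5   # below 50%, despite every flip being a fair coin
print(round(avg, 4))
```

For strings of length 3 the same enumeration gives exactly 5/12 (about 41.7%), which is a useful sanity check on the logic.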
All very fascinating, particularly as it implies that previous studies showed a "hot hand" after all. But what does it say about cricket? The short answer is that this is a very elegant way of measuring form: find the median score for a batsman; if they surpass it, test their subsequent score.
For Tendulkar, who played so many innings that the expected percentage is close to 50, his test "form" saw a 53.1% success rate in innings where he'd surpassed the median (excluding not outs below the median). In ODIs however (counting only matches where he opened) the figure drops to 50.5%.
That is only a single data point, and some batsmen are likely to be more prone to runs of form than others, but it also points to an issue. In ODI cricket, where multi-lateral series exist, a batsman tends to shuffle opposition quite quickly, and therefore face a reasonable variety of bowling strength from match to match. In test cricket, the subsequent innings is less likely to be independent from the first, without being held in identical conditions - the second innings being on a wearing pitch. Apparent runs of form may just be a string of matches against poor opposition.
Conversely, ODI cricket may be less prone to form, being a format that requires a higher amount of risk-taking, and therefore more luck. Hence a discrepancy between test and ODI matches is feasible. Comparing all innings adds in time gaps when a player might fall out of form (and vice versa), and a proper study ought to remove them. The relative sparsity of innings means that when a player is really in form, it would be hard to distinguish between that and luck with any method. Most likely the effect is small - perhaps three or four runs on a batting average, but probably half that.
Hence measuring the effect, if any, of form remains difficult. On selection matters - the only avenue where form might matter - there is a lot to be said for judging a player on technique, temperament and overall career trajectory, and ignoring runs of form. Everything else is largely academic, albeit an interesting question.
We know such an approach is a good thing. There is an obvious correlation between that and success, though which is the chicken and which the egg is debatable.
- Rob Smyth - The joy of selection roulette
The need for stability is always the catch-cry of teams struggling, and players fearful of their places. It has been an article of faith that Australia built their dynasty around youth in the 1980s, though even that might need some revision.
The longest period of batting stability for Australia immediately followed the 1989 Ashes, with only the substitution of one Waugh for another in 21 tests. But it was also a period marked by weak opposition, with the only losses being in NZ and to the West Indies (2-1). The 12 tests that followed the enforced retirement of Geoff Marsh, leading up to the 1993 Ashes, were anything but stable, with 6 different openers, 4 different players at first drop and 6 more players in the middle order - 7 if you include Greg Matthews. The results? Only three losses, one in NZ, and a 2-1 loss to the West Indies. Another two top-order changes were made for the first test in 1993; as in 1989, Australia were 4-0 up by the final test.
Perhaps results might have been better with more stability (a series lost by one run has a lot of what-ifs); or perhaps the opposition over-rides whatever difference might exist. It is reasonably unlikely that swapping the 6th best player for the 7th makes a big difference, though ongoing panic such that you select the 13th best, might.
To factor out the opposition, we can compare the expected margin against the actual result, and graph that against the number of changes made. There is a lot of noise:
There is also some indication that making zero changes is better. In the short term, the best side is probably the one you thought was the best side. But making one or two changes is still likely to produce a (very slightly) above-average result - note that 20 ratings points equates to around 10 runs, a fifth of the advantage conferred by playing at home. Though this doesn't necessarily solve Smyth's chicken-and-egg quandary, as a result above expectations may merely represent below-average expectations.
It gets more interesting when we look at changes per match over the previous two years. A side in constant flux ought to under-perform relative to expectations, if stability matters.
Actually, we don't see that. There is a lot of noise, and the difference is minimal, but sides making fewer than 1 1/2 changes per match do worse against expectations than those making more.
I'd proffer two possible explanations. Firstly, there is an information problem in finding the best set of cricketers, and most likely some benefit in trying several out until one shines sufficiently to become more permanent. And secondly, stable sides are more likely to be older sides - established, successful - and therefore more likely to be declining in performance. That doesn't mean an alternative player will perform better though, particularly in the short term. As the game-to-game data suggests, more often than not, the best players a team has are those who've already proven to be the best available, even when they are losing.
Don't. There are exceptions, but the oft-told story of Richie Benaud's - that as a captain he was told to "bat; if in doubt, think about it, then bat anyway" - hasn't been true for 20 years.
S Rajesh noted as much a couple of weeks ago, but his analysis was based on the results obtained, which has issues (amongst them, that Australia automatically bats while other sides are a little more discerning). We can run a slightly more sophisticated analysis by comparing the expected margin (based on my ratings) against the actual margin and seeing whether the batting or fielding team beats expectations in each match. In short: for the last 20-odd years they have not.
In the 1930s - with uncovered pitches - the advantage in winning the toss and using the (most likely) best conditions was clear: it added as much as 40 runs per game. But that benefit has steadily eroded, and batting first is now a negative proposition, while fielding sides are regularly beating their expected margin. Interestingly, this is happening in both drawn (margin of 0) and result games:
A drawn game means the better side missed its expected victory. And for the better side, fielding first offers the advantage of time. By bowling first there are no wasted runs from the need to set a target - such as last season, when Australia still needed two wickets at the close of play and had 172 runs, but also in 2003 and 2006, when the batting side had a first innings in excess of 500 and still went on to lose courtesy of a poor third innings. Even with the margin as large as it was, given the rain on the last day, England probably wouldn't have managed to beat Australia in Adelaide in 2010: their bowlers would have been tired (had they enforced the follow-on), or they'd have run out of time.
Similarly, despite having to bat last on a potentially wearing pitch, if the match is heading for late on the fifth day, the need to buffer a margin by 100 or so runs when declaring helps the weaker side avoid a loss. Of the three recent bat-first-and-win games at Adelaide, all three went into the fifth day, despite the losers scoring less than 520 runs in total in the match. Australia and England both batted and lost in that period with more than 680 runs.
In general, a side that wins beats its expected margin, because the expected margin takes into account draws. In games with a result then, you'd expect any advantage from the pitch to accrue to the batting side, because they get the best conditions, and managed to exploit them. But in recent years we've not seen that; the new pitch has offered movement to the bowlers, and the old pitches haven't broken up significantly enough to negate that. There isn't a huge difference (and quite a bit of randomness), but taking into account the time benefits the bat-first approach is no longer valid, and actually unhelpful.
So unhelpful, in fact, that the expected margin for the toss winner was negative in the 1990s and first part of the 2010s, as well as negative for those batting first in the 2000s. By less than a dozen runs, but negative is negative. Any side with ambitions to win in Adelaide should bowl first; new pitch caveats aside, there is little to fear on the fifth day.
Update on Adelaide:
Australia chose to bat; but that is not a surprise. For reference, this graph depicts the number of total runs in the match for teams batting first and second since 1990; wins at the top, losses at the bottom, and draws in the middle.
For teams batting second, more than ~560 almost guarantees at least a draw, although it is possible to win with less (because obviously the opposition can be bowled out for less). Batting first, there has only been one victory with less than 590 (by a single run no less), and three losses with more than 600. The runs required to force a result in Adelaide are substantial.
Moreover, there is always pressure on the side batting first to keep batting well, because all results remain possible, even with very high totals. Whereas, the side batting second can, if they bat well enough, guarantee at least a draw and press for a victory.
Finally, the innings-by-innings runs per wicket for the top order: 1st: 48.5; 2nd: 49.5; 3rd: 31.5; 4th: 28.9. That calculates to a total value in the top order of batting first of 11.2 runs (minuscule in context). The Adelaide pitch clearly becomes harder to bat on - almost twice as hard - but it does so too late to gain an advantage in the second innings, and too early to prevent a catastrophic third innings resulting in defeat. In Adelaide, it is the third innings that counts, and you are better off bowling when it does.
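For what it's worth, the 11.2 figure can be reconstructed from those runs-per-wicket numbers, assuming a seven-wicket top order (that assumption is mine): the side batting first bats in innings 1 and 3, the side batting second in innings 2 and 4.

```python
# Innings-by-innings top-order runs per wicket, from the text.
rpw = {1: 48.5, 2: 49.5, 3: 31.5, 4: 28.9}

# The side batting first bats in innings 1 and 3; batting second, 2 and 4.
bat_first  = rpw[1] + rpw[3]   # 80.0 runs per wicket across its two innings
bat_second = rpw[2] + rpw[4]   # 78.4

advantage = (bat_first - bat_second) * 7   # seven-wicket top order assumed
print(round(advantage, 1))  # → 11.2
```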
What follows is necessarily inexact. Perhaps very inexact. Cricket has many issues that confront it, but by far the biggest is a lack of transparency. The Woolf review, despite the resources available to it was forced to admit much the same:
We believe there is an overall lack of transparency around financial distribution in global cricket, which means certain aspects of the finances of global cricket are not well understood. We have been unable to obtain a full picture of the current financial position of global cricket. For instance, although there are various media estimates in circulation of the impact of tour cancellations (actual or threatened), it is not known with any degree of certainty the financial effect a tour by one Member has on another Member. It is clear that tours by certain Members (such as India) to other Members give a significant revenue boost to the host nation.
There are four points we know with relative certainty however, from which we can begin a deeper investigation. Again, to quote the Woolf review:
The basic structure of cricket finance runs from TV markets - nominally constituted at a national level - to either the home board of a particular fixture, or the ICC. We therefore need to make three assessments: the size of a local cricket market; the flow of money generated from that market to various bodies; and the distributive flow from the ICC and others.
For most purposes here I'll be talking averages over four years, because this takes into account the cycle of both the FTP (give or take), and the ICC major events. Every cricket board exhibits substantial variation from year to year, depending on who is touring, and the dividends distributed by the ICC.
Cricket Market Size
This is the most inexact of all the estimates, not least because it isn't clear what percentage of the cricket market is actually being drawn on by various members. Empty stands with no push to fill them through sensible scheduling, fixturing that makes inefficient use of resources, and no attempt to contextualise the season all mean most boards make less money than they might.
A previous assessment of TV rights deals across sports in Australia indicated that cricket gets roughly what you'd expect, given its ratings and total hours of programming. Taking into account sponsorship, merchandise and match-day attendance, then cross-checking against Cricket Australia's annual reports and adjusting for income earned from overseas, puts the size of the Australian cricket market (or at least, the part of it that pays to watch professional players) at something like an average of AUD$150m (which for current purposes is nearly identical to USD) over the past four years. The most recent TV deal has probably inflated that to nearer AUD$200m. The continuing rapid inflation of sports rights makes this process harder than it might otherwise be.
The simplest model for calculating market size is to multiply GDP (incomes, which includes population) by the level of cricket interest. There are various reasons why this won't be correct: the distribution of cricket fans amongst income quartiles (particularly in England, and to a lesser extent India, where cricket is an upper-class sport); the size of disposable incomes, which makes the sports market in wealthy nations much larger; and the difference between TV income and ground income, with the latter more easily captured in richer, but smaller, nations. Nevertheless, I'll look at three methods of assessing cricket market size. Two are quite simple but (relatively) complete; the third is complex, but stifled by the lack of annual reports from the most poorly governed members.
[Table: market-size estimates, averaged over the last 4 years - GDP (millions USD), % cricket players, % cricket articles, and the Kaufman + Patterson (2005) % news articles and % cricket players figures.]
* Estimate - by which I mean guessed and/or completely made up
+ Estimates of USA market size are very sensitive to assumptions. The K+P figure was close to zero, but it is also 8 years old, and there has been a recent shift towards more cricket articles. Similarly, estimates of the number of players vary from the official figure of ~30,000 to ten times as many. Consider this a low figure; under reasonable assumptions the USA is cricket's fourth-biggest market. Though almost none of that goes into US cricket.
# K+P give two figures for England - 8% normally and 17% in summer - I have used the higher one for obvious reasons (estimates outside the season are irrelevant). They don't offer similar figures for Australia, which makes all the figures in this column sensitive to this assumption.
Method 1: Estimate from GDP/news media
One of the more interesting pieces on cricket take-up in various nations is the 2005 piece by Kaufman and Patterson. It is worth reading the paper in full, but for my purposes its most useful feature is an estimate of cricket popularity obtained by counting articles in the sporting press. The numbers have been relayed into the table above, and used to calculate market sizes by multiplying them out by GDP and a factor that makes the markets I have good data for (Australia, England, India and New Zealand) approximately the right size (around 120,000).
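As a sketch of Method 1 - with made-up inputs apart from the AUD$150m Australian figure quoted earlier; the GDP and article-share numbers below are placeholders, not Kaufman and Patterson's actual data:

```python
# Method 1 as a formula: market ≈ GDP × cricket share of press coverage × k,
# with k calibrated so markets with good data come out the right size.

def market_size(gdp_musd, article_share, k):
    return gdp_musd * article_share * k

aus_gdp = 1_500_000        # millions USD (placeholder)
aus_share = 0.20           # hypothetical share of sports articles
known_aus_market = 150     # millions USD, from the text

k = known_aus_market / (aus_gdp * aus_share)   # calibration factor

# Applying the calibrated factor to a hypothetical smaller market:
print(round(market_size(200_000, 0.15, k), 1))  # → 15.0 (millions USD)
```

The same k is then applied to every market in the table, which is why the method stands or falls on how representative the calibration markets are.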
Method 2: Estimate from GDP/playing numbers
Playing numbers as a proportion of population ought to be a good measure, but are complicated by the fact that, although I have a complete set of figures for associate/affiliate nations, I have none for most test nations. Estimates for the major nations should therefore be taken with very large grains of salt. Nevertheless, the method gives some reasonable numbers, and those give a good indication of the size of markets where published annual reports are sparse, or where the market is undeveloped for lack of matches.
Method 3: Estimate from Annual Reports
Estimating market size from actual revenues is complicated by the amount of revenue generated by most boards in external markets (either overseas tv rights or sales to spectators), and the lack of reporting on the source of that revenue.
India is perhaps the easiest market to estimate, because the BCCI generates relatively little profit from external sources. Their annual report puts average four year revenue (adjusted for currency) at USD$168m, of which $25m is dividends from the IPL or CLT20. Those two competitions have approximately $240m in revenue. The ICC brings in around $200m in revenue a year, of which India is the source of approximately 60% (by most accounts). Finally we must estimate the amount of revenue earnt by playing India at home, and which therefore goes to the local board. If that is assumed to be around $100m then the Indian market is roughly three times that of Australia and England: around $600-700m. This is less than most estimates, but consistent with their annual reporting.
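One way to tally that estimate, using only the figures quoted above; netting out the $25m IPL/CLT20 dividend (which sits inside both BCCI and IPL revenue) is my assumption, made to avoid double counting:

```python
# Tallying the Indian market (millions USD, four-year averages).
bcci_revenue    = 168
ipl_dividend    = 25     # inside bcci_revenue AND inside IPL revenue
ipl_clt20       = 240
icc_revenue     = 200
india_icc_share = 0.60   # "by most accounts"
india_touring   = 100    # the assumed value of India's away tours

market = (bcci_revenue - ipl_dividend) + ipl_clt20 \
         + icc_revenue * india_icc_share + india_touring
print(market)  # → 603.0, inside the $600-700m range quoted
```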
What we really need to do though is understand where the money comes from, and where it goes.
ICC Distributions are relatively easy to calculate, and the annual report is quite informative. 75% of profits - after events costs, TAPP payments and administration are removed - are paid as dividends to the ten full members, with the remaining 25% going into the development fund, the majority of which is distributed to the 96 associate and affiliate members. Dividends to full members over the last four years were USD$319.9m, with USD$41.4m being distributed to the development fund. That doesn't include prize money, or TAPP funds, from which relevant members took an extra million or so (relatively little, when considered over four years).
Non-full-members are paid according to a scorecard system - judged on 35% the men's ranking, 41% various participation figures, and 24% administrative development. The top-tier receive $300,000 USD, the bottom $5,000 USD, in addition to $100,000 for associates and $10,000 for affiliate members. High Performance Program members receive funds commensurate with the expected costs of transitioning from amateur to semi-professional cricket structures, depending on what events they play and qualify for. The Asian Cricket Council derives significant additional funding (approx $5m) from the Asia Cup, which in turn feeds into their member-base.
Estimating value: a multi-variate regression approach
A significant proportion of income for full members comes from the rights to host certain nations via the FTP - unless you hire Haroon Lorgat, then all bets are off. To estimate the size of these flows I went through every recently published annual report and noted the revenue in USD - converting by the exchange rate of the time - the year, the ICC grant (where noted, or estimated based on the ICC report where not), the number of home matches, and the number of those matches that were played against Australia, England, India or Other (meaning everyone else). A linear regression was then run, which produced some moderately accurate looking numbers:
The top line of figures - in thousands USD - is the important one; the second line shows the standard deviations, worth noting only because they are quite large (around $1.2m on each variable). By multiplying out the revenue earned for a day of cricket by the number of days played, I've estimated the flow from markets to boards in the following figure (click for pdf version).
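For illustration, the regression takes roughly this form. The data below is synthetic - generated to loosely echo the reported coefficients - not the annual-report dataset itself:

```python
import numpy as np

# Synthetic illustration of the regression described above: board revenue
# (thousands USD) as a linear function of the year and of days hosting
# Australia, England, India, or anyone else. The "true" per-day values
# are stand-ins loosely echoing the reported estimates.
rng = np.random.default_rng(0)
n = 40
year  = rng.integers(0, 13, n)        # years since 2000
d_aus = rng.integers(0, 10, n)        # days hosting Australia
d_eng = rng.integers(0, 10, n)
d_ind = rng.integers(0, 10, n)
d_oth = rng.integers(0, 25, n)

revenue = (29_000 + 6_000 * year
           + 2_700 * d_aus + 4_000 * d_eng + 4_000 * d_ind + 1_000 * d_oth
           + rng.normal(0, 2_000, n))  # noise

X = np.column_stack([year, d_aus, d_eng, d_ind, d_oth, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print(np.round(coef, -2))  # [per-year, Aus, Eng, Ind, other, base]
```

With real annual-report data the design matrix is the same shape; only the rows (one per board-year) differ.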
This is necessarily representative, which is why I left the figures off. The size of each board logo and market is proportioned to represent the relativity between comparable entities. Where arrows are not clear, remember the money flows from market to board, sometimes via intermediaries (such as the ICC). I have not noted any payments directly from board to board, though the Woolf report implied they might exist.  Nor have I accounted for county cricket (which earns perhaps $40-60m) nor other T20 leagues which aren't accounted independently. In the interests of readability, flows less than $1m have been left off, as have most associate members. I can't guarantee I caught everything. 
There are a number of points that can be made from this. In no particular order:
The BCCI earns around $4m more than average from home matches, as does England, with Australia on $2.7m. (All give or take $1.2m; the Ashes earns at the top end of those numbers - i.e. above $4-6m per day.) This makes intuitive sense. It also shows the importance of attendance versus TV in the revenue streams of otherwise smaller markets, and the weakness of the BCCI in creating a home schedule that earns what it might. As the annual reports bear out, India earns less from their home internationals than England, which doesn't accord with their perceived financial muscle.
Notwithstanding that the IPL effectively doubles what the BCCI make from their home market, they ultimately end up with only around half the revenue generated locally, despite having a monopoly control over the team that market pays to watch. This is both quite surprising, and an indication of why they are increasingly bullish about increasing their share of global revenue.
Board revenue has increased by $6m a year; that is, the regression estimates base revenue of $29m in 2000, and $89m in 2010. Obviously only three boards actually earned this; the variables are best interpreted as relative amounts. Moreover, the large standard deviation hints at the growing disparity in how much teams have managed to gain from overall revenue increases.
Playing India earns $1.6m for the home team more than Australia or England, and $2m more than any other team. This, in one sentence, explains most of what you need to know about the nexus of finance and the obnoxious chaotic scheduling of the FTP.
South Africa make $450k above the base rate (around $1m per match) from home matches, but New Zealand and the West Indies are making less than that, which means matches against teams outside the big-3 are likely to be losing money. We know this, in relation to why test cricket costs these teams money to play, but add in Pakistan - whose market lies dormant with no tours possible; Sri Lanka, Bangladesh and Zimbabwe, and it is clear the bulk of fixtures are neither profitable nor generating particular interest.
The total size of the non-full-member market is around $90m; but it is almost certainly not being tapped. Zimbabwe has no market to speak of at all - less than $0.5m. Their GDP is tiny, their population is small, and cricket is a minority sport. You frequently hear commentators remark on the importance of building up existing markets, rather than chasing markets in nations where cricket has a small profile. This is basically nonsense. There are three really big cricket nations, each of which has a GDP in the top-15 in the world. There is limited scope for growth in the big-3. And amongst the rest - because they are already pushing against the point of market saturation, and because their GDP is relatively small (despite their population size, in the case of Pakistan and Bangladesh) - the potential for growth is weak.
If one attitude weakens cricket's case for globalisation it is the perception - largely because of the bias in the origin of the cricket media - that there is a certain standard to aspire to, equal to that of India, England, Australia and perhaps South Africa. With the possible exception of the USA, and in the longer term China, no nation will reach that standard in the next 20, or perhaps 40 years, without remarkable (unprecedented) growth. The nations currently in the HPP are too small, or too poor, or both; the G-20 nations that might open up new frontiers have tiny playing bases.
There is, nevertheless, strong encouragement for the idea that cricket could have 20 nations of a standard somewhere between that historically maintained by New Zealand and that currently maintained by Bangladesh. Given that, at least half a dozen of those teams would likely have a transcendent talent (a la Hadlee or Muralitharan) allowing them to compete with the big-4. A future post may look into this, as some nations will surprise.
If it wasn't obvious, cricket's finances are fundamentally unstable. The wealth available to three boards, and their local competitions, means that no one else can afford the market rates for their players. While we haven't seen mass defections, it is increasingly clear that international cricket, as currently structured, cannot support the existing nations, let alone provide the investments needed to promote and grow the game elsewhere. Either a substantially larger proportion of the money moving from markets to boards needs to be routed through the ICC (which means it taking ownership and control of tournaments), or a substantially larger proportion of that money must be directed into competitions that will pay players from all nations, with a reduced emphasis on international cricket.
This would not be historically unusual; it has been the case for West Indies cricketers from the turn of the 20th century in English league cricket, through Constantine and Sobers; and onto the Packer years. There are also various ways both these scenarios could come to pass. Some are outlined in my manifesto on test cricket; others ideas will have to wait for another day. But don't be surprised if the CSA-BCCI spat is a harbinger of things to come. There are too many opportunities for the BCCI to redirect money currently exiting the Indian market back into their own pockets, and too much inequality, for things to stay as they are.
It is interesting that cricket's largest markets fall under both the best and worst governed nations, with relatively few in the middle. The crisis of governance at ICC level is exacerbated by the very different philosophies of action among its board members.
The late Ronald Coase would have found this interesting. There is no good reason why boards couldn't bid for tours, thus maximising both BCCI income and cricket's overall revenue by playing the most desirable fixtures (albeit not those that make the best competition/product). Transaction costs at the ICC are high though, and we are far from an efficient touring structure.
 Also, apparently cutting a google map means I get abused by nationalist idiots over J+K. I don't care. Don't bother me over your craziness. It isn't remotely relevant to cricket.
One of the standout aspects of Australia's collapse in Durham was the tentative batting; admittedly it was what began the collapse - Khawaja and Clarke's half-hearted footwork - and not what continued it - Haddin and Watson's playing across the line. But it raises an interesting question: do players play worse in the midst of a collapse, or much the same? Is there a drop in performance from the psychological pressure, in other words?
I tested this proposition using a technique Chris at Declaration Game used, by comparing the runs scored by the 5th and 6th wickets against the other innings, and matched that against the difference in runs between the fall of the 2nd and 4th wickets (the collapse amount - though most aren't a collapse).
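A sketch of that comparison, run on synthetic innings generated with no true effect; the distributions here are stand-ins of my own, chosen only so the 5th-6th wicket average sits near 59:

```python
import random
random.seed(1)

# Synthetic innings: the runs added between the falls of the 2nd and 4th
# wickets (the "collapse amount"), and the runs added by the 5th and 6th
# wickets, generated independently so there is no true effect to find.
innings = [(random.expovariate(1 / 60), random.expovariate(1 / 59))
           for _ in range(5000)]

# Bucket by collapse amount in 30-run bins and average the 5th-6th runs.
buckets = {}
for collapse, w56 in innings:
    key = int(collapse // 30) * 30
    buckets.setdefault(key, []).append(w56)

for key in sorted(buckets)[:5]:
    vals = buckets[key]
    print(key, round(sum(vals) / len(vals), 1), len(vals))
```

On independent data the bucket means hover around 59 regardless of the collapse amount, which is exactly the null pattern the real data turns out to match.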
As it turns out, there is no effect. The average scoring for the 5th-6th wickets is 59, which is consistent, with one exception, across the range of collapse amounts. Not only that, but there is so much randomness in the difference between the two innings, and the previous run-scoring, that even sample sizes over a hundred for low collapse amounts end up with reversed effects from one to the next.
You can see from all the data points that the difference remains resolutely centred at zero for all low amounts. This is actually doubly odd, because it indicates that even where several wickets have fallen for other reasons - a crumbling pitch or new ball - the difference between that and the previous innings was negligible.
Clumping the data shows some of the randomness; don't be confused by the jump around 30 - a different division produces a completely different result.
What both graphs do show, though, is that where the previous two wickets have put on 200+, the average of the 5th-6th wickets combined drops to 45. It isn't clear why this is - the bowlers, presumably, are tired - but perhaps one or more large preceding partnerships make it harder for an incoming batsman. Something to look at another day.
It does bode badly for Australia though. There is a tendency after a collapse to attribute it to the moment, and assume that next time, more focus and hard-work will arrest the problem. The data suggests that even losing three wickets for not many makes almost no difference to the mind-set. If a team is in the habit of losing 6 or 7 for not many it is because they are poor, and just as likely to lose quick wickets when the previous stands have been productive or dismal.
Inspired by Fake Ritzy's ICC-rankings-based analysis of Australia's Ashes chances, I ran a monte-carlo simulation of the series using my own ratings (25% draw probability, matching the historic English average).
Australia is roughly a 1 in 8 chance of winning, and a 1 in 8 chance of drawing. 3-1 England is the only scoreline returning a positive net return on the betting markets. Australia's most probable winning scoreline, 1-2, is very (very, very) slightly more likely than losing 5-0. Australia wins 5-0 in about 0.04% of series. I think Australia's Indian tour has caused their rating to under-estimate their chances. With a decent team selection there are grounds for no more than mild pessimism, but given Watson is locked in to open, things are very bleak. Very.
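For the record, the simulation is straightforward to reproduce in outline. The per-match probabilities below are hypothetical stand-ins (the ratings-derived values aren't listed here); only the 25% draw rate comes from above:

```python
import random
from collections import Counter
random.seed(42)

# Monte-carlo sketch of a five-test series under assumed per-match
# probabilities: 25% draw as stated, England favourites in the remainder.
P_AUS, P_DRAW = 0.22, 0.25   # hypothetical; P_ENG is the remainder

def simulate_series(n_tests=5):
    aus = eng = 0
    for _ in range(n_tests):
        r = random.random()
        if r < P_AUS:
            aus += 1
        elif r >= P_AUS + P_DRAW:
            eng += 1
        # otherwise a drawn test: no one scores
    return aus, eng

results = Counter()
for _ in range(100_000):
    aus, eng = simulate_series()
    results['Aus' if aus > eng else 'Eng' if eng > aus else 'Draw'] += 1

for outcome, count in results.most_common():
    print(outcome, count / 100_000)
```

Tallying full scorelines rather than just series winners gives the 5-0 and 1-2 probabilities quoted above.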