Vignette #2: Lies, Damn Lies, and Statistics

 

“Research has shown that fifty per cent of statistics are wrong sixty per cent of the time.”

What is wrong with the statement?

If true, what are the chances of a statistic being correct?
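
Just for fun, here is one literal reading of the quip worked through in a few lines of Python. It is a sketch only, since the statement never says what its figures actually refer to; the interpretation in the comments is mine, not the statement's.

```python
# One literal reading of the quip: half of all statistics are unreliable, and an
# unreliable statistic is wrong 60 per cent of the time; the rest are assumed,
# purely for the sake of the exercise, to be always right.
p_unreliable = 0.5              # "fifty per cent of statistics..."
p_wrong_given_unreliable = 0.6  # "...are wrong sixty per cent of the time"

p_wrong = p_unreliable * p_wrong_given_unreliable
p_correct = 1 - p_wrong

print(f"Chance a statistic is wrong:   {p_wrong:.0%}")    # 30%
print(f"Chance a statistic is correct: {p_correct:.0%}")  # 70%
```

Even this tidy answer depends on supplying an interpretation the statement itself never offers, which, as discussed under internal coherence below, is part of what is wrong with it.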

Where do statistics come from?

By way of example, there are scores of martial arts in the world but really there are only three kinds – striking arts, grappling arts, and weapons arts. Similarly, there are endless reams of statistics, charts, diagrams and conflicting claims, but there are really only three sorts of statistics, which I call count statistics, survey statistics, and speculative statistics.

Count Statistics

Firstly, there are things that have actually been counted. These are the most reliable statistics, and they are rare. They are rare because most things are hard to count (like how many fish there are in the sea) and paying someone to count things is expensive. For that reason, these sorts of statistics are rare in natural resource management, economics, and politics, but fairly common in business and in some areas of government such as health and law enforcement. That is because businesses actually need to know how many cars they own, widgets they make, people they employ, and taxes they pay, and what their people are actually doing. They count these things very carefully. Most of this information is commercial-in-confidence, but businesses will make it public if it suits their interests, e.g. via a prospectus, or to show their importance to the community. Similarly, the government also counts a lot of things, for example the number of asylum seekers, the number of operations in public hospitals, and crime statistics. Every five years the Australian Bureau of Statistics conducts a national census that provides an accurate count of the total population at a moment in time – census night.

What to look out for

For statistics to have meaning in public policy there must be a time reference and a policy context. “Last year Queensland grew one million tonnes of bananas” tells us that Queensland grows a lot of bananas, but not much else that is useful. Is Queensland growing more or fewer bananas than it was 10 years ago? Does it matter? By contrast, a statement such as “Queensland grows half as many bananas as it did 10 years ago. Rates pressures are forcing many farmers off their land, and prime banana-growing areas are increasingly being subdivided for real estate” is meaningful.

Similarly, a statistic like “50 women were murdered by their partners in 2013” is saddening but tells us very little. Is the incidence of domestic violence increasing or decreasing? Is the nature of the violence changing? Are the underlying causes the same now as they were 30 years ago? Is the level of reporting/under-reporting the same? Without a policy context the figure, while unfortunate, has little real meaning.

Survey Statistics

Secondly, there are survey statistics. These are the most common form of statistics in public life. They form the basis of nearly all natural resource management and most social policy, and are important in most large marketing campaigns.

Essentially, one or more survey samples are taken and then checked for statistical errors. The results are then extrapolated to a broader population. It is, for example, impossible to count the number of fish in the sea, but if you count enough of them you can estimate the rest. The Australian Fisheries Authority takes ocean samples at various times, places and depths to count various fish populations and species. This sample data is mapped over time and extrapolated to larger areas of ocean. On the basis of that research, fisheries scientists calculate how many fish of a given species in a given area can be harvested sustainably. The Authority uses that information when granting commercial fishing licences to, for example, super trawlers. The same approach, applied across multiple disciplines, underpins climate science.
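
By way of a toy illustration, the basic logic of scaling a sample up to a population looks like this. All of the figures below are invented for the example; they are not real fisheries data.

```python
# Toy illustration of scaling a survey sample up to a population.
# All figures below are invented for the example; they are not real fisheries data.
sample_counts = [120, 95, 143, 110, 87]  # fish of one species counted per sample trawl
area_per_sample_km2 = 2.0                # area covered by each trawl (sq km, invented)
fishery_area_km2 = 10_000.0              # total area being managed (sq km, invented)

# Average density observed across the samples (fish per square kilometre)
density = sum(sample_counts) / (len(sample_counts) * area_per_sample_km2)

# Extrapolate the sampled density to the whole fishery
estimated_population = density * fishery_area_km2
print(f"Estimated population: {estimated_population:,.0f} fish")
```

Real stock assessments repeat this kind of sampling over many seasons, sites and depths and attach error bounds to the estimate; the sketch shows only the underlying logic of extrapolation.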

In the social sphere, sample populations are asked a series of questions, either by questionnaire or in a focus group session, or both. The results are then analysed and extrapolated in order to gain an understanding of attitudes, preferences, beliefs and behaviours within a given population or across the whole population.

What to look out for

The relevant concerns are:

  • Is the survey sample big enough? The bigger the sample, the more accurate the result (see the sketch after this list).
  • Is the time series long enough? The longer, the more accurate.
  • As a corollary of the above, are the results statistically significant, and if so, can they fairly be extrapolated to a larger area or population?
  • Is there another explanation? Are there contrary data or trends?
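
A rough back-of-the-envelope sketch, using the standard 95 per cent margin-of-error formula for a simple yes/no proportion (nothing specific to any survey discussed here), shows why sample size matters:

```python
import math

def margin_of_error(n, p=0.5):
    """Approximate 95 per cent margin of error for a proportion p estimated from n responses."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000, 4000):
    print(f"n = {n:>5}:  +/- {margin_of_error(n):.1%}")

# n =   100:  +/- 9.8%
# n =   400:  +/- 4.9%
# n =  1000:  +/- 3.1%
# n =  4000:  +/- 1.5%
```

Quadrupling the sample roughly halves the margin of error, which is why small samples extrapolated to whole populations deserve suspicion.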

In the natural sciences, survey statistics can be considered reliable if there is a well-funded, long-term research effort over decades in which comparable datasets are collected. For example, in Antarctic science, research programs are run on a three-decade time frame. It takes that long to produce credible science. Politically driven, ad hoc, grant-dependent, quick-fix surveys are not a reliable basis for serious social or natural science. The science of climate change is now four decades old – it has been a topic of climate research since the early 1970s – which is why cautious professional scientists are now certain, within a reasonable probability of error, that man-made (anthropogenic) climate change is a fact.

In the social sphere, survey method is vital to credibility. This is too big a topic to tackle here. Suffice to say that surveys can easily be weighted or biased to favour a particular outcome, and not all survey samples are representative. For example, a sample of inner-city people in the 18–35 demographic would yield very different data on social attitudes from a survey of rural households. Both surveys would be valid explorations of particular populations, but neither could be relied upon to provide insight into how the overall population thinks.

Speculative Statistics

The third type of statistic is the speculative statistic stated as fact. The policy landscape is littered with these. “Ten per cent of the population is gay”, “One in four women will experience sexual assault”, “120,000 abortions occur every year in Australia”, “All men are bastards”, “We are being swamped by boat people”, “Labor has left a debt/budget crisis”, and “97 per cent of scientists agree with man-made climate change” are all examples of statistical claims that have no clear evidentiary basis. They may or may not be true, but if stated often enough they become accepted as fact.

What to look out for

Usually these statistics are ammunition for someone’s policy agenda, used either to add weight to that agenda or to counter opposing ones. Anyone interested in the truth should treat them with extreme caution.

How can we judge the reliability of a statistic?

The first two types of statistic are used as tools of empirical analysis, but in public life they are often used as a means of persuading people to adopt the agenda of the person using the statistic. It is therefore important, when confronted with a statistic, to know what weight to give it. This can be done by analysing the statement itself and then looking at how the figures were derived. Usually this will involve some sustained effort and may not yield a clear result. This is the kind of research that the ABC Fact Check unit does very well, and I recommend reading their material to get a feel for how statistics are used and where they come from. Few people outside academia or research institutes have that kind of time. However, there are some shorthand checks that we can all do when confronted with a concerning or unlikely-sounding statistic. These will not prove the statistic right or wrong, but they will help weigh its reliability.

Is it inherently believable?

For example, the oft-quoted statistic that one in four women will be sexually assaulted in their lifetime is not inherently believable – if sexual assault is understood in the legal sense of actual criminal behaviour. That doesn’t make the claim untrue, but it does call for further analysis. I know of no evidence base that supports this claim.

 

Does the person/organisation/publication quoting the statistic provide a source (say where the statistic comes from)?

If not, they could be recycling an out-of-date, irrelevant or simply wrong figure. All credible persons/organisations/publications reference their data. For example, the Safe Schools Coalition selectively quotes numerous statistics on the prevalence of anti-GLBT bullying and the percentage of school students who are GLBT. No sources are given in many of their publications, and a cursory examination of the research shows the misuse of data in pursuit of a broader social agenda. (http://www.safeschoolscoalition.org.au/contact-us)

 

Does the person or organisation have a vested commercial or ideological interest in the subject matter?

The fact that they do does not make their claims wrong. There would be very little public life if arguments could only be put by people without a special interest. However, strong agendas often lead to selective interpretation and presentation of evidence. Traditionally, universities, the CSIRO and the ABS were considered above such interests and hence reliable. However, with the privatisation of public institutions this is becoming less so. If there is a strong ideological agenda being pushed, there will almost certainly be contrary statistics to consider. The decades-long debate about whether smoking causes cancer is a good example of how commercial interests confused the public policy debate.

 

Is the source credible?

Technically this is an ad hominem argument, since even a fool or a wrongdoer can say something that is accurate. The Nazis were very good engineers, as drivers of the Volkswagen know. However, experience and qualifications are relevant. For example, a person who has no relevant qualifications, no science training, no academic training, and who is funded by petro-chemical interests, is not a reliable source of information about climate science. They may be useful in pointing out irregularities or anomalies, and in this way may have something to contribute, but only as a small part of a bigger picture. Similarly, pastors are not the ‘go to’ people for an understanding of evolutionary science, although they may ask useful and relevant questions not asked by others. Those questions then need to be answered by someone else. Non-expert sources can, however, be very useful when it comes to anecdotal evidence that no one is counting. Pro-life abortion counselling services, for example, are a very good source of information about post-abortion grief and trauma. Non-academic sources can thus be highly relevant.

 

Does the statistic have adequate definition and internal coherence?

The false statement given at the start of this section is an example of a statistic that lacks internal coherence. It simply doesn’t make sense. Colloquially, the idea of internal coherence is expressed in the phrase “comparing apples with oranges”. This is discussed below.

 

Apples and Oranges – all fruit?

In order for a statistic to be meaningful it must define what it is counting and have a baseline for comparison. This is considered in the following two examples:

 

Study One – Regional Forest Agreement

A statement frequently made in the Tasmanian forests debate is: “Eighty per cent of old growth forests are protected.” That is a powerful and persuasive statement. It implies that the concerns of conservationists are already well catered for, that further concessions are unnecessary, and that those seeking to lock up more forests are “insatiable” and “extreme” and “want to close down our industry”. These statements, run together, were part of a calculated public relations strategy to draw public support away from conservation. Since this is an important issue, let’s unpack the statement.

Firstly, what is an “old growth forest”? Is it a forest that may have had some selective logging in the 19th or early 20th century but is ecologically intact? Is it forest that is supposed to be left in riparian reserves but might not be? Is it forest that is as it was in 1788? Is it an isolated forest reserve surrounded by logging, or part of a contiguous wild area? It makes a big difference. For example, the Wielangta forests had some selective logging with cross-cut saws but were diverse, ecologically intact, and home to rare and endangered species; they have now mostly been logged.

What does “protected” mean? Is it protected from logging but not from mining? This is the case for the largest intact temperate rainforest in the world, in Tasmania’s Tarkine. Is it protected from clear-felling but not from selective logging? This is now the case with a large corridor of land in that same rainforest. Is it protected from logging while roads and resorts can still be built in it?

Also, the term “old growth forest” does not distinguish between forest types. It is possible for eighty per cent of all old growth forest to be protected and yet for some forest types to become extinct. Below a certain level of reservation, some forest types may become ecologically unviable.

So much for the definition; what about the baseline? Eighty per cent of which old growth forests are protected? Those that were here in 1806, 1986, 1996, 2006, or yesterday?

By shifting the baseline, it is possible for the percentage of protected forest to increase while the actual amount of forest diminishes. Consider this example of a chicken farmer:

A farmer has 10 chickens. ‘I am not going to slaughter all of my chickens, I am going to protect two’, he decides. On Monday no chickens are slaughtered and twenty per cent of the flock is protected. On Tuesday he slaughters two chickens, leaving eight. Two of the eight chickens are now protected – one quarter, or 25 per cent. On Wednesday he slaughters another two chickens, leaving six. Two of the six chickens are now protected – one third, or over 30 per cent. On Thursday he slaughters another two chickens. Now two of four chickens are protected, and the percentage of protected chickens has reached 50 per cent, up from twenty per cent on Monday. On Friday he eats fish for Lent. On Saturday he slaughters just one fat chicken. Over sixty per cent of the chickens are now protected, but there are only three left. How could anyone expect a farmer to protect more than sixty per cent of his flock? On Sunday he slaughters the remaining unprotected chicken, leaving only two. The level of protection for his chickens has now reached 100 per cent.
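
The farmer’s week, expressed as a few lines of Python, makes the trick plain – the count of protected chickens never moves, only the denominator:

```python
# The farmer's week in a few lines: the number of protected chickens never
# changes, but the shrinking flock drives the protection percentage up.
protected = 2
flock = 10
slaughtered = {"Mon": 0, "Tue": 2, "Wed": 2, "Thu": 2, "Fri": 0, "Sat": 1, "Sun": 1}

for day, killed in slaughtered.items():
    flock -= killed
    print(f"{day}: flock = {flock:>2}, protected = {protected}, "
          f"protection rate = {protected / flock:.0%}")

# Mon: flock = 10, protected = 2, protection rate = 20%
# Tue: flock =  8, protected = 2, protection rate = 25%
# Wed: flock =  6, protected = 2, protection rate = 33%
# Thu: flock =  4, protected = 2, protection rate = 50%
# Fri: flock =  4, protected = 2, protection rate = 50%
# Sat: flock =  3, protected = 2, protection rate = 67%
# Sun: flock =  2, protected = 2, protection rate = 100%
```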

Note that the number of protected chickens has not changed, only the baseline. As fewer chickens remain, the baseline shrinks, and this inflates the percentage of chickens that are protected. This is how people can claim that eighty per cent of Tasmania’s old growth forests are now protected despite 200 years of land clearance and 40 years of industrial logging. As more old growth forest is logged, the percentage of protected forest will increase without any more forest being added to conservation. Others have claimed that what is protected is 86 per cent of the 13 per cent of old growth forest that remains in Tasmania, which works out to roughly 11 per cent – about one tenth – of the original cover. It is very difficult to find out how much old growth forest is actually protected in Tasmania against a baseline of what was here in 1806.

The Regional Forests Agreement (RFA) has the following formal definition of old growth forest:

“Old-growth forest is ecologically mature forest where the effects of disturbances are now negligible.”

According to the Commonwealth Department of Agriculture (http://www.agriculture.gov.au/forestry/policies/rfa/regions/tasmania):

“The application of the CAR [comprehensive adequate and representative] criteria in the RFA process has resulted in around 68 per cent of the extent of old-growth forest identified in 1997 or 1998 being protected in reserves in RFA regions [Australia wide].”

For more information on Tasmanian forest reserves and the Regional Forest Agreement see the official site here: http://www.stategrowth.tas.gov.au/forestry/rfa

Note that, in the context of the RFA process, Forestry Tasmania indisputably had both a commercial and an ideological conflict of interest. Note also that the statistics and definitions used in the RFA are not universally accepted, and the RFA has been strongly criticised by conservation groups. The Wilderness Society and the Australian Conservation Foundation have claimed that:

  • During the RFA process Forestry Tasmania inappropriately included thin streamside reserves to arrive at inflated figures for reservation
  • Social weighting criteria led to key forest areas being logged and resulted in the RFA not achieving its stated conservation goals
  • Too few bioregions were included in the analysis resulting in very poor outcomes for some forest types
  • Conservation groups were locked out of the decision making process and disputed statistical claims by Forestry Tasmania were accepted uncritically by the Commonwealth
  • Forestry Tasmania was found guilty in the Federal Court of breaching the Regional Forest Agreement by logging endangered species habitat, following which the law was changed to allow logging to continue
  • In practice Gunns Ltd and Forestry Tasmania have breached the Regional Forest Agreement by not properly administering the forest estate.

See for example here: https://www.wilderness.org.au/forestry-tasmanias-old-growth-land-grab-exposed

To understand how these statistical issues fit into the broader policy landscape see here: https://www.themonthly.com.au/issue/2007/may/1348543148/richard-flanagan/out-control

We can see from the above that in order for a statistic to be meaningful it must define what it is counting and have a baseline for comparison. It is also helpful to consider how these statistics fit into a broader narrative about the policy topic.

 

Study Two – What Percentage of the Population of Britain are Christians?

How many Christians are there in Britain? The simple answer is that, according to the most recent census, 59 per cent (roughly six in ten) of respondents identified Christianity as their religion on the census form. On that basis Christians in general, and the Church of England in particular, can lay claim to special consideration in public life – such as the seats reserved for Church of England bishops in the House of Lords. But can they? As part of his campaign to secularise Britain (and the world), Richard Dawkins took issue with the notion that Christians should have any special clout in British public life. For that reason, the Richard Dawkins Foundation for Reason and Science UK hired the survey firm Ipsos MORI to look more closely at what those people who identified as Christian actually believe.

The results, reported here (https://www.ipsos-mori.com/researchpublications/researcharchive/2921/Religious-and-Social-Attitudes-of-UK-Christians-in-2011.aspx), are overwhelmingly secular. For example, almost as many Christians oppose holding acts of worship in state schools as support them. This highlights the need to define more precisely what is being counted. Are these people actually Christians? What do they really believe?

MORI states:

“The research sought to measure a number of Christian practices, including regular reading of the Bible and prayer outside church services, to see how prevalent these were amongst respondents self-identifying as Christian.  Among the results, we find that:

  • The majority (60%) have not read any part of the Bible, independently and from choice, for at least a year.
  • Over a third (37%) have never or almost never prayed outside a church service, with a further 6% saying they pray independently and from choice less than once a year.
  • Only a quarter (26%) say they completely believe in the power of prayer, with one in five (21%) saying they either do not really believe in it or do not believe in it at all.

The low level of religious belief and practice is reflected in church attendance. Apart from special occasions such as weddings, funerals and baptisms, half (49%) had not attended a church service in the previous 12 months.  One in six (16%) have not attended for more than ten years, and a further one in eight (12%) have never attended at all.  One in six (17%) attends once a week or more.

When asked where they seek most guidance in questions of right and wrong, only one in ten (10%) said it was from religious teachings or beliefs, with over half (54%) preferring to draw on their own inner moral sense.

Half (54%) of the self-identifying Christians describe their view of God in Christian terms, with the others using the term in the sense of the laws of nature (13%), some form of supernatural intelligence (10%), or whatever caused the universe (9%). Six per cent do not believe in God at all.

Just a third (32%) believe Jesus was physically resurrected, with one in five (18%) not believing in the resurrection even in a spiritual sense; half (49%) do not think of Jesus as the Son of God, with one in twenty-five (4%) doubting he existed at all.”

The figures given are percentages of 1,136 responses to a face-to-face survey of persons who identified as Christians in the 2011 census. The report highlights the things respondents don’t do or believe rather than what they do, which makes it more difficult to identify who the real Christians are. However, we can say of the respondents that:

  • 32 per cent believe in Jesus’ physical resurrection. On a population-wide basis that is 32 per cent of 59 per cent, which can be calculated as a simple formula: 59/100 × 32 = 18.9 per cent.
  • 17 per cent regularly attend church. 59/100 × 17 = 10.03 per cent.
  • 26 per cent say they completely believe in the power of prayer. 59/100 × 26 = 15.34 per cent.

Overlaying these figures suggests that between 10 and 19 per cent of the UK population is Christian in any sense in which that term would have been understood by the early church. Taking a middle figure, 15 per cent is reasonably indicative on a population-wide basis. So out of a random group of 100 British adults, you could reasonably expect about 15 to be Christians. The survey did not report on trend data or demographics.
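
For anyone who wants to check the arithmetic, here is the same calculation in a few lines of Python. The percentages are those reported by MORI and the census; the labels and variable names are mine.

```python
# Scaling the Ipsos MORI figures for self-identified Christians back up to the
# whole population, using the 59 per cent census figure.
census_christian = 0.59

survey_shares = {
    "believe in the physical resurrection": 0.32,
    "attend church weekly or more": 0.17,
    "completely believe in the power of prayer": 0.26,
}

for belief, share_of_christians in survey_shares.items():
    share_of_population = census_christian * share_of_christians
    print(f"{belief}: {share_of_population:.1%} of the whole population")

# believe in the physical resurrection: 18.9% of the whole population
# attend church weekly or more: 10.0% of the whole population
# completely believe in the power of prayer: 15.3% of the whole population
```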

Saintly Statistics

So what does a good statistic really look like? Generally good statistics have the following characteristics:

  • They are placed in a historical context
  • They are placed in a meaningful policy context
  • They quote a source which is publicly searchable and which explains the methodology used to arrive at the figure
  • The relevant terms are defined or there is an easily searchable reference which defines them
  • They have a reliable baseline

Some statistics are by their nature too complex to be easily expressed. For these it is legitimate in public life to use shorthand phrases such as: “An increase in global temperature of two degrees Celsius above pre-industrial levels is considered the upper safe limit to prevent runaway/irreversible climate change.” There is a vast amount of statistical data and definition behind this statement; however, it is publicly available here: http://ipcc.ch/

Does it matter?

Accuracy is more important in some contexts than in others. Where statistics are indicative of a social problem or historical grievance, accuracy may be less important. I respectfully suggest that it matters very little whether the number of Jews who died in the Holocaust was six million, or somewhat more or less than that figure. The point is that there was a Holocaust and millions died. It is perhaps disrespectful to quibble about precise numbers. The same is true of many other current and historical injustices. However, there are times when precise numbers matter: hospital waiting lists are one example, and without reliable statistics it would be impossible to sustainably manage natural resources. Also, when outlandish statistics are used to promote radical social or economic agendas, they should be held up to scrutiny.

Summary

All statistics can be grouped as count, survey or speculative. Of these, count statistics are the most reliable. Speculative statistical statements should be treated with extreme caution. Much if not most policy discussion involves survey data. The key factors in reliable survey data are:

  • Size of sample (how many)
  • Relevance of sample (who/what)
  • Sampling program over time (point in time or series)

Good statistical statements define their terms and baseline, and are placed in a historical and policy context. Statistics used to pursue ideological agendas or vested interests may be correct, but should be viewed with caution.
