We don’t know that unemployment is exactly 4.7%, but should we communicate this uncertainty?


By Johnny Runge

There are two different types of uncertainty. People typically understand that there is uncertainty about the future. We cannot know for certain what the unemployment rate will be one month or one year into the future; we can only predict and forecast it. However, what about uncertainty about the past? Usually, economic statistics about the past, such as the unemployment rate, are presented without acknowledging any uncertainty about the numbers. For instance, a recent BBC news article about the labour market figures for February to April 2021 stated that “the unemployment rate fell again to 4.7% in the three months to April, down from 4.8% previously.”

Job Centre, photo by Helen Cobain, Creative Commons licensed CC BY 2.0

In fact, there is uncertainty around these numbers. Unemployment was not necessarily exactly 4.7%, and it is not certain that it fell compared to the previous month. The Office for National Statistics (ONS) provides information about the margin of error in its releases. The table below shows that the unemployment rate is, in fact, estimated to be within ± 0.3 percentage points of 4.7%, that is, somewhere between 4.4% and 5.0% (using the 95% confidence interval, which means the ONS is 95% confident that unemployment lies somewhere in this range). The absolute level is estimated to be within ± 96,000 of 1,613,000 people, that is, somewhere between 1,517,000 and 1,709,000 unemployed people.


Measure               Estimate      95% confidence interval
Unemployment rate     4.7%          4.4% to 5.0% (± 0.3 percentage points)
Unemployment level    1,613,000    1,517,000 to 1,709,000 (± 96,000)

Adapted from ONS (2021) A11: Labour Force Survey sampling variability, 16 June 2021.

Similarly, looking at the change in the unemployment level, the data suggest it increased by 246,000 people over the past year, but this could equally be presented as an increase of somewhere between 135,000 and 357,000 people.
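
To see the arithmetic behind these ranges, here is a minimal sketch in Python, added purely for illustration (the `interval` helper is hypothetical, not ONS tooling), that reproduces the intervals quoted above from each point estimate and its margin of error. The ± 111,000 margin on the annual change is implied by the 135,000 to 357,000 range.

```python
# Minimal illustrative sketch: reproduce the 95% confidence intervals
# quoted above from each point estimate and its margin of error.

def interval(estimate, margin):
    """Return the (lower, upper) bounds of estimate +/- margin."""
    return estimate - margin, estimate + margin

# Unemployment rate: 4.7%, margin of +/- 0.3 percentage points
print(interval(4.7, 0.3))            # about (4.4, 5.0)

# Unemployment level: 1,613,000 people, margin of +/- 96,000
print(interval(1_613_000, 96_000))   # (1517000, 1709000)

# Change over the year: +246,000 people, margin of +/- 111,000
# (implied by the 135,000 to 357,000 range quoted above)
print(interval(246_000, 111_000))    # (135000, 357000)
```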

In the case of unemployment, this uncertainty exists because the figure is not based on data from the whole UK population. The ONS collects unemployment data through the Labour Force Survey (LFS), a representative survey of 37,000 UK households. Other economic statistics are also subject to a degree of data uncertainty: GDP growth estimates, for example, are published early and subsequently revised as more information becomes available.
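
For a rough sense of where such margins come from, the sketch below computes a textbook 95% confidence interval for a proportion under simple random sampling. This is a simplification added purely for illustration: the LFS uses a complex, clustered household design, the unemployment rate is measured over the economically active population rather than the full sample, and the ONS's published sampling variability accounts for all of this, so the numbers here will not match the official margins.

```python
import math

def proportion_ci(p, n, z=1.96):
    """Textbook 95% confidence interval for a proportion,
    assuming a simple random sample of size n."""
    se = math.sqrt(p * (1 - p) / n)   # standard error of the estimate
    return p - z * se, p + z * se

# Hypothetical illustration: a 4.7% rate estimated from a simple random
# sample of 37,000 (the LFS household count, used here only as a
# stand-in sample size; the survey actually interviews individuals
# within households).
low, high = proportion_ci(0.047, 37_000)
print(f"{low:.4f} to {high:.4f}")    # roughly 0.0448 to 0.0492

# The official margin (+/- 0.3 percentage points) is wider than this,
# partly because the clustered design inflates sampling variance (the
# 'design effect') and partly because the rate is estimated over the
# economically active subset, which shrinks the effective sample.
```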

There are reasons why uncertainty is usually not communicated clearly to the public, which are discussed well in this open-access paper (van der Bles et al., 2020). In particular, communicating uncertainty may cause unnecessary confusion, signal incompetence, or decrease trust in the numbers. However, in the field of economic statistics, these concerns are backed up by very little actual evidence. The decision to communicate the simpler (but potentially imprecise) point estimate seems to “boil down to a simple and powerful intuition” that no one likes uncertainty (van der Bles et al., 2020). Recent developments, such as the onset of a so-called ‘post-truth’ era in which facts and data are increasingly contested and distrusted, might have further discouraged transparency about data uncertainty. However, it is by no means unthinkable that greater transparency about the uncertainty around economic statistics would be beneficial: it would offer more precise information and be more open about the methodology, which could foster more trust rather than less.

This paper, alongside another paper published this week by ESCoE, explores whether, and how, to present information about this uncertainty to the UK public. I built my study on emerging research in this field that uses randomised controlled trials. Through surveys with the UK public, these studies test different ways of presenting the uncertainty associated with GDP estimates (Galvao & Mitchell, 2021) and unemployment figures (van der Bles et al., 2020). Broadly, they have found that communicating the uncertainty around economic figures can, in fact, increase awareness that the uncertainty exists, without undermining trust in the data or its source, though it does not increase trust either.

Through 20 online, semi-structured interviews with members of the UK public (recruited by a market research company, paid incentives, and selected to be broadly representative of the UK population), my paper explores in more depth how people conceptualise and think about data uncertainty, and how they react to the communication tools used in the studies mentioned above.

On unemployment, statistics producers should proceed with caution

On unemployment, my findings suggest statistics producers should proceed carefully. Many interview participants could not understand why there would be uncertainty around unemployment figures. Part of the reason was that they (wrongly) assumed unemployment data was based on benefit claimant or tax data, which meant they thought unemployment figures should be readily available for the whole population:

‘I can’t imagine it’s hard to measure unemployment. No, the DWP know too much about everybody… They can find out about us through National Insurance.’

‘I would have thought they got the figures right by now. If it is from people signing on to benefits, then I would have thought there was quite a minimal amount of uncertainty.’

Some respondents wondered whether the uncertainty was due to the difficulty of capturing certain parts of the population, particularly those “off the radar”, such as those working cash in hand, those not claiming unemployment benefits, casual workers, and so on. Respondents were particularly surprised by the large degree of uncertainty around unemployment figures. They especially reacted when the uncertainty was expressed in absolute numbers as thousands of people, rather than in percentage terms:

‘That’s quite a lot of people, isn’t it? How can you not count 170,000 people?’

‘That seems ridiculous to me, saying it rose between 1,000 and 211,000. That’s such a large range. It just doesn’t seem plausible that they have looked into it properly.’

Furthermore, some participants feared that such ambiguity could be misused and interpreted in different ways in the public debate, with some actors choosing whatever figure suited their cause. A few respondents, some of whom had previously expressed trust in economic figures, also started questioning the value of the unemployment figures and the competence of data producers, though they sometimes conflated them with the government:

‘But surely they have the actual figures for July? So why would it be between two figures? Are you saying I can’t really trust ONS unemployment figures?’

‘I don’t trust the government, I don’t trust anything they say. It’s a big difference between 1,000 and 211,000… We better not let them know that it is thousands and thousands, so we will give them a rough figure.’

When it was explained why the uncertainty existed, participants were very interested in the information, but very surprised that unemployment data was collected through a survey. Commonly, this information generated even more questions, including about the validity of the sample size and the methodology.

The interviews also contained some examples of people whose trust would likely increase with greater transparency, as they had always wondered how estimates could be so exact, and they appreciated the additional information:

‘I have sometimes thought that it was a very, very accurate guess. I have always thought it cannot be that exact.’

As usual – and this is what makes communication to the public exceptionally hard – this study illustrates that the public comprises many different people. What might work for one person might not work for another. Overall, however, the findings suggest statistics producers should tread carefully when communicating uncertainty around the unemployment figures, because it is at odds with many people’s assumption that unemployment figures are certain.

On GDP, statistics producers have more room for manoeuvre to experiment

There was a different story for GDP estimates. It was clear that GDP mattered less to interview participants, and they had fewer preconceived ideas about how GDP data was collected and whether data uncertainty would exist. Therefore, it was generally much easier for people to accept the existence of uncertainty around GDP estimates, and they were generally unable to judge the size of the uncertainties that were communicated. As such, these findings support those of the other ESCoE study published today that communicating uncertainty on GDP can increase public awareness of uncertainty without undermining trust (Galvao & Mitchell, 2021). The lower public interest in GDP figures means statistics producers can experiment with different communication tools without much risk, but they should be cautious about applying the insights directly to other economic statistics without further research. As we saw with unemployment, other economic statistics may have their own ‘playbook’ in terms of how the wider public will react to such information.

A full version of this Discussion Paper is available here.

Johnny Runge is a Senior Social Researcher at the National Institute of Economic and Social Research (NIESR)

ESCoE blogs are published to further debate. Any views expressed are solely those of the author(s) and so cannot be taken to represent those of the ESCoE, its partner institutions or the Office for National Statistics
