DID BEHAVIOURAL SCIENCE COST LIVES? EVIDENCE-BASED POLICYMAKING DURING THE PANDEMIC

Martyn Hammersley

During the pandemic, the UK Government’s policymaking has approximated more closely to the evidence-based model than in normal times. Ministers have repeatedly insisted that they are ‘following the science’ and that policies are ‘data-driven’. And, beyond the rhetoric, scientific evidence has clearly played a rather more direct role in shaping policymaking than is usually the case. This has arisen because dealing with the pandemic has been the central political issue for many months, and because this is a public health issue for which expert knowledge is patently required.

Distinctive amongst the kinds of evidence used has been that from ‘behavioural science’, a relatively new interdisciplinary field relying heavily on psychology and behavioural economics. And, early on in the pandemic, this was a focus of controversy as a result of a dispute about the risk of ‘behavioural fatigue’, the suggestion that lockdown restrictions must not be introduced too early because the public would soon tire of them, with compliance declining before the peak of the pandemic was reached. While the concept of behavioural fatigue was reported in the media as coming from behavioural science, it was subsequently publicly disowned by many behavioural scientists. This incident nicely illustrates some of the challenging issues surrounding the relationship between scientific evidence and policymaking, not just during a pandemic but more generally.

While Government leaders and their supporters have been keen to insist that their policies are based on scientific evidence, and the media have often questioned whether this is true, there are several reasons why any such direct relationship can be no more than fantasy (see Hammersley 2002, 2013). It has become increasingly clear over the course of the pandemic that there are many sources of uncertainty and disagreement around the evidence informing Government policymaking and its role.

One stems from the fact that scientific results, even in relatively well-established and consensual fields, are always fallible and open to change, with the findings of different studies sometimes in conflict. A second is that a routine feature of all scientific evidence is a margin of error. This is true in epidemiology, for example, and it can cause considerable difficulties in determining the scale of a pandemic and how it is likely to develop. Furthermore, slight changes in the information fed into epidemiological models can produce quite divergent findings, and there are multiple groups of modellers who use somewhat different approaches, varying in their assumptions and therefore producing rather different results. One problematic area here concerns the nature of people’s social contacts; often the assumptions about these are necessarily crude and at best would only approximate to aggregate patterns, fitting some sections of the population better than others.

Evidence about the likely effectiveness of different policy measures in dealing with the pandemic is also fraught with uncertainty. This is where behavioural science offers to make a contribution. However, human behaviour is a good deal more complex and variable than that of a virus, so that identifying the main factors contributing to the spread of infection, and what would reduce it, is even more challenging than producing reliable epidemiological evidence. As a result, the scope for divergent views is greater.

Another source of uncertainty is the penumbra of inference and judgment that surrounds the policy implications of any scientific evidence. The data employed in scientific studies rarely relate directly to the particular situation with which policymakers are dealing, and this means that inferences have to be made from the findings to the new situation about which decisions are to be reached. For example, at the start of the COVID-19 pandemic, there was relatively little research data relating specifically to this virus. In assessing the likely effects of various policies, reliance often had to be placed on research concerning previous pandemics, with adjustments made for what was known about the distinctive characteristics of the new virus.

There is also the problem that scientific evidence may be available in relation to some aspects of the problems faced, but not in relation to others. So, while there is evidence about the effects of lockdowns in previous pandemics, there is less about the effects of variation in the scope and character of lockdowns. Similarly, while there can be evidence about the effectiveness of particular measures, there may be little about how these would operate when used in different combinations.

Of more fundamental significance is the difference between scientific evidence and scientific advice. If we take the relatively straightforward case of epidemic modelling, its results are predictions about the scale and distribution of the epidemic across a population. In themselves, these predictions do not indicate how serious the problem is or what ought to be done. In fact, they do not – on their own – even tell us that there is a problem that needs tackling. In order to identify something as a problem, some value-judgments need to be made about the significance of what the science reports. And these can result in divergent policy conclusions. For instance, if we believe that the best way of dealing with an epidemic, in the absence of a vaccine, is to allow it to spread through the population so as to build up herd immunity, then (in extreme terms) the faster it is spreading the better, and nothing should be done to stop it.

A hitherto neglected aspect of the notion of evidence-based policymaking has come to the surface during the pandemic. This is that, to the extent that government policy is based on scientific advice, those supplying that advice, and those providing the evidence underpinning it, are potentially accountable for any policy failings. It is tempting to see Government ministers’ insistence that they are ‘following the science’ in cynical terms, as simply passing the buck – it is reminiscent of the old Irish joke: ‘Follow me, I’m right behind you’ (O’Connor 1995). However, to the extent that they have had to take account of scientific evidence, some responsibility is necessarily shared with those producing that evidence and turning it into advice. The debate around the notion of ‘behavioural fatigue’ nicely illustrates this.

Early on in the pandemic, a briefing was apparently given by David Halpern, head of the Behavioural Insights Team (BIT), a.k.a. the ‘nudge unit’, warning of the risk of behavioural fatigue. While it is now a private company, BIT was originally part of the UK Government, and was responsible for introducing behavioural science directly into the policymaking process, developing the idea of ‘behavioural government’. This raises an interesting question about who belongs in the category of ‘scientists’ and who in that of ‘Government’ (a question that could also be asked about the Chief Medical Officer, the Chief Scientific Officer, and their deputies). Yet this is crucial for any understanding of the relationship between the two.

When the idea of behavioural fatigue was floated, a large number of behavioural scientists (some of whom were on Government advisory committees for the pandemic) wrote an open letter questioning the concept and expressing concern about the delay in introducing a lockdown. One commentator suggested that ‘Halpern was briefing on what essentially looks like his opinion as if it were science’. This touches on the point made earlier about the difference between evidence and advice, but also raises the issue of whether the notion of behavioural fatigue was based on research evidence or was simply a piece of common-sense reasoning. This, in turn, prompts the question of what role each of these plays, and should play, in scientific advice, especially in the behavioural field.

Government ministers have usually denied that there was any delay in instituting a lockdown, insisting that everything was ‘done to plan’; or they have rejected the idea that the timing of the lockdown caused increased loss of life. However, if there is ever an official inquiry into the handling of the pandemic, scientists as well as politicians may find themselves in the firing line. For their sake, and for other reasons too, it is to be hoped that such an inquiry would take account of the complexities of the relationship between scientific evidence and policymaking.

References:
Hammersley, M. (2002) Educational Research, Policymaking and Practice, London, Paul Chapman/Sage.
Hammersley, M. (2013) The Myth of Research-Based Policy and Practice, London, Sage.
O’Connor, T. (1995) Follow Me, I’m Right Behind You: A treasury of Irish humour, London, Robson Books.

Martyn Hammersley is Professor Emeritus at the Open University. A longer version of this piece is here.

1 Comment

  1. November 04, 2020

    Nice piece. A thorough investigation of the notion of ‘behavioural fatigue’ was made by a BBC Radio 4 programme and, strange though it is, no one claimed credit for it, not even the ‘thought leader’ (and part-owner) of the Behavioural Insights Team himself, David Halpern. If Halpern is genuinely the source, it would be nice if he owned up. Certainly, given the level of incompetence in the management of belief and behaviour, it is coming to be seen as something of a self-fulfilling prophecy.

    Behavioural Insights (perhaps the term has already been trademarked!) was always as much politics (at least for Chicago economist Richard Thaler) as science, the latter of which, as Martyn says, should be understood in fallibilist terms, and therefore outside the positivist gloss which governments desire.
