This section of Discover Society is provided in collaboration with the journal Policy & Politics. It is curated by Sarah Brown.
A growing interest among policy makers and researchers into the application of behavioural insights to public policy has led to the development of a wide range of behavioural public policy interventions, designed to change the behaviour of individuals, groups, communities and/or populations. These behaviour change interventions have been introduced in diverse spheres such as healthy living, tax returns and going green.
Unfortunately, the quality of reporting in behavioural public policy research is often poor, making it difficult for the reader to understand what the intervention was or how the research was done. Poor reporting is certainly not unique to behavioural public policy, but there does seem to be a particular problem here. In 2018 a review was published of so-called ‘choice architecture’ and ‘nudge’ interventions: that is, behaviour change interventions where the environment, or decision-making context, is designed in such a way that people are nudged toward more beneficial options. The review found 156 studies and reported widespread poor practice.
Guidelines are available for the reporting of randomised controlled trials and many other research designs. Despite this, only two per cent of the studies reported that their research followed suitable guidelines, and only seven per cent provided evidence that they had worked out in advance how many units were required to ensure the results would be robust. If a study plan is registered before the study begins, the registry provides a resource for a future reader to check that the research was conducted as planned; yet none of the studies was pre-registered. The categories used to describe the interventions were non-exhaustive and frequently overlapping, making it difficult to compare one behavioural intervention with another. The quality of many studies was too poor to allow meta-analysis, and the interventions were not described in sufficient detail to delineate one from another or to allow replication.
There is a great deal of guidance available to help researchers improve the robustness of their behavioural public policy research. Using guidelines and checklists can make research more transparent and convince sceptics of the value of behavioural public policy research. In our recent article in Policy & Politics, we argue that the question is not so much whether to use a checklist, but which checklist to use.
Our four top tips to follow to improve research quality are:
#1 Paint a clear picture
Even the best policy interventions will not be picked up and implemented by anyone else unless they are described clearly in the first place, in a way that delineates them from other similar interventions. One tool with a long track record is TIDieR, which offers a checklist to guide the reporting and precise description of an intervention. The good news for public policy researchers is that there is a bespoke version of TIDieR for population and public health interventions (TIDieR-PHP).
How TIDieR can help researchers:
Allows others (potential adopters, policy makers, researchers) to understand exactly what was implemented.
Helps the intervention to have more impact, by encouraging replication beyond the original setting.
Helps to highlight what differences there are between the current intervention and any similar previous ones.
Allows systematic reviewers to accurately summarise interventions across multiple studies.
#2 Delineate the behaviour change techniques
The active behaviour change ingredients in a public policy intervention can be described using the Behaviour Change Technique (BCT) Taxonomy. This can help us avoid confusing each other by describing the same technique in multiple different ways. The BCT Taxonomy is well established, with a package of online training and a helpful app.
#3 Trial planning and reporting
Interest in behavioural insight has been accompanied by a rise in the use of randomised controlled trials as a methodology for measuring the effectiveness of behaviour change interventions. Trials are not the only method for studying public policy: if the intervention is still at an early development phase, if its feasibility is still being established, or if you want to know about acceptability, other methods might be much more appropriate. But to establish whether an intervention is effective in changing behaviour, a randomised trial is often best. If you do a trial, don’t waste effort on a badly designed one. Writing a protocol in advance, pre-registering the trial on a suitable site and developing a pre-specified statistical analysis plan all bring rigour. There are many guides available to help with this, and we signpost them in our article.
There is significant overlap in the items required for trial planning and reporting. We therefore argue for a single reporting template containing all the required items, with an indication of which items are needed at each stage of trial planning and reporting.
With all these tools freely available to improve research, why is behavioural public policy research often done poorly? The answer is not completely clear. Some people argue that behavioural public policy is just too complicated and ever-changing to be described according to a template. Certainly policy making is a messy business, but we have shown elsewhere that TIDieR can be a useful tool in applied research for making sense of messy interventions, and that it can be used for a variety of study types beyond randomised controlled trials. Our paper, Getting Messier with TIDieR, made four suggestions for how TIDieR could be revised to better capture the complexities of applied research:
‘Voice’: reports who was involved in developing the TIDieR description (such as researchers, policy makers, service users or service deliverers), because each could have a different perspective on the behaviour change intervention.
‘Stage of implementation’: conveys what stage the intervention has reached, using a continuum of implementation research suggested by the World Health Organization, with stages ranging from early proof of concept through to integration into regular services.
‘Modification’: reminds authors to describe in greater detail how the intervention was modified from earlier or planned versions during implementation.
‘Context’: encourages researchers to describe how the particular context (including people, resources, perspectives and activities, as well as location) affected how the behaviour change intervention was delivered.
Another explanation for the limited uptake of reporting standards in public policy research may be that it is a relatively new field in which methodologies are still becoming established: if so, we hope this paper can act as a useful tutorial.
Our Policy & Politics article provides evidence that behavioural public policy researchers sometimes let themselves down by not following best research practice, and it offers guidance on how to make research more rigorous, more transparent and more translatable to other settings. We hope our article will be a helpful complement to the forthcoming special issue of Policy & Politics on behavioural public policy, and that it will have an impact on research practice in the field. Adopting our recommendations has the potential to improve research conduct and clarity, as well as to enhance the legitimacy of behaviour change research both within and beyond the field. This is crucial if we are to maximise the potential of our behavioural change expertise for the common good.
Image: Justine Stuttard