Policy and Politics: Evidence-Based Policy – Older than Advertised and Weaker than We Could Wish

Richard D. French

This section of Discover Society is provided in collaboration with the journal, Policy and Politics. It is curated by Sarah Brown.

I have watched an enthusiastic, well-intentioned lobby for evidence-based policy on my campus for several years. However, I frequently reflect that if I were to opine publicly on evolutionary biology, or astrophysics, with as little knowledge of the subject as various scientifically trained persons bring to their pronouncements on public policy, I would soon be read out of the intellectually reputable part of the university community – and rightly so.

Just as scientific papers do not provide an account of the process of research with its false starts, negative results and sheer failures, but rather an account of its results, so too do accounts of policy fail to provide any sense of the process of arriving at that policy, with its many reversals, ‘irrationalities’, and contingencies.  Political rationales accompanying policy announcements should never be mistaken for accurate accounts of underlying processes or motivations. The point is that whatever the educated layperson may choose to assume about the making of public policy, there is no substitute for reading the literature on the subject, also known as ‘the evidence’.

My review of the literature on evidence-based policy (henceforth, EBP) leads me to two main conclusions. The first is that the EBP experience is a rerun of the campaign for the “policy sciences” of the immediate postwar era, and that the distinguished scholars of that period already provided most of the answers advocates of EBP might require. The second is that, while it does not occur to its advocates to question the underlying premises of EBP, let alone examine the literature by those who have studied it at first hand, there is a sophisticated and well-informed community of scholars who both asked the question and did the research.

This means that when we think of the EBP movement, we should imagine two different components: A populous fringe of noisy and visible but unsophisticated advocates whose convictions of the obvious merits of EBP far exceed their substantive knowledge of the conditions of policy work, and a much smaller but more significant set of scholars who have actually researched the phenomenon.  I call the first crowd the Reinforce School. They want government to get with the program. For many members of the Reinforce school, if policy is not made on the basis of evidence, then it must be made on the basis of some unedifying motivation: self-interest, power, ‘ideology’, ignorance, naked electoralism, co-optation by ‘elites’, craven submission to ‘interests’, and so forth. The possible roles of principle, prudence, compassion, historical commitment, or respect for public opinion, are ignored. The Reinforcers can be identified by their insistence that government should do this, that and the other; the onus is on governments which are alleged to have been intellectually or morally underperforming. The problem is that these advocates of evidence-based policy have not read the evidence on evidence-based policy.

I then divide the second group – those scholars who study EBP – into three further schools. The largest of them is the Reform School, which argues that a series of adjustments here and there by both researchers and policy-makers can realize the potential of EBP.  The Reform School are people who believe in the merits of the EBP idea but are discouraged by the experience of EBP on the ground. They recognize that it is hard to identify many successes to put on the EBP hit parade and they attempt to formulate measures to improve the hit rate.  These measures are incremental; they do not question the foundations of either research or government practice. The consensus in the Reform school would seem to put the priority upon (1) recognising that evidence is most likely to be helpful in enlightening and educating policy-makers rather than providing solutions to specific policy problems, (2) accepting that a variety of types of evidence – beyond that obtained by randomised controlled trials, for example – should be admissible, and (3) the finding that evidence provided by researchers who are in direct and sustained contact with potential consumers among policy-makers is most likely to be influential.

The next school attacks the fundamentals. It sees the same ambiguous, if not unsuccessful, record as the Reform School. I call this group the Reinvent School because it demands major changes in either the evaluation of research which is destined to support public policy, or the application of research by policy-makers, or both.  Such changes would demand an explicit and emphatic commitment to more intensive management of evidence in policy-making by senior officials, from the demand side, called governance of evidence, or on the part of scientists, from the supply side, called knowledge assessment.

The making of policy under good governance of evidence would have to be subject to protocols and audit procedures such as those applicable to, say, programmes for assessment of immigration or asylum claims, or the engagement and the promotion of officials named under merit-based public personnel systems (both of which happen in many democracies to be subject to judicial review). Policy-makers would have to document their actions and choices, so as to permit review for compliance with evidence systems requirements. Knowledge assessment, on the other hand, would involve the systematic screening of research products to ensure clarity and quality control. A variety of protocols, structured to expose the limits, weaknesses, lacunae and contextual linkages of evidence are proposed.

What the knowledge assessors see as ‘extended peer review and extended quality assurance’ appear to be forms of meta-research in which the assumptions of complexity serve as a base for the deployment of a critical apparatus in aid of rating the aptness of any given piece of evidence to serve policy makers. This apparatus is conceptually more elaborate, more searching, and broader than, but not fundamentally different in kind from, traditional disciplinary standards. This would be the ‘critical social science of knowledge applications…that uncovers and raises to a level of explicit consciousness those unexamined prior assumptions and implicit standards of assessment that shape and also distort the production and use of knowledge’ in public policy, for which William Dunn called 25 years ago. For this school, the potential of EBP would justify a much larger investment of time, energy and money than the EBP idea currently attracts. Extensive innovations must be undertaken to realize that potential.

The last school, the Reject School, does not believe that the EBP idea has the promise of making any important change for the better in the making of public policy. These scholars do not question the use of organized knowledge in the making of public policy. They just do not believe that there is a vast corpus of knowledge currently being ignored by government to the detriment of citizens.  The Rejectionists argue that much of what researchers do results in work unsuited for policy-making because it is designed to achieve grants and publications, rather than solve the practical problems of the creation and application of public policy. Public problems are too complex for disciplinary methods. In the complexity frame, we face a non-ergodic world with non-linear dynamics, phase shifts, multiple/suboptimal equilibria, path dependency, institutional lock-in, increasing returns, irreversibilities, and a number of other phenomena with which scientists and mathematicians are only beginning to grapple.

Complex systems produce emergent qualities, such that the whole is greater than the sum of its parts, and so frustrate the reductionism of classical science. Moreover, scientific models are just that – models. Too often, we take them for accurately constructed entities in the world. A hunger for certainty and security on the part of citizens induces inflated claims by the community of knowledge producers, but these claims ultimately prove empty. Moreover, scientists’ posture above the fray is equally empty; their choice of projects, their provision of advice, their attempts to achieve positional goods, mark them as no more nor less qualified than many other citizens to influence policy.

Sir Peter Gluckman, the doyen of global science advising, repeatedly warns his audiences that scientists should beware of hubris. It is remarkable how often similar themes – modesty, humility – emerge in studies of EBP and related issues, such as Ray Pawson’s conclusion (p.167) that ‘Above all, one must be modest in reflecting faithfully the limited authority of evidence itself. Good science is not futurology; we should always be humble about predicting the path ahead on the basis of what we know about the one already trodden.’ This review located more than a dozen such admonitions, employing one or both of these terms.

I conclude that to respond to the challenges facing EBP, researchers must develop a more realistic grasp of the task environment in which ministers and senior officials operate, reject naïve but prevalent assumptions about the level of analytical rationality in government, and recognise that direct and sustained engagement with policy-makers, found to be the best predictor of successful practice of EBP,  may not be compatible with career advancement in academia.

 

Richard D. French was professor and CN – Paul M. Tellier Chair on Business and Public Policy in the Graduate School of Public and International Affairs at the University of Ottawa between 2008 and 2016.


3 Comment responses

  1. January 09, 2019

     I very much agree. The EBP concept has been used as a reductive tool to exclude certain witnesses from policy debate, in favour of an authorized class.

     A particular case is dealt with here: https://academic.oup.com/jiplp/article-abstract/11/2/92/2358037

  2. January 31, 2019

     I don’t disagree with this analysis, but it is one that applies very much to the academic community, where a lively debate ensues. My attachment to EBP is not so much a description of what actually happens, or could happen, but more an advisory to policy-makers on what they should not be doing. I think it is very useful for policy-makers to be confronted with evidence of programmes that have been enthusiastically and expensively implemented, but that have had either no effect or exactly the opposite effect. There are many examples, and they relate to programmes rather than broad policies, I suppose. But this is a very useful exercise. It is a modern evidence-based version of “speaking truth to power”. You may still get absolutely nowhere with said policy-makers, but at least you can say “told you so” and feel that you have done your duty when the penny finally drops in the relevant political and policy arena.
