The journey from evidence to policy is inevitably complex and frequently becomes divisive as arguments rage about the validity and worth of the evidence presented. This is especially true in the “post-truth” era, where the opinions of experts are viewed with scepticism, opposing views (and evidence) are dismissed as “fake news”, and social media algorithms have fostered an “echo chamber” effect which further entrenches opinions. To effectively navigate this complexity, Peter Horton and Garrett Wallace Brown propose a new methodology for policy development, one which fully integrates scientific investigation with political debate and social discourse.
Few deny there are major global challenges ahead – from climate change and the loss of ecosystems and biodiversity, to malnutrition, poverty, and disease, to how to harness new technologies and innovations for the betterment of all humankind. It should not have to be said, but the policies that will provide solutions to these complex problems have to be based on evidence, often from science, because many of these challenges are, at heart, scientific. But the journey from evidence to policy is far from simple, always lengthy, frequently divisive, and often ineffective. Why is that? To get to the bottom of this question we first need to ask another: what is evidence?
Evidence is based on objective investigation, reasoning, and analysis, as opposed to subjectivity and prejudice, belief and myth. Evidence may be derived from a purely scientific investigation, but it has to be allied to the political, cultural, economic, and social dimensions of these global problems, and herein lies the difficulty. As arguments emerge about the validity and worth of the evidence, bias and prejudice become difficult to remove. These arguments are amplified by two interrelated aspects of 21st-century life: firstly, the “post-truth” era, in which the opinions of experts are viewed with scepticism, everyone’s opinion is treated as equally valid, and opposing views (and evidence) are dismissed as “fake news”; and secondly, the sharing of information through the internet and social media, personalised by algorithms designed to harvest and respond to existing preferences, fostering an “echo chamber” effect which further entrenches the views of like-minded individuals.
Together these foster a view that scientific evidence is unclear or inconclusive, that it is produced behind closed doors, and that it is elitist, giving rise to conspiracy theories about who produced the evidence and for what purpose. In this environment, subjectivity prevails over objectivity as policymakers cherry-pick the evidence to fit the preconceived views and aspirations of their supporters, as well as their own political mantras. To some, evidence becomes nothing more than any “fact” (whether true or not) that can be used to support a particular viewpoint.
So, what to do about it?
To see clear evidence ignored, distorted, or diluted in favour of ill-informed subjective views leads to frustration and anger. But frustration alone achieves nothing. A new approach is needed, one that moves away from the naïve assumption that good evidence will be readily accepted and will quickly and easily contribute to policy. Appreciating the sheer complexity of many of the intractable problems that science is addressing is a good first step. From there we need new ways to gather, assimilate, and communicate evidence. In our recent article in Palgrave Communications, we propose a rigorous protocol of mapping, analysing, visualising, and sharing:
- Mapping: defining the boundaries of a problem and the people and organisations involved
- Analysing: identifying what is known, what is not known, what the important drivers are, and what works
- Visualising: finding ways to present the accumulated knowledge in a transparent, accessible way
- Sharing: communicating the evidence to all sectors of society.
Figure 1: A process for integrated policy development: policy ideas stimulate evidence synthesis through the map, analyse, visualise, and share protocol. The evidence is then evaluated further by independent scientific investigation and inclusive deliberative forums, leading to revision of policy ideas and further analysis (if required), in an iterative process that culminates in final policy development. The same process can be used to evaluate policy success and bring about policy change. This figure is adapted from one which appears in the authors’ article, “Integrating evidence, politics and society: a methodology for the science–policy interface”, published in Palgrave Communications.
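For readers who think in computational terms, the iterative loop in Figure 1 can be expressed as a toy program. This is purely a minimal sketch, assuming a simplified pass/fail evaluation; none of the function names below come from the published methodology, and the real process is, of course, social and deliberative rather than automated.

```python
# Purely illustrative sketch of the iterative loop in Figure 1.
# All names are hypothetical; the published methodology defines no
# software interface.

def synthesise(policy_idea):
    """Evidence synthesis: map, analyse, visualise, and share."""
    return {step: f"{step} output for {policy_idea!r}"
            for step in ("map", "analyse", "visualise", "share")}

def evaluate(policy_idea, evidence):
    """Two-pronged evaluation: independent scientific scrutiny plus
    deliberative forums. Both are stubbed here as trivial checks."""
    scrutiny_passed = bool(evidence)      # stand-in for scientific scrutiny
    forums_agree = len(policy_idea) > 0   # stand-in for forum deliberation
    if scrutiny_passed and forums_agree:
        return True, policy_idea
    return False, policy_idea + " (revised)"

def develop_policy(policy_idea, max_rounds=5):
    """Iterate synthesis and evaluation until the idea is accepted."""
    for _ in range(max_rounds):
        evidence = synthesise(policy_idea)
        accepted, policy_idea = evaluate(policy_idea, evidence)
        if accepted:
            return policy_idea            # ready to become a policy plan
    return None                           # evidence still contested

print(develop_policy("reduce single-use plastics"))
```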
This process is essentially what is called evidence synthesis. Visualisation and sharing are particularly important – all too often, evidence synthesis results in lengthy and often impenetrable reports, which makes evidence sharing impossible. But what are the best ways to visualise and then share evidence? More research is needed. For example:
- Could there be better evidence sharing via web-based national and international events, new online publishing models, and social media?
- Should people with expert knowledge be more active and proactive rather than passive and reactive? Is collective action needed?
- Do we need better ways to share experience and approaches, to find out what works and what doesn’t?
- Should evidence be supplemented with powerful, “real life” stories to increase the power of the message?
- Can evidence be democratised in a way that does not undermine science itself, i.e. while also recognising that science relies on specialist knowledge, technical expertise, and years of training?
- Why do some issues grab public attention, facilitating remarkably rapid development of remedial policy (such as plastics in the ocean), whilst others of even greater importance and impact (such as the increasing frequency and severity of extreme weather events resulting from global warming) do not?
These are just some of the questions we need answers to.
The next step in our protocol is evidence evaluation. This too has to be an open and transparent process – a critical process that questions the validity of the evidence. Again, research is needed. Who should lead the evaluation process? Is there a central role for universities, academic organisations, commissions, etc.? To bring success we need independence and inclusivity, so should we break away from the traditional model of the “expert panel” (of mostly white male senior academics) and strive towards diversity of experience, ethnicity, and gender?
An important part of our protocol is that evidence evaluation should simultaneously and equally combine not only the testing of that evidence under further independent scientific scrutiny but also wider discussion, debate, and deliberation. Within the evaluation process it is important not only to locate where evidence is lacking, inconclusive, or ambiguous, but also to understand how evidence is perceived, misunderstood, or ignored. The same piece of evidence can be interpreted in different ways by different stakeholders, leading to disagreement and conflict. Deliberative forums involving the protagonists locate and challenge misconceptions and ideological stances, in order to undermine enclave thinking and provide the opportunity to reach agreement on contested pieces of evidence. These forums are currently physical meetings facilitated by researchers, governments, or experts, and of necessity are restricted in the number of people taking part. The internet could change that – broadening the scope of deliberative forums through innovation would allow much wider participation and larger sets of data to be collected and evaluated, with analysis aided by artificial intelligence techniques.
The results of this two-pronged evidence evaluation would enable a policy idea to be efficiently transformed into a policy plan, since all evidence would have been validated and all stakeholder viewpoints either reasonably satisfied or properly discredited. The formal protocol, as described in our paper, would be a source of stability, discipline, and confidence-building – a recourse when problems arise and a way to break through logjams and overcome barriers. It would establish trust between scientists, government, and the public, and build a more effective science–policy interface.
This blog post is based on the authors’ article, “Integrating evidence, politics and society: a methodology for the science–policy interface”, published in Palgrave Communications (DOI: 10.1057/s41599-018-0099-3).
Featured image credit: William, via Unsplash (licensed under a CC0 1.0 license).
Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.
About the authors
Peter Horton FRS is Emeritus Professor of Biochemistry in the Department of Molecular Biology and Biotechnology and Chief Research Advisor to the Grantham Centre for Sustainable Futures at the University of Sheffield. He is a plant biologist with expertise in photosynthesis, and his current interests lie in interdisciplinary approaches to achieving global food security.
Garrett Wallace Brown is Chair of Political Theory and Global Health Policy at the School of Politics and International Studies at the University of Leeds. He is Co-Lead of the University of Leeds Health Theme and has published widely on global health, deliberative theory, and evidence-based policymaking.