Systematic reviews are widely accepted as a ‘gold standard’ in evidence synthesis, and the meta-analyses within them provide a powerful means of looking across datasets. Neal Haddaway argues that while certain fields have embraced these reviews, there is great opportunity for their growth in other fields. One way to encourage secondary synthesis is for researchers to ensure their data are reported in sufficient detail. Thinking carefully about the legacy and future use of data is not only sensible, but should be an obligation.
How does research make its way into policy?
Many academics, my former self included, have a rather romantic view of the science-policy interface: a bedraggled researcher runs to Westminster clutching their latest research paper, ready to thrust the ground-breaking findings beneath the waiting noses of thumb-twiddling policy-makers, desperate for some form of evidence to help them reach a decision. In reality, decision-makers use a wide variety of evidence, from constituents’ opinions and financial considerations to scientific research and media attention. With the exception of a technocracy, research forms only part of the picture in decision-making. That said, decisions should always be based on the best available evidence, whatever form that might take.
Whilst policy-makers may turn to science to guide their paths, they often do not have the time or training to trawl some of the 100,000+ academic journals to find relevant research evidence. Some organisations advocate the building of relationships between researchers and policy-makers, enabling research findings to be provided to those with an evidence need. Others believe that this can lead to unacceptable subjectivity where a topic is contentious or the published research is contradictory. Either way, a reliable review of research literature provides a quick and relatively low-cost means of summarising a large body of evidence and can provide an unbiased assessment where there are contradictory findings. Reviews have long been commissioned as part of the policy-making process, but the past decade has seen a rise in the number of systematic reviews being commissioned by national and international policy-makers.
Systematic reviews are, in essence, literature reviews undertaken in a specific way according to strict guidelines that aim to minimise subjectivity, maximise transparency and repeatability, and provide a highly reliable review of the evidence pertaining to a specific topic. Methods in environmental sciences and conservation are outlined by the Collaboration for Environmental Evidence, in social sciences by the Campbell Collaboration and in medicine by the Cochrane Collaboration. However, systematic reviews are now widely used across a plethora of fields, including construction, psychology, economics and marketing, to name a few. In brief, systematic reviews use peer-reviewed, published protocols to lay out the methods for a review; searching for studies, screening articles for relevance and quality, and extracting and synthesising data are then undertaken according to that predetermined strategy. Where possible, meta-analysis provides a powerful means of statistically combining studies to look for patterns across them and to examine reasons for contradictory results where they occur. Systematic reviews are widely accepted as a ‘gold standard’ in evidence synthesis, but other methods, such as civil service rapid evidence assessments, have been developed that aim to offer a faster review, albeit at a lower level of rigour.
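To make the idea of statistically combining studies a little more concrete, the sketch below is a minimal, hypothetical illustration of the simplest form of meta-analysis (fixed-effect, inverse-variance pooling); it is my own example rather than anything taken from the article or from any review guideline, and the study values are invented.

```python
# Minimal fixed-effect meta-analysis: each study contributes an effect size
# and a variance, and studies are pooled by inverse-variance weighting.
# All numbers below are invented for illustration.
import math

# (effect size, variance of that effect size) for three hypothetical studies
studies = [(0.42, 0.04), (0.15, 0.09), (0.58, 0.02)]

weights = [1.0 / var for _, var in studies]        # weight = 1 / variance
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))                 # standard error of the pooled effect

# Approximate 95% confidence interval around the pooled estimate
print(f"Pooled effect = {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")
```

Real reviews would typically use random-effects models and dedicated software (for example the metafor package in R), but the principle of weighting each study by its precision is the same.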
The PRISMA flow diagram, depicting the flow of information through the different phases of a systematic review (Wikimedia, GFDL)
Why should we care about secondary synthesis?
Systematic reviews are an attractive method for decision-makers for a number of reasons. Not only do they summarise large volumes of evidence, but they are often much cheaper than commissioning new primary research. Furthermore, they can draw together evidence from diverse situations and long time periods, meaning that long timescales and wide geographical ranges can be examined in a way that would be highly challenging in a single primary research project.
Systematic reviews may seem rather time-consuming and expensive (typical durations range from around 9 to 24 months and costs are quoted at between c. £20,000 and £200,000), but these resource requirements are low relative to costly interventions that could be ineffectual or even cause more harm than good. Bat bridges, for example, cost between c. £30,000 and £300,000 each; were it feasible, a systematic review on the topic might show that they are ineffectual at reducing road-related bat mortality, saving orders of magnitude more money than the review’s cost once used in national policy-making.
In medicine, systematic reviews have become an industry standard and are well understood by all contemporary practitioners. In the environmental sciences, however, despite their high utility and increasing acceptance by decision-makers, systematic reviews and meta-analyses are not particularly well understood. This lack of understanding has led to the present situation, where the information that reviews need is often not provided and reviewers must spend considerable effort extracting data in a format they can actually use in systematic reviews and meta-analyses.
How can we make secondary synthesis easier?
In order to accurately assess the reliability and applicability of individual primary research, reviewers must be able to extract information relating to study design, experimental procedure and the studies’ findings. In order for study findings to be included in a meta-analysis, data must be reported either as a standardised effect size or as group means, in both cases accompanied by sample sizes and measures of variability. In my experience of systematic reviews in conservation and environmental management, a shocking proportion of published research fails to provide details of experimental design or fails to report means, variability and sample size: 46% of studies failed to report any measure of variability in one systematic review currently underway in agriculture and soil science. Without this information it is impossible to assess whether observed differences between groups are likely to be real differences or just chance variation in sampling.
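To illustrate why all three pieces of information matter, here is a minimal, hypothetical sketch (my own, not something from the article) of what a reviewer does with a well-reported study: the group means, standard deviations and sample sizes are converted into a standardised effect size (here Hedges’ g, one common formulation) and its variance, which is what allows the study to be weighted in a meta-analysis. Omit any one of those numbers and the study simply cannot be included.

```python
# Hedges' g from reported summary statistics. Without the means, standard
# deviations AND sample sizes for both groups, this calculation is impossible.
import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference (Hedges' g) and its variance for a
    treatment-versus-control comparison."""
    # Pooled standard deviation across the two groups
    sd_pooled = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled          # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)          # small-sample bias correction
    g = j * d
    var_g = j ** 2 * ((n_t + n_c) / (n_t * n_c) + d ** 2 / (2 * (n_t + n_c)))
    return g, var_g

# Invented example of a study that reports everything a reviewer needs
g, var_g = hedges_g(mean_t=5.2, sd_t=1.1, n_t=30, mean_c=4.6, sd_c=1.3, n_c=28)
print(f"Hedges' g = {g:.2f}, variance = {var_g:.3f}")
```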
Some journals and publishers now require raw data to be published alongside research articles (e.g. PLOS ONE), and this will undoubtedly help to improve the rate at which primary research can be included in meta-analyses. However, until this policy becomes universal, researchers should ensure that they report in sufficient detail to allow their methods to be critically appraised and their data to be assimilated into secondary syntheses.
As decision-making moves towards secondary synthesis of existing evidence, researchers in all disciplines should be thinking about how to maximise the impact of their research. Thinking carefully about legacy and future use of data is not only sensible, but should be an obligation.
This post is based on the article A call for better reporting of conservation research data for use in meta-analyses, written by Neal Haddaway and published in Conservation Biology on 14 January 2015.
Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.
Neal Haddaway is a conservation biologist working for MISTRA EviEM at the Royal Swedish Academy of Sciences in Stockholm. Neal has a background in aquatic conservation and ecology and for the last three years has been researching how scientists interact with decision-makers through evidence reviews.
Evidence-based practices have grown out of epidemiology over the past three decades and have gone through many innovations. The most serious problem is that systematic reviews demand standards that can exclude serious causal papers and research simply over administrative procedures. The ultimate result is not only distortion but the potential for large-scale consensus and gross opinion being collated as an “evidence base” despite the fact that real evidence exists to contradict such conclusions. Ultimately this can become an outcome-based tautology as a methodology, and the ritual use of “gold standard” to brand such “evidence” becomes meaningless hype. Research-based evidence has been undermined by the administrative, outcome-based emphasis on “peer review” (subjective at the core) and by technically sophisticated phrasings that imply stringent accountability, such as “systematic” and “meta” analysis.
Thanks for your comment, Bruce. You raise a good point, and one we hear often in connection with some of the earlier traditions in evidence-based movements. However, recent developments in SR methodology across the board have responded to the realisation that ‘what counts as evidence’ has historically been quite narrow, as you point out. These developments have been put in place to ensure that a much broader definition of evidence is now used. To give an example, I have been reading this afternoon a very heated debate on Twitter about where opinion and online content are placed in the ‘evidence hierarchy’. In environmental sciences we are moving away from an over-reliance on any hierarchy in the critical appraisal of evidence, and many secondary researchers believe that all evidence should be includable in an SR. What matters is that the tools available to synthesise that evidence are appropriate. By necessity, early SR examples focused on RCTs and experimental, quantitative evidence, but qualitative research methods that can account for a broader range of evidence are becoming more frequently used in SRs. Given these developments, which must continue to adapt, the view you present (at least from the perspective of the environmental and social sciences) is a rather outdated one. I’m not sure I follow your last point, but a central tenet of SR methodology is that peer review alone means nothing – grey literature is just as valid as academic literature. As academics we also have a rather blinkered view of what peer review means: many organisations have internal and external peer review that can rival that of academic journals. The emphasis within an SR is to critically appraise all evidence equally.
But comments about SR aside, research should be fully reported whether it’s used in a SR, MA or just directly assimilated into policy and practice (the focus of my article). Thanks for your comment, though.
Neal: Compliments to you. I essentially agree with your perspective, but I think the history of developments in “evidence-based” applications tends to lend a heavy, implicitly disciplinary and political bias to “evidence” as a general category that absolves all entries. Since “all” evidence becomes relevant but not necessarily significant, it dilutes the premise and the promise of its authentic resourcefulness. In medicine, the epidemiologists who were risking their lives in the field did not care to wait for the five-year trials on “evidence” that could be relevant to their survival. In that regard, “all” evidence has a centre of gravity. As the “evidence base” for application first locked on to basic applications, it merged with the “research-based” methodologies that emerged in the 1980s, which attempted to take “practice variances” out of the picture, driven by authoritative professionals who ignored research and acted on opinionated experience, often obscuring and confusing science-based practice with the posturing of scientists. Without going into the self-serving degree of bias involved, the heaviest influence came from conflicts of interest and the business sector’s demand for accountability over credibility. By the mid-90s this confusion had all but offset the corrections made over the previous 15 years, and the potential for distortion once again became obscured. I have a recent medical training session advertisement that directly links “evidence base” with avoiding (evading?) liability. It becomes protocol to practise according to “evidence” as a matter of protection, and under that rubric the stifling of “all” the “real” evidence (as mentioned with the early epidemiologists) becomes completely lost in the interest of a totally different survival.
I am an anthropologist; my interest is generic, not vested. I presume you are in ecology, as presented, or the environmental sciences. The trend here is the latest application of “resilience” over sustainability. It is quite impressive. However, capital resilience trumps ecological resilience, and the “evidence base” you develop should be on guard against confirmation bias towards resource management that accommodates the processes of commercial interests wishing to modify “outcomes”, and against the consensus of manageability that an “evidence”-based confirmation bias might entail.
Regards to you, Neal…and good luck!
Bruce
Thanks, Bruce. I think we essentially agree on most things. I’m a conservation biologist actually, so our discipline is a crisis-driven one similar to medicine (i.e. low resources, short timescales, individual need for evidence). We therefore tend to see more demand for evidence reviews from institutions with strong policy backgrounds that need reliable syntheses to justify novel or modified policy. Our practitioner base is very different from medicine’s (not trained in analysing SRs, minimal resources for accessing literature), so practitioners are best reached through summaries of SRs, but they still ideally benefit from recommendations to change entrenched, non-evidenced practices. So I can’t speak for other subject areas, but I do always assert that SR is not a panacea; it was never intended to be and never will be. But it is appropriate and very useful in some situations. In those situations it is vital that the evidence isn’t biased in any way, as you rightly point out, which is why the methods are adapting to cope with other forms of evidence. They are not adapting so that everyone can use SR in all decision-making situations – but perhaps I’ve misunderstood.
It’s great to chat to people about evidence-based/evidence-informed policy and practice. For researchers and decision-makers alike, I think there is always much to learn.
Thanks again for your comments,
Neal