
Neal Haddaway

October 19th, 2020

8 common problems with literature reviews and how to fix them


Literature reviews are an integral part of the process and communication of scientific research. Whilst systematic reviews have come to be regarded as the highest standard of evidence synthesis, many literature reviews fall short of these standards and may end up presenting biased or incorrect conclusions. In this post, Neal Haddaway highlights 8 common problems with literature review methods, gives examples of each, and suggests practical solutions for mitigating them.




Researchers regularly review the literature – it’s an integral part of day-to-day research: finding relevant studies, reading and digesting their main findings, summarising across papers, and drawing conclusions about the evidence base as a whole. However, there is a fundamental difference between brief, narrative approaches to summarising a selection of studies and attempting to reliably and comprehensively summarise an evidence base to support decision-making in policy and practice.

So-called ‘evidence-informed decision-making’ (EIDM) relies on rigorous systematic approaches to synthesising the evidence. Systematic review has become the highest standard of evidence synthesis and is well established in the pipeline from research to practice in the field of health. Systematic reviews must include a suite of specifically designed methods for the conduct and reporting of all synthesis activities (planning, searching, screening, appraising, extracting data, qualitative/quantitative/mixed methods synthesis, writing; e.g. see the Cochrane Handbook). The method has been widely adapted into other fields, including environment (the Collaboration for Environmental Evidence) and social policy (the Campbell Collaboration).

Despite the growing interest in systematic reviews, traditional approaches to reviewing the literature persist in contemporary publications across disciplines. These reviews, some of which are incorrectly referred to as ‘systematic’ reviews, may be susceptible to bias and, as a result, may end up providing incorrect conclusions. This is of particular concern when reviews address key policy- and practice-relevant questions, such as the ongoing COVID-19 pandemic or climate change.

These limitations of traditional literature review approaches could be remedied relatively easily with a few key procedures, many of which are not prohibitively costly in terms of skill, time or resources.

In our recent paper in Nature Ecology & Evolution, we highlight 8 common problems with traditional literature review methods, give examples of each from the field of environmental management and ecology, and suggest practical solutions for mitigating them.

Problem: Lack of relevance. Limited stakeholder engagement can produce a review that is of limited practical use to decision-makers.
Solution: Stakeholders can be identified, mapped and contacted for feedback and inclusion without the need for extensive budgets – check out best-practice guidance.

Problem: Mission creep. Reviews that don’t publish their methods in an a priori protocol can suffer from shifting goals and inclusion criteria.
Solution: Carefully design and publish an a priori protocol that outlines the planned methods for searching, screening, data extraction, critical appraisal and synthesis in detail. Make use of existing organisations to support you (e.g. the Collaboration for Environmental Evidence).

Problem: A lack of transparency/replicability in the review methods may mean that the review cannot be replicated – a central tenet of the scientific method!
Solution: Be explicit, and make use of high-quality guidance and standards for review conduct (e.g. CEE Guidance) and reporting (PRISMA or ROSES).

Problem: Selection bias (where included studies are not representative of the evidence base) and a lack of comprehensiveness (an inappropriate search method) can mean that reviews end up with the wrong evidence for the question at hand.
Solution: Carefully design the search strategy with an information specialist; trial the search strategy against a benchmark list (a minimal sketch of this check follows this list); use multiple bibliographic databases, languages and sources of grey literature; and publish the search methods in an a priori protocol for peer review.

Problem: The exclusion of grey literature and failure to test for evidence of publication bias can result in incorrect or misleading conclusions.
Solution: Attempt to find grey literature, including both ‘file-drawer’ (unpublished academic) research and organisational reports, and test for possible evidence of publication bias.

Problem: Traditional reviews often lack appropriate critical appraisal of included study validity, treating all evidence as equally valid – we know some research is more valid than others, and we need to account for this in the synthesis.
Solution: Carefully plan and trial a critical appraisal tool before starting the process in full, learning from existing robust critical appraisal tools.

Problem: Inappropriate synthesis (e.g. using vote-counting and inappropriate statistics) can negate all of the preceding systematic effort. Vote-counting (tallying studies by their statistical significance) ignores study validity and the magnitude of effect sizes.
Solution: Select the synthesis method carefully, based on the data being analysed. Vote-counting should never be used instead of meta-analysis (see the second sketch after this list), and formal methods for narrative synthesis should be used to summarise and describe the evidence base.
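As an aside on trialling a search strategy against a benchmark list (the selection-bias row above), the check itself is simple: compare the records your draft search retrieves against a list of studies the review team already knows to be relevant. Below is a minimal Python sketch of that comparison; the DOIs are invented for illustration and are not from the article.

```python
# Minimal sketch (hypothetical data): check a trial search strategy against a
# benchmark list of studies the review team already knows are relevant.
# If recall is low, the search string needs broadening before the review starts.

benchmark = {  # DOIs of known-relevant studies (invented for illustration)
    "10.1000/study-a", "10.1000/study-b", "10.1000/study-c",
    "10.1000/study-d", "10.1000/study-e",
}

retrieved = {  # DOIs returned by the trial search across databases (invented)
    "10.1000/study-a", "10.1000/study-b", "10.1000/study-e",
    "10.1000/other-1", "10.1000/other-2",
}

found = benchmark & retrieved
missed = benchmark - retrieved
recall = len(found) / len(benchmark)

print(f"Benchmark recall: {recall:.0%}")    # 60% in this invented example
print(f"Missed studies: {sorted(missed)}")  # inspect these to refine search terms
```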
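On the final problem, a small worked example may help show why vote-counting misleads. In the hypothetical sketch below (all effect sizes and standard errors are invented), five small studies are each individually non-significant, so a vote count reports no significant findings; a simple fixed-effect (inverse-variance) pooling of the same studies, the most basic form of meta-analysis, reveals a clearly positive overall effect.

```python
# Minimal sketch (invented numbers): vote-counting vs inverse-variance pooling.
import math

# (effect size, standard error) for five hypothetical small studies
studies = [(0.25, 0.20), (0.35, 0.22), (0.28, 0.19), (0.32, 0.21), (0.30, 0.20)]

# Vote-counting: tally studies that are individually significant (|z| > 1.96)
significant = sum(1 for d, se in studies if abs(d / se) > 1.96)
print(f"Individually significant studies: {significant} of {len(studies)}")  # 0 of 5

# Fixed-effect meta-analysis: weight each study by its precision (1/SE^2)
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
z = pooled / pooled_se
print(f"Pooled effect: {pooled:.2f} (SE {pooled_se:.2f}, z = {z:.2f})")  # z ≈ 3.3
```

A real synthesis would use dedicated meta-analysis software and account for between-study heterogeneity and study validity; the point here is only that tallying significance discards the precision and magnitude information that pooling uses.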

There is a lack of awareness and appreciation of the methods needed to ensure systematic reviews are as free from bias and as reliable as possible, as demonstrated by recent flawed, high-profile reviews. We call on review authors to conduct more rigorous reviews, on editors and peer-reviewers to gate-keep more strictly, and on the community of methodologists to better support the broader research community. Only by working together can we build and maintain a strong system of rigorous, evidence-informed decision-making in conservation and environmental management.

 


Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns about posting a comment below.

Image credit: Jaeyoung Geoffrey Kang via Unsplash


 


About the author

Neal Haddaway

Neal Haddaway is a Senior Research Fellow at the Stockholm Environment Institute, a Humboldt Research Fellow at the Mercator Research Institute on Global Commons and Climate Change, and a Research Associate at the Africa Centre for Evidence. He researches evidence synthesis methodology and conducts systematic reviews and maps in the field of sustainability and environmental science. His main research interests focus on improving the transparency, efficiency and reliability of evidence synthesis as a methodology, and on supporting evidence synthesis in resource-constrained contexts. He co-founded and coordinates the Evidence Synthesis Hackathon (www.eshackathon.org) and is the leader of the Collaboration for Environmental Evidence centre at SEI. @nealhaddaway
