Almost every academic article starts with a literature review. However, although these short research summaries can be beneficial, as discussed in previous posts on the LSE Impact Blog, they also introduce opportunities for unverifiable misrepresentation and self-aggrandizement. In this post Gorgi Krlev proposes that short of abolishing them, or aiming for complete standardization of literature reviews, researchers in the social sciences and humanities should instead develop dynamic knowledge maps that can visually display the relationship between new research and the existing literature.
Literature reviews are a core part of academic research, loathed by some and loved by others. The LSE Impact Blog recently presented two proposals on how to deal with the issues literature reviews raise: Richard P. Phelps argues that, due to their numerous flaws, we should simply get rid of them as a requirement in scholarly articles. In contrast, Arnaud Vaganay proposes that, despite their flaws, we can save them by means of standardization that would make them more robust. Here, I put forward an alternative that strikes a balance between the two: let's build databases that help systemize academic research. There are examples of such databases in evidence-based healthcare; why not replicate them more widely?
The seed of the thought underlying my proposition of building dynamic knowledge maps in the social sciences and humanities was planted in 2014. I was attending a talk within Oxford’s evidence-based healthcare programme. Jon Brassey, the main speaker of the event and founder of the TRIP database, was explaining his life goal: making systematic reviews and meta-analyses in healthcare research redundant! His argument was that a database containing all available research on treatment of a symptom, migraine for instance, would be able to summarize and display meta-effects within seconds, whereas a thorough meta-analysis would require weeks, if not months, if done by a conventional research team.
Although still imperfect, TRIP has made significant progress in realizing this vision. The most recent addition to the database is "evidence maps" that visualize what we know about effective treatments. Evidence maps compare alternative treatments based on all available studies. They indicate the effectiveness of a treatment, the "size" of the evidence underscoring the claim, and the risk of bias contained in the underlying studies. Here and below is an example based on 943 studies, as of today, dealing with effective treatment of migraine, indicating aggregated study size and risk of bias.
Source: TRIP database
There have been heated debates about the value and relevance of academic research (propositions have centred on intensifying research on global challenges or harnessing data for policy impact), its rigour (for example reproducibility), and the speed of knowledge production, including the "glacial pace of academic publishing". Literature reviews, for the reasons laid out by Phelps and Vaganay, suffer from imperfections that make them time-consuming, potentially incomplete or misleading, erratic, selective, and ultimately blurry rather than insightful. As a result, conducting literature reviews is arguably not an effective use of research time and only adds to wider inefficiencies in research.
We can of course stress the positive sides of reviews, namely that they are one part in enabling the emergence of a research community, help to establish a level playing field for new studies, and serve to identify important research gaps. However, most would probably agree that they only present a partial overview of a research field. In this respect, Robert P. van der Have and Luis Rubalcaba provide a good example in Research Policy of how different literatures on the same subject (social innovation) "do not speak to each other", thereby limiting collaboration and faster progress in understanding new phenomena.
1.) Purple: Community Psychology cluster; 2.) Red: Creativity research cluster; 3.) Blue: Social and societal challenges cluster; 4.) Green: Local development cluster. Source: van der Have & Rubalcaba (2016).
Our ambition as researchers should be to advance knowledge and one way of achieving this would be if anyone could easily gain a high-level summary of the research literature, in a way similar to what we’ve just seen the TRIP database do. I see no reason why these principles could not be applied to the social sciences and the humanities.
However, research in these fields offers unique challenges. For instance, in contrast to healthcare research, the volume of qualitative research renders the notion of effect sizes to some extent irrelevant. Measures such as these would instead have to be complemented by features that allow for mapping “research depth”. Two-dimensional graphs will not be able to capture this depth. Instead, we would require tree-like structures that allow us to dig deeper starting from broader research themes such as organizational stigma or innovation ecosystems.
Phenomena can also be looked at from very different angles in the social sciences and humanities. There is less of an objective “truth”, or that truth needs to emerge by combining analytic angles, methods and research traditions. This would require the analytic view displayed by the evidence map to be more adaptable and dynamic. But this is also not unheard of. Dynamic visualizations are already used to display and track collaboration networks. Why not use them to systemize our knowledge?
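To make the idea of tree-like, content-based maps a little more concrete, here is a minimal sketch of how such a structure might be represented in code. All names, themes, and study entries below are hypothetical illustrations, not part of any existing database: themes nest into sub-themes (capturing "research depth"), leaves carry individual studies with a crude bias rating, and evidence counts aggregate up the tree, loosely in the spirit of TRIP's evidence maps.

```python
from dataclasses import dataclass, field

# A hypothetical "knowledge map" node: themes can nest (research depth),
# and each theme can hold studies recorded as (title, bias_rating) pairs.
@dataclass
class ThemeNode:
    name: str
    studies: list = field(default_factory=list)   # (title, bias) tuples
    children: list = field(default_factory=list)  # sub-themes

    def summarize(self):
        """Aggregate total study count and high-bias count up the tree."""
        total = len(self.studies)
        high_bias = sum(1 for _, bias in self.studies if bias == "high")
        for child in self.children:
            c_total, c_bias = child.summarize()
            total += c_total
            high_bias += c_bias
        return total, high_bias

# Hypothetical example: one broad theme with two sub-themes.
root = ThemeNode("innovation ecosystems", children=[
    ThemeNode("local development",
              studies=[("Study A", "low"), ("Study B", "high")]),
    ThemeNode("social challenges",
              studies=[("Study C", "low")]),
])

total, high_bias = root.summarize()
print(total, high_bias)  # 3 studies in total, 1 rated high-bias
```

A real map would of course need far richer metadata (methods, research traditions, content-based links between nodes) and a dynamic front end, but the point is that the underlying data structure is simple and well understood.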
Publication network of all publications of the Oxford Protein Informatics Group. By Florian Klimm, available on GitHub.
Obviously, there are still many issues to be resolved, for example: Who decides on how particular analytical frames are mapped? How can we produce maps across disciplinary borders, research communities or “conversations”? And what role would journals play in developing and administering the maps? There might be a role here for learned and professional societies, or for scholarly self-organization, as in the open science movement. So the vision is far from clear cut yet.
But if we choose to invest in building dynamic knowledge maps, we will fully adhere to Vaganay's calls for increasing standardization, while equally adhering to Phelps' calls for not wasting time and resources on presenting the same thing repeatedly, each team of authors in their own light. Instead, we will be able to invest the time and energy we save into furthering original thought and pushing the boundaries of our knowledge.
Featured image credit: Still life with a skull and writing quill, Pieter Claesz via The Met (Licensed under a CC0 1.0 licence).
Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.
I believe there is sometimes a preoccupation with the selection phase of any review (inclusion and exclusion criteria) at the expense of describing how the key findings of any selected paper were determined and presented. One hopes that this process is carried out in an ethically appropriate manner, but this might not always be the case. I note also a preoccupation with describing various aspects of the authors in elaborate but distracting ways. It might be true that lead authors with surnames beginning with C predominate in some fields, but I don’t really care. Do you have a view on so-called ‘Systematic Quantitative Reviews’?
Thank you, Paul. Yes, I think systematic reviews are important, but often a lot of work and sometimes not as systematic as they should be. Thus, the use of TRIP (link in the post). And also yes, the fact that there are inclusion and exclusion criteria–being cynical, one might say to keep the workload manageable rather than for purely substantive reasons–already shows that we should probably seek to do better. Or put differently: enable individual interpretation, but on a shared set of evidence. I think the dynamic knowledge maps I propose would be useful in this regard.
I love the idea that HSS researchers might explore the possibilities of dynamic knowledge mapping – as well as visual approaches to summarizing the state of current research. However, as a research supervisor I have always understood the literature review as an exercise that is valuable in its own right: requiring students to develop the skills of 'reading' a field and synthesizing complex perspectives into their own narrative. From this perspective, the purpose of the literature review isn't to identify absolute truths. Rather, it is to allow for complexity. None of this is to say that dynamic knowledge maps don't have something to add to the HSS – they do! But taking the time to read, thinking critically about the perspectives and biases of the researchers that have gone before you, and identifying gaps in knowledge that you, as a researcher, can help to fill are valuable processes.
Many thanks for these prompts! Yes, there is high value in literature reviews as regards developing critical thinking and research skills. But do those skills keep increasing if you (need to) do it over and over again? Also, it seems we tend to stick within our own fields, while there might be others offering more pertinent insights to the questions we have in mind. But we are often blind to them. Therefore we see a lot of fragmentation and "gated" communities, as shown in the graph on social innovation research. Plus, getting a good content-based (!) overview through dynamic knowledge maps wouldn't exempt people from thinking critically. I would argue the opposite is true: researchers could invest more into interpretation and developing original thought. So I believe both can be combined and existing virtues preserved, while pushing knowledge production and systemization.
Literature reviews are usually highly selective and skewed to research that favours the author’s position: work is often selected post hoc to make the current findings seem consistent with a prior body of work, while ignoring inconvenient discrepant findings. The idea of producing maps could in principle help draw attention to relevant work that might otherwise be neglected, but I see two problems.
1. It might end up making matters worse by entrenching some of the current biases. Back in the 1970s, Jerome Ravetz published Scientific Knowledge and its Social Problems and likened the publication cycle to an evolutionary process, where a paper had first to pass peer review, then get cited, and ultimately produce findings that were useful. The problem is that steps 1 and 2 are distorted by confirmation bias and other biases. There's a lot of good work that is either rejected for publication, or never cited, because it doesn't fit with other work. I wasn't clear whether or how your approach would handle that. If it could actually be used to document that kind of process at step 2, it could be useful in counteracting the biases, by showing how often work that doesn't fit in is ignored. But I'm assuming such work would just exist as an unconnected node.
2. In many fields, the devil is in the detail. I think it would be very interesting to try your approach with the literature on 5-HTTLPR and depression, which has recently been reviewed and shown to be a huge edifice based on quicksand. See https://slatestarcodex.com/2019/05/07/5-httlpr-a-pointed-review/. Studies which purport to be on the same topic have subtly different methods, with evidence that data dredging is used to find something to report, but with true replication seldom achieved. To my mind a test of your approach would be to see how it handles a literature like that: does it appear to indicate a solid body of interconnected work? If so, you have a problem.
Wow, Dorothy, thank you.
On 1. My idea is that in the future dynamic knowledge maps will be constructed and expanded as the research evolves, not after the fact. By this I mean that the contribution and relations of a new article to previous research would be assessed when it is "quality checked". For this to become more objective, I think it would be beneficial for peer review to become more open and community-owned rather than double-blind. This would also help decrease the risk of good research not making it through for reasons entrenched in the system and not connected to the value of the work. Nodes of the map would also not (only) be based on citations, but on contents, so that strong pieces of research could hardly be excluded, even if otherwise never cited. An assessment of the bias of each individual article could become an integral part of producing the maps, too (as in TRIP's evidence maps).
On 2. All the above, one would hope, would be able to detect the actual disconnect in the “seemingly connected” you describe; and in the ideal case prevent something like this from happening in the first place. I understand way too little of the research you refer to, but it seems to me that these snowball effects usually kick in for this exact reason: We see stuff that seems related from the outside (refs, citations, claims), but lack deeper insights into the content connections. To get at those we do meta-research, which is extremely relevant but pretty time consuming. My hope would be that dynamic knowledge maps can mitigate such problems.
I agree that most of this is not part of the post. Here is a Twitter thread that picks up some of these things, and might provide deeper, if admittedly scattered, thoughts on these matters. For you and all others who might be interested:
https://twitter.com/marcventresca/status/1129732132207845376
Nice article.
This comment just puts a big smile on my face! Thank you. Exactly, all you said.