Research is often reported and assessed in singular rather than aggregate terms: single papers, datasets, and findings. As debate continues over how research syntheses are valued within national research systems such as the REF, Michael Matthews and Thomas Kelemen argue for the benefits of meta-research in communicating and making sense of research.
Headlines often proclaim, “science shows…” or “a recent study proves…” Given that many of our beliefs rest on anecdotal insights, such evidence is often welcomed wholesale. This practice is certainly a step in the right direction: consuming research is part of what it means to be “data-driven.”
Yet what goes unrecognized are the tradeoffs and limitations of any single study. When first reading a research article, it can be hard to identify what happened (or did not happen) in the background. Clearly, translation needs to occur between the specialized, highly technical process of generating knowledge and the dissemination of that information to policymakers and everyday consumers of information. One option is to have the scholars who conducted the research explain their own findings. However, this approach may cause even more confusion.
Case in point: statisticians often disagree with each other. In the famous SFFA v. Harvard case, two economic experts (one from Duke and one from UC Berkeley) testified in court for the plaintiff and the defendant, respectively. Between the two data-driven perspectives, which should be trusted? Further, like the rest of the world, the scientific community has bad actors, and some studies are downright lies built on fabricated data. Overall, it can be hard to contextualize, trust, or even know how to interpret an individual academic study. Readers rarely have the wherewithal to evaluate each article against the backdrop of the scientific community.
Enter meta-research. Meta-research investigates the processes and norms through which knowledge is generated. Instead of collecting new data, meta-research takes published findings as its input, and in many ways it is the lynchpin of science. Or, in the words of Adam Grant in his best-selling book Think Again, “in an ideal world, every insight would come from a meta-analysis.” Indeed, some of the most highly cited articles are meta-research, because scholars often reference these manuscripts for their ability to synthesize a vast body of literature.
We are organizational psychologists who specialize in this practice and have published multiple meta-research articles. Across these projects, we have learned a few lessons about the value that meta-research offers. We recently worked with several other scholars (Nicolas Bastardoz, Gwendolin Sajons, Tyler Ransom, and Samuel Matthews) on a meta-research project in which we systematically evaluated 77 studies that use a statistical technique known as instrumental variables. Instrumental variables are helpful for making causal claims but must be used with care; hence our meta-research project. Using this manuscript as an example, we highlight the strengths of meta-research.
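To give non-specialist readers a sense of what instrumental variables do, here is a minimal, self-contained sketch on simulated data (all numbers are hypothetical and not drawn from our study). With a single instrument, the IV estimate reduces to the Wald ratio cov(z, y) / cov(z, x), which discards the variation in x that comes from an unobserved confounder:

```python
import random
import statistics

random.seed(42)
n = 50_000
true_beta = 2.0  # hypothetical "true" causal effect of x on y

z, x, y = [], [], []
for _ in range(n):
    zi = random.gauss(0, 1)                  # instrument: moves x, but not y directly
    ui = random.gauss(0, 1)                  # unobserved confounder
    xi = 0.8 * zi + ui + random.gauss(0, 1)  # x is endogenous: it shares u with y
    yi = true_beta * xi + ui + random.gauss(0, 1)
    z.append(zi)
    x.append(xi)
    y.append(yi)

def cov(a, b):
    """Sample covariance (divisor n is fine here, since it cancels in ratios)."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

beta_naive = cov(x, y) / cov(x, x)  # ordinary regression slope: biased upward here
beta_iv = cov(z, y) / cov(z, x)     # Wald/IV ratio: uses only z-driven variation in x
```

On this simulated data the naive slope overshoots the true effect, while the instrumented estimate lands close to it. Crucially, that rescue works only if the instrument is relevant and affects the outcome solely through x, which are exactly the assumptions our review examined.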
Meta-Research Provides a High-Level Synthesis
Science advances slowly and incrementally. Of course, this pace is necessary to ensure that conclusions are relatively stable; the consequence, however, is that any one study is extremely narrow and focused. Consider a paper published in one of the most prestigious management journals, which featured only one hypothesis:
The positive association between alter’s task competence and ego’s likelihood of seeking alter out for task-related interaction is smaller when ego has negative affect for alter than when ego has positive affect for alter.
If you are like us, you may need to reread that statement. Of course, this hypothesis has implications for our understanding of organizations and is very well tested. But this level of detail is unapproachable if it is not eventually translated and contextualized for a broader audience.
The first advantage of meta-research is that it provides a high-level snapshot of a general phenomenon. Meta-research typically covers a lot of ground, which keeps the purpose of the paper generalizable and “big picture.” For example, a major portion of our article focuses on how instrumental variables fit into the larger project of making science more robust. Even if you are not an expert on this topic, our manuscript is designed to contextualize how everything fits together by taking a step-by-step approach. This broad coverage ensures that the findings and implications are connected to a recognizable phenomenon, which is a must-have for non-academic readers.
Meta-Research is Balanced
The primary strength of the peer-review process is that it helps prevent sloppy or misleading research from gaining legitimacy. Make no mistake: science is challenging to conduct properly. By subjecting a paper to scholars in the field, this “referee” process increases the quality of science.
A tradeoff of this feature is that authors are highly motivated to present findings that will be published. Thus, any one study is generally forced to tell only one side of the story. Instead of recognizing how a particular argument could cut both ways, papers often adopt a formal hypothesis and then test it. Too often, specificity and confidence are prioritized over open-mindedness and humility.
The second major advantage of meta-research is that it is primarily descriptive, and thus, the authors are relatively agnostic to any particular “side.” Instead, scholars generally try to present a balanced overview of the state of the field. For example, in our review of instrumental variables, we discuss the various ways in which this technique is operationalized across multiple different contexts (e.g., studies on CEOs, top executives, and frontline supervisors). Our paper aimed to be as exhaustive and representative as possible. Other excellent reviews will similarly try to be faithful to multiple paradigms and approaches. In this way, meta-research is one of the best sources to find a balanced approach.
Meta-Research is Introspective
Scientists are often held up as “experts,” and indeed, the journey to a Ph.D. is a rigorous process that requires specialization. However, this title can obscure the reality that scientists are people, with all the shortcomings, biases, and blind spots that entails. Individual people are fallible, and as such, science often zigzags: two steps forward, one step back. Unfortunately, when a scholar’s finding gains media coverage, this shortcoming may be overlooked or brushed aside.
However, when meta-research zooms out to compare various approaches and findings, readers can more easily identify inconsistencies and weaknesses by focusing on scientific consensus. For example, in our paper we report that only 22% of the studies satisfied one of the core assumptions of instrumental variables; only a handful applied the statistical method correctly. That is, of course, a problem, and we challenge the field to improve with a step-by-step guide to using instruments. Other reviews have likewise raised red flags and called on the field to reconsider the status quo. Indeed, “best practices” for meta-research stipulate that writers be critical and reveal the field’s weaknesses.
In 1907, Francis Galton (statistician and leading proponent of eugenics) attended a fair and asked onlookers to guess the weight of an ox. After aggregating everyone’s estimates, he found that the median response was within 1 percent of the true value. Even more impressive, later reanalyses found that the mean of the collective was essentially perfect. There is a lesson here for scholars, policymakers, and everyday consumers of research. Good science is distributed, and no one has a monopoly on it. Instead of focusing on any one expert’s opinion or arguments, we should appreciate resources that advance the collective wisdom of science.
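Galton’s aggregation effect is easy to reproduce in a few lines. The sketch below simulates a hypothetical crowd of noisy guessers (the crowd size and noise level are illustrative; only the 1,198 lb figure follows the common retelling of the story) and compares the crowd’s mean and median against a typical individual’s error:

```python
import random
import statistics

random.seed(7)
true_weight = 1198  # pounds; the ox's weight as usually quoted in accounts of Galton

# Hypothetical crowd: each guess is the truth plus independent noise.
guesses = [true_weight + random.gauss(0, 80) for _ in range(800)]

crowd_mean = statistics.fmean(guesses)
crowd_median = statistics.median(guesses)
typical_error = statistics.fmean(abs(g - true_weight) for g in guesses)
# The error of the aggregate shrinks roughly with the square root of the
# crowd size, so the mean and median beat almost every individual guesser.
```

The same logic is why a meta-analysis, which pools many imperfect studies, can be more trustworthy than any single one of them.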
The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.
Image Credit: LSE Impact Blog via Canva.