You-Na Lee and John P. Walsh argue that the solution to the rising incidence of unreliable findings and research pathologies does not necessarily lie in preventing individual malpractice, but in promoting structural research integrity and building better research teams and organisations.
There is increasing concern amongst the scientific community, policymakers and the general public about the unreliability of science. This concern has been driven by stories of scientific fraud, high-profile retractions, failures to reproduce well-known findings and other signs of pathologies in the science system. Much of this discussion focuses on individual-level deviance and the need for social control. However, science is now primarily a team sport, and the organisation of these teams can be a critical source of weakness that increases the likelihood of pathological outcomes. For several years now, we have studied scientific teams as work organisations. This research has led us to consider the structural causes of pathologies in science. In particular, we focus on the division of labour in research teams and how it can generate research pathologies, potentially leading to retracted papers and irreproducible results.
Two high-profile examples of research pathologies illustrate our point. In the David Baltimore case, Baltimore described his inability to independently judge whether the results he was receiving from colleagues on his research team were sound: “The study that gave rise to the paper was conducted as a classic collaboration, with each laboratory performing independent research in its particular area. … Imanishi-Kari provided the expertise in serology that I lacked.” Similarly, the investigation of accounting professor James Hunton found “Dr. Hunton was the sole source of the data used in the analyses, and he only provided his co-authors summaries of the data supposedly collected from field research at the CPA and consulting/training firms. In both cases, he told his co-authors that he was subject to strict confidentiality agreements that prohibited him from disclosing the identity of the firms even to them.”
This balkanised division of labour can make scientific projects vulnerable to pathologies for a variety of reasons. Such structures make it unlikely that other researchers know the details of how one part of the study was done, creating problems of communication and mutual misunderstandings about what is being done at each stage. One person might collect and prepare samples, while another runs the assay to generate the data, and a third does the statistical analysis, with each team member viewing the requirements of the inputs and outputs of these processes differently. As these interim results are passed from one person to the next, they are also stripped of the uncertainties and contingencies associated with the local context, leading to over-confidence in possibly shaky results.
Responsibilities may also diffuse across the division of labour and status hierarchies in research teams, with each team member assuming that the others will have checked and re-checked methods and results. This can lead to problems falling through organisational cracks, and can encourage modifications, errors and even falsifications made in order to satisfy the requirements imposed on each team member to get her work done quickly. All of these vulnerabilities are exacerbated by the high level of trust associated with collaborations among scientific peers, which makes researchers reluctant to rigorously question the interim procedures and results of their collaborators. Furthermore, such a division of labour can generate a ‘hired hand’ mentality, whereby researchers lose commitment to the overall project and focus their efforts instead on producing just enough to get their colleagues off their backs. On top of all these vulnerabilities are the problems caused by individuals who are willing to abuse the trusting nature of research collaborations. However, the vulnerabilities we highlight do not require anybody to have ill intent: simple error and miscommunication are sufficient to generate pathological outcomes. Such team structures are thus vulnerable to both malfeasance and ‘honest mistakes’.
To test these arguments, we used data on papers published in PubMed. Our results revealed that the greater the share of tasks performed by exactly one researcher on the team, the higher the probability that the paper was retracted. We thus detect two faces of the division of labour in scientific teams. Specialisation may increase team productivity, which is critical to survival in an increasingly competitive academic environment. At the same time, it can also make projects vulnerable to pathogenic outcomes.
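To make the measure concrete, a minimal sketch of how such a “solo-task share” could be computed from a paper’s contribution statement is shown below. This is purely illustrative, not the code or data structure used in our study: the function name and the representation of contributions as (author, task) pairs are assumptions.

```python
from collections import defaultdict

def solo_task_share(contributions):
    """Share of tasks performed by exactly one team member.

    contributions: list of (author, task) pairs drawn from a paper's
    contribution statement, e.g. [("A", "assay"), ("B", "statistics")].
    (Hypothetical representation for illustration only.)
    """
    workers_per_task = defaultdict(set)
    for author, task in contributions:
        workers_per_task[task].add(author)
    # Count tasks that only one person touched
    solo = sum(1 for authors in workers_per_task.values() if len(authors) == 1)
    return solo / len(workers_per_task) if workers_per_task else 0.0

# Example: three of the four tasks are done by a single person
pairs = [("A", "design"), ("B", "design"),
         ("A", "samples"), ("B", "assay"), ("C", "statistics")]
print(solo_task_share(pairs))  # 0.75
```

On this kind of measure, a paper where every task is handled by a lone specialist scores 1, while a paper with redundant, overlapping task assignments scores closer to 0.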
A key implication of our research is that recent attempts to address this problem through individual-focused ethics training, changing university cultures to emphasise research integrity, and altering journal review mechanisms to detect misconduct are insufficient. We also need to change the practices of team science to emphasise redundancy and mutual understanding of tasks in the division of labour: independent checking, rotation of tasks, cross-training of team members, and overlapping the task sets at the interface between two researchers to increase their joint understanding of the problems being addressed. All of these could increase the reliability of science, but they may also reduce the productivity of scientific teams. This should motivate a debate about whether we are over-emphasising (often incremental) productivity gains at a possibly substantial cost to reliability.
We posit that structural-level research integrity can reduce the risks posed by individual wrongdoing. Conversely, a lack of structural research integrity can increase pathological outcomes even without any individual-level malfeasance. While we still need to put effort into training individuals to be ethical and diligent, we should also develop and implement training modules and best-practice guidelines for designing research teams that are reliable rather than pathogenic.
This blog post is based on the authors’ co-written article, “Pathogenic organization in science: Division of labor and retractions”, published in Research Policy.
Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.
Featured Image Credit: David S. Goodsell via Wellcome Collection (licensed under a CC BY 4.0 license)
About the authors
You-Na Lee is Assistant Professor in the Lee Kuan Yew School of Public Policy at National University of Singapore. Her research applies organization theory to the study of science and technology, and uses a sociological lens to examine the innovation process and explain innovation outcomes. Her research areas include technology and innovation management and science and technology policy.
John P. Walsh is Professor of Public Policy and Affiliated Professor of Strategic Management at Georgia Institute of Technology. A sociologist by training, Walsh studies the work and organization of academic and industrial research and how these are affected by the policy environment. Key themes in this research are collaboration, creativity, and learning.