
Juan Pablo Pardo-Guerra

May 2nd, 2023

When it comes to research culture, why do folktales carry more weight than evidence?


Estimated reading time: 7 minutes


Academia is nominally an evidence-based profession, but when developing policy around academic careers and management, higher education institutions more often than not draw on individual experience and anecdote rather than on systematic knowledge. Discussing the research behind his recent book, The Quantified Scholar, Juan Pablo Pardo-Guerra argues that higher education still has much work to do before it can implement findings from the ‘science of science’.


Not too long ago, I was involved in an engrossing conversation with other sociologists around an always important issue: how best to support doctoral students throughout their degree. The question framing our exchanges was, more specifically, how best to support students’ research and publication strategies. To prepare students for a competitive job market, went the conversation, we needed to foster greater levels of collaboration between PhD students and faculty. This required creating more structural opportunities for students to work with us, under the assumption that this would lead to more robust research training, a more substantial record of publications, and thus greater odds of successful placement.

The conversation I had with other sociologists was not unique. I’ve heard it, and had it, at other times and in other places (at conferences, in harried hallways, and slowly over dinner). What made this instance intriguing, however, was how it made clear the assumptions held by academics about how they ought to organise their workplace. Coming to this conversation after several years enmeshed in the literature on metrics, scientific careers, and the sociology of science more generally, I could see that the ideas proposed, however well intended, were untethered from a growing body of evidence on how contemporary academia works.

Research may exist on what works and what doesn’t, but making that research relevant (getting science to actually “believe in science”) becomes a Sisyphean struggle.

You see, going against the grain of our best intentions, studies of collaboration between doctoral students and their advisors tend to highlight the importance of independent publication paths. In a formidable study of mentor-protégé relations among 40,000 STEM scientists, Ma, Mukherjee and Uzzi find that scholars who pursue research interests different from those of their direct advisors, and who publish but a few papers with them, have more successful academic careers overall. This is confirmed by a study of computer scientists by Rosenfeld and Maksimov, who go so far as to conclude that “young computer scientists should be reasonably encouraged to stop collaborating with their doctoral advisors–the sooner, the better”. That doesn’t mean mentorship doesn’t matter: as Wuestman et al. show, more numerous and diverse mentorship relations have a positive effect on academic careers. It means, however, that betting on collaborations is not the most sensible strategy.

For over a century, scientists have studied their own labour and its products as legitimate objects of inquiry. From the early literature on bibliometrics to the recent ‘science of science’, from philosophical discussions about demarcation to recent contributions in science and technology studies, from ethnographies to large computational analyses, we have amassed a mountain of evidence about our craft, its challenges, and its various possible points of intervention and reform. We know science like we know the mechanisms of social class. We know science like we do the genomes of living things. We know science like we do the complicated climate models that portend our collective future. We know. Yet all too often, knowledge about our craft is detached from its regulation. Research may exist on what works and what doesn’t, but making that research relevant (getting science to actually “believe in science”) becomes a Sisyphean struggle.

Writing The Quantified Scholar, I encountered this as a recurring theme. The book shows how research evaluations shape the production of social scientific knowledge in Britain. The top-level claim is that, evaluation after evaluation, the epistemic and organisational diversity of the British social sciences has narrowed in step with these repeated assessments. I hoped the claim would serve as an instrument for action. Yet, in making sense of these effects, I found colleagues resorting to something closer to the folk methods described by ethnomethodologists (shared, tacit, and habituated ways of explaining the world) rather than to organised knowledge about our fields. When talking about how the assessments affected our knowledge practices, for example, scholars described independence where the literature notes a contested process of active shaping. When thinking about how planning for the assessments occurs within our organisations, scholars spoke in almost naturalistic terms, without considering the possibility of other documented best practices. And when shown evidence of how we, as scholars attuned to a particular vocational view of the world, co-produce the effects of the research evaluations, I encountered incredulity. On one memorable occasion, upon presenting the book to an audience, I was asked why I blamed academics rather than managers for our woes. We are, too, responsible for giving power to the numbers through which we are managed, I responded, only to be met with a grumble of disapproval. Shortly after, the next speaker was introduced, along with the list of notoriously well-cited journals where they had published. The numbers, given life by the scholars in the room.

This is not entirely surprising. As academics, few of us know of other options. We may be trained in methods and theories, in coding languages and historiographies, but we are rarely inducted into more reflexive knowledges about ourselves as professional groups. Many academics learn to be academics largely by “winging it”, following what others did, emulating what worked (however imperfectly) in the past. Handbooks about academic life exist, but they tend to be reading materials for those already in despair rather than for those coming into the profession. This is, of course, particularly ironic for a professional group devoted to producing knowledge about the world. “Physician, heal thyself”, it is said, unless you are an actual doctor.

We may be trained in methods and theories, in coding languages and historiographies, but we are rarely inducted into more reflexive knowledges about ourselves as professional groups.

The central conclusion of The Quantified Scholar is not descriptive (that research evaluations make for more homogeneous fields), but prescriptive (that knowing how evaluations and our profession work is a powerful instrument for change). The reflexive solidarities it calls for are precisely the type of science-based interventions that could make our working lives both more balanced and fairer. We may think, for example, that the best way of helping PhD students is by collaborating more with them, but this would only serve the assumptions of our own habituated way of being academics. A better form of solidarity, a better commitment to their work, would come from really thinking about how to foster successful independent research, supported through mentorship. This may involve changes in teaching. It may require changes in the design of the programme. But it necessarily involves seeing the data, judging what works, evaluating evidence, and testing interventions.

As a new cycle of research evaluations takes shape in the United Kingdom, reflexive solidarity becomes particularly important. This is the formative period when decisions about internal institutional processes, the structure of mock assessments, the nature of administrative rules, and other critical choices are made. But this formative moment is also one for seeing in the science of science, in what we know about how knowledge is produced, the roads and alternatives available to UK academics at both the institutional and sectoral levels. What will happen with an increased emphasis on impact? Are there opportunities to introduce alternative metrics that attenuate some of the problems of those used, explicitly or by proxy, in the past? While specific simulations do not exist, we can count on a wealth of expertise. Indeed, the elements for a more reflexive professional exercise are widely available: a prominent example is the work coordinated by the Research on Research Institute at the University of Sheffield, visible in the recent report by Curry, Gadd and Wilsdon, Harnessing the Metric Tide. Being attentive to these, and other, knowledges about ourselves matters fundamentally. Let’s be scientific, with all the caveats involved.

 


The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: Ryoji Iwata via Unsplash.



About the author

Juan Pablo Pardo-Guerra

Juan Pablo Pardo-Guerra is an Associate Professor of Sociology at the University of California, San Diego, a founding faculty member of the Halicioğlu Data Science Institute, co-founder of the Computational Social Science program at UCSD, and Associate Director of the Latin American Studies Program at UC San Diego. His research concerns markets and their place in contemporary societies, with an emphasis on finance, knowledge, and organizations.

Posted In: Measuring Research | Research evaluation
