Scientists rely not only on knowledge reflected in textbooks and papers, but on their intuitions and experience. Answering scientific questions requires imagining what might be the case and then exploring it. Eric Giannella argues that the uncertainty of science makes intuition and judgment essential. Yet the effect of metrics is to reduce the role of judgment. Even the most sophisticated set of metrics will not be able to account for scientists’ tacit knowledge.
Is there a scientific way to make science more productive? Many appear to believe there must be. In 2014, the UK’s Research Excellence Framework (REF) experimented with metrics intended to make assessment more objective. Instead, many academics and administrators found the process counterproductive and warned that relying primarily on metrics for research evaluation would be short-sighted. Similar efforts can be seen across the Atlantic. Five years ago, the U.S. National Science Foundation (NSF) gathered economists to create a “science of science policy”, in the longer-term hope of making more economically fruitful funding decisions. And the corporate demand for a more scientific approach is reflected in the glut of books promising to impart methodologies for improving research and development.
The main trouble with such efforts is not that they do not work (they do not); it is that they are difficult to abandon. Even when metrics backfire, they remain politically preferable to more qualitative, subjective approaches. Metrics carry legitimacy: they appear scientific.
Yet the legitimacy of metrics for research rests upon a caricature of science and scientists. Scientists are not merely computers plus a bit of creativity. Certainly, the way science is presented to us in grade school supports this naïve view. And for rhetorical effect, many scientific papers, and newspaper articles about them, convey the impression that scientists are akin to robots. If science were purely linear and deductive, perhaps a computer could someday derive hypotheses, design experiments, and interpret and present results. But those who have done research know that this simplified schema reflects only small fragments of the work behind any published finding. In fact, hundreds of studies illustrate how much more human and messy science is (brief review).
Most scientists know that science would not be possible without the interplay between human intuition and the intended objectivity of the research and publication process. The intuition essential to science is often captured by the narrower term judgment. Indeed, advocates of metrics for science regularly insist that they do not wish to replace judgment. Yet the effect of metrics, as anyone with common sense or experience in a publicly traded company knows, is to reduce the role of judgment. Moreover, once metrics exist, science funders will want to use them to avoid being held accountable for their own failures of judgment.
The uncertainty of science makes intuition essential. Without uncertainties, research would not be research. These are unquantifiable uncertainties: the possible outcomes and their likelihoods are unknown, making quantitative planning intractable. Answering scientific questions despite such uncertainties requires imagining what might be the case and then exploring it. In order to imagine what might be the case, scientists rely not only on knowledge reflected in textbooks and papers, but on their intuitions and experience, on a wealth of tacit knowledge. This is a hugely subjective and hugely valuable exercise. Put simply, without being able to rely upon intuition, scientists could no longer do science.
A couple of examples will help clarify what I mean by intuition. The chemist and philosopher of science Michael Polanyi wrote:
… among mathematicians I have heard a series of discoveries by one person described as follows: The first discovery is like a solitary island in a borderless expanse of sea. Then a second and third island are discovered without any apparent connection. But gradually it becomes clear that the waters are ebbing away in mass and leaving behind what were at first little isolated islands as the peaks of one great chain of mountains. That is precisely what one would expect to happen if intuition first sensed the fundamental chain of thought, i.e., the mountain range, and consciousness then proceeded to describe it little by little (1946: 36-7).
Intuition also informs the everyday activities that distinguish an experienced scientist from a novice: how to use equipment, analyze data, interpret findings, and effectively present one’s work. For instance, a classic study of its importance illustrated how physicists’ efforts to replicate a new laser were necessarily efforts to develop the tacit knowledge required to build it. Many scientists also testify to the importance of intuition in their work. Einstein wrote of physics, “There is no logical way to the discovery of these elemental laws. There is only the way of intuition, which is helped by a feeling for the order lying behind the appearance…” (1932: 10). More recently, the Physics Nobel laureate Theodor Hänsch remarked, “If I want not only to solve problems according to the book, but if I want to invent new tools, new directions, I need to develop some intuitive feeling…”
This very human dimension of science matters for governance. Investigating what might be the case requires suspending disbelief, and giving money to people who are suspending disbelief requires trusting their intuitions. Funders must acknowledge this: to fund science is to trust people’s intuitions.
By ignoring the importance of trust in people’s intuitions, metrics create two harms. First, they lead people to game the system. Second, they have social consequences within scientific fields that are arguably bad for governance. If we believe that experts are best suited to evaluate the work of other experts, we should want a system of oversight that encourages scientists to take one another seriously, not merely as competitors or gate-keepers. Metrics instead turn scientists’ attention away from the flesh-and-blood individuals with whom they interact and toward a scoreboard.
When we acknowledge that science must be done by real people, not robots that regurgitate explicit knowledge, it stops making sense to hope that metrics will someday improve scientific governance. Instead, other avenues open up. Tuning scientists in to the evaluations of their peers would require really reading one another’s work, and it would require more qualitative evaluations of merit. That may mean that fewer, better, more substantial papers become preferable to dividing research findings into as many “least publishable units” as possible. This would be a welcome change for those most interested in advancing knowledge.
A fatal implicit assumption behind the metrics movement is that trust in intuition can be taken out of the governance of science; yet intuition is critical to science. More frequent and in-depth discourse within and beyond scientific communities will provide better scientific oversight than even the most sophisticated system of metrics.
Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns about posting a comment below.
Eric Giannella is a PhD candidate in sociology at UC Berkeley. His published work related to this topic can be found here. In addition to the sociology of science, he studies social support. His dissertation examines changes in social life in the California Bay Area over the past forty years. His Twitter handle is @egiannella.