
Igor Martins

June 11th, 2025

The pressure to quantify research is erasing conceptual depth

Estimated reading time: 6 minutes

As universities increasingly prioritise research that can be measured, academics face pressure to translate complex ideas into quantifiable outputs. Igor Martins explores how metrics shape not only how knowledge is evaluated but how it is formed, raising questions about whether depth and ambiguity are sacrificed for comparability and clarity.


In universities, as in many other sectors, there is a growing emphasis on what can be counted. Researchers are increasingly expected to translate complex ideas into measurable findings that satisfy funders, journals and institutional benchmarks. The same logic that drives performance metrics in healthcare, standardised testing in education or impact assessments in public policy has become central to academic life. This shift affects what is researched, how it is framed and which findings are deemed legitimate.

Over time, these proxies become institutionalised, and we risk mistaking the measurement for the phenomenon itself.

These dynamics are not new. Concerns about how institutions privilege what can be counted have circulated for more than a decade, as Howard Aldrich noted when describing the hierarchies that arise from equating legitimacy with quantifiability. What began as a debate between qualitative and quantitative methods has since evolved into a deeper problem: institutions increasingly prioritise research that can be measured, which shapes not only how work is evaluated but also what kinds of questions are even conceived. Both qualitative and quantitative research are pressured into standardised formats that allow for comparison, making measurability itself a precondition for what counts as valid knowledge.

Today, the pressure to produce measurable outputs has become more entrenched. This is especially the case for early-career researchers working within funding, publication and evaluation systems that prioritise comparability over complexity.

The pressure to transform research into something measurable

I encountered this tension during my postdoctoral fellowship at Lund University, where I was involved in a project developing a new theory of economic catching up. The central idea was that countries close development gaps not by growing faster but by shrinking less. The theory relied on the concept of social capabilities, which includes abstract but essential dimensions such as transformation, inclusion, accountability, autonomy and stability. My task was to translate these into something measurable.

Each proxy I proposed, whether an indicator of political inclusiveness, economic diversification or institutional resilience, met the same critique: not clear enough, not measurable enough, not grounded enough. And the critique was fair. These were difficult concepts to define clearly. But I noticed a shift in how we were approaching the problem. At one point, after another round of pushback, I found myself saying: “If we ever want to present this to a policymaker, we cannot just say ‘improve your social capabilities.’ We need to give them a metric. Something they can track over time.”

In that moment, I was not just defending the work. I was conceding to a logic I was not fully comfortable with. The demand for a metric had begun to displace the conceptual work itself. I had internalised the pressure to quantify. I had started with the hope of clarifying a concept, but I ended up simplifying it. I did not resort to metrics because the ideas were empty or vague; they were theoretically substantive, but that was no longer enough.

To be seen as viable, these ideas had to be translated into measurable terms. In trying to make the work legible to others, I had accepted that it needed to be countable first. That recognition stayed with me. It became harder to ignore the extent to which the demand for quantification was not just shaping how we present ideas, but how we form them in the first place.

The problem of comparing measurable research findings

These problems become more acute when evaluation moves beyond individual projects or researchers to comparisons across disciplines. As Jon Adams has argued, attempts to rank thinkers or entire academic fields rest on the assumption that fundamentally different kinds of knowledge can be assessed using shared standards. Yet disciplines are often task specific. What counts as excellence in history may be irrelevant in physics, and what works in economics may fail entirely in philosophy. The more we enforce uniform metrics, the more we risk erasing the very differences that make intellectual diversity valuable.

Proxies, after all, do more than just measure. They shape what we consider measurable. As the availability of data begins to filter which questions are asked, entire disciplines risk reorganising themselves around what can be tracked. In some fields, careers are built on refining proxies for concepts that remain conceptually unstable. Over time, the pursuit of clarity can displace conceptual depth.

In economic history, for example, applying fixed metrics across wide historical distances can lead to what Professor Gareth Austin called the compression of history, where categories like “institutional quality” impose continuity onto fundamentally different social and economic realities. Does it even make sense to apply the same metric to cohorts of individuals who may have lived centuries apart? Sure, we can keep the unit stable, and thus comparable, but we risk projecting modern assumptions onto people whose lives were shaped by entirely different cultures.

Even so, measurement remains one of the few ways that abstract ideas can leave the page and enter the world. It enables comparison, replication and portability. These are qualities that allow ideas to circulate and scale. While no metric perfectly captures the concept it represents, measurement offers a way to make sense of research findings and take action as a result.

This is especially visible in development economics, where indicators like GDP or institutional quality guide funding decisions and shape research agendas. Similar tensions emerge in well-being research, where subjective self-reports compete with biomedical or behavioural measures, and in academic evaluation, where influence is often inferred from citation counts and journal rankings. In each case, measurement creates a shared reference point. When those points harden into fixed objectives, however, the indicator can begin to obscure the idea it was meant to approximate.

The challenge is not that we measure, but that measurement becomes too easy to rely on and too difficult to interrogate. When a concept is absorbed by its proxy, we stop asking what it was meant to capture in the first place. Metrics should clarify, not simplify. They should sharpen thinking, not foreclose it. If measurement is to remain a tool for understanding rather than a substitute for it, we need to stay vigilant not only about how we measure but about what disappears when measurement becomes routine. Ambiguity is not always a problem to be solved. Sometimes, it is what makes a concept worth measuring at all.


The content generated on this blog is for information purposes only. This article gives the views and opinions of the author and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns about posting a comment below.

Image Credit: TarikVision on Shutterstock.


About the author

Igor Martins

Igor Martins is an economic historian whose research focuses on African economic history, with particular attention to slavery, labor, and colonial legacies in Southern and West Africa, as well as resilience to economic shrinking in the Global South. He received his PhD from Lund University, has held postdoctoral positions at Lund and the University of Cambridge, and was a visiting scholar at Stellenbosch University. He teaches at Lund University School of Economics and Management and serves as a consultant to STINT (The Swedish Foundation for International Cooperation in Research and Higher Education).

Posted In: Research evaluation
