August 19th, 2011

Measuring thoughts and thinkers: why the ongoing conflict about measuring the value of science and humanities may be ultimately fruitless


Small competitions, such as the BBC/AHRC New Generation Thinkers scheme, preserve the essential features of larger assessments of research quality such as the REF, argues Jon Adams. But what does it mean to compare achievements in such disparate fields as history and physics?

The BBC and AHRC recently ran a joint competition to locate ten ‘New Generation Thinkers’, a talent scheme for “emerging academics with a passion for communicating the excitement of modern scholarship to a wider audience and who have an interest in broader cultural debate”. The chosen ten for 2011 were Alexandra Harris, University of Liverpool; Corin Throsby, University of Cambridge; David Petts, Durham University; Laurence Scott, King’s College London; Lucy Powell, University College London; Philip Roscoe, University of St Andrews; Rachel Hewitt, Queen Mary, University of London; Shahidha Bari, Queen Mary, University of London; Zoe Norridge, University of York; and, finally, myself.

When the results came in, some people inevitably felt that the wrong ten had been chosen. The objections raised an interesting counterfactual: if not us, then who? And that, in turn, raises a further question: what criteria ought to be used to select the ten best thinkers? Is it even a sensible task?

Maybe ten is the problem – maybe picking exactly ten demands a precision that isn’t available. So let’s say the ranking is looser – just corral the top 500, without ordering within that set. That looks easier, but it would still be a ranking problem – it would still have threshold cases (who would be number 501?), and it would still have composition problems (what if the top 500 were all physicists?).
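To make the point concrete, here is a minimal sketch in Python – a toy illustration, not anyone’s actual methodology. The candidates are invented, and each is assigned a single comparable ‘score’, which is precisely the assumption under dispute. Even an unordered top 500 requires such a score for everyone, produces a cutoff that someone narrowly misses, and does nothing to stop one discipline dominating the set.

```python
# A toy illustration (hypothetical candidates and scores) of why an
# unordered "top 500" is still a ranking problem.
import random
from collections import Counter

random.seed(0)

disciplines = ["physics", "history", "literature", "economics"]

# Each candidate gets a single comparable score -- the contested assumption.
pool = [
    {"id": i, "discipline": random.choice(disciplines), "score": random.random()}
    for i in range(10_000)
]

ranked = sorted(pool, key=lambda c: c["score"], reverse=True)
top_500 = ranked[:500]

# Threshold case: someone still has to be number 501.
print("score of number 500:", ranked[499]["score"])
print("score of number 501:", ranked[500]["score"])  # excluded by a hair

# Composition problem: nothing stops one field dominating the set.
print(Counter(c["discipline"] for c in top_500))
```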

You might object on Kuhnian grounds, complaining that the really good thinkers would be impossible to identify from within our current cognitive framework – that identifying good thinkers is always a retrospective task, possible only after the revolutions their thoughts trigger.

Research bodies and competition

But while that’s a conscientious objection, it’s not a useful one when we really need to rank thinkers. And we really do. Even if the question of how to rank thoughts and thinkers is relatively frivolous and inconsequential in this form, it has a serious version: this type of competition is a pocket-sized version of the problem faced by research bodies and government agencies tasked with distributing finite (and increasingly scarce) resources. Scaled up, the analogous problem – identifying which institutions, departments, and academics to support, and ultimately how much to spend on which universities – is hugely significant. And that task, too, is a task of ranking thoughts and their thinkers.

However distasteful the task, it needs to be done. So how? Well, I don’t have a how. What I do have are a few whys that might point to (or usefully exclude) some hows. To wit: why is this sort of macro-level assessment so upsetting for academics? Why is “impact” particularly upsetting? And why, more generally, might the problem of macro-level assessment be intractable here?

Ranking thinkers, ranking disciplines

Measuring intelligence has had a bad reputation for a long time now. Indeed, one thing intelligent people seem to agree upon is that intelligence isn’t the kind of thing that can be quantified. But although there is widespread resistance to measuring intelligence in general, there is much less resistance to measuring specific intelligences. Examinations (for matriculation) and peer review (for publication) provide quite fine-grained means of ranking academics.

Generally, we have good mechanisms for ranking within disciplines, but we are much less sure how to rank between them. There might even be something fundamentally inappropriate about this sort of cross-disciplinary ranking. Each discipline is surely good for a different purpose, and, as Richard Rorty once pointed out, it doesn’t make sense to rank the tools in a toolbox. On this account, disciplines are task-specific and therefore categorically incomparable: there is no ‘level field’ on which they can be sensibly compared. Trying to find the best thinkers is like asking “who is the world’s best sportsperson?” – the question will remain divisive (and any answer elusive) because the category is simply too broad.

To expand: the attributes that make Rory McIlroy a good golfer contribute very little to Usain Bolt’s success, and by the time you have stripped away all the discipline-specific skills separating McIlroy from Bolt, all you have left is two healthy and dedicated men in their early twenties. The common denominators are too common to capture the particularity of each man’s achievements. Perhaps academic disciplines are similarly incommensurable?

The “two cultures” debate

“Comparisons between orders so disparate are meaningless,” said F. R. Leavis, replying to C. P. Snow’s claim that scientific argument was “usually much more rigorous and almost always at a higher conceptual level, than a literary person’s arguments.” The exchange was part of the “two cultures” debate, fought 50 years ago over two competing definitions of culture. The heat of the argument was generated by the subtext – who was smarter: people in the humanities, or people in the sciences?

By the 1990s, when the conflict reignited as the ‘Science Wars’, that subtext had become explicit – with Raymond Tallis, a medical doctor and philosopher, declaring that “those whose experience of disciplined thought has been confined to the humanities… cannot be considered adequately educated.”

This ongoing conflict between the sciences and the humanities has been stimulating and productive for scholars on both sides precisely because the two do such different things that they cannot usefully be compared: disputants are unlikely ever to find enough common ground to achieve argumentative closure.

Again, were comparisons at this level meaningful, the dispute about which is best wouldn’t still be ongoing. So Leavis was right. But for the funding allocation problem, comparisons between orders so disparate are necessary.
