The Research Excellence Framework (REF) is too busy playing catch-up with American styles of impact assessment to notice that its model is tired, old and outdated, argues Danny Quah. Any assessment of academic impact must include engagement with the public, and therefore must acknowledge the growth of academic blogging.
Mark Thoma’s thoughtful article, “New Forms of Communication and the Public Mission of Economics: Overcoming the Great Disconnect”, describes the factors that, through the 1980s and after, led to academic economics disengaging from its long-standing public mission: addressing the questions important to society.
Once it started to withdraw, academic economics became ever more self-contained and self-affirming. Along that path these developments encountered no reality check or market test. The profession grew to have no way to ask how the questions it addressed might matter to anyone – to anyone, that is, beyond those inside the profession itself involved in posing and answering those questions. Instead, the profession developed a disdain for those outside it – government economists, business economists, journalists, the general public – who were concerned with matters it considered mundane. Academic economics saw a choice between only two extremes: one of super-streamlined professionalism, the other of ambulance-chasing opportunism – and it convinced all the PhD students it could find that there was only one way to go. The system faced no countervailing pressure to change.
Economics no longer had a public mission; it had turned its back on the rest of society. Thoma’s earlier op-ed pointed out:
“How much confidence would you have in the medical profession if the teaching faculty in medical schools had very little experience actually treating patients, and very little connection to – even a lack of respect for – the practitioners in the field? Would your confidence be improved if medical research had little to do with the questions that are important to the doctors trying to serve patients?”
Fortunately, however, this disengagement has begun to turn around, not least since the global economic crisis following 2008 but also, a little before then, through academic economists – top-flight respected researchers – communicating again directly with the public. In Thoma’s analysis, it is blogging – with all the attendant openness, immediacy, and direct connection with the readership (facilitated by supporting information and communications technology) – that has brought economics back to its public mission of understanding, explaining, and convincing on questions that matter. This does not replace research. But it breathes life back into it and suggests why certain kinds of research have genuine validity.
The inroads from there, moreover, have allowed economists again to have the confidence to engage openly with journalists, with policy-makers, and with a suspicious public nonetheless eager to learn. This not only improves research but raises economic and financial literacy. We cannot pretend to value the ideals of liberal democracy if we don’t think it important that the general public understands better what happens around them.
Thoma’s examples are almost entirely US, and that is appropriate. That is where change has been greatest.
But this makes me wonder if, in the UK, in our own headlong RAE/REF-directed rush to academic excellence, we are now following the path that, in Thoma’s analysis, is already old and tired – i.e., from the pre-blogging era. What passes for hiring/firing discussion in many economics departments is rumour mixed with currency: a researcher with four publications in the top 4* journals is worth, in UK government REF-derived funds, £100,000 a year. So hiring someone in that category is, upon amortization, a half-million pound proposition. Some departments might even mortgage an expensive hire like that today, discounting against future REF income prospects. (Does anyone else think this resembles a subprime mortgage deal?) Impact studies might count – so if some social scientists developed a new pharmaceutical assembly line, that might raise your REF income.
Engagement with the public? ‘Sorry, that’s not in the REF. The 4* Americans don’t do that, you see’.
This post was originally published on Danny Quah’s personal blog.
I had just published this post this morning on my HASTAC blog. The truth is, many in academia did not start taking blogging seriously until there was funding for it. Now let’s hope there’s also funding for those who have been blogging against all odds.
But engagement does factor in the REF. The impact case studies allow you to show that you are engaging in the broadest sense (with businesses, government, and the public), and engagement could also play a part in the assessment of the research environment. I think there is a positive move towards valuing those who engage in more applied research and research that is user-involved.
And, although publications are the most standard form of research quality assessment, REF does allow for you to submit a patent, program, sculpture even… how they would be fairly assessed I don’t know.
Granted REF is not a perfect system (and perhaps can never be), but there is progress from RAE 2008 surely?
But the REF does include public engagement…
From the REF guidance on submissions [pdf], page 30
At least one of the Lang/Lit/Culture case studies references a Twitter feed. How much more public engagement do you want?
The case studies are here. It’s made very clear that a broad range of possible forms of public engagement can ‘count’ for impact.
I am concerned by this post, as it seems to place engagement with the public, journalists and blogging at a very high priority. Obviously, it is good to disseminate your work and to form part of the public understanding of an issue. However, all these things are missing one vital thing: peer review. I could write a blog that said “Heart disease is clearly linked to moral weakness, because it is poor people and non-Caucasians that get higher rates of heart disease”. Now, journalists and the public may like that idea, and may widely spread it. However, any cardiologist or any scientist would immediately dismiss my suggestion for lack of evidence, and for lack of consideration of other explanations. Trusting the public to properly critically appraise data is currently unreasonable. Yes, there are problems with the REF, but its failure to acknowledge non-peer-reviewed publications is a strength.
The article does not suggest that a peer-review system never produces good science, fine scholarship, or useful findings. Hardly. It does, however, point out that such a mechanism displays extreme persistence, and therefore initial conditions matter greatly. Other situations characterized by such internal momentum include, for instance, asset market bubbles – where it is the speed of recent history that determines future dynamics – at least until external validation provides a reality check. Or, imagine a situation where only top bankers decide each other’s salaries. The rest of society, quite rightly, insists on asking for some proof of how their contributions actually measure up.
Again, to be clear, there certainly exist instances where peer assessment produces the best results. But one doesn’t have to look very hard to find situations where it does not.
I don’t think either extreme – exclusive reliance on peer review or its total denial – is useful. The question is how we find the optimal balance.
I hear that publication assessment is at 60% or so, but that the impact assessment (an impact case study) is at 20% for the next REF. However, only one impact case study is submitted for every 10 FTE. That potentially creates a multiplier effect for impact, such that a 4* impact case study could have an equivalence of two 4* papers. If this is true, then I think the metric works – because it encourages team working and aggregated capabilities rather than solo generalists or even solo specialists, and allows a diversity of academic skills to be employed for the good of society. The question is – is that what has been decided?
I had thought my post was more about how the economics profession works, its unspoken conventions, and the results that thus emerge.
But fair enough, many readers have interpreted the post to remark on REF – and I did make mention of that, and so the subbing is not unfair.
I agree – absolutely, REF does include public engagement. My understanding (via a colleague deeply embedded frontline in REF work) is that any potentially successful economics/social science impact study needs to answer four questions:
Q1: What happened? – A decline in elderly poverty? An improvement in participation in higher education? Increased economic growth?
Q2: So what? – Why does it matter – what is the reach, the significance, etc.
Q3: On what research was the impact based?
Q4: What are the links between the research and the claimed impact?
Public engagement is viewed to have traction in Q4. (Irene Ng – the process remains ongoing, I believe, but I’d be very surprised if we end up having something very different from this. And, probably like you, I’ve seen different numbers on how much impact case studies count up compared to 4* publications.)
Obviously, simply having, say, a Twitter feed doesn’t quite cut it (sorry, Sharon Howard – we’re talking social science here).
Not part of this is, say, “a marked increase in economic literacy, the kind critically needed in well-functioning liberal democracies.”
Regarding conventions in the economics profession, Raquel Fernandez (who works on culture in economic performance) made a number of interesting points in her Straddler interview (October 2011), some in disagreement with, some supporting, the position laid out above.