Recent decades have seen a marked increase in the use of hyperbolic adjectives in academic writing. Drawing on a study of REF 2014 impact case studies, Ken Hyland finds this genre to include even greater degrees of hype than research writing and explores how this persuasive language is deployed differently across fields of research.
The claim that academics hype their research is not news. The use of subjective or emotive words that glamorise, publicise, embellish or exaggerate results and promote the merits of studies has been noted for some time and has drawn criticism from researchers themselves. Some argue hyping practices have reached a level where objectivity has been replaced by sensationalism and manufactured excitement. By exaggerating the importance of findings, writers are seen to undermine the impartiality of science, fuel scepticism and alienate readers. In 2014 the editor of Cell Biology International, for example, bemoaned an increase in ‘drama words’ such as drastic decrease, new and exciting evidence and remarkable effect, which he believed had turned science into a ‘theatrical business’. Nor are the humanities and social sciences unaffected by this, as writers have been observed to explicitly highlight the significance of their work in literary studies and applied linguistics.
All this, of course, is the result of an explosion of publishing fuelled by intensive audit regimes, where individuals are measured by the length of their resumes as much as by the quality of their work. Metrics, financial rewards and career prospects have come to dominate the lives of academics across the planet, creating greater pressure, more explicit incentives and fiercer competition to publish. The rise of hyperbole in medical journals has been illustrated by Vinkers, Tijdink and Otte, who found a nine-fold increase in 25 ‘positive-sounding words’ such as novel, amazing, innovative and unprecedented in PubMed journals between 1974 and 2014. Looking at four disciplines and more hype terms, Kevin Jiang and I found twice as many ‘hypes’ in every paper compared with 50 years ago. While increases were most marked in the hard sciences, the two social science fields studied, sociology and applied linguistics, out-hyped the sciences in terms of their use of these words.
While hype now seems commonplace in the scramble for attention and recognition in research articles, we shouldn’t be surprised to find it appearing in other genres where academics are evaluated. Foremost among these are the impact case studies submitted in bids for UK government funding. With the Research Excellence Framework in 2014, now familiar to academics around the globe, the evaluation of research used to determine funding to universities was extended beyond ‘scientific quality’ to include its real-world ‘impact’. More intrusive, time-consuming, subjective and costly than judging the contribution of published outputs, the ‘impact agenda’ seeks to ensure that funded research offers taxpayers value for money in terms of social, economic, environmental or other benefits.
For some observers, the fact that impact is being more highly valued is a positive step away from the ivory tower perception of research for research’s sake. A serious problem, however, is how the evaluating bodies, the UK higher education funding councils, chose to define and ‘capture’ impact. The format they hit upon, a four-page narrative case study supporting claims for positive research outcomes, was almost an invitation to embellish submissions. With over £4 billion on offer, this is an extremely competitive and high-stakes genre, so it isn’t surprising that writers rhetorically beef up their submissions.
Our analysis of 800 ‘impact case studies’ submitted to the 2014 REF shows the extent of hyping. Using the cases on the REF website, we searched eight target disciplines for 400 hype terms. We found that hyping is significantly more common in these impact cases than in research articles, with 2.11 terms per 100 words compared with 1.55 in leading articles, using the same inventory of items. Across our spectrum of eight focus disciplines, writers in the social sciences were prolific hypers, but by no means the worst offenders. In fact, hyping increases markedly along a scale, with the most abstract and rarefied fields taking the greatest pains to ensure that evaluators get the message (Fig 1). Research in physics and chemistry, for instance, is likely to be several steps further removed from real-world applications than research in social work and education.
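For readers curious about what a per-100-word rate involves in practice, here is a minimal sketch of how such a figure might be computed, assuming a folder of plain-text case studies and a short illustrative term list. The term list, folder path and function names here are placeholders; the study’s actual 400-item inventory, and whatever tokenisation or normalisation it applied, are not reproduced.

```python
import re
from pathlib import Path

# Illustrative stand-in for a hype-term inventory; the study's 400-item list is not reproduced here.
HYPE_TERMS = {"novel", "unprecedented", "significant", "crucial", "unique", "remarkable"}

def hype_rate(text: str) -> float:
    """Return the number of hype terms per 100 words in a single text."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in HYPE_TERMS)
    return 100 * hits / len(words)

# Hypothetical usage: average the rate across a folder of plain-text case studies.
rates = [hype_rate(path.read_text(encoding="utf-8"))
         for path in Path("impact_cases").glob("*.txt")]
if rates:
    print(f"Mean hype terms per 100 words: {sum(rates) / len(rates):.2f}")
```

Normalising counts by text length in this way is what allows documents of very different lengths, such as four-page case studies and full research articles, to be compared on the same scale.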
We also found differences in the ways that fields hype their submissions. Terms emphasising certainty dominate the most frequent items, comprising almost half the forms overall. Words such as significant, important, strong and crucial serve to boost the consequences of the claims made with a commitment that almost compels assent. Beyond certainty, the fields diverged in their preferred hyping styles. The STEM disciplines tended to employ novelty as a key part of their persuasive armoury, with references to the originality and inventiveness of the work (first, timely, novel, unique). Social scientists, on the other hand, stressed the contribution made by their research, referring to its value, outcomes or take-up in the real world (essential, effective, useful, critical, influential). So, while social scientists may not be the most culpable players, they have, unsurprisingly, bought into the game and are heavily invested in rhetorically boosting their work.
The study shows something of how impact criteria have been interpreted by academics and how they shape their narrative self-report submissions. The results point to serious problems in the use of individual, evidence-based case studies as a methodology for evaluating research impact, as they not only promote the selection of particularly impressive examples but encourage hyping in presenting them. The obvious question that arises is whether we should regard authors as being in the best position to make claims for their work. Essentially, impact is a receiver experience: how target users understand the effect of an intervention. We have to wonder, then, how far, even in a perfect world, the claims of those with a vested interest in positive outcomes should be relied on. While the impact agenda is undoubtedly well intentioned, perhaps there are lessons to be learnt here. One might be that it is never a good idea to set up evaluative goals before you are sure how they might best be measured.
This post draws on the authors’ co-authored article, Hyping the REF: promotional elements in impact submissions, published in Higher Education.
I would be super interested in an analysis of the Art and Design unit of assessment (unit 32) using the same methods. Would this be relatively straightforward for you to do?
thanks
joe
We are currently looking at comparing the 2014 submissions with the most recent REF. So it may be a while before we are able to get to this. Sorry Joe
Thanks for your interest Jo. An interesting idea, but we are pretty well under the cosh at the moment. We are, however, working on a comparison with the impact cases submitted to the most recent REF. Best, Ken