Research assessments regularly focus on outstanding and unique achievements, rather than the everyday failures and disappointments associated with academic work. Discussing a recent self-assessment and annual research report at Maastricht University that took a more candid approach to failure, Sally Wyatt suggests that research culture can benefit from a more realistic appraisal of failure.
‘Third time lucky’ definitely applies to my efforts to obtain a PhD. My first attempt began when I was 21, having finished my MA very early in life. I gave up after a year or two as I was certainly not ready. I started again about 10 years later. That ended in a spectacular breakdown in the relationship with my supervisor. Luckily, I found a wonderful new supervisor, and eventually defended my PhD when I was 39. Only the latter appears on my CV, and very few people know (or care) about the first two attempts.
I have recently stepped down as associate dean for research after completing a four-year term, a turn of events that suggests those early failures have not hindered my career. One of my main tasks as associate dean was to shepherd my faculty through a research evaluation. In the Netherlands, this is not nearly as consequential as the UK’s Research Excellence Framework (REF). No money is at stake, but reputations are. An international committee visits, poses questions, and prepares a report to be sent to the university’s most senior management. The purpose of the committee’s visit is to be collegial and to provide advice on how best to go forward in the coming years. That does not make it any less anxiety-inducing. I desperately wanted to do justice to the work of my colleagues.
In the Netherlands all research entities are assessed every six years, and can set their own objectives. We work with a national protocol, prepared by the Dutch scientific academy, the main research funding agency, and the national association of universities. The current version explicitly moves away from reductive indicators such as journal impact factors. Of course, we were still expected to provide evidence to justify our claims of conducting high quality, societally relevant research that is both interdisciplinary and innovative.
Our self-evaluation document is rather like a collective CV, celebrating our achievements, including publications, research grants, prizes, invitations. The self-evaluation and the committee’s report are made public once the process is over. This level of transparency is another significant difference from the UK REF.
As a faculty, we also produce an annual research report, with a few facts and figures about our staffing, publications, grant income, and finished PhDs. In recent years, we have featured interviews with colleagues who had positive and interesting stories to tell about their own research achievements or collaborations with societal partners. However, in the most recent report, we decided to relativise this glorification of our work. Not because we are not proud of what we individually and collectively do, but because all these efforts to present our best selves are exhausting, and not at all reflective of the hard reality of getting research funded, published, and otherwise out in the world.
Our most recent research report features four interviews with colleagues about their failures to get a grant, or get a manuscript accepted, or to reach new audiences. Our communications officer, Eva Durlinger, conducted the interviews very elegantly. A couple of themes emerge: the difficulty of conveying interdisciplinary work and the challenges of working across national borders. Being international and interdisciplinary were two features we highlighted in our self-evaluation, but they don’t always convince our peers. Despite the decades of rhetoric about the importance of interdisciplinarity by research funding agencies and their research policy masters, research is still too often judged on narrow notions of disciplinary quality. And while the Dutch might be moving away from using journal impact factors and h-indices, other countries still use them to determine salaries and promotions.
We send our annual report to the great and good in the university and beyond. One never knows if people actually read such things. But this time we received unsolicited praise from far and wide, particularly for our bravery in sharing these stories of ‘failure’. Bravery can sometimes be code for foolish, and I still worry about possible long-term negative consequences for my colleagues. I applaud their candour, humility, self-awareness, and ability to reflect and learn from things that didn’t quite go as they had hoped or planned.
It is now almost fifteen years since Melanie Stefan invited us all to keep a CV of failures, as a reminder of what it really means to be a scientist or scholar. She asks us, if we dare, to share it with others, to help them overcome their own experiences of rejection. This was picked up by Staci Zavattaro on this blog. She gives some good tips for dealing with those upsetting rejections, and reminds us to look for the failures lurking behind the stories of success in individuals’ CVs, and in their cousins, research self-evaluations and university websites.
In the opening paragraph of this post, I outed myself as a very slow and third-time-lucky PhD. I’m not sure what the PhD candidates I now supervise will make of this revelation. At the moment, my CV and my personal website are like social media, full of successes. It would certainly be much longer and would require an enormous amount of work to include not only the false starts on my PhD but all of my failed grant applications, fellowship nominations, and submitted manuscripts.
It would be an even bigger step if university websites and more research institute reports started to pull back from the focus on excellence, both individual and collective. In the wonderfully titled article by Samuel Moore and his colleagues, ‘Excellence R Us’, the authors point to the ubiquity of rather meaningless excellence talk, and the ways it serves to intensify competition between individuals and institutions. This is similar to the argument in the ‘Academic Manifesto’ by Willem Halffman and Hans Radder. They too bemoan the ways in which university managers want to reduce research to the smallest manageable unit, leading to all sorts of perverse effects, not only on research, but also on teaching and collegiality.
I look forward to the day when universities promote stories with the following headlines: ‘all ERC applications unsuccessful this year’, or ‘most highly paid professor in the university not cited in the past five years’, or ‘we dropped 35 places in the Shanghai Ranking, and given the dubious nature of the process, we don’t care’. Better yet would be if more universities followed Zurich, Utrecht and others and stopped taking part in such exercises altogether. A healthy research culture needs to be open about failure and rejection at all levels.