
Taster

February 20th, 2019

Counting is not enough – How plain language statements could improve research assessment


Estimated reading time: 10 minutes


Academic hiring and promotion committees and funding bodies often use publication lists as a shortcut to assessing the quality of applications. In this repost, Janet Hering argues that in order to avoid bias towards prestigious titles, plain language statements should become a standard feature of academic assessment.


Let’s start with the obvious. Evaluation and assessment are part and parcel of the scientific profession. Universities want to hire the best faculty, funding agencies want to support the best projects, and journals want to publish the best papers. To this end, scientists serve as members of hiring and promotion committees and on panels for funding agencies. We also write evaluation letters and reviews of manuscripts and proposals. Of course, this all takes time, which could otherwise be spent on our own research.

These competing demands on our time tempt us to rely on indicators as proxies for quality. But saving time does not justify disregarding the many critiques of indicators such as the Journal Impact Factor (JIF) and h-index. Indicators are often non-transparent and fail to accommodate field-specific characteristics; furthermore, they can be subject to manipulation and create perverse incentives, in the worst case compromising scientific integrity (Alberts et al., 2015; Hering, 2018; Hicks et al., 2015; Klein and Falk-Krzesinski, 2017). This issue becomes even more pressing when the scientists being evaluated are conducting applied research whose outputs are not well aligned with conventional indicators (Hering et al., 2012; Moher et al., 2018).

I would certainly not argue that we should ignore an individual’s professional record, but we also need to be aware of the anchoring effect (Kahneman, 2011) of our first glance at a CV, list of publications or even the affiliations that commonly appear on the cover page of a proposal or manuscript. But what if the first information that we received about someone applying for a faculty position or submitting a proposal or manuscript did not provide us with shortcuts for assessing quality?

Starting with a plain language statement

I would propose that all reviews (i.e., of job applications, funding proposals, and manuscripts) start with the reviewer reading a plain language statement (AGU, 2018). This document should be concise, clearly written, and focused on the scientific and societal significance of the job application, funding proposal or manuscript. Authors would be instructed to focus on content and to avoid, as much as possible, mentioning specific institutions or other aspects of their “pedigree”.

Reviewers would be asked to assess the plain language statement(s) (i.e., as exceptional, adequate or inadequate) before they are given access to the complete application file, proposal or manuscript.  This would help to anchor their subsequent responses to content rather than to proxies for quality (i.e., indicators).

This process would have the greatest benefit for bodies like hiring committees or panels for funding agencies that have to compare a large number of potential applicants or proposals.  Faced with this workload, committee and panel members are more likely to feel overburdened and to be tempted to take shortcuts. Pre-assessment of the plain language statement could encourage committee or panel members to consider applications or proposals with exceptional statements even if the applicants are not the strongest when judged from the perspective of conventional indicators. Taking this even further, applications with “inadequate” statements could be excluded from further consideration.

Even when the evaluation is made on a case-by-case basis (i.e., for promotion and tenure decisions or individual reviews of manuscripts or proposals), pre-assessment based on a plain language statement could help to reinforce the recommendation that “the candidate should be evaluated on the importance of a select set of work, instead of using the number of publications or impact rating of a journal as a surrogate for quality” (Alberts et al., 2015).

I would certainly not claim that such adaptations of review processes would change the culture of academic assessment of scientists overnight. But I think they could go a long way toward shifting the focus from indicators to content. I feel that such a shift is vital to the future success of the scientific enterprise, not only in the context of problem-driven or solution-oriented research but also for truly creative curiosity-driven research.

 


This blog post originally appeared on Elephant in the Lab and is reposted with the author’s permission. 

Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.

Image credit: Thaovy via Pixabay

About the author:

Janet Hering is a Professor in the Department of Environmental Systems Science at the Swiss Federal Institute of Technology (ETH) in Zurich and in the School of Architecture, Civil and Environmental Engineering at ETH Lausanne. As the Director of the Swiss Federal Institute of Aquatic Science and Technology (Eawag), she interacts with stakeholders from policy and practice. She is a member of the U.S. National Academy of Engineering.


 

Print Friendly, PDF & Email


Posted In: Measuring Research | Peer review | Research evaluation
