
Anne Boring

April 9th, 2020

Surveying student evaluations of teaching: Vital tool or flawed methodology – What do you think?


Student evaluations of teaching (SETs) are an established feature of many academic systems and have been implemented across national research systems in many different forms, often influencing the prospects of individual academics and their institutions. Whilst SETs have recently been critiqued, it is unclear how academics themselves view these evaluations. As a first step in assessing academic perceptions of SETs, Anne Boring presents the opportunity for readers to participate in a survey of the global landscape of SETs.


Student evaluations of teaching (SETs) are ubiquitous in higher education. They appear to have first been implemented about a century ago at the University of Washington and Purdue University, and were then quickly adopted by other universities in North America. With the globalisation of higher education, and more universities becoming attentive to their reputation and the quality of the education they offer students, many if not most universities around the world now use SETs.

Despite their widespread use, SETs are highly controversial among instructors. In modern practice, this assessment tool serves two main purposes: to measure teaching effectiveness (summative purpose), and to help instructors improve their teaching practices through student feedback (formative purpose). Both purposes generate lively discussions amongst academics. Do SETs really measure teaching effectiveness? Do they help instructors improve their teaching practices? These questions have become even more salient since the beginning of the Covid-19 crisis, especially for instructors who have had to swiftly switch to online teaching.

The literature on SETs is huge: a Google Scholar search of “student evaluation of teaching” yields close to 9,000 results. This body of research has focused on many issues regarding the reliability and validity of SETs, how different factors bias responses, what types of questions the surveys should contain, and how scores are used and interpreted by universities (e.g. Becker 2000; Sproule 2002; Spooren, Brockx, & Mortelmans 2013; Stark & Freishtat 2014; Boring et al. 2017; Uttl, White, & Gonzalez 2017).

The recent literature has been critical of SETs, leading to a pushback on their use for summative purposes. Papers on biases in SETs in particular have led universities to reconsider their weight in tenure and retention decisions. Some universities and professional associations are reconsidering their use: for instance, the University of Southern California, the University of Oregon, and the Ontario Confederation of University Faculty Associations. The American Sociological Association has also cited the body of research on biases in SETs in a formal statement, to highlight some of the limitations of using SET scores as a measure of teaching effectiveness for promotion decisions in academia. This statement has been endorsed by 17 other scholarly associations.

Despite the pushback, SETs are unlikely to disappear anytime soon. Indeed, alternatives are impractical and costly to implement, and can also have drawbacks. Significantly, they do not necessarily eliminate the issue of bias. Universities generally consider that having some information (even somewhat invalid, unreliable or biased information) on instructors’ interactions with students, is better than not having any information at all.

If SETs are here to stay, how can universities improve their practices? There are significant variations in the ways that universities evaluate instructors. Some universities make evaluations mandatory, others let their students complete them voluntarily. Some universities ask questions before exams, others after. Some universities release grades only after students have completed their evaluations, others do not. Some universities attribute a large weight to overall satisfaction scores, others do not. What is not particularly well known is which practices instructors like most, or find fairest.

As I have been presenting my research on gender biases in SETs at different universities throughout the world, I have become interested in the strong beliefs that instructors hold regarding this assessment tool and how it should be implemented. Some love the way their departments use SETs; others hate it.

To try and better understand academic perceptions of SETs, I have started a new research project on this issue at Erasmus University Rotterdam. The goal of this project is to document instructors’ opinions about SETs, as a function of the way they are used in different departments and universities throughout the world. It focuses on four broad questions. How do different higher educational institutions implement SETs? How do instructors experience SETs? How do instructors’ experiences correlate with the way in which SETs are implemented? What is the impact of switching to online teaching on instructors’ perceptions of SETs?

If you teach in higher education, then this survey is for you! The survey takes about ten minutes to complete; all you have to do is click the icon below:

The survey includes four sets of questions: how SETs are implemented and used at your department or university, your opinion about SETs, how your university has adapted its SET practices to the Covid-19 crisis, and descriptive questions used to conduct the statistical analysis.

The survey is anonymous and complies with the European Union’s General Data Protection Regulation. The data collected will only be used for research purposes, and the research project has received approval from the university’s Institutional Review Board. A summary and initial analysis of the results will subsequently be published on the LSE Impact Blog.

Whether instructors believe that SETs accurately measure teaching effectiveness or help them improve their teaching practices remains an open question. Given how important SETs can be for careers in higher education, the opinion of instructors needs to be understood better and given greater prominence. This survey is an opportunity for you to contribute to the very lively debate about the use of SETs in higher education. Make your opinion count!

Thank you very much for your contribution to this research project.

 


Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.

Featured Image Credit: Hurca; icon image: mcmurryjulie via pixabay.

To take part you can access the survey following this link: https://erasmusuniversity.eu.qualtrics.com/jfe/form/SV_aeEpJbT4qWElRJ3



About the author

Anne Boring

Anne Boring is Assistant Professor at the Department of Economics at Erasmus University Rotterdam. She also heads the Women in Business Chair at Sciences Po, Paris. Her main research interests lie in the fields of Economics of Education, Labor Economics, and Applied Microeconomics. Her main topics of research are related to student evaluations of teaching and assessing teaching effectiveness, students' higher educational choices and their labor market consequences, the influence of stereotypes on performance assessments, and the efficiency of policies designed to reduce discrimination.

