
Leo Falabella

June 28th, 2024

How can quantitative methods enhance research in HE?


Estimated reading time: 10 minutes


In explaining how the use of quantitative data in pedagogic research can surface important insights and establish causation, Leo Falabella addresses three common concerns – perceived lack of critical depth, ethics, and small numbers of participants. 

Pedagogic research is central to the social role of higher education, especially in the face of recent developments. Access to higher education is becoming broader and less exclusive. Debates on university funding populate the news from the UK and US to Brazil and Pakistan, and authoritarian voices assault academic freedom. Quality evidence-based teaching can go a long way towards responding to such pressures.

In this post, I argue that increasing our use of quantitative data is a promising avenue for research in higher education. Quantitative research is not a panacea, but it holds particular promise for uncovering the causal effects of interventions. Moreover, concerns about quantitative studies can be overcome with appropriate choices of research design. This post discusses three potential sources of concern with quantitative studies in higher education. In so doing, I focus on experimental methods, as they are especially powerful in identifying causal relationships. Importantly, I do not argue for replacing qualitative with quantitative data. Rather, I claim that quantitative studies support qualitative inquiry to enhance our knowledge of teaching and learning.

Leveraging complementarities 

On that note, this study published in Proceedings of the National Academy of Sciences (PNAS) is a good example of how quantitative evidence is critical to advancing pedagogical research. Through an experiment comparing active learning practices to traditional teaching methods, Louis Deslauriers and his co-authors found that students in the active classroom learnt more, even though they believed they learnt less. This shows that while students’ perspectives cannot be ignored, self-reported accounts should be taken with a pinch of salt, at least when the goal is to understand the impact of pedagogical interventions on student learning.

Moreover, the study identifies a source of resistance to practices that have long been defended in the field of critical pedagogy, a predominantly qualitative tradition. Paulo Freire’s Pedagogy of the Oppressed – the foundation of critical pedagogy – emphasises the need to stimulate active participation, prompting students to both educate and be educated by peers and instructors. Empirical research validates the choice of active learning strategies over traditional methods, yet traditional didactic teaching is still widespread. The experiment conducted by Deslauriers and colleagues helps us understand why. It suggests that active learning practices may be abandoned because of inaccurate perceptions that they are ineffective. Without quantitative research, these perceptions could have remained unchallenged.

Methodological pluralism can facilitate dialogue with other areas of knowledge

Despite its value and promise, quantitative research in higher education still faces resistance in the scholarly community. While I cannot determine the precise reasons for this, three hypotheses come to mind. First, researchers in pedagogy may believe that quantitative studies sacrifice critical reflection by glossing over the depth of qualitative data. Second, researchers may fear that conducting experiments in the classroom would violate research ethics. And third, researchers may feel constrained by classes with too few students, amounting to sample sizes too small for reliable statistical testing. All such concerns are valid. In what follows, I propose ways to address them so we can unlock the full potential of quantitative data for pedagogical research.

Overcoming challenges 

Let us start with the belief that quantitative analysis lacks the depth of qualitative approaches. One could argue that quantifying student outcomes is a reductionist approach, done at the expense of critical reflection. In addition, researchers may question the reliability of quantitative measures. Anyone with experience writing and marking exams has seen cases where a numerical mark did not seem to reflect a student’s learning. These factors could lead some researchers to believe that qualitative data is more appropriate for pedagogical research.

We can address these concerns by reiterating that quantitative analysis is a supplement, not a substitute, for qualitative research. The in-depth critical reflection made possible by qualitative data need not and must not be lost. Instead, insights from critical reflection can be bolstered through quantitative studies, as demonstrated by the above study on the effectiveness of active learning strategies. Moreover, while quantitative data is sometimes subject to measurement error (for example, a student’s grade may not accurately reflect their learning or understanding), the same is true of qualitative data. Finally, methodological pluralism can facilitate dialogue with other areas of knowledge, enhancing the impact of research on higher education.

Quantitative analysis is a supplement, not a substitute, for qualitative research

Besides concerns with loss of analytical depth, researchers may also have reservations regarding the ethics of experiments in the classroom. For example, one may expect experiments to disadvantage certain students. This is a valid concern since testing the causal effects of a pedagogical intervention requires a benchmark – a control group where the intervention is absent. Therefore, experiments can create asymmetries in access to teaching methods and resources.

Despite the validity of these concerns, an adequate research design can ensure equitable teaching in experimental studies. For example, within-subject designs expose all participants to both control and treatment conditions. Imagine a calculus teacher who wants to test a new method in a class covering integrals and derivatives. This teacher-researcher could randomly assign half of their students to learn derivatives through the new method and integrals through a traditional method. Conversely, the remaining students would learn integrals through the new method and derivatives through a traditional method. A formative (ungraded) assessment with questions on both integrals and derivatives would then allow the researcher to estimate the average effect of the new teaching method while safeguarding equitable access to teaching methods. Additionally, the researcher could allow students to choose their preferred method when preparing for subsequent exams. This would bring the added benefit of the delayed intervention design, where participants can choose to receive an intervention (in this case, the new teaching method) at later stages of the study.
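To make this concrete, here is a minimal sketch in Python of how such a crossed within-subject assignment and the resulting comparison might look. The identifiers, helper functions, and scoring format are illustrative assumptions on my part, not details drawn from any actual study.

```python
import random
import statistics

def assign_crossed_conditions(student_ids, seed=42):
    """Randomly split students so every student meets both methods, on different topics."""
    rng = random.Random(seed)
    shuffled = list(student_ids)
    rng.shuffle(shuffled)
    group_a = set(shuffled[: len(shuffled) // 2])
    # Group A: new method for derivatives, traditional method for integrals.
    # Group B (everyone else): traditional method for derivatives, new method for integrals.
    return {
        sid: {
            "derivatives": "new" if sid in group_a else "traditional",
            "integrals": "traditional" if sid in group_a else "new",
        }
        for sid in student_ids
    }

def average_effect(scores):
    """scores: iterable of (student_id, topic, method, score) from the formative assessment.
    Returns the mean score under the new method minus the mean under the traditional one."""
    new = [s for _, _, method, s in scores if method == "new"]
    traditional = [s for _, _, method, s in scores if method == "traditional"]
    return statistics.mean(new) - statistics.mean(traditional)

# Example: assign a class of 20 students; scores would be filled in after marking.
assignment = assign_crossed_conditions([f"s{i:02d}" for i in range(1, 21)])
```

Because every student experiences both conditions, the comparison is between topics within the same class rather than between different groups of students, which is what keeps access to teaching methods equitable.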

Methodological diversity can help us reach wider audiences

Even if the design ensures equitable teaching, one may find it inherently unethical to have students as participants in experiments. For instance, students could feel coerced into consenting to participate in a study. This concern can also be addressed with appropriate design procedures. When informing students of the experiment, researchers can circulate opt-out forms and only view the responses after marks have been submitted. Informing students that their decision to opt out will remain confidential until after the course should prevent them from feeling coerced.

Lastly, researchers may avoid quantitative studies because many classes have too few students, resulting in sample sizes too small to allow for reliable statistical testing. Once again, the appropriate design choice can mitigate these constraints. Let us go back to the within-subject experiment in the calculus course. Each student takes a formative assessment with questions on integrals and derivatives. The resulting statistical test would have a number of observations equal to the number of students multiplied by the number of questions. A class of 20 students taking an assessment with 10 questions would yield a statistical test with 200 observations, thus addressing the problem of small sample sizes.
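As a back-of-the-envelope illustration (assuming every student answers every question, and leaving aside the fact that answers from the same student are correlated, which a fuller analysis would need to account for), the arithmetic looks like this; the variable names are mine, not part of the original example.

```python
n_students = 20   # students in the class
n_questions = 10  # questions on the formative assessment

# One row per student-question pair: each question belongs to a topic taught
# through either the new or the traditional method for that student.
observations = [(student, question)
                for student in range(n_students)
                for question in range(n_questions)]

print(len(observations))  # 200 student-question observations
```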

Diverse and mutually supportive methodologies can enhance the social impact of research in higher education. As this post has sought to demonstrate, experimental studies can supplement qualitative data to test for causality without a loss of analytical depth and critical reflection. Further, methodological diversity can help us reach wider audiences, thereby enhancing collaboration with other fields and improving our ability to influence higher education policy.

This post is opinion-based and does not reflect the views of the London School of Economics and Political Science or any of its constituent departments and divisions.

Main image: Nick Hillier on Unsplash

About the author


Leo Falabella

Leo Falabella is a Fellow in the Department of Government at the London School of Economics, UK.

Posted In: Shifting Frames
