As AI and other technologies become a part of the edtech landscape, LSE HE Blog Fellow Maha Bali exhorts educators to ask these five questions when considering a new technology (or pedagogy) for their course
We are bombarded daily with new technologies that claim to transform or disrupt education, even though we know that, historically, tech in education has been overhyped. Although educators and educational developers don’t always have a say in the technologies our institutions adopt (for example, Turnitin) or the ones our students have access to (such as generative AI), we usually retain some agency over how we respond to new technologies and whether to adopt them in our practice. However, by the very nature of their newness, there is rarely much in the way of best practice or evidence-based practice for new technologies, and little certainty or consensus on the long-term impact of using them.
As an educational developer, I deliberately eschew technologically deterministic positions; I do not celebrate tech evangelism, and I certainly do not support tech-shaming people who refuse to adopt new technologies. Instead, I prefer to promote a culture of exploration and experimentation in spaces where it is safe to make mistakes and take small risks. The size of the risk matters because education involves the futures of other people, often young people, and we do not want to make decisions that may harm them.
Decisions about technology are complex and entangled, because pedagogy involves many interdependent elements, including our purposes, our context, and the values within which we work. So I propose five questions to help us make wiser decisions about technology, decisions that align better with our values and contexts. These questions should be revisited throughout the process of integrating technology, and our approach intentionally adapted as needed.
1. What is your teaching philosophy?
At some point or another, you have written or thought about your teaching philosophy. Return to it every now and then, reflect on it, and consider revising or updating it. What do you believe good learning is? Good teaching? How might this influence whether or how you adopt a particular technology? If you believe in the importance of social constructivism, the presence of multiple interactive whiteboards in your class can be an opportunity for students to solve problems together in groups. If you value experiential learning, you may integrate community-based learning in your class; and if community and trust are important to you, you might use semi-formal social media (for example, Slack) for communication, to help nourish ongoing community building. Or consider rejecting technologies such as Turnitin, remote proctoring, and AI detectors, which signal an outright lack of trust before students do anything to warrant it.
2. How might edtech integration affect learning?
I find some models helpful for exploring whether integrating a new technology has a deep or a superficial impact. The SAMR model asks whether integrating the new technology will result in:
- Substitution – the same thing but done with technology, such as using Zoom to lecture in exactly the same way
- Augmentation – where there is a little added value, such as the way word processing tools make it easier to edit what we’ve written than typewriters ever did
- Modification – doing something new such as multimedia blogging instead of traditional essay writing
- Redefinition – allowing us to do things that were nearly impossible before. Google Docs did not just facilitate remote (synchronous and asynchronous) collaborative writing across borders; it transformed it. Because Google Docs is cloud-based, co-authors can edit the same file (not separate versions of a file) and, since Suggesting mode was introduced in 2015, track in real time who edited what.
Remember to assess a particular technology before using it. Does it improve student learning or your own teaching? If not, can you adapt it so that it does?
3. What are the social justice and care considerations?
Which learners could be advantaged or disadvantaged by the use of this technology? Who has access, and who does not? Is access officially sanctioned by the institution or not? For example, to ensure all learners have the same level of access, we should not assign assessments that require tools that are neither free nor provided by our institution. Some technologies can enhance equity: automated closed captions on Zoom and videos enhance accessibility for people with hearing impairments and for non-fluent speakers of the language spoken.
To work towards socially just care, that is, care with social justice, we can also ask: how might integrating this technology affect students’ mental health? And our own? Algorithmic proctoring of exams, for example, discriminates against neurodivergent learners and exacerbates anxiety for others. On the other hand, offering video office hours instead of in-person ones might provide relief to students who have social anxiety or who juggle work or family responsibilities.
4. What are the trade-offs?
What do we lose when we use this technology? Consider, for instance, conducting a literature review. While GenAI tools sometimes produce fake references, there are several literature review AI tools that search for and use real, scholarly references and can very quickly bring up a variety of sources, then summarize, synthesize, and compare them. We gain speed, but we end up with decontextualized, AI-generated summaries and syntheses. In fast-tracking the reading and writing of a literature review, we risk bypassing the thinking that comes from engaging deeply with the literature, and we lose opportunities to build valuable knowledge and cognitive skills such as close reading.
5. How might it impact human agency?
Seymour Papert warned us about uses of educational technology in which we allow the machine to control the child, rather than the child to control the machine. With technologies marketed as personalized learning, what loss of human agency occurs when the machine, rather than the teacher or even the learner, decides the learning path? Personalization algorithms collect observable behavioral data (an incomplete picture of the whole learner, and a kind of surveillance) and categorize it in order to guide learners through a particular pathway determined by an opaque algorithm, one that may reproduce inequalities, given how such algorithms have been trained. Rather than personalizing the learning journey, this is a homogenizing categorization exercise that circumvents a learner’s choice of how to learn and at what pace. It also ignores the socio-emotional aspects of learning, which is not just about having a tutor answer your questions correctly (something AI tools can be trained to do, to an extent).
Parting thoughts
We must each make our choices and decisions after careful deliberation on the many dimensions within ourselves, our classrooms, and our institutions, and on the powers and dynamics beyond our institutions that also have an impact.
We should also ask who created each technology, what their politics are, where the money is going, and whom we empower by adopting it. Tressie McMillan Cottom reminds us to follow the money. We need to critically question the often hyperbolic narratives around technology and examine its sociopolitical context.
To whatever degree we have agency, we should be exercising it thoughtfully and carefully without losing sight of who we are and the values we hold dear.
Image: Gerd Altmann/Pixabay
This post is opinion-based and does not reflect the views of the London School of Economics and Political Science or any of its constituent departments and divisions.