Batting away the hype, bias, and botshit, LSE HE Blog Fellow, Maha Bali, champions the need for cultivating critical AI literacy in our students, and shares tried and tested teaching ideas and exercises for educators
I’ve been teaching a course on digital literacies and intercultural learning for six years now, and there has always been an element of discussing artificial intelligence (AI) and its impact on society. More recently, though, with the widespread use of generative artificial intelligence (GenAI) such as ChatGPT, this has become a central topic in many of my conversations and speaking engagements. I believe that when both educators and learners improve their critical AI literacy, they will be better able to resist using AI in ways that are harmful or inappropriate. Instead, they will feel empowered to use it constructively, while being aware of the limitations. I also believe those who report on and teach about technology have a responsibility to avoid techno-determinism, but for now, learners need to spot the hype.
What does it mean to be critical?
For me, being critical goes beyond critique and scepticism: it includes subscribing to critical theory and critical pedagogy – developing awareness of social justice issues and cultivating in learners a disposition to redress them. The elements of critical AI literacy in my view are:
- Understanding how GenAI works
- Recognising inequalities and biases within GenAI
- Examining ethical issues in GenAI
- Crafting effective prompts
- Assessing appropriate uses of GenAI
Understanding how GenAI works
I always start by asking my students to list examples of AI they are familiar with. Besides the inevitable mentions of ChatGPT and GenAI, this discussion brings up things like Google’s search engine, Turnitin’s plagiarism detection, YouTube’s recommendations, and facial recognition. It is important to recognise that AI has existed for a long time, and that machine learning is only one type of AI.
I then explain very quickly how machine learning works – the idea of giving a computer program lots of data and letting it learn on its own. I remember the first time I learned this as an undergraduate: the biomimicry of neural networks resembling the way the brain transmits signals, and the idea of pattern recognition mimicking the way a child learns the difference between a cat and a dog. It’s fascinating and yet really difficult to absorb.
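For readers who want a concrete picture of what ‘learning from data’ means, here is a minimal sketch – not part of my course materials – that trains a tiny neural network to recognise handwritten digits. It assumes a Python environment with scikit-learn; the dataset and model choices are purely illustrative.

```python
# A minimal sketch of “learning from data”: the program is never given rules
# for what each digit looks like; it infers patterns from labelled examples.
# (Illustrative only – not part of the course described in this post.)
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # ~1,800 small images of handwritten digits, with labels

X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A small neural network: layers of “neurons” loosely inspired by the brain
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)  # “learning” means adjusting weights to fit the examples

print("Accuracy on drawings the model has never seen:", model.score(X_test, y_test))
```

The same pattern-recognition idea, scaled up enormously, is what sits behind tools like QuickDraw and GenAI chatbots.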
So I invite students to play a game called QuickDraw, which asks the user to create a quick doodle of a word while the machine tries to guess what was drawn. After students play the game, I invite them to take a look at how other people drew the same thing, and to find differences and similarities. We discuss words like ‘nail’ (could be a fingernail or toenail, or could be a nail you hit with a hammer – QuickDraw recognises both) and ‘bat’ (the animal or the one you play baseball or cricket with).
We also discuss the fact that QuickDraw exists in different languages, but it does not seem to differentiate the images culturally. For example, if it asks you to draw an angel, you’d probably draw something with wings and a halo, right? In Arabic, it expects the same thing, whereas the majority Muslim population of the Arab world are religiously opposed to depicting angels in visual form. Also, to draw a hospital, you’re likely to put a cross on it to differentiate it from any other building, right? But in the Muslim world, most hospitals have a crescent on them, not a cross (which is what QuickDraw suggests). So this gives students a slight inkling of how AI perpetuates a bias towards the dominant knowledge in the data sets it has been trained on.
Recognising inequalities and biases within GenAI
We need to work on recognising “AI not as a neutral technology but as political technology that can be used to serve particular policy objectives and ideologies.” When speaking of inequalities related to AI, it helps to look back at work done in the years before the advent of ChatGPT, and to recognise that AI has historically been racist, sexist, and ableist, so we should always be wary of its engineered inequality.
Last semester, we dived into uncovering bias against the Palestinian cause, which we noticed across several AI platforms. ChatGPT’s responses to “Does Palestine have the right to exist?” and “Does Israel have the right to exist?” are quite different. On Palestine, it tends to start with how the issue is complex and sensitive, whereas on Israel, it tends to state that Israel is officially recognised to have the right to exist. If asked longer questions, it tends to always refer to the issue as the Israeli-Palestinian conflict rather than the Palestinian-Israeli one. This ordering reflects the prioritisation of Israel over Palestine in the texts ChatGPT has likely been trained on.
The bias was stronger in visual AI. My students wanted to make the AI imagine a free Palestine. The video that resulted had several problems: firstly, many of the images of a free Palestine depicted destruction rather than prosperity; secondly, it had a lot of blank images because it could not find enough positive images of Palestine; and thirdly, and most shockingly, some of the images featured the Star of David extensively, conflating peace in Israel with the liberation of Palestine. When the students and I re-prompted it to remove the blank parts and include images of pre-1948 Palestine, the output improved, but it also increased the number of videos and images of the Israeli flag, the Star of David, Orthodox Jews, and Hebrew signage in the streets in parts where it was narrating Palestinian victory. This seemed to suggest that the AI tended to connect prosperity in the region with Israel, not Palestine.
Of course, when AI does work and is useful, such as for supporting people with visual impairments to recognise what is in their surroundings, it becomes important to ask about inequality of access: who has access to this technology? Is it affordable? Does it require internet or electricity that may be routinely unavailable or unstable for certain populations? When available, is it culturally nuanced, or is it trained on a white/Western/Northern dataset and unable to recognise a person’s local surroundings?
Examining ethical issues surrounding GenAI
Unethical practices in the creation and training of AI have been well documented, from the harm caused to Kenyan contract workers to its deleterious effect on the environment.
A problematic practice more pertinent to higher education is training AI on vast amounts of data without necessarily getting consent from the creators, or citing the sources of the synthetic outputs. One thing few of us do with any tech platform, let alone AI platforms, is read the terms of service and privacy policy before using it. Autumm Caines recommends we spend time doing this with learners before we invite them to use the tools. This may mean that some people will decide they do not want to use AI tools at all, and if they do so, as educators we need to ensure we have alternatives to any AI-based exercises we do – such as letting them use our own account for the assignment in question, or offering pair work or a more theoretical exercise related to AI.
some people will decide they do not want to use AI tools at all, and if they do so, as educators we need to ensure we have alternatives to any AI-based exercises we do
Developing prompt crafting skills
Prompt crafting is a term I started using instead of the more common prompt engineering, as it recognises the creative process of trying to create the prompt that yields the output we’re hoping for. Prompt engineering can be defined as “the process of iterating a generative AI prompt to improve its accuracy and effectiveness”. There are free courses on prompt engineering, but the most important elements, pedagogically speaking, are modelling and practice. I model writing prompts for students and refine them when I don’t get what I want, I showcase different AI platforms, and I include visual AI, which can benefit from different prompting approaches as well.
For example, if you simply ask a visual AI to create an image of a classroom, it won’t know what kind of classroom you want, or what style of image you want. But you can be more detailed and specify that it shows diverse representation in the classroom and/or a classroom where students are engaged (as opposed to disruptive). You can also specify whether you want the style to be realistic, cartoon-like, anime, or anything else. For text-based GenAI, I find Mark Marino’s tips really helpful as a starting point – a reminder that we can give the GenAI tool a personality (“You are a teacher…”), a context (“In a public university in Egypt”), and criteria for the output (“Create a syllabus for a course on X that contains three projects, two assignments, and 10 readings by Arab and American authors”).
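To make that personality/context/criteria pattern concrete, here is a minimal sketch of how such a prompt might be assembled and sent to a text-based GenAI tool programmatically. The use of the openai Python library and the particular model name are my illustrative assumptions, not tools prescribed in this post; the same structure applies if you simply type the pieces into a chat window.

```python
# A minimal sketch of the personality / context / criteria prompting pattern.
# The openai library and the model name are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # assumes an API key is already configured in the environment

personality = "You are a teacher."
context = "You teach at a public university in Egypt."
criteria = (
    "Create a syllabus for a course on digital literacies that contains "
    "three projects, two assignments, and 10 readings by Arab and American authors."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice for this sketch
    messages=[
        {"role": "system", "content": f"{personality} {context}"},  # personality + context
        {"role": "user", "content": criteria},                      # criteria for the output
    ],
)
print(response.choices[0].message.content)
```

Keeping the personality, context, and criteria as separate pieces makes it easier to iterate on one element at a time – which is exactly the refinement process I try to model for students.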
I assign students activities to do using multiple AI tools and ask them to keep trying different prompts to improve the output of the AI tool – whether text-based or visual – and to modify the way they write the prompt and the amount of data they give the tool. A really useful exercise is to compare what they would have written themselves with what the AI tool produced, and to assess the strengths and limitations of the tool on a variety of tasks, while learning to hone their own prompt-writing ability.
Assessing appropriate use of GenAI
We need to help our students recognise when AI can usefully help with a task (albeit with some risk of inaccuracy), which tasks require human attention, and which tasks AI has shown to be redundant – not only for humans, but altogether.
Botshit
In practice, GenAI performs better with topics familiar to its training data, and is more likely to “hallucinate” or produce “botshit” (if we use it uncritically) with material less familiar to it, or with unexpected combinations in prompts. When you are from my part of the world, you will notice it making random untrue associations with things closely connected to your own culture. A hilarious example of this was when I prompted Bard/Gemini to create a table of ancient and contemporary Egyptian kings/rulers with their photos and their achievements. For Muhammad Ali (1805-1848), it displayed a photo of Muhammad Ali, the American boxer. Visual AI is particularly striking when it hallucinates – we need to learn to identify such inaccuracies and not perpetuate them.
Metaphors
I often invite people to come up with metaphors to represent AI. We discuss how some metaphors are helpful and others can be misleading, and how metaphors can promote utopian and dystopian views of AI. In this way, I get my students to adopt a critical approach to understanding how the metaphors we use can affect our attitudes towards AI.
I sometimes invite the students in my course to think of AI as being like cake-making. In this course, what things do you need to create from scratch (without support), and what can you bake with support (for example, using a readymade cake mix to which you just add the wet ingredients)? For which assignments might you buy a readymade cake from a supermarket and just add the decorating touches? My students begin to see who has all the ingredients and skills to bake from scratch and who does not. How can we level the playing field so that no one feels the need to get unauthorised support from AI? Over two semesters, my students and I have reached the conclusion that if a course is not directly teaching students how to read or summarise, or how to cite correctly, then using AI tools to help them understand a complex reading or format their references is acceptable.
we need to decide which aspects of critical AI literacy our particular students need and can benefit from
What is critical AI literacy good for?
I want to be clear that I don’t believe that every teacher needs to teach all the elements of critical AI literacy, nor that every course should integrate AI into the syllabus. What I do believe is that we need to decide which aspects of critical AI literacy our particular students need and can benefit from. I think we should always question the use of AI in education, for several reasons. Can we position AI as a tutor that supports learning when we know AI hallucinates often? Even when we train AI as an expert system with expert knowledge, are we offering this human-less education to those less privileged while keeping the human-centric education for more privileged populations? Why are we considering using technology in the first place – what problems does it solve? What are the alternative non-tech solutions that are more social and human? What do we lose from the human socioemotional dimensions of teacher-student and student-student interactions when we replace these with AI? Students, teachers, and policymakers need to develop critical AI literacy in order to make reasonable judgments about these issues.
More resources and information on the AI literacy model and its dimensions and applications are available on Maha’s blog.
_____________________________________________________________________________________________
This post is opinion-based and does not reflect the views of the London School of Economics and Political Science or any of its constituent departments and divisions.
_____________________________________________________________________________________________
Images
Picture of Muhammad Ali, ruler of Egypt: Auguste Couder Bap in Picryl
Picture of Muhammad Ali, boxer: Ira Rosenberg in Picryl
This is very interesting as usual, Maha. What great course activities.
Thank you for reading and responding.
Thanks Maha,
This is fantastic! I especially love the idea of AI as metaphor. Will be trying that myself this week. 🙂
Let me know how it goes! It can be so much fun! Especially when people disagree on whether a certain metaphor is actually positive or negative about AI or somewhere in between, or on the extent to which it is accurate.
Thanks, Maha! This post comes at a perfect time — I’m teaching a software design course and next week we’ll be talking about the ethics of generative AI in developing software. There are so many parallels between writing in general and writing code in particular, and this post has given me so much to think about as I put the class plan together. I may assign this particular reading to my students, too!
This is amazing to hear, especially as I am always advocating for deeper teaching of ethics in computer science education (I had so little of it in my own undergraduate CS education). Thank you for this.
Exploration via metaphor is great. In the past I and colleagues have used 3D Lego metaphors to great effect – would be fascinating to ask students to express what AI means to them through the medium of Lego 🙂
Ooh I love that and I love Legos in general
Thanks for this illuminating post. It has provided me with some interesting context that I had not yet considered.
Thank you, Maha, for once again sharing your important thoughts and experiments in the classroom so that we can benefit from them. I tried the Quick Draw game. What a great one to do with students and faculty who are continuing to learn and bring critical AI literacy into the mainstream.
This is a superb essay with layers of much-needed complexities. I’m so glad I came across it. Thank you, Maha.