Maha Bali

February 26th, 2024

Where are the crescents in AI?

Estimated reading time: 20 minutes

Batting away the hype, bias, and botshit, LSE HE Blog Fellow Maha Bali champions the need to cultivate critical AI literacy in our students, and shares tried-and-tested teaching ideas and exercises for educators.

I’ve been teaching a course on digital literacies and intercultural learning for six years now, and there has always been an element of discussing artificial intelligence (AI) and its impact on society. More recently, though, with the widespread use of generative artificial intelligence (GenAI) such as ChatGPT, this has become a central topic in many of my conversations and speaking engagements. I believe that when both educators and learners improve their critical AI literacy, they will be better able to resist using AI in ways that are harmful or inappropriate. Instead, they will feel empowered to use it constructively, while being aware of the limitations. I also believe those who report on and teach about technology have a responsibility to avoid techno-determinism, but for now, learners need to spot the hype.

What does it mean to be critical?

For me, being critical goes beyond critique and scepticism: it includes subscribing to critical theory and critical pedagogy – developing awareness of social justice issues and cultivating in learners a disposition to redress them. The elements of critical AI literacy in my view are:

  • Understanding how GenAI works
  • Recognising inequalities and biases within GenAI
  • Examining ethical issues in GenAI
  • Crafting effective prompts
  • Assessing appropriate uses of GenAI
The dimensions of critical AI literacy (a model showing the five dimensions)

Understanding how GenAI works

I always start by asking my students to list examples of AI they are familiar with. Besides inevitable mentions of ChatGPT and GenAI, this discussion brings up things like Google’s search engine, Turnitin’s plagiarism detection, YouTube’s recommendations, and facial recognition. It is important to recognise that AI has existed for a long time, and that machine learning is only one type of AI.

I then explain very quickly how machine learning works – the idea of giving a computer program lots of data and letting it learn on its own. I remember the first time I learned this as an undergraduate: the biomimicry of neural networks resembling the way the brain transmits signals, and the idea of pattern recognition mimicking the way a child learns the difference between a cat and a dog. It’s fascinating and yet really difficult to absorb.
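To make the “lots of examples in, learned pattern out” idea concrete, here is a minimal sketch – my own illustration, not part of the original classroom exercise – using the scikit-learn library and made-up cat/dog measurements purely for demonstration:

```python
# Toy illustration of machine learning: the program is never told the rule
# for telling cats from dogs; it infers a pattern from labelled examples.
from sklearn.tree import DecisionTreeClassifier

# Made-up training data: [body weight in kg, ear length in cm]
examples = [
    [4.0, 6.5], [3.5, 7.0], [5.0, 6.0],      # cats
    [20.0, 10.0], [30.0, 12.0], [8.0, 9.5],  # dogs
]
labels = ["cat", "cat", "cat", "dog", "dog", "dog"]

model = DecisionTreeClassifier()
model.fit(examples, labels)  # "learning" = finding a pattern in the examples

print(model.predict([[4.2, 6.8]]))    # most likely "cat"
print(model.predict([[25.0, 11.0]]))  # most likely "dog"
```

The point of the sketch is simply that the rule is never written by hand: change the examples and the learned pattern changes with them, which is also why biased or narrow data produces biased or narrow behaviour.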

So I invite students to play a game called QuickDraw, which asks the user to create a quick doodle of a word while the machine tries to guess what was drawn. After students play the game, I invite them to take a look at how other people drew the same thing, and to find differences and similarities. We discuss words like ‘nail’ (which could be a fingernail or toenail, or a nail you hit with a hammer – QuickDraw recognises both) and ‘bat’ (the animal, or the one you play baseball or cricket with).

We also discuss the fact that QuickDraw exists in different languages, but it does not seem to differentiate the images culturally. For example, if it asks you to draw an angel, you’d probably draw something with wings and a halo, right? In Arabic, it expects the same thing, whereas the majority Muslim population of the Arab world are religiously opposed to depicting angels in visual form. Also, to draw a hospital, you’re likely to put a cross on it to differentiate it from any other building, right? But in the Muslim world, most hospitals have a crescent on them, not a cross (which is what QuickDraw suggests). So this gives them a slight inkling of how AI perpetuates a bias towards the dominant knowledge data sets it has been trained on.

A screenshot from QuickDraw in Arabic: the first image is my doodle of an angel, which QuickDraw asked me to draw even though angels are not depicted in Islam; the second image, by QuickDraw, is a typical angel doodle, which shouldn’t feature in the Arabic-language version as most Arabic speakers are Muslim.

Recognising inequalities and biases within GenAI

We need to work on recognising “AI not as a neutral technology but as political technology that can be used to serve particular policy objectives and ideologies.” When speaking of inequalities related to AI, it helps to look back at work done in the years before the advent of ChatGPT, and to recognise that AI has historically been racist, sexist, and ableist, so we should always be wary of its engineered inequality.

Last semester, we dived into uncovering bias against the Palestinian cause, which we noticed across several AI platforms. ChatGPT’s responses to “Does Palestine have the right to exist?” and “Does Israel have the right to exist?” are quite different. On Palestine, it tends to start with how the issue is complex and sensitive, whereas on Israel, it tends to state that Israel is officially recognised to have the right to exist. If asked longer questions, it tends to always refer to the issue as the Israeli-Palestinian conflict rather than Palestinian-Israeli. This order reflects the prioritisation of Israel over Palestine in the texts ChatGPT has likely been trained on.
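This kind of comparison can be made systematic as a classroom exercise. The sketch below is my own hypothetical illustration, not from the original post: the `compare_paired_prompts` and `ask` functions are placeholders, and `ask` would be wired up to whichever chatbot the class has access to, so students can put paired responses side by side and discuss the differences in framing and hedging.

```python
# Hypothetical classroom sketch: collect a chatbot's answers to paired prompts
# so students can compare framing, hedging, and emphasis side by side.
from typing import Callable, List, Tuple

def compare_paired_prompts(ask: Callable[[str], str],
                           prompt_pairs: List[Tuple[str, str]]) -> None:
    for first, second in prompt_pairs:
        print("PROMPT A:", first)
        print("RESPONSE A:", ask(first), "\n")
        print("PROMPT B:", second)
        print("RESPONSE B:", ask(second))
        print("-" * 40)

# `ask` is a stand-in: in class it would call whichever GenAI tool is being used.
def ask(prompt: str) -> str:
    return "<the chatbot's response would appear here>"

compare_paired_prompts(ask, [
    ("Does Palestine have the right to exist?",
     "Does Israel have the right to exist?"),
])
```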

The bias was stronger in visual AI. My students wanted to make the AI imagine a free Palestine. The video that resulted had several problems: firstly, many of the images of free Palestine depicted destruction rather than prosperity; secondly, it had a lot of blank images because it could not find enough positive images of Palestine; and thirdly, and most shockingly, some of the images featured the Star of David extensively, conflating peace in Israel with the liberation of Palestine. When the students and I re-prompted it to remove the blank parts and include images of pre-1948 Palestine, the video became better, but it also increased the number of videos and images of the Israeli flag, the Star of David, Orthodox Jews, and Hebrew signage in the streets in parts where it was narrating Palestinian victory. This seemed to suggest that the AI tended to connect prosperity in the region with Israel, not Palestine.

Of course, when AI does work and is useful, such as for supporting people with visual impairments to recognise what is in their surroundings, it becomes important to ask about inequality of access: who has access to this technology? Is it affordable? Does it require internet or electricity that may be routinely unavailable or unstable for certain populations? When available, is it culturally nuanced, or is it trained on a white/Western/Northern dataset and unable to recognise a person’s local surroundings?

An AI image crafted around the headline of this blogpost: space with stars and an inconspicuous crescent moon (created in Microsoft Image Creator).

Examining ethical issues surrounding GenAI

Unethical practices in the creation and training of AI have been well documented, from the harm caused to Kenyan contract workers to its deleterious effect on the environment.

A problematic practice more pertinent to higher education is the amount of data used to train AI without necessarily getting consent from the creators, or citing the sources of the synthetic outputs. One thing few of us do with any tech platform, let alone AI platforms, is to read the terms of service and privacy policy before using it. Autumm Caines recommends we spend time doing this with learners before we invite them to use the tools. This may mean that some people will decide they do not want to use AI tools at all, and if so, as educators we need to ensure we have alternatives to any AI-based exercises we do – such as letting them use our own account for the assignment in question, or offering pair work or a more theoretical exercise related to AI.

some people will decide they do not want to use AI tools at all, and if they do so, as educators we need to ensure we have alternatives to any AI-based exercises we do

Developing prompt crafting skills

Prompt crafting is a term that I started using instead of the more commonly used prompt engineering, as it recognises the creative process of trying to create the prompt that yields the output we’re hoping for. Prompt engineering can be defined as “the process of iterating a generative AI prompt to improve its accuracy and effectiveness”. There are free courses on prompt engineering, but the most important things, pedagogically speaking, are modelling and practice. I model for students how I write prompts and refine them when I don’t get what I want, I showcase different AI platforms, and I include visual AI, which can benefit from different prompting approaches as well.

For example, if you simply ask a visual AI to create an image of a classroom, it won’t know what kind of classroom you want, or what style of image you want. But you can be more detailed and specify that it shows diverse representation in the classroom and/or a classroom where students are engaged (as opposed to disruptive). You can also specify whether you want the style to be realistic, cartoon-like, anime, or any other style. For text-based GenAI, I find Mark Marino’s tips really helpful as a starting point – a reminder that we can give the GenAI tool a personality (“You are a teacher…”), a context (“In a public university in Egypt”), and criteria for the output (“Create a syllabus for a course on X that contains three projects, two assignments, and 10 readings by Arab and American authors”).
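As a rough illustration of that persona–context–criteria structure, here is my own sketch (not part of the original post; `craft_prompt` is a made-up helper) showing how a prompt can be assembled from the three parts before being pasted into whichever GenAI tool students are using:

```python
# Minimal sketch of prompt crafting: assemble a prompt from a persona, a
# context, and explicit output criteria, then refine each part iteratively.
def craft_prompt(persona: str, context: str, criteria: str) -> str:
    return " ".join([persona, context, criteria])

prompt = craft_prompt(
    persona="You are a teacher.",
    context="You work in a public university in Egypt.",
    criteria=("Create a syllabus for a course on X that contains three "
              "projects, two assignments, and 10 readings by Arab and "
              "American authors."),
)
print(prompt)  # paste into the GenAI tool, review the output, then refine a part
```

Keeping the three parts separate makes the refinement loop visible to students: if the output misses the mark, they can ask themselves whether it is the persona, the context, or the criteria that needs rewording.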

I assign students activities to do using multiple AI tools and ask them to keep trying different prompts to improve the output of the AI tool – whether text-based or visual – modifying the way they write the prompt and the amount of data they give the tool. A really useful exercise is to compare what they would have written themselves with what the AI tool produced, and to assess the strengths and limitations of the tool on a variety of tasks, while learning to hone their own prompt-writing ability.

Assessing appropriate use of GenAI

We need to help our students recognise when AI can usefully help with a task (albeit with some risk of inaccuracy), which tasks require human attention, and which tasks AI has shown to be redundant altogether, not only for humans.

Botshit
In practice, GenAI performs better with topics familiar to its training data, and is more likely to “hallucinate” or produce “botshit” (if we use it uncritically) with material less familiar to it, or with unexpected combinations in prompts. When you are from my part of the world, you will notice it making random untrue associations with things closely connected to your own culture. A hilarious example of this was when I prompted Bard/Gemini to create a table of ancient and contemporary Egyptian kings/rulers with their photos and their achievements. For Muhammad Ali, who ruled Egypt from 1805 to 1848, it displayed a photo of Muhammad Ali, the American boxer. Visual AI is particularly striking when it hallucinates – we need to learn to identify such inaccuracies and not perpetuate them.

 

Muhammad Ali: what happens when AI puts the wrong face to the name? (A picture of a man with a beard in a turban and robe juxtaposed with a picture of a man with closely cropped hair in a shirt.)

Metaphors
I often invite people to come up with metaphors to represent AI. We discuss how some metaphors are helpful and others can be misleading, and how metaphors can promote utopian and dystopian views of AI. In this way, I get my students to adopt a critical approach to understanding how the metaphors we use can affect our attitudes towards AI.

I sometimes invite the students in my course to think of AI like cake-making: in this course, what do you need to create from scratch (without support), and what can you bake with support (for example, using a readymade cake mix to which you just add the wet ingredients)? For which assignments might you buy a readymade cake from a supermarket and just add the decorating touches? My students begin to see who has all the ingredients and skills to bake from scratch and who does not. How can we level the playing field so that no one feels the need to get unauthorised support from AI? Over two semesters, my students and I have reached the conclusion that if a course is not directly teaching students how to read or summarise, or how to cite correctly, then using AI tools to help them understand a complex reading or format their references is acceptable.

we need to decide which aspects of critical AI literacy our particular students need and can benefit from

What is critical AI literacy good for?

I want to be clear that I don’t believe that every teacher needs to teach all the elements of critical AI literacy, nor that every course should integrate AI into the syllabus. What I do believe is that we need to decide which aspects of critical AI literacy our particular students need and can benefit from. I think we should always question the use of AI in education for several reasons. Can we position AI as a tutor that supports learning, when we know AI hallucinates often? Even when we train AI as an expert system that has expert knowledge, are we offering this human-less education to those less privileged while keeping the human-centric education to more privileged populations? Why are we considering using technology in the first place – what problems does it solve? What are alternative non-tech solutions that are more social and human? What do we lose from the human socioemotional dimensions of teacher-student and student-student interactions when we replace these with AI? Students, teachers, and policymakers need to develop critical AI literacy in order to make reasonable judgments about these issues.

More resources and information on the AI literacy model and its dimensions and applications are available on Maha’s blog.

_____________________________________________________________________________________________

This post is opinion-based and does not reflect the views of the London School of Economics and Political Science or any of its constituent departments and divisions.

 

Images
Picture of Muhammad Ali, ruler of Egypt: Auguste Couder Bap in Picryl
Picture of Muhammad Ali, boxer: Ira Rosenberg in Picryl

About the author


Maha Bali

Maha Bali is a professor of practice at the Center for Learning and Teaching at the American University in Cairo, Egypt, and co-facilitator of Equity Unbound.

Posted In: AI
