
Fabian Lütz

January 7th, 2024

Book Review | More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech



In More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, Meredith Broussard scrutinises bias encoded into a range of technologies and argues that their eradication should be prioritised as governments develop AI regulation policy. Broussard’s rigorous analysis spotlights the far-reaching impacts of invisible biases on citizens globally and offers practical policy measures to tackle the problem, writes Fabian Lütz.

More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. Meredith Broussard. MIT Press. 2023.


As the world witnesses advancements in the use of Artificial Intelligence (AI) and new technologies, governments around the world, such as the UK, the US and the EU, as well as international organisations, are slowly starting to propose concrete measures, regulation and AI bodies to mitigate any potential negative effects of AI on humans. Against this background, More than a Glitch offers a timely and relevant contribution to the current AI regulatory debate. It provides a balanced look at biases and discriminatory outcomes of technologies, focusing on race, gender and ability bias, topics that tend to receive less attention in public policy discussions. The author’s academic and computer sciences background, as well as her previous book Artificial Unintelligence – How Computers Misunderstand the World, make her an ideal author to delve into this important societal topic. The book addresses algorithmic biases and algorithmic discrimination, which not only receive increasing attention in academic circles but are of practical relevance due to their potential impacts on citizens and the choices of regulation in the coming months and years.

[More than a Glitch] provides a balanced look at biases and discriminatory outcomes of technologies, focusing on race, gender and ability bias, topics that tend to receive less attention in public policy discussions

The book’s cornerstone is that technology is not neutral, and therefore racism, sexism and ableism are not mere glitches, but are coded into AI systems.

Broussard argues that “social fairness and mathematical fairness are different. Computers can only calculate mathematical fairness” (2). This paves the way to understand that biases and discriminatory potential are encoded in algorithmic systems, notably by those who have the power to define the models, write the underlying code and decide which datasets to use. She argues that rather than just making technology companies more inclusive, the exclusion of some demographics in the conceptualisation and design of frameworks needs to stop. The main themes of the book, which spans eleven short chapters, are machine bias, facial recognition, fairness and justice systems, student grading by algorithms, ability bias, gender, racism, medical algorithms, the creation of public interest technology and options to “reboot” the system and society.

Biases and discriminatory potential are encoded in algorithmic systems, notably by those who have the power to define the models, write the underlying code and decide which datasets to use.

Two chapters stand out in Broussard’s attempt to make sense of the problems at hand: Chapter Two, “Understanding Machine Bias” and more specifically Chapter Seven “Gender Rights and Databases”. Both illustrate the author’s compelling storytelling skills and her ability to explain complex problems and decipher the key issues surrounding biases and discrimination.

Chapter Two describes one of the major applications of AI: machine learning, which Broussard defines as taking

“…a bunch of historical data and instruct a computer to make a model. The model is a mathematical construct that allows us to predict patterns in the data based on what already exists. Because the model describes the mathematical patterns in the data, patterns that humans can’t easily see, you can use that model to predict or recommend something similar” (12).
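Broussard’s definition can be sketched in a few lines of code. The following is a deliberately minimal illustration of the idea, not an example from the book: the “model” here is a single learned threshold, and all of the data is invented.

```python
# A minimal sketch of the idea Broussard describes: build a "model"
# (here, a single learned cutoff) from historical data, then use it
# to predict something similar. All data below is invented.

def train_threshold_model(history):
    """Learn a cutoff from (income, repaid) records: the midpoint
    between the average income of repayers and of defaulters."""
    repaid = [inc for inc, ok in history if ok]
    defaulted = [inc for inc, ok in history if not ok]
    return (sum(repaid) / len(repaid) + sum(defaulted) / len(defaulted)) / 2

def predict(model, income):
    """Predict repayment by pattern-matching against the past."""
    return income >= model

history = [(55, True), (60, True), (70, True), (30, False), (35, False)]
cutoff = train_threshold_model(history)
print(predict(cutoff, 65))  # an applicant resembling past repayers
print(predict(cutoff, 32))  # an applicant resembling past defaulters
```

The point of the sketch is the one Broussard makes: the model encodes nothing but the patterns already present in the historical data it was given.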

The author distinguishes between different forms of training a model and discusses the so-called “black box problem” – the fact that AI systems are very often opaque – and the explainability of machine decisions. Starting from discriminatory treatment of bank loan applications, for example credit score assessment on the basis of length of employment, income or debt, the author explains with illustrative graphs how algorithms find correlations in datasets which could lead to certain discriminatory outcomes. She explains that, unlike humans, machines have the capacity to analyse huge amounts of data points, which enables banks, for example, to make predictions on the probability of loan repayment. The mathematics underlying such predictions are based on what similar groups of people with similar variables have done in the past. This complex process often hides underlying biases and potential for discrimination. As Broussard points out,

“Black applicants are turned away more frequently than white applicants [and] are offered mortgages at higher rates than white counterparts with the same data […]” (25).

The book also demonstrates convincingly that the owners or designers of the model wield a powerful tool to shape decisions for society. Broussard sums up the chapter and provides crucial advice for AI developers when she states,

“If training data is produced out of a system of inequality, don’t use it to build models that make important social decisions unless you ensure the model doesn’t perpetuate inequality” (28).
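Broussard’s advice implies a concrete pre-deployment check: compare a model’s decisions across groups before trusting it with important social decisions. The sketch below is a hypothetical illustration of such a check, not an implementation from the book; the group labels, decisions and data are all invented.

```python
# A hypothetical disparity check in the spirit of Broussard's advice:
# before deployment, compare a model's approval rates across groups.
# All labels and decisions below are invented for illustration.

def approval_rate(decisions, applicants, group):
    """Share of approvals among applicants in the given group."""
    relevant = [d for d, g in zip(decisions, applicants) if g == group]
    return sum(relevant) / len(relevant)

applicants = ["A", "A", "A", "B", "B", "B"]          # group labels
decisions  = [True, True, True, True, False, False]  # model's approvals

rate_a = approval_rate(decisions, applicants, "A")   # 3 of 3 approved
rate_b = approval_rate(decisions, applicants, "B")   # 1 of 3 approved
print(f"approval-rate gap: {rate_a - rate_b:.2f}")   # a large gap is a red flag
```

A gap like this does not prove discrimination by itself, but it is exactly the kind of signal that the audits Broussard later advocates are designed to surface.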

Chapter Seven looks at how databases impact gender rights, starting with the example of gender transition, which is recorded in official registers. This example illustrates the limitations of algorithmic systems as compared to humans, not only in light of the traditional binary system for assigning gender as male and female, but more generally the binary system that lies at the heart of computing. In both the gender binary and the computer binary framework, choices need to be made between one or the other, leaving no flexibility. Broussard describes the binary system as follows:

“Computers are powered by electricity, and the way they work is that there is a transistor, a kind of gate, through which electricity flows. If the gate is closed, electricity flows through, and that is represented by a 1. If the gate is open, there is no electricity, and that is represented by a 0” (107).
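The constraint the quote describes is visible in any programming language: a bit or boolean admits exactly two states, so any category stored that way forces an either/or choice. A small illustration (not from the book; the field name is invented):

```python
# A bit holds exactly two states, so a schema that stores gender as a
# single boolean forces every record into one of two categories.
# The field name below is invented for illustration.
record = {"name": "Sam", "gender_is_female": True}  # only True or False fits

# The same constraint at the level the quote describes: everything a
# computer stores is ultimately a pattern of 1s and 0s.
print(format(13, "b"))  # the number 13 as binary digits: 1101
```

The design question Broussard raises is therefore not about the bits themselves but about the schema: whether the categories built on top of them leave room for people the binary does not fit.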

When programmers design an algorithm, they “superimpose human social values onto a mathematical system.” Broussard urges us to ask ourselves, “Whose values are encoded in the system?” (109).

The resulting choices that need to be made within AI systems, or in forms used in administration, often do not adequately represent reality. People who do not feel represented by the options of male and female, such as gender non-conforming people, are asked to choose a category even though it does not reflect their gender identity. Here again, Broussard reminds us of the importance of design choices and the assumptions of coders, which impact people’s everyday lives. When programmers design an algorithm, they “superimpose human social values onto a mathematical system.” Broussard urges us to ask ourselves, “Whose values are encoded in the system?” (109). The chapter concludes with the challenge of making “technological systems more inclusive” (116) and argues that computers constitute not only mathematical but sociotechnical systems that need to be updated regularly in order to reflect societal change.

Computers constitute not only mathematical but sociotechnical systems that need to be updated regularly in order to reflect societal change.

The book successfully describes the invisible dangers and impacts of these rapidly advancing technologies in terms of race, gender and ability bias, making these ideas accessible through concrete examples. Ability bias is discussed in the chapter “Ability and Technology”, where she gives several examples of how technology companies try to provide technology that serves the disabled community in their daily jobs and lives. She gives the example of Apple stores where either sign language interpreters are available or where Apple equips employees with an iPad to communicate with customers. For consumers, she also highlights the VoiceOver screen reader software, auto-captioning and transcripts of audio, and the read-aloud functions of newspaper sites. Broussard points both to the advantages and the limitations of those technological solutions.

She also introduces the idea of tackling biases and discrimination with the help of audit systems

Readers are invited to reflect on concrete policy proposals and suggestions, on the basis of some ideas sketched out in the last chapter, “Potential Reboot”, where she shows her enthusiasm for the EU’s proposed AI Act and the US Algorithmic Accountability Act. She also introduces the idea of tackling biases and discrimination with the help of audit systems and presents a project for one such system based on the regulatory sandbox idea, which is a “safe space for testing algorithms or policies before unleashing them on the world” (175). The reader might wish that Broussard’s knowledge of technology and awareness of discrimination issues could have informed the ongoing policy debate even further.

In sum, the book will be of interest and use to a wide range of readers, from students, specialised academics, policy makers and AI experts to those new to the field who want to learn more about the impacts of AI on society.

 


About the author

Fabian Lütz

Fabian Lütz is currently a PhD candidate (University of Lausanne) researching EU, Gender Equality and Non-Discrimination law, Regulation of AI/Algorithms and Algorithmic Discrimination (publications: www.algorithmic-discrimination.com). Between 2015 and 2020, Fabian was a Legal Officer at the European Commission (Gender Equality Unit). Previously, he worked for law firms and completed his Referendariat at the European Commission and the CJEU.

Posted In: Book Reviews | Democracy and culture
