As artificial intelligence (AI) and machine learning techniques increasingly leave engineering laboratories to be deployed as decision-making tools in Human Resources (HR) and related contexts, recognition of and concern about the potential biases of these tools grow. These tools first learn and then uncritically and mechanically reproduce existing inequalities. Recent research shows that this uncritical reproduction is not a new problem: the same has been happening among human decision-makers, particularly those in the engineering profession. In both AI and engineering, the consequences are insidious, but both cases also point toward similar solutions.
Bias in AI
One common form of AI works by training computer algorithms on data sets containing hundreds of thousands of cases, events, or persons, comprising millions of discrete bits of information. Using the cases with known outcomes or decisions (what is called the training set) and the range of available variables, the AI learns how to use these variables to predict outcomes important to an organisation or any particular inquirer. Once trained on this subset of the data, the AI can be used to make decisions for cases where the outcome is not yet known but the input variables are available.
For example, AI can “learn” using a training set that includes curriculum vitae data from job applicants and information describing already hired employees. After learning, AI can select applicants who have not yet been hired but who look like the kinds of persons the firm is most likely to hire based on applicants’ CVs. AI tools are being developed and adopted to help with decisions in hiring, promotion, college admission, and more.
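To make the mechanics concrete, here is a minimal sketch of that train-then-predict workflow, written in Python with scikit-learn. The feature names, values, and data are hypothetical illustrations, not any real vendor's system:

```python
# Minimal sketch of the train-then-predict workflow described above.
# All feature names and values are hypothetical illustrations.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Training set: CV-derived features for past applicants, labelled
# with the firm's actual hiring decisions (the known outcomes).
past = pd.DataFrame({
    "years_experience": [1, 4, 7, 2, 9, 3],
    "degree_level":     [1, 3, 3, 1, 2, 2],   # e.g. 1=BSc, 2=MSc, 3=PhD
    "hired":            [0, 1, 1, 0, 1, 0],
})

# The model learns which feature patterns predict "hired".
features = ["years_experience", "degree_level"]
model = LogisticRegression().fit(past[features], past["hired"])

# New applicants: outcomes unknown, but the same inputs are available.
new_applicants = pd.DataFrame({"years_experience": [5, 2],
                               "degree_level": [3, 1]})

# Each score reflects how closely an applicant resembles past hires.
print(model.predict_proba(new_applicants)[:, 1])
```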
Assuming that the existing patterns of selection decisions have been conducted fairly and correctly, AI can be an efficient aid in the decision process. If, however, the human-based selection processes upon which AI is trained are somehow flawed or biased, then not only will the AI be unable to detect and fix those biases, it will actually encode and perpetuate them. In this way, AI reproduces patterns of discrimination.
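A deliberately simplified simulation makes the mechanism visible. Everything below is synthetic and illustrative: past decisions carried a hidden penalty against one group, and a model trained on those decisions reproduces the gap even though the group label is withheld, because an innocuous-looking proxy variable carries the same information:

```python
# Synthetic illustration: a model trained on biased past decisions
# reproduces the bias without ever seeing the group label itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # two groups of applicants
skill = rng.normal(0, 1, n)              # identical skill distributions
proxy = group + rng.normal(0, 0.5, n)    # innocuous-looking correlate

# Past decisions rewarded skill but quietly penalised group 0.
hired = (skill - 1.0 * (group == 0) + rng.normal(0, 0.5, n) > 0).astype(int)

# The model is trained only on skill and the proxy, never on group.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
predicted = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted hire rate {predicted[group == g].mean():.2f}")
# The gap in the past decisions reappears in the model's predictions.
```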
“… accepting the idea that engineering is a meritocracy necessarily means that the observed under-representation of women (and people of colour) in engineering must be the result of fair and meritocratic processes.”
These machine-learning algorithms are designed to replicate with increasing accuracy the outcome decisions in their training data. Once learned by the AI program, the biases represented within the learning data set get codified and applied to incoming data for new decisions. Once embedded in the program's procedures for sorting through applicants, these biased practices appear to be neutral outcomes of an ostensibly objective process. Once encoded, these biases are even harder to detect and correct. The algorithm relies on and reproduces past practices.
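Because the encoded bias presents itself as a neutral score, it has to be surfaced by auditing outputs directly. One common check, sketched here with made-up numbers, compares selection rates across groups; the 0.8 threshold follows the "four-fifths rule" convention from US employment-discrimination practice:

```python
# Sketch of an output audit: compare selection rates across groups.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   0,   0,   0,   1],
})

rates = decisions.groupby("group")["selected"].mean()
ratio = rates.min() / rates.max()       # disparate impact ratio
print(rates)
print(f"impact ratio: {ratio:.2f}")     # below 0.8 flags possible bias
```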
Bias in engineering
In our longitudinal study of men and women pursuing engineering careers, we found what at first appeared to be an odd contradiction, which upon further analysis revealed a similar pattern of reproducing embedded preferences. First, most of the women in our study reported first-hand experiences of sexist treatment and exclusion by classmates, professors, internship co-workers and supervisors, and more. Second, despite these reported experiences, the women continued to express their belief that the engineering profession is a meritocracy that accurately recognises and rewards hard work and achievement. Although their own personal observations provided abundant evidence that engineering is not an objective meritocracy, they embraced the idea that it is. How does this happen?
Meritocracy – the idea that the best ideas and the best work win out – is an important value in the engineering profession. Part of the informal training of engineers via socialisation reproduces the belief that the profession is a meritocracy. While it may be meritocratic in many ways, it is an imperfect meritocracy composed of imperfect humans. But accepting the idea that engineering is a meritocracy necessarily means that the observed under-representation of women (and people of colour) in engineering must be the result of fair and meritocratic processes. As in the AI example, the possible role of bias in creating the current state is neglected. Instead, the current state is taken as the implicit standard.
In a manner similar to the AI example, where the AI learns how to reproduce the current state, male and female engineers alike internalise the idea that engineering ability may be unequally distributed by gender. Having internalised these ideas, efforts to promote gender diversity in engineering come to be viewed as a threat to the assumed meritocracy. Engineers come to see diversity-promoting efforts as coming at the cost of quality – a situation that can only arise if the gender segregation in engineering is the outcome of meritocratic processes. This diversity-quality trade-off is a commonly expressed view among engineers despite voluminous empirical evidence to the contrary.
“In AI and engineering education, currently flawed and unmeritocratic systems are the accepted standard taught and learned by both machines and persons. This way, systemic inequality is solidified rather than challenged.”
By accepting claims that the under-representation of women in engineering is the result of just and fair meritocratic processes, the potential criticisms of engineering by women remain isolated and muted. Experiences of exclusion get re-interpreted as anecdotes rather than as evidence of conventional and firmly institutionalised professional practices. Relevant experiences that reveal the non-meritocratic nature of engineering coalesce neither into broader institutional criticisms of engineering nor into calls for change.
Just as AI algorithms cannot overcome the limitation of their training on existing data, engineering education will not by itself self-correct routine habits and organisational patterns. In both cases, AI and engineering education, currently flawed and unmeritocratic systems are the accepted, conventional standard taught and learned by both machines and persons. This way, systemic inequality is solidified rather than challenged.
Solutions?
Efforts to correct bias in AI and engineering can learn from each other. With AI, the biased output is not taken as evidence of faulty AI, but as a problem that requires solving. Biased output from AI is recognised as the necessary result of uncritical training that assumes the status quo should be the standard. Solving this problem requires being critical about the current state of a system and working to identify and counteract any biasing processes that are present. The same applies to engineering and other human endeavours.
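What counteracting a biasing process can look like in practice: the sketch below uses reweighing, in the spirit of Kamiran and Calders' pre-processing technique, which weights training examples so that group membership and the historical outcome become statistically independent before a model is fit. The data and column names are hypothetical, and this is one possible intervention among many, not a complete remedy:

```python
# Sketch of one bias-countering step: reweigh training examples so
# that group and outcome are statistically independent before fitting.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def reweigh(df, group_col, label_col):
    """Weight = expected / observed frequency of each (group, label) cell."""
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)
    return df.apply(
        lambda row: p_group[row[group_col]] * p_label[row[label_col]]
                    / p_joint[(row[group_col], row[label_col])],
        axis=1,
    )

# Hypothetical training data in which group B was favoured in the past.
train = pd.DataFrame({
    "skill": [3, 5, 7, 2, 6, 8, 4, 9],
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [0, 0, 1, 0, 1, 1, 0, 1],
})

weights = reweigh(train, "group", "hired")
model = LogisticRegression().fit(train[["skill"]], train["hired"],
                                 sample_weight=weights)
```

The same logic extends to other interventions, such as removing proxy variables or auditing and adjusting outputs; the common thread is refusing to treat the historical data as the standard.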
But when humans are involved, we tend to focus on individual decision-making and prejudice, and ignore more systemic biasing processes. We make limited progress toward equality because the current diagnosis and the cure share a basic misconception: that inequality and bias are problems of individual decision-making. While prejudice and other individual-level biases exist and continue to contribute to inequality, the sources of bias in engineering and elsewhere are not solely individual. Many important sources of bias are collective, deeply embedded in routine forms of interaction and communication that constitute the everyday world as familiar and comfortable and meritocratic.
We learn to be adults and citizens in interaction with others, most often in small personal groups. That learning includes the motivations, drives, and rationalisations for our actions. Bias and stereotypes, discrimination and difference are executed and reinforced not only in moments of individual decision-making, like allocating students to work teams, or hiring or promoting one person rather than another, but also in day-to-day intergroup interactions and relations (or lack thereof) at work. So long as the organisation of work and patterns of interaction are masked by an exaggerated attention to the actions of individuals, we will be unable to recognise the durable structures that sustain these patterns. And machine learning and AI certainly cannot improve upon persistent patterns unless specifically designed to reject the existing patterns in their learning data.
Rather than thinking about discrimination as a series of individual instances of problematic choice-making, we might think about how to participate in a culture and institutional arrangement of equality, and then ask what supports (or challenges) that culture. How is that culture learned, and what are its lessons or messages? From this transactional and relational perspective of human behaviour, we might think of engineering education as integrating lessons in organisation, management, sociology and anthropology with mechanics, magnetism, and thermodynamics. Preparation for a career in STEM might include attention to the organisation of laboratories, the incentives and pitfalls of different forms of funding, and the role of gender in group and team activities. Preparation for a career in computer science, for example, might also pay special attention to historical examples of technological catastrophes and the transformation of technologies into systems of social control. To the extent, however, that engineering education supports claims of individualist causality rather than subverting them by actively tracing and revealing how social structures link the particular to the general, there is limited prospect for reducing the hegemony of engineering's meritocratic ideology. True meritocracy is a worthwhile goal. Asserting or assuming that a non-meritocratic system is a meritocracy is not. It is a practice that perpetuates gender (and other) inequalities.
♣♣♣
Notes:
- Some of the findings discussed come from a project called “Future Paths: Developing Diverse Leadership for Engineering,” funded by the National Science Foundation (Grant # 0240817, 0241337, 0503351, & 0609628). Any opinions, findings, and conclusions or recommendations expressed in this material are the authors’ own and do not necessarily reflect the views of the National Science Foundation.
- This blog post is based on the authors’ paper ‘I am Not a Feminist, but…’: Hegemony of a Meritocratic Ideology and the Limits of Critique Among Women in Engineering, Work and Occupations, 45(2), 131-167 (2018).
- The post gives the views of the author, not the position of LSE Business Review or the London School of Economics.
- Featured image by Fabian Grohs on Unsplash
Brian Rubineau is an associate professor of organisational behaviour at the Desautels Faculty of Management at McGill University. His research investigates how informal social dynamics contribute to inequalities in occupations and labour markets.
Susan Silbey is Leon and Anne Goldberg professor of humanities, professor of sociology and anthropology, and professor of behavioural and policy sciences, Sloan School of Management, at the Massachusetts Institute of Technology. She also serves as chair of the MIT faculty for the 2017-2019 term. Her current research focuses on the creation of management systems for containing risks, including ethical lapses, as well as environment, health and safety hazards.
Erin Cech is an assistant professor of sociology at the University of Michigan. She was previously a postdoctoral fellow at the Clayman Institute for Gender Research at Stanford University and was on faculty at Rice University. Her research examines cultural mechanisms of inequality reproduction – specifically, how inequality is reproduced through processes that are not overtly discriminatory or coercive, but rather those that are built into seemingly innocuous cultural beliefs and practices.
Carroll Seron is a professor in the department of criminology, law and society at the University of California, Irvine. She also holds appointments in the department of sociology and the School of Law. At UCI, she has served in numerous roles. During 2011-2012, she was a visiting professor at Flinders University in Australia and at the Institute for Advanced Legal Studies in England. Her research agenda primarily focuses on the organisations and professions of law.