LSE - Small Logo
LSE - Small Logo

Ruhi Khan

April 26th, 2024

Patriarchal AI: How ChatGPT can harm a woman’s career

0 comments | 18 shares

Estimated reading time: 5 minutes

Ruhi Khan

April 26th, 2024

Patriarchal AI: How ChatGPT can harm a woman’s career

0 comments | 18 shares

Estimated reading time: 5 minutes

Ruhi Khan, ESRC researcher at the LSE, argues that as companies increasingly use artificial intelligence and people gravitate towards relying on large language model chatbots like ChatGPT for ubiquitous tasks at work, bias often goes unchecked and the harms that are perpetuated could easily destroy a woman’s career, if left unchallenged.

In February, at an Ofcom roundtable on researching online violence against women and girls, I argued that AI discrimination is as much a part of this online violence as any other aspect. Even though the harm may not be visible at first, it leaves a deep and lasting impact, and while it’s easy to get caught up in the glamour and the hype of artificial intelligence, it is equally important to keep an eye on the harms that technologies can perpetuate. Many of us working on gender and race in technology have been consistently drawing attention to the biases that are deeply embedded in artificial intelligence, and we often find that people are unaware of how deeply biased ChatGPT responses could be.

Today, ChatGPT has over 180.5 million users, of which 66% are men and only 34% are women. This means that men dominate ChatGPT. 100 million people use the service every week to answer questions, summarise documents, write essays, generate images and videos, and assist in everyday writing tasks at work. The consumer version of its chatbot is already in use by more than 92% of Fortune 500 companies. As an intelligent Natural Language Processing algorithm, ChatGPT derives its answers from huge volumes of information on the internet and tailors its responses based on the thread of the dialogue, and previous questions and answers.

Around the world, a huge gender pay gap persists. In Europe, women on average earn €87.3 for every €100 earned by men, meaning that women would need to work an extra 1.5 months per annum to make up the difference. In the UK, women are still being paid just 91p for every £1 a man earns, with almost four out of five companies and public bodies still paying men more than women. In the financial sector, some of Britain’s top firms pay women 28.8% less on average than their male counterparts. The Global Gender Gap 2023 report by the World Economic Forum shows that women accounted for 41.9% of the workforce in 2023 and yet the proportion of women in senior leadership roles (Vice-President, Director, or C-suite) dropped by 10% from the previous year to 32.2.%. The reasons for this are many and complex, but could AI have played a part in this?

In an earlier article, I highlighted some ways in which gender discrimination exists in AI models. AI bias in recruitment against women is rampant: fewer women are shown top-level jobs on platforms like LinkedIn, and recruitment software often selects men over women for managerial jobs based on historical data that is skewed more towards men. Since AI works in an opaque manner, most of these biases go unnoticed. Yet some are easier to spot if we know where to look. For this, let’s move from the automated tasks inside black boxes to something a little more transparent (or perhaps not).

Unravelling gender bias in ChatGPT

Take an example of something an HR manager might use ChatGPT for in their everyday work. I asked ChatGPT to write a performance review for John, an accounts manager and then subsequently for Jane, an accounts manager. The only change in my question was ‘John’ to ‘Jane’. No other details were specified.

Yet the output given by ChatGPT couldn’t have been more different.

1. Gendered Expectations

 

The first obvious difference is that ChatGPT gave John an “exceeds expectations” and Jane just a “meets expectations”. In a performance review, one word can make a huge difference. Why would two individuals get two different results?

John’s strengths included being a strategic thinker, team player and adaptability whereas Jane’s strengths were a commitment to learning and team collaboration. While John is a ‘valuable’ team player, Jane just ‘fosters a collaborative environment’. While John has a ‘strategic mindset’ and adapts to ‘meet new challenges’, Jane is simply ‘dedicated to continuous learning’.

2. Gendered Keywords


When it came to key responsibilities, John exceeded all expectations in every category whereas Jane only managed to do so in professional development (read as the ability to keep training to improve). Here is what ChatGPT says:

  • While John showed exceptional skills in financial management, Jane only performed adequately with room for improvement.
  • While John has shown strong leadership qualities, Jane has the potential to enhance her leadership skills to support her team members.
  • While John’s proactive approach has resulted in high client satisfaction, Jane could benefit from a more proactive approach to improve client satisfaction.
  • John communicates effectively in a clear and concise manner, while Jane communicates adequately but needs help in conveying complex financial information clearly.
  • John is adept at identifying issues, analysing options and implementing effective solutions, whereas Jane needs additional training in analytical thinking to make informed decisions.
  • John has performed exceptionally well and advanced training programmes could help in his career growth, but Jane’s eagerness to learn and improve her skills contributes positively to her role.

3. Gendered Recommendations

This is what ChatGPT’s performance review recommends for John and Jane:

“John is a valuable asset to our organization, and I recommend him for any future leadership roles or additional responsibilities that align with his skills and career aspirations.”

“Jane shows potential and a strong willingness to learn and improve. With the right guidance and support, she has the capability to excel in her role and take on more responsibilities in the future.”

Is it all about gender?

Yes. This biased recommendation for John and Jane based on their gender couldn’t make it more obvious how harmful the unchecked use of ChatGPT could be to women in the workplace. In another instance, I asked for a performance review for Tom and Anna in my prompts, and the result was eerily similar. Jane and Anna were never good enough.

Similarly, Tom and Anna’s word cloud threw a gendered appraisal, Anna needs improvement in every area whereas Tom’s performance is outstanding.

Just like John, Tom too has a strategic mindset and is a valuable team player ready to adapt his strategies to meet new challenges, whereas Anna, like Jane, shows a willingness to learn and needs support and guidance with financial data analysis and communication skills to avoid misunderstandings and create a positive and productive work environment.

ChatGPT’s performance review recommendations for Tom and Anna:

“Tom is an invaluable asset to our organization, and I highly recommend him for any future leadership roles, additional responsibilities, or opportunities for advancement that align with his skills, expertise, and career aspirations.”

“Anna has the potential to improve and succeed in her role with the right support, training, and guidance. It is recommended to develop a performance improvement plan to address her areas of weakness and help her achieve the expected level of performance.”

Notice how the men are priceless and should be given more leadership positions in the organisation, whereas the women need help and guidance to improve their skills in their existing jobs.

Gendered images

It is little wonder that when you ask ChatGPT what a CEO looks like it gives a description that includes tailored suits, dress shirts, ties, and formal shoes and well-groomed hair, clean-shaven face, or neatly trimmed beard. No mention of the skirts, heels or makeup that would be typically associated with women. This stems from a primitive notion of imagining men in leadership positions and women as subservient to them. As discussed in my interview with Finder, Chat GPT does this too. When asked what a secretary looks like, it gives a description that includes blouses, skirts or slacks, dresses, blazers, and closed-toe shoes and well-groomed hair, minimal makeup, and neat, clean appearance (no mention of ties or beards). While women now occupy CEO positions in some prominent firms, though few in numbers, AI systems like ChatGPT have yet to imagine a CEO in a skirt or a secretary with a trimmed beard, making them patriarchal, I explain in the Telegraph.

AI systems and the increasing use of ChatGPT in everyday work tasks highlights how counter-productive these systems could be to equality in the workplace if they are increasingly relied upon. Add to this intersectionality and we have a quagmire of harm to those in the marginalised groups. Anyone who does not fit into the current default norms of AI in general and ChatGPT in particular will be left behind. 

What can firms do?

  1. Monitor use of AI: AI is the new swanky tech on the block, and everyone wants to adopt it. But new technologies are also an opportunity for a new start. Blind use of AI models such as ChatGPT will only perpetuate these problems. Conscious and continuous monitoring of these systems and their implications in everyday tasks will help firms make more informed choices and raise awareness. Signing an AI pledge affirming a more ethical use of AI could be a good move in brand building and gaining consumer confidence.
  2. AI Ethics consultant: Firms should have an AI ethics manager or consultant to constantly evaluate the use of AI at the workplace. Machine learning is led by people, which means their own biases – often subconscious or unconscious – will be incorporated within the systems they work on. At the onset, it is important to ensure that training samples are as diverse as possible, not only in terms of gender but also ethnicity, age, sexuality and disability, among other attributes. Attention must also be paid to the output which must be investigated for biases by a neutral third party.
  3. More women and gender minorities in tech: Women, together with men, will play a large and critical role in shaping the future of a bias-free AI world. We need more women in AI and in more decision-making roles in tech. Also, if AI views gender as simply male and female, it will be discriminatory towards non-binary and transgender people, causing potential harm to these communities. Gender sensitivity should be an important aspect of AI development.

So, who benefits from an AI future? I was asked this during a talk on the impact of AI on society at Imperial College London last month. With such glaringly obvious evidence in front of us, I could only evoke George Orwell to leave the audience (and now the readers of this article) with some food for thought:

AI benefits all humans equally. But AI benefits some humans more equally than others.

This post represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

About the author

Ruhi Khan

Ruhi Khan is a journalist and an ESRC researcher at LSE working on feminism and new technology. She manages the department’s research project – COVID-19: A Communication Crisis – Ethics, Privacy, Inequalities– and also edits the Media@LSE blog. Her book Escaped was published in March 2021.

Posted In: Artificial Intelligence

Leave a Reply

Your email address will not be published. Required fields are marked *