
Kayleigh Gibson

January 9th, 2025

Gender bias, AI, and deepfakes are promoting misogyny online


Estimated reading time: 4 minutes


Gendered biases in artificial intelligence, the accessibility of violent pornography, and normalised gender bias in mainstream internet culture can perpetuate misogyny and harmful stereotypes. Comprehensive measures need to be taken to address these issues, writes Kayleigh Gibson.

Online spaces perpetuate the biases of those who create them, because creators can only proactively protect users from harms they are aware of. A largely male cohort of engineers and tech entrepreneurs has sustained societal inequalities by building spaces where users can be harassed, stalked and abused anonymously.

Normalised biases pose a disproportionate risk to women and girls. This is particularly evident in artificial intelligence (AI) technology, which encodes and amplifies harmful stereotypes. For example, research shows that AI-generated imagery disproportionately associates certain professions and societal roles with specific demographics, marginalising women and minorities.

AI’s design features can also reinforce harmful gender biases. AI assistants are predominantly feminised, perpetuating stereotypes that women are subservient. When developing a voice interface for ChatGPT, OpenAI released a voice strikingly similar to Scarlett Johansson’s, who played an AI assistant love-interest in the film Her. The case highlights how female identities can be exploited for commercial gain by appealing to male sexualisation of women. Those programming AI systems are overwhelmingly white men, leading to biases in the development of AI tools and cybersecurity systems.

Search engines disproportionately filter out images of female bodies, often mischaracterising them as inappropriate or “racy”. A study revealed that a photo of a shirtless man’s chest was rated 22 per cent “racy”, while the same man wearing a bra was rated 97 per cent “racy”. Such biases hinder access to vital information on women’s health, fitness, and medical research. Images are classified according to the male designer’s viewpoint rather than the perspective of the female internet user seeking medical, health and fitness information, creating online spaces that work for the designers rather than diverse end users.

Deepfake pornography and the law

Deepfake pornography manipulates images and videos to create realistic but fabricated explicit content. In 2023, 98 per cent of 95,820 deepfakes found online were pornographic, and 99 per cent of those videos targeted women. Again, female identities have been exploited for commercial gain by appealing to male sexualisation of women. Despite significant trauma and reputational harm for victims, legal protections against such abuses remain inadequate. The European Union has taken steps to address cyberstalking, online harassment and incitement to hatred and violence. Scotland’s 2021 hate speech law criminalises stirring up hatred against protected groups but excludes misogynistic hate. The UK removed the requirement to prove intent to cause distress from the criminalisation of non-consensual image sharing, but enforcement remains inconsistent. Ultimately, platforms face little accountability for hosting harmful content or for profiting from it.

Campaigners are calling for the creation of non-consensual deepfake images to constitute a crime regardless of perpetrator intent. Until then, the exploitation of women’s sexual images without consent, coupled with the lack of robust oversight or age verification on mainstream platforms, perpetuates a cycle of harm. The normalisation of violent pornography on easily accessible platforms desensitises younger audiences to gender-based violence. Learning about intimate relationships through these channels leads younger generations to develop harmful expectations of gender roles and sexual behaviour.

The impact of internet culture

Mainstream internet culture trends are exacerbating misogynistic attitudes, and social media platforms have played a significant role in spreading harmful ideologies. Andrew Tate, who promotes misogyny as a supposed route to success, amassed millions of followers on social media and inspired copycat podcasters and influencers. Although platforms such as TikTok have banned Tate for his hateful ideology, X (formerly Twitter) has reinstated his account after previous bans for harmful rhetoric.

Misogynistic content may constitute incitement to hatred and violence, but it is only legally punishable where it falls within the relevant jurisdiction, and the harm it causes can be difficult to quantify. In practice, therefore, it is left to the discretion of social media platforms to remove content that breaches their own regulations and to protect users from harm.

Employing women in equitable numbers may reduce the prevalence of misogynistic content, as more diverse design teams would better represent the perspectives of all users rather than disproportionately reflecting those of male users. The rise of incel culture underscores the need for gender equity in content regulation. Meanwhile, seemingly innocuous trends such as “Girl Math” and “Trad Wife” reinforce stereotypes. “Girl Math” perpetuates the unfounded myth that women are financially incompetent, while “Trad Wife”, initially a celebration of homemaking, has been co-opted by far-right influencers to idealise regressive gender roles. These narratives normalise low-level misogyny, making it difficult to draw a clear line between hate speech and socially accepted behaviour.

Addressing the challenges

The scale of the problem stems from the normalised behaviours and biases that enable gender-based violence, as younger generations learn from internet culture and access harmful content. The rise in child perpetrators of sexual violence supports hypotheses that exposure to violent pornography can shape harmful behaviours and expectations.

Addressing these issues requires a multi-faceted approach:

  1. Laws should ensure accountability for those who profit from, create and distribute harmful, non-consensual content.
  2. Tech companies must diversify development teams and implement robust content moderation practices.
  3. Schools and community programs should promote digital literacy and foster critical thinking about the implications of violent pornography and harmful internet trends.
  4. Academic institutions and advocacy groups must continue investigating the societal impacts of AI and internet culture.

Currently, AI, deepfake pornography, and internet culture exacerbate gender bias and misogyny. However, these challenges also present opportunities to reset our legal protections and normalised culture. By addressing the structural inequities embedded in digital technologies and creating a more inclusive and equitable online environment, society can work toward mitigating these harms. A multifaceted approach is necessary to ensure that online spaces empower, rather than harm, internet users.


Photo credit: Ministerie van Buitenlandse Zaken used with permission CC BY-SA 2.0

About the author

Kayleigh Gibson

Kayleigh Gibson is a PhD Candidate in Security and Intelligence Studies at the University of Buckingham. Kayleigh will work with LSE’s Centre for Women, Peace and Security for her post-doctorate and is writing grant proposals for projects investigating the backlash against women’s rights and extreme online misogyny.

Posted In: Online Violence Against Women