This year, we were fortunate to review close to 200 exceptional essay entries to our Annual Essay Competition 2024. The Second Place winner is Bronwyn McCawley from Birkenhead Sixth Form College:
How might AI technologies influence the fairness and security of elections in 2024, and what should be done to mitigate potential risks?
The advent of Large Language Models (LLMs) and AI has continued the dramatic chain of technological innovation we have seen throughout the information age. This innovation is often posited as a lead driver of economic growth and societal development in the modern era, but there is often a failure to recognise the threat it poses to democracy (Schaake, 2024). Emerging LLMs are no different: as we begin to see the shift from AI-curated content to AI-produced content, the unique vulnerability of elections and democracy to AI technologies is becoming more apparent (Panditharatne, 2023).
Present-day LLMs, in brief, work as a form of hyper-charged autocomplete (utilising advanced algorithms trained on human-generated data to predict and generate text), rapidly generating, curating and disseminating vast amounts of human knowledge and epitomising the so-called modern ‘information crisis’. Increasingly, political messaging and the forums of democratic discourse are influenced by AI-generated content, especially with the rise of deepfakes and malicious disinformation. For example, ahead of the 2024 New Hampshire primary, a fake ‘robocall’ used an AI model to accurately simulate President Biden’s voice and urged voters not to participate (Noti, 2024). This unchecked influence of AI on elections could create a cycle of harm: deepfakes and misinformation flourish under a culture of deregulation in Big Tech and AI development, justified in the name of innovation, which in turn drives further unregulated AI advancement without concern for its broader sociological effects and makes political misinformation ever harder to counter (UNRIC, 2024). This suggests that not all technological advancement can coexist with democratic principles (Schaake, 2024), and that modern political systems remain insecure against the threat of ever-marching technological advancement.
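To make the ‘hyper-charged autocomplete’ description above concrete, the toy sketch below builds the simplest possible next-word predictor: a bigram model that counts which words follow which in some training text, then generates by repeatedly sampling a likely next word. Real LLMs replace the counting with neural networks trained on vast corpora, but the predict-append-repeat loop is the same.

```python
# Toy illustration of the "autocomplete" idea behind LLMs: a bigram model
# that predicts the next word from counts of word pairs in training text.
import random
from collections import defaultdict, Counter

def train_bigram(text: str) -> dict[str, Counter]:
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    model: dict[str, Counter] = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def generate(model: dict[str, Counter], start: str, length: int = 10) -> str:
    """Repeatedly sample a likely next word, exactly like autocomplete."""
    word, output = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        # Sample proportionally to how often each word followed in training.
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights)[0]
        output.append(word)
    return " ".join(output)

corpus = "the voters cast their ballots and the ballots were counted by hand"
print(generate(train_bigram(corpus), "the"))
```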
Furthermore, AI-driven disinformation is rising at an accelerating pace: we no longer see merely harmless misinformation, but targeted campaigns of disinformation used to sway entire populations. January of this year saw one such campaign against outgoing Taiwanese president Tsai Ing-wen, with AI-generated clips of news anchors revealing the ‘secret life’ of her and her party, as well as clips of former candidate Terry Gou endorsing a pro-China party (The Economist, 2024). Such tactics have the capacity to manipulate the fairness and security of elections, not only undermining national democratic processes but also shifting the balance of international geopolitical power. Nor is this disinformation limited to individual voters; it also shapes broader social dynamics. Social media platforms utilise AI technology that has been shown to contribute to polarisation and radicalisation, as users are sorted into ‘filter bubbles’. These filter bubbles provide the user with personalised, AI-curated content that works as an echo chamber, amplifying existing beliefs and drumming up intense emotional responses (Rodilosso, 2024). Such effects are often not mere by-products but an insidious result of the economic climate we live in: companies profit from the engagement that strong emotional responses generate, which discourages any solution to the problem. This environment compromises the fairness of global elections by producing a hyper-polarised, hyper-radicalised voter base that weakens democratic integrity.
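The filter-bubble dynamic described above can be illustrated with a deliberately simplified ranking rule (hypothetical scores, not any real platform’s algorithm): if a feed orders posts by stance agreement multiplied by emotional intensity, agreeable and inflammatory content rises to the top while challenging viewpoints sink.

```python
# A minimal sketch of engagement-driven feed ranking. Posts that match a
# user's existing stance and provoke strong emotion rank first, so the feed
# drifts toward an echo chamber that amplifies what the user already believes.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    stance: float   # -1.0 to 1.0, e.g. against vs. for a policy
    emotion: float  # 0.0 (neutral) to 1.0 (outrage-inducing)

def rank_feed(posts: list[Post], user_stance: float) -> list[Post]:
    """Order posts by predicted engagement: stance agreement times emotion."""
    def engagement(post: Post) -> float:
        agreement = 1.0 - abs(post.stance - user_stance) / 2.0
        return agreement * post.emotion
    return sorted(posts, key=engagement, reverse=True)

posts = [
    Post("outrage piece", stance=0.9, emotion=0.95),   # agreeable, inflammatory
    Post("calm agreement", stance=0.8, emotion=0.20),  # agreeable but measured
    Post("counter-view", stance=-0.7, emotion=0.60),   # challenging viewpoint
]
for post in rank_feed(posts, user_stance=0.85):
    print(post.title)  # the inflammatory, agreeable post always leads
```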
Despite this overall negative outlook on the relationship between democracy and AI, some scholars point to a brighter future in which such technologies could be leveraged to enhance security. A key issue with e-voting is cybersecurity and data handling; with this infrastructure now designated in the US as ‘critical infrastructure’ (CISA, 2017), there is increasing awareness of how especially vulnerable it is to cyberattack. However, utilising blockchain (a method of recording information in a network as a tamper-evident chain of ‘blocks’) alongside AI-powered tools such as LLMs could improve election security by offering a decentralised, transparent method of recording and verifying votes, improving transparency and identifying potential irregularities in real time (Kshetri and Voas, 2018).
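The core tamper-evidence property that makes blockchain attractive for vote recording can be sketched in a few lines (an illustration only; Kshetri and Voas describe full distributed systems with consensus across many nodes). Each record stores a hash of the previous one, so editing any past vote invalidates every hash that follows:

```python
# Minimal hash-chained vote ledger: altering any recorded vote breaks the
# chain of hashes, making the tampering detectable on verification.
import hashlib
import json

def make_block(vote: str, previous_hash: str) -> dict:
    """Bundle a vote with the hash of the block before it."""
    block = {"vote": vote, "previous_hash": previous_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; any edited vote invalidates the chain."""
    previous_hash = "0" * 64
    for block in chain:
        expected = make_block(block["vote"], previous_hash)["hash"]
        if block["hash"] != expected or block["previous_hash"] != previous_hash:
            return False
        previous_hash = block["hash"]
    return True

chain, previous = [], "0" * 64
for vote in ["candidate A", "candidate B", "candidate A"]:
    block = make_block(vote, previous)
    chain.append(block)
    previous = block["hash"]

print(verify_chain(chain))        # True: ledger intact
chain[1]["vote"] = "candidate A"  # tamper with a recorded vote
print(verify_chain(chain))        # False: tampering detected
```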
Beyond securing voting systems, AI technologies such as LLMs can also be powerful tools for addressing election-related misinformation. Such systems excel at identifying and analysing patterns in language and content to detect disinformation (Lee & Callegari, 2024). These models are especially suited to recognising and removing disinformation; however, they may not be wholly effective tools, depending on the rules enforced upon them. On social media, for example, AI-powered moderation is often heavy-handed in deeming content offensive and unacceptable (Kersley, 2023). While AI will be a critical tool, overly stringent content moderation on social media platforms may stifle legitimate political discourse, potentially undermining engagement.
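A toy version of such pattern-based flagging, under the assumption of a simple keyword-and-capitals heuristic (real moderation systems use trained classifiers, not keyword lists), also demonstrates the heavy-handedness problem noted above: crude rules catch legitimate political speech alongside genuine disinformation.

```python
# Crude pattern-based disinformation flagging, to illustrate both detection
# and the over-moderation failure mode of too-stringent rules.
import re

# Hypothetical signal phrases; real systems learn such patterns from data.
PHRASE_PATTERNS = [
    r"\bshocking truth\b",
    r"\bthey don'?t want you to know\b",
    r"\bdo not vote\b",
]

def suspicion_score(text: str) -> int:
    """Count crude disinformation signals: known phrases plus shouting."""
    score = sum(bool(re.search(p, text.lower())) for p in PHRASE_PATTERNS)
    if re.search(r"\b[A-Z]{5,}\b", text):  # long all-caps words read as shouting
        score += 1
    return score

def moderate(text: str, threshold: int = 1) -> str:
    """Remove any post whose suspicion score meets the threshold."""
    return "REMOVED" if suspicion_score(text) >= threshold else "allowed"

print(moderate("The SHOCKING truth they don't want you to know: do not vote!"))
print(moderate("URGENT reminder: polling stations close at 10pm tonight."))
# Both posts come back REMOVED: the second is a legitimate civic reminder,
# showing how an overly stringent threshold stifles lawful political speech.
```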
Ultimately, it is not AI itself that threatens the security and fairness of elections this year and beyond, but its concentration in large, unregulated and opaque private entities operating without government oversight (Schaake, 2024). With appropriate regulation, AI can support our democratic systems rather than undermine them, allowing elections that are both secure and fair. However, mitigating runaway deregulation and preventing monopolised control of AI models will be essential to protecting modern democracy.
Marietje Schaake and Tyson Barker (Lawfare, 2020) proposed a ‘new US-EU tech alliance’, taking advantage of the unique geopolitical order we live in to establish an era of international digital democracy through a multilateral regulatory framework, and opening a discussion on how the technology should be utilised and regulated on a global scale. This would combat the decline in technological openness the world has seen as key players in the industry have moved eastwards towards authoritarian regimes such as China, and would ensure these technologies work democratically for all citizens, upholding our modern principles of human rights.
In conclusion, emerging technologies have proven capable of revolutionising societies at large, yet they pose complex, multi-faceted problems when viewed in the context of preserving election security and fairness. The rise of disinformation and deepfake technology, along with AI-powered social media platforms, has driven the slide into hyper-radicalisation that political systems have endured over the past decade. This makes the need for stronger regulation of these technologies more urgent than ever, to give humanity time to stop and think about how LLMs and AI will influence our daily lives, and whether we are ready to trade democracy for digital innovation.
Reference List
Adav Noti, Campaign Legal Center, ‘How Artificial Intelligence Influences Elections and What We Can Do About It’, 2024.
Andrew Kersley, ‘The one problem with AI content moderation? It doesn’t work’, ComputerWeekly, 2023: https://www.computerweekly.com/feature/The-one-problem-with-AI-content-moderation-It-doesnt-work
Cathy Lee and Agustina Callegari, ‘Stopping AI disinformation: Protecting truth in the digital world’, World Economic Forum, 2024: https://www.weforum.org/stories/2024/06/ai-combat-online-misinformation-disinformation/
CISA, ‘Election Security’: https://www.cisa.gov/topics/election-security
Ermelinda Rodilosso, ‘Filter Bubbles and the Unfeeling: How AI for Social Media Can Foster Extremism and Polarisation’, 2024: https://link.springer.com/article/10.1007/s13347-024-00758-4
Marietje Schaake & Tyson Barker, ‘Democratic Source Code for a New U.S.-EU Tech Alliance’, Lawfare, 2020: https://www.lawfaremedia.org/article/democratic-source-code-new-us-eu-tech-alliance
Mekela Panditharatne, ‘How AI puts elections at risk – and the needed safeguards’, Brennan Center for Justice, 2023: https://www.brennancenter.org/our-work/analysis-opinion/how-ai-puts-elections-risk-and-needed-safeguards
Nir Kshetri & Jeffrey Voas, ‘Blockchain-Enabled E-Voting’, 2018: https://www.researchgate.net/publication/326239528_Blockchain-Enabled_E-Voting
Professor Luigi Zingales and Bethany McLean, Capitalisn’t, University of Chicago Booth Podcast Network, ‘Can Democracy Coexist With Big Tech? With Marietje Schaake’, Sep. 2024: https://open.spotify.com/episode/4ml8NyyyGcDNJGb3MJ1qk0?si=68c5450d43254a8c, https://www.capitalisnt.com/
The Economist, ‘Disinformation is on the rise. How does it work?’, 2024: https://www.economist.com/science-and-technology/2024/05/01/disinformation-is-on-the-rise-how-does-it-work
United Nations Regional Information Centre for Western Europe, ‘Can artificial intelligence (AI) influence elections?’, 2024: https://unric.org/en/can-artificial-intelligence-ai-influence-elections/
Bibliography
Anne-Gabrielle Haie, Tod Cohen, Andrew Golodny, Maury Shenk, Maria Avramidou, Elizabeth Goodwin, Vito Arethusa, Steptoe, ‘A Comparative Analysis of the EU, US and UK Approaches to AI Regulation’, 2024: https://www.steptoe.com/en/news-publications/steptechtoe-blog/a-comparative-analysis-of-the-eu-us-and-uk-approaches-to-ai-regulation.html
Eli Pariser, The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think, 2011
Marietje Schaake, The Tech Coup: How to Save Democracy from Silicon Valley, Princeton University Press, 2024
Sam Stockwell, CETaS, Alan Turing Institute, ‘AI-Enabled Influence Operations: Threat Analysis of the 2024 UK and European Elections’, 2024: https://cetas.turing.ac.uk/publications/ai-enabled-influence-operations-threat-analysis-2024-uk-and-european-elections
United Nations Educational, Scientific and Cultural Organization, Elections in Digital Times, 2022: https://unesdoc.unesco.org/ark:/48223/pf0000382102