As generative AI disrupts industries, global policymakers are trying to keep up with the fast pace of AI’s development, seeking to maximise its opportunities while minimising its associated risks and harms. AI’s potential to have a positive impact on the world will depend on the adoption of a unified approach to regulation, argues Sara Boscolo Zemello, but this is hindered by the current patchwork of global laws.
During Italian Tech Week – an annual event promoting technology and innovation, which I attended in Turin, Italy, at the end of September – Sam Altman, co-founder and CEO of OpenAI, delivered a keynote on the limitless potential of AI. On the risks associated with AI, Altman highlighted misinformation, bias and cyberattacks. However, he stressed that over-regulation can be detrimental, as it could stifle AI’s development.
Altman’s speech illustrated the tricky balance between promoting technological innovation and regulating it. AI has the capacity to address significant global challenges such as disease, climate change and sustainability, but its potential depends on responsible and ethical use. AI is already making supply chains more efficient, improving medical diagnosis, treatment and healthcare delivery, and it has the potential to reduce worldwide emissions of greenhouse gases.
On the other hand, regulatory concerns around the use of AI include its reinforcement of human bias, its potential to spread misinformation, to enable cyberattacks, to widen the digital divide, to complicate data protection and to increase energy consumption. One instance of AI’s potential for harm is a concerning case of AI bias in healthcare that came to light in 2019: six algorithms, affecting 60-100 million patients, showed a racial bias favouring white patients over black patients with equivalent levels of illness. The bias stemmed from the algorithms being trained on historical insurance claims data: they predicted future healthcare costs from past spending, and since less money had historically been allocated to black patients than to white patients, the models perpetuated existing healthcare disparities.
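To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python. The numbers and group labels are hypothetical, not taken from the 2019 study; the point is only to show how a spending gap becomes a scoring gap when predicted cost stands in for health need.

```python
# A minimal, hypothetical illustration of proxy-label bias.
# An algorithm scores patients by *predicted cost* as a stand-in for
# *health need*, but historical spending per unit of illness differs by group.

illness = 8.0  # same true illness level for two patients (arbitrary units)

# Hypothetical historical spend per unit of illness, reflecting unequal
# access to care rather than unequal need:
spend_per_unit = {"group_a": 1000.0, "group_b": 600.0}

# A cost model trained on this history learns the group-specific rates,
# so two equally sick patients receive different "risk" scores:
scores = {group: illness * rate for group, rate in spend_per_unit.items()}
print(scores)  # {'group_a': 8000.0, 'group_b': 4800.0}

# If extra care is offered above a score threshold, the group with lower
# historical spending is systematically under-enrolled despite equal need:
threshold = 7000.0
for group, score in scores.items():
    print(group, "flagged for extra care:", score >= threshold)
```

Nothing in this sketch mentions race explicitly: the disparity enters entirely through the training target, which is why cost-based proxies can encode bias that equal-treatment rules on inputs do not catch.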
Stakeholders ranging from technology companies to policymakers and researchers are calling for a global approach to regulating AI, to address its associated risks and harms.
In May, a group of AI experts, engineers and CEOs issued a joint statement warning about the threat they believe AI poses to humanity. The statement – signed by CEOs and scientists from companies such as OpenAI, Google DeepMind and Microsoft – invoked the threat of human extinction and urged that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. These are meaningful concerns, and it is vital to address them alongside the more imminent challenges that demand our immediate focus.
We need to chart a different path for AI than the route we took with social media. While these digital platforms have played a pivotal role in fostering global connectivity and information exchange, and in empowering movements like Black Lives Matter, they have also exposed us to serious consequences, including privacy violations, misinformation and polarisation. Drawing lessons from the journey of social media, we should approach the development of AI with a balanced perspective, aiming to harness its potential for good while mitigating its potential harms.
To chart a path that encourages fair and responsible use of AI, a global approach to regulation is needed. International and multi-stakeholder institutions could help advance AI development while minimising its risks, by setting worldwide guidelines for detecting and handling models with harmful capabilities, and by facilitating global consensus on the harms that certain AI capabilities pose to society.
At Italian Tech Week, Brando Benifei, Member of the European Parliament and co-rapporteur of the AI Act, discussed the critical importance of user data management, consumer protection and the safeguarding of fundamental rights in the age of AI. He also stressed that transparency is key to distinguishing between human and AI-generated content. With the AI Act we are seeing the harmonisation of digital markets within the EU, but Benifei argued that we need to go further, calling for enhanced international cooperation to navigate the evolving AI landscape effectively.
Different regulatory approaches
However, the introduction of differing AI regulations by governments around the world is increasing fragmentation and regionalisation, making a global approach to AI regulation difficult to achieve. The European Union, the US and China have taken different regulatory approaches, in line with their differing cultural norms and legislative landscapes.
The EU wants to be the main global tech regulator with its AI Act, which is in its final legislative stages. The EU’s ambition is to set the worldwide standard for AI regulation, much as it did for data protection with the General Data Protection Regulation (GDPR).
The Act’s central approach is to classify AI tools and models by the level of risk they pose, using a framework with four tiers: unacceptable, high, limited and minimal. Tools for biometric identification, education, workforce management, the legal system and law enforcement fall within the high-risk category, and would need to be assessed and approved before deployment.
AI devices and systems considered to pose an “unacceptable” risk – such as government social scoring and real-time biometric identification systems in public spaces – will be prohibited in the EU. The Act also makes transparency central to distinguishing between human and AI-generated content: disclosure of artificially generated content would be mandatory.
The US has traditionally favoured innovation over regulation, preferring the market to establish its own self-regulatory standards, and the American tech industry’s rivalry with China has reinforced this strategy. When it comes to AI regulation, the US had until recently been lagging behind. In July, the Biden-Harris Administration secured voluntary commitments on transparency and security from seven leading AI companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.
However, at the end of October, President Joe Biden issued an executive order intended to enhance artificial intelligence safety and security. Its measures include requiring AI companies to share safety test results with the federal government, protecting consumer privacy, preventing AI-driven discrimination, and creating guidelines for the use of AI in the justice system. The order also initiates programmes to evaluate AI-related healthcare practices. The US will collaborate with international partners to establish worldwide AI standards, and the creation of a new AI safety institute was also announced at the UK-hosted AI Safety Summit.
Beijing sees AI as a tool for achieving its economic and geopolitical goals and for securing a privileged position in its technology race with the US. Its new regulations on generative AI focus on control and power, with respect for socialist values at their core. Generative AI providers would need to “uphold the integrity of state power, refrain from inciting secession, safeguard national unity, preserve economic and social order, and ensure the development of products aligning with the country’s socialist values”, according to a Forbes contributor. In other words, AI companies in China must ensure that their technology does not challenge the government, disrupt the country, or go against socialist values.
It is evident that the US, EU and Chinese models each reflect the national interests and societal values of their respective regions. These divergent frameworks may clash with one another: complying with Beijing’s insistence on socialist values, for instance, may directly conflict with US and EU norms. Such divergent approaches, criteria and values make the AI regulatory landscape increasingly fragmented and complex, and a global approach to AI difficult to achieve. Hence, there is a pressing need for governments, businesses and researchers to engage in deep discussions about justice, human values, and the economic considerations surrounding AI.
This post represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.
Featured image: Photo by Michael Dziedzic on Unsplash