The European Union plans to apply specific regulations, backed by hefty fines, to enforce the ethical application of artificial intelligence. AI inherits biases from people. However, in a global marketplace, AI should be as bias-free as possible, as it may be applied beyond the borders of the country in which it was initially developed. Hector Gonzalez-Jimenez writes that if we want to create ethical AI, it is crucial to understand the link between culture and ethics. He suggests two proactive steps to start dealing with the challenge.
A recent article highlights that the European Union (EU) is planning to fine companies up to 4% of their global annual turnover if their artificial intelligence (AI) applications endanger the safety or fundamental rights and freedoms of persons. Given the increasing number of AI applications across sectors and industries, this initiative seeks to encourage the development and use of trustworthy, human-centric AI that reduces potential harms to citizens.
AI certainly offers numerous benefits in terms of efficiency and convenience. Nevertheless, one of the concerns raised is the decision-making performed by these AI systems, which can lead to discriminatory and thus unethical practices. This concern is central because, when it comes to ethics, there are already many complexities even in human interactions. Owing to their cultural backgrounds, individuals can differ greatly with regard to what is ethically appropriate. This raises the question: whose ethics should AI reflect in the global marketplace to reduce potential harms and inequalities?
Recent articles show that AI can be biased and even racist, leading to unethical decision-making. For instance, AI-empowered computer vision is used to automatically label images, with the potential for decisions to be made based on those labels. In the midst of the COVID-19 pandemic, research showed that a computer vision service by Google labelled a picture of a dark-skinned person holding a handheld thermometer as a “gun”, whereas a similar picture with a light-skinned person was labelled as an “electronic device”. The difference between gun and electronic device can lead to unethical profiling, marking the dark-skinned person as dangerous. Now imagine that this AI-empowered software is used in law enforcement. The dark-skinned person would be placed in unfair danger simply for holding a device as harmless as a thermometer. Such inherited bias likely stems from unfair, societally and culturally embedded stereotypes that have been fed, directly or indirectly, into the algorithm. Therefore, if we want to create ethical AI, it is crucial to understand the link between culture and ethics.
Geert Hofstede describes culture as “the collective programming of the mind that distinguishes the members of one group or category of people from others”. Culture influences how a person views the world and can determine how a person judges different ethical scenarios. For instance, gift-giving in business is prevalent and widely accepted in Chinese society, yet in other cultural contexts such gifts may quickly be seen as a form of bribery. If detected, should an AI report and “punish” such practices? Now think even further: imagine an AI-powered military device that needs to decide whom to kill or save. Culturally induced ethical biases in such a scenario could lead to terrible outcomes. This raises the question: where do AI biases come from, and how can we tackle this problem?
A recent article states that many AI systems are programmed and designed by people with similar backgrounds. For instance, the article calls it a Silicon Valley approach to ethics, referring to the large number of technology companies based in that part of the world. The authors argue that many technology students may be trained in a singular approach to ethics that embraces western perspectives. Meanwhile, countries such as China, South Korea, and Japan are investing heavily in AI development, and AI developed in these nations may likewise carry ethical biases rooted in their own cultural frames. However, in a global marketplace, AI should be as “bias-free” as possible, as it may be applied beyond the borders of the country in which it was initially developed.
What should we do?
As mentioned above, the EU plans to enforce specific regulations with hefty fines to control the application of unethical AI. However, preventative and proactive measures should arguably also be considered.
First, diversity and collaboration in the programming of AI will be key. It is essential that stakeholders in AI development across the globe work together to ensure that major ethical biases are eliminated. This will be challenging, as companies may be reluctant to collaborate because of intellectual property and technology protection concerns. It is therefore important to have an international governing body for AI ethics that sets a global standard to which AI developers should adhere; indeed, this is one of the initiatives proposed by the EU. Such a governing body should be composed of interdisciplinary teams that include specialists from fields such as computer science, philosophy, law, engineering, and business.
Second, individuals should act responsibly online. Some AI applications draw and learn directly from what is available on the internet. For example, AI can process text, sound recordings, and images to learn what human behaviour looks like. Thus, we as humanity need to act responsibly in what we post online to reduce biases, hatred, and stereotypes. Much like a child, an AI that learns from racist messages online may develop racist tendencies. Freedom of speech online must of course still be protected, but ethical conduct should be prioritised.
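To make this mechanism concrete, here is a minimal, purely illustrative sketch (the posts, labels, and group names are all invented for the example, and this is not the code of any real system): a crude model that merely counts which words co-occur with which labels in “scraped” posts will reproduce whatever skew those posts contain.

```python
from collections import Counter, defaultdict

# Hypothetical "scraped" posts with tone labels. The corpus is deliberately
# skewed: every mention of group_a is negative, every mention of group_b
# is positive. (All names and data here are invented for illustration.)
posts = [
    ("group_a people are dangerous", "negative"),
    ("group_a person caused trouble", "negative"),
    ("group_b people are friendly", "positive"),
    ("group_b person helped a neighbour", "positive"),
]

# Count how often each word co-occurs with each label -- a crude,
# naive-Bayes-style statistic standing in for "learning from the internet".
word_label_counts = defaultdict(Counter)
for text, label in posts:
    for word in text.split():
        word_label_counts[word][label] += 1

def score(text: str) -> Counter:
    """Score a new post by the labels its words co-occurred with in training."""
    tally = Counter()
    for word in text.split():
        tally.update(word_label_counts[word])
    return tally

# A neutral sentence is scored negatively for group_a and positively for
# group_b purely because of the skewed corpus: the bias is inherited, not reasoned.
print(score("group_a person walking"))  # Counter({'negative': 3, 'positive': 1})
print(score("group_b person walking"))  # Counter({'positive': 3, 'negative': 1})
```

The point of the sketch is that the model performs no reasoning at all: it simply mirrors the statistics of its training data, which is why the quality of what we post online matters.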
These are just some thoughts. The initial solutions may not be perfect, but critically tackling and raising awareness about these issues is already a great step toward a safer and more ethical future for all of us.
♣♣♣
Notes:
- This blog post expresses the views of its author(s) and does not necessarily represent those of LSE Business Review or the London School of Economics.
- Featured image by geralt, under a Pixabay licence