
Tahereh Sonia Saheb

Sahand Sasha Karimi

Mark Esposito

Terence Tse

May 30th, 2023

How to protect artists from generative artificial intelligence


Generative artificial intelligence (AI) machines use human-made media to create new content. Most of the images they create are based on the work of skilful artists or on physical features of real people. Should that be allowed? Tahereh Sonia Saheb, Sahand Sasha Karimi, Mark Esposito and Terence Tse suggest ways to ensure fair use of an artist’s work, and to protect people’s privacy and personal data.


 

God-given talent and years of education mean nothing to a new and mostly free market competitor. Years of training to perfect your craft can be undercut by this opponent: a non-human generative artificial intelligence (AI) machine programmed to learn its “skills” by consuming human-generated media and producing new content. Metaphorically, AI works like Aladdin’s lamp. It obeys its human master, who can type their wishes into the prompt box and see them generated as content within seconds.

Most generative AI machines (such as ChatGPT) are trained on publicly available media. Many argue this means that AI is owned by the public and free to use. For instance, public datasets such as Common Crawl, Wikipedia, Project Gutenberg and the OpenSubtitles corpus were widely used in the development of language models like GPT-3, created by the research laboratory OpenAI. When it comes to art, however, the majority of the training data for generative paintings or avatars consists of content created by talented and skilful artists or, in some cases, the facial and body features of real people. The question is whether such generative AI machines should be allowed to use such media freely to generate new content.

In a highly publicised case, Hermès won a trademark lawsuit against an artist who created and sold NFTs depicting its Birkin bag. And only a few months ago, Greg Rutkowski, well known for his distinctive painting style, discovered that many images circulating online look like his work but were not created by him.

Similarly, Lensa, a cheap generative AI phone app, has also been trained on publicly available artwork and images. The app takes user-uploaded photos and generates mythical avatars that are slimmer, lighter-skinned and more nude than the originals. Not surprisingly, Lensa has ignited multiple controversies: over the fair use of artists’ idiosyncratic styles, over the use of geometric data derived from the unique facial characteristics in individuals’ photos, and over how generative avatars widen socio-cultural and racial disparities and cause psychological distress by devaluing the user’s true self.

Fair use of artists’ work

In the examples above, one of the main concerns is the fair use of artists’ work and how generative AI will depress demand for an artist’s authentic talent. The concern is all the more acute because most artists depend on the modest income from their projects for their livelihoods. Although generative AI software mostly mimics an artist’s idiosyncratic style rather than copying their work directly, in most cases the untrained eye of a potential buyer finds it very difficult to distinguish machine-generated from authentic work.

Historically, watermarks have been widely employed to prove ownership of art on the internet and to prevent its illegal use. In the era of generative art, the same conventional technique can be applied: watermarks can be embedded in AI-generated works to reduce online theft and unauthorised claims of ownership. For example, if an artist’s specific style is invoked in an AI app to create a painting, the app can mark that painting so that credit is given to the artist. Alternatively, the developers of such apps can identify and engage directly with artists whose work appears in their app’s training dataset. This may lead to an agreement between the two parties under which the artist’s signature or trademark is added to the generative paintings, a useful remedy for identifying the artists whose works and aesthetics have been imitated. Such approaches open the door to a new series of technological solutions: real-time tracking and notification systems that alert artists when their art is used online, protecting their intellectual property.
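To make the idea concrete, here is a minimal, hypothetical sketch of one way a digital watermark could carry an attribution. It is not the mechanism of any real app: it simply hides a short credit string in the least-significant bits of an image’s pixel values, so the credit is invisible to the eye but recoverable by software. All function names are illustrative.

```python
def embed_credit(pixels, credit):
    """Hide a credit string in the least-significant bits of pixel values.

    pixels: flat list of 0-255 greyscale intensities (one bit stored per pixel).
    credit: attribution text, e.g. the name of the artist whose style was invoked.
    """
    # Unpack the credit text into individual bits, most significant bit first.
    bits = [(byte >> i) & 1
            for byte in credit.encode("utf-8")
            for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the credit string")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out


def extract_credit(pixels, length):
    """Recover `length` bytes of credit text embedded by embed_credit."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        data.append(byte)
    return data.decode("utf-8")
```

Because only the lowest bit of each pixel changes, the marked image differs from the original by at most one intensity level per pixel, which is imperceptible in practice. A production system would use a more robust scheme that survives compression and cropping, but the principle of carrying attribution inside the artwork is the same.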

Yet another approach concerns the resolution of generative AI paintings. Initial drafts of the content can be kept in a low-resolution format, with the high-resolution art made available for public use only after the artist whose style was employed has been paid. This could reduce the theft of an artist’s aesthetic.
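As an illustration of how such resolution gating might work, the hypothetical sketch below downsamples a greyscale image by averaging non-overlapping blocks of pixels, producing the kind of low-resolution preview that could circulate before the artist has been compensated. It is a toy example, not a description of any real app.

```python
def downsample(image, factor):
    """Produce a low-resolution preview by averaging factor x factor blocks.

    image: list of rows, each a list of 0-255 greyscale intensities.
    factor: block size; both image dimensions must be divisible by it.
    """
    h, w = len(image), len(image[0])
    if h % factor or w % factor:
        raise ValueError("image dimensions must be divisible by factor")
    preview = []
    for r in range(0, h, factor):
        row = []
        for c in range(0, w, factor):
            # Average all pixels in the factor x factor block.
            block = [image[r + i][c + j]
                     for i in range(factor)
                     for j in range(factor)]
            row.append(sum(block) // len(block))
        preview.append(row)
    return preview
```

For example, a 1024×1024 painting downsampled with `factor=8` yields a 128×128 preview: enough to judge the composition, too coarse to reproduce the fine brushwork that carries an artist’s aesthetic.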

Moreover, generative AI calls for more specific copyright laws and regulations. In the US, for instance, the Digital Millennium Copyright Act was introduced in 1998 to address copyright issues on the internet. We need similar laws and regulations for generative art that protect artists and their unique talents.

It is our duty as a society to ensure that artworks are used fairly and that artists are compensated for any machine-generated art that incorporates their aesthetic. This will assure the economic security of artists, who will continue to push the creative boundaries of our minds.

AI-based swap apps

Another series of class actions concerns the use of biometric data, such as an individual’s unique facial characteristics or voice, without their permission. Applications that use this data, such as Lensa, seem amusing to the average user, but their use carries several privacy-infringement consequences.

One of the most recent legal battles concerned a facial biometrics program whose informed-consent process was imprecise or non-existent. Under the Illinois Biometric Information Privacy Act (“BIPA”), a class action lawsuit was filed against the creators of Lensa over the collection, storage, processing and destruction of users’ “facial geometry” data.

Another group of generative AI applications specialises in altering the user’s voice or facial characteristics to make them sound or look like professional singers and celebrities. Applications that use AI to “swap” or “alter” a human’s voice, face or body are multiplying rapidly. Given this growth, it is crucial that rules and safeguards beyond BIPA be put in place to address the permissions that must be granted by the rightful owners of the data before it can be used.

AI is still a new and emerging technology. It is important that the public understands its basis, what data it is collecting, and from where. We not only need to ensure fair use of an artist’s work, but we also need to protect everyone’s privacy and personal data.

♣♣♣

Notes:

  • This blog post represents the views of its author(s), not the position of LSE Business Review or the London School of Economics.
  • Featured image by Google DeepMind on Unsplash

About the authors

Tahereh Sonia Saheb

Tahereh Sonia Saheb is an assistant professor of business analytics at Menlo College.

Sahand Sasha Karimi

Sahand Sasha Karimi is an adjunct faculty member at Hult International Business School.

Mark Esposito

Mark Esposito is a professor of economics and strategy at Hult International Business School. He also teaches at Harvard University’s Division of Continuing Education.

Terence Tse

Terence Tse is a professor of finance at Hult International Business School.

Posted In: Management | Technology
