
Mathias Vermeulen

Laureline Lemoine

February 12th, 2024

From ChatGPT to Google’s Gemini: when would generative AI products fall within the scope of the Digital Services Act?


Estimated reading time: 5 minutes


In this post, digital policy specialists Mathias Vermeulen and Laureline Lemoine of law firm and consultancy AWO explain the circumstances in which generative AI products and tools would be subject to the European Union’s Digital Services Act, which regulates online intermediaries and platforms with the aim of preventing harmful and illegal activities online and the spread of disinformation.

Recent months have seen rapid growth in the use of generative artificial intelligence (AI) products, prompted by OpenAI’s release of its large language model-based chatbot ChatGPT in November 2022. Most recently, Google announced the deployment of its Gemini model into its products. These developments have triggered a public debate about the opportunities and societal challenges of generative AI, including in the context of the negotiations around large language models in the EU’s Artificial Intelligence Act.

The term “generative AI” is used in this blog post to refer to a type of artificial intelligence system, such as large language models (LLMs), that is capable of generating text, images or other media in response to prompts. There is currently no definition of AI or AI systems in EU law. It should be noted, however, that the political agreement reached on the AI Act on 8 December 2023 includes such systems under the definition of “general-purpose AI models”.

Some of these systems and products are already covered by obligations imposed on a specific set of actors, so-called Very Large Online Platforms (VLOPs), in the EU’s Digital Services Act (DSA), which came into force on 16 November 2022 and which will apply to all platforms from 17 February 2024. The DSA updates the EU’s rules regulating online intermediaries, including online platforms and search engines, and has the potential “to create a fundamental paradigm shift to hold technology platforms to account”. Hence the question arises: when would generative AI products fall within the scope of the Digital Services Act?

This blog post is a short summary of a longer article that aims to assess the extent to which generative AI applications fall within the scope of the DSA, both as self-standing products and when incorporated within platforms and services.

Standalone Generative AI products

The DSA applies to “intermediary services” offered to users located in the EU (Article 2), which are defined as “mere conduit”, “caching” and “hosting” services (Article 3.g). When would standalone generative AI products be considered hosting services or search engines?

Generative AI as a Hosting Service?

A “hosting” service consists of the “storage of information provided by, and at the request of, a recipient of the service” (Article 3.g(iii)). Classifying generative AI products as hosting services would hinge on the interpretation of terms like “stored” information and “provided”. It could be argued that generative AI systems, especially large language models (LLMs), temporarily store, i.e. hold in memory on their servers, certain information such as users’ prompts (the input). The model input would then be considered to be provided by the users and stored at their request by the LLM. More expansively, a generative AI output could also be considered “provided” by users, since outputs are generated in response to their queries. Yet even if this interpretation were taken up, most generative AI products would still fall outside the definition of a Very Large Online Platform (VLOP) in the DSA, since they typically do not disseminate information to the public.

Generative AI as Search Engines?

The status of search engines exemplifies how services can fall within the DSA even without fitting conventional definitions. Search engines do not straightforwardly fit the intermediary services framework but are included under the DSA’s purview and are defined as follows:

“an intermediary service that allows users to input queries in order to perform searches of, in principle, all websites, or all websites in a particular language, on the basis of a query […] and returns results in any format in which information related to the requested content can be found” (Article 3.j DSA).

Many LLMs operate on fixed training datasets and arguably do not perform “searches of all websites”, which would exclude them from the definition of an online search engine. Some, however, like ChatGPT or Google Bard, incorporate results from current web pages, potentially fitting the search engine definition. Consequently, they could qualify as an intermediary service and be subject to all relevant obligations.

As “search engines”, these products would have to comply with procedural requirements in the DSA (such as establishing points of contact for authorities and users, or having a legal representative in the EU) as well as substantive requirements as laid out in Section 1 of Chapter III of the DSA. As a result, they might also be designated by the Commission as Very Large Online Search Engines (VLOSEs) once they reach 45 million active users in the EU, thereby becoming subject to more stringent due diligence obligations, such as risk assessments, mitigation measures, audits and access to data.
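Purely as an aid to thinking, the scoping analysis of this section can be compressed into a short, hypothetical decision sketch (in Python). The class, field and function names below are our own shorthand, not DSA terminology, and the simplified rules merely restate the arguments above; real DSA classification is a case-by-case legal assessment, not a mechanical test.

```python
from dataclasses import dataclass

# Illustrative sketch only: all names are hypothetical shorthand for the
# legal arguments discussed above, not an actual classification tool.

VLOSE_THRESHOLD = 45_000_000  # EU active-user threshold for designation

@dataclass
class StandaloneGenAI:
    searches_live_web: bool       # retrieves results from current web pages
    stores_user_prompts: bool     # temporarily holds user input on servers
    disseminates_to_public: bool  # makes stored information publicly available
    eu_active_users: int          # average monthly active recipients in the EU

def dsa_scope(product: StandaloneGenAI) -> list[str]:
    findings = []
    if product.searches_live_web:
        findings.append("arguably an 'online search engine' (Art. 3.j): "
                        "procedural and substantive DSA obligations apply")
        if product.eu_active_users >= VLOSE_THRESHOLD:
            findings.append("could be designated a VLOSE: risk assessments, "
                            "mitigation, audits, access to data")
    if product.stores_user_prompts:
        findings.append("arguably a 'hosting service' (Art. 3.g(iii))")
        if not product.disseminates_to_public:
            findings.append("but not a VLOP: no dissemination to the public")
    return findings or ["likely outside the DSA's intermediary categories"]

# Example: a chatbot with live web access, no public dissemination,
# and 50 million EU users (a hypothetical figure).
print(dsa_scope(StandaloneGenAI(searches_live_web=True,
                                stores_user_prompts=True,
                                disseminates_to_public=False,
                                eu_active_users=50_000_000)))
```

The sketch simply encodes the two routes discussed above: the search engine route, which can escalate to VLOSE designation, and the hosting route, which stalls short of the VLOP regime absent dissemination to the public.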

Generative AI as an Embedded Service

Beyond standalone products, generative AI is being embedded into existing platforms covered by the DSA, as with Bing Chat or Snapchat’s My AI. Google has also announced that Gemini would be available across a range of Google products, including Search.

Bing Chat uses GPT-4 to create answers and can generate creative content. It can be considered an integral part of Bing, as one single service. As a result, Bing Chat could either be subject to all DSA obligations (as an integral part of Bing the search engine), including risk assessments, online advertising provisions and access to data for researchers, or it could more narrowly be considered a “related system” to Bing, as per Article 34 DSA, and merely be subject to risk assessments and mitigation measures. A similar assessment could be made regarding Google Search and Gemini, if deployed in the EU.

By contrast, Snapchat’s “My AI” chatbot is only accessible within the messaging part of Snapchat. As an “interpersonal communication service”, the messaging part of Snapchat, including “My AI”, would not be covered by the DSA obligations for online platforms.

This example clearly demonstrates that generative AI features in a VLOP’s products will require a case-by-case analysis to determine which parts of the DSA apply to them, if any. For instance, Meta announced the launch of chatbot personalities on Instagram, through Instagram’s messaging service. Although these AI chatbots can hardly be considered “interpersonal communication services” themselves, they are embedded in such services, and the granularity of application and enforcement of the DSA might not be sufficient to cover them. In that sense, an embedded tool is probably not enough to change the nature of a service.

However, as part of VLOP systems, relevant DSA obligations would apply indirectly, such as the limits on the use of personal data and profiling for targeted ads, especially concerning minors. Indeed, Snapchat’s My AI clearly states that it uses users’ data for advertising, while Bing Chat mentions the “operation of its businesses”, which includes advertising.
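The embedded-service analysis can be sketched in the same hedged fashion. Again, the function and parameter names are our own hypothetical shorthand, and the branches merely summarise the Bing Chat and My AI examples above rather than stating a definitive legal rule.

```python
def embedded_dsa_scope(host_is_vlop_or_vlose: bool,
                       inside_messaging_service: bool) -> str:
    """Hypothetical shorthand for the case-by-case analysis above."""
    if not host_is_vlop_or_vlose:
        return "host not designated: no VLOP/VLOSE due-diligence duties"
    if inside_messaging_service:
        # e.g. Snapchat's My AI: interpersonal communication services fall
        # outside the online-platform obligations, though some duties (such
        # as the limits on ad profiling of minors) may apply indirectly.
        return "outside online-platform obligations; indirect duties only"
    # e.g. Bing Chat within Bing: either an integral part of the designated
    # service (all DSA obligations) or a 'related system' under Art. 34
    # (risk assessments and mitigation measures only).
    return "integral part, or an Art. 34 'related system': case-by-case"

# Examples mirroring the post: Bing Chat versus Snapchat's My AI.
print(embedded_dsa_scope(host_is_vlop_or_vlose=True,
                         inside_messaging_service=False))
print(embedded_dsa_scope(host_is_vlop_or_vlose=True,
                         inside_messaging_service=True))
```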

Conclusion

The extent to which the DSA applies very much depends on the design of the product and its links to a company that is currently designated as a VLOP or VLOSE. There is a need to distinguish between generative AI as a self-standing product (ChatGPT, Google Bard) and generative AI as an embedded service (Bing Chat, Snapchat’s My AI). The European Commission has not taken a clear stance on whether and how the DSA can regulate generative AI services, but it has at least hinted that generative AI features built into VLOP products would need to be covered by those VLOPs’ DSA risk assessments. This could ultimately result in increased transparency and safety measures that these services need to deploy in order to mitigate risks related to the dissemination of illegal content or to fundamental rights.

This post represents the views of the authors and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

Featured image: Photo by Mojahid Mottakin on Unsplash

About the author

Mathias Vermeulen

Mathias Vermeulen is the co-founder and policy director of AWO, a law firm and consultancy specialized in digital policy and data governance issues. Mathias is also a fellow at the Centre for Law, Science, Technology and Society at the Vrije Universiteit Brussel. Earlier he worked for Mozilla, the European Parliament, the European University Institute, the UN High Commissioner for Human Rights, and the UN Special Rapporteur on the protection of human rights while countering terrorism. He has a Ph.D. in European privacy law.

Laureline Lemoine

Laureline Lemoine is a Senior Policy Associate at AWO. Prior to this, she worked for digital rights NGOs in Brussels and was a trainee at the Court of Justice of the European Union and at the European Commission in DG Competition. She holds a master’s degree in EU law and litigation and a dual degree in French law and Common law from University College Dublin and Paris 2 Panthéon-Assas University.

