
Ryan E. Long

August 16th, 2021

Artificial intelligence liability: the rules are changing


The law has been relatively slow to regulate artificial intelligence, but the rules are evolving. An important question is whether an AI company can be held liable for malfunctioning AI. Ryan E. Long writes that a company’s liability for its AI depends on whether a defect was present upon the AI’s release and whether, in the EU at least, the application is “high-risk.”


 

Artificial intelligence (AI) use has blossomed. The AI market was valued at $27.3 billion in 2019 and is projected to grow to $266.92 billion by 2026. Associated AI applications have grown alongside it. For example, the market for facial recognition technology, much of which relies on AI, was valued at $3.72 billion in 2020 and is forecast to reach $11.62 billion by 2026. At the same time, AI used in facial recognition technology has been known to misidentify faces, among other errors. If you are an AI investor or entrepreneur, you need to know whether, and under what circumstances, an AI company can be held liable in the US or EU for malfunctioning AI.

The benefits associated with AI applications have grown immensely. In 1996, for example, Lynn Cozart disappeared just days before he was to be sentenced by a Pennsylvania court to years in prison for molesting three children. For years, investigators searched for him, but the case went cold. Then, in 2015, the FBI’s Facial Analysis, Comparison and Evaluation (FACE) Services, the team responsible for face recognition searches, matched Cozart’s mug shot to the face of one “David Stone,” who lived in Muskogee, Oklahoma, and worked at a local Wal-Mart. “After 19 years,” FBI program analyst Doug Sprouse says, “[Cozart] was brought to justice.”

AI has also been used to flag “fake news” and “deep fakes.” Cheq, based in Tel Aviv, for example, uses a range of variables to determine the authenticity of content, including a site’s reputation and whether the source of the content is a bot. This can assist with online digital reputation management.
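To make the idea concrete, below is a minimal sketch of how signals like these might be combined into an authenticity score. The signal names, weights, and scoring logic are illustrative assumptions only; they do not reflect Cheq’s actual variables or methodology.

```python
# Illustrative sketch only: the signals, weights, and scoring logic below are
# assumptions for exposition, not Cheq's actual variables or algorithm.

from dataclasses import dataclass


@dataclass
class ContentSignals:
    site_reputation: float  # 0.0 (unknown or poor) to 1.0 (well-established site)
    source_is_bot: bool     # True if the posting account appears automated


def authenticity_score(signals: ContentSignals) -> float:
    """Combine a few signals into a single 0-1 authenticity score."""
    score = signals.site_reputation
    if signals.source_is_bot:
        score *= 0.3  # heavily discount content amplified by suspected bots
    return round(score, 2)


# Example: bot-amplified content from a low-reputation site scores poorly.
print(authenticity_score(ContentSignals(site_reputation=0.4, source_is_bot=True)))   # 0.12
print(authenticity_score(ContentSignals(site_reputation=0.9, source_is_bot=False)))  # 0.9
```

In practice, a real system would weigh many more signals and calibrate thresholds against labelled data; the point here is simply that such tools reduce several observable features of content to a decision about its authenticity.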

Notwithstanding these benefits, AI-driven facial recognition technology can misidentify subjects. For example, a 2012 study titled “Face Recognition Performance: Role of Demographic Information”, co-authored by the FBI, found that females were more difficult to recognize than males, and that the commercial algorithms tested had their lowest matching accuracy rates on subjects aged 18-30. Error rates can be strikingly high: the algorithm running the London Metropolitan Police’s facial recognition technology was at one time reported to have an error rate as high as 81%.

The law has been relatively slow to regulate AI. There has been some case law in the United States concerning the regulation of computerised robotics. For example, in Jones v. W + M Automation, Inc., New York’s Appellate Division dismissed the plaintiff’s product defect claims against the manufacturer and programmer of a robotic loading system. In the court’s view, the defendants were not liable for the plaintiff’s injuries at the GM plant where he worked because they showed they “manufactured only non-defective component parts.” As long as the robot – and its associated software – was “reasonably safe when designed and installed,” the defendants were not liable for the plaintiff’s damages.

GM, the end user, however, could still be liable for improperly modifying the hardware or software. The implication is that creators of AI software or hardware are not liable for injuries as long as their products were non-defective when made. That said, AI that is defectively made, or that is modified by a licensee and causes damage as a result, can create liability for the licensor, the licensee, or both. Whether AI is defectively made will depend, as in other product liability cases, on prevailing industry standards.

Recently, the Federal Trade Commission issued guidance concerning the regulation of AI. On 8 April 2020, the Commission published a blog post, “Using Artificial Intelligence and Algorithms”, recommending that those who use or license AI in a way that affects consumer well-being do so in a way that is “transparent” – particularly regarding decisions that affect a consumer’s credit. As such, many of the decisions concerning the use and implementation of AI in the consumer context can be regulated under Section 5(a) of the FTC Act, which provides that “unfair or deceptive acts or practices in or affecting commerce . . . are . . . declared unlawful.”

Thus, to the extent that AI companies make warranties or representations about their products that are untrue or deceptive, the FTC – along with private attorneys general – could hold such companies liable for the resulting damage. The FTC echoed many of these sentiments in a more recent post of 19 April 2021, “Aiming for truth, fairness, and equity in your company’s use of AI.”

The EU has also issued AI liability guidelines. In 2019, it released Liability for Artificial Intelligence and other Emerging Technologies. The document explains that some applications of AI will warrant strict liability — such as in the case of persons operating “AI-driven robots in public places.” Manufacturers of products that incorporate emerging digital technology — including AI — should, as with other products, be “liable for damage caused by defects in their products[.]” The manufacturer can be liable “even if the defect was caused by changes made to the product [so long as it was still] under the producer’s control.”

More recently, the EU released a white paper on artificial intelligence, explaining that additional compliance requirements would apply to “high-risk” AI applications in sectors such as healthcare, transport, and energy. These additional requirements include, among other things, keeping records concerning the algorithms used.

The rules of the road for AI liability in the US and EU are still developing. One thing is clear: whether a company will be liable for its AI depends on whether a defect was present upon the AI’s release and whether, in the EU at least, the application is “high-risk.”

♣♣♣

Notes:

  • This blog post expresses the views of its author(s), not the position of LSE Business Review or the London School of Economics.
  • Featured image by geralt, under a Pixabay licence
  • When you leave a comment, you’re agreeing to our Comment Policy

 

About the author

Ryan E. Long

Ryan E. Long is a non-residential fellow of Stanford Law School’s Center for Internet and Society. He counsels growing tech, media, and design clients on their intellectual property-related litigation and transactional issues in the US. He has written for or been interviewed by El Pais, Cognitive Times, and Digital Trends about “deep fakes,” among other emerging tech issues.

Posted In: Technology
