
Helena Vieira

May 23rd, 2016

Automation should complement professional expertise, not replace it

Estimated reading time: 5 minutes


Will your next doctor be an app? A cost-cutting NHS wants more patients to act as “self-carers,” with some technologized assistance. A series of flowcharts and phone trees might tell parents whose children have chicken pox how best to care for them—no visits to surgeries required. Or a mole-checking app might tell a worrywart when a given skin discoloration looks harmless, and when to go to a dermatologist, by comparing it to thousands of images in a database.

Cost-cutters in the legal field also promise an algorithmically cheapened future. Tax software simplifies the process of filing by walking the filer through a series of questions. Documents that might have taken human attorneys months to read can be scanned for keywords in a matter of seconds. Predictive policing promises to deploy force with surgical precision.

All these initiatives have some promise, and may make health care and legal advice more accessible. But they are also prone to errors, biases, and predictable malfunctions. Last year, the US Federal Trade Commission settled lawsuits against firms that claimed their software could aid in the detection of skin cancer by evaluating photographs of the user’s moles. The FTC argued that there was not sufficient evidence to support such claims. The companies are now prohibited from making any “health or disease claims” about the impact of the apps on the health of users unless they provide “reliable scientific evidence” grounded in clinical tests. If algorithms designed merely to inform patients aren’t ready for prime time, why presume diagnostic robots are imminent?

Legal automation has also faced some serious critiques lately. The University of North Carolina legal scholar Dana Remus has questioned the value and legitimacy of the “predictive coding” now deployed in many discovery proceedings. She and her co-author Frank S. Levy (of MIT) raise equally serious questions about more advanced applications of legal automation. The future cannot be completely anticipated in contracts, nor can difficult judgment calls be perfectly encoded into the oft-reductionist formulae of data processing. Errant divorce software may recently have caused thousands of errors in the UK, just as US software systems have disrupted or derailed proper dispositions of benefits applications.

Moreover, several types of opacity impede public understanding of algorithmic ranking and rating processes in even more familiar contexts, like credit scoring or search rankings. Consumers do not understand all the implications of the US credit scoring process, and things are about to get worse as “alternative” or “fringe” data moves into the lending mix for some startups. If the consequences of being late on a bill are not readily apparent to consumers, how can they hope to grasp new scoring systems that draw on their social media postings, location data, and hundreds of other data points? At the level of companies, many firms do not feel that Google, Facebook, and Amazon are playing a fair game in their algorithmic rankings of websites, ads, and products. Efforts to address these concerns, too, are stymied by the widespread secrecy of both the algorithms and the data fed into them.

In response, legal scholars have focused on remediable legal secrecy (curbing trade secrets and improving monitoring by watchdogs) and complexity (forbidding certain contractual arrangements when they become so complicated that regulators or citizens cannot understand their impact). I have recommended certain forms of transparency for software—for example, permitting experts to inspect both the code at suspect firms and the communications between managers and technical staff. The recent Volkswagen scandal served as yet another confirmation of the need for regulators to understand code.

But there is a larger lesson in these failures of algorithmic ordering. Rather than trying to replace the professions with robots and software, we should instead ask how professional expertise can better guide the implementation of algorithmic decision-making procedures. Ideally, doctors using software in medical settings should be able to inspect the data fed into it, restrict the contexts in which it is used, and demand outputs that avoid disparate impacts. The same goes for attorneys and other professionals now deploying algorithmic arrangements of information. We will be looking at “The Promise and Limits of Algorithmic Accountability in the Professions” at Yale Law School this spring, and welcome further interventions to clarify the complementarity between professional and computational expertise.

♣♣♣

Notes:

  • This post was originally published on the website of Nesta.
  • The post gives the views of its author, not the position of LSE Business Review or the London School of Economics.
  • Featured image credit: Cmglee, CC BY-SA 3.0
  • Before commenting, please read our Comment Policy

About the author

Frank Pasquale (JD, MPhil) is Professor of Law at the University of Maryland Francis King Carey School of Law. He is an expert on the law of big data, predictive analytics, artificial intelligence, and algorithms. He has advised business and government leaders in the health care, internet, and finance industries, including the U.S. Department of Health and Human Services, the U.S. House Judiciary Committee, the Federal Trade Commission, and the European Commission. Quoted in top global media outlets, including the Financial Times, the New York Times, and The Economist, he is the author of The Black Box Society (Harvard University Press, 2015) and a member of the Council on Big Data, Ethics, & Society.

Posted In: Technology
