Blog Administrator

February 10th, 2016

Accountable Algorithms (A Provocation)

Joshua Kroll, Systems Engineer at CloudFlare, Inc., presents the case for using techniques from computer science as the most appropriate means of reviewing the fairness of computerised decision making.

Consequential decisions which have historically been made through human-operated processes are increasingly being automated, leading to questions about how to deal with incorrect, unfair, or unjustified outcomes that emerge from computer systems. Public and private policymakers around the world have turned their attention to the problem of updating traditional governance standards to accommodate automated decision making. Some scholars have suggested that increasing the transparency of computer systems will improve their accountability. However, while it can be useful, transparency alone is often counterproductive and insufficient for governing these new systems.

Why? Two reasons stand out.

Transparency

First, transparency can be undesirable. Processes that are adversarial, such as selecting travellers for enhanced security review, or that enjoy trade secret protection, such as consumer credit scoring, cannot be made totally public without undermining their efficacy. Even disclosure only to a privileged oversight entity (e.g., a court or regulator) carries risks of information leakage and still may not convince sceptics that a process is fair.

Second, transparency can fail to provide accountability. A decision maker may release all of the code purportedly used to make a randomised decision, but this transparency alone does not provide any assurances that the decision was made using that code, rather than on a whim. More generally, computer code can be difficult to interpret unless it has been specifically designed to be reviewed. It is impossible even in theory to determine what all programs will do in all cases. The rise of ‘big data’ systems that make decisions using machine learning may lead to divergence between what a system’s programmers believe it does and how it actually behaves. In many cases, access to the software code used to make a decision, even with the data fed into that software, is not sufficient to explain the basis for that decision to a sceptical decision subject or overseer, or to a member of the public.

Consider a lottery. Both human-mediated and computerised lotteries have suffered failures of accountability. Human-mediated lotteries gain trustworthiness through use of public ceremonies, such as picking balls out of a machine on a televised draw. For computerised lotteries, transparency is insufficient. A computer scientist could verify that the software calls for a random selection, but without access to the system’s environment from which randomness is drawn (which often includes factors such as time of day, hard drive and network usage, or user interaction such as mouse and keyboard activity), it is not possible to reproduce any particular selection. Because this system state cannot be replicated by a later reviewer, even fully revealing the code would not allow confirmation that the correct winner was chosen.
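To make the problem concrete, the following Python sketch (my illustration, not code from any real lottery) shows a draw seeded from ambient system state. Even if this code were published in full, no reviewer could re-run the draw and confirm the published winner, because the seed is never recorded; the function draw_winner and the entrant list are hypothetical.

    import os
    import random
    import time

    def draw_winner(entrants):
        # Seed from ambient system state: the current time plus OS entropy.
        # This state is never recorded, so a later reviewer cannot re-run
        # the draw to confirm that the published winner was really selected.
        seed = hash((time.time_ns(), os.urandom(16)))
        return random.Random(seed).choice(entrants)

    print(draw_winner(["alice", "bob", "carol"]))  # Unverifiable after the fact.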

Accountability by design

In my own work on accountable algorithms, I propose an alternative to traditional transparency: accountability by design. Rather than relying on disclosure of code and data, accountability by design builds the means of oversight into automated platforms themselves. Techniques from computer science can be used to create systems whose properties can be checked by regulators or the public without revealing the underlying code and data. Systems can be designed to be transparent about the properties that matter for automated decision making while protecting private data and keeping trade secrets within platforms' decision processes.

Counterintuitively, this creates new opportunities for governance. It can be much easier to review the decisions of a computer system than those of a team of bureaucrats: computerised decisions can be inspected in ways that human decisions cannot, because the inner workings of a computer program can be made subject to scrutiny. Just as regulators demand that other engineered systems (such as vehicles and buildings) meet engineering standards for safety and inspection, policymakers should demand that programs be written to a standard that allows such scrutiny.

For example, cryptographic techniques can let a system prove its procedural regularity—in other words, that the same process was applied in all cases, similar to the idea of procedural due process in the law—even without revealing what the process is or how it operated in specific cases. Procedural regularity is an important basis for further examination of computer system behaviour—without knowing whether a particular system was actually used to make some decision (vs. whether that decision was made ‘on a whim’ and in an arbitrary way), it is very difficult to ask whether the system is being fair or complying with the law. And without direct evidence, it is extremely difficult from a technical perspective to say whether a particular system was, in fact, used in a particular case without completely re-computing the system’s decisions.
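As a simplified illustration of one building block, the sketch below (a minimal example of my own, not the full construction, which would also use zero-knowledge proofs to show that each individual decision was produced by the committed policy) uses a hash-based commitment: the decision maker publishes a fingerprint of its policy before any decisions are made, and an overseer can later check that the policy shown under seal matches that fingerprint. The policy text and variable names are hypothetical.

    import hashlib
    import secrets

    def commit(data: bytes, nonce: bytes) -> str:
        # A hash-based commitment: hiding (it reveals nothing about the policy)
        # and binding (the policy cannot be swapped after the fact).
        return hashlib.sha256(nonce + data).hexdigest()

    # The decision maker fixes its policy and publishes only a commitment.
    policy_source = b"def decide(applicant): return applicant['score'] >= 700"
    policy_nonce = secrets.token_bytes(32)
    published_commitment = commit(policy_source, policy_nonce)

    # Later, an overseer shown (policy_source, policy_nonce) under seal can
    # verify that the policy in force matches the earlier public commitment,
    # i.e. that the same process was fixed in advance for all cases.
    assert commit(policy_source, policy_nonce) == published_commitment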

Methods from computer science can also be deployed to design systems that provide strong guarantees of particular behaviours, or that detect anomalous and illegitimate behaviours. Indeed, software developers regularly test their software to ensure that it behaves as they intend. More advanced methods can prove properties of entire programs or systems, rather than just guaranteeing the proper functionality of individual components. It is sometimes even possible to establish properties that cannot be shown by demonstration alone, such as security properties: while it is possible to demonstrate that a program functions as expected on particular inputs, it is not possible to demonstrate that it resists every possible attack. In some cases, however, certain large classes of attack can be demonstrably ruled out and the function of the system can be cabined to the portions covered by technical assurances.
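At the lighter-weight end of that spectrum, ordinary testing can already check useful properties. The following sketch (my illustration; the function select_winner is hypothetical) checks two properties of a selection routine across many inputs: the result is always a valid index, and the draw is reproducible from an explicit, recorded seed. Proving such properties for all inputs, rather than sampling them, is the job of formal verification tools.

    import random
    import unittest

    def select_winner(num_entrants: int, seed: int) -> int:
        # Select a winner's index deterministically from an explicit seed.
        if num_entrants <= 0:
            raise ValueError("need at least one entrant")
        return random.Random(seed).randrange(num_entrants)

    class SelectWinnerProperties(unittest.TestCase):
        def test_result_is_always_a_valid_index(self):
            for n in range(1, 200):
                for seed in range(50):
                    self.assertIn(select_winner(n, seed), range(n))

        def test_same_seed_gives_same_winner(self):
            self.assertEqual(select_winner(1000, 42), select_winner(1000, 42))

    if __name__ == "__main__":
        unittest.main()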

Some examples

Take the case of data secrecy. Although it is difficult to demonstrate that no attacker can penetrate a system or tap a communications channel, a system can be designed to store and transmit data in an encrypted form that is unintelligible even when an attacker obtains it. Then, to ensure that the system properly secures data, a reviewer need only confirm that the cryptographic key material needed to decrypt those data is available only at particular times, or only to the particular components of the system that actually need it. For example, keys may be kept in a special device such as a hardware security module or smart card, which can be better trusted to function properly (or at least more carefully examined and observed for faults or compromise).
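A minimal sketch of this pattern, using the third-party Python 'cryptography' package (my illustration, not code from any particular system): data are held only in encrypted form, so the reviewer's question shrinks from whether anyone could ever break in to where the key is allowed to flow.

    from cryptography.fernet import Fernet  # third-party 'cryptography' package

    # In a real deployment the key would live only inside a confined component
    # such as a hardware security module; it is generated in memory here only
    # to keep the sketch self-contained.
    key = Fernet.generate_key()
    vault = Fernet(key)

    record = b"applicant_id=1234; credit_score=712"
    ciphertext = vault.encrypt(record)

    # An attacker who obtains only the ciphertext learns nothing useful; the
    # reviewer need only audit where the key is available, not every possible
    # attack path.
    assert vault.decrypt(ciphertext) == record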

To return to the example of a lottery, the potential power of designing accountability mechanisms into the software becomes evident. Using techniques from computer science, a lottery system can be redesigned to demonstrate to all participants that they were given an equal chance and that the actual selection of winners was unpredictable to the operators. Using software verification, we can show that the system implements a random selection without any bugs or unintended additional behaviours. Using cryptography, we can allow for reproducible random choices that are still unpredictable to the software’s operators.
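A toy commit-and-reveal sketch of such a lottery (my illustration, considerably simpler than a full construction; a production design would also mix in randomness contributed by participants): the operator commits to a secret seed before the entrant list is fixed, then reveals it so that anyone can recompute the winner.

    import hashlib
    import secrets

    def commit(value: bytes) -> str:
        return hashlib.sha256(value).hexdigest()

    # 1. Before entries close, the operator commits to a secret seed.
    operator_seed = secrets.token_bytes(32)
    published_commitment = commit(operator_seed)

    # 2. Entries close and the entrant list is published.
    entrants = ["alice", "bob", "carol", "dave"]

    # 3. The operator reveals the seed; anyone can check it against the
    #    commitment and recompute the winner for themselves.
    assert commit(operator_seed) == published_commitment
    digest = hashlib.sha256(operator_seed + ",".join(entrants).encode()).digest()
    winner = entrants[int.from_bytes(digest, "big") % len(entrants)]
    print(winner)  # Reproducible by any observer after the reveal.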

With the right tools, computerised processes can be designed to accommodate accountability, just as human-mediated decision processes are currently designed to accommodate oversight by enabling after-the-fact review by a competent authority. Indeed, only by deploying the right technical tools can we hope to give the policy process a real way to review the fairness of computerised decision making. We would not expect human-operated processes to be accountable without designing in proper safeguards; the same is true of automated systems.

This article gives the views of the author and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics.

This post was published to coincide with a workshop held in January 2016 by the Media Policy Project, ‘Algorithmic Power and Accountability in Black Box Platforms’. This was the second of a series of workshops organised throughout 2015 and 2016 by the Media Policy Project as part of a grant from the LSE’s Higher Education Innovation Fund (HEIF5). To read a summary of the workshop, please click here.

About the author

Blog Administrator

Posted In: Algorithmic Accountability | LSE Media Policy Project
