
Blog Administrator

August 18th, 2016

Big data analytics: Q&A with Professor Oscar H. Gandy, Jr


Professor Oscar H. Gandy, Jr., is Emeritus Professor at the Annenberg School for Communication, University of Pennsylvania. Following a public lecture at LSE titled ‘Surveillance and the Public Sphere: confronting a democratic dilemma’, Catherine Speller interviews Professor Gandy about some of the issues around big data analytics that he raised in his lecture.

In your talk at LSE, your focus was very much on the unaccountable nature of big data analytics, and the potential harms that can happen to individuals as a result of government/agency interpretations of transaction-generated information (TGI). Can you briefly outline some of the ways that this happens on a day to day basis? Which communities are particularly affected by this?

I think that it is important for me to be clear that while we may suffer harms as individuals, we do so primarily as a function of having been assigned to a group, category or class through continually adjusted analytical frameworks that we primarily refer to these days as algorithms. Decisions made about the life chances or opportunities set before each individual will affect that person differently because of the diversity of actual circumstances and experiences that each individual will bring to an interaction with an institutional actor (such as a government or a bank), or agent.

It is important, as your question suggests, for us to develop some sense of the number and variety of encounters that individuals will have on a daily basis with different kinds of institutional actors. We understand risks in terms of the rate at which we are exposed to them as well as in terms of the nature of the consequences, or harms that might be experienced as a result of that exposure. For example, we would probably agree that the kinds of decisions that are made by agencies of the government, such as the courts, or other elements of the criminal justice system, are both rare and consequential. Decisions made by commercial firms about the nature of the offers that will be made for a host of goods and services are far more numerous, but comparatively less consequential.

I would suggest that far more interactions with systems that generate assessments and decisions take place between individuals and agents of business oriented, profit-seeking corporations. Still, government agencies that deliver services to individuals increasingly make use of analytical resources in order to make decisions about which of the many individuals whose applications or requests have triggered a warning will be denied a service, or will suffer a delay while an investigation and further assessment is completed.

In my comments, and in my recent thinking about these interactions, I have tried to shift my attention from my usual concern with the markets for commercial goods and services to the kinds of choices we are expected to make within the public sphere.

This means that the kinds of policy relevant communications we receive as members of analytically constructed ‘communities’ need not necessarily reflect the kinds of communities that we normally refer to, such as those defined by race, gender, class, or spatial location.

That said, and given my emphasis on negative outcomes or harms, it is my sense that members of ‘communities’ defined by their low levels of economic, social and political power and resources are more likely than others to be subject to algorithmic assessments that limit their opportunities for advancement along the paths to the ‘good life’.

You describe the public as unwilling and/or unknowing subjects in continuous online experiments as the space between the public space and the market place becomes increasingly blurred. You seemed to be making the case that the public is not yet aware of what, precisely, is happening to their data (and should be). But is there a chance that people actually accept that a degree of surveillance is necessary as a kind of quid pro quo in return for certain social and economic benefits that come with being digitally connected?

The framing of the question reflects a dominant, and I believe erroneous, description of the process often described as an exchange, or a ‘quid pro quo’. Joseph Turow, among others, has referred to us as being ‘resigned’ to the fact that there are few alternatives to the relationships between those who gather, process and exchange TGI and the rest of us. Resignation is not the same as recognition of a fair bargain in which we willingly accept benefits in exchange for information. Part of the difficulty, which is related to what is actually implied by a fair bargain, is the fact that it is virtually impossible for an individual to know, and to have a basis for evaluating, the present and future consequences of the uses to which TGI might be put. Thus, individuals enter into these ‘bargains’, if they are actually explicitly offered as such, at a deep disadvantage in terms of the individual and collective risks they face by virtue of this ‘exchange’.

You suggest there should be regular audits of algorithmic processes. How would this work in practice, and who would be responsible? And is this the sort of thing we’re likely to see unfolding in the US in the near future?

Individual researchers have, at some risk, produced and reported the results of their own audits of algorithmic processes. I would rather see an institutional effort, supported by government agencies and foundations, that would enable these specialists to produce such assessments routinely.

I have been pleasantly surprised by a recent report by the National Science and Technology Council (NSTC) operating out of the Executive Office of the President of the United States. This report not only emphasises the risks to privacy that accompany the increased use of analytic algorithms, but recommends that these concerns be a central aspect of a privacy research initiative that seems likely to bring about a major shift in scholarly attention to these problems, akin to an earlier national commitment to explore the societal impact of televised violence. This report seems to extend the recommendations included in an earlier report from the Executive Office of the President on Big Data and algorithmic systems that explicitly called for the promotion of academic and industry development of ‘algorithmic auditing and external testing of big data systems to ensure that people are being treated fairly’. I see this as a basis for some optimism.
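[Editor’s note: to make the idea of ‘algorithmic auditing and external testing’ slightly more concrete, the sketch below illustrates the simplest kind of check an outcome audit might run: comparing decision rates across groups and flagging large disparities. It is not drawn from the interview or the reports cited; the data, group labels and 80 per cent threshold are entirely hypothetical.]

```python
# Illustrative sketch only: a minimal outcome-disparity check of the kind an
# external algorithmic audit might run. Data, groups and threshold are hypothetical;
# real audits involve far more than a single summary statistic.

from collections import defaultdict

# Each record: (group, decision), where decision is 1 = approved, 0 = denied.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += decision

# Approval rate per group.
rates = {g: approvals[g] / totals[g] for g in totals}
print("Approval rates by group:", rates)

# A simple 'four-fifths'-style screen: flag any group whose approval rate falls
# below 80% of the highest group's rate (an assumed threshold for illustration).
best = max(rates.values())
flagged = {g: r for g, r in rates.items() if r < 0.8 * best}
print("Groups flagged for disparate outcomes:", flagged or "none")
```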

You raised an interesting point in the lecture when you said that Internet Service Providers (ISPs) should be bound by the same ethical values as libraries. What did you mean by this? Are there any specific companies or individuals that you had in mind? What would help this kind of mindset to take root?

That was an expression of my love and respect for librarians and their support for user privacy. No one should have the right to know what any individual is reading (and thereby have access to what they might be thinking about). I would extend that barrier on access to include any of the information about users derived from their consumption records that would be used to characterise them as members of identifiable groups. This is a generalised concern in support of meaningful anonymity: no one really needs to know more than that something is, or is not, being used or accessed, in order to help organisations make decisions about acquisition, storage and the continued availability of material.

As Julie Cohen famously argued in her article about ‘a right to read anonymously’, not even the interests of copyright holders in maximising the benefits they derive from charging what a regulated market will bear for access to information should stand in the way of the rights of individuals to develop their attitudes and opinions without fear of observation by corporate or governmental entities. Unfortunately, it is not only copyright holders that have worked to weaken the protections of this orientation toward the privacy of our developing ideas. At the same time that ‘intellectual property’ interests have gained the right to capture, process and share information about our search for and consumption of information, social network services like Facebook have not only managed to help cultivate uninhibited self-disclosure as a desirable social norm, but have made active use of this sharing in the context of massive experiments to assess how alterations in the newsfeeds delivered to particular audience segments generate changes in political activity.

This troubles me greatly.

This blog gives the views of the author and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science.


Posted In: Algorithmic Accountability | LSE Media Policy Project
