By Ilka Gleibs
Informed consent is important in large-scale social media research to protect the privacy, autonomy, and control of social media users. Ilka Gleibs argues for an approach to consent grounded in contextual integrity, in which adequate protection for privacy is tied to the norms of specific contexts. Rather than prescribing universal rules for what is public (a Facebook page, or a Twitter feed) and what is private, contextual integrity builds from within the normative bounds of a given context and illustrates why researchers must attend to the context of information flows and their use when thinking about research ethics.
During the US mid-term elections in 2010, the news feeds of all US Facebook users changed subtly: without users’ knowledge, researchers manipulated the feeds to show which of their friends had already voted – for some users this included a picture of those friends, for others it did not. This information was subsequently matched with voter records to establish who actually went out to vote and whether this depended on their friends’ behaviour. The resulting 61-million-person experiment on social influence and voting behaviour was published in Nature in 2012 by Adam Kramer, a Facebook data scientist, and colleagues.
This year Adam Kramer and colleagues published another large-scale experiment, which manipulated the emotional content of news feeds and examined how Facebook friends’ emotions affected one another. The latter study, on “massive-scale emotional contagion through social networks”, generated significant debate in both public and scientific spheres. Even the editor-in-chief of PNAS, where the study was published, voiced concern that the “collection of the data by Facebook may have involved practices that were not fully consistent with the principles of obtaining fully informed consent and allowing participants to opt out”.
Whether these studies were unethical in essence is a matter of debate. Both studies (and others that use large-scale social media data) largely followed the ethical guidelines of their institutions (Facebook, Cornell University, and UC San Diego). In addition, Facebook’s terms of agreement make it clear that research may be conducted, and users who sign up to those terms (legally) give consent. Lastly, one could argue that an online environment (for example, messages on the home page, such as the Facebook news feed) is constantly altered and changed for marketing and web-development reasons. The researchers thus argued that the anticipated harm of forgoing direct informed consent (the explicit permission to take part in a study) did not outweigh the benefits of scientific discovery.
Yet most critics pointed out that it was problematic for Facebook users to be part of a study without their knowledge. Many Facebook ‘users’ (or ‘potential participants’) felt concerned or even betrayed after the results were published. It seems, then, that many people would have liked to be informed that they were part of an experiment and would have appreciated a more comprehensive form of consent. This highlights that subjective expectations of consent might not always match the legal and institutional requirements that organisations follow, and it raises questions about how we conduct studies with large-scale data from social media platforms and hence about their research ethics.
In a recently published paper in Analysis of Social Issues and Public Policy I focus on this question of the appropriate role of informed consent in large-scale online studies. Informed consent is a cornerstone of ethical research with important implications for the use of data from social media platforms, and I argue that it is (still) vital for conducting large-scale experiments if we are to protect the privacy, autonomy, and control of users and, ultimately, our participants.
Based on the concept of privacy in context (Nissenbaum, 2009), I propose that this is because the norms of distribution and appropriateness are violated when researchers manipulate online contexts and collect data without consent. Contextual integrity is a theory of privacy in which the protection of personal information is linked to the norms of information flow in a specific context: it ties adequate protection for privacy to the norms of that context, demanding that the collection and dissemination of information be appropriate to it (Nissenbaum, 2004). For example, in a healthcare context, patients expect to share personal information about their health and most likely accept that this information is shared with a specialist. Their expectations are violated, however, if they learn that the information has been sold to a marketing company.
Rather than prescribing universal rules for what is public (a Facebook page, or a Twitter feed) and what is private, contextual integrity builds from within the normative bounds of a given context and illustrates why we must attend to the context of information flows and their use, not the nature of the information itself, when thinking about research ethics. It is this difference in how the information flow is perceived by researchers (as accessible and easy to ‘manipulate’) and by users (as private and shared only among ‘friends’) that creates the ethical tension, and it should be taken into account when we make ethical decisions about the use of social networking site (SNS) data.
To return to the studies that stirred the controversy: whereas users or participants expected to share information with their known social circle (i.e., similar to telling a friend how I feel today, or whether I voted), the flow of information was changed in that this information was modified, responses were studied, and the findings were then widely published (without consent). In the “faceless” context of an online experiment, users became “human subjects” and Facebook an experimental field, turning a virtual space into a virtual laboratory. Changing and using information on the news feed or personal profiles for research purposes geared toward behavioural change impinged on the autonomy and freedom of participants. This is troublesome because it harms perceptions of control and autonomy, as could be witnessed in the many negative responses, especially to the second paper. Moreover, it threatens the trust between the community of social scientists and participants, which may have been another reason for the many concerned voices after the articles were published. From this standpoint, control over information and what is done with it is crucial to the management of privacy and autonomy concerns and to the ethical handling of research on SNSs, and it has to be discussed in light of the overall values of the context.
I therefore maintain that informed consent is vital for conducting large-scale experiments if we are to protect the privacy, autonomy, and control of users and, ultimately, our participants. Unethical research can hinder academic progress and damage our standing and trustworthiness as a community. We ultimately need an earnest, innovative, and creative discussion in the field on how to implement ethical guidelines that first and foremost protect participants but also allow researchers to conduct sound research. I propose that we start by reconsidering conceptions of the risk, benefit, and harm to potential participants (e.g., SNS users) and treat participants as stakeholders in research, not as passive objects we observe. Various potential ways of gaining consent are discussed in my article.
Researchers, ethics committees, and funders must reconsider current approaches to consent if they are to live up to the challenges posed by large-scale online experiments. Shapiro and Ossorio warned that the private sector is charging ahead and de facto creating standards for data use that provide broad (I would argue overly broad) access to personal information and behaviour. As a field we should make sure that our work has social value that goes beyond selling products, and that we are on the front line of setting standards for accessing and working with people’s online information that are in line with our ethical consciousness and research practice.
This piece originally appeared on British Politics & Policy@LSEblogs and is reposted with permission.
Author: Ilka Gleibs
Dr Ilka Gleibs is an Assistant Professor in Social and Organisational Psychology in the Department of Social Psychology at the London School of Economics. Previously she worked as a Postdoctoral Fellow at the University of Exeter and as a Lecturer in Experimental Social Psychology at the University of Surrey. She obtained her PhD from the Friedrich-Schiller-University Jena (Germany), and has a range of interests in social psychology but focuses mainly on understanding the consequences of multiple social identities, changes in social identities, and well-being. She is a member of the LSE Research Ethics Committee.