
Dipa Patel

April 30th, 2020

What does the Covid-19 pandemic mean for digital rights?

Estimated reading time: 5 minutes

In the first instalment of a two-part series exploring the implications of Covid-19 for digital rights and data ethics, MSc Health and International Development alumna Elise Racine looks at what lessons can be drawn from her 2019 dissertation.

While at the London School of Economics, I had the opportunity to delve into the societal impacts and human rights implications of emerging technologies, particularly for vulnerable groups. I am a huge believer in the power of social innovation to tackle some of our world’s most pressing challenges—including pandemics like COVID-19. But I realize that to accomplish their intended purpose of doing good while reducing harm, such solutions must be ethically designed, implemented, and utilized, as well as critically and consistently scrutinized.

Exceptional circumstances, however, can lead to poorly designed and/or implemented projects that bypass democratic procedures. Such projects can be misused and, ultimately, place individuals at greater risk. I examined this reality in my MSc dissertation on humanitarian digital identity systems for displaced populations. As I see a growing number of tech initiatives aimed at helping trace and contain the spread of the coronavirus, this research seems increasingly relevant. So, I reviewed my dissertation to see what lessons could be applied as we navigate these difficult circumstances and the unprecedented technological deployments they have inspired.

COVID-19-Related Technological Solutions

Before diving into the implications of these technologies, I want to quickly review the novel digital surveillance tools that are surfacing around the world. Many of these tools, including various mobile applications, have been used for contact tracing, or the process of identifying everyone with whom an infected individual has recently had contact. Contact tracing is an important communicable disease control measure and has been crucial to slowing the transmission of the coronavirus. But many of these digital tracking initiatives have been involuntary, raising questions about government oversight and the erosion of data privacy.
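
Purely for illustration, the sketch below shows the basic logic behind this kind of contact identification: given hypothetical co-location records, it flags anyone who overlapped with a confirmed case for more than a set amount of time. Everything here (the data, the 15-minute threshold, the function names) is invented for the example; real systems, such as Bluetooth-based exposure notification, are considerably more sophisticated.

# A toy, purely illustrative contact-tracing check over made-up co-location records.
from datetime import datetime, timedelta

# Hypothetical log entries: (person, place, arrival, departure)
visits = [
    ("patient_a", "cafe", datetime(2020, 4, 1, 9, 0), datetime(2020, 4, 1, 10, 0)),
    ("person_b", "cafe", datetime(2020, 4, 1, 9, 30), datetime(2020, 4, 1, 9, 50)),
    ("person_c", "cafe", datetime(2020, 4, 1, 12, 0), datetime(2020, 4, 1, 12, 30)),
]

def recent_contacts(case, records, min_overlap=timedelta(minutes=15)):
    """Return everyone who shared a location with `case` for at least `min_overlap`."""
    contacts = set()
    for name, place, start, end in records:
        if name != case:
            continue
        for other, other_place, other_start, other_end in records:
            if other == case or other_place != place:
                continue
            # How long the two visits to the same place overlapped
            overlap = min(end, other_end) - max(start, other_start)
            if overlap >= min_overlap:
                contacts.add(other)
    return contacts

print(recent_contacts("patient_a", visits))  # {'person_b'}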

China, where the virus first emerged, has been leading the charge. Mobilizing the country’s extensive surveillance apparatus, authorities have employed a mix of travel records, CCTV cameras, drones, and location data from cellphones to trace the coronavirus’ spread and enforce quarantines. At least 28 other countries are similarly utilizing mobile data to analyze social patterns, track confirmed cases, and/or monitor quarantined individuals. These include Austria, Australia, Germany, India, Israel, Italy, South Korea, Switzerland, Taiwan, and the United States. In several of these countries, this data has been supplemented with intelligence gathered from credit card transactions, security footage, and other sources. In South Korea, some of this information—like the ages, work and home addresses, and frequented restaurants of confirmed patients—has appeared on government websites. While names are omitted, there have been cases where these reports have provided enough details for the public to identify these individuals.

How Does This All Relate to Fundamental Rights? 

There is a huge power differential between those collecting and those supplying data in COVID-19-related surveillance projects, many of which are government-controlled. This asymmetry, combined with the exceptional circumstances and a lack of transparency, has ignited doubts about whether these digital tools can achieve informed and meaningful consent. Informed consent is not only critical to upholding human dignity and autonomy, but also a core component of a rights-based approach to both health and innovation. For consent to be meaningful, it must be an ongoing process, revocable, and adaptable to different digital abilities. Most importantly, it must be freely given. But in the current situation, do people truly have a choice? Accounts, including those from democracies like South Korea, indicate that in certain circumstances individuals cannot opt out of these surveillance measures and are not even notified when their personal information is collected and used.

The value attached to personal information has led to claims that data is the most valuable resource on the planet, or the so-called “new oil.” But data is not oil. Rather, I and others would argue that the ability to control one’s personal data is a fundamental right, in which case such commodification is highly problematic. When facing a global crisis, like a pandemic, this commodification has the potential to occur under the guise of advancing the “public good.” In general, our fundamental rights may be restricted during a public health emergency through curfews, lockdowns, forced border closures, and other limits on personal freedoms. In the case of COVID-19, these countermeasures have been necessary to contain the virus’ spread.

But they have also affected the balance between individual rights and the protection of the population at large and may have serious implications for data privacy. For example, citizens may be willing to forgo privacy protections if they believe it is necessary to fight COVID-19. In other cases, governments are demanding that individuals do so for the sake of public safety. But if these ad hoc measures lack sound democratic oversight, they could greatly infringe civil liberties. This may occur now as these programs are being deployed in the context of the pandemic. Or, it may happen once this crisis ends if such tools continue to be utilized.

Function Creep and the Potential for Mass Surveillance

These temporary restrictions are especially concerning when we consider the risk of function creep, that is, the possibility that projects may exceed their original purpose. Not only is it difficult to roll back surveillance practices once they are in place, but underlying political logics and technological capabilities are in constant flux. For instance, an authoritarian regime may come into power and use COVID-19 digital measures to locate and imprison dissidents. Or, analytic advancements may enable unprecedented surveillance capabilities such that seemingly innocuous data reveals unintended information that can be manipulated and abused (including retroactively).

Digital initiatives may not always be able to withstand these changes, especially if they are poorly designed and/or implemented. In particular, they may be repurposed as political tools and/or co-opted into mass surveillance networks. And while the rush to apply pandemic surveillance solutions may be necessary to combat the exponential growth of the virus, many projects have forged ahead with little scrutiny. This raises questions as to whether proper safeguards have been put in place to prevent these tools from being misused now or in the future. History, unfortunately, provides multiple examples of technology being misappropriated to monitor, manipulate, and/or control individuals.

The fact that we are already seeing governments utilize technology that can identify and track individuals without their consent or knowledge is not a promising sign. Take, for example, Hong Kong and India, where officials have employed geofencing to draw virtual perimeters around quarantine zones and monitor signals from smartphones and wristbands to pinpoint offenders, who can be jailed for their actions. Ultimately, the COVID-19 pandemic has the potential to usher in a new era of mass digital surveillance. Averting this prospect will require significant due diligence on our part.
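
To make the geofencing mechanism concrete, here is a minimal, purely illustrative sketch of the underlying check: compare a device’s reported position against the centre of a quarantine zone and flag it if it strays beyond a set radius. The coordinates, radius, and function names are all invented for the example and are not drawn from any actual deployment.

# A toy geofence check: is a reported position still inside a circular zone?
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate distance in metres between two coordinates (haversine formula)."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_zone(device_lat, device_lon, zone_lat, zone_lon, radius_m=100):
    """Flag whether a device's reported position remains within the quarantine zone."""
    return distance_m(device_lat, device_lon, zone_lat, zone_lon) <= radius_m

# Hypothetical home-quarantine zone with a 100-metre radius.
print(inside_zone(22.2820, 114.1580, 22.2819, 114.1581))  # True (still inside)
print(inside_zone(22.2900, 114.1580, 22.2819, 114.1581))  # False (outside the zone)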

These developments have particular implications for vulnerable populations, which I will expand upon in Part II of this series.


Elise Racine is an MPA candidate at the Hertie School and a research associate for A Path for Europe (PfEU), a nonprofit think tank, where she focuses on digital rights and data ethics. She holds an MSc with Distinction from LSE and a BA with Honors from Stanford University.

The views expressed in this post are those of the author and in no way reflect those of the International Development LSE blog or the London School of Economics and Political Science. 
