
Esra Ozkan

Sanne Stevens

February 18th, 2021

Policing in Europe: The Nexus Between Structural Racism and Surveillance Economies


Estimated reading time: 10 minutes


In a special long read, Esra Ozkan and Sanne Stevens, Table Co-Directors in LSE's Justice, Equity and Technology Project, take a deep dive into the use of data-driven tools in policing across Europe, and the implications of these for injustices linked to the oppression of marginalised groups and communities.

Europe is in the midst of a shift towards data-driven tools in policing. This move stems not only from institutional changes within law enforcement but also from technology companies aggressively seeking clients in the public sector. Yet, while high-tech policing features regularly in headline news and glossy reports, public discourse tends to neglect bigger questions of unfairness and injustice and to occlude the lived experiences of those groups most harmed and hindered by it. Taking a holistic perspective, we outline a convergence: growing markets for surveillance technology and persistent institutionalised and systemic problems of disregard and discrimination in European security cultures are mutually reinforcing the undignified treatment of racialised and marginalised groups. With myriad civil society reports and testimonies pointing to the problem of intensified racially targeted policing and criminalisation, any organised response to data-driven policing will need to take this convergence into account.

Racialised criminalisation and policing in Europe

As a main tool for the state's exertion of power and control, policing is inextricably and inherently linked to the oppression of marginalised groups and communities. On a continent where white supremacy runs deep but is hardly acknowledged, state control has structurally included racialised control. From the control of the colonial subject to the criminalised 'second-generation' immigrant, the history of policing in Europe is fraught with examples of the criminalisation and targeting of racialised communities. For example, immigration is actively constructed as an existential threat to society that needs to be policed into control. This logic mobilises a vast border security apparatus which tries to ward off immigrants by any means, through continuous immigration controls and mass incarceration in 'detention centres'. Another example is the prioritisation of counter-terrorism policies, which motivates and legitimises extremely invasive security measures and the stigmatisation and control of Muslims or those perceived as such. In both examples, racialised communities are the main targets – and both examples are catalysts for more policing.

Both examples are fundamental to the European Union project. There is an uncharacteristically active and relatively well-functioning collaboration between nation states when it comes to 'protection' from immigration and 'terrorism'. The EU border policing unit Frontex even has the questionable honour of being the first European unit with its own uniforms.

Data collection and synchronisation between databases are a top priority for more efficient security control, and invasive data-sharing programmes are legitimised by the fight against 'terrorism' or immigration. The new Frontex Regulation that came into force in December 2019 offered unprecedented possibilities for data-sharing and collection. A similar new Europol regulation is currently in preparation, which would allow the agency to process high volumes of data and increase the possibilities for cooperation with private parties. These extended powers for Europol are part of the 'Security Union Package' and were announced alongside the Counter-Terrorism Agenda. This new agenda promises to "anticipate, prevent, protect and respond" to so-called security threats and, not surprisingly, proposes more policing and surveillance as the solution.

Counter-terrorism policies are helping to renew old goals of stigmatising racialised communities, fuelling an environment which warrants discriminatory policing, over-surveillance and crackdowns on civil society organisations. This in turn reinforces a context for policing in which certain communities are continuously treated as suspects, and in which mistrust and divisiveness foment. Discussing narratives about suspicious and threatening minorities, Princeton Professor Ruha Benjamin captures the internalising impact: "Family, friends, and neighbours – all of us caught in a carceral web, in which other people's safety and freedom are predicated on our containment."

Stigmatisation and the culture of distrust are further charged as policing stretches beyond the police itself into the controls of repressive welfare schemes, or programmes in which public institutions are given the task of keeping watch on certain communities. Counter-terrorism policies include a range of 'preventative' programmes which broaden the scope of policing and control, using schools, healthcare institutions and social service organisations as informants in order to identify individuals in a 'pre-criminal space'. In the search for illegalised immigrants, ID checks are ramped up, with or without the use of high-tech biometric scanning technologies. For the undocumented, any interaction with a public institution – a hospital, public transport, a COVID test location – can become a point for ID checks and a possible avenue for arrest and deportation.

The disproportionate criminalisation of marginalised communities is also particularly clear in the response to the COVID-19 crisis. In many countries, the response to the pandemic has been one of further control and surveillance. The way lockdown measures were enforced led to increased police brutality – in some cases with lethal consequences, as people who allegedly disobeyed these measures died in incidents with the police. Racial profiling during arbitrary checks is rampant. In the UK, Black people are nine times more likely to be stopped and searched than white people, and people of colour are 54% more likely to be fined. Informal Roma settlements and the camps where migrants live have been heavily policed; in some countries this even meant deployment of the army and mandatory testing. Governments argued that these examples of excessive use of force are necessary to protect public health. The question arises: whose health counts as public? As the governance of a health crisis involves mobilising a security apparatus targeted at marginalised communities, the protection of 'public health' is only meant to serve the status quo, with the police as a conduit for control.

The push for data-driven automation in digital capitalism

As digitalisation strides forward, data-driven automation functions as an effective driver of racialised criminalisation. A whole category of 'data-driven policing' has emerged, which includes the deployment of automated surveillance technologies such as facial recognition, data-driven profiling and so-called 'predictive policing' through computational risk analysis. It includes programmes aiming to deploy swarms of AI-controlled drones at the European borders, or to introduce algorithms for schools to flag those at risk of 'radicalisation' in the context of preventative counter-terrorism programmes. It also includes crowd control sensors for COVID-19 measures. Tech companies marketing these tools promise greater control through fully automated surveillance and improved pattern recognition and predictive capabilities, increasing appetites in public institutions for data collection, analysis and storage.
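To make concrete how 'predictive' risk analysis can reproduce discriminatory patterns, consider the following deliberately simplified Python sketch of the feedback loop critics describe. All of it is hypothetical – the district names, crime rates and patrol shares are invented for illustration and resemble no vendor's actual system: a model trained on historically skewed records directs patrols back to the already over-policed area, and the records those patrols generate confirm the model's own prediction.

```python
# Toy sketch of a 'predictive policing' feedback loop.
# All numbers and names are hypothetical.
import random

random.seed(42)

# Two districts with *identical* underlying crime rates...
TRUE_CRIME_RATE = {"A": 0.05, "B": 0.05}
# ...but district A has historically received most of the patrols.
patrol_share = {"A": 0.8, "B": 0.2}

# Crime only enters the dataset where patrols are present to record it.
recorded = {"A": 0, "B": 0}
for _ in range(10_000):
    for district in ("A", "B"):
        if random.random() < TRUE_CRIME_RATE[district] * patrol_share[district]:
            recorded[district] += 1

# A 'risk model' trained on these records allocates future patrols in
# proportion to past recorded incidents, reproducing the original skew.
total = sum(recorded.values())
predicted_risk = {d: round(n / total, 2) for d, n in recorded.items()}

print("Recorded incidents:", recorded)          # roughly {'A': 400, 'B': 100}
print("New patrol allocation:", predicted_risk)  # roughly {'A': 0.8, 'B': 0.2}
```

Even in this toy setting, the model's output converges back to the historical 0.8/0.2 patrol split despite identical underlying crime rates: the 'risk' score simply launders the old patrol bias into an apparently objective prediction. Real systems are far more complex, but this is the structural dynamic the critique points to.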

It is important to acknowledge the long history of information collection, analysis, categorisation and profiling, and its relation to racialisation. Administration and information systems, analogue or digital, have long been indispensable for any state to make its subjects legible, available, governable. The history of administering populations is fraught with examples of racism, discrimination and stigmatisation of marginalised populations.

What is specific to this day and age, however, is the market's push for 'data-driven solutions'. In the era of digital capitalism, computational data analysis is sold as the superior method to manage any problem or concern. As data becomes a form of capital, the imperative to capture, store and compute data influences many key decisions about political governance. In the case of policing, it accelerates the turn of state actors towards data-driven technologies. In the European context, these are often developed partly in-house, with the support of renowned government information and security-technology players such as IBM, Deloitte or Thales. Furthermore, the authorities have the vast infrastructure of Microsoft, Apple, Facebook, Alphabet and Amazon at their disposal for further surveillance and data capture.

On top of that, the bonds between states and corporations are strengthened through the shift to the tech industry's ingenious 'software-as-a-service' model. Through subscription models, companies promise more 'agile' software which is continuously maintained and updated. However, this also entails constant monitoring of how computational tools are being used. Not only does this model reinforce round-the-clock data capture, it also creates 'lock-in' of clients and drives up the costs of abandoning or switching services. This makes customers, including government institutions, security forces and the police, dependent on corporate programmable digital infrastructure for their daily operations.

A picture starts to emerge in which a continuum of data-driven surveillance is built that serves the interests of both states and tech corporations, and mutually reinforces their powers. As over-policed, criminalised and marginalised communities are disproportionately targeted by the tools and policies of control that these technologies serve, the programmable infrastructures of surveillance are in fact tools of institutional racism. This is not surveillance capitalism, but rather a racialised digital capitalism that boosts technologies of discriminatory policing. It is not 'public' versus 'private' actors, but rather the powerful versus the marginalised.

In the most cynical and exploitative cases, intrusive new technologies of control and data collection are tested on those who are most literally pushed to the margins, as has been the case in the migration and refugee chain for many years. In other cases, inside European cities, the harms are more hidden, as all of this progresses under the seemingly natural and benevolent banner of innovation. One way this plays out is under the public-private partnership umbrella of the 'Smart City'.

The Smart City as an example of racialised digital capitalism sold by innovation

Since the start of the pandemic, tech evangelists and innovators have been celebrating the ways in which COVID-19 has accelerated public sector adoption of data-driven, cloud-based services. Recently, at a 'Smart City' event in Barcelona, Julia Glidden, vice president for public sector at Microsoft and formerly of IBM, observed with great enthusiasm how COVID-19 had given an exponential boost to digital transformation. "We have seen twenty years of innovation in those two months, as barriers to cloud adoption in a traditionally risk-averse sector, like the public sector, came down at an unprecedented rate." Cloud infrastructure "literally held the fabric of civil society together", she stated proudly. This remark is not far-fetched, as Big Tech enjoys its own momentum of disaster capitalism, benefitting from what University of Westminster's Miriyam Aouragh and her co-authors call the power-grabbing 'let's first get things done' moment.

This disaster-driven opportunism and technocratic framing, however, obscures the power and politics of the private sector takeover of public infrastructure. Whether through driverless cars or other IoT devices, the Smart City programme functions through a regime of completely integrated surveillance infrastructure. Indeed, in the paradigm of smart cities, infrastructure and surveillance become indistinguishable, as vendors sell surveillance as a service which will improve the city, placing sensors and data collection at the centre of addressing traffic congestion, waste problems and other urban management issues.

Some citizens might indeed appreciate certain conveniences or be relieved by more efficient waste management and the like. Still, these benefits seem quite meagre compared to the complete onslaught on public structures and space by corporate high-tech surveillance systems. More importantly, however, as the impact of surveillance is not equally distributed, some communities will be barred, expelled, over-policed and harassed out of the cities of the future, as they have been in the past. The Smart City is a telling example of the implementation of data-driven technologies which conserve and deepen discrimination while appearing neutral and even benevolent compared to the racist practices of a previous era. As Smart City mavens like Glidden delight at the almost overnight transformation in demand for data-driven services, these new technologies will only come to entrench and obfuscate existing urban injustices.

Interestingly, although the term 'digital transformation' suggests technological innovation is an inevitable force of nature, even Smart City evangelists admit the turn of fortunes during the pandemic was the result of concerted effort over years. As Glidden explains, recent developments would not have been possible without the leverage created by decades of relationship building. "Whether it is academia or civil society," she said, "it is trusted relationships that allowed for these rapid deployments, for pulling things forward. Allowing in a week what would have taken three years to do." Indeed, Glidden flawlessly represents that relationship building. She serves as an expert advisor for the UN and the EU – she was part of the working group for the 2018/19 Horizon 2020 Programme and is a Senior Research Fellow at the Vrije Universiteit Brussel. The close ties between academia, government and big tech companies are fundamental to a public agenda that pushes data-driven, cloud-based technology projects as solutions to myopically conceived societal problems. And it is this strategic and gradual effort that is quietly powering a racialised digital capitalism that harms and hinders racialised and marginalised communities.

As policing becomes enmeshed with data-driven technologies and high-tech surveillance partly run by the tech industry, the structural racism and over-policing of marginalised communities inherent in policing practices is being outsourced to corporate players. Here we can see processes of racialisation historically foundational to the dispossessions of capitalist exploitation play out within the new regime of surveillance capitalism. As CUNY Professor Ruth Wilson Gilmore rightly remarks, "Capitalism requires inequality, and racism enshrines it."

Organising for Justice, Resisting Technologies of Control

If justice is to be achieved in Europe, any response to data-driven policing must understand the interrelated nature of racialised criminalisation and the growth in surveillance technology markets. One path forward is to draw inspiration from the Black Lives Matter movement and related uprisings, which engage in a critical dialogue about race and the economy by taking up Cedric Robinson's work on racial capitalism. Siddhant Issar demonstrates that today BLM and the global movement for Black lives turn to the 'notion of racial capitalism precisely to highlight the historical, ongoing, and structural interconnections between race and capitalism'. This framework helps make a strong case for resisting policing technologies and the economic model driving these technologies in the first place. Resisting means not only addressing surveillance, but also the unequal distribution of its harms and how it is weaponised against specific communities. It means not only addressing the discrimination of this tool or that algorithm, but understanding the deployment of technologies in the context of the structural racism upheld by the institutions these technologies are meant to serve. It means understanding both how forms of oppression are historically embedded and the specifics of their transformation in the age of data-driven tools. It means, as A. Sivanandan explains, addressing not only 'the racism that discriminates' but also 'the racism that kills'.

In some instances, this might also mean recognising the ways in which conventional critiques of surveillance technologies can be self-contradictory and counterproductive. For example, approaches that stress the dangers of 'indiscriminate mass surveillance' – such as Shoshana Zuboff's widely lauded critique of surveillance capitalism – flatten out historical disparities between particular groups. These kinds of universalist claims often fall short by implying that surveillance only matters when it affects the general public. They painfully neglect that the impacts and harms of mass surveillance are unequally distributed, and could even legitimise targeted surveillance directed at specific criminalised communities.

Another important path forward entails centring the lived experiences of racialised and marginalised communities. This practice, which has roots in the histories of anti-racist resistance, offers one of the most compelling frameworks for organising. The everyday experiences of those bearing the brunt of policing inform the most relevant understanding of the realities of racialised digital capitalism, as well as the strategies for meaningfully changing those realities. It means moving organising from the institutions to the street, building power on the ground, working to move from paranoia to power.

To be clear, the very specifics of the European situation might also mean acknowledging that there is no such thing as 'the' European perspective. Local contexts differ immensely. Be it linguistic history, colonial history, a history of secularism or, conversely, religious plurality, jurisdictional specificities, and more, these differences are brushed over, as 'European' often implies mostly the Western European nations that dominate the narrative, policies and resources.

While centring lived experiences is one necessary step, here too caution is necessary to ensure its purpose within a strategy for transformative justice. All too often, personal experiences of harm from marginalised communities are put forward in a move of victimisation and stigmatisation which not only has a disempowering effect but serves a supremacist agenda, as it portrays communities as dysfunctional 'others', fraught with misery, and even pitches one community against another. Violence is presented as a consequence of life in a particular community, instead of as a product of structural and institutional racism. In contrast, contextualised stories are powerful when shared by communities to unpack the current conditions of systemic injustice, and to organise against them.

Caution is also needed with respect to calls for 'diversity and inclusion in tech'. This mantra can be heard emanating from technology companies and other institutions in the supply chain of data-driven policing. These actors promote participatory design processes that include members of marginalised groups, or the hiring of more people of colour within existing structures of power. Yet these prescriptions fall short, for they address neither the causes of the structural violence of data-driven technologies and racist policing, nor the underlying systems of white supremacy and capitalist extraction. Moreover, this framework serves to legitimise these companies and institutions for welcoming so-called participation and diversity.

By contrast, abolitionist and decolonial perspectives on policing offer a deeply transformative approach by linking the critique of data-driven policing to proposals to redirect resources from the police to communities and by stressing community well-being and collective self-reliance.

There are many opportunities throughout European countries and their contexts for organising which centres lived realities, honouring differences while strategising around shared issues, learning from each other's struggles, and constructing knowledge for our co-liberation. The Justice, Equity and Technology Table hopes to offer one convening space for this – as one of many efforts in support of the work done by organisers throughout Europe.

This article draws on a wealth of work by civil society organisations and watchdogs, whose published reports offer crucial insights into the specific harms caused by data-driven technologies used by the police and into the structural injustices of racialised policing in Europe. These reports address the many issues of racist policing and data-driven technologies, from a lack of oversight and accountability to the ways data-driven technologies hardwire discrimination and reinforce existing inequalities. An overview of these reports can be found in this public Zotero Library (an ongoing work in progress).

The Justice, Equity and Technology Table is one effort of many, and we would love to connect! Please get in touch if you would like to collaborate and join forces.

This article represents the views of the authors and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.

About the author

Esra Ozkan

Esra is a Co-Director of the Justice, Equity and Technology Table. She has been active in a variety of grassroots groups and civil society organisations in Turkey and in Belgium. She is interested in how social change happens and how social movements can provide a context for individual and community transformation.

Sanne Stevens

Sanne Stevens is Co-Director of the Justice, Equity and Technology Table. She has many years of experience working with civil society organisations in the field of technology and digital safety. Her interest is in critical analysis and organising which addresses the underlying power structures of data-driven technology and departs from a deliberate social justice framework.

Posted In: Digital Inequalities | Privacy
