New technology has played a key role in managing the refugee crisis caused by Russia’s invasion of Ukraine. Niamh Kinchin writes that while this technology can have a positive impact, it also runs the risk of exposing vulnerable people to insecurity and inequities.
When inclusively designed, technology can empower vulnerable populations and lower barriers to participation in political processes and local decision-making. ‘Humanitarian tech’, for example, uses information and digital tools to help people affected by devastating events, conflict, or forced displacement.
In the case of refugees, humanitarian tech has been used for many well-intended purposes, including fostering social inclusion, rehabilitation, better health outcomes, and permanent settlement. But humanitarian tech can also be constructed to emphasise ‘the generosity of the elite’ rather than to reflect the worldviews, skills, and political sensibilities of the vulnerable people it claims to help.
In the context of migration, technology often means surveillance, biometrics, and automated decision-making. Take President Biden’s ‘smart wall’ as an example. Although not a physical wall like that proposed by Donald Trump, Biden’s wall uses cameras, sensors, large-scale x-ray machines, and fixed towers to seek out people trying to cross the border from Mexico.
Frontex, Europe’s border control agency, uses drones and vehicles equipped with thermal and day cameras, surveillance radars, and motion sensors to detect people attempting to cross the Mediterranean. Canada is trialling machine learning in its asylum decision-making processes. Hungary, Latvia, and Greece have piloted an automated lie-detection test called ‘iBorderCtrl’ that uses artificial intelligence to record and analyse the facial micro-gestures of travellers crossing international borders to determine whether they are answering questions truthfully.
Technologies that collect, mine, and analyse personal data, such as biometrics and the forensic analysis of mobile meta-data and social media, pose risks to privacy rights, data protection, the right to liberty, and freedom from discrimination. However, technology that manages and controls migration, such as surveillance tech, tends to be shielded from scrutiny because of its emergency nature.
Technology in Ukraine
Within this complex matrix of humanity and technology, the ‘tech world’ was one of the first to respond when Ukraine called for help. Examples include Vodafone offering free mobile connectivity to 200,000 Ukrainian refugees, and Israel’s Sheba Medical Center opening a virtual hospital in Moldova, where telemedicine technologies allow medical staff in Israel to treat refugees crossing the border from Ukraine.
On 9 May, Prague announced that it had become the first city in the EU to introduce a chatbot for Ukrainian refugees. The chatbot answers the most frequently asked questions on issues such as support, housing, schooling, and employment (a minimal sketch of how such a bot can work follows below). Social media has spread awareness of the realities of refugees ‘on the ground’ faster and more graphically than ever before. Inbuilt features such as device-finder applications have helped Ukrainians locate stolen devices, and even draw attention to raids and thefts by Russian soldiers.
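The mechanics behind such a chatbot need not be complex. Prague has not published its implementation, but a frequently-asked-questions bot can be sketched as keyword matching over a curated set of answers. The Python sketch below is a minimal, hypothetical illustration only; the topics, keywords, and answers are placeholders, not the Prague bot’s actual content.

```python
# A minimal sketch of a keyword-matching FAQ chatbot for refugee services.
# The topics, keywords, and answers are hypothetical placeholders; a real
# deployment would add natural-language understanding, Ukrainian and Czech
# translation, and escalation to human staff.
import re

FAQ = {
    "housing": (
        {"housing", "shelter", "accommodation", "apartment"},
        "Emergency accommodation is arranged through the regional assistance centre.",
    ),
    "schooling": (
        {"school", "schooling", "education", "children"},
        "Children can be enrolled at the nearest school; bring identity documents.",
    ),
    "employment": (
        {"work", "job", "employment", "permit"},
        "Holders of temporary protection may work without a separate permit.",
    ),
}

FALLBACK = "Sorry, I don't have an answer for that yet. A staff member will follow up."


def answer(question: str) -> str:
    """Return the canned answer whose keyword set best overlaps the question."""
    words = set(re.findall(r"\w+", question.lower()))
    best_topic, best_overlap = None, 0
    for topic, (keywords, _) in FAQ.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best_topic, best_overlap = topic, overlap
    return FAQ[best_topic][1] if best_topic else FALLBACK


print(answer("Where can my children go to school?"))  # -> schooling answer
```

Even a bot this simple illustrates the wider point of this article: the answers it gives, and what it does with the questions it receives, embody choices made by whoever builds and operates it.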
Caveats, however, loom large. Some technology, although well-meaning, potentially exposes refugees to harm. Altruistic projects developed with limited humanitarian knowledge and without collaboration with governments or NGOs can increase the risk of human trafficking and exploitation. A prominent example is Ukraine Take Shelter, which a Seattle-based teenager built to help Ukrainian refugees find potential safe spaces in neighbouring countries and elsewhere. The website, which one cybersecurity expert labelled a ‘Craigslist for paedophiles’, was criticised for its lax attitude to vetting potential hosts and for potentially breaching the EU’s General Data Protection Regulation (GDPR).
Technology can also be used maliciously. In late February, a Ukrainian border control station was struck by a data-wiper cyberattack that slowed the processing of refugees crossing into Romania. Even border control technologies that promise greater security and efficiency can do active harm. Biometric data collected from refugees poses particular risks of abuse. Refugee host governments often lack robust data protection legislation, and if refugees’ biometric data is shared with either the host nation or the nation of persecution, international protection may be compromised.
The misuse of biometric data may leave refugees exposed to discrimination and rights abuses if authoritarian states utilise the data to identify individuals and groups whose loyalty they question and target them for surveillance or punitive action. For Ukrainian refugees, requirements for biometric data have created onerous administrative burdens. Both the UK and Canada have reported processing ‘bottlenecks’ with some Ukrainians needing to travel long distances to get their biometric data captured before a visa can be issued.
And what of the most ominous of surveillance technologies, facial recognition? In March, the US company Clearview AI offered the Ukrainian government free use of its facial recognition technology to uncover infiltrators, combat misinformation, identify the dead, and reunite refugees with their families. For now, Ukraine is limiting the technology to identifying dead Russian soldiers as well as its own casualties. However, working with Clearview raises ethical and legal concerns. Clearview has repeatedly fallen foul of the GDPR for scraping biometric information from the web and disclosing it through a facial recognition tool, and it has been heavily sanctioned by the data protection authorities of Italy and France.
In November 2021, the Australian Information Commissioner and Privacy Commissioner found, after a joint investigation with the UK’s Information Commissioner’s Office (ICO), that Clearview AI had breached Australia’s Privacy Act 1988. In May, the ICO ordered Clearview to stop processing people’s personal data in the UK and to delete it. It also fined Clearview £7.5 million for failing to follow the UK’s data protection laws. On 9 May, the American Civil Liberties Union announced that Clearview had agreed to abide by the Illinois Biometric Information Privacy Act (BIPA), meaning that the company is permanently banned in the US from making its database available to most businesses and other private entities.
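At a technical level, the contested core of a tool like Clearview’s is straightforward: each face image is reduced to a numerical embedding, and a query face is matched against a scraped database by similarity search. The sketch below illustrates that matching step only; it is a generic, hypothetical reconstruction, not Clearview’s actual system, and random vectors stand in for the output of a face-embedding model.

```python
# Generic sketch of the matching step behind a facial recognition tool:
# a probe face embedding is compared against a scraped database of
# embeddings by cosine similarity. Real systems derive these vectors
# from a deep face-embedding model; random vectors stand in here.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database: 10,000 identities as 128-dimensional embeddings
# (a common embedding size), L2-normalised so a dot product equals
# cosine similarity.
db = rng.normal(size=(10_000, 128))
db /= np.linalg.norm(db, axis=1, keepdims=True)

# A probe embedding: in practice, the model's output for a query photo.
# Here, a slightly noisy copy of identity 4242 simulates a new photo
# of a person already in the database.
probe = db[4242] + rng.normal(scale=0.02, size=128)
probe /= np.linalg.norm(probe)

# Nearest-neighbour search by cosine similarity.
scores = db @ probe
best = int(np.argmax(scores))
THRESHOLD = 0.8  # tuning this trades false matches against missed ones

if scores[best] >= THRESHOLD:
    print(f"Match: identity {best} (similarity {scores[best]:.3f})")
else:
    print("No confident match")
```

The ethical weight lies less in this arithmetic than in the database behind it: the billions of scraped images to which regulators in Italy, France, the UK, and Australia have objected.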
Good intentions, mixed outcomes
Perhaps the most striking message to emerge from the tech/humanity matrix in Ukraine is the way technology shines a light on the inequity of refugee treatment. In November 2021, Poland approved a $398 million, 18-foot-high wall, fitted with cameras and motion sensors, to be built along its border with Belarus to keep refugees out. It was also sending automated texts to people, mainly Syrians and Iraqis, telling them not to attempt to cross from Belarus. This stands in stark contrast to the seemingly open-arms policy it has announced for Ukrainian refugees.
Technology is not neutral. It carries the values and intentions of those who build it, use it, and attempt to subvert it. The war in Ukraine shows how technology in humanitarian spaces must be conceived according to gradations of opportunity and risk. Opportunities for technological altruism should be encouraged, but with care and safety. Ultimately, technology tells a story, and in Ukraine, that story is about good intent, vulnerability, potential harm, and inequity.
Note: This article gives the views of the author, not the position of EUROPP – European Politics and Policy or the London School of Economics. Featured image credit: European External Action Service