As a companion piece to the UN Special Rapporteur’s report on racism, technology, and borders, the Refugee Law Lab and EDRi (European Digital Rights) published a report based on more than 40 interviews with refugees and people on the move, exploring the systemic factors that give rise to migration management experiments at and around the border.
"This report offers the beginning of a systemic analysis of migration management technologies, foregrounding the experiences of people on the move who are interacting with and thinking about surveillance, biometrics, and automated decision-making during the course of their migration journeys. Our reflections highlight the need to recognise how uses of migration management technology perpetuate harms, exacerbate systemic discrimination and render certain communities as technological testing grounds...
This report first presents recommendations for policy makers, governments, and the private sector on the use of migration management technologies, foregrounding the need to focus on the harmful impacts of these interventions and to abolish the use of high-risk applications. We then provide a brief snapshot of the ecosystem of migration management technologies, highlighting various uses before, at, and beyond the border and analysing their impacts on people’s fundamental human rights. The report concludes with reflections on why and how states are able to justify these problematic uses of technologies, exacerbating and creating new barriers to access to justice through the allure of technosolutionism, the criminalization of migration, and border externalization, all occurring in an environment of dangerous narratives stoking antimigrant sentiments. Technology replicates power relations in society that render certain communities as testing grounds for innovation. These experiments have very real impacts on people’s rights and lives."
Watch this presentation by Petra Molnar about the report’s findings and recommendations:
Introduction
"States are increasingly turning to novel techniques to ‘manage’ migration. Across the globe, an unprecedented number of people are on the move due to conflict, instability, environmental factors, and economic reasons. As a response to increased migration into the European Union over the last few years, many states and international organizations involved in migration management are exploring technological experiments in various domains such as border enforcement, decision-making, and data mining. These experiments range from Big Data predictions about population movements in the Mediterranean and Aegean seas to automated decision-making in immigration applications to Artificial Intelligence (AI) lie detectors and risk-scoring at European borders. These innovations are often justified under the guise of needing new tools to ‘manage’ migration in novel ways. However, often these technological experiments do not consider the profound human rights ramifications and real impacts on human lives.
Now, as governments move toward biosurveillance to contain the spread of the COVID-19 pandemic, we are seeing an increase in tracking projects and automated drones. If previous use of technology is any indication, refugees and people crossing borders will be disproportionately targeted and negatively affected. Proposed tools such as virus-targeting robots, cellphone tracking, and AI-based thermal cameras can all be used against people crossing borders, with far-reaching human rights impacts. In addition to violating the rights of the people subject to these technological experiments, the interventions themselves do not live up to the promises and arguments used to justify them. This use of technology to manage and control migration is also shielded from scrutiny because of its emergency nature. In addition, the basic protections that exist for more politically powerful groups with access to mechanisms of redress and oversight are often not available to people crossing borders. The current global digital rights space also does not sufficiently engage with migration issues, at best tokenizing the involvement of migrants and of the groups working with these communities.
Technology and migration are at the forefront of European policy development. For example, on 23 September 2020, the European Commission published its long-awaited “Pact on Migration and Asylum,” along with a host of legislative proposals, guidance, and other texts. The Pact explicitly mentions a study “on the technical feasibility of adding a facial recognition software...for the purposes of comparing facial images, including of minors” to eu-LISA (the European Union Agency for the Operational Management of Large-Scale IT Systems in the Area of Freedom, Security and Justice). The Pact also broadens border screening and increases immigration detention capabilities; includes a proposed “pre-entry” screening process with biometric data, security, health, and vulnerability checks; expands the EURODAC database for the comparison of biometric data; and strengthens the mandate of FRONTEX, the European Border and Coast Guard Agency. More broadly, the 2020 EU White Paper on Artificial Intelligence and its accompanying documents insufficiently engaged with the specific context of migration management technologies, relying on overly broad categories of “high-risk” applications without analysing how AI-type technologies impinge on people’s human rights in the migration context.
Ultimately, the primary purpose of the technologies used in migration management is to track, identify, and control those crossing borders. The issues around emerging technologies in the management of migration are not just about the inherent use of technology but rather about how it is used and by whom, with states and private actors setting the stage for what is possible and which priorities matter. The data gathering inherent in the development of these technologies also includes the expansion of existing mass-scale databases that underpin these practices to sensitive data, especially biometrics. The implementation of an EU-wide, overarching, interoperable smart border management system is also expected in the coming years. Such data and technology systems provide an enabling infrastructure for many automated decision-making projects with potentially harmful implications. The development and deployment of migration management technology is ultimately about decisions made by powerful actors about communities with few resources and few mechanisms of redress.
Politics also cannot be discounted, as migration management is inherently a political exercise. Migration data has long been politicised by states to justify greater interventions in the name of threatened national sovereignty and to bolster xenophobic and antimigrant narratives. The state’s ultimate power to decide who is allowed to enter and under what conditions is strengthened by ongoing beliefs in technological impartiality.
The unequal distribution of benefits from technological development privileges the private sector as the primary actor in charge of development, while states and governments wishing to control the flows of migrant populations benefit from these technological experiments. Governments and large organizations are the primary agents who benefit from data collection, while affected groups remain the subjects, relegated to the margins. It is therefore not surprising that the regulatory and legal space around the use of these technologies remains murky and underdeveloped, full of discretionary decision-making, privatized development, and uncertain legal ramifications.
These power and knowledge monopolies are allowed to exist because there is no unified global regulatory regime governing the use of new technologies, creating laboratories for high-risk experiments with profound impacts on people’s lives. This type of experimentation also foregrounds certain framings over others, prioritizing particular kinds of interventions (i.e., ‘catching liars at the border’ versus ‘catching racist border guards’). Why is it a more urgent priority to deport people faster than to use technological interventions to catch mistakes made in improperly refused immigration and refugee applications?
The so-called AI divide, the gap between those who are able to design AI and those who are subject to it, is broadening and highlights problematic power dynamics in participation and agency when it comes to the rollout of new technologies. Who gets to participate in conversations about proposed interventions? Which communities become guinea pigs for testing new initiatives? Why does so little oversight and accountability exist in this opaque space of high-stakes, high-risk decision-making?
The human rights impact of these state and private sector practices is a useful lens through which to examine these technological experiments, particularly in times of heightened border control security and screening measures, complex systems of global migration management, the increasingly widespread criminalization of migration, and rising xenophobia. States have clear domestic and international legal obligations to respect and protect human rights when it comes to the use of these technologies, and it is incumbent upon policy makers, government officials, technologists, engineers, lawyers, civil society, and academia to take a broad and critical look at the very real impacts of these technologies on human lives.
Unfortunately, the viewpoints of those most affected are routinely excluded from the discussion, particularly around no-go zones or ethically fraught uses of technology. There is a lack of contextual analysis in thinking through the impact of new technologies, resulting in grave ethical, social, political, and personal harm."
Read the full report: Technological Testing Grounds (2020): https://km4s.ca/wp-content/uploads/Technological-Testing-Grounds-2020.pdf