
Bots at the Gate: A Human Rights Analysis of Automated Decision Making in Canada’s Immigration and Refugee System

Posted on: November 27, 2019

This report focuses on the impacts of automated decision-making in Canada’s immigration and refugee system from a human rights perspective. It highlights how the use of algorithmic and automated technologies to replace or augment administrative decision-making in this context threatens to create a laboratory for high-risk experiments within an already highly discretionary system. Vulnerable and under-resourced communities such as non-citizens often have access to less robust human rights protections and fewer resources with which to defend those rights. Adopting these technologies in an irresponsible manner may only serve to exacerbate these disparities.

The use of these technologies is not merely speculative: the Canadian government has been experimenting with their adoption in the immigration context since at least 2014. For example, the federal government has been developing a system of “predictive analytics” to automate certain activities currently conducted by immigration officials and to support the evaluation of some immigrant and visitor applications. The government has also quietly sought input from the private sector for a 2018 pilot project involving an “Artificial Intelligence Solution” in immigration decision-making and assessments, including in Humanitarian and Compassionate applications and Pre-Removal Risk Assessments. These two application types are often a last resort for vulnerable people fleeing violence and war who are seeking to remain in Canada.

The ramifications of using automated decision-making in the immigration and refugee space are far-reaching. Hundreds of thousands of people enter Canada every year through a variety of applications for temporary and permanent status. Many come from war-torn countries seeking protection from violence and persecution. The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious breaches of internationally and domestically protected human rights in the form of bias, discrimination, privacy violations, and failures of due process and procedural fairness, among others. These systems will have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives.

This report first outlines the methodology and scope of analysis and provides a few conceptual building blocks to situate the discussion surrounding automated decision systems. It then surveys some of the current and proposed uses of automated decision-making in Canada’s immigration and refugee system. Next, it provides an overview of the various levels of decision-making across the full lifecycle of the immigration and refugee process to illustrate how these decisions may be affected by new technologies. The report then develops a human rights analysis of the use of automated decision systems from a domestic and international perspective. Without a critical human rights analysis, the use of automated decision-making may result in infringements of a variety of rights, including the rights to equality and non-discrimination; freedom of movement, expression, religion, and association; privacy; and the rights to life, liberty, and security of the person. These technologies also raise crucial constitutional and administrative law issues in the immigration and refugee system, including matters of procedural fairness and standard of review. Finally, the report documents a number of other systemic policy challenges related to the adoption of these technologies, including those concerning access to justice, public confidence in the legal system, private sector accountability, technical capacity within government, and other global impacts.

The report concludes with a series of specific recommendations for the federal government, the complete and detailed list of which is available at the end of this publication. In summary, they include recommendations that the federal government:

  1. Publish a complete and detailed report, to be maintained on an ongoing basis, of all automated decision systems currently in use within Canada’s immigration and refugee system, including detailed and specific information about each system.
  2. Freeze all efforts to procure, develop, or adopt any new automated decision system technology until existing systems fully comply with a government-wide Standard or Directive governing the responsible use of these technologies.
  3. Adopt a binding, government-wide Standard or Directive for the use of automated decision systems, which should apply to all new automated decision systems as well as those currently in use by the federal government.
  4. Establish an independent, arms-length body with the power to engage in all aspects of oversight and review of all use of automated decision systems by the federal government.
  5. Create a rational, transparent, and public methodology for determining the types of administrative processes and systems which are appropriate for the experimental use of automated decision system technologies, and which are not.
  6. Commit to making complete source code for all federal government automated decision systems, regardless of whether they are developed internally or by the private sector, public and open source by default, subject only to limited exceptions for reasons of privacy and national security.
  7. Launch a federal Task Force that brings key government stakeholders alongside academia and civil society to better understand the current and prospective impacts of automated decision system technologies on human rights and the public interest more broadly.
