This report focuses on the impacts of automated decision-making in Canada’s immigration and refugee system from a human rights perspective. It highlights how the use of algorithmic and automated technologies to replace or augment administrative decision-making in this context threatens to create a laboratory for high-risk experiments within an already highly discretionary system. Vulnerable and under-resourced communities such as non-citizens often have access to less robust human rights protections and fewer resources with which to defend those rights. Adopting these technologies in an irresponsible manner may only serve to exacerbate these disparities.
The use of these technologies is not merely speculative: the Canadian government has been experimenting with their adoption in the immigration context since at least 2014. For example, the federal government has been developing a system of “predictive analytics” to automate certain activities currently conducted by immigration officials and to support the evaluation of some immigrant and visitor applications. The government has also quietly sought input from the private sector on a 2018 pilot project for an “Artificial Intelligence Solution” in immigration decision-making and assessments, including in Humanitarian and Compassionate applications and Pre-Removal Risk Assessments. These two application types are often a last resort for vulnerable people fleeing violence and war who are seeking to remain in Canada.
The ramifications of using automated decision-making in the immigration and refugee space are far-reaching. Hundreds of thousands of people enter Canada every year through a variety of applications for temporary and permanent status, many of them coming from war-torn countries and seeking protection from violence and persecution. The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious breaches of internationally and domestically protected human rights in the form of bias, discrimination, privacy violations, and denials of due process and procedural fairness, among other harms. These systems will have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives.
This report first outlines the methodology and scope of analysis and provides a few conceptual building blocks to situate the discussion of automated decision systems. It then surveys some of the current and proposed uses of automated decision-making in Canada’s immigration and refugee system. Next, it provides an overview of the various levels of decision-making across the full lifecycle of the immigration and refugee process to illustrate how these decisions may be affected by new technologies. The report then develops a human rights analysis of the use of automated decision systems from both a domestic and an international perspective. Without a critical human rights analysis, the use of automated decision-making may result in infringements of a variety of rights, including the rights to equality and non-discrimination; freedom of movement, expression, religion, and association; privacy; and the rights to life, liberty, and security of the person. The use of these technologies in the immigration and refugee system also raises crucial constitutional and administrative law issues, including matters of procedural fairness and standard of review. Finally, the report documents a number of other systemic policy challenges related to the adoption of these technologies, including those concerning access to justice, public confidence in the legal system, private sector accountability, technical capacity within government, and other global impacts.
The report concludes with a series of specific recommendations for the federal government; the complete and detailed list is available at the end of this publication.