This report delivers a human‑rights‑centred, comparative audit of how automated recommendation‑making tools (ARMTs) are shaping immigration decisions in three major democracies. It exposes systemic bias, opacity, and accountability deficits, and offers a concrete roadmap, particularly for the UK, for embedding ethical safeguards, transparency, and robust oversight into any future deployment of such tools.
Guiding aims:
The report warns that unchecked automated recommendation‑making tools in immigration threaten fairness, transparency, and human rights. It urges swift, binding regulation, especially in the UK, drawing on lessons from Canada’s mandatory AI assessments and the USA’s fragmented oversight.
| Theme | Findings (with quotes) |
|---|---|
| Seven risk categories – bias, transparency, controversial uses, terminology, psychological effects, data, accountability. | “These risks fall under seven categories… this list facilitates regulators to consider where the trade‑offs must lie.” |
| Bias is pervasive – historical, data, developer, proxy, automation, feedback‑loop, confirmation, computational, quantitative. | “Historical bias… if past decisions were biased, those biases can be embedded into the algorithm and applied to future cases.” |
| Transparency gaps – companies invoke trade secrets; ATRS (Algorithmic Transparency Recording Standard) records are voluntary and rarely published (only nine records, none from the Home Office). | “Only nine records across all government departments had been published and none by the Home Office.” |
| Controversial ‘automated suspicion’ – tools that flag applications for extra scrutiny (e.g., US ICM, UK GPS‑tagging). | “These tools do not make final decisions but generate suspicion… leading to increased scrutiny or deprioritisation.” |
| Terminology matters – Using “human‑in‑the‑loop” lets minimal human checks bypass regulation. Recommendation: shift to “chain of decision‑making”. | "The lack of clarity over definitions risks allowing companies and government departments to describe their ARMTs in terms that avoid regulation." |
| Psychological impact – Migrants feel “indignity” when a computer influences life‑altering decisions. | “There is a unique sense of injustice or indignity when an administrative decision … relies … on an algorithmic output.” |
| Data stewardship failures – No systematic collection of protected‑characteristic data, hampering bias monitoring. | "There is a lack of public information about the collection, storage and use of data by algorithmic systems, especially when data is provided by a commercial entity. This leaves numerous important questions unanswered." |
| Accountability erosion – Algorithms are often “black‑box”, making it impossible to trace why a decision was made. | “People will never know why the computer says ‘yes’ or ‘no’.” |
| Case‑Study Insights | • UK GPS‑tagging – an ARMT decides whether a person should wear an ankle tag, yet individuals are never notified of its use. • Canada ITAT – risk‑assessment tool used by a “Risk Assessment Unit”; the tool is claimed to be “non‑black‑box” but lacks public AIAs, GBA+, or peer‑review disclosures. • US ICM (Palantir) – used for investigative case management; controversy over its role in immigration enforcement. • US GeoMatch – predictive placement tool for refugees; praised for efficiency but criticised for lack of explainability and for treating refugees as “economic objects”. |
| Outlier finding – Some tools (e.g., GeoMatch) are opt‑in for skilled migrants in Canada, showing a different governance model where consent and user control are built‑in. | "This tool demonstrates the potential of ARMTs to improve migrants’ lives but re-emphasises the need to proactively regulate ARMTs to safeguard against possible harms." |
| Audience | Practical Take‑aways |
|---|---|
| Policymakers (UK Government, Home Office) | • Enact a binding ban on black‑box algorithms and on automated refusals (mirroring Canada). • Adopt the “chain of decision‑making” terminology and require disclosure at every stage. • Make AIAs and ATRS records mandatory, publicly searchable, and updated on a set schedule (e.g., every 6 months). • Require collection of protected‑characteristic data for bias monitoring, with strict data‑protection safeguards. • Create an independent oversight body (audit, peer review, GBA+ assessment) with enforcement powers (funding withdrawal). |
| Legal practitioners & NGOs | • Use the report’s checklist of disclosure failures (e.g., lack of notice of ARMT use) to frame FOI/SAR requests and judicial‑review arguments. • Advocate for statutory duties under the Equality Act 2010 to consider algorithmic bias. • Educate clients that they have a right to ask whether an ARMT was involved and to request an explanation. |
| Immigration officials & public‑sector managers | • Implement training programmes on algorithmic bias, explainability, and the “chain of decision‑making”. • Introduce internal audit logs that record when an ARMT is consulted and the human decision‑maker’s final action. • Pilot transparent, explainable models (e.g., rule‑based scoring) for high‑risk decisions. |
| Technology developers & vendors | • Design tools to be explainable by design (avoid opaque ML models for public‑sector use). • Provide documentation packages (model cards, data sheets) that satisfy future mandatory AIAs. • Include human‑rights impact assessments early in the development lifecycle. |
| Academics & researchers | • Build on the seven‑risk framework for comparative studies in other jurisdictions. • Investigate feedback‑loop dynamics in ARMTs (e.g., GPS‑tagging, risk‑scoring). • Explore opt‑in consent models like GeoMatch’s Canadian version as a pathway to ethical AI. |
| Future‑research agenda (as identified by the author) | • Empirical measurement of bias outcomes once protected‑characteristic data become available. • Longitudinal studies on the psychological effects of algorithmic suspicion on migrants. • Evaluation of independent audit mechanisms and their impact on tool redesign. |
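Two of the recommendations above, explainable rule‑based scoring and internal audit logs of ARMT consultations, can be sketched in code. The following is a hypothetical illustration, not anything described in the report: the rule names, weights, and `AuditEntry` fields are invented to show what a traceable record (every point attributable to a named rule, plus the human decision‑maker’s final action) might look like.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class Rule:
    """One transparent rule: a name, a predicate over the case facts, a weight."""
    name: str
    applies: Callable[[dict], bool]
    weight: int

@dataclass
class AuditEntry:
    """Record of an ARMT consultation and the human decision-maker's final action."""
    case_id: str
    tool: str
    score: int
    fired_rules: list
    human_decision: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def score_case(case: dict, rules: list) -> tuple:
    """Return (total score, names of rules that fired) - every point is traceable."""
    fired = [r for r in rules if r.applies(case)]
    return sum(r.weight for r in fired), [r.name for r in fired]

# Illustrative rules only - NOT the criteria of any real immigration tool.
RULES = [
    Rule("missing_document", lambda c: not c.get("documents_complete"), 2),
    Rule("prior_refusal", lambda c: c.get("prior_refusals", 0) > 0, 1),
]

case = {"documents_complete": False, "prior_refusals": 1}
score, fired = score_case(case, RULES)
entry = AuditEntry("CASE-001", "demo-scorer", score, fired, human_decision="approved")
print(score, fired)  # → 3 ['missing_document', 'prior_refusal']
```

Because each rule is named and weighted, the log entry can answer the question the report says black‑box tools cannot: why the computer said “yes” or “no”.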
| Method | Details |
|---|---|
| Field research (summer 2024) | Conducted over 50 semi‑structured interviews with more than 25 organisations across Canada, the USA, and the UK. |
| Stakeholder mix | Government officials, civil servants, technology developers, software engineers, academics, lawyers, and migrant‑rights organisations. |
| Geographic coverage | Canada (Toronto, Ottawa, Montreal, Vancouver); USA (New York, Washington DC, San Francisco); UK (London, York). |
| Case‑study selection | Five in‑depth case studies: • UK GPS‑tagging • Canada Integrity Trend Analysis Tool (ITAT) • Canada privately‑sponsored refugee applications • US Investigative Case Management (ICM) • US GeoMatch. |
| Document analysis | Reviewed policy documents, legislative texts, public‑sector AI guidelines, FOIA requests, and published AIAs. |
| Qualitative synthesis | Identified recurring risk categories, mapped regulatory gaps, and extracted direct quotations from interviewees (e.g., “There is a unique sense of injustice…”). |
| Limitations | Research reflects the state of affairs up to September 2024; rapid tech‑policy changes after that date are not captured. |
