The computer says so: automated recommendation-making tools in immigration systems (2024)

Posted on: October 25, 2025

This report delivers a human‑rights‑centred, comparative audit of how automated recommendation‑making tools are shaping immigration decisions in three major democracies, exposing systemic bias, opacity, and accountability deficits. It offers a concrete roadmap, particularly for the UK, for embedding ethical safeguards, transparency, and robust oversight into any future deployment of such tools.

Guiding aims:

  • Identify the risks and harms of Automated Recommendation‑Making Tools (ARMTs) in immigration.
  • Map where ARMTs are being developed and deployed in Canada and the USA.
  • Analyse how these tools are regulated in the three jurisdictions.
  • Derive policy recommendations for the UK.

What is this research about?

The report warns that unchecked automated recommendation‑making tools in immigration threaten fairness, transparency and human rights, and urges swift, binding regulation—especially in the UK—based on lessons from Canada’s mandatory AI assessments and the USA’s fragmented oversight.

What do you need to know? – Context & Why It Matters

  • Policy vacuum – The UK Home Office has embraced automation (Digital‑by‑Design strategy, 2021) but no binding regulation exists for ARMTs, leading to a “human‑rights‑free zone” (UN Special Rapporteur).
  • Comparative advantage – Canada introduced the world’s first mandatory Directive on Automated Decision‑Making (DADM) (2019) and publishes Algorithmic Impact Assessments (AIAs); the USA relies on executive orders and agency memos with limited enforceability.
  • Novel framing – Introduces the term “Automated Recommendation‑Making Tools (ARMTs)” to capture tools that assist rather than replace decisions, arguing that current “automated decision‑making” language lets many tools slip through regulation.

What did the researchers find? – Key Highlights & Themes

Key themes, with findings and illustrative quotes from the report:

  • Seven risk categories – bias, transparency, controversial uses, terminology, psychological effects, data, and accountability. “These risks fall under seven categories… this list facilitates regulators to consider where the trade‑offs must lie.”
  • Bias is pervasive – historical, data, developer, proxy, automation, feedback‑loop, confirmation, computational, and quantitative bias. “Historical bias… if past decisions were biased, those biases can be embedded into the algorithm and applied to future cases.”
  • Transparency gaps – Companies hide trade secrets, and records under the UK’s Algorithmic Transparency Recording Standard (ATRS) are voluntary and rarely published (only nine, none from the Home Office). “Only nine records across all government departments had been published and none by the Home Office.”
  • Controversial “automated suspicion” – Tools that flag applications for extra scrutiny (e.g., US ICM, UK GPS‑tagging). “These tools do not make final decisions but generate suspicion… leading to increased scrutiny or deprioritisation.”
  • Terminology matters – Using “human‑in‑the‑loop” lets minimal human checks bypass regulation; the report recommends shifting to “chain of decision‑making”. “The lack of clarity over definitions risks allowing companies and government departments to describe their ARMTs in terms that avoid regulation.”
  • Psychological impact – Migrants feel “indignity” when a computer influences life‑altering decisions. “There is a unique sense of injustice or indignity when an administrative decision … relies … on an algorithmic output.”
  • Data stewardship failures – No systematic collection of protected‑characteristic data, hampering bias monitoring. “There is a lack of public information about the collection, storage and use of data by algorithmic systems, especially when data is provided by a commercial entity. This leaves numerous important questions unanswered.”
  • Accountability erosion – Algorithms are often “black boxes”, making it impossible to trace why a decision was made. “People will never know why the computer says ‘yes’ or ‘no’.”
  • Case‑study insights:
      • UK GPS‑tagging – An ARMT decides whether a person should wear an ankle tag, yet individuals are never notified of its use.
      • Canada ITAT – A risk‑assessment tool used by a “Risk Assessment Unit”; the tool is claimed to be “non‑black‑box” but lacks public AIAs, GBA+ analysis, or peer‑review disclosures.
      • US ICM (Palantir) – Used for investigative case management; controversial for its role in immigration enforcement.
      • US GeoMatch – A predictive placement tool for refugees, praised for efficiency but criticised for its lack of explainability and for treating refugees as “economic objects”.
  • Outlier finding – Some tools (e.g., GeoMatch) are opt‑in for skilled migrants in Canada, showing a different governance model where consent and user control are built in. “This tool demonstrates the potential of ARMTs to improve migrants’ lives but re‑emphasises the need to proactively regulate ARMTs to safeguard against possible harms.”

Particularly Interesting Themes & Outliers

  • Ban on automated refusals (Canada) – IRCC refuses to automate negative decisions, only automates positive ones.
  • Mandatory AIAs vs. Voluntary ATRS – Canada’s mandatory, publicly posted AIAs provide a concrete accountability loop; the UK’s ATRS is still voluntary and poorly enforced.
  • Dual‑use exemption loophole (US) – Executive Order permits agencies to waive AI safeguards for national‑security reasons, potentially covering immigration tools.
  • Human‑rights framing of ‘automated suspicion’ – The report treats suspicion‑generation as a distinct ethical category, not just a technical feature.
  • GeoMatch’s “economic object” critique – Highlights a value‑alignment problem: tools optimise for host‑country labour markets rather than refugee wellbeing.
  • Feedback‑loop bias – Demonstrated with GPS‑tagging: once a person is tagged, data feeds back into the algorithm, reinforcing future tagging (a toy simulation of this dynamic follows this list).
  • Lack of protected‑characteristic data – Both Canada and the UK collect no race/ethnicity data in immigration AI pipelines, making bias audits impossible.
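To make the feedback‑loop mechanism concrete, here is a minimal, purely illustrative simulation in Python. It does not model any real immigration tool: it assumes a hypothetical two‑group population and a score that is nudged by its own tagging decisions, which is enough to show how a small initial disparity compounds over successive rounds.

```python
# Toy simulation of feedback-loop bias; all groups and numbers are hypothetical.
# Cases tagged in one round feed back into the score used in the next round,
# so a small initial disparity between groups compounds over time.
import random

random.seed(0)

prior = {"A": 0.50, "B": 0.55}  # group B starts with a slightly higher historical score
LEARNING_RATE = 0.05            # how strongly tagging outcomes feed back into the score

for round_no in range(1, 6):
    # Score 100 hypothetical cases per group: the group prior plus individual noise.
    cases = [(group, prior[group] + random.uniform(-0.1, 0.1))
             for group in prior for _ in range(100)]
    # "Automated suspicion": tag the 50 highest-scoring cases for extra scrutiny.
    tagged = sorted(cases, key=lambda case: case[1], reverse=True)[:50]
    # Feedback step: each group's share of the tags nudges its prior up or down.
    for group in prior:
        share = sum(1 for g, _ in tagged if g == group) / len(tagged)
        prior[group] += LEARNING_RATE * (share - 0.5)
    print(f"round {round_no}: prior A={prior['A']:.3f}, prior B={prior['B']:.3f}")
```

Even though group B’s initial disadvantage is small, each round of tagging widens the gap, which is the compounding dynamic the report describes for GPS‑tagging.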

How can you use this research?

Practical take‑aways by audience:
Policymakers (UK Government, Home Office)
  • Enact a binding ban on black‑box algorithms and on automated refusals (mirroring Canada).
  • Adopt the “chain of decision‑making” terminology and require disclosure at every stage.
  • Make AIAs and ATRS records mandatory, publicly searchable, and updated on a set schedule (e.g., every 6 months).
  • Require collection of protected‑characteristic data for bias monitoring, with strict data‑protection safeguards.
  • Create an independent oversight body (audit, peer review, GBA+ assessment) with enforcement powers (e.g., funding withdrawal).
Legal practitioners & NGOs
  • Use the report’s checklist of disclosure failures (e.g., lack of notice of ARMT use) to frame FOI/SAR requests and judicial‑review arguments.
  • Advocate for statutory duties under the Equality Act 2010 to consider algorithmic bias.
  • Educate clients that they have a right to ask whether an ARMT was involved and to request an explanation.
Immigration officials & public‑sector managers
  • Implement training programmes on algorithmic bias, explainability, and the “chain of decision‑making”.
  • Introduce internal audit logs that record when an ARMT is consulted and the human decision‑maker’s final action (a sketch of one such log entry follows this list).
  • Pilot transparent, explainable models (e.g., rule‑based scoring) for high‑risk decisions.
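A minimal sketch of what one such audit‑log entry could look like, assuming hypothetical field names, tool name, and values (nothing here is drawn from an actual government system):

```python
# Sketch of an append-only audit log for ARMT consultations.
# The tool name, fields, and values are illustrative assumptions only.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ARMTConsultation:
    case_id: str          # pseudonymised case reference
    tool_name: str        # which ARMT was consulted
    tool_version: str     # model or ruleset version, so audits can match records to tools
    recommendation: str   # what the tool suggested
    human_decision: str   # what the official actually decided
    override: bool        # True when the human departed from the recommendation
    decided_by: str       # role (not name) of the human decision-maker
    timestamp: str        # UTC time of the decision

entry = ARMTConsultation(
    case_id="CASE-0001",
    tool_name="example-risk-triage",
    tool_version="2024.09",
    recommendation="refer for additional scrutiny",
    human_decision="no additional scrutiny required",
    override=True,
    decided_by="senior caseworker",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# One JSON line per consultation keeps the chain of decision-making reviewable.
print(json.dumps(asdict(entry)))
```

Recording the override explicitly also makes it possible to measure how often humans actually depart from the tool’s recommendation, a central question in the “human‑in‑the‑loop” debate.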
Technology developers & vendors
  • Design tools to be explainable by design (avoid opaque ML models for public‑sector use).
  • Provide documentation packages (model cards, data sheets) that satisfy future mandatory AIAs (an illustrative model card follows this list).
  • Include human‑rights impact assessments early in the development lifecycle.
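As one illustration of what such a documentation package might contain, here is a minimal model‑card sketch. The fields are assumptions loosely modelled on published model‑card templates, not a schema taken from the report or any regulator:

```python
# Minimal sketch of a model card for an ARMT; every field and value is
# illustrative, not drawn from the report or from any deployed tool.
model_card = {
    "name": "example-triage-model",
    "version": "2024.09",
    "intended_use": "flag visa applications for additional human review",
    "out_of_scope_uses": ["making or refusing final decisions"],
    "training_data": "historical casework outcomes (described, not shared)",
    "known_limitations": [
        "past decisions may embed historical bias",
        "no protected-characteristic data available for bias testing",
    ],
    "human_oversight": "every flag is reviewed by a caseworker; overrides are logged",
    "contact": "accountability-team@example.gov",
}

# Print the card in a human-readable form, e.g. for inclusion in an AIA submission.
for field, value in model_card.items():
    print(f"{field}: {value}")
```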
Academics & researchers
  • Build on the seven‑risk framework for comparative studies in other jurisdictions.
  • Investigate feedback‑loop dynamics in ARMTs (e.g., GPS‑tagging, risk scoring).
  • Explore opt‑in consent models like GeoMatch’s Canadian version as a pathway to ethical AI.
Future‑research agenda (as identified by the author)
  • Empirical measurement of bias outcomes once protected‑characteristic data become available.
  • Longitudinal studies on the psychological effects of algorithmic suspicion on migrants.
  • Evaluation of independent audit mechanisms and their impact on tool redesign.

What did the researchers do? – Methodology

  • Field research (summer 2024) – Conducted over 50 semi‑structured interviews with more than 25 organisations across Canada, the USA, and the UK.
  • Stakeholder mix – Government officials, civil servants, technology developers, software engineers, academics, lawyers, and migrant‑rights organisations.
  • Geographic coverage – Canada (Toronto, Ottawa, Montreal, Vancouver); USA (New York, Washington DC, San Francisco); UK (London, York).
  • Case‑study selection – Five in‑depth case studies: UK GPS‑tagging; Canada’s Integrity Trend Analysis Tool (ITAT); Canada’s privately‑sponsored refugee applications; US Investigative Case Management (ICM); US GeoMatch.
  • Document analysis – Reviewed policy documents, legislative texts, public‑sector AI guidelines, FOIA requests, and published AIAs.
  • Qualitative synthesis – Identified recurring risk categories, mapped regulatory gaps, and extracted direct quotations from interviewees (e.g., “There is a unique sense of injustice…”).
  • Limitations – The research reflects the state of affairs up to September 2024; rapid tech‑policy changes after that date are not captured.
