This thesis uncovers a process‑centric fairness crisis in Canadian immigration stemming from opaque, weakly accountable automated (or ‘algorithmic’) decision‑making systems (‘ADMs’). By articulating a three‑pillar framework (transparency, accountability, ex‑ante rule‑making) and mapping it onto existing legal structures, the work supplies a concrete roadmap for lawyers, policymakers, and scholars to demand and design a more just, auditable immigration system.
The convergence of an outdated statutory framework, a permissive soft‑law regime, rapid ADM deployment, and an overburdened judicial system creates a systemic risk that procedural fairness is being eroded in Canadian immigration. Understanding these contextual forces is essential because they explain why the fairness deficits identified in the thesis are not isolated glitches but structural vulnerabilities that affect thousands of applicants and challenge the rule of law.
Lay summary:
"This thesis contributes to recent Canadian scholarship exploring the impact of automated (algorithmic) decision‑making systems (“ADMs”) on core Canadian administrative law concepts. Combining doctrinal and law‑and‑technology methods with administrative justice theory, the thesis describes a “process problem” with the use of ADMs by Canadian immigration officials. This problem has left the judicial system struggling to review ADM decisions for procedural fairness, and has also distorted its lens on substantive review. To properly explore the process problem, three prerequisite concepts – transparency, accountability, and ex ante rule‑making – must be defined and interrogated. The thesis concludes by suggesting a shift towards an administrative justice model of “getting it right the first time” and the development of procedural protections through an amended Immigration and Refugee Protection Act and a procedural code, inviting further refinement."
Goal / Guiding Questions
| Goal | Guiding Question(s) |
|---|---|
| Diagnose the process problem created by ADMs in Canadian immigration. | How have ADMs changed the IRCC decision‑making workflow? What procedural‑fairness and reasonableness issues arise? |
| Evaluate fair‑process prerequisites. | What role do transparency, accountability and ex‑ante rule‑making play in ensuring a “fair process”? |
| Propose reforms. | How can legislative, regulatory or procedural changes restore “getting it right the first time”? |
Contextual Forces

| Contextual Point | Explanation | Why It Matters |
|---|---|---|
| Legal vacuum in the IRPA – Section 186 gives IRCC a blanket authority to use “electronic means” but contains no specific rules for AI/ADM. | Courts must interpret a statute that was drafted before modern AI existed. | Without clear statutory limits, IRCC can deploy powerful algorithms unchecked, leaving applicants without a solid legal footing to challenge decisions. |
| Directive on Automated Decision‑Making (DADM) – Canada’s only soft‑law governance tool; it is non‑binding, allows low‑impact projects to avoid detailed disclosure, and limits transparency to “plain‑language notices.” | The DADM is the de‑facto regulator for all federal ADMs, including immigration. | Its weak enforcement means agencies can sidestep meaningful oversight, making it difficult for the public or courts to know how a decision was produced. |
| Rapid ADM rollout in immigration – By mid‑2025 IRCC had published 26 Algorithmic Impact Assessments (AIAs) and operates tools such as Advanced Analytics‑TRV, Chinook, and ITAT. Most are classified as “medium impact,” escaping the DADM’s stricter requirements. | The sheer number and diversity of tools show that ADMs are now integral to everyday immigration processing. | The scale magnifies any procedural defect: thousands of applicants are affected, yet the mechanisms to monitor fairness are minimal. |
| Federal Court backlog and boiler‑plate refusals – Over 75 % of immigration cases sit unresolved; many decisions are issued with templated reasons that omit any reference to ADMs. | Judicial review is the primary external check on administrative decisions. | When courts cannot see the underlying algorithmic reasoning, they cannot assess whether a decision complies with procedural fairness, effectively rendering review a rubber‑stamp. |
| Scholarly gap – Existing literature focuses on technical bias or high‑profile AI ethics; few works connect ADM mechanics to procedural fairness and administrative‑justice theory. | This thesis applies Adler’s and Mashaw’s administrative‑justice typologies directly to IRCC’s ADM ecosystem. | It provides a novel analytical lens that bridges law and technology, offering concrete doctrinal tools for courts and policymakers to evaluate fairness. |
| Unique methodological blend – Combines doctrinal legal analysis, exhaustive policy‑document coding, and a narrative case study (“Xi & Wang”) to illustrate real‑world impacts. | The case study grounds abstract legal concepts in a lived‑experience scenario. | Demonstrates how abstract procedural deficiencies translate into tangible hardships for immigrants, making the problem palpable for non‑specialists. |
| Policy relevance – The thesis coincides with ongoing parliamentary reviews (e.g., CBA’s 2025 submission, Treasury Board’s modernization agenda). | Legislators are already debating AI governance reforms. | The research supplies timely, evidence‑based recommendations that can be directly incorporated into upcoming amendments to the DADM or new ADM‑specific statutes. |
The thesis closes with calls to action tailored to several audiences:
| Audience | Practical Take‑aways & Actions |
|---|---|
| Immigration lawyers & litigants | Request specific ADM disclosures (e.g., model version, risk‑score thresholds) in ATIP or judicial motions. Frame procedural‑fairness arguments around the three‑pronged test (transparency, accountability, ex‑ante rules). |
| Policy‑makers / Treasury Board | Amend the DADM to make “medium‑impact” projects subject to full AIAs (including source‑code summaries). Introduce a statutory ADM‑Transparency Act requiring public registries of all IRCC‑used models. |
| IRCC administrators | Adopt audit‑log retention (minimum 12 months) for all ADM‑generated notes. Publish model cards (dataset, training method, performance metrics) on the IRCC transparency portal. |
| Academic researchers | Extend the “process‑first” framework to other federal agencies (e.g., Canada Revenue Agency). Conduct empirical studies on the impact of ADM‑deletion practices on appellate success rates. |
| Civil‑society NGOs | Use the thesis’s checklist (transparency/accountability/ex‑ante) to evaluate new AIAs and launch public‑interest litigation where gaps appear. Advocate for an Independent ADM Ombudsperson (see CBA recommendation). |
| Future‑research agenda (as identified by the author) | • Comparative analysis of ADM procedural fairness across Commonwealth jurisdictions. • Empirical measurement of “automation bias” in IRCC officers using controlled experiments. • Development of a legal‑tech prototype that automatically extracts ADM‑related metadata from immigration files for judicial review. |
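
The legal‑tech prototype idea in the research agenda above can be sketched in miniature. The following is a minimal, hypothetical illustration, not part of the thesis: it scans the text of a decision letter for mentions of the IRCC ADM tools named in this summary and returns the surrounding context for a reviewer. The tool list, function name, and context‑window size are all my own assumptions.

```python
import re

# Hypothetical marker list drawn from tools named in this summary;
# a real prototype would maintain a vetted registry of IRCC models.
ADM_MARKERS = ["Chinook", "Advanced Analytics", "ITAT",
               "Algorithmic Impact Assessment"]

def extract_adm_metadata(decision_text: str) -> dict:
    """Return known ADM tools mentioned in a decision letter,
    each with a short context snippet around its first mention."""
    hits = {}
    for marker in ADM_MARKERS:
        matches = [m.start() for m in
                   re.finditer(re.escape(marker), decision_text, re.IGNORECASE)]
        if matches:
            # Keep ~40 characters on each side of the first mention.
            start = max(0, matches[0] - 40)
            end = matches[0] + len(marker) + 40
            hits[marker] = decision_text[start:end].strip()
    return hits

sample = "The application was triaged by Chinook prior to officer review."
print(extract_adm_metadata(sample))
```

A production version would need to parse PDFs and ATIP release packages rather than plain strings, but even this keyword pass shows how ADM references could be surfaced systematically for judicial review.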