The agreement seeks to create a shared vocabulary for digital‑literacy‑related concepts, provide implementation guidance for organizations across sectors, and propose a standardized set of indicators for measuring progress and barriers.
The National Workshop Agreement offers a comprehensive, actionable blueprint for advancing digital literacy across Canada. By unifying language, outlining clear implementation steps, and supplying a ready‑made measurement toolkit, it equips policymakers, educators, community groups and private firms with the means to coordinate efforts, monitor progress and ensure that digital‑skill development is equitable, inclusive and future‑ready, particularly as AI becomes an integral part of everyday digital interaction.
Implicit guiding questions include:
The NWA was authored by SETI (Social Economy Through Social Inclusion) on behalf of the Standards Council of Canada (SCC), in partnership with the Digital Governance Council (DGC). It entered into force 24 September 2025 and will be reviewed after three years. This document consolidates terminology, implementation methodology, sector‑specific pathways, and a unified measurement framework in a single, nationally‑endorsed agreement. It also supplies sample survey instruments (Likert‑scale) that can be deployed across jurisdictions for benchmarking.
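As a rough illustration of how such Likert‑scale instruments could support cross‑jurisdiction benchmarking, here is a minimal sketch in Python. The question IDs, the 5‑point scale, and aggregation by simple mean are illustrative assumptions; the NWA's own instruments define the actual items and scoring.

```python
# Minimal sketch: aggregating 5-point Likert responses (1 = strongly disagree,
# 5 = strongly agree) into per-question benchmark scores. Question IDs and
# the scoring rule are assumptions for illustration only.
from statistics import mean

responses = [
    {"confidence_online_safety": 4, "can_spot_misinformation": 3},
    {"confidence_online_safety": 5, "can_spot_misinformation": 2},
    {"confidence_online_safety": 3, "can_spot_misinformation": 4},
]

def benchmark(responses: list[dict]) -> dict:
    """Mean rating per question, comparable across jurisdictions."""
    return {
        question: round(mean(r[question] for r in responses), 2)
        for question in responses[0]
    }

print(benchmark(responses))
# {'confidence_online_safety': 4, 'can_spot_misinformation': 3}
```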
This is not a formal Canadian Standard but a code of practice that can evolve into a National Standard after expert review. The National Workshop Agreement provides a holistic, sector‑spanning blueprint for defining, implementing, and measuring digital‑literacy initiatives in Canada. Its unique blend of terminology, methodological guidance, and ready‑made measurement tools makes it a practical reference for anyone tasked with closing digital‑skill gaps. By adopting its common language, implementation steps, and indicator set, stakeholders can coordinate more effectively, track progress, and ensure that digital‑literacy programs are equitable, inclusive, and future‑ready (including AI literacy).
Why it matters
Canada’s rapid digital transformation is widening gaps in who can access, understand and benefit from new technologies. The document notes that inequities in digital literacy can lead to exclusion, reduced economic opportunity and erosion of trust in digital systems. The NWA is positioned as a response to these systemic challenges, seeking to ensure that youth, elders, newcomers, Indigenous peoples and other historically marginalized groups are not left behind.
Although the document itself is not a qualitative study, it embeds stakeholder perspectives, stating that co‑design “ensures cultural relevance and respects lived experiences,” thereby reflecting the voices of the communities it intends to serve.
Shared terminology – The agreement defines 24 core concepts, each with a simplified and a standard definition. For example, Digital Literacy is described as “the ability to find, understand, create, and share information using digital tools and technologies safely and responsibly.” Similar paired definitions are provided for Digital Fluency, AI Literacy, Digital Trust and many others, establishing a common language for all stakeholders.
Implementation methodology – A systematic approach is laid out, beginning with purpose definition and audience identification, moving through co‑design with communities, selection of delivery models, accessibility considerations, train‑the‑trainer (TOT) models, provision of tools and infrastructure, measurement and adaptation, ethics and safety, and finally ongoing learning. The document stresses that co‑design “ensures cultural relevance and respects lived experiences,” highlighting the importance of community input throughout the process.
Sector‑specific pathways – Separate recommendations are given for the public sector (integrating digital literacy into curricula, workforce development and lifelong learning), the nonprofit sector (leveraging trusted community relationships and trauma‑informed pedagogy), the private sector (embedding literacy in onboarding, CSR programmes and responsible product design), and cross‑sector collaboration (collective‑impact models, shared resources and data sharing while respecting privacy).
Measurement framework – Six families of indicators are proposed: Access & Infrastructure; Skills & Confidence; Inclusion & Equity; Program Reach & Engagement; Capacity & Ecosystem Support; and Outcomes & Impact. Each family contains concrete metrics—for instance, the percentage of the target population with reliable internet, or the number of trained facilitators in a community. (A minimal sketch of this indicator set as a data structure appears after the next item.)
Best‑practice patterns – The agreement repeatedly cites community‑led partnerships, embedding digital‑skill training within existing services, pairing device access with training, hybrid and mobile delivery models, and open‑access repositories as proven strategies.
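To make the measurement framework above concrete, the sketch below expresses the six indicator families as a small data structure that a reporting dashboard might consume. The family names come from the agreement; the metric names and values are hypothetical.

```python
# Illustrative only: the NWA's six indicator families as a nested dict.
# Metric keys and values are invented placeholders, not NWA-defined measures.
indicator_families = {
    "Access & Infrastructure": {"pct_with_reliable_internet": 82.5},
    "Skills & Confidence": {"pct_reporting_basic_digital_skills": 64.0},
    "Inclusion & Equity": {"pct_programs_with_accessibility_plan": 55.0},
    "Program Reach & Engagement": {"participants_per_1000_residents": 14.2},
    "Capacity & Ecosystem Support": {"trained_facilitators_in_community": 37},
    "Outcomes & Impact": {"pct_participants_reporting_new_skills": 71.0},
}

# Flatten into dashboard rows: (family, metric, value).
for family, metrics in indicator_families.items():
    for metric, value in metrics.items():
        print(f"{family:<30} {metric:<42} {value}")
```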
The framework places AI Literacy and Personal Agents / AI Agency alongside traditional digital‑skill concepts, signalling a forward‑looking view of the skills needed in a world increasingly mediated by artificial intelligence. The inclusion of AI‑specific literacy suggests that understanding of machine learning, algorithmic bias and responsible AI use should be considered on an equal footing with basic device operation. The emphasis on cross‑sector collaboration, which calls for collective‑impact models bringing together governments, nonprofits and private firms, underscores that no single entity can close the digital divide alone.
Policymakers and government departments can adopt the shared terminology in legislation and program documentation, embed the indicator set into national reporting dashboards, and allocate funding for co‑design pilots that follow the NWA methodology.
Educators and school administrators should weave the digital‑literacy definitions into curriculum maps, use the sample survey to assess baseline student confidence and track progress year over year, and implement TOT modules for teachers based on the recommended training‑of‑trainers model.
Non‑profit and community organizations can tailor programs for marginalized groups by following the sector‑specific guidance, partner with libraries or municipal bodies to create hybrid or mobile delivery hubs, and contribute data to the open‑access repository to strengthen the national evidence base.
Private‑sector firms and tech companies are encouraged to embed digital‑literacy components into employee onboarding and corporate‑social‑responsibility initiatives, sponsor device‑access‑plus‑training bundles for underserved neighbourhoods, and align product roadmaps with the AI‑literacy standards to ensure responsible design.
Researchers and academics may use the indicator taxonomy as a framework for comparative studies across provinces or countries, conduct longitudinal evaluations of programs that adopt the NWA, and explore outlier findings such as perceptions of AI agency in greater depth.

This thesis uncovers a process‑centric fairness crisis in Canadian immigration stemming from opaque, poorly accountable automated (often also termed ‘algorithmic’) decision‑making systems (‘ADMs’). By articulating a three‑pillar framework (transparency, accountability, ex‑ante rule‑making) and mapping it onto existing legal structures, the work supplies a concrete roadmap for lawyers, policymakers, and scholars to demand and design a more just, auditable immigration system.
The convergence of an outdated statutory framework, a permissive soft‑law regime, rapid ADM deployment, and an overburdened judicial system creates a systemic risk that procedural fairness is being eroded in Canadian immigration. Understanding these contextual forces is essential because they explain why the fairness deficits identified in the thesis are not isolated glitches but structural vulnerabilities that affect thousands of applicants and challenge the rule of law.
Lay summary:
"This thesis contributes to recent Canadian scholarship that has explored the impact of automated (algorithmic) decision making systems (“ADMs”) on core Canadian administrative law concepts. Combining doctrinal, law and technology methods with administrative justice theory, this thesis describes a “process problem” with the use of ADMs by Canadian immigration officials. The impact of this problem has led to the judicial system’s struggle to review ADM decisions for procedural fairness, in a way that has impacted also its lens on substantive review. However, to better explore the process problem – three pre-requisite concepts – transparency, accountability, and ex ante rulemaking must be defined and interrogated. This paper concludes by suggesting a shift towards an administrative justice model of “getting it right the first time” and the development of procedural protections through an amended Immigration and Refugee Protection Act and procedural code, inviting refinement."
Goal / Guiding Questions
| Goal | Guiding Question(s) |
|---|---|
| Diagnose the process problem created by ADMs in Canadian immigration. | How have ADMs changed the IRCC decision‑making workflow? What procedural‑fairness and reasonableness issues arise? |
| Evaluate fair‑process prerequisites. | What role do transparency, accountability and ex‑ante rule‑making play in ensuring a “fair process”? |
| Propose reforms. | How can legislative, regulatory or procedural changes restore “getting it right the first time”? |
| Contextual Point | Explanation | Why It Matters |
|---|---|---|
| Legal vacuum in the IRPA – Section 186 gives IRCC a blanket authority to use “electronic means” but contains no specific rules for AI/ADM. | Courts must interpret a statute that was drafted before modern AI existed. | Without clear statutory limits, IRCC can deploy powerful algorithms unchecked, leaving applicants without a solid legal footing to challenge decisions. |
| Directive on Automated Decision‑Making (DADM) – Canada’s only soft‑law governance tool; it is non‑binding, allows low‑impact projects to avoid detailed disclosure, and limits transparency to “plain‑language notices.” | The DADM is the de‑facto regulator for all federal ADMs, including immigration. | Its weak enforcement means agencies can sidestep meaningful oversight, making it difficult for the public or courts to know how a decision was produced. |
| Rapid ADM rollout in immigration – By mid‑2025 IRCC had published 26 Algorithmic Impact Assessments (AIAs) and operates tools such as Advanced Analytics‑TRV, Chinook, and ITAT. Most are classified as “medium impact,” escaping the DADM’s stricter requirements. | The sheer number and diversity of tools show that ADMs are now integral to everyday immigration processing. | The scale magnifies any procedural defect: thousands of applicants are affected, yet the mechanisms to monitor fairness are minimal. |
| Federal Court backlog and boiler‑plate refusals – Over 75 % of immigration cases sit unresolved; many decisions are issued with templated reasons that omit any reference to ADMs. | Judicial review is the primary external check on administrative decisions. | When courts cannot see the underlying algorithmic reasoning, they cannot assess whether a decision complies with procedural fairness, effectively rendering review a rubber‑stamp. |
| Scholarly gap – Existing literature focuses on technical bias or high‑profile AI ethics; few works connect ADM mechanics to procedural fairness and administrative‑justice theory. | This thesis applies Adler’s and Mashaw’s administrative‑justice typologies directly to IRCC’s ADM ecosystem. | It provides a novel analytical lens that bridges law and technology, offering concrete doctrinal tools for courts and policymakers to evaluate fairness. |
| Unique methodological blend – Combines doctrinal legal analysis, exhaustive policy‑document coding, and a narrative case study (“Xi & Wang”) to illustrate real‑world impacts. | The case study grounds abstract legal concepts in a lived‑experience scenario. | Demonstrates how abstract procedural deficiencies translate into tangible hardships for immigrants, making the problem palpable for non‑specialists. |
| Policy relevance – The thesis coincides with ongoing parliamentary reviews (e.g., CBA’s 2025 submission, Treasury Board’s modernization agenda). | Legislators are already debating AI governance reforms. | The research supplies timely, evidence‑based recommendations that can be directly incorporated into upcoming amendments to the DADM or new ADM‑specific statutes. |
The thesis makes three calls to action, translated below into practical take‑aways by audience:
| Audience | Practical Take‑aways & Actions |
|---|---|
| Immigration lawyers & litigants | Request specific ADM disclosures (e.g., model version, risk‑score thresholds) in ATIP or judicial motions. Frame procedural‑fairness arguments around the three‑pronged test (transparency, accountability, ex‑ante rules). |
| Policy‑makers / Treasury Board | Amend the DADM to make “medium‑impact” projects subject to full AIAs (including source‑code summaries). Introduce a statutory ADM‑Transparency Act requiring public registries of all IRCC‑used models. |
| IRCC administrators | Adopt audit‑log retention (minimum 12 months) for all ADM‑generated notes. Publish model cards (dataset, training method, performance metrics) on the IRCC transparency portal. (A minimal model‑card sketch follows this table.) |
| Academic researchers | Extend the “process‑first” framework to other federal agencies (e.g., Canada Revenue Agency). Conduct empirical studies on the impact of ADM‑deletion practices on appellate success rates. |
| Civil‑society NGOs | Use the thesis’s checklist (transparency/accountability/ex‑ante) to evaluate new AIAs and launch public‑interest litigation where gaps appear. Advocate for an Independent ADM Ombudsperson (see CBA recommendation). |
| Future‑research agenda (as identified by the author) | • Comparative analysis of ADM procedural fairness across Commonwealth jurisdictions. • Empirical measurement of “automation bias” in IRCC officers using controlled experiments. • Development of a legal‑tech prototype that automatically extracts ADM‑related metadata from immigration files for judicial review. |
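As a companion to the IRCC‑administrators row above, here is a minimal, hypothetical sketch of what a published model card could look like as structured data. The field names, the example tool label, and the metric values are illustrative assumptions, not an official IRCC schema.

```python
# Hypothetical model card serialized to JSON for a transparency portal.
# All field names and values are illustrative assumptions.
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelCard:
    name: str
    version: str
    training_data: str          # plain-language description of the dataset
    training_method: str
    performance_metrics: dict   # e.g., accuracy or error rates by subgroup
    intended_use: str
    audit_log_retention_days: int

card = ModelCard(
    name="Advanced Analytics-TRV (illustrative)",
    version="2025.1",
    training_data="Historical temporary-resident-visa applications (description only)",
    training_method="Gradient-boosted decision trees",
    performance_metrics={"overall_accuracy": 0.91, "false_positive_rate": 0.04},
    intended_use="Triage recommendations only; final decision by an officer",
    audit_log_retention_days=365,  # the thesis recommends a 12-month minimum
)

print(json.dumps(asdict(card), indent=2))
```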

This dissertation uncovers a dual accountability regime in Canadian settlement services, where relational, empathy‑driven practices coexist, and often clash, with rigid, number‑focused reporting demands. The work exposes a temporal bias that privileges speed and quantification, shaping data cultures, technology choices, and ultimately the quality of support offered to newcomers. By foregrounding workers’ lived experiences, the study offers concrete pathways for policy reform, organizational redesign, and technology development that could make settlement services more humane, effective, and truly accountable.
Abstract (quoted)
“Immigration and settlement policies, much like organizational policies or digital technology policies, are built from the coming together of discourses, tools, and people—people standing behind service desks, using databases and making decisions. These interdependent systems, beliefs, and tools affect how decisions are shaped as well as the information and data cultures of an organization (or a whole sector). Digital technologies and commitments to evidence‑based, accountable service delivery are critical to the capacity of settlement organizations to manage data flow. The dissertation addresses these questions through a multi‑sited study with the settlement organizations that serve refugees and immigrants in Canada, a country often recognized as a global leader in immigration and settlement policies.”
Goal & Guiding Questions
The dissertation aims to uncover how information practices shape and mediate accountability mechanisms in Canadian immigrant‑settlement services. It is driven by three research questions:
| Point | Explanation |
|---|---|
| Policy pressure | Most settlement organizations are funded by the federal government (IRCC) and must report detailed service data to demonstrate “accountability” for public money. |
| Digital transformation | Recent years have seen a rapid shift to digital service delivery (e‑mail, Zoom, mobile apps) and the introduction of mandated databases such as iCARE. |
| Knowledge gap | While much research has examined immigrants’ information‑seeking behaviour, little has looked at the information practices of the frontline workers who collect, organize, and report that data. |
| Temporal bias | The author introduces the concept of temporal bias – a cultural tilt that privileges speed, clock‑time, and easily quantifiable data over slower, relational, narrative work. |
| Unique approach | The study blends practice theory, social theories of time, and critical data studies to analyze both the sociotechnical and temporal dimensions of accountability. It combines a large‑scale qualitative interview program (32 workers, 15 clients) with document analysis of 30+ sector reports, policy papers, and artefacts. |
These points locate the work at the intersection of information science, migration studies, and organizational sociology, offering a fresh lens on how “accountability” is lived and negotiated on the ground.
| Theme | Why it matters |
|---|---|
| Empathy as formal accountability | Shows that caring practices can be framed as “accountable” actions, challenging the purely numeric view of performance. |
| Temporal bias (Chronos vs. Kairos) | Highlights a structural tension: clock‑time reporting vs. the “right moment” needed for trust‑building with clients. |
| Data drift | Managers adjust data collection to anticipated funder expectations, even without explicit directives – a subtle form of governance. |
| Gatekeeping & time‑keeping | Workers act as gatekeepers of information and of time, deciding which client needs are urgent enough to be recorded. |
| Technology‑induced inequities | Digital tools (e.g., Zoom, WhatsApp) improve reach but also create barriers for clients lacking devices or digital literacy. |
| Non‑use of data | Despite massive data collection, organisations rarely analyse the data for internal improvement; it remains a reporting artefact. |
| Component | Details |
|---|---|
| Design | Qualitative, exploratory case study using practice theory and social theories of time. |
| Data sources | Semi‑structured interviews – 32 settlement workers (a mix of front‑line staff, managers, coordinators) and 15 immigrant clients. Document & artefact analysis – 30+ sector reports, policy papers, conference proceedings, internal manuals, consent forms, database screenshots. |
| Sampling | Purposive + snowball sampling across Ontario, Alberta, Manitoba, Saskatchewan, British Columbia; aimed for diversity in gender, age, ethnicity, tenure, and organizational size. |
| Demographics (workers) | 35 % male, 65 % female; ages 25‑64; experience 1‑31 years; roles included settlement workers, managers, digital navigators, and administrative assistants. |
| Demographics (clients) | 12 % male, 88 % female; ages 18‑35; mix of refugees, permanent residents, recent arrivals (2019‑2022). |
| Analysis | Reflexive Thematic Analysis (Braun & Clarke, 2022) – six‑step process: familiarization, coding (NVivo), theme generation, review, definition, write‑up. Theoretical lens (practice theory, temporality) guided coding of concepts such as “accountability,” “data culture,” “temporal bias.” |
| Ethics | Approved by University of Toronto REB; informed consent obtained; pseudonyms used; data stored securely. |
| Limitations noted | Pandemic‑forced shift to remote interviews; non‑probability sampling limits generalizability; urban‑centric sample (few rural participants). |

The study investigates whether artificial‑intelligence (AI) tools can help Canadian settlement agencies deliver higher‑quality employment services to refugee clients, moving them from “survival” jobs toward work that matches their prior qualifications.
This study demonstrates that AI holds promise for improving the efficiency and quality of refugee employment services in Canada, but successful adoption hinges on addressing knowledge gaps, data‑privacy safeguards, bias mitigation, and aligning funding incentives with quality employment outcomes.
It asks:
Abstract:
“Canada increasingly welcomes new refugees, but after arrival they face numerous personal and systemic barriers to securing employment within their fields of experience, causing them to take on low‑paying jobs. … This pilot study … thirteen interviews with counsellors, managers, and I.T. experts … showed that they had an optimistic perception of AI’s ability to support their work but limited knowledge about AI and concerns such as data privacy. … AI tools – such as mock interviews, resume building, customer‑support chatbots, and customized job search – may be potential solutions to increase efficiency and quality of services, which can free up counsellors to spend more quality time for tailored support … Accessible trainings … securing funding, testing for algorithmic bias, and protecting sensitive client data are essential.”
| Why the study matters | Key background points |
|---|---|
| Persistent mismatch – Refugees in Canada earn < ½ of the average Canadian income in the first year and are often over‑qualified for the jobs they obtain (e.g., 80 % of Winnipeg refugees work in unrelated sales/services after three years). | Over‑qualification rates: 70 % of employed refugees are dissatisfied with their occupation; 60 % feel over‑qualified (Lamba 2003). |
| Systemic pressures on settlement agencies – Funding models reward number of placements rather than quality, caseloads are high, and staff lack formal training for employment counselling. | Counselors report large caseloads, limited time for one‑on‑one support, and reliance on “survival‑job” placements (Kosny et al., 2020). |
| AI already reshaping HR – Recruiters use AI for screening, interview automation, and job matching, yet bias and privacy concerns remain. | AI can exacerbate language bias (e.g., the HireVue and Amazon resume‑screening failures). |
| Gap in literature – Little research exists on AI adoption within refugee‑focused settlement services; most work focuses on private‑sector recruitment. This pilot fills that niche with qualitative insight from frontline staff. | First Canadian‑focused qualitative study on AI + refugee employment services. |
| Theme | Findings | Representative Quote |
|---|---|---|
| Perceptions of AI – Mixed but leaning positive. | Majority view AI as “helpful, time‑saving”. Knowledge is limited; most know only ChatGPT. | “I just asked ChatGPT to do proofreading … it saved me a lot of time.” (Manager 2) |
| Positive expectations – Efficiency, quality, and scalability. | AI could automate routine tasks, freeing counsellors for personalized support. Mock‑interview tools seen as a way to increase client self‑practice. | “AI can improve our ways of doing things more effectively and efficiently.” (Manager 6) |
| Negative concerns / limitations – Data privacy, bias, authenticity, and loss of human touch. | Fear of client data exposure on cloud platforms. Concern that AI‑generated resumes sound “robotic”. Skepticism about AI understanding accents or cultural nuances. | “Clients have escaped unsafe areas; they want privacy. Weak AI privacy could threaten them.” (Counsellor 5) |
| Barriers to adoption (counsellors) – Time to learn, organisational culture, funding. | Training must be embedded in work schedules. Resistance from less tech‑savvy staff. Need for clear ROI to justify budget reallocation. | “If the tool is too difficult to learn, we risk frustration and reduced motivation.” (Counsellor 5) |
| Barriers to adoption (clients) – Digital literacy, language, device access. | Younger refugees adapt faster; seniors may need extra support. Multilingual interfaces essential. Access via agency computer labs, libraries, or shared devices. | “80 % own a mobile phone, but computers are scarce; we need a lab or loaner laptops.” (Researcher note) |
| Current AI tools in use – Predominantly ChatGPT (4/5 agencies) and Microsoft Co‑Pilot for note‑taking. | Used for grammar checks, keyword extraction, mock‑interview question generation. | Many mentioned that their organization does not “officially” use or formally “encourage” these tools; whether to use them is left to the individual. |
| Tools in development / testing – Resume‑keyword matchers, website chatbots, VR soft‑skill simulators (Bodyswaps). | Pilot phases focus on usability and bias testing. | Two out of the five organizations were in the development or testing phase of adopting new AI tools. Both had mentioned that these programs were not funded by the government. |
| Wishlist – Mock‑interview platforms, resume builders, customized job‑search engines, multilingual chatbots, admin‑automation. | Ranked by participant interest (Figure 4). | Tools to help write cover letters and to support language learning (e.g., English classes) were also mentioned as potentially beneficial. |
| Bias concerns – Potential profiling of refugees, racial/ gender bias, political bias (e.g., preferential treatment of Ukrainian refugees). | Calls for algorithmic bias audits before deployment. | “Algorithms may favor ‘rich, white people’; we must guard against that.” (Participant) |
| Policy recommendations – Formal AI‑use guidelines, data‑privacy safeguards, funding for training, mandatory outcome metrics (quality of employment, not just placement counts). | All participants reported that their organizations had no formal policies on AI usage, while data‑privacy risks and lack of transparency remained barriers to adoption. More funding should be allocated so settlement agencies can collect and report data such as client education and qualifications, intended and obtained employment type, wage, and satisfaction. | Even with the efficiency gains AI may bring, refugees are unlikely to obtain employment matching their qualifications unless outcomes are measured first: funders should mandate collection of this additional data and set targets for qualification‑matched employment, reducing the deskilling of refugees. |
Outlier findings
| Audience | Practical Actions |
|---|---|
| Settlement‑agency leaders / managers | Conduct a needs‑assessment to map which AI tools (mock‑interview, resume‑builder) would yield the greatest time‑savings. Secure pilot funding earmarked for AI training and bias‑audit services. Draft AI‑use policies covering data minimisation, client consent, and vendor vetting. |
| Front‑line counsellors | Participate in short, on‑site AI workshops (e.g., “ChatGPT for résumé polishing”). Adopt a dual‑review workflow: AI‑generated output reviewed by a human before sharing with clients. Share client feedback on AI tools to inform continuous improvement. |
| IT / data‑security teams | Implement privacy‑by‑design: anonymize client data before any AI API call; use on‑premise or encrypted cloud solutions. Run bias‑testing scripts on any recruitment‑related AI model before rollout. (A minimal redaction sketch follows this table.) |
| Policymakers / funders (e.g., Immigration, Refugees and Citizenship Canada) | Revise settlement‑program metrics to include quality‑of‑employment indicators (alignment with qualifications, wage levels). Allocate grant streams specifically for AI‑adoption pilots that include evaluation components. |
| Researchers / academia | Build on this pilot with a larger mixed‑methods study (survey > 100 agencies, longitudinal outcome tracking). Explore comparative bias analyses of AI tools across refugee sub‑populations. |
| Technology vendors | Design multilingual, low‑bandwidth AI modules tailored to settlement‑agency workflows. Provide transparent model documentation and easy‑to‑use bias‑mitigation dashboards for non‑technical staff. |
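The privacy‑by‑design item in the IT/data‑security row above suggests anonymizing client data before any AI API call. Below is a minimal sketch of that step: the redaction patterns cover only obvious identifiers (emails, phone numbers, SIN‑like digit runs), and `send_to_ai_service()` is a hypothetical placeholder for a vetted endpoint, not a real vendor API.

```python
# Minimal privacy-by-design sketch: strip obvious identifiers from client
# text before it leaves the agency. Patterns and the endpoint are assumptions;
# names and other free-text identifiers need additional handling (e.g., NER).
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{3}[\s-]?\d{3}[\s-]?\d{3}\b"), "[SIN]"),  # 9-digit SIN-like runs
]

def anonymize(text: str) -> str:
    """Replace common identifiers with placeholders before any external call."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

def send_to_ai_service(prompt: str) -> str:
    # Placeholder for a vetted, contractually bound AI endpoint.
    return f"(response to: {prompt})"

note = "Client Amina, amina@example.com, 204-555-0133, seeks a resume review."
print(send_to_ai_service(anonymize(note)))
```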
Future‑research directions noted by authors
The project combines a qualitative thematic analysis of in‑depth semi‑structured interviews with staff from five agencies across Ontario, British Columbia, and a national organization, capturing both optimism and privacy‑risk anxieties. It also maps concrete tool wish‑lists (mock‑interview platforms, multilingual chatbots, resume‑optimizers) and proposes a policy framework for AI governance in settlement contexts.
| Aspect | Detail |
|---|---|
| Design | Qualitative pilot study employing inductive thematic analysis of interview transcripts. |
| Data collection | 13 semi‑structured interviews (July 2024) with staff from five Canadian settlement agencies (three in Ontario, one in British Columbia, one national/international). |
| Participant roles | 7 program managers, 5 client‑facing counsellors, 1 IT expert (some participants held dual roles). |
| Recruitment | Purposive + snowball sampling; agencies contacted via publicly listed emails/phone numbers; consent obtained; ethics approval from McMaster University REB. |
| Interview medium | Virtual Microsoft Teams; video recorded; auto‑transcribed then manually corrected. |
| Analysis | Coding framework iteratively refined; themes identified: Perceptions of AI, Barriers & Mitigations, Applications of AI. Visualised via flow‑chart (Fig 2). |
| Supplementary data | Figures showing participant distribution (Fig 1), AI tool usage (Fig 3), and interest rankings (Fig 4). Table 1 lists example AI tools for settlement services. |
| Limitations acknowledged | Small, non‑random sample; geographic concentration in Ontario/BC; reliance on self‑reported perceptions; no direct client input. |

This report delivers a human‑rights‑centred, comparative audit of how automated recommendation‑making tools are shaping immigration decisions in three major democracies, exposing systemic bias, opacity, and accountability deficits, and offering a concrete roadmap—particularly for the UK—to embed ethical safeguards, transparency, and robust oversight into any future deployment of such tools.
Guiding aims:
The report warns that unchecked automated recommendation‑making tools in immigration threaten fairness, transparency and human rights, and urges swift, binding regulation—especially in the UK—based on lessons from Canada’s mandatory AI assessments and the USA’s fragmented oversight.
| Theme | Findings (with quotes) |
|---|---|
| Seven risk categories – bias, transparency, controversial uses, terminology, psychological effects, data, accountability. | “These risks fall under seven categories… this list facilitates regulators to consider where the trade‑offs must lie.” |
| Bias is pervasive – historical, data, developer, proxy, automation, feedback‑loop, confirmation, computational, quantitative. | “Historical bias… if past decisions were biased, those biases can be embedded into the algorithm and applied to future cases.” |
| Transparency gaps – Companies hide trade secrets; ATRS records are voluntary and rarely published (only 9 records, none from the Home Office). | “Only nine records across all government departments had been published and none by the Home Office.” |
| Controversial ‘automated suspicion’ – tools that flag applications for extra scrutiny (e.g., US ICM, UK GPS‑tagging). | “These tools do not make final decisions but generate suspicion… leading to increased scrutiny or deprioritisation.” |
| Terminology matters – Using “human‑in‑the‑loop” lets minimal human checks bypass regulation. Recommendation: shift to “chain of decision‑making”. | "The lack of clarity over definitions risks allowing companies and government departments to describe their ARMTs in terms that avoid regulation." |
| Psychological impact – Migrants feel “indignity” when a computer influences life‑altering decisions. | “There is a unique sense of injustice or indignity when an administrative decision … relies … on an algorithmic output.” |
| Data stewardship failures – No systematic collection of protected‑characteristic data, hampering bias monitoring. | "There is a lack of public information about the collection, storage and use of data by algorithmic systems, especially when data is provided by a commercial entity. This leaves numerous important questions unanswered." |
| Accountability erosion – Algorithms are often “black‑box”, making it impossible to trace why a decision was made. | “People will never know why the computer says ‘yes’ or ‘no’.” |
| Case‑Study Insights | • UK GPS‑tagging – ARMT decides whether a person should wear an ankle tag, yet individuals are never notified of its use. • Canada ITAT – Risk‑assessment tool used by a “Risk Assessment Unit”; the tool is claimed to be “non‑black‑box” but lacks public AIAs, GBA+, or peer‑review disclosures. • US ICM (Palantir) – Used for investigative case management; controversy over its role in immigration enforcement. • US GeoMatch – Predictive placement tool for refugees; praised for efficiency but criticized for lack of explainability and for treating refugees as “economic objects”. |
| Outlier finding – Some tools (e.g., GeoMatch) are opt‑in for skilled migrants in Canada, showing a different governance model where consent and user control are built‑in. | "This tool demonstrates the potential of ARMTs to improve migrants’ lives but re-emphasises the need to proactively regulate ARMTs to safeguard against possible harms." |
| Audience | Practical Take‑aways |
|---|---|
| Policymakers (UK Government, Home Office) | Enact a binding ban on black‑box algorithms and on automated refusals (mirroring Canada). Adopt the “chain of decision‑making” terminology and require disclosure at every stage. Make AIAs and ATRS records mandatory, publicly searchable, and updated on a set schedule (e.g., every 6 months). Require collection of protected‑characteristic data for bias monitoring, with strict data‑protection safeguards. Create an independent oversight body (audit, peer‑review, GBA+ assessment) with enforcement powers (funding withdrawal). |
| Legal practitioners & NGOs | Use the report’s checklist of disclosure failures (e.g., lack of notice of ARMT use) to frame FOI/SAR requests and judicial review arguments. Advocate for statutory duties under the Equality Act 2010 to consider algorithmic bias. Educate clients that they have a right to ask whether an ARMT was involved and to request an explanation. |
| Immigration officials & public‑sector managers | Implement training programmes on algorithmic bias, explainability, and the “chain of decision‑making”. Introduce internal audit logs that record when an ARMT is consulted and the human decision‑maker’s final action (a minimal sketch follows this table). Pilot transparent, explainable models (e.g., rule‑based scoring) for high‑risk decisions. |
| Technology developers & vendors | Design tools to be explainable by design (avoid opaque ML models for public‑sector use). Provide documentation packages (model cards, data sheets) that satisfy future mandatory AIAs. Include human‑rights impact assessments early in the development lifecycle. |
| Academics & Researchers | Build on the seven‑risk framework for comparative studies in other jurisdictions. Investigate feedback‑loop dynamics in ARMTs (e.g., GPS‑tagging, risk‑scoring). Explore opt‑in consent models like GeoMatch’s Canadian version as a pathway to ethical AI. |
| Future‑research agenda (as identified by the author) | Empirical measurement of bias outcomes once protected‑characteristic data become available. Longitudinal studies on psychological effects of algorithmic suspicion on migrants. Evaluation of independent audit mechanisms and their impact on tool redesign. |
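The audit‑log recommendation for immigration officials above can be pictured as one structured record per ARMT consultation, so a reviewing court can trace the chain of decision‑making. The sketch below is a hypothetical illustration; the field names and the example case are assumptions, not a schema from the report.

```python
# Hypothetical audit-log record: one entry per ARMT consultation, capturing
# both the tool's output and the human decision-maker's final action.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ArmtConsultation:
    case_id: str
    tool_name: str
    tool_version: str
    tool_output: str            # what the ARMT recommended or flagged
    human_decision: str         # the officer's final action
    decision_rationale: str     # why the officer followed or departed from it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

log: list[ArmtConsultation] = []
log.append(ArmtConsultation(
    case_id="UK-2024-001",
    tool_name="GPS-tagging triage (illustrative)",
    tool_version="1.3",
    tool_output="Flagged: recommend electronic monitoring",
    human_decision="Monitoring not imposed",
    decision_rationale="Flag based on out-of-date address history",
))
print(log[0])
```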
| Method | Details |
|---|---|
| Field research (summer 2024) | Conducted over 50 semi‑structured interviews with more than 25 organisations across Canada, the USA, and the UK. |
| Stakeholder mix | Government officials, civil servants, technology developers, software engineers, academics, lawyers, and migrant‑rights organisations. |
| Geographic coverage | Canada (Toronto, Ottawa, Montreal, Vancouver); USA (New York, Washington DC, San Francisco); UK (London, York). |
| Case‑study selection | Five in‑depth case studies: • UK GPS‑tagging • Canada Integrity Trend Analysis Tool (ITAT) • Canada privately‑sponsored refugee applications • US Investigative Case Management (ICM) • US GeoMatch. |
| Document analysis | Reviewed policy documents, legislative texts, public‑sector AI guidelines, FOIA requests, and published AIAs. |
| Qualitative synthesis | Identified recurring risk categories, mapped regulatory gaps, and extracted direct quotations from interviewees (e.g., “There is a unique sense of injustice…”). |
| Limitations | Research reflects the state of affairs up to September 2024; rapid tech‑policy changes after that date are not captured. |

This article examines language policy and translation strategy on Manitoba government websites and social‑media channels during the COVID‑19 pandemic.
Desjardins shows that multilingual health communication in Manitoba during COVID‑19 was uneven, hidden, and often inconsistent with both policy and demographic realities. By exposing these gaps, this article offers a clear roadmap for governments, NGOs, and scholars to redesign digital health messaging so that “Hello/Bonjour” truly cuts it in a crisis.
Goal: To understand how (or whether) multilingual communication was provided, how usable that communication was for non‑English/French speakers, and what the implications are for translational justice in a health crisis.
Guiding questions (implicit in the text):
| Background point | Why it matters for the study |
|---|---|
| COVID‑19 exposed language barriers – e.g., the Cargill High‑River meat‑packing outbreak where bulletin‑board notices were only in English, leading to confusion and higher case counts. | Shows the concrete health‑risk consequences of monolingual communication. |
| Canadian language legislation – Official Languages Act guarantees English/French only; Indigenous and migrant languages have no legal requirement. | Sets the policy ceiling against which provincial practice is measured. |
| Manitoba’s demographic reality – 2016 Census: ~22 % of residents do not have English as their mother tongue; ~11 % speak a non‑official language at home. | Highlights a mismatch between population needs and the official‑language‑only approach. |
| Digital shift in crisis communication – Governments moved heavily to websites, Facebook, Twitter, YouTube; the speed of information delivery is crucial. | Provides the technological arena where translation (or its absence) becomes visible. |
| Research novelty – Uses a digital‑humanities toolbox (web‑scraping, hashtag indexing, network analysis, close‑reading) to study both the content and the UX of multilingual delivery, rather than a simple content‑comparison. | Offers a methodological blend rarely applied to language‑policy evaluation. |
| Finding | Evidence (quoted) |
|---|---|
| Official‑language dominance persists – English and French dominate all COVID‑19 posts; other languages are rare. | “COVID‑19 Bulletin and the COVID‑19 Vaccine Bulletin are systematically posted in separate, language‑specific posts (English and French)… Neither … is translated or available in other languages.” |
| Multilingual resources exist but are hidden – Fact‑sheet translations (8 languages) and silent YouTube videos (12+ languages) are buried behind several clicks and not linked from the French site. | “Three mouse‑clicks from the English homepage will redirect a user to a section… where the Social (Physical) Distancing Factsheet can be found translated into eight languages.” |
| Inconsistent UX across language versions – The French site omits the multilingual fact‑sheets and video library that the English site shows. | “The French page does not have the seven other translated Social (Physical) Distancing Factsheets…” |
| Social‑media strategy is fragmented – Separate accounts for health officials, disjointed navigation to Shared Health Manitoba, and a predominance of English tweets. | “The Government of Manitoba’s Twitter account… tweets and retweets are predominantly in English, though French‑language content does make a somewhat regular appearance.” |
| Unequal YouTube playlists – English COVID‑19 playlist holds 346 videos; French playlist only 203, with far lower view counts. | “The English playlist total video count… is markedly larger than the French playlist… Engagement is also significantly lower for the French‑language playlist.” |
| Outlier success: Low‑German vaccine stickers – Community‑driven demand led to official production of stickers in Low German (Plattdeutsch), generating notable social‑media buzz. | “Andrew Unger … proposed … add Low German … and the Government of Manitoba obliged… the tweet garnered 283 likes, 28 quote tweets and 27 retweets.” |
| Platform omission – No active Instagram presence despite its potential for automated caption translation. | “The Government of Manitoba is oddly absent on Instagram… This seems like a missed opportunity…” |
| Audience | What to do (based on findings) |
|---|---|
| Public‑health officials / policymakers | • Audit all digital touch‑points (website, social media, video libraries) for discoverability of multilingual assets; redesign navigation to surface them from any language version. • Adopt a single, unified social‑media hub for health updates to avoid fragmentation. • Institutionalize a multilingual content checklist (incl. low‑resource languages identified in census data). |
| Government communications teams | • Implement language‑agnostic UI elements (e.g., a language selector that stays visible on every page). • Publish parallel playlists on YouTube with identical video counts and promote them equally in both official languages. • Explore automated caption translation on Instagram and TikTok to reach younger, multilingual audiences. |
| Community organisations / NGOs | • Leverage existing multilingual PDFs/videos to create localized outreach kits (print, radio, community‑center displays). • Advocate for co‑creation of content with community members (e.g., the Low‑German sticker model). |
| Researchers / Academics | • Extend the digital‑humanities methodology (web‑scraping + UX analysis) to other provinces or sectors (education, emergency services). • Conduct user‑testing studies to quantify the impact of hidden multilingual resources on information uptake. |
| Future‑research agenda (as suggested by the author) | • Interview end‑users from Indigenous and migrant language groups to capture lived experiences of access barriers. • Compare Manitoba’s approach with jurisdictions that have Indigenous‑language mandates (e.g., Nunavut) to identify best practices. |
| Methodological component | Details |
|---|---|
| Case‑study design | Focused on the Government of Manitoba’s digital communication during COVID‑19. |
| Data collection | • Web‑scraping of the provincial website and its “Resources and Links” pages. • Hashtag indexing / searching on Twitter and Facebook to capture COVID‑19‑related posts. • Network analysis of social‑media interactions (e.g., retweets, shares). (A minimal hashtag‑indexing sketch follows this table.) |
| Qualitative analysis | Close‑reading of selected posts, PDFs, and video descriptions to identify translation cues (e.g., “translate” buttons, multilingual labels). |
| Quantitative snapshots | Counts of followers/subscribers (e.g., Facebook ≈ 58 k, YouTube ≈ 12 k); video counts per language playlist; citation of census statistics (language‑use percentages). |
| Ethical considerations | Only public‑facing data used; personal identifiers removed; compliance with Canada’s Tri‑Council Policy Statement on research involving humans. |
| Scope limitations | No systematic surveys or interviews; analysis limited to publicly available digital artefacts up to mid‑2021. |
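As a rough illustration of the hashtag‑indexing step in the data‑collection row above, the sketch below counts hashtags and the official‑language split over posts that have already been collected (the scraping/export step itself is platform‑specific and omitted). The post data is invented for illustration.

```python
# Minimal sketch: index hashtags and language labels across collected posts.
# The posts and the #COVID19MB tag are invented examples.
import re
from collections import Counter

posts = [
    {"text": "COVID-19 Bulletin: new public health orders #COVID19MB", "lang": "en"},
    {"text": "Bulletin COVID-19 : nouvelles ordonnances #COVID19MB", "lang": "fr"},
    {"text": "Vaccine eligibility expanded today #ProtectMB #COVID19MB", "lang": "en"},
]

hashtag = re.compile(r"#\w+")

tags = Counter(tag for p in posts for tag in hashtag.findall(p["text"]))
langs = Counter(p["lang"] for p in posts)

print(tags.most_common())  # [('#COVID19MB', 3), ('#ProtectMB', 1)]
print(langs)               # Counter({'en': 2, 'fr': 1}) – official-language split
```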

The paper investigates dark patterns in privacy (DPPs): interface‑design tactics that manipulate users into disclosing more personal data than they intend, to the benefit of the service provider.
Jarovsky’s paper delivers a clear, interdisciplinary framework for identifying and regulating dark patterns in privacy. By tying UI manipulation to cognitive bias theory and positioning the GDPR’s fairness principle as the legal fulcrum, the research equips regulators, designers, and legal practitioners with a concrete roadmap to detect, assess, and ultimately prevent privacy‑harmful design practices.
Guiding questions:
The study aims to (i) propose a precise definition, (ii) build a taxonomy grounded in cognitive‑bias theory, and (iii) evaluate the compatibility of DPPs with current EU data‑protection law, suggesting pathways for regulatory reform.
| Background point | Why it matters |
|---|---|
| Rise of manipulative UI – Users regularly encounter opaque settings, forced‑consent banners, and default‑heavy designs that funnel them toward data‑rich choices. | Demonstrates a systemic problem that undermines the GDPR’s consent requirements. |
| Legal gap – While the GDPR mandates “freely given, specific, informed” consent, it does not explicitly address how consent is obtained via UI. | Creates a loophole where controllers can obtain apparently lawful consent that is actually coerced. |
| Existing scholarship – Prior work on dark patterns (e.g., Brignull, Nouwens, CNIL) treats them mainly as marketing tricks; few connect them to privacy law. | Jarovsky’s work bridges HCI, behavioral economics, and EU law, offering a multidisciplinary lens. |
| Policy momentum – The EU Digital Services Act (DSA) and California CPRA have begun naming dark patterns, but their scope is limited. | Highlights the timeliness of a deeper legal analysis of DPPs. |
| Unique contribution – The paper (a) refines the definition to separate dark patterns from nudges, (b) maps each pattern to a specific cognitive bias, and (c) frames the GDPR’s fairness principle as the primary legal lever. | Provides a concrete analytical toolkit for regulators, designers, and scholars. |
“To be considered a dark pattern, the design must be manipulative and have as an objective goal to make the data subject worse off according to the observed criteria.”
| Category | Core Mechanism | Example(s) |
|---|---|---|
| Pressure | Coercive language or conditional access (“you must share X to use Y”). | “Require marketing consent to complete a purchase.” |
| Hinder | Deliberate friction—hidden settings, complex navigation, privacy‑invasive defaults. | “‘Accept all’ vs. a labyrinth of individual switches.” |
| Mislead | Ambiguous wording, double negatives, visual tricks (color, contrast). | “Green ‘deny’ button, red ‘accept’ button.” |
| Misrepresent | False claims of necessity or benefit. | “Claim data collection is legally required when it isn’t.” |
Each category is linked to a set of cognitive biases (e.g., default effect, framing, anchoring, social proof).
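A minimal sketch of how the four‑category taxonomy could be operationalized, say in an internal design audit, is shown below. The bias assignments and interface cues are illustrative pairings drawn loosely from the paper's discussion, not its exact mapping, and the audit helper is a hypothetical toy.

```python
# Illustrative pairing of the four DPP categories with cognitive biases the
# paper discusses. Assignments and the audit helper are assumptions.
DPP_TAXONOMY = {
    "Pressure":     {"biases": ["social proof", "loss aversion"],
                     "cue": "coercive wording or conditional access"},
    "Hinder":       {"biases": ["default effect", "status quo bias"],
                     "cue": "friction: buried settings, invasive defaults"},
    "Mislead":      {"biases": ["framing", "anchoring"],
                     "cue": "double negatives, color/contrast tricks"},
    "Misrepresent": {"biases": ["authority bias"],
                     "cue": "false claims of necessity or benefit"},
}

def audit_flags(ui_review: dict[str, bool]) -> list[str]:
    """Return the DPP categories flagged during an internal UI review."""
    return [category for category, flagged in ui_review.items() if flagged]

review = {"Pressure": False, "Hinder": True, "Mislead": True, "Misrepresent": False}
for category in audit_flags(review):
    print(category, "->", DPP_TAXONOMY[category]["cue"])
```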
“DPPs breach the principle of fairness, for cumulatively: (a) not respecting reasonable expectations … (c) involving manipulation … (d) negatively affecting data subjects’ privacy.”
| Audience | Practical Applications |
|---|---|
| Privacy Regulators & Policymakers | Draft interpretative guidance that treats DPPs as violations of the fairness principle. Amend the DSA to remove the GDPR exemption for privacy‑related dark patterns. Incorporate the taxonomy into supervisory checklists for DPIA reviews. |
| Designers & Product Teams | Conduct an internal audit using the four‑category taxonomy to spot DPPs in UI flows. Replace pressure and mislead patterns with transparent, opt‑in consent dialogs. Adopt “privacy‑by‑design” checklists that explicitly test for default‑effect exploitation. |
| Legal Counsel & Compliance Officers | Re‑evaluate consent records for evidence of pressure or misrepresent tactics. Advise clients that consent obtained via any of the four categories may be deemed invalid under GDPR and CPRA precedents. Prepare arguments for fairness‑principle defenses in enforcement actions. |
| Researchers & Academics | Extend the taxonomy to other jurisdictions (e.g., Brazil’s LGPD, India’s PDPB). Empirically test the impact of each pattern on user behavior using eye‑tracking or A/B experiments. Explore the intersection of DPPs with algorithmic profiling and AI‑driven personalization. |
| Consumer Advocacy Groups | Use the taxonomy to produce “dark‑pattern scorecards” for popular apps/websites. Educate users about cognitive biases that make DPPs effective, encouraging critical UI scrutiny. |
Future‑research directions highlighted by the author
| Methodological Element | Description |
|---|---|
| Conceptual analysis | Synthesized literature from HCI, behavioral economics, and EU data‑protection law to craft a precise definition separating DPPs from nudges. |
| Taxonomy construction | Mapped a non‑exhaustive list of cognitive biases (anchoring, default effect, framing, etc.) to observable UI manipulations, producing four high‑level categories (Pressure, Hinder, Mislead, Misrepresent). |
| Legal doctrinal review | Examined GDPR Articles 6, 7, 25, Recitals 42‑45, the DSA, and the CPRA, interpreting how each regime treats consent and dark‑pattern‑like practices. |
| Case illustrations | Presented realistic user scenarios (Alice, Bob, Charlie, Danah) to demonstrate each pattern type and its privacy impact. |
| Normative argumentation | Leveraged the EU’s fairness principle and the PECL’s contract‑law concepts (mistake, fraud, pressure, hindrance) to argue for a legal re‑framing of DPPs. |
No empirical data (surveys, interviews, or experiments) were collected; the work is a theoretical‑legal synthesis supported by illustrative examples.

The report is a post‑analysis of the National Call for Proposals (CFP) 2024 that funded Canada’s Settlement and Resettlement Assistance Programs (outside Québec) for the period April 1, 2025 – March 31, 2028. Its aim is to evaluate how well the CFP achieved its intended outcomes, identify strengths and weaknesses of the funding process, and generate actionable recommendations for future CFP cycles.
Key guiding questions include:
Additional observations
Did the CFP meet the 2024 policy objectives and priorities?
The CFP met the baseline service‑delivery objectives, but fell short of fully operationalizing the higher‑order equity and innovation priorities that were set for 2024.
How effective were the supports, tools, and communications?
The supports and tools were substantially effective, especially the help‑desks, webinars, and EDI resources, but clarity of the funding guidelines and timeliness of communications need considerable improvement.
Impact of the 2024 immigration‑levels adjustment
The immigration‑levels adjustment compressed the negotiation window, extended the overall timeline, and necessitated a shift to shorter‑term funding agreements, which amplified stress for both applicants and IRCC staff.
Which aspects of the process require improvement?
The report identifies these improvement areas as the most critical levers for future CFP cycles:

This research report reviews and analyzes online and distance‑education language training in Canada. It offers a set of recommendations for implementation, including increased access to online and blended learning opportunities; better integration of culture into language learning; robust learner orientation and professional development for instructors; ongoing, multi‑modal communications; technical support; and a centralized repository of learning objects.
What is this research about?
This research examines the evolving landscape of online and distance education for language training, with a focus on the implications for newcomers to Canada, particularly those in English as a Second Language (ESL) and immigrant integration contexts. The report aims to analyze how emerging digital tools and Web 2.0 technologies influence language learning, and to identify effective practices and challenges in delivering online language education.
Guiding questions and central objectives:
What do you need to know? (Context and background)
What did the researchers find? (Key highlights, themes, and outliers)
Interesting Themes and Outlier Findings
How can you use this research?
What did the researchers do? (Methodology)
Summary Table: Key Points
| Aspect | Key Findings/Recommendations |
|---|---|
| Web 2.0 Tools | Blogs, wikis, podcasts enhance interactivity and engagement |
| Instructor Role | Requires ongoing professional development |
| Learner Role | Self-motivation and digital literacy are critical |
| Cultural Sensitivity | Content must be culturally relevant |
| Flexibility | Online learning supports refugees and newcomers |
| Best Practices | Focus on usability, learner readiness, and LMS integration |
