The agreement seeks to create a shared vocabulary for digital‑literacy‑related concepts, provide implementation guidance for organizations across sectors, and propose a standardized set of indicators for measuring progress and barriers.

The National Workshop Agreement offers a comprehensive, actionable blueprint for advancing digital literacy across Canada. By unifying language, outlining clear implementation steps, and supplying a ready‑made measurement toolkit, it equips policymakers, educators, community groups and private firms with the means to coordinate efforts, monitor progress and ensure that digital‑skill development is equitable, inclusive and future‑ready, particularly as AI becomes an integral part of everyday digital interaction.

Implicit guiding questions include:

What do you need to know?

The NWA was authored by SETI (Social Economy Through Social Inclusion) on behalf of the Standards Council of Canada (SCC), in partnership with the Digital Governance Council (DGC). It entered into force on 24 September 2025 and will be reviewed after three years. The document consolidates terminology, implementation methodology, sector‑specific pathways, and a unified measurement framework in a single, nationally endorsed agreement. It also supplies sample Likert‑scale survey instruments that can be deployed across jurisdictions for benchmarking.
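The sample survey instruments themselves are not reproduced here, but scoring a Likert‑scale instrument for cross‑jurisdiction benchmarking can be sketched in a few lines. The item names and the 1–5 scale below are illustrative assumptions, not items taken from the NWA:

```python
# Illustrative sketch: aggregate 5-point Likert responses into per-item
# benchmark scores. Item labels and the 1-5 scale are assumptions; the
# NWA's actual survey instruments should be consulted for real use.
from statistics import mean

def likert_summary(responses: list[dict[str, int]]) -> dict[str, float]:
    """Average each item across respondents (1 = strongly disagree ... 5 = strongly agree)."""
    items = responses[0].keys()
    return {item: round(mean(r[item] for r in responses), 2) for item in items}

survey = [
    {"find_information": 4, "create_content": 3, "stay_safe_online": 5},
    {"find_information": 5, "create_content": 2, "stay_safe_online": 4},
    {"find_information": 3, "create_content": 3, "stay_safe_online": 4},
]

print(likert_summary(survey))
# {'find_information': 4.0, 'create_content': 2.67, 'stay_safe_online': 4.33}
```

Comparing such per‑item averages across jurisdictions, or across survey waves within one program, is the basic mechanic behind the benchmarking the agreement envisions.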

This is not a formal Canadian Standard but a code of practice that can evolve into a National Standard after expert review. The National Workshop Agreement provides a holistic, sector‑spanning blueprint for defining, implementing, and measuring digital‑literacy initiatives in Canada. Its unique blend of terminology, methodological guidance, and ready‑made measurement tools makes it a practical reference for anyone tasked with closing digital‑skill gaps. By adopting its common language, implementation steps, and indicator set, stakeholders can coordinate more effectively, track progress, and ensure that digital‑literacy programs are equitable, inclusive, and future‑ready (including AI literacy).

Why it matters

Canada’s rapid digital transformation is widening gaps in who can access, understand and benefit from new technologies. The document notes that inequities in digital literacy can lead to exclusion, reduced economic opportunity and erosion of trust in digital systems. The NWA is positioned as a response to these systemic challenges, seeking to ensure that youth, elders, newcomers, Indigenous peoples and other historically marginalized groups are not left behind.

Although the document itself is not a qualitative study, it embeds stakeholder perspectives, stating that co‑design “ensures cultural relevance and respects lived experiences,” thereby reflecting the voices of the communities it intends to serve.

What did the researchers find?

Shared terminology – The agreement defines 24 core concepts, each with a simplified and a standard definition. For example, Digital Literacy is described as “the ability to find, understand, create, and share information using digital tools and technologies safely and responsibly.” Similar paired definitions are provided for Digital Fluency, AI Literacy, Digital Trust and many others, establishing a common language for all stakeholders.

Implementation methodology – A systematic approach is laid out, beginning with purpose definition and audience identification, moving through co‑design with communities, selection of delivery models, accessibility considerations, train‑the‑trainer (TOT) models, provision of tools and infrastructure, measurement and adaptation, ethics and safety, and finally ongoing learning. The document stresses that co‑design “ensures cultural relevance and respects lived experiences,” highlighting the importance of community input throughout the process.

Sector‑specific pathways – Separate recommendations are given for the public sector (integrating digital literacy into curricula, workforce development and lifelong learning), the nonprofit sector (leveraging trusted community relationships and trauma‑informed pedagogy), the private sector (embedding literacy in onboarding, CSR programmes and responsible product design), and cross‑sector collaboration (collective‑impact models, shared resources and data sharing while respecting privacy).

Measurement framework – Six families of indicators are proposed: Access & Infrastructure; Skills & Confidence; Inclusion & Equity; Program Reach & Engagement; Capacity & Ecosystem Support; and Outcomes & Impact. Each family contains concrete metrics—for instance, the percentage of the target population with reliable internet, or the number of trained facilitators in a community.
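As a concrete illustration of one Access & Infrastructure metric, the percentage of a target population with reliable internet could be computed as follows. The record fields are hypothetical, since the NWA specifies indicators rather than data schemas:

```python
# Minimal sketch of one "Access & Infrastructure" indicator from the
# measurement framework: share of the target population with reliable
# internet. The record fields below are hypothetical, not NWA-defined.
def pct_reliable_internet(population: list[dict]) -> float:
    """Percentage of records flagged as having reliable internet access."""
    if not population:
        return 0.0
    with_access = sum(1 for person in population if person.get("reliable_internet"))
    return round(100 * with_access / len(population), 1)

community = [
    {"id": 1, "reliable_internet": True},
    {"id": 2, "reliable_internet": False},
    {"id": 3, "reliable_internet": True},
    {"id": 4, "reliable_internet": True},
]
print(pct_reliable_internet(community))  # 75.0
```

Each of the six indicator families could be operationalized with similarly simple ratios or counts once a jurisdiction settles on its data sources.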

Best‑practice patterns – The agreement repeatedly cites community‑led partnerships, embedding digital‑skill training within existing services, pairing device access with training, hybrid and mobile delivery models, and open‑access repositories as proven strategies.

The framework places AI Literacy and Personal Agents / AI Agency alongside traditional digital‑skill concepts, signalling a forward‑looking view of the skills needed in a world increasingly mediated by artificial intelligence. The inclusion of AI‑specific literacy suggests that understanding of machine learning, algorithmic bias and responsible AI use should be considered on an equal footing with basic device operation. The emphasis on cross‑sector collaboration, with its call for collective‑impact models that bring together governments, nonprofits and private firms, signals that no single entity can close the digital divide alone.

How you can use this research

Policymakers and government departments can adopt the shared terminology in legislation and program documentation, embed the indicator set into national reporting dashboards, and allocate funding for co‑design pilots that follow the NWA methodology.

Educators and school administrators should weave the digital‑literacy definitions into curriculum maps, use the sample survey to assess baseline student confidence and track progress year over year, and implement TOT modules for teachers based on the recommended training‑of‑trainers model.

Non‑profit and community organizations can tailor programs for marginalized groups by following the sector‑specific guidance, partner with libraries or municipal bodies to create hybrid or mobile delivery hubs, and contribute data to the open‑access repository to strengthen the national evidence base.

Private‑sector firms and tech companies are encouraged to embed digital‑literacy components into employee onboarding and corporate‑social‑responsibility initiatives, sponsor device‑access‑plus‑training bundles for underserved neighbourhoods, and align product roadmaps with the AI‑literacy standards to ensure responsible design.

Researchers and academics may use the indicator taxonomy as a framework for comparative studies across provinces or countries, conduct longitudinal evaluations of programs that adopt the NWA, and explore outlier findings such as perceptions of AI agency in greater depth.

National Workshop Agreement to Define Common Terms and Outline Best Practices for Development of Digital Literacy
AI transparency statement

This thesis uncovers a process‑centric fairness crisis in Canadian immigration stemming from opaque, poorly accountable automated (often also termed ‘algorithmic’) decision‑making systems (‘ADMs’). By articulating a three‑pillar framework (transparency, accountability, ex‑ante rule‑making) and mapping it onto existing legal structures, the work supplies a concrete roadmap for lawyers, policymakers, and scholars to demand and design a more just, auditable immigration system.

The convergence of an outdated statutory framework, a permissive soft‑law regime, rapid ADM deployment, and an overburdened judicial system creates a systemic risk that procedural fairness is being eroded in Canadian immigration. Understanding these contextual forces is essential because they explain why the fairness deficits identified in the thesis are not isolated glitches but structural vulnerabilities that affect thousands of applicants and challenge the rule of law.

What is this research about?

Lay summary:

"This thesis contributes to recent Canadian scholarship that has explored the impact of automated (algorithmic) decision making systems (“ADMs”) on core Canadian administrative law concepts. Combining doctrinal, law and technology methods with administrative justice theory, this thesis describes a “process problem” with the use of ADMs by Canadian immigration officials. The impact of this problem has led to the judicial system’s struggle to review ADM decisions for procedural fairness, in a way that has impacted also its lens on substantive review. However, to better explore the process problem – three pre-requisite concepts – transparency, accountability, and ex ante rulemaking must be defined and interrogated. This paper concludes by suggesting a shift towards an administrative justice model of “getting it right the first time” and the development of procedural protections through an amended Immigration and Refugee Protection Act and procedural code, inviting refinement."

Goal / Guiding Questions

Goal: Diagnose the process problem created by ADMs in Canadian immigration.
Guiding questions: How have ADMs changed the IRCC decision‑making workflow? What procedural‑fairness and reasonableness issues arise?

Goal: Evaluate fair‑process prerequisites.
Guiding question: What role do transparency, accountability and ex‑ante rule‑making play in ensuring a “fair process”?

Goal: Propose reforms.
Guiding question: How can legislative, regulatory or procedural changes restore “getting it right the first time”?

What do you need to know? (Context, Significance & Why It Matters)

Legal vacuum in the IRPA – Section 186 gives IRCC a blanket authority to use “electronic means” but contains no specific rules for AI/ADM. Courts must interpret a statute that was drafted before modern AI existed. Why it matters: without clear statutory limits, IRCC can deploy powerful algorithms unchecked, leaving applicants without a solid legal footing to challenge decisions.

Directive on Automated Decision‑Making (DADM) – Canada’s only soft‑law governance tool; it is non‑binding, allows low‑impact projects to avoid detailed disclosure, and limits transparency to “plain‑language notices.” The DADM is the de‑facto regulator for all federal ADMs, including immigration. Why it matters: its weak enforcement means agencies can sidestep meaningful oversight, making it difficult for the public or courts to know how a decision was produced.

Rapid ADM rollout in immigration – By mid‑2025 IRCC had published 26 Algorithmic Impact Assessments (AIAs) and operates tools such as Advanced Analytics‑TRV, Chinook, and ITAT; most are classified as “medium impact,” escaping the DADM’s stricter requirements. The sheer number and diversity of tools show that ADMs are now integral to everyday immigration processing. Why it matters: the scale magnifies any procedural defect; thousands of applicants are affected, yet the mechanisms to monitor fairness are minimal.

Federal Court backlog and boiler‑plate refusals – Over 75% of immigration cases sit unresolved, and many decisions are issued with templated reasons that omit any reference to ADMs. Judicial review is the primary external check on administrative decisions. Why it matters: when courts cannot see the underlying algorithmic reasoning, they cannot assess whether a decision complies with procedural fairness, effectively rendering review a rubber stamp.

Scholarly gap – Existing literature focuses on technical bias or high‑profile AI ethics; few works connect ADM mechanics to procedural fairness and administrative‑justice theory. This thesis applies Adler’s and Mashaw’s administrative‑justice typologies directly to IRCC’s ADM ecosystem. Why it matters: it provides a novel analytical lens that bridges law and technology, offering concrete doctrinal tools for courts and policymakers to evaluate fairness.

Unique methodological blend – Combines doctrinal legal analysis, exhaustive policy‑document coding, and a narrative case study (“Xi & Wang”) to illustrate real‑world impacts. The case study grounds abstract legal concepts in a lived‑experience scenario. Why it matters: it demonstrates how abstract procedural deficiencies translate into tangible hardships for immigrants, making the problem palpable for non‑specialists.

Policy relevance – The thesis coincides with ongoing parliamentary reviews (e.g., the CBA’s 2025 submission and the Treasury Board’s modernization agenda), and legislators are already debating AI governance reforms. Why it matters: the research supplies timely, evidence‑based recommendations that can be directly incorporated into upcoming amendments to the DADM or new ADM‑specific statutes.

What did the researchers find? (Key Findings & Themes)

The thesis makes three calls to action:

  1. Clarity - the process problem created by ADMs needs to be made more transparent, with accountability mechanisms identified, and the ex ante rules of the road clear to all stakeholders.
  2. “Getting it right the first time” - looking closely at administrative justice theories and how they may further implicate the internal and external dimensions of decision-making processes within the context of technological development.
  3. Refinement - Canadian administrative law scholarship can take this crucial work forward, bringing and applying new methods and theories in lockstep with (or at least less far behind) emerging technological developments that will continue to alter ADMs.

How can you use this research?

Immigration lawyers & litigants: Request specific ADM disclosures (e.g., model version, risk‑score thresholds) in ATIP requests or judicial motions. Frame procedural‑fairness arguments around the three‑pronged test (transparency, accountability, ex‑ante rules).

Policy‑makers / Treasury Board: Amend the DADM to make “medium‑impact” projects subject to full AIAs (including source‑code summaries). Introduce a statutory ADM‑Transparency Act requiring public registries of all IRCC‑used models.

IRCC administrators: Adopt audit‑log retention (minimum 12 months) for all ADM‑generated notes. Publish model cards (dataset, training method, performance metrics) on the IRCC transparency portal.

Academic researchers: Extend the “process‑first” framework to other federal agencies (e.g., the Canada Revenue Agency). Conduct empirical studies on the impact of ADM‑deletion practices on appellate success rates.

Civil‑society NGOs: Use the thesis’s checklist (transparency / accountability / ex‑ante rules) to evaluate new AIAs and launch public‑interest litigation where gaps appear. Advocate for an independent ADM ombudsperson (see the CBA recommendation).

Future‑research agenda (as identified by the author): Comparative analysis of ADM procedural fairness across Commonwealth jurisdictions; empirical measurement of “automation bias” in IRCC officers using controlled experiments; development of a legal‑tech prototype that automatically extracts ADM‑related metadata from immigration files for judicial review.
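The model‑card recommendation for IRCC administrators could take a shape like the following sketch. Every field name and value here is an assumption about what such a card might contain, not a schema drawn from the thesis or from Treasury Board guidance:

```python
# Hypothetical model card for an ADM tool, in the spirit of the
# "publish model cards" recommendation above. Field names are
# assumptions, not an IRCC or Treasury Board schema.
import json

model_card = {
    "model_name": "example-triage-model",  # illustrative, not a real IRCC tool
    "version": "1.0.0",
    "intended_use": "Sorting routine applications for officer review",
    "training_data": "Historical application records (anonymized)",
    "performance_metrics": {"accuracy": 0.91, "false_positive_rate": 0.04},
    "impact_level": "medium",  # DADM impact classification
    "audit_log_retention_months": 12,
}

print(json.dumps(model_card, indent=2))
```

Publishing even this minimal record per deployed tool would give litigants and courts concrete facts to anchor transparency and accountability arguments.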

What did the researcher do? (Methodology)

Fair process - an examination of the use of automated decision-making systems in Canadian administrative law through the case study of Canadian immigration (2025)

This dissertation uncovers a dual accountability regime in Canadian settlement services, where relational, empathy‑driven practices coexist, and often clash, with rigid, number‑focused reporting demands. The work exposes a temporal bias that privileges speed and quantification, shaping data cultures, technology choices, and ultimately the quality of support offered to newcomers. By foregrounding workers’ lived experiences, the study offers concrete pathways for policy reform, organizational redesign, and technology development that could make settlement services more humane, effective, and truly accountable.

What is this research about?

Abstract (quoted)
“Immigration and settlement policies, much like organizational policies or digital technology policies, are built from the coming together of discourses, tools, and people—people standing behind service desks, using databases and making decisions. These interdependent systems, beliefs, and tools affect how decisions are shaped as well as the information and data cultures of an organization (or a whole sector). Digital technologies and commitments to evidence‑based, accountable service delivery are critical to the capacity of settlement organizations to manage data flow. The dissertation addresses these questions through a multi‑sited study with the settlement organizations that serve refugees and immigrants in Canada, a country often recognized as a global leader in immigration and settlement policies.”

Goal & Guiding Questions
The dissertation aims to uncover how information practices shape and mediate accountability mechanisms in Canadian immigrant‑settlement services. It is driven by three research questions:

  1. What does accountability mean for settlement workers and their organizations?
  2. Which practices and systems (e.g., databases, reporting tools, outreach methods) underpin workers’ approaches to accountability?
  3. What are the implications of these practices and systems for the work of settlement organizations and for the people they serve?

What do you need to know? – Context & Significance

Policy pressure: Most settlement organizations are funded by the federal government (IRCC) and must report detailed service data to demonstrate “accountability” for public money.

Digital transformation: Recent years have seen a rapid shift to digital service delivery (e‑mail, Zoom, mobile apps) and the introduction of mandated databases such as iCARE.

Knowledge gap: While much research has examined immigrants’ information‑seeking behaviour, little has looked at the information practices of the frontline workers who collect, organize, and report that data.

Temporal bias: The author introduces the concept of temporal bias – a cultural tilt that privileges speed, clock‑time, and easily quantifiable data over slower, relational, narrative work.

Unique approach: The study blends practice theory, social theories of time, and critical data studies to analyze both the sociotechnical and temporal dimensions of accountability. It combines a large‑scale qualitative interview program (32 workers, 15 clients) with document analysis of 30+ sector reports, policy papers, and artefacts.

These points locate the work at the intersection of information science, migration studies, and organizational sociology, offering a fresh lens on how “accountability” is lived and negotiated on the ground.

What did the researchers find? – Key Highlights & Illustrative Quotes

Dual notions of accountability

Organizational data culture

Socio‑technical realities

Temporal bias & its consequences

Outlier / surprising findings

Interesting Themes & Outlier Findings

Empathy as formal accountability: Shows that caring practices can be framed as “accountable” actions, challenging the purely numeric view of performance.

Temporal bias (Chronos vs. Kairos): Highlights a structural tension: clock‑time reporting vs. the “right moment” needed for trust‑building with clients.

Data drift: Managers adjust data collection to anticipated funder expectations, even without explicit directives – a subtle form of governance.

Gatekeeping & time‑keeping: Workers act as gatekeepers of information and of time, deciding which client needs are urgent enough to be recorded.

Technology‑induced inequities: Digital tools (e.g., Zoom, WhatsApp) improve reach but also create barriers for clients lacking devices or digital literacy.

Non‑use of data: Despite massive data collection, organisations rarely analyse the data for internal improvement; it remains a reporting artefact.

How can you use this research?

For Policy Makers / Funders (IRCC, provincial ministries)

For Settlement Organization Leaders & Managers

For Frontline Settlement Workers

For Researchers & Academics

For Technology Designers / Vendors

What did the researchers do? – Methodology Overview

Design: Qualitative, exploratory case study using practice theory and social theories of time.

Data sources: Semi‑structured interviews with 32 settlement workers (a mix of front‑line staff, managers, and coordinators) and 15 immigrant clients; document and artefact analysis of 30+ sector reports, policy papers, conference proceedings, internal manuals, consent forms, and database screenshots.

Sampling: Purposive + snowball sampling across Ontario, Alberta, Manitoba, Saskatchewan, and British Columbia; aimed for diversity in gender, age, ethnicity, tenure, and organizational size.

Demographics (workers): 35% male, 65% female; ages 25‑64; experience 1‑31 years; roles ranged from settlement workers and managers to digital navigators and administrative assistants.

Demographics (clients): 12% male, 88% female; ages 18‑35; a mix of refugees, permanent residents, and recent arrivals (2019‑2022).

Analysis: Reflexive thematic analysis (Braun & Clarke, 2022), a six‑step process of familiarization, coding (NVivo), theme generation, review, definition, and write‑up. The theoretical lens (practice theory, temporality) guided coding of concepts such as “accountability,” “data culture,” and “temporal bias.”

Ethics: Approved by the University of Toronto REB; informed consent obtained; pseudonyms used; data stored securely.

Limitations noted: Pandemic‑forced shift to remote interviews; non‑probability sampling limits generalizability; urban‑centric sample (few rural participants).
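As a toy illustration of the coding step in reflexive thematic analysis, tallying how often each code is applied across excerpts looks like this. The codes and excerpts are invented, and in the study this work was done in NVivo rather than scripted:

```python
# Toy sketch of the coding step in reflexive thematic analysis: tally
# how often each code is applied across interview excerpts. Codes and
# excerpts are invented for illustration only.
from collections import Counter

coded_excerpts = [
    ("We log every session for the funder.", ["accountability", "data culture"]),
    ("There is never time to just listen.", ["temporal bias"]),
    ("Numbers first, stories later.", ["data culture", "temporal bias"]),
]

# Flatten all code applications and count them.
code_counts = Counter(code for _, codes in coded_excerpts for code in codes)
print(code_counts.most_common())
# [('data culture', 2), ('temporal bias', 2), ('accountability', 1)]
```

Frequency alone does not make a theme, as Braun and Clarke stress, but counts like these are a common starting point for reviewing candidate themes.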

Welcoming Infrastructures - Designing for Accountability in the Settlement Service Work in Canada (2024)

The study investigates whether artificial‑intelligence (AI) tools can help Canadian settlement agencies deliver higher‑quality employment services to refugee clients, moving them from “survival” jobs toward work that matches their prior qualifications.

This study demonstrates that AI holds promise for improving the efficiency and quality of refugee employment services in Canada, but successful adoption hinges on addressing knowledge gaps, data‑privacy safeguards, bias mitigation, and aligning funding incentives with quality employment outcomes.

What is this research about?

It asks:

Abstract:

“Canada increasingly welcomes new refugees, but after arrival they face numerous personal and systemic barriers to securing employment within their fields of experience, causing them to take on low‑paying jobs. … This pilot study … thirteen interviews with counsellors, managers, and I.T. experts … showed that they had an optimistic perception of AI’s ability to support their work but limited knowledge about AI and concerns such as data privacy. … AI tools – such as mock interviews, resume building, customer‑support chatbots, and customized job search – may be potential solutions to increase efficiency and quality of services, which can free up counsellors to spend more quality time for tailored support … Accessible trainings … securing funding, testing for algorithmic bias, and protecting sensitive client data are essential.”

What do you need to know? – Context & Relevance

Persistent mismatch: Refugees in Canada earn less than half of the average Canadian income in the first year and are often over‑qualified for the jobs they obtain (e.g., 80% of Winnipeg refugees work in unrelated sales/services after three years). Over‑qualification rates: 70% of employed refugees are dissatisfied with their occupation and 60% feel over‑qualified (Lamba 2003).

Systemic pressures on settlement agencies: Funding models reward the number of placements rather than quality, caseloads are high, and staff lack formal training for employment counselling. Counsellors report large caseloads, limited time for one‑on‑one support, and reliance on “survival‑job” placements (Kosny et al., 2020).

AI already reshaping HR: Recruiters use AI for screening, interview automation, and job matching, yet bias and privacy concerns remain. AI can exacerbate language bias (e.g., the HireVue and Amazon resume‑screening failures).

Gap in literature: Little research exists on AI adoption within refugee‑focused settlement services; most work focuses on private‑sector recruitment. This pilot, the first Canadian‑focused qualitative study on AI and refugee employment services, fills that niche with insight from frontline staff.

What did the researchers find? – Key Highlights & Illustrative Quotes

Perceptions of AI – mixed but leaning positive: The majority view AI as “helpful, time‑saving,” but knowledge is limited; most know only ChatGPT. “I just asked ChatGPT to do proofreading … it saved me a lot of time.” (Manager 2)

Positive expectations – efficiency, quality, and scalability: AI could automate routine tasks, freeing counsellors for personalized support; mock‑interview tools are seen as a way to increase client self‑practice. “AI can improve our ways of doing things more effectively and efficiently.” (Manager 6)

Negative concerns / limitations – data privacy, bias, authenticity, and loss of human touch: Fear of client data exposure on cloud platforms; concern that AI‑generated resumes sound “robotic”; skepticism about AI understanding accents or cultural nuances. “Clients have escaped unsafe areas; they want privacy. Weak AI privacy could threaten them.” (Counsellor 5)

Barriers to adoption (counsellors) – time to learn, organisational culture, funding: Training must be embedded in work schedules; there is resistance from less tech‑savvy staff; a clear ROI is needed to justify budget reallocation. “If the tool is too difficult to learn, we risk frustration and reduced motivation.” (Counsellor 5)

Barriers to adoption (clients) – digital literacy, language, device access: Younger refugees adapt faster, while seniors may need extra support; multilingual interfaces are essential; access runs through agency computer labs, libraries, or shared devices. “80% own a mobile phone, but computers are scarce; we need a lab or loaner laptops.” (Researcher note)

Current AI tools in use – predominantly ChatGPT (4/5 agencies) and Microsoft Co‑Pilot for note‑taking: Used for grammar checks, keyword extraction, and mock‑interview question generation. Many mentioned that their organization is not “officially” using or formally “encouraging” these tools; it is the user’s choice whether to use them.

Tools in development / testing – resume‑keyword matchers, website chatbots, VR soft‑skill simulators (Bodyswaps): Pilot phases focus on usability and bias testing. Two of the five organizations were in the development or testing phase of adopting new AI tools; both mentioned that these programs were not funded by the government.

Wishlist – mock‑interview platforms, resume builders, customized job‑search engines, multilingual chatbots, admin automation: Ranked by participant interest (Figure 4). Tools to help write cover letters and assist with learning, such as English language classes, were also raised as potentially beneficial.

Bias concerns – potential profiling of refugees; racial, gender, and political bias (e.g., preferential treatment of Ukrainian refugees): Participants called for algorithmic bias audits before deployment. “Algorithms may favor ‘rich, white people’; we must guard against that.” (Participant)

Policy recommendations – formal AI‑use guidelines, data‑privacy safeguards, funding for training, mandatory outcome metrics (quality of employment, not just placement counts): All participants reported that there were no formal organizational policies around AI usage, while data‑privacy risks and lack of transparency were barriers to adoption.
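One simple form the recommended pre‑deployment bias audit could take is the four‑fifths (80%) disparate‑impact check on selection rates between groups. This is a generic fairness heuristic, not a method prescribed by the study, and the groups and outcomes below are fabricated:

```python
# Sketch of a basic algorithmic bias check: the four-fifths (80%)
# disparate-impact rule on selection rates between two groups.
# Group membership and outcomes are fabricated for illustration.
def selection_rate(outcomes: list[bool]) -> float:
    """Fraction of positive outcomes (e.g., recommended for interview)."""
    return sum(outcomes) / len(outcomes)

def passes_four_fifths(group_a: list[bool], group_b: list[bool]) -> bool:
    """True if the lower selection rate is at least 80% of the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    lower, higher = min(rate_a, rate_b), max(rate_a, rate_b)
    return higher == 0 or (lower / higher) >= 0.8

group_a = [True, True, True, False]    # 75% recommended for interview
group_b = [True, False, False, False]  # 25% recommended for interview
print(passes_four_fifths(group_a, group_b))  # False -> warrants review
```

A failing check is not proof of discrimination, but it flags exactly the kind of disparity (e.g., between refugee sub‑populations) that participants wanted audited before any tool touches client files.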

More funding should be allocated to having settlement agencies collect and report data such as client education and qualifications, employment type intended and obtained, wage, and satisfaction. Regardless of the potential efficiency improvements obtainable through AI adoption, the objective of refugees obtaining employment that matches their qualifications remains challenging without first establishing a method to measure data limitations. A policy shift is essential, wherein funding agencies mandate the collection of additional data from settlement organizations in order to set targets for increasing employment outcomes that match client qualifications, resulting in reduced deskilling of refugees.

What are some particularly interesting themes & outlier findings?

  1. Optimism vs. Knowledge Gap – Staff are enthusiastic about AI’s potential but lack formal understanding of how models work, raising a risk of misuse.
  2. Human‑Touch Paradox – While AI can free counsellors for deeper rapport, some see AI itself as a possible conduit for clients to disclose sensitive needs without stigma.
  3. Bias Amplification – Participants explicitly linked algorithmic bias to existing systemic inequities (e.g., differential support for Ukrainian vs. other refugees).
  4. Funding‑Driven Adoption – Because current metrics reward quantity of placements, agencies hesitate to divert resources to AI without clear evidence of impact on quality outcomes.
  5. Multilingual & Accessibility Needs – The call for AI tools that operate in multiple languages and run on low‑spec hardware is a distinctive requirement for this sector.

How can you use this research? – Target‑Specific Recommendations

Settlement‑agency leaders / managers: Conduct a needs assessment to map which AI tools (mock‑interview, resume‑builder) would yield the greatest time savings. Secure pilot funding earmarked for AI training and bias‑audit services. Draft AI‑use policies covering data minimisation, client consent, and vendor vetting.

Front‑line counsellors: Participate in short, on‑site AI workshops (e.g., “ChatGPT for résumé polishing”). Adopt a dual‑review workflow in which AI‑generated output is reviewed by a human before it is shared with clients. Share client feedback on AI tools to inform continuous improvement.

IT / data‑security teams: Implement privacy by design: anonymize client data before any AI API call, and use on‑premise or encrypted cloud solutions. Run bias‑testing scripts on any recruitment‑related AI model before rollout.

Policymakers / funders (e.g., Immigration, Refugees and Citizenship Canada): Revise settlement‑program metrics to include quality‑of‑employment indicators (alignment with qualifications, wage levels). Allocate grant streams specifically for AI‑adoption pilots that include evaluation components.

Researchers / academia: Build on this pilot with a larger mixed‑methods study (a survey of more than 100 agencies with longitudinal outcome tracking). Explore comparative bias analyses of AI tools across refugee sub‑populations.

Technology vendors: Design multilingual, low‑bandwidth AI modules tailored to settlement‑agency workflows. Provide transparent model documentation and easy‑to‑use bias‑mitigation dashboards for non‑technical staff.
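The privacy‑by‑design step recommended for IT teams, anonymizing client data before any AI API call, might start with something as simple as pattern‑based redaction. The regex patterns below are minimal assumptions and nowhere near a complete de‑identification solution:

```python
# Illustrative "privacy-by-design" step: strip obvious identifiers from
# client text before it is sent to any external AI API. These patterns
# are simple assumptions, not a complete de-identification solution.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

note = "Client reachable at maria@example.com or 416-555-0199 before Friday."
print(redact(note))
# Client reachable at [EMAIL REDACTED] or [PHONE REDACTED] before Friday.
```

Real deployments would need far broader coverage (names, addresses, case numbers) and ideally a reviewed de‑identification pipeline, but even this gate keeps the most obvious identifiers off third‑party cloud platforms.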

Future‑research directions noted by authors

What did the researchers do? – Methods Overview

The project combines a qualitative thematic analysis of in‑depth semi‑structured interviews with staff from five agencies across Ontario, British Columbia, and a national organization, capturing both positive optimism and privacy‑risk anxieties. It also maps concrete tool‑wish lists (mock‑interview platforms, multilingual chatbots, resume‑optimizers) and proposes a policy‑framework for AI governance in settlement contexts.

Design: Qualitative pilot study employing inductive thematic analysis of interview transcripts.

Data collection: 13 semi‑structured interviews (July 2024) with staff from five Canadian settlement agencies (three in Ontario, one in British Columbia, one national/international).

Participant roles: 7 program managers, 5 client‑facing counsellors, 1 IT expert (some participants held dual roles).

Recruitment: Purposive + snowball sampling; agencies contacted via publicly listed emails/phone numbers; consent obtained; ethics approval from the McMaster University REB.

Interview medium: Virtual (Microsoft Teams); video recorded; auto‑transcribed and then manually corrected.

Analysis: Coding framework iteratively refined; themes identified: Perceptions of AI, Barriers & Mitigations, Applications of AI. Visualised via flow chart (Fig. 2).

Supplementary data: Figures showing participant distribution (Fig. 1), AI tool usage (Fig. 3), and interest rankings (Fig. 4). Table 1 lists example AI tools for settlement services.

Limitations acknowledged: Small, non‑random sample; geographic concentration in Ontario/BC; reliance on self‑reported perceptions; no direct client input.
Refugees and Robots vs. Recruitment: Integrating Artificial Intelligence into Refugee Employment Services of Canada (2024)
AI transparency statement

This report delivers a human‑rights‑centred, comparative audit of how automated recommendation‑making tools (ARMTs) are shaping immigration decisions in three major democracies. It exposes systemic bias, opacity, and accountability deficits, and offers a concrete roadmap, particularly for the UK, to embed ethical safeguards, transparency, and robust oversight into any future deployment of such tools.

Guiding aims:

What is this research about?

The report warns that unchecked automated recommendation‑making tools in immigration threaten fairness, transparency and human rights, and urges swift, binding regulation—especially in the UK—based on lessons from Canada’s mandatory AI assessments and the USA’s fragmented oversight.

What do you need to know? – Context & Why It Matters

What did the researchers find? – Key Highlights & Themes

Theme – Findings (with quotes)
Seven risk categories – Bias, transparency, controversial uses, terminology, psychological effects, data, accountability. “These risks fall under seven categories… this list facilitates regulators to consider where the trade‑offs must lie.”
Bias is pervasive – Historical, data, developer, proxy, automation, feedback‑loop, confirmation, computational, quantitative. “Historical bias… if past decisions were biased, those biases can be embedded into the algorithm and applied to future cases.”
Transparency gaps – Companies hide trade secrets; ATRS records are voluntary and rarely published. “Only nine records across all government departments had been published and none by the Home Office.”
Controversial “automated suspicion” – Tools that flag applications for extra scrutiny (e.g., US ICM, UK GPS‑tagging). “These tools do not make final decisions but generate suspicion… leading to increased scrutiny or deprioritisation.”
Terminology matters – Using “human‑in‑the‑loop” lets minimal human checks bypass regulation; the report recommends shifting to “chain of decision‑making”. “The lack of clarity over definitions risks allowing companies and government departments to describe their ARMTs in terms that avoid regulation.”
Psychological impact – Migrants feel “indignity” when a computer influences life‑altering decisions. “There is a unique sense of injustice or indignity when an administrative decision … relies … on an algorithmic output.”
Data stewardship failures – No systematic collection of protected‑characteristic data, hampering bias monitoring. “There is a lack of public information about the collection, storage and use of data by algorithmic systems, especially when data is provided by a commercial entity. This leaves numerous important questions unanswered.”
Accountability erosion – Algorithms are often “black boxes”, making it impossible to trace why a decision was made. “People will never know why the computer says ‘yes’ or ‘no’.”
Case‑study insights – • UK GPS‑tagging: an ARMT decides whether a person should wear an ankle tag, yet individuals are never notified of its use. • Canada ITAT: a risk‑assessment tool used by a “Risk Assessment Unit”; the tool is claimed to be “non‑black‑box” but lacks public AIAs, GBA+ assessments, or peer‑review disclosures. • US ICM (Palantir): used for investigative case management; controversial for its role in immigration enforcement. • US GeoMatch: a predictive placement tool for refugees; praised for efficiency but criticized for lack of explainability and for treating refugees as “economic objects”.
Outlier finding – Some tools (e.g., GeoMatch) are opt‑in for skilled migrants in Canada, showing a different governance model where consent and user control are built in. “This tool demonstrates the potential of ARMTs to improve migrants’ lives but re‑emphasises the need to proactively regulate ARMTs to safeguard against possible harms.”

Particularly Interesting Themes & Outliers

How can you use this research?

Audience – Practical Take‑aways
Policymakers (UK Government, Home Office) – • Enact a binding ban on black‑box algorithms and on automated refusals (mirroring Canada). • Adopt the “chain of decision‑making” terminology and require disclosure at every stage. • Make AIAs and ATRS records mandatory, publicly searchable, and updated on a set schedule (e.g., every 6 months). • Require collection of protected‑characteristic data for bias monitoring, with strict data‑protection safeguards. • Create an independent oversight body (audit, peer review, GBA+ assessment) with enforcement powers (funding withdrawal).
Legal practitioners & NGOs – • Use the report’s checklist of disclosure failures (e.g., lack of notice of ARMT use) to frame FOI/SAR requests and judicial‑review arguments. • Advocate for statutory duties under the Equality Act 2010 to consider algorithmic bias. • Educate clients that they have a right to ask whether an ARMT was involved and to request an explanation.
Immigration officials & public‑sector managers – • Implement training programmes on algorithmic bias, explainability, and the “chain of decision‑making”. • Introduce internal audit logs that record when an ARMT is consulted and the human decision‑maker’s final action. • Pilot transparent, explainable models (e.g., rule‑based scoring) for high‑risk decisions.
Technology developers & vendors – • Design tools to be explainable by design (avoid opaque ML models for public‑sector use). • Provide documentation packages (model cards, data sheets) that satisfy future mandatory AIAs. • Include human‑rights impact assessments early in the development lifecycle.
Academics & researchers – • Build on the seven‑risk framework for comparative studies in other jurisdictions. • Investigate feedback‑loop dynamics in ARMTs (e.g., GPS‑tagging, risk scoring). • Explore opt‑in consent models like GeoMatch’s Canadian version as a pathway to ethical AI.
Future‑research agenda (as identified by the author) – • Empirical measurement of bias outcomes once protected‑characteristic data become available. • Longitudinal studies on psychological effects of algorithmic suspicion on migrants. • Evaluation of independent audit mechanisms and their impact on tool redesign.
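The audit‑log recommendation for immigration officials can be illustrated with a small sketch: one record per ARMT consultation, pairing the tool's recommendation with the human decision‑maker's final action so the chain of decision‑making stays traceable. All field names and values here are hypothetical, not drawn from the report.

```python
# Illustrative internal audit-log entry for ARMT consultations.
# Field names are assumptions chosen for demonstration only.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ARMTConsultation:
    case_id: str
    tool_name: str            # which ARMT was consulted
    tool_recommendation: str  # what the tool suggested
    human_decision: str       # what the official actually decided
    decision_maker: str
    consulted_at: str

def log_consultation(log, case_id, tool_name, recommendation, decision, officer):
    """Append one consultation record to the audit log and return it."""
    entry = ARMTConsultation(
        case_id=case_id,
        tool_name=tool_name,
        tool_recommendation=recommendation,
        human_decision=decision,
        decision_maker=officer,
        consulted_at=datetime.now(timezone.utc).isoformat(),
    )
    log.append(asdict(entry))
    return entry

audit_log = []
log_consultation(audit_log, "C-1042", "risk-scoring-tool",
                 "flag for extra scrutiny",
                 "no extra scrutiny; documents verified manually",
                 "officer-17")
print(json.dumps(audit_log[0], indent=2))
```

Recording the divergence between tool recommendation and human decision is what makes later audits of automation bias possible.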

What did the researchers do? – Methodology

Method – Details
Field research (summer 2024) – Conducted over 50 semi‑structured interviews with more than 25 organisations across Canada, the USA, and the UK.
Stakeholder mix – Government officials, civil servants, technology developers, software engineers, academics, lawyers, and migrant‑rights organisations.
Geographic coverage – Canada (Toronto, Ottawa, Montreal, Vancouver); USA (New York, Washington DC, San Francisco); UK (London, York).
Case‑study selection – Five in‑depth case studies: • UK GPS‑tagging • Canada Integrity Trend Analysis Tool (ITAT) • Canada privately‑sponsored refugee applications • US Investigative Case Management (ICM) • US GeoMatch.
Document analysis – Reviewed policy documents, legislative texts, public‑sector AI guidelines, FOIA requests, and published AIAs.
Qualitative synthesis – Identified recurring risk categories, mapped regulatory gaps, and extracted direct quotations from interviewees (e.g., “There is a unique sense of injustice…”).
Limitations – Research reflects the state of affairs up to September 2024; rapid tech‑policy changes after that date are not captured.
Automated Recommendation-Making Tools in Immigration Systems (2024)

This article examines language policy and translation strategy on Government of Manitoba websites and social‑media channels during the COVID‑19 pandemic.

Desjardins shows that multilingual health communication in Manitoba during COVID‑19 was uneven, hidden, and often inconsistent with both policy and demographic realities. By exposing these gaps, this article offers a clear roadmap for governments, NGOs, and scholars to redesign digital health messaging so that “Hello/Bonjour” truly cuts it in a crisis.

What is this research about?

Goal: To understand how (or whether) multilingual communication was provided, how usable that communication was for non‑English/French speakers, and what the implications are for translational justice in a health crisis.

Guiding questions (implicit in the text):

What do you need to know? – Context & Why It Matters

Background point – Why it matters for the study
COVID‑19 exposed language barriers – e.g., the Cargill High‑River meat‑packing outbreak, where bulletin‑board notices were only in English, leading to confusion and higher case counts. Shows the concrete health‑risk consequences of monolingual communication.
Canadian language legislation – The Official Languages Act guarantees English/French only; Indigenous and migrant languages have no legal requirement. Sets the policy ceiling against which provincial practice is measured.
Manitoba’s demographic reality – 2016 Census: ~22% of residents do not have English as their mother tongue; ~11% speak a non‑official language at home. Highlights a mismatch between population needs and the official‑language‑only approach.
Digital shift in crisis communication – Governments moved heavily to websites, Facebook, Twitter, and YouTube; the speed of information delivery is crucial. Provides the technological arena where translation (or its absence) becomes visible.
Research novelty – Uses a digital‑humanities toolbox (web scraping, hashtag indexing, network analysis, close reading) to study both the content and the UX of multilingual delivery, rather than a simple content comparison. Offers a methodological blend rarely applied to language‑policy evaluation.

What did the researchers find? – Key Highlights & Supporting Quotes

Finding – Evidence (quoted)
Official‑language dominance persists – English and French dominate all COVID‑19 posts; other languages are rare. “COVID‑19 Bulletin and the COVID‑19 Vaccine Bulletin are systematically posted in separate, language‑specific posts (English and French)… Neither … is translated or available in other languages.”
Multilingual resources exist but are hidden – Fact‑sheet translations (8 languages) and silent YouTube videos (12+ languages) are buried behind several clicks and not linked from the French site. “Three mouse‑clicks from the English homepage will redirect a user to a section… where the Social (Physical) Distancing Factsheet can be found translated into eight languages.”
Inconsistent UX across language versions – The French site omits the multilingual fact sheets and video library that the English site shows. “The French page does not have the seven other translated Social (Physical) Distancing Factsheets…”
Social‑media strategy is fragmented – Separate accounts for health officials, disjointed navigation to Shared Health Manitoba, and a predominance of English tweets. “The Government of Manitoba’s Twitter account… tweets and retweets are predominantly in English, though French‑language content does make a somewhat regular appearance.”
Unequal YouTube playlists – The English COVID‑19 playlist holds 346 videos; the French playlist only 203, with far lower view counts. “The English playlist total video count… is markedly larger than the French playlist… Engagement is also significantly lower for the French‑language playlist.”
Outlier success: Low German vaccine stickers – Community‑driven demand led to official production of stickers in Low German (Plattdeutsch), generating notable social‑media buzz. “Andrew Unger … proposed … add Low German … and the Government of Manitoba obliged… the tweet garnered 283 likes, 28 quote tweets and 27 retweets.”
Platform omission – No active Instagram presence despite its potential for automated caption translation. “The Government of Manitoba is oddly absent on Instagram… This seems like a missed opportunity…”

Especially Interesting Themes & Outlier Findings

How can you use this research? – Practical Take‑aways for Different Audiences

Audience – What to do (based on findings)
Public‑health officials / policymakers – • Audit all digital touch points (website, social media, video libraries) for discoverability of multilingual assets; redesign navigation to surface them from any language version. • Adopt a single, unified social‑media hub for health updates to avoid fragmentation. • Institutionalize a multilingual content checklist (including low‑resource languages identified in census data).
Government communications teams – • Implement language‑agnostic UI elements (e.g., a language selector that stays visible on every page). • Publish parallel playlists on YouTube with identical video counts and promote them equally in both official languages. • Explore automated caption translation on Instagram and TikTok to reach younger, multilingual audiences.
Community organisations / NGOs – • Leverage existing multilingual PDFs/videos to create localized outreach kits (print, radio, community‑centre displays). • Advocate for co‑creation of content with community members (e.g., the Low German sticker model).
Researchers / academics – • Extend the digital‑humanities methodology (web scraping + UX analysis) to other provinces or sectors (education, emergency services). • Conduct user‑testing studies to quantify the impact of hidden multilingual resources on information uptake.
Future‑research agenda (as suggested by the author) – • Interview end users from Indigenous and migrant language groups to capture lived experiences of access barriers. • Compare Manitoba’s approach with jurisdictions that have Indigenous‑language mandates (e.g., Nunavut) to identify best practices.

What did the researchers do? – Methodology Overview

Methodological component – Details
Case‑study design – Focused on the Government of Manitoba’s digital communication during COVID‑19.
Data collection – • Web scraping of the provincial website and its “Resources and Links” pages. • Hashtag indexing/searching on Twitter and Facebook to capture COVID‑19‑related posts. • Network analysis of social‑media interactions (e.g., retweets, shares).
Qualitative analysis – Close reading of selected posts, PDFs, and video descriptions to identify translation cues (e.g., “translate” buttons, multilingual labels).
Quantitative snapshots – Counts of followers/subscribers (e.g., Facebook ≈ 58k, YouTube ≈ 12k); video counts per language playlist; citation of census statistics (language‑use percentages).
Ethical considerations – Only public‑facing data used; personal identifiers removed; compliance with the Canadian Tri‑Council Policy Statement on research involving humans.
Scope limitations – No systematic surveys or interviews; analysis limited to publicly available digital artefacts up to mid‑2021.
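As an illustration of the web‑scraping component, the sketch below counts links whose anchor text names a language, one simple way to gauge how discoverable multilingual assets are on a page. It uses Python's standard‑library HTML parser on stand‑in HTML; the language list and markup are assumptions, not the study's actual data or pages.

```python
# Sketch: count language-labelled links in a page's HTML, as a rough
# discoverability signal for multilingual resources. The HTML below is a
# stand-in; the study scraped real Government of Manitoba pages.

from html.parser import HTMLParser

LANGUAGES = {"english", "french", "low german", "tagalog", "punjabi"}

class LanguageLinkCounter(HTMLParser):
    """Count <a> elements whose visible text matches a known language name."""
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.counts = {}

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        label = data.strip().lower()
        if self.in_link and label in LANGUAGES:
            self.counts[label] = self.counts.get(label, 0) + 1

sample_html = """
<ul>
  <li><a href="/factsheet-en.pdf">English</a></li>
  <li><a href="/factsheet-fr.pdf">French</a></li>
  <li><a href="/factsheet-lg.pdf">Low German</a></li>
</ul>
"""

parser = LanguageLinkCounter()
parser.feed(sample_html)
print(parser.counts)  # {'english': 1, 'french': 1, 'low german': 1}
```

Running the same counter against the English and French versions of a page would surface exactly the kind of asymmetry the article documents (multilingual links present on one version, absent on the other).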


Renée Desjardins, “Hello/Bonjour Won’t Cut It in a Health Crisis”

The paper investigates dark patterns in privacy (DPPs): interface‑design tactics that manipulate users into disclosing more personal data than they intend, to the benefit of the service provider.

Jarovsky’s paper delivers a clear, interdisciplinary framework for identifying and regulating dark patterns in privacy. By tying UI manipulation to cognitive bias theory and positioning the GDPR’s fairness principle as the legal fulcrum, the research equips regulators, designers, and legal practitioners with a concrete roadmap to detect, assess, and ultimately prevent privacy‑harmful design practices.

What is this research about?

Guiding questions:

  1. How can DPPs be defined in a way that isolates them from ordinary nudges?
  2. Which cognitive biases do DPPs exploit, and how can they be systematically categorized?
  3. What is the legal status of DPPs under the EU GDPR (and related regimes such as the DSA and CCPA/CPRA)?
  4. Can existing legal principles—especially the GDPR’s fairness principle—be leveraged to curb DPPs?

The study aims to (i) propose a precise definition, (ii) build a taxonomy grounded in cognitive‑bias theory, and (iii) evaluate the compatibility of DPPs with current EU data‑protection law, suggesting pathways for regulatory reform.

What do you need to know? – Context & Significance

Background point – Why it matters
Rise of manipulative UI – Users regularly encounter opaque settings, forced‑consent banners, and default‑heavy designs that funnel them toward data‑rich choices. Demonstrates a systemic problem that undermines the GDPR’s consent requirements.
Legal gap – While the GDPR mandates “freely given, specific, informed” consent, it does not explicitly address how consent is obtained via UI. Creates a loophole where controllers can obtain apparently lawful consent that is actually coerced.
Existing scholarship – Prior work on dark patterns (e.g., Brignull, Nouwens, CNIL) treats them mainly as marketing tricks; few connect them to privacy law. Jarovsky’s work bridges HCI, behavioral economics, and EU law, offering a multidisciplinary lens.
Policy momentum – The EU Digital Services Act (DSA) and California CPRA have begun naming dark patterns, but their scope is limited. Highlights the timeliness of a deeper legal analysis of DPPs.
Unique contribution – The paper (a) refines the definition to separate dark patterns from nudges, (b) maps each pattern to a specific cognitive bias, and (c) frames the GDPR’s fairness principle as the primary legal lever. Provides a concrete analytical toolkit for regulators, designers, and scholars.

What did the researchers find? – Key Findings & Illustrative Quotes

3.1 Definition & Conceptual Clarification

“To be considered a dark pattern, the design must be manipulative and have as an objective goal to make the data subject worse off according to the observed criteria.”

3.2 Taxonomy (Four Core Categories)

Category – Core mechanism – Example(s)
Pressure – Coercive language or conditional access (“you must share X to use Y”) – “Require marketing consent to complete a purchase.”
Hinder – Deliberate friction: hidden settings, complex navigation, privacy‑invasive defaults – “‘Accept all’ vs. a labyrinth of individual switches.”
Mislead – Ambiguous wording, double negatives, visual tricks (color, contrast) – “Green ‘deny’ button, red ‘accept’ button.”
Misrepresent – False claims of necessity or benefit – “Claim data collection is legally required when it isn’t.”

Each category is linked to a set of cognitive biases (e.g., default effect, framing, anchoring, social proof).
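The category‑to‑bias linkage can be expressed as a simple lookup structure. The sketch below uses only biases named in the paper (default effect, framing, anchoring, social proof), but the per‑category assignments are illustrative, not Jarovsky's exact mapping.

```python
# Illustrative encoding of the four-category DPP taxonomy.
# Bias assignments per category are assumptions for demonstration only.

TAXONOMY = {
    "pressure":     {"mechanism": "coercive language or conditional access",
                     "biases": ["social proof"]},
    "hinder":       {"mechanism": "deliberate friction and invasive defaults",
                     "biases": ["default effect"]},
    "mislead":      {"mechanism": "ambiguous wording and visual tricks",
                     "biases": ["framing", "anchoring"]},
    "misrepresent": {"mechanism": "false claims of necessity or benefit",
                     "biases": ["framing"]},
}

def audit_flags(observed_categories):
    """Given DPP categories spotted in a UI audit, list the biases they exploit."""
    return sorted({bias
                   for category in observed_categories
                   for bias in TAXONOMY[category]["biases"]})

print(audit_flags(["hinder", "mislead"]))
# -> ['anchoring', 'default effect', 'framing']
```

A design team could walk a consent flow, tag each screen with the categories it exhibits, and use the aggregated bias list to prioritize remediation.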

3.3 Legal Analysis

“DPPs breach the principle of fairness, for cumulatively: (a) not respecting reasonable expectations … (c) involving manipulation … (d) negatively affecting data subjects’ privacy.”

3.4 Outlier / Particularly Interesting Insights

How can you use this research? – Audience‑Specific Takeaways

Audience – Practical Applications
Privacy Regulators & Policymakers – Draft interpretative guidance that treats DPPs as violations of the fairness principle. Amend the DSA to remove the GDPR exemption for privacy‑related dark patterns. Incorporate the taxonomy into supervisory checklists for DPIA reviews.
Designers & Product Teams – Conduct an internal audit using the four‑category taxonomy to spot DPPs in UI flows. Replace pressure and mislead patterns with transparent, opt‑in consent dialogs. Adopt “privacy‑by‑design” checklists that explicitly test for default‑effect exploitation.
Legal Counsel & Compliance Officers – Re‑evaluate consent records for evidence of pressure or misrepresent tactics. Advise clients that consent obtained via any of the four categories may be deemed invalid under GDPR and CPRA precedents. Prepare arguments for fairness‑principle defenses in enforcement actions.
Researchers & Academics – Extend the taxonomy to other jurisdictions (e.g., Brazil’s LGPD, India’s PDPB). Empirically test the impact of each pattern on user behavior using eye tracking or A/B experiments. Explore the intersection of DPPs with algorithmic profiling and AI‑driven personalization.
Consumer Advocacy Groups – Use the taxonomy to produce “dark‑pattern scorecards” for popular apps/websites. Educate users about the cognitive biases that make DPPs effective, encouraging critical UI scrutiny.

Future‑research directions highlighted by the author

What did the researchers do? – Methodology Overview

Methodological Element – Description
Conceptual analysis – Synthesized literature from HCI, behavioral economics, and EU data‑protection law to craft a precise definition separating DPPs from nudges.
Taxonomy construction – Mapped a non‑exhaustive list of cognitive biases (anchoring, default effect, framing, etc.) to observable UI manipulations, producing four high‑level categories (Pressure, Hinder, Mislead, Misrepresent).
Legal doctrinal review – Examined GDPR Articles 6, 7, and 25, Recitals 42–45, the DSA, and the CPRA, interpreting how each regime treats consent and dark‑pattern‑like practices.
Case illustrations – Presented realistic user scenarios (Alice, Bob, Charlie, Danah) to demonstrate each pattern type and its privacy impact.
Normative argumentation – Leveraged the EU’s fairness principle and the PECL’s contract‑law concepts (mistake, fraud, pressure, hindrance) to argue for a legal re‑framing of DPPs.

No empirical data (surveys, interviews, or experiments) were collected; the work is a theoretical‑legal synthesis supported by illustrative examples.

Dark Patterns in Personal Data Collection: Definition, Taxonomy, and Lawfulness

The report is a post‑analysis of the National Call for Proposals (CFP) 2024 that funded Canada’s Settlement and Resettlement Assistance Programs (outside Québec) for the period April 1 2025 – March 31 2028. Its aim is to evaluate how well the CFP achieved its intended outcomes, identify strengths and weaknesses of the funding process, and generate actionable recommendations for future CFP cycles.

Key guiding questions include:

What do you need to know? – Context & Why It Matters

What did the researchers find? – Key Highlights & Themes

Additional observations

Did the CFP meet the 2024 policy objectives and priorities?

The CFP met the baseline service‑delivery objectives, but fell short of fully operationalizing the higher‑order equity and innovation priorities that were set for 2024.

How effective were the supports, tools, and communications?

The supports and tools were substantially effective, especially the help‑desks, webinars, and EDI resources, but clarity of the funding guidelines and timeliness of communications need considerable improvement.

Impact of the 2024 immigration‑levels adjustment

The immigration‑levels adjustment compressed the negotiation window, extended the overall timeline, and necessitated a shift to shorter‑term funding agreements, which amplified stress for both applicants and IRCC staff.

Which aspects of the process require improvement?

The report identifies these improvement areas as the most critical levers for future CFP cycles:

How can you use this research? – Audience‑Specific Takeaways

National Call for Proposals for the Settlement Program and Resettlement Assistance Program CFP 2024 - Post-Analysis Report

This research report reviews and analyzes online and distance‑education language training in Canada. It offers a set of recommendations for implementation, including increased access to online and blended learning opportunities, integration of culture into language learning, robust learner orientation and professional development for instructors, ongoing and multi‑modal communication, technical support, and a centralized repository of learning objects.

What is this research about?

This research examines the evolving landscape of online and distance education for language training, with a focus on the implications for newcomers to Canada, particularly those in English as a Second Language (ESL) and immigrant‑integration contexts. The report aims to analyze how emerging digital tools and Web 2.0 technologies influence language learning, and to identify effective practices and challenges in delivering online language education.

Guiding questions and central objectives:

What do you need to know? (Context and background)

What did the researchers find? (Key highlights, themes, and outliers)

Interesting Themes and Outlier Findings

How can you use this research?

What did the researchers do? (Methodology)

Summary Table: Key Points

Aspect – Key Findings/Recommendations
Web 2.0 Tools – Blogs, wikis, and podcasts enhance interactivity and engagement.
Instructor Role – Requires ongoing professional development.
Learner Role – Self‑motivation and digital literacy are critical.
Cultural Sensitivity – Content must be culturally relevant.
Flexibility – Online learning supports refugees and newcomers.
Best Practices – Focus on usability, learner readiness, and LMS integration.
Fast Forward - An Analysis of Online and Distance Education Language Training (2007)
