
Refugees and Robots versus Recruitment: Integrating Artificial Intelligence into Refugee Employment Services of Canada (2024)

Posted on:
October 25, 2025

The study investigates whether artificial‑intelligence (AI) tools can help Canadian settlement agencies deliver higher‑quality employment services to refugee clients, moving them from “survival” jobs toward work that matches their prior qualifications.

This study demonstrates that AI holds promise for improving the efficiency and quality of refugee employment services in Canada, but successful adoption hinges on closing knowledge gaps, strengthening data‑privacy safeguards, mitigating bias, and aligning funding incentives with quality employment outcomes.

What is this research about?

It asks:

  • What opportunities does AI present for refugee employment services?
  • What challenges or barriers might agencies face when adopting AI?
  • How do settlement‑agency staff perceive AI and what policies would support responsible use?

Abstract:

“Canada increasingly welcomes new refugees, but after arrival they face numerous personal and systemic barriers to securing employment within their fields of experience, causing them to take on low‑paying jobs. … This pilot study … thirteen interviews with counsellors, managers, and I.T. experts … showed that they had an optimistic perception of AI’s ability to support their work but limited knowledge about AI and concerns such as data privacy. … AI tools – such as mock interviews, resume building, customer‑support chatbots, and customized job search – may be potential solutions to increase efficiency and quality of services, which can free up counsellors to spend more quality time for tailored support … Accessible trainings … securing funding, testing for algorithmic bias, and protecting sensitive client data are essential.”

What do you need to know? – Context & Relevance

Why the study matters – Key background points

  • Persistent mismatch – Refugees in Canada earn less than half the average Canadian income in their first year and are often over‑qualified for the jobs they obtain; for example, 80% of Winnipeg refugees work in unrelated sales and service jobs after three years. Background: 70% of employed refugees are dissatisfied with their occupation and 60% feel over‑qualified (Lamba, 2003).
  • Systemic pressures on settlement agencies – Funding models reward the number of placements rather than their quality, caseloads are high, and staff lack formal training in employment counselling. Background: counsellors report large caseloads, limited time for one‑on‑one support, and reliance on “survival‑job” placements (Kosny et al., 2020).
  • AI already reshaping HR – Recruiters use AI for screening, interview automation, and job matching, yet bias and privacy concerns remain. Background: AI can exacerbate language bias, as in the HireVue and Amazon resume‑screening failures.
  • Gap in the literature – Little research exists on AI adoption within refugee‑focused settlement services; most work examines private‑sector recruitment. This pilot fills that niche with qualitative insight from frontline staff, as the first Canadian‑focused qualitative study of AI in refugee employment services.

What did the researchers find? – Key Highlights & Illustrative Quotes

  • Perceptions of AI – Mixed but leaning positive. The majority view AI as helpful and time‑saving, but knowledge is limited; most know only ChatGPT. “I just asked ChatGPT to do proofreading … it saved me a lot of time.” (Manager 2)
  • Positive expectations – Efficiency, quality, and scalability. AI could automate routine tasks, freeing counsellors for personalized support; mock‑interview tools are seen as a way to increase client self‑practice. “AI can improve our ways of doing things more effectively and efficiently.” (Manager 6)
  • Negative concerns and limitations – Data privacy, bias, authenticity, and loss of human touch. Participants fear client data exposure on cloud platforms, worry that AI‑generated resumes sound “robotic”, and are skeptical that AI can understand accents or cultural nuances. “Clients have escaped unsafe areas; they want privacy. Weak AI privacy could threaten them.” (Counsellor 5)
  • Barriers to adoption (counsellors) – Time to learn, organizational culture, and funding. Training must be embedded in work schedules, less tech‑savvy staff may resist, and a clear return on investment is needed to justify budget reallocation. “If the tool is too difficult to learn, we risk frustration and reduced motivation.” (Counsellor 5)
  • Barriers to adoption (clients) – Digital literacy, language, and device access. Younger refugees adapt faster while seniors may need extra support; multilingual interfaces are essential; access can run through agency computer labs, libraries, or shared devices. “80% own a mobile phone, but computers are scarce; we need a lab or loaner laptops.” (Researcher note)
  • Current AI tools in use – Predominantly ChatGPT (four of five agencies) and Microsoft Copilot for note‑taking, used for grammar checks, keyword extraction, and generating mock‑interview questions. Many participants said their organization is not officially using or formally encouraging these tools; use is left to each individual’s choice.
  • Tools in development or testing – Resume‑keyword matchers, website chatbots, and VR soft‑skill simulators (Bodyswaps), with pilot phases focused on usability and bias testing. Two of the five organizations were developing or testing new AI tools; both noted these programs were not government‑funded.
  • Wishlist – Mock‑interview platforms, resume builders, customized job‑search engines, multilingual chatbots, and administrative automation, ranked by participant interest (Figure 4). Tools for writing cover letters and supporting learning, such as English language classes, were also raised as potentially beneficial.
  • Bias concerns – Potential profiling of refugees, racial and gender bias, and political bias (e.g., preferential treatment of Ukrainian refugees), prompting calls for algorithmic bias audits before deployment. “Algorithms may favor ‘rich, white people’; we must guard against that.” (Participant)
  • Policy recommendations – Formal AI‑use guidelines, data‑privacy safeguards, funding for training, and mandatory outcome metrics that measure quality of employment, not just placement counts. All participants reported that their organizations had no formal policies on AI usage, while data‑privacy risks and lack of transparency were barriers to adoption.

More funding should be allocated to having settlement agencies collect and report data such as client education and qualifications, employment type intended and obtained, wage, and satisfaction. Regardless of the efficiency gains AI adoption may bring, the objective of refugees obtaining employment that matches their qualifications remains hard to achieve without first establishing a way to measure such outcomes, given current data limitations. A policy shift is essential: funding agencies should mandate the collection of this additional data from settlement organizations so that targets can be set for increasing qualification‑matched employment, reducing the deskilling of refugees.
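The outcome reporting described above could be prototyped in a few lines. The sketch below is illustrative only: the record fields mirror the data the authors suggest agencies collect (education, employment obtained, wage, satisfaction), but the names and structure are assumptions, not from the study.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class ClientOutcome:
    # Hypothetical per-client record; field names are illustrative.
    education_field: str   # field of prior qualification
    employment_field: str  # field of the job obtained
    hourly_wage: float
    satisfied: bool

def quality_metrics(records):
    """Summarize placement quality, not just the number of placements."""
    n = len(records)
    matched = sum(r.education_field == r.employment_field for r in records)
    return {
        "placements": n,
        "qualification_match_rate": matched / n,
        "median_wage": median(r.hourly_wage for r in records),
        "satisfaction_rate": sum(r.satisfied for r in records) / n,
    }
```

Reporting a qualification‑match rate and median wage alongside the placement count is one way a funder could operationalize “quality of employment” targets.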

Outlier findings

  • Only two agencies were actively testing AI tools; the rest relied on ad‑hoc personal use of ChatGPT.
  • Some participants imagined AI could replace the counsellor’s role in initial intake, especially for culturally shy clients (“it may be easier to request help from technology”).
  • A minority (≈ 15 %) expressed explicit fear that AI could exacerbate existing inequities by embedding biases against non‑native speakers or certain refugee groups.

What are some particularly interesting themes & outlier findings?

  1. Optimism vs. Knowledge Gap – Staff are enthusiastic about AI’s potential but lack formal understanding of how models work, raising a risk of misuse.
  2. Human‑Touch Paradox – While AI can free counsellors for deeper rapport, some see AI itself as a possible conduit for clients to disclose sensitive needs without stigma.
  3. Bias Amplification – Participants explicitly linked algorithmic bias to existing systemic inequities (e.g., differential support for Ukrainian vs. other refugees).
  4. Funding‑Driven Adoption – Because current metrics reward quantity of placements, agencies hesitate to divert resources to AI without clear evidence of impact on quality outcomes.
  5. Multilingual & Accessibility Needs – The call for AI tools that operate in multiple languages and run on low‑spec hardware is a distinctive requirement for this sector.
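The bias‑audit demand in theme 3 has a common concrete form: comparing an AI screening tool’s selection rates across client groups, for instance with the “four‑fifths rule” used in employment‑discrimination testing. The sketch below is a minimal illustration under assumed inputs (group labels and logged outcomes are hypothetical); it is not a method from the study.

```python
def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs logged from a screening tool."""
    totals, hits = {}, {}
    for group, selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(selected)
    return {g: hits[g] / totals[g] for g in totals}

def passes_four_fifths(outcomes):
    """Disparate-impact check: lowest group rate must be >= 80% of the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) >= 0.8 * max(rates.values())
```

An agency piloting a resume‑screening tool could run such a check on logged decisions before rollout and whenever the vendor updates the model.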

How can you use this research? – Target‑Specific Recommendations

  • Settlement‑agency leaders and managers: Conduct a needs assessment to map which AI tools (mock‑interview, resume‑builder) would yield the greatest time savings; secure pilot funding earmarked for AI training and bias‑audit services; draft AI‑use policies covering data minimization, client consent, and vendor vetting.
  • Front‑line counsellors: Participate in short, on‑site AI workshops (e.g., “ChatGPT for résumé polishing”); adopt a dual‑review workflow in which AI‑generated output is reviewed by a human before it is shared with clients; share client feedback on AI tools to inform continuous improvement.
  • IT and data‑security teams: Implement privacy by design: anonymize client data before any AI API call, and use on‑premise or encrypted cloud solutions; run bias‑testing scripts on any recruitment‑related AI model before rollout.
  • Policymakers and funders (e.g., Immigration, Refugees and Citizenship Canada): Revise settlement‑program metrics to include quality‑of‑employment indicators (alignment with qualifications, wage levels); allocate grant streams specifically for AI‑adoption pilots that include evaluation components.
  • Researchers and academia: Build on this pilot with a larger mixed‑methods study (a survey of more than 100 agencies, longitudinal outcome tracking); explore comparative bias analyses of AI tools across refugee sub‑populations.
  • Technology vendors: Design multilingual, low‑bandwidth AI modules tailored to settlement‑agency workflows; provide transparent model documentation and easy‑to‑use bias‑mitigation dashboards for non‑technical staff.
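The “anonymize client data before any AI API call” step for IT teams can be approximated with pattern‑based redaction. The sketch below is a simplified illustration (the two patterns are assumptions; a production system would use a vetted PII‑detection library and also handle names, addresses, and case numbers):

```python
import re

# Illustrative patterns only; real deployments need broader PII coverage.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-. ]?\d{3}[-. ]?\d{4}\b"),
}

def redact(text):
    """Replace direct identifiers with placeholders before text leaves the agency."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Running every outbound prompt through a filter like this, before it reaches a cloud model, is one inexpensive piece of the privacy‑by‑design approach the study recommends.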

Future‑research directions noted by authors

  • Conduct quantitative cost‑benefit analyses measuring time saved and employment‑outcome improvements after AI integration.
  • Perform large‑scale sampling across all Canadian provinces to test transferability.
  • Include refugee client perspectives (triangulation) to assess acceptability and perceived usefulness of AI tools.

What did the researchers do? – Methods Overview

The project combines a qualitative thematic analysis of in‑depth semi‑structured interviews with staff from five agencies across Ontario, British Columbia, and a national organization, capturing both optimism and privacy‑risk anxieties. It also maps concrete tool wishlists (mock‑interview platforms, multilingual chatbots, resume optimizers) and proposes a policy framework for AI governance in settlement contexts.

  • Design: Qualitative pilot study employing inductive thematic analysis of interview transcripts.
  • Data collection: 13 semi‑structured interviews (July 2024) with staff from five Canadian settlement agencies (three in Ontario, one in British Columbia, one national/international).
  • Participant roles: 7 program managers, 5 client‑facing counsellors, 1 IT expert (some participants held dual roles).
  • Recruitment: Purposive and snowball sampling; agencies contacted via publicly listed emails and phone numbers; consent obtained; ethics approval from the McMaster University REB.
  • Interview medium: Virtual, via Microsoft Teams; video recorded; auto‑transcribed, then manually corrected.
  • Analysis: Coding framework iteratively refined; themes identified: perceptions of AI, barriers and mitigations, applications of AI; visualized via flow chart (Fig. 2).
  • Supplementary data: Figures showing participant distribution (Fig. 1), AI tool usage (Fig. 3), and interest rankings (Fig. 4); Table 1 lists example AI tools for settlement services.
  • Limitations acknowledged: Small, non‑random sample; geographic concentration in Ontario and BC; reliance on self‑reported perceptions; no direct client input.

