The study investigates whether artificial‑intelligence (AI) tools can help Canadian settlement agencies deliver higher‑quality employment services to refugee clients, moving them from “survival” jobs toward work that matches their prior qualifications.
In brief, it finds that AI holds promise for improving the efficiency and quality of refugee employment services in Canada, but that successful adoption hinges on closing knowledge gaps, establishing data‑privacy safeguards, mitigating bias, and aligning funding incentives with quality employment outcomes.
Abstract:
“Canada increasingly welcomes new refugees, but after arrival they face numerous personal and systemic barriers to securing employment within their fields of experience, causing them to take on low‑paying jobs. … This pilot study … thirteen interviews with counsellors, managers, and I.T. experts … showed that they had an optimistic perception of AI’s ability to support their work but limited knowledge about AI and concerns such as data privacy. … AI tools – such as mock interviews, resume building, customer‑support chatbots, and customized job search – may be potential solutions to increase efficiency and quality of services, which can free up counsellors to spend more quality time for tailored support … Accessible trainings … securing funding, testing for algorithmic bias, and protecting sensitive client data are essential.”
| Why the study matters | Key background points |
|---|---|
| Persistent mismatch – Refugees in Canada earn less than half the average Canadian income in their first year and are often over‑qualified for the jobs they obtain (e.g., 80 % of Winnipeg refugees work in unrelated sales/service jobs after three years). | Lamba (2003): 70 % of employed refugees are dissatisfied with their occupation and 60 % feel over‑qualified. |
| Systemic pressures on settlement agencies – Funding models reward the number of placements rather than their quality, caseloads are high, and staff lack formal training in employment counselling. | Counsellors report large caseloads, limited time for one‑on‑one support, and reliance on “survival‑job” placements (Kosny et al., 2020). |
| AI already reshaping HR – Recruiters use AI for screening, interview automation, and job matching, yet bias and privacy concerns remain. | AI can exacerbate language bias (e.g., HireVue interview scoring and Amazon’s resume‑screening failures). |
| Gap in literature – Little research exists on AI adoption within refugee‑focused settlement services; most work focuses on private‑sector recruitment. This pilot fills that niche with qualitative insight from frontline staff. | First Canadian‑focused qualitative study on AI + refugee employment services. |
| Theme | Findings | Representative Quote |
|---|---|---|
| Perceptions of AI – Mixed but leaning positive. | Majority view AI as “helpful, time‑saving”. Knowledge is limited; most know only ChatGPT. | “I just asked ChatGPT to do proofreading … it saved me a lot of time.” (Manager 2) |
| Positive expectations – Efficiency, quality, and scalability. | AI could automate routine tasks, freeing counsellors for personalized support. Mock‑interview tools seen as a way to increase client self‑practice. | “AI can improve our ways of doing things more effectively and efficiently.” (Manager 6) |
| Negative concerns / limitations – Data privacy, bias, authenticity, and loss of human touch. | Fear of client data exposure on cloud platforms. Concern that AI‑generated resumes sound “robotic”. Skepticism about AI understanding accents or cultural nuances. | “Clients have escaped unsafe areas; they want privacy. Weak AI privacy could threaten them.” (Counsellor 5) |
| Barriers to adoption (counsellors) – Time to learn, organisational culture, funding. | Training must be embedded in work schedules. Resistance from less tech‑savvy staff. Need for clear ROI to justify budget reallocation. | “If the tool is too difficult to learn, we risk frustration and reduced motivation.” (Counsellor 5) |
| Barriers to adoption (clients) – Digital literacy, language, device access. | Younger refugees adapt faster; seniors may need extra support. Multilingual interfaces essential. Access via agency computer labs, libraries, or shared devices. | “80 % own a mobile phone, but computers are scarce; we need a lab or loaner laptops.” (Researcher note) |
| Current AI tools in use – Predominantly ChatGPT (4/5 agencies) and Microsoft Co‑Pilot for note‑taking. | Used for grammar checks, keyword extraction, mock‑interview question generation. | Many mentioned that their organization is not officially using AI and its use is not formally encouraged; whether to use it is left to the individual. |
| Tools in development / testing – Resume‑keyword matchers, website chatbots, VR soft‑skill simulators (Bodyswaps). | Pilot phases focus on usability and bias testing. | Two out of the five organizations were in the development or testing phase of adopting new AI tools. Both had mentioned that these programs were not funded by the government. |
| Wishlist – Mock‑interview platforms, resume builders, customized job‑search engines, multilingual chatbots, admin‑automation. | Ranked by participant interest (Figure 4). | Tools to help write cover letters and to support language learning (e.g., English classes) were also raised as potentially beneficial. |
| Bias concerns – Potential profiling of refugees, racial/gender bias, political bias (e.g., preferential treatment of Ukrainian refugees). | Calls for algorithmic bias audits before deployment. | “Algorithms may favor ‘rich, white people’; we must guard against that.” (Participant) |
| Policy recommendations – Formal AI‑use guidelines, data‑privacy safeguards, funding for training, mandatory outcome metrics (quality of employment, not just placement counts). | All participants reported that their organizations had no formal policies on AI usage, and risks around data privacy and lack of transparency were barriers to adoption. More funding should be allocated so that settlement agencies can collect and report data such as client education and qualifications, employment type intended and obtained, wages, and satisfaction. | Regardless of the efficiency gains AI adoption may bring, the objective of refugees obtaining employment that matches their qualifications remains difficult to achieve without first establishing ways to measure it, given current data limitations. A policy shift is essential: funding agencies should mandate the collection of this additional data from settlement organizations and use it to set targets for increasing qualification‑matched employment, thereby reducing the deskilling of refugees. |
| Audience | Practical Actions |
|---|---|
| Settlement‑agency leaders / managers | Conduct a needs‑assessment to map which AI tools (mock‑interview, resume‑builder) would yield the greatest time‑savings. Secure pilot funding earmarked for AI training and bias‑audit services. Draft AI‑use policies covering data minimisation, client consent, and vendor vetting. |
| Front‑line counsellors | Participate in short, on‑site AI workshops (e.g., “ChatGPT for résumé polishing”). Adopt a dual‑review workflow: AI‑generated output reviewed by a human before sharing with clients. Share client feedback on AI tools to inform continuous improvement. |
| IT / data‑security teams | Implement privacy‑by‑design: anonymize client data before any AI API call; use on‑premise or encrypted cloud solutions. Run bias‑testing scripts on any recruitment‑related AI model before rollout. |
| Policymakers / funders (e.g., Immigration, Refugees and Citizenship Canada) | Revise settlement‑program metrics to include quality‑of‑employment indicators (alignment with qualifications, wage levels). Allocate grant streams specifically for AI‑adoption pilots that include evaluation components. |
| Researchers / academia | Build on this pilot with a larger mixed‑methods study (survey > 100 agencies, longitudinal outcome tracking). Explore comparative bias analyses of AI tools across refugee sub‑populations. |
| Technology vendors | Design multilingual, low‑bandwidth AI modules tailored to settlement‑agency workflows. Provide transparent model documentation and easy‑to‑use bias‑mitigation dashboards for non‑technical staff. |
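The privacy‑by‑design step recommended for IT teams above — stripping identifying details from client text before it reaches any external AI API — can be illustrated with a minimal sketch. This is not a tool from the study: the `redact` helper and its regex patterns are assumptions for illustration only, and a production system would need a vetted PII‑detection library plus human review.

```python
import re

# Illustrative PII patterns (assumptions for this sketch, not exhaustive).
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(text, known_names=()):
    """Replace emails, phone numbers, and known client names with placeholders
    before the text is sent to any external AI service."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    for name in known_names:  # names the agency already has on file
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text

# Example: anonymize a case note before requesting, say, resume feedback.
note = "Amina Yusuf, amina.yusuf@example.com, 416-555-0199, seeks lab-tech roles."
print(redact(note, known_names=["Amina Yusuf"]))
# → "[NAME], [EMAIL], [PHONE], seeks lab-tech roles."
```

The key design choice is that redaction happens on the agency’s side, so the cloud provider never receives raw identifiers; pairing this with encrypted or on‑premise deployment addresses the counsellors’ stated privacy fears.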
The project combines a qualitative thematic analysis of in‑depth semi‑structured interviews with staff from five agencies across Ontario, British Columbia, and a national organization, capturing both optimism and privacy‑risk anxieties. It also maps concrete tool wish lists (mock‑interview platforms, multilingual chatbots, resume optimizers) and proposes a policy framework for AI governance in settlement contexts.
| Aspect | Detail |
|---|---|
| Design | Qualitative pilot study employing inductive thematic analysis of interview transcripts. |
| Data collection | 13 semi‑structured interviews (July 2024) with staff from five Canadian settlement agencies (three in Ontario, one in British Columbia, one national/international). |
| Participant roles | 7 program managers, 5 client‑facing counsellors, 1 IT expert (some participants held dual roles). |
| Recruitment | Purposive + snowball sampling; agencies contacted via publicly listed emails/phone numbers; consent obtained; ethics approval from McMaster University REB. |
| Interview medium | Virtual Microsoft Teams; video recorded; auto‑transcribed then manually corrected. |
| Analysis | Coding framework iteratively refined; themes identified: Perceptions of AI, Barriers & Mitigations, Applications of AI. Visualised via flow‑chart (Fig 2). |
| Supplementary data | Figures showing participant distribution (Fig 1), AI tool usage (Fig 3), and interest rankings (Fig 4). Table 1 lists example AI tools for settlement services. |
| Limitations acknowledged | Small, non‑random sample; geographic concentration in Ontario/BC; reliance on self‑reported perceptions; no direct client input. |