This environmental scan provides a detailed, sector-specific overview of AI adoption in Canadian settlement services, highlighting both the promise and the challenges of integrating AI in a human-centered, ethical, and collaborative manner. It offers actionable recommendations for organizations, policymakers, and researchers, and sets the stage for future work in this evolving area.
Researchers examined the current landscape of artificial intelligence (AI) tools and applications in the settlement services sector, focusing on how AI can support newcomer service delivery. The report analyzes current AI tools, their use cases, costs, strengths, and limitations, and offers considerations for organizations implementing AI. It concludes that AI technology should supplement, not replace, human interaction and staff positions, reflecting the sector's commitment to human-centered service.
What do you need to know?
The guiding questions were:
What are the potential applications of AI in settlement services?
What existing AI tools are being used, and what are their use cases, costs, strengths, and limitations?
What guidelines and policies should organizations consider when using AI?
What are the current implementation practices of AI tools in service provider organizations?
What considerations (legal, privacy, security, accessibility, staff training) must be addressed when using AI?
Context and Relevance:
Settlement service organizations in Canada are increasingly exploring AI to enhance their work.
AI is being used for skill matching, language support, and improved service accessibility.
The research is timely due to growing interest in AI’s potential to increase efficiency, improve accessibility, and generate actionable insights for program delivery.
The report maps the current state of AI adoption and surfaces sector-specific challenges, such as the need for a human-centered approach and alignment with organizational values.
What did the researchers find?
Key Highlights:
Benefits of AI:
Automation and service enhancement: AI can automate repetitive tasks, freeing staff for direct client engagement.
Improved accessibility: Tools like chatbots extend reach, offering information and services remotely and outside normal hours.
Data-driven insights: Analytics support reporting and program optimization.
Challenges and Limitations:
Human-centered approach: AI should supplement, not replace, human interaction and jobs.
Alignment with values: AI adoption must fit organizational missions and values.
Data privacy and security: Ongoing monitoring and clear guidelines are essential.
Cost management: Expenses vary widely depending on customization, integration, and subscription needs.
Guidelines and Policies:
Inclusivity: Involve staff, leadership, and clients in AI discussions.
Compliance: Follow relevant legislation and best practices from other sectors.
Transparency: Be open with stakeholders about AI use and practices.
Opportunities for Collaboration:
Regional and sectoral organizations, such as Local Immigration Partnerships, can facilitate knowledge sharing and capacity building for AI in settlement services.
What are some particularly interesting themes and outlier findings?
Sector-specific caution: There is a strong emphasis on ethics, equity, and the risks of data misuse, not just technical or operational concerns.
AI ‘hallucinations’ and misinformation: The report references research showing newcomers may be vulnerable to incorrect or biased information from generative AI tools, highlighting a unique risk in this context.
Interest in collaborative learning: Organizations are eager to learn from each other and develop shared guidelines, rather than working in isolation.
Diversity of AI adoption: While some organizations are experimenting with advanced tools, others are just beginning to explore AI, reflecting a wide range of familiarity and comfort across the sector.
How can you use this research?
For Service Provider Organizations:
Use the report’s findings to evaluate and guide AI adoption, ensuring alignment with organizational values and mission.
Develop or update internal guidelines for ethical, inclusive, and transparent AI use.
Prioritize staff and client involvement in discussions about AI implementation.
Leverage sector resources and collaborative opportunities for capacity building and shared learning.
For Policymakers and Funders:
Support the development of sector-wide standards and guidelines for AI use in settlement services.
Fund training and capacity-building initiatives focused on ethical and responsible AI adoption.
Encourage research on the impacts of AI on newcomer experiences, especially regarding data privacy and equity.
For Academics and Researchers:
The report identifies a gap in sector-specific research and calls for further study on the impacts, risks, and best practices for AI in settlement services.
Future research should focus on longitudinal impacts, client outcomes, and the development of sector-specific ethical frameworks.
What did the researchers do?
Methods and Activities:
Literature review: Covered scholarly and professional articles, webinars, presentations, and sector documents.
Survey: Conducted with TEQ LIP partner organizations (8 respondents, 14.8% response rate), assessing familiarity, comfort, and knowledge of AI tools.
Key informant interviews: Three interviews conducted, with written input from a fourth expert.
Stakeholder demographics: Respondents represented organizations providing employment, health, case management, and settlement services, with most from organizations with over 100 full-time employees.