
Implementing an Artificial Intelligence (AI) Project in Settlement Services Requires an Engaged and Informed Staff

By: Marco Campana
April 1, 2023

(This is one in a series of 10 articles extracted from the publication Canadian Diversity: Technology in the Settlement Sector (2023). I'll be posting each article as a separate post here on my site.)

Farrah Nakhaie works as Project Coordinator, WE Value Partnership, at the YMCA of Southwestern Ontario. She received her Doctor of Philosophy in English Literature from Western University.

This article explores how to create team readiness for adoption of new AI tools in a service provider organization and how an informed and engaged staff ensures that AI adoption in settlement is consistent with organizational values.

At the WE Value Partnership, we are in what we hope is the beginning of an AI journey. In partnership with the University of Windsor, we are exploring AI functionality in our settlement assessment and partner portal in the form of a recommendation system and chatbot. To make effective decisions about a new technology and adopt it well, our team has needed to become informed and engaged on issues around AI. We recommend developing a knowledge base for staff reference and engaging staff in regular discussion about relevant issues.

First, we developed a resource library for our team, with articles, videos, interactive training modules, and podcasts relevant to the AI project. To ensure usability, we focused on non-specialist resources that a general audience could draw on. To ensure speed of access, we organized the resources around the themes most important to our project (e.g., business needs, fairness, chatbots). We included a short summary for each resource so staff can quickly scan its main focus and points and decide whether the full resource is relevant. A designated staff member updates the library regularly, and other staff can suggest additions as they find them. The library provides a foundation for staff understanding of AI’s potential, risks, and limits, as well as where AI currently sits in our sector.

Active engagement, however, is needed to mobilize this knowledge. Carving out time dedicated exclusively to professional development on AI can be difficult given the time demands on our staff. To respond to this limit on available time, it helps to repurpose existing meetings for periodic discussions relevant to staff. For example, our staff holds weekly meetings where we share information and ask questions about all aspects of our work. We have identified these meetings as a perfect space for brief presentations and discussions on AI. This has the further benefit of presenting AI issues as part of, rather than separate from, the workflows and ideas we regularly engage with at our meetings.

Together, this knowledge base and engagement support expectation setting around AI. For many, AI is more mythologized than grounded in practice. The benefits and risks are often exaggerated in common discourse and media coverage; what an NGO can actually, tangibly implement with its resources and datasets is far more modest. A space for the discussion of desires and assumptions ensures that everyone knows what our AI project can and cannot do.

Perhaps even more importantly, keeping staff informed and engaged keeps our AI project consistent with our organization’s values. AI amplifies everything: its promise and its threat are that it scales everything we put into it. An AI’s capacity for harm is unmatched by a human’s capacity for the same because an AI can replicate that harm infinitely. The same is true of AI’s capacity to help. AI implementation can be seen as a question of amplification: What do you want more of? And what do you not want more of, yet might be hiding in your work if you are not careful about what you are doing?

Staff are essential to answering these questions because they are our organization’s heart: they are what sets an organization’s values. Knowledge and support for discussion enable them to do so. An informed staff that understands the capabilities, limits, and risks of AI can participate at every step of the process. In conceptualization, they can provide user stories and suggestions that steer development towards where AI can do the most good and can give early warning of its harms. In development, they can identify where a project has gone astray and suggest alternatives, flagging risks in data sources or models. In implementation, an informed and empowered staff can identify and report issues as they arise. This last is most vital, because once development is done and the AI is in active use, any issues could become very real for very many people if they are not quickly identified and corrected. It is staff, working with technology and clients every day, who are in the best position to catch problems as they emerge.

As our partnership continues to explore AI, and as AI finds wider and more embedded use in business, we will explore new projects and new methods of implementation. The foundation for all of these projects must be an informed, engaged staff who are able to have faith in each project and advocate for our organization’s values.

Related:

Human-centric AI - some Immigrant and Refugee-serving sector promising practices

I recently shared an episode of my Technology in Human Services podcast in which I speak with Meenakshi (Meena) Das about focusing not just on AI, but on Human-centric AI. It generated some great comments from a couple of people working on AI projects in their organizations, building on and adding to Meenakshi’s insights, which I wanted to share here.
