I’ve been facilitating a small group of Immigrant and Refugee-serving sector folks working on digital projects. We have a monthly gathering, and an email list/discussion group to chat in between live sessions.
I recently shared an episode of my Technology in Human Services podcast in which I speak with Meenakshi (Meena) Das about focusing not just on AI, but on human-centric AI.
It generated some great comments, building on and adding to Meenakshi’s insights, which I wanted to share here (with their permission). The comments come from a couple of people working on AI projects in their organizations. The richness of what they share lies not only in what they’re working on, which is super useful to know and share, but also in how they’re approaching AI to ensure it is human-centric and complements their work.
I really enjoyed listening to the podcast episode with Meenakshi – it was very thought-provoking and included practical examples, which I love!
Just a little intro to our AI-based work… ACCES Employment has a virtual attendant on our website and a streamlined version of the chatbot responds to inquiries that come into our Facebook Messenger app. VERA (Virtual Employment & Resource Attendant) offers users both a guided conversation (menu options that cover the most common inquiries) and spontaneous conversation (users can ask their own questions in their own words at any time).
We’ve attempted to bridge the machine-to-human divide by “embodying” VERA with an avatar, asking for the person’s first name and using it in conversation, and responding with common phrases and punctuation to make her written speech sound natural. Based on some user entries, people have thought they were connecting with a live agent via chat, so maybe we need to make it more obvious that she is not real! The AI comes into play during spontaneous conversation, where VERA uses natural language processing to identify the most accurate response from a library of pre-programmed responses. When we built VERA, we provided several examples of questions that would lead to the same answer. As actual users make inquiries, VERA’s confidence grows because she is able to add to the range of ways that people ask for the same information. The more times VERA has an opportunity to provide a response, the more confident she becomes in providing it. We review a sample of her conversations each month to confirm accuracy and remediate when needed (not this answer but that one), or to identify trends in questions that we don’t yet have a response for but should.
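For readers curious what this style of response matching can look like under the hood, here is a minimal sketch of a retrieval-based FAQ bot: it compares a user’s question against example phrasings for each pre-programmed answer and only replies when the match is confident enough, otherwise handing off to staff. This is purely illustrative; VERA runs on its own platform, and the intents, example phrasings, and 0.5 threshold below are invented assumptions, not ACCES’s actual configuration.

```python
# Illustrative sketch of retrieval-style intent matching for an FAQ chatbot.
# The intents, example phrasings, and threshold are invented, not VERA's setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each intent has a canned answer plus example phrasings; adding more phrasings
# harvested from real conversations is what raises matching confidence over time.
intents = {
    "hours": {
        "answer": "Our offices are open Monday to Friday, 9am to 5pm.",
        "examples": ["What are your hours?", "When are you open?", "Opening times please"],
    },
    "register_event": {
        "answer": "You can register for upcoming events on our Events page.",
        "examples": ["How do I sign up for a workshop?", "Register me for the job fair"],
    },
}

# Build one vector space over every example phrase.
phrases, labels = [], []
for name, intent in intents.items():
    for example in intent["examples"]:
        phrases.append(example)
        labels.append(name)
vectorizer = TfidfVectorizer().fit(phrases)
phrase_vectors = vectorizer.transform(phrases)

def respond(user_message: str, threshold: float = 0.5) -> str:
    """Return the best-matching canned answer, or hand off to a human."""
    scores = cosine_similarity(vectorizer.transform([user_message]), phrase_vectors)[0]
    best = scores.argmax()
    if scores[best] < threshold:
        # Low confidence: log the message for the monthly review and route to staff.
        return "I'm not sure -- let me connect you with a staff member."
    return intents[labels[best]]["answer"]

print(respond("when do you open"))
```

In a setup like this, the low-confidence exchanges that get logged and reviewed each month become new example phrasings, which is essentially how the “confidence grows over time” behaviour described above comes about.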
We built VERA to support our staff by diverting some of the routine inquiries (e.g., hours, location, program descriptions) and tasks (e.g., registering for an event, logging interest in a program) they would typically spend hours (cumulatively) answering. This allows staff more time to interact with their clients and do the complex work that machines will never be able to do. VERA is available 24/7. New registrations and leads appear in our client management database, where staff are already accustomed to tending to leads that come in through traditional channels like walk-ins and phone calls.
In these ways, we are using this technology to increase access to information and to our services, complementing rather than competing with the work our staff do. VERA helps get the conversation started and meets initial needs and requirements, and then hands it off to the human experts to do the amazing things they do! I think this makes it human-centric, but it’s a great lens through which I will continue to examine what we are doing.
You can learn more about ACCES Employment’s approach to AI, data, and service delivery in this 2021 Metropolis presentation on how cutting-edge technology can support and enhance current models of service delivery through automation and solid data analysis. The workshop explored the role AI and data analytics can play in enhancing service delivery, shared current examples and practical applications from the settlement sector, discussed the resources required to realize these kinds of solutions, and emphasized the value corporate partners can bring in identifying and achieving tech solutions.
Farrah is the WE Value Partnership Project Coordinator at the YMCA of Southwestern Ontario.
Since I first saw VERA, I have appreciated ACCES’s approach of diverting the routine and having always-available support so that staff have the resources to do what they’re best at. The monthly review of VERA’s conversation logs demonstrates the value of an emphasis on continuous education and growth—not just for us, but for our machines, too. (Also, I love VERA's overall design/UX—big fan!)
For the WE Value Partnership, we've partnered with the University of Windsor to explore AI functionality with K2, the platform that supports our settlement assessment and partner portal. Currently, the AI under development is a recommender system to improve the process of building settlement plans and a chatbot that can help clients with routine settlement actions. Meena and Marco’s discussion really dug into many of the questions that have arisen in my work at WE Value.
AI and Frontline Replacement in Human-Centred Services
The concern that staff will be replaced by AI is one we’ve taken seriously. From textile machinery’s effects on textile workers (Mohammed, 2019) to AI image generators’ effects on artists (Cooper, 2022; Nolan, 2022), automation has threatened livelihoods throughout history and into the present. Staff in human-centred services are right to take an interest in how AI will be used in their workplaces, and what that may mean for their future—everyone is.
As an alternative, Meena proposes a symbiotic relationship between humans and AI, where we neither worship technology as a magic bullet nor fear it as something that will replace us and destroy human connection. Instead, we work with AI to achieve what we couldn’t do alone. This resonates with a lot of what I’ve read in the field (e.g., Wilson & Daugherty, 2018; Colson, 2019). AI is best at doing things that humans aren’t as good at (complex calculations) and tend not to want to spend all day doing (repetitive tasks), so it can complement human strengths rather than replace people. By seeking symbiosis rather than replacement, leaders can make strong decisions about how to improve programs with AI implementation. Further, this approach empowers staff to use technology to make their lives better, rather than resisting it as a threat to their work.
There is another part to the threat: at least for now, AIs simply do not have the data they would need to replace frontline workers. AIs tend to require massive amounts of data to train on, and much of what frontline staff draw on (verbal communication with clients, body language, etc.) is not yet recorded in any dataset. Maybe that’s just a matter of time (recording your interactions in the Metaverse and training an AI on that?), but for now, there is a wealth of information that humans access without thinking and that AIs can’t reach. Symbiosis isn’t just the best path forward: it is, in many ways, the only feasible option, because only staff can bring that knowledge, depth, and nuance of understanding to helping newcomers on their settlement journey.
What Meena said about people frequently feeling disempowered and unable to dictate their own fate in the face of technological change resonates with what I see even outside of work, and encouraging technological adoption requires that we meet this head-on. If we want human-centred service, we must also centre the humanity of those providing services. We make better technology when we ask the people who will be using that technology, our staff, “What part of your job do you want automated, what part do you want to speed up, so you can spend more of your time doing the work that is valuable and enriching to you and the community?” Ensuring that staff guide decisions about AI implementation empowers them to improve their own working lives as well as the services they provide.
The timing of my hiring at the Y was such that one of my first tasks was eliciting user stories from staff about how they would want to see AI support their work. This put that frontline perspective in my mind right from the start, helping me ask what both staff and clients want and need. Our staff had so much enthusiasm for ways to spend more time with clients, build deeper connections, and provide better support. We orient our technological solutions towards these values that drive our staff.
Data, Ethics, and Equity
We know the stories of models trained on biased datasets and reproducing biased decisions (e.g., Angwin et al., 2016; Begley, 2020; Samuel, 2021; Samuel, 2022). Meena’s assertion that good data is foundational reflects an adage I hear again and again in discussions of biased algorithms: “garbage in, garbage out”—or, more poetically, “Feeding AI systems on the world’s beauty, ugliness, and cruelty but expecting it to reflect only the beauty is a fantasy” (Prabhu and Birhane, 2020, drawing on Ruha Benjamin). How can we expect AI not to learn what we teach it? We must ensure it is fair.
That, in itself, is a more complex question than it may first appear (Verma, 2018). In the field of AI, I have seen no fewer than 21 kinds of fairness identified:
One definition is “procedural fairness,” where an algorithm is understood as fair as long as the procedure it uses is “fair” because it applies the same method to everyone. However, historical prejudices and biases can mean that judging different people by the same method is unfair, because it will impact one group more than another. For example, an ethnic group may historically have had less access to wealth, disqualifying its members from approval for a loan when the same cut-off point is used for everyone. The algorithm is procedurally fair, but because of historical injustices, it has disparate impact.
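To make the loan example concrete, here is a small sketch (all numbers invented) of how a single cut-off applied identically to everyone, which is procedurally fair, can still approve one group far more often than another. The “four-fifths rule” ratio at the end is one common heuristic for flagging disparate impact, not a definitive or legal test.

```python
# Sketch of how a "procedurally fair" single cutoff can still have disparate
# impact. All scores and groups below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Applicant:
    group: str    # demographic group, e.g. "A" or "B"
    score: float  # score produced by the same procedure for everyone

applicants = [
    Applicant("A", 680), Applicant("A", 720), Applicant("A", 650), Applicant("A", 700),
    Applicant("B", 600), Applicant("B", 640), Applicant("B", 690), Applicant("B", 610),
]

CUTOFF = 660  # identical threshold for every applicant ("procedural fairness")

def approval_rate(group: str) -> float:
    members = [a for a in applicants if a.group == group]
    approved = [a for a in members if a.score >= CUTOFF]
    return len(approved) / len(members)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
print(f"Group A approved: {rate_a:.0%}, Group B approved: {rate_b:.0%}")

# Four-fifths rule heuristic: a ratio below 0.8 is a common flag for disparate impact.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Disparate impact ratio: {ratio:.2f}{'  <-- flag' if ratio < 0.8 else ''}")
```

With these invented numbers, 75% of Group A clears the cut-off but only 25% of Group B does, so the identical procedure still lands very differently on the two groups.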
There are hands-on explorations of this issue, such as MIT Technology Review’s “game” that asks you to set a fair threshold for an algorithm that determines whether a defendant is considered “high risk” and therefore should not be released while awaiting trial (Hao and Stray, 2019). This is a great tool to get people thinking, drawing from a real algorithm used in US judicial systems. Vox (Samuel, 2022) also provides a general overview of different kinds of fairness. Further, it raises a question of fairness that Meena brought up at the end of this episode: should this technology even be made? For example, we may ask, is facial recognition technology ever “fair,” no matter how accurate it is, if it necessarily invades privacy and treats people as guilty by default? (Samuel, 2019; Johnson, 2022; and see Barrett, 2022, on the question of emotion recognition). Some technology violates privacy or comes at the cost of blocking us from the community building that is so essential to all human services. In that case, should we even build it?
How, then, do we make ethical AI? Of course, in development, we start by getting the best dataset we can, and by analysing and understanding where bias may exist in the data: less garbage in. We can also train AI, fine-tuning it away from discrimination. Moreover, we can assess what kinds of fairness are most relevant in our work and make conscious, clearly expressed decisions about how we will implement them.
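As a rough illustration of why that choice of fairness definition matters, the sketch below (with invented labels and predictions) scores the same set of decisions against two common definitions: demographic parity (equal selection rates across groups) and equal opportunity (equal true positive rates). The decisions look fair under one definition and not the other, which is exactly the kind of trade-off a team has to decide on and document explicitly.

```python
# Sketch comparing two fairness definitions on the same (invented) decisions,
# to illustrate why a team must choose which definition to prioritise.
from typing import List

def rate(values: List[int]) -> float:
    return sum(values) / len(values) if values else 0.0

# One entry per person: their group, whether they truly qualified (y_true),
# and whether the system gave them a positive decision (y_pred). Invented data.
group  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
y_true = [ 1,   1,   1,   0,   0,   1,   0,   0,   0,   0 ]
y_pred = [ 1,   1,   0,   0,   0,   1,   1,   0,   0,   0 ]

def selection_rate(g: str) -> float:
    """Demographic parity compares how often each group gets a positive decision."""
    return rate([p for grp, p in zip(group, y_pred) if grp == g])

def true_positive_rate(g: str) -> float:
    """Equal opportunity compares, among people who truly qualify, who gets approved."""
    return rate([p for grp, p, t in zip(group, y_pred, y_true) if grp == g and t == 1])

for g in ("A", "B"):
    print(f"Group {g}: selection rate {selection_rate(g):.0%}, "
          f"true positive rate {true_positive_rate(g):.0%}")
# Both groups have a 40% selection rate (demographic parity holds), yet qualified
# people in Group A are approved less often than in Group B (equal opportunity fails).
```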
We can also set safeguards at the other end by keeping the people who work with AI—our tech leaders, our frontline staff—informed about the ethical risks and how to spot them. It is often the frontline staff, working with AI in interaction with humans every day, who are most likely to see the problems that may have been missed in development. If those who use the technology every day know what the risks are in our datasets and the models of fairness we’ve used, they will be more alert to possible lapses, and if they feel empowered to report on these, issues will be caught sooner. This is another face of the symbiosis Meena proposed.
We can also always keep in mind that last question Meena raised: should we even be making this? There is a great deal of potential in AI. It’s essential that we explore the potential that is in line with our values and remain conscious of where interesting new technology may instead go against the values we want to enact in our work.
WE Value is fortunate to have University of Windsor grad students who specialise in privacy and ethics in AI dedicated to reviewing the project and mitigating risks. This is also a field I’ve been compiling general-knowledge resources on so that our entire team can be equipped and mobilised to ensure our AI systems are safe and fair.
Humans Centring Humans
It is, at least for now, humans who design the technology that humans use, so we must centre ourselves in our projects. Two of Meena’s proposals to get people involved in this work stand out to me: asking questions, and discussing the AI already in our systems as a starting point.
Often, non-profit organisations like ours develop AI in partnership with others. When we on the business side ask questions and actively seek transparency and accountability from developers, we set a standard of transparency and accountability as essential features for any technological solution in our sector. A basic foundational knowledge empowers us to ask the questions that matter: What datasets are used, and do they have biases? What types of fairness are we using, and what types of fairness will we not be able to achieve as a result? What impacts will our technology have? Who will be accountable for harms it may cause?
But there are barriers to getting people ready to discuss AI at all, from preconceptions to intimidation and hopelessness, and that’s why I love the idea of talking about the AI we already have in our systems as a starting point for conversations. Even as I type this, autosuggest is eagerly feeding me the next word to speed up my writing process (to unpack that: Coldwell, 2021), an AI I only recently realised was there. Meena’s suggestion to identify the AI all around us is a great first step for encouraging awareness and decision making about the AI systems we have and may implement. It opens the door to discussion throughout our organisations on fairness, explainability, robustness, transparency, and privacy, so that when our AI does what we ask, we won’t regret that (see Wolchover, 2020).
These are some initial explorations of questions raised in the podcast. Other questions come to mind, too, as we dive into this issue: How do we ensure that not just our frontline staff, but our partners and clients, can ask questions and make decisions about how AI is part of their lives? What do we need to communicate as groundwork? What kinds of feelings, knowledge, concerns, and values do we need to elicit? How do we ensure our AI projects result in something that isn’t forced upon people but is happily and safely embraced?
These are questions I look forward to exploring with my team and this community of tech-savvy practitioners. Thank you for all the informed, thoughtful ideas about a subject that will only be growing more relevant over time.