Abstract
"Immigrant and refugee populations face multiple barriers to accessing mental health services. This scoping review applies the Patient-Centred Access to Healthcare model (Levesque et al., Int J Equity Health 12:18, 2013) to explore the potential of increased access through virtual mental healthcare services (VMHS) for these populations by examining the affordability, availability/accommodation, and appropriateness and acceptability of virtual mental health interventions and assessments.
Accessibility depended on individual (e.g., literacy), program (e.g., computer required) and contextual/social factors (e.g., housing characteristics, internet bandwidth). Participation often required financial and technical support, raising important questions about the generalizability and sustainability of VMHS’ accessibility for immigrant and refugee populations. Given limitations in current research (i.e., frequent exclusion of patients with severe mental health issues; limited examination of cultural dimensions; de facto exclusion of those without access to technology), further research appears warranted."
Conclusion
The COVID-19 pandemic made VMHS a necessity, but in so doing opened up new opportunities for increased access to mental health care for various populations, including immigrants and refugees, and it seems likely that virtual approaches will continue to be promoted [14]. This scoping review suggests that the potential of virtual mental healthcare to reach underserved populations may not be achieved because of insufficient consideration of barriers for those already facing the greatest challenges in accessing care (e.g., those with limited language fluency, digital
literacy or access to devices). This includes neglecting whether the additional supports required to make VMHS accessible (e.g., providing devices and financial support for phone or internet services) will be available in programs once they are beyond the testing and research phase, highlighting the importance of more implementation research. A number of common challenges in VMHS accessibility were identified across this diverse range of interventions and populations; however, this scoping review also identified unique barriers determined by systemic, contextual, clinical and personal characteristics for
immigrant and refugee populations. Such obstacles warrant further attention. We propose that working with the intended user population on the planning and delivery of virtual mental health services will help increase accessibility for these populations, both now and in the future.
The Social Research and Demonstration Corporation (SRDC) has released its initial evaluation of CANN E-Link, a technology platform developed by S.U.C.C.E.S.S. as part of its Community Airport Newcomers Network (CANN). CANN E-Link aims to tackle the challenge of newcomer awareness and engagement with free settlement services during their critical initial months in Canada.
CANN E-Link’s arrival e-notification and information-sharing system seeks to inform newcomers about services and increase the uptake of settlement services by alerting settlement service provider organizations (SPOs), who then reach out to newcomers entering Canada through the Vancouver International Airport (YVR).
S.U.C.C.E.S.S. developed new technology modules to capture the needs assessment and contract information for newcomers arriving through YVR. The Client Referral Service System (CRSS) sends the information electronically to SPOs in the community where the newcomer will initially be staying. SPOs who receive the e-notification reach out to the newcomer to engage and support them in accessing settlement services appropriate for their needs.
SRDC’s evaluation of CANN E-Link included delving into administrative data from the E-Link system, conducting interviews with CANN staff and partner SPOs, and gathering survey data from both CANN and E-Link clients. The evaluation reveals that the CANN E-Link project has been implemented as designed, and is already exceeding expectations in terms of immediate impact and policy objectives.
E-Link has substantially increased newcomers’ contact with an SPO soon after arrival, by 17 percentage points. Use of settlement services and employment services increased by over seven percentage points.
“These impacts corroborate with the positive experience newcomers have with CANN and E-Link,” said Taylor Shek-wai Hui, Research Director, SRDC. “The process was simple and easy to understand, especially if the CANN staff spoke in their preferred language.”
Other key findings from the interim evaluation include:
“CANN E-Link bridges the gap between newcomers and essential services, ensuring a smoother settlement journey and enhancing their prospects for success in Canada,” said Mr. Hui. “The positive feedback we have received from newcomers and the remarkable outcomes we have witnessed thus far exemplifies S.U.C.C.E.S.S. and CANN’s commitment to enhancing newcomer experiences of settling in their new home.”
CANN E-Link was made possible thanks to funding from Immigration, Refugees and Citizenship Canada (IRCC).
SRDC’s final evaluation report, offering further insights into E-Link's impact on settlement service utilization and overall newcomer settlement outcomes within the first six months of arrival, is scheduled for release in March 2024.
Methodology, Scope & Audience
This paper is a compilation of findings from a literature review as well as key informant interviews with AI experts and humanitarians pushing the agenda for digital innovation in humanitarian action.
This paper is presented as a think-brief, intended to start a conversation and provide a concrete stepping stone for those interested in Generative AI. It is not intended to be interpreted or treated as an academic or peer-reviewed paper. Instead, it is a compilation of introductory research intended for a broad audience.
The paper is aimed at humanitarian practitioners and leaders who would like to gain general knowledge of Generative AI or insight into trending strategies for mainstreaming Generative AI tools within their organizations. By providing main topics of concern and recommendations, we lay out the landscape of capabilities and potential pathways for safe and responsible adoption of Generative AI. Organizations can select key takeaways and narrow down their investigations of each topic.
Key messages
Abstract:
This project sought to uncover the reported practices and attitudes towards published research of English language teachers who reported reading or being interested in research and research-oriented publications. The author writes that "the voices, experiences, and perspectives of teachers themselves" tend to be missing from much of the literature on this topic. He aimed to give voice to and learn from these ‘research-interested’ teachers in this report: "the project examined the role of research publications and research-oriented literature in the teachers’ professional lives and in the development of their professional understandings and practices."
"It examined those factors which facilitated or created a barrier to such engagement, and additionally sought to uncover those key areas of research that the teachers saw as priorities, or of particular relevance to themselves. It also explored how, from the teachers’ perspective, such research findings might be made more accessible within the field. Ultimately, therefore, the project sought to find out how, from the standpoint of those teachers who are interested in engaging with research and research-oriented publications, the often-problematic relationship between research and practice in English language teaching (ELT) might start to be addressed."
Why it matters
There is a tremendous amount of research being done and available (though often behind academic paywalls) that doesn't seem to impact Immigrant and Refugee-focused services. This report focuses on English language training but is relevant for a wider sector audience. The main question it seeks to answer is why there is a "breakdown in the ‘interface’ and ‘dialogue’ between research and practice."
Core research questions
The main section of the report seeks to answer these questions:
What did the researcher do?
The project adopted a mixed-method research design combining quantitative and qualitative approaches:
What did the researcher find?
How can you use this research?
Researchers should find new and genuinely collaborative ways of talking to and working with teachers in ways which do not place additional burdens on teachers’ working lives. They should present findings through spoken presentations, short written summaries, posters, online forums and so forth, with research projects developed within a truly collaborative framework in which teachers and researchers cooperate to set research agendas, collect data, and co-author and disseminate findings.
Teachers should seek out both research and research relationships that ensure findings are presented in a practically oriented way, aligned with their classroom concerns. They should access and encourage research that is primarily focused on informing, developing, or confirming their teaching practices.
Professional ELT organizations, associations, and communities of practice are the sources teachers most frequently access for research summaries, and they should include relevant summaries in their content and communication strategies.
This report focuses on global data talent in the social sector. The report reviews the current landscape and offers four pathways forward for building purpose-driven data professionals. With the values of inclusivity, diversity, equity, and accessibility (IDEA) core to this work, Workforce Wanted identifies an opportunity to shape and support a pool of 3.5 million data professionals focused on social impact in low- and middle-income countries (LMICs) over the next ten years.
The four pathways:
The goal of the report is to offer guiding questions and considerations for humanitarian organizations deciding if a chatbot is an appropriate tool to address program and community needs. It also contains use cases highlighting the experiences of practitioners working in diverse geographic contexts and issue areas.
Overview
In recent years, chatbots have offered humanitarian operations the possibility to automate personalized engagement and support, inform tailored program design and gather and share information at a large scale. However, adopting a chatbot is never straightforward, and there are many considerations that should go into doing so responsibly and effectively.
With some humanitarian organizations having experimented with chatbots for several years, many are now interested in taking stock of their experiences and fostering greater awareness of how to design, budget for and maintain chatbots responsibly and effectively.
Responding to these priorities, The Engine Room has developed this report to explore the existing uses, benefits, trade-offs and challenges of using chatbots in humanitarian contexts. This research has resulted from a collaboration with IFRC, ICRC, and a research advisory board consisting of representatives from IFRC, ICRC, the Netherlands Red Cross and UNHCR.
This report:
Main findings
Always consider community needs, design limitations and responsible data practices
Chatbots work best when they are integrated into existing community engagement approaches and communication channels, and when adequate resources are planned for
Abstract
"The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions.
Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse."
Additional context from the report:
"Given the transformative impact of generative language models and the potential risks associated with their misuse,
developing trustworthy and accurate detection methods is crucial. In this study, we evaluate several publicly available GPT detectors on writing samples from native and non-native English writers. We uncover a concerning pattern: GPT detectors consistently misclassify non-native English writing samples as AI-generated while not making the same mistakes for native writing samples. Further investigation reveals that simply prompting GPT to generate more linguistically diverse versions of the non-native samples effectively removes this bias, suggesting that GPT detectors may inadvertently penalize writers with limited linguistic expressions...
We evaluated the performance of seven widely-used GPT detectors on a corpus of 91 human-authored TOEFL essays obtained from a Chinese educational forum and 88 US 8th-grade essays sourced from the Hewlett Foundation’s Automated Student Assessment Prize (ASAP) dataset. The detectors demonstrated near-perfect accuracy for US 8th-grade essays. However, they misclassified over half of the TOEFL essays as "AI-generated" (average false positive rate: 61.22%). All seven detectors unanimously identified 18 of the 91 TOEFL essays (19.78%) as AI-authored, while 89 of the 91 TOEFL essays (97.80%) were flagged as AI-generated by at least one detector. For the TOEFL essays that were unanimously identified, we observed that they had significantly lower perplexity compared to the others (P-value: 9.74E-05). This suggests that GPT detectors may penalize non-native writers with limited linguistic expressions...
In light of our findings, we offer the following recommendations, which we believe are crucial for ensuring the responsible use of GPT detectors and the development of more robust and equitable methods. First, we strongly caution against the use of GPT detectors in evaluative or educational settings, particularly when assessing the work of non-native English speakers. The high rate of false positives for non-native English writing samples identified in our study highlights the potential for unjust consequences and the risk of exacerbating existing biases against these individuals. Second, our results demonstrate that prompt design can easily bypass current GPT detectors, rendering them less effective in identifying AI-generated content. Consequently, future detection methods should move beyond solely relying on perplexity measures and consider more advanced techniques, such as second-order perplexity methods [17] and watermarking techniques [34, 35]. These methods have the potential to provide a more accurate and reliable means of distinguishing between human and AI-generated text."
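The perplexity-based flagging the study describes can be sketched in a few lines: a detector scores how predictable a text is to a language model and flags low-perplexity (highly predictable) text as AI-generated, which is why constrained, formulaic writing from non-native speakers is at risk of false positives. The threshold and token log-probabilities below are illustrative assumptions, not values or code from the study.

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean log-probability
    a language model assigns to each token of the text."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def flag_as_ai(token_logprobs, threshold=25.0):
    """Flag text as 'AI-generated' when its perplexity falls below
    the threshold, i.e. the text is highly predictable to the model.
    The threshold here is an illustrative assumption."""
    return perplexity(token_logprobs) < threshold

# Made-up log-probs: a formulaic essay whose tokens the model finds
# predictable vs. a more linguistically varied one.
formulaic = [-1.2, -0.8, -1.0, -0.9, -1.1, -0.7]
varied    = [-3.5, -2.9, -4.1, -3.2, -3.8, -3.0]

print(flag_as_ai(formulaic))  # low perplexity -> flagged as AI
print(flag_as_ai(varied))     # high perplexity -> treated as human
```

The sketch makes the study's equity concern concrete: any writer whose vocabulary and phrasing happen to be predictable to the model, regardless of who actually wrote the text, lands on the "AI" side of a fixed perplexity threshold.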
This is an important report for those of us in the Immigrant and Refugee-serving sector. The release is quoted in full below.
"Western technology companies have long struggled with offering their services in languages other than English. A combination of political and technical challenges have impeded companies from building out bespoke, automated systems that function in even a fraction of the world’s 7,000+ languages. With large language models powered by machine learning, online services think they’ve solved the problem. But have they?
A new report from CDT examines the new models that companies claim can analyze text across languages. The paper explains how these language models work and explores their capabilities and limits.
Large language models are a relatively new and buzzy technology that power all sorts of content generation and analysis tools. You’ve read about them in articles about ChatGPT and other generative AI tools that produce “human”-sounding text. But these models can also be adapted to analyze text. Companies already use large language models to moderate speech on social media, and may soon incorporate these tools into systems in other areas such as hiring and making public benefits decisions.
In the past, it has been difficult to develop AI systems — and especially large language models — in languages other than English because of what is known as the resourcedness gap. This gap describes the asymmetry in the availability of high quality digitized text that can serve as training data for a model. English is an extremely highly resourced language, whereas other languages, including those used predominantly in the Global South, often have fewer examples of high quality text (if any at all) on which to train language models.
Recently, developers have started to contend that they can bridge that gap with a new technology called multilingual language models: large language models trained on text from multiple languages at the same time. Multilingual language models, they claim, infer connections between languages, allowing them to uncover patterns in higher resourced languages and apply them to lower resourced languages. In other words, by training on lots of data from lots of languages, multilingual language models can more easily be adapted to tasks in languages other than English.
Language models in general, and multilingual language models in particular, may allow for the creation of exciting new technologies. An effort to increase access to online services in multiple languages will certainly be a step in the right direction. They may even help to open up different opportunities and access to information for people who speak one of the many languages that are currently rarely supported by online services.
However, while multilingual language models show promise as a tool for content analysis, they also face key limitations:
These shortcomings are amplified when the models are used in high-risk contexts. If these models are used to scan asylum applications, for example, errant systems may limit a user’s ability to access safety. In content moderation, misinterpretations of text can result in takedowns of posts, which may erect barriers to information, particularly where little content in a particular language is available.
To adequately assess if these models are up to the task, we need to know more. Governments, technology companies, researchers, and civil society should not assume these models work better than they do, and should invest in greater transparency and accountability efforts in order to better understand the impact of these models on individuals’ rights and access to information and economic opportunities. Crucially, researchers from different language communities should be supported and be at the forefront of the effort to develop models and methods that build capacity for tools in different languages.
This new report is the third in a series published by CDT on the capabilities and limits of automated content analysis technology; the first focused on English-language social media content analysis technology and the second on multimedia content analysis tools.
As part of this project, we are proud to announce that we have translated the executive summary of this paper into three additional languages: Arabic, French, and Spanish.
Read the executive summary in English here."
The articles in this edition of Canadian Diversity should provide you with some inspiration and ideas. We hope they make you want to learn more about the projects and research, and encourage you to share what you’re working on. There is so much innovation in our sector, and too much of it stays under the radar. With this publication, we’re looking to bring some of that innovation to light. Let’s make this a starting point for an ongoing sector conversation.
Your peers and colleagues here offer you insights for our digital transformation roadmap. This is a conversation that needs to happen at scale in the Immigrant and Refugee-serving sector. It is core to the future of our sector and how we work and serve. It needs to involve everyone, from frontline workers and middle management to leadership and funders, with Newcomers at the centre. It takes time, effort, and investment.
But we need to be intentional about it. There is much wisdom in the room. Huge amounts of experience. It’s time to tap into it, together.
Imagine the conversation we could have in another 25 years if we get it right this time.
Articles: