The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions.

Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse."

Additional context from the report:

"Given the transformative impact of generative language models and the potential risks associated with their misuse,
developing trustworthy and accurate detection methods is crucial. In this study, we evaluate several publicly available GPT detectors on writing samples from native and non-native English writers. We uncover a concerning pattern: GPT detectors consistently misclassify non-native English writing samples as AI-generated while not making the same mistakes for native writing samples. Further investigation reveals that simply prompting GPT to generate more linguistically diverse versions of the non-native samples effectively removes this bias, suggesting that GPT detectors may inadvertently penalize writers with limited linguistic expressions...

We evaluated the performance of seven widely-used GPT detectors on a corpus of 91 human-authored TOEFL essays obtained from a Chinese educational forum and 88 US 8th-grade essays sourced from the Hewlett Foundation’s Automated Student Assessment Prize (ASAP) dataset. The detectors demonstrated near-perfect accuracy on the US 8th-grade essays. However, they misclassified over half of the TOEFL essays as "AI-generated" (average false positive rate: 61.22%). All seven detectors unanimously identified 18 of the 91 TOEFL essays (19.78%) as AI-authored, while 89 of the 91 TOEFL essays (97.80%) were flagged as AI-generated by at least one detector. The unanimously flagged TOEFL essays had significantly lower perplexity than the others (P-value: 9.74E-05). This suggests that GPT detectors may penalize non-native writers with limited linguistic expressions...

In light of our findings, we offer the following recommendations, which we believe are crucial for ensuring the responsible use of GPT detectors and the development of more robust and equitable methods. First, we strongly caution against the use of GPT detectors in evaluative or educational settings, particularly when assessing the work of non-native English speakers. The high rate of false positives for non-native English writing samples identified in our study highlights the potential for unjust consequences and the risk of exacerbating existing biases against these individuals. Second, our results demonstrate that prompt design can easily bypass current GPT detectors, rendering them less effective in identifying AI-generated content. Consequently, future detection methods should move beyond solely relying on perplexity measures and consider more advanced techniques, such as second-order perplexity methods [17] and watermarking techniques [34, 35]. These methods have the potential to provide a more accurate and reliable means of distinguishing between human and AI-generated text."
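The perplexity finding above is easier to picture with a small sketch. Perplexity is the exponential of the average negative log-likelihood a language model assigns to a text's tokens: formulaic, predictable word choices yield high token probabilities and therefore low perplexity, which is the signal simple detectors read as "AI-generated". The probability values below are illustrative, not output from any real detector:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-likelihood
    of the observed tokens under a language model."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Hypothetical per-token probabilities assigned by a language model.
predictable = [0.9, 0.8, 0.85, 0.9]   # limited, formulaic word choices
varied      = [0.3, 0.1, 0.25, 0.2]   # more diverse word choices

print(perplexity(predictable) < perplexity(varied))  # prints True
```

On this measure, constrained but entirely human writing can score lower (more "machine-like") than linguistically diverse writing, which is consistent with the bias the study reports.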


This is an important report for those of us in the Immigrant and Refugee-serving sector. The release is quoted in full below.

"Western technology companies have long struggled with offering their services in languages other than English. A combination of political and technical challenges has impeded companies from building out bespoke, automated systems that function in even a fraction of the world’s 7,000+ languages. With large language models powered by machine learning, online services think they’ve solved the problem. But have they?

A new report from CDT examines the new models that companies claim can analyze text across languages. The paper explains how these language models work and explores their capabilities and limits.

Large language models are a relatively new and buzzy technology that power all sorts of content generation and analysis tools. You’ve read about them in articles about ChatGPT and other generative AI tools that produce “human”-sounding text. But these models can also be adapted to analyze text. Companies already use large language models to moderate speech on social media, and may soon incorporate these tools into systems in other areas such as hiring and making public benefits decisions.

In the past, it has been difficult to develop AI systems — and especially large language models — in languages other than English because of what is known as the resourcedness gap. This gap describes the asymmetry in the availability of high quality digitized text that can serve as training data for a model. English is an extremely highly resourced language, whereas other languages, including those used predominantly in the Global South, often have fewer examples of high quality text (if any at all) on which to train language models.

Recently, developers have started to contend that they can bridge that gap with a new technology called multilingual language models: large language models trained on text from multiple languages at the same time. Multilingual language models, they claim, infer connections between languages, allowing them to uncover patterns in higher resourced languages and apply them to lower resourced languages. In other words, by training on lots of data from lots of languages, multilingual language models can more easily be adapted to tasks in languages other than English.

Language models in general, and multilingual language models in particular, may allow for the creation of exciting new technologies. An effort to increase access to online services in multiple languages would certainly be a step in the right direction. These models may even help to open up new opportunities and access to information for people who speak one of the many languages that are currently rarely supported by online services.

However, while multilingual language models show promise as a tool for content analysis, they also face key limitations:

  1. Multilingual language models often rely on machine-translated text that can contain errors or terms native language speakers don’t actually use. 
  2. When multilingual language models fail, their problems are hard to identify, diagnose, and fix.
  3. Multilingual language models do not and cannot work equally well in all languages.
  4. Multilingual language models fail to account for the contexts of local language speakers.

These shortcomings are amplified when the models are used in high-risk contexts. If these models are used to scan asylum applications, for example, errant systems may limit a user’s ability to access safety. In content moderation, misinterpretations of text can result in takedowns of posts, which may erect barriers to information, particularly where little information is available in a given language.

To adequately assess if these models are up to the task, we need to know more. Governments, technology companies, researchers, and civil society should not assume these models work better than they do, and should invest in greater transparency and accountability efforts in order to better understand the impact of these models on individuals’ rights and access to information and economic opportunities. Crucially, researchers from different language communities should be supported and be at the forefront of the effort to develop models and methods that build capacity for tools in different languages.

This new report is the third in a series published by CDT on the capabilities and limits of automated content analysis technology; the first focused on English-language social media content analysis technology and the second on multimedia content analysis tools.

As part of this project, we are proud to announce that we have translated the executive summary of this paper into three additional languages: Arabic, French, and Spanish.

Read the full report here.

Read the executive summary in English here.

Résumé exécutif – Français.

Resumen ejecutivo – Español.

الملخص التنفيذي – العربية.


The articles in this edition of Canadian Diversity should provide you with some inspiration and ideas. We hope you will want to learn more about the projects and research, and that they will encourage you to share what you’re working on. There is so much innovation in our sector, and too much of it stays under the radar. With this publication, we’re looking to reveal some of that innovation to you. Let’s make this a starting point for an ongoing sector conversation.

Your peers and colleagues here offer you insights for our digital transformation roadmap. This is a conversation that needs to happen at scale in the Immigrant and Refugee-serving sector. It is core to the future of our sector and how we work and serve. It needs to involve everyone, from frontline workers and middle management to leadership and funders, with Newcomers at the centre. It takes time, effort, and investment.

But we need to be intentional about it. There is much wisdom in the room. Huge amounts of experience. It’s time to tap into it, together.

Imagine the conversation we could have in another 25 years if we get it right this time.



Executive Summary:

"Around the world, civil society is being thrust into the digital world. Technology systems are now entwined in every aspect of our individual and collective lives. People rely on the internet, mobile devices, and social networking platforms to connect and communicate, and civil society organizations must now grapple and engage with many issues that had been considered the more specific domain of a small subset of digital rights organizations in earlier decades. Digital policy issues, including information privacy, net neutrality, government surveillance, and the regulation of artificial intelligence, now affect the core missions of nonprofits and associations working in areas as divergent as education, the environment, criminal justice, health, community development, justice, and the arts. To effectively continue to protect and promote well-being, rights, and opportunities, civil society must become digital civil society — a sector with the confidence and resources to address how technology shapes core mission issues.

Starting in January 2019, the Digital Civil Society Lab at Stanford University initiated a research study to map these changing contours of civil society, to analyze current connection and collaboration between more traditional civil society and digital policy organizations, and to identify additional ways that the philanthropic and organizational community could better support civil society in the digital age. The research study focused on four geographic domains—the United States, the European Union, the UK, and Canada. The project was conducted through policy convenings, face-to-face and remote interviews, an online survey, and desk research to understand the policy agendas of leading civil society and digital policy organizations in each geographic domain.

What we discovered is that the current mix of relationships between civil society and digital policy organizations runs the gamut, from active and highly effective alliances to just passing awareness. But there is a widespread and growing understanding and desire to weave together expertise on digital policies, civil society advocacy, and the lived experiences of many communities. Civil society organizations want to understand and be equipped to build, use, and advocate for digital systems and policies that protect people and promote rights.

Experts in digital policy issues want to know and understand how people and organizations are experiencing social, environmental, or economic harms from these systems and be able to help take action to address it.

Both traditional civil society and digital policy organizations see a common, intertwined fate for the future of democracy, human well-being, and essential rights; recognize the power of connection; and are eager to have support to be able to develop more and new ways to work together. Organizations unsurprisingly highlighted funding and resource-support needs that are foundational for any meaningful and sustainable social change. These included long-term and general funding in order to develop expertise and capacity, as well as funding that is ecosystem-focused and flexible to support diverse organizations and integrated advocacy strategies that can adapt to changing dynamics.

They also highlighted direct support for relationship building, common language, and collaboration infrastructure. Our recommendations distill and build on each of these sets of research learnings and focus on the “how” to weave the way forward to build a healthy civil society ecosystem for the digital age.

We have identified some tangible steps that the philanthropic and organizational community can take, starting from where we found that people and organizations are now, and then tiering support to further build collective strength. It should begin with robust support for The Core - existing diverse alliances of organizations who are modeling digital civil society in action. It is critical that The Core be in a position to both continue their substantive, collaborative work and also have the time and resources to support The Energized - groups ready to engage on digital policy for the first time - and connect and share knowledge with the far broader circle of The Affected - groups that are ready to learn, but need support to do so.

The world is now digital and institutions committed to supporting a healthy civil society ecosystem must similarly adapt by understanding these new realities and supporting the learning, collaboration, and infrastructure needed for a robust digital civil society. This report illustrates some important ways forward."



"Canada's immigration policy is regarded globally as a best practice model for selecting highly skilled migrants. Yet, upon arrival many immigrants face challenges integrating into employment. Where immigrants settle is one factor that has been shown to impact employment integration. In Canada, regionalization policies have resulted in more immigrants settling in small to mid-sized cities. It is important to understand how these local systems are organized to promote immigrant integration into employment.

Using a systems approach, this paper presents a case study of immigrant employment in a mid-sized city in Ontario, Canada. Through a document review and stakeholder interviews, a systems map was developed, and local perspectives were analyzed.

Results demonstrate that in a mid-sized city, few organizations play a large role in immigrant employment. The connections between these core organizations and the local labour market are complex. Any potential challenges to the system that interfere with these connections can cause a delay for newcomers seeking employment. As cities begin to experience growth driven by immigration, there is a need to ensure local services are not only available but also working effectively within the larger employment system."


The Canadian Centre for Nonprofit Digital Resilience (CCNDR) has released their Building the Cybersecurity and Resilience of Canada’s Nonprofit Sector report.

CCNDR "convened a Working Group focused on Building the Cybersecurity and Resilience of Canada’s Nonprofit Sector. The following document captures the knowledge and insights of the working group participants as well as the many sector stakeholders who offered feedback on drafts...

Cyber risks are risks to operations (e.g. inability to access applications needed for service delivery) and risks to data (e.g. client and donor data getting into the wrong hands). These risks translate into real financial, reputational, operational, and strategic impacts. Cyber incidents – particularly data breaches – erode hard-earned community trust and the organization's reputation. They can impact program delivery and service capacity. They can also affect fundraising, volunteer engagement, and staff morale.

Nonprofits collect a good deal of data from clients, donors, staff, and others, including sensitive data such as personal health information and financial information. The biggest impact of a data breach can be on clients who may already be uncomfortable with technology, have limited knowledge of their data exposure, and/or face language barriers. Where clients are vulnerable for these reasons, their personal risk of experiencing fraud increases.

Most nonprofits have limited (if any) contingency funding to respond to a breach, including ransomware payments, fines, legal fees, and damages related to non-compliance actions and litigation. As a result of the dramatic rise in cybercrime-related claims, cyber insurance with relevant coverage limits has become prohibitively costly for many organizations. Even if they could afford cyber insurance, most nonprofits would not meet the stringent eligibility requirements."

Next steps

"To realize the vision and objectives above, the Working Group agreed to develop and test several prototypes.

A cybersecurity on-ramp in the settlement sector
We will prototype an on-ramp, including a risk assessment, with the immigrant and refugee-serving sector. The strategic approach is to go deep into the needs of one sector, develop a successful intervention, and then scale it to other sectors.

This pilot will focus on answering the following question: “How can we remove the overwhelm nonprofit leaders feel and provide an on-ramp to cybersecurity for organizations?”

A model cybersecurity policy for social services
In partnership with Islamic Family and Social Services Association, we will develop a model cybersecurity policy that can be adopted by other social service organizations."

Building the Cybersecurity and Resilience of Canada’s Nonprofit Sector (2023) Download


"Social media is increasingly being leveraged by researchers to engage in public debates and rapidly disseminate research results to health care providers, health care users, policy makers, educators, and the general public.

This paper contributes to the growing literature on the use of social media for digital knowledge mobilization, drawing particular attention to TikTok and its unique potential for collaborative knowledge mobilization with underserved communities who experience barriers to health care and health inequities (eg, equity-seeking groups).

Setting the TikTok platform apart from other social media are the unique audiovisual video editing tools, together with an impactful algorithm, that make knowledge dissemination and exchange with large global audiences possible. As an example, we will discuss digital knowledge mobilization with trans and nonbinary (trans) communities, a population that experiences barriers to health care and is engaged in significant peer-to-peer health information sharing on the web. To demonstrate, analytics data from 13 selected TikTok videos on the topic of research on gender-affirming medicine (eg, hormonal therapy and surgeries) are presented to illustrate how knowledge is disseminated within the trans community via TikTok.

Considerations for researchers planning to use TikTok for digital knowledge mobilization and other related community engagement with equity-seeking groups are also discussed. These include the limitations of TikTok analytics data for measuring knowledge mobilization, population-specific concerns related to community safety on social media, the spread of disinformation, barriers to internet access, and commercialization and intellectual property issues.

This paper concludes that TikTok is an innovative social media platform that presents possibilities for achieving transformative, community-engaged knowledge mobilization among researchers, underserved health care users, and their health care providers, all of whom are necessary to achieve better health care and population health outcomes."

Examining TikTok’s Potential for Community-Engaged Digital Knowledge Mobilization With Equity-Seeking Groups (2021) Download


"TESOL Technology Standards were developed for language teachers to better understand how to use technology
appropriately. This article explores the following questions: (1) After 10 years since its publication, are the TESOL Technology Standards for Language Teachers (2011) still applicable to the current educational context? and (2) What potential updates are needed?

To answer these questions, a panel of practitioners with expertise and experience in language teaching, computer-assisted language learning (CALL), and instructional technology was recruited to provide perspectives. Data collection utilized an online survey and a semi-structured interview protocol.

Findings reveal that the TESOL Technology Standards for Language Teachers remain applicable and helpful to teachers in current contexts.

However, further updates to the performance indicators are needed, addressing issues such as learning with mobile applications, learner data privacy, and other concerns that have emerged over the decade. Addressing this need is crucial for furthering the research and guiding teacher education programs, teacher educators, individual teachers, and other relevant stakeholders who pursue effective technology integration in current and future educational contexts. Informed by qualitative data, this research recommends using the TESOL Technology Standards for Language Teachers as guidelines to train teachers for technology integration."

Ten Years Later: Reexamining the TESOL Technology Standards for Language Teachers (2022) Download

Based on survey responses from 7500+ participants across 136 countries, this report provides the largest ever mapping of the digital barriers facing civil society organizations — and those faced by the communities they serve. It provides data across a range of issues, including access, affordability, digital skills, policy, and funding for digital equity efforts.

Recommendations for civil society, governments & policy makers, philanthropy, and corporations:

Civil society

Governments, policymakers and regulators


Private Sector

This report is based on a survey conducted in partnership between Connect Humanity and TechSoup, and with additional distribution from CIVICUS, FORUS, NTEN, and WINGS.

The survey questions are available to download. If you are a researcher interested in access to the raw, anonymized survey data, please email community@connecthumanity.fund to discuss.

State of Digital Inequity – Civil Society Perspectives on Barriers to Progress in our Digitizing World (2023) Download
