I know, I know. Again with AI.
But the Dais at Toronto Metropolitan University released a report that caught my eye recently: Automation Nation? AI Adoption in Canadian Businesses.
“By 2021, only 3.7 percent of Canadian firms had deployed AI in their business in any capacity.”
It’s a catchy line.
"While only three percent of the smallest firms with between 5 and 19 employees have adopted AI, 20 percent of firms with 100 or more employees have already started to use this technology."
And, of course, “Adoption has also been leaving some equity-seeking groups behind: businesses owned by women, Indigenous peoples, and people living with disabilities are far less likely than other businesses to currently be using AI.”
What’s more, according to their research, about 2-3% of the “health care and social assistance” industry has had success adopting AI.
Sure. This is 2021. Pre-ChatGPT. And there are tonnes of nuances within the report suggesting that AI is being used in many different ways by many different businesses. And, since 2021, we’ve been inundated with business cases for AI technologies.
And, yes, Canada was once an AI pioneer.
A key takeaway for me is their “most important takeaway from studying business adoption of AI [which] is that there is significant room for growth in responsible AI use in Canada.”
Emphasis on responsible AI use. Which should be grounded in overall responsible tech use.
We’re in an AI hype cycle—can Canada make it a responsible one?
“AI conversations have the characteristics of a hype cycle, which is one reason why we should slow down how we approach the matter from a policy and regulatory perspective. Unfortunately, Canada’s Ministry of Innovation, Science, and Economic Development (ISED) is operating in urgency mode. ISED has a mandate to establish Canada as a world leader in AI, and, apparently, to accelerate AI’s use and uptake across all sectors of our society. The confidence with which ISED is asserting societal consensus on AI’s uptake is troubling. Very few of us have had a chance to think about if and how we do and don’t want AI to become installed in our society and culture, our relationships, our workplaces, and our democracy.”
Because if we don’t use AI responsibly, we get this:
Lost in AI translation: growing reliance on language apps jeopardizes some asylum applications
“Translators say the US immigration system relies on AI-powered translations, without grasping the limits of the tools.”
Regardless of whether you care about being a frontrunner in AI adoption, I think a key point is that you don’t need to be.
But you need to be paying attention. And maybe seeking out some early adopters to learn from and even partner with.
New framework designed to advocate for responsible AI use among fundraisers. How does it work?
“Since ChatGPT launched in November 2022, many non-profit organizations have used it to drive efficiency and innovation. However, non-profits could jeopardize public trust in them without knowing how generative AI affects them, their donors, and their clients.”
Should we allow artificial intelligence to manage migration? (webinar recording)
How is artificial intelligence being used in governing migration? What are the risks and opportunities that the emerging technology raises for both the state and the individual crossing a country’s borders?
AI-powered strategies can supercharge job search and permanent resident applications, influencer says
“Max Medyk, an immigration advocate and social media influencer with a large online following, is championing the use of artificial intelligence to revolutionize job search and Permanent Resident (PR) applications in Canada… Medyk captivated an audience of more than 60 at the Halifax Central Library on Sunday, teaching how to use AI to search for jobs, refine resumes, enhance cover letters and generate strategic guidance for immigrants to best navigate the often complex pathways to obtaining Permanent Resident status.”
Federal government’s Guide on the use of Generative AI
This document provides preliminary guidance to federal institutions on their use of generative AI tools. This includes instances where these tools are deployed by federal institutions. It provides an overview of generative AI, identifies challenges and concerns relating to its use, puts forward principles for using it responsibly, and offers policy considerations and best practices.
Read, but be a bit wary of the tech consultants:
The GenAI Imperative
“For my entire career, going back to 1979, I’ve been told that artificial intelligence is ‘the next big thing’ and that ‘next year, it’s going to arrive and change everything.’ It wasn’t, and it never did… In Forrester’s 40 years, we’ve rarely recommended that clients move immediately to build a new technology. We typically counsel cautious experimentation until the tech matures and the vendor landscape rationalizes. We’re breaking that rule with generative artificial intelligence (genAI). We believe that you must move NOW.”
But take this advice:
How to Prepare for a GenAI Future You Can’t Predict
“Given the staggering pace of generative AI development, it’s no wonder that so many executives are tempted by the possibilities of AI, concerned about finding and retaining qualified workers, and humbled by recent market corrections or missed analyst expectations. They envision a future of work without nearly as many people as today. But this is a miscalculation. Leaders, understandably concerned about missing out on the next wave of technology, are unwittingly making risky bets on their companies’ futures. Here are steps every leader should take to prepare for an uncertain world where generative AI and human workforces coexist but will evolve in ways that are unknowable.”