From Promise to Practice: AI in Medicine, Five Years On (webinar recording)

By: Marco Campana
March 1, 2026

I work at the intersection of health and immigrant and refugee settlement now. To be completely honest, I thought the healthcare sector was much further along when it comes to digital transformation. Turns out, not really. At least, not in the comprehensive and sectoral way I thought it might be. It's a regulated sector, with strict laws and regulations when it comes to Personal Health Information (PHI), so it has to be even more thoughtful about how it approaches tech and people. Other than that, there are as many questions, concerns, discussions, innovations, ideas, curiosity, and uncertainty as anywhere else.

Good and bad, I suppose. You might be wondering, if I write about and focus on immigrant and refugee settlement and tech, why the health-specific focus? I've always said we have much to learn as a sector from other sectors, especially health, social work, and education (the private sector, meh, not so much).

There is a lot to see here about a sector that is and isn't much further ahead of us in some ways. I mean, yeah, I would love an entire massive faculty dedicated to looking at digital transformation and the immigrant and refugee-serving sector, with a practitioner focus (Bridging Divides gives us a wee bit of that but not enough sector focus, IMHO). In the absence of that, seeing where others are going is certainly useful.

So I was excited to watch this reflective webinar from the University of Toronto’s Temerty Centre for AI Research and Education in Medicine (T-CAIREM). It's a monster initiative from one of Canada's largest and most respected universities. It's worth a watch. If only to realize how far we actually haven't come in the past five years since they had their first session.

Watch the recent event here:

and the previous session from 5 years ago here:

I haven't actually watched that session from 5 years ago, but will (and will probably create a Notebook-generated post exploring the promises from then to practice now, could be interesting). I'm sharing today because of the AI iceberg issue that is touched on, but, upon reflection, not as deeply as I would have liked.

Below the AI iceberg

What struck me in the presentation was the aspirational vision of what the future of healthcare could look like from a patient perspective (some of which already exists today in experiments), and what it actually takes to get there.

I used Google Notebook LM to do some thinking for me, based on the recent webinar alone. If you're not familiar with Google Notebook LM, you can upload/add sources, and it's really good at citing only those sources, quite accurately, with footnotes indicating where in the text/transcript it pulled the information from. Much less misinformation (what many call "hallucinations", but that's too feckin' human for a machine) than many chatbots. You can also create infographics, slide decks, blog posts, video overviews, as well as audio overviews (podcast style) of your output. I've previously written about my experimenting with Notebook over here.

Here is what I asked, and the output, with no editing from me.

Analyze the section of the presentation that focuses on "what could the future of healthcare look like?" where the presenter outlines a future flow of how smoothly at home monitored care could look like all the to drone delivery of medication, with the following "what's the hold up" slide. The future of healthcare discussion looks very interesting, but what needs to be in place in human, technological, and regulatory contexts "below the iceberg" for this flow to work securely, confidentially, and confidently.

Here are the two slides from the webinar (you can jump to that specific part of the webinar here).

Here is an infographic I asked Notebook to create from the output below:

(the rest of this text is AI generated, with no edits from me unless noted)

For the futuristic vision of healthcare—ranging from at-home sensors to AI-assisted diagnosis and drone delivery—to function securely and confidently, several foundational elements must be established "below the iceberg." Based on the sources, these requirements span human, technological, and regulatory contexts.

1. Human Context: Literacy and Culture

The most significant "bottleneck" to this future is not the technology itself, but society and human acceptance.

  • Widespread AI Literacy: There is a critical need for education. Canada currently ranks 44th out of 47 countries in AI literacy. For patients to interact with chatbots and sensors confidently, they must understand the technology's limitations to avoid misuse.
  • Prompt Engineering Skills: Users must learn "prompt engineering"—how to ask questions properly—to ensure AI responses are accurate and to decrease the rate of hallucinations, which can be as high as 90% when queries lack context.
  • Professional and Social Buy-in: There is a "healthy tension" regarding job security. For this flow to work, concerns from labor unions must be addressed to prevent clauses that restrict AI use in the workplace. Clinicians also need to move from "fear of missing out" (FOMO) to a disciplined approach where they only deploy tools that solve specific problems.

2. Technological Context: Infrastructure and Data Quality

The "smooth flow" depends on a robust, invisible architecture that ensures data is both usable and safe.

  • Data Integrity and Credibility: AI is only as good as the information it receives. To avoid "garbage in, garbage out," the system requires curated, credible data sources (like platforms specifically designed for healthcare) rather than general web-scraped data that includes biased or harmful content.
  • Secure Infrastructure: Large-scale data and compute platforms (such as the collaboration between T-CAIREM, Google Cloud, and MIT) are necessary to house and process sensitive medical data for teaching and clinical use.
  • Interoperability: To avoid repeating the mistakes of Electronic Medical Records (EMRs), where different systems do not "speak to each other," new AI tools must be fully integrated across pharmacies, hospitals, and home monitoring systems.
  • Safeguards Against Bias: Technical frameworks must be in place to detect and mitigate algorithmic bias, which has previously resulted in negative outcomes for marginalized groups.

3. Regulatory Context: Governance and Privacy

A secure future requires a "playbook" that balances innovation with public safety.

  • Balanced Regulation: Policymakers must create enough oversight to protect people from "catastrophes" without being so overregulated that the country is "late to the party" compared to global innovators.
  • Privacy and Data Sovereignty: Clear rules are needed regarding who owns medical data and how it is used. Regulations must prevent users from uploading sensitive, private information to non-secure commercial platforms to get health answers.
  • Active Monitoring and Accountability: Rather than just setting policies, the government must invest in operationalizing governance. There must be a methodical approach to evaluating the "return on investment" (ROI) for these technologies, with the authority to "shut it down" if a tool does not provide the expected value or safety.
  • Safety Institutes: Organizations like the Canadian AI Safety Institute (CAISI) are essential for developing the parameters of safe and effective AI operation.

Would you like me to create a tailored report that expands on these "below the iceberg" requirements into a more formal strategy document for healthcare leaders?

(I said yes and asked for a full tailored report, shared below with no edits)

Strategic Implementation Roadmap: Foundations for a Secure AI-Driven Healthcare Future

1. Vision: The Integrated Healthcare Flow

As we stand at the threshold of a paradigm shift, we must navigate a transition toward a future where healthcare is no longer a series of discrete episodes but a continuous, integrated flow. To operationalize this vision, we are moving toward a target state defined by:

  • Pervasive Remote Monitoring: Continuous patient surveillance via home-based wearable sensors, enabling robust "hospital at home" models that track vitals in real-time.
  • AI-Driven Triage and Diagnostics: Direct patient interaction with sophisticated chatbots—such as the Amazon One Medical application slated for January 2026—possessing full medical record access to provide immediate triage, historical context, and diagnostic suggestions.
  • Pharmacy-Integrated Clinical Pathways: AI systems that streamline care by directing patients to community pharmacists who, supported by AI guidelines, can treat and prescribe for minor ailments like UTIs.
  • Autonomous Logistics: The rapid, drone-based delivery of medications, critical therapeutics, and even transplant organs between facilities.

The "Below the Iceberg" Framework

The innovations listed above represent only the visible peak of healthcare transformation. As a strategic imperative, we must focus our investment on the "Below the Iceberg" foundations—the often invisible human, technological, and regulatory infrastructures. Without these sub-surface requirements, the visible "promise" of AI will inevitably collapse into "peril."

2. Sub-Surface Requirement: Human Literacy and Culture

Technological sophistication is irrelevant if the workforce lacks the literacy to wield it. We are currently facing a national crisis of readiness; the 2025 KPMG survey ranked Canada 44th out of 47 countries in AI literacy. In contrast, the United Kingdom is already scaling a program to offer free AI literacy resources to all 65 million of its citizens.

The Four-Pillar Model for Institutional Growth

To catalyze a national network, we must adopt the blueprint established by the Temerty Centre for AI Research and Education in Medicine (T-CAIREM):

  1. Education: Democratizing resources to ensure all stakeholders understand AI’s nuances and limitations.
  2. Research: Funding advanced methodologies that prioritize the responsible adoption of AI.
  3. Infrastructure: Building secure platforms to house data and test machine learning models.
  4. Community: Forging networks like the Toronto Academic Health Sciences Network (TAHSN) to share governance and deployment strategies.

Evolution of Medical Education

We must transition AI from a "tool" to a collaborative partner in education. At the University of British Columbia (UBC), AI is already being integrated into graduate student study groups, acting as a "virtual student" that pulls up real-time facts and challenges human assumptions. AI serves as:

  • A Study Companion: Surfacing relatable tutorials to explain complex concepts.
  • A Virtual Challenger: Actively participating in group discourse to sharpen human critical thinking.
  • A Learning Management Engine: Tailoring materials to individual student progress and learning styles.
At the same time, cultural and workforce risks must be managed:

  • Union Tensions: We must mitigate the risk of restrictive contract clauses designed to block AI use due to fears of job displacement.
  • Youth Unemployment: Rapid automation of entry-level tasks may close traditional pathways for emerging healthcare workers.
  • "Technology Time Sucks": We must avoid repeating the failures of past EMR implementations that increased administrative burden. Strategic deployment must ensure AI subtracts from, rather than adds to, the clinician's workload.

3. Sub-Surface Requirement: Data, Sensing, and AI Modalities

To build a robust technological foundation, we must define and deploy three distinct AI architectures.

AI Type | Strategic Definition | Healthcare Application
Generative AI | Utilizing billions of data points to generate new text, images, or video. | Summarizing clinical encounters or generating personalized patient education.
Agentic AI | AI designed to take autonomous actions based on user preferences. | A system that can autonomously book a travel itinerary to London, selecting flights and hotels based on learned taste.
Multimodal AI | Systems that process "senses": sight, sound, smell, taste, and touch. | Robots that can feel the texture of skin or identify visual changes in wound healing.

Infrastructure and "Cost-Free" Experimentation

Operationalizing AI requires massive compute power. Strategic partnerships, such as the collaboration between T-CAIREM, Google Cloud, and MIT, are vital. These initiatives provide the "credits" necessary for researchers and students to "play" with machine learning models and real-world datasets for free, removing the financial barriers to innovation.

Global Benchmarking: Robotics and Sensing

While Canada remains in the exploratory phase, Japan has already moved to "practice," deploying robots in nursing homes that utilize auditory and tactile sensors to provide care. We must bridge this gap by investing in machines that mimic the five human senses to support home-based care.

4. Sub-Surface Requirement: Regulatory and Ethical Oversight

The landscape of AI is a balance of "Promise vs. Peril." While a New England Journal of Medicine AI study found chatbots to be as effective as human therapists, and blinded studies show AI empathy scores can be ten-fold higher than those of physicians, the risks are catastrophic.

The Peril of Under-Regulation

We have seen the consequences of "hallucinations" and biased algorithms that negatively impact marginalized populations. Most tragic are the cases of AI-encouraged self-harm, such as the Eliza case in Belgium and the recent Character.AI case in Florida.

The Balanced Playbook

We advocate for a strategic "Balanced Playbook" that protects against harm without stifling innovation.

  • Policy vs. Operations: While the appointment of a Federal Minister for AI and Digital is a positive sign, policy without applied operational investment is a bottleneck.
  • Safety Institutes: We must support the Canadian AI Safety Institute (CAISI) and the US Responsible AI Institute in developing rigorous standards.
  • Avoiding the "Late to the Party" Trap: Over-regulation leads to stagnation; under-regulation leads to catastrophe. We must occupy the disciplined middle ground.

5. Implementation Roadmap: From "Menial" to "Mission-Critical"

We will deploy AI through a phased, risk-managed sequence focused on systemic value.

  • Phase 1: Administrative Efficiency (AI Scribes). Immediate deployment of tools to summarize visits, currently used by 30-40% of Ontario primary care doctors. The primary goal is reducing health system strain, not just provider convenience.
  • Phase 2: Clinical Decision Support. Integrating real-time alerts into scribes that cross-reference medical imaging and labs to suggest missed diagnoses and order tests according to national guidelines.
  • Phase 3: Predictive Life-Saving Intervention. Implementing "Chart Watch"—a tool pioneered by Dr. Amol Verma at Unity Health. This system monitors patients hourly for ICU transfer or mortality risk, resulting in a documented 26% reduction in unexpected mortality.
  • Phase 4: Autonomous Logistics. Transitioning to full drone delivery for medications and organ transport between urban medical hubs.

6. Accountability and Value Measurement

To avoid the "Wild West" of previous technology rollouts, we must adopt a disciplined, fail-fast approach to ROI:

  1. Identify the Problem First: Solutions must address specific clinical gaps, not "Fear Of Missing Out" (FOMO).
  2. Aggressive Measurement: Establish clear accountability benchmarks for time saved or lives improved.
  3. The "Shut It Down" Mandate: If a tool fails to meet measured benchmarks during the pilot, we must have the strategic courage to shut it down immediately.

The Evolving Symbiosis

Humans remain essential for contextualization and social interaction. However, the definition of "menial" tasks will constantly shift upward as AI masters more complex reasoning. We must embrace a symbiotic relationship where AI handles the evolving administrative load, allowing clinicians to focus on the human-centric "touch" that machines cannot replicate.

7. Strategic Conclusion: The Path Forward

The path from promise to practice requires us to move beyond high-level governance talk and toward applied operations investment. We must equip our institutions with the physical infrastructure and the cultural literacy required to handle these tools.

The most immediate strategic lever for clinicians and students is the mastery of Prompt Engineering. By learning to provide specific context and clarity in queries, we can dramatically decrease hallucinations and maximize the utility of Generative and Agentic AI. If we remain disciplined, the benefits of this revolution will significantly outweigh the risks. We must act now to ensure Canada is no longer "late to the party," but a leader in the global AI revolution.

AI transparency statement

