I get a morning email from Lexology, a free legal news service. It's incredibly useful. You can customize the feed and news you get. I focus on these legal areas: Employment & Labour, Human Rights, Immigration, IT & Data Protection, Internet & Social Media, Legal Tech, & Non-profit Organizations in Canada, with additional filtering of Ontario-related news.
The articles are written by lawyers, typically for lawyers. But they're useful for us all.
Sometimes incredibly so. I'm not a lawyer, but these articles are generally written in an accessible way, and they help me understand the legal implications of changes in legislation or regulations through a lawyer's lens.
Most days I scan the email and one or two articles catch my eye.
Today, it was Key legal considerations with generative AI. (Part 2 is here: Canada’s evolving artificial intelligence and privacy regime.)
The first line grabbed me and I had an aha moment: "Mass media attention following Open AI’s release of ChatGPT has pushed the subject of artificial intelligence (AI) back into the limelight."
It's true. The conversation about AI is not new. In fact, I was recently listening to earlier episodes of the podcast The AI Effect, which explored themes around the Canadian AI ecosystem.
Take this episode from Feb 19, 2018: "Canada is a world leader in artificial intelligence research and development -- but what is AI? In this first episode, hosts Amanda Lang and Jodie Wallis introduce us to Canada’s AI ecosystem and examine the history of AI in Canada. Featuring interviews with Richard Zemel, co-founder and director of research at the Vector Institute for Artificial Intelligence, and Rebecca Finlay, VP of engagement and public policy at the Canadian Institute for Advanced Research (CIFAR)."
Not only is the conversation about AI not new in a fairly broad sense, but Canada has at times led it.
The first question that comes to mind is: what happened to that conversation? The podcast includes conversations that seem new this year but of course aren't, on topics like Balancing innovation and safety: ethics, transparency, bias and privacy, and Human-centered AI: developing and retaining the skills to lead.
Somehow we seem to have stepped away and let the tech bros play without fences. Without sufficient regulation. Without sufficient oversight.
But things move slowly.
And since ChatGPT hit the market, things have accelerated dramatically, faster than the responsible tech landscape has evolved and faster than governments have acted, as far as I can tell. (That landscape has always existed in various forms, but it's been fragmented, academic, and not given nearly the profile and influence it needs.)
So, to the lawyers we go.
This article, Key legal considerations with generative AI, has useful insights. It's part 1 of a three-part series exploring "how key areas of law will influence the development of generative AI and how it is used by businesses. Part 1 focuses on copyright law and the critical questions of ownership and authorship over AI-generated content, Part 2 discusses privacy law considerations under Canada’s proposed Artificial Intelligence and Data Act, and Part 3 explores issues of liability for the creation and use of AI-generated content; namely, who is accountable for AI-generated content and when."
I'll add the additional parts as they are published. If this is of interest to you or influences your work, I strongly suggest taking a look at Lexology and creating a personalized daily legal newsfeed.
The article intro:
"Part 1: Generative AI and copyright law
Canadian law has yet to definitively decide how, if and when copyright laws should apply to content generated by AI. In light of the gaps in the current Copyright Act vis-à-vis AI generally, there have been two government publications containing recommendations that speak to some aspects of the AI/copyright interface: the 2019 Report of the Standing Committee on Industry, Science and Technology (often referred to as the INDU Report) was tabled following the statutorily-required five-year review of the Copyright Act; and in 2021, the Canadian government’s department of Innovation, Science and Economic Development (ISED) published a consultation paper which considered whether and how to adapt the Copyright Act in light of current AI capabilities. The government has yet to table any proposed amendments in light of the recommendations contained in the INDU Report and the ISED paper. Despite the uncertainty, we know enough to anticipate that the primary commercial concerns and legal questions will arise in the context of the “data inputs,” which are used to train the AI system, and the “data outputs,” being the content generated by the AI."
The key issues outlined in the article:
Input issues: With training data comes risk
"The main copyright liability issue that may arise from data inputs would result from the reproduction of data consisting of, or containing, works protected by copyright being used to train an AI program, or in the process of the creation of “new” works by the AI program."
Output issues: Authorship and ownership
"Many of the copyright issues related to content outputs generated by AI remain unresolved by Canadian lawmakers – the Copyright Act simply does not address AI in explicit terms. However, the two primary issues relate to the interpretation of “authorship” and “ownership” of AI-generated works."
"Until the Copyright Act is amended to address the issues identified above, a primary concern for users of generative AI will be what they can do with the output generated by the AI – in short, how can the content produced by an AI, be it text or an image, be used? Can you use the image created by an AI program on a t-shirt, as the cover of a book, or in a movie? Can you use the text created by an AI program in your brochure or novel?"
Then, and you'll love this as an example of where things are going, they end with a conclusion generated by ChatGPT, with minor edits.
It's a good read. Look for more as they share parts 2 (April 10th) and 3 (April 17th).