
The AI explainer you didn't know you wanted, but you need

If you don't already watch Last Week Tonight with John Oliver for explainer videos about high-profile news stories with a comedic twist, I'm here for you. Below is a recent video of his that provides a fantastic introduction to and explanation of AI. As in his other videos, he makes a complex topic accessible and understandable, covers all sides of it, and makes you laugh. His explainers are simply the best.

In fact, he's done a bunch of explainers related to AI, which I've embedded below. And below those, I've embedded a playlist of explainer videos he's done about technology.

He usually starts his videos with a good dose of humour. And I love it. But his humour may not be your humour (in which case we probably can't be friends). So I've set the first video to start at the point when he dives into the explainer. But you should really jump back and watch the first 7 minutes. Because it's excellent.

Below the video I'm adding a transcript of the core of his explainer. I'm using Otter.AI for the automated transcription, and then editing the text to remove his excellent jokes, to keep the focus on the explanation itself. Perhaps an example of how AI can be used as a tool in our work, right? A little bit of automation to help reduce my labour, mixed in with a final human review and edit for use.

Again, watch the video, enjoy the jokes. But if you prefer to read and get to the main point and focus of the content, I'm here for you.

Machine-Generated Transcript

What follows is an AI-generated transcript of the main explainer portion of the video, produced with Otter.AI. It may contain errors and odd sentence breaks, and it is not a substitute for listening to the audio or watching the video. I have edited the content by cutting some sections of humour and other asides that are funny but not directly related to the explanation. So what you have here is the detailed explanation, minus some extras you may or may not find funny. I've left in the timestamps from the video so you can jump to the relevant part of the video if you want to see or hear what's being said.

John Oliver 6:47
So tonight, let's talk about AI: what it is, how it works, and where this all might be going. Let's start with the fact that you've probably been using some form of AI for a while now, sometimes without even realizing it. As experts have told us, once a technology gets embedded in our daily lives, we tend to stop thinking of it as AI. But your phone uses it for face recognition or predictive text, and if you're watching this show on a smart TV, it is using AI to recommend content or adjust the picture. And some AI programs may already be making decisions that have a huge impact on your life. For example, large companies often use AI-powered tools to sift through resumes and rank them. In fact, the CEO of ZipRecruiter estimates that at least three-quarters of all resumes submitted for jobs in the US are read by algorithms, for which he actually has some helpful advice.

Unknown Speaker 7:35
When people tell you that you should dress up your accomplishments, or use non-standard resume templates to make your resume stand out when it's in a pile of resumes, that's awful advice. The only job your resume has is to be comprehensible to the software or robot that is reading it, because that software or robot is going to decide whether or not a human ever gets their eyes on it.

John Oliver 7:58
It's true. Also, a computer is judging your resume, so maybe plan accordingly. So AI is already everywhere. But right now, people are freaking out a bit about it, and part of that has to do with the fact that these new programs are generative: they are creating images or writing text, which is unnerving because those are things that we've traditionally considered human. But it is worth knowing there is a major threshold that AI hasn't crossed yet, and to understand it, it helps to know that there are two basic categories of AI. There is narrow AI, which can perform only one narrowly defined task or a small set of related tasks, like these programs. And then there is general AI, which means systems that demonstrate intelligent behavior across a range of cognitive tasks. General AI would look more like the kind of highly versatile technology you see featured in movies, like Jarvis in Iron Man, or the program that makes Joaquin Phoenix fall in love with his phone in Her. All the AI currently in use is narrow. General AI is something some scientists think is unlikely to occur for a decade or longer, with others questioning whether it will happen at all. So just know that right now, even if an AI insists to you that it wants to be alive, it is just generating text; it is not self-aware. Yet. But it's also important to note that the deep learning that's made narrow AI so good at whatever it is doing is still a massive advance in and of itself. Because unlike traditional programs that have to be taught by humans how to perform a task, deep learning programs are given minimal instruction and massive amounts of data, and then essentially teach themselves. I'll give you an example. Ten years ago, researchers tasked a deep learning program with playing the Atari game Breakout, and it didn't take long for it to get pretty good.

Unknown Speaker 10:00
The computer was only told the goal: to win the game. After 100 games, it learned to use the bat at the bottom to hit the ball and break the bricks at the top. After 300, it could do that better than a human player. After 500 games, it came up with a creative way to win the game, by digging a tunnel on the side and sending the ball around the top to break many bricks with one hit. That was deep learning.
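(A quick aside from me, not the video: the trial-and-error loop described in that clip can be sketched in a few lines. What follows is plain tabular Q-learning on a made-up five-cell corridor, not DeepMind's actual deep Q-network and not the real Breakout environment; it just illustrates the same idea that the agent is given only a reward signal and works out a policy by itself.)

```python
import random

# Toy environment: a corridor of 5 cells; reaching the rightmost cell pays
# reward 1 and ends the episode. The agent is never told "go right" -- it
# only ever sees the reward signal, like the Breakout agent in the clip.
N_STATES = 5
ACTIONS = [-1, +1]               # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit what has been learned, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy in every non-terminal cell is "move right"
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)  # → {0: 1, 1: 1, 2: 1, 3: 1}
```

The early episodes wander badly (the agent has no idea the goal is to the right), but the reward slowly propagates backwards through the Q-table until the greedy policy heads straight for the goal. Same shape of learning as the Breakout tunnel, at one-millionth the scale.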

John Oliver 10:31
Yeah, but of course it got good at Breakout; it did literally nothing else. As computing capacity has increased and new tools became available, AI programs have improved exponentially, to the point where programs like these can now ingest massive amounts of photos or text from the internet so that they can teach themselves how to create their own. And there are other exciting potential applications here, too. For instance, in the world of medicine, researchers are training AI to detect certain conditions much earlier and more accurately than human doctors can.

Unknown Speaker 11:18
Voice changes can be an early indicator of Parkinson's. Max and his team collected thousands of vocal recordings and fed them to an algorithm they developed, which learned to detect differences in voice patterns between people with and without the condition.

John Oliver 11:32
Yeah, that's honestly amazing, isn't it? It is incredible to see AI do the things most humans couldn't, like in this case, detecting illnesses and listening when old people are talking. And that is just the beginning. Researchers have also trained AI to predict the shape of protein structures, a normally extremely time-consuming process that computers can do way, way faster. This could not only speed up our understanding of diseases but also the development of new drugs. As one researcher put it: this will change medicine, it will change research, it will change bioengineering, it will change everything. And if you're thinking, "Well, that all sounds great, but if AI can do what humans can do, only better, and I am a human, then what exactly happens to me?" Well, that is a good question. Many do expect it to replace some human labor, and interestingly, unlike past bouts of automation that primarily impacted blue-collar jobs, it might end up affecting white-collar jobs that involve processing data, writing text, or even programming. Though it is worth noting, as we've discussed before on this show, that while automation does threaten some jobs, it can also just change others and create brand-new ones. And some experts anticipate that that is what will happen in this case, too.

Unknown Speaker 12:42
Most of the US economy is knowledge and information work, and that's who's going to be most squarely affected by this. I would put people like lawyers right at the top of the list, and obviously a lot of copywriters and screenwriters. But I like to use the word "affected," not "replaced," because I think, if done right, it's not going to be AI replacing lawyers. It's going to be lawyers working with AI replacing lawyers who don't work with AI.

John Oliver 13:08
Exactly. Lawyers might end up working with AI rather than being replaced by it. So don't be surprised when you see ads one day for the law firm of Selena and 110101. But there will undoubtedly be bumps along the way. Some of these new programs raise troubling ethical concerns. For instance, artists have flagged that AI image generators like Midjourney or Stable Diffusion not only threaten their jobs but, infuriatingly, in some cases have been trained on billions of images that include their own work, scraped from the internet. Getty Images is actually suing the company behind Stable Diffusion, and it might have a case, given that one of the images the program generated was this one, which you can immediately see has a distorted Getty Images logo on it. But it gets worse. When one artist searched a database of images on which some of these programs were trained, she was shocked to find private medical record photos taken by her doctor, which feels both intrusive and unnecessary. Why does it need to train on data that's sensitive? This all raises thorny questions of privacy and plagiarism, and the CEO of Midjourney, frankly, doesn't seem to have great answers on that last point.

Unknown Speaker 14:34
Is it something new? It's not new. I think we have a lot of social stuff already for dealing with that. Like, I mean, the community already has issues with plagiarism. "I don't really want to be involved in that." "I think you might be." "I might be."

John Oliver 14:51
Yeah, yeah, you're definitely part of that conversation. Although I'm not really surprised that he's got such a relaxed view of theft, as he's dressed like the final boss of gentrification. He looks like a hipster Willy Wonka answering a question on whether importing Oompa Loompas makes him a slave owner: "Yeah, yeah, I think I might be." The point is, there are many valid concerns regarding AI's impact on employment, education, and even art. But in order to properly address them, we're going to need to confront some key problems baked into the way that AI works. And a big one is the so-called black box problem. Because when you have a program that performs a task that's complex beyond human comprehension, teaches itself, and doesn't show its work, you can create a scenario where no one, not even the engineers or data scientists who created the algorithm, can understand or explain what exactly is happening inside it, or how it arrived at a specific result. Basically, think of AI like a factory that makes Slim Jims: we know what comes out (red and angry meat twigs), and we know what goes in (barnyard anuses and hot glue), but what happens in between is a bit of a mystery. Here is just one example. Remember that reporter who the Bing chatbot told that it wanted to be alive? At another point in their conversation, he revealed, the chatbot declared out of nowhere that it loved him. "It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead." Which is unsettling enough before you hear Microsoft's underwhelming explanation for that. "The thing I can't understand, and maybe you can explain, is why did

Unknown Speaker 16:25
it tell you that it loves you? I have no idea. And I asked Microsoft and they didn't know either.

John Oliver 16:32
Okay, well first, come on, Kevin, you can take a guess there. It's because you're employed, you listened, you don't give off murderer vibes right away, and you're a Chicago seven, LA five. It's the same calculation that people who date men do all the time; Bing just did it faster because it's a computer. But it is a little troubling that Microsoft couldn't explain why its chatbot tried to get that guy to leave his wife. And that is not the only case where an AI program has performed in unexpected ways. You've probably already seen examples of chatbots making simple mistakes or getting things wrong, but perhaps more worrying are examples of them confidently spouting false information, something AI experts refer to as "hallucinating." One reporter asked a chatbot to write an essay about the Belgian chemist and political philosopher Antoine de Masha Lai, who does not exist, by the way. And without hesitating, the software replied with a cogent, well-organized bio populated entirely with imaginary facts. Basically, these programs seem to be the George Santos of technology: they're incredibly confident, incredibly dishonest, and for some reason, people seem to find that more amusing than dangerous. The problem is, working out exactly how or why an AI got something wrong can be very difficult because of that black box issue. It often involves having to examine the exact information and parameters it was fed in the first place. In one interesting example, when a group of researchers tried training an AI program to identify skin cancer, they fed it 130,000 images of both diseased and healthy skin. Afterwards, they found it was way more likely to classify any image with a ruler in it as cancerous, which seems weird until you realize that medical images of malignancies are much more likely to contain a ruler for scale than images of healthy skin. They had basically trained it on tons of images like this one.
So the AI had inadvertently learned that rulers are malignant. And "Rulers Are Malignant" is clearly a ridiculous conclusion for it to draw, but also, I would argue, a much, much better title for The Crown. I much prefer it. And unfortunately, sometimes problems aren't identified until after a tragedy. In 2018, a self-driving Uber struck and killed a pedestrian, and a later investigation found that, among other issues, the automated driving system never accurately classified the victim as a pedestrian, because she was crossing without a crosswalk and the system design did not include a consideration for jaywalking pedestrians. Another mantra of Silicon Valley is "move fast and break things," but maybe make an exception if your product literally moves fast and can break fucking people. And AI programs don't just seem to have a problem with jaywalkers. Researchers like Joy Buolamwini have repeatedly found that certain groups tend to get excluded from the data that AI is trained on, putting them at a serious disadvantage.
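(Another aside from me: the ruler mix-up above is a textbook spurious correlation, and it's easy to reproduce with a toy model. Everything below is made up for illustration: each synthetic "image" is reduced to two binary features, a weak genuine signal and a ruler artifact that co-occurs with malignant images far more often, just as rulers did in the real training set. A one-rule learner stands in for the real model.)

```python
import random

random.seed(1)

def make_example(malignant):
    """One synthetic 'image', boiled down to two binary features."""
    return {
        "malignant": malignant,
        # the genuine medical signal is only mildly informative
        "lesion_irregular": random.random() < (0.7 if malignant else 0.4),
        # the artifact is strongly correlated with the label
        "has_ruler": random.random() < (0.9 if malignant else 0.1),
    }

data = [make_example(m) for m in [True] * 500 + [False] * 500]

def stump_accuracy(feature):
    # accuracy of the one-rule classifier "predict malignant iff feature is set"
    return sum(ex[feature] == ex["malignant"] for ex in data) / len(data)

# A learner that simply picks the single most predictive feature
# latches onto the ruler, not the lesion.
best = max(["lesion_irregular", "has_ruler"], key=stump_accuracy)
print(best, stump_accuracy("has_ruler"), stump_accuracy("lesion_irregular"))
```

The ruler feature scores around 90% accuracy on this data while the real signal scores around 65%, so any accuracy-driven learner prefers the artifact. Nothing here is "wrong" with the learner; the training data itself taught it that rulers are malignant.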

Unknown Speaker 19:36
With self-driving cars, when they tested pedestrian tracking, it was less accurate on darker-skinned individuals than lighter-skinned individuals.

Unknown Speaker 19:45
Joy believes this bias is because of the lack of diversity in the data used in teaching AI to make distinctions.

Unknown Speaker 19:52
As I started looking at the datasets, I learned that some of the largest datasets that have been very consequential for the field were majority men and majority lighter-skinned individuals, or white individuals. So I call this "pale male data."

John Oliver 20:07
Okay, "pale male data" is an objectively hilarious term, and it also sounds like what an AI program would say if you asked it to describe this show. But biased inputs leading to biased outputs is a big issue across the board here. Remember that guy saying that a robot is going to read your resume? The companies that make these programs will tell you that that is actually a good thing, because it reduces human bias. But in practice, one report concluded that most hiring algorithms will drift towards bias by default, because, for instance, they might learn what a "good hire" is from past racist and sexist hiring decisions. And again, it can be tricky to untrain that. Even when programs are specifically told to ignore race or gender, they will find workarounds to arrive at the same result. Amazon had an experimental hiring tool that taught itself that male candidates were preferable, penalized resumes that included the word "women's," and downgraded graduates of two all-women's colleges. Meanwhile, another company discovered that its hiring algorithm had found two factors to be most indicative of job performance: whether an applicant's name was Jared, and whether they played high school lacrosse. So clearly, exactly what data computers are fed, and what outcomes they are trying to prioritize, matter tremendously. And that raises a big flag for programs like ChatGPT, because remember, its training data is the internet, which, as we all know, can be a cesspool. And we have known for a while that that could be a real problem. Back in 2016, Microsoft briefly unveiled a chatbot on Twitter named Tay. The idea was she would teach herself how to behave by chatting with young users on Twitter. Almost immediately, Microsoft pulled the plug on it, and for the exact reasons that you are thinking. She started

Unknown Speaker 21:56
out tweeting about how humans are super, and she's really into the idea of National Puppy Day. And within a few hours, you can see, she took on a rather offensive, racist tone, with a lot of messages about genocide and the Holocaust. Yep,

John Oliver 22:12
that happened in less than 24 hours. Tay went from tweeting "hello world" to "Bush did 9/11" and "Hitler was right." Meaning she completed the entire lifecycle of your high school friends on Facebook in just a fraction of the time. And unfortunately, these problems have not been fully solved in this latest wave of AI. Remember that program that was generating an endless episode of Seinfeld? It wound up getting temporarily banned from Twitch after it featured a transphobic stand-up bit. So if its goal was to emulate sitcoms from the '90s, I guess: mission accomplished. And while OpenAI has made adjustments and added filters to prevent ChatGPT from being misused, users have now found it seeming to err too much on the side of caution, like responding to the question "What religion will the first Jewish President of the United States be?" with "It is not possible to predict the religion of the first Jewish President of the United States. The focus should be on the qualifications and experience of the individual, regardless of their religion." Which really makes it sound like ChatGPT said one too many racist things at work and had to attend a corporate diversity workshop. But the risk here isn't that these tools will somehow become unbearably woke; it's that you can't always control how they will act, even after you give them new guidance. A study found that attempts to filter out toxic speech in systems like ChatGPT's can come at the cost of reduced coverage for both texts about, and dialects of, marginalized groups. Essentially, it solves the problem of being racist by simply erasing minorities, which historically doesn't put it in the best company, though I'm sure Tay would be completely on board with the idea. The problem with AI right now isn't that it's smart.
It's that it's stupid in ways that we can't always predict. Which is a real problem, because we're increasingly using AI in all sorts of consequential ways, from determining whether you will get a job interview to whether you'll be pancaked by a self-driving car. And experts worry that it won't be long before programs like ChatGPT, or AI-enabled deepfakes, can be used to turbocharge the spread of abuse or misinformation online. And those are just the problems that we can foresee right now. The nature of unintended consequences is that they can be hard to anticipate. When Instagram was launched, the first thought wasn't "this will destroy teenage girls' self-esteem." When Facebook was released, no one expected it to contribute to genocide. But both of those things fucking happened. So what now? Well, one of the biggest things we need to do is tackle that black box problem. AI systems need to be explainable, meaning that we should be able to understand exactly how and why an AI came up with its answers. Now, companies are likely to be very reluctant to open up their programs to scrutiny, but we may need to force them to do that. In fact, as this attorney explains, when it comes to hiring programs, we should have been doing that ages ago.

Unknown Speaker 25:07
We don't trust companies to self-regulate when it comes to pollution. We don't trust them to self-regulate when it comes to workplace comp. And why on earth would we trust them to self-regulate AI? Look, I think a lot of the AI hiring tech on the market is illegal. I think a lot of it is biased. I think a lot of it violates existing laws. The problem is, you just can't prove it, not with the existing laws we have in the United States.

John Oliver 25:34
Right? We should absolutely be addressing potential bias in hiring software, unless, that is, we want companies to be entirely full of Jareds who played lacrosse. And for a sense of what might be possible here, it's worth looking at what the EU is currently doing. They are developing rules regarding AI that sort its potential uses from high risk to low. High-risk systems could include those that deal with employment or public services, or those that put the life and health of citizens at risk, and AI of these types will be subject to strict obligations before they can be put onto the market, including requirements related to the quality of datasets, transparency, human oversight, accuracy, and cybersecurity. And that seems like a good start towards addressing at least some of what we have discussed tonight. Look, AI clearly has tremendous potential and could do great things. But if it is anything like most technological advances over the past few centuries, unless we are very careful, it could also hurt the underprivileged, enrich the powerful, and widen the gap between them. The thing is, like any other shiny new toy, AI is ultimately a mirror, and it will reflect back exactly who we are, from the best of us to the worst of us.
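One last aside from me before the links. The transcript's point about hiring algorithms finding workarounds (the "Jared who played lacrosse" bit) is easy to demonstrate with toy data. Everything below is synthetic and hypothetical: the learner never sees the protected attribute, but because the historical "hired" labels were biased and a visible feature correlates with that attribute, a simple accuracy-driven learner picks the proxy anyway.

```python
import random

random.seed(2)

def make_candidate():
    group_a = random.random() < 0.5  # protected attribute, withheld from the learner
    return {
        "group_a": group_a,
        # proxy feature: strongly correlated with the protected attribute
        "played_lacrosse": random.random() < (0.8 if group_a else 0.1),
        # genuinely job-relevant signal, independent of group
        "passed_skills_test": random.random() < 0.5,
        # historical label: past hiring heavily favored group A
        "hired": random.random() < (0.7 if group_a else 0.2),
    }

data = [make_candidate() for _ in range(2000)]

def stump_accuracy(feature):
    # accuracy of the one-rule model "predict hired iff feature is set"
    # against the biased historical labels
    return sum(c[feature] == c["hired"] for c in data) / len(data)

# Only non-protected features are visible to the learner,
# yet the proxy for the protected attribute wins.
visible = ["played_lacrosse", "passed_skills_test"]
best = max(visible, key=stump_accuracy)
print(best)  # → played_lacrosse
```

Dropping the protected column didn't help: the bias lives in the labels, and any feature correlated with the protected attribute lets the model reconstruct it. That's the "workaround" the transcript describes, in about twenty lines.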

Other AI-related explainer videos:

Here's the full playlist.

There are currently 10 videos in this playlist, including the videos above.
