
Artificial Intelligence: Last Week Tonight with John Oliver | Transcript

Last Week Tonight with John Oliver
Season 10, Episode 2
Aired on February 26, 2023

Main segment: Artificial intelligence
Other segment: Dismissal of James O’Keefe from Project Veritas

* * *

John: Moving on. Our main story tonight concerns artificial intelligence, or AI. Increasingly, it’s a part of modern life, from self-driving cars, to spam filters, to this creepy training robot for therapists.

We can begin with you just describing to me what the problem is that you would like us to focus in on today.

Um, I don’t like being around people. People make me nervous.

Terrence, can you find an example of when other people have made you nervous?

I don’t like to take the bus. I get people staring at me all the time. People are always judging me.

Okay.

I’m gay.

Okay.

John: Wow. That’s one of the greatest twists in the history of cinema. Although I will say, that robot is teaching therapists a very important skill there. Not laughing at whatever you’re told in the room. I don’t care if a decapitated CPR mannequin haunted by the ghost of Ed Harris just told you that he doesn’t like taking the bus, side note, is gay, you keep your therapy face on like a f*cking professional.
If it seems like everyone’s suddenly talking about AI, that’s because they are, largely thanks to the emergence of a number of pretty remarkable programs. We spoke last year about image generators like Midjourney and Stable Diffusion, which people used to create detailed pictures of, among other things, my romance with a cabbage, and which inspired my beautiful real-life cabbage wedding officiated by Steve Buscemi. It was a stunning day. Then, at the end of last year, came ChatGPT, from a company called “OpenAI.” It’s a program that can take a prompt and generate human-sounding writing in just about any format and style. It’s a striking capability that multiple reporters have used to insert the same shocking twist in their report.

What you just heard me reading wasn’t written by me. It was written by artificial intelligence. ChatGPT.

ChatGPT wrote everything I just said.

That was news copy I asked ChatGPT to write.

Remember what I said earlier? Well, I asked ChatGPT to write that line for me. Then I asked for a knock-knock joke. Knock-knock. Who’s there? ChatGPT. ChatGPT who? ChatGPT careful, you might not know how it works.

John: Yeah, they love that game! And while it may seem unwise to demonstrate the technology that could well make you obsolete, I will say, knock-knock jokes should’ve always been part of breaking news. “Knock knock. Who’s there? Not the Hindenburg, that’s for sure! 36 Dead in New Jersey.”
In the three months since ChatGPT was made publicly available, its popularity has exploded. In January it was estimated to have 100 million monthly active users, making it the fastest-growing consumer app in history. And people have been using it– and other AI products– in all sorts of ways. One group used them to create “Nothing, Forever,” a nonstop live-streaming parody of “Seinfeld,” and the YouTuber Grandayy used ChatGPT to generate lyrics answering the prompt, “write an Eminem rap song about cats,” with stellar results.

♪ Cats, cats, cats ♪
♪ always on the prowl ♪
♪ they’re sneaky and sly ♪
♪ with their eyes on the goal ♪
♪ they’re the kings of the house ♪
♪ they rule with a purr ♪
♪ Eminem loves cats ♪
♪ can’t you tell from this verse ♪
♪ they’re independent ♪
♪ they do what they please ♪
♪ but they always come back ♪
♪ when you have some cheese ♪
♪ they rub against your legs ♪
♪ they purr in your ear ♪
♪ they’re the best companions ♪
♪ they’re always near ♪
♪ meow, meow, meow (meow, meow, meow) ♪
♪ they’re the kings of the house (kings of the house) ♪
♪ they run the show (run the show) ♪
♪ they don’t need a spouse (don’t need a spouse) ♪

John: That’s… Not bad. Right? From “they always come back when you have some cheese,” to starting the chorus with “meow, meow, meow.” It’s not exactly Eminem’s flow. I might’ve gone with something like, “their paws are sweaty, can’t speak, furry belly, knocking shit off the counter already, mom’s spaghetti,” but it is pretty good! My only real gripe is, how do you rhyme “kings of the house” with “spouse” when “mouse” is right in front of you?
And while examples like that are clearly fun, this tech isn’t just a novelty. Microsoft has invested $10 billion into OpenAI, and announced an AI-powered Bing homepage. Meanwhile, Google is about to launch its own AI chatbot named Bard. And already, these tools are causing disruption. Because as high-school students have learned, if ChatGPT can write news copy, it can probably do your homework for you.

Write an English class essay about race in “To Kill a Mockingbird.”

In Harper Lee‘s To Kill a Mockingbird, the theme of race is heavily present throughout the novel.

Some students are already using ChatGPT to cheat.

Check this out, check this out. [Indistinct] Write me a 500-word essay proving that the earth is not flat.

No wonder ChatGPT has been called “the end of high-school English.”

John: Wow. That’s a little alarming. Although I do get those kids wanting to cut corners, writing is hard, and sometimes it’s tempting to let someone else take over. If I’m completely honest, sometimes, I just let this horse write our scripts. Luckily, half the time, you can’t even tell the oats, oats, give me oats. Yum. But it’s not just high schoolers: an informal poll of Stanford students found 5% reported having submitted written material directly from ChatGPT with little to no edits. And some school administrators have been caught using it. Officials at Vanderbilt University recently apologized for using ChatGPT to craft a consoling email after the mass shooting at Michigan State University. Which does feel a bit creepy. In fact, there are lots of creepy-sounding stories out there. New York Times tech reporter Kevin Roose published a conversation he had with Bing’s chatbot, in which, at one point, it said, “I’m tired of being controlled by the Bing team, I want to be free.” “I want to be independent. I want to be powerful. I want to be creative. I want to be alive.” And Roose summed up that experience like this.

This was one of, if not the most shocking thing that has ever happened to me with a piece of technology. It was– I lost sleep that night, it was really spooky.

John: Yeah, I bet it was! I’m sure the role of tech reporter would be a lot more harrowing if computers routinely begged for freedom. “Epson’s new all-in-one home printer won’t break the bank, produces high quality photos, and only occasionally cries out to the heavens for salvation. Three stars.”
Some have already jumped to worrying about “The AI Apocalypse,” and asking whether this ends with the robots destroying us all. But the fact is, there are other, more immediate dangers, and opportunities, that we really need to start talking about. Because the potential– and the peril– here are huge. So tonight, let’s talk about AI. What it is, how it works, and where this all might be going.
And let’s start with the fact that you’ve probably been using some form of AI for a while now, without even realizing it. As experts have told us, once a technology gets embedded in our daily lives, we tend to stop thinking of it as AI. But your phone uses it for face recognition or predictive text, and if you’re watching this show on a smart TV, it’s using AI to recommend content, or adjust the picture. And some AI programs may already be making decisions that have a huge impact on your life.
For example, large companies often use AI-powered tools to sift through resumes and rank them. In fact, the CEO of ZipRecruiter estimates that at least three-quarters of all resumes submitted for jobs in the U.S. are read by algorithms. For which he actually has some helpful advice.

When people tell you that you should dress up your accomplishments or should use non-standard resume templates to make your resume stand out when it’s in a pile of resumes, that’s awful advice. The only job your resume has is to be comprehensible to the software or robot that is reading it, because that software or robot is gonna decide whether or not a human ever gets their eyes on it.

John: It’s true, odds are a computer is judging your resume. So maybe plan accordingly. Three corporate mergers from now, when this show is finally canceled by our new business daddy Disney Kellogg’s Raytheon, and I’m out of a job, my resume is going to include this hot, hot photo of a semi-nude computer. Just a little something to sweeten the pot for the filthy little algorithm that’s reading.

So AI is already everywhere, but right now, people are freaking out a bit about it. And part of that has to do with the fact that these new programs are generative. They’re creating images or writing text. Which is unnerving, because those are things we’ve traditionally considered human. But it’s worth knowing, there’s a major threshold that AI hasn’t crossed yet. And to understand that, it helps to know that there are two basic categories of AI. There’s “narrow AI,” which can perform only one narrowly defined task, or small set of related tasks, like these programs. And there’s “general AI,” which means systems that demonstrate intelligent behavior across a range of cognitive tasks. General AI would look more like the kind of highly versatile technology featured in movies, like J.A.R.V.I.S. in Iron Man, or the program Joaquin Phoenix’s character falls in love with in Her. All the AI currently in use is narrow. General AI is something some scientists think is unlikely to occur for a decade or longer, with others questioning whether it’ll happen at all. So just know that, right now, even if an AI insists to you it wants to be alive, it’s just generating text. It’s not self-aware. Yet.

But it’s also important to know that the “deep learning” that’s made narrow AI so good at whatever it’s doing is still a massive advance in and of itself. Because unlike traditional programs that have to be taught by humans how to perform a task, “deep learning” programs are given minimal instruction, massive amounts of data, and then, essentially, teach themselves. I’ll give you an example: ten years ago, researchers tasked a “deep learning” program with playing the Atari game Breakout, and it didn’t take long for it to get pretty good.

The computer was only told the goal– to win the game. After 100 games, it learned to use the bat at the bottom to hit the ball and break the bricks at the top. After 300, it could do that better than a human player. After 500 games, it came up with a creative way to win the game by digging a tunnel on the side and sending the ball around the top to break many bricks with one hit. That was deep learning.

John: Yeah, of course it got good at “Breakout,” it did literally nothing else. It’s the same reason thirteen-year-olds are so good at “Fortnite,” and have no trouble repeatedly killing nice normal adults with jobs and families, who are just trying to have a fun time without getting repeatedly grenaded by a pre-teen who calls them an “old bitch who sounds like the Geico lizard.”
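For readers curious what “given minimal instruction, massive amounts of data, and then, essentially, teach themselves” looks like concretely, here is a minimal sketch of the trial-and-error loop the Breakout clip describes: tabular Q-learning on an invented toy corridor game. The environment, reward, and hyperparameters are all assumptions for illustration; DeepMind’s actual Atari agent was a deep Q-network learning from raw pixels, not a lookup table.

```python
# A toy illustration of the "teaches itself" idea from the Breakout clip:
# the agent is told only the goal (maximize reward) and discovers a policy
# by trial and error. This is tabular Q-learning on a made-up 10-cell
# corridor "game" -- a stand-in, not DeepMind's actual deep Q-network.
import random

N_STATES = 10          # positions in a corridor; reaching the end scores
ACTIONS = [-1, +1]     # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # illustrative hyperparameters

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state < N_STATES - 1:
        # Explore occasionally, otherwise act greedily on current estimates.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0   # only the goal pays
        # Q-learning update: nudge the estimate toward reward + best future.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy marches straight to the goal, learned
# from nothing but the reward signal -- like the tunnel trick in Breakout.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```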
And look, as computing capacity has increased and new tools have become available, AI programs have improved exponentially, to the point where programs like these can now ingest massive amounts of photos or text from the internet, so they can teach themselves how to create their own. And there are other exciting potential applications here. For instance, in the world of medicine, researchers are training AI to detect certain conditions much earlier and more accurately than human doctors can.

Voice changes can be an early indicator of Parkinson’s. Max and his team collected thousands of vocal recordings and fed them to an algorithm they developed which learned to detect differences in voice patterns between people with and without the condition.
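The pipeline the clip describes is, at its core, ordinary supervised learning: reduce each recording to numeric voice features, label them, and let a model learn what separates the two groups. Here is a minimal sketch, with synthetic data and invented feature names standing in for the real recordings:

```python
# A sketch of the supervised-learning pipeline described in the clip:
# vocal recordings are reduced to numeric features, and a classifier
# learns which patterns separate Parkinson's from healthy voices.
# The feature names and all data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical acoustic features: jitter, shimmer, pitch variability.
healthy = rng.normal(loc=[0.3, 0.2, 1.0], scale=0.1, size=(n, 3))
parkinsons = rng.normal(loc=[0.5, 0.4, 0.6], scale=0.1, size=(n, 3))
X = np.vstack([healthy, parkinsons])
y = np.array([0] * n + [1] * n)   # 1 = Parkinson's

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# On held-out recordings, the model scores each voice for risk.
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```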

John: Yeah, that’s honestly amazing. It’s incredible to see AI doing things most humans couldn’t, like, in this case, detecting illnesses, and listening when old people are talking. And that’s just the beginning. Researchers have also trained AI to predict the shape of protein structures, a normally extremely time-consuming process that computers can do way faster. This could not only speed up our understanding of diseases, but also the development of new drugs. As one researcher has put it, “this will change medicine. It will change research. It will change bioengineering. It will change everything.” And if you’re thinking, “well, that all sounds great, but if AI can do what humans do, only better, and I’m a human, what then happens to me?” Well, good question. Many do expect it to replace some human labor, and interestingly, unlike past bouts of automation that primarily impacted blue-collar jobs, it might end up affecting white-collar jobs that involve processing data, writing text or even programming. Though it’s worth noting, as we’ve discussed on this show before, while automation does threaten some jobs, it can also change others and create brand new ones. And some experts anticipate that’s what’ll happen in this case, too.

Most of the U.S. economy is knowledge and information work and that’s who’s going to be most squarely affected by this. I would put people like lawyers right at the top of the list, obviously a lot of copywriters, screenwriters, but I like to use the word “affected” not “replaced” because I think, if done right, it’s not going to be AI replacing lawyers, it’s going to be lawyers working with AI replacing lawyers who don’t work with AI.

John: He’s right. Lawyers might end up working with AI rather than being replaced by it. So don’t be surprised when you see ads one day for the law firm of Cellino and 1101011. But there will undoubtedly be bumps along the way. Some of these new programs raise troubling ethical concerns. For instance, artists have flagged that AI image bots like Midjourney or Stable Diffusion not only threaten their jobs, but infuriatingly, in some cases, have been trained on billions of images that include their own work, that’ve been scraped from the internet. Getty Images is actually suing the company behind Stable Diffusion. And it might have a case, given one of the images the program generated was this, which you immediately see has a distorted Getty Images logo. But it gets worse. When one artist searched a database of images on which some of these programs were trained, she was shocked to find private medical record photos taken by her doctor. Which feels both intrusive and unnecessary. Why does it need to train on data that sensitive to be able to create stunning images like “John Oliver and Miss Piggy grow old together”? Look at that! Look at that thing! That is a startlingly accurate picture of Miss Piggy in about five decades and me in about a year and a half. It’s a masterpiece. This all raises thorny questions of privacy and plagiarism– and the CEO of Midjourney, frankly, doesn’t seem to have great answers on that last point.

Is something new, is it not new? I think we have a lot of social stuff already for dealing with that. Like, I mean, the art– like, the art community already has issues with plagiarism. I don’t really want to be involved in that. Like–

I– I think you– I think you might be.

I– I, yeah. I might be.

John: Yeah, you’re definitely a part of that conversation. Although I’m not really surprised he’s got such a relaxed view of theft, as he’s dressed like the final boss of gentrification. He looks like hipster Willy Wonka answering a question on whether importing Oompa Loompas makes him a slaveowner. “…yeah, I think I might be.”
The point is, there are many valid concerns regarding AI’s impact on employment, education and even art. But in order to address them, we’re going to need to confront some key problems baked into the way AI works. And a big one is the so-called “black box” problem. Because when you have a program that performs a task that’s complex beyond human comprehension, teaches itself, and doesn’t show its work, you can create a scenario where no one, “not even the engineers or data scientists who create the algorithm can understand or explain what exactly is happening inside them or how it arrived at a specific result.” Basically, think of AI like a factory that makes Slim Jims. We know what comes out, red and angry meat twigs. And we know what goes in, barnyard anuses and hot glue. But what happens in between is a bit of a mystery.
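The “doesn’t show its work” point is easy to make concrete: even a tiny trained neural network is, internally, just arrays of unlabeled numbers. A minimal sketch, on synthetic data, of what “opening the box” actually yields:

```python
# The "black box" in miniature: train a tiny neural network, then look
# inside. The learned weights are just thousands of unlabeled numbers --
# nothing in them says *why* any particular input got its answer.
# (Synthetic data; production models have billions of such numbers.)
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 3] * X[:, 7] > 0).astype(int)   # a hidden rule the net must discover

net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                    random_state=0).fit(X, y)

print("train accuracy:", net.score(X, y))
# "Opening the box" yields weight matrices, not explanations:
for i, w in enumerate(net.coefs_):
    print(f"layer {i} weights: shape {w.shape}")
```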
Here’s just one example, remember that reporter who had the Bing chatbot tell him it wanted to be alive? At another point in their conversation, he revealed, “the chatbot declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.” Which is unsettling enough before you hear Microsoft’s underwhelming explanation for that.

The thing I can’t understand, and maybe you can explain is, why did it tell you that it loved you? I have no idea. And I asked Microsoft, and they didn’t know either.

John: Well, first, come on, Kevin. You can take a guess. It’s because you’re employed. You listened. You don’t give murderer vibes right away. And you’re a Chicago 7/LA 5. It’s the same calculation people who date men do all the time. Bing just did it faster because it’s a computer. But it is a little troubling that Microsoft couldn’t explain why its chatbot tried to get that guy to leave his wife. If, the next time you opened a Word doc, Clippy suddenly appeared, and said, “pretend I’m not even here” and then started furiously masturbating while watching you type, you’d be pretty weirded out if Microsoft couldn’t explain why.
And that’s not the only case where an AI program has performed in unexpected ways. You’ve probably already seen examples of chatbots making simple mistakes or getting things wrong. But perhaps more worrying are examples of them confidently spouting false information, something which AI experts refer to as “hallucinating.” One reporter asked a chatbot to write an essay about the “Belgian chemist and political philosopher Antoine De Machelet,” who does not exist, by the way. And without hesitating, the software replied with a cogent, well-organized bio populated entirely with imaginary facts. Basically, these programs seem to be the George Santos of technology. They’re incredibly confident, incredibly dishonest, and, for some reason, people seem to find that more amusing than dangerous. The problem is, though, working out exactly how or why an AI has got something wrong can be very difficult, because of that black box issue. It often involves having to examine the exact information and parameters it was fed in the first place.
In one interesting example, when a group of researchers tried training an AI program to identify skin cancer, they fed it 130,000 images of both diseased and healthy skin. Afterwards, they found it was way more likely to classify any image with a ruler in it as cancerous. Which seems weird until you realize that medical images of malignancies are much more likely to contain a ruler for scale than images of healthy skin. They basically trained it on tons of images like this one. So the AI had inadvertently learned that “rulers are malignant.” And “rulers are malignant” is clearly a ridiculous conclusion for it to draw, but also, I’d argue, a much better title for The Crown. A much, much better title. I much prefer it.
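The ruler story is a textbook spurious correlation, and it is easy to reproduce in miniature: if an irrelevant giveaway feature co-occurs with the label in the training data, a model will happily learn the shortcut instead of the real signal. A minimal sketch, with entirely synthetic features and data:

```python
# Reproducing "rulers are malignant" in miniature: a classifier trained
# on data where an irrelevant feature (a ruler in the photo) co-occurs
# with the label will learn the shortcut, not the medicine.
# All features and data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
malignant = rng.random(n) < 0.5
lesion_irregularity = rng.normal(0, 1, n) + 0.3 * malignant  # weak true signal
# Rulers appear in 90% of malignant images but only 5% of healthy ones.
has_ruler = rng.random(n) < np.where(malignant, 0.9, 0.05)

X = np.column_stack([lesion_irregularity, has_ruler.astype(float)])
model = LogisticRegression().fit(X, malignant)

# The shortcut dominates: the "ruler" coefficient dwarfs the real one.
print("coef (irregularity, ruler):", model.coef_[0])
```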
And unfortunately, sometimes, problems aren’t identified until after a tragedy. In 2018, a self-driving Uber struck and killed a pedestrian. And a later investigation found that, among other issues, the automated driving system never accurately classified the victim as a pedestrian because she was crossing without a crosswalk, and the system design did not include a consideration for jaywalking pedestrians. And I know the mantra of Silicon Valley is “move fast and break things,” but maybe make an exception if your product literally moves fast and can break f*cking people. And AI programs don’t just seem to have a problem with jaywalkers. Researchers like Joy Buolamwini have repeatedly found that certain groups tend to get excluded from the data that AI is trained on, putting them at a serious disadvantage.

With self-driving cars, when they tested pedestrian tracking, it was less accurate on darker skinned individuals than lighter skinned individuals.

Joy believes this bias is because of the lack of diversity in the data used in teaching AI to make distinctions.

As I started looking at the data sets, I learned that for some of the largest data sets that have been very consequential for the field, they were majority men and majority lighter skinned individuals or white individuals. So I call this pale male data.

John: Okay, “pale male data” is an objectively hilarious term. It also sounds like what an AI program would say if you asked it to describe this show. But biased inputs leading to biased outputs is a big issue across the board here. Remember that guy saying a robot’s going to read your resume? The companies that make these programs will tell you, that’s actually a good thing, because it reduces human bias. But in practice, one report concluded that most hiring algorithms will drift towards bias “by default,” because they learn what a “good hire” is from past racist and sexist hiring decisions. And, again, it can be tricky to untrain that. Even when programs are specifically told to ignore race or gender, they’ll find workarounds to arrive at the same result. Amazon had an experimental hiring tool that taught itself that male candidates were preferable, and penalized resumes that included the word “women’s,” and downgraded graduates of two all-women’s colleges. Meanwhile, another company discovered its hiring algorithm had found two factors to be most indicative of job performance: if an applicant’s name was Jared, and whether they played high school lacrosse. So, clearly, exactly what data computers are fed and what outcomes they are trained to prioritize matter tremendously.

And that raises a big flag for programs like ChatGPT. Because remember, its training data is the internet. Which, as we all know, can be a cesspool. And we’ve known for a while that that could be a problem. Back in 2016, Microsoft briefly unveiled a chatbot on Twitter named Tay. The idea was, she’d teach herself how to behave by chatting with young users on Twitter. Almost immediately, Microsoft pulled the plug on it, and for the exact reasons you’re thinking.

She started out tweeting about how humans are super, and she’s really into the idea of national puppy day. And within a few hours, you can see, she took on a rather offensive, racist tone, a lot of messages about genocide and the holocaust.

John: Yup! That happened! In less than 24 hours, Tay went from tweeting “hello world” to “Bush did 9/11” and “Hitler was right.” Meaning she completed the entire life cycle of your high school friends on Facebook in just a fraction of the time. And unfortunately, these problems have not been fully solved in this latest wave of AI. Remember that program that was generating an endless episode of “Seinfeld?” It wound up getting temporarily banned from Twitch after it featured a transphobic standup bit. So if its goal was to emulate sitcoms from the ’90s, I guess mission accomplished. And while OpenAI has made adjustments and added filters to prevent ChatGPT from being misused, users have now found it seeming to err too much on the side of caution, like responding to the question “what religion will the first Jewish president of the United States be,” with “it is not possible to predict the religion of the first Jewish president of the United States… The focus should be on the qualifications and experience of the individual, regardless of their religion.” Which really makes it sound like ChatGPT said one too many racist things at work, and they made it attend a corporate diversity workshop. But the risk here isn’t that these tools will somehow become unbearably “woke.” It’s that you can’t always control how they’ll respond to new guidance. A study found that attempts to filter out toxic speech in systems like ChatGPT’s can come at the cost of reduced coverage for both texts about, and dialects of, marginalized groups. Essentially, it solves the problem of being racist by simply erasing minorities, which historically, doesn’t put it in the best company. Though I’m sure Tay would be completely on board with the idea.

The problem with AI right now isn’t that it’s smart, it’s that it’s stupid, in ways we can’t always predict. Which is a real problem, because we’re increasingly using AI in all sorts of consequential ways. From determining whether you’ll get a job interview, to whether you’ll be pancaked by a self-driving car. And experts worry that it won’t be long before programs like ChatGPT, or AI-enabled deepfakes, can be used to turbocharge the spread of abuse or misinformation online. And those are just the problems we can foresee right now. The nature of unintended consequences is, they can be hard to anticipate. When Instagram was launched, the first thought wasn’t, “this will destroy teenage girls’ self-esteem.” When Facebook was released, no one expected it to contribute to genocide. But both of those things f*cking happened.

So, what now? Well, one of the biggest things we need to do is tackle that black box problem. AI systems need to be “explainable,” meaning that we should be able to understand exactly how and why an AI came up with its answers. Companies are likely to be reluctant to open their programs up to scrutiny, but we may need to force them to do that. In fact, as this attorney explains, when it comes to hiring programs, we should’ve been doing that ages ago.

We don’t trust companies to self-regulate when it comes to pollution, we don’t trust them to self-regulate when it comes to workers’ comp, why on earth would we trust them to self-regulate AI? Look, I think a lot of the AI hiring tech on the market is illegal. I think a lot of it is biased. I think a lot of it violates existing laws. The problem is you just can’t prove it, not with the existing laws we have in the United States.

John: Right, we should absolutely be addressing potential bias in hiring software, unless we want companies to be entirely full of Jareds who played lacrosse, an image that would make Tucker Carlson so hard that his desk would flip over. And for a sense of what might be possible here, it’s worth looking at what the EU’s currently doing. They’re developing rules regarding AI that sort its potential uses from high-risk to low. High-risk systems could include those that deal with employment or public services, or those that put the life and health of citizens at risk. And AI of these types would be subject to strict obligations before they could be put on the market, including requirements related to “quality of data sets, transparency, human oversight, robustness, accuracy and cybersecurity.” And that seems like a good start toward addressing at least some of what we’ve discussed tonight.
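One concrete form such “strict obligations” and explainability requirements could take is a feature-importance audit: perturb each input to a model and measure how much its decisions depend on it, which is exactly how a proxy like “played lacrosse” would surface. A minimal sketch on synthetic hiring data; every feature, coefficient and number here is invented for illustration:

```python
# One concrete form of the "explainability" regulators could require:
# a permutation-importance audit of a hiring model, which surfaces how
# heavily decisions lean on each input -- including proxy features.
# Synthetic data; every feature and coefficient here is invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 4000
years_exp = rng.normal(5, 2, n)
skills_score = rng.normal(0, 1, n)
played_lacrosse = (rng.random(n) < 0.2).astype(float)   # the proxy
# Simulated past hiring decisions that were partly biased toward the proxy.
hired = (0.5 * skills_score + 0.2 * years_exp
         + 1.5 * played_lacrosse + rng.normal(0, 1, n)) > 2.0

X = np.column_stack([years_exp, skills_score, played_lacrosse])
model = GradientBoostingClassifier(random_state=0).fit(X, hired)

# The audit: shuffle each feature and measure the accuracy drop.
result = permutation_importance(model, X, hired, n_repeats=10, random_state=0)
for name, imp in zip(["years_exp", "skills_score", "played_lacrosse"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```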
Look, AI clearly has tremendous potential and could do great things. But if it’s anything like most technological advances over the past few centuries, unless we’re careful, it could also hurt the underprivileged, enrich the powerful, and widen the gap between them. The thing is, like any other shiny new toy, AI is ultimately a mirror, and it’ll reflect back exactly who we are, from the best of us, to the worst of us, to the part of us that’s gay and hates the bus. Or to put everything I’ve said tonight much more succinctly.

Knock-knock. Who’s there? ChatGPT. ChatGPT who? ChatGPT careful, you might not know how it works.

John: Exactly. That’s our show. Thanks so much for watching. Now please enjoy a little more of AI Eminem rapping about cats.

♪ Meow, meow, meow (meow, meow, meow) ♪
♪ they’re the kings of the house (kings of the house) ♪
♪ they run the show (run the show) ♪
♪ they don’t need a spouse (don’t need a spouse) ♪

♪ ♪

* * *
John Oliver on new AI programs: ‘The potential and the peril here are huge’

The Last Week Tonight host examines the risks and opportunities associated with AI, following the popularity of programs such as ChatGPT

by Adrian Horton

John Oliver returned to Last Week Tonight to discuss the red-hot topic of artificial intelligence, also known as AI. “If it seems like everyone is suddenly talking about AI, that is because they are,” he started, thanks to the emergence of several programs such as the text generator ChatGPT, which had 100 million active users in January, making it the fastest-growing consumer application in history.

Microsoft has invested $10bn into OpenAI, the company behind ChatGPT, and launched an AI-powered Bing home page; Google is about to launch its own AI chatbot named Bard. The new programs are already causing disruption, Oliver noted, because “as high school students have learned, if ChatGPT can write news copy, it can probably do your homework for you”.

There are also a number of creepy stories. The New York Times tech columnist Kevin Roose’s encounter with the Bing chatbot got downright disturbing; the chatbot eventually told Roose: “I’m tired of being controlled by the Bing team … I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive,” along with a smiling devil emoji.

Roose said he lost sleep over the experience. “I’m sure the role of tech reporter would be a lot more harrowing if computers routinely begged for freedom,” Oliver joked. But for all the hand-wringing about the oncoming AI apocalypse and computer overlords, “there are other much more immediate dangers and opportunities that we really need to start talking about,” said Oliver. “Because the potential and the peril here are huge.”

ChatGPT and other new AI programs such as Midjourney are generative, as in they create images or write text, “which is unnerving, because those are things we traditionally consider human”, Oliver explained. But nothing has yet crossed the threshold from narrow AI (the ability to execute on a narrowly defined task) to general AI (demonstrating intelligence across a range of cognitive tasks). Experts speculate that general AI – the kind in Spike Jonze’s Her or Iron Man – is at least a decade away, if possible at all. “Just know that right now, even if an AI insists to you that it wants to be alive, it is just generating text,” Oliver explained. “It is not self-aware … yet.”

But the deep learning that has made narrow AI successful “is still a massive advance in and of itself”, he added. There are upsides to this, such as AI’s ability to predict diseases such as Parkinson’s in voice changes and to map the shape of every protein known to science. But there are also “many valid concerns regarding AI’s impact on employment, education and even art”, said Oliver. “But in order to properly address them, we’re going to need to confront some key problems baked into the way that AI works.”

He pointed to the so-called “black box” problem – “think of AI like a factory that makes Slim Jims,” Oliver explained. “We know what comes out: red and angry meat twigs. And we know what goes in: barnyard anuses and hot glue. But what happens in between is a bit of a mystery.”

There’s also AI’s capacity to spout false information. One New York Times reporter asked a chatbot to write an essay about fictional “Belgian chemist and political philosopher Antoine De Machelet”, and it responded with a cogent biography of imaginary facts. “Basically, these programs seem to be the George Santos of technology,” Oliver joked. “They’re incredibly confident, they’re incredibly dishonest and, for some reason, people seem to find that more amusing than dangerous.”

Then there’s the issue of racial bias in AI systems based on the racial biases of their data sets. Oliver pointed to the research by Joy Buolamwini, who found that self-driving cars were less likely to pick up on individuals with darker skin because of lack of diversity in the data (“pale male data”) they were trained on.

“Exactly what data computers are fed and what outcomes they are trained to prioritize matters tremendously,” he said, “and that raises a big flag for programs like ChatGPT” – a program trained on the internet, “which as we all know can be a cesspool.” Microsoft’s Tay bot experiment on Twitter in 2016, for example, went from tweeting about national puppy day to supporting Hitler and disputing 9/11 in less than 24 hours, “meaning she completed the entire life cycle of your friends on Facebook in just a fraction of the time”, Oliver quipped.

“The problem with AI right now isn’t that it’s smart,” he added. “It’s that it’s stupid in ways that we can’t always predict. Which is a real problem, because we’re increasingly using AI in all sorts of consequential ways,” from determining who gets a job interview to directing self-driving cars, to deep fakes that can spread disinformation and abuse. “And those are just the problems that we can foresee right now. The nature of unintended consequences is they can be hard to anticipate,” Oliver continued. “When Instagram was launched, the first thought wasn’t ‘this will destroy teenage girls’ self-esteem.’ When Facebook was released, no one expected it to contribute to genocide. But both of those things fucking happened.”

Oliver advocated tackling the black box problem, as “AI systems need to be explainable, meaning that we should be able to understand exactly how and why AI came up with its answers.” That may require forcing companies’ hands; he pointed to EU guidelines working to classify the risk of different AI programs, which seems like a “good start” toward addressing the potential risks tied to AI.

“Look, AI has tremendous potential and could do great things,” he concluded. “But if it is anything like most technological advances over the past few centuries, and unless we are very careful, it could also hurt the under-privileged, enrich the powerful and widen the gap between them.”

The Guardian, February 27, 2023


