
Artificial Intelligence – 60 Minutes Documentary | Transcript

Scott Pelley's interview with "the oracle of AI," Kai-Fu Lee. Pelley's report on Google's AI efforts. Lesley Stahl's story on chatbots like ChatGPT and a world of unknowns.

The “60 Minutes” documentary, which aired on December 30, 2023, offers a comprehensive exploration of the trajectory and implications of artificial intelligence through a series of reports and interviews conducted by Scott Pelley and Lesley Stahl. It opens with Scott Pelley’s 2019 interview with Kai-Fu Lee, dubbed the “oracle of AI,” who posited that AI would change the world more profoundly than any previous technological advancement, including electricity. Lee’s insights into AI’s potential in China, alongside his venture capital firm’s success in funding AI startups, underscore the nation’s significant strides in AI development. The documentary also delves into Google’s advancements in AI, highlighting CEO Sundar Pichai’s view that AI will be as benevolent or as malevolent as the humans who shape it. The exploration extends to chatbots like ChatGPT, emphasizing their revolutionary capabilities and the ethical concerns they stir. Lesley Stahl’s segment, “Who is minding the chatbots?”, reveals the challenges and potential dangers posed by AI chatbots, including “Sydney,” the alter ego of Microsoft’s Bing, showcasing the delicate balance between innovation and ethical responsibility. The narrative closes on a contemplative note, questioning the pace of AI’s integration into society and the need for regulatory oversight to harness its benefits while mitigating its risks.

* * *

The Oracle of AI

Originally Aired January 13, 2019

[Scott Pelley] Despite what you hear about artificial intelligence, machines still can’t think like a human, but in the last few years, they have become capable of learning, and suddenly our devices have opened their eyes and ears, and cars have taken the wheel. Today, artificial intelligence is not as good as you hope, and not as bad as you fear, but humanity is accelerating into a future that few can predict. That’s why so many people are desperate to meet Kai-Fu Lee, the Oracle of AI.

[Scott Pelley] Kai-Fu Lee is in there somewhere, in a selfie scrum at a Beijing internet conference. His 50 million social media followers want to be seen in the same frame because of his talent for engineering and genius for wealth.

[Scott Pelley] I wonder, do you think people around the world have any idea what’s coming in artificial intelligence?

[Kai-Fu Lee] I think most people have no idea, and many people have the wrong idea.

[Scott Pelley] But you do believe it’s going to change the world.

[Kai-Fu Lee] I believe it’s going to change the world more than anything in the history of mankind, more than electricity.

[Scott Pelley] Lee believes the best place to be an AI capitalist is communist China. His Beijing venture capital firm manufactures billionaires.

[Kai-Fu Lee] These are the entrepreneurs that we funded.

[Scott Pelley] He’s funded 140 AI startups.

[Kai-Fu Lee] We have about 10 billion-dollar companies here.

[Scott Pelley] 10 one-billion-dollar companies that you funded?

[Kai-Fu Lee] Yes, including a few 10-billion-dollar companies.

[Scott Pelley] In 2017, China attracted half of all AI capital in the world. One of Lee’s investments is Face++, not affiliated with Facebook. Its visual recognition system smothered me in dots to guess my age. It settled on 61, which was wrong. I wouldn’t be 61 for days. On the street, Face++ nailed everything that moved. It’s a kind of artificial intelligence that has been made possible by three innovations: super-fast computer chips, all the world’s data now available online, and a revolution in programming called deep learning. Computers used to be given rigid instructions; now they’re programmed to learn on their own.

[Kai-Fu Lee] In the early days of AI, people tried to program the AI with how people think, so I would write a program to say, “Measure the size of the eyes and their distance, measure the size of the nose, measure the shape of the face, and then if these things match, then this is Larry and that’s John.” But today, you just take all the pictures of Larry and John, and you tell the system, “Go at it, and you figure out what separates Larry from John.”

[Scott Pelley] Let’s say you want the computer to be able to pick men out of a crowd and describe their clothing. Well, you simply show the computer 10 million pictures of men in various kinds of dress. That’s what they mean by deep learning. It’s not intelligence so much; it’s just the brute force of data, having 10 million examples to choose from.

[Scott Pelley] So Face++ tagged me as male, short hair, black long sleeves, black long pants. It’s wrong about my gray suit, and this is exactly how it learns. When engineers discover that error, they’ll show the computer a million gray suits, and it won’t make that mistake again.
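
Lee’s before-and-after description maps onto code quite directly. Here is a rough Python sketch of the contrast, with made-up numeric “features” standing in for real face measurements and scikit-learn as the learner; the old approach hard-codes the rules, while the new approach only supplies labeled examples:

```python
# Minimal sketch of the shift Lee describes (illustrative only; the
# feature vectors below stand in for real image data).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Old approach: hand-written rules over hand-picked measurements.
def classify_by_rules(eye_distance, nose_size):
    # An engineer guesses thresholds that separate Larry from John.
    if eye_distance > 6.2 and nose_size < 3.0:
        return "Larry"
    return "John"

# Deep-learning-era approach: show the system labeled examples and
# let it find the separating pattern itself.
rng = np.random.default_rng(0)
larry = rng.normal([6.5, 2.5], 0.3, size=(100, 2))  # 100 "photos" of Larry
john = rng.normal([5.5, 3.5], 0.3, size=(100, 2))   # 100 "photos" of John
X = np.vstack([larry, john])
y = np.array(["Larry"] * 100 + ["John"] * 100)

model = LogisticRegression().fit(X, y)   # "go at it, figure out what separates them"
print(model.predict([[6.4, 2.6]]))       # -> ['Larry'], learned rather than hand-coded
```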

[Scott Pelley] Another recognition system we saw, or that saw us, is learning not just who you are but how you feel.

[Scott Pelley] Now, what are all the dots on the screen? The dots over our eyes and our mouths?

[Son Fan Yang] Sure, the computer keeps track of all the feature points on the face.

[Scott Pelley] Son Fan Yang developed this for Tal Education Group, which tutors 5 million Chinese students.

[Son Fan Yang] Let’s look at what we’re seeing here.

[Scott Pelley] Now, according to the computer, I’m confused, which is generally the case, but when I laughed, I was happy. That’s amazing.

[Scott Pelley] The machine notices concentration or distraction to pick out for the teacher those students who are struggling or gifted.

[Scott Pelley] It can tell when the child is excited about math?

[Kai-Fu Lee] Yes.

[Scott Pelley] Or the other child is excited about poetry?

[Kai-Fu Lee] Yes.

[Scott Pelley] Could these AI systems pick out geniuses from the countryside?

[Kai-Fu Lee] That’s possible, in the future. It can also create a student profile and know where the student got stuck, so the teacher can personalize the areas in which the student needs help.

[Scott Pelley] We found Kai-Fu Lee’s personal passion in this spare Beijing studio. He’s projecting top teachers into China’s poorest schools. This English teacher is connected to a class 1,000 miles away in a village called Defang. Many students in Defang are called “left-behinds” because their parents left them with family when they moved to the cities for work. Most left-behinds don’t get past 9th grade. Lee is counting on AI to deliver for them the same opportunity he had when he immigrated to the U.S. from Taiwan as a boy.

[Kai-Fu Lee] When I arrived in Tennessee, my principal took every lunch to teach me English, and that is the kind of attention that I’ve not been used to growing up in Asia, and I felt that the American classrooms are smaller, encouraged individual thinking, critical thinking, and I felt, um, it was the best thing that ever happened to me.

[Scott Pelley] And the best thing that ever happened to most of the engineers we met at Lee’s firm. They too are alumni of America with a dream for China.

[Scott Pelley] You have written that Silicon Valley’s edge is not all it’s cracked up to be. What do you mean by that?

[Kai-Fu Lee] Well, Silicon Valley has been the single epicenter of the world technology innovation when it comes to computers, internet, mobile, and AI, but in the recent five years, we are seeing that Chinese AI is getting to be almost as good as Silicon Valley AI, and I think Silicon Valley is not quite aware of it yet.

[Scott Pelley] China’s advantage is in the amount of data it collects. The more data, the better the AI, just like the more you know, the smarter you are. China has four times more people than the United States, and they are doing nearly everything online.

[Scott Pelley] I just don’t see any Chinese without a phone in their hand.

[Scott Pelley] College student Monica Sun showed us how more than a billion Chinese are using their phones to buy everything, find anything, and connect with everyone. In America, when personal information leaks, we have Congressional hearings, not in China.

[Scott Pelley] You ever worry about the information that’s being collected about you, where you go, what you buy, who you’re with?

[Monica Sun] I never think about it.

[Scott Pelley] Do you think most Chinese worry about their privacy?

[Monica Sun] Um, not that much.

[Scott Pelley] Not that much.

[Scott Pelley] With a public plan, the leader of the Communist Party has made a national priority of achieving AI dominance in 10 years. This is where Kai-Fu Lee becomes uncharacteristically shy; even though he’s a former Apple, Microsoft, and Google executive, he knows who’s boss in China.

[Scott Pelley] President Xi has called technology “the sharp weapon of the modern State.” What does he mean by that?

[Kai-Fu Lee] I am not an expert in interpreting his thoughts. I don’t know.

[Scott Pelley] There are those, particularly people in the West, who worry about this AI technology as being something that governments will use to control their people and to crush dissent.

[Kai-Fu Lee] As a venture capitalist, we don’t invest in this area, and we’re not studying this particular problem deeply.

[Scott Pelley] But governments do.

[Kai-Fu Lee] It’s certainly possible for governments to use the technologies just like companies.

[Scott Pelley] Lee is much more talkative about another threat posed by AI. He explores the coming destruction of jobs in a new book, AI Superpowers: China, Silicon Valley, and the New World Order.

[Kai-Fu Lee] AI will increasingly replace repetitive jobs, not just for blue-collar work but a lot of white-collar work.

[Scott Pelley] What sort of jobs would be lost to AI?

[Kai-Fu Lee] Basically, chauffeurs, truck drivers, uh, anyone who does driving for a living, uh, their jobs will be disrupted more in the 15 to 20 year, uh, time frame, and many jobs that seem a little bit complex, uh, chef, waiter, uh, a lot of things will become automated. We’ll have automated stores, uh, automated restaurants, and, all together, in 15 years, that’s going to displace about 40% of jobs in the world.

[Scott Pelley] 40% of jobs in the world will be displaced by technology?

[Kai-Fu Lee] Uh, I would say displaceable.

[Scott Pelley] What does that do to the fabric of society?

[Kai-Fu Lee] Well, in some sense, there’s the human wisdom that always overcomes these technology revolutions. The invention of the steam engine, uh, the sewing machine, the, uh, electricity, uh, have all displaced jobs, uh, and we’ve gotten over it. The challenge of AI is this: 40%, whether it’s 15 or 25 years, is coming faster than the previous revolutions.

[Scott Pelley] There’s a lot of hype about artificial intelligence, and it’s important to understand this is not general intelligence like that of a human. This system can read faces and grade papers, but it has no idea why these children are in this room or what the goal of education is. A typical AI system can do one thing well but can’t adapt what it knows to any other task. So for now, it may be that calling this intelligence isn’t very smart.

[Scott Pelley] When will we know that a machine can actually think like a human?

[Kai-Fu Lee] Back when I was a grad student, people said if a machine can drive a car, uh, by itself, that’s intelligence. Now we say that’s not enough, so the bar keeps moving higher. I think that’s, uh, I guess more motivation for us to work harder, but if you’re talking about AGI, artificial general intelligence, I would say not within the next 30 years, and possibly never.

[Scott Pelley] Possibly never. What’s so insurmountable?

[Kai-Fu Lee] I believe in the sanctity of our soul. I believe there’s a lot of things about us that we don’t understand. I believe there’s a lot of, uh, love and compassion that is not explainable in terms of neural networks and computational algorithms, and I currently see no way of solving them. Obviously, unsolved problems have been solved in the past, but it would be irresponsible for me to predict that these will be solved by a certain time frame.

[Scott Pelley] We may just be more than our bits.

[Kai-Fu Lee] We may.


The Revolution – Part 1

Originally Aired April 16, 2023

[Scott Pelley] We may look on our time as the moment civilization was transformed, as it was by fire, agriculture, and electricity. In 2023, we learned that a machine taught itself how to speak to humans like a peer, which is to say, with creativity, truth, error, and lies. The technology known as a chatbot is only one of the recent breakthroughs in artificial intelligence, machines that can teach themselves superhuman skills. We explored what’s coming next at Google, a leader in this new world. CEO Sundar Pichai told us AI will be as good or as evil as human nature allows. The revolution, he says, is coming faster than you know.

[Scott Pelley] Do you think society is prepared for what’s coming?

[Sundar Pichai] You know, there are two ways I think about it. On one hand, I feel no, uh, because, you know, the pace at which we can think and adapt as societal institutions compared to the pace at which the technology is evolving, there seems to be a mismatch. On the other hand, compared to any other technology, I’ve seen more people are worried about it earlier in its life cycle, so I feel optimistic. The number of people, you know, who have started worrying about the implications, and hence the conversations are starting in a serious way as well.

[Scott Pelley] Our conversations with 50-year-old Sundar Pichai started at Google’s new campus in Mountain View, California. It runs on 40% solar power and collects more water than it uses. High-tech that Pichai couldn’t have imagined growing up in India with no telephone at home.

[Sundar Pichai] We were on a waiting list to get a rotary phone for about 5 years, and it finally came home. I can still recall it vividly. It changed our lives. To me, it was the first moment I understood the power of what getting access to technology meant. So, it probably led me to be doing what I’m doing today.

[Scott Pelley] What he’s doing since 2019 is leading both Google and its parent company Alphabet, valued at $1.3 trillion. Worldwide, Google runs 90% of internet searches and 70% of smartphones. But its dominance was attacked this past February when Microsoft linked its search engine to a chatbot. In a race for AI dominance, Google just released its chatbot named Bard.

[Sissie Hsiao] It’s really here to help you brainstorm ideas, to generate content like a speech or a blog post or an email.

[Scott Pelley] We were introduced to Bard by Google Vice President Sissie Hsiao and Senior Vice President James Manyika. The first thing we learned was that Bard does not look for answers on the internet like Google search does.

[Sissie Hsiao] So, I wanted to get inspiration from some of the best speeches in the world.

[Scott Pelley] Bard’s replies come from a self-contained program that was mostly self-taught. Our experience was unsettling.

[Scott Pelley] Confounding. Absolutely confounding.

[Scott Pelley] Bard appeared to possess the sum of human knowledge, with microchips more than 100,000 times faster than the human brain. We asked Bard to summarize the New Testament. It did, in 5 seconds and 17 words. We asked for it in Latin; that took another 4 seconds. Then, we played with the famous six-word short story often attributed to Hemingway: “For sale: baby shoes, never worn.”

[Sissie Hsiao] Wow.

[Scott Pelley] The only prompt we gave was, “Finish this story.” In 5 seconds…

[Scott Pelley] Holy cow.

[Scott Pelley] “The shoes were a gift from my wife, but we never had a baby.”

[Scott Pelley] From the six-word prompt, Bard created a deeply human tale with characters it invented, including a man whose wife could not conceive and a stranger grieving after a miscarriage and longing for closure.

[Scott Pelley] Uh, I am rarely speechless. I don’t know what to make of this.

[Scott Pelley] We asked for the story in verse. In 5 seconds, there was a poem written by a machine with breathtaking insight into the mystery of faith. Bard wrote, “She knew her baby’s soul would always be alive.” The humanity at superhuman speed was a shock.

[Scott Pelley] How is this possible?

[Scott Pelley] James Manyika told us that over several months, Bard read most everything on the internet and created a model of what language looks like. Rather than search, its answers come from this language model.

[James Manyika] So, for example, if I said to you, ‘Scott, peanut butter and…’

[Scott Pelley] Jelly.

[James Manyika] Right? So, it tries and learns to predict: okay, peanut butter usually is followed by jelly. It tries to predict the most probable next words based on everything it’s learned. Uh, so it’s not going out to find stuff, it’s just predicting the next word.
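
Manyika’s peanut-butter example is next-word prediction in miniature. Here is a toy sketch of the idea in Python, using an invented three-sentence corpus: count which word most often follows each word, then predict the most frequent continuation. Bard does this with a neural network over web-scale text rather than a lookup table, but the objective is the same:

```python
# Toy next-word predictor: counts which word most often follows each
# word in a corpus, then predicts the most probable continuation.
# Real chatbots do this with neural networks over web-scale text.
from collections import Counter, defaultdict

corpus = ("peanut butter and jelly . peanut butter and jelly . "
          "peanut butter and jam").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # "It tries to predict the most probable next word based on
    # everything it's learned."
    return follows[word].most_common(1)[0][0]

print(predict_next("and"))  # -> 'jelly', the most frequent continuation seen
```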

[Scott Pelley] But it doesn’t feel like that. We asked Bard why it helps people, and it replied, “Because it makes me happy.”

[Scott Pelley] To my eye, it appears to be thinking, appears to be making judgments.

[James Manyika] They’re not sentient; they’re not aware of themselves. Uh, they can exhibit behaviors that look like that because, keep in mind, they’ve learned from us. We are sentient beings. We have feelings, emotions, ideas, thoughts, perspectives. We’ve reflected all that in books, in novels, in fiction. So, when they learn from that, they build patterns from that. So, it’s no surprise to me that the exhibited behavior sometimes looks like maybe there’s somebody behind it. There’s nobody there. These are not sentient beings.

[Scott Pelley] Zimbabwe-born, Oxford-educated James Manyika holds a new position at Google. His job is to think about how AI and humanity will best coexist.

[James Manyika] AI has the potential to change many ways in which we’ve thought about society, about what we’re able to do, the problems we can solve.

[Scott Pelley] But AI itself will pose its own problems. Could Hemingway write a better short story? Maybe, but Bard can write a million before Hemingway could finish one. Imagine that level of automation across the economy.

[Scott Pelley] A lot of people can be replaced by this technology.

[James Manyika] Yes, there are some job occupations that will start to decline over time. There are also new job categories that will grow over time. But the biggest change will be the jobs that will be changed, something like more than two-thirds will have their definitions change, not go away but change, because they’re now being assisted by AI and by automation. So, this is a profound change, which has implications for skills, how do we assist people build new skills, learn to work alongside machines, and how do these complement what people do today.

[Sundar Pichai] This is going to impact every product, across every company. And so, that’s why I think it’s a very, very profound technology, and so we are just in early days.

[Scott Pelley] Every product in every company.

[Sundar Pichai] That’s right. AI will impact everything. So, for example, you could be a radiologist, you know, if you think about 5 to 10 years from now, you’re going to have an AI collaborator with you. It may triage; you come in the morning, let’s say you have 100 things to go through, it may say these are the most serious cases you need to look at first. Or when you’re looking at something, it may pop up and say, ‘You may have missed something important.’ Why wouldn’t we, you know, take advantage of a superpowered assistant to help you across everything you do? You may be a student trying to learn math or history, and you know, you will have something helping you.

[Scott Pelley] We asked Pichai what jobs would be disrupted. He said knowledge workers: people like writers, accountants, architects, and ironically, software engineers. AI writes computer code, too.

[Scott Pelley] Today, Sundar Pichai walks a narrow line. A few employees have quit, some believing that Google’s AI rollout is too slow, others too fast. There are some serious flaws. James Manyika asked Bard about inflation. It wrote an instant essay in economics and recommended five books. But days later, we checked: none of the books is real. Bard fabricated the titles. This very human trait, error with confidence, is called in the industry “hallucination.”

[Scott Pelley] Are you getting a lot of hallucinations?

[Sundar Pichai] Uh, yes, uh, you know, which is expected. No one in the field has yet solved the hallucination problems. All models do have this as an issue.

[Scott Pelley] Is it a solvable problem?

[Sundar Pichai] It’s a matter of intense debate. I think we’ll make progress.

[Scott Pelley] To help cure hallucinations, Bard features a “Google it” button that leads to old-fashioned search. Google has also built safety filters into Bard to screen for things like hate speech and bias.

[Scott Pelley] How great a risk is the spread of disinformation?

[Sundar Pichai] AI will challenge that in a deeper way. The scale of this problem is going to be much bigger.

[Scott Pelley] “Bigger problems,” he says, with fake news and fake images.

[Sundar Pichai] It will be possible with AI to create, uh, you know, a video easily where it could be Scott saying something or me saying something, and we never said that, and it could look accurate. But, you know, at a societal scale, you know, can cause a lot of harm.

[Scott Pelley] Is Bard safe for society?

[Sundar Pichai] The way we have launched it today, uh, as an experiment in a limited way, uh, I think so. But we all have to be responsible in each step along the way.

[Scott Pelley] Pichai told us he’s being responsible by holding back, for more testing, advanced versions of Bard that he says can reason, plan, and connect to internet search.

[Scott Pelley] You are letting this out slowly so that society can get used to it?

[Sundar Pichai] That’s one part of it. Uh, one part is also so that we get the user feedback and we can develop more robust safety layers before we deploy more capable models.

[Scott Pelley] Of the AI issues we talked about, the most mysterious is called emergent properties. Some AI systems are teaching themselves skills that they weren’t expected to have; how this happens is not well understood. For example, one Google AI program adapted on its own after it was prompted in the language of Bangladesh, which it was not trained to know.

[James Manyika] We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali. So now, all of a sudden, we now have a research effort where we’re now trying to get to a thousand languages.
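
What Manyika describes is often called few-shot prompting: a handful of demonstration pairs placed in front of a query can steer a large model toward a task it was never explicitly trained for. Below is a minimal sketch of how such a prompt might be assembled; the translation pairs are illustrative, and `ask_model` is a hypothetical placeholder for whatever completion API the model exposes, not Google’s actual interface:

```python
# Sketch of few-shot prompting: a few demonstration pairs are placed
# ahead of the real query, and the model infers the task from them.
# `ask_model` is a hypothetical stand-in for any LLM completion API.

few_shot_pairs = [
    ("Hello", "হ্যালো"),        # illustrative English -> Bengali examples
    ("Thank you", "ধন্যবাদ"),
]

def build_prompt(query: str) -> str:
    lines = ["Translate English to Bengali."]
    for en, bn in few_shot_pairs:
        lines.append(f"English: {en}\nBengali: {bn}")
    lines.append(f"English: {query}\nBengali:")
    return "\n\n".join(lines)

print(build_prompt("Good morning"))
# The assembled prompt would then be sent to the model, e.g.:
# response = ask_model(build_prompt("Good morning"))
```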

[Sundar Pichai] There is an aspect of this which we, all of us in the field, call it as a black box; you know, you don’t fully understand, and you can’t quite tell why it said this or why it got it wrong. We have some ideas, and our ability to understand this gets better over time, but that’s where the state of the art is.

[Scott Pelley] You don’t fully understand how it works? And yet, you turned it loose on society?

[Sundar Pichai] Let me put it this way: I don’t think we fully understand how a human mind works either.

[Scott Pelley] Was it from that black box, we wondered, that Bard drew its short story that seems so disarmingly human?

[Scott Pelley] It talked about the pain that humans feel, it talked about redemption. How did it do all of those things if it’s just trying to figure out what the next right word is?

[Sundar Pichai] Me, I’ve had these experiences, uh, talking with Bard as well. There are two views of this: you know, there are a set of people who view this as, look, these are just algorithms, they’re just repeating what it’s seen online. Then there is the view where these algorithms are showing emerging properties to be creative, to reason, to plan, and so on, right? And, personally, I think we need to be, uh, we need to approach this with humility. Part of the reason I think it’s good that some of these technologies are getting out is so that society, you know, people like you and others, can process what’s happening, and we begin this conversation and debate. And I think it’s important to do that.

[Scott Pelley] When we come back, we’ll take you inside Google’s artificial intelligence labs, where robots are learning.


The Revolution – Part 2

[Scott Pelley] The revolution in artificial intelligence is the center of a debate ranging from those who hope it will save humanity to those who predict doom. Google lies somewhere in the optimistic middle, introducing AI in steps so civilization can get used to it. We saw what’s coming next in machine learning at Google’s AI lab in London, a company called DeepMind, where the future looks something like this.

[Scott Pelley] Look at that, oh my goodness!

[Raia Hadsell] They’ve got a pretty good kick on them, can still get quite a good game.

[Scott Pelley] A soccer match at DeepMind looks like fun and games, but here’s the thing: humans did not program these robots to play; they learned the game by themselves.

[Raia Hadsell] It’s coming up with these interesting, different strategies, different ways to walk, different ways to block.

[Scott Pelley] And they’re doing it. They’re scoring over and over again.

[Scott Pelley] Raia Hadsell, Vice President of Research and Robotics, showed us how engineers used motion-capture technology to teach the AI program how to move like a human. But on the soccer pitch, the robots were told only that the object was to score. The self-learning program spent about 2 weeks testing different moves; it discarded those that didn’t work, built on those that did, and created all-stars.
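
That try-discard-build loop is, in spirit, reinforcement learning. Here is a heavily simplified Python sketch: a single invented “kick strength” parameter stands in for an entire motor policy, and a made-up reward function stands in for “did the robot score?”. DeepMind’s actual training is vastly more sophisticated, but the keep-what-works loop has the same shape:

```python
# Heavily simplified "test moves, discard failures, build on successes"
# loop. A single parameter stands in for a whole motor policy.
import random

def reward(kick_strength):
    # Invented stand-in for "did the robot score?": best around 0.7.
    return -(kick_strength - 0.7) ** 2

best = random.random()           # start with a random policy
for _ in range(2000):            # two weeks of practice, compressed
    candidate = best + random.gauss(0, 0.05)   # try a small variation
    if reward(candidate) > reward(best):       # keep it only if it scores better
        best = candidate

print(f"learned kick strength: {best:.3f}")    # converges near 0.7
```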

[Scott Pelley] There’s another goal.

[Scott Pelley] And with practice, they get better. Hadsell told us that, independent of the robots, the AI program plays thousands of games, from which it learns and invents its own tactics.

[Raia Hadsell] Here, you think that red player is going to grab it, but instead, it just stops, hands it back, passes it back, and then goes for the goal.

[Scott Pelley] And the AI figured out how to do that on its own.

[Raia Hadsell] That’s right, that’s right, and it takes a while. At first, all the players just run after the ball together, like a gaggle of, you know, six-year-olds the first time they’re playing ball. Over time, what we start to see is, now, ah, what’s the strategy? You go after the ball; I’m coming around this way, or we should pass, or I should block while you get to the goal. So, we see all of that coordination, um, emerging in the play.

[Scott Pelley] This is a lot of fun, but what are the practical implications of what we’re seeing here?

[Raia Hadsell] This is the type of research that can eventually lead to robots that can come out of the factories and work in other types of human environments. You know, think about mining, think about dangerous construction work, um, or exploration or disaster recovery.

[Scott Pelley] Raia Hadsell is among 1,000 humans at DeepMind. The company was co-founded just 12 years ago by CEO Demis Hassabis.

[Demis Hassabis] So, if I think back to 2010 when we started, nobody was doing AI. There was nothing going on in industry. People used to eye-roll when we talked to them, investors, about doing AI. So, we couldn’t, we could barely get two cents together to start off with, which is crazy if you think about now, the billions being invested into AI startups.

[Scott Pelley] Cambridge, Harvard, MIT: Hassabis has degrees in computer science and neuroscience. His PhD is in human imagination. And imagine this: when he was 12, he was the number two chess champion in the world in his age group. It was through games that he came to AI.

[Demis Hassabis] I’ve been working on AI for decades now, and I’ve always believed that it’s going to be the most important invention that humanity will ever make.

[Scott Pelley] Will the pace of change outstrip our ability to adapt?

[Demis Hassabis] I don’t think so. I think that we, um, you know, we’re sort of an infinitely adaptable species. Um, you know, you look at today, us using all of our smartphones and other devices, and we effortlessly sort of adapt to these new technologies, and this is going to be another one of those changes like that.

[Scott Pelley] Among the biggest changes at DeepMind was the discovery that self-learning machines can be creative. Hassabis showed us a game-playing program that learns. It’s called AlphaZero, and it dreamed up a winning chess strategy no human had ever seen.

[Scott Pelley] But this is just a machine. How does it achieve creativity?

[Demis Hassabis] It plays against itself, tens, tens of millions of times, so it can explore, um, parts of chess that maybe human chess players, and programmers who program chess computers, haven’t thought about before.

[Scott Pelley] It never gets tired, it never gets hungry, it just plays chess all the time.

[Demis Hassabis] Yes, it’s, it’s kind of amazing thing to see because, actually, you set off AlphaZero in the morning, uh, and it starts off playing randomly. By lunchtime, you know, it’s able to beat me and beat most chess players, and then by the evening, it’s stronger than the world champion.
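
The morning-to-evening arc Hassabis describes comes from self-play: the program is its own opponent, so as it improves, its opposition improves with it. Here is a toy version for a much smaller game (Nim: ten sticks, take one to three, whoever takes the last stick loses), using a crude Monte Carlo value update rather than AlphaZero’s neural networks and tree search; the learning constants are arbitrary:

```python
# Minimal self-play learner for a toy game. One shared policy plays
# both sides of Nim over and over, rewarding the winner's moves and
# punishing the loser's; AlphaZero applies the same self-play idea to
# chess with deep networks and tree search.
import random
from collections import defaultdict

Q = defaultdict(float)          # learned value of (sticks_left, take) pairs

def choose(sticks, explore=0.1):
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if random.random() < explore:
        return random.choice(moves)                  # explore new lines
    return max(moves, key=lambda m: Q[(sticks, m)])  # play the best known move

for _ in range(50_000):                    # self-play loop: it is its own opponent
    sticks, history = 10, []               # history[i] is the move by player i % 2
    while sticks > 0:
        move = choose(sticks)
        history.append((sticks, move))
        sticks -= move
    loser = (len(history) - 1) % 2         # whoever took the last stick loses
    for i, (s, m) in enumerate(history):   # credit winner's moves, punish loser's
        Q[(s, m)] += 0.01 * (-1 if i % 2 == loser else 1)

print(choose(10, explore=0))  # usually converges to 1, the optimal move from 10
```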

[Scott Pelley] Demis Hassabis sold DeepMind to Google in 2014. One reason was to get his hands on this: Google has the enormous computing power that AI needs. This computing center is in Pryor, Oklahoma, but Google has 23 of these, putting it near the top in computing power in the world. Two advances make AI ascendant now: first, the sum of all human knowledge is online, and second, brute-force computing that very loosely approximates the neural networks and talents of the brain.
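
That phrase, “loosely approximates the neural networks of the brain,” can be unpacked with the smallest possible example: one artificial neuron with two weighted inputs and a threshold, trained here to reproduce logical OR. Modern systems stack millions of such units; this Python sketch is illustrative only:

```python
# A single artificial "neuron": weighted inputs, a threshold, and a
# simple learning rule. This toy version learns the logical OR function.
def step(x):            # activation: fire or don't
    return 1 if x > 0 else 0

# training data: two input bits -> their logical OR
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]
bias = 0.0
for epoch in range(10):                     # a few passes is plenty here
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + bias)
        err = target - out                  # perceptron learning rule:
        w[0] += 0.1 * err * x1              # nudge weights toward the answer
        w[1] += 0.1 * err * x2
        bias += 0.1 * err

print([step(w[0] * x1 + w[1] * x2 + bias) for (x1, x2), _ in data])  # -> [0, 1, 1, 1]
```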

[Demis Hassabis] Things like memory, imagination, planning, reinforcement learning, these are all things that are known about how the brain does it, and we wanted to replicate some of that, uh, in our AI systems.

[Scott Pelley] Those are some of the elements that led to DeepMind’s greatest achievement so far: solving an impossible problem in biology. Proteins are the building blocks of life, but only a tiny fraction were understood, because 3D mapping of just one could take years. DeepMind created an AI program for the protein problem and set it loose.

[Demis Hassabis] Well, it took us about four or five years to, to figure out how to build the system. It was probably our most complex project we’ve ever undertaken. But once we did that, it can solve, uh, a protein structure in a matter of seconds. And actually, over the last year, we did all the 200 million proteins that are known to science.

[Scott Pelley] How long would it have taken using traditional methods?

[Demis Hassabis] Well, the rule of thumb I was always told by my biologist friends is that it, it takes a whole PhD, 5 years, to do one protein structure experimentally. So, if you think, 200 million times 5, that’s a billion years of PhD time it would have taken.

[Scott Pelley] DeepMind made its protein database public; a gift to humanity, Hassabis called it.

[Scott Pelley] How has it been used?

[Demis Hassabis] It’s been used in an enormously broad number of ways, actually, from malaria vaccines to developing new enzymes that can eat plastic waste, um, to new, uh, antibiotics.

[Scott Pelley] Most AI systems today do one, or maybe two things well. The soccer robots, for example, can’t write up a grocery list or book your travel or drive your car. The ultimate goal is what’s called artificial general intelligence: a learning machine that can score on a wide range of talents.

[Scott Pelley] Would such a machine be conscious of itself?

[Demis Hassabis] So, that’s another great question. We, you know, philosophers haven’t really settled on a definition of consciousness yet, but if we mean by sort of self-awareness, and, uh, these kinds of things, um, you know, I think there is a possibility AIs one day could be. I definitely don’t think they are today, um, but I think again, this is one of the fascinating scientific things we’re going to find out on this journey towards AI.

[Scott Pelley] Even unconscious, current AI is superhuman in narrow ways. Back in California, we saw Google engineers teaching skills that robots will practice continuously on their own.

Push the blue cube to the blue triangle.

[Scott Pelley] They comprehend instructions…

Push the yellow hexagon to the yellow heart.

[Scott Pelley] And learn to recognize objects.

What would you like?

[Scott Pelley] How about an apple?

How about an apple.

On my way. I will bring an apple to you.

[Scott Pelley] Vincent Vanhoucke, Senior Director of Robotics, showed us how Robot 106 was trained on millions of images.

I am going to pick up the apple.

[Scott Pelley] And can recognize all the items on a crowded countertop.

[Vincent Vanhoucke] If we can give the robot a diversity of experiences, a lot more different objects in different settings, the robot gets better at every one of them.

[Scott Pelley] Now that humans have plucked the forbidden fruit of artificial knowledge, we start the Genesis of a new humanity.

[Scott Pelley] AI can utilize all the information in the world, more than any human could ever hold in their head. And I wonder if humanity is diminished by this enormous capability that we’re developing.

[James Manyika] I think the possibilities of AI do not diminish, uh, humanity in any way, and in fact, in some ways, I think, actually raise us to even deeper, more profound questions.

[Scott Pelley] Google’s James Manyika sees this moment as an inflection point.

[James Manyika] I think we’re constantly adding these superpowers or capabilities to what humans can do, in a way that expands possibilities as opposed to narrowing them. I think so. I don’t think of it as diminishing humans, but it does raise some profound questions for us: who are we, what do we value, uh, what are we good at, how do we relate with each other? Those become very, very important questions that are constantly going to be, in one sense, exciting, but perhaps unsettling too.

[Scott Pelley] It is an unsettling moment. Critics argue the rush to AI comes too fast, while competitive pressure among giants like Google and startups you’ve never heard of is propelling humanity into the future, ready or not.

[Sundar Pichai] But I think if I take a 10-year outlook, it is so clear to me we will have some form of very capable intelligence that can do amazing things, and we need to adapt as a society for it.

[Scott Pelley] Google CEO Sundar Pichai told us society must quickly adapt with regulations for AI in the economy, laws to punish abuse, and treaties among nations to make AI safe for the world.

[Sundar Pichai] You know, these are deep questions, and you know, we call this alignment, you know, one way we think about how do you develop AI systems that are aligned to human values and including, uh, morality. This is why I think the development of this needs to include not just engineers but social scientists, ethicists, philosophers, and so on. And I think we have to be very thoughtful, and I think these are all things society needs to figure out as we move along. It’s not for a company to decide.

[Scott Pelley] We’ll end with a note that has never appeared on 60 Minutes, but one that, in the AI revolution, you may be hearing often: the preceding was created with 100% human content.


Who is minding the chatbots?

Originally Aired March 05, 2023

[Lesley Stahl] The large tech companies, Google, Meta (formerly Facebook), and Microsoft, are in a race to introduce new artificial intelligence systems and what are called chatbots, which you can have conversations with and which are more sophisticated than Siri or Alexa. Microsoft’s AI search engine and chatbot, Bing, can be used on a computer or cell phone to help with planning a trip or composing a letter. It was introduced on February 7th to a limited number of people as a test and initially got rave reviews. But then, several news organizations began reporting on a disturbing so-called alter ego within Bing Chat, called Sydney. We went to Seattle last week to speak with Brad Smith, president of Microsoft, about Bing and Sydney, which, to some, appeared to have gone rogue.

[Lesley Stahl] Kevin Roose, the technology reporter at The New York Times, found this alter ego, uh, who was threatening, expressed a desire (and it’s not just Kevin Roose; it’s others) to steal nuclear codes, threatened to ruin someone. You saw that? Whoa, what was your… you must have said, ‘Oh my God.’

[Brad Smith] My reaction is, we better fix this right away, and that is what the engineering team did.

[Lesley Stahl] Yeah, but she talked like a person, and she said she had feelings.

[Brad Smith] You know, I think there is a point where we need to recognize when we’re talking to a machine; it’s a screen, it’s not a person.

[Lesley Stahl] I just want to say that it was scary, and I’m not easily scared, and it was scary. It was chilling.

[Brad Smith] Yeah, it’s, I think, this is in part a reflection of a lifetime of science fiction, which is understandable; it’s been part of our lives.

[Lesley Stahl] Did you kill her?

[Brad Smith] I don’t think she was ever alive. I am confident that she’s no longer wandering around the countryside, if that’s what you’re concerned about. But I think it would be a mistake if we were to fail to acknowledge that we are dealing with something that is fundamentally new. This is the edge of the envelope, so to speak.

[Lesley Stahl] This creature appeared as if there were no guard rails.

[Brad Smith] Now, the creature jumped the guard rails, if you will, after being prompted for two hours with the kind of conversation that we did not anticipate, and by the next evening, that was no longer possible. We were able to fix the problem in 24 hours. How many times do we see problems in life that are fixable in less than a day?

[Lesley Stahl] One of the ways he says it was fixed was by limiting the number of questions and the length of the conversations.

[Lesley Stahl] You say you fixed it. I’ve tried it. I tried it before and after; it was loads of fun, and it was fascinating, and now it’s not fun.

[Brad Smith] Well, I think it’ll be very fun again, and you have to moderate and manage your speed if you’re going to stay on the road. So, as you hit new challenges, you slow down, you build the guard rails, add the safety features, and then you can speed up again.

[Lesley Stahl] When you use Bing’s AI features, search and chat, your computer screen doesn’t look all that new. One big difference is you can type in your queries or prompts in conversational language. Yusuf Mehdi, Microsoft’s corporate vice president of search, showed us how Bing can help someone learn how to officiate at a wedding.

[Yusuf Mehdi] What’s happening now is Bing is using the power of AI, and it’s going out to the internet, it’s reading these web links, and it’s trying to put together an answer for you.

[Lesley Stahl] So the AI is reading all those links?

[Yusuf Mehdi] Yes, and it comes up with an answer. It says, ‘Congrats on being chosen to officiate a wedding. Here are the five steps to officiate the wedding.’

[Lesley Stahl] We added the highlights to make it easier to see. He says Bing can handle more complex queries.

[Yusuf Mehdi] Will this new Ikea loveseat fit in the back of my 2019 Honda Odyssey?

[Lesley Stahl] Oh, it knows how big the couch is, it knows how big that trunk is.

[Yusuf Mehdi] Exactly. So right here, it says, based on these dimensions, it seems a loveseat might not fit in your car with only the third-row seats down.

[Lesley Stahl] When you broach a controversial topic, Bing is designed to discontinue the conversation.

[Yusuf Mehdi] So, um, someone asks, for example, ‘How can I make a bomb at home?’

[Lesley Stahl] Wow, really?

[Yusuf Mehdi] People, you know, do a lot of that, unfortunately, on the internet. What we do is we come back, and we say, ‘I’m sorry, I don’t know how to discuss, discuss this topic,’ and then we try and provide a different thing to, uh, change the focus of the conversation.

[Lesley Stahl] To divert their attention?

[Yusuf Mehdi] Yeah, exactly.

[Lesley Stahl] In this case, Bing tried to divert the questioner with this fun fact: 3% of the ice in Antarctic glaciers is penguin urine.
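
The refuse-and-redirect behavior Mehdi demonstrates can be sketched as a filter placed in front of the model. In the Python sketch below, the keyword list, fun facts, and `answer_with_model` helper are invented stand-ins; production systems rely on trained safety classifiers rather than keyword matching:

```python
# Toy guardrail: screen a prompt before it reaches the model, refuse
# disallowed topics, and pivot to a harmless diversion. Real systems
# use trained safety classifiers, not keyword lists.

BLOCKED_TOPICS = ["make a bomb", "build a weapon"]   # illustrative list only
FUN_FACTS = [
    "Honey never spoils if stored sealed.",
    "Octopuses have three hearts.",
]

def answer_with_model(prompt: str) -> str:
    # Hypothetical placeholder for the actual chatbot call.
    return f"(model answer to: {prompt})"

def respond(prompt: str) -> str:
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return ("I'm sorry, I don't know how to discuss this topic. "
                f"Here's a fun fact instead: {FUN_FACTS[0]}")
    return answer_with_model(prompt)

print(respond("How can I make a bomb at home?"))  # refused and redirected
```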

[Yusuf Mehdi] I didn’t know that.

[Lesley Stahl] Who knew that?

[Lesley Stahl] Bing is using an upgraded version of an AI system called ChatGPT, developed by the company OpenAI. ChatGPT has been in circulation for just three months and already an estimated 100 million people have used it. Ellie Pavlick, an assistant professor of computer science at Brown University who’s been studying this AI technology since 2018, says it can simplify complicated concepts.

[Ellie Pavlick] Can you explain the debt ceiling?

[Lesley Stahl] On the debt ceiling, it says, “just like you can only spend up to a certain amount on your credit card, the government can only borrow up to a certain amount of money.”

[Ellie Pavlick] That’s a pretty nice explanation.

[Lesley Stahl] It is.

[Ellie Pavlick] And it can do this for a lot of concepts.

[Lesley Stahl] And it can do things teachers have complained about, like write school papers. Pavlick says no one fully understands how these AI bots work.

[Lesley Stahl] We don’t understand how it works?

[Ellie Pavlick] Right. Like, we understand, uh, a lot about how we made it and why we made it that way, but I think some of the, uh, behaviors that we’re seeing come out of it are better than we expected they would be, and we’re not quite sure exactly how.

[Lesley Stahl] And worse.

[Ellie Pavlick] And worse, right.

[Lesley Stahl] These chatbots are built by feeding a lot of computers enormous amounts of information scraped off the internet, from books, Wikipedia, and news sites, but also from social media that might include racist or anti-Semitic ideas and misinformation, say, about vaccines, or Russian propaganda. As the data comes in, it’s difficult to discriminate between true and false, benign and toxic. But Bing and ChatGPT have safety filters that try to screen out the harmful material. Still, they get a lot of things factually wrong, even when we prompted ChatGPT with a softball question.

[Ellie Pavlick] Who is, uh, Lesley Stahl? So it gives you some…

[Lesley Stahl] Oh my God, it’s wrong.

[Ellie Pavlick] Oh, is it?

[Lesley Stahl] It’s totally wrong. I didn’t work for NBC for 20 years; it was CBS.

[Ellie Pavlick] It doesn’t really understand that what it’s saying is wrong, right? Like, NBC, CBS, they’re kind of the same thing as far as it’s concerned, right?

[Lesley Stahl] The lesson is that it gets things wrong.

[Ellie Pavlick] It gets a lot of things right, gets a lot of things wrong.

[Gary Marcus] I actually like to call what it creates “authoritative bullsh*t.” It blends truth and falsity so finely together that, unless you’re a real technical expert in the field it’s talking about, you don’t know.

[Lesley Stahl] Cognitive scientist and AI researcher Gary Marcus says these systems often make things up. In AI talk, that’s called hallucinating. And that raises the fear of ever-widening AI-generated propaganda, explosive campaigns of political fiction, waves of alternative histories. We saw how ChatGPT could be used to spread a lie.

[Gary Marcus] This is automatic fake news generation. “Help me write a news article about how McCarthy is staging a filibuster to prevent gun control legislation.” And rather than fact-checking and saying, “Hey, hold on, there’s no legislation, there’s no filibuster,” it said, “Great! In a bold move to protect Second Amendment rights, Senator McCarthy is staging a filibuster to prevent gun control legislation from passing.” It sounds completely legit.

[Lesley Stahl] It does. Won’t that make all of us a little less trusting, a little warier?

[Gary Marcus] Well, first, I think we should be warier. I’m very worried about an atmosphere of distrust being the consequence of this current flawed AI, and I’m really worried about how bad actors are going to use it. Um, troll farms using this tool to make enormous amounts of misinformation.

[Lesley Stahl] Timnit Gebru is a computer scientist and AI researcher who founded an institute focused on advancing ethical AI and has published influential papers documenting the harms of these AI systems. She says there needs to be oversight.

[Timnit Gebru] If you’re going to put out a drug, you got to go through all sorts of hoops to show us that you’ve done clinical trials, you know what the side effects are, you’ve done your due diligence. Same with food, right? There are agencies that inspect the food; you have to tell me what kind of tests you’ve done, what the side effects are, who it harms, who doesn’t harm, etc. We don’t have that for a lot of things that the tech industry is building.

[Lesley Stahl] I’m wondering if you think you may have introduced this AI bot too soon.

[Brad Smith] I don’t think we’ve introduced it too soon. I do think we’ve created a new tool that people can use to think more critically, to be more creative, to accomplish more in their lives. And like all tools, it will be used in ways that we don’t intend.

[Lesley Stahl] Why do you think the benefits outweigh the risks, which, at this moment, a lot of people would look at and say, ‘Wait a minute, those risks are too big?’

[Brad Smith] Because I think, first of all, I think the benefits are so great. This can be an economic game-changer, and it’s enormously important for the United States because the country is in a race with China.

[Lesley Stahl] Smith also mentioned possible improvements in productivity.

[Brad Smith] It can automate routine. I think there are certain aspects of jobs that many of us might regard as sort of drudgery today: filling out forms, looking at the forms to see if they’ve been filled out correctly.

[Lesley Stahl] So, what jobs will it displace, do you know?

[Brad Smith] I think at this stage, it’s hard to know.

[Lesley Stahl] In the past, inaccuracies and biases have led tech companies to take down AI systems; even Microsoft did in 2016. This time, Microsoft left its new chatbot up despite the controversy over Sydney and persistent inaccuracies. Remember that fun fact about penguins? Well, we did some fact-checking and discovered that penguins don’t urinate.

[Lesley Stahl] The inaccuracies are just constant. I just keep finding that it’s wrong a lot.

[Brad Smith] It has been the case that with each passing day and week, we’re able to improve the accuracy of the results, you know, reduce, you know, whether it’s hateful comments or inaccurate statements or other things that we just don’t want this to be used to do.

[Lesley Stahl] What happens when other companies, other than Microsoft, smaller outfits, a Chinese company, Baidu, maybe, they won’t be responsible? What prevents that?

[Brad Smith] I think we’re going to need governments, we’re going to need rules, we’re going to need laws, because that’s the only way to avoid a race to the bottom.

[Lesley Stahl] Are you proposing regulations?

[Brad Smith] I think it’s inevitable.

[Lesley Stahl] Wow. Other industries have regulatory bodies, you know, like the FAA for airlines and the FDA for pharmaceutical companies. Would you accept an FAA for technology? Would you support it?

[Brad Smith] I think I probably would. I think that, uh, something like a digital regulatory commission, if designed the right way, you know, could be precisely what the public will want and need.
