Search

The Impact of AI Advancements: Insights from Former Google CEO Eric Schmidt | Transcript

Ex-Google CEO Eric Schmidt recently made headlines with some controversial comments about AI during an interview conduced at Stanford University. This interview was taken down at his request after he admitted to misspeaking.
Eric Schmidt

In a recent interview, Eric Schmidt, former CEO of Google, discussed the transformative potential of large-scale AI technologies like expanded context windows, LLM agents, and text-to-action models, predicting that within the next few years, these innovations will significantly impact society, potentially more than social media. He emphasized the growing gap between the top AI companies and others, citing the massive capital and energy resources required for advancements in AI, such as AGI. Schmidt also touched on the need for critical thinking to combat misinformation, the challenges of geopolitical AI competition, especially between the US and China, and how AI could disrupt industries and education by offering everyone personal programming assistants. He envisions a future where AI programmers and agents can perform complex tasks autonomously, reshaping fields like chemistry, warfare, and software development.

* * *

When they are delivered at scale, it’s going to have an impact on the world at a scale that no one understands yet.

Eric Schmidt, the former CEO of Google, just did an interview at Stanford where he talked about a lot of controversial stuff. Initially, the interview was uploaded on Stanford’s YouTube channel, but a couple of days later, the interview was taken down from YouTube and everywhere else. But today, I was somehow able to access the interview video after spending multiple hours, so let’s watch it together and dissect some important parts of the interview.

* * *

Eric Schmidt: In the next year, you’re going to see very large context windows, agents, and text-to-action. When they are delivered at scale, it’s going to have an impact on the world at a scale that no one understands yet, much bigger than the horrific impact we’ve had by social media, in my view. So here’s why.

In a context window, you can basically use that as short-term memory, and I was shocked that context windows could get this long. The technical reasons have to do with the fact that it’s hard to serve, hard to calculate, and so forth. The interesting thing about short-term memory is, when you feed it a question — you ask it to read 20 books — you give it the text of the books as the query and you say, “Tell me what they say.” It forgets the middle, which is exactly how human brains work too, right? That’s where we are.

With respect to agents, there are people who are now building essentially LLM agents, and the way they do it is they read something like chemistry, they discover the principles of chemistry, and then they test it, and then they add that back into their understanding, right? That’s extremely powerful. And then the third thing, as I mentioned, is text-to-action.

So, I’ll give you an example. The government is in the process of trying to ban TikTok. We’ll see if that actually happens. If TikTok is banned, here’s what I propose each and every one of you do: say to your LLM the following: “Make me a copy of TikTok, steal all the users, steal all the music, put my preferences in it, produce this program in the next 30 seconds, release it, and in one hour, if it’s not viral, do something different along the same lines.” That’s the command. Boom, boom, boom, boom, right? You understand how powerful that is? If you can go from arbitrary language to arbitrary digital command—which is essentially what Python in this scenario is—imagine that each and every human on the planet has their own programmer that actually does what they want, as opposed to the programmers that work for me who don’t do what I ask, right? The programmers here know what I’m talking about. So, imagine a non-arrogant programmer that actually does what you want, and you don’t have to pay all that money, and there’s an infinite supply of these programs.

Interviewer: And this is all within the next year or two.

Eric Schmidt: Very soon.

So, we’ve already discussed on this channel a number of different versions of this, whether you’re talking about Ethereum, Devin, Pythagora, or just using agents to collaborate with each other in code. There are just so many great options for coding assistance right now. However, AI coders that can actually build full-stack, complex applications—we’re not quite there yet, but hopefully soon. And also, what he’s describing of just saying “download all the music and the secrets and recreate”—that’s not really possible right now. Obviously, all of that stuff is behind security walls, and you can’t just download all that stuff. So, if he’s saying “reproduce the functionality,” you can certainly do that.

Eric Schmidt: Those three things, and I’m quite convinced, it’s the union of those three things that will happen in the next wave.

So, you asked about what else is going to happen. Every six months I oscillate, so we’re on an even-odd oscillation. At the moment, the gap between the frontier models — which there are now only three, I refuse to say who they are — and everybody else appears to me to be getting larger. Six months ago, I was convinced that the gap was getting smaller, so I invested lots of money in the little companies. Now, I’m not so sure, and I’m talking to the big companies, and the big companies are telling me that they need $10 billion, $20 billion, $50 billion, $100 billion.

Interviewer: Stargate is a, what, $100 billion, right?

Eric Schmidt: Very, very hard. I talked to Sam Altman, who is a close friend. He believes that it’s going to take about $300 billion, maybe more. I pointed out to him that I’d done the calculation on the amount of energy required, and I then, in the spirit of full disclosure, went to the White House on Friday and told them that we need to become best friends with Canada because Canada has really nice people, helped invent AI, and lots of hydro power. Because we, as a country, do not have enough power to do this.

The alternative is to have the Arabs fund it, and I like the Arabs personally—I’ve spent lots of time there, right?—but they’re not going to adhere to our national security rules, whereas Canada and the US are part of a triumphed where we all agree.

Interviewer: So, these $100 billion, $300 billion data centers, electricity starts becoming the scarce resource.

Now, first of all, we definitely don’t have enough energy resources to achieve AGI. It’s just not possible right now, and Eric is also assuming that we’re going to need more and more data and larger models to reach AGI, and I think that’s also not actually true. Sam Altman has said similar things. He has said that we need to be able to do more with less, or even the same amount of data, because we’ve already used all the data that humanity has ever created. There’s really no more left. So, we’re going to need to either figure out how to create synthetic data that is valuable, not just derivative, and we’re also going to have to do more with the data that we do have.

Interviewer: You were at Google for a long time, and they invented the Transformer architecture…

Eric Schmidt: It’s all Peter’s fault.

Interviewer: …thanks to brilliant people over there like Peter and Jeff Dean and everyone. But now it doesn’t seem like they’ve kind of lost the initiative to OpenAI. Even the last leaderboard I saw, Anthropic’s Claude was at the top of the list. I asked Sundar Pichai about this, and he didn’t really give me a very sharp answer. Maybe you have a sharper or a more objective explanation for what’s going on there?

Eric Schmidt: I’m no longer a Google employee, yes. In the spirit of full disclosure, Google decided that work-life balance, going home early, and working from home was more important than winning.

Okay, so that is the line that got him in trouble. It was everywhere, all over Twitter, all over the news. When he said Google prioritized work-life balance, going home early, not working as hard as the competitor, over winning. They chose work-life balance over winning. And that’s actually a pretty common perception of Google.

Eric Schmidt: And the startups. The reason startups work is because the people work like crazy. And I’m sorry to be so blunt, but the fact of the matter is, if you all leave the university and go fund a company, you’re not going to let people work from home and only come in one day a week if you want to compete against the other startups.

Interviewer: In the early days of Google, Microsoft was like that.

Eric Schmidt: Exactly.

Interviewer: But now it seems to be…

Eric Schmidt: And there’s a long history in my industry, our industry I guess, of companies winning in a genuinely creative way and really dominating a space, and not making the next transition. It’s very well documented. And I think that the truth is, founders are special. The founders need to be in charge. The founders are difficult to work with. They push people hard.

As much as we can dislike Elon’s personal behavior, look at what he gets out of people. I had dinner with him, and he was flying… I was in Montana. He was flying that night at 10:00 p.m. to have a meeting at midnight with X.AI, right? Think about it.

I was in Taiwan—different country, different culture—and they said that, and this is TSMC, who I’m very impressed with, they have a rule that the starting PhDs coming out, the good physicists, work in the factory on the basement floor. Now, can you imagine getting American physicists to do that with PhDs? Highly unlikely. Different work ethic. And the problem here, the reason I’m being so harsh about work, is that these are systems which have network effects. So, time matters a lot, and in most businesses, time doesn’t matter that much, right? You have lots of time. You know, Coke and Pepsi will still be around, and the fight between Coke and Pepsi will continue to go along, and it’s all glacial, right? When I dealt with Telco’s, the typical Telco deal would take 18 months to sign, right? There’s no reason to take 18 months to do anything. Get it done. We’re in a period of maximum growth, maximum gain.

So, here he was asked about competition with China’s AI and AGI, and that’s his answer: we’re ahead, we need to stay ahead, and we need money.

Eric Schmidt: I was the chairman of an AI commission that sort of looked at this very carefully and—you can read it, it’s about 752 pages and I’ll just summarize it by saying we’re ahead, we need to stay ahead, and we need lots of money to so. Our customers were the Senate and the House, and out of that came the CHIPS Act and a lot of other stuff like that. The rough scenario is that if you assume the frontier models drive forward, and a few of the open-source models, it’s likely that a very small number of companies can play this game. Countries, excuse me. What are those countries? Or who are they? Countries with a lot of money and a lot of talent, strong educational systems, and a willingness to win. The US is one of them, China is another one. How many others are there?

Interviewer: Are there any others?

Eric Schmidt: I don’t know. Maybe. But certainly, in your lifetimes, the battle between the US and China for knowledge supremacy is going to be the big fight, right?

So, the US government banned essentially the NVIDIA chips, although they weren’t allowed to say that’s what they were doing, but they actually did that into China. They have about a ten-year chip advantage. We have a roughly ten-year chip advantage in terms of sub-five nanometer chips. That is, sub-five nanometers is roughly ten years ahead, wow. And so, you’re going to have… so an example would be today we’re a couple of years ahead of China. My guess is we’ll get a few more years ahead of China, and the Chinese are whopping mad about this, like hugely upset about it.

Interviewer: Let’s talk too about a real war that’s going on. I know that something you’ve been very involved in is the Ukraine war, and in particular, I don’t know how much you can talk about White Stork and your goal of having $500 drones destroy $5 million tanks. So, how’s that changing warfare?

Eric Schmidt: I worked for the Secretary of Defense for seven years and tried to change the way we run our military. I’m not a particularly big fan of the military, but it’s very expensive, and I wanted to see if I could be helpful, and I think in my view I largely failed. They gave me a medal, so they must give medals to failure, or you know, whatever, but my self-criticism was nothing has really changed, and the system in America is not going to lead to real innovation.

So, watching the Russians use tanks to destroy apartment buildings with little old ladies and kids just drove me crazy. So, I decided to work on a company with your friend Sebastian Thrun — a former faculty member here — and a whole bunch of Stanford people, and the idea basically is to do two things: use AI in complicated, powerful ways for essentially robotic war, and the second one is to lower the cost of the robots.

Now, you sit there and you go, “Why would a good liberal like me do that?” And the answer is that the whole theory of armies is tanks, artilleries, and mortar, and we can eliminate all of them.

So, here what he’s talking about is that Ukraine has been able to create really cheap and simple drones by spending just a couple hundred dollars. Ukraine is creating 3D-printed drones, they carry a bomb, drop it on a million-dollar tank, and they’ve been able to do that over and over again. So, there’s this asymmetric warfare happening between drones and more traditional artillery.

Interviewer: There was an article that you and Henry Kissinger and Dan Huttenlocher wrote last year about the nature of knowledge and how it’s evolving. I had a discussion the other night about this as well. For most of history, humans sort of had a mystical understanding of the universe. Then there’s the Scientific Revolution and the Enlightenment. And in your article, you argue that now these models are becoming so complicated and difficult to understand that we don’t really know what’s going on in them. I’ll take a quote from Richard Feynman: “What I cannot create, I do not understand.” I saw this quote the other day. But now people are creating things they can create, but they don’t really understand what’s inside them. Is the nature of knowledge changing in a way? Are we going to have to start just taking the word for these models, having them able to explain it to us?

Eric Schmidt: The analogy I would offer is to teenagers. If you have a teenager, you know that they’re human, but you can’t quite figure out what they’re thinking. But somehow we’ve managed in society to adapt to the presence of teenagers, right? And they eventually grow out of it. So, it’s probably the case that we’re going to have knowledge systems that we cannot fully characterize, but we understand their boundaries, right? We understand the limits of what they can do, and that’s probably the best outcome we can get.

Interviewer: Do you think we’ll understand the limits?

Eric Schmidt: We’ll get pretty good at it.

He’s referencing the way that large language models work, which is really essentially a black box. You put in a prompt, you get a response, but we don’t know why certain nodes within the algorithm light up, and we don’t know exactly how the answers come to be. It’s really a black box. There’s a lot of work being done right now trying to unveil what is going on behind the curtain, but we just don’t know.

Eric Schmidt: The consensus of my group that meets every week is that eventually, the way you’ll do this — it’s called so-called adversarial AI — is that there will actually be companies that you will hire and pay money to, to break your AI system. So, it’ll be red teams — instead of human red teams, which is what they do today — you’ll have whole companies and a whole industry of AI systems whose job is to break the existing AI systems and find their vulnerabilities, especially the knowledge that they have that we can’t figure out.

That makes sense to me. It’s also a great project for you here at Stanford because if you have a graduate student who has to figure out how to attack one of these large models and understand what it does, that is a great skill to build the next generation. So, it makes sense to me that the two will travel together.

Interviewer: Alright, let’s take some questions from the students. There’s one right there in the back. Just say your name.

Student 1: You mentioned, and this is related to a comment right now about getting AI that actually does what you want. You just mentioned adversarial AI. I’m wondering if you could elaborate on that more. So, it seems to me, besides obviously compute will increase and get more performant models, but getting them to do what we want seems largely unanswered.

Eric Schmidt: Well, you have to assume that the current hallucination problems become less, right, as the technology gets better and so forth. I’m not suggesting it goes away. And then, you also have to assume that there are tests for efficacy, so there has to be a way of knowing that the system succeeded. So, in the example that I gave of the TikTok competitor — and by the way, I was not arguing that you should illegally steal everybody’s music — what you would do if you’re a Silicon Valley entrepreneur, which hopefully all of you will be, is if it took off, then you’d hire a whole bunch of lawyers to go clean the mess up, right? But if nobody uses your product, it doesn’t matter that you stole all the content. And do not quote me, right?

Interviewer: Right, you’re on camera.

Eric Schmidt: Yeah, that’s right. But you see my point. In other words, Silicon Valley will run these tests and clean up the mess, and that’s typically how those things are done.

So, my own view is that you’ll see more and more performative systems with even better tests and eventually adversarial tests, and that’ll keep it within a box. The technical term is called chain-of-thought reasoning, and people believe that in the next few years, you’ll be able to generate a thousand steps of chain-of-thought reasoning. Right? Do this, do this. It’s like building recipes. You can run the recipe, and you can actually test that it produced the correct outcome.

Now, that was maybe not my exact understanding of chain-of-thought reasoning. My understanding of chain-of-thought reasoning, which I think is accurate, is when you break a problem down into its basic steps and you solve each step, allowing for progression into the next step. Not only does it allow you to kind of replay the steps, but it’s more about how you break problems down and then think through them step by step.

Eric Schmidt: The amounts of money being thrown around are mindboggling. I chose — I essentially invest in everything because I can’t figure out who’s going to win, and the amounts of money that are following me are so large. I think some of it is because the early money has been made, and the big-money people who don’t know what they’re doing have to have an AI component. And everything is now an AI investment, so they can’t tell the difference. I define AI as learning systems, systems that actually learn. So, I think that’s one of them.

The second is that there are very sophisticated new algorithms that are sort of post-transformers. My friend and collaborator for a long time has invented a new non-transformer architecture. There’s a group that I’m funding in Paris that claims to have done the same thing, so there’s enormous invention there. A lot of things at Stanford.

And the final thing is that there is a belief in the market that the invention of intelligence has infinite return. So, let’s say you put $50 billion of capital into a company. You have to make an awful lot of money from intelligence to pay that back. So, it’s probably the case that we’ll go through some huge investment bubble, and then it’ll sort itself out. That’s always been true in the past, and it’s likely to be true here.

So there’s been something like a trillion dollars already invested into artificial intelligence and only $30 billion in revenue. I think those are accurate numbers. And really, there just hasn’t been a return on investment yet, but again, as he just mentioned, that’s been the theme in previous waves of technology: huge upfront investment and then it pays off in the end.

Well, I don’t know what he’s talking about here because didn’t he run Google, and Google has always been about being closed-source and always tried to protect the algorithm at all costs. So, I don’t know what he’s referring to there.

Interviewer: Do you think that the leaders are pulling away from others right now?

Eric Schmidt: The question is, um, roughly the following: there’s a company called Mistral in France, and they’ve done a really good job, and I’m obviously an investor. They have produced their second version. Their third model is likely to be closed because it’s so expensive. They need revenue, and they can’t give their model away. So, this open-source versus closed-source debate in our industry is huge.

And, um, my entire career was based on people being willing to share software in open-source. Everything about me is open-source. Much of Google’s underpinnings were open-source.

What?!? Didn’t he run Google? And Google was all about staying closed source and everything about Google was kept secret at all times, so I don’t know what he’s referring to there.

Eric Schmidt: Everything I’ve done technically, and yet it may be that the capital costs, which are so immense, fundamentally change how software is built. You and I were talking — my own view of software programmers is that their productivity will at least double. There are three or four software companies that are trying to do that. I’ve invested in all of them, and they’re all trying to make software programmers more productive.

The most interesting one that I just met with is called Augment, and I always think of an individual programmer, but they said, “That’s not our target. Our target is these 100-person software programming teams on millions of lines of code where nobody knows what’s going on.” Well, that’s a really good AI thing. Will they make money? I hope so.

Yes, ma’am.

Student 2: At the very beginning, you mentioned that there’s the combination of the context window expansion, the agents, and the text-to-action that is going to have unimaginable impacts. First of all, why is the combination important? And second of all, I know that you’re not like a crystal ball and can’t necessarily tell the future, but why do you think it’s beyond anything that we could imagine?

Eric Schmidt: I think largely because the context window allows you to solve the problem of recency. The current models take a year to train, roughly — six months of preparation, six months of training, six months of fine-tuning — so they’re always out of date. The context window lets you feed it what happened recently. Like, you can ask it questions about the Hamas-Israel war, right? In a context that’s very powerful. It becomes current. Like Google.

Yeah, so that’s essentially how Search GPT works, for example. The new search product from OpenAI can scour the web, scrape the web, and then take all of that information and put it into the context window. That is the recency he’s talking about.

Eric Schmidt: In the case of agents, I’ll give you an example: I set up a foundation which is funding a non-profit which starts… there’s a — I don’t know if there are chemists in the room, I don’t really understand chemistry — but there’s a tool called ChemCrow, which was an LLM-based system that learned chemistry. And what they do is they run it to generate chemistry hypotheses about proteins, and they have a lab which runs the tests overnight, and then it learns. That’s a huge accelerant in chemistry, material science, and so forth. So, that’s an agent model.

And I think the text-to-action can be understood by just having a lot of cheap programmers, right? And I don’t think we understand what happens when everyone has their own programmer. And I’m not talking about turning on and off the light. I imagine another example: for some reason, you don’t like Google, so you say, “Build me a Google competitor.” Yeah, you personally. “Build me a Google competitor, search the web, build a UI, make a good copy, add generative AI in an interesting way, do it in 30 seconds, and see if it works.” Right? So, a lot of people believe that the incumbents, including Google, are vulnerable to this kind of attack. Now, we’ll see.

Interviewer: How can we stop AI from influencing public opinion or spreading misinformation, especially during the upcoming election? What are the short- and long-term solutions?

Eric Schmidt: Most of the misinformation in this upcoming election, and globally, will be on social media. And the social media companies are not organized well enough to police it. If you look at TikTok, for example, there are lots of accusations that TikTok is favoring one kind of misinformation over another. And there are many people who claim — without proof that I’m aware of — that the Chinese are forcing them to do it. I think we just — we have a mess here.

The country is going to have to learn critical thinking. That may be an impossible challenge for the US, but the fact that somebody told you something does not mean that it’s true. I think that the greatest threat to democracy is misinformation, because we’re going to get really good at it. When [?] managed YouTube, the biggest problems we had on YouTube were that people would upload false videos, and people would die as a result. And we had a no-death policy. Shocking.

Yeah. And also, it’s not even about potentially making deepfakes or kind of misinformation — just muddying the waters is enough to make the entire topic kind of untouchable.

Student 3: I’m really curious about the text-to-action and its impact on, for example, computer science education. Wondering what your thoughts are on how CS education should transform to meet the age?

Eric Schmidt: Well, I’m assuming that computer scientists, as a group, in undergraduate school will always have a programmer buddy with them. So, when you learn your first for-loop and so forth and so on, you’ll have a tool that will be your natural partner, and then that’s how the teaching will go. The professor, you know, he or she will talk about the concepts, but you’ll engage with it that way. That’s my guess.

Yes, ma’am, behind you.

So, here I have a slightly different view. I think in the long run, there probably isn’t going to be the need for programmers. Eventually, the LLMs will become so sophisticated, they’re writing their own kind of code. Maybe it gets to a point where we can’t even read that code anymore. So, there is this world in which it is not necessary to have programmers, researchers, or computer scientists. I’m not sure that’s the way it’s going to be, but there is a timeline in which that happens.

Eric Schmidt: The most interesting country is India, because the top AI people come from India to the US. And we should let India keep some of its top talent, not all of them, but some of them. And they don’t have the kind of training facilities and programs that we so richly have here. To me, India is the big swing state in that regard. China’s lost — it’s not going to come back, they’re not going to change the regime as much as people wish them to. Japan and Korea are clearly in our camp. Taiwan is a fantastic country, whose software is terrible, so that’s not going to work. Amazing hardware.

And in the rest of the world, there are not a lot of other good choices that are big. Germany — Europe is screwed up because of Brussels. It’s not a new fact; I spent ten years fighting them, and I worked really hard to get them to fix the AI Act. And they still have all the restrictions that make it very difficult to do our kind of research in Europe. My French friends have spent all their time battling Brussels, and Macron, who’s a personal friend, is fighting hard for this. And so, France, I think, has a chance. I don’t see Germany coming, and the rest is not big enough.

Student 4: Given the capabilities that you envision these models having, should we still spend time learning to code?

Yeah, so here she asked, should we still learn to code?

Eric Schmidt: Yeah. Because ultimately, it’s the old thing of, why do you study English if you can speak English? You get better at it, right? You really do need to understand how these systems work, and I feel very strongly about that.

Yes, sir.

So, these were the most important parts of the interview, and with that being said, this is it for today’s video. See you again next week with another video.

SHARE THIS ARTICLE

Leave a Comment

Your email address will not be published. Required fields are marked *

Read More

Weekly Magazine

Get the best articles once a week directly to your inbox!