Countries and companies are racing to build better and better AI models, with greater and greater capabilities. When asked whether they should proceed more cautiously, most agree – but warn: if we don’t get there first, our competitors will, and then the most ruthless will win.
If we can’t even trust each other, how can we trust an unknown alien intelligence?
Filmed at the Symposium on the Inclusive Development of AI on 24 March 2025, as part of the China Development Forum in Beijing.
* * *
Hello everyone. It’s an honor to be here, and thank you for the invitation.
I don’t have much time, so in this brief talk, I would like to raise three big questions.
First, what is AI?
Second, what is the danger of AI?
And third, how can humanity flourish in the era of AI?
So let’s begin with the question of what AI is.
There is so much hype around AI that the term is now inaccurately applied to almost every machine, and it is becoming difficult to know what it really means. So let me be very, very clear: AI does not mean automation. AI means agency.
AI isn’t a tool in our hands. AI is an agent. To be an AI, it is not enough for a machine to act automatically. It must also have the capacity to learn and change by itself, to make decisions by itself, and to invent new ideas by itself.
As a simple example, consider a coffee machine. If you press a button and the machine automatically makes you an espresso according to a preprogrammed procedure, this is not AI. The machine hasn’t learned or created anything new.
But suppose that as you approach the machine, even before you press any button, the machine says, “I’ve been monitoring you for several weeks. Based on everything I’ve learned about you and about many other humans, I predict that you would like an espresso. So, I already made you a cup.” That’s an AI. It learned something and decided something by itself.
And it’s really an AI if, on the following day, the machine announces, “I’ve now invented a new drink which I think you would like even better than espresso. Here, try it out. I’ve made you a cup.”
In addition to agency, the other key characteristic of AI is that it is an alien, nonorganic agent. Its intelligence is not human and not even organic. It makes decisions and invents ideas that would not occur to human beings.
A very famous example was the way that the AI AlphaGo defeated the human Go champion Lee Sedol in 2016. This game became rightly famous not just because an AI defeated a human Go master, but because, in order to win, AlphaGo invented new, alien strategies that had never occurred to human players in thousands of years of Go culture.
As long as AI invents new ways to play games or new kinds of coffee, it doesn’t seem very important. However, AI may soon invent new military and financial strategies, new kinds of weapons and currencies, and even entirely new ideologies and religions.
So now let’s move to the second question: what is the danger of AI?
Of course, AI has enormous positive potential, and it can help humanity in countless ways—from inventing new medicines to helping to prevent catastrophic climate change. But AI also poses many threats.
The basic problem with AI is that it is an alien agent and therefore unpredictable and untrustworthy. At the heart of the race to develop superintelligent AI, there is a paradox of trust. Humans find it difficult to trust other humans, but some of us nevertheless believe that we should trust the AIs.
When I travel around the world and meet the people who lead the development of AI, I routinely ask them two questions.
First, I ask why they are moving so fast despite the huge risks. And the response almost all of them give is: we agree that there are big dangers and it would be best to proceed with care and invest more in safety. However, if we slow down while our competitors don’t slow down, they will win the AI race, and the world will be dominated by the most ruthless people. We cannot trust our human competitors, so we must move as fast as possible.
Then I ask the second question: do you think you could trust the superintelligent AIs that you are developing? And the same people who just told me that they cannot trust their human competitors now assure me that they can trust the superintelligent AIs they are developing.
And this is such a paradox.
We have thousands of years of experience with human beings. We have a broad understanding of human psychology and biology, of the human craving for power, and of the forces that keep the pursuit of power in check.
We have also made considerable progress in finding ways to build trust between humans. A hundred thousand years ago, people lived in tiny bands of just a few dozen individuals and couldn’t trust anybody outside their band. Today, in contrast, there are nations like China with 1.4 billion citizens, and there are networks of cooperation connecting all 8 billion humans on the planet. Total strangers often grow the food that sustains us and invent the medicines that protect us.
Of course, we are far from completely solving the problem of human trust, but at least we understand the challenge that we are facing.
In contrast, we have almost no experience with AIs. We have just created the first prototypes. We already know that even primitive AIs can lie, manipulate, and adopt goals and strategies not foreseen by the human developers. We have no idea what will happen when millions of superintelligent AI agents interact with millions of humans. It is even more difficult to predict what will happen when millions of superintelligent AI agents interact with one another.
True, since at present it is humans who develop the AIs, we can try to design them in a way that will make them safe. But recall that a machine is an AI only if it is capable of learning and changing by itself. So no matter how humans design an AI originally, it can later change in radical and unpredictable ways.
One way to think about the AI revolution is by comparing it to an alien invasion from outer space. Suppose you are told that spaceships full of highly intelligent aliens are approaching Earth and will land on our planet by 2030. We hope these aliens will be friendly and will help us overcome cancer, prevent climate change, and build a flourishing and peaceful world. But most people intuitively understand that it would be dangerous to entrust our future to the goodwill of these aliens. Similarly, it is a huge gamble to assume that we can simply trust the AI agents we are developing to remain our obedient servants.
The humans who despair of trusting other humans, yet hope that it will be easier to trust the AIs, may be making a very big mistake.
So, let me move to the last question: how can humanity flourish in the era of AI?
The answer is simple. Together, humans can control AI. But if we fight one another, AI will control us. Therefore, we should build more trust between humans before we develop truly superintelligent AI agents.
Unfortunately, at present, we are doing exactly the opposite. All over the world, trust between humans is collapsing.
The crisis of trust results from a big misunderstanding. Too many countries think that to be strong means to trust no one and to be completely separated from the others. But complete separation is impossible. Indeed, in nature, complete separation is death.
Think about the human body as an example. Every minute we breathe in and out. In and out. Every breath we take is a small gesture of trust in what is outside us. We take air from the outside into our lungs, into our body, and later give it back to the universe. This trusting in-and-out movement is the rhythm of life. If we distrust everything outside us and therefore stop this in-and-out movement, we die.
This is true of entire nations too. Each nation is a different combination of traditions and ideas, but many of these traditions and ideas come from the outside—just like the air that we breathe.
China, for example, has given so much to other countries over thousands of years, from the ideas of Confucius and Laozi to tea, gunpowder, and printing. It has also received so much from other places—from the ideas of Buddha and Karl Marx to coffee, football, trains, and computers.
If the people of any nation restricted themselves only to the food, the games, and the ideas that originated in their own nation, their lives would be very poor, if not impossible.
Every human belongs to some group, but every human also belongs to the whole human species. And in the age of AI, if we forget our shared human legacies and lose trust in everything and everybody outside us, that will make us very easy prey for an out-of-control AI.
Too many people think that the legacies of history are mainly pain and fear. People read in history books about past wars, atrocities, and injustices, and as a result they cling to past pains and fear future ones. They look around at other people and other nations with nothing but anxiety.
While fear and pain are of course important for survival, and while they sometimes protect us from danger, nobody can survive on a diet of fear and pain alone. History teaches us that trust is more important than either.
Do you know why planet Earth is ruled by humans and not by chimpanzees or elephants? Not because humans are more intelligent. Humans rule the world because we know better than any other animal how to build trust with strangers and cooperate in very, very large numbers.
We have developed this ability over thousands of years. Now it is more important than ever. To survive and flourish in the age of AI, we need to trust other humans more than we trust AI.
Thank you.