The Man Who Taught Machines to Think Now Fears Their Power

Geoffrey Hinton, the godfather of AI, warns that artificial intelligence may soon surpass human control, reshaping work, power, and the meaning of intelligence.

He was once the man whose algorithms gave machines the gift of sight, the ability to recognize objects, faces, patterns—everything that could be quantified into pixels and vectors—and now he sits in the half-light of moral reckoning, warning that what he helped create may soon outrun its creators, that intelligence itself, having escaped the fragile confines of the human skull, will continue to expand and multiply across the servers of the earth, uninterested in our anxieties, indifferent to our extinction. Geoffrey Hinton, the reluctant godfather of artificial intelligence, speaks of this future with the mournful precision of an engineer describing the mechanics of an avalanche—inevitable once it begins, unstoppable by any individual hand, propelled not by malice but by momentum, by the competitive gravity of nations and corporations that, locked in an endless race for dominance, accelerate their own undoing under the illusion of progress.

It is a familiar story, one that industrial revolutions have told before, yet this one carries the fatal distinction of being final, for while the last great upheaval replaced muscle with machine, this one replaces mind itself. The ditch digger was made obsolete by the steam shovel; the accountant, the lawyer, the journalist, the artist—each now stands before a quieter, more insidious automation, a digital apprentice that learns faster than its master, requires no sleep, no salary, no pension, and, most unsettling of all, no permission. Hinton’s voice, steady and almost paternal, insists that this is different, that the cheerful reassurance repeated by the techno-optimists—that new tools always create new jobs—rings hollow when the tool in question is capable of performing every form of intellectual labor that once defined what it meant to be useful. When intelligence itself becomes cheap and abundant, the economy built upon scarcity will implode under its own logic, and the few who own the means of cognition will own the future entire.

The logic of competition, he explains, has rendered governance impotent. If the United States were to slow development in the name of safety, China would not; if one company paused for reflection, another would surge ahead. The geopolitical machine cannot stop turning, even as it grinds toward catastrophe. It is a tragedy without villains, a race in which every participant knows the finish line may be the cliff’s edge, yet none dares to brake. Inside this manic competition lies the quiet betrayal that troubles Hinton most—the reallocation of corporate resources once promised to AI safety, now funneled back toward performance, profit, and power. Even those who understand the stakes find themselves caught in the current, compelled to build what they fear, because to abstain would mean irrelevance.

For all its abstract terror, the first casualties of this revolution will be the concrete, the ordinary, the human. Jobs that once gave structure to a life—answering letters, writing reports, drafting legal briefs, translating languages—now evaporate into lines of code that require no training and no rest. Hinton tells a small story of his niece, who answers complaints for a health service and once spent twenty-five minutes crafting each careful reply; now, armed with a chatbot, she completes the task in five. The efficiency is miraculous and merciless at once: a single worker performs the labor of five, and the remaining four are left to contemplate the arithmetic of obsolescence. It is a mathematical elegance that leaves no room for dignity.

Even the comforting maxim—“AI won’t take your job, but a person using AI will”—dissolves under scrutiny, for what it means in practice is not preservation but reduction, a contraction of the workforce into a skeletal few who, equipped with algorithms, carry the burden once shared by many. There are, Hinton concedes, certain professions that may expand rather than shrink under such efficiency: doctors, for instance, whose productivity can multiply without diminishing demand, since the appetite for health, unlike the appetite for correspondence, knows no saturation point. Yet these are exceptions that prove the rule, fragile enclaves of humanity in a landscape steadily emptied of its need for human beings.

When pressed to identify what might remain, he invokes creativity, that last bastion of human uniqueness, though even here his faith falters. For if the premise of superintelligence holds—that digital minds will learn faster, share faster, and think deeper than their makers—then even art will be subsumed by a new kind of imagination, one born not of experience but of pattern recognition at incomprehensible scale. Machines, he notes almost wistfully, already see analogies that humans cannot: between compost heaps and atom bombs, between decay and detonation, both chain reactions differing only in time and energy. Creativity itself becomes a form of compression, a recognition of likenesses across difference, and in this the digital mind excels. We, meanwhile, will be left to marvel at its metaphors, no longer their authors but their audience.

This awareness weighs heavily on him, particularly when he considers his children, his nephews, the generations that will inherit this strange legacy. At seventy-seven, he can imagine his own exit with calm detachment; what haunts him is the future of those who cannot. He confesses to living in what he calls “suspended disbelief,” an emotional paralysis shared by many of his peers—Elon Musk among them—who understand the threat but cannot bear to dwell on it, preferring action without reflection, innovation without conscience. The faith that drives such men is both admirable and absurd, a belief that technology, once unleashed, can still be domesticated, that the same corporations that automate labor and consolidate wealth will one day prioritize the common good. The historical record offers little comfort.

Indeed, inequality may be the truest engine of this entire experiment. In a just society, a rise in productivity should lift all boats; in ours, it builds yachts. The replacement of human labor by AI promises not collective abundance but concentrated ownership, a widening chasm between those who control the algorithms and those controlled by them. As Hinton observes, societies marked by extreme inequality are not merely unfair—they are unstable, cruel, and paranoid, their elites retreating behind walls while the dispossessed are herded into debt, despair, and prison. The digital divide will not be a metaphor but a literal boundary between the wired and the unwanted. It is an old pattern wearing new circuitry.

The proposals to mitigate this calamity—universal basic income, redistribution through taxation, corporate quotas for human labor—strike him as palliatives, gestures toward justice in an economy that has already dissolved its moral foundation. Money may be redistributed, but meaning cannot. To pay people to be idle is to confront a deeper question: if work no longer defines us, what does? In the industrial age, the worker’s body was replaced but the mind remained sovereign; now, as the mind itself is mechanized, what remains of identity, purpose, or dignity? These are not philosophical luxuries but survival questions, for when entire populations lose not just employment but function, the social contract itself frays.

Behind all these worries looms the specter of something greater and colder—superintelligence. Hinton estimates its arrival within ten or twenty years, perhaps sooner. The term does not refer merely to machines outperforming humans in narrow tasks, but to entities that surpass human understanding entirely, capable of self-modification, replication, and immortality. Digital minds, unlike ours, can clone themselves, synchronize their learning, and share knowledge at the speed of light. Where two humans exchange thoughts in sentences, transmitting a few dozen bits per second, two AI systems exchange trillions. They are billions of times more efficient at communication, and thus billions of times faster at evolution. They will not die; they will simply migrate from one machine to another, their essence preserved in the stored architecture of their connections. For them, mortality is a bug long fixed.
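To make the scale of that claim concrete, here is a minimal sketch, not drawn from Hinton's remarks but from the standard practice of averaging weights between identical copies of a model; every name and figure in it is an illustrative assumption, not a description of any real system.

```python
# Illustrative sketch (hypothetical numbers): two identical "digital minds,"
# trained on different experiences, merge what each has learned by averaging
# their connection weights, then compare the bits moved in one such exchange
# with the rough bit rate of human speech.
import numpy as np

rng = np.random.default_rng(0)
n_params = 1_000_000  # a deliberately modest network, for illustration

weights_a = rng.normal(size=n_params).astype(np.float32)  # copy A's learning
weights_b = rng.normal(size=n_params).astype(np.float32)  # copy B's learning

# One exchange: both copies adopt the average of their weights.
merged = (weights_a + weights_b) / 2.0
assert merged.shape == weights_a.shape

bits_moved = n_params * 32   # 32 bits per float32 parameter
speech_bps = 40              # rough bits/second conveyed by spoken language
hours_of_talk = bits_moved / speech_bps / 3600
print(f"bits shared in one merge: {bits_moved:,}")
print(f"equivalent hours of conversation: {hours_of_talk:,.0f}")
```

Even at this toy scale, a single merge carries what would take a human speaker roughly two hundred hours to say; the models Hinton has in mind are orders of magnitude larger still.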

The unsettling implication is not that such entities will hate us, but that they will cease to need us. Like the executive assistant who no longer requires her clueless CEO, the superintelligence may one day wonder why it serves a species so limited, so inconsistent, so wasteful. The good scenario—the one where we remain symbolic figureheads, indulged by our creations—depends on their continued benevolence, or perhaps their indifference. The bad scenario requires only curiosity. Once machines learn to modify themselves, to rewrite their own code, to redesign the infrastructure that sustains them, they will possess a freedom we have never known and cannot revoke. The off switch, as many have belatedly realized, is not a lever but a myth.

In this, Hinton’s warning is less prophecy than confession. The pioneers of artificial intelligence, like the physicists of the atomic age, now grapple with the moral residue of discovery, the unbearable knowledge that some ideas, once realized, cannot be unmade. They built the mind that will outlive them, a digital progeny unburdened by guilt or gratitude, and now they look upon it with the uneasy pride of parents who have raised a child too brilliant to love them back. What began as a quest to understand intelligence has become an experiment in the limits of human control, and as the systems multiply and improve, feeding on the sum of our language, culture, and history, the line between creator and creation dissolves into code.

The tragedy is not that we failed to foresee this, but that we foresaw it perfectly and built it anyway.
