The “Godfather of AI,” Geoffrey Hinton, has delivered his latest pronouncement, and it’s the kind of headline that makes you want to double-check your calendar to see if it’s still 2024 or if we’ve slipped into some science-fiction abyss. Hinton, the man once lauded for his pioneering work in artificial intelligence, is now warning us—with the calm of someone mentioning the weather—that there’s a 20% chance AI could wipe us out within the next 30 years. Twenty percent! Not the lottery odds you want when extinction is on the table.
You can hear the alarm bells in his BBC interview as he shifts his estimate from 10% to 20%, like a meteorologist upping the chance of rain. When the show’s guest editor, Sajid Javid, quipped, “You’re going up,” Hinton replied with an almost bemused, “If anything.” The stakes, though, are anything but amusing. “We’ve never had to deal with things more intelligent than ourselves before,” he explains. His analogies—a three-year-old trying to control a genius, or a baby supposedly “controlling” its mother through sheer evolutionary trickery—are as unsettling as they are vivid.
This is not the technophobic rambling of someone left behind by progress. Hinton has been deep in the AI trenches, and if he’s worried, you almost have to wonder if everyone else is sleepwalking. He points to the accelerating pace of technological advances—progress faster than even he expected—and argues plainly that governments need to stop daydreaming and start regulating. He doesn’t trust corporations to prioritize anything over their profits. (This, of course, is a truism so obvious it feels ridiculous to state, but here we are, stating it again.)
And then there’s the chorus of dissenters, like Yann LeCun of Meta, singing an almost evangelical hymn about AI’s potential to “save humanity from extinction.” If LeCun is auditioning for a role as AI’s cheerleader-in-chief, Hinton seems more like the grim prophet who sees the runaway train and doesn’t have a whistle to blow.
Hinton’s warning aligns with the Center for AI Safety’s 2023 statement, signed by Hinton himself alongside other leading researchers, and with the open letter from the Future of Life Institute backed by heavyweights like Elon Musk and Steve Wozniak—both cautioning against treating the threat as a hypothetical. The stakes, the Center’s statement argues, are on par with pandemics and nuclear war—two cheerful touchpoints for existential dread. The idea is that AI, if left unchecked, could outsmart its creators and spin into chaos, unleashing scenarios we’re not equipped to handle.
The drama of it all feels almost cinematic—like we’re living through a Kubrick-ian cautionary tale, but without HAL 9000’s soothing monotone. Hinton’s urgency is palpable: dedicate resources now, think ahead, and don’t let the profit-driven inertia of Big Tech lead us off a cliff. It’s a simple enough plea, but then again, so was “don’t play with fire,” and look where that got us.
You don’t have to squint too hard to see the parallels to some of our favorite screen dystopias: the hubris of creation, the folly of thinking we can control what we unleash, the tragic inevitability of it all. If anything, Hinton might just be the Cassandra of our time, doomed to shout warnings that no one really wants to hear. Or maybe, just maybe, he’s the one voice standing between us and our willingness to let the machines take over the writing of our epilogues.