AI pioneer warns of superintelligence takeover within next two decades
Hinton's latest prediction suggests superintelligent AI could arrive sooner than previously estimated, with significant implications for humanity.

Nobel laureate Geoffrey Hinton, widely regarded as the "godfather of artificial intelligence," has revised his timeline for the arrival of superintelligent AI, now estimating it could emerge within the next 4 to 19 years. This prediction, shared during an in-depth interview with CBS's Brook Silva-Braga on April 26, 2025, represents a significant acceleration compared to his previous estimates from two years ago.
"AI has developed even faster than I thought," Hinton stated during the interview conducted at the Toronto offices of Radical Ventures. "In particular, they now have these AI agents which are more dangerous than AI that just answers questions because they can do things in the world."
When previously interviewed in March 2023, Hinton had suggested a longer timeline for superintelligent AI development. Now, he believes there's "a good chance it'll be here in 10 years or less." This acceleration reflects the rapid advancements in AI capabilities over the past two years, particularly in reasoning abilities.
"Previously what the large language models would do is they'd spit out one word at a time and that would be it," Hinton explained. "Now they spit out words and they're looking at the words they spit out. They will spit out words that aren't the answer to the question yet... It's called chain of thought reasoning."
This capability for self-reflection and reasoning represents a significant leap forward in AI development that has surprised even Hinton, who spent decades pioneering neural network technology. According to Hinton, many researchers from "old-fashioned AI" had claimed neural networks couldn't reason properly without logical frameworks, but "they were just utterly wrong."
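For readers who want a concrete picture of the "chain of thought" behavior Hinton describes, the sketch below contrasts a model that answers in a single pass with one prompted to write out intermediate steps before its final answer. It is only an illustration: the generate function is a hypothetical stand-in for any large language model API, not a specific system Hinton named.

```python
# Minimal sketch of the behavior Hinton describes. The generate() callable is a
# hypothetical stand-in for any large language model API.

def answer_directly(generate, question: str) -> str:
    # Older behavior: the model emits the answer in one pass,
    # with no visible intermediate reasoning.
    return generate(question)

def answer_with_chain_of_thought(generate, question: str) -> str:
    # Chain-of-thought behavior: the model is prompted to emit intermediate
    # reasoning first, then conditions on its own output before concluding.
    prompt = (
        f"{question}\n"
        "Think through the problem step by step, writing out each step, "
        "then give the final answer on a new line starting with 'Answer:'."
    )
    full_output = generate(prompt)
    # Everything before 'Answer:' is the visible reasoning trace, the
    # "words that aren't the answer to the question yet" in Hinton's phrase.
    return full_output.split("Answer:")[-1].strip()
```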
Hinton expressed serious concerns about the potential for superintelligent AI to eventually take control from humans. He places the probability of this scenario at "10 to 20%" - a figure he acknowledges is "just a wild guess" but consistent with his assessment of the risks.
"Most of the experts in the field would agree that if you consider the possibility that these things will get much smarter than us and then just take control away from us, just take over, the probability of that happening is very likely more than 1% and very likely less than 99%," Hinton said.
The AI pioneer emphasized that humanity has no historical experience with entities more intelligent than humans, making it difficult to predict how superintelligent AI might behave or what controls would be effective.
"How many examples do you know of less intelligent things controlling much more intelligent things?" Hinton asked rhetorically. "With a big gap in intelligence, there's very, very few examples where the more intelligent one isn't in control."
Hinton expressed strong criticism of current industry practices, particularly the release of AI model weights by companies like Meta and OpenAI's recently announced plans to do the same. He compared this practice to making nuclear materials widely accessible.
"These companies should not be releasing the weights because once you release the weights, you've got rid of the main barrier to using these things," Hinton warned. "The equivalent of fissile material for AI is the weights of a big model because it costs hundreds of millions of dollars to train a really big model."
According to Hinton, releasing these weights enables malicious actors to fine-tune powerful models for harmful purposes at a much lower cost, potentially in the range of a few million dollars instead of hundreds of millions.
Hinton also criticized major AI companies for prioritizing profits over safety. "If you look what the big companies are doing right now, they're lobbying to get less AI regulation. There's hardly any regulation as it is, but they want less because they want short-term profits," he said.
Potential benefits
Despite his concerns, Hinton acknowledged several potential benefits of advanced AI, including:
- Healthcare improvements: AI systems will become significantly better at diagnosing medical conditions and designing new treatments. "A family doctor who can integrate information about your genome with the results of all the tests on you and all the tests on your relatives, the whole history, and doesn't forget things. That would be much, much better already," Hinton noted.
- Educational advantages: AI tutors could potentially help people learn "three or four times as fast" by understanding exactly what concepts students struggle with and providing tailored explanations.
- Climate solutions: AI could contribute to designing better materials for renewable energy technologies and carbon capture, though Hinton expressed skepticism about some approaches due to energy considerations.
- Productivity gains: "Almost every company wants to predict things from data, and AI is very good at doing predictions," Hinton explained, suggesting widespread productivity improvements across industries.
Workforce disruption
Hinton has revised his previous position on job displacement, now seeing it as a major concern. "AI's got so much better in the last few years that... if I had a job in a call center, I'd be very worried. Or maybe a job as a lawyer or a job as a journalist or a job as an accountant," he stated.
He expressed concern that productivity gains might not be distributed equitably: "It ought to be that if you can increase productivity, everybody benefits. The people who are doing those jobs can work a few hours a week instead of 60 hours a week... But we know it's not going to be like that. We know what's going to happen is the extremely rich are going to get even more extremely rich, and the not very well-off are going to have to work three jobs."
Hinton emphasized the urgent need for greater investment in AI safety research by major companies. "We need people to put pressure on governments to insist that the big companies do serious safety research," he urged.
He suggested that companies should allocate "a significant fraction like a third" of their computing resources to safety research, a proportion much higher than current industry practice.
Among major AI companies, Hinton identified Anthropic as "the most concerned with safety," noting that many safety researchers who left OpenAI went to Anthropic. However, he expressed concern that Anthropic's investors might pressure the company to release products prematurely.
Nobel influence
Having received the Nobel Prize in Physics in 2024 for his pioneering work on neural networks, Hinton plans to use his enhanced credibility to sound the alarm about AI risks.
"I've talked mainly about that second threat [AI itself taking over] not because I think it's more important than the other threats, but because people thought it was science fiction. And I want to use my reputation to say no, it's not science fiction. We really need to worry about that," he explained.
Hinton emphasized that, unlike climate change, where the path to a solution is relatively clear, researchers don't yet know how to prevent superintelligent AI from potentially taking control. "The big companies aren't going to do that [serious safety research]. If you look what the big companies are doing right now, they're lobbying to get less AI regulation."
Public awareness challenge
Despite the potential existential risk, Hinton acknowledged the psychological difficulty of fully internalizing such concerns. "I don't despair, but mainly because even I find it very hard to take it seriously," he admitted. "It's very hard to get your head around the fact that we're at this very, very special point in history where in a relatively short time, everything might totally change – a change of a scale we've never seen before."
This cognitive challenge may explain the limited public response to AI risks so far. "I do notice even though people maybe are concerned, I've never seen a protest. There's no real political movement around this idea. The world is changing, and no one really seems to care that much," Hinton observed.
As a practical response to these concerns, Hinton mentioned spreading his savings across multiple banks due to the increasing risk of AI-powered cyberattacks potentially compromising financial institutions.
Timeline of AI development milestones according to Hinton
- Pre-2023: Hinton works at Google, helping develop neural network technology
- Late April 2023: Hinton resigns from Google after experiencing an "epiphany" about AI risks
- March 2023 - April 2025: AI capabilities advance "even faster" than Hinton expected
- October 2024: Hinton awarded the Nobel Prize in Physics for his pioneering work on neural networks
- April 2025: Hinton revises timeline, estimating superintelligent AI could emerge within 4-19 years
- Current estimate: "Good chance" superintelligent AI will arrive within 10 years or less
- 10-20% probability: Hinton's estimate of AI eventually taking control from humans