Geoffrey Hinton, the Nobel Prize-winning computer scientist widely recognised as the 'godfather of AI', has expressed a significant concern about the future of artificial intelligence, suggesting there is a 10 to 20 percent chance that AI could eventually take over from humanity. His comments were made during an interview broadcast by CBS News, which aired on 1 April, and have since gained considerable attention given his foundational role in the development of AI.
Hinton, who won the Nobel Prize in Physics last year for his pioneering work on neural networks, the machine learning models loosely inspired by the structure of the human brain, warns that advances in AI could produce systems whose intelligence surpasses human capability. He stated, "I'm in the unfortunate position of happening to agree with Elon Musk on this, which is that there's a 10 to 20 percent chance that these things will take over, but that's just a wild guess."
Elon Musk, chief executive of xAI, the company behind the chatbot Grok, has previously predicted that AI will become smarter than the collective human race by 2029. Musk has also foreseen a future in which AI displaces human jobs by performing tasks more efficiently.
In discussing the nature of AI, Hinton used the analogy of a tiger cub: it may seem harmless at first, but unless one can be certain it will not turn dangerous when fully grown, there is cause for concern. He remarked, "The best way to understand it emotionally is we are like somebody who has this really cute tiger cub. Unless you can be very sure that it's not gonna want to kill you when it's grown up, you should worry."
While current AI models exist largely as digital tools on phones and computers, the technology is now being integrated into physical robots. A recent example was showcased at Auto Shanghai 2025, where Chinese automaker Chery presented a humanoid robot resembling a young woman, capable of pouring drinks and interacting with people. According to Chinese state media, the robot is intended to assist customers and perform entertainment tasks.
Hinton believes AI's capabilities will soon extend far beyond such services, and he predicts revolutionary changes in sectors such as healthcare and education. "In areas like healthcare, they will be much better at reading medical images, for example," he said, explaining that an AI system can analyse millions of X-rays and learn from them, something no individual doctor could do. Hinton also envisions AI systems becoming highly skilled family doctors, able to draw on extensive medical histories to make accurate diagnoses.
In education, Hinton predicts AI will emerge as an exceptional private tutor, able to tailor explanations and lessons to an individual's precise misunderstandings. He stated, "These things, eventually, will be extremely good private tutors who know exactly what it is you misunderstand and exactly what example to give you to clarify it so you understand." He noted this could significantly accelerate learning, though it may disrupt traditional educational institutions.
Hinton also pointed to AI's potential role in addressing climate change, through the design of better batteries and improved carbon-capture technologies.
For many of these projections to be realised, AI must reach what experts call artificial general intelligence (AGI), a stage at which AI can reason intelligently across a broad range of tasks at or beyond human levels. Max Tegmark, a physicist at MIT who has studied AI extensively, has suggested that AGI could emerge within a relatively short timeframe. Hinton's own estimate places the arrival of AGI somewhere between five and twenty years from now.
Despite the promise of AGI, Hinton expressed concern about the approach to AI safety taken by leading companies such as OpenAI, Google, and xAI, criticising them for prioritising profits and lobbying for reduced AI regulation rather than investing sufficiently in safety research. "If you look at what the big companies are doing right now, they're lobbying to get less AI regulation. There's hardly any regulation as it is, but they want less," he said. He advocates dedicating up to a third of AI companies' computing power to safety research.
A particular point of contention for Hinton is Google's reversal of its earlier pledge not to use AI for military purposes. He criticised Google's decision to provide enhanced AI tools to the Israel Defence Forces following the attacks of 7 October 2023, as reported by The Washington Post in January.
Hinton is among the signatories of the 'Statement on AI Risk', a 2023 open letter warning that mitigating the risk of AI-related extinction should be a global priority akin to combating pandemics and nuclear war. Other prominent signatories include Sam Altman of OpenAI, Dario Amodei of Anthropic, and Demis Hassabis of Google DeepMind.
These developments and cautionary statements illustrate the complex and evolving landscape of artificial intelligence, reflecting both its transformative potential and the significant challenges it presents to humanity.
Source: Noah Wire Services