Geoffrey Hinton, a British-Canadian computer scientist renowned for his foundational work in artificial intelligence (AI), has in recent years voiced significant concerns about the rapid development of AI technologies and the risks they may pose. In 2024, Hinton, alongside John Hopfield, was awarded the Nobel Prize in Physics for pioneering contributions to machine learning, particularly the development of artificial neural networks modelled on the human brain. (reuters.com)
Following his departure from Google in 2023, Hinton has been vocal about the need for stringent safety measures in AI development. He has advocated for government intervention to ensure that AI systems are developed responsibly, emphasizing that relying solely on corporate profit motives is insufficient to guarantee safety. Hinton has called for regulations to prevent the misuse of AI, stating, "We need regulations to stop people using it for bad things, and we don’t appear to have those kinds of political systems in place at present." (jerseyeveningpost.com)
In December 2024, Hinton warned of the existential risks posed by advanced AI, estimating a 10-20% chance that AI could lead to human extinction within the next 30 years. He highlighted the unprecedented nature of creating entities more intelligent than humans, noting, "How many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples." (newseu.cgtn.com)
Hinton's concerns have been echoed by other AI experts. In August 2024, he co-authored a letter with Yoshua Bengio, Lawrence Lessig, and Stuart Russell, supporting California's SB 1047 bill, known as the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act." This legislation aimed to impose strict regulations on AI companies, including liability for damages and the requirement of a "kill switch" for uncontrollable systems. The bill sought to address significant risks posed by AI, such as access to biological weapons and cyberattacks. (time.com)
Despite support from influential figures like Hinton, the bill faced opposition from tech industry leaders who argued it could stifle innovation. In September 2024, California Governor Gavin Newsom vetoed the bill, expressing concerns that it could adversely impact the state's competitive edge in AI. He preferred addressing demonstrable risks rather than hypothetical ones and indicated plans to collaborate with AI experts on future regulations. (lemonde.fr)
Hinton's advocacy for AI safety has also extended to international platforms. In October 2023, he co-authored a consensus paper titled "Managing extreme AI risks amid rapid progress," which outlined a comprehensive plan combining technical research and proactive governance mechanisms to prepare for the transformative impact of advanced AI systems. (arxiv.org)
Through his continued efforts, Hinton remains a prominent voice in the discourse on AI safety, urging balanced development that harnesses AI's benefits while mitigating its potential risks.
Source: Noah Wire Services