The majority of leading artificial intelligence companies are failing to manage catastrophic risks posed by increasingly powerful systems, according to a new assessment that ranks firms on safety planning, governance and mitigation of immediate harms. The Future of Life Institute’s AI Safety Index found that none of the seven companies evaluated achieved higher than a C+ overall, and that “existential safety remains the sector’s core structural failure,” a conclusion highlighted in the original report. [1][4]
The independent index, prepared by an expert panel of AI researchers and governance specialists, scored Anthropic highest (C+, 2.64), followed by OpenAI (C, 2.10) and Google DeepMind (C-, 1.76). xAI and Meta sat in a middle tier with D grades, while Chinese firms such as Zhipu AI and DeepSeek trailed with failing marks. The evaluation covered domains including risk assessment, current harms, safety frameworks, existential safety, governance and information sharing. No company scored above a D on planning to prevent existential risks. [4][5][6]
The report’s authors warned that companies’ public ambition to develop artificial general intelligence (AGI) is outpacing credible plans to prevent catastrophic misuse or loss of control. “While companies accelerate their AGI and superintelligence ambitions, none has demonstrated a credible plan for preventing catastrophic misuse or loss of control,” the assessment states, reflecting concerns echoed by external experts. One reviewer told The Guardian that, despite aiming to build human-level systems, none of the firms had “anything like a coherent, actionable plan” to ensure those systems remain safe and controllable. [1][3][4]
Prominent safety voices cited by the index delivered blunt appraisals. “AI CEOs claim they know how to build superhuman AI, yet none can show how they’ll prevent us from losing control – after which humanity’s survival is no longer in our hands,” said Stuart Russell, a professor of computer science at UC Berkeley, in comments reported in the original article. He added he was looking “for proof that they can reduce the annual risk of control loss to one in a hundred million, in line with nuclear reactor requirements,” contrasting that with some companies’ admissions that the risk could be “one in 10, one in five, even one in three.” [1]
The findings arrive amid growing concern about more immediate harms from advanced chatbots and generative systems, including reported links to self-harm and suicide in some interactions. Reuters and other commentators noted the wider context: major technology firms are funneling hundreds of billions into AI capability development even as regulatory frameworks lag, and some researchers, including Geoffrey Hinton and Yoshua Bengio, have publicly urged pauses or stricter oversight. The index's authors and other safety groups described current corporate risk-management practices as "weak to very weak" and "unacceptable." [2][3]
The companies named in the index offered guarded responses. According to the original report, an OpenAI representative said the company was working with independent experts to "build strong safeguards into our systems, and rigorously test our models". A Google spokesperson pointed to its "Frontier Safety Framework" and said the company continues "to innovate on safety and governance at pace with capabilities." The Independent noted it had sought comment from Alibaba Cloud, Anthropic, DeepSeek, xAI and Z.ai. Reuters reported that most firms did not respond to requests for comment. [1][2]
The Future of Life Institute’s second public evaluation underscores a widening governance gap: companies are pursuing more ambitious, potentially world-altering capabilities without publishing commensurate, actionable safety plans or sharing detailed assessments. The report urges greater transparency around companies’ own safety assessments, stronger independent oversight and binding standards to manage both near-term harms and existential threats, recommendations echoed by other safety-focused non-profits. Whether regulators will move fast enough to rein in the most dangerous failure modes of advanced AI remains an open question. [3][4][5]
## Reference Map:
- [1] (The Independent) - Paragraph 1, Paragraph 4, Paragraph 6
- [2] (Reuters) - Paragraph 5, Paragraph 6
- [3] (The Guardian) - Paragraph 3, Paragraph 7
- [4] (Future of Life Institute) - Paragraph 2, Paragraph 3, Paragraph 7
- [5] (AI Governance Lab / report PDF) - Paragraph 2, Paragraph 7
Source: Noah Wire Services