At the recent RSA Conference in San Francisco, cryptography expert Bruce Schneier raised concerns about the integrity of corporate artificial intelligence (AI) models, arguing that these systems are increasingly tailored to benefit their creators rather than to serve consumers. In an interview with The Register, Schneier warned that, left unregulated, AI risks becoming another mechanism for commercial manipulation, much as search engines already are. He said, “I worry that it'll be like search engines, which you use as if they are neutral third parties but are actually trying to manipulate you. They try to kind of get you to visit the websites of the advertisers.”

During his keynote address, Schneier questioned the motivations behind AI recommendations, asking listeners to consider whether a chatbot suggests a specific airline or hotel because it is the best option for the customer, or because the suggestion is influenced by kickbacks. He argued that without transparent disclosure of the data models receive and the reasoning behind their decisions, bias in AI systems will remain difficult to detect and address.

To address these concerns, Schneier called for more rigorous regulatory frameworks. He commended the EU's AI Act, which entered into force in August 2024. This landmark legislation introduces tiered requirements based on the assessed risk level of AI systems, mandating that high-risk AI implementations maintain comprehensive technical documentation and transparency about their operational processes. Schneier expressed optimism about the act's global reach: “Because the EU is the world's largest trading bloc, the law is expected to have a significant impact on any company wanting to do business there.”

However, he noted that genuine legislative progress in the United States remains unlikely under the current administration. The problem of bias is further complicated by user preference, as people tend to gravitate towards systems that reinforce their existing viewpoints. To illustrate, Schneier pointed to AI assistants used by judges, who may choose tools that align with their own perspectives.

Emphasising the importance of transparency, Schneier argued that government and academia must play a role in creating alternatives to corporate AI. He expressed a desire for AI developed outside profit-driven enterprises, stating, “I think I just want non-corporate AI.” He pointed to Current AI, a public-private partnership launched in France in February 2025 that aims to build AI systems with greater accountability and a stronger societal focus. The initiative has initial funding of $400 million and is backed by multiple governments, reflecting a collaborative approach to AI development.

French President Emmanuel Macron remarked on the transformative potential of Current AI, asserting that it would enhance access to essential data and infrastructure, ultimately contributing to a diverse and innovative AI ecosystem within Europe.

Despite the advantages of alternative, publicly oriented AI models, Schneier anticipates resistance from corporate interests. He emphasised the urgent need for legislators to succeed where they faltered in regulating social media and search engines, noting, “We failed with social media. We failed with search, but we can do it with AI.” He argued that an effective regulatory framework must pair technological advancements with sound policy to safeguard consumer interests in the burgeoning field of artificial intelligence.

Source: Noah Wire Services