National, regional, and municipal leaders are increasingly enamoured with the potential of artificial intelligence (AI) to streamline government processes and enhance service delivery. However, a growing body of evidence suggests that this enthusiasm for generative AI tools is misplaced, revealing a stark disconnect between the technology's capabilities and the complex needs of public administration. For instance, a recent investigation found that a chatbot deployed by New York City advised landlords that they could discriminate against prospective tenants who require rental assistance, advice that contravenes fair housing laws and raises serious ethical concerns about the reckless deployment of AI in sensitive government roles.

In New York, Mayor Eric Adams introduced an “AI Action Plan” aimed at integrating various AI technologies into city operations. However, the flawed chatbot designed to assist residents has come under fire for disseminating erroneous and even illegal advice, such as sanctioning discrimination against low-income tenants. Such misinformation can empower unscrupulous landlords while undermining the legal protections intended to assist the most vulnerable residents. The episode reflects poorly on those in power and underscores a lack of accountability at odds with the principles of justice they ought to uphold.

California's situation is no better. Governor Gavin Newsom’s administration is deeply invested in AI, with the California Department of Tax and Fee Administration introducing a chatbot to assist call centre agents with state tax inquiries. While sold as an internal tool, deploying AI in roles traditionally performed by trained professionals raises legitimate concerns about the reliability of the information ultimately provided to the public. Furthermore, a pilot project addressing homelessness reveals a troubling reliance on AI for complex social issues better suited to human expertise, an alarming shift towards neglecting the human element in governance.

At the US border, machine translation tools used to process asylum claims have historically performed worse than human translators. Relying on AI for such critical processes can have catastrophic consequences: translation errors can jeopardise individuals' rights to asylum. As Ariel Koren from Respond Crisis Translation points out, the government's tendency to exploit minor discrepancies to justify deportations illustrates a dangerous trend that plays fast and loose with human rights in the name of expediency.

Meanwhile, the UK government has followed suit with an ambitious AI agenda of its own. In an early 2024 announcement, then-Prime Minister Rishi Sunak labelled generative AI the "greatest breakthrough of our time," promoting plans to deploy AI across sectors such as healthcare. Yet the implications for patient privacy and data security are deeply troubling. Recent research highlights patient concerns about the safeguarding of sensitive health information, particularly given the NHS's ties to companies such as Palantir, which have been embroiled in controversies over governmental surveillance.

The integration of AI within the judicial system further underscores the reckless abandon with which these technologies are being adopted. A judge recently relied on outputs from ChatGPT, describing the tool as “jolly useful.” This raises serious concerns about the reliability of AI in a judicial context, where outcomes can drastically affect individuals’ rights and freedoms. The UK Judiciary Office has issued warnings about the biases and limitations of these tools, yet their continued use suggests a disturbing erosion of judicial integrity, one that places political expediency above accountable governance.

Leaders such as Adams, Newsom, and Sunak have embraced generative AI with an overconfidence that dismisses the essential need for human oversight in government duties. Although they profess a commitment to ethical and responsible use, the practice of ceding significant governmental functions to opaque algorithms tells a different story: one in which the risks of technological reliance may far outweigh any perceived benefits. There is an urgent need to reassess how AI tools are integrated into processes that fundamentally affect public welfare, liberty, and equity. The complexities of governance, and the values of accountability and transparency, should not be sacrificed to unexamined technological optimism.

Source: Noah Wire Services