Government officials in cities like New York and states like California are increasingly turning to artificial intelligence as a solution for public sector challenges. However, recent investigations and reports suggest that this enthusiasm may come at a significant cost: the excitement surrounding generative AI tools masks critical flaws that can undermine public trust and harm the very citizens the technology is meant to serve.
In New York City, an investigation exposed troubling inadequacies in a chatbot designed to help residents with questions about local ordinances and tenant rights. The chatbot, part of a broader “AI Action Plan” launched by Mayor Eric Adams, reportedly told landlords they could refuse tenants who rely on rental assistance such as Section 8 vouchers, a clear violation of existing fair housing laws. It also suggested that employers could withhold tips from workers, revealing a concerning lack of oversight and accountability in how the tool was deployed. Intended as a helpful resource, the chatbot instead dispensed misinformation that could encourage unlawful practices and discrimination. Because it carries the authority of an official city government page, the potential for harm is magnified, particularly for already vulnerable populations.
The situation is similarly precarious in California, where Governor Gavin Newsom’s administration has embraced AI in a bid to keep the state at the forefront of the emerging field. Under Newsom’s directives, state agencies are experimenting with generative AI for tasks ranging from improving customer service in call centres to streamlining complex bureaucratic processes. While some proposals aim at transparency and efficiency, such as summarising documents or calculating tax credits, relying on AI for sensitive matters, including homelessness policy and tax guidance, raises serious questions about whether these systems can deliver on their promises without human oversight. Reports indicate that the potential benefits of generative AI are being prioritised without sufficient consideration of the attendant risks.
At the federal level, the use of AI in immigration processes adds another layer of complexity and concern. U.S. Customs and Border Protection has begun using machine translation tools to process asylum claims. This approach sets off alarm bells: an inaccurate translation could have dire consequences for an applicant, possibly resulting in return to a perilous situation in their home country. Experts warn that erroneous translations can jeopardise asylum claims, underscoring the danger of automating such critical processes without adequate human involvement.
Across the Atlantic, the UK government’s enthusiasm for AI also merits scrutiny. At a summit held in 2023, then-Prime Minister Rishi Sunak lauded AI as the “greatest breakthrough of our time”, signalling a deep commitment to integrating these technologies across government. Yet past experience has already sparked public backlash, most notably the 2020 algorithm used to assign A-level exam results, which downgraded many students’ grades and was abandoned after widespread outcry. That episode points to a pattern of governance that inadequately addresses the real-world implications of deploying AI in sensitive areas such as public health and justice. Plans to roll out chatbots across the National Health Service (NHS) for patient management and data transcription could add strain to an already overstretched healthcare system, risking privacy breaches and mishandled data.
In the courtroom, experimental use of tools like ChatGPT raises significant ethical and legal concerns. Judges who use AI to summarise complex legal theories without thorough accuracy checks risk undermining the integrity of judicial processes. While the UK’s Judicial Office has acknowledged the limitations of these tools, even casual endorsement carries risks, given the high stakes of legal decisions that affect individuals’ lives.
Although leaders like Adams, Newsom, and Sunak speak of the “ethical” and “responsible” application of AI, a more prudent path may lie in restraint. When government operations directly affect people’s rights and well-being, human oversight is paramount. The growing trend of delegating crucial tasks to AI systems, which are often trained on inherently biased historical data, poses a grave risk to public accountability and the fair administration of justice. Rather than uncritically adopting the latest technological fads, governments would be wiser to ensure that the foundational values of accountability and fairness remain intact.
As AI continues to advance, governments face a growing imperative to balance innovation with responsibility. Trust in public institutions is fragile, and the misapplication of AI could do lasting damage to public confidence, negating the purported benefits of these systems.
Source: Noah Wire Services