California has moved to tighten controls on artificial intelligence used by the state: according to the governor's office, companies supplying AI systems to government must demonstrate, before they can win contracts, that they have measures in place to prevent biased outcomes, civil rights harms and the distribution of illegal material. (Sources: Governor's press releases on AI policy and subsequent initiatives.)
The executive action tasks the Department of General Services and the California Department of Technology with developing vendor certification requirements on an accelerated timetable so procurement decisions incorporate assessments of model governance and risk mitigation. According to the state, the work is part of a broader effort to build a responsible, transparent approach to adopting AI in public services. (Sources: California executive order and related state AI directives.)
Under the direction, companies would need to attest that their systems include safeguards against the exploitation or dissemination of illegal content, measures to reduce harmful model bias, and protections against civil-liberties violations such as unlawful discrimination, unlawful surveillance, or infringement of the free exercise of rights. The state framed these steps as integral to ensuring that AI used by government does not erode legal or ethical safeguards. (Sources: Governor's AI executive materials; state statements on harms and governance.)
The move follows a string of state actions aimed at curbing malicious or deceptive uses of AI: in 2024 the governor approved laws addressing sexually explicit deepfakes and requiring watermarking of AI-generated content, and later measures strengthened online protections for children, including tougher penalties for those who profit from illegal manipulated media. Officials presented those laws as complementary to the procurement standards, targeting both supply-chain responsibility and consumer-facing harms. (Sources: California legislation on deepfakes and online child protections; subsequent executive statements.)
At the same time, state leaders have signalled a willingness to deploy generative AI where it can improve public services, from easing call-centre demand to supporting wildfire response and traffic management. The approach reflects a dual aim: to harness efficiency gains while imposing guardrails so technology does not amplify bias or enable abuse. (Sources: State announcements on GenAI deployments; launch of AI chatbot for wildfire resources.)
Policy advocates and industry groups have welcomed clarity around procurement but urged detailed, enforceable criteria and independent oversight to ensure attestations translate into demonstrable safety in practice. The administration has indicated it will draw on expert input as agencies finalise the certification framework. (Sources: Governor's AI initiative briefings; state calls for expert-led guidance.)
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [2], [3]
- Paragraph 2: [2], [3]
- Paragraph 3: [2], [3]
- Paragraph 4: [4], [5]
- Paragraph 5: [6], [7]
- Paragraph 6: [2], [3]
Source: Noah Wire Services