The rapid growth of artificial intelligence (AI) has prompted a flurry of legislative activity across the United States, with states introducing over 1,000 AI-related bills and enacting more than 160 laws. These range from broad model-level regulation, such as Colorado's, to targeted measures addressing areas like AI use in hiring or required disclosures from major AI companies. While state-level initiatives aim to address a range of concerns, this patchwork regulatory landscape presents significant risks to innovation. The multiplicity of compliance requirements could impose substantial costs on developers, especially smaller entities, complicating the deployment of AI technologies nationwide. Such a fragmented regulatory approach also threatens to disrupt the light-touch innovation environment that positioned the US as a global leader during the internet era.

AI innovation inherently transcends state borders, as the development, computational resources, and deployment of AI involve interstate commerce. This raises important federalism questions about the appropriate balance between state and federal oversight. Most AI-related challenges, from model development to safety assurance, extend beyond single-state boundaries, suggesting that a unified federal framework could better serve innovation while addressing user and societal concerns. State laws regulating AI models directly, such as Colorado's legislation, present particular problems by imposing static requirements on a rapidly evolving technology, with the potential to restrict availability of AI products nationwide. Conversely, states might play positive roles in areas more clearly confined to intrastate matters, such as establishing safeguards for civil liberties in state government AI use or updating laws to clarify AI-related liability.

Federal intervention has been considered to prevent a regulatory patchwork that could stymie AI development. Notably, in June 2025, a proposed ten-year moratorium on state-level AI regulation was introduced as part of a broader federal bill known colloquially as the "One Big Beautiful Bill." However, the Senate overwhelmingly rejected this measure, with critics wary that the moratorium would shield the industry from meaningful oversight amid perceived federal inaction. More recently, President Donald Trump has expressed support for federal preemption of state AI laws. According to reports, he is contemplating an executive order aimed at overriding state regulations that threaten to fragment the market. The draft order would direct the Attorney General to establish an AI Litigation Task Force to challenge state laws on constitutional grounds, including interference with interstate commerce. It would also empower the Department of Commerce to evaluate potentially harmful state laws and consider withholding broadband funding in response, signaling a strong federal stance against state-level AI regulations seen as burdensome.

California has been at the forefront of state-level AI regulation, exemplified by the landmark SB 53 law signed by Governor Gavin Newsom in September 2025. This legislation requires major AI companies (those with revenues exceeding $500 million) to publicly disclose their strategies for mitigating catastrophic risks posed by advanced AI, such as loss of human control or bioweapon development. The law, which carries fines of up to $1 million per violation, aims to address regulatory gaps left by federal lawmakers and establish California as a leader in responsible AI governance. Newsom had earlier vetoed a more prescriptive bill, however, concerned it would impose rigid requirements that could hinder the AI industry's growth; he instead favors collaboration with industry experts to develop nuanced safety guidelines.

In balancing federal and state roles in AI governance, policymakers should guard against undermining innovation through overly restrictive or inconsistent regulations. Many of the harms attributed to AI (fraud, discrimination, malicious use) could be addressed under existing legal frameworks without rushing AI-specific statutes, thereby allowing flexibility for the technology to evolve. Furthermore, states can support innovation indirectly by reforming related policy areas, such as energy regulation, which can facilitate technological progress.

An executive order approach to federal preemption may offer a quicker response, but it is vulnerable to reversal and lacks the permanence a legislative solution provides. Legislative action, by contrast, can be more carefully tailored to uphold federalism principles while preventing disruptive regulatory fragmentation. Lessons from earlier internet-era laws, such as the Internet Tax Freedom Act and Section 230, demonstrate how federal preemption can protect innovation while maintaining necessary oversight. Ultimately, any federal policy geared toward AI must carefully define its scope to prevent a patchwork of state mandates without ceding excessive control to administrative agencies.

As AI continues to develop with profound potential impacts across sectors, from medical advances to disaster response, the stakes for effective governance are high. The evolving debate underscores the necessity of crafting a balanced federal framework that preserves America’s leadership in AI innovation while safeguarding public interests from the risks posed by a chaotic landscape of state-by-state regulations.

📌 Reference Map:

  • [1] (Cato Institute) - Paragraphs 1, 2, 3, 4, 5, 6, 7, 8, 9
  • [2] (Cato Institute) - Paragraphs 1, 5
  • [3] (Reuters) - Paragraph 5
  • [4] (Reuters) - Paragraph 6
  • [5] (Time) - Paragraph 5
  • [6] (AP News) - Paragraph 7
  • [7] (AP News) - Paragraph 6

Source: Noah Wire Services