A sweeping federal proposal that would bar states and local governments from creating their own regulations on artificial intelligence for five years is nearing a crucial vote in Congress. Senator Ted Cruz (R-TX) and other Republican lawmakers are pushing to include the measure in the broader GOP budget bill currently before the Senate, aiming to secure its passage before a key July 4 legislative deadline.
High-profile tech leaders supporting the moratorium, like OpenAI CEO Sam Altman, Anduril founder Palmer Luckey, and prominent investor Marc Andreessen, argue that a fragmented regulatory landscape across states would hinder American innovation, especially at a critical juncture in technological competition with China.
In stark opposition, a diverse alliance—including most Democrats, many Republicans, Anthropic CEO Dario Amodei, labor organizations, AI safety nonprofits, and consumer advocates—warns that suspending state regulations would strip local governments of the ability to protect citizens from the potential dangers of AI. Critics insist that the moratorium would effectively grant powerful tech corporations unchecked operational freedom, diminishing accountability and oversight.
Amid the backlash, 17 Republican governors sent a letter last Friday urging Senate Majority Leader John Thune and House Speaker Mike Johnson to strip the moratorium language from the budget proposal. Originally crafted as a ten-year pause preventing states from regulating AI in any way, the moratorium was recently shortened to five years under a compromise between Cruz and Senator Marsha Blackburn (R-TN). Language was also added that attempts to exempt state laws addressing child safety, child sexual abuse material, and protections for individuals' name, image, and likeness, provided those protections don't impose "undue or disproportionate burdens" on AI development. Legal experts note, however, that this carve-out language remains unclear and its practical effect uncertain.
If enacted, the moratorium would supersede a number of existing state laws. California's AB 2013, for example, which requires companies to disclose the datasets used to train AI systems, and Tennessee's ELVIS Act, which safeguards musicians from unauthorized digital imitation, would both face federal preemption. Various state laws nationwide designed to curb AI-generated deceptive media in elections would be similarly threatened.
The measure has advanced by tying the moratorium to broadband infrastructure funding. Specifically, states receiving funds under the $42 billion Broadband Equity Access and Deployment (BEAD) program would be required to comply with the AI restrictions or risk losing federal broadband funds. Cruz insists that revised language limits this condition to the $500 million in new BEAD funds provided by the current bill, though critics contend that the text also puts pre-existing broadband funding commitments in jeopardy.
Opponents vigorously dispute tech executives' claim that complying with an array of differing state regulations is excessively burdensome, pointing out that large tech firms already navigate complex, state-by-state regulatory frameworks on other issues. Critics argue instead that the moratorium is designed to sidestep oversight altogether, particularly since federal inaction has left the country without any national AI regulations. Dario Amodei publicly criticized the moratorium as too blunt an instrument, warning that blocking state action in the absence of federal standards would leave America dangerously exposed.
Significant Republican pushback has further complicated matters. Senator Josh Hawley, citing concerns about the erosion of states' rights, has publicly criticized the measure, as has Blackburn despite her role in negotiating the shorter five-year term. Even Rep. Marjorie Taylor Greene has threatened to vote against the overall budget bill if the AI moratorium remains in place.
Public sentiment appears largely opposed to reducing regulatory oversight. Recent Pew Research polling finds that approximately 60% of Americans believe government protections around AI do not currently go far enough, and that the public is more worried about insufficient oversight than about excessive regulation.
As the Senate prepares for a lengthy series of crucial amendment votes on Tuesday, senators from both parties will have an opportunity to shape the future of AI policy. Republicans are expected to approve the Cruz-Blackburn amendment along party lines, while Democrats will seek to remove the AI moratorium altogether with an amendment of their own. How lawmakers ultimately settle this debate carries meaningful implications both for innovation in the technology sector and for AI's relationship with society.