AI Accountability vs. Innovation: New York’s Secret Weapon Awaiting Governor’s Nod

New York state lawmakers passed legislation on Thursday targeting the potential dangers of large-scale artificial intelligence systems, specifically disasters caused by frontier AI models that could result in more than 100 casualties or over $1 billion in damages. Named the RAISE Act, the bill marks a significant achievement for AI safety advocates, who had previously faced setbacks as Silicon Valley and the Trump administration emphasized rapid innovation over regulatory oversight.

Backed by renowned AI researchers, including Nobel laureate Geoffrey Hinton and pioneering scientist Yoshua Bengio, the RAISE Act seeks to establish America's first legally binding transparency standards for major AI laboratories. The bill now awaits action from New York Governor Kathy Hochul, who can sign it into law, send it back with requested amendments, or veto it entirely.

Should Governor Hochul sign the bill, the world's leading AI companies, including OpenAI, Google, and Anthropic, would be required to provide detailed safety and security assessments of their most advanced AI models. Labs would also have to report safety incidents, such as concerning model behavior or the theft of an AI model. Non-compliance would allow New York's Attorney General to impose civil penalties of up to $30 million.

Senator Andrew Gounardes, the bill's lead sponsor, stressed in an interview that the RAISE Act is intentionally written to apply narrowly to the largest and most resource-intensive AI firms, rather than to burden AI innovation broadly. Covered companies are defined under the legislation as those whose AI models were trained using more than $100 million in computing resources and are made available to users in New York state.

Silicon Valley and tech industry representatives have voiced stiff opposition, characterizing the regulations as burdensome and potentially harmful to technological progress. A commonly raised concern is that restrictive measures could prompt major AI developers to withhold their products from the New York market altogether. New York State Assemblymember Alex Bores, co-sponsor of the RAISE Act, dismissed these criticisms as unfounded: because New York is the third-largest state economy in the U.S., he argues, the economic incentive for tech giants to stay is clear and substantial.

Anthropic, an AI lab focused on safety and responsible development that has recently called for rigorous federal guidelines, has expressed reservations about the RAISE Act's possible unintended consequences for smaller developers. Addressing that critique, Senator Gounardes reiterated that the bill explicitly excludes smaller organizations and academic research labs from its regulatory framework. Other AI industry leaders, including OpenAI, Google, and Meta, have so far declined to comment publicly on the measure.

The New York legislation resembles California's SB 1047, a similar bill that was ultimately vetoed after drawing extensive criticism for conditions perceived as stifling innovation. Proponents of the RAISE Act, by contrast, maintain that their measure strikes a balance, establishing essential transparency standards without hindering future advances in AI technology.

Ultimately, while the legislative move in New York marks a prominent step forward in U.S. AI regulation, its future depends upon Governor Hochul’s decision in the coming weeks.
