
NY AI Safety Bill Crumbles as Universities Court Tech Giants

NY AI safety bill defanged as NYU, Dartmouth back industry ties


New York's ambitious AI safety legislation is quietly transforming, and not in the way its original drafters intended. The RAISE Act, once positioned as a stringent regulatory framework for artificial intelligence, now appears to be softening under pressure from academic institutions eager to maintain cozy relationships with tech industry giants.

Universities have emerged as unexpected brokers in this regulatory dance, strategically aligning themselves with AI companies through lucrative partnerships and funding arrangements. Their involvement suggests a complex web of financial incentives that could fundamentally reshape how AI technologies are developed and governed.

The shifting landscape reveals a nuanced dynamic: academic institutions are no longer neutral observers, but active participants in the AI ecosystem. Their growing ties to companies like OpenAI and Anthropic raise critical questions about potential conflicts of interest and the independence of technological oversight.

Behind closed doors, a quiet negotiation is unfolding, one that could determine the future of AI regulation in New York and potentially set precedents for other states watching closely.

The ties are concrete. In 2023, OpenAI funded a journalism ethics initiative at NYU. Dartmouth announced a partnership with Anthropic earlier this month. A Carnegie Mellon University professor currently serves on OpenAI's board, and Anthropic has funded programs at Carnegie Mellon.

The initial version of the RAISE Act stated that developers must not release a frontier model "if doing so would create an unreasonable risk of critical harm," which the bill defines as the death or serious injury of 100 or more people, or $1 billion or more in damages to rights in money or property, stemming from the creation of a chemical, biological, radiological, or nuclear weapon.

That definition also extends to harm caused by an AI model that "acts with no meaningful human intervention" in conduct that "would, if committed by a human," fall under certain crimes. In amending the bill, Governor Kathy Hochul also pushed back the deadline for disclosing safety incidents and lessened the fines, among other changes.

The AI Alliance, an industry consortium, has previously lobbied against AI safety policies, including the RAISE Act, California's SB 1047, and President Biden's AI executive order.

It states that its mission is to "bring together builders and experts from various fields to collaboratively and transparently address the challenges of generative AI and democratize its benefits," especially via "member-driven working groups." Some of the group's projects beyond lobbying have involved cataloguing and managing "trustworthy" datasets and creating a ranked list of AI safety priorities. The AI Alliance wasn't the only organization opposing the RAISE Act with ad dollars. As The Verge wrote recently, Leading the Future, a pro-AI super PAC backed by Perplexity AI, Andreessen Horowitz (a16z), Palantir cofounder Joe Lonsdale, and OpenAI president Greg Brockman, has spent money on ads targeting the cosponsor of the RAISE Act, New York State Assemblymember Alex Bores.

Taken together, these relationships show how thoroughly tech giants have embedded themselves in academic ecosystems, and how far universities like NYU and Dartmouth have drifted from the role of neutral observer. OpenAI's funding of NYU's journalism ethics initiative, Dartmouth's collaboration with Anthropic, and Carnegie Mellon faculty sitting on OpenAI's board all blur the line between academic research and corporate influence, raising questions about the independence of research and the potential for compromised regulatory frameworks.

Meanwhile, the RAISE Act's initially strong language about preventing "critical harm" continues to weaken. With the institutions best positioned to provide independent oversight increasingly receptive to industry partnerships, meaningful AI safety regulation becomes harder to achieve. The full implications remain uncertain, but the boundaries between academic research, corporate interests, and AI safety regulation are growing more porous, and universities' embrace of these partnerships could shape AI governance well beyond New York.


Common Questions Answered

How are universities influencing the NY AI Safety Bill (RAISE Act)?

Universities are strategically aligning with tech companies through partnerships and funding arrangements, and those ties appear to be softening the RAISE Act's originally strict regulatory framework. Collaborations such as OpenAI's funding of an NYU initiative and Dartmouth's partnership with Anthropic potentially undermine independent AI safety oversight efforts.

What specific partnerships demonstrate the tech industry's influence on academic institutions?

Several notable partnerships highlight the growing connection between tech companies and universities, including OpenAI funding a journalism ethics initiative at NYU and Dartmouth announcing a partnership with Anthropic. Additionally, a Carnegie Mellon University professor currently serves on OpenAI's board, and Anthropic has funded programs at Carnegie Mellon.

What was the original intent of the RAISE Act's language about AI model release?

The initial version of the RAISE Act proposed that developers must not release a frontier AI model if doing so would create an unreasonable risk of critical harm, defined as the death or serious injury of 100 or more people, or $1 billion or more in damages, stemming from a chemical, biological, radiological, or nuclear weapon. This language reflected a stringent approach to AI safety regulation before the bill began to soften under academic and industry pressure.