Policy & Regulation

NY AI safety bill defanged as NYU, Dartmouth back industry ties


New York’s much‑talked‑about AI safety bill has lost much of its bite, and the shift has raised eyebrows across the state’s tech and academic circles. Lawmakers originally framed the legislation as a safeguard against unchecked artificial‑intelligence development, yet recent amendments have softened enforcement mechanisms and trimmed reporting requirements. Critics point to a growing chorus of university leaders and researchers who have been quietly aligning themselves with industry players.

While the bill’s sponsors claim the changes reflect pragmatic compromise, the timing coincides with a flurry of university‑industry collaborations that blur the line between independent oversight and corporate interest. Here’s the thing: the same institutions now stepping into the policy conversation have also been receiving direct support from the very companies the bill was meant to regulate. The details below illustrate how those ties are shaping the conversation.


In 2023, OpenAI funded a journalism ethics initiative at NYU. Dartmouth announced a partnership with Anthropic earlier this month, a Carnegie Mellon University professor currently serves on OpenAI's board, and Anthropic has funded programs at Carnegie Mellon. The initial version of the RAISE Act stated that developers must not release a frontier model "if doing so would create an unreasonable risk of critical harm," which the bill defines as the death or serious injury of 100 people or more, or $1 billion or more in damages to rights in money or property stemming from the creation of a chemical, biological, radiological, or nuclear weapon.

That definition also extends to harm caused by an AI model that "acts with no meaningful human intervention" and that "would, if committed by a human," constitute certain crimes. Governor Kathy Hochul also extended the deadline for disclosing safety incidents and lessened fines, among other changes. The AI Alliance has previously lobbied against AI safety policies, including the RAISE Act, California's SB 1047, and President Biden's AI executive order.

It states that its mission is to "bring together builders and experts from various fields to collaboratively and transparently address the challenges of generative AI and democratize its benefits," especially via "member-driven working groups." Some of the group's projects beyond lobbying have involved cataloguing and managing "trustworthy" datasets and creating a ranked list of AI safety priorities. The AI Alliance wasn't the only organization opposing the RAISE Act with ad dollars. As The Verge wrote recently, Leading the Future, a pro-AI super PAC backed by Perplexity AI, Andreessen Horowitz (a16z), Palantir cofounder Joe Lonsdale, and OpenAI president Greg Brockman, has spent money on ads targeting the cosponsor of the RAISE Act, New York State Assemblymember Alex Bores.


Did the ad spending actually shift public opinion? The campaign reportedly reached more than two million viewers, but its precise effect on the legislative outcome remains unclear. The universities, for their part, have not denied their industry ties.

OpenAI's 2023 funding of a journalism ethics initiative at NYU, Dartmouth's recent partnership with Anthropic, and a Carnegie Mellon professor's seat on OpenAI's board illustrate a web of connections that complicates any claim of independent academic oversight. Anthropic's contributions to Carnegie Mellon programs add another layer of industry-academic overlap. The RAISE Act, initially broader in scope, was trimmed in its final form, but whether the ad campaign directly caused that change is uncertain.

Critics note that the spending was modest, between $17,000 and $25,000, yet the reach suggests a coordinated effort. Without a transparent accounting of how the money was spent, the true aims of the campaign remain ambiguous. What this episode reveals is a pattern of alignment between tech firms and higher-education institutions that merits closer scrutiny.


Common Questions Answered

How did the recent amendments change the enforcement mechanisms of New York's AI safety bill?

The amendments lessened fines for non-compliance and extended the deadline for disclosing safety incidents, among other changes. Together, those revisions make the consequences of releasing a risky frontier model less severe than under the original bill.

What reporting requirements were trimmed from the original version of the RAISE Act?

The original RAISE Act set a tighter deadline for disclosing safety incidents; the amendments extended that deadline, among other changes that trimmed reporting requirements. The revision eases the administrative burden on AI developers but reduces transparency.

Which universities have recent financial or partnership ties to major AI firms mentioned in the article?

NYU received funding from OpenAI for a journalism ethics initiative, Dartmouth entered a partnership with Anthropic, a Carnegie Mellon professor serves on OpenAI's board, and Anthropic funds programs at Carnegie Mellon. These connections illustrate growing industry influence within academia.

What definition of "critical harm" did the initial RAISE Act use for frontier model releases?

The initial RAISE Act barred the release of a frontier model that would create an unreasonable risk of "critical harm," defined as the death or serious injury of 100 or more people, or $1 billion or more in damages, stemming from the creation of a chemical, biological, radiological, or nuclear weapon. The definition also covered a model that acts with no meaningful human intervention in a way that would constitute certain crimes if committed by a human.