DeepMind Researchers Demand Safety from ICE Agents
Google DeepMind staff request physical safety from ICE agents in offices
Why is an AI research lab suddenly sounding an alarm about immigration enforcement? Over the past few months, several DeepMind engineers have circulated internal notes asking senior leadership to address what they describe as "physical safety" concerns. The requests cite federal officials showing up at the office and question whether standard security protocols are enough.
A group of employees has been emailing the company's security chief, urging clearer guidance on how to handle visits from U.S. Immigration and Customs Enforcement (ICE) officers. The messages, which WIRED obtained, reference an alleged attempt by a federal agent to enter the company's Cambridge, Massachusetts, building in the fall.
According to the correspondence, Google's head of security and risk operations has been looped in, but the staff's tone suggests they feel the response so far falls short. The tension between cutting-edge AI work and everyday safety worries is now spilling into internal discussions.
---
Concerns about ICE agents entering Google's offices are not unfounded. In a message obtained by WIRED, a separate Google DeepMind staffer raised concerns about a federal agent's alleged attempt to enter the company's Cambridge, Massachusetts, office in the fall. Google's head of security and risk operations responded to this message to clarify what had happened.
They noted that an "officer arrived at reception without notice" and that the agent was "not granted entry because they did not have a warrant and promptly left." Google did not respond to a request for comment prior to publication.
Google is one of many Silicon Valley firms that rely on thousands of highly skilled foreign workers, many of whom are in the United States on visas. In light of the Trump administration's immigration crackdown, these firms have had to offer increased protections for many of their workers.
Late last year, Google and Apple advised employees on visas not to leave the country after the White House toughened its vetting of visa applicants. At that time, Silicon Valley leaders were not shy about defending visa programs, which have allowed the United States to bring in top talent from around the globe. But AI executives have appeared hesitant to speak out about the federal government's latest immigration actions.
Beyond Google, top executives from Silicon Valley firms, including OpenAI, Meta, xAI, Apple, and Amazon, have yet to publicly comment on ICE activities. OpenAI CEO Sam Altman addressed the Minnesota incident in an internal message to the company, according to DealBook, telling employees that "what's happening with ICE is going too far." Anthropic executives have proved to be an exception.
DeepMind employees have asked for concrete safety measures. Their request is simple: policies that keep ICE agents out of office spaces. The timing follows the recent shooting of Minneapolis nurse Alex Pretti by a federal agent, an incident that appears to have heightened anxiety among staff.
An earlier message, warning that a federal agent had allegedly tried to enter the Cambridge office in the fall, suggests the concerns are not new. Google's head of security and risk operations addressed that incident in the internal exchange, but the company has not said publicly whether formal guidelines will be issued or how quickly they might be implemented.
The internal messages, obtained by WIRED, reveal palpable unease within the roughly 3,000-person AI unit, highlighting how operational security concerns intersect with broader societal tensions. Without further detail from Google, the effectiveness of any forthcoming safeguards remains uncertain: employees are seeking reassurance, but the company's next steps have not been made public.
For now, the issue appears likely to remain under internal review.
Common Questions Answered
What is the Frontier Safety Framework (FSF) that Google DeepMind published?
[deepmind.google](https://deepmind.google/blog/strengthening-our-frontier-safety-framework/) describes the FSF as a set of protocols aimed at addressing severe risks from advanced AI models. The framework focuses on identifying critical capability levels at which AI models might pose significant risks and on implementing mitigation strategies across risk domains including misuse, machine learning research and development, and potential misalignment.
What new risk domain did Google DeepMind introduce in this version of the Frontier Safety Framework?
The updated framework introduces a Critical Capability Level (CCL) focused on harmful manipulation, specifically addressing AI models with powerful manipulative capabilities. This addition aims to identify and evaluate mechanisms that could systematically change beliefs and behaviors in high-stakes contexts, potentially resulting in severe expected harm.
How has Google DeepMind expanded its approach to misalignment risks in this framework update?
The framework now provides expanded protocols for addressing potential scenarios where misaligned AI models might interfere with operators' ability to direct, modify, or shut down their operations. This update includes more detailed approaches to instrumental reasoning and machine learning research and development critical capability levels, addressing both potential misuse risks and risks stemming from a model's potential for undirected action.