Google DeepMind hires ex‑Boston Dynamics CTO to create Gemini AI for any robot
Google DeepMind just made a big hiring move: the former CTO of Boston Dynamics is now heading a project to make Gemini robot-ready. It marks a pivot from pure research toward an intelligence that isn’t tied to any specific hardware. Gemini already runs a multimodal core that can juggle text, images, audio and video, but so far it’s lived mostly behind screens and cloud APIs.
DeepMind’s new push wants to close that gap, giving the model a way to sense and act in the physical world without a massive re-training effort. The former Boston Dynamics exec brings decades of work on locomotion, balance and real-world perception, which could help turn Gemini’s sensory fluency into on-the-ground decisions. If they pull it off, we might see the same AI powering a humanoid helper one day and a warehouse arm the next, shaving off a lot of the engineering overhead that usually separates software from robot hardware.
That ambition comes through in a statement from DeepMind’s leadership:
"We want to build an AI system, a Gemini base, that can work almost out-of-the-box across any body configuration. Obviously humanoids, but non-humanoids too." Gemini's multimodal architecture allows it to process text, images, audio, and video, which could make it especially suited to guiding robots through complex environments.

DeepMind's growing robotics portfolio

DeepMind's robotics research stretches back years and includes foundational projects like RT-1 and RT-2, AI models designed to help robots learn from human demonstrations and generalize across tasks.
In September, the company introduced Gemini Robotics 1.5 and Gemini Robotics-ER 1.5, twin systems that pair AI control with real-world robotic hardware. As global interest in humanoid machines accelerates, DeepMind is ramping up efforts to connect its models more directly with robotic platforms. Hassabis predicts a major breakthrough in AI-driven robotics "in the next couple of years."

The humanoid race heats up

DeepMind isn't the only player chasing this goal.
DeepMind’s newest hire reads like a signal. Aaron Saunders, Boston Dynamics’ former CTO, is now DeepMind’s Vice President of Hardware Engineering. He helped teach Atlas to backflip and shaped Spot’s agile gait, so his background lines up neatly with Google’s robotics goals.
The company says Gemini will act as a “brain” you can plug into any robot: humanoid, quadruped, you name it. Its multimodal core already juggles text, images, audio and video, which could make sensor integration easier. Still, turning a huge model into a fast, reliable control loop isn’t simple.
It’s not clear yet whether one AI stack can handle the very different mechanics of today’s platforms. DeepMind is betting on “almost out-of-the-box” flexibility, but that will have to prove itself on real hardware. Adding Saunders gives the team a solid hardware angle that was missing before.
As the work moves forward, the thing to watch is demos that go beyond lab toys. Prospective adopters will likely scrutinize latency, safety and power use before rolling anything out more widely.
Common Questions Answered
Who did Google DeepMind hire to lead the development of a robot‑ready Gemini AI?
Google DeepMind hired Aaron Saunders, the former chief technology officer of Boston Dynamics, as Vice President of Hardware Engineering. His experience with robots like Atlas and Spot is intended to accelerate Gemini's integration with diverse robotic platforms.
What capabilities does Gemini's multimodal architecture provide for robotics applications?
Gemini's multimodal core can process text, images, audio, and video, enabling it to interpret a wide range of sensor inputs. This versatility is expected to simplify sensor integration and help robots navigate complex environments more effectively.
How does DeepMind describe the intended hardware compatibility of the new Gemini base?
DeepMind aims to create a Gemini base that works "almost out-of-the-box" across any body configuration, including both humanoid and non‑humanoid robots. The goal is a hardware‑agnostic AI that can serve as the brain for robots ranging from quadrupeds to custom platforms.
Which prior DeepMind robotics projects are referenced as foundations for the new Gemini effort?
The article cites DeepMind's earlier robotics models RT‑1 and RT‑2 as foundational projects. These models demonstrated the feasibility of AI‑driven robot control, paving the way for Gemini to become a universal robotic brain.