

Greg Brockman says GPT reasoning models have line of sight to AGI


OpenAI co-founder Greg Brockman sat down with host Alex Kantrowitz to discuss his claim that GPT reasoning models have a "line of sight" to artificial general intelligence. While the claim sounds bold, the conversation quickly turned to what might be left out of the roadmap.

Kantrowitz pressed Brockman about "Sora-style" world models, architectures that attempt to simulate broader contexts rather than narrowly predict text. He reminded the audience that DeepMind's Demis Hassabis had described Google's "Nano Banana" image model as feeling "particularly close to AGI." The question was simple: could OpenAI be skipping a crucial piece of the puzzle by focusing on reasoning alone? Brockman's response acknowledged the tension.

He admitted that in this fast-moving field, missing a key component isn't just a theoretical risk; it's a practical concern that could shape the next generation of AI systems.


Asked directly whether OpenAI could be missing something crucial, Brockman acknowledged the risk: "In this field you do have to make choices. You have to make a bet."

Researchers remain divided on whether LLMs can reach general intelligence

Whether purely text-based models can achieve general intelligence is far from settled in the broader AI research community. Renowned AI researcher Yann LeCun has argued for years that LLMs won't lead to human-like intelligence.

In his view, LLMs have a very limited understanding of logic, don't understand the physical world, have no permanent memory, cannot think rationally, and cannot plan hierarchically. Instead, he's betting on so-called world models to develop a comprehensive understanding of the environment. DeepMind CEO Demis Hassabis holds a similar view: LLM scaling alone isn't enough, and further breakthroughs are needed.

Greg Brockman's confidence in text-only reasoning models marks a clear strategic choice for OpenAI. Even as the debate over the limits of purely textual systems continues, he maintains that these models give the company a direct line of sight to artificial general intelligence. At the same time, OpenAI is de-emphasizing multimodal world models such as Sora, treating them as a separate track.

When asked whether skipping Sora-style approaches might overlook a crucial piece, Brockman conceded that the field forces such bets. Hassabis' remark about "Nano Banana" underscores that other teams see real value in visual modalities, and Brockman's answer leaves open whether text-centric pathways will indeed bridge the gap to broader intelligence.

It remains unclear whether OpenAI's focus will eclipse research on multimodal integration, or whether future breakthroughs will require both. Ultimately, the "line of sight" claim rests on internal assessments, and external validation remains pending.


Common Questions Answered

What does Greg Brockman mean by GPT reasoning models having a 'line of sight' to AGI?

Brockman suggests that text-based GPT models are a direct path to achieving artificial general intelligence (AGI). He believes that focusing on text-only reasoning models provides OpenAI with a strategic approach to developing more advanced AI systems.

How does OpenAI's approach differ from DeepMind's perspective on AI development?

While DeepMind's Demis Hassabis and others argue that world models are needed to reach general intelligence, OpenAI is prioritizing text-based reasoning models as its primary path to AGI, even as it develops Sora on a separate track. Brockman acknowledges this is a deliberate choice, admitting that in the field of AI, researchers must make strategic bets about development approaches.

What potential risks does OpenAI recognize in their current AI development strategy?

Brockman openly admits there are risks in focusing on text-based reasoning models while de-emphasizing alternatives like world models. He recognizes that by committing to one development path, OpenAI might miss crucial insights from alternative architectural approaches.