Thinking Machines challenges OpenAI scaling, says superintelligence is a learner

Thinking Machines is basically throwing the usual AI playbook out the window. While the big AI firms keep pouring billions into ever-bigger models, hoping sheer scale will finally produce artificial general intelligence, a researcher at one of the industry's most secretive and most valuable startups pushed back. "The first superintelligence will be a superhuman learner," he said, putting the spotlight on learning ability rather than model size.

That’s a direct jab at OpenAI’s scaling-first approach, which many have taken as the gold standard. If a learner can actually outstrip raw compute, the whole race might start to look very different. It also makes you wonder what we should count as progress when we stop measuring by parameters and start measuring by what the system can actually do.

The remarks hint at a deeper split inside a field that often treats bigger as automatically better. Still, without more evidence it's unclear how this learner-focused view will shift funding, research priorities, or the timeline for any kind of superintelligence.

While the world's leading artificial intelligence companies race to build ever-larger models, betting billions that scale alone will unlock artificial general intelligence, a researcher at one of the industry's most secretive and valuable startups delivered a pointed challenge to that orthodoxy this week: the path forward isn't about training bigger; it's about learning better.

"I believe that the first superintelligence will be a superhuman learner," Rafael Rafailov, a reinforcement learning researcher at Thinking Machines Lab, told an audience at TED AI San Francisco on Tuesday. "It will be able to very efficiently figure out and adapt, propose its own theories, propose experiments, use the environment to verify that, get information, and iterate that process."

This breaks sharply with the approach pursued by OpenAI, Anthropic, Google DeepMind, and other leading laboratories, which have bet billions on scaling up model size, data, and compute to achieve increasingly sophisticated reasoning capabilities. Rafailov argues these companies have the strategy backwards: what's missing from today's most advanced AI systems isn't more scale; it's the ability to actually learn from experience.
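
Rafailov's description has a recognizable shape for anyone familiar with active learning or reinforcement learning: an agent loops between proposing a theory, running an experiment, checking the result against the environment, and discarding whatever the evidence rules out. Purely as an illustration (the Environment and Learner classes below are hypothetical, not anything Thinking Machines has published), a minimal sketch of that loop in Python might look like this:

```python
# Hypothetical sketch of the propose-experiment-verify-iterate loop
# Rafailov describes. All names here are illustrative, not real code
# from Thinking Machines.
import random


class Environment:
    """Toy world: hides a linear rule y = a*x + b the learner must discover."""

    def __init__(self, a: int, b: int):
        self._a, self._b = a, b

    def run_experiment(self, x: int) -> int:
        """Answer a query; the only way the learner gets information."""
        return self._a * x + self._b


class Learner:
    """Keeps a set of candidate theories and prunes those experiments refute."""

    def __init__(self):
        # A theory is a candidate (a, b) pair.
        self.theories = [(a, b) for a in range(-5, 6) for b in range(-5, 6)]

    def propose_experiment(self) -> int:
        # Pick a random probe input; a smarter learner would choose
        # the input that best distinguishes the surviving theories.
        return random.randint(-10, 10)

    def update(self, x: int, observed: int) -> None:
        # Discard every theory inconsistent with the new evidence.
        self.theories = [(a, b) for a, b in self.theories
                         if a * x + b == observed]


env = Environment(a=3, b=-2)
learner = Learner()

# Iterate: propose an experiment, verify it against the environment,
# absorb the information, repeat until one theory survives.
while len(learner.theories) > 1:
    x = learner.propose_experiment()
    learner.update(x, env.run_experiment(x))

print("Surviving theory:", learner.theories[0])  # expected: (3, -2)
```

The toy example only illustrates the shape of the argument: progress is bounded by how much information each experiment extracts, not by how big the learner is, which is roughly the crux of the learner-first position.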

Some in the field still question whether bigger really means better. The latest comments from Thinking Machines suggest that chasing ever-larger models might overlook a more fundamental need: a superhuman learner that can extract knowledge from its environment efficiently. Rafael Rafailov, who works on reinforcement learning at the famously secretive startup, says the first superintelligence will probably come from smarter learning methods, not just more parameters.

Meanwhile, OpenAI and a handful of other companies keep pouring billions into scaling, which makes Rafailov's position a counter-narrative: one that puts algorithmic advances ahead of raw size. It also raises the question of whether current funding actually targets the core learning problem.

The evidence for a learner-first route is still thin, and it's hard to say whether it can overcome the momentum behind massive training runs. Critics will point to the recent gains from larger models as proof that scale still matters. In the end, the field seems split between scaling optimism and a push for deeper insight into how learning actually works.

Whether a superhuman learner will show up before sheer scale delivers the same result remains uncertain.

Common Questions Answered

What is Thinking Machines' main challenge to OpenAI's approach to achieving superintelligence?

Thinking Machines challenges the idea that building ever-larger models through massive scaling investments is the primary path to artificial general intelligence. Instead, their researcher argues that the key breakthrough will come from creating a superhuman learner capable of extracting knowledge more efficiently.

According to Rafael Rafailov, what will the first superintelligence fundamentally be?

Rafael Rafailov, a reinforcement-learning researcher at Thinking Machines, states that the first superintelligence will be a superhuman learner. He frames the challenge as a matter of improving learning mechanisms rather than simply increasing the scale of AI models.

How does the article contrast Thinking Machines' strategy with that of leading AI companies like OpenAI?

The article contrasts the strategies by highlighting that leading AI companies are pouring billions into scaling ever-larger models to unlock AGI. By contrast, Thinking Machines posits that the race toward sheer scale may be missing the more fundamental requirement: developing superior learning capabilities.