
OpenAI releases gpt-oss-120B and gpt-oss-20B under Apache-2.0-style license


OpenAI's latest move signals a potentially significant shift in AI accessibility. The company has quietly released two open-weight reasoning models, gpt-oss-120B and gpt-oss-20B, under an Apache 2.0-style license that could reshape how developers and researchers interact with advanced AI technologies.

This strategic release comes at a moment of intense debate about AI transparency and open-source development. By making these models available with permissive licensing, OpenAI appears to be responding to growing calls for more accessible machine learning infrastructure.

The models represent an intriguing experiment in AI distribution. While their exact capabilities remain to be fully understood, the Apache 2.0 license suggests OpenAI is willing to let the broader tech community experiment, modify, and potentially improve these reasoning systems.

Early reactions from the open-source community have been mixed, with initial feedback highlighting potential limitations. But the release itself marks a notable moment in OpenAI's evolving approach to model sharing and collaboration.

Finally -- and maybe most symbolically -- OpenAI released gpt-oss-120B and gpt-oss-20B, open-weight MoE reasoning models under an Apache 2.0-style license. Whatever you think of their quality (and early open-source users have been loud about their complaints), this is the first time since GPT-2 that OpenAI has put serious weights into the public commons.

China's open-source wave goes mainstream

If 2023-24 was about Llama and Mistral, 2025 belongs to China's open-weight ecosystem.

A study from MIT and Hugging Face found that China now slightly leads the U.S. in global open-model downloads, largely thanks to DeepSeek and Alibaba's Qwen family. Highlights:

- DeepSeek-R1 dropped in January as an open-source reasoning model rivaling OpenAI's o1, with MIT-licensed weights and a family of distilled smaller models. VentureBeat has followed the story from its release to its cybersecurity impact to performance-tuned R1 variants.
- Kimi K2 Thinking from Moonshot is a "thinking" open-source model that reasons step-by-step with tools, very much in the o1/R1 mold, and is positioned as the best open reasoning model in the world so far.
- Z.ai shipped GLM-4.5 and GLM-4.5-Air as "agentic" models, open-sourcing base and hybrid reasoning variants on GitHub.
- Baidu's ERNIE 4.5 family arrived as a fully open-sourced, multimodal MoE suite under Apache 2.0, including a 0.3B dense model and visual "Thinking" variants focused on charts, STEM, and tool use.
- Alibaba's Qwen3 line -- including Qwen3-Coder, large reasoning models, and the Qwen3-VL series released over the summer and fall of 2025 -- continues to set a high bar for open weights in coding, translation, and multimodal reasoning.

VentureBeat has been tracking these shifts, including Chinese math and reasoning models like Light-R1-32B and Weibo's tiny VibeThinker-1.5B, which beat DeepSeek baselines on shoestring training budgets.



Common Questions Answered

What are the specific details of OpenAI's newly released open-weight reasoning models?

OpenAI has launched two open-weight reasoning models named gpt-oss-120B and gpt-oss-20B under an Apache 2.0-style license. These models represent OpenAI's first significant open-weight release since GPT-2, potentially signaling a new approach to AI model accessibility and transparency.

How does the Apache 2.0-style license impact the usability of these new OpenAI models?

The Apache 2.0-style license grants developers and researchers broad permission to use, modify, and redistribute the models with minimal restrictions. This permissive licensing could reshape how advanced AI models are accessed and built upon by the broader research and technology communities.

What early feedback has been reported about the gpt-oss-120B and gpt-oss-20B models?

Early open-source users have been vocal with complaints about the models' quality and performance. Despite these criticisms, the release marks a significant moment: OpenAI putting substantial model weights into the public commons, which could spark further innovation and research.