Policy & Regulation

OpenAI releases gpt-oss-120B and gpt-oss-20B under Apache-2.0-style license

3 min read

Why does this matter now? The AI community has spent years wrestling with the tension between proprietary breakthroughs and the promise of open research. While large firms have rolled out ever‑larger models, most have kept the underlying weights behind closed doors, citing safety and competitive concerns.

Regulators, meanwhile, have nudged companies toward more transparency, arguing that public access can curb monopolistic control and spur independent scrutiny. In that climate, OpenAI’s decision to publish two mixture‑of‑experts models—one with 120 billion parameters and a smaller 20 billion‑parameter sibling—under a license resembling Apache 2.0 is striking. The move revives a practice that hasn’t been seen since the GPT‑2 release, when the organization first opened its weights to the public.

Early adopters have already voiced mixed feelings about performance, but the symbolic weight of the licensing choice cuts across policy debates about openness, accountability, and the future of AI development.



Finally -- and maybe most symbolically -- OpenAI released gpt-oss-120B and gpt-oss-20B, open-weight MoE reasoning models under an Apache 2.0-style license. Whatever you think of their quality (and early open-source users have been loud about their complaints), this is the first time since GPT-2 that OpenAI has put serious weights into the public commons.

China's open-source wave goes mainstream

If 2023-24 was about Llama and Mistral, 2025 belongs to China's open-weight ecosystem.

A study from MIT and Hugging Face found that China now slightly leads the U.S. in global open-model downloads, largely thanks to DeepSeek and Alibaba's Qwen family. Highlights:

- DeepSeek-R1 dropped in January as an open-source reasoning model rivaling OpenAI's o1, with MIT-licensed weights and a family of distilled smaller models. VentureBeat has followed the story from its release to its cybersecurity impact to performance-tuned R1 variants.
- Kimi K2 Thinking from Moonshot is a "thinking" open-source model that reasons step-by-step with tools, very much in the o1/R1 mold, and is positioned as the best open reasoning model in the world so far.
- Z.ai shipped GLM-4.5 and GLM-4.5-Air as "agentic" models, open-sourcing base and hybrid reasoning variants on GitHub.
- Baidu's ERNIE 4.5 family arrived as a fully open-sourced, multimodal MoE suite under Apache 2.0, including a 0.3B dense model and visual "Thinking" variants focused on charts, STEM, and tool use.
- Alibaba's Qwen3 line -- including Qwen3-Coder, large reasoning models, and the Qwen3-VL series released over the summer and fall of 2025 -- continues to set a high bar for open weights in coding, translation, and multimodal reasoning.

VentureBeat has been tracking these shifts, including Chinese math and reasoning models like Light-R1-32B and Weibo's tiny VibeThinker-1.5B, which beat DeepSeek baselines on shoestring training budgets.

Related Topics: #OpenAI #GPT-2 #gpt-oss-120B #gpt-oss-20B #Apache 2.0 #Mixture-of-Experts #open-weight #AI #LLM

Is this the most symbolic move of the year? OpenAI’s decision to publish gpt‑oss‑120B and gpt‑oss‑20B under an Apache‑2.0‑style license certainly feels that way. The models, open‑weight mixtures‑of‑experts designed for reasoning, arrive at a moment when the AI field looks less like a single monolith and more like a patchwork of open and closed efforts, large and small, Western and Chinese, cloud‑based and local.

Early adopters have already voiced complaints about quality, so the community’s reaction is mixed. Yet the release marks the first time since GPT‑2 that a major lab has opened a model of this scale under a permissive license. Whether developers will build on it or abandon it for other options remains unclear.

The gesture hints at a broader shift toward openness, but practical impact will depend on how the models perform in real‑world tasks and how quickly the ecosystem can integrate them. For now, the significance lies in the licensing choice rather than any proven superiority.


Common Questions Answered

What are the names and sizes of the open-weight models OpenAI released under an Apache-2.0-style license?

OpenAI released two open-weight models: gpt-oss-120B, a 120‑billion‑parameter mixture‑of‑experts model, and gpt-oss-20B, a smaller 20‑billion‑parameter version. Both are designed for reasoning tasks and are now publicly available under an Apache‑2.0‑style license.

Why is the release of gpt-oss-120B and gpt-oss-20B considered a symbolic move for the AI community?

The release marks the first time since GPT‑2 that OpenAI has placed substantial model weights into the public commons, signaling a shift toward greater openness amid regulatory pressure for transparency. It also highlights the growing influence of China’s open‑source ecosystem, positioning 2025 as a year for open‑weight models.

What type of architecture do the gpt-oss models use, and what is their primary intended use?

Both gpt-oss‑120B and gpt-oss‑20B employ a mixture‑of‑experts (MoE) architecture, which allows different expert subnetworks to specialize in various reasoning tasks. Their primary purpose is to provide high‑quality reasoning capabilities while remaining openly accessible.
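To make the routing idea concrete, here is a minimal sketch of how a mixture-of-experts layer sends each token to a small subset of expert subnetworks and blends their outputs by gate weight. This is an illustrative toy (random weights, NumPy, top-2 routing), not the actual gpt-oss implementation; the function and variable names are invented for the example.

```python
import numpy as np

def moe_forward(x, gate_w, experts, top_k=2):
    """Toy mixture-of-experts layer: for each token, pick the top-k
    experts by gate score and mix their outputs. Only k of the
    experts run per token, which is why MoE models can have many
    total parameters but a much smaller active count."""
    logits = x @ gate_w                               # (tokens, num_experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)             # softmax over experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        top = np.argsort(probs[t])[-top_k:]           # indices of top-k experts
        w = probs[t, top] / probs[t, top].sum()       # renormalize gate weights
        for e, wi in zip(top, w):
            out[t] += wi * experts[e](x[t])           # weighted expert outputs
    return out

rng = np.random.default_rng(0)
d, n_exp = 8, 4
# Each "expert" is just a random linear map in this sketch.
experts = [lambda v, W=rng.standard_normal((d, d)) / d: v @ W
           for _ in range(n_exp)]
gate_w = rng.standard_normal((d, n_exp))
x = rng.standard_normal((3, d))                       # 3 tokens, dim 8
y = moe_forward(x, gate_w, experts)
print(y.shape)  # (3, 8)
```

In a real model the experts are feed-forward blocks inside each transformer layer and the gate is learned, but the shape of the computation — score, select top-k, combine — is the same.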

How have early adopters responded to the quality of the newly released gpt-oss models?

Early adopters have voiced complaints about the quality of the gpt‑oss models, suggesting that performance may not yet match expectations set by proprietary counterparts. Despite these concerns, the open‑weight release is still seen as a significant step toward broader community scrutiny and improvement.
