DeepSeek launches V3.2 reasoning model, Speciale API available until Dec 15 2025

DeepSeek has rolled out its latest reasoning model, V3.2, positioning it as a direct competitor to GPT-5 and Gemini 3 Pro. While the broader community has been watching open-source initiatives chase the performance of proprietary systems, DeepSeek is betting on a blend of inference speed and context length that it says will suit everyday workloads. The company is also unveiling a premium variant, V3.2-Speciale, aimed at users with the most demanding reasoning tasks.

Access to this high‑end version isn’t permanent; it’s being served through a limited‑time API that will disappear after a set deadline. By tying the offering to a specific cutoff, DeepSeek signals both confidence in its current rollout and a willingness to test market appetite before committing to a longer‑term service. This approach raises questions about how sustainable the model’s performance promises are once the temporary endpoint closes.

The Speciale variant is offered only through a temporary API endpoint until December 15, 2025. DeepSeek said V3.2 aims to balance inference efficiency with long-context performance, calling it "your daily driver at GPT-5 level performance." The V3.2-Speciale model, positioned for high-end reasoning tasks, "rivals Gemini-3.0-Pro," the company said. According to DeepSeek, Speciale delivers gold-level (expert human proficiency) results across competitive benchmarks such as the IMO, CMO and ICPC World Finals.

The models introduce an expansion of DeepSeek's agent-training approach, supported by a new synthetic dataset spanning more than 1,800 environments and 85,000 complex instructions. The company stated that V3.2 is its first model to integrate thinking directly into tool use, allowing structured reasoning to operate both within and alongside external tools. Alongside the release, DeepSeek updated its API, noting that V3.2 maintains the same usage pattern as its predecessor.
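
Since DeepSeek says the updated API keeps the usage pattern of its predecessor, a request to V3.2 should look much like a call to the existing OpenAI-compatible endpoint. The sketch below is illustrative only: it assumes DeepSeek's documented base URL and reuses the existing "deepseek-reasoner" model name, and whether V3.2 is actually served under that identifier is an assumption rather than something stated in the announcement.

```python
# Illustrative sketch, not an official example: calling the DeepSeek API
# through its OpenAI-compatible interface. The base URL matches DeepSeek's
# existing documentation; the model name "deepseek-reasoner" is an assumed
# identifier for the V3.2 reasoner.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",       # key issued from the DeepSeek platform
    base_url="https://api.deepseek.com",   # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",             # assumed model name; check current docs
    messages=[
        {"role": "user", "content": "Summarize the trade-offs of long-context inference."}
    ],
)

print(response.choices[0].message.content)
```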

The Speciale model is priced the same as V3.2 but does not support tool calls. The company also highlighted a new capability in V3.2 described as "Thinking in Tool-Use," with additional details provided in its developer documentation.
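
To make the distinction concrete, here is a hedged sketch of what a tool-enabled request to V3.2 might look like through the same OpenAI-compatible interface. The "search_web" function schema is purely hypothetical, and the exact shape of "Thinking in Tool-Use" is described in DeepSeek's developer documentation, not reproduced here; pointing the same request at Speciale would mean dropping the tools argument entirely, since that variant does not accept tool calls.

```python
# Hypothetical sketch of a tool-enabled request to V3.2 via the
# OpenAI-compatible "tools" format. The search_web function is invented
# for illustration; Speciale would be called without the tools argument.
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

tools = [{
    "type": "function",
    "function": {
        "name": "search_web",  # hypothetical tool, not part of DeepSeek's API
        "description": "Search the web and return short result snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="deepseek-reasoner",   # assumed identifier for V3.2
    messages=[{"role": "user", "content": "What changed in DeepSeek's latest release?"}],
    tools=tools,                 # omit this argument when targeting Speciale
)

# If the model decides to call the tool, the structured call shows up here.
print(response.choices[0].message.tool_calls)
```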

Will DeepSeek's new models live up to their own benchmarks? The company has released two reasoning‑first systems, DeepSeek‑V3.2 and the higher‑end DeepSeek‑V3.2‑Speciale, both posted openly on Hugging Face. V3.2 replaces the earlier V3.2‑Exp and is now reachable through DeepSeek’s app, web portal and public API.

The Speciale variant, by contrast, is limited to a temporary API endpoint that won’t be available after December 15, 2025. DeepSeek describes V3.2 as “your daily driver at GPT‑5 level performance,” emphasizing a blend of inference efficiency and long‑context capability. Yet no independent evaluations have been published, leaving it unclear whether the models truly match the capabilities of GPT‑5 or rival Gemini 3 Pro.

The open‑source release invites scrutiny, but the brief window for Speciale’s API may restrict broader testing. As the suite expands, developers can experiment now, though the longevity of support for the Speciale line remains uncertain. Ultimately, the practical impact of these models will depend on real‑world usage and comparative benchmarks that have yet to surface.

Common Questions Answered

What distinguishes the DeepSeek V3.2 model from its predecessor V3.2‑Exp?

DeepSeek V3.2 replaces the earlier V3.2‑Exp and, according to the company, offers a better balance of inference efficiency and long‑context performance, pitching it as a daily driver at GPT‑5-level performance. The new model is accessible via DeepSeek’s app, web portal, and public API, where it supersedes V3.2‑Exp.

How does the V3.2‑Speciale variant compare to Gemini‑3.0‑Pro in benchmark performance?

According to DeepSeek, the V3.2‑Speciale variant rivals Gemini‑3.0‑Pro by delivering "gold‑level" (expert human proficiency) results on competitive benchmarks such as the International Mathematical Olympiad (IMO), the CMO, and the ICPC World Finals. This high‑end reasoning capability is marketed for demanding tasks that require top‑tier accuracy.

Until when will the temporary API endpoint for DeepSeek V3.2‑Speciale be available?

The temporary API endpoint for the V3.2‑Speciale model is available only until December 15, 2025. After that date, the endpoint will be discontinued, cutting off hosted access to the premium variant.

Where can developers access the DeepSeek V3.2 and V3.2‑Speciale models?

Both models are posted openly on Hugging Face; V3.2 can also be reached through DeepSeek’s app, web portal, and public API. In contrast, V3.2‑Speciale is restricted to a temporary API endpoint that expires on December 15, 2025.
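
For developers who want the open weights rather than the hosted API, a minimal download sketch with huggingface_hub follows. The repository id mirrors DeepSeek's usual deepseek-ai naming pattern but is an assumption; check the organization's model cards for the exact names, and note that the full checkpoints are very large.

```python
# Minimal sketch: fetching the open V3.2 checkpoint from Hugging Face.
# "deepseek-ai/DeepSeek-V3.2" is an assumed repository name based on
# DeepSeek's earlier releases; verify it on the model card before running.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="deepseek-ai/DeepSeek-V3.2",   # assumed repo id, not confirmed here
    local_dir="./deepseek-v3.2",           # destination for the checkpoint files
)
print(f"Model files saved to {local_dir}")
```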