Kilo CLI Unleashes 500+ AI Models for Developers
Kilo CLI 1.0 supports 500+ models; OpenAI Codex adds locked 30‑min agent hub
Kilo CLI 1.0 drops into the terminal with a surprisingly broad catalog—more than 500 language models now sit at developers’ fingertips, all under an open‑source license that feels more like a community toolkit than a corporate product. The tool promises to let engineers spin up prompts, chain calls and iterate without leaving the command line, a workflow many have been yearning for since the rise of cloud‑based notebooks. Yet the same momentum that fuels Kilo’s openness is pulling in the opposite direction elsewhere.
OpenAI’s new Codex app favors a platform‑locked approach, functioning as a “command center for agents” that allows developers to supervise AI systems running independently for up to 30 minutes. While Codex introduces powerful features like “Skills” to connect to tools like Figma and Linear, it is fundamentally designed to defend OpenAI’s ecosystem in a highly contested market. Conversely, Kilo CLI 1.0 builds on the MIT‑licensed OpenCode foundation to deliver a production‑ready terminal user interface (TUI) that lets engineers swap among more than 500 models. That portability allows teams to select the best cost‑to‑performance ratio for each task, perhaps using a lightweight model for documentation but swapping to a frontier model for complex debugging.
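To make that cost‑to‑performance routing concrete, here is a minimal sketch of how a team might direct different task types to different models behind an OpenAI‑compatible chat endpoint. The endpoint URL, model names, environment variable and task categories are illustrative assumptions; they do not describe Kilo CLI’s actual configuration or internals.

```typescript
// Illustrative sketch only: per-task model routing against a generic
// OpenAI-compatible endpoint. Endpoint, model names, and env var are
// assumptions for demonstration, not Kilo CLI's real configuration.
type Task = "documentation" | "debugging";

const MODEL_FOR_TASK: Record<Task, string> = {
  documentation: "small-fast-model", // cheap and adequate for prose
  debugging: "frontier-model",       // pricier, stronger reasoning
};

async function complete(task: Task, prompt: string): Promise<string> {
  const res = await fetch("https://api.example.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.LLM_API_KEY}`,
    },
    body: JSON.stringify({
      model: MODEL_FOR_TASK[task],
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

// Example: a cheap model is plenty for documentation work.
complete("documentation", "Write a README section for the CLI flags")
  .then(console.log)
  .catch(console.error);
```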
Kilo’s new CLI arrives as a complete rebuild, now handling more than 500 AI models, from proprietary services to open‑source offerings such as Alibaba’s Qwen. Backed by GitLab co‑founder Sid Sijbrandij, the startup argues that developers shouldn’t be forced to pledge loyalty to a single environment or model. The tool’s open‑source foundation points to a flexible, vendor‑neutral workflow, yet the sheer number of supported back‑ends raises questions about consistency and maintenance overhead.
OpenAI’s Codex app, by contrast, embraces a platform‑locked design, presenting a “command center for agents” that lets developers supervise autonomous AI runs for up to 30 minutes. Its “Skills” feature links to external tools like Figma and Linear, offering a more curated integration path.
Both releases target the same audience—developers who want AI assistance—but they take opposite stances on openness versus lock‑in. It is unclear whether the breadth of Kilo’s model support will translate into smoother daily coding, or whether Codex’s tighter ecosystem will prove more reliable in practice. Ultimately, the market will decide which approach better fits real‑world development needs.
Common Questions Answered
How does the Codex CLI integrate with different development workflows?
The Codex CLI lets developers work directly from their terminal, enabling code inspection, file editing, and command execution. It offers interactive chat and task‑specific modes, and it can be used across different workspaces, giving engineers the flexibility to manage coding tasks without leaving the command line.
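For teams that want to fold that terminal workflow into their own scripts, a thin wrapper is often enough. The sketch below, in TypeScript for Node, shells out to the CLI’s non‑interactive mode; the exact `codex exec` invocation is an assumption based on the CLI’s documented headless usage and may differ by installed version.

```typescript
// Sketch: delegating a single task to the Codex CLI from a script.
// The `codex exec <prompt>` form is assumed from the CLI's documented
// non-interactive mode; check your installed version's help output.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

async function delegateTask(prompt: string): Promise<string> {
  const { stdout } = await run("codex", ["exec", prompt], {
    cwd: process.cwd(),          // run inside the current workspace
    maxBuffer: 16 * 1024 * 1024, // agent transcripts can be long
  });
  return stdout;
}

delegateTask("List the failing tests and suggest likely causes")
  .then((out) => console.log(out))
  .catch((err) => console.error("Codex run failed:", err));
```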
What new features were introduced in the general availability of Codex?
OpenAI launched three key features with Codex's general availability: a new Slack integration that allows delegating tasks directly in team channels, a Codex SDK for embedding the agent into custom workflows, and new admin tools with environment controls and analytics dashboards. These features aim to make Codex more versatile and manageable for engineering teams.
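As a rough illustration of the SDK route, the sketch below embeds the agent in a custom workflow. The package name and the startThread/run calls follow OpenAI’s published examples, but exact signatures and result shapes may differ across SDK versions, so treat this as a sketch rather than a definitive integration.

```typescript
// Sketch: embedding Codex in a custom workflow via the TypeScript SDK.
// Class and method names follow OpenAI's published examples; verify
// against the installed @openai/codex-sdk version before relying on them.
import { Codex } from "@openai/codex-sdk";

async function reviewStagedChanges(): Promise<void> {
  const codex = new Codex();          // picks up local Codex credentials
  const thread = codex.startThread(); // one thread per delegated task

  // Delegate a task and wait for the agent's final turn.
  const result = await thread.run("Review the staged changes and flag risky edits");
  console.log(result); // result shape varies by SDK version
}

reviewStagedChanges().catch(console.error);
```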
What impact has Codex had on developer productivity at OpenAI?
Inside OpenAI, Codex has become integral to their development process, with nearly all engineers now using it compared to just over half in July. The tool has helped engineers merge 70% more pull requests each week and automatically reviews almost every PR to catch critical issues before production. This demonstrates significant improvements in coding efficiency and code quality.