60% of 1,100 Developers and CTOs Say AI Agents Deliver Real ROI
A fresh survey of 1,100 developers and chief technology officers reveals a shift in how AI investments are being judged. While early hype centered on raw compute and model training, respondents now point to the tools that sit on top of that foundation. When asked which layer of the AI stack promises the most tangible return, a clear majority highlighted the application tier—particularly agents that automate workflows and generate code.
The data shows that practical deployments are already moving beyond pilot projects into revenue‑bearing products. This perspective matters because it reframes where companies should allocate budget and talent if they want measurable outcomes rather than speculative experiments. The numbers also point to a broader market trend: spending on the application layer is outpacing other segments, with analysts estimating a $19 billion slice of generative‑AI expenditure in 2025.
Below, the survey’s key finding puts those figures into sharper focus.
The long-term view is even stronger: 60% see applications and agents as the greatest opportunity in the AI stack, compared with just 19% for infrastructure. According to one report, the application layer captured $19 billion in 2025, more than half of all generative AI spending. Coding tools led at $4 billion, representing 55% of departmental AI spend and the single largest category across the entire stack.
Organizations are betting that the application layer, where AI actually touches users and workflows, will matter more than the underlying components. Still, 49% say the cost of running AI at scale is their top barrier to growth. Agents only work if you can afford to run them.
The data show AI agents are already moving beyond hype. Sixty‑seven percent of the surveyed firms report measurable productivity gains, and a solid majority—60 percent—see applications and agents as the most valuable layer of the AI stack, outpacing infrastructure by a wide margin. Yet the report also flags a stark contrast: scaling agents in production remains rare, with only a small fraction of respondents indicating successful large‑scale deployment.
This gap suggests that while early‑stage benefits are clear, broader operational integration is still the exception rather than the rule. The survey's emphasis on long‑term opportunity, coupled with the modest adoption rate, leaves it uncertain whether current productivity spikes will translate into sustained, enterprise‑wide impact. With the application layer accounting for $19 billion of generative AI spend in 2025, the question remains how many organizations will move past pilot projects to embed agents into core workflows without sacrificing reliability or control.
Common Questions Answered
How are enterprises currently measuring AI agent ROI?
[CB Insights Research](https://www.cbinsights.com/research/ai-agent-roi-markets/) found that 80% of executives prioritize AI agent adoption, but 40% cannot track or understand their ROI. Currently, enterprises default to efficiency metrics, with only 25% measuring revenue impact, indicating a significant gap in comprehensive ROI measurement.
What are the key emerging markets for improving AI agent performance?
[CB Insights Research](https://www.cbinsights.com/research/ai-agent-roi-markets/) identified three critical emerging markets: AI cost management software, memory management, and observability & evaluation. These markets are crucial for linking agent activity to business outcomes, enabling persistent enterprise context, and providing real-time performance visibility.
How are enterprises shifting their approach to AI ROI measurement?
[Futurum's enterprise survey](https://futurumgroup.com/press-release/enterprise-ai-roi-shifts-as-agentic-priorities-surge/) reveals a significant shift from productivity gains to direct financial impact. Productivity metrics dropped from 23.8% to 18.0%, while top-line revenue growth and bottom-line profitability now dominate value conversations, signaling a more sophisticated approach to AI investment evaluation.
What are the implications of 1M token context windows for AI agents?
[Zylos Research](https://zylos.ai/research/2026-02-18-long-context-ai-agents) suggests that 1M token context windows solve specific problems but don't eliminate the need for thoughtful memory architecture. The breakthrough allows for single-pass reviews of entire codebases or document collections, but still requires careful consideration of cost, latency, and specific use case requirements.