Enterprise AI pilots lag; workflow redesign needed for gains, McKinsey says
Most companies that have rushed to test AI‑driven coding assistants are seeing results that fall short of expectations. The buzz around large language models suggests a quick lift in efficiency, yet internal reports reveal a different story. While the underlying technology is impressive, the real bottleneck appears to be how firms embed these tools into daily work.
Teams often treat an agent like a plug‑in, expecting it to slot seamlessly into legacy pipelines. The outcome? Minimal speed gains and, in some cases, added friction.
Why does this matter? Because the promised productivity boost hinges on more than the algorithm itself; it depends on the surrounding process. McKinsey's 2025 analysis highlights a pattern: organizations that simply attach an agent to an unchanged workflow rarely see the uplift they anticipated.
The evidence points to a need for deeper redesign—reworking the steps that surround the AI, not just the AI itself. This perspective frames the following insight from the report.
Enterprises must re-architect the workflows around these agents. As McKinsey's 2025 report "One Year of Agentic AI" noted, productivity gains arise not from layering AI onto existing processes but from rethinking the process itself. When teams simply drop an agent into an unaltered workflow, they invite friction: Engineers spend more time verifying AI-written code than they would have spent writing it themselves.
The agents can only amplify what's already structured: well-tested, modular codebases with clear ownership and documentation. Security and governance, too, demand a shift in mindset. AI-generated code introduces new forms of risk: unvetted dependencies, subtle license violations and undocumented modules that escape peer review.
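One way to picture that kind of governance gate is a script that screens an agent's proposed dependencies against a vetted allow-list and an accepted-license policy before the change ever reaches peer review. The sketch below is illustrative only: the manifest format, the allow-list file, and the license policy are assumptions for this example, not details from the McKinsey report or any particular tool.

```python
"""Illustrative governance gate: screen AI-proposed dependencies.

Assumes the team keeps an allow-list of vetted packages and a set of
accepted licenses; both are hypothetical examples for this sketch.
"""
import json
import sys
from pathlib import Path

ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}  # assumed policy


def load_allow_list(path: Path) -> set[str]:
    """Read the team's vetted-package list (one package name per line)."""
    return {line.strip() for line in path.read_text().splitlines() if line.strip()}


def check_dependencies(manifest_path: Path, allow_list: set[str]) -> list[str]:
    """Return human-readable violations for unvetted packages or licenses."""
    # Assumed manifest shape: {"requests": "Apache-2.0", "somepkg": "GPL-3.0"}
    manifest = json.loads(manifest_path.read_text())
    violations = []
    for package, license_id in manifest.items():
        if package not in allow_list:
            violations.append(f"unvetted dependency: {package}")
        if license_id not in ALLOWED_LICENSES:
            violations.append(f"disallowed license {license_id} for {package}")
    return violations


if __name__ == "__main__":
    problems = check_dependencies(Path("agent_dependencies.json"),
                                  load_allow_list(Path("vetted_packages.txt")))
    for problem in problems:
        print(problem)
    sys.exit(1 if problems else 0)  # a non-zero exit blocks the merge in CI
```

The point of the sketch is simply that checks which humans apply informally during review have to become explicit, automated policy once an agent is generating code at volume.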
Mature teams are beginning to integrate agentic activity directly into their CI/CD pipelines, treating agents as autonomous contributors whose work must pass the same static analysis, audit logging and approval gates as any human developer. GitHub's own documentation highlights this trajectory, positioning Copilot Agents not as replacements for engineers but as orchestrated participants in secure, reviewable workflows.
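GitHub's documentation describes that orchestration in prose; the sketch below shows just one way such a CI gate might look in practice. The "author" convention, the choice of linter, and the audit-log location are all assumptions made for illustration rather than anything prescribed by GitHub or McKinsey.

```python
"""Illustrative CI step: hold agent-authored commits to the same gates as human ones.

The lint command and the audit-log file are assumed for this sketch;
any static-analysis tool with a non-zero failure exit code would fit.
"""
import json
import subprocess
import sys
from datetime import datetime, timezone

AUDIT_LOG = "agent_audit.jsonl"  # assumed append-only audit trail


def commit_author() -> str:
    """Ask git for the author of the commit under review."""
    result = subprocess.run(["git", "log", "-1", "--pretty=%an"],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()


def run_static_analysis() -> bool:
    """Run the team's linter over the repository (linter choice is an assumption)."""
    result = subprocess.run(["ruff", "check", "."])
    return result.returncode == 0


def append_audit_entry(author: str, passed: bool) -> None:
    """Record who produced the change and whether it cleared the gate."""
    entry = {"author": author, "passed": passed,
             "checked_at": datetime.now(timezone.utc).isoformat()}
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    author = commit_author()
    passed = run_static_analysis()
    append_audit_entry(author, passed)
    # Agent and human commits hit the same bar; approval gates live in the
    # platform's branch-protection rules, outside this script.
    sys.exit(0 if passed else 1)
```

The design choice worth noticing is that nothing in the gate is agent-specific: the same analysis and logging apply to every contributor, which is exactly how the "orchestrated participant" framing treats an agent.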
Is the promise of agentic coding already overstated? Most pilots suggest the fault lies elsewhere: enterprises drop agents into unchanged pipelines and see little lift, which points to the surrounding workflow rather than the technology itself.
The report stresses that context—not model size—holds the key. Without a clear structure, history and intent, the AI cannot plan effectively. McKinsey’s 2025 study argues that productivity gains stem from re‑architecting workflows, not merely attaching an agent.
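What "context" might mean in practice is easiest to see as a data structure: something that bundles structure, history and intent into a single payload the agent can plan against. The sketch below is purely illustrative; the TaskContext shape and every field name are invented for this example, not drawn from the report.

```python
"""Illustrative context payload: structure, history, and intent in one place.

The TaskContext shape and field names are hypothetical; the point is that
the agent receives ownership, history and acceptance criteria, not just a prompt.
"""
import json
from dataclasses import dataclass, field, asdict


@dataclass
class TaskContext:
    intent: str                       # what the business actually wants changed
    owning_team: str                  # clear ownership, per the article's point
    relevant_modules: list[str]       # structure: where the change belongs
    recent_decisions: list[str] = field(default_factory=list)   # history
    acceptance_checks: list[str] = field(default_factory=list)  # how success is judged


def to_agent_payload(ctx: TaskContext) -> str:
    """Serialize the context so it can accompany the coding task."""
    return json.dumps(asdict(ctx), indent=2)


if __name__ == "__main__":
    ctx = TaskContext(
        intent="Reduce checkout latency by caching the pricing lookup",
        owning_team="payments-platform",
        relevant_modules=["pricing/cache.py", "checkout/handlers.py"],
        recent_decisions=["Pricing service moved behind an internal gateway in Q2"],
        acceptance_checks=["p95 checkout latency under 300 ms",
                           "existing pricing tests pass"],
    )
    print(to_agent_payload(ctx))
```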
Teams must redesign hand‑offs, validation steps and feedback loops before the technology can deliver. Yet the path to such redesign remains vague. How many organizations will invest in reshaping processes rather than buying new models?
Some firms may experiment, but evidence of systematic change is limited. The findings caution against assuming automatic improvement. In short, agentic AI offers potential, but realizing it depends on workflow overhaul—a step that many companies have yet to take.
Organizations that don’t address the surrounding context may continue to see marginal returns. Measuring success will require new metrics that capture iterative feedback and alignment with business intent. Until such frameworks are in place, the true impact of agentic coding remains uncertain.
Further Reading
- The State of AI: Global Survey 2025 - McKinsey
- The State of AI in 2025: Agents, Innovation, and Transformation - Brian Heger
- 10 Takeaways from McKinsey's 2025 AI Report - Dr. Claire Brady
- The State of AI in 2025: Key Insights from McKinsey's Report - Kanerika
Common Questions Answered
Why are enterprise AI pilots for coding assistants falling short of expectations according to McKinsey?
McKinsey’s 2025 "One Year of Agentic AI" report finds that most pilots simply drop AI agents into existing pipelines, creating friction rather than speed. Engineers end up spending extra time verifying AI‑generated code, which erodes the anticipated efficiency gains.
What does McKinsey identify as the primary factor for achieving productivity gains with agentic AI?
The report emphasizes that context—not model size—is the key to productivity. Gains arise when firms re‑architect workflows to provide clear structure, history, and intent, enabling the AI to plan and act effectively.
How should companies redesign hand‑offs and validation steps to unlock AI‑driven coding benefits?
Companies need to redesign hand‑offs, validation steps, and feedback loops so that AI agents are integrated into a purpose‑built workflow rather than a legacy one. This redesign reduces verification overhead and allows the agent to amplify well‑tested processes.
What does the article suggest about the relationship between workflow redesign and speed gains in AI‑driven coding?
The article argues that speed gains are minimal when agents are treated as plug‑ins to unchanged workflows. Meaningful acceleration only occurs after re‑architecting the process, aligning the AI’s capabilities with a structured, context‑rich environment.