
EY Multiplies Coding Output 4-5x with AI Agent Strategy

EY boosts coding output 4‑5× by linking AI agents to engineering standards


EY’s engineering leaders have been quietly re‑architecting how code gets written across the firm. While most firms tout a quick lift from plugging in a generative‑AI assistant, EY’s approach was a marathon, not a sprint. Over a year and a half to two years, the team headed by senior manager Stephen Newman layered AI agents onto a set of internal engineering standards, weaving them into the daily workflow of auditors, tax specialists and financial‑services developers.

The effort was as much about culture as technology—creating shared expectations, governance checkpoints and a feedback loop that lets the AI suggest, but humans still decide. By the time the system was live, it wasn’t just a novelty; it became a semi‑autonomous coding partner that could understand EY’s compliance‑heavy context. The payoff, according to internal data, is a dramatic uplift in output that reshapes how the firm delivers its core platforms.

The result: 4x to 5x productivity gains across teams building EY's suite of audit, tax, and financial platforms. But the gains didn't come from just turning on a tool. Newman's team spent 18 to 24 months building the cultural foundation and technical integrations that made semi-autonomous coding work at scale.

EY started with GitHub Copilot-style tools, letting engineers get comfortable with prompt engineering and assistive AI. Newman said the key learning was making AI adoption organic rather than forced from leadership. "It's important to bring AI capabilities as a ground-up organic adoption rather than force them onto the users," he said.

Developers wanted to move beyond code generation to building, deployment, and operationalization. Newman realized agents needed access to EY's code repos, engineering standards and source catalogs to generate deployable code. Without that "context universe," as Newman calls it, agents produce generic output that requires extensive rework.
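The "context universe" idea can be sketched as a context-assembly step that primes an agent with firm-specific material before it generates anything. The class and field names below are illustrative assumptions, not EY's actual implementation, which the article does not describe.

```python
from dataclasses import dataclass, field

@dataclass
class ContextUniverse:
    """Hypothetical bundle of firm-specific context fed to a coding agent.

    Field names and prompt layout are illustrative assumptions; the
    article does not document a real schema.
    """
    engineering_standards: list[str] = field(default_factory=list)
    source_catalog: list[str] = field(default_factory=list)
    repo_snippets: list[str] = field(default_factory=list)

    def to_prompt(self, task: str) -> str:
        # Prepend standards, approved sources, and repo examples so the
        # agent's output follows existing conventions rather than being
        # generic code that needs extensive rework.
        sections = [
            "## Engineering standards\n" + "\n".join(self.engineering_standards),
            "## Approved sources\n" + "\n".join(self.source_catalog),
            "## Relevant repository code\n" + "\n".join(self.repo_snippets),
            "## Task\n" + task,
        ]
        return "\n\n".join(sections)

# Usage: every agent request is wrapped in the same curated context.
ctx = ContextUniverse(
    engineering_standards=["All public functions require type hints."],
    source_catalog=["internal-auth-lib v2"],
    repo_snippets=["def audit_entry(record: dict) -> None: ..."],
)
prompt = ctx.to_prompt("Add an export endpoint to the audit service.")
```

The point of the sketch is only that context is assembled centrally and consistently; without that step, as the article notes, agents produce generic output.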

EY evaluated multiple agent platforms: Lovable, Replit and Factory's IDE-based Droids. Rather than mandate a tool, Newman's team measured adoption, usage and productivity across all three.

Did the boost prove sustainable? EY’s teams report four‑ to five‑fold productivity gains after tying AI coding agents to internal engineering standards, yet the numbers hide a caveat. The agents can spit out thousands of lines in minutes, but most of that output fails compliance checks, breaches coding standards, or creates additional cleanup work.

“You can generate a ton of code, but it doesn’t mean really anything, right?” Stephen Newman warned, emphasizing the need for compliant results that integrate cleanly. The gains did not appear overnight; Newman's group spent 18 to 24 months building cultural buy‑in and technical scaffolding that allowed semi‑autonomous coding to function. Consequently, the reported uplift reflects both the tool’s speed and the effort invested in aligning it with EY’s standards.

It remains unclear whether the same approach would yield comparable returns elsewhere, or how much of the generated code ultimately reaches production without further refinement. The experiment underscores that raw code volume alone does not equate to value, and that disciplined integration remains a prerequisite for any measurable productivity lift.

Common Questions Answered

How did EY achieve 4-5x productivity gains in coding?

EY linked AI coding agents to internal engineering standards over 18-24 months, carefully integrating the technology into their workflow. The approach involved starting with GitHub Copilot-style tools, allowing engineers to gradually become comfortable with AI-assisted coding and developing a robust cultural and technical foundation.

What challenges did EY encounter when implementing AI coding agents?

Despite generating thousands of lines of code quickly, EY found that most AI-generated code failed compliance checks or breached coding standards. Stephen Newman emphasized that generating code does not automatically mean the code is usable or valuable, highlighting the need for careful integration and validation.

What was EY's strategy for introducing AI into their coding process?

EY took a methodical approach by first introducing GitHub Copilot-style tools to help engineers get comfortable with prompt engineering and assistive AI. They spent 18-24 months building a cultural foundation and technical integrations to make semi-autonomous coding work effectively across their audit, tax, and financial platforms teams.