Study Reveals AI Tools Slowed Experienced Developers by 19%


A recent study by METR, an AI benchmarking nonprofit, has revealed a surprising finding: experienced open-source developers took 19% longer to complete tasks when using AI coding assistants such as Cursor Pro. This outcome starkly contrasts with initial expectations and the developers' own estimates, challenging the widespread assumption of universal productivity gains from these tools.

The research, a randomized controlled trial conducted in the first half of 2025, focused on developers working on their own mature, complex open-source projects. Before starting, the developers forecast a 24% speed-up from using AI tools; even after completing the work, they estimated the tools had made them 20% faster. Objective measurement, however, revealed the 19% slowdown.

This finding introduces a critical counterpoint to numerous reports and industry surveys that have highlighted substantial productivity boosts from AI coding assistants. For instance, studies on GitHub Copilot have indicated productivity increases of 26% or more, and Amazon has reported that developers using CodeWhisperer completed tasks 57% faster. The disparity underscores how difficult it is to measure true productivity across varied development environments.

Experts suggest several factors might explain the unexpected slowdown. The METR study focused on experienced developers working in complex codebases they already knew well, a setting where AI assistants can introduce "AI-induced tech debt" or generate code that requires extensive debugging and correction. As one developer noted, "Debugging code you didn't write can be hard," a sentiment that extends to AI-generated code, especially when it contains hallucinations or outdated information.

The METR study emphasizes that its results are a "snapshot in time" of early 2025 AI capabilities and apply to a specific demographic of developers and project types. While AI tools may offer significant benefits in other contexts, such as for less experienced developers or in simpler coding tasks, the study calls for a re-evaluation of the real-world impact of these tools and highlights the ongoing need for rigorous, objective measurement beyond anecdotal evidence or self-reported gains.