Software engineer workflows have been transformed in recent years by an influx of AI coding tools like Cursor and GitHub Copilot, which promise to boost productivity by automatically writing lines of code, fixing bugs, and testing changes. The tools are powered by AI models from OpenAI, Google DeepMind, Anthropic, and xAI, which have rapidly improved their performance on a range of software engineering benchmarks.
However, a new study published Thursday by the non-profit AI research group METR calls into question the extent to which today's AI coding tools actually improve productivity for experienced developers.
METR conducted a randomized controlled trial for this study, recruiting 16 experienced open source developers and having them complete 246 real tasks on large code repositories they regularly contribute to. The researchers randomly assigned roughly half of those tasks as "AI-allowed," giving developers permission to use state-of-the-art AI coding tools such as Cursor Pro, while the other half of the tasks forbade the use of AI tools.
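The article doesn't describe METR's actual assignment procedure, but the core idea of a randomized split across conditions can be sketched roughly like this (the function name, seed, and fifty-fifty split are illustrative assumptions, not details from the study):

```python
import random

def assign_conditions(tasks, seed=0):
    """Illustrative sketch: randomly split tasks into an 'AI-allowed'
    condition and an 'AI-disallowed' condition, roughly half and half.
    This is NOT METR's published protocol, just the general idea of
    per-task random assignment in a within-subject trial."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = tasks[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    labels = {t: "AI-allowed" for t in shuffled[:half]}
    labels.update({t: "AI-disallowed" for t in shuffled[half:]})
    return labels
```

Because each developer completes tasks in both conditions, differences in completion time can be attributed to the tooling rather than to differences between developers.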
Before completing their assigned tasks, the developers forecast that using AI coding tools would reduce their completion time by 24%. That wasn't the case.
"Surprisingly, we find that allowing AI actually increases completion time by 19%: developers are slower when using AI tooling," the researchers said.
Notably, only 56% of the developers in the study had experience using Cursor, the main AI tool offered in the study. While nearly all of the developers (94%) had experience using some web-based LLMs in their coding workflows, this study was the first time some of them had used Cursor specifically. The researchers note that developers were trained on using Cursor in preparation for the study.
Still, METR's findings raise questions about the supposed universal productivity gains promised by AI coding tools in 2025. Based on the study, developers shouldn't assume that AI coding tools, especially what have come to be known as "vibe coders," will immediately speed up their workflows.
METR researchers point to a few potential reasons why AI slowed developers down rather than speeding them up: developers spend far more time prompting AI and waiting for it to respond when using vibe coders than actually coding. AI also tends to struggle in large, complex codebases, which this test used.
The study's authors are careful not to draw any strong conclusions from these findings, explicitly noting that they don't believe AI systems currently fail to speed up many or most software developers. Other large-scale studies have shown that AI coding tools do speed up software engineer workflows.
The authors also note that AI progress has been substantial in recent years, and that they wouldn't expect the same results even three months from now. METR has also found that AI coding tools have significantly improved their ability to complete complex, long-horizon tasks in recent years.
Nonetheless, the research offers yet another reason to be skeptical of the promised gains of AI coding tools. Other studies have shown that today's AI coding tools can introduce errors and, in some cases, security vulnerabilities.