Developers Skeptical of AI Code Yet Often Skip Verification

Despite a lack of trust in AI-generated code, many developers admit they do not consistently check it for errors.

  • 96% of developers don't fully trust AI-generated code; 52% don't always check it for errors
  • Majority of ChatGPT and Perplexity users access AI through personal accounts
  • Data exposure and vulnerabilities remain significant concerns

According to Sonar's latest State of Code Developer Survey, a staggering 96% of developers say they do not fully trust the functional accuracy of AI-generated code. Alarmingly, many still do not adequately verify it.

Currently, approximately 42% of developers' code is AI-generated, a notable increase from just 6% in 2023, with projections suggesting it could rise to around 65% by 2027.

Yet fewer than half (48%) of developers consistently check AI output before committing it, leaving significant room for bugs and vulnerabilities to slip through.
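
One way teams try to make that check harder to skip is by wiring a lightweight quality gate into the commit step itself. The sketch below is purely illustrative and not drawn from the Sonar report: it assumes a Python codebase with flake8 installed, and a script installed as a Git pre-commit hook (for example at .git/hooks/pre-commit) that blocks a commit when staged files fail a lint pass.

```python
#!/usr/bin/env python3
"""Minimal sketch of a pre-commit quality gate (illustrative only).

Assumptions: the project is Python, flake8 is installed, and this script
is installed as the repository's pre-commit hook. Tool choice and hook
path are examples, not recommendations from the Sonar report.
"""

import subprocess
import sys


def staged_python_files() -> list[str]:
    # List files staged for this commit, keeping only Python sources.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]


def main() -> int:
    files = staged_python_files()
    if not files:
        return 0  # Nothing staged to check.
    # Run the linter on staged files; a non-zero exit code blocks the commit.
    result = subprocess.run(["flake8", *files])
    if result.returncode != 0:
        print("Lint issues found; review the staged (possibly AI-generated) "
              "changes before committing.")
    return result.returncode


if __name__ == "__main__":
    sys.exit(main())
```

A gate like this does not replace human review of AI-generated code, but it turns the "always check before committing" habit into a default rather than an optional step.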

Developers Aren't Verifying AI-Generated Code

While 59% of developers report putting 'moderate' or 'substantial' effort into checking AI-generated code, 38% acknowledge that verifying it takes more time than reviewing human-written code. Furthermore, 61% agree that AI-generated code often appears correct but is not.

This finding aligns with a separate study by CodeRabbit, which revealed that AI-generated code contains 1.7 times more issues—and 1.7 times more major issues—than code written by humans.

Current trends indicate that AI tools are used predominantly for prototyping (88%) and internal production software (83%). While this may not seem critical, a significant share also use them for customer-facing applications (73%). GitHub Copilot (75%) and ChatGPT (74%) are the most widely used assistants.

Moreover, Sonar discovered that over one-third (35%) of developers use personal accounts instead of work-approved ones, a figure that rises to 52% among ChatGPT users and 63% for Perplexity users. This raises additional concerns regarding the potential exposure of sensitive company information.

Even with this level of adoption, developers remain highly concerned about data exposure (57%), minor vulnerabilities (47%), and severe vulnerabilities (44%).

As the report concludes, "Generating code faster is only half the battle; the real value lies in the ability to trust and verify that code efficiently."
