Sonar.tv

The Reality of Developers Using AI | State of Code Research | Sonar Summit 2026

Sonar Summit | March 4th, 2026 | 14:41 | Part of SCAI

Data-driven research findings on how developers actually use AI coding assistants in practice, including the code quality and security gaps that emerge and how SonarQube addresses them at the team level.

The Rapid Integration of AI into Development Workflows

The software development landscape is undergoing a dramatic transformation as AI coding tools become deeply embedded in developers' daily workflows. According to Sonar's State of Code Research survey, AI has shifted from a futuristic concept to a present-day reality affecting the majority of code being written today. Anirban Chatterjee, product marketing lead at Sonar, highlighted how extraordinarily rapid this change has been, citing Andrej Karpathy's observation that AI coding represents "the biggest change to his basic coding workflow in two decades of programming," a shift that took hold within just a few weeks. The phenomenon has become so prevalent that industry observers have joked that "English is now the hottest new programming language," as developers increasingly use natural language prompts to interact with AI tools.

Key Adoption Metrics and Use Cases

The survey results reveal substantial adoption rates among developers who have experimented with AI coding tools. A striking 72% of developers who tried AI tools now use them nearly every day, far exceeding initial expectations of 50-60% adoption. More significantly, the use cases have matured beyond initial prototyping, with 58% of developers reporting that they use AI tools for production software and mission-critical services. When examining specific use cases, the research identified an interesting paradox: while documentation writing achieved the highest effectiveness rating at approximately 75%, the most frequently used application remains assisting with new code development. Despite its popularity, developers rate AI's effectiveness for new code assistance at only slightly above 50%, yet approximately nine out of ten developers still employ it for this purpose.

The Trust Gap and Quality Concerns

A critical finding emerged regarding developer confidence in AI-generated code quality. The survey indicates that 96% of developers do not fully trust that AI-generated code is functionally correct, with only 4% expressing complete confidence in the output's accuracy. This widespread skepticism is compounded by a significant gap in practice: despite those doubts, fewer than half of developers (48%) consistently run AI-generated code through formal code review before integration. This trust gap reveals a potential vulnerability in software quality assurance, particularly given that AI code is already deployed in mission-critical applications where reliability and security are paramount. Additionally, Sonar's research found that AI-generated code tends to be more verbose and complex than human-written code, which further complicates review and increases the volume of code requiring developer attention.

Popular AI Tools and Access Patterns

The landscape of AI coding tools reflects a clear market dominance by GitHub Copilot and ChatGPT, followed closely by Claude, Google Cloud Code, and Google Gemini. However, the methods through which developers access these tools vary significantly across platforms. GitHub Copilot leads in organizational adoption, with 78% of users accessing it through work-provided accounts, reflecting its integration into professional development environments. In contrast, ChatGPT shows a more distributed access pattern, with approximately 50% of users accessing it through personal accounts and 50% through work accounts. Tools like Perplexity demonstrate even higher reliance on personal accounts, suggesting developers often pursue AI assistance through self-selected channels rather than officially sanctioned organizational tools.

Key Takeaways

  • Rapid Daily Adoption: 72% of developers who have tried AI coding tools now use them nearly every day, with usage extending well beyond prototyping to production and mission-critical applications
  • Widespread Trust Deficit: 96% of developers doubt the functional correctness of AI-generated code, yet fewer than half consistently perform code review on AI output, creating a significant quality assurance gap
  • Complexity and Volume Challenges: AI-generated code tends to be more verbose and complex than human code, substantially expanding the code review burden for development teams
  • Tool Dominance with Mixed Access Models: GitHub Copilot and ChatGPT lead market adoption, but access patterns vary significantly, with many developers relying on personal rather than organizational accounts
  • Effectiveness-Usage Paradox: While developers rate AI assistance for new code development at only slightly above 50% effectiveness, it remains the most frequently used AI application, suggesting factors beyond perceived quality drive adoption