
Sonar Summit 2026 | Software Engineering + AI = ?

Sonar Summit · March 4th, 2026 · 36:51

A wide-ranging Summit discussion on how the addition of AI coding assistants transforms software engineering workflows, team structures, and the role of SAST and quality gates in the modern SDLC.

Introduction: The Conflicting Narratives

The tech industry faces a paradox when it comes to AI's role in software engineering. Headlines proclaim that artificial intelligence will replace mid-level engineers and fundamentally transform coding jobs, with prominent figures like Mark Zuckerberg warning of imminent disruption. Yet on the ground, companies report a more nuanced reality: AI coding tools sometimes provide questionable suggestions, damage systems when deployed autonomously, and struggle to deliver on their promises. Gergely Orosz, creator of the Pragmatic Engineer publication and former engineering manager at companies including Uber, Microsoft, and Skyscanner, explored this disconnect during his presentation at Sonar Summit 2026, drawing on conversations with engineers across startups, big tech, and traditional companies.

The High-Performance Reality at AI-Native Companies

Inside AI development tool startups and major AI laboratories, the integration of AI into the software development workflow has reached remarkable levels of maturity. At Cursor, an AI dev tool company, engineering teams reported that 40-50% of their code was already AI-generated as of October, with developers running multiple AI models in parallel and managing up to 50 AI-assisted tabs per day. Similarly, at OpenAI's offices, unlimited usage of ChatGPT and Codex is standard practice, with engineers regularly running four to eight parallel agents. Notably, more than 90% of Codex's own application was generated by Codex itself. These organizations have implemented sophisticated workflows, such as automatic one-shot ticket closure through AI assistance and built-in fix buttons for rapid problem resolution. However, these companies are outliers: their workforces are uniformly highly technical, and AI integration is deeply embedded across all functions, from marketing to engineering.

Real-World Risks and Limitations

The reality of AI tool deployment in traditional enterprises reveals significant challenges that temper enthusiasm. A fintech startup's trial of AI code review tools exposed critical flaws when the system flagged that the company was not encrypting the last four digits of credit cards—a suggestion that would have been incorrect, since the last four digits are not personally identifiable information. Such errors underscore the risk of misguided AI recommendations in security-sensitive domains. Additionally, documented cases of AI agents causing thousands of dollars in damage when deployed autonomously, combined with reports from founders moving away from AI coding tools entirely due to difficulty reviewing AI-generated output, highlight the gap between theoretical potential and practical implementation. These incidents suggest that AI tools require careful oversight, domain expertise, and human judgment to be effective.

The Critical Role of Code Quality and Observability

These challenges underscore why established code quality and security practices remain essential as AI becomes more prevalent in software development. Tools like SonarQube, which detect errors early in the CI/CD pipeline, become increasingly important when AI-generated code is part of the workflow. The ability to maintain observability, catch errors automatically, and provide developers with a safety net becomes more critical—not less—when portions of code are generated by AI systems. Organizations must balance the productivity gains from AI coding assistance with rigorous quality gates and security scanning to prevent the kind of errors demonstrated in real-world deployments.
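As one concrete illustration of the quality gates described above, a CI pipeline can run static analysis on every pull request and fail the build when the project's quality gate is not met, regardless of whether the code was written by a human or an AI assistant. The sketch below is a minimal, illustrative GitHub Actions job using SonarSource's published scan and quality-gate actions; the secret names are placeholders, and the action version tags may need adjusting for a given setup.

```yaml
# Minimal sketch: analyze every pull request with SonarQube and
# fail the build if the quality gate does not pass.
# Assumes SONAR_TOKEN and SONAR_HOST_URL are configured as repository
# secrets, and the project key is defined in sonar-project.properties.
name: quality-gate
on: [pull_request]

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # full history improves new-code detection

      - name: SonarQube scan
        uses: sonarsource/sonarqube-scan-action@v4
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}

      - name: Enforce quality gate
        uses: sonarsource/sonarqube-quality-gate-action@v1
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
```

Gating on the scan result in CI, rather than relying on post-merge review, is what turns static analysis into the automatic safety net the talk argues becomes more critical as the share of AI-generated code grows.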

Key Takeaways

  • AI coding tools deliver exceptional productivity gains within highly technical, AI-native organizations, but this success does not automatically translate to traditional enterprises
  • AI systems can generate plausible but incorrect suggestions, particularly in security and compliance domains, making human review and domain expertise non-negotiable
  • Code quality and security tools remain essential infrastructure as AI becomes more prevalent in development workflows
  • The divide between AI company hype and ground-level reality reflects different organizational contexts, with widespread use still facing adoption barriers in mainstream software engineering