Sonar Summit 2026 | When AI writes code, who owns the bug?
A provocative Sonar Summit session examining accountability, liability, and Quality Gate governance when AI coding agents introduce bugs, vulnerabilities, and technical debt into production codebases.
Abishek Virala, founder of Aqua.sh and a prominent DevOps and AI content creator, addressed a critical question at Sonar Summit 2026: in an era of AI-assisted software engineering, who bears responsibility for bugs introduced by artificial intelligence? With GitHub's October analysis finding that 46% of repositories are now created with AI assistance, the question has become urgent. As developers lean on AI to generate everything from complete applications to individual features and bug fixes, organizations face a fundamental shift in how they approach code quality and security.
The Security Paradox of AI-Generated Code
While AI-assisted development significantly enhances developer productivity, it simultaneously creates a dangerous paradox. Now that developers can produce 7-8 pull requests per week instead of the previous 2, reviewers face mounting pressure and have developed an unconscious bias toward trusting AI-generated code. This bias leads reviewers to focus exclusively on functional correctness (whether code compiles, runs, and passes functional tests) while overlooking critical security vulnerabilities hidden beneath the surface. AI-generated code can contain insecure implementations such as SQL injection or SSRF vulnerabilities, along with outdated dependencies and deprecated patterns, all while remaining perfectly correct in functional terms. These vulnerabilities stay invisible to cursory review, creating a gateway for attackers once the code reaches production.
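The gap between functional correctness and security can be made concrete. In this illustrative sketch (the table, function names, and payload are invented for the demo), both query helpers pass the same functional test for a benign username, yet one is trivially injectable:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Functionally correct for benign input, but building SQL with an
    # f-string makes it injectable: "x' OR '1'='1" matches every row.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the value as a literal,
    # so the injection payload matches nothing.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demo against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # → 2 (every row leaks)
print(len(find_user_safe(conn, payload)))    # → 0
```

A reviewer checking only "does it find the right user?" would approve either version; only a security-aware analysis distinguishes them.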
Establishing Gold Standard Security Baselines
To combat this security paradox, organizations must implement a comprehensive approach built on four pillars. First, eliminate the trust bias that treats AI-generated code differently from human-written code. Second, establish gold standard security baselines: not optional suggestions, but mandatory sets of tools and practices that run in the development pipeline before peer review, applied consistently to all code authors and automated within CI/CD so they add no burden to reviewers. Third, ensure the tools executing these baselines are both security-aware and context-aware, understanding real-world vulnerabilities rather than merely analyzing syntax. Finally, adopt a shift-left approach, executing security checks at pull request creation time rather than waiting until production deployment.
Automation as the Path Forward
The key to managing AI-generated code at scale lies in automating the identification and mitigation of bugs before production. Rather than relying on human reviewers to catch security issues amid increasing code volumes, organizations should embed intelligent security verification tools directly into their pull request workflows. These tools must understand the context of modern vulnerabilities and assess code holistically, not just for syntactic correctness. By treating AI-generated code with the same rigor as human-written code and implementing consistent, automated security checks, organizations can harness AI's productivity benefits while maintaining their security posture.
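One concrete instance of such holistic checking is auditing pinned dependencies, a risk named above. The sketch below parses a requirements file and flags pins older than a known fix; the advisory table is fabricated for the demo, and a production pipeline would query a real advisory source such as the OSV database instead:

```python
# Fabricated "fixed in" versions, for illustration only.
ADVISORIES = {
    "requests": "2.19.0",
    "pyyaml": "5.4",
}

def parse_pins(requirements: str) -> dict:
    """Extract {package: version} from lines like 'requests==2.5.0'."""
    pins = {}
    for line in requirements.splitlines():
        line = line.strip()
        if "==" in line and not line.startswith("#"):
            name, version = line.split("==", 1)
            pins[name.lower()] = version
    return pins

def audit(requirements: str) -> list:
    """Return a finding for each pin older than its advisory's fix."""
    def as_tuple(v):  # naive numeric comparison, enough for the sketch
        return tuple(int(part) for part in v.split("."))
    findings = []
    for name, version in parse_pins(requirements).items():
        fixed = ADVISORIES.get(name)
        if fixed and as_tuple(version) < as_tuple(fixed):
            findings.append(f"{name}=={version} < fixed {fixed}")
    return findings
```

Wired into the gate described above, a finding like `requests==2.5.0 < fixed 2.19.0` blocks the pull request automatically, regardless of whether a human or an AI agent added the dependency.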
Key Takeaways
- AI-generated code requires equal scrutiny: Developers and reviewers must eliminate bias toward AI-written code and apply the same security standards as they would to human-generated code
- Surface-level review is insufficient: Functionally correct code can contain hidden security vulnerabilities such as SQL injection, SSRF, and outdated dependencies that only deeper analysis can detect
- Gold standard security baselines are essential: Organizations should establish mandatory, automated, and consistent security checks that execute before peer review, not after
- Security-aware tooling is critical: Tools used to verify AI-generated code must understand real-world vulnerabilities and context, not just syntax and semantics
- Shift-left approach prevents production incidents: Security verification should occur at pull request creation time, ensuring vulnerabilities never reach production environments