
Why AI Code Passes OWASP but Fails MISRA | Sonar Summit 2026

Sonar Summit · March 4th, 2026 · 18:41

A technical analysis of why AI-generated code can satisfy OWASP Top 10 checks yet still violate safety-critical MISRA rules, and how advanced SAST techniques in SonarQube bridge that detection gap.

Geoffray, product manager for the C/C++ ecosystem at Sonar, explored the critical relationship between coding standards, individual rules, and artificial intelligence during his Sonar Summit 2026 presentation. Drawing from seven years of experience in the C++ analyzer development team at Sonar, Geoffray provided insights into how coding standards function across different industries and why their fundamental importance remains unchanged despite the emergence of AI-driven code generation. The presentation examined the spectrum of coding standards—from high-level awareness guidelines like OWASP Top 10 to rigid compliance requirements like MISRA—and demonstrated how these standards translate into actionable rules that developers encounter daily.

Understanding the Standards Spectrum

Coding standards exist on a broad spectrum defined by two critical dimensions: prescriptiveness and scope. On one end of the spectrum lie abstract, non-prescriptive guidelines such as general security awareness frameworks, while the opposite extreme features highly prescriptive standards like MISRA, which dictate specific implementation approaches and constrain language features. The vertical axis represents scope, with standards focused purely on code practices at the bottom and comprehensive lifecycle management standards at the top. This positioning reveals an important truth: following coding standards involves significant cost tradeoffs. Organizations must balance development speed against code quality and safety, consider the expertise and processes required for compliance, and invest in appropriate tooling such as static analysis, dynamic analysis, and software composition analysis. These decisions fundamentally affect both code security and developer velocity.

Three Standards in Focus

The presentation examined three distinct standards to illustrate how different communities address code quality challenges. OWASP ASVS (Application Security Verification Standard) emerged from the need to standardize web application security verification, replacing ad hoc security checking with contractual, testable requirements. The standard's 286 testable requirements across multiple levels serve three functions: establishing measurable compliance metrics, providing blueprints for secure development, and enabling contractual obligations between developers and stakeholders. MISRA C++:2023, born from automotive industry demands for safety-critical software, takes a fundamentally different approach by defining a safe subset of C++17, restricting the language's complex features through 179 detailed guidelines. These guidelines fall into two categories: decidable rules, whose compliance an analysis tool can determine exactly for any program, and undecidable rules, which no algorithm can verify with certainty and which therefore demand more sophisticated, approximate analysis despite their critical importance.
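
The decidable/undecidable distinction can be made concrete with a short C++ sketch. The two rules below are illustrative stand-ins, not specific numbered MISRA guidelines:

```cpp
#include <cassert>

// Decidable rule (illustrative): "no implicit narrowing conversions".
// A tool can prove compliance by inspecting every conversion site in
// the source, so any two correct tools agree on every program.
int scale(double factor) {
    // return factor * 10;                 // would narrow double -> int
    return static_cast<int>(factor * 10.0);  // explicit cast, compliant
}

// Undecidable rule (illustrative): "never dereference a null pointer".
// Whether `p` is null at the dereference can depend on arbitrary
// runtime paths, so a checker must approximate, which is why tool
// quality varies precisely on these rules.
int deref_or_default(const int* p) {
    return (p != nullptr) ? *p : 0;  // the guard makes the property checkable
}
```

The practical consequence: decidable rules yield identical verdicts across conforming tools, while undecidable rules are where analyzer sophistication actually differentiates results.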

Sonar's own Sonar Way framework, while not technically a standard, fulfills equivalent objectives across industries by enabling, per language, a default set of 250 to 600 rules aimed at keeping code secure, reliable, and maintainable. Sonar Way emphasizes shifting quality checks left in the development process and educating developers to prevent defects rather than merely detecting them. This philosophy reflects a broader principle that Geoffray articulated: the best bugs are those never written in the first place, similar to how defensive driving prevents accidents more effectively than relying solely on safety features.

The AI Question: Standards Don't Change, But Context Does

While artificial intelligence increasingly influences code generation and development workflows, Geoffray emphasized that AI's emergence does not fundamentally alter the value or mechanics of coding standards. The title's provocative claim—that AI code passes OWASP but fails MISRA—points to a critical distinction: different standards serve different purposes and operate at different abstraction levels. OWASP's high-level security requirements may be satisfied by AI-generated code at a surface level, while MISRA's detailed prescriptions about language feature restrictions require explicit, granular compliance that generic AI models struggle to enforce consistently.
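
A hypothetical C++ fragment makes the gap tangible. Nothing below trips a typical OWASP-style security check, there is no injection, no secret handling, no unsafe deserialization, yet a MISRA-oriented analyzer could still object. The function names are assumptions for illustration; MISRA-style rule sets commonly restrict `atoi`, whose behavior is undefined on out-of-range input and which reports no errors:

```cpp
#include <charconv>
#include <cstdlib>
#include <cstring>
#include <system_error>

// Surface-level "secure" parsing of a port number: no injection risk,
// but atoi silently accepts garbage and has undefined behavior on
// out-of-range input, the kind of construct prescriptive standards ban.
int parse_port_surface_ok(const char* s) {
    return std::atoi(s);
}

// A version closer to MISRA expectations: an explicit, checkable error
// path and a validated range, with no silent failure modes.
bool parse_port_checked(const char* s, int& out) {
    const char* end = s + std::strlen(s);
    auto result = std::from_chars(s, end, out);
    return result.ec == std::errc{} && result.ptr == end
        && out >= 0 && out <= 65535;
}
```

Both functions would likely sail through a high-level security checklist; only the second satisfies the granular, prescriptive style of rule that safety-critical standards demand.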

This distinction highlights why organizations cannot simply apply a single standard as a universal quality metric. Standards exist within ecosystems tailored to specific industries, threat models, and safety requirements. The presence of AI-generated code in the workflow does not eliminate the need for these carefully constructed standards; rather, it increases their importance as guardrails that ensure AI-assisted development remains aligned with domain-specific safety and security objectives.

Key Takeaways

  • Standards serve distinct purposes: From high-level awareness frameworks like OWASP Top 10 to prescriptive requirements like MISRA, coding standards address different organizational needs and cannot be used interchangeably.
  • Rules are tools, not truth: Individual atomic checks implemented in code analysis tools serve standards as enabling mechanisms, but their quality varies—particularly for undecidable rules that require sophisticated analysis beyond mathematical proof.
  • Cost-benefit tradeoffs are unavoidable: Organizations must deliberately choose which standards to follow based on acceptable risks to development velocity, required expertise, process overhead, and tooling investments.
  • Prevention exceeds detection: Shifting left by educating developers stops defects from being written in the first place, which is more effective than detecting and fixing them after the fact.
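
The prevention principle in the last takeaway can be sketched in code. This is an illustrative pattern, not one from the talk: encode an invariant in a type so the invalid state cannot be constructed at all, instead of relying on a later detection pass to catch it:

```cpp
#include <cassert>
#include <stdexcept>

// A value that is valid by construction: once a Percentage exists,
// every downstream consumer can rely on 0..100 without re-checking.
class Percentage {
public:
    explicit Percentage(int value) : value_(value) {
        if (value < 0 || value > 100) {
            throw std::invalid_argument("percentage out of range");
        }
    }
    int value() const { return value_; }

private:
    int value_;
};
```

The defect (an out-of-range percentage flowing through the system) is never written, rather than being flagged by a downstream check, which mirrors the defensive-driving analogy from the talk.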