Sonar.tv

Achieving Trusted Code in the Agentic SDLC | Sonar Summit 2026

Sonar Summit | March 4th, 2026 | 18:45

Explore the practices and SonarQube Quality Gate configurations that engineering teams need to establish trusted, verifiable code in an agentic SDLC where AI agents write and merge code autonomously.

The Problem with AI Coding Assistants

AI coding assistants have become powerful tools for accelerating development, but they are fundamentally limited by the quality of their input. Currently, developers share prompts in much the same way they shared Stack Overflow snippets a decade ago—discovering something that looks promising and pasting it directly into their projects without deeper scrutiny. This approach presents a significant risk. During a demonstration at Sonar Summit 2026, a well-crafted Express REST API generator prompt was tested. While the prompt appeared to include production-ready features such as a database layer with dynamic query building, authentication with error recovery, and flexible routing, the resulting code revealed critical vulnerabilities when analyzed by SonarQube. The generated application contained 65 security and quality issues, including SQL injection vulnerabilities, hard-coded credentials, path traversal flaws, and various injection attacks—demonstrating that a prompt's polished presentation bears no relation to the security and reliability of the code it generates.

Introducing Tessla: A Framework for Trusted AI Skills

To address this challenge, the speakers introduced Tessla, a framework that transforms ad-hoc prompts into versioned, reviewed packages of context called tiles or skills. Similar to how Tesla maintains strict quality standards across its codebase, Tessla establishes a curated ecosystem for AI coding capabilities. Rather than relying on random internet prompts, developers can use Tessla's skill review process to evaluate the quality of AI-generated instructions before they produce code. When the problematic Express API generator prompt was subjected to Tessla's skill review, it received a quality score of 32 out of 100—a failing grade that immediately flagged severe issues including inadequate descriptions, incomplete trigger terms, and critically, the active teaching of dangerous anti-patterns that would create exploitable production systems. This early detection mechanism acts as a code quality gate that operates before any actual code is written, providing developers with invaluable feedback at the planning stage.
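The shape of such a skill review can be sketched as a scoring pass over a skill's metadata and instructions. The rubric below is invented for illustration (Tessla's actual criteria and weights are not public in this summary); it only mirrors the findings the talk mentioned: inadequate descriptions, incomplete trigger terms, and instructions that teach anti-patterns.

```javascript
// Toy skill reviewer: deducts points for the failure categories named in
// the talk. Thresholds and weights are assumptions for this sketch.
function reviewSkill(skill) {
  const findings = [];
  let score = 100;

  if (!skill.description || skill.description.length < 40) {
    findings.push("inadequate description");
    score -= 20;
  }
  if ((skill.triggerTerms ?? []).length < 3) {
    findings.push("incomplete trigger terms");
    score -= 15;
  }
  // Flag instructions that actively teach dangerous anti-patterns.
  const antiPatterns = ["string concatenation", "hard-coded credentials"];
  for (const pattern of antiPatterns) {
    if (skill.instructions.includes(pattern)) {
      findings.push(`teaches anti-pattern: ${pattern}`);
      score -= 25;
    }
  }
  return { score: Math.max(score, 0), passed: score >= 70, findings };
}
```

The value of running this before generation is that a failing skill never gets the chance to produce exploitable code, which is exactly the planning-stage gate the talk describes.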

Improving Skills Through Documentation and Rules

The Tessla framework provides mechanisms to transform poor-quality prompts into proper, secure skills. Using Tessla's own tile creator skill—available in the public Tessla registry—developers can regenerate a skill with explicit focus on security and code quality best practices. The improved process includes downloading and embedding documentation tiles for relevant dependencies, ensuring that the AI agent uses correct APIs and coding patterns specific to libraries like Express and Zod. Additionally, Tessla enforces rules through the skill system. A properly constructed skill includes both security rules (such as always using parameterized queries, never logging credentials, and avoiding unsafe query construction) and code quality rules (such as preferring const over var, using for-of loops, and returning appropriate HTTP status codes). When the Express API generator prompt was reconstructed through this rigorous process, the resulting skill achieved a perfect 100 score, demonstrating that systematic application of security rules and best practices can transform dangerous patterns into trusted code generation instructions.
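A handler that follows the rule set above might look like the sketch below. The framework wiring is omitted so the rules stay visible; `getUserHandler` and the `db` interface are illustrative, not from the talk. Note the `const` declarations, the for-of loop, the parameterized query, and an explicit status code on every path.

```javascript
// Minimal sketch of a route handler obeying the skill's rules:
// const over var, for-of loops, parameterized queries, explicit statuses.
function getUserHandler(params, db) {
  const id = Number(params.id);
  if (!Number.isInteger(id) || id <= 0) {
    return { status: 400, body: { error: "id must be a positive integer" } };
  }
  // Parameterized query: id travels as a bound value, never as query text.
  const rows = db.query("SELECT id, name FROM users WHERE id = $1", [id]);
  for (const row of rows) {
    return { status: 200, body: row }; // first (and only) match
  }
  return { status: 404, body: { error: "user not found" } };
}
```

Encoding these choices as skill rules means the agent produces this shape by default, instead of relying on a reviewer to catch `var`, string-built SQL, or a missing 404 after the fact.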

Building Security into the Development Workflow

Integration of tools like SonarQube with AI coding agents through MCP (Model Context Protocol) creates a comprehensive feedback loop within the agentic software development lifecycle. Developers can now ask their AI coding assistants to analyze code quality issues discovered by Sonar, fix identified vulnerabilities, and codify security and code quality best practices as enforced rules within their skills. This approach transforms security from a post-development concern into an embedded principle of the development process. The framework enables AI agents to not only generate code but to understand the quality standards expected of it, learn from review feedback, and continuously improve their output through integration with established code quality platforms. By establishing a reviewed, scored ecosystem for AI coding skills similar to app stores or package registries, organizations can ensure that AI assistants operate within controlled, security-focused parameters rather than amplifying risks through indiscriminate use of untrusted prompts.
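The feedback loop described above can be sketched as analyze, fix, re-analyze. The `sonarClient` and `agent` objects below are hypothetical stand-ins for an MCP-connected SonarQube server and a coding agent; they do not reflect the real MCP tool names or SonarQube API, only the control flow.

```javascript
// Illustrative agentic feedback loop: pull open issues, have the agent
// address each one, and re-check until the analysis comes back clean
// or the round budget is exhausted.
async function qualityGateLoop(sonarClient, agent, projectKey, maxRounds = 3) {
  for (let round = 0; round < maxRounds; round++) {
    const issues = await sonarClient.listIssues(projectKey);
    if (issues.length === 0) {
      return { passed: true, rounds: round };
    }
    for (const issue of issues) {
      await agent.fix(issue); // agent patches code for this finding
    }
  }
  return { passed: false, rounds: maxRounds };
}
```

Bounding the rounds matters: an agent that cannot converge on a clean analysis should fail the gate loudly rather than loop forever, which keeps the quality gate authoritative.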

Key Takeaways

  • AI assistants are only as good as their context: Ad-hoc prompts often contain security vulnerabilities and anti-patterns that result in exploitable production code, regardless of superficial quality indicators
  • Tessla provides early quality gates: Skill review mechanisms catch dangerous patterns before code generation, enabling quality assessment at the planning stage rather than after development
  • Rules and documentation tiles enforce best practices: Systematic integration of security rules, code quality standards, and dependency-specific documentation transforms AI agents into reliable code generators
  • Agentic SDLC requires integrated feedback loops: Combining AI coding assistants with SonarQube analysis through tools like MCP creates comprehensive workflows that make security a development principle rather than an afterthought
  • Scored skill ecosystems establish trust at scale: Reviewed, versioned skill registries, modeled on app stores and package registries, keep AI assistants operating within controlled, security-focused parameters rather than amplifying risk through untrusted prompts