
How to Secure the AI Stack from Code to Runtime | Sonar Summit 2026

Sonar Summit | March 4th, 2026 | 24:45

A security-focused session covering how to protect the full AI development stack—from prompt to deployment—using SonarQube's SAST, SCA, and secrets detection capabilities alongside runtime observability.

The Evolving Security Landscape with AI Development

The rapid adoption of AI-powered coding tools and autonomous agents has fundamentally transformed software development, enabling organizations to ship code to production faster than ever before. However, as Donald Fischer (VP of Product Partnerships at Sonar) and Oron Noah (VP of Product Extensibility at Wiz) discussed at Sonar Summit 2026, this acceleration introduces significant security challenges that demand a comprehensive, integrated approach. The emergence of AI in development mirrors previous technological shifts—such as the move from on-premises to cloud infrastructure—each bringing novel vulnerabilities alongside their benefits.

Key Risk Areas in AI-Driven Development

The security risks introduced by AI development tools operate on multiple layers. First, visibility remains a critical challenge. Organizations struggle to understand what AI services are running in their environments, which Model Context Protocol (MCP) implementations are active, and what tools autonomous agents are utilizing. Developers can rapidly adopt new technologies without proper security oversight, creating blind spots for security teams. Simultaneously, the velocity of code deployment has outpaced the growth of security teams, forcing defenders to work more efficiently to manage the increased volume of production code. Additionally, attackers now leverage AI to accelerate exploitation timelines—vulnerabilities that once took weeks to exploit can now be weaponized in days or hours, fundamentally altering the threat landscape.
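The visibility gap described above starts with something simple: knowing which MCP servers and AI tools are even declared in an environment. As a minimal sketch of that first inventory step, the snippet below parses a client configuration using the common `mcpServers` JSON layout adopted by several MCP-aware tools; the sample config contents and the `inventory_mcp_servers` helper are illustrative assumptions, not any specific product's API.

```python
import json

# Illustrative config following the widely used `mcpServers` JSON layout;
# the server names and commands here are made-up examples.
SAMPLE_CONFIG = """
{
  "mcpServers": {
    "filesystem": {"command": "npx",
                   "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]},
    "internal-db": {"command": "python", "args": ["db_server.py"]}
  }
}
"""

def inventory_mcp_servers(config_text: str) -> list[dict]:
    """Return a flat inventory of declared MCP servers for security review."""
    config = json.loads(config_text)
    return [
        {"name": name,
         "command": spec.get("command"),
         "args": spec.get("args", [])}
        for name, spec in config.get("mcpServers", {}).items()
    ]

if __name__ == "__main__":
    # A security team could aggregate these inventories across repos and
    # workstations to surface MCP servers that were never reviewed.
    for server in inventory_mcp_servers(SAMPLE_CONFIG):
        print(f"{server['name']}: {server['command']} {' '.join(server['args'])}")
```

Real deployments would also need runtime discovery (processes, network endpoints), since config files only capture what was declared, not what is actually running.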

Democratization of Security Across the Organization

The traditional model of compartmentalized security roles—where developers write code, security teams review it, and operations teams manage runtime—has become obsolete in the AI era. Instead, security must function as a team sport, with every stakeholder from SOC teams to developers to business users engaging with the same security platform. When a security incident occurs, multiple teams must collaborate seamlessly: SOC teams identify threats, cloud security teams investigate root causes, and development teams remediate vulnerabilities at their source. With AI adoption, this democratization intensifies, as more personas and teams contribute code without traditional gatekeepers, requiring organizations to establish guardrails from the ground up.

Building Security Into AI Development Practices

Effective security in the AI stack demands a proactive, design-first approach. Organizations should conduct security reviews earlier in the development process—alongside architecture reviews—rather than as a gate before deployment. Guardrails must be embedded throughout the CI/CD pipeline with robust code scanners, but equally important is ensuring that AI-powered coding tools themselves generate secure code by design. Vendors developing AI coding assistants bear responsibility for preventing their models from generating vulnerable patterns, such as hardcoded secrets or insecure configurations. This shift places accountability on tool vendors to maintain security standards within their generative models.
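To make the CI/CD guardrail idea concrete, here is a deliberately tiny sketch of the kind of check a pipeline gate performs for hardcoded secrets. The two regex rules and the `scan_for_secrets` helper are toy assumptions for illustration; production tools such as SonarQube's secrets detection rely on far larger rule sets plus validation to keep false positives down.

```python
import re

# Toy detection rules (assumed for this sketch): a well-known AWS access
# key prefix, and a generic "api_key/secret = '<long token>'" assignment.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][A-Za-z0-9/+=_-]{16,}['\"]"
    ),
}

def scan_for_secrets(source: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) pairs for lines matching a rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, rule))
    return findings

snippet = '''
db_host = "localhost"
api_key = "sk_live_ABCDEF0123456789abcdef"
'''
print(scan_for_secrets(snippet))  # flags the api_key assignment on line 3
```

In a CI gate, a non-empty findings list would fail the build, forcing the secret to be moved into a vault or environment variable before the code can merge.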

The Defender's Advantage Through Integrated Solutions

While attackers may leverage AI to scale their capabilities, defenders possess a critical advantage: visibility into internal systems and the ability to deploy AI defensively. Organizations that embrace AI-powered security tools alongside traditional security measures gain superior threat detection and response capabilities. The collaboration between Sonar and Wiz exemplifies how integrated solutions can provide end-to-end protection—from code generation and development-time scanning through runtime monitoring and threat response. By combining visibility across the entire stack with AI-assisted analysis, defenders can outpace attackers and maintain control over increasingly complex, rapidly evolving environments.

Key Takeaways

  • Visibility is foundational: Organizations must establish comprehensive understanding of AI services, agents, and tools operating in their environments before they can secure them effectively
  • Security is a team sport: Every stakeholder—from developers to SOC teams to business users—must have access to integrated security platforms and collaborate on remediation
  • Shift left and build security in: Security reviews should occur early in development, and AI coding tools must be designed to generate secure code by default
  • Defenders have the advantage: Organizations that strategically deploy AI-powered security tools gain superior visibility and response capabilities compared to attackers
  • Velocity requires efficiency: Security teams must work smarter, not harder, through automation and integrated tools to keep pace with accelerated code deployment