Invicti Launches DAST-to-SAST Correlation as AI Reshapes AppSec
With Invicti's April 2026 release and free LLM code scanners from Anthropic and OpenAI, a decade of frustration over disjointed application security testing is giving way to rapid integration, making runtime-to-source correlation a likely industry standard.
On April 9, 2026, application security vendor Invicti issued a press release announcing DAST-to-SAST correlation, a feature the industry has chased for more than a decade. The capability would let development teams take a vulnerability confirmed at runtime and trace it directly to the line of source code that produced it. The announcement landed less than two weeks after GovInfoSecurity convened a webinar titled "The Brass Ring of AppSec: Is AI Finally Making DAST to SAST Correlation Possible?" and mere weeks after two of the largest AI labs shipped free code-security scanners built on large language models. In the span of approximately 60 days, a structural gap in application security testing went from enduring annoyance to addressable problem.
The gap has been simple to describe and stubborn to close. Static application security testing, or SAST, examines source code for known vulnerability patterns before the code compiles. Dynamic application security testing, or DAST, probes a running application from the outside, sending malicious inputs and observing responses, much as an attacker would. Each method finds vulnerabilities the other misses. SAST catches injection flaws and hardcoded secrets early but generates high volumes of false positives. DAST confirms real exploitability at runtime but offers no pointer back to the responsible code. For years, security teams have operated these tools in parallel, manually triaging two separate queues of findings that describe the same application from different angles.
The operational cost of that parallelism is measurable. A 2025 survey by the SANS Institute found that the average application security team spends roughly 30 percent of its analyst hours deduplicating and cross-referencing findings across tools. When a DAST scanner flags a cross-site scripting vulnerability on a login endpoint, the developer assigned to fix it must search the codebase for the output-encoding flaw that produced it. If that search takes hours, and it often does, the runtime finding becomes a sourcing problem before it ever becomes a remediation ticket. The SAST scanner may have already flagged the same weakness, but without a common identifier or a correlated data model, the two findings sit in separate dashboards, often assigned to different teams.
The GovInfoSecurity webinar in late March 2026 captured the mood precisely by calling correlation "the brass ring of AppSec," a phrase that acknowledges both the ambition and the frustration. As the webinar's description noted, security teams have for years relied on both SAST and DAST to identify vulnerabilities across the software development lifecycle, but the tools have rarely spoken the same language. The promise of correlation is not merely convenience. It is the difference between patching a symptom and fixing a root cause.
Then the AI labs arrived. On March 10, 2026, VentureBeat reported that OpenAI had launched Codex Security on March 6, entering a market Anthropic had disrupted 14 days earlier with Claude Code Security. Both scanners use large language model reasoning instead of the pattern-matching engines that have defined SAST for two decades. Anthropic's tool, released around February 20, and OpenAI's fast-follow both operate as free additions to existing coding-assistant products, a pricing move that recalibrated expectations across the application security market.
Traditional SAST works by matching code against a database of rules, signatures, and control-flow patterns. It is fast, deterministic, and wrong often enough that developers learn to ignore it. An LLM-based scanner approaches the same codebase differently: it reads the code for intent, traces data flow through functions, and asks whether a particular pattern is exploitable in context. This is reasoning, not matching. The distinction matters because it changes the false-positive economics. A pattern-matcher flags every eval() call. A reasoning scanner can ask whether user input ever reaches that call, and if not, it stays quiet.
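The difference is easiest to see in miniature. The toy sketch below contrasts a naive signature rule with a crude taint check, using Python's own ast module. It illustrates the matching-versus-reasoning distinction only; it is not how Claude Code Security, Codex Security, or any production SAST engine is implemented.

```python
# Toy contrast: signature matching vs. a minimal context-aware check.
# Deliberately simplified; real engines handle far more than direct
# assignments from input().
import ast

SOURCE = '''
config_expr = "1 + 1"          # constant, never user-controlled
result = eval(config_expr)     # safe in this context

user_expr = input("expr: ")    # user-controlled
danger = eval(user_expr)       # actually exploitable
'''

def pattern_matcher(tree: ast.AST) -> list[int]:
    """Flag every eval() call, the way a naive signature rule would."""
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "eval"
    ]

def taint_aware(tree: ast.AST) -> list[int]:
    """Flag eval() only when its argument was assigned from input()."""
    tainted: set[str] = set()
    # Pass 1: mark variables assigned directly from input() as tainted.
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.Call):
            call = node.value
            if isinstance(call.func, ast.Name) and call.func.id == "input":
                tainted.update(
                    t.id for t in node.targets if isinstance(t, ast.Name)
                )
    # Pass 2: flag eval() calls whose argument is a tainted variable.
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "eval"
        and any(
            isinstance(a, ast.Name) and a.id in tainted for a in node.args
        )
    ]

tree = ast.parse(SOURCE)
print("pattern matcher flags lines:", pattern_matcher(tree))  # both evals
print("taint-aware check flags lines:", taint_aware(tree))    # only the second
```

Even this crude two-pass check silences the false positive; a model that reads the code for intent performs a far richer version of the same move.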
Invicti's approach to correlation comes from the other direction. Rather than improving SAST's reasoning, it connects DAST's runtime-confirmed findings to SAST's code-level analysis after the fact. The company's announcement described the capability as helping "DevOps fix verified runtime risks at pipeline speed," a formulation that emphasizes velocity. When a DAST scanner confirms a SQL injection vulnerability on a production endpoint, the correlation engine queries the SAST results database for the matching source pattern, surfaces the specific file and line, and inserts the finding into the developer's existing pull-request workflow.
The technical challenge is nontrivial. DAST findings reference URLs, parameters, and payloads. SAST findings reference files, line numbers, and abstract syntax tree nodes. Bridging the two requires a mapping layer that understands the application's routing table, its data-access patterns, and its deployment topology. Invicti controls both its DAST engine and its SAST engine, which gives it an integration advantage: the mapping can be built and maintained by a single engineering organization rather than negotiated across vendor boundaries. Competing products that source SAST and DAST from different vendors must solve an interoperability problem that has no standard.
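A small sketch makes the mapping problem concrete. Everything in it is hypothetical: it assumes the correlator can read the application's routing table and that both engines tag findings with a shared CWE identifier, and the field names are invented rather than drawn from Invicti's actual schema.

```python
# Hypothetical data model for DAST-to-SAST correlation. Assumes a shared
# CWE tag on both sides and a known mapping from endpoints to source files.
from dataclasses import dataclass

@dataclass
class DastFinding:
    url_path: str        # runtime view: the endpoint that proved exploitable
    parameter: str
    cwe_id: int          # e.g. 89 = SQL injection
    payload: str         # proof-of-concept input

@dataclass
class SastFinding:
    file: str            # source view: where the weak pattern lives
    line: int
    cwe_id: int

# Routing table: which source files back which endpoints. In practice this
# must be derived from the framework's router and the deployment topology.
ROUTES = {
    "/login": ["auth/login.py", "auth/session.py"],
    "/search": ["search/query.py"],
}

def correlate(dast: DastFinding,
              sast_results: list[SastFinding]) -> list[SastFinding]:
    """Return SAST findings that plausibly explain a runtime-confirmed flaw:
    same weakness class, in a file that serves the exploited endpoint."""
    candidate_files = set(ROUTES.get(dast.url_path, []))
    return [
        s for s in sast_results
        if s.cwe_id == dast.cwe_id and s.file in candidate_files
    ]

runtime_hit = DastFinding("/login", "username", 89, "' OR 1=1 --")
static_queue = [
    SastFinding("auth/login.py", 57, 89),       # same CWE, same route: match
    SastFinding("billing/invoice.py", 12, 89),  # same CWE, wrong endpoint
    SastFinding("auth/login.py", 90, 79),       # same file, different weakness
]
print(correlate(runtime_hit, static_queue))
# -> [SastFinding(file='auth/login.py', line=57, cwe_id=89)]
```

The hard part is not the final join but populating ROUTES accurately for a real application, which is exactly where owning both engines pays off.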
The sequence of events is worth laying out precisely. On February 19, 2026, InfoWorld published an analysis by Jenn Gile examining what happens when AI is layered onto traditional SAST. Around February 20, Anthropic released Claude Code Security. On March 6, OpenAI released Codex Security. On March 10, VentureBeat connected the two launches into a market narrative. On March 27, the GovInfoSecurity webinar asked whether AI could finally close the correlation gap. On April 9, Invicti answered with a product. On April 30, Communications of the ACM published a piece arguing that "a 90-day testing cycle does not just fall short" of modern deployment velocity; "it runs straight into it."
The Systemic Version of a Single-Vendor Failure
Look past the product announcements and a deeper problem emerges. The SAST-DAST gap is not simply a missing feature in someone's platform. It is the visible symptom of an industry that built security testing tools in silos, sold them to different buyers, and never agreed on a common data model for vulnerability findings. The Open Web Application Security Project publishes the Application Security Verification Standard, and MITRE maintains the Common Weakness Enumeration, but neither standard fully addresses the runtime-to-source traceability problem. A CWE identifier can tell you the category of a weakness; it cannot tell you which line of code in which repository produced it.
The false-positive problem compounds at scale. A large financial services firm running SAST across 2,000 repositories might generate 50,000 findings per scan cycle. Perhaps 5 percent are actionable. The triage burden falls on application security engineers who are outnumbered by developers roughly 100 to one, according to data from the DevOps Research and Assessment group. Every false positive that a developer investigates is time not spent on a real vulnerability. The structural incentive is to tune SAST thresholds upward until the noise becomes tolerable, which inevitably means missing real findings.
Jenn Gile's InfoWorld analysis captured the exhaustion that accompanied earlier SAST generations. "At that time, these two approaches were really the only options," the piece notes, referring to the first two waves of SAST technology. "And to be honest, neither option was all that great." The third wave, Gile argues, is defined by AI that does not merely match patterns but understands code structure. The article surveys several vendors moving in this direction, noting that the most credible approaches combine LLM reasoning with traditional control-flow analysis rather than replacing one with the other.
The ACM piece from April 30 sharpens the argument by attacking the testing cadence itself. A 90-day cycle might have been adequate when applications deployed quarterly, but modern continuous-deployment pipelines ship code dozens of times per day. The implication is that both SAST and DAST must move from scheduled scans to continuous, event-driven execution, and that correlation between the two must happen automatically and in near-real-time. The question is whether the tools, however intelligent, can keep pace with the pipeline.
What runtime verification actually requires is a feedback loop that starts the moment a DAST scan completes. The scanner identifies a confirmed exploitable condition. A correlation engine queries the SAST database. The matched finding, now enriched with both runtime evidence and source-code location, is routed to the developer who last touched the relevant file. The developer sees a ticket that says not "possible XSS on /login" but "the render() call in auth/login.tsx:142 passes unsanitized user input to the DOM, and here is the proof-of-concept payload that demonstrates exploitability." That ticket is fixable in minutes, not hours.
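Reduced to glue code, with the correlation step already done, the loop might look like the sketch below. The owner lookup stands in for a git-blame-style query and the ticket dictionary stands in for a Jira or GitHub issue; both are assumptions for illustration, not a description of any vendor's implementation.

```python
# Hypothetical post-scan feedback loop: correlated finding in, routed
# ticket out. Owner lookup and ticket format are illustrative stand-ins.
from dataclasses import dataclass

@dataclass
class CorrelatedFinding:
    cwe_id: int
    url_path: str       # runtime evidence from the DAST side
    payload: str
    file: str           # source location from the matched SAST finding
    line: int

# Stand-in for "who last touched this file" (a git blame query in practice).
FILE_OWNERS = {"auth/login.tsx": "dev-alice"}

def build_ticket(f: CorrelatedFinding) -> dict:
    """Turn a correlated finding into an actionable, pre-routed ticket."""
    return {
        "assignee": FILE_OWNERS.get(f.file, "appsec-triage"),
        "title": f"CWE-{f.cwe_id} at {f.file}:{f.line}",
        "body": (
            f"Runtime-confirmed on {f.url_path}.\n"
            f"Proof-of-concept payload: {f.payload}\n"
            f"Fix the sink at {f.file}:{f.line}."
        ),
    }

finding = CorrelatedFinding(
    cwe_id=79, url_path="/login", payload="<script>alert(1)</script>",
    file="auth/login.tsx", line=142,
)
print(build_ticket(finding))
```

Nothing in that loop is exotic; what has been missing is the correlated finding itself, with both the runtime evidence and the source location attached.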
What We Don't Know Yet
For all the progress compressed into early 2026, several questions remain open. The first is whether LLM-based SAST scanners from Anthropic and OpenAI will integrate with existing DAST tools or remain standalone code-review assistants. Neither company has announced a DAST product or a correlation partnership. The second is whether correlated findings actually reduce mean time to remediation in production environments, or whether they simply shift the bottleneck from triage to deployment. The third is what happens to the dozens of standalone SAST and DAST vendors whose products lack a correlation story. Consolidation pressure in the application security market was already high; correlated pipelines may accelerate it.
The 60-day period from late February to late April 2026 does not resolve the SAST-DAST gap. It does something more interesting: it changes the question. Before February, the industry asked whether correlation was technically feasible. After April, the question is whether correlation will become table stakes for any application security platform, and how quickly the answer filters down to the teams actually responsible for shipping secure code. The brass ring is still in the air. But for the first time, multiple credible hands are reaching for it.
What the current crop of announcements does not address is the class of vulnerabilities that neither SAST nor DAST can see. Business logic flaws, authorization bypasses that require multi-step stateful exploitation, and supply-chain vulnerabilities introduced through third-party dependencies all sit in the blind spot between static analysis and dynamic probing. Correlation may reduce the noise, but it does not expand the signal bandwidth. That remains the next frontier.
The checkpoint to watch is the second half of 2026. Invicti's correlation capability will be in general availability, and early adopters will publish metrics on whether correlated findings actually close faster. Anthropic and OpenAI will either extend their code-security tools toward runtime or stay focused on the development window. The security teams who have spent years juggling two dashboards will begin to demand, with the leverage of real alternatives, that the industry's oldest testing gap finally close.