TechReaderDaily.com

After 20 Years, SAST-DAST Gap Closes, But AI May Render the Distinction Obsolete

Invicti's runtime-to-source correlation engine and free LLM-based scanners from Anthropic and OpenAI are reshaping application security faster than the industry's terminology can keep up.

[Image: A web application security testing dashboard showing vulnerability scan results across an environment overview. Credit: securityboulevard.com]
In this article
  1. The schism that correlation was supposed to fix
  2. The AI entrants that skipped the debate

On April 9, 2026, Invicti shipped a capability the application security industry has been chasing for two decades: a production-grade DAST-to-SAST correlation engine that maps runtime vulnerabilities back to the specific lines of source code that produced them. The press release, carried by PRNewswire, described it as a way to help DevOps teams 'fix verified runtime risks at pipeline speed.' It is the kind of phrase that sounds obvious in retrospect. It was not obvious before.

Static application security testing (SAST) and dynamic application security testing (DAST) have coexisted as uneasy neighbours for most of their history. SAST scans source code before deployment, looking for patterns that match known vulnerability signatures. DAST probes a running application from the outside, the way an attacker would, sending malicious inputs and observing responses. The two tools produce separate reports, use separate taxonomies, and are often owned by separate teams. The gap between them has been a source of friction, false positives, and triage overhead since the first scanner shipped.

Invicti's announcement was not the only signal in the first half of 2026 that the old divide is collapsing. On February 19, Anthropic released Claude Code Security, a free tool that uses large language model reasoning to scan codebases for vulnerabilities without relying on the pattern-matching rule sets that have defined SAST for decades. OpenAI followed on March 6 with Codex Security, its own entry into the same space. VentureBeat reported that both scanners 'use LLM reasoning instead of traditional pattern matching to identify vulnerabilities,' a shift that fundamentally rewrites what a SAST tool is expected to do.

These three events, taken together, mark an inflection point for application security testing. The question is no longer whether SAST and DAST can be correlated. It is whether the categories themselves, static and dynamic, inside and outside, pre-deployment and runtime, are still the right organising principle for the work. The taxonomy held for twenty years. It may not hold for another five.

The schism that correlation was supposed to fix

SAST tools work early in the development cycle. A developer commits code. The scanner runs, either in the IDE or in the CI/CD pipeline, and flags potential vulnerabilities: SQL injection, cross-site scripting, hardcoded credentials, unsafe deserialisation. The strength of SAST is speed and proximity to the developer. Its weakness is that it sees the code but not the execution context. A SAST scanner cannot know whether a flagged code path is actually reachable, whether a dependency resolves to a vulnerable version at runtime, or whether a configuration setting neutralises the theoretical risk.

DAST tools operate at the opposite end of the lifecycle. They run against a deployed application, sending requests and analysing responses. They find vulnerabilities that are actually exploitable: an endpoint that leaks user data, a misconfigured CORS header, an injection point that returns a database error. The strength of DAST is that it finds real, reachable problems. Its weakness is that it tells a security team where the wound is, on which endpoint, with which parameter, but not where in the codebase the sutures need to go.
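
That outside-in workflow can be sketched in a few lines. This is a toy probe, not a real DAST crawler: the payload, the error signatures, and the endpoint handling are illustrative assumptions.

```python
import urllib.parse
import urllib.request

# Illustrative probe and database error strings; a real DAST engine
# ships thousands of payloads and response signatures.
SQLI_PROBE = "' OR '1'='1"
ERROR_SIGNATURES = ("SQL syntax", "ORA-01756", "unterminated quoted string")

def looks_injectable(body: str) -> bool:
    """True if the response body contains a database error signature."""
    return any(sig in body for sig in ERROR_SIGNATURES)

def probe_endpoint(base_url: str, param: str) -> dict:
    """Send one injection probe and classify the response -- the
    attacker's-eye view: an endpoint and a parameter, but no file
    path and no line number."""
    query = urllib.parse.urlencode({param: SQLI_PROBE})
    with urllib.request.urlopen(f"{base_url}?{query}", timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    return {"endpoint": base_url, "parameter": param,
            "vulnerable": looks_injectable(body)}
```

Note what the output lacks: any pointer into the codebase. That missing pointer is the correlation problem.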

For years, the industry described the resulting workflow as 'correlation,' a word that implied more precision than most tools delivered. A security engineer would receive a DAST report listing hundreds of findings, then manually cross-reference them against a SAST report listing hundreds more. False positives in each tool compounded. True positives went unaddressed because the translation cost was too high. A March 2026 webinar hosted by GovInfoSecurity titled 'The Brass Ring of AppSec: Is AI Finally Making DAST to SAST Correlation Possible?' captured the industry's long-standing frustration with the problem. The title itself treats correlation as the prize that has remained out of reach.

Invicti's new capability addresses the problem mechanically. When the company's DAST scanner identifies a runtime vulnerability, it traces the finding back through the application's request routing and maps it to the responsible code location. The output is not two reports sitting side by side. It is a single finding with a line number, a file path, and a remediation instruction. The company had already established a reputation on the DAST side; in March 2026, independent testing firm Miercom named Invicti the top performer in DAST benchmark tests, finding it delivered 'the most complete vulnerability detection across modern application environments,' according to a separate press release.
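
Invicti has not published the engine's internals, so the following is only a schematic of the correlation idea, with a hypothetical hand-written route map standing in for the request-routing trace a real product would derive at runtime.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RuntimeFinding:
    endpoint: str
    parameter: str
    vuln_class: str

# Hypothetical mapping from (endpoint, parameter) to source location.
# A production engine builds this from framework routing metadata or
# runtime tracing, not a static table.
ROUTE_MAP = {
    ("/api/users", "id"): ("app/handlers/users.py", 42, "get_user"),
}

def correlate(finding: RuntimeFinding):
    """Turn a DAST finding into a single actionable record with a
    file path and line number, instead of two disjoint reports."""
    location = ROUTE_MAP.get((finding.endpoint, finding.parameter))
    if location is None:
        return None  # uncorrelated finding: falls back to manual triage
    path, line, handler = location
    return {"vuln_class": finding.vuln_class, "file": path,
            "line": line, "handler": handler}
```

The point of the sketch is the shape of the output: one record that a developer can act on, rather than two reports a security engineer must cross-reference by hand.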

The AI entrants that skipped the debate

While Invicti was building bridges between SAST and DAST, Anthropic and OpenAI were shipping tools that treat the entire SAST category as a legacy architecture problem. Traditional SAST scanners use rule sets and pattern matching: they look for calls to eval() in JavaScript, for string concatenation in SQL queries, for imports of deprecated cryptographic libraries. These rules are maintained by security researchers and must be updated as frameworks evolve. The false positive rate is high. InfoWorld reported in November 2025 that a research team had built a 'SAST-LLM mashup' that 'slashed false positives by 91% compared to a widely used standalone SAST tool,' a finding that suggested the rule-based approach was approaching its ceiling.
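
The rule-based approach reduces, at its core, to pattern matching over source text. A minimal caricature follows; the two rules are made up for illustration and bear no resemblance to a production rule set.

```python
import re

# Two toy rules of the kind traditional SAST engines maintain by hand.
RULES = (
    ("js-eval", re.compile(r"\beval\s*\(")),
    ("sql-concat", re.compile(r"execute\s*\(\s*[\"'].*[\"']\s*\+")),
)

def scan_source(source: str) -> list:
    """Flag every line matching a rule; there is no notion of
    reachability, data flow, or context -- hence the high
    false-positive rate."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule_id, pattern in RULES:
            if pattern.search(line):
                findings.append((rule_id, lineno))
    return findings
```

Every rule here had to be written, and must be rewritten, by a human as frameworks evolve. That maintenance burden is the ceiling the InfoWorld finding points at.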

Claude Code Security and Codex Security bypass the rule set entirely. They read source code the way a human reviewer would, using reasoning to identify patterns that are suspicious not because they match a known signature but because they violate secure coding principles in context. The tools are free, which changes the economics of adoption. A development team that could not justify a six-figure SAST licence can now run Claude Code Security against a repository in minutes. The scanners are also fast, operating at a speed that makes them feasible in CI/CD pipelines rather than only in scheduled nightly scans.
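
Neither vendor documents its interface in the sources above, so the sketch below abstracts the model behind a plain callable; the prompt shape and the JSON contract are assumptions for illustration, not either product's API.

```python
import json

def llm_scan(source: str, model) -> list:
    """Scan with reasoning rather than rules: hand the model the code
    plus a contract for structured findings. `model` is any callable
    mapping a prompt string to a completion string (a hypothetical
    interface)."""
    prompt = (
        "Review this code for security vulnerabilities. Reply with a "
        "JSON list of objects with keys 'line', 'issue', 'severity'.\n\n"
        + source
    )
    return json.loads(model(prompt))
```

Because the judgment lives in the model, there is no rule file to maintain as frameworks evolve; the trade-off is that findings must still be verified against the running application, which is where DAST re-enters.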

What neither Claude Code Security nor Codex Security does is runtime. They are SAST tools, not SAST-plus-something-else tools. They cannot tell a developer whether a flagged vulnerability is actually exploitable in the deployed environment. That remains the DAST domain, and it is why the correlation problem is not automatically solved by better SAST. A better SAST scanner reduces noise. It does not eliminate the need to verify findings against the running application.

The runtime piece of the triad has been growing quietly in parallel. Dynamic testing has expanded beyond traditional DAST crawlers into interactive application security testing (IAST), which instruments the application at runtime to observe data flow and detect vulnerabilities from the inside. Runtime application self-protection (RASP) takes the idea further, embedding security checks that can block attacks in production. The GovInfoSecurity webinar description listed eBPF security among its keywords, reflecting growing interest in kernel-level instrumentation that can observe application behaviour without modifying the application itself.
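
The inside-out observation that IAST performs can be caricatured with a toy taint marker: untrusted input is tagged at the application boundary and checked again at a sensitive sink. Real agents instrument the runtime (or, with eBPF, the kernel) rather than relying on a cooperative type, so treat this only as a sketch of the idea.

```python
class Tainted(str):
    """Marker for values derived from untrusted input (toy taint tag)."""

def from_request(value: str) -> "Tainted":
    """Boundary: everything arriving from the network is tainted."""
    return Tainted(value)

def sql_sink(query) -> dict:
    """Sink check, as an in-process IAST probe might perform it; a
    RASP agent would go one step further and block the call."""
    if isinstance(query, Tainted):
        return {"allowed": False, "reason": "untrusted data reached SQL sink"}
    return {"allowed": True, "reason": None}
```

Unlike the outside-in DAST probe, this check sees the data flow directly, which is why IAST findings arrive with code-level context built in.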

Sunil Gentyala, writing in CSO Online in January 2026, argued that the future of application security is 'posture, provenance and proof, not alerts.' The article noted that SAST and software composition analysis (SCA) alone are insufficient and that security teams relying on them are 'already behind.' The observation captures the direction the triad is moving: away from individual scanner outputs and toward continuous, evidence-based assessments that span the entire software lifecycle.

Kiran Elengickal, writing in a Forbes Technology Council piece in October 2025, made the case that DAST is 'non-negotiable' for cloud-native and AI-generated applications because static analysis alone cannot capture the behaviour of containerised workloads, orchestrated services, or code produced by large language models. Elengickal described the attacker's perspective: an adversary interacts only with what is live, probing and pushing until something breaks. If a security programme cannot see what the attacker sees, it has a blind spot no amount of SAST coverage can close.

The Forbes piece highlighted an underappreciated dimension of the AI-generated code problem. When a developer uses an LLM to produce code, the code may be syntactically correct and functionally valid while containing subtle vulnerabilities that a rule-based SAST scanner was never designed to detect. The training data that produced the code may include insecure patterns that are statistically common but not explicitly flagged by any existing CWE entry. DAST catches these flaws at runtime because it does not care how the code was written, only how the application behaves when prodded.

What has not yet arrived is a single tool that unifies all three modes, SAST, DAST, and runtime instrumentation, into one coherent output. Invicti's correlation engine links two of the three. The LLM-based scanners improve SAST without touching DAST. IAST and RASP operate at runtime but produce their own findings in their own formats. The industry is converging on the problem from multiple directions, but the convergence is not yet complete.

The systemic version of this story is not about any single vendor. It is about the organisational cost of maintaining separate testing pipelines that speak different languages. A 2025 study cited by InfoWorld found that large enterprises run an average of four to seven distinct application security testing tools across the development lifecycle, each generating its own set of findings with its own severity taxonomy. The engineers who are asked to fix the vulnerabilities are often not the engineers who configured the scanners, and they are almost never the engineers who triaged the results. The workflow is fragmented by design, and correlation is an attempt to bolt coherence onto a fragmented system rather than to redesign the system.

What to watch in the second half of 2026 is whether the LLM-based SAST entrants extend toward runtime. Anthropic has not announced plans to integrate Claude Code Security with a DAST capability, but the company's broader trajectory suggests it understands the value of end-to-end coverage. OpenAI's Codex Security is similarly SAST-only for now, but the company's platform ambitions make an expansion plausible. The more likely near-term development is that existing vendors, Invicti among them, integrate LLM reasoning into their own SAST engines while continuing to build out the correlation layer. The taxonomy may blur even if the tools remain separate.

The real test of the triad is not whether the dashboards converge. It is whether the mean time to remediate a verified vulnerability drops. That number, measured in hours or days rather than weeks, is the only metric that matters to the teams who will inherit whatever the vendors ship next. It has not dropped enough yet. The tools are getting better. The question is whether the workflows will follow.
