4 min read | Saved February 14, 2026
Do you care about this?
This article discusses Datadog's new feature that uses AI to classify vulnerabilities identified by Static Application Security Testing (SAST) as true or false positives. The aim is to streamline the review process, allowing teams to focus on genuine security risks while filtering out distractions.
If you do, here's more
Static application security testing (SAST) tools play a vital role in identifying vulnerabilities in code, but they often generate a significant number of false positives. This can lead to wasted resources as teams sift through alerts that aren't genuine threats. Datadog's new feature for false positive filtering, powered by Bits AI, aims to tackle this issue by classifying vulnerabilities as likely true or false positives. This classification helps teams prioritize their efforts, allowing them to focus on real vulnerabilities that need attention.
The false positive filtering feature integrates seamlessly into Datadog's Static Code Analysis, providing AI-driven context to findings. When a potential vulnerability is detected, Bits AI evaluates the surrounding code to assess its legitimacy and offers explanations for its classifications. This immediate context assists engineers in making informed decisions, while a simple filter allows teams to eliminate likely false positives from their review process. Confidence levels assigned to each finding help in prioritizing which alerts to address first.
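The triage workflow described above, filter out likely false positives and rank the rest by confidence, can be sketched in a few lines. This is an illustrative example only, not Datadog's actual API: the `Finding` fields, classification labels, and `triage` function are hypothetical names chosen for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str
    file: str
    classification: str  # hypothetical label: "likely_true_positive" or "likely_false_positive"
    confidence: float    # classifier confidence in [0.0, 1.0]

def triage(findings, min_confidence=0.7):
    """Drop confidently-classified false positives, then sort the
    remaining findings so the highest-confidence alerts come first."""
    kept = [
        f for f in findings
        if not (f.classification == "likely_false_positive"
                and f.confidence >= min_confidence)
    ]
    return sorted(kept, key=lambda f: f.confidence, reverse=True)

findings = [
    Finding("sql-injection", "db.py", "likely_true_positive", 0.92),
    Finding("xss", "views.py", "likely_false_positive", 0.88),
    Finding("path-traversal", "io.py", "likely_true_positive", 0.61),
]

for f in triage(findings):
    print(f"{f.rule_id} ({f.file}): confidence {f.confidence:.2f}")
```

The `min_confidence` threshold mirrors the idea in the article that confidence levels drive prioritization: low-confidence "false positive" verdicts are kept in the queue rather than silently discarded.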
Through rigorous testing using the OWASP Benchmark, Datadog fine-tuned the Bits AI model to improve its accuracy in distinguishing between true and false positives. The model's effectiveness relies on the underlying static analyzer's quality. While SAST tools err on the side of caution by flagging potential vulnerabilities, the introduction of large language models (LLMs) brings a new level of contextual reasoning that can improve detection rates. This feature represents a step towards enhancing code security by reducing noise and enabling teams to concentrate on significant risks.
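To make the evaluation step concrete: the OWASP Benchmark scores a tool by its true positive rate minus its false positive rate, so a classifier that suppresses false positives without dropping real findings raises the score. The sketch below computes that metric from labeled benchmark cases; the data and function name are illustrative, not taken from Datadog's evaluation.

```python
def benchmark_score(cases):
    """cases: list of (is_real_vulnerability, flagged_by_tool) pairs.
    Returns (tpr, fpr, score), where score = TPR - FPR, the metric
    the OWASP Benchmark uses to rank tools."""
    tp = sum(1 for real, flagged in cases if real and flagged)
    fn = sum(1 for real, flagged in cases if real and not flagged)
    fp = sum(1 for real, flagged in cases if not real and flagged)
    tn = sum(1 for real, flagged in cases if not real and not flagged)
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr, fpr, tpr - fpr

# Toy labeled data: 3 real vulnerabilities, 4 benign cases.
cases = [
    (True, True), (True, True), (True, False),   # 2 caught, 1 missed
    (False, True), (False, False),               # 1 false alarm
    (False, False), (False, False),
]
tpr, fpr, score = benchmark_score(cases)
print(f"TPR={tpr:.2f} FPR={fpr:.2f} score={score:.2f}")
```

An LLM-based filter that correctly clears the one false alarm here would cut FPR to zero and lift the score, which is exactly the improvement the article describes measuring.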