AI has found 50 bugs in cURL. "AI-native SASTs work well" (etn.se)

🤖 AI Summary
An Australian security researcher, Joshua Rogers, used modern LLM-based SAST (static application security testing) tools to submit 50 valid bug reports that led to fixes in libcurl, a widely used, mature open-source library. curl maintainer Daniel Stenberg, who had publicly complained about low-quality AI-generated bug noise, confirmed he was "overwhelmed by the quality" of these findings. None of the 50 curl issues were critical, but Rogers has found critical flaws elsewhere, and his work shows that generative-AI scanners can surface real defects missed by long-established conventional analyzers (clang-tidy, scan-build, CodeSonar, Coverity).

Technically, the breakthrough comes from combining LLMs' joint understanding of natural language and code with careful human-in-the-loop triage: Rogers runs multiple AI SAST tools (he currently favors ZeroPath and cites Google's Big Sleep work as an influence), analyzes the results from different angles, and manually vets or augments findings with other models. Unlike classic syntactic scanners, generative models can spot semantic mismatches between intent, comments, protocols, and implementation, even in legacy, seldom-used code (one Kerberos path was simply retired).

The implication for the AI/ML and security communities is clear: "AI-native" SAST can reveal new classes of bugs and deserves integration into vulnerability workflows, but only with rigorous triage to avoid hallucination-driven noise.
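The article does not describe the mechanics of Rogers' pipeline, but the triage idea it sketches (run several scanners, trust cross-tool agreement, send everything else to a human) can be illustrated with a minimal, hypothetical Python sketch; the `Finding` type, tool names, and the two-tool threshold are assumptions for illustration, not Rogers' actual tooling:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    tool: str   # which AI SAST scanner reported it (hypothetical names below)
    file: str
    line: int
    claim: str  # the scanner's description of the suspected defect

def triage(findings, min_tools=2):
    """Group findings by source location.

    Findings corroborated by at least `min_tools` independent scanners go
    straight to the report queue; single-tool findings go to manual review,
    since they are the most likely to be hallucination-driven noise.
    """
    by_loc = {}
    for f in findings:
        by_loc.setdefault((f.file, f.line), []).append(f)

    corroborated, needs_review = [], []
    for group in by_loc.values():
        distinct_tools = {f.tool for f in group}
        target = corroborated if len(distinct_tools) >= min_tools else needs_review
        target.append(group[0])
    return corroborated, needs_review
```

A real workflow would key findings on more than file and line (e.g. normalized defect class) and would still hand every corroborated item to a human before filing, which is the step the article credits for the quality of the 50 reports.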