Hacking with AI SASTs: An overview of ‘AI Security Engineers’ / ‘LLM Security Scanners’ for Penetration Testers and Security Teams (joshua.hu)

🤖 AI Summary
An independent tester evaluated a new class of AI-native SASTs: tools that ingest source code and use LLM-driven analysis to find vulnerabilities, malicious code, and logic bugs. After trying several commercial offerings, the author concludes these tools already work remarkably well: they surface real, often non-obvious vulnerabilities (including business-logic and multi-function flow issues) in minutes, show low false-positive rates, and are especially good at finding issues human reviewers miss. The top performers in the review were ZeroPath, Corgea, and Almanax. The systems are nondeterministic (different runs can find different issues), which the author frames as a strength, akin to having many creative pentesters running varied attacks.

Technically, these platforms support multiple ingestion modes (repo integrations, uploads), CI/CD and PR/branch scans, taint/flow analysis, dependency/CVE checks, false-positive filtering, and auto-patch or PR generation. Some produce audit PDFs and org/SOC 2-style reports; feature parity and repo handling vary (e.g., multi-app detection, ignored directories). Malicious-code detection remains hard in complex dependency graphs, and auto-fix quality is inconsistent: useful for triage but not always production-ready.

Implication: AI SASTs are maturing into must-have AppSec tooling for pentesters and engineering teams, offering fast, inexpensive discovery today, with caveats around consistency, vendor discoverability, and patch trustworthiness.
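To make the taint/flow point concrete, here is a minimal, hypothetical sketch (all names invented; not taken from the article or any of the reviewed tools) of the kind of multi-function data flow such scanners trace: attacker-controlled input passes through a helper that looks like sanitization but preserves the taint, then reaches a SQL sink one call away.

```python
# Hypothetical example of a cross-function taint flow that a per-function
# pattern matcher tends to miss but a flow-aware AI SAST can connect.
import sqlite3

def normalize_username(raw: str) -> str:
    # Trims and lowercases -- easy to mistake for sanitization, but the
    # value is still attacker-controlled (the taint is preserved).
    return raw.strip().lower()

def find_user(conn: sqlite3.Connection, raw_username: str):
    username = normalize_username(raw_username)
    # SINK: tainted data interpolated into SQL. A flow-aware scanner links
    # this back to the source across both functions. The safe version is a
    # parameterized query: conn.execute("... WHERE name = ?", (username,))
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    # SOURCE: attacker-controlled input. The injected predicate makes the
    # WHERE clause always true, dumping every row despite "normalization".
    print(find_user(conn, "' OR '1'='1"))
```

Neither function looks obviously wrong in isolation; it is the source-to-sink path across the call boundary that reveals the bug, which is exactly the cross-function reasoning the author credits these tools with.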