Finding vulnerabilities using AI with Joshua Rogers (opensourcesecurity.io)

🤖 AI Summary
Security engineer Joshua Rogers ran a battery of AI-assisted static analysis (SAST-style) tools against curl and other open-source projects, triaged the outputs, and responsibly reported multiple bugs, with 22 fixes already landed by the curl maintainer and many more issues queued. Rogers got the best results from newer startups (notably ZeroPath), found mostly smaller defects plus a few potentially serious ones (including a Kerberos-authentication issue that had broken the feature in practice), and used a second AI (ChatGPT) to clarify and prioritize tool-generated findings. He ran the products via free trials, manually weeded out false positives, and collaborated with maintainers rather than dumping raw, noisy reports.

The episode underscores a practical reality: AI tools can meaningfully augment skilled humans in vulnerability discovery but won't replace expert judgment. Key implications for the AI/ML community include the importance of rigorous human-in-the-loop triage, responsible disclosure workflows for open source, and better discovery and benchmarking of AI security tools (search/SEO currently buries viable offerings). Technically, these tools can surface both security and functional defects across disparate code areas, reduce manual coverage gaps, and scale scouting of legacy code, provided teams budget for follow-up analysis to filter false positives and coordinate with maintainers to avoid overload.
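As a rough illustration of the human-in-the-loop triage loop described above (tool findings go to a second model for clarification and prioritization, and only hand-verified issues reach maintainers), here is a minimal Python sketch. The finding schema, the `ask_llm` helper, and the keyword heuristic are assumptions made for this example, not Rogers' actual pipeline or any vendor's API.

```python
"""Sketch of a human-in-the-loop triage pass over AI/SAST findings.

Assumptions (not from the episode): findings arrive as a JSON list with
'file', 'line', 'rule', and 'description' fields, and ask_llm() is a
hypothetical wrapper around whatever second model you use; here it is
stubbed with a keyword heuristic so the script runs standalone.
"""
import json
import sys
from dataclasses import dataclass


@dataclass
class Finding:
    file: str
    line: int
    rule: str
    description: str
    priority: int = 0      # filled in during triage
    rationale: str = ""    # one-line justification for the reviewer


def ask_llm(prompt: str) -> tuple[int, str]:
    """Hypothetical second-AI call: return (priority 1-5, rationale).

    A real implementation would send `prompt` to an LLM and parse the
    reply; this stub scores on keywords so the sketch is self-contained.
    """
    hot = ("overflow", "use-after-free", "auth", "kerberos", "injection")
    score = 5 if any(k in prompt.lower() for k in hot) else 2
    return score, "keyword-based stand-in for a model judgment"


def triage(findings: list[Finding]) -> list[Finding]:
    seen: set[tuple[str, int, str]] = set()
    kept: list[Finding] = []
    for f in findings:
        key = (f.file, f.line, f.rule)  # drop exact duplicates across tools
        if key in seen:
            continue
        seen.add(key)
        f.priority, f.rationale = ask_llm(
            f"Explain and rate 1-5 the security impact of {f.rule} "
            f"at {f.file}:{f.line}: {f.description}"
        )
        kept.append(f)
    # Highest-priority items go to a human reviewer first; nothing is
    # auto-reported -- maintainers only ever see hand-verified issues.
    return sorted(kept, key=lambda f: f.priority, reverse=True)


if __name__ == "__main__":
    with open(sys.argv[1]) as fh:
        raw = json.load(fh)
    for f in triage([Finding(**r) for r in raw]):
        print(f"[P{f.priority}] {f.file}:{f.line} {f.rule} -- {f.rationale}")
```

The design point is the ordering of steps: dedupe first, let the model only rank and explain, and keep a human between the ranked queue and any report sent upstream.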