Google declares AI bug hunting season open, sets a $30K max reward (www.theregister.com)

🤖 AI Summary
Google launched a standalone AI Vulnerability Reward Program that pays up to $30,000 per qualifying report (a base top-tier payout of $20,000 plus up to $10,000 in quality-based multipliers) to incentivize researchers to find security flaws in its AI systems. The move — coming two years after Google first folded AI products into its general VRP — clarifies scope, rewards, and product tiers (flagship: Search, Gemini Apps, Workspace core; standard: AI Studio, Jules, NotebookLM/AppSheet; other: remaining integrations), signaling a sharper focus on concrete security risks in deployed AI rather than content-moderation mistakes. Google lists in-scope issues (ranked by severity) that can earn the biggest bounties: rogue actions (e.g., indirect prompt-injection causing account/data state changes like unlocking a smart lock), sensitive-data exfiltration (PII leaks), phishing enablement via UI/HTML injection, model theft (exfiltrating model parameters), context-manipulation across accounts, access-control bypasses, unauthorized product usage, and certain cross-user DoS scenarios (volumetric DoS and self-account DoS are disallowed). Notably, direct prompt injection, jailbreaks, and alignment/content issues are out-of-scope for bounty payouts — Google says these require trend analysis and long-term fixes rather than one-off rewards. The program should steer red teams toward high-impact, reproducible attack classes (especially indirect prompt injection vectors and cross-account context exploits), improve patching incentives, and clarify what kinds of AI risks companies will pay researchers to prioritize.
Loading comments...
loading comments...