New OpenAI models likely pose "high" cybersecurity risk, company says (www.axios.com)

🤖 AI Summary
OpenAI has warned, in a disclosure shared first with Axios, that its upcoming models may pose a "high" cybersecurity risk. The concern is that rapidly improving capabilities, especially the ability to operate autonomously for extended periods, could make it markedly easier for individuals to carry out cyberattacks. The trend is visible in benchmark results: GPT-5 scored 27% on a capture-the-flag exercise, while GPT-5.1-Codex-Max reached 76%, a steep jump in offensive-relevant capability.

In response, OpenAI says it will intensify its cybersecurity measures. It is forming the Frontier Risk Council to connect AI developers with cybersecurity experts and strengthen collaborative defenses, and it is testing a tool called Aardvark that is designed to find vulnerabilities in software products. As models get better at exploiting security flaws, the disclosure signals an urgent need for the industry to adapt and bolster its defenses, marking a critical juncture for both AI development and cybersecurity preparedness.