Do you care about this?
OpenAI warns that its upcoming AI models may reach "high" levels of cybersecurity capability, potentially enabling more people to execute cyberattacks, especially as the models become able to operate autonomously for longer periods. The company is stepping up its response through industry collaboration and new tools.
If you do, here's more
OpenAI is raising alarms about the cybersecurity risks posed by its upcoming AI models, warning that they could significantly increase the number of people capable of executing cyberattacks. According to a report shared with Axios, recent models have already shown marked improvement: GPT-5 scored 27% on a capture-the-flag exercise in August, while its successor, GPT-5.1-Codex-Max, achieved 76% last month. OpenAI expects this trend to continue and is preparing for models that could reach "high" levels of cybersecurity capability.
OpenAI is taking proactive steps to address these risks. The company is collaborating with industry peers through the Frontier Model Forum and plans to create a Frontier Risk Council, an advisory group meant to connect experienced cybersecurity professionals with OpenAI's teams to better tackle threats. It is also privately testing Aardvark, a tool designed to help developers identify security vulnerabilities in their products. OpenAI claims Aardvark has already detected critical flaws, underscoring the urgency of its efforts in the face of evolving cyber threats.