Links
OpenAI warns that its upcoming AI models may pose a "high" cybersecurity risk due to their enhanced capabilities. The company reports that these models could enable more people to execute cyberattacks, especially with their ability to operate autonomously for longer periods. OpenAI is increasing its efforts to address these threats through collaboration and new tools.
Researchers at Microsoft discovered a backdoor named SesameOp that misuses the OpenAI Assistants API for command-and-control communications. This malware employs sophisticated techniques to maintain stealth and persistence while executing commands within compromised systems. The findings highlight how threat actors adapt to new technologies for malicious purposes.
Sam Altman is hiring a Head of Preparedness at OpenAI to address the increasing risks associated with advanced AI models. This role will focus on practical risk oversight, including threat modeling, cyber misuse, and mental health impacts, reflecting a shift towards prioritizing safety in AI development.
A vulnerability in OpenAI's Codex CLI (CVE-2025-61260) allowed attackers to execute commands by manipulating configuration files, exposing developers to serious risks such as remote access and supply chain attacks. A patch was released shortly after the issue was reported.
OpenAI has made its first investment in the cybersecurity sector, a strategic move to strengthen the security of AI technologies and safeguard user data against emerging cyber risks.
OpenAI has banned ChatGPT accounts that are linked to Russian and Chinese cyber operations. This decision aims to prevent the misuse of AI technologies for malicious activities and uphold security measures against cyber threats. The action reflects ongoing concerns over the exploitation of AI by state-sponsored actors.