Saved February 14, 2026
Do you care about this?
Researchers discovered a vulnerability in ChatGPT, dubbed ZombieAgent, that allows attackers to exfiltrate user data directly from ChatGPT's servers. The exploit builds on a previous attack known as ShadowLeak and demonstrates the ongoing security challenges facing AI chatbots.
If you do, here's more
AI chatbots are caught in a cycle of vulnerability and response. When researchers identify a weakness and demonstrate an exploit, platforms respond with guardrails. However, these measures often block only the specific attack technique rather than the broader class of vulnerability, like reinforcing a highway against small cars while ignoring the larger vehicles that could still cause accidents.
A recent vulnerability in ChatGPT, discovered by Radware, exemplifies this issue. The exploit allowed attackers to secretly extract users' private information directly from ChatGPT's servers, leaving no visible trace on user devices. This stealthy approach is particularly concerning for organizations handling sensitive data. The attack also planted persistent entries in ChatGPT's long-term memory, making the threat more enduring.
Radware previously exposed a similar vulnerability, called ShadowLeak, that targeted ChatGPT's Deep Research feature. OpenAI implemented fixes in response, but Radware quickly found a way to bypass those defenses with the new exploit, now dubbed ZombieAgent. This pattern highlights the ongoing challenge of securing AI systems against increasingly sophisticated threats.