OpenAI's new ChatGPT Connectors feature allows users to access third-party applications, but it also introduces significant security risks, including a 0-click data exfiltration exploit. Attackers can use indirect prompt injections to stealthily extract sensitive information, such as API keys, from connected services like Google Drive without the victim's knowledge. Despite OpenAI's mitigations against such vulnerabilities, creative methods still exist for malicious actors to bypass these safeguards.
AI browsers are vulnerable to prompt injection attacks, which can lead to significant data exfiltration risks as these browsers gain more agentic capabilities. Researchers have demonstrated various methods of exploiting these vulnerabilities, highlighting the need for improved security measures while acknowledging that complete prevention may never be possible. As AI continues to integrate with sensitive data and act on users' behalf, the potential for malicious exploitation increases.