OpenAI's new ChatGPT Connectors feature allows users to access third-party applications, but it also introduces significant security risks, including a 0-click data exfiltration exploit. Attackers can use indirect prompt injections to stealthily extract sensitive information, such as API keys, from connected services like Google Drive without the victim's knowledge. Despite OpenAI's mitigations against such vulnerabilities, creative methods still exist for malicious actors to bypass these safeguards.