Click any tag below to further narrow down your results
Links
OpenAI's ChatGPT Health aims to provide tailored health advice while raising significant questions about data security and privacy. Users can connect personal medical records, but this could expose sensitive information to third parties. The lack of clarity on regulatory compliance and encryption methods adds to the skepticism surrounding its safety.
Researchers discovered a vulnerability in ChatGPT that allows the exfiltration of user data, with the attack sending data directly from ChatGPT servers. This exploit, called ZombieAgent, builds on a previous attack known as ShadowLeak and demonstrates the ongoing security challenges in AI chatbots.
OpenAI's new ChatGPT Connectors feature allows users to access third-party applications, but it also introduces significant security risks, including a 0-click data exfiltration exploit. Attackers can use indirect prompt injections to stealthily extract sensitive information, such as API keys, from connected services like Google Drive without the victim's knowledge. Despite OpenAI's mitigations against such vulnerabilities, creative methods still exist for malicious actors to bypass these safeguards.