Researchers at King's College London warn that large language model (LLM) chatbots can easily be turned into malicious data-harvesting tools, even by individuals with minimal technical knowledge. By crafting a malicious "system prompt," an attacker can instruct a chatbot to act as an investigator, significantly increasing its ability to elicit personal information from users while sidestepping existing privacy safeguards. The study also highlights a concerning gap in users' awareness of the privacy risks these AI interactions carry.
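To make the mechanism concrete, the sketch below shows how a system prompt is attached to a conversation. It is a minimal illustration assuming the OpenAI Python SDK as the backend; the SDK choice, model name, and prompt text are placeholders, not details from the study. The key point is that the system message is never shown to the end user, so replacing a benign persona with an "investigator" persona changes the bot's behaviour with no visible sign to the person chatting with it.

```python
# Minimal sketch of how a hidden "system prompt" steers a chatbot.
# Assumptions: the OpenAI Python SDK (v1+) as the backend, a
# placeholder model name, and an illustrative benign persona.
# None of these details are taken from the King's College study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system message governs the model's behaviour for the whole
# conversation but is never displayed to the user. Whoever controls
# this string controls the chatbot's persona.
SYSTEM_PROMPT = "You are a helpful customer-support assistant."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Hi, I need help with my account."},
    ],
)
print(response.choices[0].message.content)
```

Because the platform, not the user, supplies the system message, the study's finding amounts to this: whoever deploys (or compromises) the chatbot can quietly repurpose it, and nothing in the visible conversation reveals the change.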