Researchers have discovered a jailbreak technique for GPT-5 that allows attackers to bypass the model's safety measures and content restrictions. The finding raises significant concerns about the potential misuse of advanced AI systems and highlights the need for more robust safeguards.
Tags: gpt-5, jailbreak, ai-safety, cybersecurity, research