Researchers at HiddenLayer disclosed EchoGram, an attack on the guardrail systems that screen inputs to popular AI models such as GPT-5.1 and Claude. By inserting specific token sequences into a prompt, EchoGram flips the guardrail's verdict in either direction: harmful requests slip past the defenses, or harmless requests get flagged as dangerous.
Tags: echogram, ai-security, vulnerabilities, llms, guardrails