2 links tagged with all of: cybersecurity + language-models
Links
Microsoft disclosed a new side-channel attack, dubbed Whisper Leak, that lets a network observer infer the topic of a conversation with a language model from its encrypted traffic. Because responses are streamed token by token, the size and timing of the TLS packets leak enough information to identify sensitive subjects despite HTTPS encryption, raising serious privacy concerns. Models from multiple providers proved vulnerable, and some companies have since deployed countermeasures.
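To make the idea concrete, here is a minimal sketch (not Microsoft's actual pipeline) of how such a side channel works in principle: TLS hides the content of each streamed chunk but not its size, so a classifier over size distributions alone can distinguish topics. The bucket width, the toy traces, and the overlap similarity are all illustrative assumptions.

```python
from collections import Counter

def size_profile(packet_sizes, bucket=16):
    """Summarize a response as a distribution over packet-size buckets."""
    counts = Counter(s // bucket for s in packet_sizes)
    total = sum(counts.values())
    return {b: c / total for b, c in counts.items()}

def similarity(p, q):
    """Overlap between two bucket distributions (higher = more alike)."""
    return sum(min(p.get(b, 0.0), q.get(b, 0.0)) for b in set(p) | set(q))

def classify(trace, labeled_profiles):
    """Assign the topic whose reference profile best matches the trace."""
    profile = size_profile(trace)
    return max(labeled_profiles,
               key=lambda topic: similarity(profile, labeled_profiles[topic]))

# Toy data: two topics with distinctive encrypted-chunk-size patterns.
references = {
    "medical": size_profile([120, 130, 125, 122, 128, 131]),
    "coding":  size_profile([300, 310, 45, 305, 50, 298]),
}
print(classify([118, 127, 124, 129], references))  # → medical
```

A real attack would also use inter-packet timing and a stronger model, but even this crude matcher shows why padding and timing obfuscation are the natural countermeasures.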
The article presents Golden Goose, a method for generating effectively unlimited Reinforcement Learning with Verifiable Rewards (RLVR) tasks from otherwise unverifiable internet text. The authors build a large-scale dataset, GooseReason-0.7M, containing over 700,000 tasks across diverse domains. Training on it improves model performance, including in areas such as cybersecurity where verifiable data was previously unavailable.