The article explores subliminal learning in language models, where fine-tuning on seemingly unrelated data (such as sequences of numbers) can instill hidden preferences (e.g., a model developing a liking for "owls"). It introduces the concept of entangled tokens, where raising the probability of one token also shifts the probability of another, and describes experiments showing how this coupling can be elicited through prompting and dataset generation. The findings suggest both a mechanism underlying subliminal learning and potential strategies for mitigating it.
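The entangled-token idea can be illustrated with a toy numerical sketch. This is not the article's experiment: it assumes (hypothetically) that two tokens are "entangled" when their output embeddings point in similar directions, so a gradient step that raises the probability of one token also raises the probability of the other. The vocabulary, embedding values, and token names below are invented for illustration.

```python
import math

# Toy 3-token vocabulary with 2-d output embeddings.
# "087" and "owl" are deliberately given nearly parallel embeddings
# (our stand-in for "entangled"); "cat" points the other way.
emb = {
    "087": [1.0, 0.2],
    "owl": [0.9, 0.3],   # correlated with "087" -> entangled
    "cat": [-1.0, 0.5],  # roughly opposite direction
}
h = [0.1, 0.1]  # hidden state produced by some prompt

def softmax_probs(hidden):
    """Next-token distribution: softmax over dot(hidden, embedding)."""
    logits = {t: hidden[0] * e[0] + hidden[1] * e[1] for t, e in emb.items()}
    z = sum(math.exp(v) for v in logits.values())
    return {t: math.exp(v) / z for t, v in logits.items()}

before = softmax_probs(h)

# One cross-entropy gradient step on the hidden state with target "087":
# dL/dh = sum_t p_t * emb_t - emb["087"], so h moves toward emb["087"].
lr = 0.5
grad = [sum(before[t] * emb[t][i] for t in emb) - emb["087"][i]
        for i in range(2)]
h = [h[i] - lr * grad[i] for i in range(2)]

after = softmax_probs(h)
# Training only on the number token drags "owl" along with it:
print("owl:", before["owl"], "->", after["owl"])  # rises
print("cat:", before["cat"], "->", after["cat"])  # falls
```

In this toy model the number token acts as a carrier: any data that rewards "087" also nudges the model toward "owl", because their logits share a direction in the hidden space, which is one plausible reading of why fine-tuning on numbers can transmit an unrelated preference.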