Recent advances in generative diffusion models highlight their ability to capture image style and semantics. The paper introduces a novel attention distillation loss that transfers visual characteristics from a reference image to the generated output by optimizing the synthesis process; integrating this loss into Classifier Guidance further accelerates generation and broadens the range of transfer tasks it can handle. Extensive experiments validate the effectiveness of the approach on style and texture transfer.
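
To make the idea concrete, below is a minimal sketch of what an attention distillation loss of this kind could look like in PyTorch. The function names, the choice of L1 distance, the layer from which features are taken, and the query/key/value routing are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of an attention-distillation-style loss (assumed form).
# Features q, k, v are assumed to come from a self-attention layer of a
# pretrained diffusion U-Net, with shape (batch, tokens, dim).
import torch
import torch.nn.functional as F


def attention(q, k, v):
    # Standard scaled dot-product attention over the token dimension.
    scale = q.shape[-1] ** -0.5
    weights = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
    return weights @ v


def attention_distillation_loss(q_gen, k_gen, v_gen, k_ref, v_ref):
    """Hypothetical loss: the generated image's queries attend to the
    reference image's keys/values to form a target feature map carrying the
    reference's visual characteristics; the loss pulls the generated image's
    own self-attention output toward that target."""
    target = attention(q_gen, k_ref, v_ref)  # features "stylized" by the reference
    output = attention(q_gen, k_gen, v_gen)  # generated image's own self-attention
    return F.l1_loss(output, target.detach())
```

In a Classifier-Guidance-style integration, one would presumably add the gradient of this loss with respect to the current latent to each denoising update, so the transfer happens during sampling rather than in a separate post-hoc optimization; the exact weighting and scheduling are left to the paper.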