3 links
tagged with all of: image-generation + diffusion
Links
REPA-E introduces a family of end-to-end tuned Variational Autoencoders (VAEs) that markedly improve text-to-image (T2I) generation quality and training efficiency. The method enables effective joint training of VAEs and diffusion models, achieving state-of-the-art performance on ImageNet and improving latent-space structure across a range of VAE architectures. Results show faster training toward strong generation quality and better image quality, making E2E-VAEs drop-in upgrades over traditional VAEs.
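A rough sketch of the core idea (not REPA-E's actual code; all names below are illustrative): in end-to-end tuning, the diffusion loss is computed on latents produced by the VAE encoder, so its gradient can flow back into the encoder rather than training against a frozen latent space.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_enc):
    """Toy linear stand-in for a VAE encoder: image -> latent."""
    return x @ W_enc

def diffusion_loss(z, t, W_denoise, rng):
    """Noise-prediction loss on the latent, as in latent diffusion."""
    eps = rng.standard_normal(z.shape)
    z_t = np.sqrt(1 - t) * z + np.sqrt(t) * eps   # noised latent
    eps_hat = z_t @ W_denoise                     # toy denoiser
    return np.mean((eps_hat - eps) ** 2)

# In end-to-end training, this loss is backpropagated through BOTH
# W_denoise and W_enc; with a frozen VAE only W_denoise would update.
x = rng.standard_normal((8, 16))
W_enc = rng.standard_normal((16, 4)) * 0.1
W_denoise = rng.standard_normal((4, 4)) * 0.1
loss = diffusion_loss(encode(x, W_enc), t=0.5, W_denoise=W_denoise, rng=rng)
```

The toy denoiser and linear encoder stand in for the real networks only to show where the latent enters the objective.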
The article presents the Decoupled Diffusion Transformer (DDT) architecture, showing that shifting capacity toward a larger encoder improves performance within a diffusion model framework. It achieves state-of-the-art FID scores on ImageNet benchmarks and accelerates inference by reusing encoder outputs across sampling steps. The implementation provides detailed training and inference configurations, along with online demos.
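The encoder-reuse trick can be sketched generically (this is an illustrative caching pattern, not DDT's API): when encoder features change little between adjacent sampling steps, they can be computed once and shared by several decoder calls.

```python
import numpy as np

def encoder(x, W):
    """Toy 'condition encoder': expensive in a real model, cheap here."""
    return np.tanh(x @ W)

def decoder(x, feats, V):
    """Toy decoder producing an update conditioned on encoder features."""
    return (x + feats) @ V

def sample(x, W, V, steps=8, reuse_every=4):
    """Euler-style sampling loop that refreshes encoder features only
    every `reuse_every` steps instead of at every step."""
    encoder_calls = 0
    feats = None
    for i in range(steps):
        if i % reuse_every == 0:      # refresh the cached features
            feats = encoder(x, W)
            encoder_calls += 1
        x = x - 0.1 * decoder(x, feats, V)
    return x, encoder_calls

rng = np.random.default_rng(1)
x0 = rng.standard_normal((2, 4))
W = rng.standard_normal((4, 4)) * 0.1
V = rng.standard_normal((4, 4)) * 0.1
_, calls = sample(x0, W, V)
# With steps=8 and reuse_every=4, the encoder runs only twice.
```

In a real transformer the encoder dominates compute, so this amortization is where the inference speedup comes from.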
UCGM is an official PyTorch implementation providing a unified framework for training and sampling continuous generative models, such as diffusion and flow-matching models. It substantially accelerates sampling and supports efficient tuning of pre-trained models, achieving strong FID scores across various datasets and resolutions. The framework supports diverse architectures and offers tools for both training and evaluating generative models.
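One common way to see such a unification (a sketch of the general idea, not UCGM's actual interface) is that diffusion and flow matching both corrupt data along a path x_t = alpha(t)*x0 + sigma(t)*eps, differing only in the choice of schedule.

```python
import numpy as np

def interpolate(x0, eps, t, schedule):
    """Unified forward path: x_t = alpha(t)*x0 + sigma(t)*eps."""
    alpha, sigma = schedule(t)
    return alpha * x0 + sigma * eps

def flow_matching(t):
    # Linear (rectified-flow-style) schedule: straight line noise -> data.
    return 1.0 - t, t

def variance_preserving(t):
    # VP-diffusion-style schedule: alpha(t)^2 + sigma(t)^2 = 1.
    return np.cos(0.5 * np.pi * t), np.sin(0.5 * np.pi * t)

x0, eps = np.ones(3), np.zeros(3)
# At t=0 both schedules return the clean data x0; at t=1, pure noise eps.
xt_fm = interpolate(x0, eps, 0.0, flow_matching)
xt_vp = interpolate(x0, eps, 0.0, variance_preserving)
```

A single trainer and sampler can then be written against `interpolate`, with the model family selected purely by the schedule passed in.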