MAGI-1 is an autoregressive video generation model that produces videos chunk by chunk, predicting each fixed-length chunk of frames conditioned on the chunks already generated, which yields strong temporal consistency and scalability. It incorporates a transformer-based variational autoencoder and a chunk-wise denoising algorithm, enabling efficient, controllable video generation from text or image prompts. The model has demonstrated state-of-the-art performance in both instruction following and physical behavior prediction compared with existing models.
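
The sketch below illustrates the chunk-by-chunk autoregressive loop described above. Every name and detail in it (ChunkDenoiser, generate_video, the tensor shapes, the crude Euler-style update) is a hypothetical illustration rather than the MAGI-1 API: it only shows the idea of denoising one fixed-length chunk at a time while conditioning on previously generated chunks.

```python
import torch
import torch.nn as nn


class ChunkDenoiser(nn.Module):
    """Placeholder for the transformer that denoises a single latent chunk."""

    def __init__(self, latent_dim: int = 16, text_dim: int = 32):
        super().__init__()
        # Toy projection; the real model would be a diffusion transformer
        # attending over the text prompt and earlier chunks.
        self.proj = nn.Linear(latent_dim + latent_dim + text_dim, latent_dim)

    def forward(self, noisy_chunk, prev_chunks, text_emb):
        # Summarize already-generated chunks into one context vector; a real
        # model would use causal attention over them instead of a mean.
        if prev_chunks.shape[1] > 0:
            context = prev_chunks.mean(dim=1)
        else:
            context = torch.zeros_like(noisy_chunk.mean(dim=1))
        context = context.unsqueeze(1).expand_as(noisy_chunk)
        text = text_emb.unsqueeze(1).expand(noisy_chunk.shape[0], noisy_chunk.shape[1], -1)
        return self.proj(torch.cat([noisy_chunk, context, text], dim=-1))


@torch.no_grad()
def generate_video(model, text_emb, num_chunks=4, chunk_frames=8,
                   latent_dim=16, denoise_steps=10):
    """Autoregressively generate `num_chunks` fixed-length latent chunks."""
    batch = text_emb.shape[0]
    prev_chunks = torch.empty(batch, 0, latent_dim)  # nothing generated yet
    for _ in range(num_chunks):
        # Each new chunk starts from pure noise and is refined iteratively.
        chunk = torch.randn(batch, chunk_frames, latent_dim)
        for _ in range(denoise_steps):
            pred = model(chunk, prev_chunks, text_emb)
            chunk = chunk - pred / denoise_steps  # crude Euler-style update
        # Append the finished chunk so later chunks can condition on it.
        prev_chunks = torch.cat([prev_chunks, chunk], dim=1)
    return prev_chunks  # latent video; a VAE decoder would map it to pixels


if __name__ == "__main__":
    model = ChunkDenoiser()
    text_emb = torch.randn(1, 32)  # stand-in for an encoded text prompt
    latents = generate_video(model, text_emb)
    print(latents.shape)  # (1, num_chunks * chunk_frames, latent_dim)
```

Because each chunk only depends on chunks generated before it, a loop like this can in principle be extended to arbitrary video lengths without regenerating earlier frames.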
Tags: video-generation, autoregressive, machine-learning, deep-learning, model-release