Do you care about this?
Runway has introduced its first world model, GWM-1, which simulates environments frame by frame with a learned understanding of physics and dynamics. The model is intended to improve video generation and to support training for robotics and other applications. Alongside it, Runway updated its Gen 4.5 video model with native audio and multi-shot generation.
If you do, here's more
Runway has launched its first world model, GWM-1, which uses frame-by-frame prediction to simulate environments with an understanding of physics. This model distinguishes itself from competitors like Google’s Genie-3 by being more general-purpose. Runway’s CTO, Anastasis Germanidis, emphasized the importance of predicting pixels directly to achieve a robust simulation. The company has also developed specific applications of GWM-1: GWM-Worlds for interactive environments, GWM-Robotics for training robots with synthetic data, and GWM-Avatars for simulating human behavior.
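To make the frame-by-frame idea concrete, here is a minimal sketch of an autoregressive world-model rollout, where each predicted frame is fed back in as context for the next prediction. This is a generic illustration of the technique only, not Runway's GWM-1 API; the `WorldModel` class, `predict_next_frame` method, and action format are all hypothetical placeholders.

```python
import numpy as np

class WorldModel:
    """Stand-in for a learned model mapping (past frames, action) -> next frame.
    Hypothetical placeholder, not Runway's actual model or API."""

    def predict_next_frame(self, frames: list[np.ndarray],
                           action: np.ndarray) -> np.ndarray:
        # A real world model would run a neural network conditioned on the
        # frame history and the action; this placeholder just repeats the
        # last frame so the sketch stays runnable.
        return frames[-1]

def rollout(model: WorldModel, first_frame: np.ndarray,
            actions: list[np.ndarray]) -> list[np.ndarray]:
    """Autoregressive simulation: each predicted frame becomes input
    for the next step, which is what lets the model act as a simulator."""
    frames = [first_frame]
    for action in actions:
        frames.append(model.predict_next_frame(frames, action))
    return frames

# Example: simulate one second at 24 fps from a single 720p frame.
start = np.zeros((720, 1280, 3), dtype=np.uint8)
controls = [np.zeros(2) for _ in range(24)]  # hypothetical 2-D control input
video = rollout(WorldModel(), start, controls)
print(len(video))  # 25: the start frame plus 24 predicted frames
```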
GWM-Worlds lets users create scenes from prompts or images, generating environments with consistent geometry, physics, and lighting at 24 frames per second and 720p resolution. Beyond gaming, the tool could be used to teach agents how to navigate real-world scenarios. GWM-Robotics aims to improve robot training by introducing varied conditions and obstacles into synthetic environments, which can expose cases where a robot fails to follow instructions.
In addition to the world model, Runway has updated its Gen 4.5 video model to include native audio and multi-shot generation capabilities. Users can now create one-minute videos with consistent characters, dialogue, and background audio. This update positions Runway closer to its competitor, Kling, in the video generation space. The company plans to offer GWM-Robotics through an SDK and is in talks with robotics firms for potential collaborations.