Saved February 14, 2026
Do you care about this?
Luma has launched Ray3 Modify, an AI model that lets users modify video footage while retaining the original performance of actors. Users can input character references and specify start and end frames to guide transitions, making it easier to control character movements. This model aims to enhance the creative process for studios by blending real footage with AI-generated transformations.
If you do, here's more
Luma, a company backed by a16z, has launched a new AI model named Ray3 Modify, designed to enhance video editing by allowing users to modify existing footage while maintaining the original performance of actors. By providing reference character images, creators can transform an actor's appearance without losing key elements like motion, timing, and emotional delivery. Users can also specify start and end frames to generate seamless transitions, giving them more control over character movements and scene continuity.
CEO Amit Jain emphasized the model's ability to blend real-world performances with AI's flexibility. This means filmmakers can capture scenes and then instantly modify them—changing locations or costumes—without needing to reshoot the entire scene. Ray3 Modify aims to resolve the challenges of editing with generative video models, which often struggle with maintaining the integrity of human performance.
The model is available through Luma's Dream Machine platform. The launch follows a significant $900 million funding round led by Humain, an AI company affiliated with Saudi Arabia's Public Investment Fund; other backers include Amplify Partners and Matrix Partners. Luma also plans to build a 2GW AI compute cluster in Saudi Arabia in collaboration with Humain. With Ray3 Modify, the company positions itself against competitors such as Runway and Kling, which introduced their own video modification features as recently as June 2025.