6 min read | Saved February 14, 2026
Do you care about this?
Depth Anything 3 (DA3) is a model designed for accurate depth estimation and 3D geometry recovery from various visual inputs, regardless of camera pose. It simplifies the process using a single transformer backbone and a depth-ray representation, outperforming previous models in both monocular and multi-view scenarios. Various specialized models within the DA3 series cater to different depth estimation tasks.
If you do, here's more
Depth Anything 3 (DA3) introduces a model designed to predict consistent geometric depth from various visual inputs, even when camera poses are unknown. Its key findings are that a plain transformer backbone can handle depth tasks effectively without specialized architecture, and that a single depth-ray representation suffices, avoiding the complexity of multi-task learning. Compared to its predecessor DA2, DA3 shows marked improvements in monocular depth estimation and outperforms VGGT in multi-view scenarios.
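To make the depth-ray idea concrete, here is a minimal geometric sketch (not DA3's actual code): every pixel corresponds to a camera ray, and scaling that ray by a predicted depth yields a 3D point. The intrinsics and depth values below are made up for illustration.

```python
import numpy as np

def pixel_rays(K, width, height):
    """Rays through each pixel center in camera coordinates (z = 1)."""
    u, v = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)  # (H, W, 3) homogeneous pixels
    return pix @ np.linalg.inv(K).T                   # back-project through intrinsics

def unproject(depth, rays):
    """Scale each ray by its predicted depth to get 3D points, shape (H, W, 3)."""
    return rays * depth[..., None]

# Hypothetical pinhole intrinsics and a flat 2 m depth map as stand-ins
K = np.array([[500.0,   0.0, 32.0],
              [  0.0, 500.0, 24.0],
              [  0.0,   0.0,  1.0]])
depth = np.full((48, 64), 2.0)
points = unproject(depth, pixel_rays(K, 64, 48))
# With z-normalized rays, points[..., 2] equals the input depth
```

A model that predicts per-pixel rays and depths jointly, as DA3's representation does, can recover scene geometry without being handed camera poses up front.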
The DA3 framework consists of multiple model series tailored for different use cases. The main series, including DA3-Giant, DA3-Large, and others, can handle tasks like monocular and multi-view depth estimation and camera pose estimation. There's also a specialized DA3Metric-Large model for metric depth estimation, which is particularly useful in applications requiring real-world scale. The Nested series combines features from the Any-view and monocular models to enhance depth and pose accuracy.
Recent updates have introduced features like DA3-Streaming, which allows for ultra-long video sequence inference with reduced GPU memory requirements. Users can also take advantage of an interactive web UI for visualizing model outputs and a flexible CLI for batch processing. The codebase is modular, making it easy for researchers to integrate new models or functionalities. Installation instructions and example code snippets are provided for users to get started quickly.