2 min read | Saved February 14, 2026
Do you care about this?
ShapeR offers a method for generating 3D shapes from image sequences. It preprocesses the input images to extract the data it needs, then uses a transformer model to create a mesh representation of each object in the scene. The project includes tools for setup, data exploration, and evaluation.
If you do, here's more
ShapeR is a system for generating 3D shapes from image sequences. A preprocessing stage extracts the essential inputs: sparse SLAM points, images, camera poses, and text captions. The core of the system is a rectified flow transformer that consumes these multimodal inputs to produce a shape code, which is then decoded into a mesh of the object. By reconstructing each detected object in turn, the method can rebuild an entire scene at metric scale.
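To make the "rectified flow" idea concrete, here is a minimal sampling sketch: a learned velocity field is Euler-integrated from Gaussian noise toward a shape code. The `velocity_fn` signature and the toy straight-line model are illustrative assumptions, not ShapeR's actual transformer or API.

```python
import numpy as np

def sample_rectified_flow(velocity_fn, cond, dim=8, steps=16, seed=0):
    """Euler-integrate a velocity field from noise (t=0) toward data (t=1).

    velocity_fn(x, t, cond) stands in for the conditional transformer;
    its signature here is an assumption made for illustration.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)              # start from Gaussian noise
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity_fn(x, t, cond)  # one Euler step along the flow
    return x

# Toy "model": velocity pointing straight at a conditioning target,
# which is exactly the straight-path regime rectified flows aim for.
target = np.ones(8)
toy_velocity = lambda x, t, cond: cond - x
code = sample_rectified_flow(toy_velocity, target)
```

With more integration steps (or a better-trained field), the sample lands closer to the target; in ShapeR the analogous output would be the shape code handed to the mesh decoder.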
For practical application, the project includes a command-line script, `infer_shape.py`, which requires preprocessed data in a pickle file. Users can choose among configurations that trade speed for quality, from slower high-quality output to faster lower-quality output. The script writes output files including the 3D mesh and visual comparisons of the input, predictions, and ground truth.
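Before running inference, it can help to inspect what a preprocessed pickle actually contains. A small loader/summary sketch follows; `load_sample` and `summarize` are hypothetical helpers, and the field names shown in the docstring are guesses based on the inputs described above, not ShapeR's real schema.

```python
import pickle

def load_sample(path):
    """Load one preprocessed sample from a pickle file.

    Fields such as SLAM points, images, poses, and a caption are
    expected per the article, but the exact keys are an assumption.
    """
    with open(path, "rb") as f:
        return pickle.load(f)

def summarize(sample):
    """Print each field with its type and, where available, its shape."""
    for key, value in sample.items():
        shape = getattr(value, "shape", None)
        suffix = f" shape={shape}" if shape is not None else ""
        print(f"{key}: {type(value).__name__}{suffix}")
```

Running `summarize(load_sample("sample.pkl"))` on one of the provided files is a quick way to confirm the data layout before wiring it into the script.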
ShapeR also comes with an Evaluation Dataset, featuring preprocessed samples from Aria glasses captures. Each sample includes point clouds, multi-view images, camera parameters, text captions, and ground truth meshes. An accompanying Jupyter notebook, `explore_data.ipynb`, provides insights into the data structure and includes interactive visualizations and examples for using the data with different models.
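Since each sample bundles point clouds with per-view camera parameters, a common sanity check when exploring such data is projecting the points into one of the images. Below is a generic pinhole-projection sketch; the intrinsics/extrinsics conventions are assumptions, and the notebook should be consulted for the dataset's actual ones.

```python
import numpy as np

def project_points(points_world, K, T_world_to_cam):
    """Project Nx3 world points into pixel coordinates with a pinhole model.

    K is a 3x3 intrinsics matrix; T_world_to_cam is a 4x4 extrinsics
    matrix. Generic routine for overlaying SLAM points on an image --
    not ShapeR code.
    """
    n = points_world.shape[0]
    homog = np.hstack([points_world, np.ones((n, 1))])  # to homogeneous
    cam = (T_world_to_cam @ homog.T).T[:, :3]           # world -> camera frame
    in_front = cam[:, 2] > 0                            # keep points with z > 0
    pix = (K @ cam[in_front].T).T
    pix = pix[:, :2] / pix[:, 2:3]                      # perspective divide
    return pix, in_front
```

Overlaying the returned pixels on the matching multi-view image is a quick way to verify that the point cloud and camera parameters in a sample are consistent.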
The project is organized into several directories containing scripts for inference, data exploration, dataset handling, preprocessing, and postprocessing. The majority of ShapeR is licensed under CC-BY-NC, with some components under different terms. Researchers who use ShapeR are encouraged to cite the associated paper.