# OMNI-DC: Highly Robust Depth Completion with Multiresolution Depth Integration
- **Authors:** Yiming Zuo, Willow Yang, Zeyu Ma, Jia Deng
- **Institution:** Princeton Vision & Learning Lab
- **Paper:** OMNI-DC: Highly Robust Depth Completion with Multiresolution Depth Integration, ICCV 2025
- **Code:** https://github.com/princeton-vl/OMNI-DC
- **Hugging Face paper page:** https://huggingface.co/papers/2411.19278
## Model Overview
OMNI-DC is a generalizable depth completion framework that integrates sparse depth signals at multiple spatial resolutions to produce dense, metric-accurate depth maps.
It is trained on a mixture of synthetic and real data, achieving strong cross-dataset generalization and robustness to unseen sparsity patterns.
Key highlights:
- Multiresolution sparse-depth integration
- Differentiable optimization layer for enforcing geometric consistency (sketched below)
- Trained on diverse synthetic datasets with real-world fine-tuning
- Compatible with Depth Anything v2 as a plug-in prior (v1.1+)
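
To make the first two highlights concrete, here is a minimal, single-resolution sketch of the integration idea: assuming the network predicts log-depth gradients, a dense map can be recovered by solving a least-squares system that matches those gradients while staying pinned to the sparse observations. The function name, the dense solve, and the log-depth parameterization below are illustrative assumptions, not the released implementation:

```python
import torch

def integrate_log_depth(gx, gy, sparse_log_d, mask, anchor_w=10.0):
    """Recover dense log-depth from predicted gradients + sparse anchors.

    gx, gy:       (H, W) horizontal / vertical log-depth gradients
    sparse_log_d: (H, W) sparse log-depth observations
    mask:         (H, W) 1.0 where an observation exists, else 0.0
    """
    H, W = gx.shape
    n = H * W
    flat = lambda y, x: y * W + x  # (row, col) -> flat pixel index

    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    # Gradient equations: d[y, x+1] - d[y, x] = gx[y, x], likewise for gy.
    for y in range(H):
        for x in range(W - 1):
            rows += [eq, eq]; cols += [flat(y, x + 1), flat(y, x)]
            vals += [1.0, -1.0]; rhs.append(gx[y, x]); eq += 1
    for y in range(H - 1):
        for x in range(W):
            rows += [eq, eq]; cols += [flat(y + 1, x), flat(y, x)]
            vals += [1.0, -1.0]; rhs.append(gy[y, x]); eq += 1
    # Anchor equations pin the solution to the sparse metric observations.
    for y in range(H):
        for x in range(W):
            if mask[y, x] > 0:
                rows.append(eq); cols.append(flat(y, x))
                vals.append(anchor_w)
                rhs.append(anchor_w * sparse_log_d[y, x]); eq += 1

    A = torch.zeros(eq, n)
    A[rows, cols] = torch.tensor(vals)
    b = torch.stack(rhs)
    # Normal equations: full rank as long as at least one anchor exists.
    # torch.linalg.solve is differentiable, so training gradients can flow
    # back into the predicted gx / gy -- the point of an optimization layer.
    d = torch.linalg.solve(A.T @ A, A.T @ b)
    return d.reshape(H, W).exp()  # log-depth -> metric depth
```

The dense solve is only practical for tiny grids; it is meant to show the structure of the optimization, not its efficient or multiresolution form.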
## Use Cases
| Use case | Description |
|---|---|
| Zero-shot depth completion | Apply to new datasets without retraining (usage sketch below) |
| SLAM / mapping | Densify sparse LiDAR or RGB-D measurements into dense maps |
| Novel view synthesis | Supply high-quality depth for 3D scene reconstruction |
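
A usage sketch for the zero-shot row follows. The actual entry points are defined in the GitHub repo, so `OmniDC`, `from_pretrained`, and the forward signature here are hypothetical placeholders; only the Depth Anything v2 pipeline call is a real `transformers` API:

```python
import numpy as np
import torch
from PIL import Image
from transformers import pipeline

# Hypothetical OMNI-DC entry point: the real names live in the
# princeton-vl/OMNI-DC repo and may differ from these placeholders.
from omni_dc import OmniDC
model = OmniDC.from_pretrained("checkpoints/omni-dc.ckpt").eval()

# RGB frame, normalized to [0, 1].
img = Image.open("frame.png").convert("RGB").resize((640, 480))
rgb = torch.from_numpy(np.asarray(img)).float().permute(2, 0, 1)[None] / 255.0

# Sparse metric depth (0 = missing), e.g. ~500 projected LiDAR returns.
sparse = torch.zeros(1, 1, 480, 640)
ys, xs = torch.randint(0, 480, (500,)), torch.randint(0, 640, (500,))
sparse[0, 0, ys, xs] = 1.0 + 9.0 * torch.rand(500)

# Optional monocular prior from Depth Anything v2. This pipeline call is
# real `transformers` API; how the v1.1+ models consume the prior is up
# to the repo, so the `prior=` keyword below is also a placeholder.
pipe = pipeline("depth-estimation",
                model="depth-anything/Depth-Anything-V2-Small-hf")
prior = pipe(img)["predicted_depth"]

with torch.no_grad():
    dense = model(rgb, sparse, prior=prior)  # (1, 1, 480, 640) dense depth
```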
## Citation

If you use OMNI-DC, please cite:

```bibtex
@inproceedings{zuo2025omni,
  title     = {OMNI-DC: Highly Robust Depth Completion with Multiresolution Depth Integration},
  author    = {Zuo, Yiming and Yang, Willow and Ma, Zeyu and Deng, Jia},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2025}
}
```