---
license: bsd
pipeline_tag: depth-estimation
tags:
  - Depth Completion
  - ICCV2025
  - Zero-shot
  - Generalizable
  - Lidar+RGB
library_name: pytorch
datasets:
  - KITTI
  - NYUv2
  - ETH3D
  - VOID
---

# 🌀 OMNI-DC: Highly Robust Depth Completion with Multiresolution Depth Integration

**Authors:** Yiming Zuo, Willow Yang, Zeyu Ma, Jia Deng  
**Institution:** Princeton Vision & Learning Lab  
**Paper:** OMNI-DC: Highly Robust Depth Completion with Multiresolution Depth Integration, ICCV 2025  
**Code:** [princeton-vl/OMNI-DC (GitHub)](https://github.com/princeton-vl/OMNI-DC)  
**Hugging Face paper page:** https://huggingface.co/papers/2411.19278


🧠 Model Overview

OMNI-DC is a generalizable depth completion framework that integrates sparse depth signals at multiple spatial resolutions to produce dense, metrically accurate depth maps.
It is trained on a mixture of synthetic and real data, and it achieves strong cross-dataset generalization along with robustness to unseen sparsity patterns.
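
Concretely, a depth completion model like this pairs an RGB image with a sparse depth map laid out on the same image grid. The sketch below (pure PyTorch; the zero-means-missing convention and the helper name are illustrative assumptions, not the repo's API) shows how scattered depth samples, e.g. projected LiDAR returns, become that input:

```python
import torch

def scatter_sparse_depth(uv: torch.Tensor, z: torch.Tensor,
                         height: int, width: int) -> torch.Tensor:
    """Scatter (u, v) pixel coordinates with depths z onto an H x W grid.

    Zeros mark pixels without a measurement -- an assumed convention
    for illustration, not confirmed by the OMNI-DC codebase.
    """
    sparse = torch.zeros(height, width)
    u = uv[:, 0].long().clamp(0, width - 1)
    v = uv[:, 1].long().clamp(0, height - 1)
    sparse[v, u] = z
    return sparse

# ~500 LiDAR-like samples on a 480 x 640 frame (~0.16% density)
uv = torch.rand(500, 2) * torch.tensor([640.0, 480.0])
z = 2.0 + 8.0 * torch.rand(500)  # depths in meters
sparse_depth = scatter_sparse_depth(uv, z, 480, 640)
print(f"{(sparse_depth > 0).float().mean().item():.2%} of pixels observed")
```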

Key highlights:

- Multiresolution sparse-depth integration (see the sketch after this list)
- Differentiable optimization layer for enforcing geometric consistency
- Trained on diverse synthetic datasets with real-world fine-tuning
- Compatible with Depth Anything v2 as a plug-in prior (v1.1+)
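
To make the multiresolution idea concrete, the sketch below builds a pyramid of sparse depth maps by averaging only the valid (non-zero) samples inside each 2x2 cell at every coarser level, so observations survive downsampling instead of being diluted by missing pixels. This is a conceptual illustration only, not the model's actual integration layer:

```python
import torch
import torch.nn.functional as F

def sparse_depth_pyramid(sparse: torch.Tensor, levels: int = 4):
    """Build a multiresolution pyramid from a sparse depth map
    (zeros = missing), averaging only valid samples per 2x2 cell."""
    pyramid = [sparse]
    depth = sparse[None, None]                   # 1 x 1 x H x W for pooling
    valid = (depth > 0).float()
    for _ in range(levels - 1):
        depth_sum = F.avg_pool2d(depth, 2) * 4   # sum of depths per cell
        valid_sum = F.avg_pool2d(valid, 2) * 4   # count of valid samples
        depth = depth_sum / valid_sum.clamp(min=1.0)
        valid = (valid_sum > 0).float()
        depth = depth * valid                    # keep empty cells at zero
        pyramid.append(depth[0, 0])
    return pyramid

torch.manual_seed(0)
H, W = 480, 640
sparse = torch.zeros(H, W)
idx = torch.randperm(H * W)[:500]                # ~0.16% of pixels observed
sparse.view(-1)[idx] = 2.0 + 8.0 * torch.rand(500)

for lvl, d in enumerate(sparse_depth_pyramid(sparse)):
    density = (d > 0).float().mean().item()
    print(f"level {lvl}: {tuple(d.shape)}, {density:.2%} valid")
```

Note how the valid-pixel density rises at coarser levels: even very sparse inputs provide near-complete coverage at low resolution, which is what makes coarse-to-fine integration attractive.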

🧩 Use Cases

| Use case | Description |
| --- | --- |
| Zero-shot depth completion | Apply to new datasets without retraining |
| SLAM / mapping | Densify maps built from sparse LiDAR or RGB-D measurements |
| Novel view synthesis | Supply high-quality depth for 3D scene reconstruction |
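
For zero-shot use, a typical workflow is to fetch the released checkpoint from the Hub and run the repo's inference entry point. The sketch below is a hedged illustration: the repo id, checkpoint filename, and `OmniDC` import are assumptions, not the confirmed API; see the GitHub repository for the actual loading code.

```python
# Hedged sketch: repo id, filename, and model class are assumptions.
import torch
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="zuoym15/OMNI-DC",   # assumed Hugging Face repo id
    filename="model.ckpt",       # assumed checkpoint filename
)
state = torch.load(ckpt_path, map_location="cpu")

# Hypothetical entry point from the GitHub repo:
# from omni_dc import OmniDC
# model = OmniDC().eval()
# model.load_state_dict(state["model"])
# with torch.no_grad():
#     dense = model(rgb, sparse_depth)  # dense metric depth, H x W
```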

## 📖 Citation

If you use OMNI-DC, please cite:

```bibtex
@inproceedings{zuo2025omni,
  title     = {OMNI-DC: Highly Robust Depth Completion with Multiresolution Depth Integration},
  author    = {Zuo, Yiming and Yang, Willow and Ma, Zeyu and Deng, Jia},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2025}
}
```