Depth Prompting for Sensor-agnostic Depth Estimation
- Abstract
- Dense depth maps have been used as a key element of visual perception tasks. There have been tremendous efforts to enhance depth quality, ranging from optimization-based to learning-based methods. Despite this remarkable progress, their applicability in the real world is limited by systematic measurement biases such as density, sensing pattern, and scan range. It is well known that these biases make it difficult for such methods to generalize. We observe that learning a joint representation for input modalities (e.g., images and depth), which most recent methods adopt, is sensitive to these biases. In this work, we disentangle those modalities to mitigate the biases with prompt engineering. To this end, we design a novel depth prompt module that yields a desirable feature representation for new depth distributions arising from either sensor types or scene configurations. Our depth prompt can be embedded into foundation models for monocular depth estimation. Through this embedding process, our method frees the foundation model from the restraint of depth scan range and enables it to provide absolute-scale depth maps. We demonstrate the effectiveness of our method through extensive evaluations, and we submit our source code as supplementary material to validate its robustness.
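- A minimal, hypothetical sketch of how such a depth prompt module could be wired is given below, assuming a PyTorch setting: a small encoder turns a sparse metric depth map and its validity mask into prompt features, which are fused with image features from a frozen monocular depth foundation model to regress dense, absolute-scale depth. All names and architectural choices here (DepthPrompt, prompt_dim, img_feat_dim) are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of a "depth prompt" style module.
# Names and architecture are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthPrompt(nn.Module):
    """Encodes sparse metric depth (plus a validity mask) into prompt features
    and fuses them with image features from a frozen monocular depth backbone."""

    def __init__(self, img_feat_dim: int = 64, prompt_dim: int = 32):
        super().__init__()
        # Sparse depth + validity mask (2 channels) -> prompt features.
        self.prompt_encoder = nn.Sequential(
            nn.Conv2d(2, prompt_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(prompt_dim, prompt_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Fuse image features with prompt features, then regress dense depth.
        self.fuse = nn.Sequential(
            nn.Conv2d(img_feat_dim + prompt_dim, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),
        )

    def forward(self, img_feat: torch.Tensor, sparse_depth: torch.Tensor) -> torch.Tensor:
        valid = (sparse_depth > 0).float()           # encodes density / sensing pattern
        prompt = self.prompt_encoder(torch.cat([sparse_depth, valid], dim=1))
        dense = self.fuse(torch.cat([img_feat, prompt], dim=1))
        return F.softplus(dense)                     # keep predicted depth positive


if __name__ == "__main__":
    # Dummy inputs: image features from a frozen foundation model, sparse sensor depth.
    img_feat = torch.randn(1, 64, 120, 160)
    sparse_depth = torch.zeros(1, 1, 120, 160)
    sparse_depth[:, :, ::8, ::8] = 10.0              # toy scan pattern at 10 m
    pred = DepthPrompt()(img_feat, sparse_depth)
    print(pred.shape)                                # torch.Size([1, 1, 120, 160])
```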
- Author(s)
- Jin-Hwi Park; Chanhwi Jeong; Junoh Lee; Hae-Gon Jeon
- Issued Date
- 2024-06-17
- Type
- Conference Paper
- DOI
- 10.1109/CVPR52733.2024.00941
- URI
- https://scholar.gist.ac.kr/handle/local/20913
- Publisher
- IEEE Computer Society
- Citation
- 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, pp.9859 - 9869
- ISSN
- 1063-6919
- Conference Place
- Seattle Convention Center, Seattle, WA, US
Appears in Collections:
- Department of AI Convergence > 2. Conference Papers
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.