Deep Depth from Uncalibrated Small Motion Clip
- Abstract
- We propose a novel approach to infer a high-quality depth map from a set of images with small viewpoint variations. In general, techniques for depth estimation from small motion consist of camera pose estimation and dense reconstruction. In contrast to prior approaches that recover scene geometry and camera motions using pre-calibrated cameras, we introduce a self-calibrating bundle adjustment method tailored for small motion which enables computation of camera poses without the need for camera calibration. For dense depth reconstruction, we present a convolutional neural network called DPSNet (Deep Plane Sweep Network) whose design is inspired by best practices of traditional geometry-based approaches. Rather than directly estimating depth or optical flow correspondence from image pairs as done in many previous deep learning methods, DPSNet takes a plane sweep approach that involves building a cost volume from deep features using the plane sweep algorithm, regularizing the cost volume, and regressing the depth map from the cost volume. The cost volume is constructed using a differentiable warping process that allows for end-to-end training of the network. Through the effective incorporation of conventional multiview stereo concepts within a deep learning framework, the proposed method achieves state-of-the-art results on a variety of challenging datasets.
- Author(s)
- Im, Sunghoon; Ha, Hyowon; Jeon, Hae-Gon; Lin, Stephen; Kweon, In So
- Issued Date
- 2021-04
- Type
- Article
- DOI
- 10.1109/TPAMI.2019.2946806
- URI
- https://scholar.gist.ac.kr/handle/local/11595
- Publisher
- Institute of Electrical and Electronics Engineers
- Citation
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 4, pp. 1225-1238
- ISSN
- 0162-8828
Appears in Collections:
- Department of AI Convergence > 1. Journal Articles
Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.