OAK

Structured Set Matching Networks for One-Shot Part Labeling

Author(s)
Jonghyun Choi; Jayant Krishnamurthy; Aniruddha Kembhavi; Ali Farhadi
Type
Conference Paper
Citation
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018, pp. 3627-3636
Issued Date
2018-06-22
Abstract
Diagrams often depict complex phenomena and serve as a good test bed for visual and textual reasoning. However, understanding diagrams with natural image understanding approaches requires large training datasets of diagrams, which are very hard to obtain. Instead, the task can be addressed as a matching problem between labeled diagrams, between images, or across both. This problem is very challenging because the absence of significant color and texture renders local cues ambiguous and requires global reasoning. We consider the problem of one-shot part labeling: labeling multiple parts of an object in a target image given only a single source image of that category. For this set-to-set matching problem, we introduce the Structured Set Matching Network (SSMN), a structured prediction model that incorporates convolutional neural networks. The SSMN is trained with global normalization to maximize local match scores between corresponding elements and a global consistency score among all matched elements, while also enforcing a matching constraint between the two sets. The SSMN significantly outperforms several strong baselines on three label transfer scenarios: diagram-to-diagram, evaluated on a new diagram dataset of over 200 categories; image-to-image, evaluated on a dataset built on top of the Pascal Part Dataset; and image-to-diagram, evaluated on transferring labels across these datasets.
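The abstract describes matching under three ingredients: local match scores between part pairs, a global consistency score over the whole assignment, and a one-to-one matching constraint between the two part sets. The following is a minimal illustrative sketch of that scoring scheme, not the SSMN itself: it assumes a precomputed square matrix of local scores and a user-supplied global consistency function, and finds the best one-to-one assignment by brute-force enumeration (the actual model learns these scores with CNNs and uses structured prediction rather than exhaustive search).

```python
from itertools import permutations


def match_parts(local_scores, global_score):
    """Find the one-to-one assignment of source parts to target parts
    maximizing summed local match scores plus a global consistency term.

    local_scores : n x n list of lists; local_scores[i][j] is the local
                   match score between source part i and target part j.
    global_score : callable taking an assignment (tuple mapping source
                   index -> target index) and returning a consistency
                   score over all matched elements.

    Brute force over all n! permutations; only feasible for small n,
    and meant purely to illustrate the scoring decomposition.
    """
    n = len(local_scores)
    best_score, best_assign = float("-inf"), None
    for perm in permutations(range(n)):  # one-to-one matching constraint
        score = sum(local_scores[i][perm[i]] for i in range(n))
        score += global_score(perm)
        if score > best_score:
            best_score, best_assign = score, perm
    return best_assign, best_score


# Toy example: 3 parts, a trivial global term of zero.
local = [[0.9, 0.1, 0.0],
         [0.2, 0.8, 0.1],
         [0.0, 0.3, 0.7]]
assignment, score = match_parts(local, lambda perm: 0.0)
print(assignment, score)  # identity assignment (0, 1, 2) with score 2.4
```

The global term is where structural reasoning enters: for example, it could reward assignments that preserve the relative spatial layout of parts between source and target, which is what disambiguates parts when local cues (color, texture) are weak.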
Publisher
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018
Conference Place
Salt Lake City, US
URI
https://scholar.gist.ac.kr/handle/local/19988
Access and License
  • Access type: Open
File List
  • No associated files exist.

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.