
ActionFlowNet: Learning motion representation for action recognition

Abstract
We present a data-efficient representation learning approach that learns video representations from a small amount of labeled data. We propose a multitask learning model, ActionFlowNet, which trains a single-stream convolutional neural network directly from raw pixels to jointly estimate optical flow while recognizing actions, capturing both appearance and motion in a single model. Our model effectively learns video representations from motion information in unlabeled videos. It improves action recognition accuracy by a large margin (23.6%) over state-of-the-art CNN-based unsupervised representation learning methods trained without external large-scale data and without additional optical flow input. Without pretraining on large external labeled datasets, our model, by exploiting motion information well, achieves recognition accuracy competitive with models trained on large labeled datasets such as ImageNet and Sports-1M. © 2018 IEEE.
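To make the multitask idea in the abstract concrete, below is a minimal sketch of a single-stream network whose shared encoder feeds both an action-classification head and an optical-flow-estimation head, trained with a joint loss. All module names, layer sizes, and the loss weight `lambda_flow` are illustrative assumptions, not the architecture or hyperparameters from the paper.

```python
import torch
import torch.nn as nn

class MultiTaskActionFlowSketch(nn.Module):
    """Hypothetical single-stream network: one shared 3D-conv encoder
    feeds an action-classification head and a flow-regression head."""
    def __init__(self, num_actions=101):
        super().__init__()
        # Shared encoder over a clip of stacked RGB frames (B, 3, T, H, W).
        self.encoder = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Action head: global average pooling + linear classifier.
        self.classifier = nn.Linear(128, num_actions)
        # Flow head: per-pixel 2-channel (u, v) flow prediction.
        self.flow_head = nn.Conv3d(128, 2, kernel_size=3, padding=1)

    def forward(self, clip):
        feat = self.encoder(clip)                        # (B, 128, T, H, W)
        logits = self.classifier(feat.mean(dim=(2, 3, 4)))  # (B, num_actions)
        flow = self.flow_head(feat)                      # (B, 2, T, H, W)
        return logits, flow

def multitask_loss(logits, flow_pred, labels, flow_target, lambda_flow=0.5):
    # Joint objective: classification loss plus a weighted flow-regression
    # loss; lambda_flow balances the two tasks (value here is a guess).
    # flow_target is assumed to be precomputed per-frame optical flow
    # matching flow_pred's shape.
    cls_loss = nn.functional.cross_entropy(logits, labels)
    flow_loss = nn.functional.l1_loss(flow_pred, flow_target)
    return cls_loss + lambda_flow * flow_loss
```

Because both losses backpropagate through the shared encoder, the flow-estimation task acts as a motion-supervision signal on unlabeled frames, which is how such a model can learn motion-aware features without optical flow being supplied as an input at test time.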
Author(s)
Ng, J.Y.-H.; Choi, Jonghyun; Neumann, J.; Davis, Larry Steven
Issued Date
2018-03
Type
Conference Paper
DOI
10.1109/WACV.2018.00179
URI
https://scholar.gist.ac.kr/handle/local/20005
Publisher
Institute of Electrical and Electronics Engineers Inc.
Citation
Proceedings - 2018 IEEE Winter Conference on Applications of Computer Vision, WACV 2018
Conference Place
US
Appears in Collections:
Department of AI Convergence > 2. Conference Papers
Access and License
  • Access type: Open
File List
  • No related files exist.

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.