Are you smarter than a sixth grader? Textbook question answering for multimodal machine comprehension
- Abstract
- We introduce the task of Multi-Modal Machine Comprehension (M3C), which aims at answering multimodal questions given a context of text, diagrams, and images. We present the Textbook Question Answering (TQA) dataset, which includes 1,076 lessons and 26,260 multi-modal questions taken from middle school science curricula. Our analysis shows that a significant portion of the questions require complex parsing of the text and diagrams, as well as reasoning over them, indicating that our dataset is more complex than previous machine comprehension and visual question answering datasets. We extend state-of-the-art methods for textual machine comprehension and visual question answering to the TQA dataset. Our experiments show that these models do not perform well on TQA. The presented dataset opens new challenges for research in question answering and reasoning across multiple modalities. © 2017 IEEE.
- Author(s)
- Kembhavi, A.; Seo, M.; Schwenk, D.; Choi, Jonghyun; Farhadi, A.; Hajishirzi, H.
- Issued Date
- 2017-07
- Type
- Conference Paper
- DOI
- 10.1109/CVPR.2017.571
- URI
- https://scholar.gist.ac.kr/handle/local/20277
- Publisher
- Institute of Electrical and Electronics Engineers Inc.
- Citation
- Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
- Conference Place
- US
Appears in Collections:
- Department of AI Convergence > 2. Conference Papers