
What and when to look? Temporal span proposal network for video relation detection

Author(s)
Woo, Sangmin; Noh, Junhyug; Kim, Kangil
Type
Article
Citation
Expert Systems with Applications, v.297, no.C
Issued Date
2026-02
Abstract
Identifying relations between objects is central to understanding a scene. While several works have been proposed for relation modeling in the image domain, progress in the video domain has been constrained by the challenging dynamics of spatio-temporal interactions (e.g., between which objects is there an interaction? when do relations start and end?). To date, two representative approaches have been proposed to tackle Video Visual Relation Detection (VidVRD): segment-based and window-based. Segment-based methods lack temporal continuity, while window-based methods scale poorly. To address these limitations, we propose a novel approach named Temporal Span Proposal Network (TSPN). TSPN tells what to look at: it sparsifies the relation search space by scoring the relationness of each object pair, i.e., measuring how probable it is that a relation exists. TSPN tells when to look: it simultaneously predicts the start-end timestamps (i.e., temporal spans) and categories of all possible relations by utilizing the full video context. These two designs enable a win-win scenario: TSPN trains 2X or more faster than existing methods and achieves competitive performance on two VidVRD benchmarks (ImageNet-VidVRD and VidOR). Moreover, comprehensive ablative experiments demonstrate the effectiveness of our approach.
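The two mechanisms described in the abstract can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the feature dimensions, random weights, and function names (`score_relationness`, `predict_spans`) are assumptions made purely to show the "what to look at" (sparsify pairs by relationness score) and "when to look" (regress a start-end span per surviving pair) ideas.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score_relationness(pair_features, w):
    # "What to look at": score how probable a relation is for each object pair,
    # so that low-scoring pairs can be pruned from the relation search space.
    return sigmoid(pair_features @ w)

def predict_spans(pair_features, w_span, num_frames):
    # "When to look": regress normalized (start, end) timestamps per pair
    # from video-level features, then map them to frame indices.
    raw = sigmoid(pair_features @ w_span)            # (num_pairs, 2) in [0, 1]
    spans = np.sort(raw, axis=1) * (num_frames - 1)  # enforce start <= end
    return spans.astype(int)

# Toy run: 5 candidate object pairs, 16-d pair features, a 30-frame video.
feats = rng.normal(size=(5, 16))
w_rel = rng.normal(size=16)
w_span = rng.normal(size=(16, 2))

scores = score_relationness(feats, w_rel)
keep = scores > 0.5                  # sparsified pair set ("what")
spans = predict_spans(feats[keep], w_span, num_frames=30)  # spans ("when")
```

In the actual paper the scorer and span predictor would be learned networks conditioned on full video context; here both are random linear maps, which suffices to show the data flow from pair scoring to span prediction.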
Publisher
Elsevier BV
ISSN
0957-4174
DOI
10.1016/j.eswa.2025.129503
URI
https://scholar.gist.ac.kr/handle/local/31991
Access and License
  • Access status: Open
File List
  • No related files exist.

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.