Action recognition on continuous video

Chang, Y. L. and Chan, Chee Seng and Remagnino, P. (2021) Action recognition on continuous video. Neural Computing and Applications, 33 (4). pp. 1233-1243. ISSN 0941-0643, DOI https://doi.org/10.1007/s00521-020-04982-9.


Abstract

Video action recognition has been a challenging task over the years. The challenge lies not only in the growing amount of information in videos but also in the need for an efficient method to retain information over the longer time span that a human action takes to perform. This paper proposes a novel framework, named long-term video action recognition (LVAR), to perform generic action classification in continuous video. The idea of LVAR is to introduce a partial recurrence connection that propagates information within every layer of a spatial-temporal network, such as the well-known C3D. Empirically, we show that this addition allows the C3D network to access long-term information and subsequently improves action recognition performance on videos of different lengths selected from both the UCF101 and miniKinetics datasets. Our approach is further confirmed with experiments on untrimmed videos from the Thumos14 dataset.
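To make the idea in the abstract concrete, the sketch below shows one possible way a partial recurrent connection could be added to a C3D-style 3D-convolutional block so that features from the previous clip are folded into the current one. This is a minimal illustration under assumed design choices, not the authors' LVAR implementation; the class and variable names (PartialRecurrentC3DBlock, hidden, recur) are hypothetical.

```python
# Illustrative sketch (not the paper's code): a 3D conv block whose output is
# mixed with a hidden state carried over from the previous clip, i.e. a
# "partial recurrence" rather than a full recurrent cell.
import torch
import torch.nn as nn


class PartialRecurrentC3DBlock(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv = nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1)
        # 1x1x1 convolution that folds the previous clip's features back in.
        self.recur = nn.Conv3d(out_channels, out_channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x, hidden=None):
        out = self.conv(x)
        if hidden is not None:
            # Partial recurrence: add a transformed copy of the previous
            # clip's activation to the current clip's features.
            out = out + self.recur(hidden)
        out = self.relu(out)
        # The current activation becomes the hidden state for the next clip.
        return out, out.detach()


if __name__ == "__main__":
    block = PartialRecurrentC3DBlock(3, 16)
    hidden = None
    # A long video processed as a sequence of short clips:
    # (batch, channels, frames, height, width).
    for _ in range(4):
        clip = torch.randn(1, 3, 16, 112, 112)
        feat, hidden = block(clip, hidden)
    print(feat.shape)  # torch.Size([1, 16, 16, 112, 112])
```

Processing the video clip by clip while passing a hidden state between clips is what would let each layer see information beyond a single fixed-length input window.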

Item Type: Article
Funders: Fundamental Research Grant Scheme (FRGS), Ministry of Education Malaysia (FP021-2018A); Postgraduate Research Grant (PPP), University of Malaya, Malaysia (PG006-2016A)
Uncontrolled Keywords: Deep learning; Action recognition; Propagate information
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Faculty of Computer Science & Information Technology
Depositing User: Ms Zaharah Ramly
Date Deposited: 10 Mar 2022 05:29
Last Modified: 10 Mar 2022 05:29
URI: http://eprints.um.edu.my/id/eprint/27118
