Real-Time Human Detection for Aerial Captured Video Sequences via Deep Models

AlDahoul, Nouar and Sabri, Aznul Qalid Md and Mansoor, Ali Mohammed (2018) Real-Time Human Detection for Aerial Captured Video Sequences via Deep Models. Computational Intelligence and Neuroscience, 2018. pp. 1-14. ISSN 1687-5265

Full text not available from this repository.
Official URL: https://doi.org/10.1155/2018/1639561

Abstract

Human detection in videos plays an important role in various real-life applications. Most traditional approaches depend on handcrafted features, which are problem-dependent and optimal only for specific tasks. Moreover, they are highly susceptible to dynamic events such as illumination changes, camera jitter, and variations in object size. Feature learning approaches, by contrast, are cheaper and easier to apply because highly abstract and discriminative features can be produced automatically without the need for expert knowledge. In this paper, we utilize automatic feature learning methods that combine optical flow with three different deep models (i.e., a supervised convolutional neural network (S-CNN), a pretrained CNN feature extractor, and a hierarchical extreme learning machine (H-ELM)) for human detection in videos captured by a nonstatic camera on an aerial platform with varying altitudes. The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset. The models are compared in terms of training accuracy, testing accuracy, and learning speed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrate that the proposed methods are successful for the human detection task. The pretrained CNN produces an average accuracy of 98.09%. S-CNN produces an average accuracy of 95.6% with softmax and 91.7% with Support Vector Machines (SVM). H-ELM has an average accuracy of 95.9%. On a standard Central Processing Unit (CPU), training H-ELM takes 445 seconds; training S-CNN takes 770 seconds on a high-performance Graphics Processing Unit (GPU).
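The pipeline the abstract describes first uses motion cues (optical flow) to propose moving regions in each frame, then passes the cropped region to a deep model for human/non-human classification. A minimal sketch of the motion-region proposal stage is shown below; it uses simple frame differencing as a stand-in for dense optical flow, and the thresholds, function names, and synthetic frames are illustrative assumptions rather than the paper's actual implementation.

```python
import numpy as np

def motion_mask(prev_frame, next_frame, threshold=10):
    """Boolean mask of pixels with large temporal change.

    Frame differencing is used here as a cheap stand-in for the
    magnitude of a dense optical flow field (illustrative only).
    """
    diff = np.abs(next_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def bounding_box(mask):
    """Smallest (top, left, bottom, right) box covering all moving
    pixels; returns None when no motion was detected."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1)

# Synthetic example: a bright 8x8 "person" patch shifts 3 pixels right
# between two grayscale frames (hypothetical data, not UCF-ARG).
prev_f = np.zeros((64, 64), dtype=np.uint8)
next_f = np.zeros((64, 64), dtype=np.uint8)
prev_f[20:28, 10:18] = 200
next_f[20:28, 13:21] = 200

box = bounding_box(motion_mask(prev_f, next_f))
print(box)  # a crop of next_f at this box would go to the classifier
```

In the full system, the cropped region would be resized and fed to the S-CNN, the pretrained CNN feature extractor, or the H-ELM for classification; the frame-differencing step above merely illustrates why a motion cue keeps the classifier's search space small on aerial footage.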

Item Type: Article
Uncontrolled Keywords: Humans; Image Processing, Computer-Assisted; Machine Learning; Motion; Motor Activity; Neural Networks (Computer); Pattern Recognition, Automated; Time Factors; Video Recording
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Faculty of Computer Science & Information Technology
Depositing User: Ms. Juhaida Abd Rahim
Date Deposited: 26 Sep 2019 06:31
Last Modified: 26 Sep 2019 06:31
URI: http://eprints.um.edu.my/id/eprint/22582
