Multi-sensor fusion based on multiple classifier systems for human activity identification

Nweke, Henry Friday and Teh, Ying Wah and Mujtaba, Ghulam and Alo, Uzoma Rita and Al-garadi, Mohammed Ali (2019) Multi-sensor fusion based on multiple classifier systems for human activity identification. Human-centric Computing and Information Sciences, 9 (1). p. 34. ISSN 2192-1962, DOI https://doi.org/10.1186/s13673-019-0194-5.

Full text not available from this repository.
Official URL: https://doi.org/10.1186/s13673-019-0194-5

Abstract

Multimodal sensors in healthcare applications have been increasingly researched because they facilitate automatic and comprehensive monitoring of human behaviors, high-intensity sports management, energy expenditure estimation, and postural detection. Recent studies have shown the importance of multi-sensor fusion to achieve robustness, high-performance generalization, provide diversity, and tackle challenging issues that may be difficult with single-sensor values. The aim of this study is to propose an innovative multi-sensor fusion framework to improve human activity detection performance and reduce the misrecognition rate. The study proposes a multi-view ensemble algorithm to integrate the predicted values of different motion sensors. To this end, computationally efficient classification algorithms such as decision tree, logistic regression, and k-Nearest Neighbors were used to implement diverse, flexible, and dynamic human activity detection systems. To provide a compact feature vector representation, we studied a hybrid of a bio-inspired evolutionary search algorithm and a correlation-based feature selection method, and evaluated their impact on the feature vectors extracted from each individual sensor modality. Furthermore, we utilized the Synthetic Minority Over-sampling Technique (SMOTE) to reduce the impact of class imbalance and improve performance results. With the above methods, this paper provides a unified framework to resolve major challenges in human activity identification. The performance results obtained using two publicly available datasets showed significant improvement over baseline methods in the detection of specific activity details and a reduced error rate. The performance results of our evaluation showed a 3% to 24% improvement in accuracy, recall, precision, F-measure, and detection ability (AUC) compared to single sensors and feature-level fusion.
The benefit of the proposed multi-sensor fusion is the ability to utilize the distinct feature characteristics of individual sensors and multiple classifier systems to improve recognition accuracy. In addition, the study suggests the promising potential of the hybrid feature selection approach and diversity-based multiple classifier systems to improve mobile and wearable sensor-based human activity detection and health monitoring systems. © 2019, The Author(s).
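The multi-view stacking idea described in the abstract — train a base classifier per sensor view, then fuse their predicted probabilities with a meta-learner — can be sketched as follows. This is an illustrative sketch using scikit-learn on synthetic data, not the authors' implementation: the two "views" here are arbitrary halves of a generated feature matrix standing in for, say, accelerometer and gyroscope feature vectors, and the choice of base and meta classifiers simply mirrors the algorithms named in the abstract.

```python
# Illustrative multi-view stacking ensemble: one base classifier per
# sensor view, a logistic-regression meta-learner over their class
# probabilities. Data and view split are synthetic assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic 3-class "activity" data; pretend each half of the feature
# matrix comes from a different wearable sensor.
X, y = make_classification(n_samples=600, n_features=8, n_informative=6,
                           n_classes=3, random_state=0)
views = [X[:, :4], X[:, 4:]]

idx_tr, idx_te = train_test_split(np.arange(len(y)), test_size=0.3,
                                  stratify=y, random_state=0)

base_clfs = [DecisionTreeClassifier(random_state=0), KNeighborsClassifier()]
meta_tr, meta_te = [], []
for view, clf in zip(views, base_clfs):
    # Out-of-fold probabilities on the training split keep the
    # meta-learner's inputs free of label leakage.
    meta_tr.append(cross_val_predict(clf, view[idx_tr], y[idx_tr],
                                     cv=5, method="predict_proba"))
    clf.fit(view[idx_tr], y[idx_tr])
    meta_te.append(clf.predict_proba(view[idx_te]))

# Meta-learner fuses the per-view probability vectors.
stacker = LogisticRegression(max_iter=1000)
stacker.fit(np.hstack(meta_tr), y[idx_tr])
acc = accuracy_score(y[idx_te], stacker.predict(np.hstack(meta_te)))
print(f"multi-view stacking accuracy: {acc:.2f}")
```

In the paper's setting, each view would carry the feature vector selected for one sensor modality (after the hybrid feature selection step), and SMOTE would be applied to the training split before fitting the base classifiers.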

Item Type: Article
Funders: University of Malaya BKP Special Grant no vote BKS006-2018
Uncontrolled Keywords: Activity detection; Activity identification; Feature-level fusion; Multi-view stacking ensemble; Multiple classifier systems; Multiple sensor fusion; Wearable sensors
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Faculty of Computer Science & Information Technology
Depositing User: Ms. Juhaida Abd Rahim
Date Deposited: 17 Feb 2020 00:50
Last Modified: 17 Feb 2020 00:50
URI: http://eprints.um.edu.my/id/eprint/23807
