
Heterogeneous multi-modal sensor fusion with hybrid attention for exercise recognition.

Wijekoon, Anjana; Wiratunga, Nirmalie; Cooper, Kay

Abstract

Exercise adherence is a key component of digital behaviour change interventions for the self-management of musculoskeletal pain. Automated monitoring of exercise adherence requires sensors that can capture patients performing exercises and Machine Learning (ML) algorithms that can recognise those exercises. In contrast to ambulatory activities, which are recognisable from wrist accelerometer data, exercises require multiple sensor modalities because of the complexity of the movements and the settings involved. Exercise Recognition (ExR) poses many challenges to ML researchers due to the heterogeneity of the sensor modalities (e.g. image/video streams, wearables, pressure mats). We recently published MEx, a benchmark dataset for ExR, to promote the study of new and transferable HAR methods for ExR, and benchmarked state-of-the-art ML algorithms on 4 modalities. The results highlighted the need for fusion methods that unite the individual strengths of the modalities. In this paper we explore fusion methods with a focus on attention and propose mHAF, a novel multi-modal hybrid attention fusion architecture for ExR. We achieve the best performance of 96.24% (F1-measure) with a modality combination of a pressure mat, a depth camera and an accelerometer on the thigh. mHAF significantly outperforms multiple baselines, and the contributions of its model components are verified in an ablation study. The benefits of attention fusion are clearly demonstrated by visualising attention weights, showing how mHAF learns feature importance and modality combinations suited to different exercise classes. Finally, we highlight the importance of improving deployability and minimising obtrusiveness by exploring the best-performing 2- and 3-modality combinations.
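To illustrate the general idea of modality-level attention fusion described in the abstract, the sketch below computes a softmax attention weight per modality and fuses per-modality feature vectors as a weighted sum. This is a minimal, generic illustration only, not the mHAF architecture from the paper (which combines feature- and modality-level attention); the scoring vector `w_att`, the feature dimension, and the modality names are hypothetical stand-ins.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(features, w_att):
    """Fuse per-modality feature vectors with attention weights.

    features: dict mapping modality name -> (d,) feature vector
    w_att:    (d,) scoring vector standing in for learned attention
              parameters (hypothetical; mHAF is more elaborate).
    Returns the fused (d,) vector and the per-modality weights.
    """
    names = sorted(features)
    F = np.stack([features[m] for m in names])   # (M, d) stacked modalities
    scores = F @ w_att                           # one relevance score per modality
    alpha = softmax(scores)                      # attention weights, sum to 1
    fused = alpha @ F                            # attention-weighted sum over modalities
    return fused, dict(zip(names, alpha))

# Toy inputs mirroring the paper's best 3-modality combination.
rng = np.random.default_rng(0)
d = 8
feats = {"pressure_mat": rng.normal(size=d),
         "depth_camera": rng.normal(size=d),
         "thigh_accel":  rng.normal(size=d)}
fused, weights = attention_fuse(feats, rng.normal(size=d))
```

Because the weights form a distribution over modalities, inspecting them (as the paper does by visualising attention weights) shows which modality combination a given prediction relied on.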

Citation

WIJEKOON, A., WIRATUNGA, N. and COOPER, K. 2020. Heterogeneous multi-modal sensor fusion with hybrid attention for exercise recognition. In Proceedings of the 2020 Institute of Electrical and Electronics Engineers (IEEE) International joint conference on neural networks (IEEE IJCNN 2020), part of the 2020 IEEE World congress on computational intelligence (IEEE WCCI 2020) and co-located with the 2020 IEEE congress on evolutionary computation (IEEE CEC 2020) and the 2020 IEEE International fuzzy systems conference (FUZZ-IEEE 2020), 19-24 July 2020, [virtual conference]. Piscataway: IEEE [online], article ID 9206941. Available from: https://doi.org/10.1109/IJCNN48605.2020.9206941

Conference Name: 2020 Institute of Electrical and Electronics Engineers (IEEE) World computational intelligence congress (WCCI 2020), co-located with the 2020 International Joint Conference on Neural Networks (IJCNN 2020), the 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2020) and the 2020 IEEE Congress on Evolutionary Computation (CEC 2020)
Conference Location: [virtual conference]
Start Date: Jul 19, 2020
End Date: Jul 24, 2020
Acceptance Date: Mar 15, 2020
Online Publication Date: Jul 19, 2020
Publication Date: Sep 28, 2020
Deposit Date: Mar 30, 2020
Publicly Available Date: Mar 29, 2024
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Series ISSN: 2161-4393
Book Title: Proceedings of the 2020 Institute of Electrical and Electronics Engineers (IEEE) International joint conference on neural networks (IEEE IJCNN 2020), part of the 2020 IEEE World congress on computational intelligence (IEEE WCCI 2020) and co-located with the 2020 IEEE congress on evolutionary computation (IEEE CEC 2020) and the 2020 IEEE International fuzzy systems conference (FUZZ-IEEE 2020)
DOI: https://doi.org/10.1109/IJCNN48605.2020.9206941
Keywords: Attention; Multi-Modal Fusion; Heterogeneous sensor modalities; Exercise Recognition
Public URL: https://rgu-repository.worktribe.com/output/886985
