TinyHAR: Benchmarking Human Activity Recognition Systems in Resource Constrained Devices
2022 IEEE 8th World Forum on Internet of Things, WF-IoT 2022
Advances in deep learning, especially Convolutional Neural Networks (CNNs), have revolutionized intelligent frameworks such as Human Activity Recognition (HAR) systems by effectively and efficiently inferring human activity from various modalities of data. However, the training and inference of CNNs are often resource-intensive. Recent research developments have focused on bringing the effectiveness of CNNs to resource-constrained edge devices through Tiny Machine Learning (TinyML). However, this is extremely hard to achieve due to the limitations in memory, compute power, and energy of resource-constrained edge devices. This paper provides a benchmark to understand the trade-offs among variations of CNN network architectures, different training methodologies, and different modalities of data in the context of HAR, TinyML, and edge devices. We tested and reported the performance of CNN and Depthwise Separable CNN (DSCNN) models as well as two training methodologies, Quantization Aware Training (QAT) and Post-Training Quantization (PTQ), on five commonly used benchmark datasets containing image and time-series data: UP-Fall, Fall Detection Dataset (FDD), PAMAP2, UCI-HAR, and WISDM. We also deployed the models as standalone applications on multiple commonly available resource-constrained edge devices and measured their inference time and power consumption. The experimental results demonstrate the effectiveness and feasibility of TinyML for HAR on edge devices.
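The abstract contrasts standard CNNs with Depthwise Separable CNNs (DSCNNs), whose appeal on constrained devices comes from a much smaller parameter count: a standard k×k convolution from C_in to C_out channels needs k·k·C_in·C_out weights, while the depthwise-then-pointwise factorization needs only k·k·C_in + C_in·C_out. A minimal sketch of this arithmetic (illustrative layer sizes, not taken from the paper):

```python
def conv2d_params(k, c_in, c_out):
    # Standard convolution: one k x k x c_in filter per output channel.
    return k * k * c_in * c_out

def dscnn_params(k, c_in, c_out):
    # Depthwise step: one k x k filter per input channel;
    # pointwise step: one 1 x 1 x c_in filter per output channel.
    return k * k * c_in + c_in * c_out

# Hypothetical layer: 3x3 kernel, 64 input channels, 128 output channels.
standard = conv2d_params(3, 64, 128)   # 73728 weights
separable = dscnn_params(3, 64, 128)   # 576 + 8192 = 8768 weights
print(standard, separable, round(standard / separable, 1))
```

For this layer the separable form uses roughly 8.4× fewer weights (biases omitted for simplicity), which is the kind of saving that makes CNN-style models viable in TinyML memory budgets.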
Convolutional Neural Network, Depthwise Separable CNN, Edge Devices, Human Activity Recognition, TinyML
S. Nooruddin, M. M. Islam and F. Karray, "TinyHAR: Benchmarking Human Activity Recognition Systems in Resource Constrained Devices," 2022 IEEE 8th World Forum on Internet of Things (WF-IoT), Yokohama, Japan, 2022, pp. 1-8, doi: 10.1109/WF-IoT54382.2022.10152039.