Enhancing human activity recognition with lightweight CNN models and integrated blocks


Teppakorn Sittiwanchai
Uttapon Khawnuan
Nantakrit Yodpijit

Abstract

Human activity recognition (HAR) is crucial for health tracking, fitness monitoring, and fall detection systems. Convolutional neural network (CNN) models have recently proven highly effective for HAR tasks. This study aimed to enhance HAR performance by integrating specific architectural improvements, namely identity, convolutional, and bottleneck blocks, into lightweight CNN models. To evaluate the effectiveness of these enhancements, two data sets were used: the Human Activity Recognition Using Smartphones data set version 1.0 (UCI-HAR) and the Wireless Sensor Data Mining (WISDM) activity prediction data set version 1.1. The results indicated that the convolutional and identity block models outperformed the original lightweight CNN model on both data sets. The proposed models strike a balance between recognition performance and computational complexity, making them suitable for real-world applications. These findings contribute to the field of HAR and provide valuable insights for improving the recognition and classification of human activities.
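The abstract names identity, convolutional, and bottleneck blocks as the integrated components but does not include code. The following is a minimal PyTorch sketch of what ResNet-style 1D versions of these three blocks typically look like when applied to windowed inertial sensor data; the layer counts, channel widths, and the small example classifier are illustrative assumptions, not the authors' exact architecture or hyperparameters.

```python
import torch
import torch.nn as nn


class IdentityBlock1D(nn.Module):
    """Residual block whose shortcut is the unmodified input
    (input and output channel counts must match)."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)


class ConvBlock1D(nn.Module):
    """Residual block with a 1x1 convolution on the shortcut so the
    channel count and temporal resolution can change."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size, stride=stride, padding=pad),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv1d(out_ch, out_ch, kernel_size, padding=pad),
            nn.BatchNorm1d(out_ch),
        )
        self.shortcut = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=1, stride=stride),
            nn.BatchNorm1d(out_ch),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.shortcut(x))


class BottleneckBlock1D(nn.Module):
    """Residual block that squeezes channels with 1x1 convolutions around
    the wider convolution to keep the parameter count low."""
    def __init__(self, channels, reduction=4, kernel_size=3):
        super().__init__()
        mid = channels // reduction
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, mid, kernel_size=1),
            nn.BatchNorm1d(mid),
            nn.ReLU(inplace=True),
            nn.Conv1d(mid, mid, kernel_size, padding=pad),
            nn.BatchNorm1d(mid),
            nn.ReLU(inplace=True),
            nn.Conv1d(mid, channels, kernel_size=1),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)


# Illustrative lightweight 1D CNN for UCI-HAR-style input:
# windows of 128 time steps, 9 inertial signal channels, 6 activity classes.
model = nn.Sequential(
    nn.Conv1d(9, 64, kernel_size=3, padding=1),
    nn.BatchNorm1d(64),
    nn.ReLU(inplace=True),
    IdentityBlock1D(64),
    ConvBlock1D(64, 128, stride=2),
    BottleneckBlock1D(128),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(128, 6),
)

x = torch.randn(8, 9, 128)   # (batch, sensor channels, time steps)
logits = model(x)            # -> shape (8, 6)
```

The design intent sketched here matches the abstract's framing: the identity and bottleneck blocks add residual learning without changing the feature-map shape, while the convolutional block handles channel or resolution changes on the shortcut path, keeping the overall network lightweight.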


Article Details

How to Cite
Sittiwanchai, T., Khawnuan, U., & Yodpijit, N. (2024). Enhancing human activity recognition with lightweight CNN models and integrated blocks. Science, Engineering and Health Studies, 18, 24040001. https://doi.org/10.69598/sehs.18.24040001
Section
Engineering
