The original article is published under the Creative Commons Attribution 4.0 International (CC BY 4.0) license

Real-time gesture recognition using a wearable device and A-mode ultrasound

A-mode ultrasound offers high resolution, low computational cost, and low hardware cost for recognizing dexterous hand gestures. To accelerate the adoption of A-mode ultrasonic gesture recognition, we developed a human-machine interface that interacts with the user in real time. Data processing comprises Gaussian filtering, feature extraction, and PCA dimensionality reduction, and naive Bayes (NB), linear discriminant analysis (LDA), and support vector machine (SVM) algorithms were used to train the machine learning models. The entire pipeline was implemented in C++ to classify gestures in real time. This paper reports offline and real-time experiments with HMI-A (Ultrasound-based Human-Machine Interface in A-mode), involving ten subjects and ten common gestures. To demonstrate the effectiveness of HMI-A and to avoid the influence of chance, the offline experiment collected ten rounds of gestures from each subject for ten-fold cross-validation. The resulting offline detection accuracy is 96.92% ± 1.92%. The real-time experiment was evaluated on four online performance metrics: action selection time, action completion time, action completion rate, and real-time detection accuracy. The action completion rate is 96.0% ± 3.6% and the real-time detection accuracy is 83.8% ± 6.9%. This study confirms the great potential of wearable A-mode ultrasound technology and opens up a wider range of application scenarios for gesture recognition.
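
The processing chain described above (Gaussian filtering, feature extraction, PCA projection, linear classification) can be pictured with a minimal C++ sketch. This is not the authors' HMI-A code: the echo frame, the segment-mean feature, the component matrix, and the classifier weights below are all hypothetical placeholders chosen only to make the sequence of steps concrete; a real system would load PCA components and classifier parameters fitted on training data.

// Minimal sketch (not the authors' implementation) of the pipeline the
// abstract describes. All sizes, kernel widths, and weights are placeholders.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// 1-D Gaussian smoothing with reflective borders.
std::vector<double> gaussianFilter(const std::vector<double>& x, double sigma) {
    int radius = static_cast<int>(std::ceil(3.0 * sigma));
    std::vector<double> kernel(2 * radius + 1);
    double sum = 0.0;
    for (int i = -radius; i <= radius; ++i) {
        kernel[i + radius] = std::exp(-0.5 * i * i / (sigma * sigma));
        sum += kernel[i + radius];
    }
    for (double& k : kernel) k /= sum;

    std::vector<double> y(x.size(), 0.0);
    for (std::size_t n = 0; n < x.size(); ++n) {
        for (int i = -radius; i <= radius; ++i) {
            int j = static_cast<int>(n) + i;
            if (j < 0) j = -j;                                 // reflect left edge
            if (j >= static_cast<int>(x.size()))
                j = 2 * static_cast<int>(x.size()) - j - 2;    // reflect right edge
            y[n] += kernel[i + radius] * x[j];
        }
    }
    return y;
}

// Mean absolute amplitude per fixed-length segment, a simple A-mode feature.
std::vector<double> extractFeatures(const std::vector<double>& echo,
                                    std::size_t segLen) {
    std::vector<double> feats;
    for (std::size_t s = 0; s + segLen <= echo.size(); s += segLen) {
        double acc = 0.0;
        for (std::size_t i = s; i < s + segLen; ++i) acc += std::fabs(echo[i]);
        feats.push_back(acc / segLen);
    }
    return feats;
}

// Project a feature vector onto precomputed PCA components (rows of pc).
std::vector<double> pcaProject(const std::vector<double>& f,
                               const std::vector<std::vector<double>>& pc) {
    std::vector<double> z(pc.size(), 0.0);
    for (std::size_t k = 0; k < pc.size(); ++k)
        for (std::size_t i = 0; i < f.size(); ++i)
            z[k] += pc[k][i] * f[i];
    return z;
}

// Linear classifier: argmax over per-class scores w·z + b
// (LDA reduces to exactly this linear scoring form at prediction time).
int classify(const std::vector<double>& z,
             const std::vector<std::vector<double>>& w,
             const std::vector<double>& b) {
    int best = 0;
    double bestScore = -1e300;
    for (std::size_t c = 0; c < w.size(); ++c) {
        double s = b[c];
        for (std::size_t i = 0; i < z.size(); ++i) s += w[c][i] * z[i];
        if (s > bestScore) { bestScore = s; best = static_cast<int>(c); }
    }
    return best;
}

int main() {
    // Toy echo frame standing in for one ultrasound channel (synthetic data).
    std::vector<double> echo(256);
    for (std::size_t i = 0; i < echo.size(); ++i)
        echo[i] = std::sin(0.1 * i) * std::exp(-0.01 * i);

    auto smoothed = gaussianFilter(echo, 2.0);
    auto feats = extractFeatures(smoothed, 32);  // yields 8 features

    // Placeholder PCA components (2 x 8) and a 2-class linear model.
    std::vector<std::vector<double>> pc = {
        {0.5, 0.5, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0},
        {0.0, 0.0, 0.0, 0.0, 0.5, 0.5, 0.5, 0.5}};
    std::vector<std::vector<double>> w = {{1.0, -1.0}, {-1.0, 1.0}};
    std::vector<double> b = {0.0, 0.0};

    std::cout << "predicted gesture class: "
              << classify(pcaProject(feats, pc), w, b) << "\n";
    return 0;
}

In a real-time deployment, main's body would run once per incoming echo frame inside an acquisition loop, with one feature vector per transducer channel concatenated before the PCA projection.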