Funding: This work was funded by the National Natural Science Foundation of China (Nos. 62072212, 62302218), the Development Project of Jilin Province of China (Nos. 20220508125RC, 20230201065GX, 20240101364JC), the National Key R&D Program (No. 2018YFC2001302), and the Jilin Provincial Key Laboratory of Big Data Intelligent Cognition (No. 20210504003GH).
Abstract: Locomotor intent classification has become a research hotspot due to its importance to the development of assistive robotics and wearable devices. Previous work has achieved impressive performance in classifying steady locomotion states. However, it remains challenging for these methods to attain high accuracy when facing transitions between steady locomotion states, owing to the similarities between the information of the transitions and that of their adjacent steady states. Furthermore, most of these methods rely solely on data and overlook the objective laws governing physical activities, resulting in lower accuracy, particularly when encountering complex locomotion modes such as transitions. To address these deficiencies, we propose the locomotion rule embedding long short-term memory (LSTM) network with attention (LREAL) for human locomotor intent classification, with a particular focus on transitions, using data from fewer sensors (two inertial measurement units and four goniometers). The LREAL network consists of two levels: one responsible for distinguishing between steady states and transitions, and the other for the accurate identification of locomotor intent. Each classifier in these levels is composed of multiple LSTM layers and an attention mechanism. To introduce real-world motion rules and apply constraints to the network, prior knowledge was added to the network via a rule-modulating block. The method was tested on the ENABL3S dataset, which contains continuous locomotion data for seven steady states and twelve transition states. Experimental results showed that the LREAL network could recognize locomotor intents with an average accuracy of 99.03% and 96.52% for the steady and transition states, respectively. It is worth noting that the LREAL network's accuracy for transition-state recognition improved by 0.18% compared to other state-of-the-art networks, while using data from fewer sensors.
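The two-level decision structure described in the abstract (a first level gating steady states versus transitions, a second level picking the intent class, with attention pooled over recurrent hidden states) can be sketched as follows. This is a minimal illustrative sketch, not the authors' LREAL implementation: the LSTM layers, the rule-modulating block, and all weight names (`scoring_vec`, `gate_head`, `steady_head`, `transition_head`) are assumptions, and attention is reduced to a single learned scoring vector over precomputed hidden states.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(h, scoring_vec):
    # h: (T, d) sequence of recurrent hidden states; scoring_vec: (d,).
    # Attention weights over time steps, then a weighted summary vector.
    weights = softmax(h @ scoring_vec)   # (T,), sums to 1
    return weights @ h                   # (d,)

def two_level_intent(h, scoring_vec, steady_head, transition_head, gate_head):
    """Level 1 decides steady vs. transition; level 2 scores intent classes.

    steady_head / transition_head: (d, C) class-scoring matrices (hypothetical).
    gate_head: (d,) binary steady-vs-transition scorer (hypothetical).
    """
    ctx = attention_pool(h, scoring_vec)
    is_transition = bool((ctx @ gate_head) > 0)      # level-1 binary gate
    head = transition_head if is_transition else steady_head
    logits = ctx @ head                              # level-2 class scores
    return is_transition, int(np.argmax(logits))
```

In the paper's setting the two levels are trained classifiers over multi-layer LSTM outputs; the sketch only shows how the gate routes a pooled representation to one of two class heads.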
Funding: This work was supported by an NRF of Korea grant funded by the Korea government (MSIT) (No. 2019R1F1A1062829).
Abstract: In this paper, we propose a BPR-CNN (Biometric Pattern Recognition-Convolutional Neural Network) classifier for hand motion classification, as well as a dynamic threshold algorithm for motion signal detection and extraction by EF (Electric Field) sensors. Currently, EF sensor or EPS (Electric Potential Sensor) systems are attracting attention as a next-generation motion-sensing technology owing to their low computation and price, and their high sensitivity and recognition speed, compared to other sensor systems. However, it remains a challenging problem to accurately detect and locate the authentic motion signal frame automatically in real time when sensing body motions such as hand motion, due to the variance of the electric-charge state caused by heterogeneous surroundings and operational conditions. This hinders the further utilization of EF sensing; thus, it is critical to design a robust and credible methodology for detecting and extracting signals derived from motion in order to apply EF sensor technology to consumer electronics such as mobile devices. In this study, we propose a motion detection algorithm using a dynamic offset-threshold method to overcome uncertainty in the initial electrostatic charge state of the sensor, which is affected by the user and the surrounding environment of the subject. This method is designed to detect hand motions and extract their genuine motion signal frames with high accuracy. After setting motion frames, we normalize the signals and then apply them to our proposed BPR-CNN motion classifier to recognize their motion types. Experiments and analysis show that our proposed dynamic threshold method combined with a BPR-CNN classifier can detect hand motions and extract the actual frames effectively, with 97.1% accuracy, a 99.25% detection rate, a 98.4% motion frame matching rate, and a 97.7% detection-and-extraction success rate.
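The dynamic offset-threshold idea described above (a baseline that tracks slow drift in the sensor's electrostatic charge state, with a threshold that adapts to the running deviation) can be sketched as follows. This is an illustrative sketch under stated assumptions, not the paper's algorithm: the moving-average update rule, the smoothing factors, and the parameter names (`win`, `k`, `min_len`) are all hypothetical.

```python
import numpy as np

def detect_motion_frames(signal, win=5, k=3.0, min_len=3):
    """Extract (start, end) motion frames from a 1-D EF sensor trace.

    The baseline (offset) is a running average updated only on quiescent
    samples, so it follows slow electrostatic drift without absorbing the
    motion itself; the detection threshold is k times the running deviation.
    """
    sig = np.asarray(signal, dtype=float)
    baseline = sig[:win].mean()             # initial offset estimate
    dev = max(sig[:win].std(), 1e-6)        # initial deviation (floored)
    frames, start = [], None
    for i, x in enumerate(sig):
        if abs(x - baseline) > k * dev:
            if start is None:
                start = i                   # a motion frame opens
        else:
            if start is not None:
                if i - start >= min_len:    # discard too-short bursts
                    frames.append((start, i))
                start = None
            # update offset/deviation only while the signal is quiescent
            baseline = 0.95 * baseline + 0.05 * x
            dev = max(0.95 * dev + 0.05 * abs(x - baseline), 1e-6)
    if start is not None and len(sig) - start >= min_len:
        frames.append((start, len(sig)))    # frame still open at end of trace
    return frames
```

Frames returned this way would then be normalized and fed to the classifier, as the abstract describes; the key design choice is freezing the baseline update inside a frame so a long gesture cannot drag the offset toward itself.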