Lower limb motion recognition techniques commonly employ the surface electromyographic signal (sEMG) as input and apply a machine learning classifier or a back-propagation neural network (BPNN) for classification. However, this manual feature engineering technique does not generalize to similar tasks and relies heavily on the researcher's subject expertise. In contrast, neural networks such as the convolutional neural network (CNN) and the long short-term memory network (LSTM) can extract features automatically, providing a more generalized and adaptable approach to lower limb motion recognition. Although this approach overcomes the limitations of manual feature engineering, it may ignore potential correlations among the sEMG channels. This paper proposes a spatial–temporal graph neural network model, STGNN-LMR, designed to recognize lower limb motion from multi-channel sEMG. STGNN-LMR transforms the multi-channel sEMG into a graph structure and uses graph learning to model spatial–temporal features. An 8-channel sEMG dataset is constructed for the experimental stage, and the results show that the STGNN-LMR model achieves a recognition accuracy of 99.71%. Moreover, this paper simulates two unexpected scenarios, sEMG sensors affected by sweat noise and sudden sensor failure, and evaluates the test results using hypothesis testing. According to the experimental results, the STGNN-LMR model exhibits a significant advantage over the control models in both the noise and failure scenarios. These results confirm the effectiveness of the STGNN-LMR model for sEMG-based lower limb motion recognition in practical scenarios.
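The abstract does not specify how the 8 sEMG channels are turned into a graph. A minimal sketch of one common construction, assuming each electrode is a node whose feature vector is its windowed signal and edge weights are pairwise channel correlations (a hypothetical choice for illustration; the paper's actual scheme may differ):

```python
import numpy as np

def semg_to_graph(window: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Turn one (channels, samples) sEMG window into a graph.

    Node features: the raw per-channel samples.
    Adjacency: absolute Pearson correlation between channels
    (an illustrative choice, not necessarily the paper's).
    """
    node_features = window                   # shape (channels, T)
    adjacency = np.abs(np.corrcoef(window))  # shape (channels, channels)
    np.fill_diagonal(adjacency, 0.0)         # drop self-loops
    return node_features, adjacency

# Example: one 200-sample window of synthetic 8-channel sEMG
rng = np.random.default_rng(0)
x, a = semg_to_graph(rng.standard_normal((8, 200)))
```

A graph neural network layer would then propagate each node's features along the weighted edges, letting the model exploit inter-channel correlations that a plain CNN or LSTM treats as independent inputs.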
This research presents a Human Lower Limb Activity Recognition (HLLAR) system that identifies specific activities and predicts knee angles simultaneously, based on EMG signals, streamlining research on lower limb activities. The HLLAR model comprises Discrete Hermite Wavelet Transform-based Synchrosqueezing (DHWTS), a Deep Two-Layer Multiscale Convolutional Neural Network (DTLMCNN), and a Generalized Regression Neural Network (GRNN) for feature extraction, activity recognition, and knee angle prediction, respectively. Electromyography-based automatic lower limb activity detection is crucial to rehabilitation and human movement analysis, yet many existing methods struggle with feature extraction from complex data, overlapping signals, extraction of crucial parameters, and adaptation constraints. This research aims to classify lower limb activities and predict knee joint angles from electromyography signals using the HLLAR model. The model is validated on two datasets comprising 26 subjects performing three classes of activities: walking, standing, and sitting. The proposed model obtained a classification accuracy of 99.95%, with precision of 99.93%, recall of 99.91%, and an F1-score of 99.93%. The GRNN predicted knee joint angles with a root mean squared error of 1.25%. Robustness is demonstrated through consistent results in five-fold cross-validation and statistical significance testing (p-value = 0.004, McNemar's test). Additionally, the proposed model outperformed baseline methods, reducing error rates by 18% and decreasing processing time to 0.98 s.
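The precision, recall, and F1 figures reported above follow the standard definitions. A self-contained sketch computing macro-averaged versions of these metrics from label arrays (toy data for illustration, not the paper's):

```python
import numpy as np

def macro_metrics(y_true, y_pred, n_classes):
    """Macro-averaged precision, recall, and F1 from label arrays."""
    precisions, recalls, f1s = [], [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))  # true positives
        fp = np.sum((y_pred == c) & (y_true != c))  # false positives
        fn = np.sum((y_pred != c) & (y_true == c))  # false negatives
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        precisions.append(p)
        recalls.append(r)
        f1s.append(f)
    return np.mean(precisions), np.mean(recalls), np.mean(f1s)

# Toy example with the three activity classes from the study:
# walking = 0, standing = 1, sitting = 2
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 2, 2, 2])
p, r, f = macro_metrics(y_true, y_pred, 3)
```

Macro averaging weights every class equally, which matters when the activity classes are imbalanced; micro averaging would instead weight by sample count.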
Virtual reality is nowadays used to facilitate motor recovery in stroke patients. Most virtual reality studies have involved chronic stroke patients; however, brain plasticity remains good in acute and subacute patients. Most virtual reality systems are applicable only to the proximal upper limbs (arms) because of the limitations of their capture systems. Nevertheless, the functional recovery of an affected hand is the most difficult part of hemiparesis rehabilitation after a stroke. The recently developed Leap Motion controller can track the fine movements of both hands and fingers. Therefore, the present study explored the effects of a Leap Motion-based virtual reality system on subacute stroke. Twenty-six subacute stroke patients were assigned to either an experimental group that received virtual reality training along with conventional occupational rehabilitation, or a control group that received only conventional rehabilitation. The Wolf motor function test (WMFT) was used to assess the motor function of the affected upper limb; functional magnetic resonance imaging was used to measure cortical activation. After four weeks of treatment, the motor functions of the affected upper limbs were significantly improved in all patients, with the improvement in the experimental group being significantly better than in the control group. The action performance time in the WMFT significantly decreased in the experimental group. Furthermore, the activation intensity and the laterality index of the contralateral primary sensorimotor cortex increased in both the experimental and control groups.
These results confirm that Leap Motion-based virtual reality training is a promising and feasible supplementary rehabilitation intervention that can facilitate the recovery of motor function in subacute stroke patients. The study has been registered in the Chinese Clinical Trial Registry (registration number: ChiCTR-OCH-12002238).
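The laterality index mentioned above is a standard fMRI measure of hemispheric dominance, commonly defined as LI = (C − I) / (C + I), where C and I are activation measures (for example, active voxel counts) in the contralateral and ipsilateral sensorimotor cortex. A minimal sketch with hypothetical voxel counts, not values from this study:

```python
def laterality_index(contralateral: float, ipsilateral: float) -> float:
    """Laterality index LI = (C - I) / (C + I), ranging from -1 to 1.

    LI approaching 1 indicates activation dominated by the
    contralateral hemisphere; 0 indicates balanced activation.
    """
    return (contralateral - ipsilateral) / (contralateral + ipsilateral)

# Hypothetical active-voxel counts before and after training
li_before = laterality_index(120, 80)   # 0.2: mild contralateral dominance
li_after = laterality_index(150, 50)    # 0.5: stronger contralateral dominance
```

An increase in LI over treatment, as reported for both groups, would indicate a shift of activation toward the contralateral primary sensorimotor cortex.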
Funding: supported by a Sub-Project under the National "Twelfth Five-Year" Plan for Science & Technology Support in China, No. 2011BAI08B11, and the Research Project of the China Rehabilitation Research Center, No. 2014-3.