Batch processing mode is widely used in the training process of human motion recognition. After training, the motion classifier usually remains fixed. However, if the classifier is to be expanded, all historical data must be gathered for retraining. This consumes a huge amount of storage space, and the new training process is more complicated. In this paper, we use an incremental learning method to model the motion classifier. A weighted decision tree is proposed to illustrate the process, and a probability sampling method is also used. The results show that with continuous learning, the motion classifier becomes more precise. The average classification precision of the weighted decision tree was 88.43% in a typical test. Incremental learning consumes much less time than batch processing when the training data arrives continuously.
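The incremental, accuracy-weighted ensemble idea in this abstract could be sketched roughly as below. This is a minimal illustration under our own assumptions, not the authors' implementation: `ThresholdStump` is a hypothetical one-feature stand-in for a full decision tree, each batch-trained member is weighted by its accuracy on its own batch, and the abstract's probability sampling step is omitted.

```python
class ThresholdStump:
    """Toy stand-in for a decision tree: one feature, one threshold."""
    def fit(self, X, y):
        # Pick the (feature, threshold) pair with the best training accuracy.
        best = (0, 0.0, -1.0)
        for f in range(len(X[0])):
            for t in sorted({x[f] for x in X}):
                acc = sum((x[f] >= t) == bool(c) for x, c in zip(X, y)) / len(X)
                if acc > best[2]:
                    best = (f, t, acc)
        self.f, self.t, _ = best
        return self

    def predict(self, x):
        return int(x[self.f] >= self.t)


class IncrementalWeightedEnsemble:
    """On each new data batch, train one new member classifier and weight it
    by its accuracy on that batch; prediction is a weighted majority vote.
    Old batches never need to be stored or revisited."""
    def __init__(self):
        self.members = []  # list of (classifier, weight) pairs

    def partial_fit(self, X, y):
        clf = ThresholdStump().fit(X, y)
        acc = sum(clf.predict(x) == c for x, c in zip(X, y)) / len(X)
        self.members.append((clf, acc))

    def predict(self, x):
        votes = {0: 0.0, 1: 0.0}
        for clf, w in self.members:
            votes[clf.predict(x)] += w
        return max(votes, key=votes.get)
```

Because only the trained members are kept, each new batch costs one small fit rather than a full retrain over all historical data, which is the storage and time advantage the abstract claims.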
The Internet of Things (IoT) and mobile technology have significantly transformed healthcare by enabling real-time monitoring and diagnosis of patients. Recognizing Medical-Related Human Activities (MRHA) is pivotal for healthcare systems, particularly for identifying actions critical to patient well-being. However, challenges such as high computational demands, low accuracy, and limited adaptability persist in Human Motion Recognition (HMR). While some studies have integrated HMR with IoT for real-time healthcare applications, limited research has focused on recognizing MRHA as essential for effective patient monitoring. This study proposes a novel HMR method tailored for MRHA detection, leveraging multi-stage deep learning techniques integrated with IoT. The approach employs EfficientNet to extract optimized spatial features from skeleton frame sequences using seven Mobile Inverted Bottleneck Convolution (MBConv) blocks, followed by a Convolutional Long Short-Term Memory (ConvLSTM) network to capture spatio-temporal patterns. A classification module with global average pooling, a fully connected layer, and a dropout layer generates the final predictions. The model is evaluated on the NTU RGB+D 120 and HMDB51 datasets, focusing on MRHA such as sneezing, falling, walking, and sitting. It achieves 94.85% accuracy for cross-subject evaluations and 96.45% for cross-view evaluations on NTU RGB+D 120, along with 89.22% accuracy on HMDB51. Additionally, the system integrates IoT capabilities using a Raspberry Pi and a GSM module, delivering real-time alerts via Twilio's SMS service to caregivers and patients. This scalable and efficient solution bridges the gap between HMR and IoT, advancing patient monitoring, improving healthcare outcomes, and reducing costs.
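The classification module described above (global average pooling, a fully connected layer, and dropout) can be sketched in NumPy. This is an illustrative reconstruction, not the paper's code: the function name, the `(T, H, W, C)` feature-map layout, and the inverted-dropout formulation are all assumptions on our part.

```python
import numpy as np

def classification_head(features, W, b, drop_rate=0.5, training=False, rng=None):
    """Sketch of a GAP -> dropout -> fully-connected-softmax classifier head.
    `features` is a spatio-temporal feature map of shape (T, H, W, C),
    e.g. the output of a ConvLSTM stage; W has shape (C, num_classes)."""
    pooled = features.mean(axis=(0, 1, 2))       # global average pool -> (C,)
    if training:                                 # inverted dropout (train only)
        rng = rng or np.random.default_rng(0)
        mask = rng.random(pooled.shape) >= drop_rate
        pooled = pooled * mask / (1.0 - drop_rate)
    logits = pooled @ W + b                      # fully connected layer
    exp = np.exp(logits - logits.max())          # numerically stable softmax
    return exp / exp.sum()
```

At inference (`training=False`) dropout is the identity, so the head reduces to pooling followed by a single affine-softmax layer, which matches the module as the abstract describes it.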
Human motion recognition is a research hotspot in the field of computer vision with a wide range of applications, including biometrics, intelligent surveillance, and human-computer interaction. In vision-based human motion recognition, the main input modalities are RGB, depth images, and skeleton data. Each modality captures a certain kind of information that is likely to be complementary to the others; for example, some modalities capture global information while others capture local details of an action. Intuitively, fusing multiple modalities can improve recognition accuracy. In addition, how to correctly model and utilize spatio-temporal information is one of the challenges facing human motion recognition. Focusing on the feature extraction methods involved in video-based human action recognition, this paper summarizes traditional hand-crafted feature extraction methods from the perspectives of global and local feature extraction, and introduces in detail the feature learning models commonly used in deep-learning-based approaches. The paper also summarizes the opportunities and challenges in the field of motion recognition and looks ahead to possible future research directions.
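As a toy illustration of the multimodal-fusion idea mentioned in this survey abstract, a simple weighted late fusion of per-modality class scores might look like the following. The function name, the per-modality softmax normalisation, and the equal default weights are our own assumptions, not taken from any surveyed paper.

```python
import numpy as np

def late_fuse(score_dicts, weights=None):
    """Weighted late fusion of per-modality class-score vectors
    (e.g. from RGB, depth, and skeleton branches). Each modality's raw
    scores are softmax-normalised, then combined as a weighted average;
    the argmax of the fused distribution is the predicted class."""
    names = list(score_dicts)
    if weights is None:
        weights = {n: 1.0 / len(names) for n in names}
    fused = None
    for n in names:
        s = np.asarray(score_dicts[n], dtype=float)
        p = np.exp(s - s.max())
        p /= p.sum()                          # per-modality softmax
        fused = weights[n] * p if fused is None else fused + weights[n] * p
    return fused, int(np.argmax(fused))
```

A confident modality (a peaked score vector) dominates the average, which is one way complementary global and local cues can reinforce each other.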
Due to the dynamic stiffness characteristics of human joints, exoskeleton assistance can easily introduce impacts and disturbances into normal movements. This not only imposes strict requirements on exoskeleton control design but also makes it difficult to improve the level of assistance. The Variable Stiffness Actuator (VSA), as a physical variable-stiffness mechanism, offers dynamic stiffness adjustment and high stiffness-control bandwidth, which makes it well suited to stiffness-matching experiments. However, few works have explored assistive human stiffness-matching experiments based on a VSA. Therefore, this paper designs a hip exoskeleton based on a VSA and studies a CPG-based human motion phase recognition algorithm. First, the paper formulates the requirements for the variable-stiffness experimental design, together with output-torque and variable-stiffness dynamic-response standards based on human lower-limb motion parameters. Plate springs are used as the elastic elements to establish the mechanical principle of variable stiffness, and a compact variable stiffness actuator is designed around the plate spring. The corresponding theoretical dynamic model is then established and analyzed. Starting from the CPG phase recognition algorithm, the paper uses perturbation theory to expand the first-order CPG unit, obtains the phase convergence equation, and verifies phase convergence when the hip joint angle is used as an input signal of the same frequency; it then expands the second-order CPG unit under the premise of a circular limit cycle and analyzes the frequency convergence criterion.
Afterwards, the plate-spring modes are extracted from Abaqus and the neutral file of the flexible-body model is generated for import into Adams, where torque-stiffness one-way loading and reciprocating loading experiments are conducted on the variable-stiffness mechanism. Simulink is then used to verify the validity of the criterion. Finally, based on the above criteria, the signal mean is removed using a feedback structure to complete the phase recognition algorithm for the human hip joint angle signal, and its convergence is verified using actual human walking data on flat ground.
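The first-order CPG phase-convergence idea can be illustrated with a minimal numerical sketch, assuming the standard phase-oscillator entrainment form dφ/dt = ω + K·sin(ψ − φ) with the hip-angle phase ψ as the same-frequency input. This is a toy model of the convergence property only, not the paper's second-order CPG or its mean-removal feedback structure.

```python
import math

def cpg_phase_track(omega, K, hip_phase_rate, dt=0.001, steps=20000):
    """Integrate a first-order phase oscillator entrained to an input
    phase psi advancing at `hip_phase_rate` (rad/s). With matched
    frequencies (omega == hip_phase_rate) the phase error psi - phi
    converges to zero. Returns the final wrapped phase error in radians."""
    phi, psi = 1.0, 0.0                     # start with a 1 rad phase offset
    for _ in range(steps):
        phi += (omega + K * math.sin(psi - phi)) * dt   # coupled oscillator
        psi += hip_phase_rate * dt                      # input (hip) phase
    # wrap the error into (-pi, pi] for a meaningful comparison
    return math.atan2(math.sin(phi - psi), math.cos(phi - psi))
```

For matched frequencies the error dynamics reduce to dΔ/dt = −K·sin(Δ), so the initial offset decays roughly like e^(−Kt), mirroring the phase convergence the paper verifies for the same-frequency hip-angle input.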
Activity and motion recognition using Wi-Fi signals, mainly channel state information (CSI), has captured the interest of many researchers in recent years. Many studies have achieved excellent results with the help of machine learning models in applications such as healthcare services, sign language translation, security, context awareness, and the Internet of Things. Nevertheless, most of these studies have shortcomings in their machine learning algorithms, as they rely on recurrence and convolutions, which preclude parallel computation over the input sequence. Therefore, in this paper we propose a deep-learning approach based solely on attention, i.e., the sole Self-Attention Mechanism model (Sole-SAM), for activity and motion recognition using Wi-Fi signals. Sole-SAM is deployed to learn features representing different activities and motions from raw CSI data. Experiments were carried out to evaluate the performance of the proposed Sole-SAM architecture. The results indicate that the proposed system takes significantly less time to train than models that rely on recurrence and convolutions, such as Long Short-Term Memory (LSTM) and Recurrent Neural Network (RNN) models. Sole-SAM achieved an accuracy of 0.94, which is 0.04 higher than the RNN and 0.02 higher than the LSTM.
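The core of an attention-only model such as Sole-SAM is scaled dot-product self-attention. A single-head NumPy sketch over a CSI window of T frames with d features each might look like the following; the function name, shapes, and weight matrices are illustrative assumptions, not details from the paper.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence
    X of shape (T, d): Attention(Q, K, V) = softmax(QK^T / sqrt(dk)) V.
    Every output row attends to all T positions at once, with no
    recurrence, which is what enables parallel training."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])        # (T, T) similarity scores
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)             # rows are attention weights
    return A @ V, A
```

Because the (T, T) score matrix is computed in one matrix product rather than step by step, the whole CSI window is processed in parallel, consistent with the shorter training times the abstract reports relative to LSTM and RNN baselines.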
Funding (incremental-learning paper): partly supported by the National Natural Science Foundation of China under Grant 61573242; projects of the Science and Technology Commission of Shanghai Municipality under Grants No. 13511501302, No. 14511100300, and No. 15511105100; the Shanghai Pujiang Program under Grant No. 14PJ1405000; and ZTE Industry-Academia-Research Cooperation Funds.
Funding (MRHA/IoT healthcare paper): funded by the ICT Division of the Ministry of Posts, Telecommunications, and Information Technology of Bangladesh under Grant Number 56.00.0000.052.33.005.21-7 (Tracking No. 22FS15306), with support from the University of Rajshahi.
Funding (motion-recognition survey): 2021 scientific research funding project of the Liaoning Provincial Education Department (research and implementation of a university scientific research information platform serving the transformation of achievements).
Funding (Wi-Fi CSI paper): this work was supported by the Foshan Science and Technology Innovation Special Fund Project (No. BK22BF004 and No. BK20AF004), Guangdong Province, China.