Activity and motion recognition using Wi-Fi signals, mainly channel state information (CSI), has captured the interest of many researchers in recent years. Many studies have achieved splendid results with the help of machine learning models in applications such as healthcare services, sign language translation, security, context awareness, and the Internet of Things. Nevertheless, most of these studies have shortcomings in their machine learning algorithms, as they rely on recurrence and convolutions and thus preclude smooth sequential computation. Therefore, in this paper, we propose a deep-learning approach based solely on attention, i.e., the sole Self-Attention Mechanism model (Sole-SAM), for activity and motion recognition using Wi-Fi signals. The Sole-SAM was deployed to learn the features representing different activities and motions from the raw CSI data. Experiments were carried out to evaluate the performance of the proposed Sole-SAM architecture. The experimental results indicated that our system took significantly less time to train than models that rely on recurrence and convolutions, such as Long Short-Term Memory (LSTM) and Recurrent Neural Network (RNN). Sole-SAM achieved an accuracy of 0.94, which is 0.04 better than RNN and 0.02 better than LSTM.
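As a rough illustration of the attention-only idea described above, the sketch below classifies CSI sequences with a single self-attention layer and no recurrence or convolution. It is not the authors' Sole-SAM; the input shape, model width, head count, and class count are all assumptions.

```python
# Minimal sketch (not the authors' code): an attention-only classifier for
# CSI sequences, assuming input of shape (batch, time, subcarriers).
import torch
import torch.nn as nn

class SoleAttentionClassifier(nn.Module):
    def __init__(self, n_subcarriers=30, d_model=64, n_heads=4, n_classes=6):
        super().__init__()
        self.embed = nn.Linear(n_subcarriers, d_model)  # per-timestep CSI embedding
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                # x: (batch, time, subcarriers)
        h = self.embed(x)
        a, _ = self.attn(h, h, h)        # self-attention over time steps
        h = self.norm(h + a)             # residual connection + layer norm
        return self.head(h.mean(dim=1))  # average-pool over time, then classify

logits = SoleAttentionClassifier()(torch.randn(8, 100, 30))  # 8 clips, 100 steps
```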
Lower limb motion recognition techniques commonly employ the Surface Electromyographic Signal (sEMG) as input and apply a machine learning classifier or a Back Propagation Neural Network (BPNN) for classification. However, this artificial feature engineering technique is not generalizable to similar tasks and is heavily reliant on the researcher's subject expertise. In contrast, neural networks such as the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) network can extract features automatically, providing a more generalized and adaptable approach to lower limb motion recognition. Although this approach overcomes the limitations of human feature engineering, it may ignore the potential correlation among the sEMG channels. This paper proposes a spatial–temporal graph neural network model, STGNN-LMR, designed to address the problem of recognizing lower limb motion from multi-channel sEMG. STGNN-LMR transforms multi-channel sEMG into a graph structure and uses graph learning to model spatial–temporal features. An 8-channel sEMG dataset is constructed for the experimental stage, and the results show that the STGNN-LMR model achieves a recognition accuracy of 99.71%. Moreover, this paper simulates two unexpected scenarios, sEMG sensors affected by sweat noise and sudden sensor failure, and evaluates the testing results using hypothesis testing. According to the experimental results, the STGNN-LMR model exhibits a significant advantage over the control models in both the noise and failure scenarios. These results confirm the effectiveness of the STGNN-LMR model for addressing the challenges of sEMG-based lower limb motion recognition in practical scenarios.
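The following sketch conveys the spatial–temporal graph idea in its simplest form: one graph-convolution step mixing features across the 8 sEMG channels, followed by a GRU over time. The adjacency matrix, feature sizes, and class count are placeholders, not the paper's STGNN-LMR.

```python
# Illustrative sketch only: a spatial graph-convolution step over 8 sEMG
# channels followed by a temporal GRU, in the spirit of a spatial-temporal GNN.
import torch
import torch.nn as nn

class SpatialTemporalSketch(nn.Module):
    def __init__(self, n_ch=8, feat=16, hidden=32, n_classes=5):
        super().__init__()
        A = torch.ones(n_ch, n_ch) / n_ch            # placeholder fully connected graph
        self.register_buffer("A", A)
        self.gc = nn.Linear(feat, feat)               # shared per-node transform
        self.gru = nn.GRU(n_ch * feat, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                             # x: (batch, time, n_ch, feat)
        x = torch.einsum("ij,btjf->btif", self.A, x)  # aggregate neighbor features
        x = torch.relu(self.gc(x))
        x = x.flatten(2)                               # (batch, time, n_ch * feat)
        _, h = self.gru(x)
        return self.head(h[-1])

out = SpatialTemporalSketch()(torch.randn(4, 50, 8, 16))
```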
Along with the development of motion capture techniques, more and more 3D motion databases have become available. In this paper, a novel approach is presented for motion recognition and retrieval based on ensemble HMM (hidden Markov model) learning. Due to the high dimensionality of motion features, Isomap nonlinear dimension reduction is applied to the training data for ensemble HMM learning. For handling new motion data, Isomap is generalized based on the estimation of the underlying eigenfunctions. Each action class is then learned with one HMM. Since ensemble learning can effectively enhance supervised learning, ensembles of weak HMM learners are built. Experimental results showed that the approach is effective for motion data recognition and retrieval.
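A hedged sketch of the core pipeline follows, using scikit-learn's Isomap and the hmmlearn package (library choices are assumptions; the abstract names neither). It fits one Gaussian HMM per action class and classifies by maximum log-likelihood; the ensemble-of-weak-learners step and the eigenfunction-based out-of-sample extension are omitted for brevity.

```python
# Sketch: Isomap dimension reduction + one HMM per action class.
import numpy as np
from sklearn.manifold import Isomap
from hmmlearn.hmm import GaussianHMM

def train_class_hmms(sequences, labels, n_components=4, dim=8):
    """Reduce pose features with Isomap, then fit one HMM per action class."""
    lengths = [len(s) for s in sequences]
    stacked = np.vstack(sequences)                    # (total_frames, raw_dim)
    low = Isomap(n_components=dim).fit_transform(stacked)
    chunks = np.split(low, np.cumsum(lengths)[:-1])   # back to per-sequence chunks
    models = {}
    for cls in set(labels):
        seqs = [c for c, y in zip(chunks, labels) if y == cls]
        models[cls] = GaussianHMM(n_components).fit(
            np.vstack(seqs), [len(s) for s in seqs])
    return models

def classify(models, seq):
    # pick the class whose HMM assigns the highest log-likelihood
    return max(models, key=lambda cls: models[cls].score(seq))
```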
Batch processing mode is widely used in the training process of human motion recognition. After training, the motion classifier usually remains invariable. However, if the classifier is to be expanded, all historical data must be gathered for retraining. This consumes a huge amount of storage space, and the new training process will be more complicated. In this paper, we use an incremental learning method to model the motion classifier. A weighted decision tree is proposed to help illustrate the process, and a probability sampling method is also used. The results show that with continuous learning, the motion classifier becomes more precise. The average classification precision of the weighted decision tree was 88.43% in a typical test. Incremental learning consumes much less time than batch processing when the input training data arrives continuously.
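To make the incremental idea concrete, here is a sketch under stated assumptions: instead of hoarding all history, keep a probability-sampled reservoir of past samples and refit a decision tree with weights that favor recent batches. The class name, reservoir rule, and decay factor are illustrative, not the paper's exact scheme.

```python
# Sketch: incremental learning via a weighted tree + probability sampling.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class IncrementalWeightedTree:
    def __init__(self, reservoir_size=2000, decay=0.8):
        self.X, self.y, self.w = None, None, None
        self.k, self.decay = reservoir_size, decay
        self.tree = DecisionTreeClassifier()

    def partial_fit(self, X_new, y_new):
        if self.X is not None:
            self.w = self.w * self.decay  # down-weight older samples
            # probability sampling: keep roughly `k` historical samples
            keep = np.random.rand(len(self.X)) < self.k / max(len(self.X), self.k)
            self.X, self.y, self.w = self.X[keep], self.y[keep], self.w[keep]
            X_new = np.vstack([self.X, X_new])
            y_new = np.concatenate([self.y, y_new])
            self.w = np.concatenate([self.w, np.ones(len(y_new) - len(self.w))])
        else:
            self.w = np.ones(len(y_new))
        self.X, self.y = X_new, y_new
        self.tree.fit(self.X, self.y, sample_weight=self.w)
        return self
```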
The Internet of Things (IoT) and mobile technology have significantly transformed healthcare by enabling real-time monitoring and diagnosis of patients. Recognizing Medical-Related Human Activities (MRHA) is pivotal for healthcare systems, particularly for identifying actions critical to patient well-being. However, challenges such as high computational demands, low accuracy, and limited adaptability persist in Human Motion Recognition (HMR). While some studies have integrated HMR with IoT for real-time healthcare applications, limited research has focused on recognizing MRHA as essential for effective patient monitoring. This study proposes a novel HMR method tailored for MRHA detection, leveraging multi-stage deep learning techniques integrated with IoT. The approach employs EfficientNet to extract optimized spatial features from skeleton frame sequences using seven Mobile Inverted Bottleneck Convolution (MBConv) blocks, followed by Convolutional Long Short-Term Memory (ConvLSTM) to capture spatio-temporal patterns. A classification module with global average pooling, a fully connected layer, and a dropout layer generates the final predictions. The model is evaluated on the NTU RGB+D 120 and HMDB51 datasets, focusing on MRHA such as sneezing, falling, walking, and sitting. It achieves 94.85% accuracy for cross-subject evaluations and 96.45% for cross-view evaluations on NTU RGB+D 120, along with 89.22% accuracy on HMDB51. Additionally, the system integrates IoT capabilities using a Raspberry Pi and a GSM module, delivering real-time alerts via Twilio's SMS service to caregivers and patients. This scalable and efficient solution bridges the gap between HMR and IoT, advancing patient monitoring, improving healthcare outcomes, and reducing costs.
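A minimal Keras sketch of the described stage ordering is given below: per-frame EfficientNet features, ConvLSTM over the frame sequence, then global average pooling, a dense layer, and dropout. The image-like input shape, clip length, filter count, and class count are assumptions (the paper feeds skeleton frame sequences), so treat this as a structural outline rather than the authors' model.

```python
# Sketch: EfficientNet per frame -> ConvLSTM -> pooling/dense/dropout head.
import tensorflow as tf
from tensorflow.keras import layers, models

frames = layers.Input(shape=(16, 224, 224, 3))           # 16 frames per clip
backbone = tf.keras.applications.EfficientNetB0(include_top=False, weights=None)
x = layers.TimeDistributed(backbone)(frames)              # per-frame spatial features
x = layers.ConvLSTM2D(64, kernel_size=3, padding="same")(x)  # spatio-temporal fusion
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(128, activation="relu")(x)
x = layers.Dropout(0.5)(x)
out = layers.Dense(60, activation="softmax")(x)           # e.g., 60 action classes
model = models.Model(frames, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```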
Human motion recognition is a research hotspot in the field of computer vision with a wide range of applications, including biometrics, intelligent surveillance, and human-computer interaction. In vision-based human motion recognition, the main input modes are RGB, depth images, and skeleton data. Each mode captures some kind of information that is likely to be complementary to the others; for example, some modes capture global information while others capture local details of an action. Intuitively, fusing multiple modalities can improve recognition accuracy. In addition, correctly modeling and utilizing spatiotemporal information is one of the challenges facing human motion recognition. Focusing on the feature extraction methods involved in video-based human action recognition, this paper summarizes traditional manual feature extraction methods from the perspectives of global and local feature extraction, and introduces in detail the feature learning models commonly used in deep-learning-based feature extraction. Finally, the paper summarizes the opportunities and challenges in the field of motion recognition and looks forward to possible future research directions.
Due to the dynamic stiffness characteristics of human joints, exoskeleton assistance can easily introduce impacts and disturbances into normal movements. This not only imposes strict requirements on exoskeleton control design, but also makes it difficult to improve the assistance level. The Variable Stiffness Actuator (VSA), as a physical variable stiffness mechanism, offers dynamic stiffness adjustment and high stiffness-control bandwidth, which suits stiffness matching experiments. However, few works have explored assistive human stiffness matching experiments based on VSA. Therefore, this paper designs a hip exoskeleton based on a VSA and studies a CPG-based human motion phase recognition algorithm. First, the paper sets out the requirements for the variable stiffness experimental design and the standards for output torque and variable stiffness dynamic response based on human lower limb motion parameters. Plate springs are used as the elastic elements to establish the mechanical principle of variable stiffness, a compact variable stiffness actuator is designed around the plate spring, and the corresponding theoretical dynamic model is established and analyzed. Starting from the CPG phase recognition algorithm, the paper uses perturbation theory to expand the first-order CPG unit, obtains the phase convergence equation, and verifies phase convergence when the hip joint angle is used as an input signal of the same frequency; it then expands the second-order CPG unit under the premise of a circular limit cycle and analyzes the frequency convergence criterion. Afterwards, the plate-spring modes are extracted from Abaqus, the neutral file of the flexible body model is generated and imported into Adams, and torque-stiffness one-way loading and reciprocating loading experiments are conducted on the variable stiffness mechanism. Simulink is then used to verify the validity of the criterion. Finally, based on the above criteria, the signal mean value is removed using a feedback structure to complete the phase recognition algorithm for the human hip joint angle signal, and convergence is verified using actual human walking data on flat ground.
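To illustrate the kind of first-order CPG phase estimator discussed above, here is a minimal sketch of a phase oscillator entrained by a zero-mean hip-joint angle signal. The coupling law and gains are assumptions drawn from the general adaptive-oscillator literature, not the paper's derivation.

```python
# Sketch: first-order phase oscillator tracking the gait phase of a hip angle.
import numpy as np

def cpg_phase(theta_hip, dt=0.01, omega=2 * np.pi, k=5.0):
    """Track the gait phase of a (zero-mean) hip angle signal."""
    phi = 0.0
    phases = np.empty_like(theta_hip)
    for i, q in enumerate(theta_hip):
        # phase dynamics: natural frequency plus a perturbation coupling term
        phi += dt * (omega - k * q * np.sin(phi))
        phases[i] = phi % (2 * np.pi)
    return phases

t = np.arange(0, 10, 0.01)
hip = 0.3 * np.sin(2 * np.pi * 1.0 * t)   # synthetic 1 Hz walking signal
print(cpg_phase(hip)[:5])
```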
Based on light field reconstruction and motion recognition techniques, a penetrable interactive floating 3D display system is proposed. The system consists of a high-frame-rate projector, a flat directional diffusing screen, a high-speed data transmission module, and a Kinect somatosensory device. The floating occlusion-correct 3D image can rotate around an axis at different speeds according to the user's hand motion. Eight motion directions and the motion speed are detected accurately, and the prototype system operates efficiently with an average recognition accuracy of 90%.
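As a toy stand-in for the Kinect-based gesture logic, the snippet below maps a tracked hand displacement between two frames to one of eight compass directions plus a speed estimate. The sector boundaries and frame rate are assumptions.

```python
# Sketch: classify hand displacement into 8 directions and estimate speed.
import numpy as np

DIRS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def hand_motion(p0, p1, dt):
    """p0, p1: (x, y) hand positions from successive Kinect frames."""
    d = np.subtract(p1, p0)
    speed = np.linalg.norm(d) / dt
    # rotate by half a sector so each direction owns a 45-degree wedge
    sector = int(((np.degrees(np.arctan2(d[1], d[0])) + 22.5) % 360) // 45)
    return DIRS[sector], speed

print(hand_motion((0, 0), (0.05, 0.05), 1 / 30))   # ('NE', ~2.1 m/s)
```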
Recognizing and reproducing spatiotemporal motions is necessary when analyzing behaviors and movements during human-robot interaction. Rigid body motion trajectories have proven to be compact and informative clues for characterizing motions. A flexible dual square-root function (DSRF) descriptor for representing rigid body motion trajectories, which offers robustness over raw data, was proposed in our previous study. The present study focuses on applying the DSRF descriptor to effective backward motion reproduction and motion recognition. Specifically, two DSRF-based reproduction methods are first proposed: recursive reconstruction and online optimization. New trajectories with novel situations and contextual information can be reproduced from a single demonstration while preserving similarity with the original demonstration. Furthermore, motion recognition based on the DSRF descriptor can be achieved by employing a template matching method. Finally, the experimental results demonstrate the effectiveness of the proposed method for rigid body motion reproduction and recognition.
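The template-matching step can be sketched generically as follows. The DSRF descriptor itself is defined in the authors' earlier work, so a placeholder velocity-based descriptor stands in here; only the matching structure (resample, compare to each template, pick the nearest) reflects the approach described above.

```python
# Sketch: descriptor-based template matching over trajectories.
import numpy as np

def descriptor(traj):
    """Placeholder per-frame descriptor: velocity direction + magnitude."""
    v = np.diff(traj, axis=0)
    mag = np.linalg.norm(v, axis=1, keepdims=True)
    return np.hstack([v / (mag + 1e-9), mag])

def resample(d, n=50):
    idx = np.linspace(0, len(d) - 1, n)
    return np.array([d[int(round(i))] for i in idx])

def recognize(query, templates):
    """templates: dict of label -> trajectory; returns best-matching label."""
    q = resample(descriptor(query))
    dists = {lbl: np.linalg.norm(q - resample(descriptor(t)))
             for lbl, t in templates.items()}
    return min(dists, key=dists.get)
```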
With the rapid advancement of robotics and Artificial Intelligence (AI), aerobics training companion robots now support eco-friendly fitness by reducing reliance on nonrenewable energy. This study presents a solar-powered aerobics training robot featuring an adaptive energy management system designed for sustainability and efficiency. The robot integrates machine vision with an enhanced Dynamic Cheetah Optimizer and Bayesian Neural Network (DynCO-BNN) to enable precise exercise monitoring and real-time feedback. Solar tracking technology ensures optimal energy absorption, while a microcontroller-based regulator manages power distribution and robotic movement. Dual-battery switching ensures uninterrupted operation, aided by light and I/V sensors for energy optimization. Using the INSIGHT-LME IMU dataset, which includes motion data from 76 individuals performing Local Muscular Endurance (LME) exercises, the system detects activities, counts repetitions, and recognizes human movements. To minimize energy use during data processing, Min-Max normalization and the two-dimensional Discrete Fourier Transform (2D-DFT) are applied, boosting computational efficiency. The robot accurately identifies upper and lower limb movements, delivering effective exercise guidance. The DynCO-BNN model achieved a high tracking accuracy of 96.8%. The results confirm improved solar utilization, ecological sustainability, and reduced dependence on fossil fuels, positioning the robot as a smart, energy-efficient solution for next-generation fitness technology.
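The two preprocessing steps named above are straightforward to show. The sketch below applies per-channel Min-Max normalization and a 2-D DFT magnitude to an IMU window; the window size and channel count are assumptions.

```python
# Sketch: Min-Max normalization + 2-D DFT features for an IMU window.
import numpy as np

def preprocess(window):
    """window: (n_samples, n_channels) IMU segment."""
    lo, hi = window.min(axis=0), window.max(axis=0)
    norm = (window - lo) / (hi - lo + 1e-9)    # Min-Max scaling to [0, 1]
    spectrum = np.abs(np.fft.fft2(norm))        # 2-D DFT magnitude
    return spectrum[: len(window) // 2]         # keep the non-redundant half

feat = preprocess(np.random.randn(128, 6))      # 128 samples, 6 IMU axes
```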
Brain-computer interfaces (BCIs) record brain activity in the form of EEG signals using electroencephalogram (EEG) headsets; these signals can be recorded, processed, and classified into different hand movements, which can be used to control other IoT devices. Classifying hand movements brings these algorithms one step closer to real-life use with EEG headsets. This paper uses different feature extraction techniques and machine learning algorithms to classify hand movements from EEG brain signals in order to control prosthetic hands for amputees. To achieve good classification accuracy, denoising and feature extraction of EEG signals are significant steps. We saw a considerable increase in performance across all machine learning models when a moving average filter was applied to the raw EEG data. Feature extraction techniques such as the fast Fourier transform (FFT) and the continuous wavelet transform (CWT) were used in this study; three types of features were extracted: FFT features, CWT coefficients, and CWT scalogram images. We trained and compared different machine learning (ML) models, namely logistic regression, random forest, k-nearest neighbors (KNN), light gradient boosting machine (LightGBM), and XGBoost on the FFT and CWT features, and deep learning (DL) models, namely VGG-16, DenseNet201, and ResNet50, trained on the CWT scalogram images. XGBoost with FFT features gave the maximum accuracy of 88%.
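One path through this pipeline, moving-average denoising, FFT band features per channel, and an XGBoost classifier, can be sketched as below. The window length, band edges, sampling rate, and channel count are assumptions, not the paper's settings.

```python
# Sketch: moving-average filter -> FFT band features -> XGBoost classifier.
import numpy as np
from xgboost import XGBClassifier

def fft_features(eeg, fs=250, k=5):
    """eeg: (n_channels, n_samples). Returns band-averaged FFT magnitudes."""
    kernel = np.ones(k) / k                      # moving-average denoising
    smoothed = np.array([np.convolve(ch, kernel, mode="same") for ch in eeg])
    mag = np.abs(np.fft.rfft(smoothed, axis=1))
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1 / fs)
    bands = [(1, 4), (4, 8), (8, 13), (13, 30), (30, 45)]   # delta..gamma
    return np.hstack([mag[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                      for lo, hi in bands])

X = np.stack([fft_features(np.random.randn(8, 500)) for _ in range(100)])
y = np.random.randint(0, 4, 100)                 # 4 hand-movement classes
clf = XGBClassifier(n_estimators=200).fit(X, y)
```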
Motion intention recognition is considered the key technology for enhancing the training effectiveness of upper limb rehabilitation robots for stroke patients, but traditional recognition systems struggle to balance real-time performance and reliability simultaneously. To achieve real-time and accurate upper limb motion intention recognition, a multi-modal fusion method based on surface electromyography (sEMG) signals and arrayed flexible thin-film pressure (AFTFP) sensors is proposed. In experimental tests on 10 healthy subjects (5 males and 5 females, age 23±2 years), sEMG signals and human-machine interaction force (HMIF) signals were collected during elbow flexion and extension and shoulder internal and external rotation. The AFTFP signals, based on dynamic calibration compensation, and the sEMG signals were processed for feature extraction and fusion, and the recognition performance of single signals and fused signals was compared using a support vector machine (SVM). The experimental results showed that the sEMG signals consistently appeared 175±25 ms earlier than the HMIF signals (p<0.01, paired t-test). Under offline conditions, the recognition accuracy of the fused signals exceeded 99.77% across different time windows. Under a 0.1 s time window, the real-time recognition accuracy of the fused signals was 14.1% higher than that of the single sEMG signal, and the system's end-to-end delay was reduced to less than 100 ms. The AFTFP sensor is applied to motion intention recognition for the first time, and its low-cost, high-density array design provides an innovative solution for rehabilitation robots. The findings demonstrate that the AFTFP sensor adopted in this study effectively enhances intention recognition performance. The fusion of its HMIF output with sEMG signals combines the advantages of both modalities, enabling real-time and accurate motion intention recognition and providing efficient command output for human-machine interaction in scenarios such as stroke rehabilitation.
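The fusion step can be sketched at the feature level: windowed time-domain features from the sEMG and HMIF channels are concatenated into a single vector and fed to an SVM. The feature choices (mean absolute value and RMS), window sizes, and channel counts are assumptions, not the paper's exact settings.

```python
# Sketch: feature-level fusion of sEMG and HMIF windows for an SVM.
import numpy as np
from sklearn.svm import SVC

def window_features(sig):
    """sig: (n_samples, n_channels). Mean absolute value and RMS per channel."""
    mav = np.mean(np.abs(sig), axis=0)
    rms = np.sqrt(np.mean(sig ** 2, axis=0))
    return np.concatenate([mav, rms])

def fused_vector(semg_win, hmif_win):
    return np.concatenate([window_features(semg_win), window_features(hmif_win)])

X = np.stack([fused_vector(np.random.randn(100, 4), np.random.randn(100, 16))
              for _ in range(200)])
y = np.random.randint(0, 4, 200)               # four upper-limb motions
clf = SVC(kernel="rbf").fit(X, y)
```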
A concurrency control mechanism for collaborative work is a key element in a mixed reality environment. However, conventional locking mechanisms restrict the potential tasks or support of non-owners, thus increasing working time because users must wait to avoid conflicts. Herein, we propose an adaptive concurrency control approach that can reduce conflicts and work time. We classify shared object manipulation in mixed reality into detailed goals and tasks. Then, we model the relationships among goal, task, and ownership. As the collaborative work progresses, the proposed system adapts the concurrency control mechanism for shared object manipulation according to the goal–task–ownership model. With the proposed concurrency control scheme, users can hold shared objects and move and rotate them together in a mixed reality environment similar to real industrial sites. Additionally, the system uses MS HoloLens and Myo sensors to recognize user inputs and presents the results in the mixed reality environment. The proposed method is applied to installing an air conditioner as a case study. Experimental results and user studies show that, compared with the conventional approach, the proposed method reduces the number of conflicts, waiting time, and total working time.
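A toy model of the goal–task–ownership idea is sketched below: a lock manager grants shared, non-exclusive access when two users' tasks on the same object are compatible, instead of always forcing non-owners to wait. The compatibility table is an illustrative assumption, not the paper's model.

```python
# Sketch: adaptive concurrency control via task compatibility on shared objects.
class AdaptiveLockManager:
    # task pairs allowed to proceed concurrently on the same object
    COMPATIBLE = {("hold", "rotate"), ("rotate", "hold"), ("hold", "move"),
                  ("move", "hold"), ("inspect", "inspect")}

    def __init__(self):
        self.active = {}                     # object_id -> {user: task}

    def request(self, obj, user, task):
        holders = self.active.setdefault(obj, {})
        if all((t, task) in self.COMPATIBLE or u == user
               for u, t in holders.items()):
            holders[user] = task             # grant shared ownership
            return True
        return False                         # conflict: user must wait

    def release(self, obj, user):
        self.active.get(obj, {}).pop(user, None)

mgr = AdaptiveLockManager()
print(mgr.request("aircon", "alice", "hold"),   # True
      mgr.request("aircon", "bob", "rotate"))   # True: compatible tasks
```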