This article describes a pilot study aiming at generating social interactions between a humanoid robot and adolescents with autism spectrum disorder (ASD) through the practice of a gesture imitation game. The participants were a 17-year-old young woman with ASD and an intellectual deficit, and a control participant: a preadolescent with ASD but no intellectual deficit (Asperger syndrome). The game comprises four phases: greetings, pairing, imitation, and closing. Field educators were involved, playing specific roles: visual or physical inciter. The use of a robot allows for catching the participants' attention, playing the imitation game for a longer period than with a human partner, and preventing the game partner's negative facial expressions resulting from tiredness, impatience, or boredom. The participants' behavior was observed in terms of initial approach towards the robot, positioning relative to the robot in distance and orientation, reactions to the robot's voice or moves, signs of happiness, and imitation attempts. Results suggest an increasingly natural approach towards the robot over the sessions, as well as a higher level of social interaction, based on variations in the parameters listed above. We use these preliminary results to outline the next steps of our research and identify further perspectives, with this aim in mind: improving social interactions with adolescents with ASD and intellectual deficit, allowing for better integration of these people into our societies.
The rapid evolution of virtual reality (VR) and augmented reality (AR) technologies has significantly transformed human-computer interaction, with applications spanning entertainment, education, healthcare, industry, and remote collaboration. A central challenge in these immersive systems lies in enabling intuitive, efficient, and natural interactions. Hand gesture recognition offers a compelling solution by leveraging the expressiveness of human hands to facilitate seamless control without relying on traditional input devices such as controllers or keyboards, which can limit immersion. However, achieving robust gesture recognition requires overcoming challenges related to accurate hand tracking, complex environmental conditions, and minimizing system latency. This study proposes an artificial intelligence (AI)-driven framework for recognizing both static and dynamic hand gestures in VR and AR environments using skeleton-based tracking compliant with the OpenXR standard. Our approach employs a lightweight neural network architecture capable of real-time classification within approximately 1.3 ms while maintaining an average accuracy of 95%. We also introduce a novel dataset generation method to support training robust models and demonstrate consistent classification of diverse gestures across widespread commercial VR devices. This work represents one of the first studies to implement and validate dynamic hand gesture recognition in real time using standardized VR hardware, laying the groundwork for more immersive, accessible, and user-friendly interaction systems. By advancing AI-driven gesture interfaces, this research has the potential to broaden the adoption of VR and AR across diverse domains and enhance the overall user experience.
With the growing application of intelligent robots in the service, manufacturing, and medical fields, efficient and natural interaction between humans and robots has become key to improving collaboration efficiency and user experience. Gesture recognition, as an intuitive and contactless interaction method, can overcome the limitations of traditional interfaces and enable real-time control and feedback of robot movements and behaviors. This study first reviews mainstream gesture recognition algorithms and their application on different sensing platforms (RGB cameras, depth cameras, and inertial measurement units). It then proposes a gesture recognition method based on multimodal feature fusion and a lightweight deep neural network that balances recognition accuracy with computational efficiency. At the system level, a modular human-robot interaction architecture is constructed, comprising perception, decision, and execution layers; gesture commands are transmitted and mapped to robot actions in real time via the ROS communication protocol. Through multiple comparative experiments on public gesture datasets and a self-collected dataset, the proposed method's superiority is validated in terms of accuracy, response latency, and system robustness, while user-experience tests assess the interface's usability. The results provide a reliable technical foundation for robot collaboration and service in complex scenarios, offering broad prospects for practical application and deployment.
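The decision layer described in this abstract maps recognized gesture labels to robot actions. A minimal sketch of such a mapping is shown below; the gesture names, velocity values, and confidence threshold are illustrative assumptions, not taken from the paper, and in a real ROS system the command dictionaries would be published as velocity messages rather than returned.

```python
# Hypothetical gesture-to-command table; labels and velocities are
# illustrative stand-ins for the paper's (unspecified) gesture set.
GESTURE_TO_COMMAND = {
    "palm_forward": {"linear_x": 0.0, "angular_z": 0.0},   # stop
    "swipe_left":   {"linear_x": 0.0, "angular_z": 0.5},   # turn left
    "swipe_right":  {"linear_x": 0.0, "angular_z": -0.5},  # turn right
    "fist_up":      {"linear_x": 0.3, "angular_z": 0.0},   # move forward
}

def map_gesture(label, confidence, threshold=0.8):
    """Return a velocity command; fall back to a safe stop for
    low-confidence or unknown gestures."""
    if confidence < threshold or label not in GESTURE_TO_COMMAND:
        return {"linear_x": 0.0, "angular_z": 0.0}
    return GESTURE_TO_COMMAND[label]
```

Falling back to a stop command on low confidence is a common safety choice for this kind of execution layer, though the paper does not state its exact policy.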
Continuous deformation always leads to performance degradation of a flexible triboelectric nanogenerator due to the Young's modulus mismatch of its different functional layers. In this work, we fabricated a fiber-shaped stretchable and tailorable triboelectric nanogenerator (FST-TENG) based on the geometric construction of a steel wire as the electrode and the judicious selection of silicone rubber as the triboelectric layer. Owing to their great robustness and continuous conductivity, the FST-TENGs demonstrate high stability, stretchability, and even tailorability. For a single device ~6 cm in length and ~3 mm in diameter, an open-circuit voltage of ~59.7 V, a transferred charge of ~23.7 nC, a short-circuit current of ~2.67 μA, and an average power of ~2.13 μW can be obtained at 2.5 Hz. By knitting several FST-TENGs into a fabric or a bracelet, the device can harvest human motion energy and drive a wearable electronic device. Finally, it can also be woven onto the dorsum of a glove to monitor hand gestures, recognizing each individual finger, different bending angles, and the number of bent fingers by analyzing the voltage signals.
To address the tendency of back propagation (BP) neural networks to fall into local minima and their low convergence speed in gesture recognition, a new method that combines the chaos algorithm with the genetic algorithm (CGA) is proposed. Exploiting the ergodicity of the chaos algorithm and the global convergence of the genetic algorithm, the basic idea of this paper is to encode the weights and thresholds of the BP neural network, obtain a coarse optimal solution with the genetic algorithm, and then refine that solution to an accurate optimum by adding a chaotic disturbance. The results of the chaotic genetic algorithm are used as the initial weights and thresholds of the BP neural network to recognize gestures. Simulation and experimental results show that the real-time performance and accuracy of gesture recognition are greatly improved with CGA.
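The chaotic-refinement step described above can be sketched as follows: a coarse solution (e.g., GA-optimized BP weights) is perturbed with a logistic-map chaotic sequence, and a perturbation is kept only when it reduces the loss. The loss function, perturbation scale, and iteration count here are stand-ins, not the paper's actual setup.

```python
import numpy as np

def chaotic_refine(weights, loss_fn, steps=200, scale=0.05, x0=0.7):
    """Refine `weights` by hill-climbing with logistic-map perturbations."""
    x = x0
    best_w, best_loss = weights.copy(), loss_fn(weights)
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)                       # logistic map, chaotic at r=4
        candidate = best_w + scale * (2.0 * x - 1.0)  # perturb within [-scale, scale]
        cand_loss = loss_fn(candidate)
        if cand_loss < best_loss:                     # keep only improvements
            best_w, best_loss = candidate, cand_loss
    return best_w, best_loss
```

In the paper's pipeline, the refined vector would then be decoded back into BP weights and thresholds before training continues.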
Dynamic hand gesture recognition is a desired alternative means for human-computer interaction. This paper presents a hand gesture recognition system designed for the flight control of unmanned aerial vehicles (UAVs). A data representation model is introduced that represents a dynamic gesture sequence by converting the 4-D spatiotemporal data to a 2-D matrix and a 1-D array. To train the system to recognize the designed gestures, skeleton data collected from a Leap Motion Controller are converted to the two data models. A training dataset of 9124 samples and a testing dataset of 1938 samples were created to train and test three deep learning neural networks: a 2-layer fully connected neural network, a 5-layer fully connected neural network, and an 8-layer convolutional neural network. The static testing results show that the 2-layer fully connected neural network achieves an average accuracy of 96.7% on scaled datasets and 12.3% on non-scaled datasets. The 5-layer fully connected neural network achieves an average accuracy of 98.0% on scaled datasets and 89.1% on non-scaled datasets. The 8-layer convolutional neural network achieves an average accuracy of 89.6% on scaled datasets and 96.9% on non-scaled datasets. Testing on a drone-kit simulator and a real drone shows that this system is feasible for drone flight control.
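The spatiotemporal-to-matrix conversion named above can be illustrated with a simple reshape: each frame's joint coordinates are flattened into one row, so a sequence becomes a 2-D matrix a fully connected network can consume. The frame and joint counts are arbitrary examples, not the paper's exact representation.

```python
import numpy as np

def to_2d(sequence):
    """sequence: array of shape (frames, joints, 3) -> (frames, joints * 3)."""
    frames, joints, dims = sequence.shape
    return sequence.reshape(frames, joints * dims)

seq = np.zeros((30, 21, 3))   # e.g., 30 frames, 21 hand joints, xyz each
matrix = to_2d(seq)           # 2-D matrix of per-frame feature rows
```

Flattening the matrix once more (`matrix.ravel()`) would give the 1-D array form a plain fully connected input layer expects.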
Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation, and sign language. With increasing motion sensor development, multiple data sources have become available, which has led to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. To solve this problem, a novel approach that integrates motion, audio, and video models is proposed, using a dataset captured by Kinect. The proposed system recognizes observed gestures using the three models, whose recognition results are integrated by the proposed framework to produce the final output. The motion and audio models are learned with Hidden Markov Models, while a Random Forest classifier is used to learn the video model. In experiments testing the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying the feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All experiments are conducted on the dataset provided by the organizer of MMGRC, a workshop for the Multi-Modal Gesture Recognition Challenge. The comparison shows that the multi-modal model composed of the three models achieves the highest recognition rate, indicating that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system provides application technology for understanding human actions of daily life more precisely.
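The integration of the three models' outputs can be sketched as a simple late-fusion step: per-class scores from the motion, audio, and video classifiers are combined by a weighted sum and the top class is chosen. The weights and class counts below are placeholders; the paper's actual fusion framework may differ.

```python
import numpy as np

def fuse(motion_p, audio_p, video_p, weights=(0.4, 0.3, 0.3)):
    """Combine per-class scores from three modality classifiers
    and return the index of the winning class."""
    scores = (weights[0] * np.asarray(motion_p)
              + weights[1] * np.asarray(audio_p)
              + weights[2] * np.asarray(video_p))
    return int(np.argmax(scores))

# Example: each modality alone is ambiguous, but the combined
# evidence favors class 1.
motion = [0.6, 0.3, 0.1]
audio  = [0.2, 0.5, 0.3]
video  = [0.1, 0.6, 0.3]
```

This complementary-evidence effect is exactly what the abstract credits for the multi-modal model's higher recognition rate.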
In human-machine interaction, robotic hands are useful in many scenarios. Operating robotic hands via gestures instead of handles greatly improves the convenience and intuitiveness of human-machine interaction. Here, we present a magnetic-array-assisted sliding triboelectric sensor for achieving real-time gesture interaction between a human hand and a robotic hand. With a finger's traction movement of flexion or extension, the sensor induces positive/negative pulse signals. By counting the pulses per unit time, the degree, speed, and direction of finger motion can be judged in real time. The magnetic array plays an important role in generating the quantifiable pulses. The two parts of the designed magnetic array transform sliding motion into contact separation and constrain the sliding pathway, respectively, thus improving the durability, low-speed signal amplitude, and stability of the system. This direct quantization approach and the optimization of the wearable gesture sensor provide a new strategy for achieving natural, intuitive, and real-time human-robot interaction.
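The pulse-counting readout described above can be sketched directly: the sign of each pulse indicates flexion versus extension, the pulse count tracks the degree of motion, and the count per unit time gives the speed. The detection threshold here is an assumed value, not the sensor's actual calibration.

```python
def decode_pulses(samples, window_s, thresh=0.5):
    """Decode a window of voltage samples into direction, degree
    (pulse count), and rate; `thresh` is an illustrative pulse threshold."""
    pulses = [s for s in samples if abs(s) > thresh]
    pos = sum(1 for p in pulses if p > 0)
    neg = len(pulses) - pos
    direction = "flexion" if pos >= neg else "extension"
    count = len(pulses)
    return {"direction": direction, "count": count, "rate_hz": count / window_s}
```

A real implementation would detect pulse edges in a streaming signal rather than thresholding isolated samples, but the quantization idea is the same.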
In this article, to reduce the complexity and improve the generalization ability of current gesture recognition systems, we propose a novel SE-CNN attention architecture for sEMG-based hand gesture recognition. The proposed algorithm introduces a temporal squeeze-and-excite block into a simple CNN architecture and uses it to recalibrate the weights of the feature outputs from the convolutional layer. By enhancing important features while suppressing useless ones, the model performs gesture recognition efficiently. The final step of the proposed algorithm applies a simple attention mechanism to enhance the learned representations of sEMG signals for multi-channel sEMG-based gesture recognition tasks. To evaluate the effectiveness and accuracy of the proposed algorithm, we conduct experiments on the multi-gesture datasets Ninapro DB4 and Ninapro DB5 for both inter-session validation and subject-wise cross-validation. In a series of comparisons with previous models, the proposed algorithm effectively increases robustness, with improved gesture recognition performance and generalization ability.
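The squeeze-and-excite recalibration named above can be shown in a minimal NumPy sketch: channel descriptors are pooled ("squeeze"), passed through a small bottleneck ("excite"), and used to rescale the feature map per channel. The weights here are random stand-ins; in the actual model they are learned, and the block operates inside a trained CNN.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(features, w1, w2):
    """features: (channels, time). w1/w2 form the bottleneck MLP.
    Returns the per-channel recalibrated feature map."""
    squeeze = features.mean(axis=1)                        # (channels,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # gates in (0, 1)
    return features * excite[:, None]                      # rescale each channel
```

Because the gates lie in (0, 1), the block can only attenuate channels, which is how it suppresses useless features while preserving the important ones.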
The stable grasping gesture of a novel cable-driven robotic hand is analyzed. The robotic hand is underactuated, using tendon-pulley transmission and a parallel four-linkage mechanism to realize grasping. The structural design and a basic grasping strategy of one finger are introduced. Based on the established round-object enveloping grasp model, the relationship between the contact and driving forces in a finger and the conditions for stable grasping are expounded. A method of interpolation and iteration is proposed to obtain the stable grasping gesture of the cable-driven hand grasping a round target. Quasi-static analysis in ADAMS validated the variation of the grasping forces, illustrating the feasibility and validity of the proposed analytical method. Three basic types of grasping gestures of the underactuated hand were obtained on the basis of the relationship between the contact forces and the position of the grasped object.
A robotic wheelchair is assumed to be capable of tasks such as navigation and obstacle detection using sensors and intelligence. The initial part of the work was the development of a cap-controlled wheelchair to test and verify gesture operation. Following that, a real-time operating wheelchair was developed with a mode-changing option between joystick control and head-gesture control, as per the user's requirement. The wheelchair consists of an MPU6050 sensor, a joystick module, an RF module, a battery, a DC motor, a toggle switch, and an Arduino. Head movement is detected by the MPU6050 and the signal is transmitted to the microcontroller, which processes it and enables the wheelchair's motion for navigation. The wheelchair is capable of moving in the left, right, forward, and backward directions, and its speed was 4.8 km/h when tested. The design objectives included cost effectiveness without compromising safety, flexibility, and mobility for the users.
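The head-gesture control loop described above boils down to mapping the tilt angles reported by the MPU6050 to one of five motion states. A hedged sketch follows; the ±15° dead zone and the pitch/roll conventions are assumed values for illustration, not the article's calibration.

```python
def head_to_motion(pitch, roll, dead_zone=15.0):
    """Map head tilt angles (degrees) to a wheelchair motion command.
    Angles inside the dead zone mean 'hold still'."""
    if pitch > dead_zone:
        return "forward"
    if pitch < -dead_zone:
        return "backward"
    if roll > dead_zone:
        return "right"
    if roll < -dead_zone:
        return "left"
    return "stop"
```

On the actual hardware this function would run in the Arduino loop, with the returned state driving the motor controller.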
Hand gesture recognition is a popular topic in computer vision and makes human-computer interaction more flexible and convenient. The representation of hand gestures is critical for recognition. In this paper, we propose a new method to measure the similarity between hand gestures and exploit it for hand gesture recognition. The depth maps of hand gestures captured via Kinect sensors are used in our method, where the 3D hand shapes can be segmented from cluttered backgrounds. To extract the pattern of salient 3D shape features, we propose a new descriptor, 3D Shape Context, for 3D hand gesture representation. The 3D Shape Context information of each 3D point is obtained at multiple scales, because both local shape context and global shape distribution are necessary for recognition. The descriptions of all the 3D points construct the hand gesture representation, and hand gesture recognition is performed via the dynamic time warping algorithm. Extensive experiments are conducted on multiple benchmark datasets. The experimental results verify that the proposed method is robust to noise, articulated variations, and rigid transformations. Our method outperforms state-of-the-art methods in comparisons of accuracy and efficiency.
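Dynamic time warping, the matching stage named above, aligns two sequences of per-frame descriptors while tolerating differences in speed. A compact sketch is shown below; plain Euclidean distance stands in for the paper's 3D Shape Context comparison.

```python
import numpy as np

def dtw(a, b):
    """Classic O(n*m) dynamic time warping distance between two
    sequences of frame descriptors (scalars or vectors)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(np.atleast_1d(a[i - 1]) - np.atleast_1d(b[j - 1]))
            # extend the cheapest of the three admissible alignments
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]
```

Because the warping path may repeat frames, a gesture performed slowly still matches its faster template, which is what makes DTW a natural fit for gesture sequences.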
Hand gestures are a natural way for human-robot interaction. Vision-based dynamic hand gesture recognition has become a hot research topic due to its various applications. This paper presents a novel deep learning network for hand gesture recognition. The network integrates several well-proven modules to learn both short-term and long-term features from video inputs while avoiding intensive computation. To learn short-term features, each video input is segmented into a fixed number of frame groups. A frame is randomly selected from each group and represented as an RGB image as well as an optical flow snapshot. These two entities are fused and fed into a convolutional neural network (ConvNet) for feature extraction, with the ConvNets for all groups sharing parameters. To learn long-term features, the outputs from all ConvNets are fed into a long short-term memory (LSTM) network, which predicts the final classification result. The new model has been tested on two popular hand gesture datasets, the Jester dataset and the Nvidia dataset. Compared with other models, our model produced very competitive results. The robustness of the new model has also been proven on an augmented dataset with enhanced diversity of hand gestures.
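The frame-group sampling step described above can be sketched as follows: a clip is split into a fixed number of roughly equal groups and one frame index is drawn from each, giving a sparse but time-ordered view of the sequence. The group count is an example value, and the seeded generator is only for reproducibility.

```python
import random

def sample_frames(num_frames, num_groups=8, rng=None):
    """Return one randomly chosen frame index per group, in order.
    Seeded RNG used by default so the sketch is reproducible."""
    rng = rng or random.Random(0)
    indices = []
    for g in range(num_groups):
        lo = g * num_frames // num_groups
        hi = max((g + 1) * num_frames // num_groups, lo + 1)  # never-empty group
        indices.append(min(rng.randrange(lo, hi), num_frames - 1))
    return indices
```

Each selected frame would then be paired with its optical-flow snapshot and passed through the shared-parameter ConvNet.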
Recognition of dynamic hand gestures in real time is a difficult task because the system can never know when or where a gesture starts and ends in a video stream. Many researchers have been working on vision-based gesture recognition due to its various applications. This paper proposes a deep learning architecture based on the combination of a 3D Convolutional Neural Network (3D-CNN) and a Long Short-Term Memory (LSTM) network. The proposed architecture extracts spatiotemporal information from video sequence input while avoiding extensive computation. The 3D-CNN is used for the extraction of spectral and spatial features, which are then given to the LSTM network for classification. The proposed model is a lightweight architecture with only 3.7 million training parameters. The model was evaluated on 15 classes from the publicly available 20BN-Jester dataset, trained on 2000 video clips per class split into 80% training and 20% validation sets. Accuracies of 99% and 97% were achieved on the training and testing data, respectively. We further show that the combination of 3D-CNN and LSTM gives superior results compared to MobileNetV2 + LSTM.
Recently, vision-based gesture recognition (VGR) has become a hot research spot in human-computer interaction (HCI). Unlike gesture recognition methods based on data gloves or other wearable sensors, vision-based gesture recognition can lead to more natural and intuitive HCI interactions. This paper reviews state-of-the-art vision-based gesture recognition methods across the stages of the gesture recognition process: (1) image acquisition and pre-processing, (2) gesture segmentation, (3) gesture tracking, (4) feature extraction, and (5) gesture classification. The paper also analyzes the advantages and disadvantages of these various methods in detail. Finally, the challenges of vision-based gesture recognition in haptic rendering and future research directions are discussed.
Addressing the diversity of hand gesture traces produced by different people, this article presents a novel method called cluster dynamic time warping (CDTW), which is based on main-axis classification and sample clustering of individuals. The method performs well in reducing recognition complexity and shows strong robustness across individuals. Data acquisition is implemented on a triaxial accelerometer with a 100 Hz sampling frequency. A database of 2400 traces was created by ten subjects for system testing and evaluation. The overall accuracy was found to be 98.84% for user-independent gesture recognition and 96.7% for user-dependent gesture recognition, higher than the dynamic time warping (DTW), derivative DTW (DDTW), and piecewise DTW (PDTW) methods. The computational cost of CDTW in this project was reduced by a factor of 11,520 compared with DTW.
In this study, we developed a system based on deep space-time neural networks for gesture recognition. When users change or the number of gesture categories increases, the accuracy of gesture recognition decreases considerably, because most gesture recognition systems cannot accommodate both user differentiation and gesture diversity. To overcome the limitations of existing methods, we designed a one-dimensional parallel long short-term memory-fully convolutional network (LSTM-FCN) model to extract gesture features of different dimensions. The LSTM can learn complex temporal dynamics, whereas the FCN can predict gestures efficiently by extracting deep, abstract spatial features. In the experiment, 50 types of gestures from five users were collected and evaluated. The experimental results demonstrate the effectiveness of the system and its robustness to various gestures and individual changes. Statistical analysis of the recognition results indicated an average accuracy of approximately 98.9%.
Experiment and dynamic simulation were combined to obtain the loads on a bicycle frame. A dynamic model of the body-bicycle system was built in ADAMS. The rider's body gestures under different riding conditions were then captured by a motion analysis system. Dynamic simulation was carried out after the body motion data were input into the simulation system in ADAMS, and a series of loads that the body applies on the head tube, seat pillar, and bottom bracket were obtained. The results show that the loads on the frame and their distribution differ markedly under various riding conditions. Finally, finite element analysis in ANSYS showed that the stress and its distribution on the frame were clearly different when the frame was loaded according to the bicycle testing standard and the simulation, respectively. An efficient way to obtain accurate loads on a bicycle frame is proposed, which is significant for cycling safety and will also form the basis for digital, lightweight, and customized bicycle design.
Hand gesture recognition (HGR) is used in numerous applications, including medical healthcare, industrial purposes, and sports detection. We have developed a real-time hand gesture recognition system using inertial sensors for smart home applications. Developing such a model benefits the medical health field (the elderly or disabled). Home automation has also proven to be a tremendous benefit for the elderly and disabled: residents are admitted to smart homes for comfort, luxury, improved quality of life, and protection against intrusion and burglars. This paper proposes a novel system that uses principal component analysis and linear discriminant analysis for feature extraction, and a random forest as the classifier, to improve HGR accuracy. We have achieved an accuracy of 94% on the publicly benchmarked HGR dataset. The proposed system can be used to detect hand gestures in the healthcare industry as well as in the industrial and educational sectors.
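The dimensionality-reduction front end of the pipeline above can be sketched with PCA via SVD: inertial feature vectors are centered and projected onto the top principal components before classification. This is a generic sketch, not the paper's implementation, and the random-forest stage is omitted; any classifier can consume the projected features.

```python
import numpy as np

def pca_project(X, k):
    """X: (samples, features). Center X and project onto the top-k
    principal components. Returns the (samples, k) scores and the mean
    (needed to project new samples consistently)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)  # rows of vt = components
    return Xc @ vt[:k].T, mean
```

A new sample `x` would be projected the same way, `(x - mean) @ vt[:k].T`, before being handed to the trained classifier.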
Funding: supported by a research fund from Chosun University, 2024.
Funding: supported by the National Natural Science Foundation of China (NSFC) (No. 61804103); the National Key R&D Program of China (No. 2017YFA0205002); the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Nos. 18KJA535001 and 14KJB150020); the Natural Science Foundation of Jiangsu Province of China (Nos. BK20170343 and BK20180242); the China Postdoctoral Science Foundation (No. 2017M610346); the State Key Laboratory of Silicon Materials, Zhejiang University (No. SKL2018-03); the Nantong Municipal Science and Technology Program (No. GY12017001); the Jiangsu Key Laboratory for Carbon-Based Functional Materials & Devices, Soochow University (KSL201803); the Collaborative Innovation Center of Suzhou Nano Science & Technology; the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD); the 111 Project; and the Joint International Research Laboratory of Carbon-Based Functional Materials and Devices.
Abstract: Continuous deformation always leads to performance degradation of a flexible triboelectric nanogenerator due to the Young's modulus mismatch of its different functional layers. In this work, we fabricated a fiber-shaped, stretchable, and tailorable triboelectric nanogenerator (FST-TENG) based on the geometric construction of a steel wire as the electrode and the judicious selection of silicone rubber as the triboelectric layer. Owing to their great robustness and continuous conductivity, the FST-TENGs demonstrate high stability, stretchability, and even tailorability. For a single device ~6 cm in length and ~3 mm in diameter, an open-circuit voltage of ~59.7 V, a transferred charge of ~23.7 nC, a short-circuit current of ~2.67 μA, and an average power of ~2.13 μW can be obtained at 2.5 Hz. By knitting several FST-TENGs into a fabric or a bracelet, the device can harvest human motion energy and drive a wearable electronic device. Finally, it can also be woven onto the dorsum of a glove to monitor hand gestures, recognizing each individual finger, different bending angles, and the number of bent fingers by analyzing the voltage signals.
Funding: Supported by the Natural Science Foundation of Heilongjiang Province Youth Fund (No. QC2014C054), the Foundation for University Young Key Scholars of Heilongjiang Province (No. 1254G023), and the Science Funds for the Young Innovative Talents of HUST (No. 201304).
Abstract: To address the back propagation (BP) neural network's tendency to fall into local minima and its slow convergence in gesture recognition, a new method combining the chaos algorithm with the genetic algorithm (CGA) is proposed. Exploiting the ergodicity of the chaos algorithm and the global convergence of the genetic algorithm, the basic idea of this paper is to encode the weights and thresholds of the BP neural network and obtain a coarse optimal solution with the genetic algorithm; this solution is then refined into an accurate optimum by adding a chaotic disturbance. The optima found by the chaotic genetic algorithm are used as the initial weights and thresholds of the BP neural network for gesture recognition. Simulation and experimental results show that CGA greatly improves both the real-time performance and the accuracy of gesture recognition.
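The chaotic-disturbance refinement step can be roughly illustrated as follows (a minimal sketch: the logistic map, step scale, and greedy acceptance rule here are illustrative assumptions, not the paper's exact procedure):

```python
def logistic_map(x, mu=4.0):
    # Classic logistic map; mu = 4 gives fully chaotic behaviour on (0, 1).
    return mu * x * (1.0 - x)

def chaotic_refine(weights, fitness, iters=200, scale=0.05, seed=0.3):
    """Refine a GA-found weight vector by adding a small chaotic
    disturbance and greedily keeping any candidate that lowers
    `fitness` (lower is better)."""
    best, best_f = list(weights), fitness(weights)
    x = seed
    for _ in range(iters):
        x = logistic_map(x)
        # Map the chaotic variable from (0, 1) to (-scale, scale).
        delta = scale * (2.0 * x - 1.0)
        cand = [w + delta for w in best]
        f = fitness(cand)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f
```

In the full CGA, this refinement would run on the GA's best chromosome before its weights are loaded into the BP network.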
Abstract: Dynamic hand gesture recognition is a desirable alternative means of human-computer interaction. This paper presents a hand gesture recognition system designed for controlling flights of unmanned aerial vehicles (UAVs). A data representation model is introduced that represents a dynamic gesture sequence by converting the 4-D spatiotemporal data to a 2-D matrix and a 1-D array. To train the system to recognize the designed gestures, skeleton data collected from a Leap Motion Controller are converted into these two data models. A training dataset of 9124 samples and a testing dataset of 1938 samples were created to train and test three proposed deep learning neural networks: a 2-layer fully connected neural network, a 5-layer fully connected neural network, and an 8-layer convolutional neural network. The static testing results show that the 2-layer fully connected network achieves an average accuracy of 96.7% on scaled datasets and 12.3% on non-scaled datasets; the 5-layer fully connected network achieves 98.0% on scaled and 89.1% on non-scaled datasets; and the 8-layer convolutional network achieves 89.6% on scaled and 96.9% on non-scaled datasets. Testing on a drone-kit simulator and a real drone shows that the system is feasible for drone flight control.
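The 4-D-to-2-D/1-D flattening described above might look like the following (a sketch assuming a sequence of T frames × J joints × 3 coordinates; the paper's exact layout may differ):

```python
def to_matrix(sequence):
    """Flatten a gesture sequence of shape (T frames, J joints, 3 coords)
    into a T x (J*3) matrix -- one row of joint coordinates per frame."""
    return [[c for joint in frame for c in joint] for frame in sequence]

def to_array(sequence):
    """Flatten the whole sequence into a single 1-D array, frame by frame."""
    return [c for frame in sequence for joint in frame for c in joint]
```

The 2-D matrix form suits convolutional layers, while the 1-D array form suits fully connected layers.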
Funding: Supported by a Grant-in-Aid for Young Scientists (A) (Grant No. 26700021) from the Japan Society for the Promotion of Science, and by the Strategic Information and Communications R&D Promotion Programme (Grant No. 142103011) of the Ministry of Internal Affairs and Communications.
Abstract: Gesture recognition is used in many practical applications, such as human-robot interaction, medical rehabilitation, and sign language. With advances in motion sensing, multiple data sources have become available, leading to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. To solve this problem, a novel approach that integrates motion, audio, and video models is proposed, using a dataset captured by Kinect. The proposed system recognizes observed gestures using the three models; their recognition results are integrated by the proposed framework, and the combined output becomes the final result. The motion and audio models are learned using Hidden Markov Models, while Random Forest is used to learn the video model. In experiments testing the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying the feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All experiments are conducted on the dataset provided by the organizer of MMGRC, a workshop for the Multi-Modal Gesture Recognition Challenge. The comparison results show that the multi-modal model composed of all three models achieves the highest recognition rate, indicating that the complementary relationship among the three models improves gesture recognition accuracy. The proposed system provides application technology for understanding everyday human actions more precisely.
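The integration of the three unimodal outputs can be illustrated with a simple weighted late-fusion scheme (an assumed fusion rule for illustration; the paper's actual framework may combine the models differently):

```python
def fuse(scores_list, weights=None):
    """Combine per-class score dictionaries from several unimodal models
    (e.g. motion, audio, video) by weighted score summation and return
    the top-scoring class label."""
    if weights is None:
        weights = [1.0] * len(scores_list)
    fused = {}
    for scores, w in zip(scores_list, weights):
        for label, s in scores.items():
            fused[label] = fused.get(label, 0.0) + w * s
    return max(fused, key=fused.get)
```

Late fusion of this kind lets each modality compensate for the others when a single sensor's evidence is ambiguous.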
Funding: This work was supported by the National Natural Science Foundation of China (Nos. 51902035 and 52073037), the Natural Science Foundation of Chongqing (No. cstc2020jcyj-msxmX0807), the Fundamental Research Funds for the Central Universities (Nos. 2020CDJ-LHSS-001 and 2019CDXZWL001), and the Chongqing Graduate Tutor Team Construction Project (No. ydstd1832).
Abstract: In human-machine interaction, robotic hands are useful in many scenarios. Operating robotic hands via gestures instead of handles greatly improves the convenience and intuitiveness of human-machine interaction. Here, we present a magnetic-array-assisted sliding triboelectric sensor for real-time gesture interaction between a human hand and a robotic hand. With a finger's flexion or extension, the sensor induces positive/negative pulse signals. By counting the pulses per unit time, the degree, speed, and direction of finger motion can be judged in real time. The magnetic array plays an important role in generating the quantifiable pulses: its two designed parts transform sliding motion into contact-separation and constrain the sliding pathway, respectively, thereby improving the durability, low-speed signal amplitude, and stability of the system. This direct quantization approach and the optimization of the wearable gesture sensor provide a new strategy for natural, intuitive, real-time human-robot interaction.
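Counting the induced positive/negative pulses to estimate the degree and direction of finger motion could be sketched as follows (the threshold value and majority rule are illustrative assumptions, not the paper's calibration):

```python
def decode_pulses(signal, threshold=0.5):
    """Count positive and negative pulses in a window of sensor output.
    Returns (degree, direction): the pulse count tracks the bending
    degree, and the majority sign indicates flexion (+1) vs extension (-1)."""
    pos = neg = 0
    prev = 0.0
    for v in signal:
        # Count a pulse on each crossing of the +/- threshold.
        if v > threshold and prev <= threshold:
            pos += 1
        elif v < -threshold and prev >= -threshold:
            neg += 1
        prev = v
    degree = pos + neg
    direction = 1 if pos >= neg else -1
    return degree, direction
```

Dividing the pulse count by the window duration would give the motion speed mentioned in the abstract.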
Funding: Funded by the National Key Research and Development Program of China (No. 2017YFB1303200), the NSFC (Nos. 81871444, 62071241, 62075098, and 62001240), the Leading-Edge Technology and Basic Research Program of Jiangsu (No. BK20192004D), and the Jiangsu Graduate Scientific Research Innovation Programme (Nos. KYCX20_1391 and KYCX21_1557).
Abstract: In this article, to reduce the complexity and improve the generalization ability of current gesture recognition systems, we propose a novel SE-CNN attention architecture for sEMG-based hand gesture recognition. The proposed algorithm introduces a temporal squeeze-and-excite block into a simple CNN architecture and uses it to recalibrate the weights of the feature outputs from the convolutional layer. By enhancing important features while suppressing useless ones, the model performs gesture recognition efficiently. The final step of the proposed algorithm applies a simple attention mechanism to enhance the learned representations of sEMG signals for multi-channel sEMG-based gesture recognition tasks. To evaluate the effectiveness and accuracy of the proposed algorithm, we conduct experiments on the multi-gesture datasets Ninapro DB4 and Ninapro DB5 for both inter-session validation and subject-wise cross-validation. In a series of comparisons with previous models, the proposed algorithm effectively increases robustness, with improved gesture recognition performance and generalization ability.
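The squeeze-and-excite recalibration idea can be sketched in plain NumPy (a minimal sketch: the time-averaged squeeze and the weight shapes `w1`, `w2` are illustrative assumptions, not the paper's exact block):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, w1, w2):
    """Squeeze-and-excite over a feature map x of shape (T, C).
    Squeeze: average over time -> (C,); excite: two small dense layers
    (ReLU then sigmoid) -> per-channel gates; finally rescale x."""
    s = x.mean(axis=0)            # squeeze: (C,)
    h = np.maximum(0.0, s @ w1)   # reduction layer: (C/r,)
    g = sigmoid(h @ w2)           # channel gates in (0, 1): (C,)
    return x * g                  # recalibrate, broadcasting over time
```

The learned gates emphasize informative sEMG channels while damping noisy ones, which is the recalibration effect described above.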
Funding: The National Natural Science Foundation of China (Nos. U1613201 and 51275107) and Shenzhen Research Funds (No. JCYJ20170413104438332).
Abstract: The stable grasping gesture of a novel cable-driven robotic hand is analyzed. The robotic hand is underactuated, using a tendon-pulley transmission and a parallel four-linkage mechanism to realize grasping. The structural design and a basic grasping strategy for one finger are introduced. Based on the established round-object enveloping grasp model, the relationship between the contact and driving forces in a finger and the conditions for stable grasping are expounded. A method of interpolation and iteration is proposed to obtain the stable grasping gesture of the cable-driven hand grasping a round target. Quasi-static analysis in ADAMS validated the variation of the grasping forces, illustrating the feasibility and validity of the proposed analytical method. Three basic types of grasping gestures of the underactuated hand were obtained on the basis of the relationship between the contact forces and the position of the grasped object.
Abstract: A robotic wheelchair is assumed to be capable of tasks such as navigation and obstacle detection using sensors and intelligence. The initial part of the work was the development of a cap-controlled wheelchair to test and verify gesture operation. Following that, a real-time operating wheelchair was developed, with the option to switch between joystick control mode and head gesture control mode according to the user's requirement. The wheelchair consists of an MPU6050 sensor, a joystick module, an RF module, a battery, a DC motor, a toggle switch, and an Arduino. Head movement is detected by the MPU6050 and the signal is transmitted to the microcontroller, which processes it and drives the wheelchair for navigation. The wheelchair can move in the left, right, forward, and backward directions, and its speed was 4.8 km/h when tested. The design objectives included cost effectiveness without compromising safety, flexibility, and mobility for the users.
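The head-gesture-to-command mapping could be as simple as angle thresholds on the MPU6050 orientation readings (the dead-zone value and command names below are illustrative assumptions, not from the article):

```python
def head_command(pitch, roll, dead_zone=15.0):
    """Map head orientation angles (degrees) from an IMU such as the
    MPU6050 to a wheelchair motion command. Angles inside the dead
    zone are treated as neutral so small head movements are ignored."""
    if pitch > dead_zone:
        return "forward"
    if pitch < -dead_zone:
        return "backward"
    if roll > dead_zone:
        return "right"
    if roll < -dead_zone:
        return "left"
    return "stop"
```

On the actual hardware, the microcontroller would evaluate this mapping in its main loop and drive the DC motors accordingly.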
Funding: Supported by the National Natural Science Foundation of China (Nos. 61773272 and 61976191), the Six Talent Peaks Project of Jiangsu Province, China (No. XYDXX-053), and the Suzhou Research Project of Technical Innovation, Jiangsu, China (No. SYG201711).
Abstract: Hand gesture recognition is a popular topic in computer vision and makes human-computer interaction more flexible and convenient. The representation of hand gestures is critical for recognition. In this paper, we propose a new method to measure the similarity between hand gestures and exploit it for hand gesture recognition. Depth maps of hand gestures captured via Kinect sensors are used in our method, from which the 3D hand shapes can be segmented out of cluttered backgrounds. To extract the pattern of salient 3D shape features, we propose a new descriptor, 3D Shape Context, for 3D hand gesture representation. The 3D Shape Context information of each 3D point is obtained at multiple scales, because both local shape context and global shape distribution are necessary for recognition. The descriptions of all the 3D points together constitute the hand gesture representation, and hand gesture recognition is performed via the dynamic time warping algorithm. Extensive experiments are conducted on multiple benchmark datasets. The experimental results verify that the proposed method is robust to noise, articulated variations, and rigid transformations. Our method outperforms state-of-the-art methods in comparisons of accuracy and efficiency.
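The binning core of a shape-context-style descriptor can be sketched as follows (a simplified 1-D radial histogram; the actual 3D Shape Context also bins angular coordinates and is computed at multiple scales):

```python
import numpy as np

def shape_context_3d(points, center, r_bins=(0.25, 0.5, 1.0, 2.0)):
    """Toy radial histogram inspired by the 3D Shape Context idea:
    bin the normalized distances from `center` to every other 3D point
    into radial shells and return the normalized bin counts."""
    d = np.linalg.norm(points - center, axis=1)
    d = d[d > 0]                    # drop the center point itself
    d = d / (d.mean() + 1e-9)       # scale-normalize the distances
    hist = np.zeros(len(r_bins) + 1)
    for dist in d:
        hist[np.searchsorted(r_bins, dist)] += 1
    return hist / hist.sum()
```

Comparing such histograms point-by-point (e.g. with a chi-squared distance) yields the gesture similarity that DTW then aligns over time.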
Abstract: Hand gestures are a natural means of human-robot interaction. Vision-based dynamic hand gesture recognition has become a hot research topic due to its various applications. This paper presents a novel deep learning network for hand gesture recognition. The network integrates several well-proven modules to learn both short-term and long-term features from video inputs while avoiding intensive computation. To learn short-term features, each video input is segmented into a fixed number of frame groups. A frame is randomly selected from each group and represented as an RGB image as well as an optical-flow snapshot. These two entities are fused and fed into a convolutional neural network (ConvNet) for feature extraction; the ConvNets for all groups share parameters. To learn long-term features, the outputs from all ConvNets are fed into a long short-term memory (LSTM) network, which predicts the final classification result. The new model has been tested on two popular hand gesture datasets, the Jester dataset and the Nvidia dataset. Compared with other models, our model produced very competitive results. The robustness of the new model has also been demonstrated on an augmented dataset with enhanced diversity of hand gestures.
Abstract: Recognizing dynamic hand gestures in real time is a difficult task because the system can never know when or where a gesture starts and ends in a video stream. Many researchers have been working on vision-based gesture recognition due to its various applications. This paper proposes a deep learning architecture based on the combination of a 3D Convolutional Neural Network (3D-CNN) and a Long Short-Term Memory (LSTM) network. The proposed architecture extracts spatial-temporal information from video sequence inputs while avoiding extensive computation. The 3D-CNN extracts spectral and spatial features, which are then passed to the LSTM network for classification. The proposed model is a lightweight architecture with only 3.7 million training parameters. It was evaluated on 15 classes from the publicly available 20BN-Jester dataset, trained on 2000 video clips per class split into 80% training and 20% validation sets. Accuracies of 99% and 97% were achieved on the training and testing data, respectively. We further show that the combination of a 3D-CNN with an LSTM gives superior results compared to MobileNetV2 + LSTM.
Funding: Supported by the National Natural Science Foundation of China (Nos. 61773205 and 61773219), the Fundamental Research Funds for the Central Universities (Nos. NS2016032 and NS2019018, Nanjing University of Aeronautics and Astronautics), a scholarship from the China Scholarship Council (No. 201906835020), and the Fundamental Research Funds for the Central Universities (Graduate Student Innovation Base Open Fund Project of NUAA, No. kfjj20190307).
Abstract: Recently, vision-based gesture recognition (VGR) has become a hot research topic in human-computer interaction (HCI). Unlike gesture recognition methods that rely on data gloves or other wearable sensors, vision-based gesture recognition can lead to more natural and intuitive HCI. This paper reviews state-of-the-art vision-based gesture recognition methods across the stages of the gesture recognition process: (1) image acquisition and pre-processing, (2) gesture segmentation, (3) gesture tracking, (4) feature extraction, and (5) gesture classification. It also analyzes the advantages and disadvantages of these methods in detail. Finally, the challenges of vision-based gesture recognition in haptic rendering and future research directions are discussed.
Funding: National Key R&D Program of China (No. 2016YFB1001401).
Abstract: Addressing the diversity of hand gesture traces across different people, this article presents a novel method called cluster dynamic time warping (CDTW), based on main-axis classification and per-user sample clustering. The method performs well in reducing recognition complexity and is strongly robust across individuals. Data acquisition is implemented on a triaxial accelerometer with a 100 Hz sampling frequency. A database of 2400 traces was created by ten subjects for system testing and evaluation. The overall accuracy was found to be 98.84% for user-independent gesture recognition and 96.7% for user-dependent gesture recognition, higher than the dynamic time warping (DTW), derivative DTW (DDTW), and piecewise DTW (PDTW) methods. The computational cost of CDTW in this project was reduced 11,520-fold compared with DTW.
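The DTW recurrence that CDTW and its DDTW/PDTW baselines build on is short enough to show in full (standard textbook DTW; the main-axis classification and clustering layers of CDTW are omitted):

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic time warping distance between two 1-D traces.
    D[i][j] holds the minimal cumulative cost of aligning a[:i] with b[:j]."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because a stretched copy of a trace aligns at zero cost, DTW is tolerant of speed variation between users, which is why the reported accuracy gains come from clustering rather than from the alignment itself.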
Funding: Supported in part by the National Natural Science Foundation of China under Grant 61461013, in part by the Natural Science Foundation of Guangxi Province under Grant 2018GXNSFAA281179, and in part by the Dean Project of the Guangxi Key Laboratory of Wireless Broadband Communication and Signal Processing under Grant GXKL06160103.
Abstract: In this study, we developed a system based on deep space-time neural networks for gesture recognition. When users change or the number of gesture categories increases, the accuracy of gesture recognition decreases considerably, because most gesture recognition systems cannot accommodate both user differences and gesture diversity. To overcome the limitations of existing methods, we designed a one-dimensional parallel long short-term memory–fully convolutional network (LSTM-FCN) model to extract gesture features of different dimensions. The LSTM can learn complex temporal dynamics, whereas the FCN can predict gestures efficiently by extracting deep, abstract spatial features. In the experiment, 50 types of gestures from five users were collected and evaluated. The experimental results demonstrate the effectiveness of the system and its robustness to varied gestures and individual changes. Statistical analysis of the recognition results indicated an average accuracy of approximately 98.9%.
Funding: Supported by the Special Fund Project for Technology Innovation of Tianjin (No. 10FDZDGX00500) and the Tianjin Product Quality Inspection Technology Research Institute (No. 11-03).
Abstract: Experiment and dynamic simulation were combined to obtain the loads on a bicycle frame. A dynamic model of the body-bicycle system was built in ADAMS, and body gestures under different riding conditions were captured by a motion analysis system. Dynamic simulation was carried out after the body-motion data were input into the simulation system in ADAMS, and a series of loads that the body applies to the head tube, seat pillar, and bottom bracket were obtained. The results show that the loads on the frame and their distribution differ noticeably under various riding conditions. Finally, finite element analysis in ANSYS showed that the stress and its distribution on the frame differed noticeably depending on whether the frame was loaded according to the bicycle testing standard or according to the simulation. An efficient way to obtain the loads on a bicycle frame accurately was thus proposed, which is significant for cycling safety and will also serve as the basis for digitalized, lightened, and customized bicycle design.
Funding: Supported by a grant (No. 2021R1F1A1063634) of the Basic Science Research Program through the National Research Foundation (NRF), funded by the Ministry of Education, Republic of Korea.
Abstract: Hand gesture recognition (HGR) is used in numerous applications, including medical healthcare, industrial purposes, and sports detection. We have developed a real-time hand gesture recognition system using inertial sensors for the smart home application. Developing such a model benefits the medical health field (the elderly or disabled). Home automation has also proven to be a tremendous benefit for the elderly and disabled: residents are admitted to smart homes for comfort, luxury, improved quality of life, and protection against intrusion and burglars. This paper proposes a novel system that uses principal component analysis and linear discriminant analysis for feature extraction, with a random forest as the classifier, to improve HGR accuracy. We have achieved an accuracy of 94% on a publicly benchmarked HGR dataset. The proposed system can be used to detect hand gestures in the healthcare industry as well as in the industrial and educational sectors.
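The PCA stage of such a feature-extraction pipeline can be sketched with plain NumPy (PCA via SVD on mean-centered data; the LDA and random-forest stages are omitted, and this is a generic sketch rather than the paper's implementation):

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on the rows of X (samples x features); return the
    feature mean and the top-k principal components."""
    mu = X.mean(axis=0)
    # SVD of the centered data; rows of vt are principal directions.
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

def pca_transform(X, mu, comps):
    """Project samples onto the k principal components."""
    return (X - mu) @ comps.T
```

The reduced features would then be passed through LDA for class separation before training the random forest classifier.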