Journal Articles
2,770 articles found
1. Generating Social Interactions with Adolescents with Autism Spectrum Disorder, through a Gesture Imitation Game Led by a Humanoid Robot, in Collaboration with a Human Educator
Authors: Linda Vallée, Malik Koné, Olivier Asseu. Open Journal of Psychiatry, 2025, Issue 1, pp. 55-71 (17 pages)
This article describes a pilot study aiming at generating social interactions between a humanoid robot and adolescents with autism spectrum disorder (ASD), through the practice of a gesture imitation game. The participants were a 17-year-old young lady with ASD and intellectual deficit, and a control participant: a preadolescent with ASD but no intellectual deficit (Asperger syndrome). The game comprises four phases: greetings, pairing, imitation, and closing. Field educators were involved, playing specific roles: visual or physical inciter. The use of a robot allows for catching the participants' attention, playing the imitation game for a longer period of time than with a human partner, and preventing the game partner's negative facial expressions resulting from tiredness, impatience, or boredom. The participants' behavior was observed in terms of initial approach towards the robot, positioning relative to the robot in terms of distance and orientation, reactions to the robot's voice or moves, signs of happiness, and imitation attempts. Results suggest an increasingly natural approach towards the robot over the sessions, as well as a higher level of social interaction, based on the variations of the parameters listed above. We use these preliminary results to outline the next steps of our research work and to identify further perspectives, with this aim in mind: improving social interactions with adolescents with ASD and intellectual deficit, allowing for better integration of these people into our societies.
Keywords: Human-Robot Interaction (HRI); Autism Spectrum Disorder (ASD); Imitation; Artificial Intelligence; Gesture Recognition; Social Interaction
2. Study on User Interaction for Mixed Reality through Hand Gestures Based on Neural Network
Authors: BeomJun Jo, SeongKi Kim. Computers, Materials & Continua, 2025, Issue 11, pp. 2701-2714 (14 pages)
The rapid evolution of virtual reality (VR) and augmented reality (AR) technologies has significantly transformed human-computer interaction, with applications spanning entertainment, education, healthcare, industry, and remote collaboration. A central challenge in these immersive systems lies in enabling intuitive, efficient, and natural interactions. Hand gesture recognition offers a compelling solution by leveraging the expressiveness of human hands to facilitate seamless control without relying on traditional input devices such as controllers or keyboards, which can limit immersion. However, achieving robust gesture recognition requires overcoming challenges related to accurate hand tracking, complex environmental conditions, and minimizing system latency. This study proposes an artificial intelligence (AI)-driven framework for recognizing both static and dynamic hand gestures in VR and AR environments using skeleton-based tracking compliant with the OpenXR standard. Our approach employs a lightweight neural network architecture capable of real-time classification within approximately 1.3 ms while maintaining an average accuracy of 95%. We also introduce a novel dataset generation method to support the training of robust models and demonstrate consistent classification of diverse gestures across widely used commercial VR devices. This work represents one of the first studies to implement and validate dynamic hand gesture recognition in real time using standardized VR hardware, laying the groundwork for more immersive, accessible, and user-friendly interaction systems. By advancing AI-driven gesture interfaces, this research has the potential to broaden the adoption of VR and AR across diverse domains and enhance the overall user experience.
Keywords: Static hand gesture classification; Dynamic hand gesture classification; Virtual reality; Mixed reality
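The entry above describes classifying hand poses from OpenXR-style skeleton tracking with a lightweight network. As a rough illustration, here is a minimal sketch (not the paper's code) of a small classifier over flattened hand-joint coordinates; the joint count of 26 follows OpenXR hand tracking, while the class count, layer sizes, and use of PyTorch are assumptions made for this sketch.

```python
# Minimal sketch (not the paper's code): a lightweight MLP that classifies a
# static hand pose from skeleton joints such as those exposed by OpenXR hand
# tracking (26 joints per hand, x/y/z each). Class names are placeholders.
import torch
import torch.nn as nn

NUM_JOINTS = 26   # OpenXR hand-tracking joint count (assumption for this sketch)
NUM_CLASSES = 8   # e.g. pinch, fist, open palm, ... (placeholder gesture set)

class PoseMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_JOINTS * 3, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, NUM_CLASSES),
        )

    def forward(self, joints):              # joints: (batch, 26, 3)
        return self.net(joints.flatten(1))  # logits: (batch, NUM_CLASSES)

model = PoseMLP().eval()
with torch.no_grad():
    frame = torch.randn(1, NUM_JOINTS, 3)    # one tracked hand pose
    gesture_id = model(frame).argmax(dim=1)  # predicted class index
```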
3. Research on Human-Robot Interaction Technology Based on Gesture Recognition
Authors: Ming Hu. Journal of Electronic Research and Application, 2025, Issue 6, pp. 452-461 (10 pages)
With the growing application of intelligent robots in the service, manufacturing, and medical fields, efficient and natural interaction between humans and robots has become key to improving collaboration efficiency and user experience. Gesture recognition, as an intuitive and contactless interaction method, can overcome the limitations of traditional interfaces and enable real-time control and feedback of robot movements and behaviors. This study first reviews mainstream gesture recognition algorithms and their application on different sensing platforms (RGB cameras, depth cameras, and inertial measurement units). It then proposes a gesture recognition method based on multimodal feature fusion and a lightweight deep neural network that balances recognition accuracy with computational efficiency. At the system level, a modular human-robot interaction architecture is constructed, comprising perception, decision, and execution layers, and gesture commands are transmitted and mapped to robot actions in real time via the ROS communication protocol. Through multiple comparative experiments on public gesture datasets and a self-collected dataset, the proposed method's superiority is validated in terms of accuracy, response latency, and system robustness, while user-experience tests assess the interface's usability. The results provide a reliable technical foundation for robot collaboration and service in complex scenarios, offering broad prospects for practical application and deployment.
Keywords: Gesture recognition; Human-robot interaction; Multimodal feature fusion; Lightweight deep neural network; ROS; Real-time control
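The abstract above mentions mapping recognized gestures to robot actions in real time over ROS. The following is a minimal sketch, assuming ROS 1 with rospy and a velocity-controlled base listening on /cmd_vel; the gesture labels, the mapping table, and the topic choice are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumptions: ROS 1 with rospy, a /cmd_vel Twist interface, and
# an upstream recognizer that yields gesture labels). Not the paper's code.
import rospy
from geometry_msgs.msg import Twist

# Hypothetical mapping from recognized gesture label to (linear m/s, angular rad/s).
GESTURE_TO_CMD = {
    "palm_forward": (0.2, 0.0),   # move forward
    "palm_left":    (0.0, 0.5),   # turn left
    "palm_right":   (0.0, -0.5),  # turn right
    "fist":         (0.0, 0.0),   # stop
}

def publish_gesture(label, pub):
    lin, ang = GESTURE_TO_CMD.get(label, (0.0, 0.0))  # unknown gesture -> stop
    msg = Twist()
    msg.linear.x = lin
    msg.angular.z = ang
    pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("gesture_teleop")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
    rate = rospy.Rate(10)                     # 10 Hz command loop
    while not rospy.is_shutdown():
        publish_gesture("palm_forward", pub)  # placeholder: replace with recognizer output
        rate.sleep()
```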
4. Spiral Steel Wire Based Fiber-Shaped Stretchable and Tailorable Triboelectric Nanogenerator for Wearable Power Source and Active Gesture Sensor (Cited by 19)
Authors: Lingjie Xie, Xiaoping Chen, Zhen Wen, Yanqin Yang, Jihong Shi, Chen Chen, Mingfa Peng, Yina Liu, Xuhui Sun. Nano-Micro Letters (SCIE, EI, CAS, CSCD), 2019, Issue 3, pp. 36-45 (10 pages)
Continuous deformation always leads to performance degradation of a flexible triboelectric nanogenerator due to the Young's modulus mismatch of its different functional layers. In this work, we fabricated a fiber-shaped stretchable and tailorable triboelectric nanogenerator (FST-TENG) based on the geometric construction of a steel wire as the electrode and an ingenious selection of silicone rubber as the triboelectric layer. Owing to their great robustness and continuous conductivity, the FST-TENGs demonstrate high stability, stretchability, and even tailorability. For a single device ~6 cm in length and ~3 mm in diameter, an open-circuit voltage of ~59.7 V, a transferred charge of ~23.7 nC, a short-circuit current of ~2.67 μA, and an average power of ~2.13 μW can be obtained at 2.5 Hz. By knitting several FST-TENGs into a fabric or a bracelet, the device can harvest human motion energy and drive a wearable electronic device. Finally, it can also be woven onto the dorsum of a glove to monitor hand gestures, recognizing each individual finger, different bending angles, and the number of bent fingers by analyzing the voltage signals.
Keywords: Triboelectric nanogenerator; Stretchable; Human motion energy; Wearable power source; Active gesture sensor
5. Gesture Recognition Based on BP Neural Network Improved by Chaotic Genetic Algorithm (Cited by 18)
Authors: Dong-Jie Li, Yang-Yang Li, Jun-Xiang Li, Yu Fu. International Journal of Automation and Computing (EI, CSCD), 2018, Issue 3, pp. 267-276 (10 pages)
To address the tendency of back-propagation (BP) neural networks to fall into local minima and their low convergence speed in gesture recognition, a new method that combines the chaos algorithm with the genetic algorithm (CGA) is proposed. Exploiting the ergodicity of the chaos algorithm and the global convergence of the genetic algorithm, the basic idea is to encode the weights and thresholds of the BP neural network, obtain a coarse optimal solution with the genetic algorithm, and then refine it into an accurate optimal solution by adding chaotic disturbance. The result of the chaotic genetic algorithm is used as the initial weights and thresholds of the BP neural network for gesture recognition. Simulation and experimental results show that CGA greatly improves both the real-time performance and the accuracy of gesture recognition.
Keywords: Gesture recognition; Back propagation (BP) neural network; Chaos algorithm; Genetic algorithm; Data glove
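To make the two-stage idea in the abstract above concrete (a genetic-algorithm-style global search over the network's weight vector, then a chaotic logistic-map perturbation that refines the best candidate before BP training), here is a toy sketch in Python. The network size, population size, logistic-map parameters, and fitness function are all simplified assumptions.

```python
# Sketch of the two-stage idea: GA-style global search over a flattened weight
# vector, then a chaotic logistic-map perturbation to refine the best candidate
# before BP training. Everything here is a simplified, assumed setup.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 6, 4         # toy BP-network dimensions (assumption)
n_w = n_in * n_hid + n_hid * n_out   # flattened weight-vector length

def loss(w, X, y):
    """Mean-squared error of a one-hidden-layer sigmoid network with weights w."""
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    W2 = w[n_in * n_hid:].reshape(n_hid, n_out)
    h = 1.0 / (1.0 + np.exp(-X @ W1))
    out = 1.0 / (1.0 + np.exp(-h @ W2))
    return float(np.mean((out - y) ** 2))

X = rng.standard_normal((32, n_in))            # toy gesture features
y = np.eye(n_out)[rng.integers(0, n_out, 32)]  # toy one-hot labels

# Stage 1: GA-like search -- evaluate a random population and keep the fittest.
pop = rng.uniform(-1, 1, size=(40, n_w))
best = min(pop, key=lambda w: loss(w, X, y))

# Stage 2: chaotic refinement -- perturb the best vector with a logistic-map sequence.
z = 0.37                              # chaotic state in (0, 1)
for _ in range(200):
    z = 4.0 * z * (1.0 - z)           # logistic map, fully chaotic at r = 4
    cand = best + 0.1 * (2.0 * z - 1.0) * rng.standard_normal(n_w)
    if loss(cand, X, y) < loss(best, X, y):
        best = cand                   # accept improving perturbation

print("refined initial weights ready for BP training, loss =", loss(best, X, y))
```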
6. Deep Learning Based Hand Gesture Recognition and UAV Flight Controls (Cited by 11)
Authors: Bin Hu, Jiacun Wang. International Journal of Automation and Computing (EI, CSCD), 2020, Issue 1, pp. 17-29 (13 pages)
Dynamic hand gesture recognition is a desired alternative means for human-computer interaction. This paper presents a hand gesture recognition system designed for the control of flights of unmanned aerial vehicles (UAV). A data representation model is introduced that represents a dynamic gesture sequence by converting the 4-D spatiotemporal data to a 2-D matrix and a 1-D array. To train the system to recognize the designed gestures, skeleton data collected from a Leap Motion Controller are converted into the two data models. A training dataset of 9,124 samples and a testing dataset of 1,938 samples are created to train and test the three proposed deep learning neural networks: a 2-layer fully connected neural network, a 5-layer fully connected neural network, and an 8-layer convolutional neural network. The static testing results show that the 2-layer fully connected neural network achieves an average accuracy of 96.7% on scaled datasets and 12.3% on non-scaled datasets. The 5-layer fully connected neural network achieves an average accuracy of 98.0% on scaled datasets and 89.1% on non-scaled datasets. The 8-layer convolutional neural network achieves an average accuracy of 89.6% on scaled datasets and 96.9% on non-scaled datasets. Testing on a drone-kit simulator and a real drone shows that this system is feasible for drone flight controls.
Keywords: Deep learning; Neural networks; Hand gesture recognition; Leap Motion Controllers; Drones
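The abstract above describes converting 4-D spatiotemporal gesture data into a 2-D matrix and a 1-D array. A minimal sketch of that kind of conversion, with placeholder frame and joint counts (not the paper's exact representation), might look like this:

```python
# Sketch of the data-model conversion described above: a dynamic gesture recorded
# as (frames x joints x 3) coordinates is flattened to a 2-D matrix and a 1-D
# array. Sizes and the scaling step are placeholder assumptions.
import numpy as np

FRAMES, JOINTS = 60, 21                      # assumed sequence length and hand joints
seq = np.random.rand(FRAMES, JOINTS, 3)      # stand-in for Leap Motion skeleton data

matrix_2d = seq.reshape(FRAMES, JOINTS * 3)  # one row per frame -> input to a CNN
array_1d = matrix_2d.ravel()                 # fully flattened -> input to an FC net

# Min-max scaling per sequence (the abstract reports scaled vs. non-scaled variants).
lo, hi = matrix_2d.min(), matrix_2d.max()
matrix_scaled = (matrix_2d - lo) / (hi - lo + 1e-9)

print(matrix_2d.shape, array_1d.shape, matrix_scaled.min(), matrix_scaled.max())
```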
7. Multi-modal Gesture Recognition using Integrated Model of Motion, Audio and Video (Cited by 3)
Authors: GOUTSU Yusuke, KOBAYASHI Takaki, OBARA Junya, KUSAJIMA Ikuo, TAKEICHI Kazunari, TAKANO Wataru, NAKAMURA Yoshihiko. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2015, Issue 4, pp. 657-665 (9 pages)
Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation, and sign language. With increasing motion sensor development, multiple data sources have become available, which has led to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. To solve this problem, a novel approach that integrates motion, audio, and video models is proposed, using a dataset captured by Kinect. The proposed system recognizes observed gestures using the three models. The recognition results of the three models are integrated by the proposed framework, and the output becomes the final result. The motion and audio models are learned with hidden Markov models, and a random forest classifier is used to learn the video model. In the experiments testing the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying the feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All the experiments are conducted on the dataset provided by the organizer of MMGRC, a workshop for the Multi-Modal Gesture Recognition Challenge. The comparison results show that the multi-modal model composed of the three models achieves the highest recognition rate, indicating that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system provides application technology for understanding human actions of daily life more precisely.
Keywords: Gesture recognition; Multi-modal integration; Hidden Markov model; Random forests
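The integration step described above can be illustrated with a simple late-fusion sketch: three modality-specific classifiers (motion, audio, video) each produce per-class scores, which are normalized and averaged. The weights and score values below are placeholders, not the paper's integration framework.

```python
# Sketch of late fusion over three modality-specific classifiers: per-class
# scores are normalized to probability-like distributions and then averaged.
import numpy as np

def normalize(scores):
    """Turn raw per-class scores into a probability-like distribution."""
    s = np.asarray(scores, dtype=float)
    s = s - s.min()
    return s / (s.sum() + 1e-9)

def fuse(motion_scores, audio_scores, video_scores, weights=(1.0, 1.0, 1.0)):
    stacked = np.stack([normalize(motion_scores),
                        normalize(audio_scores),
                        normalize(video_scores)])
    fused = np.average(stacked, axis=0, weights=weights)
    return int(np.argmax(fused)), fused

# Example: 4 gesture classes; the three models disagree and fusion resolves it.
label, probs = fuse(motion_scores=[0.1, 2.3, 0.4, 0.2],  # e.g. HMM scores
                    audio_scores=[0.3, 1.1, 1.0, 0.1],
                    video_scores=[0.2, 0.9, 1.2, 0.3])   # e.g. random-forest votes
print(label, probs.round(3))
```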
8. Magnetic Array Assisted Triboelectric Nanogenerator Sensor for Real-Time Gesture Interaction (Cited by 9)
Authors: Ken Qin, Chen Chen, Xianjie Pu, Qian Tang, Wencong He, Yike Liu, Qixuan Zeng, Guanlin Liu, Hengyu Guo, Chenguo Hu. Nano-Micro Letters (SCIE, EI, CAS, CSCD), 2021, Issue 3, pp. 168-176 (9 pages)
In human-machine interaction, robotic hands are useful in many scenarios. Operating robotic hands via gestures instead of handles would greatly improve the convenience and intuitiveness of human-machine interaction. Here, we present a magnetic array assisted sliding triboelectric sensor for achieving real-time gesture interaction between a human hand and a robotic hand. With a finger's traction movement of flexion or extension, the sensor induces positive/negative pulse signals. By counting the pulses per unit time, the degree, speed, and direction of finger motion can be judged in real time. The magnetic array plays an important role in generating the quantifiable pulses. The two parts of the designed magnetic array transform sliding motion into contact-separation and constrain the sliding pathway, respectively, thus improving the durability, low-speed signal amplitude, and stability of the system. This direct quantization approach and the optimization of the wearable gesture sensor provide a new strategy for achieving natural, intuitive, and real-time human-robot interaction.
Keywords: Sliding triboelectric sensor; Magnetic array; Gesture; Real-time; Human-machine interaction
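The readout principle described above (counting positive/negative pulses per unit time to infer the degree, speed, and direction of finger motion) can be sketched as follows; the detection threshold and degrees-per-pulse resolution are assumed values, not the device's calibration.

```python
# Sketch of a pulse-counting readout: positive/negative voltage pulses are
# detected with a threshold, their count gives the bending extent and their
# sign the direction. Threshold and degrees-per-pulse are assumed values.
import numpy as np

DEG_PER_PULSE = 5.0   # assumed angular resolution of the magnet array
THRESH = 0.5          # assumed pulse-detection threshold (volts)

def decode(voltage, dt):
    """Return (direction, angle change in deg, speed in deg/s) for one window."""
    v = np.asarray(voltage, dtype=float)
    above = v > THRESH
    below = v < -THRESH
    pos = int(np.count_nonzero(above[1:] & ~above[:-1]))  # rising crossings, + pulses
    neg = int(np.count_nonzero(below[1:] & ~below[:-1]))  # falling crossings, - pulses
    net = pos - neg
    direction = "flexion" if net > 0 else "extension" if net < 0 else "still"
    angle = abs(net) * DEG_PER_PULSE
    return direction, angle, angle / (len(v) * dt)

# Toy window: three positive pulses in 0.3 s -> flexion of ~15 deg at ~50 deg/s.
t = np.linspace(0, 0.3, 300)
signal = np.sin(2 * np.pi * 10 * t) * (np.sin(2 * np.pi * 10 * t) > 0)
print(decode(signal, dt=0.001))
```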
9. A Novel SE-CNN Attention Architecture for sEMG-Based Hand Gesture Recognition (Cited by 7)
Authors: Zhengyuan Xu, Junxiao Yu, Wentao Xiang, Songsheng Zhu, Mubashir Hussain, Bin Liu, Jianqing Li. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, Issue 1, pp. 157-177 (21 pages)
In this article, to reduce the complexity and improve the generalization ability of current gesture recognition systems, we propose a novel SE-CNN attention architecture for sEMG-based hand gesture recognition. The proposed algorithm introduces a temporal squeeze-and-excite block into a simple CNN architecture and utilizes it to recalibrate the weights of the feature outputs from the convolutional layer. By enhancing important features while suppressing useless ones, the model realizes gesture recognition efficiently. The last step of the proposed algorithm utilizes a simple attention mechanism to enhance the learned representations of sEMG signals for multi-channel sEMG-based gesture recognition tasks. To evaluate the effectiveness and accuracy of the proposed algorithm, we conduct experiments on the multi-gesture datasets Ninapro DB4 and Ninapro DB5 for both inter-session validation and subject-wise cross-validation. After a series of comparisons with previous models, the proposed algorithm effectively increases robustness, with improved gesture recognition performance and generalization ability.
Keywords: Hand gesture recognition; sEMG; CNN; Temporal squeeze-and-excite; Attention
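As a rough illustration of the temporal squeeze-and-excite idea named above, here is a minimal PyTorch sketch of an SE block applied to 1-D sEMG feature maps; the channel sizes, reduction ratio, window length, and class count are assumptions, not the authors' configuration.

```python
# Sketch of a squeeze-and-excite block for 1-D sEMG feature maps (PyTorch).
# Channel sizes and the reduction ratio are assumptions for this sketch.
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    """Recalibrate channel responses: squeeze (global pool) -> excite (gating)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):           # x: (batch, channels, time)
        w = self.fc(x.mean(dim=2))  # squeeze over time -> (batch, channels)
        return x * w.unsqueeze(-1)  # reweight each channel's feature map

# Tiny CNN + SE stack for, e.g., 12-channel sEMG windows of 200 samples.
net = nn.Sequential(
    nn.Conv1d(12, 32, kernel_size=5, padding=2), nn.ReLU(),
    SEBlock1d(32),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 10),              # 10 gesture classes (placeholder)
)
logits = net(torch.randn(4, 12, 200))  # -> (4, 10)
print(logits.shape)
```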
10. Stable grasping gesture analysis of a cable-driven underactuated robotic hand (Cited by 6)
Authors: Lü Xin, Qiao Shangling, Huang Yong, Liu Rongqiang. Journal of Southeast University (English Edition) (EI, CAS), 2018, Issue 3, pp. 309-316 (8 pages)
The stable grasping gesture of a novel cable-driven robotic hand is analyzed. The robotic hand is underactuated, using tendon-pulley transmission and a parallel four-linkage mechanism to realize grasping. The structural design and a basic grasping strategy of one finger are introduced. Based on the established round-object enveloping grasp model, the relationship between the contact and driving forces in a finger and the conditions for stable grasping are expounded. A method of interpolation and iteration is proposed to obtain the stable grasping gesture of the cable-driven hand grasping a round target. Quasi-static analysis in ADAMS validated the variation of the grasping forces, illustrating the feasibility and validity of the proposed analytical method. Three basic types of grasping gestures of the underactuated hand were obtained on the basis of the relationship between the contact forces and the position of the grasped object.
Keywords: Grasp gesture; Tendon-pulley transmission; Underactuated; Grasp force
11. Wireless Head Gesture Controlled Robotic Wheel Chair for Physically Disable Persons (Cited by 3)
Authors: Shadman Mahmood Khan Pathan, Wasif Ahmed, Md. Masud Rana, Md. Shahjalal Tasin, Faisul Islam, Anika Sultana. Journal of Sensor Technology, 2020, Issue 4, pp. 47-59 (13 pages)
A robotic wheelchair is assumed to be capable of doing tasks like navigation, obstacle detection, etc., using sensors and intelligence. The initial part of the work was the development of a cap-controlled wheelchair to test and verify gesture operation. Following that, a real-time operating wheelchair was developed with a mode-switching option between joystick control and head-gesture control, according to the user's requirements. The wheelchair consists of an MPU6050 sensor, a joystick module, an RF module, a battery, a DC motor, a toggle switch, and an Arduino. The movement of the head is detected by the MPU6050 and the signal is transmitted to the microcontroller. The signal is then processed by the controller and the motion of the wheelchair is enabled for navigation. The wheelchair is capable of moving in the left, right, forward, and backward directions. The speed of the wheelchair was 4.8 km/h when tested. The design objectives of the wheelchair included cost-effectiveness without compromising safety, flexibility, and mobility for the users.
Keywords: Head gesture; Wheelchair; Arduino; Motor driver; Joystick module
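The head-gesture control logic implied above can be sketched as a simple threshold mapping from tilt angles (as an MPU6050 would yield after conversion to degrees) to drive commands; the threshold value and command names are assumptions, not the authors' firmware.

```python
# Sketch of head-tilt-to-command mapping with a dead zone. The 15-degree
# threshold and the command names are placeholder assumptions.
TILT_THRESHOLD = 15.0   # degrees of head tilt required to issue a command

def head_to_command(pitch_deg, roll_deg):
    """Map head tilt angles to a wheelchair drive command."""
    if pitch_deg > TILT_THRESHOLD:
        return "forward"
    if pitch_deg < -TILT_THRESHOLD:
        return "backward"
    if roll_deg > TILT_THRESHOLD:
        return "right"
    if roll_deg < -TILT_THRESHOLD:
        return "left"
    return "stop"   # inside the dead zone: no movement

print(head_to_command(20.0, 3.0))   # -> 'forward'
print(head_to_command(2.0, -18.0))  # -> 'left'
```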
12. Vision Based Hand Gesture Recognition Using 3D Shape Context (Cited by 8)
Authors: Chen Zhu, Jianyu Yang, Zhanpeng Shao, Chunping Liu. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, Issue 9, pp. 1600-1613 (14 pages)
Hand gesture recognition is a popular topic in computer vision and makes human-computer interaction more flexible and convenient. The representation of hand gestures is critical for recognition. In this paper, we propose a new method to measure the similarity between hand gestures and exploit it for hand gesture recognition. The depth maps of hand gestures captured via Kinect sensors are used in our method, from which the 3D hand shapes can be segmented from cluttered backgrounds. To extract the pattern of salient 3D shape features, we propose a new descriptor, 3D Shape Context, for 3D hand gesture representation. The 3D Shape Context information of each 3D point is obtained at multiple scales, because both local shape context and global shape distribution are necessary for recognition. The descriptions of all the 3D points constitute the hand gesture representation, and hand gesture recognition is performed via the dynamic time warping algorithm. Extensive experiments are conducted on multiple benchmark datasets. The experimental results verify that the proposed method is robust to noise, articulated variations, and rigid transformations. Our method outperforms state-of-the-art methods in comparisons of accuracy and efficiency.
Keywords: 3D shape context; Depth map; Hand shape segmentation; Hand gesture recognition; Human-computer interaction
13. Dynamic Hand Gesture Recognition Based on Short-Term Sampling Neural Networks (Cited by 14)
Authors: Wenjin Zhang, Jiacun Wang, Fangping Lan. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, Issue 1, pp. 110-120 (11 pages)
Hand gestures are a natural way for human-robot interaction. Vision-based dynamic hand gesture recognition has become a hot research topic due to its various applications. This paper presents a novel deep learning network for hand gesture recognition. The network integrates several well-proven modules to learn both short-term and long-term features from video inputs while avoiding intensive computation. To learn short-term features, each video input is segmented into a fixed number of frame groups. A frame is randomly selected from each group and represented as an RGB image as well as an optical flow snapshot. These two entities are fused and fed into a convolutional neural network (ConvNet) for feature extraction. The ConvNets for all groups share parameters. To learn long-term features, the outputs from all ConvNets are fed into a long short-term memory (LSTM) network, by which a final classification result is predicted. The new model has been tested with two popular hand gesture datasets, namely the Jester dataset and the Nvidia dataset. Compared with other models, our model produces very competitive results. The robustness of the new model has also been proven with an augmented dataset with enhanced diversity of hand gestures.
Keywords: Convolutional neural network (ConvNet); Hand gesture recognition; Long short-term memory (LSTM) network; Short-term sampling; Transfer learning
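The short-term sampling step described above (splitting a clip into a fixed number of frame groups and drawing one random frame per group) can be sketched in a few lines; the group count is an assumption, and the RGB/optical-flow fusion and ConvNet/LSTM stages are omitted.

```python
# Sketch of short-term sampling: split a clip's frame indices into a fixed
# number of groups and draw one random index per group (group count assumed).
import numpy as np

def sample_frames(num_frames, num_groups=8, rng=None):
    """Return one randomly chosen frame index from each (roughly equal) group."""
    if rng is None:
        rng = np.random.default_rng()
    bounds = np.linspace(0, num_frames, num_groups + 1, dtype=int)
    return [int(rng.integers(lo, hi)) for lo, hi in zip(bounds[:-1], bounds[1:])]

# A 70-frame clip sampled into 8 representative frame indices.
print(sample_frames(70, num_groups=8, rng=np.random.default_rng(0)))
```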
14. Dynamic Hand Gesture Recognition Using 3D-CNN and LSTM Networks (Cited by 3)
Authors: Muneeb Ur Rehman, Fawad Ahmed, Muhammad Attique Khan, Usman Tariq, Faisal Abdulaziz Alfouzan, Nouf M. Alzahrani, Jawad Ahmad. Computers, Materials & Continua (SCIE, EI), 2022, Issue 3, pp. 4675-4690 (16 pages)
Recognition of dynamic hand gestures in real time is a difficult task because the system can never know when or where a gesture starts and ends in a video stream. Many researchers have been working on vision-based gesture recognition due to its various applications. This paper proposes a deep learning architecture based on the combination of a 3D Convolutional Neural Network (3D-CNN) and a Long Short-Term Memory (LSTM) network. The proposed architecture extracts spatial-temporal information from video sequence inputs while avoiding extensive computation. The 3D-CNN is used for the extraction of spectral and spatial features, which are then given to the LSTM network through which classification is carried out. The proposed model is a lightweight architecture with only 3.7 million training parameters. The model has been evaluated on 15 classes from the publicly available 20BN-jester dataset. The model was trained on 2,000 video clips per class, which were separated into 80% training and 20% validation sets. Accuracies of 99% and 97% were achieved on the training and testing data, respectively. We further show that the combination of 3D-CNN with LSTM gives superior results compared to MobileNetv2+LSTM.
Keywords: Convolutional neural networks; 3D-CNN; LSTM; Spatiotemporal; Jester; Real-time hand gesture recognition
15. Vision-Based Hand Gesture Recognition for Human-Computer Interaction: A Survey (Cited by 2)
Authors: GAO Yongqiang, LU Xiong, SUN Junbin, TAO Xianglin, HUANG Xiaomei, YAN Yuxing, LIU Jia. Wuhan University Journal of Natural Sciences (CAS, CSCD), 2020, Issue 2, pp. 169-184 (16 pages)
Recently, vision-based gesture recognition (VGR) has become a hot research topic in human-computer interaction (HCI). Unlike gesture recognition methods based on data gloves or other wearable sensors, vision-based gesture recognition can lead to more natural and intuitive HCI. This paper reviews state-of-the-art vision-based gesture recognition methods across the different stages of the gesture recognition process, i.e., (1) image acquisition and pre-processing, (2) gesture segmentation, (3) gesture tracking, (4) feature extraction, and (5) gesture classification. This paper also analyzes the advantages and disadvantages of these various methods in detail. Finally, the challenges of vision-based gesture recognition in haptic rendering and future research directions are discussed.
Keywords: Vision-based gesture recognition; Human-computer interaction; State-of-the-art; Feature extraction
16. Hand Gesture Recognition by Accelerometer-Based Cluster Dynamic Time Warping (Cited by 1)
Authors: 王琳琳, 夏侯士戟. Journal of Donghua University (English Edition) (EI, CAS), 2017, Issue 4, pp. 551-555 (5 pages)
Aiming at the diversity of hand gesture traces drawn by different people, this article presents a novel method called cluster dynamic time warping (CDTW), which is based on main-axis classification and the clustering of individuals' samples. The method performs well in reducing the complexity of recognition and shows strong robustness across individuals. Data acquisition is implemented on a triaxial accelerometer with a 100 Hz sampling frequency. A database of 2,400 traces was created by ten subjects for system testing and evaluation. The overall accuracy was found to be 98.84% for user-independent gesture recognition and 96.7% for user-dependent gesture recognition, higher than the dynamic time warping (DTW), derivative DTW (DDTW), and piecewise DTW (PDTW) methods. The computation cost of CDTW in this project was reduced by a factor of 11,520 compared with DTW.
Keywords: Main axis classification; Sample clustering; Dynamic time warping (DTW); Gesture recognition
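Since CDTW builds on dynamic time warping, a minimal sketch of the DTW core with nearest-template classification over tri-axial accelerometer traces is shown below; the main-axis classification and per-user clustering steps from the abstract are not reproduced, and the templates are toy data.

```python
# Sketch of the DTW core that CDTW builds on: dynamic-time-warping distance
# between two tri-axial accelerometer traces plus nearest-template classification.
import numpy as np

def dtw(a, b):
    """DTW distance between sequences a (n x 3) and b (m x 3)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(trace, templates):
    """Return the label of the template with the smallest DTW distance."""
    return min(templates, key=lambda lbl: dtw(trace, templates[lbl]))

# Toy templates: a 'circle' and a 'swipe' trace; the query is a noisy circle.
t = np.linspace(0, 2 * np.pi, 50)
templates = {"circle": np.c_[np.cos(t), np.sin(t), 0 * t],
             "swipe":  np.c_[t / t.max(), 0 * t, 0 * t]}
query = templates["circle"] + 0.05 * np.random.randn(50, 3)
print(classify(query, templates))   # expected: 'circle'
```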
17. WiFi CSI Gesture Recognition Based on Parallel LSTM-FCN Deep Space-Time Neural Network (Cited by 6)
Authors: Zhiling Tang, Qianqian Liu, Minjie Wu, Wenjing Chen, Jingwen Huang. China Communications (SCIE, CSCD), 2021, Issue 3, pp. 205-215 (11 pages)
In this study, we developed a system based on deep space-time neural networks for gesture recognition. When users change or the number of gesture categories increases, the accuracy of gesture recognition decreases considerably, because most gesture recognition systems cannot accommodate both user differentiation and gesture diversity. To overcome the limitations of existing methods, we designed a one-dimensional parallel long short-term memory-fully convolutional network (LSTM-FCN) model to extract gesture features of different dimensions. The LSTM can learn complex temporal dynamic information, whereas the FCN can predict gestures efficiently by extracting deep, abstract features of gestures in the spatial dimension. In the experiment, 50 types of gestures from five users were collected and evaluated. The experimental results demonstrate the effectiveness of this system and its robustness to various gestures and individual changes. Statistical analysis of the recognition results indicated that an average accuracy of approximately 98.9% was achieved.
Keywords: Signal and information processing; Parallel LSTM-FCN neural network; Deep learning; Gesture recognition; Wireless channel state information
18. Load on Bicycle Frame During Cycling with Different Speeds and Gestures (Cited by 2)
Authors: 项忠霞, 田冠, 许文, 关新, 于晓然. Transactions of Tianjin University (EI, CAS), 2011, Issue 4, pp. 270-274 (5 pages)
Experiment and dynamic simulation were combined to obtain the loads on a bicycle frame. A dynamic model of the body-bicycle system was built in ADAMS. The body gestures under different riding conditions were then captured by a motion analysis system. After the body-motion data were input into the ADAMS simulation system, dynamic simulation was carried out and a series of loads applied by the body to the head tube, seat pillar, and bottom bracket were obtained. The results show that the loads on the frame and their distribution differ considerably under various riding conditions. Finally, finite element analysis in ANSYS showed that the stress and its distribution on the frame differed considerably depending on whether the frame was loaded according to the bicycle testing standard or according to the simulation. An efficient way to obtain the load on a bicycle frame accurately is proposed, which is significant for cycling safety and provides a basis for digitalized, lightweight, and customized bicycle design.
Keywords: Bicycle; Load of frame; Gesture capturing; Dynamic simulation
19. Home Automation-Based Health Assessment Along Gesture Recognition via Inertial Sensors (Cited by 2)
Authors: Hammad Rustam, Muhammad Muneeb, Suliman A. Alsuhibany, Yazeed Yasin Ghadi, Tamara Al Shloul, Ahmad Jalal, Jeongmin Park. Computers, Materials & Continua (SCIE, EI), 2023, Issue 4, pp. 2331-2346 (16 pages)
Hand gesture recognition (HGR) is used in numerous applications, including medical healthcare, industrial purposes, and sports detection. We have developed a real-time hand gesture recognition system using inertial sensors for the smart home application. Developing such a model benefits the medical health field (for elderly or disabled people). Home automation has also been proven to be a tremendous benefit for the elderly and disabled. Residents are admitted to smart homes for comfort, luxury, improved quality of life, and protection against intrusion and burglars. This paper proposes a novel system that uses principal component analysis and linear discriminant analysis for feature extraction, and random forest as a classifier, to improve HGR accuracy. We have achieved an accuracy of 94% on the publicly benchmarked HGR dataset. The proposed system can be used to detect hand gestures in the healthcare industry as well as in the industrial and educational sectors.
Keywords: Genetic algorithm; Human locomotion activity recognition; Human-computer interaction; Human gestures recognition; Hand gestures recognition; Inertial sensors; Principal component analysis; Linear discriminant analysis; Stochastic neighbor embedding
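The processing chain named above (PCA and LDA for feature extraction, random forest as the classifier) can be sketched with scikit-learn on synthetic stand-in data; the component counts and the dataset are assumptions, not the authors' settings.

```python
# Sketch of a PCA -> LDA -> random-forest chain on synthetic stand-in data.
# Component counts, tree count, and the data itself are placeholder assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((600, 60))   # 600 windows of inertial features (toy)
y = rng.integers(0, 5, 600)          # 5 gesture classes (toy labels)
X += y[:, None] * 0.5                # inject class-dependent structure

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = make_pipeline(
    PCA(n_components=20),                        # unsupervised dimensionality reduction
    LinearDiscriminantAnalysis(n_components=4),  # supervised projection (<= classes - 1)
    RandomForestClassifier(n_estimators=100, random_state=0),
)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```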