The rapid evolution of virtual reality (VR) and augmented reality (AR) technologies has significantly transformed human-computer interaction, with applications spanning entertainment, education, healthcare, industry, and remote collaboration. A central challenge in these immersive systems lies in enabling intuitive, efficient, and natural interactions. Hand gesture recognition offers a compelling solution by leveraging the expressiveness of human hands to facilitate seamless control without relying on traditional input devices such as controllers or keyboards, which can limit immersion. However, achieving robust gesture recognition requires overcoming challenges related to accurate hand tracking, complex environmental conditions, and minimizing system latency. This study proposes an artificial intelligence (AI)-driven framework for recognizing both static and dynamic hand gestures in VR and AR environments using skeleton-based tracking compliant with the OpenXR standard. Our approach employs a lightweight neural network architecture capable of real-time classification within approximately 1.3 ms while maintaining an average accuracy of 95%. We also introduce a novel dataset generation method to support training robust models and demonstrate consistent classification of diverse gestures across widespread commercial VR devices. This work represents one of the first studies to implement and validate dynamic hand gesture recognition in real time using standardized VR hardware, laying the groundwork for more immersive, accessible, and user-friendly interaction systems. By advancing AI-driven gesture interfaces, this research has the potential to broaden the adoption of VR and AR across diverse domains and enhance the overall user experience.
This paper presented a novel tiny motion capture system for measuring bird posture based on inertial and magnetic measurement units that are made up of micromachined gyroscopes, accelerometers, and magnetometers. Multiple quaternion-based extended Kalman filters were implemented to estimate the absolute orientations to achieve high accuracy. Under the guidance of ornithology experts, the extending/contracting motions and flapping cycles were recorded using the developed motion capture system, and the orientation of each bone was also analyzed. The captured flapping gesture of the Falco peregrinus is crucial to the motion database of raptors as well as the bionic design.
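The orientation estimation above relies on quaternion-based filtering of gyroscope, accelerometer, and magnetometer data. As an illustration of the core idea only, the sketch below implements the gyroscope-driven predict step of such a filter (first-order quaternion integration with renormalization); the function names are assumptions, and the measurement-update stage of the full extended Kalman filter is omitted, so this is not the authors' implementation.

```python
import numpy as np

def quat_multiply(q, r):
    # Hamilton product of two quaternions in [w, x, y, z] order.
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def predict_orientation(q, gyro, dt):
    # First-order quaternion integration of angular rate (rad/s):
    # q_dot = 0.5 * q ⊗ [0, ω], then renormalize to stay on the unit sphere.
    omega = np.array([0.0, *gyro])
    q_new = q + 0.5 * quat_multiply(q, omega) * dt
    return q_new / np.linalg.norm(q_new)

# Rotate at pi/2 rad/s about z for 1 s in 1 ms steps: expect a 90-degree turn.
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(1000):
    q = predict_orientation(q, (0.0, 0.0, np.pi / 2), 1e-3)
print(np.round(q, 3))  # ≈ [0.707, 0, 0, 0.707]
```

In a complete filter, accelerometer and magnetometer readings would correct the drift that pure gyroscope integration accumulates.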
Bodily gestures, facial expressions, and intonations are argued to be notably important features of spoken language, as opposed to written language. Bodily gestures, with or without spoken words, can influence the clarity and density of expression and the involvement of listeners. Facial expressions, whether or not they correspond with exact thought, can be "decoded" to influence the extent of intelligibility of expression. Intonation can always reflect the mutual beliefs concerning the propositional content and states of consciousness relating to the expression and interpretation. Therefore, these can considerably improve or abate the accuracy of expression and interpretation of thought.
This paper proposes a novel, efficient, and affordable approach to detect students' engagement levels in an e-learning environment by using webcams. Our method analyzes spatiotemporal features of e-learners' micro body gestures, which are mapped to emotions and appropriate engagement states. The proposed engagement detection model uses a three-dimensional convolutional neural network to analyze both temporal and spatial information across video frames. We follow a transfer learning approach by using the C3D model that was trained on the Sports-1M dataset. The adopted C3D model was used in two different ways: as a feature extractor with linear classifiers, and as a classifier after fine-tuning the pretrained model. Our model was tested, and its performance was evaluated and compared to existing models. It proved its effectiveness and superiority over other existing methods with an accuracy of 94%. The results of this work will contribute to the development of smart and interactive e-learning systems with adaptive responses based on users' engagement levels.
The evident change in the design of autopilot systems has produced massive help for the aviation industry and requires frequent upgrades. Reinforcement learning delivers appropriate outcomes when considering a continuous environment where controlling the Unmanned Aerial Vehicle (UAV) requires maximum accuracy. In this paper, we designed a hybrid framework based on reinforcement learning and deep learning, in which the traditional electronic flight controller is replaced by 3D hand gestures. The algorithm is designed to take input from 3D hand gestures and integrate it with the Deep Deterministic Policy Gradient (DDPG) to receive the best reward and take actions according to the 3D hand gesture input. The UAV consists of a Jetson Nano embedded testbed, a Global Positioning System (GPS) sensor module, and an Intel depth camera. The collision avoidance system, based on the polar mask segmentation technique, detects obstacles and decides the best path according to the designed reward function. Analysis of the results shows the best accuracy and computational time for the novel framework when compared with a traditional Proportional Integral Derivative (PID) flight controller. Six reward functions were estimated for 2500, 5000, 7500, and 10000 episodes of training, normalized between 0 and −4000. The best observation was captured at 2500 episodes, where the rewards reached their maximum value. The achieved training accuracy of polar mask segmentation for collision avoidance is 86.36%.
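The abstract reports episode rewards normalized between 0 and −4000. A minimal sketch of such a rescaling, assuming a simple linear min-max mapping (the paper's exact normalization procedure is not specified):

```python
def normalize_rewards(rewards, lo=-4000.0, hi=0.0):
    # Min-max rescale a list of episode rewards into [lo, hi]; the worst
    # episode maps to lo, the best to hi. Guard against a zero span.
    r_min, r_max = min(rewards), max(rewards)
    span = (r_max - r_min) or 1.0
    return [lo + (r - r_min) * (hi - lo) / span for r in rewards]

print(normalize_rewards([10.0, 55.0, 100.0]))  # [-4000.0, -2000.0, 0.0]
```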
Experiment and dynamic simulation were combined to obtain the loads on a bicycle frame. A dynamic model of the body-bicycle system was built in ADAMS. Then the body gestures under different riding conditions were captured by a motion analysis system. Dynamic simulation was carried out after the data of body motions were input into the simulation system in ADAMS, and a series of loads that the body applied on the head tube, seat pillar, and bottom bracket were obtained. The results show that the loads on the frame and their distribution are apparently different under various riding conditions. Finally, finite element analysis was done in ANSYS, which showed that the stress and its distribution on the frame were apparently different when the frame was loaded according to the bicycle testing standard and the simulation respectively. An efficient way to obtain loads on a bicycle frame accurately was proposed, which is significant for the safety of cycling and will also be the basis for bicycle design toward digitalization, weight reduction, and customization.
The Hand Gesture Recognition (HGR) system can be employed to facilitate communication between humans and computers instead of using special input and output devices. These devices may complicate communication with computers, especially for people with disabilities. Hand gestures can be defined as a natural human-to-human communication method, which can also be used in human-computer interaction. Many researchers have developed various techniques and methods aimed at understanding and recognizing specific hand gestures by employing one or two machine learning algorithms with reasonable accuracy. This work aims to develop a powerful hand gesture recognition model with a 100% recognition rate. We proposed an ensemble classification model that combines the most powerful machine learning classifiers to obtain diversity and improve accuracy. The majority voting method was used to aggregate the predictions produced by each classifier and obtain the final classification result. Our model was trained using a self-constructed dataset containing 1600 images of ten different hand gestures. Combining the Canny edge detector and the histogram of oriented gradients (HOG) method with the ensemble classifier improved the recognition rate. The experimental results showed the robustness of our proposed model: Logistic Regression and Support Vector Machine achieved 100% accuracy. The developed model was validated using two public datasets, and the findings proved that our model outperformed other compared studies.
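The ensemble's final decision is produced by majority voting over the individual classifiers. A minimal sketch of hard majority voting with hypothetical classifier outputs; ties here break by first-seen label, which is an assumption rather than the paper's stated rule:

```python
from collections import Counter

def majority_vote(predictions):
    # predictions: list of per-classifier label lists, one label per sample.
    # For each sample, return the label chosen by the most classifiers.
    n_samples = len(predictions[0])
    fused = []
    for i in range(n_samples):
        votes = [clf_preds[i] for clf_preds in predictions]
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# Three hypothetical classifiers disagreeing on the second sample:
svm_out = ["fist", "palm", "ok"]
lr_out = ["fist", "palm", "ok"]
knn_out = ["fist", "point", "ok"]
print(majority_vote([svm_out, lr_out, knn_out]))  # ['fist', 'palm', 'ok']
```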
Holograms provide a distinctive way to display and convey information, and have been improved to provide better user interactions. Holographic interactions are important as they improve user engagement with virtual objects. Gesture interaction is a recent research topic, as it allows users to use their bare hands to interact directly with the hologram. However, it remains unclear whether real hand gestures are well suited for hologram applications. Therefore, we discuss the development process and implementation of three-dimensional object manipulation using natural hand gestures in a hologram. We describe the design and development process for hologram applications and their integration with real hand gesture interactions as initial findings. Experimental results from the NASA TLX form are discussed. Based on the findings, we actualize the user interactions in the hologram.
This paper presents an experiment using OpenBCI to collect data on two hand gestures and decode the signal to distinguish the gestures. The signal was extracted with three electrodes on the subject's forearm and transferred in one channel. After applying a Butterworth bandpass filter, we chose a novel way to detect the gesture action segment. Instead of using a moving average algorithm, which is based on the calculation of energy, we developed an algorithm based on the Hilbert transform to find a dynamic threshold and identify the action segment. Four features were extracted from each activity section, generating feature vectors for classification. During classification, we compared K-nearest neighbors (KNN) and support vector machine (SVM) on a relatively small number of samples. Most experiments rely on a large quantity of data to pursue a highly fitted model, but in certain circumstances enough training data cannot be obtained, which makes exploring the best classification method under small-sample conditions imperative. Though KNN is known for its simplicity and practicality, it is a relatively time-consuming method. On the other hand, SVM performs better in terms of time requirement and recognition accuracy, due to its use of a different risk minimization principle. Experimental results show an average recognition rate for the SVM algorithm that is 1.25% higher than for KNN, while SVM's processing time is 2.031 s shorter than KNN's.
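The action-segment detection described above derives a dynamic threshold from the Hilbert transform. The sketch below illustrates the general idea on synthetic data: compute the amplitude envelope via the analytic signal (an FFT-based Hilbert transform) and mark contiguous regions where the envelope exceeds a threshold set relative to its own range. The thresholding rule and all parameters here are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def envelope(x):
    # Amplitude envelope via the analytic signal: zero out negative
    # frequencies in the spectrum and double the positive ones.
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(X * h))

def detect_segments(x, rel_threshold=0.5):
    # Threshold the envelope at a fraction of its own dynamic range and
    # return (start, end) sample indices of contiguous active regions.
    env = envelope(x)
    thr = env.min() + rel_threshold * (env.max() - env.min())
    active = env > thr
    edges = np.flatnonzero(np.diff(active.astype(int)))
    if active[0]:
        edges = np.r_[0, edges]
    if active[-1]:
        edges = np.r_[edges, len(x) - 1]
    return list(zip(edges[::2], edges[1::2]))

# Synthetic EMG-like trace: an 80 Hz burst between 0.4 s and 0.6 s plus noise.
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
burst = ((t > 0.4) & (t < 0.6)).astype(float)
x = burst * np.sin(2 * np.pi * 80 * t)
x += 0.05 * np.random.default_rng(0).standard_normal(len(t))
segs = detect_segments(x)
```

The detected segments should cover the burst (around samples 400 to 600) and exclude the noise-only region.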
Several attempts have appeared recently to control optical trapping systems via touch tablets and cameras instead of a mouse and joystick. Our approach is based on modern low-cost hardware combined with fingertip and speech recognition software. The positions of the operator's hands or fingertips control the positions of trapping beams in holographic optical tweezers that provide optical manipulation of microobjects. We tested and adapted two systems for hand position detection and gesture recognition: the Creative Interactive Gesture Camera and the Leap Motion. We further enhanced the Holographic Raman tweezers (HRT) system with voice commands controlling the micropositioning stage and the acquisition of Raman spectra. The interface communicates with the HRT either directly, which requires adaptation of the HRT firmware, or indirectly by simulating mouse and keyboard messages. Its use in real experiments sped up the operator's communication with the system approximately two times compared with traditional control by mouse and keyboard.
Background Within a virtual environment (VE), the control of locomotion (e.g., self-travel) is critical for creating a realistic and functional experience. Usually the direction of locomotion, while using a head-mounted display (HMD), is determined by the direction the head is pointing, and the forward or backward motion is controlled with a handheld controller. However, handheld devices can be difficult to use while the eyes are covered with an HMD. Free hand gestures, tracked with a camera or a hand data glove, have the advantage of eliminating the need to look at the hand controller, but the design of hand or finger gestures for this purpose has not been well developed. Methods This study used a depth-sensing camera to track fingertip location (curling and straightening the fingers), which was converted to forward or backward self-travel in the VE. Fingertip position was converted to self-travel velocity using a mapping function with three parameters: a region of zero velocity (dead zone) around the relaxed hand position, a linear relationship of fingertip position to velocity (slope, or β) beginning at the edge of the dead zone, and an exponential rather than linear mapping of fingertip position to velocity (exponent). Using an HMD, participants moved forward along a virtual road and stopped at a target on the road by controlling self-travel velocity with finger flexion and extension. Each of the three mapping function parameters was tested at three levels. Outcomes measured included usability ratings, fatigue, nausea, and time to complete the tasks. Results Twenty subjects participated, but five did not complete the study due to nausea. The size of the dead zone had little effect on performance or usability. Subjects preferred lower β values, which were associated with better subjective ratings of control and reduced time to complete the task, especially for large targets. Exponent values of 1.0 or greater were preferred and reduced the time to complete the task, especially for small targets. Conclusions Small finger movements can be used to control the velocity of self-travel in a VE. The functions used for converting fingertip position to movement velocity influence usability and performance.
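The three-parameter mapping function described in the Methods (dead zone, slope β, exponent) can be sketched as follows; the normalization of fingertip displacement to [-1, 1] and the sign convention are assumptions for illustration:

```python
def fingertip_to_velocity(pos, dead_zone=0.1, beta=1.0, exponent=1.0):
    """Map normalized fingertip displacement (0 = relaxed hand) to
    self-travel velocity. Inside the dead zone the velocity is zero;
    beyond it, the displacement past the dead-zone edge is raised to
    `exponent` and scaled by `beta` (the slope). Parameter names follow
    the abstract; the default values are illustrative assumptions."""
    magnitude = abs(pos)
    if magnitude <= dead_zone:
        return 0.0
    excess = magnitude - dead_zone
    sign = 1.0 if pos > 0 else -1.0  # flexion forward, extension backward
    return sign * beta * excess ** exponent

print(fingertip_to_velocity(0.05))             # 0.0 (inside dead zone)
print(fingertip_to_velocity(0.6, beta=2.0))    # 1.0 = 2 * (0.6 - 0.1)
```

An exponent above 1.0 flattens the response near the dead zone, which is consistent with the finding that exponents of 1.0 or greater made fine stopping at small targets easier.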
Hearing and speech impairment can be congenital or acquired. Hearing and speech-impaired students often hesitate to pursue higher education in reputable institutions due to their challenges. However, the development of automated assistive learning tools within the educational field has empowered disabled students to pursue higher education in any field of study. Assistive learning devices enable students to access institutional resources and facilities fully. The proposed assistive learning and communication tool allows hearing and speech-impaired students to interact productively with their teachers and classmates. This tool converts audio signals into sign language videos for the speech and hearing-impaired to follow, and converts sign language into text format for the teachers to follow. This educational tool for the speech and hearing-impaired is implemented with customized deep learning models such as convolutional neural networks (CNN), residual neural networks (ResNet), and stacked long short-term memory (LSTM) network models. This assistive learning tool is a novel framework that interprets the static and dynamic gesture actions in American Sign Language (ASL). Such communicative tools empower the speech and hearing impaired to communicate effectively in a classroom environment and foster inclusivity. Customized deep learning models were developed and experimentally evaluated with standard performance metrics. The model exhibits an accuracy of 99.7% for all static gesture classification and 99% for a specific vocabulary of gesture action words. This two-way communicative and educational tool encourages social inclusion and a promising career for disabled students.
In a recent study, Prof. Rui Min and collaborators published a paper in the journal Opto-Electronic Science entitled "Smart photonic wristband for pulse wave monitoring". The paper introduces a novel realization of a sensor that uses a polymer optical multi-mode fiber to sense the pulse wave bio-signal from a wrist by analyzing the specklegram measured at the output of the fiber. Applying machine learning techniques to the pulse wave signal allowed medical diagnostics and the recognition of different gestures with an accuracy rate of 95%.
Flexible triboelectric nanogenerator (TENG)-based pressure sensors are essential for a wide range of applications, comprising wearable healthcare systems, intuitive human-device interfaces, electronic skin (e-skin), and artificial intelligence. Most conventional fabrication methods used to produce high-performance TENGs involve plasma treatment, photolithography, printing, and electro-deposition. However, these fabrication techniques are expensive, multi-step, time-consuming, and not suitable for mass production, which are the main barriers to efficient and cost-effective commercialization of TENGs. Here, we established a highly reliable scheme for the fabrication of a novel eco-friendly, low-cost, TENG-based pressure sensor (TEPS) designed for use in self-powered human gesture detection (SP-HGD) as well as wearable healthcare applications. The sensors with microstructured electrodes performed well, with high sensitivity (7.697 kPa^-1), a low limit of detection (~1 Pa), a fast response time (<9.9 ms), and high stability over >4,000 compression-release cycles. The proposed method is suitable for the adaptable fabrication of TEPS at extremely low cost, with possible applications in self-powered systems, especially e-skin and healthcare applications.
Wearable wristband systems leverage deep learning to revolutionize hand gesture recognition in daily activities. Unlike existing approaches that often focus on static gestures and require extensive labeled data, the proposed wearable wristband with self-supervised contrastive learning excels at dynamic motion tracking and adapts rapidly across multiple scenarios. It features a four-channel sensing array composed of an ionic hydrogel with hierarchical microcone structures and ultrathin flexible electrodes, resulting in high-sensitivity capacitance output. Through wireless transmission from a Wi-Fi module, the proposed algorithm learns latent features from the unlabeled signals of random wrist movements. Remarkably, only a small amount of labeled data is sufficient for fine-tuning the model, enabling rapid adaptation to various tasks. The system achieves a high accuracy of 94.9% in different scenarios, including the prediction of eight-direction commands and the air-writing of all numbers and letters. The proposed method facilitates smooth transitions between multiple tasks without modifying the structure or undergoing extensive task-specific training. Its utility has been further extended to enhance human-machine interaction over digital platforms, such as game controls, calculators, and three-language login systems, offering users a natural and intuitive way of communication.
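The self-supervised stage described above learns latent features from unlabeled signals via contrastive learning. As a generic illustration (not the paper's exact objective), the sketch below implements the widely used normalized-temperature cross-entropy (NT-Xent) loss, which pulls embeddings of two augmented views of the same signal window together and pushes all other pairs in the batch apart:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    # z1, z2: (batch, dim) embeddings of two views; (z1[i], z2[i]) are
    # positive pairs, every other pair in the batch is a negative.
    z = np.concatenate([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # drop self-similarity
    n = len(z1)
    targets = np.r_[np.arange(n, 2 * n), np.arange(n)] # index of each positive
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()

rng = np.random.default_rng(0)
z1 = rng.standard_normal((4, 8))
loss_aligned = nt_xent_loss(z1, z1)                       # identical views
loss_random = nt_xent_loss(z1, rng.standard_normal((4, 8)))  # unrelated views
```

The loss for perfectly aligned views is lower than for unrelated ones, which is the gradient signal that shapes the latent space before the few-shot fine-tuning step.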
Textiles for health and sporting activity monitoring are on the rise with the advent of smart portable wearables. The intention of this work is to design wireless monitoring wearables based on widely available textiles and low-environmental-impact production technologies. Herein we have developed a polymeric ink which is able to functionalize different types of textile fibers (including silver conducting fibers, cotton, and commercial textile) with polypyrrole. These fibers were woven together with a thinner silver conducting fiber and carbon fiber to form a touch-sensitive energy harvesting system that generates an electric output when mechanical pressure is applied to it. Different prototypes were manufactured with loom weaving accessories to simulate real textile cloths. By simple touch, the prototypes produced a maximum voltage of 244 V and a maximum power density of 2.29 W m^(-2). The current generated is then transformed into a digital signal, which is further utilized for human motion or gesture monitoring. The system comprises a wireless block for Internet of Things (IoT) applicability that will eventually be extended to future remote health and sports monitoring systems.
Background With the increasing prominence of hand and finger motion tracking in virtual reality (VR) applications and rehabilitation studies, data gloves have emerged as a prevalent solution. In this study, we developed an innovative, lightweight, and detachable data glove tailored for finger motion tracking in VR environments. Methods The glove design incorporates a potentiometer coupled with a flexible rack-and-pinion gear system, facilitating precise and natural hand gestures for interaction with VR applications. Initially, we calibrated the potentiometer to align with the actual finger bending angle and verified the accuracy of angle measurements recorded by the data glove. To verify the precision and reliability of our data glove, we conducted repeatability testing for flexion (grip test) and extension (flat test), with 250 measurements each, across five users. We employed the Gage Repeatability and Reproducibility (Gage R&R) method to analyze and interpret the repeatability data. Furthermore, we integrated the gloves into a SteamVR home environment using the OpenGlove auto-calibration tool. Conclusions The repeatability analysis revealed an aggregate error of 1.45 degrees in both the gripped and flat hand positions. This outcome was notably favorable when compared with the findings from assessments of nine alternative data gloves that employed similar protocols. In these experiments, users navigated and engaged with virtual objects, underlining the glove's accurate tracking of finger motion. Furthermore, the proposed data glove exhibited a low response time of 17-34 ms and a back-drive force of only 0.19 N. Additionally, according to a comfort evaluation using the Comfort Rating Scales, the proposed glove system is wearable, placing it at the WL1 level.
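The calibration step, aligning raw potentiometer readings with the actual finger bending angle, can be illustrated with a simple two-point linear fit; the reference postures, angle values, and raw readings below are illustrative assumptions, not the authors' protocol:

```python
def calibrate(raw_flat, raw_fist, angle_flat=0.0, angle_fist=90.0):
    # Two-point linear calibration: map raw potentiometer readings to a
    # bending angle using readings taken at a flat hand and a full grip.
    slope = (angle_fist - angle_flat) / (raw_fist - raw_flat)
    return lambda raw: angle_flat + slope * (raw - raw_flat)

# Hypothetical raw ADC readings at the two reference postures:
to_angle = calibrate(raw_flat=120, raw_fist=840)
print(round(to_angle(480), 1))  # 45.0 (halfway between the references)
```

In practice, per-finger calibration like this would be repeated for each sensor, since mechanical tolerances shift the raw reading range from finger to finger.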
This article describes a pilot study aiming at generating social interactions between a humanoid robot and adolescents with autism spectrum disorder (ASD), through the practice of a gesture imitation game. The participants were a 17-year-old young lady with ASD and intellectual deficit, and a control participant: a preadolescent with ASD but no intellectual deficit (Asperger syndrome). The game is comprised of four phases: greetings, pairing, imitation, and closing. Field educators were involved, playing specific roles: visual or physical inciter. The use of a robot allows for catching the participants' attention, playing the imitation game for a longer period of time than with a human partner, and preventing the game partner's negative facial expressions resulting from tiredness, impatience, or boredom. The participants' behavior was observed in terms of initial approach towards the robot, positioning relative to the robot in terms of distance and orientation, reactions to the robot's voice or moves, signs of happiness, and imitation attempts. Results suggest a more and more natural approach towards the robot during the sessions, as well as a higher level of social interaction, based on the variations of the parameters listed above. We use these preliminary results to draw the next steps of our research work as well as identify further perspectives, with this aim in mind: improving social interactions with adolescents with ASD and intellectual deficit, allowing for better integration of these people into our societies.
Gesture recognition is of great importance to intelligent human-computer interaction technology, but it is also very difficult to deal with, especially when the environment is complex. In this paper, a recognition algorithm for dynamic and combined gestures, based on multi-feature fusion, is proposed. First, in the image segmentation stage, the algorithm extracts the region of interest of gestures in the color and depth maps by combining them with the depth information. Then, to establish a support vector machine (SVM) model for static hand gesture recognition, the algorithm fuses weighted Hu invariant moments of the depth map into the histogram of oriented gradients (HOG) of the color image. Finally, a hidden Markov model (HMM) toolbox supporting multi-dimensional continuous data input is adopted for training and recognition. Experimental results show that the proposed algorithm can not only overcome the influence of skin-colored objects, multiple moving objects, and hand gesture interference in the background, but is also real-time and practical in human-computer interaction.
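The fusion of weighted Hu invariant moments with the HOG descriptor can be sketched as a scale-balanced concatenation; the log compression and the weighting factor below are common practice and illustrative assumptions, not values from the paper:

```python
import numpy as np

def fuse_features(hog_vec, hu_vec, hu_weight=10.0):
    # Hu invariant moments span many orders of magnitude, so they are
    # log-compressed before being weighted and appended to the HOG
    # descriptor of the color image. hu_weight is a hypothetical
    # balancing factor for the SVM's feature scale.
    hu_log = -np.sign(hu_vec) * np.log10(np.abs(hu_vec) + 1e-30)
    return np.concatenate([hog_vec, hu_weight * hu_log])

# Hypothetical inputs: a standard 64x128-window HOG descriptor (3780 dims)
# and the 7 Hu moments computed from the depth map.
hog = np.full(3780, 0.1)
hu = np.array([2.1e-3, 5.5e-6, 1.2e-9, 3.3e-10, -1.1e-19, 8.8e-13, 2.2e-20])
fused = fuse_features(hog, hu)
print(fused.shape)  # (3787,)
```

The fused vector then serves as the per-frame input to the static-gesture SVM, while the HMM models the temporal sequence of such frames for dynamic gestures.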
Funding: supported by a research fund from Chosun University, 2024.
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 52175279 and 51705459), the Natural Science Foundation of Zhejiang Province, China (Grant No. LY20E050022), and the Key Research and Development Projects of Zhejiang Provincial Science and Technology Department (Grant No. 2021C03122).
Funding: This research was funded by the Makkah Digital Gate Initiatives under Grant Number (MDP-IRI-8-2020), Emirate of Makkah Province and King Abdulaziz University, Jeddah, Saudi Arabia. https://science.makkah.kau.edu.sa/Default-101888-AR.
Abstract: This paper proposes a novel, efficient, and affordable approach to detecting students' engagement levels in an e-learning environment using webcams. Our method analyzes spatiotemporal features of e-learners' micro body gestures, which are mapped to emotions and the corresponding engagement states. The proposed engagement detection model uses a three-dimensional convolutional neural network to analyze both temporal and spatial information across video frames. We follow a transfer learning approach using the C3D model trained on the Sports-1M dataset. The adopted C3D model was used in two different ways: as a feature extractor with linear classifiers, and as a classifier after fine-tuning the pretrained model. Our model was tested, and its performance was evaluated and compared to existing models. It proved its effectiveness and superiority over other existing methods with an accuracy of 94%. The results of this work will contribute to the development of smart and interactive e-learning systems with adaptive responses based on users' engagement levels.
Abstract: Advances in the design of autopilot systems have greatly benefited the aviation industry, and such systems require frequent upgrades. Reinforcement learning delivers appropriate outcomes in continuous environments where controlling an Unmanned Aerial Vehicle (UAV) demands maximum accuracy. In this paper, we designed a hybrid framework based on Reinforcement Learning and Deep Learning in which the traditional electronic flight controller is replaced by 3D hand gestures. The algorithm takes 3D hand gestures as input and integrates them with the Deep Deterministic Policy Gradient (DDPG) to receive the best reward and take actions according to the gesture input. The UAV consists of a Jetson Nano embedded testbed, a Global Positioning System (GPS) sensor module, and an Intel depth camera. The collision avoidance system, based on the polar mask segmentation technique, detects obstacles and chooses the best path according to the designed reward function. Analysis of the results shows that the novel framework achieves better accuracy and computational time than a traditional Proportional Integral Derivative (PID) flight controller. Six reward functions were estimated for 2500, 5000, 7500, and 10000 training episodes, normalized between 0 and −4000. The best result was observed at 2500 episodes, where the rewards reached their maximum value. The achieved training accuracy of polar mask segmentation for collision avoidance is 86.36%.
Funding: Supported by the Special Fund Project for Technology Innovation of Tianjin (No. 10FDZDGX00500) and the Tianjin Product Quality Inspection Technology Research Institute (No. 11-03).
Abstract: Experiment and dynamic simulation were combined to obtain the loads on a bicycle frame. A dynamic model of the body-bicycle system was built in ADAMS. The body gestures under different riding conditions were then captured by a motion analysis system. Dynamic simulation was carried out after the body motion data were input into the simulation system in ADAMS, yielding a series of loads that the body applied on the head tube, seat pillar, and bottom bracket. The results show that the loads on the frame and their distribution differ markedly under various riding conditions. Finally, finite element analysis in ANSYS showed that the stress and its distribution on the frame were clearly different when the frame was loaded according to the bicycle testing standard versus the simulation. An efficient way to accurately obtain loads on a bicycle frame is thus proposed, which is significant for cycling safety and will also serve as a basis for digitalized, lightweight, and customized bicycle design.
Abstract: A Hand Gesture Recognition (HGR) system can be employed to facilitate communication between humans and computers without special input and output devices, which may complicate communication with computers, especially for people with disabilities. Hand gestures are a natural human-to-human communication method that can also be used in human-computer interaction. Many researchers have developed techniques and methods aimed at understanding and recognizing specific hand gestures by employing one or two machine learning algorithms with reasonable accuracy. This work aims to develop a powerful hand gesture recognition model with a 100% recognition rate. We propose an ensemble classification model that combines the most powerful machine learning classifiers to obtain diversity and improve accuracy. The majority voting method was used to aggregate the predictions produced by each classifier and obtain the final classification result. Our model was trained on a self-constructed dataset containing 1600 images of ten different hand gestures. Combining Canny's edge detector and the histogram of oriented gradients method with the ensemble classifier proved highly effective for the recognition rate. The experimental results demonstrate the robustness of our proposed model: Logistic Regression and Support Vector Machine achieved 100% accuracy. The developed model was validated on two public datasets, and the findings show that our model outperformed other compared studies.
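As a hedged illustration of the majority-voting idea (not the authors' exact pipeline: their inputs were HOG descriptors of Canny edge maps computed over their own 1600-image dataset), scikit-learn's `VotingClassifier` aggregates hard votes from several base classifiers; here the digits dataset stands in for the gesture feature vectors:

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in data: sklearn's digits substitute for HOG-of-Canny gesture features.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=2000)),
        ("svm", SVC()),
        ("tree", DecisionTreeClassifier(random_state=0)),
    ],
    voting="hard",  # majority vote over the three class predictions
)
ensemble.fit(X_tr, y_tr)
print(ensemble.score(X_te, y_te))
```

With hard voting, a misprediction by one base classifier is outvoted whenever the other two agree, which is the source of the accuracy gain the abstract describes.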
Abstract: Holograms provide a distinctive way to display and convey information, and have been improved to provide better user interactions. Holographic interactions are important as they improve user engagement with virtual objects. Gesture interaction is a recent research topic, as it allows users to use their bare hands to interact directly with the hologram. However, it remains unclear whether real hand gestures are well suited for hologram applications. Therefore, we discuss the development process and implementation of three-dimensional object manipulation using natural hand gestures in a hologram. We describe the design and development process for hologram applications and their integration with real hand gesture interactions as initial findings. Experimental results from the NASA TLX form are discussed. Based on the findings, we actualize the user interactions in the hologram.
Abstract: This paper presents an experiment using OpenBCI to collect data on two hand gestures and decode the signal to distinguish them. The signal was extracted with three electrodes on the subject's forearm and transferred on one channel. After applying a Butterworth bandpass filter, we chose a novel way to detect the gesture action segment: instead of using a moving average algorithm, which is based on energy calculation, we developed an algorithm based on the Hilbert transform to find a dynamic threshold and identify the action segment. Four features were extracted from each activity section, generating feature vectors for classification. During classification, we compared K-nearest neighbors (KNN) and support vector machine (SVM) on a relatively small number of samples. Most experiments rely on large quantities of data to pursue a highly fitted model, but in certain circumstances enough training data cannot be obtained, which makes exploring the best classification method for small-sample data imperative. Although KNN is known for its simplicity and practicality, it is relatively time-consuming. SVM, by contrast, performs better in both time requirement and recognition accuracy, owing to its different risk minimization principle. Experimental results show an average recognition rate for the SVM algorithm that is 1.25% higher than for KNN, while SVM is 2.031 s faster than KNN.
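The envelope-based segmentation idea can be sketched as follows, assuming a `mean + k*std` rule over the Hilbert envelope; the paper does not specify its exact threshold formula, so `k` is a hypothetical parameter:

```python
import numpy as np
from scipy.signal import hilbert

def detect_active_segments(signal, k=1.0):
    """Mark samples whose Hilbert envelope exceeds a dynamic threshold.

    The envelope is the magnitude of the analytic signal; thresholding at
    mean + k*std adapts to each recording rather than using a fixed
    energy cutoff (as a moving-average energy detector would).
    """
    envelope = np.abs(hilbert(signal))
    threshold = envelope.mean() + k * envelope.std()
    return envelope > threshold
```

Applied to a quiet baseline with a high-amplitude burst in the middle, the returned mask is true over the burst and false elsewhere, giving the action segment boundaries.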
Abstract: Several attempts have appeared recently to control optical trapping systems via touch tablets and cameras instead of a mouse and joystick. Our approach is based on modern low-cost hardware combined with fingertip and speech recognition software. The positions of the operator's hands or fingertips control the positions of trapping beams in holographic optical tweezers that provide optical manipulation of microobjects. We tested and adapted two systems for hand position detection and gesture recognition: the Creative Interactive Gesture Camera and the Leap Motion. We further enhanced the Holographic Raman Tweezers (HRT) system with voice commands controlling the micropositioning stage and the acquisition of Raman spectra. The interface communicates with the HRT either directly, which requires adaptation of the HRT firmware, or indirectly, by simulating mouse and keyboard messages. In real experiments, its use sped up the operator's communication with the system by approximately two times compared with traditional control by mouse and keyboard.
Abstract: Background Within a virtual environment (VE), the control of locomotion (e.g., self-travel) is critical for creating a realistic and functional experience. Usually, the direction of locomotion while using a head-mounted display (HMD) is determined by the direction the head is pointing, and forward or backward motion is controlled with a handheld controller. However, handheld devices can be difficult to use while the eyes are covered by an HMD. Free hand gestures, tracked with a camera or a hand data glove, have the advantage of eliminating the need to look at the hand controller, but the design of hand or finger gestures for this purpose has not been well developed. Methods This study used a depth-sensing camera to track fingertip location (curling and straightening the fingers), which was converted to forward or backward self-travel in the VE. Fingertip position was converted to self-travel velocity using a mapping function with three parameters: a region of zero velocity (dead zone) around the relaxed hand position, a linear relationship of fingertip position to velocity (slope or β) beginning at the edge of the dead zone, and an exponential rather than linear mapping of fingertip position to velocity (exponent). Using an HMD, participants moved forward along a virtual road and stopped at a target on the road by controlling self-travel velocity with finger flexion and extension. Each of the three mapping function parameters was tested at three levels. Outcomes measured included usability ratings, fatigue, nausea, and time to complete the tasks. Results Twenty subjects participated, but five did not complete the study due to nausea. The size of the dead zone had little effect on performance or usability. Subjects preferred lower β values, which were associated with better subjective ratings of control and reduced time to complete the task, especially for large targets. Exponent values of 1.0 or greater were preferred and reduced the time to complete the task, especially for small targets. Conclusions Small finger movements can be used to control the velocity of self-travel in a VE. The functions used for converting fingertip position to movement velocity influence usability and performance.
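The three-parameter mapping function can be sketched directly from the description above; the normalization of fingertip displacement and the default parameter values are illustrative assumptions:

```python
def fingertip_to_velocity(x, dead_zone=0.1, beta=2.0, exponent=1.0):
    """Map normalized fingertip displacement x (in -1..1) to self-travel velocity.

    Inside the dead zone the velocity is zero; beyond its edge the excess
    displacement is raised to `exponent` and scaled by the slope `beta`.
    An exponent > 1 gives finer control near rest and faster travel at
    the extremes of finger flexion/extension.
    """
    magnitude = abs(x)
    if magnitude <= dead_zone:
        return 0.0  # relaxed hand position: no self-travel
    excess = magnitude - dead_zone
    speed = beta * excess ** exponent
    return speed if x > 0 else -speed  # sign selects forward vs backward
```

For example, with the defaults a displacement of 0.6 yields a velocity of 2.0 × (0.6 − 0.1) = 1.0, while the same displacement with exponent 2.0 yields 0.5, illustrating how the exponent slows travel for mid-range finger positions.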
Funding: Sponsored by Prince Sattam Bin Abdulaziz University (PSAU) as part of funding for its SDG Roadmap Research Funding Programme, project number PSAU-2023-SDG-2023/SDG/31.
Abstract: Hearing and speech impairment can be congenital or acquired. Hearing- and speech-impaired students often hesitate to pursue higher education in reputable institutions because of their challenges. However, the development of automated assistive learning tools within the educational field has empowered disabled students to pursue higher education in any field of study. Assistive learning devices enable students to access institutional resources and facilities fully. The proposed assistive learning and communication tool allows hearing- and speech-impaired students to interact productively with their teachers and classmates. This tool converts audio signals into sign language videos for the speech- and hearing-impaired to follow, and converts sign language into text format for the teachers to follow. The tool is implemented with customized deep learning models such as Convolutional Neural Networks (CNN), Residual Neural Networks (ResNet), and stacked Long Short-Term Memory (LSTM) network models. It is a novel framework that interprets static and dynamic gesture actions in American Sign Language (ASL). Such communicative tools empower the speech- and hearing-impaired to communicate effectively in a classroom environment and foster inclusivity. The customized deep learning models were developed and experimentally evaluated with standard performance metrics. The model exhibits an accuracy of 99.7% for all static gesture classification and 99% for a specific vocabulary of gesture action words. This two-way communicative and educational tool encourages social inclusion and a promising career for disabled students.
Abstract: In a recent study, Prof. Rui Min and collaborators published a paper in the journal Opto-Electronic Science entitled "Smart photonic wristband for pulse wave monitoring". The paper introduces a novel realization of a sensor that uses a polymer optical multi-mode fiber to sense the pulse wave bio-signal from a wrist by analyzing the specklegram measured at the output of the fiber. Applying machine learning techniques to the pulse wave signal enabled medical diagnostics and the recognition of different gestures with an accuracy rate of 95%.
Abstract: Flexible triboelectric nanogenerator (TENG)-based pressure sensors are essential for a wide range of applications, comprising wearable healthcare systems, intuitive human-device interfaces, electronic skin (e-skin), and artificial intelligence. Most conventional fabrication methods used to produce high-performance TENGs involve plasma treatment, photolithography, printing, and electro-deposition. However, these fabrication techniques are expensive, multi-step, time-consuming, and unsuitable for mass production, which are the main barriers to efficient and cost-effective commercialization of TENGs. Here, we established a highly reliable scheme for the fabrication of a novel eco-friendly, low-cost, TENG-based pressure sensor (TEPS) designed for use in self-powered human gesture detection (SP-HGD) as well as wearable healthcare applications. The sensors with microstructured electrodes performed well, with high sensitivity (7.697 kPa^-1), a low limit of detection (~1 Pa), a fast response time (<9.9 ms), and high stability over >4,000 compression-release cycles. The proposed method is suitable for the adaptable fabrication of TEPS at an extremely low cost, with possible applications in self-powered systems, especially e-skin and healthcare applications.
Funding: Supported by the Research Grant Fund from Kwangwoon University in 2023, the National Natural Science Foundation of China under Grant (62311540155), the Taishan Scholars Project Special Funds (tsqn202312035), and the open research foundation of the State Key Laboratory of Integrated Chips and Systems.
Abstract: Wearable wristband systems leverage deep learning to revolutionize hand gesture recognition in daily activities. Unlike existing approaches that often focus on static gestures and require extensive labeled data, the proposed wearable wristband with self-supervised contrastive learning excels at dynamic motion tracking and adapts rapidly across multiple scenarios. It features a four-channel sensing array composed of an ionic hydrogel with hierarchical microcone structures and ultrathin flexible electrodes, resulting in high-sensitivity capacitance output. Through wireless transmission from a Wi-Fi module, the proposed algorithm learns latent features from the unlabeled signals of random wrist movements. Remarkably, only few-shot labeled data are sufficient for fine-tuning the model, enabling rapid adaptation to various tasks. The system achieves a high accuracy of 94.9% in different scenarios, including the prediction of eight-direction commands and the air-writing of all numbers and letters. The proposed method facilitates smooth transitions between multiple tasks without modifying the structure or undergoing extensive task-specific training. Its utility has been further extended to enhance human-machine interaction over digital platforms, such as game controls, calculators, and three-language login systems, offering users a natural and intuitive way of communication.
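A common objective for this kind of self-supervised contrastive pretraining on unlabeled signals is the NT-Xent loss; the paper does not publish its exact loss function, so this NumPy version is only an illustrative sketch of the technique:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two augmented views of a batch.

    z1, z2: (N, D) embeddings of the same N unlabeled signal windows
    under two random augmentations; matching rows are positive pairs,
    and all other rows in the combined batch serve as negatives.
    """
    z = np.concatenate([z1, z2])                       # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity space
    sim = z @ z.T / temperature                        # scaled similarity matrix
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # The positive of row i is row i+n (and vice versa).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

Minimizing this loss pulls the two views of each window together while pushing apart the views of different windows, which is what lets a few labeled examples suffice for the later fine-tuning step.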
Funding: Supported by the project BRIGHT (Project reference: MERA-NET3/0004/2021), financed by national funds from FCT - Fundação para a Ciência e a Tecnologia, I.P., in the scope of projects LA/P/0037/2020, UIDP/50025/2020, and UIDB/50025/2020 of the Associate Laboratory Institute of Nanostructures, Nanomodelling and Nanofabrication - i3N; supported by i3N and FCT - Portuguese Foundation for Science and Technology through a Ph.D. scholarship (grant no. UI/BD/151288/2021); partially supported by the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreements number 952169 (SYNERGY, H2020-WIDESPREAD-2020-5, CSA), 101008701 (EMERGE, H2020-INFRAIA-2020-1), and 101070255 (REFORM, HORIZON-CL4-2021-DIGITAL-EMERGING-01); also supported by the LISBOA-05-3559-FSE-000007 and CENTRO-04-3559-FSE-000094 operations, co-funded by the Lisboa 2020 and Centro 2020 programmes, Portugal 2020, European Union, through the European Social Fund; Fundação para a Ciência e Tecnologia (FCT); Agência Nacional de Inovação (ANI).
Abstract: Textiles for health and sporting activity monitoring are on the rise with the advent of smart portable wearables. The intention of this work is to design wireless monitoring wearables based on widely available textiles and low-environmental-impact production technologies. Herein we have developed a polymeric ink able to functionalize different types of textile fibers (including silver conducting fibers, cotton, and commercial textile) with polypyrrole. These fibers were woven together with a thinner silver conducting fiber and carbon fiber to form a touch-sensitive energy harvesting system that generates an electric output when mechanical pressure is applied to it. Different prototypes were manufactured with loom weaving accessories to simulate real textile cloths. By simple touch, the prototypes produced a maximum voltage of 244 V and a maximum power density of 2.29 W m^(-2). The current generated is then transformed into a digital signal, which is further utilized for human motion or gesture monitoring. The system comprises a wireless block for Internet of Things (IoT) applicability that will eventually be extended to future remote health and sports monitoring systems.
Funding: Supported by the Sirindhorn International Institute of Technology, Thammasat University, EFS-G (Excellent Foreign Student-Graduate) research fund.
Abstract: Background With the increasing prominence of hand and finger motion tracking in virtual reality (VR) applications and rehabilitation studies, data gloves have emerged as a prevalent solution. In this study, we developed an innovative, lightweight, and detachable data glove tailored for finger motion tracking in VR environments. Methods The glove design incorporates a potentiometer coupled with a flexible rack-and-pinion gear system, facilitating precise and natural hand gestures for interaction with VR applications. We first calibrated the potentiometer to align with the actual finger bending angle and verified the accuracy of the angle measurements recorded by the data glove. To verify the precision and reliability of our data glove, we conducted repeatability testing for flexion (grip test) and extension (flat test), with 250 measurements each, across five users. We employed Gage Repeatability and Reproducibility analysis to interpret the repeated measurements. Furthermore, we integrated the gloves into a SteamVR home environment using the OpenGlove auto-calibration tool. Conclusions The repeatability analysis revealed an aggregate error of 1.45 degrees in both the gripped and flat hand positions. This outcome compares notably favorably with the findings from assessments of nine alternative data gloves that employed similar protocols. In these experiments, users navigated and engaged with virtual objects, underlining the glove's precise tracking of finger motion. Furthermore, the proposed data glove exhibited a low response time of 17-34 ms and a back-drive force of only 0.19 N. Additionally, according to a comfort evaluation using the Comfort Rating Scales, the proposed glove system is wearable, placing it at the WL1 level.
Abstract: This article describes a pilot study aimed at generating social interactions between a humanoid robot and adolescents with autism spectrum disorder (ASD) through the practice of a gesture imitation game. The participants were a 17-year-old young woman with ASD and an intellectual deficit, and a control participant: a preadolescent with ASD but no intellectual deficit (Asperger syndrome). The game comprises four phases: greetings, pairing, imitation, and closing. Field educators were involved, playing specific roles as visual or physical inciters. The use of a robot allows for catching the participants' attention, playing the imitation game for a longer period than with a human partner, and avoiding the negative facial expressions a game partner might show from tiredness, impatience, or boredom. The participants' behavior was observed in terms of initial approach towards the robot, positioning relative to the robot in distance and orientation, reactions to the robot's voice or moves, signs of happiness, and imitation attempts. The results suggest an increasingly natural approach towards the robot over the sessions, as well as a higher level of social interaction, based on the variations of the parameters listed above. We use these preliminary results to outline the next steps of our research and identify further perspectives, with the aim of improving social interactions with adolescents with ASD and intellectual deficits, allowing for better integration of these individuals into our societies.
Funding: Supported by the National Ministries Foundation of China (Y42013040181), the National Ministries Research of Twelfth Five-Year Projects (Y31011040315), and the Fundamental Research Funds for the Central Universities (NSIY191414).
Abstract: Gesture recognition is of great importance to intelligent human-computer interaction technology, but it is also very difficult, especially when the environment is complex. In this paper, a recognition algorithm for dynamic and combined gestures based on multi-feature fusion is proposed. First, in the image segmentation stage, the algorithm extracts the region of interest for gestures in the color and depth maps by incorporating depth information. Then, to establish a support vector machine (SVM) model for static hand gesture recognition, the algorithm fuses weighted Hu invariant moments of the depth map into the histogram of oriented gradients (HOG) of the color image. Finally, a hidden Markov model (HMM) toolbox supporting multi-dimensional continuous data input is adopted for training and recognition. Experimental results show that the proposed algorithm can overcome the influence of skin-colored objects, multi-object motion, and hand gesture interference in the background, while remaining real-time and practical for human-computer interaction.
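The fusion step can be sketched as a weighted concatenation of the two descriptors; the log-compression of the Hu moments and all names are illustrative assumptions, since the abstract specifies only that weighted Hu moments of the depth map are fused into the color image's HOG descriptor:

```python
import numpy as np

def fuse_features(hog_color, hu_depth, hu_weight=10.0):
    """Fuse HOG features of the color image with weighted Hu moments
    of the depth map into one descriptor for the SVM.

    Raw Hu moments span many orders of magnitude, so they are
    log-compressed before weighting, a common normalization that keeps
    the seven invariants on a scale comparable to the HOG bins.
    """
    hu = -np.sign(hu_depth) * np.log10(np.abs(hu_depth) + 1e-30)
    return np.concatenate([hog_color, hu_weight * hu])
```

The resulting vector has `len(hog_color) + 7` entries; the `hu_weight` factor plays the role of the weighting the abstract mentions, trading off shape invariants against gradient statistics.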