Increasing our understanding of primate gestural communication can provide new insights into language evolution. A key question in primate communication is the association between the social relationships of primates and their repertoire of gestures. Such analyses can reveal how primates use their repertoire of gestural communication to maintain their networks of family and friends, much as humans use language to maintain their social networks. In this study we examined the association between the repertoire of gestures (overall, manual and bodily gestures, and gestures of different modalities) and social bonds (presence of reciprocated grooming), coordinated behaviors (travel, resting, co-feeding), and the complexity of ecology (e.g. noise, illumination) and sociality (party size, audience) in wild East African chimpanzees (Pan troglodytes schweinfurthii). A larger repertoire of manual, visual gestures was associated with the presence of a relationship based on reciprocated grooming and with increases in social complexity. A smaller repertoire of manual tactile gestures occurred when the relationship was based on reciprocated grooming. A smaller repertoire of bodily gestures occurred between partners who jointly traveled for longer. While gesture repertoire size was associated with social complexity, complex ecology also influenced repertoire size. The evolution of a large repertoire of manual, visual gestures may have been a key factor that enabled larger social groups to emerge during evolution. Thus, the evolution of larger brains in hominins may have co-occurred with an increase in the cognitive complexity underpinning gestural communication, and this, in turn, may have enabled hominins to live in more complex social groups.
Wearable wristband systems leverage deep learning to revolutionize hand gesture recognition in daily activities. Unlike existing approaches that often focus on static gestures and require extensive labeled data, the proposed wearable wristband with self-supervised contrastive learning excels at dynamic motion tracking and adapts rapidly across multiple scenarios. It features a four-channel sensing array composed of an ionic hydrogel with hierarchical microcone structures and ultrathin flexible electrodes, resulting in high-sensitivity capacitance output. Through wireless transmission from a Wi-Fi module, the proposed algorithm learns latent features from the unlabeled signals of random wrist movements. Remarkably, only few-shot labeled data are sufficient for fine-tuning the model, enabling rapid adaptation to various tasks. The system achieves a high accuracy of 94.9% in different scenarios, including the prediction of eight-direction commands and air-writing of all numbers and letters. The proposed method facilitates smooth transitions between multiple tasks without modifying the structure or undergoing extensive task-specific training. Its utility has been further extended to enhance human-machine interaction on digital platforms, such as game controls, calculators, and three-language login systems, offering users a natural and intuitive way of communicating.
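The self-supervised step above learns from unlabeled wrist-movement signals with a contrastive objective. As a rough, paper-independent illustration (not the authors' implementation), the sketch below computes a toy NT-Xent-style loss in pure Python: two augmented views of the same signal form a positive pair, and every other signal in the batch acts as a negative. The function name, embedding sizes, and temperature are illustrative assumptions.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_loss(view1, view2, temperature=0.5):
    """Toy NT-Xent-style contrastive loss over two lists of embeddings.

    Row i of view1 and row i of view2 are embeddings of two augmentations
    of the same unlabeled signal (a positive pair); all other rows in the
    combined batch serve as negatives.
    """
    z = view1 + view2
    n = len(view1)
    # Index of each embedding's positive partner in the combined batch.
    pos = list(range(n, 2 * n)) + list(range(n))
    total = 0.0
    for i in range(2 * n):
        denom = sum(math.exp(cosine(z[i], z[j]) / temperature)
                    for j in range(2 * n) if j != i)
        p = math.exp(cosine(z[i], z[pos[i]]) / temperature) / denom
        total += -math.log(p)  # cross-entropy toward the positive pair
    return total / (2 * n)
```

When the two views of each signal embed to the same point, the loss is low; shuffling the pairing raises it, which is the gradient signal that pretraining exploits before few-shot fine-tuning.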
Hearing and speech impairment can be congenital or acquired. Hearing- and speech-impaired students often hesitate to pursue higher education in reputable institutions because of their challenges. However, the development of automated assistive learning tools within the educational field has empowered disabled students to pursue higher education in any field of study. Assistive learning devices enable students to fully access institutional resources and facilities. The proposed assistive learning and communication tool allows hearing- and speech-impaired students to interact productively with their teachers and classmates. The tool converts audio signals into sign language videos for speech- and hearing-impaired students to follow, and converts sign language into text for teachers to follow. This educational tool is implemented with customized deep learning models, namely convolutional neural networks (CNN), residual neural networks (ResNet), and stacked long short-term memory (LSTM) networks. It is a novel framework that interprets both static and dynamic gesture actions in American Sign Language (ASL). Such communicative tools empower the speech- and hearing-impaired to communicate effectively in a classroom environment and foster inclusivity. The customized deep learning models were developed and experimentally evaluated with standard performance metrics. The model exhibits an accuracy of 99.7% for static gesture classification and 99% for a specific vocabulary of gesture action words. This two-way communicative and educational tool encourages social inclusion and a promising career for disabled students.
The rapid evolution of virtual reality (VR) and augmented reality (AR) technologies has significantly transformed human-computer interaction, with applications spanning entertainment, education, healthcare, industry, and remote collaboration. A central challenge in these immersive systems lies in enabling intuitive, efficient, and natural interactions. Hand gesture recognition offers a compelling solution by leveraging the expressiveness of human hands to facilitate seamless control without relying on traditional input devices such as controllers or keyboards, which can limit immersion. However, achieving robust gesture recognition requires overcoming challenges related to accurate hand tracking, complex environmental conditions, and minimizing system latency. This study proposes an artificial intelligence (AI)-driven framework for recognizing both static and dynamic hand gestures in VR and AR environments using skeleton-based tracking compliant with the OpenXR standard. Our approach employs a lightweight neural network architecture capable of real-time classification within approximately 1.3 ms while maintaining an average accuracy of 95%. We also introduce a novel dataset generation method to support training robust models and demonstrate consistent classification of diverse gestures across widespread commercial VR devices. This work represents one of the first studies to implement and validate dynamic hand gesture recognition in real time using standardized VR hardware, laying the groundwork for more immersive, accessible, and user-friendly interaction systems. By advancing AI-driven gesture interfaces, this research has the potential to broaden the adoption of VR and AR across diverse domains and enhance the overall user experience.
Textiles for health and sporting activity monitoring are on the rise with the advent of smart portable wearables. The intention of this work is to design wireless monitoring wearables based on widely available textiles and low-environmental-impact production technologies. Herein we have developed a polymeric ink that is able to functionalize different types of textile fibers (including silver conducting fibers, cotton, and commercial textile) with polypyrrole. These fibers were woven together with a thinner silver conducting fiber and carbon fiber to form a touch-sensitive energy harvesting system that generates an electric output when mechanical pressure is applied to it. Different prototypes were manufactured with loom weaving accessories to simulate real textile cloths. By simple touch, the prototypes produced a maximum voltage of 244 V and a maximum power density of 2.29 W m^(-2). The current generated is then transformed into a digital signal, which is further utilized for human motion or gesture monitoring. The system comprises a wireless block for Internet of Things (IoT) applicability that will eventually be extended to future remote health and sports monitoring systems.
Background With the increasing prominence of hand and finger motion tracking in virtual reality (VR) applications and rehabilitation studies, data gloves have emerged as a prevalent solution. In this study, we developed an innovative, lightweight, and detachable data glove tailored for finger motion tracking in VR environments. Methods The glove design incorporates a potentiometer coupled with a flexible rack-and-pinion gear system, facilitating precise and natural hand gestures for interaction with VR applications. Initially, we calibrated the potentiometer to align with the actual finger bending angle and verified the accuracy of angle measurements recorded by the data glove. To verify the precision and reliability of our data glove, we conducted repeatability testing for flexion (grip test) and extension (flat test), with 250 measurements each, across five users. We employed the Gage Repeatability and Reproducibility method to analyze and interpret the repeatability data. Furthermore, we integrated the gloves into a SteamVR home environment using the OpenGlove auto-calibration tool. Conclusions The repeatability analysis revealed an aggregate error of 1.45 degrees in both the gripped and flat hand positions. This outcome was notably favorable when compared with the findings from assessments of nine alternative data gloves that employed similar protocols. In these experiments, users navigated and engaged with virtual objects, underlining the glove's accurate tracking of finger motion. Furthermore, the proposed data glove exhibited a low response time of 17-34 ms and a back-drive force of only 0.19 N. Additionally, according to a comfort evaluation using the Comfort Rating Scales, the proposed glove system is wearable, placing it at the WL1 level.
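The repeatability figure quoted above comes from a Gage R&R study. A heavily simplified stand-in for one ingredient of that analysis, the pooled within-user repeatability of repeated bend-angle measurements, can be sketched as follows (this is an illustrative simplification, not the full Gage R&R decomposition the paper performs):

```python
import math

def repeatability_error(trials):
    """Pooled within-condition standard deviation of repeated measurements.

    trials: list of per-user lists of measured bend angles (degrees),
    each list being repeated measurements of the same true angle.
    """
    variances = []
    for user in trials:
        mean = sum(user) / len(user)
        # Sample variance of this user's repeated measurements.
        variances.append(sum((x - mean) ** 2 for x in user) / (len(user) - 1))
    # Pool variances across users, then return the standard deviation.
    return math.sqrt(sum(variances) / len(variances))
```

A full Gage R&R would additionally separate reproducibility (user-to-user variation) from repeatability; this sketch covers only the latter term.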
With the growing application of intelligent robots in the service, manufacturing, and medical fields, efficient and natural interaction between humans and robots has become key to improving collaboration efficiency and user experience. Gesture recognition, as an intuitive and contactless interaction method, can overcome the limitations of traditional interfaces and enable real-time control and feedback of robot movements and behaviors. This study first reviews mainstream gesture recognition algorithms and their application on different sensing platforms (RGB cameras, depth cameras, and inertial measurement units). It then proposes a gesture recognition method based on multimodal feature fusion and a lightweight deep neural network that balances recognition accuracy with computational efficiency. At the system level, a modular human-robot interaction architecture is constructed, comprising perception, decision, and execution layers, and gesture commands are transmitted and mapped to robot actions in real time via the ROS communication protocol. Through multiple comparative experiments on public gesture datasets and a self-collected dataset, the proposed method's superiority is validated in terms of accuracy, response latency, and system robustness, while user-experience tests assess the interface's usability. The results provide a reliable technical foundation for robot collaboration and service in complex scenarios, offering broad prospects for practical application and deployment.
This article describes a pilot study aiming at generating social interactions between a humanoid robot and adolescents with autism spectrum disorder (ASD), through the practice of a gesture imitation game. The participants were a 17-year-old young lady with ASD and intellectual deficit, and a control participant: a preadolescent with ASD but no intellectual deficit (Asperger syndrome). The game is comprised of four phases: greetings, pairing, imitation, and closing. Field educators were involved, playing specific roles: visual or physical inciter. The use of a robot allows for catching the participants’ attention, playing the imitation game for a longer period of time than with a human partner, and preventing the game partner’s negative facial expressions resulting from tiredness, impatience, or boredom. The participants’ behavior was observed in terms of initial approach towards the robot, positioning relative to the robot in terms of distance and orientation, reactions to the robot’s voice or moves, signs of happiness, and imitation attempts. Results suggest a more and more natural approach towards the robot during the sessions, as well as a higher level of social interaction, based on the variations of the parameters listed above. We use these preliminary results to draw the next steps of our research work as well as identify further perspectives, with this aim in mind: improving social interactions with adolescents with ASD and intellectual deficit, allowing for better integration of these people into our societies.
In today's global society, people from multiple cultural backgrounds often communicate with foreign friends on a daily basis, making it increasingly important to respect and understand cultural differences. For example, students who join international exchange programs may find that simple gestures considered polite in one culture, such as bowing or handshakes, might be impolite or confusing in another.
In a recent study, Prof. Rui Min and collaborators published a paper in the journal Opto-Electronic Science entitled "Smart photonic wristband for pulse wave monitoring". The paper introduces a novel sensor that uses a polymer optical multi-mode fiber to sense the pulse wave bio-signal from the wrist by analyzing the specklegram measured at the output of the fiber. Applying machine learning techniques to the pulse wave signal enabled medical diagnostics and the recognition of different gestures with an accuracy of 95%.
The paper proposes that understanding human language evolution requires a comprehensive understanding of language in terms of language types, formation, and learning, together with a comprehensive understanding of human biological evolution in terms of the emergence of various hominin species with various language capacities. This paper proposes language neuromechanics and the human biological-language evolution. Language is derived from bodily movement. Language neuromechanics combines neuroscience, to study the language brain, with biomechanics, to study language movement. Language neuromechanics consists of language type, language formation, and language learning. Language types for advanced animals include gestural versus vocal language, instinctive versus controllable language, and symbolic versus iconic language. Language formation involves the development of the different types of languages from different bodily movements, phylogenetically and ontogenetically. Language learning involves learning controllable language to adapt to the communicative environment through language brain regions and language genes. This paper proposes a gradual, step-by-step human language evolution from the language of great apes to human language through human biological evolution, which chronologically and geographically consists of early hominins, early Homos, middle Homos, and late Homos with different language capacities. For hominins, vocal language and gestural language evolved together. In conclusion, by combining neuroscience and biomechanics, language neuromechanics provides a comprehensive understanding of language. The combination of language neuromechanics and the human biological-language evolution provides a clear evolutionary path from great apes' articulate gestural language without articulate speech to human articulate gestural language and articulate speech.
Continuous deformation always leads to performance degradation of a flexible triboelectric nanogenerator due to the Young's modulus mismatch of its different functional layers. In this work, we fabricated a fiber-shaped stretchable and tailorable triboelectric nanogenerator (FST-TENG) based on the geometric construction of a steel wire as the electrode and the ingenious selection of silicone rubber as the triboelectric layer. Owing to their great robustness and continuous conductivity, the FST-TENGs demonstrate high stability, stretchability, and even tailorability. For a single device ~6 cm in length and ~3 mm in diameter, an open-circuit voltage of ~59.7 V, a transferred charge of ~23.7 nC, a short-circuit current of ~2.67 μA, and an average power of ~2.13 μW can be obtained at 2.5 Hz. By knitting several FST-TENGs into a fabric or a bracelet, the device can harvest human motion energy and drive a wearable electronic device. Finally, it can also be woven onto the dorsum of a glove to monitor hand gestures: by analyzing the voltage signals, it can recognize every single finger, different bending angles, and the number of bent fingers.
To address the tendency of the back propagation (BP) neural network to fall into local minima and its low convergence speed in gesture recognition, a new method that combines the chaos algorithm with the genetic algorithm (CGA) is proposed. Exploiting the ergodicity of the chaos algorithm and the global convergence of the genetic algorithm, the basic idea of this paper is to encode the weights and thresholds of the BP neural network and obtain a general optimal solution with the genetic algorithm; this general optimal solution is then refined into an accurate optimal solution by adding a chaotic disturbance. The optimal results of the chaotic genetic algorithm are used as the initial weights and thresholds of the BP neural network to recognize gestures. Simulation and experimental results show that the real-time performance and accuracy of gesture recognition are greatly improved with CGA.
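The two-stage idea, a genetic algorithm for a coarse optimum followed by a chaotic (logistic-map) disturbance for refinement, can be sketched generically. The code below is a minimal illustration under assumed hyperparameters (population size, mutation rate, disturbance amplitude), minimizing an arbitrary objective as a stand-in for the BP network's training error; it is not the paper's implementation.

```python
import random

def logistic_map(x, mu=4.0):
    # Fully chaotic logistic map: generates the ergodic disturbance sequence.
    return mu * x * (1.0 - x)

def chaotic_ga_minimize(f, dim, pop=30, gens=60, chaos_steps=200, seed=0):
    """Minimize f over R^dim: GA for a general optimum, chaos for refinement."""
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=f)                      # elitist selection
        parents = population[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dim) if dim > 1 else 0  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                  # small Gaussian mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0, 0.1)
            children.append(child)
        population = parents + children
    best = min(population, key=f)
    best_val = f(best)
    # Refinement: perturb the GA optimum along a chaotic sequence and
    # keep any candidate that improves the objective.
    x = 0.7
    for _ in range(chaos_steps):
        x = logistic_map(x)
        candidate = [w + 0.1 * (2 * x - 1) for w in best]
        if f(candidate) < best_val:
            best, best_val = candidate, f(candidate)
    return best, best_val
```

In the paper's setting, `f` would be the BP network's error as a function of its encoded weights and thresholds, and `best` would seed the network's initial parameters.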
In human-machine interaction, robotic hands are useful in many scenarios. Operating robotic hands via gestures instead of handles would greatly improve the convenience and intuitiveness of human-machine interaction. Here, we present a magnetic-array-assisted sliding triboelectric sensor for achieving real-time gesture interaction between a human hand and a robotic hand. With a finger's traction movement of flexion or extension, the sensor induces positive/negative pulse signals. By counting the pulses in unit time, the degree, speed, and direction of finger motion can be judged in real time. The magnetic array plays an important role in generating the quantifiable pulses. The two designed parts of the magnetic array transform sliding motion into contact-separation and constrain the sliding pathway, respectively, thus improving the durability, low-speed signal amplitude, and stability of the system. This direct quantization approach and optimization of the wearable gesture sensor provide a new strategy for achieving natural, intuitive, and real-time human-robot interaction.
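The pulse-counting logic above maps directly to a small decoder: each positive or negative pulse marks a fixed angular increment, so summing signed pulses over a time window yields the degree and direction of motion, and dividing by the window yields speed. The sketch below is a generic illustration; `degrees_per_pulse` is an assumed calibration constant, not a value from the paper.

```python
def decode_motion(pulses, window=1.0, degrees_per_pulse=5.0):
    """Decode finger motion from signed triboelectric pulse events.

    pulses: list of (timestamp, sign) tuples, sign = +1 for flexion pulses
    and -1 for extension pulses. Returns direction, degrees of motion in
    the last `window` seconds, and angular speed.
    """
    if not pulses:
        return {"direction": "none", "degrees": 0.0, "speed": 0.0}
    t_end = pulses[-1][0]
    # Keep only pulses inside the most recent time window.
    recent = [s for t, s in pulses if t > t_end - window]
    net = sum(recent)                       # signed pulse count
    degrees = abs(net) * degrees_per_pulse  # each pulse = fixed increment
    direction = "flexion" if net > 0 else "extension" if net < 0 else "none"
    return {"direction": direction, "degrees": degrees, "speed": degrees / window}
```

In a real system the decoded dictionary would be forwarded to the robotic hand controller at each window boundary.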
Dynamic hand gesture recognition is a desirable alternative means of human-computer interaction. This paper presents a hand gesture recognition system designed for controlling flights of unmanned aerial vehicles (UAV). A data representation model is introduced that represents a dynamic gesture sequence by converting the 4-D spatiotemporal data to a 2-D matrix and a 1-D array. To train the system to recognize the designed gestures, skeleton data collected from a Leap Motion Controller are converted into the two data models. A training dataset of 9124 samples and a testing dataset of 1938 samples were created to train and test the proposed three deep learning neural networks: a 2-layer fully connected neural network, a 5-layer fully connected neural network, and an 8-layer convolutional neural network. Static testing shows that the 2-layer fully connected network achieves an average accuracy of 96.7% on scaled datasets and 12.3% on non-scaled datasets; the 5-layer fully connected network achieves 98.0% on scaled datasets and 89.1% on non-scaled datasets; and the 8-layer convolutional network achieves 89.6% on scaled datasets and 96.9% on non-scaled datasets. Testing on a drone-kit simulator and a real drone shows that this system is feasible for drone flight control.
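The conversion from 4-D spatiotemporal data to a 2-D matrix and a 1-D array amounts to flattening: a gesture of T frames, each with J joints in 3-D, becomes a T x (3J) matrix, and flattening again gives the 1-D array a fully connected network consumes. The sketch below illustrates that shape bookkeeping; the exact joint ordering is an assumption, not taken from the paper.

```python
def to_2d_matrix(sequence):
    """Flatten each frame's joint coordinates into one row.

    sequence: list of frames, each frame a list of [x, y, z] joints.
    Returns a T x (3J) matrix as a list of lists.
    """
    return [[c for joint in frame for c in joint] for frame in sequence]

def to_1d_array(sequence):
    # Flattening the matrix row by row yields the 1-D input vector
    # suitable for a fully connected network.
    return [c for row in to_2d_matrix(sequence) for c in row]
```

For Leap Motion skeleton data the same transform applies per recorded gesture, optionally preceded by the coordinate scaling whose importance the reported scaled vs. non-scaled accuracies highlight.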
In this article, to reduce the complexity and improve the generalization ability of current gesture recognition systems, we propose a novel SE-CNN attention architecture for sEMG-based hand gesture recognition. The proposed algorithm introduces a temporal squeeze-and-excite block into a simple CNN architecture and then utilizes it to recalibrate the weights of the feature outputs from the convolutional layer. By enhancing important features while suppressing useless ones, the model realizes gesture recognition efficiently. The last step of the proposed algorithm is utilizing a simple attention mechanism to enhance the learned representations of sEMG signals to perform multi-channel sEMG-based gesture recognition tasks. To evaluate the effectiveness and accuracy of the proposed algorithm, we conduct experiments on the multi-gesture datasets Ninapro DB4 and Ninapro DB5 for both inter-session validation and subject-wise cross-validation. In a series of comparisons with previous models, the proposed algorithm effectively increases robustness, with improved gesture recognition performance and generalization ability.
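The squeeze-and-excite recalibration works in three steps: squeeze each channel to a scalar summary, pass the summaries through a small excitation stage, and rescale each channel by the resulting sigmoid gate. The sketch below shows the mechanism in its most stripped-down form; the identity excitation weights are a simplifying assumption (the real block uses two learned fully connected layers with a bottleneck).

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def squeeze_and_excite(features):
    """Recalibrate per-channel features with squeeze-and-excite gates.

    features: list of channels, each a list of feature values
    (e.g. one channel per sEMG electrode after convolution).
    """
    squeezed = [sum(ch) / len(ch) for ch in features]  # squeeze: global avg pool
    hidden = [max(0.0, s) for s in squeezed]           # excitation: ReLU stage
    gates = [sigmoid(h) for h in hidden]               # per-channel sigmoid gate
    # Scale: channels with informative summaries are amplified,
    # weak ones are attenuated toward the 0.5 baseline.
    return [[g * v for v in ch] for g, ch in zip(gates, features)]
```

In the paper's architecture this recalibration sits after the convolutional layer, letting the network learn which temporal channels of the sEMG signal matter for each gesture.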
Hand gestures are a natural way for human-robot interaction. Vision-based dynamic hand gesture recognition has become a hot research topic due to its various applications. This paper presents a novel deep learning network for hand gesture recognition. The network integrates several well-proved modules to learn both short-term and long-term features from video inputs while avoiding intensive computation. To learn short-term features, each video input is segmented into a fixed number of frame groups. A frame is randomly selected from each group and represented as an RGB image as well as an optical flow snapshot. These two entities are fused and fed into a convolutional neural network (ConvNet) for feature extraction. The ConvNets for all groups share parameters. To learn long-term features, outputs from all ConvNets are fed into a long short-term memory (LSTM) network, which predicts the final classification result. The new model has been tested on two popular hand gesture datasets, namely the Jester dataset and the Nvidia dataset. Compared with other models, our model produced very competitive results. The robustness of the new model has also been proved on an augmented dataset with enhanced diversity of hand gestures.
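The frame-group sampling step above, split the video into a fixed number of equal segments and draw one random frame from each, can be sketched as follows. This is a generic illustration of segment sampling (function name and boundary rounding are assumptions, not the paper's code); it gives a sparse but temporally ordered cover of the whole gesture, which is what lets the shared-parameter ConvNets and the LSTM see both short- and long-term structure cheaply.

```python
import random

def sample_frames(num_frames, num_groups, rng=random):
    """Pick one random frame index from each of num_groups equal segments.

    num_frames: total frames in the video clip.
    Returns a list of num_groups frame indices, one per segment,
    in temporal order.
    """
    # Segment boundaries: group i covers [boundaries[i], boundaries[i+1]).
    boundaries = [round(i * num_frames / num_groups) for i in range(num_groups + 1)]
    # max(...) guards against empty segments when num_frames < num_groups.
    return [rng.randrange(boundaries[i], max(boundaries[i] + 1, boundaries[i + 1]))
            for i in range(num_groups)]
```

Each sampled index would then be loaded as an RGB frame plus an optical flow snapshot and pushed through the shared ConvNet before the LSTM aggregates the per-group features.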
Funding: The Economic and Social Research Council, UK, and the National Science Centre, Poland, Grant Number UMO-2018/31/D/NZ8/01144 ('Understanding origins of social brains and communication in wild primates').
Funding: Supported by the Research Grant Fund from Kwangwoon University in 2023, the National Natural Science Foundation of China under Grant 62311540155, the Taishan Scholars Project Special Funds (tsqn202312035), and the open research foundation of the State Key Laboratory of Integrated Chips and Systems.
Funding: Sponsored by Prince Sattam Bin Abdulaziz University (PSAU) as part of funding for its SDG Roadmap Research Funding Programme, project number PSAU-2023-SDG-2023/SDG/31.
Funding: Supported by a research fund from Chosun University, 2024.
Funding: Supported by the project BRIGHT (project reference: MERA-NET3/0004/2021), financed by national funds from FCT - Fundação para a Ciência e a Tecnologia, I.P., in the scope of projects LA/P/0037/2020, UIDP/50025/2020 and UIDB/50025/2020 of the Associate Laboratory Institute of Nanostructures, Nanomodelling and Nanofabrication - i3N; by the i3N-FCT Ph.D. scholarship (grant no. UI/BD/151288/2021); partially by the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreements 952169 (SYNERGY, H2020-WIDESPREAD-2020-5, CSA), 101008701 (EMERGE, H2020-INFRAIA-2020-1), and 101070255 (REFORM, HORIZON-CL4-2021-DIGITAL-EMERGING-01); and by the LISBOA-05-3559-FSE-000007 and CENTRO-04-3559-FSE-000094 operations co-funded by the Lisboa 2020 and Centro 2020 programmes, Portugal 2020, European Union, through the European Social Fund; Fundação para a Ciência e Tecnologia (FCT); Agência Nacional de Inovação (ANI).
Abstract: Textiles for health and sporting-activity monitoring are on the rise with the advent of smart portable wearables. The intention of this work is to design wireless monitoring wearables based on widely available textiles and low-environmental-impact production technologies. Herein, we have developed a polymeric ink that can functionalize different types of textile fibers (including silver conducting fibers, cotton, and commercial textile) with polypyrrole. These fibers were woven together with a thinner silver conducting fiber and a carbon fiber to form a touch-sensitive energy harvesting system that generates an electric output when mechanical pressure is applied to it. Different prototypes were manufactured with loom weaving accessories to simulate real textile cloths. By simple touch, the prototypes produced a maximum voltage of 244 V and a maximum power density of 2.29 W m^(-2). The generated current is then transformed into a digital signal, which is further utilized for human motion or gesture monitoring. The system comprises a wireless block for Internet of Things (IoT) applicability that will eventually be extended to future remote health and sports monitoring systems.
Funding: Supported by the Sirindhorn International Institute of Technology, Thammasat University, EFS-G (Excellent Foreign Student - Graduate) research fund.
Abstract: Background With the increasing prominence of hand and finger motion tracking in virtual reality (VR) applications and rehabilitation studies, data gloves have emerged as a prevalent solution. In this study, we developed an innovative, lightweight, and detachable data glove tailored for finger motion tracking in VR environments. Methods The glove design incorporates a potentiometer coupled with a flexible rack-and-pinion gear system, facilitating precise and natural hand gestures for interaction with VR applications. Initially, we calibrated the potentiometer against the actual finger bending angle and verified the accuracy of the angle measurements recorded by the data glove. To verify the precision and reliability of the glove, we conducted repeatability testing for flexion (grip test) and extension (flat test), with 250 measurements each, across five users, and employed Gage Repeatability and Reproducibility analysis to interpret the repeatability data. Furthermore, we integrated the glove into a SteamVR home environment using the OpenGlove auto-calibration tool. Conclusions The repeatability analysis revealed an aggregate error of 1.45 degrees across the gripped and flat hand positions. This outcome compares favorably with assessments of nine alternative data gloves that employed similar protocols. In these experiments, users navigated and engaged with virtual objects, demonstrating the glove's accurate tracking of finger motion. Furthermore, the proposed data glove exhibited a low response time of 17-34 ms and a back-drive force of only 0.19 N. Additionally, according to a comfort evaluation using the Comfort Rating Scales, the proposed glove system is wearable, placing it at the WL1 level.
Abstract: With the growing application of intelligent robots in service, manufacturing, and medical fields, efficient and natural interaction between humans and robots has become key to improving collaboration efficiency and user experience. Gesture recognition, as an intuitive and contactless interaction method, can overcome the limitations of traditional interfaces and enable real-time control and feedback of robot movements and behaviors. This study first reviews mainstream gesture recognition algorithms and their application on different sensing platforms (RGB cameras, depth cameras, and inertial measurement units). It then proposes a gesture recognition method based on multimodal feature fusion and a lightweight deep neural network that balances recognition accuracy with computational efficiency. At the system level, a modular human-robot interaction architecture is constructed, comprising perception, decision, and execution layers; gesture commands are transmitted and mapped to robot actions in real time via the ROS communication protocol. Through multiple comparative experiments on public gesture datasets and a self-collected dataset, the proposed method's superiority is validated in terms of accuracy, response latency, and system robustness, while user-experience tests assess the interface's usability. The results provide a reliable technical foundation for robot collaboration and service in complex scenarios, offering broad prospects for practical application and deployment.
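The decision layer of such an architecture reduces, at its core, to mapping classifier labels onto robot commands. The sketch below is purely illustrative (the gesture names, command table, and `dispatch` function are assumptions; in the described system the publish step would go over ROS topics rather than a plain callback):

```python
# Hypothetical mapping from recognized gesture labels to robot commands;
# in the paper's system these would be published via ROS messages.
COMMANDS = {
    "open_palm":   ("stop", {}),
    "point_left":  ("turn", {"direction": "left", "deg": 30}),
    "point_right": ("turn", {"direction": "right", "deg": 30}),
    "fist":        ("grip", {"force": 0.5}),
}

def dispatch(label, publish):
    """Map a classifier label to a (command, params) pair and hand it to
    a publish callable; unknown labels are ignored for safety."""
    if label not in COMMANDS:
        return False
    cmd, params = COMMANDS[label]
    publish(cmd, params)
    return True

log = []
dispatch("fist", lambda cmd, params: log.append((cmd, params)))
```

Keeping the table declarative makes it easy to remap gestures to actions per deployment without retraining the recognizer.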
Abstract: This article describes a pilot study aiming at generating social interactions between a humanoid robot and adolescents with autism spectrum disorder (ASD), through the practice of a gesture imitation game. The participants were a 17-year-old young lady with ASD and intellectual deficit, and a control participant: a preadolescent with ASD but no intellectual deficit (Asperger syndrome). The game is comprised of four phases: greetings, pairing, imitation, and closing. Field educators were involved, playing specific roles: visual or physical inciter. The use of a robot allows for catching the participants' attention, playing the imitation game for a longer period of time than with a human partner, and preventing the game partner's negative facial expressions resulting from tiredness, impatience, or boredom. The participants' behavior was observed in terms of initial approach towards the robot, positioning relative to the robot in terms of distance and orientation, reactions to the robot's voice or moves, signs of happiness, and imitation attempts. Results suggest an increasingly natural approach towards the robot over the sessions, as well as a higher level of social interaction, based on the variations of the parameters listed above. We use these preliminary results to draw the next steps of our research work as well as identify further perspectives, with this aim in mind: improving social interactions with adolescents with ASD and intellectual deficit, allowing for better integration of these people into our societies.
Abstract: In today's global society, people from multiple cultural backgrounds often communicate with foreign friends on a daily basis, making it increasingly important to respect and understand cultural differences. For example, students who join international exchange programs may find that simple gestures considered polite in one culture, such as bowing or handshakes, might be impolite or confusing in another.
Abstract: In a recent study, Prof. Rui Min and collaborators published a paper in the journal Opto-Electronic Science entitled "Smart photonic wristband for pulse wave monitoring". The paper introduces a novel sensor that uses a polymer multi-mode optical fiber to sense the pulse wave bio-signal from the wrist by analyzing the specklegram measured at the output of the fiber. Applying machine learning techniques to the pulse wave signal allowed medical diagnostics and the recognition of different gestures with an accuracy of 95%.
Abstract: The paper proposes that the understanding of human language evolution requires the comprehensive understanding of language in terms of language types, formations, and learnings, and the comprehensive understanding of human biological evolution in terms of the emergence of various hominin species with various language capacities. This paper proposes language neuromechanics and the human biological-language evolution. Language is derived from bodily movement. Language neuromechanics combines neuroscience, to study the language brain, and biomechanics, to study language movement. Language neuromechanics consists of language type, language formation, and language learning. Language types for advanced animals include gestural language versus vocal language, instinctive language versus controllable language, and symbolic language versus iconic language. Language formation involves the development of the different types of languages from different bodily movements, phylogenetically and ontogenetically. Language learning involves the learning of controllable language to adapt to the communicative environment through language brain regions and language genes. This paper proposes a gradual and step-by-step human language evolution from the language of great apes to human language through the human biological evolution, which chronologically and geographically consists of early hominins, early Homos, middle Homos, and late Homos with different language capacities. For hominins, vocal language and gestural language evolved together. In conclusion, combining neuroscience and biomechanics, language neuromechanics provides a comprehensive understanding of language. The combination of language neuromechanics and the human biological-language evolution provides a clear evolutionary path from great apes' articulate gestural language without articulate speech to human articulate gestural language and articulate speech.
Funding: Supported by the National Natural Science Foundation of China (NSFC) (No. 61804103); the National Key R&D Program of China (No. 2017YFA0205002); the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Nos. 18KJA535001 and 14KJB150020); the Natural Science Foundation of Jiangsu Province of China (Nos. BK20170343 and BK20180242); the China Postdoctoral Science Foundation (No. 2017M610346); the State Key Laboratory of Silicon Materials, Zhejiang University (No. SKL2018-03); the Nantong Municipal Science and Technology Program (No. GY12017001); the Jiangsu Key Laboratory for Carbon-Based Functional Materials & Devices, Soochow University (KSL201803); the Collaborative Innovation Center of Suzhou Nano Science & Technology; the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD); the 111 Project; and the Joint International Research Laboratory of Carbon-Based Functional Materials and Devices.
Abstract: Continuous deformation always leads to performance degradation of a flexible triboelectric nanogenerator due to the Young's modulus mismatch of its different functional layers. In this work, we fabricated a fiber-shaped stretchable and tailorable triboelectric nanogenerator (FST-TENG) based on the geometric construction of a steel wire as the electrode and the ingenious selection of silicone rubber as the triboelectric layer. Owing to their great robustness and continuous conductivity, the FST-TENGs demonstrate high stability, stretchability, and even tailorability. For a single device ~6 cm in length and ~3 mm in diameter, an open-circuit voltage of ~59.7 V, a transferred charge of ~23.7 nC, a short-circuit current of ~2.67 μA, and an average power of ~2.13 μW can be obtained at 2.5 Hz. By knitting several FST-TENGs into a fabric or a bracelet, human motion energy can be harvested to drive a wearable electronic device. Finally, the device can also be woven onto the dorsum of a glove to monitor hand gestures: by analyzing the voltage signals, it can recognize each individual finger, different bending angles, and the number of bent fingers.
Funding: Supported by the Natural Science Foundation of Heilongjiang Province Youth Fund (No. QC2014C054); the Foundation for University Young Key Scholars of Heilongjiang Province (No. 1254G023); and the Science Funds for the Young Innovative Talents of HUST (No. 201304).
Abstract: To address the tendency of back propagation (BP) neural networks to fall into local minima and their low convergence speed in gesture recognition, a new method that combines the chaos algorithm with the genetic algorithm (CGA) is proposed. Exploiting the ergodicity of the chaos algorithm and the global convergence of the genetic algorithm, the basic idea of this paper is to encode the weights and thresholds of the BP neural network, obtain a general optimal solution with the genetic algorithm, and then refine this general solution into an accurate optimal solution by adding a chaotic disturbance. The results of the chaotic genetic algorithm are used as the initial weights and thresholds of the BP neural network to recognize gestures. Simulation and experimental results show that the real-time performance and accuracy of gesture recognition are greatly improved with CGA.
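The chaotic refinement step can be sketched as follows: a logistic-map sequence supplies the disturbance, which is added at shrinking amplitude to the GA's solution, and any perturbed weight vector that lowers the loss is kept. This is a minimal illustration under assumed details (function names, decay schedule, and map seed are not from the paper), with a toy quadratic standing in for the BP network's error:

```python
def logistic_seq(x0, n, r=4.0):
    """Generate n values of the logistic map x_{k+1} = r*x_k*(1-x_k),
    the chaotic sequence used as the disturbance source."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def chaotic_refine(weights, loss, scale=0.1, iters=50, x0=0.37):
    """Starting from a GA-found solution, add a shrinking chaotic
    disturbance to each weight and keep any candidate that lowers
    the loss (deterministic, since the map is deterministic)."""
    best, best_loss = list(weights), loss(weights)
    chaos = logistic_seq(x0, iters * len(weights))
    k = 0
    for i in range(iters):
        decay = scale * (1.0 - i / iters)  # disturbance shrinks over time
        cand = []
        for w in best:
            cand.append(w + decay * (2.0 * chaos[k] - 1.0))  # map to [-decay, decay]
            k += 1
        c_loss = loss(cand)
        if c_loss < best_loss:
            best, best_loss = cand, c_loss
    return best, best_loss

# Toy quadratic "network loss" with optimum at (1, -2).
loss = lambda w: (w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2
refined, final = chaotic_refine([0.8, -1.7], loss)
```

In the full CGA the refined vector would be decoded back into BP weights and thresholds before training continues.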
Funding: This work was supported by the National Natural Science Foundation of China (51902035 and 52073037); the Natural Science Foundation of Chongqing (cstc2020jcyj-msxmX0807); the Fundamental Research Funds for the Central Universities (2020CDJ-LHSS-001 and 2019CDXZWL001); and the Chongqing graduate tutor team construction project (ydstd1832).
Abstract: In human-machine interaction, robotic hands are useful in many scenarios. Operating robotic hands via gestures instead of handles greatly improves the convenience and intuitiveness of human-machine interaction. Here, we present a magnetic-array-assisted sliding triboelectric sensor for real-time gesture interaction between a human hand and a robotic hand. As a finger flexes or extends, the sensor induces positive or negative pulse signals. By counting the pulses per unit time, the degree, speed, and direction of finger motion can be judged in real time. The magnetic array plays an important role in generating the quantifiable pulses: its two designed parts transform sliding motion into contact-separation and constrain the sliding pathway, respectively, thereby improving the durability, low-speed signal amplitude, and stability of the system. This direct quantization approach and the optimization of the wearable gesture sensor provide a new strategy for achieving natural, intuitive, and real-time human-robot interaction.
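The pulse-counting logic described above can be sketched in a few lines: detect threshold crossings in one window of the signal, re-arm at baseline, and read direction from the sign of the pulses and speed from their count. This is a hypothetical illustration (threshold value, re-arm scheme, and labels are assumptions, not the paper's firmware):

```python
def count_pulses(samples, thresh=0.5):
    """Count positive and negative pulses in one window of the sensor
    signal by threshold crossing; the dominant sign gives the motion
    direction and the total count per unit time gives the speed."""
    pos = neg = 0
    armed = True
    for v in samples:
        if armed and v > thresh:
            pos += 1
            armed = False
        elif armed and v < -thresh:
            neg += 1
            armed = False
        elif -thresh <= v <= thresh:
            armed = True  # signal back at baseline: re-arm the detector
    return pos, neg

# Three positive pulses (flexion) followed by one negative pulse.
sig = [0, 1, 0, 1, 0, 1, 0, -1, 0]
pos, neg = count_pulses(sig)
direction = "flexion" if pos > neg else "extension"
```

Dividing the counts by the window duration yields the bending speed, and accumulating them yields the bending degree.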
Abstract: Dynamic hand gesture recognition is a desirable alternative means of human-computer interaction. This paper presents a hand gesture recognition system designed for controlling flights of unmanned aerial vehicles (UAV). A data representation model is introduced that represents a dynamic gesture sequence by converting the 4-D spatiotemporal data to a 2-D matrix and a 1-D array. To train the system to recognize the designed gestures, skeleton data collected from a Leap Motion Controller are converted into these two data models. A training dataset of 9124 samples and a testing dataset of 1938 samples were created to train and test the three proposed deep learning neural networks: a 2-layer fully connected neural network, a 5-layer fully connected neural network, and an 8-layer convolutional neural network. Static testing shows that the 2-layer fully connected network achieves an average accuracy of 96.7% on scaled datasets and 12.3% on non-scaled datasets; the 5-layer fully connected network achieves 98.0% on scaled datasets and 89.1% on non-scaled datasets; and the 8-layer convolutional network achieves 89.6% on scaled datasets and 96.9% on non-scaled datasets. Testing on a drone-kit simulator and a real drone shows that the system is feasible for drone flight control.
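The 4-D-to-2-D and 4-D-to-1-D conversions are, at heart, reshapes of the spatiotemporal tensor. A minimal sketch follows, with assumed dimensions (60 frames, 5 fingers, 4 joints per finger, xyz coordinates); the exact axis layout in the paper may differ:

```python
import numpy as np

# A dynamic gesture as 4-D spatiotemporal data:
# (frames, fingers, joints_per_finger, xyz), Leap-Motion-style skeletons.
seq = np.zeros((60, 5, 4, 3), dtype=np.float32)

# 2-D matrix model: one row per frame with all joint coordinates
# flattened, suitable as an image-like input to a convolutional network.
matrix = seq.reshape(seq.shape[0], -1)

# 1-D array model: the whole sequence flattened, suitable as the input
# vector of a fully connected network.
vector = seq.reshape(-1)
```

Scaling (normalizing) these arrays before training matters, as the paper's accuracy gap between scaled and non-scaled datasets shows.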
Funding: Funded by the National Key Research and Development Program of China (2017YFB1303200); NSFC (81871444, 62071241, 62075098, and 62001240); the Leading-Edge Technology and Basic Research Program of Jiangsu (BK20192004D); and the Jiangsu Graduate Scientific Research Innovation Programme (KYCX20_1391, KYCX21_1557).
Abstract: In this article, to reduce the complexity and improve the generalization ability of current gesture recognition systems, we propose a novel SE-CNN attention architecture for sEMG-based hand gesture recognition. The proposed algorithm introduces a temporal squeeze-and-excite block into a simple CNN architecture and uses it to recalibrate the weights of the feature outputs from the convolutional layer. By enhancing important features while suppressing useless ones, the model performs gesture recognition efficiently. The final step of the proposed algorithm applies a simple attention mechanism to enhance the learned representations of sEMG signals for multi-channel sEMG-based gesture recognition tasks. To evaluate the effectiveness and accuracy of the proposed algorithm, we conduct experiments on the multi-gesture datasets Ninapro DB4 and Ninapro DB5, with both inter-session validation and subject-wise cross-validation. In a series of comparisons with previous models, the proposed algorithm shows increased robustness, with improved gesture recognition performance and generalization ability.
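A temporal squeeze-and-excite block reduces each channel to a single statistic over time (squeeze), passes it through a small bottleneck (excite), and rescales the channels by the resulting [0, 1] weights. The numpy sketch below shows the mechanics only; layer sizes, the reduction ratio, and weight initialization are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def temporal_se(x, w1, w2):
    """Temporal squeeze-and-excite: average each sEMG channel over time
    (squeeze), pass through a ReLU bottleneck (excite), and rescale the
    channels by the resulting sigmoid weights.
    x: (channels, time); w1: (channels, channels//r); w2: (channels//r, channels)."""
    squeezed = x.mean(axis=1)                # (channels,)
    hidden = np.maximum(squeezed @ w1, 0.0)  # bottleneck with ReLU
    scale = sigmoid(hidden @ w2)             # per-channel weights in (0, 1)
    return x * scale[:, None]                # recalibrated feature map

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 200))        # 8 sEMG channels, 200 samples
w1 = rng.standard_normal((8, 2)) * 0.1   # reduction ratio r = 4
w2 = rng.standard_normal((2, 8)) * 0.1
out = temporal_se(x, w1, w2)
```

Because the scale factors lie in (0, 1), the block can only attenuate channels, which is how uninformative electrodes get suppressed.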
Abstract: Hand gestures are a natural means of human-robot interaction. Vision-based dynamic hand gesture recognition has become a hot research topic due to its various applications. This paper presents a novel deep learning network for hand gesture recognition. The network integrates several well-proven modules to learn both short-term and long-term features from video input while avoiding intensive computation. To learn short-term features, each video input is segmented into a fixed number of frame groups. A frame is randomly selected from each group and represented as an RGB image as well as an optical flow snapshot. These two entities are fused and fed into a convolutional neural network (ConvNet) for feature extraction; the ConvNets for all groups share parameters. To learn long-term features, the outputs of all ConvNets are fed into a long short-term memory (LSTM) network, which predicts the final classification result. The new model has been tested on two popular hand gesture datasets, the Jester dataset and the Nvidia dataset, and produced very competitive results compared with other models. The robustness of the new model has also been demonstrated on an augmented dataset with enhanced diversity of hand gestures.
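The frame-group sampling step (split the clip into equal segments, pick one random frame per segment, as in temporal-segment-style networks) can be sketched as follows; the function name and seeding are assumptions for illustration:

```python
import random

def sample_frames(n_frames, n_groups=8, seed=0):
    """Split a clip of n_frames into n_groups equal segments and pick
    one random frame index from each, giving the per-group ConvNets
    sparse short-term snapshots that cover the whole clip."""
    rng = random.Random(seed)
    bounds = [round(i * n_frames / n_groups) for i in range(n_groups + 1)]
    return [rng.randrange(bounds[i], max(bounds[i] + 1, bounds[i + 1]))
            for i in range(n_groups)]

# A 120-frame clip sampled into 8 segment-wise frame indices.
idx = sample_frames(120)
```

Each selected frame would then be paired with its optical-flow snapshot and passed through the shared-parameter ConvNet before the LSTM aggregates the sequence.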