Permeable electronics promise improved physiological comfort but remain constrained by limited functional integration and poor mechanical robustness. Here, we report a three-dimensional (3D) permeable electronic system that overcomes these challenges by combining electrospun SEBS nanofiber mats, high-resolution (50 μm) liquid metal conductors patterned via thermal imprinting, and a strain-isolation layer (SIL) that protects vertical interconnects (VIAs) from stress concentration. This architecture achieves ultrahigh air permeability (>5.09 mL cm^(-2) min^(-1)), exceptional stretchability (750% fracture strain), and reliable conductivity maintained through more than 32,500 strain cycles. Leveraging these advances, we integrated multilayer circuits, strain sensors, and a three-axis accelerometer into a fully integrated, stretchable, permeable wireless glove for real-time gesture recognition. The system enables accurate sign language interpretation (98%) and seamless robotic hand control, demonstrating its potential for assistive technologies. By uniting comfort, durability, and high-density integration, this work establishes a versatile platform for next-generation wearable electronics and interactive human-robot interfaces.
Industrial operators need reliable communication in high-noise, safety-critical environments where speech or touch input is often impractical. Existing gesture systems either miss real-time deadlines on resource-constrained hardware or lose accuracy under occlusion, vibration, and lighting changes. We introduce Industrial EdgeSign, a dual-path framework that combines hardware-aware neural architecture search (NAS) with large multimodal model (LMM)-guided semantics to deliver robust, low-latency gesture recognition on edge devices. The searched model uses a truncated ResNet50 front end, a dimensionality-reduction network that preserves spatiotemporal structure for tubelet-based attention, and localized Transformer layers tuned for on-device inference. To reduce reliance on gloss annotations and mitigate domain shift, we distill semantics from factory-tuned vision-language models and pre-train with masked language modeling and video-text contrastive objectives, aligning visual features with a shared text space. On ML2HP and SHREC'17, the NAS-derived architecture attains 94.7% accuracy with 86 ms inference latency and about 5.9 W power on a Jetson Nano. Under occlusion, lighting shifts, and motion blur, accuracy remains above 82%. For safety-critical commands, the emergency-stop gesture achieves a 72 ms 99th-percentile latency with 99.7% fail-safe triggering. Ablation studies confirm the contribution of the spatiotemporal tubelet extractor and text-side pre-training, and we observe gains in translation quality (BLEU-4 of 22.33). These results show that Industrial EdgeSign provides accurate, resource-aware, and safety-aligned gesture recognition suitable for deployment in smart-factory settings.
Increasing our understanding of primate gestural communication can provide new insights into language evolution. A key question in primate communication is the association between the social relationships of primates and their repertoire of gestures. Such analyses can reveal how primates use their repertoire of gestural communication to maintain their networks of family and friends, much as humans use language to maintain their social networks. In this study we examined the association between the repertoire of gestures (overall, manual and bodily gestures, and gestures of different modalities) and social bonds (presence of reciprocated grooming), coordinated behaviors (travel, resting, co-feeding), and the complexity of ecology (e.g., noise, illumination) and sociality (party size, audience) in wild East African chimpanzees (Pan troglodytes schweinfurthii). A larger repertoire of manual, visual gestures was associated with the presence of a relationship based on reciprocated grooming and with increases in social complexity. A smaller repertoire of manual tactile gestures occurred when the relationship was based on reciprocated grooming. A smaller repertoire of bodily gestures occurred between partners who jointly traveled for longer. Whereas gesture repertoire size was associated with social complexity, complex ecology also influenced repertoire size. The evolution of a large repertoire of manual, visual gestures may have been a key factor that enabled larger social groups to emerge during evolution. Thus, the evolution of larger brains in hominins may have co-occurred with an increase in the cognitive complexity underpinning gestural communication, and this, in turn, may have enabled hominins to live in more complex social groups.
Wearable wristband systems leverage deep learning to revolutionize hand gesture recognition in daily activities. Unlike existing approaches that often focus on static gestures and require extensive labeled data, the proposed wearable wristband with self-supervised contrastive learning excels at dynamic motion tracking and adapts rapidly across multiple scenarios. It features a four-channel sensing array composed of an ionic hydrogel with hierarchical microcone structures and ultrathin flexible electrodes, resulting in high-sensitivity capacitance output. Through wireless transmission from a Wi-Fi module, the proposed algorithm learns latent features from the unlabeled signals of random wrist movements. Remarkably, only a few labeled samples are sufficient for fine-tuning the model, enabling rapid adaptation to various tasks. The system achieves a high accuracy of 94.9% in different scenarios, including the prediction of eight-direction commands and air-writing of all numbers and letters. The proposed method facilitates smooth transitions between multiple tasks without modifying the structure or undergoing extensive task-specific training. Its utility has been further extended to enhance human-machine interaction over digital platforms, such as game controls, calculators, and three-language login systems, offering users a natural and intuitive way of communication.
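The abstract does not spell out its self-supervised objective; a common choice for contrastive pre-training on paired augmented views is a SimCLR-style NT-Xent loss. The plain-Python sketch below illustrates that loss under the assumption that `z1[i]` and `z2[i]` are embeddings of two views of the same unlabeled wrist-motion clip; it is not the authors' implementation, and all names are hypothetical.

```python
import math

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style normalized-temperature cross-entropy loss.

    z1, z2: lists of embedding vectors; z1[i] and z2[i] are assumed to be
    two augmented views of the same (unlabeled) sensor clip."""
    def norm(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    def cos(a, b):
        return sum(x * y for x, y in zip(a, b))

    z = [norm(v) for v in z1 + z2]
    n = len(z1)
    loss = 0.0
    for i in range(2 * n):
        j = (i + n) % (2 * n)  # index of i's positive (paired) view
        sims = [math.exp(cos(z[i], z[k]) / temperature)
                for k in range(2 * n) if k != i]
        pos = math.exp(cos(z[i], z[j]) / temperature)
        loss += -math.log(pos / sum(sims))
    return loss / (2 * n)

# Loss is lower when paired views agree than when pairs are scrambled.
aligned = nt_xent([[1, 0], [0, 1]], [[0.9, 0.1], [0.1, 0.9]])
shuffled = nt_xent([[1, 0], [0, 1]], [[0.1, 0.9], [0.9, 0.1]])
print(aligned < shuffled)  # True
```

Minimizing such a loss pulls the two views of each clip together in embedding space, which is why only a handful of labeled samples are then needed for task-specific fine-tuning.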
Hearing and speech impairment can be congenital or acquired. Hearing- and speech-impaired students often hesitate to pursue higher education in reputable institutions because of their challenges. However, the development of automated assistive learning tools within the educational field has empowered disabled students to pursue higher education in any field of study. Assistive learning devices enable students to access institutional resources and facilities fully. The proposed assistive learning and communication tool allows hearing- and speech-impaired students to interact productively with their teachers and classmates. The tool converts audio signals into sign language videos for speech- and hearing-impaired students to follow, and converts sign language into text for the teachers to follow. This educational tool is implemented with customized deep learning models such as convolutional neural networks (CNNs), residual neural networks (ResNets), and stacked long short-term memory (LSTM) network models. The assistive learning tool is a novel framework that interprets static and dynamic gesture actions in American Sign Language (ASL). Such communicative tools empower the speech- and hearing-impaired to communicate effectively in a classroom environment and foster inclusivity. The customized deep learning models were developed and experimentally evaluated with standard performance metrics. The model exhibits an accuracy of 99.7% for all static gesture classification and 99% for a specific vocabulary of gesture action words. This two-way communicative and educational tool encourages social inclusion and a promising career for disabled students.
The rapid evolution of virtual reality (VR) and augmented reality (AR) technologies has significantly transformed human-computer interaction, with applications spanning entertainment, education, healthcare, industry, and remote collaboration. A central challenge in these immersive systems lies in enabling intuitive, efficient, and natural interactions. Hand gesture recognition offers a compelling solution by leveraging the expressiveness of human hands to facilitate seamless control without relying on traditional input devices such as controllers or keyboards, which can limit immersion. However, achieving robust gesture recognition requires overcoming challenges related to accurate hand tracking, complex environmental conditions, and minimizing system latency. This study proposes an artificial intelligence (AI)-driven framework for recognizing both static and dynamic hand gestures in VR and AR environments using skeleton-based tracking compliant with the OpenXR standard. Our approach employs a lightweight neural network architecture capable of real-time classification within approximately 1.3 ms while maintaining an average accuracy of 95%. We also introduce a novel dataset-generation method to support the training of robust models and demonstrate consistent classification of diverse gestures across widely available commercial VR devices. This work represents one of the first studies to implement and validate dynamic hand gesture recognition in real time on standardized VR hardware, laying the groundwork for more immersive, accessible, and user-friendly interaction systems. By advancing AI-driven gesture interfaces, this research has the potential to broaden the adoption of VR and AR across diverse domains and enhance the overall user experience.
Textiles for health and sporting-activity monitoring are on the rise with the advent of smart portable wearables. The intention of this work is to design wireless monitoring wearables based on widely available textiles and production technologies with low environmental impact. Herein we have developed a polymeric ink that can functionalize different types of textile fibers (including silver conducting fibers, cotton, and commercial textile) with polypyrrole. These fibers were woven together with a thinner silver conducting fiber and a carbon fiber to form a touch-sensitive energy-harvesting system that generates an electrical output when mechanical pressure is applied to it. Different prototypes were manufactured with loom-weaving accessories to simulate real textile cloths. By simple touch, the prototypes produced a maximum voltage of 244 V and a maximum power density of 2.29 W m^(-2). The current generated is then transformed into a digital signal, which is further utilized for human motion or gesture monitoring. The system comprises a wireless block for Internet of Things (IoT) applicability that will eventually be extended to future remote health and sports monitoring systems.
Background With the increasing prominence of hand and finger motion tracking in virtual reality (VR) applications and rehabilitation studies, data gloves have emerged as a prevalent solution. In this study, we developed an innovative, lightweight, and detachable data glove tailored for finger motion tracking in VR environments. Methods The glove design incorporates a potentiometer coupled with a flexible rack-and-pinion gear system, facilitating precise and natural hand gestures for interaction with VR applications. Initially, we calibrated the potentiometer to align with the actual finger bending angle and verified the accuracy of the angle measurements recorded by the data glove. To verify the precision and reliability of our data glove, we conducted repeatability testing for flexion (grip test) and extension (flat test), with 250 measurements each, across five users. We employed the Gage Repeatability and Reproducibility (Gage R&R) method to analyze and interpret the repeatability data. Furthermore, we integrated the gloves into a SteamVR home environment using the OpenGlove auto-calibration tool. Conclusions The repeatability analysis revealed an aggregate error of 1.45 degrees in both the gripped and flat hand positions. This outcome compares favorably with findings from assessments of nine alternative data gloves that employed similar protocols. In these experiments, users navigated and engaged with virtual objects, underscoring the glove's accurate tracking of finger motion. Furthermore, the proposed data glove exhibited a low response time of 17-34 ms and a back-drive force of only 0.19 N. Additionally, according to a comfort evaluation using the Comfort Rating Scales, the proposed glove system is wearable, placing it at the WL1 level.
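A full Gage R&R study partitions measurement variance into repeatability and reproducibility components. The pooled within-user standard deviation computed below is a simplified stand-in that shows how an aggregate repeatability error in degrees can be derived from repeated grip-test readings; the data and grouping are hypothetical, not the study's measurements.

```python
import math

def repeatability_error(trials_by_user):
    """Pooled within-user standard deviation (degrees).

    A simplified proxy for the repeatability component of a Gage R&R
    study: each user's trials are centered on that user's own mean, and
    the squared deviations are pooled over all users."""
    sq_sum, dof = 0.0, 0
    for trials in trials_by_user:
        m = sum(trials) / len(trials)
        sq_sum += sum((t - m) ** 2 for t in trials)
        dof += len(trials) - 1  # n-1 degrees of freedom per user
    return math.sqrt(sq_sum / dof)

# Hypothetical grip-test angle readings (degrees) from three users.
data = [
    [89.0, 90.5, 90.0, 91.0],
    [88.5, 89.0, 90.0, 89.5],
    [90.0, 91.5, 90.5, 91.0],
]
print(round(repeatability_error(data), 2))  # ≈ 0.72 degrees
```

In the actual study this kind of pooling would run over 250 measurements per condition across five users, yielding the reported 1.45-degree aggregate.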
With the growing application of intelligent robots in the service, manufacturing, and medical fields, efficient and natural interaction between humans and robots has become key to improving collaboration efficiency and user experience. Gesture recognition, as an intuitive and contactless interaction method, can overcome the limitations of traditional interfaces and enable real-time control of, and feedback on, robot movements and behaviors. This study first reviews mainstream gesture recognition algorithms and their application on different sensing platforms (RGB cameras, depth cameras, and inertial measurement units). It then proposes a gesture recognition method based on multimodal feature fusion and a lightweight deep neural network that balances recognition accuracy with computational efficiency. At the system level, a modular human-robot interaction architecture is constructed, comprising perception, decision, and execution layers; gesture commands are transmitted and mapped to robot actions in real time via the ROS communication protocol. Through multiple comparative experiments on public gesture datasets and a self-collected dataset, the proposed method's superiority is validated in terms of accuracy, response latency, and system robustness, while user-experience tests assess the interface's usability. The results provide a reliable technical foundation for robot collaboration and service in complex scenarios, offering broad prospects for practical application and deployment.
This article describes a pilot study aimed at generating social interactions between a humanoid robot and adolescents with autism spectrum disorder (ASD) through the practice of a gesture imitation game. The participants were a 17-year-old young lady with ASD and intellectual deficit, and a control participant: a preadolescent with ASD but no intellectual deficit (Asperger syndrome). The game comprises four phases: greetings, pairing, imitation, and closing. Field educators were involved, playing specific roles: visual or physical inciter. The use of a robot allows for catching the participants' attention, playing the imitation game for a longer period of time than with a human partner, and preventing the game partner's negative facial expressions resulting from tiredness, impatience, or boredom. The participants' behavior was observed in terms of initial approach towards the robot, positioning relative to the robot in terms of distance and orientation, reactions to the robot's voice or moves, signs of happiness, and imitation attempts. Results suggest an increasingly natural approach to the robot over the sessions, as well as a higher level of social interaction, based on the variations of the parameters listed above. We use these preliminary results to draw the next steps of our research work and to identify further perspectives, with this aim in mind: improving social interactions with adolescents with ASD and intellectual deficit, allowing for better integration of these people into our societies.
In today's global society, people from multiple cultural backgrounds often communicate with foreign friends on a daily basis, making it increasingly important to respect and understand cultural differences. For example, students who join international exchange programs may find that simple gestures considered polite in one culture, such as bowing or handshakes, might be impolite or confusing in another.
As humanity advances into the virtual world and the era of mixed reality, high-fidelity gesture mapping has become a key interface connecting the physical world and the digital space. However, existing solutions are limited by the bottlenecks of active sensors in terms of information richness and structure, and lack precise reconstruction capabilities for high-degree-of-freedom hand movements, making it difficult to achieve a natural and seamless interaction experience. Here, a dynamic gesture mapping system (DGMS) has been developed, which combines a lightweight, efficient hand-motion signal acquisition device (HSAD) with an efficient signal processing algorithm (SPA). The HSAD captures motion information across 17 degrees of freedom of the hand through sensor clusters and wirelessly transmits the data to the signal processing platform. On this platform, the SPA converts electrical signals into bending-angle information with an error below 1 degree. The DGMS synchronously and accurately maps the hand-movement information output by the SPA to the virtual environment, achieving real-time interaction with objects in the virtual space. This study achieves, for the first time, such high-degree-of-freedom, high-fidelity gesture mapping using passive sensors, and explores the application of gesture mapping in a virtual-real coexisting intelligent enhancement platform, offering a novel technical route for digital twin applications.
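The abstract does not describe the SPA's internals. One minimal way to turn a raw electrical reading into a bending angle is a least-squares linear calibration against reference angles; the sketch below assumes a linear sensor response and uses hypothetical calibration data, not the paper's method.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b (closed form, no deps)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical calibration pairs: sensor voltage (V) vs. reference angle (deg).
volts = [0.10, 0.35, 0.60, 0.85, 1.10]
angles = [0.0, 22.5, 45.0, 67.5, 90.0]
a, b = fit_linear(volts, angles)

def to_angle(v):
    # Map a live sensor reading to a bending angle via the fitted line.
    return a * v + b

print(round(to_angle(0.60), 1))  # 45.0
```

A real sensor would likely need a per-joint (and possibly nonlinear) calibration to reach sub-degree error, but the same fit-then-map structure applies.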
In a recent study, Prof. Rui Min and collaborators published a paper in the journal Opto-Electronic Science entitled "Smart photonic wristband for pulse wave monitoring". The paper introduces a novel realization of a sensor that uses a multimode polymer optical fiber to sense the pulse-wave bio-signal from the wrist by analyzing the specklegram measured at the output of the fiber. Applying machine learning techniques to the pulse-wave signal enabled medical diagnostics and the recognition of different gestures with an accuracy of 95%.
The paper proposes that understanding human language evolution requires a comprehensive understanding of language in terms of language types, formation, and learning, together with a comprehensive understanding of human biological evolution in terms of the emergence of various hominin species with various language capacities. This paper proposes language neuromechanics and the human biological-language evolution. Language is derived from bodily movement. Language neuromechanics combines neuroscience, to study the language brain, with biomechanics, to study language movement. Language neuromechanics consists of language type, language formation, and language learning. Language types for advanced animals include gestural versus vocal language, instinctive versus controllable language, and symbolic versus iconic language. Language formation involves the development of the different types of languages from different bodily movements, phylogenetically and ontogenetically. Language learning involves the learning of controllable language to adapt to the communicative environment through language brain regions and language genes. This paper proposes a gradual, step-by-step human language evolution from the language of great apes to human language through human biological evolution, which chronologically and geographically consists of early hominins, early Homos, middle Homos, and late Homos with different language capacities. For hominins, vocal language and gestural language evolved together. In conclusion, by combining neuroscience and biomechanics, language neuromechanics provides a comprehensive understanding of language. The combination of language neuromechanics and the human biological-language evolution provides a clear evolutionary path from great apes' articulate gestural language without articulate speech to human articulate gestural language and articulate speech.
Continuous deformation often degrades the performance of a flexible triboelectric nanogenerator because of the Young's modulus mismatch between different functional layers. In this work, we fabricated a fiber-shaped stretchable and tailorable triboelectric nanogenerator (FST-TENG) based on the geometric construction of a steel wire as the electrode and the judicious selection of silicone rubber as the triboelectric layer. Owing to their great robustness and continuous conductivity, the FST-TENGs demonstrate high stability, stretchability, and even tailorability. For a single device ~6 cm in length and ~3 mm in diameter, an open-circuit voltage of ~59.7 V, transferred charge of ~23.7 nC, short-circuit current of ~2.67 μA, and average power of ~2.13 μW can be obtained at 2.5 Hz. By knitting several FST-TENGs into a fabric or a bracelet, the device can harvest human-motion energy and drive a wearable electronic device. Finally, it can also be woven onto the dorsum of a glove to monitor hand gestures; by analyzing the voltage signals, it can identify every single finger, different bending angles, and the number of bent fingers.
To address the tendency of back-propagation (BP) neural networks to fall into local minima and their low convergence speed in gesture recognition, a new method combining the chaos algorithm with the genetic algorithm (CGA) is proposed. Exploiting the ergodicity of the chaos algorithm and the global convergence of the genetic algorithm, the basic idea of this paper is to encode the weights and thresholds of the BP neural network, obtain a general optimal solution with the genetic algorithm, and then refine this general solution into an accurate optimal solution by adding chaotic disturbance. The optimal results of the chaotic genetic algorithm are used as the initial weights and thresholds of the BP neural network to recognize gestures. Simulation and experimental results show that the real-time performance and accuracy of gesture recognition are greatly improved with CGA.
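The two-stage idea above, a genetic search for a general optimum followed by chaotic disturbance to refine it, can be sketched on a toy objective standing in for the BP network's training error. All parameters (population size, mutation rate, chaotic seed, decay schedule) are hypothetical, and the logistic map serves as the chaos generator.

```python
import random

def logistic_map(x, mu=4.0):
    # Fully chaotic logistic map: generates ergodic values in (0, 1).
    return mu * x * (1.0 - x)

def fitness(w):
    # Toy objective standing in for BP-network training error (min at 0.3).
    return sum((wi - 0.3) ** 2 for wi in w)

def cga(dim=4, pop_size=20, generations=50, chaos_steps=200, seed=0):
    rng = random.Random(seed)
    # --- Genetic phase: evolve encoded weights toward a general optimum ---
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # elitist selection
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # crossover
            if rng.random() < 0.1:                       # mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0, 0.1)
            children.append(child)
        pop = parents + children
    best = min(pop, key=fitness)
    # --- Chaotic phase: refine with shrinking chaotic disturbances ---
    x = 0.345      # chaotic seed in (0, 1), away from fixed points
    radius = 0.1   # initial disturbance radius
    for _ in range(chaos_steps):
        cand = []
        for w in best:
            x = logistic_map(x)
            cand.append(w + radius * (2 * x - 1))
        if fitness(cand) < fitness(best):
            best = cand
        radius *= 0.99  # gradually tighten the search
    return best

best = cga()
print(fitness(best))  # small residual error near 0
```

In the paper's setting, the refined vector would then seed the BP network's initial weights and thresholds before standard gradient training.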
In human-machine interaction, robotic hands are useful in many scenarios. Operating robotic hands via gestures instead of handles would greatly improve the convenience and intuitiveness of human-machine interaction. Here, we present a magnetic-array-assisted sliding triboelectric sensor for achieving real-time gesture interaction between a human hand and a robotic hand. With a finger's traction movement of flexion or extension, the sensor induces positive/negative pulse signals. By counting the pulses per unit time, the degree, speed, and direction of finger motion can be judged in real time. The magnetic array plays an important role in generating the quantifiable pulses. The two parts of the magnetic array transform sliding motion into contact-separation and constrain the sliding pathway, respectively, thus improving the durability, low-speed signal amplitude, and stability of the system. This direct quantization approach and the optimization of the wearable gesture sensor provide a new strategy for achieving natural, intuitive, and real-time human-robot interaction.
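Decoding the degree, speed, and direction of finger motion from a pulse train can be sketched as below. The ±1 polarity encoding and the window length are hypothetical choices for illustration, not the paper's specification.

```python
def decode_pulses(pulses, window_s):
    """Decode a pulse train from a sliding triboelectric sensor.

    pulses:   list of +1 / -1 pulse polarities captured in one time window
              (+1 ~ a flexion step, -1 ~ an extension step; a hypothetical
              encoding for illustration).
    window_s: window length in seconds.
    Returns (direction, net_steps, speed_in_steps_per_second)."""
    pos = pulses.count(+1)
    neg = pulses.count(-1)
    steps = pos - neg  # net motion "degree" over the window
    direction = ("flexion" if steps > 0
                 else "extension" if steps < 0
                 else "still")
    speed = abs(steps) / window_s  # pulse rate stands in for motion speed
    return direction, steps, speed

print(decode_pulses([+1, +1, +1, -1], window_s=0.5))  # ('flexion', 2, 4.0)
```

Because each pulse corresponds to one magnet pitch of sliding travel, counting pulses quantizes displacement directly, with no analog amplitude calibration needed.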
Funding: supported in part by the National Key R&D Program of China under Grants 2024YFB4405300 and 2022YFA1204300; the Natural Science Foundation of Hunan Province under Grant 2023JJ20016; the National Natural Science Foundation of China under Grants 52221001 and 62090035; the Key Research and Development Plan of Hunan Province under Grants 2022GK3002 and 2023GK2012; and the Key Program of the Science and Technology Department of Hunan Province under Grant 2020XK2001.
Funding: the Economic and Social Research Council, UK, and the National Science Centre, Poland, Grant Number UMO-2018/31/D/NZ8/01144 ('Understanding origins of social brains and communication in wild primates').
Funding: supported by the Research Grant Fund from Kwangwoon University in 2023; the National Natural Science Foundation of China under Grant 62311540155; the Taishan Scholars Project Special Funds (tsqn202312035); and the open research foundation of the State Key Laboratory of Integrated Chips and Systems.
Abstract: Wearable wristband systems leverage deep learning to revolutionize hand gesture recognition in daily activities. Unlike existing approaches that often focus on static gestures and require extensive labeled data, the proposed wearable wristband with self-supervised contrastive learning excels at dynamic motion tracking and adapts rapidly across multiple scenarios. It features a four-channel sensing array composed of an ionic hydrogel with hierarchical microcone structures and ultrathin flexible electrodes, resulting in high-sensitivity capacitance output. Through wireless transmission from a Wi-Fi module, the proposed algorithm learns latent features from the unlabeled signals of random wrist movements. Remarkably, only a small amount of labeled (few-shot) data is sufficient for fine-tuning the model, enabling rapid adaptation to various tasks. The system achieves a high accuracy of 94.9% in different scenarios, including the prediction of eight-direction commands and air-writing of all numbers and letters. The proposed method facilitates smooth transitions between multiple tasks without the need for modifying the structure or undergoing extensive task-specific training. Its utility has been further extended to enhance human-machine interaction over digital platforms, such as game controls, calculators, and three-language login systems, offering users a natural and intuitive way of communication.
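The contrastive pre-training described in this abstract typically optimizes an NT-Xent (normalized temperature-scaled cross-entropy) objective over two augmented "views" of the same unlabeled signal window. The sketch below is a generic NumPy illustration of that loss, not the authors' implementation; the embedding sizes are arbitrary assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss: z1[i] and z2[i] are embeddings of two
    augmented views of the same unlabeled window; all other rows in the
    batch serve as negatives."""
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = z @ z.T / temperature                        # scaled cosine similarity
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # index of each positive
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# Toy batch: matched views should score a lower loss than unrelated ones.
rng = np.random.default_rng(1)
z_anchor = rng.standard_normal((8, 16))
z_matched = z_anchor + 0.01 * rng.standard_normal((8, 16))  # aligned views
z_shuffled = rng.standard_normal((8, 16))                   # unrelated views
loss_pos = nt_xent_loss(z_anchor, z_matched)
loss_neg = nt_xent_loss(z_anchor, z_shuffled)
```

After such pre-training, the paper's few-shot fine-tuning would replace the projection head with a small classifier trained on the handful of labeled gestures.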
Funding: Sponsored by Prince Sattam Bin Abdulaziz University (PSAU) as part of funding for its SDG Roadmap Research Funding Programme, project number PSAU-2023-SDG-2023/SDG/31.
Abstract: Hearing and speech impairment can be congenital or acquired. Hearing and speech-impaired students often hesitate to pursue higher education in reputable institutions due to their challenges. However, the development of automated assistive learning tools within the educational field has empowered disabled students to pursue higher education in any field of study. Assistive learning devices enable students to access institutional resources and facilities fully. The proposed assistive learning and communication tool allows hearing and speech-impaired students to interact productively with their teachers and classmates. It converts audio signals into sign language videos for speech and hearing-impaired students to follow, and converts sign language into text format for teachers to follow. The tool is implemented with customized deep learning models: convolutional neural networks (CNNs), residual neural networks (ResNets), and stacked long short-term memory (LSTM) networks. It is a novel framework that interprets static and dynamic gesture actions in American Sign Language (ASL). Such communicative tools empower the speech and hearing impaired to communicate effectively in a classroom environment and foster inclusivity. The customized deep learning models were developed and experimentally evaluated with standard performance metrics. The model exhibits an accuracy of 99.7% for all static gesture classification and 99% for a specific vocabulary of gesture action words. This two-way communicative and educational tool encourages social inclusion and a promising career for disabled students.
Funding: Supported by a research fund from Chosun University, 2024.
Abstract: The rapid evolution of virtual reality (VR) and augmented reality (AR) technologies has significantly transformed human-computer interaction, with applications spanning entertainment, education, healthcare, industry, and remote collaboration. A central challenge in these immersive systems lies in enabling intuitive, efficient, and natural interactions. Hand gesture recognition offers a compelling solution by leveraging the expressiveness of human hands to facilitate seamless control without relying on traditional input devices such as controllers or keyboards, which can limit immersion. However, achieving robust gesture recognition requires overcoming challenges related to accurate hand tracking, complex environmental conditions, and minimizing system latency. This study proposes an artificial intelligence (AI)-driven framework for recognizing both static and dynamic hand gestures in VR and AR environments using skeleton-based tracking compliant with the OpenXR standard. Our approach employs a lightweight neural network architecture capable of real-time classification within approximately 1.3 ms while maintaining an average accuracy of 95%. We also introduce a novel dataset generation method to support training robust models and demonstrate consistent classification of diverse gestures across widespread commercial VR devices. This work represents one of the first studies to implement and validate dynamic hand gesture recognition in real time using standardized VR hardware, laying the groundwork for more immersive, accessible, and user-friendly interaction systems. By advancing AI-driven gesture interfaces, this research has the potential to broaden the adoption of VR and AR across diverse domains and enhance the overall user experience.
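Millisecond-scale inference of the kind quoted above is plausible for a small dense network over a flattened skeleton pose. The sketch below is a toy NumPy stand-in, not the paper's architecture; the layer sizes (78 inputs from 26 joints x 3 coordinates, 12 classes) are illustrative assumptions.

```python
import time
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (assumed) layer sizes: 26 hand joints x 3 coords = 78 inputs,
# two small hidden layers, 12 gesture classes. Weights are random here,
# standing in for a trained model.
W1, b1 = 0.1 * rng.standard_normal((78, 64)), np.zeros(64)
W2, b2 = 0.1 * rng.standard_normal((64, 32)), np.zeros(32)
W3, b3 = 0.1 * rng.standard_normal((32, 12)), np.zeros(12)

def classify(frame):
    """Forward pass of a tiny MLP over one flattened skeleton frame."""
    h = np.maximum(frame @ W1 + b1, 0.0)   # ReLU
    h = np.maximum(h @ W2 + b2, 0.0)
    return int(np.argmax(h @ W3 + b3))

frame = rng.standard_normal(78)            # one tracked-hand pose, flattened
t0 = time.perf_counter()
for _ in range(1000):
    pred = classify(frame)
elapsed = time.perf_counter() - t0
per_call_ms = 1000.0 * elapsed / 1000      # average milliseconds per inference
```

A network this small runs well under the 1.3 ms budget on commodity hardware, which is why lightweight architectures are favored for per-frame VR classification.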
Funding: The project BRIGHT (Project reference: MERA-NET3/0004/2021), financed by national funds from FCT - Fundação para a Ciência e a Tecnologia, I.P., in the scope of projects LA/P/0037/2020, UIDP/50025/2020, and UIDB/50025/2020 of the Associate Laboratory Institute of Nanostructures, Nanomodelling and Nanofabrication (i3N); also supported by i3N-FCT - Portuguese Foundation for Science and Technology through a Ph.D. scholarship (grant no. UI/BD/151288/2021); partially supported by the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreements 952169 (SYNERGY, H2020-WIDESPREAD-2020-5, CSA), 101008701 (EMERGE, H2020-INFRAIA-2020-1), and 101070255 (REFORM, HORIZON-CL4-2021-DIGITAL-EMERGING-01); also supported by the LISBOA-05-3559-FSE-000007 and CENTRO-04-3559-FSE-000094 operations, co-funded by the Lisboa 2020 and Centro 2020 programmes, Portugal 2020, European Union, through the European Social Fund; Fundação para a Ciência e a Tecnologia (FCT); Agência Nacional de Inovação (ANI).
Abstract: Textiles for health and sporting-activity monitoring are on the rise with the advent of smart portable wearables. The intention of this work is to design wireless monitoring wearables based on widely available textiles and low-environmental-impact production technologies. Herein we have developed a polymeric ink that is able to functionalize different types of textile fibers (including silver conducting fibers, cotton, and commercial textile) with polypyrrole. These fibers were woven together with a thinner silver conducting fiber and carbon fiber to form a touch-sensitive energy harvesting system that generates an electric output when mechanical pressure is applied to it. Different prototypes were manufactured with loom-weaving accessories to simulate real textile cloths. By simple touch, the prototypes produced a maximum voltage of 244 V and a maximum power density of 2.29 W m^(-2). The current generated is then transformed into a digital signal, which is further utilized for human motion or gesture monitoring. The system comprises a wireless block for Internet of Things (IoT) applicability that will eventually be extended to future remote health and sports monitoring systems.
Funding: Supported by the Sirindhorn International Institute of Technology, Thammasat University, EFS-G (Excellent Foreign Student - Graduate) research fund.
Abstract: Background With the increasing prominence of hand and finger motion tracking in virtual reality (VR) applications and rehabilitation studies, data gloves have emerged as a prevalent solution. In this study, we developed an innovative, lightweight, and detachable data glove tailored for finger motion tracking in VR environments. Methods The glove design incorporates a potentiometer coupled with a flexible rack-and-pinion gear system, facilitating precise and natural hand gestures for interaction with VR applications. We first calibrated the potentiometer against the actual finger bending angle and verified the accuracy of the angle measurements recorded by the data glove. To verify the precision and reliability of the glove, we conducted repeatability testing for flexion (grip test) and extension (flat test), with 250 measurements each, across five users, and analyzed the data with the Gage Repeatability and Reproducibility (Gage R&R) method. Furthermore, we integrated the glove into a SteamVR home environment using the OpenGlove auto-calibration tool. Conclusions The repeatability analysis revealed an aggregate error of 1.45 degrees in both the gripped and flat hand positions, a notably favorable outcome compared with findings from assessments of nine alternative data gloves that employed similar protocols. In these experiments, users navigated and engaged with virtual objects, underlining the glove's accurate tracking of finger motion. The proposed data glove also exhibited a low response time of 17-34 ms and a back-drive force of only 0.19 N. Additionally, according to a comfort evaluation using the Comfort Rating Scales, the proposed glove system is wearable, placing it at the WL1 level.
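The calibration and repeatability workflow described in this abstract can be sketched generically: fit a linear map from raw potentiometer readings to reference bending angles, then summarize the spread of repeated measurements of one pose. The calibration points below are hypothetical, and the mean-absolute-deviation summary is a simple stand-in for a full Gage R&R analysis.

```python
import numpy as np

def fit_calibration(adc_counts, angles_deg):
    """Least-squares linear map from raw potentiometer ADC counts to
    finger bending angle: angle = a * adc + b."""
    a, b = np.polyfit(adc_counts, angles_deg, deg=1)
    return a, b

def repeatability_error(repeats_deg):
    """Mean absolute deviation (degrees) of repeated measurements of a
    single fixed pose -- a simplified proxy for Gage R&R repeatability."""
    m = np.asarray(repeats_deg, dtype=float)
    return float(np.abs(m - m.mean()).mean())

# Hypothetical calibration points: ADC counts vs. reference angles.
a, b = fit_calibration([100, 300, 500, 700], [0.0, 30.0, 60.0, 90.0])
angle_at_500 = a * 500 + b   # predicted bending angle for a raw reading of 500
```

In practice each finger channel would get its own (a, b) pair, and the repeatability statistic would be compared against the glove's quoted 1.45-degree aggregate error.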
Abstract: With the growing application of intelligent robots in service, manufacturing, and medical fields, efficient and natural interaction between humans and robots has become key to improving collaboration efficiency and user experience. Gesture recognition, as an intuitive and contactless interaction method, can overcome the limitations of traditional interfaces and enable real-time control and feedback of robot movements and behaviors. This study first reviews mainstream gesture recognition algorithms and their application on different sensing platforms (RGB cameras, depth cameras, and inertial measurement units). It then proposes a gesture recognition method based on multimodal feature fusion and a lightweight deep neural network that balances recognition accuracy with computational efficiency. At the system level, a modular human-robot interaction architecture is constructed, comprising perception, decision, and execution layers, and gesture commands are transmitted and mapped to robot actions in real time via the ROS communication protocol. Through multiple comparative experiments on public gesture datasets and a self-collected dataset, the proposed method's superiority is validated in terms of accuracy, response latency, and system robustness, while user-experience tests assess the interface's usability. The results provide a reliable technical foundation for robot collaboration and service in complex scenarios, offering broad prospects for practical application and deployment.
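The gesture-to-action mapping in such a ROS-based architecture is conceptually a lookup from recognized labels to velocity commands. The sketch below is a dependency-free illustration: the `Twist` class is a minimal stand-in for ROS's `geometry_msgs/Twist`, and the gesture vocabulary is hypothetical; a real system would publish these messages on a topic such as `/cmd_vel` via rospy or rclpy.

```python
from dataclasses import dataclass

@dataclass
class Twist:
    """Minimal stand-in for a ROS geometry_msgs/Twist message
    (only linear.x and angular.z are modeled here)."""
    linear_x: float = 0.0
    angular_z: float = 0.0

# Hypothetical gesture vocabulary mapped to velocity commands.
GESTURE_TO_CMD = {
    "palm_forward": Twist(0.0, 0.0),    # stop
    "point_up":     Twist(0.2, 0.0),    # move forward
    "swipe_left":   Twist(0.0, 0.5),    # turn left
    "swipe_right":  Twist(0.0, -0.5),   # turn right
}

def to_command(label):
    """Map a recognized gesture label to a robot command; unknown
    labels fail safe to a stop."""
    return GESTURE_TO_CMD.get(label, Twist(0.0, 0.0))
```

Failing safe on unrecognized labels is the important design choice here: a misclassification in the perception layer should never translate into uncontrolled motion in the execution layer.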
Abstract: This article describes a pilot study aiming at generating social interactions between a humanoid robot and adolescents with autism spectrum disorder (ASD), through the practice of a gesture imitation game. The participants were a 17-year-old young lady with ASD and intellectual deficit, and a control participant: a preadolescent with ASD but no intellectual deficit (Asperger syndrome). The game is comprised of four phases: greetings, pairing, imitation, and closing. Field educators were involved, playing specific roles: visual or physical inciter. The use of a robot allows for catching the participants' attention, playing the imitation game for a longer period of time than with a human partner, and preventing the game partner's negative facial expressions resulting from tiredness, impatience, or boredom. The participants' behavior was observed in terms of initial approach towards the robot, positioning relative to the robot in terms of distance and orientation, reactions to the robot's voice or moves, signs of happiness, and imitation attempts. Results suggest an increasingly natural approach towards the robot over the sessions, as well as a higher level of social interaction, based on the variations of the parameters listed above. We use these preliminary results to draw the next steps of our research work as well as identify further perspectives, with this aim in mind: improving social interactions with adolescents with ASD and intellectual deficit, allowing for better integration of these people into our societies.
Abstract: In today's global society, people from multiple cultural backgrounds often communicate with foreign friends on a daily basis, making it increasingly important to respect and understand cultural differences. For example, students who join international exchange programs may find that simple gestures considered polite in one culture, such as bowing or handshakes, might be impolite or confusing in another.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52372107), the Zhongyuan Talent Program (Talent Cultivation Series) - Leading Talents in Zhongyuan Basic Research, the Henan Center for Outstanding Overseas Scientists (Grant No. GZS2024003), the Natural Science Foundation of Henan Province in China (Grant No. 252300421801), and the China Postdoctoral Science Foundation (Grant No. 2023M740992).
Abstract: As humanity advances into the virtual world and the era of mixed reality, high-fidelity gesture mapping has become a key interface connecting the physical world and the digital space. However, existing solutions are limited by the bottlenecks of active sensors in terms of information richness and structure, and have insufficient capability to precisely reconstruct high-degree-of-freedom hand movements, making it difficult to achieve a natural and seamless interaction experience. Here, a dynamic gesture mapping system (DGMS) has been developed, which combines a lightweight and efficient hand motion signal acquisition device (HSAD) with an efficient signal processing algorithm (SPA). The HSAD captures motion information for 17 degrees of freedom of the hand through sensor clusters and wirelessly transmits the data to the signal processing platform. On this platform, the SPA converts electrical signals into bending-angle information, with an error below 1 degree. The DGMS synchronously and accurately maps the hand movement information output by the SPA to the virtual environment, achieving real-time interaction with objects in the virtual space. This study is the first to achieve such high-degree-of-freedom, high-fidelity gesture mapping using passive sensors, and explores the application of gesture mapping in a virtual-real coexisting intelligent enhancement platform, offering a novel technical route for digital twin applications.
Abstract: In a recent study, Prof. Rui Min and collaborators published a paper in the journal Opto-Electronic Science entitled "Smart photonic wristband for pulse wave monitoring". The paper introduces a novel realization of a sensor that uses a polymer optical multi-mode fiber to sense the pulse wave bio-signal from the wrist by analyzing the specklegram measured at the output of the fiber. Applying machine learning techniques to the pulse wave signal allowed medical diagnostics and recognition of different gestures with an accuracy of 95%.
Abstract: The paper proposes that understanding human language evolution requires a comprehensive understanding of language in terms of language types, formations, and learnings, and a comprehensive understanding of human biological evolution in terms of the emergence of various hominin species with various language capacities. This paper proposes language neuromechanics and the human biological-language evolution. Language is derived from bodily movement. Language neuromechanics combines neuroscience, to study the language brain, and biomechanics, to study language movement. Language neuromechanics consists of language type, language formation, and language learning. Language types for advanced animals include gestural language versus vocal language, instinctive language versus controllable language, and symbolic language versus iconic language. Language formation involves the development of the different types of languages from different bodily movements, phylogenetically and ontogenetically. Language learning involves the learning of controllable language to adapt to the communicative environment through language brain regions and language genes. This paper proposes a gradual, step-by-step human language evolution from the language of great apes to human language through human biological evolution, which chronologically and geographically consists of early hominins, early Homos, middle Homos, and late Homos with different language capacities. For hominins, vocal language and gestural language evolved together. In conclusion, combining neuroscience and biomechanics, language neuromechanics provides a comprehensive understanding of language. The combination of language neuromechanics and the human biological-language evolution provides a clear evolutionary path from great apes' articulate gestural language without articulate speech to human articulate gestural language and articulate speech.
Funding: Supported by the National Natural Science Foundation of China (NSFC) (No. 61804103), the National Key R&D Program of China (No. 2017YFA0205002), the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Nos. 18KJA535001 and 14KJB150020), the Natural Science Foundation of Jiangsu Province of China (Nos. BK20170343 and BK20180242), the China Postdoctoral Science Foundation (No. 2017M610346), the State Key Laboratory of Silicon Materials, Zhejiang University (No. SKL2018-03), the Nantong Municipal Science and Technology Program (No. GY12017001), and the Jiangsu Key Laboratory for Carbon-Based Functional Materials & Devices, Soochow University (KSL201803); also supported by the Collaborative Innovation Center of Suzhou Nano Science & Technology, the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD), the 111 Project, and the Joint International Research Laboratory of Carbon-Based Functional Materials and Devices.
Abstract: Continuous deformation always leads to performance degradation of a flexible triboelectric nanogenerator due to the Young's modulus mismatch of its different functional layers. In this work, we fabricated a fiber-shaped stretchable and tailorable triboelectric nanogenerator (FST-TENG) based on the geometric construction of a steel wire as the electrode and an ingenious selection of silicone rubber as the triboelectric layer. Owing to their great robustness and continuous conductivity, the FST-TENGs demonstrate high stability, stretchability, and even tailorability. For a single device ~6 cm in length and ~3 mm in diameter, an open-circuit voltage of ~59.7 V, transferred charge of ~23.7 nC, short-circuit current of ~2.67 μA, and average power of ~2.13 μW can be obtained at 2.5 Hz. By knitting several FST-TENGs into a fabric or a bracelet, the device can harvest human motion energy and drive a wearable electronic device. Finally, it can also be woven onto the dorsum of a glove to monitor gesture movements, recognizing individual fingers, different bending angles, and the number of bent fingers by analyzing the voltage signals.
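The per-finger recognition described at the end of this abstract amounts to inspecting the voltage trace of each glove channel. The sketch below is an illustrative thresholding scheme only; the 10 V threshold, channel layout, and any amplitude-to-angle relationship are assumptions, not the paper's calibration.

```python
import numpy as np

def bent_fingers(voltages, threshold=10.0):
    """Given a (fingers x samples) array of per-channel TENG output
    voltages, return the indices of fingers whose peak |voltage| exceeds
    `threshold` volts, together with those peak amplitudes (which could
    then be mapped to bending angles via a separate calibration)."""
    peaks = np.max(np.abs(np.asarray(voltages, dtype=float)), axis=1)
    idx = np.flatnonzero(peaks > threshold)
    return idx.tolist(), peaks[idx].tolist()

# Synthetic traces: fingers 1 and 3 produce clear peaks, the rest only noise.
rng = np.random.default_rng(7)
v = 0.5 * rng.standard_normal((5, 200))
v[1, 50] = 30.0
v[3, 120] = 20.0
fingers, amps = bent_fingers(v)
```

Counting the returned indices gives the number of bent fingers, and comparing peak amplitudes across events would distinguish bending angles.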
Funding: Supported by the Natural Science Foundation of Heilongjiang Province Youth Fund (No. QC2014C054), the Foundation for University Young Key Scholars of Heilongjiang Province (No. 1254G023), and the Science Funds for Young Innovative Talents of HUST (No. 201304).
Abstract: To address the tendency of the back propagation (BP) neural network to fall into local minima and its low convergence speed in gesture recognition, a new method combining the chaos algorithm with the genetic algorithm (CGA) is proposed. Exploiting the ergodicity of the chaos algorithm and the global convergence of the genetic algorithm, the basic idea of this paper is to encode the weights and thresholds of the BP neural network and obtain a general optimal solution with the genetic algorithm; the general optimal solution is then refined to an accurate optimal solution by adding chaotic disturbance. The optimal results of the chaotic genetic algorithm are used as the initial weights and thresholds of the BP neural network to recognize gestures. Simulation and experimental results show that the real-time performance and accuracy of gesture recognition are greatly improved with the CGA.
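The CGA idea described above (GA global search, then logistic-map chaotic perturbation to escape local minima) can be sketched on a toy objective. This is a generic illustration, not the paper's implementation: the sphere function stands in for the BP network's weight/threshold fitness, and the population sizes and perturbation scale are arbitrary assumptions.

```python
import numpy as np

def chaotic_ga_minimize(f, dim, pop=30, gens=60, seed=0):
    """Toy chaos-genetic optimizer: a plain GA (rank selection, blend
    crossover, Gaussian mutation) refines candidates, and a logistic-map
    chaotic perturbation probes around the elite to escape local minima."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(pop, dim))
    c = 0.7                                        # chaotic variable in (0, 1)
    for _ in range(gens):
        fit = np.array([f(x) for x in X])
        order = np.argsort(fit)                    # rank by fitness (lower = better)
        X, fit = X[order], fit[order]
        kids = []
        for _ in range(pop - 1):
            i, j = rng.integers(0, pop // 2, size=2)   # mate within the better half
            alpha = rng.random()
            child = alpha * X[i] + (1 - alpha) * X[j]  # blend crossover
            child += rng.normal(0.0, 0.1, dim)         # mutation
            kids.append(child)
        c = 4.0 * c * (1.0 - c)                    # logistic map iteration
        probe = X[0] + (2.0 * c - 1.0) * 0.5       # chaotic disturbance of the elite
        kids.append(probe if f(probe) < fit[0] else X[0])  # keep the better (elitism)
        X = np.array(kids)
    fit = np.array([f(x) for x in X])
    return X[np.argmin(fit)]

# Sphere function as a stand-in for BP-network weight fitness.
best = chaotic_ga_minimize(lambda x: float(np.sum(x ** 2)), dim=3)
```

In the paper's setting, `best` would be decoded back into BP weights and thresholds and used to initialize training before the gesture classifier is run.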
Funding: This work was supported by the National Natural Science Foundation of China (51902035 and 52073037), the Natural Science Foundation of Chongqing (cstc2020jcyj-msxmX0807), the Fundamental Research Funds for the Central Universities (2020CDJ-LHSS-001 and 2019CDXZWL001), and the Chongqing graduate tutor team construction project (ydstd1832).
Abstract: In human-machine interaction, robotic hands are useful in many scenarios. Operating robotic hands via gestures instead of handles greatly improves the convenience and intuition of human-machine interaction. Here, we present a magnetic-array-assisted sliding triboelectric sensor for achieving real-time gesture interaction between a human hand and a robotic hand. With a finger's traction movement of flexion or extension, the sensor induces positive/negative pulse signals. By counting the pulses in unit time, the degree, speed, and direction of finger motion can be judged in real time. The magnetic array plays an important role in generating the quantifiable pulses. The two parts of the designed magnetic array transform sliding motion into contact-separation and constrain the sliding pathway, respectively, thus improving the durability, low-speed signal amplitude, and stability of the system. This direct quantization approach and optimization of the wearable gesture sensor provide a new strategy for achieving natural, intuitive, and real-time human-robotic interaction.
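The pulse-counting scheme described above maps directly onto simple threshold-crossing logic: positive pulses accumulate one way, negative pulses the other, and the pulse rate encodes speed. The sketch below is a generic illustration with an assumed sign convention (positive = flexion) and an arbitrary threshold, not the authors' signal chain.

```python
import numpy as np

def decode_pulses(signal, fs, threshold=0.5):
    """Decode a sliding-TENG pulse train sampled at `fs` Hz. Positive
    pulses are taken to indicate flexion steps and negative pulses
    extension steps (illustrative convention). Returns the inferred
    direction, the net pulse count (degree of motion), and the total
    pulse rate in Hz (speed)."""
    sig = np.asarray(signal, dtype=float)
    above = sig > threshold
    below = sig < -threshold
    # Count rising edges into each threshold band, i.e. distinct pulses.
    pos = int(np.count_nonzero(above[1:] & ~above[:-1]))
    neg = int(np.count_nonzero(below[1:] & ~below[:-1]))
    net = pos - neg
    direction = "flexion" if net > 0 else "extension" if net < 0 else "still"
    rate = (pos + neg) / (len(sig) / fs)
    return direction, net, rate

# Synthetic 1 s trace at 100 Hz: three positive pulses, one negative pulse.
sig = np.zeros(100)
for s in (10, 30, 50):
    sig[s:s + 3] = 1.0
sig[70:73] = -1.0
direction, net, rate = decode_pulses(sig, fs=100)
```

Counting edges rather than samples makes the decoder insensitive to pulse width, which is what allows the sensor to quantify motion degree and speed independently.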