The integration of Human-Robot Collaboration (HRC) into Virtual Reality (VR) technology is transforming industries by enhancing workforce skills, improving safety, and optimizing operational processes and efficiency through realistic simulations of industry-specific scenarios. Despite the growing adoption of VR integrated with HRC, comprehensive reviews of current research in HRC-VR within the construction and manufacturing fields are lacking. This review examines the latest advances in designing and implementing HRC using VR technology in these industries. The aim is to address the application domains of HRC-VR, the types of robots used, VR setups, and software solutions used. To achieve this, a systematic literature review using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses methodology was conducted on the Web of Science and Google Scholar databases, analyzing 383 articles and selecting 53 papers that met the established selection criteria. The findings emphasize a significant focus on enhancing human-robot interaction, with a trend toward immersive VR experiences and interactive 3D content creation tools. However, the integration of HRC with VR, especially in the dynamic construction environment, presents unique challenges and opportunities for future research, including developing more realistic simulations and adaptable robot systems. This paper offers insights for researchers, practitioners, educators, industry professionals, and policymakers interested in leveraging the integration of HRC with VR in the construction and manufacturing industries.
Despite the gradual transformation of traditional manufacturing by Human-Robot Collaboration Assembly (HRCA), challenges remain in the robot's ability to understand and predict human assembly intentions. This study aims to enhance the robot's comprehension and prediction of operator assembly intentions by capturing and analyzing operator behavior and movements. We propose a video feature extraction method based on the Temporal Shift Module Network (TSM-ResNet50) to extract spatiotemporal features from assembly videos and differentiate assembly actions using feature differences between video frames. Furthermore, we construct an action recognition and segmentation model based on the Refined Multi-Scale Temporal Convolutional Network (Refined-MS-TCN) to identify assembly action intervals and accurately acquire action categories. Experiments on our self-built reducer assembly action dataset demonstrate that the network can classify assembly actions frame by frame, achieving an accuracy of 83%. Additionally, we develop a Hidden Markov Model (HMM) integrated with assembly task constraints to predict operator assembly intentions based on the probability transition matrix and the task constraints. The experimental results show that our method achieves an intention-prediction accuracy of 90.6%, a 13.3% improvement over the HMM without task constraints.
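The constrained-HMM idea in this abstract can be illustrated with a minimal sketch: a next-action prediction from a transition matrix, masked by task-precedence constraints. The action names, probabilities, and prerequisites below are invented for illustration, not the paper's trained model.

```python
# Sketch: predicting the next assembly action from an HMM-style
# transition matrix, masked by task-precedence constraints.
# All action names, probabilities, and constraints are illustrative.

ACTIONS = ["pick_gear", "insert_shaft", "fasten_bolt", "inspect"]

# Row: probability of moving from one action to each action (learned in practice).
TRANS = {
    "pick_gear":    {"pick_gear": 0.1, "insert_shaft": 0.6, "fasten_bolt": 0.2, "inspect": 0.1},
    "insert_shaft": {"pick_gear": 0.2, "insert_shaft": 0.1, "fasten_bolt": 0.6, "inspect": 0.1},
    "fasten_bolt":  {"pick_gear": 0.3, "insert_shaft": 0.1, "fasten_bolt": 0.1, "inspect": 0.5},
    "inspect":      {"pick_gear": 0.7, "insert_shaft": 0.1, "fasten_bolt": 0.1, "inspect": 0.1},
}

# Task constraint: an action is feasible only after its prerequisites are done.
PREREQ = {"insert_shaft": {"pick_gear"}, "fasten_bolt": {"insert_shaft"}, "inspect": {"fasten_bolt"}}

def predict_next(current, done):
    """Most probable next action whose prerequisites are all satisfied."""
    feasible = {a: p for a, p in TRANS[current].items()
                if PREREQ.get(a, set()) <= set(done)}
    total = sum(feasible.values())          # renormalise over feasible actions
    return max(feasible, key=lambda a: feasible[a] / total)

print(predict_next("pick_gear", done=["pick_gear"]))  # insert_shaft
```

The masking step is where the reported accuracy gain plausibly comes from: transitions that violate the assembly sequence are excluded before the argmax, so the model cannot predict an action whose prerequisites are unfinished.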
The wearable exoskeleton system is a typical strongly coupled human-robot system: the human and the robot each form the other's environment, simultaneously supporting and competing with one another. Achieving high human-robot compatibility is the most critical technology for wearable systems. Full structural compatibility can improve the intrinsic safety of the exoskeleton, while precise intention understanding and motion control can improve its comfort. This paper first designs a physiologically functional bionic lower-limb exoskeleton based on the study of bone and joint functional anatomy and analyzes the drive mapping model of the dual closed-loop four-link knee joint. Secondly, an exoskeleton dual closed-loop controller composed of a position inner loop and a force outer loop is designed. The inner loop adopts a PID control algorithm, and the outer loop adopts an adaptive admittance control algorithm based on the human-robot interaction (HRI) force. The controller can adaptively adjust the admittance parameters according to the HRI force to respond to dynamic changes in the mechanical and physical parameters of the human-robot system, thereby improving control compliance and the wearing comfort of the exoskeleton system. Finally, we built a joint simulation platform based on SolidWorks/Simulink to conduct virtual-prototype simulation experiments and recruited volunteers to wear the rehabilitation exoskeleton for related control experiments. Experimental results show that the designed physiologically functional bionic exoskeleton and adaptive admittance controller significantly improve the accuracy of human-robot joint motion tracking, effectively reducing human-machine interaction forces and improving the comfort and safety of the wearer. This paper proposes a dual closed-loop four-link knee-joint exoskeleton and a variable admittance control method based on the HRI force, providing a new approach to the design and control of highly compatible exoskeletons.
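The adaptive admittance scheme described above can be sketched in one degree of freedom: the commanded velocity obeys M·dv/dt + B·v = F_hri, and the damping B is lowered as the interaction force grows so the joint yields more readily. All gains, limits, and the adaptation law below are illustrative assumptions, not the paper's controller.

```python
# Sketch of a one-DOF adaptive admittance loop. The commanded joint
# velocity responds to the human-robot interaction (HRI) force through
#   M * dv/dt + B * v = F_hri,
# with damping B reduced when |F_hri| grows, so the exoskeleton becomes
# more compliant under strong interaction. Numbers are illustrative only.

def admittance_step(v, f_hri, dt=0.01, M=2.0, B0=15.0, k_adapt=0.5):
    B = max(2.0, B0 - k_adapt * abs(f_hri))   # softer under large forces
    dv = (f_hri - B * v) / M                  # admittance dynamics
    return v + dv * dt                        # explicit Euler update

v = 0.0
for _ in range(500):          # 5 s of a constant 10 N interaction force
    v = admittance_step(v, 10.0)
print(round(v, 3))            # converges toward the steady state F/B
```

With a 10 N push the adapted damping is 10 N·s/m, so the commanded velocity settles at F/B = 1.0; a larger force would lower B further and yield a proportionally larger compliant motion.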
A real-time adaptive role-allocation method based on reinforcement learning is proposed to improve human-robot cooperation performance in a curtain wall installation task. This method breaks with the traditional idea that the robot is always the follower, or that cooperation merely switches between a fixed leader and follower. A self-learning method is proposed that dynamically adapts and continuously adjusts the initiative weight of the robot as the task changes. Firstly, the physical human-robot cooperation model, including the role factor, is built. Then, a reinforcement learning model that can adjust the role factor in real time is established, and a reward and action model is designed. The role factor is adjusted continuously according to the combined performance of the human-robot interaction force and the robot's jerk during repeated installations, and the resulting role-adjustment rule continuously improves this overall performance. Experiments verified the dynamic role allocation and the effect of the performance weighting coefficient on the result. The results show that the proposed method realizes role adaptation and achieves the dual optimization goal of reducing the sum of the cooperator's force and the robot's jerk.
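The role-factor learning loop can be caricatured as a bandit-style search over discretised role factors, with a reward that weights interaction force against jerk. The "environment" below (how force and jerk vary with the role factor) and the weighting are invented stand-ins for repeated installation trials, not the paper's model.

```python
import random

# Toy sketch of role-factor adaptation by reinforcement learning.
# The cost model below is invented: human effort falls and robot jerk
# rises as the robot takes more initiative (role factor alpha), so an
# intermediate alpha minimises the weighted cost.

random.seed(0)
ALPHAS = [0.0, 0.25, 0.5, 0.75, 1.0]   # discretised role factors
W = 0.5                                 # performance weighting coefficient

def cost(alpha):                        # stand-in for one installation trial
    force = 10.0 * (1.0 - alpha)        # human effort drops as robot leads
    jerk = 12.0 * alpha ** 2            # aggressive robot motion gets jerkier
    return W * force + (1 - W) * jerk

q = {a: 0.0 for a in ALPHAS}            # running cost estimate per alpha
for episode in range(2000):
    a = random.choice(ALPHAS) if random.random() < 0.2 else min(q, key=q.get)
    q[a] += 0.1 * (cost(a) - q[a])      # incremental estimate update

best = min(q, key=q.get)
print(best)
```

Under this invented trade-off the learner settles on an intermediate role factor (0.5), mirroring the paper's point that neither a pure leader nor a pure follower robot minimises the combined force-plus-jerk objective.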
Recently, wearable gait-assist robots have been evolving toward soft materials designed for the elderly rather than for individuals with disabilities, with an emphasis on modularization, simplification, and weight reduction. Synchronizing the robotic assistive force with the user's leg movements is therefore crucial for usability, which requires accurate recognition of the user's gait intent. In this study, we propose a deep learning model capable of identifying not only gait mode and gait phase but also phase progression. Utilizing data from five inertial measurement units placed on the body, the proposed two-stage architecture incorporates a bidirectional long short-term memory-based model for robust classification of locomotion modes and phases. Subsequently, phase progression is estimated through 1D convolutional neural network-based regressors, each dedicated to a specific phase. The model was evaluated on a diverse dataset encompassing level walking, stair ascent and descent, and sit-to-stand activities from 10 healthy participants. The results demonstrate its ability to accurately classify locomotion phases and estimate phase progression. Accurate phase progression estimation is essential because of the age-related variability in gait phase durations, particularly evident in older adults, the primary demographic for gait-assist robots. These findings underscore the potential to enhance the assistance, comfort, and safety provided by gait-assist robots.
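The two-stage structure (classify mode and phase first, then dispatch to a phase-specific progression regressor) can be sketched with rule-based placeholders standing in for the BiLSTM and the per-phase CNN regressors; every threshold and formula below is an assumption for illustration only.

```python
# Structural sketch of the two-stage pipeline: a sequence model first
# labels locomotion mode and gait phase, then a regressor dedicated to
# that phase estimates phase progression in [0, 1]. The stand-in
# "models" are rule-based placeholders, not the paper's trained weights.

def classify_mode_phase(imu_window):
    """Placeholder for the BiLSTM classifier: a simple threshold rule."""
    mean_gyro = sum(imu_window) / len(imu_window)
    mode = "level_walk" if abs(mean_gyro) < 1.0 else "stair_ascent"
    phase = "stance" if mean_gyro >= 0 else "swing"
    return mode, phase

# One placeholder regressor per phase, as in the proposed architecture.
PHASE_REGRESSORS = {
    "stance": lambda w: min(1.0, max(0.0, w[-1] / 2.0)),
    "swing":  lambda w: min(1.0, max(0.0, 1.0 + w[-1] / 2.0)),
}

def estimate(imu_window):
    mode, phase = classify_mode_phase(imu_window)
    progression = PHASE_REGRESSORS[phase](imu_window)
    return mode, phase, progression

print(estimate([0.2, 0.4, 0.6]))
```

The design point the sketch preserves is the dispatch: because phase durations vary with age, a single regressor over the whole cycle would blur phase boundaries, whereas one regressor per phase only has to model progression within that phase.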
Aiming at the problems of traditional guide devices, such as limited environmental perception and poor terrain adaptability, this paper proposes an intelligent guide system based on a quadruped robot platform. Data fusion between millimeter-wave radar (with an accuracy of ±0.1°) and an RGB-D camera is achieved through multisensor spatiotemporal registration, and a dataset suitable for guide dog robots is constructed. For edge-deployed guide dog robots, a lightweight CA-YOLOv11 target detection model with an integrated attention mechanism is adopted, achieving a comprehensive recognition accuracy of 95.8% in complex scenarios, 2.2% higher than the benchmark YOLOv11 network. The system supports navigation on complex terrain such as stairs (25 cm steps) and slopes (35° gradient), and the response time to sudden disturbances is shortened to 100 ms. Field tests show that the navigation success rate reaches 95% across eight types of scenarios, the user satisfaction score is 4.8/5.0, and the cost is 50% lower than that of traditional guide dogs.
Virtual reality (VR) technology revitalises rehabilitation training by creating rich, interactive virtual rehabilitation scenes and tasks that deeply engage patients. Robotics combined with immersive VR environments has the potential to significantly enhance patients' sense of immersion during training. This paper proposes a rehabilitation robot system that integrates a VR environment, an exoskeleton, and rehabilitation assessment metrics derived from surface electromyographic (sEMG) signals. By employing more realistic and engaging virtual stimuli, the method guides patients to participate actively, thereby enhancing the effectiveness of neural connection reconstruction, an essential aspect of rehabilitation. Furthermore, this study introduces a muscle activation model that merges the linear and non-linear states of muscle, avoiding the impact of the non-linear shape factor on model accuracy present in traditional models. A muscle strength assessment model based on an optimised generalised regression neural network (WOA-GRNN) is also proposed, achieving a root mean square error of 0.017347 and a mean absolute percentage error of 1.2461%, which serve as critical indicators of rehabilitation effectiveness. Finally, the system is preliminarily applied in human movement experiments, validating the practicality and potential effectiveness of VR-centred rehabilitation strategies in medical recovery.
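The GRNN at the core of the assessment model is, in essence, a Nadaraya-Watson kernel regressor whose single smoothing parameter is what a whale optimisation algorithm (WOA) would tune. A minimal sketch, with invented training pairs (sEMG activation feature → strength score) rather than the paper's data:

```python
import math

# Sketch of a generalised regression neural network (GRNN): a kernel-
# weighted average of training targets. The smoothing parameter sigma
# is the quantity a WOA-style optimiser would tune. All training pairs
# below are invented for illustration.

def grnn_predict(x, train_x, train_y, sigma=0.3):
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

train_x = [0.1, 0.3, 0.5, 0.7, 0.9]      # sEMG activation features (made up)
train_y = [5.0, 12.0, 20.0, 27.0, 33.0]  # assessed strength scores (made up)

print(round(grnn_predict(0.5, train_x, train_y), 1))
```

Because the prediction is a smooth weighted average of stored examples, the GRNN needs no iterative training; only sigma controls the bias-variance trade-off, which is why a single-parameter metaheuristic like WOA is a natural fit for tuning it.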
This article describes a pilot study aiming at generating social interactions between a humanoid robot and adolescents with autism spectrum disorder (ASD) through the practice of a gesture imitation game. The participants were a 17-year-old young lady with ASD and intellectual deficit, and a control participant: a preadolescent with ASD but no intellectual deficit (Asperger syndrome). The game comprises four phases: greetings, pairing, imitation, and closing. Field educators were involved, playing specific roles: visual or physical inciter. The use of a robot allows for catching the participants' attention, playing the imitation game for a longer period of time than with a human partner, and preventing the game partner's negative facial expressions resulting from tiredness, impatience, or boredom. The participants' behavior was observed in terms of initial approach towards the robot, positioning relative to the robot in terms of distance and orientation, reactions to the robot's voice or moves, signs of happiness, and imitation attempts. Results suggest an increasingly natural approach towards the robot over the sessions, as well as a higher level of social interaction, based on the variations of the parameters listed above. We use these preliminary results to define the next steps of our research work and identify further perspectives, with this aim in mind: improving social interactions with adolescents with ASD and intellectual deficit, allowing for better integration of these people into our societies.
To address the non-standardized rehabilitation of patients with upper-limb hemiplegia caused by stroke (cerebrovascular accident, CVA) and the shortage of rehabilitation physicians, a 5-degree-of-freedom (DOF) upper-limb elbow-wrist rehabilitation robot and its control system are proposed, aiming to better meet patients' needs for flexible rehabilitation. The rehabilitation robot platform is a 3-RRR (three revolute-revolute-revolute joints) serial mechanism that can perform independent elbow flexion-extension rehabilitation, independent wrist flexion-extension and internal-external rotation rehabilitation, coordinated elbow-wrist flexion-extension rehabilitation, and coordinated elbow flexion-extension with wrist internal-external rotation rehabilitation. Considering that patients may be hemiplegic on either the left or right side, the robot also provides a mirroring function. A touch screen communicates with a programmable logic controller (PLC) to realize human-machine interaction. The robot's workspace was plotted as a point cloud in Matlab using the Monte Carlo (M-C) method, enabling more effective task planning and operation control. The results show that the developed prototype satisfies multiple modes of elbow-wrist motion rehabilitation and can train both the flexor and extensor muscle groups.
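The Monte Carlo workspace method mentioned above amounts to sampling random joint angles within their limits, running forward kinematics, and collecting the reachable points. A minimal sketch for a planar two-link elbow-wrist chain, with assumed link lengths and joint limits rather than the 3-RRR design's parameters:

```python
import math
import random

# Sketch of Monte Carlo workspace estimation: sample joint angles within
# their limits, run forward kinematics, collect reachable points.
# Link lengths and joint limits are illustrative assumptions.

random.seed(1)
L1, L2 = 0.30, 0.25                       # link lengths in metres (assumed)
ELBOW = (0.0, 2.0)                        # joint limits in radians (assumed)
WRIST = (-1.0, 1.0)

def fk(q1, q2):
    """Forward kinematics of a planar 2-link chain."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

cloud = [fk(random.uniform(*ELBOW), random.uniform(*WRIST))
         for _ in range(10000)]

# Sanity check: every sampled point lies within the reachable annulus.
radii = [math.hypot(x, y) for x, y in cloud]
print(abs(L1 - L2) - 1e-9 <= min(radii) <= max(radii) <= L1 + L2 + 1e-9)
```

The resulting point cloud, plotted, is exactly the workspace cloud the abstract describes; its boundary tells the task planner which rehabilitation trajectories the mechanism can actually reach.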
Human-robot safety is an important topic in wearable robotics, especially for supernumerary robotic limbs (SRLs). Flexible joints improve the human-robot safety strategy by allowing physical contact between the human and the robot rather than strictly limiting human-robot motion. However, most researchers focus on the variable-stiffness features of flexible joints; few evaluate how flexible joints perform in an actual human-robot collision. Therefore, the performance of two typical flexible joints, the series elastic joint (SEJ) and the passive variable stiffness joint (PVSJ), is compared through dynamic collision experiments. The results demonstrate that the SEJ absorbs 40.7%-58.7% of the collision force and 34.2%-45.2% of the collision torque at drive torques below 4 N·m and drive speeds of 3-7 (°)/s, and is more stable than the PVSJ. In addition, the stiffness error of the SEJ is measured at 5.1%, significantly lower than the 23.04% measured for the PVSJ; this large stiffness error makes the PVSJ unreliable for buffering collisions. The analysis confirms that the SEJ offers more stable human-robot safety performance in buffering dynamic collisions and is therefore the suitable choice for SRLs in our scenario.
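Why a series elastic element buffers a collision can be shown with a lossless spring-mass model: the elastic element in series with the stiff contact lowers the effective stiffness, and the peak force of an effective mass m striking at speed v scales as v·√(k_eff·m). All numbers below are illustrative assumptions, not the paper's measurements.

```python
import math

# Sketch: peak collision force with and without a series elastic joint
# (SEJ). In a lossless spring-mass impact, F_peak = v * sqrt(k_eff * m);
# the SEJ's compliant element in series with the stiff contact lowers
# k_eff, and with it the peak force. Values are illustrative only.

def peak_force(v, m, k_eff):
    return v * math.sqrt(k_eff * m)

def series(k1, k2):
    """Effective stiffness of two springs in series."""
    return k1 * k2 / (k1 + k2)

m, v = 1.5, 0.4                  # effective mass (kg), impact speed (m/s)
k_contact = 50000.0              # stiff human-robot contact (N/m, assumed)
k_spring = 800.0                 # SEJ elastic element (N/m, assumed)

f_rigid = peak_force(v, m, k_contact)
f_sej = peak_force(v, m, series(k_contact, k_spring))
print(round(1 - f_sej / f_rigid, 2))   # fraction of peak force absorbed
```

The absorbed fraction depends only on √(k_eff/k_contact), which also explains the paper's emphasis on stiffness error: if the element's actual stiffness deviates from its design value (as measured for the PVSJ), the buffering it delivers becomes unpredictable.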
The objective of this work is to develop an innovative system (ROSGPT) that merges large language models (LLMs) with the Robot Operating System (ROS), facilitating natural-language voice control of mobile robots. This integration aims to bridge the gap between human-robot interaction (HRI) and artificial intelligence (AI). ROSGPT integrates several subsystems, including speech recognition, prompt engineering, the LLM, and ROS, enabling seamless control of robots through human voice or text commands. The LLM component is optimized, with its performance refined from the open-source Llama2 model through fine-tuning and quantization. Through extensive experiments conducted in both real-world and virtual environments, ROSGPT demonstrates its efficacy in meeting user requirements and delivering user-friendly interactive experiences. The system demonstrates versatility and adaptability through its ability to comprehend diverse user commands and execute the corresponding tasks with precision and reliability, showcasing its potential for practical applications in robotics and AI. The demonstration video can be viewed at https://iklxo6z9yv.feishu.cn/docx/Lux3dmTDxoZ5YnxWJTZcxUCWnTh.
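The glue between an LLM and ROS in a ROSGPT-style pipeline can be sketched as: prompt the model to emit structured JSON, validate it, and map it onto a Twist-like velocity command. The JSON schema, the command table, and the stub standing in for the fine-tuned Llama2 call are all assumptions for illustration, not the actual ROSGPT interface.

```python
import json

# Sketch of an LLM-to-ROS bridge: the model returns structured JSON,
# which is validated and mapped onto a Twist-like velocity command.
# Schema and command table are illustrative assumptions.

def llm_stub(text):
    """Stand-in for the fine-tuned LLM call: returns structured JSON."""
    table = {
        "move forward one meter": {"action": "move", "linear": 0.2, "angular": 0.0},
        "turn left": {"action": "move", "linear": 0.0, "angular": 0.5},
    }
    return json.dumps(table.get(text.lower(), {"action": "stop"}))

def to_twist(reply_json):
    """Map validated JSON onto the fields a Twist publisher would fill."""
    cmd = json.loads(reply_json)
    if cmd.get("action") != "move":
        return {"linear_x": 0.0, "angular_z": 0.0}   # safe default: stop
    return {"linear_x": cmd["linear"], "angular_z": cmd["angular"]}

print(to_twist(llm_stub("Move forward one meter")))
```

Keeping a structured, validated intermediate between the free-text LLM output and the robot command is the key safety choice here: an unparseable or unrecognized reply degrades to a stop command rather than to arbitrary motion.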
This paper presents an innovative investigation into prototyping a digital twin (DT) as the platform for human-robot interactive welding and welder behavior analysis. This human-robot interaction (HRI) working style enhances human users' operational productivity and comfort, while data-driven welder behavior analysis benefits subsequent novice welder training. The HRI system includes three modules: 1) a human user who demonstrates the welding operations offsite, with her/his operations recorded by motion-tracked handles; 2) a robot that executes the demonstrated welding operations to complete the physical welding tasks onsite; 3) a DT system, developed based on virtual reality (VR), as a digital replica of the physical human-robot interactive welding environment. The DT system bridges the human user and the robot through a bi-directional information flow: a) transmitting demonstrated welding operations in VR to the robot in the physical environment; b) displaying the physical welding scenes to the human user in VR. Compared to existing DT systems reported in the literature, the developed one better engages human users in interacting with welding scenes through an augmented VR. To verify its effectiveness, six welders, three skilled (with manual welding training) and three unskilled (without any training), tested the system by completing the same welding job; the three skilled welders produced satisfactory welded workpieces, while the three unskilled welders did not. A data-driven approach combining the fast Fourier transform (FFT), principal component analysis (PCA), and a support vector machine (SVM) is developed to analyze their behaviors. Given an operation sequence, i.e., the motion speed sequence of the welding torch, frequency features are first extracted by FFT and then reduced in dimension through PCA, before being routed into the SVM for classification. The trained model demonstrates a 94.44% classification accuracy on the testing dataset. The successful pattern recognition of skilled welder operations should help accelerate novice welder training.
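The FFT → PCA → classifier path can be sketched end-to-end on a torch-speed sequence. For self-containment, PCA is reduced to a one-component power iteration and the final classifier is a nearest-centroid stand-in for the SVM; the speed windows and labels below are invented, not the welding dataset.

```python
import cmath

# Sketch of the FFT -> PCA -> classifier path on torch-speed windows.
# One-component power-iteration PCA and a nearest-centroid classifier
# stand in for the paper's PCA/SVM; all data below are invented.

def fft_mag(seq):
    """Magnitudes of the first n//2 DFT bins (plain O(n^2) DFT)."""
    n = len(seq)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(seq))) for k in range(n // 2)]

def first_pc(rows, iters=100):
    """First principal direction via power iteration on X^T X."""
    v = [1.0] * len(rows[0])
    for _ in range(iters):
        w = [sum(r[j] * sum(ri * vi for ri, vi in zip(r, v)) for r in rows)
             for j in range(len(v))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy "skilled" (steady) and "unskilled" (erratic) torch-speed windows.
skilled = [[1.0, 1.1, 1.0, 0.9, 1.0, 1.1, 1.0, 0.9] for _ in range(3)]
unskilled = [[1.0, 2.0, 0.2, 1.8, 0.1, 2.2, 0.3, 1.9] for _ in range(3)]

feats = [fft_mag(s) for s in skilled + unskilled]   # frequency features
pc = first_pc(feats)
proj = [sum(f * c for f, c in zip(row, pc)) for row in feats]  # PCA step
centroid_s = sum(proj[:3]) / 3
centroid_u = sum(proj[3:]) / 3
sample = proj[0]
label = "skilled" if abs(sample - centroid_s) < abs(sample - centroid_u) else "unskilled"
print(label)
```

The intuition the sketch preserves is that a skilled welder's steady torch speed concentrates spectral energy at low frequencies, while erratic motion spreads it, so even a one-dimensional projection of the FFT features separates the two groups.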
A facial expression emotion recognition based human-robot interaction (FEER-HRI) system is proposed, for which a four-layer system framework is designed. The FEER-HRI system enables robots not only to recognize human emotions, but also to generate facial expressions for adapting to human emotions. A facial emotion recognition method based on 2D-Gabor filters, the uniform local binary pattern (LBP) operator, and a multiclass extreme learning machine (ELM) classifier is presented, and applied to real-time facial expression recognition on robots. The robots' facial expressions are represented by simple cartoon symbols displayed on an LED screen equipped on the robots, which can be easily understood by humans. Four scenarios, i.e., guiding, entertainment, home service, and scene simulation, are performed in the human-robot interaction experiment, in which smooth communication is realized through facial expression recognition of humans and facial expression generation by robots within 2 seconds. As prospective applications, the FEER-HRI system can be applied to home service, smart homes, safe driving, and so on.
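The uniform LBP operator at the heart of the recognition pipeline is simple enough to sketch directly: each pixel's 8 neighbours are thresholded against the centre value, and the resulting pattern counts as "uniform" if it has at most two 0/1 transitions around the ring. The 3×3 patch below is invented test data.

```python
# Sketch of the uniform local binary pattern (LBP) operator: threshold
# the 8-neighbourhood against the centre pixel, then keep the pattern
# only if it has at most two circular 0/1 transitions ("uniform").
# The 3x3 patch is invented test data.

def lbp_code(patch):
    """8-bit LBP of the centre pixel of a 3x3 patch, clockwise from top-left."""
    c = patch[1][1]
    ring = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return [1 if p >= c else 0 for p in ring]

def is_uniform(bits):
    transitions = sum(b1 != b2 for b1, b2 in zip(bits, bits[1:] + bits[:1]))
    return transitions <= 2

patch = [[90, 95, 120],
         [80, 100, 130],
         [70, 85, 110]]
bits = lbp_code(patch)
print(bits, is_uniform(bits))
```

Restricting the histogram to uniform patterns is what keeps the LBP feature vector short (59 bins instead of 256 for 8 neighbours), which matters for the real-time recognition the abstract reports.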
Funding: Supported by the National Science Foundation under Grant No. 2222881.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. U23A20338, 62103131 and 62203149) and the Hebei Provincial Natural Science Foundation (Grant No. E2022202171).
Funding: The research has been generously supported by the Tianjin Education Commission Scientific Research Program (2020KJ056), China, and the Tianjin Science and Technology Planning Project (22YDTPJC00970), China. The authors would like to express their sincere appreciation for all support provided.
Funding: Supported by a grant from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (Grant Number: RS-2022-KH129263).
Funding: National Key Research and Development Program of China (Grant/Award Number: 2022YFB4700701); National Outstanding Youth Science Fund Project of the National Natural Science Foundation of China (Grant/Award Number: 52025054).
Funding: Supported by the National Natural Science Foundation of China (No. U22A20204) and the Innovation Foundation from the National Clinical Research Center for Orthopedics, Sports Medicine & Rehabilitation (No. 23-NCRC-CXJJ-ZD3-8).
Abstract: Human-robot safety is an important topic in wearable robotics, especially for supernumerary robotic limbs (SRLs). The introduction of flexible joints improves human-robot safety strategies by allowing physical contact between humans and robots rather than strictly limiting human-robot motion. However, most researchers focus on the variable-stiffness features of flexible joints, while few evaluate how flexible joints perform in human-robot collisions. Therefore, the performance of two typical flexible joints, the series elastic joint (SEJ) and the passive variable stiffness joint (PVSJ), is compared through dynamic collision experiments. The results demonstrate that the SEJ absorbs 40.7%-58.7% of the collision force and 34.2%-45.2% of the collision torque at driving torques below 4 N·m and driving speeds of 3-7 (°)/s, and is more stable than the PVSJ. In addition, the stiffness error of the SEJ is measured at 5.1%, significantly lower than the 23.04% measured for the PVSJ; this large stiffness error makes the PVSJ unreliable for buffering collisions. Analysis of the results confirms that the SEJ offers more stable human-robot safety performance in buffering dynamic collisions. Consequently, the SEJ is suitable for SRLs in our human-robot safety scenario.
Funding: National Natural Science Foundation of China (No. 61601112).
Abstract: The objective of this work is to develop an innovative system (ROSGPT) that merges large language models (LLMs) with the Robot Operating System (ROS), facilitating natural language voice control of mobile robots. This integration aims to bridge the gap between human-robot interaction (HRI) and artificial intelligence (AI). ROSGPT integrates several subsystems, including speech recognition, prompt engineering, the LLM, and ROS, enabling seamless control of robots through human voice or text commands. The LLM component is optimized, with its performance refined from the open-source Llama2 model through fine-tuning and quantization. Through extensive experiments conducted in both real-world and virtual environments, ROSGPT demonstrates its efficacy in meeting user requirements and delivering user-friendly interactive experiences. The system shows versatility and adaptability through its ability to comprehend diverse user commands and execute the corresponding tasks with precision and reliability, thereby showcasing its potential for practical applications in robotics and AI. The demonstration video can be viewed at https://iklxo6z9yv.feishu.cn/docx/Lux3dmTDxoZ5YnxWJTZcxUCWnTh.
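The essence of such a pipeline is mapping a transcribed utterance to a structured velocity command of the kind ROS consumes (a `geometry_msgs/Twist`-style message). The sketch below replaces the LLM stage with a trivial keyword matcher and emits a plain dictionary instead of a real ROS message; the command vocabulary and velocity values are hypothetical, and this is not the ROSGPT implementation.

```python
import json
import re

# Hypothetical intent-to-velocity table; values mimic a
# geometry_msgs/Twist message (linear m/s, angular rad/s).
COMMANDS = {
    "forward":  {"linear": {"x": 0.2},  "angular": {"z": 0.0}},
    "backward": {"linear": {"x": -0.2}, "angular": {"z": 0.0}},
    "left":     {"linear": {"x": 0.0},  "angular": {"z": 0.5}},
    "right":    {"linear": {"x": 0.0},  "angular": {"z": -0.5}},
    "stop":     {"linear": {"x": 0.0},  "angular": {"z": 0.0}},
}

def parse_command(utterance):
    """Tiny stand-in for the LLM stage: keyword-match the transcribed
    utterance to a structured velocity command; default to a safe stop."""
    for key, twist in COMMANDS.items():
        if re.search(r"\b" + key + r"\b", utterance.lower()):
            return twist
    return COMMANDS["stop"]

msg = parse_command("Please move forward slowly")
print(json.dumps(msg))
```

In the real system, an LLM prompted to emit structured output replaces `parse_command`, and the resulting message is published on a ROS topic such as `cmd_vel`.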
Abstract: This paper presents an innovative investigation into prototyping a digital twin (DT) as the platform for human-robot interactive welding and welder behavior analysis. This human-robot interaction (HRI) working style enhances human users' operational productivity and comfort, while data-driven welder behavior analysis benefits subsequent novice welder training. The HRI system includes three modules: 1) a human user who demonstrates the welding operations offsite, with her/his operations recorded by motion-tracked handles; 2) a robot that executes the demonstrated welding operations to complete the physical welding tasks onsite; 3) a DT system, developed based on virtual reality (VR), that serves as a digital replica of the physical human-robot interactive welding environment. The DT system bridges the human user and the robot through a bi-directional information flow: a) transmitting demonstrated welding operations in VR to the robot in the physical environment; b) displaying the physical welding scenes to the human user in VR. Compared to existing DT systems reported in the literature, the developed system better engages human users in interacting with welding scenes through an augmented VR. To verify its effectiveness, six welders tested the system by completing the same welding job: three skilled welders with manual welding training produced satisfactory workpieces, while three unskilled welders without any training did not. A data-driven approach combining fast Fourier transform (FFT), principal component analysis (PCA), and support vector machine (SVM) is developed to analyze their behaviors. Given an operation sequence, i.e., the motion speed sequence of the welding torch, frequency features are first extracted by FFT and then reduced in dimension through PCA before being routed into the SVM for classification. The trained model achieves a 94.44% classification accuracy on the testing dataset. This successful pattern recognition of skilled welder operations should help accelerate novice welder training.
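The FFT-PCA-SVM chain described above can be sketched end to end on synthetic torch-speed sequences. The two "skilled" and "unskilled" classes below are simulated as signals dominated by different frequencies; the sampling rate, sequence length, and class structure are assumptions for illustration, not the paper's dataset.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)
fs, n = 100, 256  # hypothetical sampling rate (Hz) and samples per sequence

def make_sequence(freq):
    """Synthetic torch motion-speed sequence dominated by one frequency."""
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=n)

# two hypothetical operator classes: steady 2 Hz vs. erratic 5 Hz motion
X_time = np.array([make_sequence(2) for _ in range(40)] +
                  [make_sequence(5) for _ in range(40)])
y = np.array([0] * 40 + [1] * 40)

# FFT magnitude spectra as frequency features
X_freq = np.abs(np.fft.rfft(X_time, axis=1))

# shuffled train/test split, then PCA dimension reduction into an SVM
perm = rng.permutation(len(y))
train, test = perm[:60], perm[60:]
clf = make_pipeline(PCA(n_components=5), SVC(kernel="rbf"))
clf.fit(X_freq[train], y[train])
acc = clf.score(X_freq[test], y[test])
print(acc)
```

On this cleanly separated toy problem the pipeline classifies essentially perfectly; the paper's 94.44% reflects the harder task of distinguishing real welder behavior.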
Funding: Supported by the National Natural Science Foundation of China (61403422, 61273102), the Hubei Provincial Natural Science Foundation of China (2015CFA010), the 111 Project (B17040), and the Fundamental Research Funds for National Universities, China University of Geosciences (Wuhan).
Abstract: A facial expression emotion recognition based human-robot interaction (FEER-HRI) system is proposed, for which a four-layer system framework is designed. The FEER-HRI system enables robots not only to recognize human emotions but also to generate facial expressions that adapt to those emotions. A facial emotion recognition method based on 2D-Gabor features, the uniform local binary pattern (LBP) operator, and a multiclass extreme learning machine (ELM) classifier is presented and applied to real-time facial expression recognition on robots. The robots' facial expressions are represented by simple cartoon symbols displayed on an LED screen mounted on the robots, which humans can easily understand. Four scenarios, i.e., guiding, entertainment, home service, and scene simulation, are performed in the human-robot interaction experiment, in which smooth communication is realized through facial expression recognition of humans and facial expression generation by robots within 2 seconds. As prospective applications, the FEER-HRI system can be applied to home service, smart homes, safe driving, and so on.
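The ELM classifier at the end of that pipeline has a simple closed form: a random hidden layer followed by least-squares output weights. The sketch below trains a minimal ELM on synthetic features standing in for LBP histograms; the feature dimensions, class structure, and hidden-layer size are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: fixed random hidden layer,
    closed-form least-squares output weights."""
    def __init__(self, n_hidden=40, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # random hidden activations
        T = np.eye(n_classes)[y]              # one-hot targets
        self.beta = np.linalg.pinv(H) @ T     # least-squares output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)

# toy stand-in for LBP feature histograms of two expression classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, size=(50, 10)),
               rng.normal(1.0, 0.3, size=(50, 10))])
y = np.array([0] * 50 + [1] * 50)

acc = float(np.mean(ELM().fit(X, y).predict(X) == y))
print(acc)
```

Because only the output weights are solved for (no iterative backpropagation), training is fast, which suits the real-time recognition requirement mentioned in the abstract.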