Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52188102 and 52505008) and the National Key Research and Development Program of China (Grant No. 2024YFB4707902).
Abstract: The capability of whole-body proprioception, e.g., pose estimation, is important for the control and interaction of continuum robots. However, existing pose estimation methods are often simplified through geometric assumptions, primarily due to constraints such as computational and sensor deployment costs. We propose an explicit posture estimation method based on a neural network, implemented with an embedded camera for vision-based proprioception. We design a continuous location encoding neural network (LENN) by encoding continuous locational information. The LENN captures deformation from changes in internal texture observed by an integrated camera and outputs pose information (both position and orientation) for any point along the robot backbone, rather than only at discrete points. Compared with interpolation-based estimation using a reduced model, our method reduces single-point estimation error by 33.6%. Furthermore, a systematic evaluation of hardware configurations demonstrates that our prototype achieves sub-millimetre accuracy in shape estimation (0.383 mm) while maintaining real-time inference speeds below 12 ms per frame. By combining a learning-based approach with a simple mechanical design, our method leverages internal visual information to estimate the whole-body pose, providing an effective solution for accurate shape estimation in continuum robots.
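As a concept sketch only (this is not the authors' implementation; the function names, feature dimensions, and network sizes below are illustrative assumptions), a continuous location encoding can be realized by mapping a normalized arc-length s in [0, 1] through sinusoidal features, concatenating it with an image feature vector, and passing the result through a small MLP that predicts a 7-D pose (3-D position plus a unit quaternion for orientation):

```python
import numpy as np

def encode_location(s, num_freqs=6):
    """Sinusoidal encoding of a normalized backbone location s in [0, 1]."""
    freqs = 2.0 ** np.arange(num_freqs) * np.pi
    return np.concatenate([np.sin(freqs * s), np.cos(freqs * s)])

def predict_pose(image_feat, s, weights):
    """Toy forward pass: image feature + location encoding -> 7-D pose.

    Output layout is [x, y, z, qw, qx, qy, qz]; the quaternion part is
    normalized so it always represents a valid orientation.
    """
    x = np.concatenate([image_feat, encode_location(s)])
    h = np.tanh(weights["W1"] @ x + weights["b1"])   # hidden layer
    out = weights["W2"] @ h + weights["b2"]          # raw 7-D output
    pos, quat = out[:3], out[3:]
    quat = quat / np.linalg.norm(quat)               # project onto unit sphere
    return np.concatenate([pos, quat])

# Illustrative dimensions: a 64-D image feature and a 12-D location encoding.
rng = np.random.default_rng(0)
weights = {
    "W1": rng.standard_normal((32, 64 + 12)) * 0.1, "b1": np.zeros(32),
    "W2": rng.standard_normal((7, 32)) * 0.1,       "b2": np.zeros(7),
}
image_feat = rng.standard_normal(64)
pose = predict_pose(image_feat, s=0.5, weights=weights)  # pose at mid-backbone
```

Because s is a continuous input rather than an index into a fixed set of sensor locations, the same network can be queried at any point along the backbone, which is the property the abstract contrasts with discrete-point, interpolation-based estimation.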