Funding: the National Natural Science Foundation of China (No. 62173230) and the Program of Science and Technology Commission of Shanghai Municipality (No. 22511101400).
Abstract: Robot grasp detection is a fundamental vision task for robots. Deep learning-based methods have shown excellent results in enhancing grasp detection for model-free objects in unstructured scenes. Most popular approaches explore deep network models and exploit RGB-D images, combining colour and depth data to acquire enriched feature expressions. However, current work struggles to achieve a satisfactory balance between accuracy and real-time performance, the variability of RGB and depth feature distributions receives inadequate attention, and the treatment of predicted failure cases is lacking. We propose an efficient fully convolutional network to predict pixel-level antipodal grasp parameters from RGB-D images. A structure with hierarchical feature fusion is established using multiple lightweight feature extraction blocks, and a feature fusion module with 3D global attention is used to fully select the complementary information in RGB and depth images. Additionally, a grasp configuration optimization method based on a local grasp path is proposed to cope with possible failures predicted by the model. Extensive experiments on two public grasping datasets, Cornell and Jacquard, demonstrate that the approach improves the performance of grasping unknown objects.
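As a rough illustration of what "pixel-level antipodal grasp parameters" means in practice, the sketch below shows generic per-pixel grasp heads (quality, angle encoded as cos/sin, and gripper width) applied to fused RGB and depth features. The class name, layer sizes, and the simple concatenation fusion are assumptions for illustration only; they are not the paper's network or its 3D global attention module.

```python
import torch
import torch.nn as nn

class PixelGraspHead(nn.Module):
    """Illustrative per-pixel grasp heads: quality, angle (as cos/sin), width.

    A generic sketch of pixel-level antipodal grasp prediction, not the
    architecture proposed in the paper; layer sizes are arbitrary.
    """
    def __init__(self, in_ch=64):
        super().__init__()
        self.quality = nn.Conv2d(in_ch, 1, kernel_size=1)  # grasp success score per pixel
        self.cos2 = nn.Conv2d(in_ch, 1, kernel_size=1)     # cos(2*theta) of the gripper angle
        self.sin2 = nn.Conv2d(in_ch, 1, kernel_size=1)     # sin(2*theta) of the gripper angle
        self.width = nn.Conv2d(in_ch, 1, kernel_size=1)    # normalised gripper opening width

    def forward(self, feat):
        q = torch.sigmoid(self.quality(feat))
        angle = 0.5 * torch.atan2(self.sin2(feat), self.cos2(feat))
        w = torch.sigmoid(self.width(feat))
        return q, angle, w

# Hypothetical usage: fuse RGB and depth feature maps, then predict per-pixel grasps.
rgb_feat = torch.randn(1, 32, 56, 56)    # features from an RGB branch (assumed shape)
depth_feat = torch.randn(1, 32, 56, 56)  # features from a depth branch (assumed shape)
fused = torch.cat([rgb_feat, depth_feat], dim=1)  # stand-in for the attention-based fusion
quality, angle, width = PixelGraspHead(in_ch=64)(fused)
best_pixel = quality.flatten(1).argmax(dim=1)     # pixel with the highest predicted grasp quality
```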
Funding: National Natural Science Foundation of China (Grant Nos. U1613216 and 61573333).
Abstract: Robot hands have been developing during the last few decades. There are many mechanical structures and analytical methods for different hands, but many tough problems still prevent robot hands from being applied in home-like environments. The ability to grasp objects covering a large range of sizes and various shapes is fundamental for a home service robot to serve people better. In this paper, a new grasping mode based on a novel sucked-type underactuated (STU) hand is proposed. By combining the flexibility of soft material and the effect of suction cups, the STU hand can grasp objects with a wide range of sizes, shapes and materials. Moreover, the new grasping mode is suitable for some situations where force closure fails. We deduce the effective range of sizes of objects that our hand can grasp using the new grasping mode. Thanks to this mode, the ratio of grasping size between the biggest object and the smallest exceeds 40, which makes it possible for our robot hand to grasp diverse objects in daily life. For example, the STU hand can grasp a soccer ball (220 mm diameter, 420 g) and a fountain pen (9 mm diameter, 9 g). Furthermore, we use rigid-body equilibrium conditions to analyse the force conditions. Experiments evaluate the high load capacity and stability of the new grasping mode and display the versatility of the STU hand. The STU hand has a wide range of applications, especially in unstructured environments.
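As a rough illustration of the kind of rigid-body equilibrium reasoning mentioned above, the sketch below checks whether a single suction cup's holding force exceeds an object's weight with a safety margin. The pressure difference, cup radius, and safety factor are hypothetical values chosen for the example, not parameters of the STU hand or the paper's full force analysis.

```python
import math

def suction_holds(delta_p_kpa, cup_radius_mm, mass_g, safety_factor=2.0):
    """First-order static check: suction force vs. object weight.

    A generic equilibrium sketch; all numbers in the example below are
    hypothetical and do not come from the paper.
    """
    area_m2 = math.pi * (cup_radius_mm / 1000.0) ** 2
    suction_force_n = delta_p_kpa * 1000.0 * area_m2  # F = dP * A
    weight_n = (mass_g / 1000.0) * 9.81               # W = m * g
    return suction_force_n >= safety_factor * weight_n

# Hypothetical cup of 15 mm radius at 20 kPa vacuum lifting the 420 g soccer ball:
print(suction_holds(delta_p_kpa=20.0, cup_radius_mm=15.0, mass_g=420.0))  # True
```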
Funding: supported by the Natural Science Foundation of Guangdong Province (Grant No. 2021A1515010691), the College Innovation Team Project of Guangdong Province (Grant No. 2021KCXTD042), and the Wuyi University-Hong Kong-Macao Joint Research and Development Fund (Grant No. 2019WGALH06).
Abstract: A versatile sensing platform employing inorganic MoS_(2) nanoflowers and organic poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) (PEDOT:PSS) has been investigated to develop resistive and capacitive force-sensitive devices. The microstructure of the sensing layer improves the sensitivity and response time of the dual-mode pressure sensors by augmenting electron pathways and inner stress in response to mechanical stimuli. Consequently, the capacitive and resistive sensors exhibit sensitivities of 0.37 and 0.12 kPa^(-1), respectively, while demonstrating a remarkable response time of approximately 100 ms. Furthermore, the PEDOT:PSS layer exhibits excellent adhesion to polydimethylsiloxane (PDMS) substrates, which contributes to the development of highly robust force-sensitive sensors capable of enduring more than 10,000 loading/unloading cycles. The combination of MoS_(2)/PEDOT:PSS layers in these dual-mode sensors has shown promising results in detecting human joint movements and subtle physiological signals. Notably, the sensors have achieved a remarkable precision rate of 98% in identifying target objects. These outcomes underscore the significant potential of these sensors for integration into applications such as electronic skin and human-machine interaction.
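To make the quoted sensitivities concrete, the short sketch below reads them with the conventional definition of pressure-sensor sensitivity, i.e. relative capacitance or resistance change per unit pressure. Both the convention and the applied pressure are assumptions for illustration; the abstract does not state which definition the authors use.

```python
# Quoted sensitivities, interpreted as relative signal change per unit pressure
# (assumed convention S = d(dX/X0)/dP; not stated in the abstract).
S_CAP = 0.37  # kPa^-1, capacitive mode
S_RES = 0.12  # kPa^-1, resistive mode

pressure_kpa = 1.0  # hypothetical applied pressure
rel_capacitance_change = S_CAP * pressure_kpa  # ~37% relative change in capacitance
rel_resistance_change = S_RES * pressure_kpa   # ~12% relative change in resistance
print(rel_capacitance_change, rel_resistance_change)
```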
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 62306185 and 62073274), the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2023B1515020089), and the Shenzhen Science and Technology Program (Grant No. JSGGKQTD20221101115656029).
Abstract: In this work, we present a method that enables a mobile robot to hand over objects to humans efficiently and safely by combining mobile navigation with visual perception. Our robotic system can map its environment in real time and locate objects to pick up. It uses advanced algorithms to grasp objects in a way that suits human preference and employs path planning and obstacle avoidance to navigate back to the human user. The robot adjusts its movements during handover by analyzing the human's posture and movements through visual sensors, ensuring a smooth and collision-free handover. Tests of our system show that it can successfully hand over various objects to humans and adapt to changes in the human's hand position, highlighting improvements in safety and versatility for robotic handovers.
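A minimal sketch of the handover adjustment idea described above, assuming a simple proportional step toward successively tracked hand positions. The gain, termination threshold, and positions are hypothetical, and the authors' actual controller and perception stack are not specified in the abstract.

```python
import numpy as np

def adjust_handover(ee_pos, hand_positions, gain=0.3, done_dist=0.02):
    """Move the end-effector toward successive tracked hand positions."""
    ee = np.asarray(ee_pos, dtype=float)
    for hand in hand_positions:                 # stream of tracked hand observations
        error = np.asarray(hand, dtype=float) - ee
        ee = ee + gain * error                  # proportional step toward the hand
        if np.linalg.norm(error) < done_dist:   # close enough to release the object
            break
    return ee

# Hypothetical example: the hand drifts slightly while the robot approaches.
observed = [[0.50, 0.00, 0.80], [0.52, 0.01, 0.80], [0.53, 0.01, 0.81]]
print(adjust_handover([0.2, 0.0, 0.6], observed))
```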