Abstract: For a vision measurement system consisting of laser-CCD scanning sensors, an algorithm is proposed to extract and recognize the contour of a target object. First, the two-dimensional (2D) point cloud output by the integrated laser sensor is transformed into a binary image. Second, potential target object contours are segmented and extracted using connected-domain labeling and adaptive corner detection. Then, the target object contour is recognized using improved Hu invariant moments and a BP neural network classifier. Finally, the point data of the target object contour are recovered through the inverse transformation from the binary image back to a 2D point cloud. Experimental results show that the average recognition rate is 98.5% and the average recognition time is 0.18 s per frame. The algorithm achieves real-time tracking of the target object against complex backgrounds and in the presence of multiple moving objects.
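As a rough illustration of the processing stages summarized above, the following Python sketch rasterizes a 2D point cloud into a binary image, labels connected domains, and computes log-scaled Hu moments that could be fed to a BP (MLP) classifier. The grid resolution, padding, log scaling, and synthetic scan line are illustrative assumptions; they do not reproduce the paper's actual parameters, its "improved" moment definition, or its adaptive corner detection.

```python
# Sketch of the contour-extraction stages described in the abstract above:
# point-cloud rasterization -> connected-domain labeling -> Hu-moment features.
# All numeric parameters below are illustrative assumptions.
import cv2
import numpy as np

def points_to_binary_image(points, resolution=0.005, pad=5):
    """Rasterize a 2D point cloud (N x 2, in metres) into a binary image."""
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / resolution).astype(int) + pad
    h, w = idx[:, 1].max() + pad + 1, idx[:, 0].max() + pad + 1
    img = np.zeros((h, w), dtype=np.uint8)
    img[idx[:, 1], idx[:, 0]] = 255
    return img, mins

def contour_features(binary_img):
    """Label connected domains and compute log-scaled Hu moments per region."""
    n_labels, labels = cv2.connectedComponents(binary_img)
    feats = []
    for k in range(1, n_labels):                 # label 0 is the background
        mask = (labels == k).astype(np.uint8)
        hu = cv2.HuMoments(cv2.moments(mask)).flatten()
        # Log scaling keeps the seven moments in a comparable numeric range.
        hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)
        feats.append(hu)
    return np.array(feats)

if __name__ == "__main__":
    # Synthetic scan line standing in for one frame of sensor output.
    t = np.linspace(0, 2 * np.pi, 400)
    cloud = np.stack([0.2 * np.cos(t), 0.1 * np.sin(t)], axis=1)
    img, _ = points_to_binary_image(cloud)
    features = contour_features(img)
    print(features)   # candidate inputs for a BP (MLP) contour classifier
```

The per-region feature vectors would then be scored by the trained BP network, and the pixels of the winning region mapped back to sensor coordinates via the inverse of the rasterization above.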
Funding: Supported by the National Natural Science Foundation of China (Nos. U1613211 and U1813218) and the Shenzhen Research Program (Nos. JCYJ20170818164704758 and JCYJ20150925163005055).
Abstract: Recent years have witnessed significant progress in image-based 3D face reconstruction using deep convolutional neural networks. However, current reconstruction methods often perform improperly in self-occluded regions and can produce inaccurate correspondences between a 2D input image and a 3D face template, hindering their use in real applications. To address these problems, we propose a deep shape reconstruction and texture completion network, SRTC-Net, which jointly reconstructs 3D facial geometry and completes the texture with correspondences from a single input face image. In SRTC-Net, we leverage the geometric cues from the completed 3D texture to reconstruct detailed structures of 3D shapes. The SRTC-Net pipeline has three stages. The first introduces a correspondence network to identify pixel-wise correspondence between the input 2D image and a 3D template model, and transfers the input 2D image to a U-V texture map. The second completes the invisible and occluded areas of the U-V texture map with an inpainting network. To obtain the 3D facial geometry, we predict a coarse shape (U-V position maps) from the face segmented by the correspondence network using a shape network, and then refine the coarse 3D shape by regressing the U-V displacement map from the completed U-V texture map in a pixel-to-pixel manner. We evaluate our method on 3D reconstruction tasks as well as face frontalization and pose-invariant face recognition tasks, using both in-the-lab datasets (MICC, MultiPIE) and in-the-wild datasets (CFP). The qualitative and quantitative results demonstrate the effectiveness of our method in inferring 3D facial geometry and complete texture; it outperforms or is comparable to the state-of-the-art.
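The PyTorch sketch below only illustrates the three-stage data flow summarized above (correspondence and U-V unwrapping, texture inpainting, coarse shape plus displacement refinement). The placeholder convolutions, the SRTCSkeleton class, and the tensor sizes are assumptions for exposition and do not reproduce the actual SRTC-Net architectures, losses, or U-V resolution.

```python
# Minimal skeleton of the three-stage pipeline described in the abstract above.
# Network bodies are single placeholder conv blocks, not the published models.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class SRTCSkeleton(nn.Module):
    def __init__(self):
        super().__init__()
        self.correspondence_net = conv_block(3, 2)  # image -> per-pixel U-V coordinates
        self.inpainting_net = conv_block(3, 3)      # partial U-V texture -> completed texture
        self.shape_net = conv_block(3, 3)           # face image -> coarse U-V position map
        self.refine_net = conv_block(6, 3)          # coarse shape + texture -> U-V displacement map

    def forward(self, image, visibility_mask):
        # Stage 1: dense 2D-3D correspondence, then unwrap the image into U-V space.
        uv_coords = self.correspondence_net(image)
        grid = uv_coords.permute(0, 2, 3, 1)        # (B, H, W, 2) sampling grid
        uv_texture = nn.functional.grid_sample(image, grid, align_corners=False)
        # Stage 2: inpaint the invisible / occluded texture regions.
        completed_texture = self.inpainting_net(uv_texture * visibility_mask)
        # Stage 3: coarse U-V position map (placeholder input: the raw image
        # instead of the segmented face), refined by a regressed displacement map.
        coarse_position = self.shape_net(image)
        displacement = self.refine_net(torch.cat([coarse_position, completed_texture], dim=1))
        return coarse_position + displacement       # refined U-V position map

if __name__ == "__main__":
    net = SRTCSkeleton()
    img = torch.rand(1, 3, 64, 64)
    mask = torch.ones(1, 1, 64, 64)
    print(net(img, mask).shape)                     # torch.Size([1, 3, 64, 64])
```

The point of the skeleton is the coupling between stages: the completed texture feeds the refinement step, so geometric detail is recovered pixel-to-pixel in U-V space rather than regressed directly from the input image.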