Abstract: This paper presents a vision-based fingertip-writing character recognition system. The overall system is implemented with a CMOS image camera on an FPGA chip. A blue cover is mounted on the tip of a finger to simplify fingertip detection and to enhance recognition accuracy. For each character stroke, 8 sample points (including the start and end points) are recorded, and the 7 tangent angles between consecutive sampled points are used as features. In addition, 3 feature angles are extracted: the angles of the triangle formed by the start point, the end point, and the average of all 8 sampled points. Based on these key feature angles, a simple template-matching K-nearest-neighbor classifier is applied to distinguish each character stroke. Experimental results show that the system can recognize fingertip-written strokes of digits and lowercase letters with an accuracy of almost 100%. Overall, the proposed fingertip-writing recognition system provides an easy-to-use and accurate visual character input method.
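A minimal sketch of the stroke features described above: 8 resampled points per stroke, the 7 tangent angles between consecutive points, and the 3 interior angles of the triangle formed by the start point, the end point, and the centroid of all 8 points, followed by a nearest-neighbor template match. The resampling rule and the Euclidean distance metric are assumptions; the paper's exact matching scheme may differ.

```python
import numpy as np

def stroke_features(points):
    """points: (N, 2) array of fingertip positions for one character stroke."""
    # Resample the stroke to 8 evenly spaced points (including start and end).
    p = np.asarray(points, dtype=float)
    idx = np.linspace(0, len(p) - 1, 8).astype(int)
    p = p[idx]

    # 7 tangent angles between consecutive sampled points.
    d = np.diff(p, axis=0)
    tangents = np.arctan2(d[:, 1], d[:, 0])          # shape (7,)

    # 3 interior angles of the triangle (start point, end point, centroid).
    tri = [p[0], p[-1], p.mean(axis=0)]
    tri_angles = []
    for i in range(3):
        a, b, c = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
        u, v = b - a, c - a
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
        tri_angles.append(np.arccos(np.clip(cosang, -1.0, 1.0)))

    return np.concatenate([tangents, tri_angles])    # 10-dimensional feature

def classify(stroke, templates):
    """templates: dict mapping stroke label -> feature vector (1-NN match)."""
    f = stroke_features(stroke)
    return min(templates, key=lambda lbl: np.linalg.norm(f - templates[lbl]))
```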
Abstract: Background: Interactions with virtual 3D objects in a virtual reality (VR) environment, using finger gestures captured by a wearable 2D camera, have emerging real-life applications. Method: This paper presents a two-stage convolutional neural network approach, one stage for hand detection and another for fingertip detection. One purpose of the VR environment is to transform a virtual 3D object with affine parameters using the gesture of the thumb and index fingers. Results: To evaluate the performance of the proposed system, one existing and one newly developed egocentric fingertip database are employed so that learning covers the large variations common in real life. Experimental results show that the proposed fingertip detection system outperforms existing systems in detection precision. Conclusion: The interaction performance of the proposed system in the VR environment is higher than that of existing systems in terms of estimation error and the correlation between the ground-truth and estimated affine parameters.
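A minimal sketch of how such a two-stage pipeline could be wired, not the paper's architecture: stage one localizes the hand, stage two regresses the thumb and index fingertip positions inside the hand crop, and the change in the two-fingertip geometry between frames is mapped to affine parameters (translation, rotation, scale) for the virtual object. `hand_detector` and `fingertip_regressor` stand in for the two CNNs and are assumed interfaces, not real library calls.

```python
import numpy as np

def detect_fingertips(frame, hand_detector, fingertip_regressor):
    # Stage 1: hand bounding box in the full frame (x, y, width, height).
    x, y, w, h = hand_detector(frame)
    crop = frame[y:y + h, x:x + w]
    # Stage 2: thumb and index fingertip coordinates within the crop.
    thumb, index = fingertip_regressor(crop)
    return np.array(thumb) + (x, y), np.array(index) + (x, y)

def affine_from_gesture(prev_tips, curr_tips):
    """Map the motion of (thumb, index) between two frames to 2D affine parameters."""
    (t0, i0), (t1, i1) = prev_tips, curr_tips
    v0, v1 = i0 - t0, i1 - t1
    scale = np.linalg.norm(v1) / (np.linalg.norm(v0) + 1e-9)     # pinch/spread
    rotation = np.arctan2(v1[1], v1[0]) - np.arctan2(v0[1], v0[0])
    translation = (t1 + i1) / 2 - (t0 + i0) / 2                   # midpoint motion
    return translation, rotation, scale
```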
Abstract: A method is presented to convert any display screen into a touchscreen using a pair of cameras. Most state-of-the-art touchscreens rely on special touch-sensitive hardware or on infrared sensors in various configurations. We describe a novel computer-vision-based method that can robustly identify fingertips and detect touch with a precision of a few millimeters above the screen. In our system, the two cameras capture the display screen simultaneously, and users interact with the computer by touching the screen with a fingertip. We make two main contributions: first, we develop a simple and robust hand detection method based on predicted images; second, we determine whether a physical touch takes place using the homography between the two cameras. Because the appearance of the display screen in the camera images is inherently predictable from the computer's output images, we can compute the predicted images and extract the hand precisely by subtracting the predicted images from the captured images.
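A minimal sketch of the two ideas in this abstract, using OpenCV. The screen content (framebuffer) is warped into each camera view with a pre-calibrated homography to form the predicted image; subtracting it from the captured frame isolates the hand. A touch is then declared when the fingertip seen by the two cameras maps to nearly the same screen point under both camera-to-screen homographies. The thresholds and the calibration step are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def extract_hand(captured, framebuffer, H_screen_to_cam, thresh=40):
    """Subtract the predicted screen appearance to get a foreground (hand) mask."""
    h, w = captured.shape[:2]
    predicted = cv2.warpPerspective(framebuffer, H_screen_to_cam, (w, h))
    diff = cv2.absdiff(captured, predicted)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    return mask  # non-zero where the hand (and other foreground) appears

def is_touch(tip_cam1, tip_cam2, H_cam1_to_screen, H_cam2_to_screen, tol_px=5):
    """Back-project the fingertip from both cameras onto the screen plane."""
    p1 = cv2.perspectiveTransform(np.float32([[tip_cam1]]), H_cam1_to_screen)[0, 0]
    p2 = cv2.perspectiveTransform(np.float32([[tip_cam2]]), H_cam2_to_screen)[0, 0]
    # Away from the screen plane the two back-projections disagree; near a
    # physical touch they coincide within a few pixels.
    return np.linalg.norm(p1 - p2) < tol_px, (p1 + p2) / 2
```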