Journal Articles
6 articles found
1. Deep Transfer Learning Approach for Robust Hand Detection
Authors: Stevica Cvetkovic, Nemanja Savic, Ivan Ciric. Intelligent Automation & Soft Computing (SCIE), 2023, No. 4, pp. 967-979 (13 pages).
Human hand detection in uncontrolled environments is a challenging visual recognition task due to numerous variations of hand poses and background image clutter. To achieve highly accurate results as well as provide real-time execution, we proposed a deep transfer learning approach over the state-of-the-art deep learning object detector. Our method, denoted as YOLOHANDS, is built on top of the You Only Look Once (YOLO) deep learning architecture, which is modified to adapt to the single-class hand detection task. The model transfer is performed by modifying the higher convolutional layers, including the last fully connected layer, while initializing lower non-modified layers with the generic pre-trained weights. To address robustness issues, we introduced a comprehensive augmentation procedure over the training image dataset, specifically adapted for the hand detection problem. Experimental evaluation of the proposed method, which is performed on a challenging public dataset, has demonstrated highly accurate results, comparable to the state-of-the-art methods.
Keywords: deep learning model; object detection; hand detection; transfer learning; data augmentation
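A minimal sketch of the transfer strategy described in the abstract above: freeze the lower, generically pre-trained layers and re-initialize/fine-tune only the higher layers for a single "hand" class. A torchvision Faster R-CNN is used here as a stand-in for the paper's YOLO model; the freezing depth, optimizer settings, and class count are illustrative assumptions, not the actual YOLOHANDS configuration.

```python
# Sketch: adapt a generic pre-trained detector to a single "hand" class by
# freezing lower (generic) layers and fine-tuning the higher/output layers.
# torchvision's Faster R-CNN stands in for the YOLO model of the paper.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Freeze the lower, generically pre-trained backbone layers.
for _, param in model.backbone.named_parameters():
    param.requires_grad = False

# Replace the box predictor (the "last fully connected layer") for
# 2 classes: background + hand.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# Only the re-initialized / unfrozen parameters are optimized.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=5e-3, momentum=0.9, weight_decay=5e-4)
```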
2. Lightweight Hand Acupoint Recognition Based on Middle Finger Cun Measurement
Authors: Zili Meng, Minglang Lu, Guanci Yang, Tianyi Zhao, Donghua Zheng, Ling He, Zhi Shan. IET Cyber-Systems and Robotics, 2025, No. 3, pp. 81-96 (16 pages).
Acupoint therapy plays a crucial role in the prevention and treatment of various diseases. Accurate and efficient intelligent acupoint recognition methods are essential for enhancing the operational capabilities of embodied intelligent robots in acupoint massage and related applications. This paper proposes a lightweight hand acupoint recognition (LHAR) method based on middle finger cun measurement. First, to obtain a lightweight model for rapid positioning of the hand area, building on the design of a partially convolutional gated regularisation unit and an efficient shared convolutional detection head, an improved YOLO11 algorithm based on a lightweight efficient shared convolutional detection head (YOLO11-SH) is proposed. Second, according to the theory of traditional Chinese medicine, a method for determining positional relationships between acupoints based on middle finger cun measurement is established. The MediaPipe algorithm is then used to obtain 21 keypoints of the hand, which serve as reference points for obtaining the middle finger cun feature via positional relationship determination. An offset-based localisation approach is then adopted to achieve accurate recognition of acupoints using the obtained middle finger cun feature. Comparative experiments with five representative lightweight models demonstrate that YOLO11-SH achieves an mAP@0.5 of 97.3%, with 1.59×10⁶ parameters, 3.9×10⁹ FLOPs, a model weight of 3.4 MB and an inference speed of 325.8 FPS, outperforming the comparison methods in terms of both recognition accuracy and model efficiency. The experimental results of acupoint recognition indicate that the overall recognition accuracy of LHAR reaches 94.49%. The average normalised displacement error for different acupoints ranges from 0.036 to 0.105, all within the error threshold of ≤0.15. Finally, LHAR is integrated into a robotic platform, and a robotic massage experiment is conducted to verify the effectiveness of LHAR.
Keywords: acupoint recognition; deep learning; hand detection model; health care; intelligent robots; object detection
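A hedged sketch of the keypoint-based part of the pipeline summarized above: MediaPipe Hands yields 21 landmarks, a "middle finger cun" length unit is derived from two middle-finger landmarks, and an acupoint is placed at a reference landmark plus an offset expressed in cun. The landmark indices, the offset values, and the file name are illustrative assumptions, not the acupoint definitions used by LHAR.

```python
# Sketch: derive a "middle finger cun" length unit from MediaPipe hand
# landmarks and place an acupoint by an offset from a reference landmark.
# Landmark indices and the example offset are illustrative assumptions.
import math
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

image = cv2.imread("hand.jpg")                     # hypothetical input image
h, w = image.shape[:2]
result = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if result.multi_hand_landmarks:
    lm = result.multi_hand_landmarks[0].landmark   # 21 normalized keypoints

    # Assumed cun unit: distance between middle-finger PIP (10) and DIP (11).
    cun = math.dist((lm[10].x * w, lm[10].y * h), (lm[11].x * w, lm[11].y * h))

    # Example acupoint: reference landmark (wrist, index 0) plus an
    # offset measured in cun units (values are purely illustrative).
    ref = (lm[0].x * w, lm[0].y * h)
    acupoint = (ref[0] + 0.5 * cun, ref[1] - 1.0 * cun)
    cv2.circle(image, (int(acupoint[0]), int(acupoint[1])), 4, (0, 0, 255), -1)
```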
3. Converting a Display Screen into a Touchscreen
Authors: Qun Wang, Jun Cheng, San-Ming Shen, Yi-Jiang Shen, Jian-Xin Pang. Journal of Electronic Science and Technology (CAS), 2014, No. 1, pp. 139-143 (5 pages).
A method is presented to convert any display screen into a touchscreen by using a pair of cameras. Most state-of-the-art touchscreens make use of special touch-sensitive hardware or depend on infrared sensors in various configurations. We describe a novel computer-vision-based method that can robustly identify fingertips and detect touch with a precision of a few millimeters above the screen. In our system, the two cameras capture the display screen image simultaneously. Users can interact with a computer by the fingertip on the display screen. We have two important contributions: first, we develop a simple and robust hand detection method based on predicted images. Second, we determine whether a physical touch takes place by the homography of the two cameras. In this system, the appearance of the display screen in camera images is inherently predictable from the computer output images. Therefore, we can compute the predicted images and extract the human hand precisely by simply subtracting the predicted images from captured images.
Keywords: fingertip detection; hand detection; predicted image; touch detection; touchscreen
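A minimal OpenCV sketch of the predicted-image idea from the abstract above: the known screen content is warped into the camera view with a screen-to-camera homography and subtracted from the captured frame so that, ideally, only the hand remains. The threshold value and the pre-computed homography are assumptions for illustration, not the paper's calibration.

```python
# Sketch: segment the hand by subtracting a predicted camera image
# (the known screen content warped by a screen-to-camera homography)
# from the captured camera frame. Threshold and homography are assumed.
import cv2
import numpy as np

def segment_hand(camera_frame, screen_image, H_screen_to_camera, thresh=40):
    """Return a binary mask of pixels that differ from the predicted image."""
    h, w = camera_frame.shape[:2]
    predicted = cv2.warpPerspective(screen_image, H_screen_to_camera, (w, h))
    diff = cv2.absdiff(camera_frame, predicted)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # Remove small speckle noise left by imperfect prediction.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask
```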
4. Smart Drawing for Online Teaching
Authors: D.T.D.M. DAHANAYAKA, A.R. LOKUGE, J.A.D.E. JAYAKODY, I.U. ATTHANAYAKE. Instrumentation, 2021, No. 2, pp. 56-66 (11 pages).
Teaching and learning-related activities have embraced digital technology, especially under the global pandemic restrictions that prevailed during the last two years. Most academic and professional presentations are now conducted on online platforms, but presenter-audience interaction is hindered to a certain extent online, in contrast to face-to-face settings where real-time writing is beneficial when sketching is involved. The use of digital pens and pads is a solution for such instances, though the cost of acquiring such hardware is high, so economical solutions are essential where affordability is a concern. In this study, a real-time, user-friendly, innovative drawing system is developed to address the problems confronted in online presentations. This paper presents the development of an algorithm using hand landmark detection, which replaces the chalk, markers and regular ballpoint pens used in conventional communication and presentation with online platforms. The proposed application is implemented with Python and OpenCV libraries. Letters or sketches drawn in the air are taken straight to the computer screen by this algorithm. The proposed algorithm continuously identifies hand landmarks using the images fed by the web camera, and text or drawing patterns are displayed on the screen according to the movements of the hand landmarks in the image space. The developed user interface is also user-friendly, so the communication of letters and sketches is enabled. Although the concept has been developed and tested, with further research the versatility and accuracy of communication can be enhanced.
Keywords: hand landmark detection; off-line recognition; online teaching; smart drawing
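A short sketch of the core loop implied by the abstract above: hand landmarks are read from webcam frames and the index fingertip positions are joined into strokes on an overlay canvas. The choice of landmark 8 (index fingertip) and the drawing logic are assumptions; the published system's gesture handling and user interface are not reproduced.

```python
# Sketch: draw on an overlay canvas by tracking the index fingertip
# (MediaPipe landmark 8) across webcam frames. Simplified illustration only.
import cv2
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)
canvas, prev = None, None

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if canvas is None:
        canvas = np.zeros_like(frame)
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        tip = result.multi_hand_landmarks[0].landmark[8]    # index fingertip
        point = (int(tip.x * frame.shape[1]), int(tip.y * frame.shape[0]))
        if prev is not None:
            cv2.line(canvas, prev, point, (0, 255, 0), 3)   # extend the stroke
        prev = point
    else:
        prev = None                                         # hand lost: lift pen
    cv2.imshow("smart drawing", cv2.add(frame, canvas))
    if cv2.waitKey(1) & 0xFF == 27:                         # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```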
5. Affine transformation of virtual 3D object using 2D localization of fingertips
Authors: Mohammad Mahmudul ALAM, S.M. Mahbubur RAHMAN. Virtual Reality & Intelligent Hardware, 2020, No. 6, pp. 534-555 (22 pages).
Background: Interactions with virtual 3D objects in the virtual reality (VR) environment, using the gesture of fingers captured by a wearable 2D camera, have emerging applications in real life. Method: This paper presents an approach based on a two-stage convolutional neural network, one stage for the detection of the hand and another for the fingertips. One purpose of VR environments is to transform a virtual 3D object with affine parameters by using the gesture of the thumb and index fingers. Results: To evaluate the performance of the proposed system, one existing and another newly developed egocentric fingertip database are employed, so that learning involves large variations that are common in real life. Experimental results show that the proposed fingertip detection system outperforms the existing systems in terms of the precision of detection. Conclusion: The interaction performance of the proposed system in the VR environment is higher than that of the existing systems in terms of estimation error and correlation between the ground truth and estimated affine parameters.
Keywords: affine transformation; detection of fingertips; detection of hand; human-computer interaction; virtual reality
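A hedged sketch of how detected thumb and index fingertip positions could drive affine parameters (translation from the midpoint, uniform scale from the fingertip distance, in-plane rotation from the inter-fingertip angle) applied to a virtual object's projected vertices. The mapping constants and the vertex representation are illustrative assumptions, not the paper's estimator.

```python
# Sketch: map 2D thumb/index fingertip positions to affine parameters
# (translation, uniform scale, in-plane rotation) and apply them to a
# virtual object's 2D-projected vertices. Constants are illustrative.
import numpy as np

def affine_from_fingertips(thumb, index, ref_distance=100.0):
    """Build a 3x3 affine matrix from two fingertip positions (pixels)."""
    thumb, index = np.asarray(thumb, float), np.asarray(index, float)
    center = (thumb + index) / 2.0                 # translation target
    d = index - thumb
    scale = np.linalg.norm(d) / ref_distance       # pinch distance -> scale
    angle = np.arctan2(d[1], d[0])                 # fingertip angle -> rotation
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[scale * c, -scale * s, center[0]],
                     [scale * s,  scale * c, center[1]],
                     [0.0,        0.0,       1.0]])

def transform_vertices(vertices_2d, A):
    """Apply the affine matrix to an (N, 2) array of object vertices."""
    pts = np.hstack([vertices_2d, np.ones((len(vertices_2d), 1))])
    return (pts @ A.T)[:, :2]
```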
6. A Hand Gesture Based Interactive Presentation System Utilizing Heterogeneous Cameras (Cited: 2)
Authors: Bobo Zeng, Guijin Wang, Xinggang Lin. Tsinghua Science and Technology (EI, CAS), 2012, No. 3, pp. 329-336 (8 pages).
In this paper, a real-time system that utilizes hand gestures to interactively control a presentation is proposed. The system employs a thermal camera for robust human body segmentation to handle the complex background and varying illumination posed by the projector. A fast and robust hand localization algorithm is proposed, with which the head, torso, and arm are sequentially localized. Hand trajectories are segmented and recognized as gestures for interactions. A dual-step calibration algorithm is utilized to map the interaction regions between the thermal camera and the projected contents by integrating a Web camera. Experiments show that the system has a high recognition rate for hand gestures, and corresponding interactions can be performed correctly.
Keywords: hand gesture detection; gesture recognition; thermal camera; interactive presentation
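A minimal OpenCV sketch of the homography-chaining idea behind the dual-step calibration described above: point correspondences give a thermal-to-web-camera homography and a web-camera-to-projected-content homography, and their composition maps a hand position from the thermal image into slide coordinates. The correspondence points below are placeholders, not a real calibration.

```python
# Sketch: chain two homographies (thermal camera -> web camera -> projected
# content) so a hand position in the thermal image maps to slide coordinates.
import cv2
import numpy as np

# Matched calibration points (placeholder values, four per view).
thermal_pts = np.float32([[50, 60], [300, 55], [310, 220], [45, 230]])
webcam_pts  = np.float32([[80, 90], [540, 85], [550, 400], [75, 410]])
slide_pts   = np.float32([[0, 0], [1024, 0], [1024, 768], [0, 768]])

H_thermal_to_web, _ = cv2.findHomography(thermal_pts, webcam_pts)
H_web_to_slide, _ = cv2.findHomography(webcam_pts, slide_pts)
H_thermal_to_slide = H_web_to_slide @ H_thermal_to_web    # composed mapping

def thermal_to_slide(point_xy):
    """Map a hand position from thermal-image pixels to slide coordinates."""
    p = np.float32([[point_xy]])                           # shape (1, 1, 2)
    return cv2.perspectiveTransform(p, H_thermal_to_slide)[0, 0]

print(thermal_to_slide((200, 150)))
```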