Journal Articles
5 articles found
1. Human-computer interactions for virtual reality
Author: Feng TIAN. Virtual Reality & Intelligent Hardware, 2019, No. 3, pp. I0001-I0002 (2 pages)
Human-computer interactions constitute an important subject for the development and popularization of information technologies, as they are not only an important frontier technology in computer science but also an important auxiliary technology in virtual reality (VR). In recent years, Chinese researchers have made significant advances in human-computer interactions. To systematically display China's latest advances in human-computer interactions and thus provide an impetus for the development of VR and other related fields, we have solicited articles for this special issue from experts in this area and subjected them to the review process. The following articles have been selected for publication in this special issue.
Keywords: COMPUTER; HUMAN; FRONTIER
2. Gesture interaction in virtual reality (Cited by: 9)
Authors: Yang LI, Jin HUANG, Feng TIAN, Hong-An WANG, Guo-Zhong DAI. Virtual Reality & Intelligent Hardware, 2019, No. 1, pp. 84-112 (29 pages)
With the development of virtual reality (VR) and human-computer interaction technology, how to use natural and efficient interaction methods in the virtual environment has become a hot topic of research. Gesture is one of the most important communication methods of human beings and can effectively express users' demands. In the past few decades, gesture-based interaction has made significant progress. This article focuses on gesture interaction technology and discusses the definition and classification of gestures, input devices for gesture interaction, and gesture recognition technology. The application of gesture interaction technology in virtual reality is studied, the existing problems in current gesture interaction are summarized, and future developments are discussed.
Keywords: Virtual reality; Gesture interaction; Gesture recognition
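The survey above covers input devices and recognition techniques for gesture interaction. As a loose illustration (not taken from the article), the sketch below classifies a single tracked hand frame against a few hand-pose templates using normalized fingertip-to-palm distances; the joint layout, template values, and gesture set are assumptions made only for this example.

```python
import numpy as np

# Hypothetical gesture templates: fingertip-to-palm distances
# (thumb, index, middle, ring, pinky), normalized by hand size.
# Values are illustrative assumptions, not measured data.
GESTURE_TEMPLATES = {
    "fist":  np.array([0.35, 0.30, 0.30, 0.30, 0.30]),
    "point": np.array([0.40, 0.95, 0.30, 0.30, 0.30]),
    "open":  np.array([0.90, 0.95, 1.00, 0.95, 0.85]),
    "pinch": np.array([0.55, 0.55, 0.95, 0.90, 0.85]),
}

def classify_static_gesture(fingertips, palm, hand_size):
    """Nearest-template classification of a single hand frame.

    fingertips: (5, 3) array of fingertip positions from the tracker.
    palm: (3,) palm-center position.
    hand_size: scalar used to normalize distances across users.
    """
    distances = np.linalg.norm(fingertips - palm, axis=1) / hand_size
    best_label, best_score = None, float("inf")
    for label, template in GESTURE_TEMPLATES.items():
        score = np.linalg.norm(distances - template)
        if score < best_score:
            best_label, best_score = label, score
    return best_label, best_score

# Example: a frame with all fingers extended should match "open".
frame = np.array([[0.090, 0.00, 0.0],
                  [0.100, 0.01, 0.0],
                  [0.105, 0.00, 0.0],
                  [0.100, -0.01, 0.0],
                  [0.085, 0.00, 0.0]])
label, score = classify_static_gesture(frame, np.zeros(3), hand_size=0.105)
print(label, round(score, 3))
```

Dynamic gestures would extend such a scheme with temporal models (e.g., dynamic time warping or recurrent networks), which is the kind of recognition technology the survey reviews.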
3. Influence of multi-modality on moving target selection in virtual reality (Cited by: 1)
Authors: Yang LI, Dong WU, Jin HUANG, Feng TIAN, Hong'an WANG, Guozhong DAI. Virtual Reality & Intelligent Hardware, 2019, No. 3, pp. 303-315 (13 pages)
Background Owing to recent advances in virtual reality (VR) technologies, effective user interaction with dynamic content in 3D scenes has become a research hotspot. Moving target selection is a basic interactive task, and research on user performance in such tasks is significant for user interface design in VR. Unlike the static target selection examined in existing studies, moving target selection in VR is affected by changes in target speed, angle, and size, and some of these key factors remain under-researched. Methods This study designs an experimental scenario in which users play badminton in VR. By varying seven kinds of modal cues (vision, audio, haptics, and their combinations), five moving speeds, and four serving angles, the effects of these factors on performance and subjective feelings during moving target selection in VR are studied. Results The results show that the moving speed of the shuttlecock has a significant impact on user performance. The serving angle has a significant impact on the hitting rate but no significant impact on the hitting distance. Under combined modalities, the user's acquisition of the moving target is mainly driven by vision; adding additional modalities can improve user performance. Although the hitting distance increases in the trimodal condition, the hitting rate decreases. Conclusion This study analyses the results of user performance and subjective perception, and then provides suggestions on combining modality cues in different scenarios.
Keywords: MULTIMODAL; Moving target selection; Virtual reality
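For orientation on the factorial design described above (seven cue conditions × five speeds × four serving angles = 140 conditions), the sketch below shows how such a condition matrix and a randomized trial order might be generated; the cue labels, speed values, and angle values are placeholders, not the study's actual parameters.

```python
import itertools
import random

# Placeholder levels; the actual values used in the study are not given here.
cue_conditions = ["V", "A", "H", "V+A", "V+H", "A+H", "V+A+H"]  # 7 modality cue sets
speeds_m_per_s = [4, 6, 8, 10, 12]                              # 5 shuttle speeds
serve_angles_deg = [15, 30, 45, 60]                             # 4 serving angles

# Full-factorial design: 7 x 5 x 4 = 140 unique conditions.
conditions = list(itertools.product(cue_conditions, speeds_m_per_s, serve_angles_deg))
assert len(conditions) == 140

# One randomized block of trials per participant.
random.seed(42)  # fixed seed so the order is reproducible
trial_order = random.sample(conditions, len(conditions))
print(trial_order[:3])
```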
4. Non-Frontal Facial Expression Recognition Using a Depth-Patch Based Deep Neural Network (Cited by: 2)
Authors: Nai-Ming Yao, Hui Chen, Qing-Pei Guo, Hong-An Wang. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2017, No. 6, pp. 1172-1185 (14 pages)
The challenge of coping with non-frontal head poses during facial expression recognition results in a considerable reduction of accuracy and robustness when capturing expressions that occur during natural communication. In this paper, we attempt to recognize facial expressions under poses with large rotation angles from 2D videos. A depth-patch based 4D expression representation model is proposed. It is reconstructed from 2D dynamic images to delineate continuous spatial changes and temporal context under non-frontal cases. Furthermore, we present an effective deep neural network classifier, which can accurately capture pose-variant expression features from the depth patches and recognize non-frontal expressions. Experimental results on the BU-4DFE database show that the proposed method achieves a high recognition accuracy of 86.87% for non-frontal facial expressions within a range of head rotation angles of up to 52°, outperforming existing methods. We also present a quantitative analysis of the components contributing to the performance gain through tests on the BU-4DFE and Multi-PIE datasets.
Keywords: facial expression recognition; non-frontal head pose; DEPTH; spatial-temporal convolutional neural network
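The classifier described above operates on sequences of depth patches. As a rough, illustrative sketch only (the patch size, sequence length, layer sizes, and six-class label set are assumptions and do not reproduce the authors' architecture), a minimal spatio-temporal 3D-convolutional classifier in PyTorch could look like this:

```python
import torch
import torch.nn as nn

class DepthPatchExpressionNet(nn.Module):
    """Toy spatio-temporal CNN over a sequence of depth patches.

    Input: (batch, 1, T, H, W) depth-patch sequences.
    Output: logits over six basic expressions (assumed label set).
    """
    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),          # halves T, H, W
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(output_size=1),   # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(start_dim=1)
        return self.classifier(h)

# Example: a batch of 4 sequences, each 8 frames of 32x32 depth patches.
model = DepthPatchExpressionNet()
logits = model(torch.randn(4, 1, 8, 32, 32))
print(logits.shape)  # torch.Size([4, 6])
```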
5. EmotionMap: Visual Analysis of Video Emotional Content on a Map
Authors: Cui-Xia Ma, Jian-Cheng Song, Qian Zhu, Kevin Maher, Ze-Yuan Huang, Hong-An Wang. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2020, No. 3, pp. 576-591 (16 pages)
Emotion plays a crucial role in gratifying users' needs during their experience of movies and TV series, and may be underutilized as a framework for exploring video content and analysis. In this paper, we present EmotionMap, a novel way of presenting emotion to everyday users in a 2D geography, fusing spatio-temporal information with emotional data. The interface is composed of novel visualization elements interconnected to facilitate video content exploration, understanding, and searching. EmotionMap allows understanding of the overall emotion at a glance while also giving a rapid understanding of the details. First, we develop EmotionDisc, an effective tool for collecting audiences' emotions based on emotion representation models. We collect audience and character emotional data, and then integrate the metaphor of a map to visualize video content and emotion in a hierarchical structure. EmotionMap incorporates sketch interaction, providing a natural approach for users' active exploration. The novelty and effectiveness of EmotionMap have been demonstrated by a user study and experts' feedback.
Keywords: video visualization; emotion analysis; visual analysis; sketch interaction
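EmotionDisc is described above as a tool for collecting audience emotion based on emotion representation models; a common such model is the valence-arousal circumplex, and the sketch below maps a click on a disc widget to valence-arousal coordinates and a coarse label. The disc geometry, thresholds, and label placement are illustrative assumptions rather than the paper's actual design.

```python
import math

# Assumed circumplex layout: angle encodes emotion category, radius encodes intensity.
# Label angles (degrees, counter-clockwise from the positive valence axis) are illustrative.
LABEL_ANGLES = {"happy": 45, "surprised": 90, "angry": 135, "sad": 225, "calm": 315}

def click_to_emotion(x, y, cx, cy, radius):
    """Map a click at screen position (x, y) on a disc centered at (cx, cy)
    to (valence, arousal, label); screen y grows downward."""
    valence = (x - cx) / radius
    arousal = (cy - y) / radius
    intensity = min(math.hypot(valence, arousal), 1.0)
    if intensity < 0.15:                      # clicks near the center read as neutral
        return round(valence, 2), round(arousal, 2), "neutral"
    angle = math.degrees(math.atan2(arousal, valence)) % 360

    def angular_gap(label):
        diff = abs(angle - LABEL_ANGLES[label]) % 360
        return min(diff, 360 - diff)

    label = min(LABEL_ANGLES, key=angular_gap)
    return round(valence, 2), round(arousal, 2), label

# Example: a click in the upper-right quadrant reads as a positive, aroused emotion.
print(click_to_emotion(x=170, y=40, cx=100, cy=100, radius=100))  # -> (0.7, 0.6, 'happy')
```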