Journal Articles
18 articles found
1. Human-computer interactions for virtual reality
Authors: Feng TIAN. Virtual Reality & Intelligent Hardware, 2019, No. 3, pp. I0001-I0002 (2 pages)
Human-computer interactions constitute an important subject for the development and popularization of information technologies, as they are not only an important frontier technology in computer science but also an important auxiliary technology in virtual reality (VR). In recent years, Chinese researchers have made significant advances in human-computer interactions. To systematically present China's latest advances in human-computer interactions and thus provide an impetus for the development of VR and related fields, we solicited articles for this special issue from experts in this area, who also participated in the review process. The following articles have been selected for publication in this special issue.
Keywords: computer, human, frontier
2. Design of space-centered interaction using invisible and intangible spatial inputs
Authors: David Jean, Kwangjin Hong, Keechul Jung. Journal of Measurement Science and Instrumentation, CAS, 2012, No. 2, pp. 137-145 (9 pages)
In this paper, we investigate methodologies to improve direct-touch interaction on invisible and intangible spatial input. We first discuss the motivation for seeking a new input method for whole-body interaction and why it is meaningful, and describe the role spatial interaction can play in increasing a user's freedom of interaction. We propose a method of space-centered interaction using invisible and intangible spatial inputs. However, given their lack of tactile feedback and visual representation, direct-touch interaction on such inputs can be confusing. To take a step toward understanding the causes of and solutions to this phenomenon, we conducted two user experiments. In the first, we tested five helper setups that convey the location of the input by constraining the dimension in which it lies. The results show that using a marker on the ground together with a reference to the height of the user's body significantly improves performance on the locative task. In the second experiment, we created a dancing game using invisible and intangible spatial inputs and stress-tested the findings of the first experiment in this cognitively demanding context. The results show that the same helper setup still performs very well in that context.
Keywords: human-computer interaction (HCI), space-centered interaction, whole-body interaction, input method
3. Gesture interaction in virtual reality (Cited by: 12)
Authors: Yang LI, Jin HUANG, Feng TIAN, Hong-An WANG, Guo-Zhong DAI. Virtual Reality & Intelligent Hardware, 2019, No. 1, pp. 84-112 (29 pages)
With the development of virtual reality (VR) and human-computer interaction technology, how to use natural and efficient interaction methods in virtual environments has become a hot research topic. Gesture is one of the most important human communication methods and can effectively express users' demands. Over the past few decades, gesture-based interaction has made significant progress. This article focuses on gesture interaction technology: it discusses the definition and classification of gestures, input devices for gesture interaction, and gesture recognition technology. The application of gesture interaction in virtual reality is studied, existing problems in current gesture interaction are summarized, and future developments are discussed.
Keywords: virtual reality, gesture interaction, gesture recognition
4. Multidimensional image morphing: fast image-based rendering of open 3D and VR environments
Authors: Simon SEIBT, Bastian KUTH, Bartosz von Rymon LIPINSKI, Thomas CHANG, Marc Erich LATOSCHIK. 《虚拟现实与智能硬件(中英文)》 (Virtual Reality & Intelligent Hardware), 2025, No. 2, pp. 155-172 (18 pages)
Background: In recent years, the demand for interactive photorealistic three-dimensional (3D) environments has increased in various fields, including architecture, engineering, and entertainment. However, achieving a balance between quality and efficiency in high-performance 3D applications and virtual reality (VR) remains challenging. Methods: This study addresses this issue by revisiting and extending view interpolation for image-based rendering (IBR), which enables the exploration of spacious open environments in 3D and VR. We introduce multimorphing, a novel rendering method based on a spatial data structure of 2D image patches, called the image graph. Using this approach, novel views can be rendered with up to six degrees of freedom using only a sparse set of views. The rendering process requires neither 3D reconstruction of the geometry nor per-pixel depth information; all relevant data for the output are extracted from the local morphing cells of the image graph. The detection of parallax image regions during preprocessing reduces rendering artifacts by extrapolating image patches from adjacent cells in real time. In addition, a GPU-based solution is presented to resolve exposure inconsistencies within a dataset, enabling seamless brightness transitions when moving between areas with varying light intensities. Results: Experiments on multiple real-world and synthetic scenes demonstrate that the presented method achieves high, VR-compatible frame rates, even on mid-range and legacy hardware. While achieving adequate visual quality even for sparse datasets, it outperforms other IBR and current neural rendering approaches. Conclusions: Using the correspondence-based decomposition of input images into morphing cells of 2D image patches, multidimensional image morphing provides high-performance novel view generation, supporting open 3D and VR environments. Nevertheless, the handling of morphing artifacts in parallax image regions remains a topic for future research.
Keywords: computer graphics, 3D real-time rendering, computer vision, image morphing, virtual reality
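The core idea of synthesizing a novel view by blending nearby captured views can be shown with a toy sketch. The code below is a deliberate simplification, not the paper's multimorphing implementation: `blend_weights` and `morph_patch` are hypothetical names, and real morphing cells warp patches by correspondences rather than averaging pixels in place.

```python
# Toy sketch (assumed, not the paper's method): novel-view synthesis by
# weighted blending of corresponding patches from nearby captured views,
# with weights derived from inverse distances to those views.

def blend_weights(distances, eps=1e-6):
    """Inverse-distance weights over nearby views, normalized to sum to 1."""
    inv = [1.0 / (d + eps) for d in distances]
    total = sum(inv)
    return [w / total for w in inv]

def morph_patch(patches, distances):
    """Blend corresponding pixel patches (equal-length lists) from nearby views."""
    weights = blend_weights(distances)
    size = len(patches[0])
    return [sum(w * patch[i] for w, patch in zip(weights, patches)) for i in range(size)]

# Two captured views of the same 2-pixel patch; a query point equidistant
# from both views receives an even blend.
blended = morph_patch([[10, 20], [30, 40]], [1.0, 1.0])
```

A query point closer to one view would weight that view's patch more heavily, which is the intuition behind view interpolation in IBR.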
5. A survey of eye-movement data visualization (Cited by: 31)
Authors: 程时伟, 孙凌云. 《计算机辅助设计与图形学学报》 (Journal of Computer-Aided Design & Computer Graphics), EI CSCD, 2014, No. 5, pp. 698-707 (10 pages)
With the spread of eye-tracking technology in practical applications, large volumes of eye-movement data need to be processed and analyzed through appropriate visualization. Against this background, eye-movement data visualization has developed rapidly in basic theory, methods, and applications. This paper summarizes preprocessing and parameterization methods for eye-movement data, and on this basis introduces the basic framework of eye-movement data visualization and four main visualization methods: the scan-path method, the heatmap method, the area-of-interest method, and the 3D-space method. It then presents application examples of eye-movement data visualization in areas such as usability evaluation of user interfaces, and concludes with an outlook on future research trends.
Keywords: eye tracking, visualization, heatmap, scan path, visual analytics, human-computer interaction
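As a toy illustration of the heatmap method summarized above (not code from the surveyed work), fixations can be accumulated onto a grid with a duration-weighted Gaussian kernel; `fixation_heatmap` and its parameters are hypothetical names chosen for this sketch.

```python
# Illustrative sketch (assumed): building a fixation heatmap by accumulating
# a duration-weighted Gaussian kernel at each fixation point.
import math

def fixation_heatmap(width, height, fixations, sigma=2.0):
    """fixations: list of (x, y, duration_ms); returns a height x width grid."""
    grid = [[0.0] * width for _ in range(height)]
    for fx, fy, dur in fixations:
        for y in range(height):
            for x in range(width):
                d2 = (x - fx) ** 2 + (y - fy) ** 2
                grid[y][x] += dur * math.exp(-d2 / (2 * sigma ** 2))
    return grid

# A long fixation at (2, 2) and a short one at (5, 5): the heat peaks
# at the longer fixation.
heat = fixation_heatmap(8, 8, [(2, 2, 300), (5, 5, 100)])
```

Real tools render such a grid with a color map over the stimulus image; the accumulation step itself is as simple as shown.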
6. An eye-tracking method for human-computer interaction on mobile devices (Cited by: 18)
Authors: 程时伟, 孙志强. 《计算机辅助设计与图形学学报》 (Journal of Computer-Aided Design & Computer Graphics), EI CSCD, 2014, No. 8, pp. 1354-1361 (8 pages)
Traditional eye-tracking devices are structurally complex, large, and heavy; they are typically desk-mounted and cannot support mobile interaction in ubiquitous-computing environments. To address this, an eye-tracking method for mobile interaction is proposed, comprising four layers: eye-image processing, eye-movement feature detection, eye-movement data computation, and eye-movement interaction applications. First, the infrared eye image is filtered and binarized; then, based on the pupil-corneal reflection method, the pupil is detected by combining two-step localization with an improved ellipse-fitting method, and a scale-factor-driven template matching method is designed to detect the Purkinje image. On this basis, gaze-point coordinates and other eye-movement data are computed. Finally, a head-mounted eye-tracking prototype system based on a single infrared camera was designed and implemented. User tests show that the system offers moderate comfort together with high accuracy and robustness, verifying the feasibility and effectiveness of the proposed method for mobile interaction.
Keywords: eye tracking, pupil-corneal reflection, Purkinje image, mobile devices, human-computer interaction
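The first stage of the pipeline above (binarize the infrared image, then locate the dark pupil region) can be sketched in a few lines. This is an assumed minimal illustration, not the paper's code: real systems follow the centroid step with ellipse fitting and corneal-reflection detection, and `pupil_centroid` is a hypothetical name.

```python
# Minimal sketch (assumed): locate a pupil candidate by thresholding a
# grayscale eye image (the pupil is the darkest region) and taking the
# centroid of the below-threshold pixels.

def pupil_centroid(image, threshold=50):
    """image: 2D list of grayscale values (0-255). Returns (x, y) of dark pixels' centroid."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if value < threshold:  # dark pixel -> pupil candidate
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # no pupil found
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Synthetic 5x5 eye image with a dark 2x2 "pupil" in the upper left.
img = [[200] * 5 for _ in range(5)]
img[1][1] = img[1][2] = img[2][1] = img[2][2] = 10
```

The centroid gives a coarse pupil center that the ellipse-fitting refinement would then sharpen.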
7. An eye-tracking method for multi-device interaction (Cited by: 10)
Authors: 程时伟, 孙志强, 陆煜华. 《计算机辅助设计与图形学学报》 (Journal of Computer-Aided Design & Computer Graphics), EI CSCD, 2016, No. 7, pp. 1094-1104 (11 pages)
More and more human-computer interaction applications now rely on multiple devices working together, and traditional eye-tracking methods designed for a single device struggle to meet the needs of multi-device interaction. This paper therefore proposes an eye-tracking method for multi-device interaction. To cope with the effect on image recognition of the user's much larger eye-movement amplitude, the pupil is identified by combining candidate pupil regions with pupil-center recognition; meanwhile, the position of the Purkinje image is predicted so that Purkinje images lost during recognition can be interpolated, and on this basis a pupil-Purkinje-image reflection vector is established. In addition, device screens are identified with edge detection; by building a list of screen-corner positions for each device and comparing screen shapes and areas, different devices are distinguished. Gaze-point coordinates are then fitted from the pupil-Purkinje-image reflection vector, and a head-movement error compensation method improves the accuracy of gaze-point computation across devices. Finally, a head-mounted eye-tracking system, MultiGaze, was designed and implemented; user tests show that the proposed method effectively improves gaze-point accuracy in multi-device interaction environments.
Keywords: eye tracking, gaze point, multi-device, human-computer interaction
8. Automated Facial Expression Recognition and Age Estimation Using Deep Learning (Cited by: 3)
Authors: Syeda Amna Rizwan, Yazeed Yasin Ghadi, Ahmad Jalal, Kibum Kim. Computers, Materials & Continua, SCIE EI, 2022, No. 6, pp. 5235-5252 (18 pages)
With the advancement of computer vision techniques in surveillance systems, more proficient, intelligent, and sustainable facial expression and age recognition is needed. The main purpose of this study is to develop an accurate facial expression and age recognition system capable of error-free recognition of human expression and age in both indoor and outdoor environments. The proposed system first takes an input image, pre-processes it, and then detects faces in the entire image. Landmark localization then supports the prediction of a synthetic face mask. A novel set of features is extracted and passed to a classifier for accurate classification of expression and age group. The proposed system is tested on two benchmark datasets, namely the Gallagher collection person dataset and the Images of Groups dataset, and achieves remarkable results in terms of recognition accuracy and computational time. The proposed system is also applicable in consumer application domains such as online business negotiations, consumer behavior analysis, e-learning environments, and emotion robotics.
Keywords: feature extraction, face expression model, local transform features, recurrent neural network (RNN)
9. A Context-Aware Infrastructure for Supporting Applications with Pen-Based Interaction (Cited by: 5)
Authors: 栗阳, 关志伟, 戴国忠, 任向实, 韩勇. Journal of Computer Science & Technology, SCIE EI CSCD, 2003, No. 3, pp. 343-353 (11 pages)
Pen-based user interfaces, which leverage the affordances of the pen, provide users with more flexible and natural interaction. However, it is difficult to construct usable pen-based user interfaces because of the lack of support for their development. Toolkit-level support has been exploited to solve this problem, but this approach makes it hard to achieve platform independence, easy maintenance, and easy extension. In this paper a context-aware infrastructure, called WEAVER, is created to provide pen interaction services for both novel pen-based applications and legacy GUI-based applications. WEAVER aims to support the pen as another standard interactive device alongside the keyboard and mouse, and presents a high-level access interface to pen input. It employs application context to tailor its service to different applications. By modeling the application context and registering the relevant action adapters, WEAVER can offer services such as gesture recognition, continuous handwriting, and other fundamental ink manipulations to appropriate applications. One of the distinct features of WEAVER is that off-the-shelf GUI-based software packages can be easily enhanced with pen interaction without modifying the existing code. In this paper, the architecture and components of WEAVER are described. In addition, examples and feedback on its use are presented.
10. Influence of multi-modality on moving target selection in virtual reality (Cited by: 1)
Authors: Yang LI, Dong WU, Jin HUANG, Feng TIAN, Hong'an WANG, Guozhong DAI. Virtual Reality & Intelligent Hardware, 2019, No. 3, pp. 303-315 (13 pages)
Background: Owing to recent advances in virtual reality (VR) technologies, effective user interaction with dynamic content in 3D scenes has become a research hotspot. Moving target selection is a basic interactive task, and research on user performance in such tasks is significant for user interface design in VR. Unlike existing studies on static target selection, moving target selection in VR is affected by changes in target speed, angle, and size, and some key factors remain under-researched. Methods: This study designed an experimental scenario in which users play badminton in VR. By varying seven modality cues (visual, auditory, haptic, and their combinations), five movement speeds, and four serving angles, we studied the effects of these factors on performance and subjective experience in moving target selection in VR. Results: The moving speed of the shuttlecock had a significant impact on user performance. The serving angle had a significant impact on hit rate, but not on hitting distance. Under combined modalities, target acquisition was dominated by vision; adding further modalities improved user performance. Although hitting distance increased in the trimodal condition, hit rate decreased. Conclusion: This study analyzes user performance and subjective perception, and provides suggestions on combining modality cues in different scenarios.
Keywords: multimodal, moving target selection, virtual reality
11. Trajectory prediction model for crossing-based target selection
Authors: Hao ZHANG, Jin HUANG, Feng TIAN, Guozhong DAI, Hongan WANG. Virtual Reality & Intelligent Hardware, 2019, No. 3, pp. 330-340 (11 pages)
Background: Crossing-based target selection can attain lower error rates and higher interaction speed in some cases. Most research in target selection focuses on analyzing interaction results. Moreover, because trajectories play a much more important role in crossing-based target selection than in other interaction techniques, a good trajectory model can help designers predict interaction results during the target-selection process rather than only at its end. Methods: In this paper, a trajectory prediction model for crossing-based target selection tasks is proposed with reference to dynamic model theory. Results: Simulation results demonstrate that the model performs well in predicting trajectories, endpoints, and hitting time for target-selection motion; the average errors for trajectories, endpoints, and hitting time were 17.28%, 2.73 mm, and 11.50%, respectively.
Keywords: target selection, crossing-based selection, trajectory prediction
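The abstract does not specify the paper's dynamic model, so as a hedged illustration only: a classic dynamic model for aimed movement trajectories is the minimum-jerk profile, sketched below. The formulation is the standard textbook one, not necessarily the authors' model.

```python
# Illustrative sketch (assumed): minimum-jerk position profile for an aimed
# movement of duration T from start to target along one axis. The polynomial
# 10s^3 - 15s^4 + 6s^5 is the standard minimum-jerk interpolant.

def min_jerk_position(t, T, start, target):
    """Position at time t (0 <= t <= T) of a minimum-jerk movement."""
    s = t / T  # normalized time in [0, 1]
    return start + (target - start) * (10 * s**3 - 15 * s**4 + 6 * s**5)
```

Such a closed-form profile lets a predictor extrapolate the remaining trajectory and the endpoint from partially observed motion.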
12. Ensemble Knowledge Distillation for Federated Semi-Supervised Image Classification (Cited by: 1)
Authors: Ertong Shang, Hui Liu, Jingyang Zhang, Runqi Zhao, Junzhao Du. Tsinghua Science and Technology, 2025, No. 1, pp. 112-123 (12 pages)
Federated learning is an emerging privacy-preserving distributed learning paradigm in which many clients collaboratively train a shared global model under the orchestration of a remote server. Most current work on federated learning has focused on fully supervised settings, assuming that all data are annotated with ground-truth labels. This work considers a more realistic and challenging setting, Federated Semi-Supervised Learning (FSSL), where clients hold a large amount of unlabeled data and only the server hosts a small number of labeled samples. How to reasonably utilize the server-side labeled data and the client-side unlabeled data is the core challenge in this setting. In this paper, we propose a new FSSL algorithm for image classification based on consistency regularization and ensemble knowledge distillation, called EKDFSSL. Our algorithm uses the global model as the teacher in consistency regularization to enhance both the accuracy and the stability of client-side unsupervised learning on unlabeled data. In addition, we introduce an ensemble knowledge distillation loss to mitigate model overfitting during server-side retraining on labeled data. Extensive experiments on several image classification datasets show that EKDFSSL outperforms current baseline methods.
Keywords: federated learning, semi-supervised learning, federated semi-supervised learning, knowledge distillation
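The distillation ingredient above can be illustrated with a generic temperature-softened distillation loss (this is the textbook form, not EKDFSSL's exact loss, and the function names are hypothetical): the student's soft predictions are pulled toward the teacher's via cross-entropy.

```python
# Minimal sketch (assumed, not the paper's code): temperature-softened
# knowledge distillation loss between teacher and student logits.
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature; higher temperature gives softer targets."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between teacher soft targets and student soft predictions."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(p_teacher, p_student))
```

The loss is smallest when the student's distribution matches the teacher's, so minimizing it transfers the teacher's "dark knowledge" about relative class similarities.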
13. VRoot: A VR-Based application for manual root system architecture reconstruction
Authors: Dirk N. Baker, Tobias Selzner, Jens Henrik Göbbert, Hanno Scharr, Morris Riedel, Ebba Þóra Hvannberg, Andrea Schnepf, Daniel Zielasko. Plant Phenomics, 2025, No. 2, pp. 1-16 (16 pages)
This article describes an immersive virtual reality reconstruction tool for root system architectures from 3D scans of soil columns. In practical scenarios, experimental conditions are adapted to the needs of the data analysis pipeline, including sieving and drying the soil before scanning. Based on previous reports of automatic systems that do not reproduce what experts would annotate, we developed a virtual reality system to assist with the extraction of root systems in cases where automated approaches fall short of expert knowledge. The aim of the present study is to evaluate whether our immersive method is superior to classical annotation approaches when tested on synthetic datasets with untrained participants. Our laboratory user study evaluates participants' root extractions along with their ratings on central user experience and usability measures. We show a significant improvement in F1 score across conditions (noisy or clear data) as well as improved usability. Our study highlights that using virtual reality for root extraction improves accuracy, and we perform an in-depth evaluation of biases that occur when users trace roots in soil volumes.
Keywords: virtual reality, root phenotyping, root system architecture, 3D image analysis, immersive analytics
14. RFES: a real-time fire evacuation system for Mobile Web3D (Cited by: 4)
Authors: Feng-ting YAN, Yong-hao HU, Jin-yuan JIA, Qing-hua GUO, He-hua ZHU, Zhi-geng PAN. Frontiers of Information Technology & Electronic Engineering, SCIE EI CSCD, 2019, No. 8, pp. 1061-1075 (15 pages)
Many bottlenecks limit the computing power of Mobile Web3D, and they need to be resolved before a public fire evacuation system can be implemented on this platform. In this study, we focus on three key problems: (1) the scene data for large-scale building information modeling (BIM) are huge, making them difficult to transmit via the Internet and visualize on the Web; (2) the raw fire dynamics simulator (FDS) smoke diffusion data are also very large, making them extremely difficult to transmit and visualize on the Web; (3) a smart, AI-driven fire evacuation app for the public must be accurate and real-time. To address these problems, the following solutions are proposed: (1) the large-scale scene model is made lightweight; (2) the dynamic smoke data are also made lightweight; (3) dynamic obstacle maps built from the scene model and smoke data are used for optimal path planning with a heuristic method. We propose a real-time fire evacuation system based on the ant colony optimization (RFES-ACO) algorithm with reused dynamic pheromones. Simulation results show that the public can use Mobile Web3D devices to experience fire evacuation drills smoothly in real time. The real-time fire evacuation system (RFES) is efficient, and its evacuation rate is better than those of two other algorithms, the leader-follower fire evacuation algorithm and the random fire evacuation algorithm.
Keywords: fire evacuation drill, building information modeling (BIM) building space, Mobile Web3D, real-time fire evacuation system based on ant colony optimization (RFES-ACO) algorithm
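The ant-colony core of such a planner can be sketched generically. This is a hedged illustration of standard ACO steps (probabilistic next-node choice and pheromone evaporation/deposit), not the RFES-ACO algorithm itself; all names and parameters are hypothetical.

```python
# Illustrative sketch (assumed, not RFES-ACO): the two core ACO steps.
# choose_next picks a node with probability proportional to
# pheromone^alpha * heuristic^beta; deposit evaporates pheromone, then
# reinforces the nodes of a found path in proportion to 1/path_length.
import random

def choose_next(candidates, pheromone, heuristic, alpha=1.0, beta=2.0, rng=random):
    """candidates: node ids; pheromone/heuristic: dicts keyed by node id."""
    weights = [pheromone[c] ** alpha * heuristic[c] ** beta for c in candidates]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]

def deposit(pheromone, path, path_length, q=1.0, rho=0.1):
    """Evaporate all pheromone by rho, then deposit q/path_length on the path."""
    for node in pheromone:
        pheromone[node] *= (1 - rho)
    for node in path:
        pheromone[node] += q / path_length
    return pheromone

# With overwhelming pheromone on "a", the ant almost surely chooses "a".
rng = random.Random(0)
picked = choose_next(["a", "b"], {"a": 1000.0, "b": 0.001}, {"a": 1.0, "b": 1.0}, rng=rng)
```

Over iterations, shorter (faster) evacuation paths accumulate more pheromone and are chosen increasingly often.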
15. Activity Recognition Based on RFID Object Usage for Smart Mobile Devices (Cited by: 2)
Authors: Jaeyoung Yang, Joonwhan Lee, Joongmin Choi. Journal of Computer Science & Technology, SCIE EI CSCD, 2011, No. 2, pp. 239-246 (8 pages)
Activity recognition is a core aspect of ubiquitous computing applications. To deploy activity recognition systems in the real world, we need simple sensing systems with lightweight computational modules that accurately analyze sensed data. In this paper, we propose a simple method to recognize human activities using information about the objects involved in those activities. We apply activity theory to represent complex human activities and propose a penalized naive Bayes classifier for performing activity recognition. Our results show that our method reduces computation by up to an order of magnitude in both learning and inference without penalizing accuracy, compared with hidden Markov models and conditional random fields.
Keywords: activity recognition, activity theory, context-awareness, RFID
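The object-usage idea can be sketched with a plain naive Bayes classifier over observed objects, with Laplace smoothing. The paper's specific penalty term is not reproduced here (its form is not given in the abstract); all names, the vocabulary, and the toy data below are hypothetical.

```python
# Minimal sketch (assumed): classify an activity from the set of RFID-tagged
# objects used, via naive Bayes with Laplace smoothing.
import math

def train(examples, vocab):
    """examples: list of (activity, [objects]); returns per-activity log-prob tables."""
    counts_by_activity = {}
    for activity, objects in examples:
        counts = counts_by_activity.setdefault(activity, {o: 0 for o in vocab})
        for o in objects:
            counts[o] += 1
    tables = {}
    for activity, counts in counts_by_activity.items():
        total = sum(counts.values()) + len(vocab)  # Laplace smoothing
        tables[activity] = {o: math.log((c + 1) / total) for o, c in counts.items()}
    return tables

def classify(tables, objects):
    """Pick the activity maximizing the summed log-likelihood of observed objects."""
    return max(tables, key=lambda a: sum(tables[a].get(o, 0.0) for o in objects))

vocab = ["kettle", "cup", "toothbrush", "towel"]
data = [("make_tea", ["kettle", "cup"]), ("wash_up", ["toothbrush", "towel"])]
tables = train(data, vocab)
```

Because inference is a single pass summing per-object log-probabilities, it is far cheaper than sequence models such as HMMs or CRFs, which matches the paper's stated motivation.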
16. Non-Frontal Facial Expression Recognition Using a Depth-Patch Based Deep Neural Network (Cited by: 2)
Authors: Nai-Ming Yao, Hui Chen, Qing-Pei Guo, Hong-An Wang. Journal of Computer Science & Technology, SCIE EI CSCD, 2017, No. 6, pp. 1172-1185 (14 pages)
The challenge of coping with non-frontal head poses during facial expression recognition causes considerable reductions in accuracy and robustness when capturing expressions that occur during natural communication. In this paper, we attempt to recognize facial expressions under poses with large rotation angles from 2D videos. A depth-patch based 4D expression representation model is proposed. It is reconstructed from 2D dynamic images to delineate continuous spatial changes and temporal context in non-frontal cases. Furthermore, we present an effective deep neural network classifier that accurately captures pose-variant expression features from the depth patches and recognizes non-frontal expressions. Experimental results on the BU-4DFE database show that the proposed method achieves a high recognition accuracy of 86.87% for non-frontal facial expressions within a head rotation range of up to 52°, outperforming existing methods. We also present a quantitative analysis of the components contributing to the performance gain through tests on the BU-4DFE and Multi-PIE datasets.
Keywords: facial expression recognition, non-frontal head pose, depth, spatial-temporal convolutional neural network
17. Flexible computational photodetectors for self-powered activity sensing (Cited by: 1)
Authors: Dingtian Zhang, Canek Fuentes-Hernandez, Raaghesh Vijayan, Yang Zhang, Yunzhi Li, Jung Wook Park, Yiyang Wang, Yuhui Zhao, Nivedita Arora, Ali Mirzazadeh, Youngwook Do, Tingyu Cheng, Saiganesh Swaminathan, Thad Starner, Trisha L. Andrew, Gregory D. Abowd. npj Flexible Electronics, SCIE, 2022, No. 1, pp. 45-52 (8 pages)
Conventional vision-based systems, such as cameras, have demonstrated enormous versatility in sensing human activities and developing interactive environments. However, these systems have long been criticized for privacy, power, and latency issues arising from their underlying structure of pixel-wise analog signal acquisition, computation, and communication. In this research, we overcome these limitations by introducing in-sensor analog computation through the spatial distribution of interconnected photodetectors with weighted responsivity, creating what we call a computational photodetector. Computational photodetectors can extract mid-level vision features as a single continuous analog signal measured via a two-pin connection. We develop computational photodetectors using thin, flexible, low-noise organic photodiode arrays coupled with a self-powered wireless system, and demonstrate a set of designs that capture position, orientation, direction, speed, and identification information in applications ranging from explicit interactions on everyday surfaces to implicit activity detection.
Keywords: everyday, overcome, photodetector
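The weighted-responsivity idea (a single analog sum encoding a mid-level feature) can be mimicked numerically. This is an assumed toy model, not the paper's circuit: photodiodes along a strip are given linearly ramped responsivities, so the one summed signal encodes where a light spot falls.

```python
# Toy model (assumed): the single two-pin output of a computational
# photodetector is the responsivity-weighted sum of per-photodiode
# illumination; a linear responsivity ramp makes the sum encode position.

def summed_signal(light, weights):
    """light: per-photodiode illumination; weights: per-photodiode responsivity."""
    return sum(l * w for l, w in zip(light, weights))

weights = [0.25, 0.5, 0.75, 1.0]   # linearly ramped responsivities
left_spot = [1.0, 0.0, 0.0, 0.0]   # light spot over the leftmost diode
right_spot = [0.0, 0.0, 0.0, 1.0]  # light spot over the rightmost diode
```

Because the weighting happens in the analog domain, the device reads out one continuous signal instead of a pixel array, which is the source of the privacy, power, and latency benefits claimed above.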
18. EmotionMap: Visual Analysis of Video Emotional Content on a Map
Authors: Cui-Xia Ma, Jian-Cheng Song, Qian Zhu, Kevin Maher, Ze-Yuan Huang, Hong-An Wang. Journal of Computer Science & Technology, SCIE EI CSCD, 2020, No. 3, pp. 576-591 (16 pages)
Emotion plays a crucial role in gratifying users' needs when they experience movies and TV series, and may be underutilized as a framework for exploring video content and analysis. In this paper, we present EmotionMap, a novel way of presenting emotion to everyday users in 2D geography, fusing spatio-temporal information with emotional data. The interface is composed of novel visualization elements interconnected to facilitate video content exploration, understanding, and searching. EmotionMap allows the overall emotion to be understood at a glance while also giving rapid access to details. First, we developed EmotionDisc, an effective tool for collecting audience emotion based on emotion representation models. We collected audience and character emotional data, and then adopted the metaphor of a map to visualize video content and emotion in a hierarchical structure. EmotionMap incorporates sketch interaction, providing a natural approach for users' active exploration. The novelty and effectiveness of EmotionMap have been demonstrated by a user study and experts' feedback.
Keywords: video visualization, emotion analysis, visual analysis, sketch interaction