Journal Articles


260 articles found
1. Introducing article numbering to Visual Informatics
Visual Informatics, 2025, Issue 2, pp. I0002-I0002 (1 page)
Within the publishing industry, article numbering has emerged as an easy and efficient way to cite journal articles. Article numbering has already been successfully rolled out to Elsevier's multidisciplinary open access journal Chinese Journal of Aeronautics, as well as more than 1600 other journals, and has been well received by the academic community. Based on that positive feedback, we are now pleased to introduce article numbering to Visual Informatics from Volume 9, Issue 2.
Keywords: journal articles; academic community; article numbering; Visual Informatics
2. A human-centric perspective on interpretability in large language models
Authors: Zihan Zhou, Minfeng Zhu, Wei Chen. Visual Informatics, 2025, Issue 1, pp. I0002-I0004 (3 pages)
With the rapid advancement of natural language processing (NLP), large language models (LLMs) have demonstrated exceptional performance across tasks (Xu et al., 2024; Lee et al., 2024; Tan et al., 2023) like machine translation, text summarization, and question answering, significantly accelerating NLP research. Furthermore, LLMs have also facilitated advancements across diverse fields. In robotics, for example, LLMs enhance the interpretation and translation of user voice commands, enabling precise planning and execution of robotic arm movements (Driess et al., 2023).
Keywords: large language models (LLMs); machine translation; natural language processing (NLP); text summarization; human-centric; performance; interpretability
3. From perception to reflection: A layered framework for aesthetic education in the digital design of ancient painting
Authors: Xiaojiao Chen, Wenru Qi, Yulian Yang, Xiaosong Wang, Wei Chen. Visual Informatics, 2025, Issue 4, pp. 1-3 (3 pages)
As crucial carriers of culture, ancient paintings embody profound historical narratives and aesthetic values. Yet, the lack of professional expertise often hinders non-experts from fully appreciating these works. This disconnect leads to misinterpretation and creates a barrier that diminishes the value of aesthetic education. Digital design is currently revolutionizing public engagement with ancient painting, offering transformative pathways to enhance aesthetic appreciation (Chen et al., 2025a; Tang et al., 2024).
Keywords: reflection; aesthetic appreciation; aesthetic education; digital design; layered framework; ancient painting; historical narratives
4. Photogrammetry engaged automated image labeling approach
Authors: Jonathan Boyack, Jongseong Brad Choi. Visual Informatics, 2025, Issue 2, pp. 76-86 (11 pages)
Deep learning models require many instances of training data to be able to accurately detect the desired object. However, the labeling of images is currently conducted manually due to the inclusion of irrelevant scenes in the original images, especially for data collected in a dynamic environment such as drone imagery. In this work, we developed an automated extraction of training data using photogrammetry. This approach works with continuous and arbitrary collection of visual data, such as video, encompassing a stationary object. A dense point cloud was first generated to estimate the geometric relationship between individual images using a structure-from-motion (SfM) technique, followed by user-designated regions of interest (ROIs) that are automatically extracted from the original images. An orthophoto mosaic of the façade plane of the building shown in the point cloud was created to ease the user's selection of an intended labeling region of the object, which is a one-time process. We verified this method by using the ROIs extracted from a previously obtained dataset to train and test a convolutional neural network modeled to detect damage locations. The method put forward in this work allows a relatively small amount of labeling to generate a large amount of training data. We successfully demonstrate the capabilities of the technique with a dataset previously collected by a drone from an abandoned building in which many of the glass windows had been damaged.
Keywords: photogrammetry; deep learning; computer vision; structure-from-motion; orthophoto; ROI; data labeling; visual inspection
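The ROI-extraction step described in the abstract relies on the camera poses recovered by SfM: once a region is designated on the orthophoto, its 3D extent can be reprojected into every source image to crop training patches automatically. The paper's code is not part of this listing, so the following is only a minimal sketch of that reprojection idea under a pinhole camera model; all function names are illustrative.

```python
import numpy as np

def project_points(K, R, t, pts_3d):
    """Project Nx3 world points into pixel coordinates with a pinhole model."""
    cam = R @ pts_3d.T + t.reshape(3, 1)    # world frame -> camera frame
    uvw = K @ cam                           # camera frame -> image plane
    return (uvw[:2] / uvw[2]).T             # perspective divide -> Nx2 pixels

def roi_bbox_in_image(K, R, t, roi_corners_3d):
    """Pixel bounding box (u_min, v_min, u_max, v_max) of a 3D ROI in one image."""
    px = project_points(K, R, t, roi_corners_3d)
    return px[:, 0].min(), px[:, 1].min(), px[:, 0].max(), px[:, 1].max()
```

Given the per-image `(K, R, t)` from SfM, the returned box would be the crop region that replaces manual labeling for that image.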
5. Sequential pattern recognition in CAD operations: A deep learning framework for next-action prediction
Authors: Teerapord Lin, Paisit Khanarsa. Visual Informatics, 2025, Issue 4, pp. 47-48 (2 pages)
CAD command prediction has progressed significantly, shifting from traditional statistical techniques to more advanced deep learning models. Early approaches, such as frequency-based methods (Myers, 1998), first-order Markov models (Company et al., 2005), and their higher-order extensions (Rabiner, 1989), offered foundational insights but fell short in capturing semantic relationships and modeling complex dependencies.
Keywords: computer-aided design; recommendation system; deep learning; sentence embedding; sequential pattern recognition; geometric modeling
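As a point of reference for the early baselines the abstract cites, a first-order Markov predictor of the next CAD command fits in a few lines. This is a generic toy model, not the authors' deep learning framework, and the command names are invented.

```python
from collections import Counter, defaultdict

def fit_markov(sequences):
    """Count cmd -> next-cmd transitions over a list of command sequences."""
    trans = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            trans[cur][nxt] += 1
    return trans

def predict_next(trans, cmd):
    """Most frequent successor of `cmd`, or None if the command is unseen."""
    return trans[cmd].most_common(1)[0][0] if trans[cmd] else None
```

Exactly this inability to look past the single previous command is what the higher-order and deep-learning successors address.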
6. Visual analysis of LLM-based entity resolution from scientific papers
Authors: Siyu Wu, Yi Yang, Weize Wu, Ruiming Li, Yuyang Zhang, Ge Wang, Huobin Tan, Zipeng Liu, Lei Shi. Visual Informatics, 2025, Issue 2, pp. 41-50 (10 pages)
This paper focuses on visual analytics support for extracting domain-specific entities from extensive scientific literature, a task with inherent limitations under traditional named entity resolution methods. With the advent of large language models (LLMs) such as GPT-4, significant improvements over conventional machine learning approaches have been achieved, owing to LLMs' ability to integrate capabilities such as understanding multiple types of text into entity resolution. This research introduces a new visual analysis pipeline that integrates these advanced LLMs with versatile visualization and interaction designs to support batch entity resolution. Specifically, we focus on a specific field of materials science, Metal-Organic Frameworks (MOFs), and a large data collection, CSD-MOFs. Through collaboration with domain experts in materials science, we obtain well-labeled synthesis paragraphs. We propose human-in-the-loop refinement of the entity resolution process using visual analytics techniques, which allows domain experts to interactively integrate insights into LLM intelligence, including error analysis and interpretation of the retrieval-augmented generation (RAG) algorithm. Our evaluation through the case study of example selection for RAG demonstrates that this visual analysis approach effectively improves the accuracy of single-document entity resolution.
Keywords: entity resolution; large language models (LLMs); visual analytics; scientific literature analysis; interactive visualization; domain-specific knowledge structuring
7. VisMocap: Interactive visualization and analysis for multi-source motion capture data
Authors: Lishuang Zhan, Rongting Li, Rui Cao, Juncong Lin, Shihui Guo. Visual Informatics, 2025, Issue 2, pp. 30-40 (11 pages)
With the rapid advancement of artificial intelligence, research on enabling computers to assist humans in achieving intelligent augmentation, thereby enhancing the accuracy and efficiency of information perception and processing, has been steadily evolving. Among these developments, innovations in human motion capture technology have been emerging rapidly, leading to an increasing diversity of motion capture data types. This diversity necessitates the establishment of a unified standard for multi-source data to facilitate effective analysis and comparison of their capability to represent human motion. Additionally, motion capture data often suffer from significant noise, acquisition delays, and asynchrony, making their effective processing and visualization a critical challenge. In this paper, we utilized data collected from a prototype of flexible fabric-based motion capture clothing and from optical motion capture devices as inputs. Time synchronization and error analysis between the two data types were conducted, individual actions from continuous motion sequences were segmented, and the processed results were presented through a concise and intuitive visualization interface. Finally, we evaluated various system metrics, including the accuracy of time synchronization, the error of fitting fabric resistance to joint angles, the precision of motion segmentation, and user feedback.
Keywords: multi-source motion capture data; time synchronization; error analysis; motion segmentation; visualization system
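The abstract does not state how VisMocap synchronizes the fabric and optical streams. A common generic approach (assumed here for illustration, not taken from the paper) is to estimate the constant time offset between two sensors observing the same motion as the peak of their cross-correlation:

```python
import numpy as np

def estimate_lag(a, b):
    """Estimate the offset s (in samples) such that a[n] ~ b[n - s],
    via the peak of the full cross-correlation of zero-mean copies."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)
```

Shifting one stream by the estimated lag (and resampling to a common rate) would align the two data types before error analysis.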
8. Transforming cinematography lighting education in the metaverse
Authors: Xian Xu, Wai Tong, Zheng Wei, Meng Xia, Lik-Hang Lee, Huamin Qu. Visual Informatics, 2025, Issue 1, pp. 1-17 (17 pages)
Lighting education is a foundational component of cinematography education. However, many art schools do not have expensive soundstages for traditional cinematography lessons. Migrating physical setups to virtual experiences is a potential solution driven by metaverse initiatives. Yet there is still a lack of knowledge on the design of a VR system for teaching cinematography. We first analyzed the educational needs for cinematography lighting education by conducting interviews with six cinematography professionals from academia and industry. Accordingly, we presented Art Mirror, a VR soundstage for teachers and students to emulate cinematography lighting in virtual scenarios. We evaluated Art Mirror in terms of usability, realism, presence, sense of agency, and collaboration. Sixteen participants were invited to take a cinematography lighting course and assess the design elements of Art Mirror. Our results demonstrate that Art Mirror is usable and useful for cinematography lighting education, which sheds light on the design of VR cinematography education.
Keywords: education; learning; virtual reality; system; cinematography lighting
9. EmotionLens: Interactive visual exploration of the circumplex emotion space in literary works via affective word clouds
Authors: Bingyuan Wang, Qing Shi, Xiaohan Wang, You Zhou, Wei Zeng, Zeyu Wang. Visual Informatics, 2025, Issue 1, pp. 84-98 (15 pages)
Emotion (e.g., valence and arousal) is an important factor in literature (e.g., poetry and prose), and has rich value for plotting the life and knowledge of historical figures and appreciating the aesthetics of literary works. Currently, digital humanities and computational literature apply data statistics extensively in emotion analysis but lack visual analytics for efficient exploration. To fill the gap, we propose a user-centric approach that integrates advanced machine learning models and intuitive visualization for emotion analysis in literature. We make three main contributions. First, we consolidate a new emotion dataset of literary works across different periods, literary genres, and language contexts, augmented with fine-grained valence and arousal labels. Next, we design an interactive visual analytics system named EmotionLens, which allows users to perform multi-granularity (e.g., individual, group, society) and multi-faceted (e.g., distribution, chronology, correlation) analyses of literary emotions, supporting both exploratory and confirmatory approaches in digital humanities. Specifically, we introduce a novel affective word cloud with augmented word weight, position, and color to facilitate literary text analysis from an emotional perspective. To validate the usability and effectiveness of EmotionLens, we provide two consecutive case studies, two user studies, and interviews with experts from different domains. Our results show that EmotionLens bridges literary text, emotion, and various other attributes, enables efficient knowledge discovery in massive data, and facilitates raising and validating domain-specific hypotheses in literature.
Keywords: emotion visualization; word cloud; emotion analysis; literary text analysis; digital humanities
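The circumplex model referenced in the title places emotions on a plane spanned by valence and arousal. As a hedged illustration (not EmotionLens code, and with placeholder quadrant labels rather than the paper's taxonomy), mapping a (valence, arousal) pair to an angle and a coarse quadrant might look like:

```python
import math

def circumplex_angle(valence, arousal):
    """Angle in degrees on the valence-arousal circumplex (0 deg = positive valence axis)."""
    return math.degrees(math.atan2(arousal, valence)) % 360

def quadrant(valence, arousal):
    """Coarse quadrant label; the names are illustrative placeholders."""
    if valence >= 0:
        return "excited" if arousal >= 0 else "content"
    return "distressed" if arousal >= 0 else "depressed"
```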
10. Out-of-focus artifacts mitigation and autofocus methods for 3D displays
Authors: T. Chlubna, T. Milet, P. Zemčík. Visual Informatics, 2025, Issue 1, pp. 31-42 (12 pages)
This paper proposes a novel content-aware method for automatic focusing of the scene on a 3D display. The method addresses the common problem that visualized content is often out of focus, which adversely affects the perceived 3D content. The method outperforms the existing focusing method, with almost 30% lower error. Both the existing and the novel focusing methods are extended with depth-of-field enhancement of the scene to mitigate out-of-focus artifacts. The relation between the total depth range of the scene and the visual quality of the result is discussed and evaluated in human perception experiments. A space-warping method for synthetic scenes is proposed to reduce out-of-focus artifacts while maintaining the scene's appearance. A user study was conducted to evaluate the proposed methods and identify the crucial parameters of the scene-focusing process on the 3D stereoscopic display by Looking Glass Factory. The study confirmed the efficiency of the proposals and discovered that the depth-of-field artifact mitigation might not be suitable for all scenes, despite theoretical hypotheses. The overall proposal of this paper is a set of methods that can be used to produce the best user experience with an arbitrary scene displayed on a 3D display.
Keywords: 3D display; autofocus; stereoscopy; holography; image processing
11. Visual analysis of multi-subject association patterns in high-dimensional time-varying student performance data
Authors: Lianen Ji, Ziyi Wang, Shirong Qiu, Guang Yang, Sufang Zhang. Visual Informatics, 2025, Issue 2, pp. 51-62 (12 pages)
Exploring the association patterns of student performance in depth can help administrators and teachers optimize the curriculum structure and teaching plans more specifically to improve teaching effectiveness in a college undergraduate major. However, these high-dimensional time-varying student performance data involve multiple associated subjects, such as students, courses, and teachers, which exhibit complex interrelationships across academic semesters, knowledge categories, and student groups. This makes it challenging to conduct a comprehensive analysis of association patterns. To this end, we construct a visual analysis framework, called MAPVis, to support multi-method and multi-level interactive exploration of the association patterns in student performance. MAPVis consists of two stages: in the first stage, we extract students' learning patterns and further introduce mutual information to explore the distribution of learning patterns; in the second stage, various learning patterns and subject attributes are integrated based on a hierarchical apriori algorithm to achieve a multi-subject interactive exploration of the association patterns among students, courses, and teachers. Finally, we conduct a case study using real student performance data to verify the applicability and effectiveness of MAPVis.
Keywords: learning pattern; course grades; multi-subject; association pattern; visual analysis
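The second stage rests on apriori-style frequent-itemset mining. As a rough stand-in for the paper's hierarchical apriori variant, a brute-force frequent-itemset counter over toy course-enrollment transactions can illustrate the support computation; it deliberately omits the candidate pruning that makes real apriori efficient.

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support, max_size=3):
    """Brute-force frequent itemsets: support = fraction of transactions
    containing the itemset. A toy stand-in for apriori's pruned search."""
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for k in range(1, max_size + 1):
            for combo in combinations(items, k):
                counts[combo] += 1
    return {s: c / n for s, c in counts.items() if c / n >= min_support}
```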
12. Dye advection without the blur: ML-based flow visualization
Authors: Sebastian Künzel, Daniel Weiskopf. Visual Informatics, 2025, Issue 3, pp. 13-29 (17 pages)
Semi-Lagrangian texture advection (SLTA) enables efficient visualization of 2D and 3D unsteady flow. The major drawback of SLTA-based visualizations is numerical diffusion caused by iterative texture interpolation. We focus on reducing numerical diffusion in techniques that use textures sparsely populated by solid blobs, as is typical in dye advection. A ReLU-based model architecture is the foundation of our ML-based approach. Multiple model configurations are trained to learn a performant interpolation model that reduces numerical diffusion. Our evaluation investigates the models' ability to generalize with respect to the flow and the length of the advection process. The model with the best tradeoff between computational effort, quality of the result, and generality of application is found to be single-layer ReLU-based. This model is further analyzed, explained in depth, and improved using symmetry constraints. Additionally, a metamodel is fitted to predict single-layer ReLU model parameters for advection processes of any length. The metamodel removes the need for any prior training when applying our technique to a new scenario. We also show that our model is compatible with Back and Forth Error Compensation and Correction to further improve the quality of the advection result. We demonstrate that our model shows excellent diffusion reduction properties in typical examples of 3D steady and unsteady flow visualization. Finally, we utilize the strong diffusion reduction capabilities of our model to compute dye advection with exponential decay, a novel method that we introduce to visualize the extent and evolution of unsteadiness in both 2D and 3D unsteady flow.
Keywords: flow visualization; 2D and 3D unsteady flow; semi-Lagrangian texture advection; machine learning; ReLU
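The numerical diffusion the paper targets comes from the interpolation inside each semi-Lagrangian step: every step resamples the texture at backtraced positions, and linear interpolation smears sharp dye blobs a little more each iteration. A minimal 1D sketch of one such step (generic SLTA, not the authors' ML model) makes the blur mechanism visible:

```python
import numpy as np

def semi_lagrangian_step(tex, vel, dt):
    """One 1D semi-Lagrangian advection step: trace each sample point
    backwards along the velocity field and linearly interpolate the
    texture there. The interpolation is the source of numerical diffusion."""
    n = len(tex)
    x = np.arange(n, dtype=float)
    src = np.clip(x - vel * dt, 0, n - 1)    # backtraced source positions
    i0 = np.floor(src).astype(int)
    i1 = np.minimum(i0 + 1, n - 1)
    w = src - i0
    return (1 - w) * tex[i0] + w * tex[i1]   # linear interpolation -> blur
```

With an integer displacement the blob moves losslessly; with a fractional one, a single unit spike is immediately smeared over two cells, which is exactly the diffusion a learned interpolation model tries to suppress.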
13. What about thematic information? An analysis of the multidimensional visualization of individual mobility
Authors: Aline Menin, Clément Quere, Jorge Wagner, Sonia Chardonnel, Paule-Annick Davoine, Wolfgang Stuerzlinger, Carla Maria Dal Sasso Freitas, Luciana Nedel, Marco Winckler. Visual Informatics, 2025, Issue 1, pp. 99-115 (17 pages)
This paper reviews the literature on the visualization of individual mobility data, with a focus on thematic integration. It emphasizes the importance of visualization in understanding mobility patterns within a population and how it helps mobility experts address domain-specific questions. We analyze 38 papers published between 2010 and 2024 in GIS and VIS venues that describe visualizations of multidimensional data related to individual movements in urban environments, concentrating on individual mobility rather than traffic data. Our primary aim is to report advances in interactive visualization for individual mobility analysis, particularly regarding the representation of thematic information about people's motivations for mobility. Our findings indicate that the thematic dimension is only partially represented in the literature, despite its critical significance in transportation. This gap often stems from the challenge of identifying data sources that inherently provide this information, necessitating that visualization designers and developers navigate multiple heterogeneous data sources. We identify the strengths and limitations of existing visualizations and suggest potential research directions for the field.
Keywords: individual mobility; information visualization; spatio-temporal visualization; human mobility; thematic properties
14. Contextualized visual analytics for multivariate events
Authors: Lei Peng, Ziyue Lin, Natalia Andrienko, Gennady Andrienko, Siming Chen. Visual Informatics, 2025, Issue 2, pp. 14-29 (16 pages)
For event analysis, information from both before and after the event can be crucial in certain scenarios. By incorporating a contextualized perspective into event analysis, analysts can gain deeper insights from the events. We propose a contextualized visual analysis framework that enables the identification and interpretation of temporal patterns within and across multivariate events. The framework consists of a design of visual representations for multivariate event contexts, a data processing workflow to support the visualization, and a context-centered visual analysis system to facilitate the interactive exploration of temporal patterns. To demonstrate the applicability and effectiveness of our framework, we present case studies using real-world datasets from two different domains and an expert study conducted with experienced data analysts.
Keywords: visual analytics; event analysis; contextualized analysis; interactive exploration; visualization design
15. Self-similarity guided regression with contrast enhancement for spine segmentation
Authors: Xiaojia Zhu, Chunyu Li, Rui Chen, Zhiwen Shao. Visual Informatics, 2025, Issue 4, pp. 61-71 (11 pages)
Accurate spine segmentation is critical for scoliosis diagnosis and treatment. For instance, automatic Cobb angle measurement for scoliosis relies on precisely localized vertebral masks. However, it remains a challenging task due to low tissue contrast, blurred vertebral edges, and overlapping anatomical structures. In this paper, we propose SRNet, a pure segmentation network that produces binary masks of each vertebra. SRNet integrates two novel components: a Self-similarity Guided Dynamic Convolution (SGDC) module and a Contrast-Enhanced Boundary Decoder (CEBD). SGDC exploits the repetitive structure of vertebrae by leveraging non-local attention to compute self-similarity across feature maps and dynamic convolution to adaptively combine multiple convolution kernels. CEBD sharpens segmentation boundaries via a reverse-attention mechanism that erases the coarse prediction and focuses on missing edge details, combined with a spectral-residual filter that amplifies high-frequency edge information. Extensive experiments on the AASCE spine X-ray dataset show that SRNet achieves a high Dice score of 92.37%, outperforming state-of-the-art approaches. While our primary focus here is mask segmentation, the accurate vertebral masks produced by SRNet could readily support future tasks such as scoliosis Cobb angle estimation.
Keywords: spine segmentation; self-similarity; dynamic convolution; reverse-attention; spectral-residual
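The reported Dice score of 92.37% is the standard overlap metric between predicted and ground-truth binary masks. A minimal implementation of the metric itself (the standard definition, not code from SRNet) is:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|).
    Returns ~1.0 for perfect overlap, 0.0 for disjoint masks."""
    pred = np.asarray(pred, bool)
    target = np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```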
16. YOLO-SAATD: An efficient SAR airport and aircraft target detector
Authors: Daobin Ma, Zhanhong Lu, Zixuan Dai, Yangyue Wei, Li Yang, Haimiao Hu, Wenqiao Zhang, Dongping Zhang. Visual Informatics, 2025, Issue 2, pp. 87-93 (7 pages)
While object detection performs well on natural images, it faces challenges in Synthetic Aperture Radar (SAR) images for detecting airports and aircraft due to discrete scattering points, complex backgrounds, and multi-scale targets. Existing methods struggle with computational inefficiency, omission of small targets, and low accuracy. We propose a SAR airport and aircraft target detection model based on YOLO, named YOLO-SAATD (You Only Look Once - SAR Airport and Aircraft Target Detector), which tackles these challenges from three perspectives. 1. Efficiency: a lightweight hierarchical multi-scale backbone reduces parameters and enhances detection speed. 2. Fine granularity: a "ScaleNimble Neck" integrates feature reshaping and scale-aware aggregation to enhance detail detection and feature capture in multi-scale SAR images. 3. Precision: the Wise-IoU loss function is used to optimize bounding box localization and enhance detection accuracy. Experiments on the SAR-Airport-1.0 and SAR-AirCraft-1.0 datasets show that YOLO-SAATD improves precision and mAP50 by 1%-2%, increases detection frame rate by 15%, and reduces model parameters by 25% compared to YOLOv8n, validating its effectiveness for SAR airport and aircraft target detection.
Keywords: SAR image; aircraft detection; airport detection; deep learning; YOLO
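Wise-IoU extends the plain Intersection-over-Union measure with a dynamic focusing weight; the underlying IoU between two axis-aligned boxes, which also drives mAP50 matching, is straightforward. This generic sketch shows plain IoU only, not the Wise-IoU weighting:

```python
def iou(box_a, box_b):
    """Intersection over Union for (x1, y1, x2, y2) axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

mAP50 counts a detection as correct when its IoU with a ground-truth box is at least 0.5.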
17. VirtuNarrator: Crafting museum narratives via spatial layout in creating customized virtual museums
Authors: Yonghao Chen, Tan Tang, Xiaojiao Chen, Yueying Li, Qinghua Liu, Xiaosong Wang. Visual Informatics, 2025, Issue 3, pp. 71-84 (14 pages)
Curation in museums involves not only presenting exhibits to visitors but also deeply shaping a systematic narrative experience through deliberate spatial layout design of the museum space. In contrast, the dynamic nature of virtual reality (VR) environments establishes virtual museums as a more potent space for both layout optimization and narrative construction, particularly when integrating visitors' diverse preferences to optimize the virtual museum and convey narratives. Therefore, we first collaborated with experienced curators on a formative study to understand the workflow of curation and summarize the museum narratives that weave exhibits, galleries, and museum architecture into a compelling story. We then proposed a museum spatial layout framework that clarifies three narrative levels (exhibit level, gallery level, and architecture level) to support the controllable spatial layout of the museum's elements. Based on that, we developed VirtuNarrator, a proof-of-concept prototype designed to assist visitors in choosing different narrative themes, filtering exhibits, creating and adjusting galleries, and freely connecting them. The evaluation results validated that visitors received a more systematic museum narrative experience and perceived the multi-perspective narrative design in VirtuNarrator. We also provide insights into VR-based museum narrative enhancement beyond spatial layout design.
Keywords: virtual museum; virtual reality; museum narrative; museum layout; user experience
18. Unified 3D Gaussian splatting for motion and defocus blur reconstruction
Authors: Li Liu, Jing Duan, Xiaodong Fu, Wei Peng, Lijun Liu. Visual Informatics, 2025, Issue 4, pp. 84-95 (12 pages)
This paper proposes a unified 3D Gaussian splatting framework consisting of three key components for motion and defocus blur reconstruction. First, a dual-blur perception module is designed to generate pixel-wise masks and predict the types of motion and defocus blur, guiding structural feature extraction. Second, blur-aware Gaussian splatting integrates blur-aware features into the splatting process for accurate modeling of the global and local scene structure. Third, an Unoptimized Gaussian Ratio (UGR)-opacity joint optimization strategy is proposed to refine under-optimized regions, improving reconstruction accuracy under complex blur conditions. Experiments on a newly constructed motion and defocus blur dataset demonstrate the effectiveness of the proposed method for novel view synthesis. Compared with state-of-the-art methods, our framework achieves improvements of 0.28 dB, 2.46%, and 39.88% in PSNR, SSIM, and LPIPS, respectively. For deblurring tasks, it achieves improvements of 0.36 dB, 3.24%, and 28.96% on the same metrics. These results highlight the robustness and effectiveness of this approach.
Keywords: 3D Gaussian splatting; blur reconstruction; dual-blur perception; blur-aware feature; joint optimization
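The PSNR gains reported above are measured in decibels against a reference image. A generic implementation of the metric (the standard definition, not code from the paper) is:

```python
import numpy as np

def psnr(img, ref, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images with values in [0, peak].
    Higher is better; identical images give infinity."""
    mse = np.mean((np.asarray(img, float) - np.asarray(ref, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

An improvement of 0.28 dB on this logarithmic scale corresponds to a multiplicative reduction of the mean squared error.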
19. Leveraging personality as a proxy of perceived transparency in hierarchical visualizations
Authors: Tomás Alves, Carlota Dias, Daniel Gonçalves, Sandra Gama. Visual Informatics, 2025, Issue 1, pp. 43-57 (15 pages)
Understanding which factors affect information visualization transparency continues to be one of the most relevant challenges in current research, especially since trust shapes how users build on knowledge and use it. This work extends the current body of research by studying the user's subjective evaluation of the visualization transparency of hierarchical charts along the clarity, coverage, and look-and-feel dimensions. Additionally, we extend the user profile to better understand whether personality facets have a biasing effect on the trust-building process. Our results show that the data encodings do not affect how users perceive visualization transparency while controlling for personality factors. Regarding personality, the propensity to trust affects how users judge the clarity of a hierarchical chart. Our findings provide new insights into the research challenges of measuring trust and understanding the transparency of information visualization. Specifically, we explore how personality factors manifest in this trust-building relationship and in user interaction within visualization systems.
Keywords: conscientiousness; trust; hierarchical data; visualization transparency; perception
20. Key-isovalue selection and hierarchical exploration visualization of weather forecast ensembles
Authors: Feng Zhou, Hao Hu, Fengjie Wang, Jiamin Zhu, Wenwen Gao, Min Zhu. Visual Informatics, 2025, Issue 1, pp. 58-70 (13 pages)
Weather forecast ensembles are commonly used to assess the uncertainty and confidence of weather predictions. Conventional methods in meteorology often employ ensemble mean and standard deviation plots, as well as spaghetti plots, to visualize ensemble data. However, these methods suffer from significant information loss and visual clutter. In this paper, we propose a new approach for uncertainty visualization of weather forecast ensembles, including isovalue selection based on information loss and hierarchical visualization that integrates visual abstraction and detail preservation. Our approach uses non-uniform downsampling to select key isovalues and provides an interactive visualization method based on hierarchical clustering. First, we sample key isovalues by contour probability similarity and determine the optimal sampling number using an information-loss curve. Then, the corresponding isocontours are presented to guide users in selecting key isovalues. Once an isovalue is chosen, we perform agglomerative hierarchical clustering on the isocontours based on signed distance fields and generate visual abstractions for each isocontour cluster to avoid visual clutter. We link a bubble tree to the visual abstractions to explore the details of isocontour clusters at different levels. We demonstrate the utility of our approach through two case studies with meteorological experts on real-world data. We further validate its effectiveness by quantitatively assessing information loss and visual clutter. Additionally, we confirm its usability through expert evaluation.
Keywords: ensemble visualization; isocontours; hierarchical visualization; isovalue selection; spaghetti plots