Aviation data analysis can help airlines understand passenger needs and thus provide passengers with more sophisticated and better services. How to explore the implicit information in, and analyze the features contained in, large amounts of data has become an important issue in civil aviation passenger data analysis. This paper offers uncertainty analysis and visualization methods for data records and property measurements, based on visual analysis and uncertainty measure theory combined with parallel coordinates, radar charts, histograms, pixel charts, and rich interaction. At the same time, the data-source representation clearly shows the uncertainty and hidden information as an information base for passenger service.
As industrial production progresses toward digitalization, massive amounts of data have been collected, transmitted, and stored, with large-scale, high-dimensional, heterogeneous, and spatiotemporally dynamic characteristics. The high complexity of industrial big data poses challenges for the practical decision-making of domain experts, leading to ever-increasing needs for integrating computational intelligence with human perception into traditional data analysis. Industrial big data visualization integrates theoretical methods and practical technologies from multiple disciplines, including data mining, information visualization, computer graphics, and human-computer interaction, providing a highly effective means of understanding and exploring complex industrial processes. This review summarizes the state-of-the-art approaches, characterizes them with six visualization methods, and categorizes them based on analytical tasks and applications. Furthermore, key research challenges and potential future directions are identified.
Geochemical survey data are essential across Earth Science disciplines but are often affected by noise, which can obscure important geological signals and compromise subsequent prediction and interpretation. Quantifying prediction uncertainty is hence crucial for robust geoscientific decision-making. This study proposes a novel deep learning framework, the Spatially Constrained Variational Autoencoder (SC-VAE), for denoising geochemical survey data with integrated uncertainty quantification. The SC-VAE incorporates spatial regularization, which enforces spatial coherence by modeling inter-sample relationships directly within the latent space. The performance of the SC-VAE was systematically evaluated against a standard Variational Autoencoder (VAE) using geochemical data from the gold polymetallic district in the northwestern part of Sichuan Province, China. Both models were optimized using Bayesian optimization, with objective functions specifically designed to maintain essential geostatistical characteristics. Evaluation metrics include variogram analysis, quantitative measures of spatial interpolation accuracy, visual assessment of denoised maps, statistical analysis of data distributions, and decomposition of uncertainties. Results show that the SC-VAE achieves superior noise suppression and better preservation of spatial structure compared to the standard VAE, as demonstrated by a significant reduction in the variogram nugget effect and an increased partial sill. The SC-VAE produces denoised maps with clearer anomaly delineation and more regularized data distributions, effectively mitigating outliers and reducing kurtosis. Additionally, it delivers improved interpolation accuracy and spatially explicit uncertainty estimates, facilitating more reliable and interpretable assessments of prediction confidence. The SC-VAE framework thus provides a robust, geostatistically informed solution for enhancing the quality and interpretability of geochemical data, with broad applicability in mineral exploration, environmental geochemistry, and other Earth Science domains.
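The spatial regularization described above can be sketched as an extra loss term that pulls together the latent codes of geographically close samples. Below is a minimal illustrative implementation with Gaussian distance weighting; the function name, the bandwidth parameter, and the pairwise form are assumptions for illustration, not the authors' actual objective.

```python
import math

def spatial_penalty(latents, coords, bandwidth=1.0):
    """Spatial-coherence regularizer (hypothetical sketch): penalize
    latent-code differences between samples that are close in space."""
    n = len(latents)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            # squared geographic distance between samples i and j
            d2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            w = math.exp(-d2 / bandwidth ** 2)  # nearby samples weigh more
            # squared latent-code difference, down-weighted with distance
            total += w * sum((a - b) ** 2 for a, b in zip(latents[i], latents[j]))
    return total
```

In a VAE this term would be added to the usual reconstruction plus KL loss, so that denoised maps stay spatially coherent.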
Earthquakes are highly destructive spatio-temporal phenomena whose analysis is essential for disaster preparedness and risk mitigation. Modern seismological research produces vast volumes of heterogeneous data from seismic networks, satellite observations, and geospatial repositories, creating the need for scalable infrastructures capable of integrating and analyzing such data to support intelligent decision-making. Data warehousing technologies provide a robust foundation for this purpose; however, existing earthquake-oriented data warehouses remain limited, often relying on simplified schemas, domain-specific analytics, or cataloguing efforts. This paper presents the design and implementation of a spatio-temporal data warehouse for seismic activity. The framework integrates spatial and temporal dimensions in a unified schema and introduces a novel array-based approach for managing many-to-many relationships between facts and dimensions without intermediate bridge tables. A comparative evaluation against a conventional bridge-table schema demonstrates that the array-based design improves fact-centric query performance, while the bridge-table schema remains advantageous for dimension-centric queries. To reconcile these trade-offs, a hybrid schema is proposed that retains both representations, ensuring balanced efficiency across heterogeneous workloads. The proposed framework demonstrates how spatio-temporal data warehousing can address schema complexity, improve query performance, and support multidimensional visualization. In doing so, it provides a foundation for integrating seismic analysis into broader big data-driven intelligent decision systems for disaster resilience, risk mitigation, and emergency management.
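The array-based design can be contrasted with a bridge table in a few lines. This toy sketch (hypothetical table contents and function names) shows why a fact-centric query is cheaper in the array-based schema: the affected-region keys are read directly from the fact row instead of being joined through a bridge table.

```python
# Bridge-table design: many-to-many pairs (fact_id, region_id) in a side table.
bridge = [(1, "A"), (1, "B"), (2, "B")]

# Array-based design: each fact row carries an inline array of region keys.
facts_array = {
    1: {"mag": 6.1, "regions": ["A", "B"]},
    2: {"mag": 5.4, "regions": ["B"]},
}

def regions_of_fact_bridge(fact_id):
    # fact-centric query must scan/join the bridge table
    return [r for f, r in bridge if f == fact_id]

def regions_of_fact_array(fact_id):
    # fact-centric query reads the array directly, no join needed
    return facts_array[fact_id]["regions"]
```

The reverse lookup (all facts for a region) is the dimension-centric case where the bridge table wins, which is what motivates the paper's hybrid schema.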
The Hue-Saturation-Intensity (HSI) color model, a psychologically appealing color model, was employed to visualize uncertainty, represented by relative prediction error, in a case study of the spatial prediction of topsoil pH in peri-urban Beijing. A two-dimensional legend was designed to accompany the visualization: the vertical axis (hue) visualizes the predicted values, and the horizontal axis (whiteness) visualizes the prediction error. Different ways of visualizing uncertainty are also briefly reviewed in this paper. The case study indicated that visualizing both predictions and prediction uncertainty offers a way to enhance visual exploration of data uncertainty and to compare different prediction methods, or predictions of entirely different variables. Whitish regions of the visualization map can simply be interpreted as unsatisfactory prediction results, which may require additional samples or more suitable prediction models for better predictions.
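The two-dimensional legend maps the predicted value to hue and the relative prediction error to whiteness. A minimal sketch, approximating the paper's HSI scheme with Python's standard-library HLS model (an assumption; the authors used a true HSI transform, and the function name and axis scaling here are illustrative):

```python
import colorsys

def uncertainty_color(pred, err, pred_min, pred_max, err_max):
    """Map predicted value -> hue (blue=low, red=high) and relative
    prediction error -> whiteness; returns an RGB triple in [0, 1]."""
    t = (pred - pred_min) / (pred_max - pred_min)  # 0..1 along the value axis
    hue = (1.0 - t) * 2.0 / 3.0                    # 2/3 (blue) down to 0 (red)
    whiteness = min(err / err_max, 1.0)            # more error -> whiter
    lightness = 0.5 + 0.5 * whiteness              # push color toward white
    saturation = 1.0 - whiteness                   # fully white at max error
    return colorsys.hls_to_rgb(hue, lightness, saturation)
```

At zero error the color is fully saturated; at maximum error it is pure white, matching the "whitish = unsatisfactory prediction" reading of the map.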
The control system of Hefei Light Source II (HLS-II) is a distributed system based on the Experimental Physics and Industrial Control System (EPICS). The existing archiving system requires maintaining central configuration files: when process variables in the control system are added, removed, or updated, the configuration files must be manually modified to stay consistent with the control system. This paper presents a new data archiving method that automates the configuration of archiving parameters. The system uses a microservice architecture to integrate the EPICS Archiver Appliance and RecSync, so it can collect all archived meta-configuration from the distributed input/output controllers and enter it into the EPICS Archiver Appliance automatically. Furthermore, we developed a web-based GUI that provides automatic visualization of real-time and historical data. At present, this system is under commissioning at HLS-II. The results indicate that the new archiving system is reliable and convenient to operate, and its maintenance-free operation mode is valuable for large-scale scientific facilities.
WebScope, a visualization tool, was developed as a web browser application based on Java applets embedded in HTML pages, in order to provide worldwide access to the EAST experimental data. It can display data from various trees on different servers in a single panel. With WebScope, it is easy to compare different data sources and perform simple calculations over them.
Monitoring data are often used to identify stormwater runoff characteristics and in stormwater runoff modelling without consideration of their inherent uncertainties. Integrating discrete sample analysis and error propagation analysis, this study attempted to quantify the uncertainties of discrete chemical oxygen demand (COD), total suspended solids (TSS) concentration, stormwater flowrate, stormwater event volumes, COD event mean concentration (EMC), and COD event loads in terms of flow measurement, sample collection, storage, and laboratory analysis. The results showed that the uncertainties due to sample collection, storage, and laboratory analysis of COD from stormwater runoff are 13.99%, 19.48%, and 12.28%, respectively. Meanwhile, the flow measurement uncertainty was 12.82%, and the sample collection uncertainty of TSS from stormwater runoff was 31.63%. Based on the law of propagation of uncertainties, the uncertainties of event flow volume, COD EMC, and COD event loads were quantified as 7.03%, 10.26%, and 18.47%, respectively.
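The event-scale figures rest on the law of propagation of uncertainties: for independent relative error sources, the combined relative uncertainty is the root sum of squares. A minimal sketch (the 3% and 4% components in the usage comment are illustrative, not values from the study):

```python
import math

def combined_uncertainty(components):
    """Law of propagation of uncertainty for independent relative errors:
    combine the component uncertainties in quadrature."""
    return math.sqrt(sum(u ** 2 for u in components))

# e.g. hypothetical 3% flow and 4% analysis uncertainties combine to 5%
```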
With long-term marine surveys and research, and especially with the development of new marine environment monitoring technologies, prodigious amounts of complex marine environmental data are generated and continue to increase rapidly. These data are massive in volume, widely distributed, multi-source, heterogeneous, multi-dimensional, and dynamic in structure and time. The present study recommends an integrative visualization solution for these data, to enhance the visual display of data and data archives, and to develop joint use of data distributed among different organizations or communities. This study also analyses web services technologies and defines the concept of the marine information grid, then focuses on spatiotemporal visualization and proposes a process-oriented spatiotemporal visualization method. We discuss how marine environmental data can be organized based on this method, and how the organized data are represented for use with web services and stored in a reusable fashion. In addition, we provide an original visualization architecture that is integrative and based on the explored technologies. Finally, we propose a prototype system for marine environmental data of the South China Sea, with visualizations of Argo floats, sea surface temperature fields, sea current fields, salinity, in-situ investigation data, and ocean stations. The integrative visualization architecture is illustrated on the prototype system, which highlights the process-oriented spatiotemporal visualization method and demonstrates the benefits of the architecture and methods described in this study.
Simulation and interpretation of marine controlled-source electromagnetic (CSEM) data often approximate the transmitter source as an ideal horizontal electric dipole (HED) and assume that the receivers are located on a flat seafloor. In practice, however, the transmitter dipole source will be rotated, tilted, and deviated from the survey profile by ocean currents, and free-fall receivers may likewise be rotated to an arbitrary horizontal orientation and located on a sloping seafloor. In this paper, we investigate the effects of uncertainties in the transmitter tilt, rotation, and deviation from the survey profile, as well as in the receivers' location and orientation, on marine CSEM data. The model study shows that uncertainties in all position and orientation parameters of both the transmitter and the receivers propagate into observed data uncertainties, but to different extents. In interpreting marine data, field data uncertainties caused by the position and orientation uncertainties of both the transmitter and the receivers need to be taken into account.
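The effect of transmitter tilt and rotation can be illustrated by decomposing the dipole moment: tilting moves part of the moment into the vertical component, and in-plane rotation moves it crossline. A hypothetical geometry sketch (not the authors' modeling code; the angle convention is an assumption):

```python
import math

def rotated_moment(m, tilt_deg, azimuth_deg):
    """Decompose a HED source moment of magnitude m after a tilt out of
    the horizontal plane and an in-plane rotation away from the profile."""
    t = math.radians(tilt_deg)
    a = math.radians(azimuth_deg)
    mx = m * math.cos(t) * math.cos(a)  # inline (along-profile) component
    my = m * math.cos(t) * math.sin(a)  # crossline component
    mz = m * math.sin(t)                # vertical component
    return mx, my, mz
```

Even a 10-degree tilt reduces the inline moment by roughly 1.5%, which is one way such orientation uncertainty leaks into the observed fields.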
Cyber security has been thrust into the limelight in the modern technological era because of an array of attacks that often bypass untrained intrusion detection systems (IDSs). Therefore, greater attention has been directed at deciphering better methods for identifying attack types so that IDSs can be trained more effectively. Key cyber-attack insights exist in big data; however, an efficient approach is required to determine strong attack types for training IDSs in key areas. Despite the rising growth in IDS research, there is a lack of studies involving big data visualization, which is key. The KDD99 data set has served as a strong benchmark since 1999; therefore, we utilized this data set in our experiment. In this study, we utilized a hash algorithm, a weight table, and a sampling method to deal with the inherent problems of analyzing big data: volume, variety, and velocity. By utilizing a visualization algorithm, we gained insights into the KDD99 data set, clearly identifying "normal" clusters and describing distinct clusters of effective attacks.
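The volume-reduction step can be sketched as deterministic hash-based sampling, which selects the same subset of records on every pass over the data. This is an illustrative reconstruction; the study's actual hash algorithm, weight table, and sampling parameters are not specified here.

```python
import hashlib

def hash_sample(records, rate_percent):
    """Keep a record when its MD5 digest, reduced modulo 100, falls below
    the target rate; repeated runs select the same subset, so sampled
    views of a big-data stream stay stable across passes."""
    kept = []
    for rec in records:
        bucket = int(hashlib.md5(rec.encode()).hexdigest(), 16) % 100
        if bucket < rate_percent:
            kept.append(rec)
    return kept
```

A weight table could then rebalance the kept records so rare attack types are not drowned out by the "normal" majority.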
A GIS for ocean applications, "the Xiamen Atmospheric and Oceanographic Data Management and Display System (AODMDS)", has been designed and developed. The system is based on ArcObjects (AO), a component-based GIS development tool. The paper discusses in detail the storage and organization of atmospheric and oceanographic data, the strategy and methods for visualizing and mapping such data, and the implementation of these methods in AODMDS. It also discusses some advanced display control techniques that expand the functions of ArcObjects. One of these is the "gradient-fill-style color-map control," which provides a feasible, color-rich display control for all types of raster maps. As a stand-alone desktop GIS built on AO, AODMDS provides effective data management and powerful mapping and visualization functions for atmospheric and oceanographic data.
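A gradient-fill-style color map boils down to linear interpolation between color stops. A hypothetical re-creation in a few lines (AODMDS's actual control is built on ArcObjects; the default blue-to-red stops here are an assumption):

```python
def gradient_color(value, vmin, vmax, stops=((0, 0, 255), (255, 0, 0))):
    """Map a raster cell value to an RGB color by linearly blending
    between two color stops (blue at vmin, red at vmax by default)."""
    t = max(0.0, min(1.0, (value - vmin) / (vmax - vmin)))  # clamp to [0, 1]
    (r0, g0, b0), (r1, g1, b1) = stops
    return (round(r0 + t * (r1 - r0)),
            round(g0 + t * (g1 - g0)),
            round(b0 + t * (b1 - b0)))
```

Applying this per cell yields the smooth, color-rich ramps such a control provides for raster maps.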
Data analysis and visualization are an important application area of big data, and visual analysis is an important method for big data analysis. Data visualization presents data in a visual form, such as a chart or map, to help people understand its meaning and extract that meaning quickly and easily. Visualization can fully reveal the patterns, trends, and dependencies in data that are hard to find in other forms of display. Big data visual analysis combines the strengths of computers, whether static or interactive, with interactive analysis methods and interactive technologies, directly and effectively helping people understand the information behind big data; it is indispensable in the era of big data and, used properly, highly intuitive. Graphical analysis turns the valuable information in complex data relationships into a powerful tool, and it represents a significant business opportunity; with the rise of big data, important technologies suited to dealing with such complex relationships have emerged. Graphics come in a variety of shapes and sizes for a variety of business problems, and choosing the right method requires understanding each method's relative strengths and weaknesses as well as the data itself. The first step of graphical analysis is to get the right data and answer the goal; the key steps in getting data are: target, collect, clean, and connect.
The availability and quantity of remotely sensed and terrestrial geospatial data sets are on the rise. Historically, these data sets have been analyzed and queried on 2D desktop computers; however, immersive technologies, and specifically immersive virtual reality (iVR), allow for the integration, visualization, analysis, and exploration of these 3D geospatial data sets. iVR can deliver remote and large-scale geospatial data sets to the laboratory, providing embodied experiences of field sites across the earth and beyond. We describe a workflow for the ingestion of geospatial data sets and the development of an iVR workbench, and present their application for an experience of Iceland's Thrihnukar volcano where we: (1) combined satellite imagery with terrain elevation data to create a basic reconstruction of the physical site; (2) used terrestrial LiDAR data to provide a geo-referenced point cloud model of the magmatic-volcanic system, as well as the LiDAR intensity values for the identification of rock types; and (3) used Structure-from-Motion (SfM) to construct a photorealistic point cloud of the inside of the volcano. The workbench provides tools for the direct manipulation of the georeferenced data sets, including scaling, rotation, and translation, and a suite of geometric measurement tools, including length, area, and volume. Future developments will be inspired by an ongoing user study that formally evaluates the workbench's mature components in the context of fieldwork and analysis activities.
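The geometric measurement tools reduce to standard vector formulas over the georeferenced point data. A minimal sketch of the length and area tools (function names are illustrative; the workbench's volume tool is omitted):

```python
import math

def polyline_length(points):
    """Length tool: sum of 3D segment lengths along a picked polyline."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def triangle_area(a, b, c):
    """Area tool: half the magnitude of the cross product of two edges."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cross = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
    return 0.5 * math.sqrt(sum(x * x for x in cross))
```

A polygonal area tool would triangulate the picked surface and sum triangle areas in the same way.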
This study focuses on meeting the challenges of big data visualization by using data reduction based on feature selection, to reduce the volume of big data and minimize model training time (Tt) while maintaining data quality. We address these challenges with the embedded "Select from model" (SFM) method, driven by the random forest importance (RFI) algorithm, and compare it with the filter method "Select percentile" (SP) based on the chi-square (Chi2) score, for selecting the most important features. The selected features are then fed into classification using the logistic regression (LR) and k-nearest neighbor (KNN) algorithms. The classification accuracy (AC) of LR is compared with that of KNN, in Python, on eight data sets, to see which method produces the best results when feature selection is applied. The study concluded that feature selection has a significant impact on the analysis and visualization of the data once repetitive data and data that do not affect the goal are removed. After several comparisons, the study proposes SFMLR: SFM based on the RFI algorithm for feature selection, with the LR algorithm for classification. The proposal proved its efficacy in comparison with recent literature.
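The two compared pipelines map directly onto scikit-learn, assuming that is the library behind the method names in the abstract. A minimal sketch on synthetic data (the real study used eight data sets and also evaluated KNN; all parameter values here are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel, SelectPercentile, chi2
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the study's data sets.
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)
X = abs(X)  # chi2 requires non-negative feature values

# Embedded method: SelectFromModel driven by random-forest importances (SFM + RFI).
sfm = SelectFromModel(
    RandomForestClassifier(n_estimators=50, random_state=0)).fit(X, y)
X_sfm = sfm.transform(X)

# Filter method: SelectPercentile with the chi-square score (SP + Chi2).
sp = SelectPercentile(chi2, percentile=25).fit(X, y)
X_sp = sp.transform(X)

# The reduced feature set feeds a classifier, e.g. logistic regression.
acc = LogisticRegression(max_iter=1000).fit(X_sfm, y).score(X_sfm, y)
```

Swapping the reduced matrices and classifiers gives the four method combinations the study compares.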
In recent years, with the wide application of image data visual extraction technology in the field of industrial engineering, the development of the industrial economy has reached a new stage. To explore the interaction between pellet microstructure and compressive strength, the pellet microstructures needed for the experiment were first obtained using a Leica DM4500P microscope. The area proportions of hematite, calcium ferrite, magnetite, calcium silicate, and pores in the pellet microstructure were extracted by visual extraction of the image data. The relationship between the area proportions of the mineral components and compressive strength was then established with backpropagation neural network (BPNN), generalized regression neural network (GRNN), and beetle antennae search-generalized regression neural network (BAS-GRNN) algorithms, showing that the pellet microstructure can be used as a prediction standard for compressive strength. The errors of BPNN and BAS-GRNN are 5.13% and 3.37%, respectively, both less than 5.5%. Therefore, through data visualization, we are able to discuss the connection between the components of pellet microstructure and compressive strength and provide new research ideas for improving the compressive strength and metallurgical performance of pellets.
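A GRNN is essentially a Gaussian-kernel weighted average of training targets, which is what BAS tunes (via the smoothing parameter) in the study. A one-dimensional illustrative sketch (the study's actual inputs are the five mineral-area proportions; the function name and default sigma are assumptions):

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    """Generalized regression neural network prediction: a Gaussian-kernel
    weighted average of training targets (Nadaraya-Watson form)."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    return sum(w * yi for w, yi in zip(weights, train_y)) / sum(weights)
```

Beetle antennae search would then pick the sigma that minimizes validation error, which is the BAS-GRNN combination the study reports.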
The mathematical theory of line-segment uncertainty models is summarized to reach a general conception: the εσ error band model is a basic uncertainty model that efficiently depicts line accuracy and quality, while the εm model and error entropy can be regarded as its supplements. The error band model reflects and describes the influence of line uncertainty on polygon uncertainty. Therefore, the statistical characteristics of line error are studied in depth by analyzing the probability that the line error falls into a certain range. Moreover, theoretical accordance is achieved in selecting the error buffer for line features and the error indicator. The relationship between the accuracy of a polygon's area and the error loop of its boundary is deduced and computed.
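For a normally distributed line error, the probability of falling inside the ±k·σ error band follows directly from the standard normal CDF; for example, about 68.3% of errors fall within one σ of the line. A minimal sketch of this computation (function name is illustrative):

```python
import math

def prob_within_band(k):
    """Probability that a zero-mean normal error with standard deviation
    sigma falls inside the +/- k*sigma band, via the error function."""
    return math.erf(k / math.sqrt(2))
```

Such probabilities are what link the choice of error-buffer width to a stated confidence level for the line feature.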
Water resources are among the basic resources for human survival, and water protection has become a major problem for countries around the world. However, most traditional water quality monitoring research is still concerned with the collection of water quality indicators and ignores the analysis of the monitoring data and its value. In this paper, using the Laravel and AdminLTE frameworks, we introduce the design and implementation of a water quality data visualization platform based on Baidu ECharts. Through deployed water quality sensors, the collected indicator data are transmitted in real time over the 4G network to a big data processing platform deployed on Tencent Cloud. The collected monitoring data are analyzed, and the results are visualized with Baidu ECharts. Test results showed that the system runs well and will provide decision support for water resource protection.
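Server-side, such a platform typically assembles an ECharts option object and ships it to the browser as JSON. A hypothetical minimal example (the indicator and values are illustrative, not the platform's actual schema; the option keys follow the public ECharts format):

```python
import json

# Minimal ECharts option for one water-quality time series,
# serialized for delivery to the browser-side chart.
option = {
    "title": {"text": "Dissolved oxygen (mg/L)"},
    "xAxis": {"type": "category", "data": ["08:00", "09:00", "10:00"]},
    "yAxis": {"type": "value"},
    "series": [{"type": "line", "data": [7.9, 8.1, 7.6]}],
}
payload = json.dumps(option)
```

On the page, `echarts.setOption` consumes the parsed object to render the chart.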
Funding: supported in part by the National Key Research and Development Plan Project (2022YFB3304700) and in part by the Xinliao Talent Program of Liaoning Province (XLYC2202002).
Funding: supported by the National Natural Science Foundation of China (Nos. 42530801, 42425208), the Natural Science Foundation of Hubei Province, China (No. 2023AFA001), the MOST Special Fund from the State Key Laboratory of Geological Processes and Mineral Resources, China University of Geosciences (No. MSFGPMR2025-401), and the China Scholarship Council (No. 202306410181).
Funding: under the auspices of the Knowledge Innovation Frontier Project of the Institute of Soil Science, Chinese Academy of Sciences (No. ISSASIP0716) and the National Natural Science Foundation of China (Nos. 40701070, 40571065).
Funding: Supported by the National Natural Science Foundation of China (No. 11375186)
Abstract: The control system of Hefei Light Source II (HLS-II) is a distributed system based on the Experimental Physics and Industrial Control System (EPICS). The existing archiving system requires maintaining central configuration files: when process variables in the control system are added, removed, or updated, the configuration files must be modified manually to stay consistent with the control system. This paper presents a new method for data archiving that realizes automatic configuration of the archiving parameters. The system uses a microservice architecture to integrate the EPICS Archiver Appliance and RecSync. In this way, the system can collect all the archiving meta-configuration from the distributed input/output controllers and enter it into the EPICS Archiver Appliance automatically. Furthermore, we also developed a web-based GUI to provide automatic visualization of real-time and historical data. At present, this system is under commissioning at HLS-II. The results indicate that the new archiving system is reliable and convenient to operate. This maintenance-free mode of operation is valuable for large-scale scientific facilities.
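The auto-registration step can be sketched against the Archiver Appliance's management REST interface, which accepts bulk PV submissions as JSON. The host, port, and PV names below are hypothetical, and in the system described above the PV list would be harvested from the IOCs via RecSync rather than hard-coded.

```python
import json
import urllib.request

MGMT_URL = "http://localhost:17665/mgmt/bpl/archivePV"  # hypothetical host/port

def build_archive_request(pv_names):
    # The Archiver Appliance management endpoint accepts a JSON list of
    # {"pv": name} objects for bulk archive submission.
    return [{"pv": name} for name in pv_names]

def archive_pvs(pv_names):
    payload = json.dumps(build_archive_request(pv_names)).encode()
    req = urllib.request.Request(
        MGMT_URL, data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req) as resp:   # network call; not run here
        return json.load(resp)                  # per-PV submission status

print(build_archive_request(["HLS:RING:CURRENT", "HLS:LINAC:ENERGY"]))
```

In a microservice arrangement like the one described, a small daemon would subscribe to RecSync updates and call `archive_pvs` whenever IOCs report new or changed records, eliminating the manual configuration files.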
Funding: Supported by the National Natural Science Foundation of China (No. 10835009), the Key Project of the Knowledge Innovation Program of the Chinese Academy of Sciences (No. KJCX3.SYW.N4), and the 973 Project of the Chinese Ministry of Science and Technology (No. 2009GB103000)
Abstract: A visualization tool was developed for the web browser, based on Java applets embedded in HTML pages, in order to provide worldwide access to the EAST experimental data. It can display data from various trees on different servers in a single panel. With WebScope, it is easier to compare different data sources and perform simple calculations over them.
Funding: Supported by the National Natural Science Foundation of China (No. 50778098) and the Youth Project of the Fujian Provincial Department of Science & Technology (No. 2007F3093)
Abstract: Monitoring data are often used to identify stormwater runoff characteristics and in stormwater runoff modelling without consideration of their inherent uncertainties. Integrating discrete sample analysis with error propagation analysis, this study attempted to quantify the uncertainties of discrete chemical oxygen demand (COD), total suspended solids (TSS) concentration, stormwater flowrate, stormwater event volumes, COD event mean concentration (EMC), and COD event loads in terms of flow measurement, sample collection, storage, and laboratory analysis. The results showed that the uncertainties due to sample collection, storage, and laboratory analysis of COD from stormwater runoff were 13.99%, 19.48%, and 12.28%, respectively. Meanwhile, the flow measurement uncertainty was 12.82%, and the sample collection uncertainty of TSS from stormwater runoff was 31.63%. Based on the law of propagation of uncertainties, the uncertainties of event flow volume, COD EMC, and COD event loads were quantified as 7.03%, 10.26%, and 18.47%, respectively.
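The law of propagation of uncertainties invoked above reduces, for independent relative uncertainties in a product or quotient, to a root-sum-of-squares combination. The sketch below illustrates that rule with made-up component values; it does not reproduce the paper's uncertainty budget, whose component breakdown is not given here.

```python
import math

def combined_relative_uncertainty(components):
    """Combine independent relative uncertainties in quadrature.

    Standard root-sum-of-squares form of the law of propagation of
    uncertainties for a product/quotient of independent quantities.
    The component values used below are illustrative only.
    """
    return math.sqrt(sum(u * u for u in components))

# e.g. an event-load-style quantity built from a concentration term and a
# volume term, each carrying its own relative uncertainty:
u_load = combined_relative_uncertainty([0.10, 0.07])
print(f"{u_load:.4f}")  # ~0.1221, i.e. about 12.2%
```

The same function combines any number of independent terms, which is how per-step uncertainties (collection, storage, laboratory analysis) roll up into an event-level figure.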
Funding: Supported by the Knowledge Innovation Program of the Chinese Academy of Sciences (No. KZCX1-YW-12-04) and the National High Technology Research and Development Program of China (863 Program) (Nos. 2009AA12Z148, 2007AA092202). Support for this study was also provided by the Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences (IGSNRR, CAS) and the Institute of Oceanology, CAS
Abstract: With long-term marine surveys and research, and especially with the development of new marine environment monitoring technologies, prodigious amounts of complex marine environmental data are generated and continue to increase rapidly. These data are massive in volume, widely distributed, multi-source, heterogeneous, multi-dimensional, and dynamic in structure and time. The present study recommends an integrative visualization solution for these data, to enhance the visual display of data and data archives, and to develop joint use of data distributed among different organizations or communities. This study also analyses web services technologies and defines the concept of the marine information grid, then focuses on spatiotemporal visualization and proposes a process-oriented spatiotemporal visualization method. We discuss how marine environmental data can be organized based on this method, and how the organized data are represented for use with web services and stored in a reusable fashion. In addition, we provide an original visualization architecture that is integrative and based on the explored technologies. Finally, we propose a prototype system for marine environmental data of the South China Sea, with visualizations of Argo floats, sea surface temperature fields, sea current fields, salinity, in-situ investigation data, and ocean stations. The integrative visualization architecture is illustrated in the prototype system, which highlights the process-oriented spatiotemporal visualization method and demonstrates the benefit of the architecture and methods described in this study.
Funding: Funded by the National Natural Science Foundation of China (41130420) and the State High-Tech Development Plan of China (2012AA09A20101)
Abstract: Simulation and interpretation of marine controlled-source electromagnetic (CSEM) data often approximate the transmitter source as an ideal horizontal electric dipole (HED) and assume that the receivers are located on a flat seabed. In practice, however, the transmitter dipole source will be rotated, tilted, and deviated from the survey profile by ocean currents, and free-fall receivers may likewise come to rest at arbitrary horizontal orientations on a sloping seafloor. In this paper, we investigate the effects of uncertainties in transmitter tilt, transmitter rotation, and transmitter deviation from the survey profile, as well as in receiver location and orientation, on marine CSEM data. The model study shows that uncertainties in all position and orientation parameters of both the transmitter and the receivers propagate into the observed data, but to different extents. In interpreting marine data, field data uncertainties caused by the position and orientation uncertainties of both the transmitter and the receivers need to be taken into account.
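The basic mechanism by which transmitter attitude perturbs the data can be shown with a one-line decomposition: tilting a nominally horizontal dipole by an angle replaces the ideal-HED moment with a reduced horizontal part and a spurious vertical part. This is a toy illustration of the geometry only, not the paper's forward model, and the tilt angles are made up.

```python
import math

def tilted_dipole_components(moment, tilt_deg):
    """Split a dipole moment into horizontal and vertical parts under tilt.

    Tilting a nominally horizontal electric dipole by theta leaves a
    horizontal moment m*cos(theta) and introduces a vertical moment
    m*sin(theta), which excites a different field mode than the ideal
    HED assumed in simple interpretations. Angles here are illustrative.
    """
    theta = math.radians(tilt_deg)
    return moment * math.cos(theta), moment * math.sin(theta)

for tilt in (0.0, 5.0, 10.0):
    h, v = tilted_dipole_components(1.0, tilt)
    print(f"tilt {tilt:4.1f} deg: horizontal {h:.4f}, vertical {v:.4f}")
```

Even a few degrees of tilt produces a vertical moment of several percent of the total, which is why attitude uncertainties propagate into the observed amplitudes and phases.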
Abstract: Cyber security has been thrust into the limelight in the modern technological era because of an array of attacks that often bypass untrained intrusion detection systems (IDSs). Greater attention has therefore been directed at deciphering better methods of identifying attack types so as to train IDSs more effectively. Key cyber-attack insights exist in big data; however, an efficient approach is required to determine strong attack types with which to train IDSs. Despite the rising growth in IDS research, there is a lack of studies involving big data visualization, which is key. The KDD99 data set has served as a strong benchmark since 1999; therefore, we utilized this data set in our experiment. In this study, we utilized a hash algorithm, a weight table, and a sampling method to deal with the inherent problems of analyzing big data: volume, variety, and velocity. By utilizing a visualization algorithm, we were able to gain insights into the KDD99 data set, with clear identification of "normal" clusters and descriptions of distinct clusters of effective attacks.
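One generic way to tame the volume problem mentioned above is deterministic hash-based sampling: hash each record and keep it only when the hash lands in a fixed fraction of the hash space, so repeated runs make identical keep/drop decisions. This is a standard technique sketched on invented connection records; the paper's exact hash algorithm and weight table are not specified here.

```python
import hashlib

def hash_sample(records, rate=0.1):
    """Deterministic hash-based sampling for high-volume streams.

    Each record is hashed; it is kept iff the hash falls in the first
    `rate` fraction of the 32-bit hash space. The same record always
    makes the same decision, so the sample is reproducible. A generic
    technique, not the paper's exact algorithm.
    """
    threshold = int(rate * 2**32)
    kept = []
    for rec in records:
        h = int.from_bytes(hashlib.sha256(rec.encode()).digest()[:4], "big")
        if h < threshold:
            kept.append(rec)
    return kept

# Invented KDD99-style connection records for illustration.
stream = [f"src=10.0.0.{i},dst=10.0.1.{i % 7},proto=tcp" for i in range(1000)]
sample = hash_sample(stream, rate=0.1)
print(len(sample))  # roughly 100 of the 1000 records survive
```

Because the decision depends only on the record's content, the same attack sessions survive sampling across re-runs and across machines, which keeps downstream cluster visualizations stable.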
Abstract: A GIS for ocean applications, the Xiamen Atmospheric and Oceanographic Data Management and Display System (AODMDS), has been designed and developed. The system is based on ArcObjects (AO), a component-based GIS development tool. The paper discusses in detail the storage and organization of atmospheric and oceanographic data, the strategy and methods for visualizing and mapping such data, and the implementation of these methods in AODMDS. It also discusses some advanced display control techniques that expand the functions of ArcObjects. One of these techniques, gradient-fill-style color-map control, provides a feasible color-rich display control for all types of raster maps. As a stand-alone desktop GIS built on AO, AODMDS provides effective data management and powerful mapping and visualization functions for atmospheric and oceanographic data.
Funding: This research work is supported by the Hunan Provincial Education Science 13th Five-Year Plan (Grant No. XJK016BXX001), the Social Science Foundation of Hunan Province (Grant No. 17YBA049), the Hunan Provincial Natural Science Foundation of China (Grant No. 2017JJ2016), and the National Students' Platform for Innovation and Entrepreneurship Training (Grant No. 201811532010). The work is also supported by the Open Foundation for University Innovation Platform of Hunan Province, China (Grant No. 16K013) and the 2011 Collaborative Innovation Center of Big Data for Financial and Economical Asset Development and Utility in Universities of Hunan Province. We also thank the anonymous reviewers for their valuable comments and insightful suggestions.
Abstract: Data analysis and visualization are an important area of application for big data, and visual analysis is likewise an important method of big data analysis. Data visualization refers to presenting data in a visual form, such as a chart or map, to help people understand its meaning: it lets people extract meaning from data quickly and easily, and can fully reveal patterns, trends, and dependencies that would be difficult to find in other representations. Big data visual analysis combines the strengths of computers with interactive analysis methods and interaction technologies, whether static or interactive, to directly and effectively help people understand the information behind big data. It is indispensable in the era of big data and, used properly, highly intuitive. Graph analysis in particular turns the valuable information hidden in complex data relationships into a powerful tool, and it represents a significant business opportunity; with the rise of big data, technologies suited to handling complex relationships have emerged, and graphs come in a variety of shapes and sizes for a variety of business problems. The first step in graph analysis, as in visualization generally, is to get the right data for the goal at hand; in short, to choose the right method one must understand the relative strengths and weaknesses of each and understand the data. The key steps for getting data are: set the target; collect; clean; connect.
Funding: This work was supported by the National Science Foundation [grant numbers 1526520 to AK and 0711456 to PL].
Abstract: The availability and quantity of remotely sensed and terrestrial geospatial data sets are on the rise. Historically, these data sets have been analyzed and queried on 2D desktop computers; however, immersive technologies, and specifically immersive virtual reality (iVR), allow for the integration, visualization, analysis, and exploration of these 3D geospatial data sets. iVR can deliver remote and large-scale geospatial data sets to the laboratory, providing embodied experiences of field sites across the earth and beyond. We describe a workflow for the ingestion of geospatial data sets and the development of an iVR workbench, and present their application to an experience of Iceland's Thrihnukar volcano where we: (1) combined satellite imagery with terrain elevation data to create a basic reconstruction of the physical site; (2) used terrestrial LiDAR data to provide a geo-referenced point cloud model of the magmatic-volcanic system, as well as the LiDAR intensity values for the identification of rock types; and (3) used Structure-from-Motion (SfM) to construct a photorealistic point cloud of the inside of the volcano. The workbench provides tools for the direct manipulation of the georeferenced data sets, including scaling, rotation, and translation, and a suite of geometric measurement tools, including length, area, and volume. Future developments will be inspired by an ongoing user study that formally evaluates the workbench's mature components in the context of fieldwork and analysis activities.
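The geometric measurement tools mentioned above (length, area, volume) reduce to small computations on georeferenced 3D points. The sketch below shows two of them on invented coordinates; it illustrates the underlying geometry, not the workbench's actual code.

```python
import math

# Sketch of length and area measurement over georeferenced 3D points.
# The sample coordinates are made up for illustration.

def polyline_length(points):
    """Total length of a 3D polyline: sum of consecutive segment lengths."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def triangle_area(a, b, c):
    """Area of a 3D triangle: half the magnitude of the edge cross product."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cross = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
    return 0.5 * math.sqrt(sum(w * w for w in cross))

path = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (3.0, 4.0, 0.0)]
print(polyline_length(path))                            # 7.0
print(triangle_area((0, 0, 0), (4, 0, 0), (0, 3, 0)))   # 6.0
```

Surface areas and volumes of meshed point-cloud regions follow the same pattern, summing the triangle formula over a triangulation.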
Abstract: This study focuses on meeting the challenges of big data visualization through data reduction based on feature selection methods, with the aim of reducing the volume of big data and minimizing model training time (Tt) while maintaining data quality. We address these challenges using the embedded "Select From Model" (SFM) method with the random forest importance (RFI) algorithm, and compare it with the filter method "Select Percentile" (SP) based on the chi-square (Chi2) statistic, for selecting the most important features. The selected features are then fed into a classification process using the logistic regression (LR) algorithm and the k-nearest neighbor (KNN) algorithm. The classification accuracy (AC) of LR is also compared with that of KNN, in Python, on eight data sets, to see which method produces the best results when feature selection is applied. The study concluded that feature selection methods have a significant impact on the analysis and visualization of data once repetitive data and data that do not affect the goal are removed. After several comparisons, the study proposes SFMLR: SFM based on the RFI algorithm for feature selection, combined with the LR algorithm for classification. The proposal proved its efficacy in comparison with results from the recent literature.
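The two feature-selection routes compared above map directly onto scikit-learn's `SelectFromModel` and `SelectPercentile`. The sketch below runs both on a small stand-in data set (iris, which is not one of the paper's eight data sets) and finishes with the proposed SFMLR pairing; hyperparameters are illustrative defaults, not the paper's settings.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel, SelectPercentile, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in data set; the paper evaluates eight data sets not reproduced here.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Embedded route: SelectFromModel driven by random forest importances (SFM + RFI).
sfm = SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0))
X_tr_sfm = sfm.fit_transform(X_tr, y_tr)

# Filter route: SelectPercentile with the chi-square statistic (SP + Chi2).
# chi2 requires non-negative features, which iris satisfies.
sp = SelectPercentile(chi2, percentile=50)
X_tr_sp = sp.fit_transform(X_tr, y_tr)

# Classify on the embedded-selected features with LR (the SFMLR pairing).
clf = LogisticRegression(max_iter=1000).fit(X_tr_sfm, y_tr)
acc = clf.score(sfm.transform(X_te), y_te)
print(X_tr_sfm.shape[1], X_tr_sp.shape[1], round(acc, 3))
```

Swapping `LogisticRegression` for `KNeighborsClassifier` on the same transformed matrices reproduces the LR-versus-KNN comparison the study runs.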
Funding: Supported by the National Natural Science Foundation of China (51674121) and the Fund for Distinguished Young Scholars of North China University of Science and Technology (JQ201705).
Abstract: In recent years, with the wide application of visual image-data extraction technology in the field of industrial engineering, the development of the industrial economy has reached a new stage. To explore the interaction between pellet microstructure and compressive strength, the pellet microstructures needed for the experiment were first obtained using a Leica DM4500P microscope. The area proportions of hematite, calcium ferrite, magnetite, calcium silicate, and pores in the pellet microstructure were extracted by visual extraction of the image data. The relationship between the area proportions of the mineral components and compressive strength was then established by back-propagation neural network (BPNN), generalized regression neural network (GRNN), and beetle antennae search-generalized regression neural network (BAS-GRNN) algorithms, which shows that the pellet microstructure can be used as a predictor of compressive strength. The errors of BPNN and BAS-GRNN are 5.13% and 3.37%, respectively, both less than 5.5%. Therefore, through data visualization, we are able to relate the various components of the pellet microstructure to compressive strength and provide new research ideas for improving the compressive strength and metallurgical performance of pellets.
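The GRNN named above is simple enough to sketch directly: its prediction is a Gaussian-kernel-weighted average of the training targets. The one-dimensional toy below (one mineral-phase area proportion mapped to a strength value) uses invented numbers, not the paper's data, and omits the BAS step, which tunes the smoothing parameter sigma.

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.1):
    """One-dimensional GRNN prediction.

    A generalized regression neural network predicts with a Gaussian-
    kernel-weighted average of the training targets; sigma is the
    smoothing parameter (the quantity BAS would optimize). The tiny
    data set below is invented for illustration.
    """
    weights = [math.exp(-((x - xi) ** 2) / (2.0 * sigma ** 2)) for xi in train_x]
    return sum(w * yi for w, yi in zip(weights, train_y)) / sum(weights)

# Hypothetical (hematite area proportion, compressive strength in N) pairs.
phase = [0.30, 0.40, 0.50, 0.60]
strength = [2100.0, 2350.0, 2500.0, 2600.0]

print(round(grnn_predict(0.45, phase, strength), 1))  # lands between 2350 and 2500
```

As sigma shrinks, the prediction collapses toward the nearest training target; as it grows, it approaches the global mean, which is why tuning sigma (e.g. with beetle antennae search) matters for accuracy.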
Funding: Project supported by the National Natural Science Foundation of China (No. 40301043).
Abstract: The mathematical theory underlying uncertainty models of line segments is summarized to reach a general conception: the εσ error band model is a basic uncertainty model that can efficiently depict line accuracy and quality, while the εm model and error entropy can be regarded as its supplements. The error band model reflects and describes the influence of line uncertainty on polygon uncertainty. Accordingly, the statistical characteristics of line error are studied in depth by analyzing the probability that the line error falls into a certain range, and theoretical consistency is achieved in selecting the error buffer for a line feature and the error indicator. Finally, the relationship between the accuracy of a polygon's area and the error loop of the polygon boundary is deduced and computed.
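The "probability that the line error falls into a certain range" analysis has a simple closed form if positional error is assumed Gaussian (an assumption of this sketch, not a claim about the paper's derivation): the chance of landing inside an error band of half-width k standard deviations is the standard normal interval probability 2Φ(k) − 1.

```python
import math

def prob_within_band(k):
    """Probability a normally distributed positional error falls inside an
    error band of half-width k standard deviations.

    Standard normal interval probability 2*Phi(k) - 1 = erf(k / sqrt(2)).
    Assumes Gaussian error; illustrates the band-probability analysis
    mentioned above, not the paper's exact derivation.
    """
    return math.erf(k / math.sqrt(2.0))

for k in (1.0, 2.0, 3.0):
    print(f"within +/-{k:.0f} sigma: {prob_within_band(k):.4f}")
```

Under this assumption the εσ band (k = 1) captures about 68.3% of the error distribution, which is one principled way to tie the choice of error buffer width to a confidence level.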
Funding: This work is supported by the National Natural Science Foundation of China (61304208), the 2011 Collaborative Innovation Center for Development and Utilization of Finance and Economics Big Data Property Open Fund Project (20181901CRP04), the Scientific Research Fund of the Hunan Province Education Department (18C0003), the Research Project on Teaching Reform in General Colleges and Universities, Hunan Provincial Education Department (20190147), and the Hunan Normal University Undergraduate Innovation and Entrepreneurship Training Plan Project (2019127).
Abstract: Water resources are among the basic resources for human survival, and water protection has become a major problem for countries around the world. However, most traditional water quality monitoring research is still concerned with the collection of water quality indicators, while neglecting the analysis of water quality monitoring data and its value. In this paper, using the Laravel and AdminLTE frameworks, we describe the design and implementation of a water quality data visualization platform based on Baidu ECharts. Water quality indicator data collected by the deployed sensors is transmitted in real time over the 4G network to a big data processing platform deployed on Tencent Cloud. The collected monitoring data is analyzed, and the results are visualized with Baidu ECharts. The test results showed that the designed system runs well and can provide decision support for water resource protection.
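The server-side step implied above (shaping analyzed monitoring data into something ECharts can render) comes down to building the JSON option object the browser passes to the chart. The sketch below uses the standard ECharts option fields for a line chart; the sample readings and function name are invented, not taken from the platform described.

```python
import json

def echarts_line_option(timestamps, ph_values):
    """Build a minimal ECharts line-chart option for a water quality series.

    The field names (title, xAxis, yAxis, series) follow the standard
    ECharts option schema; the sample readings below are invented.
    """
    return {
        "title": {"text": "pH over time"},
        "xAxis": {"type": "category", "data": timestamps},
        "yAxis": {"type": "value"},
        "series": [{"type": "line", "name": "pH", "data": ph_values}],
    }

option = echarts_line_option(["08:00", "09:00", "10:00"], [7.1, 7.3, 7.0])
print(json.dumps(option))  # serve this JSON and pass it to chart.setOption(...)
```

In a Laravel-style setup, an endpoint would return this JSON and the page-side JavaScript would hand it to `echarts.init(...)` / `setOption(...)`, keeping the analysis logic on the server and the rendering in the browser.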