Multivariate anomaly detection plays a critical role in maintaining the stable operation of information systems. However, in existing research, multivariate data are often influenced by various factors during the data collection process, resulting in temporal misalignment or displacement. Due to these factors, the node representations carry substantial noise, which reduces the adaptability of the multivariate coupling network structure and subsequently degrades anomaly detection performance. Accordingly, this study proposes a novel multivariate anomaly detection model grounded in graph structure learning. First, a recommendation strategy is employed to identify strongly coupled variable pairs, which are then used to construct a recommendation-driven multivariate coupling network. Second, a multi-channel graph encoding layer dynamically optimizes the structural properties of the multivariate coupling network, while a multi-head attention mechanism enhances the spatial characteristics of the multivariate data. Finally, unsupervised anomaly detection is conducted using a dynamic threshold selection algorithm. Experimental results demonstrate that effectively integrating the structural and spatial features of multivariate data significantly mitigates the anomalies caused by temporal dependency misalignment. Funding: Natural Science Foundation of Qinghai Province (2025-ZJ-994M); Scientific Research Innovation Capability Support Project for Young Faculty (SRICSPYF-BS2025007); National Natural Science Foundation of China (62566050).
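The dynamic threshold selection algorithm is not detailed in the abstract; as a minimal sketch of one common realization (the window size and multiplier k below are illustrative assumptions, not the paper's settings), a point is flagged when its anomaly score exceeds a mean-plus-k-standard-deviations threshold computed over a sliding window:

```python
import numpy as np

def dynamic_threshold(scores: np.ndarray, window: int = 100, k: float = 3.0) -> np.ndarray:
    """Flag points whose anomaly score exceeds a rolling mean + k*std threshold.

    `window` and `k` are illustrative; the paper's actual selection rule may differ.
    """
    flags = np.zeros(len(scores), dtype=bool)
    for t in range(window, len(scores)):
        hist = scores[t - window:t]            # recent history of anomaly scores
        thresh = hist.mean() + k * hist.std()  # threshold adapts to local statistics
        flags[t] = scores[t] > thresh
    return flags

# Example: scores from any detector; a sudden spike is flagged.
rng = np.random.default_rng(0)
scores = rng.normal(0.1, 0.02, 500)
scores[400] = 0.9
print(np.nonzero(dynamic_threshold(scores))[0])  # -> [400]
```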
The increasing complexity of China's electricity market creates substantial challenges for settlement automation, data consistency, and operational scalability. Existing provincial settlement systems are fragmented, lack a unified data structure, and depend heavily on manual intervention to process high-frequency and retroactive transactions. To address these limitations, a graph-based unified settlement framework is proposed to enhance automation, flexibility, and adaptability in electricity market settlements. A flexible attribute-graph model is employed to represent heterogeneous multi-market data, enabling standardized integration, rapid querying, and seamless adaptation to evolving business requirements. An extensible operator library is designed to support configurable settlement rules, and a suite of modular tools, including dataset generation, formula configuration, billing templates, and task scheduling, facilitates end-to-end automated settlement processing. A robust refund-clearing mechanism is further incorporated, utilizing sandbox execution, data-version snapshots, dynamic lineage tracing, and real-time change-capture technologies to enable rapid and accurate recalculations under dynamic policy and data revisions. Case studies based on real-world data from regional Chinese markets validate the effectiveness of the proposed approach, demonstrating marked improvements in computational efficiency, system robustness, and automation. Moreover, enhanced settlement accuracy and high temporal granularity improve price-signal fidelity, promote cost-reflective tariffs, and incentivize energy-efficient and demand-responsive behavior among market participants. The method not only supports equitable and transparent market operations but also provides a generalizable, scalable foundation for modern electricity settlement platforms in increasingly complex and dynamic market environments. Funding: Science and Technology Project of State Grid Corporation of China (5108-202355437A-3-2-ZN).
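As an illustrative sketch of the attribute-graph idea only (the node types, attributes, price, and settlement rule below are hypothetical, not the paper's schema), heterogeneous market entities become attributed nodes and edges, and a settlement operator becomes a configurable rule over those attributes:

```python
import networkx as nx

# Hypothetical attribute graph: nodes and edges carry heterogeneous market data.
g = nx.DiGraph()
g.add_node("plant_A", kind="generator", price=0.42)            # CNY/kWh, illustrative
g.add_node("user_B", kind="consumer")
g.add_edge("plant_A", "user_B", kind="contract", volume=1200)  # kWh traded

# A settlement "operator" is a configurable rule evaluated over graph attributes.
def settle(graph: nx.DiGraph) -> dict:
    bills = {}
    for u, v, data in graph.edges(data=True):
        if data.get("kind") == "contract":
            bills[(u, v)] = data["volume"] * graph.nodes[u]["price"]
    return bills

print(settle(g))  # {('plant_A', 'user_B'): 504.0}
```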
The dynamic behaviors of a large-scale ring neural network with a triangular coupling structure are investigated. The characteristic equation of the high-dimensional system is calculated using Coates' flow graph method. Time delay is selected as the bifurcation parameter, and sufficient conditions for stability and Hopf bifurcation are derived. It is found that the connection coefficient and time delay play a crucial role in the dynamic behaviors of the model. Furthermore, a phase diagram of multiple equilibrium points with one saddle point and two stable nodes is presented. Finally, the effectiveness of the theory is verified through simulation results. Funding: Natural Science Foundation of Shandong Province of China (ZR2020MF080 and ZR2020MF065).
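The exact characteristic equation depends on the ring's coupling matrix; as a generic sketch for a delayed linearized system with state matrix A, delayed-coupling matrix B, and delay tau, stability is governed by the transcendental characteristic equation, and a Hopf bifurcation occurs when a simple root crosses the imaginary axis with nonzero speed:

```latex
\det\!\left(\lambda I - A - B e^{-\lambda\tau}\right) = 0, \qquad
\lambda(\tau_0) = \pm i\omega_0 \;(\omega_0 > 0), \qquad
\left.\frac{d\,\operatorname{Re}\lambda}{d\tau}\right|_{\tau=\tau_0} \neq 0 .
```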
Background: A task assigned to space exploration satellites involves detecting the physical environment within a certain space. However, space detection data are complex and abstract, and are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods: A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and the corresponding relationships between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results: Sampling, feature extraction, and uniform visualization of detection data of complex types, long duration spans, and uneven spatial distributions were achieved. The real-time visualization of large-scale spatial structures using augmented reality devices, particularly low-performance devices, was also investigated. Conclusions: The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure and changes of the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.
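The tone mapping builds on statistical histogram equalization; a minimal numpy sketch of that underlying operation (the bin count is an illustrative choice) maps attribute values through their empirical CDF so the output spreads more uniformly over the display range:

```python
import numpy as np

def equalize(values: np.ndarray, bins: int = 256) -> np.ndarray:
    """Histogram-equalization tone mapping: map values through their empirical CDF."""
    hist, edges = np.histogram(values, bins=bins)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                          # normalize CDF to [0, 1]
    # Each value's position on the CDF -> roughly uniform output distribution.
    return np.interp(values, edges[1:], cdf)

# Example: heavily skewed attribute data becomes evenly spread for color mapping.
data = np.random.default_rng(1).exponential(1.0, 10_000)
mapped = equalize(data)
print(mapped.min(), mapped.max())           # output spans (0, 1], roughly uniform
```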
Processing large-scale 3-D gravity data is an important topic in the geophysics field. Many existing inversion methods lack the capacity to process massive data and are difficult to apply in practice. This study proposes applying GPU parallel processing technology to the focusing inversion method, aiming to improve inversion accuracy while speeding up calculation and reducing memory consumption, thus obtaining fast and reliable inversion results for large complex models. In this paper, equivalent storage of the geometric trellis is used to calculate the sensitivity matrix, and the inversion is based on GPU parallel computing technology. The parallel computing program, optimized by reducing data transfers, access restrictions, and instruction restrictions as well as by latency hiding, greatly reduces memory usage, speeds up calculation, and makes fast inversion of large models possible. By comparing and analyzing the computing speed of the traditional single-thread CPU method and CUDA-based GPU parallel technology, the excellent acceleration performance of GPU parallel computing is verified, which provides ideas for the practical application of theoretical inversion methods otherwise restricted by computing speed and computer memory. The model test verifies that the focusing inversion method can overcome the problems of a severe skin effect and ambiguity of geological body boundaries. Moreover, increasing the model cells and inversion data can more clearly depict the boundary position of the abnormal body and delineate its specific shape. Funding: National Natural Science Foundation (41874134).
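The paper's CUDA kernels are not reproduced here; as a rough sketch of the core pattern only, the sensitivity-matrix products that dominate each inversion iteration run unchanged on a GPU through a numpy-compatible array library such as CuPy (sizes below are arbitrary, and the real method additionally compresses the matrix via the equivalent-storage scheme):

```python
import numpy as np

try:                       # use GPU arrays when CuPy and a CUDA device are available
    import cupy as xp
except ImportError:        # fall back to CPU so the sketch still runs anywhere
    xp = np

n_data, n_cells = 1_000, 20_000          # illustrative sizes only
rng = np.random.default_rng(2)
G = xp.asarray(rng.normal(size=(n_data, n_cells)).astype(np.float32))  # sensitivity matrix
m = xp.ones(n_cells, dtype=np.float32)   # current model (cell densities)
d_obs = xp.zeros(n_data, dtype=np.float32)

d_pred = G @ m                           # forward step: the dominant cost per iteration
grad = G.T @ (d_pred - d_obs)            # gradient of the data-misfit term
print(grad.shape)                        # (20000,)
```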
Social media data have created a paradigm shift in assessing situational awareness during natural disasters and emergencies such as wildfires, hurricanes, and tropical storms. Twitter, as an emerging data source, is an effective and innovative digital platform for observing trends from the perspective of social media users who are direct or indirect witnesses of a calamitous event. This paper collects and analyzes Twitter data related to the recent wildfire in California to perform a trend analysis by classifying firsthand and credible information from Twitter users. The work investigates tweets on the recent wildfire in California and classifies them by witness type into two groups: 1) direct witnesses and 2) indirect witnesses. The collected and analyzed information can be useful to law enforcement agencies and humanitarian organizations for communication and verification of situational awareness during wildfire hazards. Trend analysis is an aggregated approach that includes sentiment analysis and topic modeling performed through domain-expert manual annotation and machine learning. It ultimately builds a fine-grained analysis to assess evacuation routes and provide valuable information to firsthand emergency responders.
Traditional anomaly detection methods often assume that data points are independent or exhibit regularly structured relationships, as in Euclidean data such as time series or image grids. However, real-world data frequently involve irregular, interconnected structures, requiring a shift toward non-Euclidean approaches. This study introduces a novel anomaly detection framework designed to handle non-Euclidean data by modeling transactions as graph signals. By leveraging graph convolution filters, we extract meaningful connection strengths that capture relational dependencies often overlooked in traditional methods. Utilizing the Graph Convolutional Network (GCN) framework, we integrate graph-based embeddings with conventional anomaly detection models, enhancing performance through relational insights. Our method is validated on European credit card transaction data, demonstrating its effectiveness in detecting fraudulent transactions, particularly those with subtle patterns that evade traditional, amount-based detection techniques. The results highlight the advantages of incorporating temporal and structural dependencies into fraud detection, showcasing the robustness and applicability of our approach in complex, real-world scenarios. Funding: National Research Foundation of Korea (NRF) grant funded by the Korea government (RS-2023-00249743); Global-Learning & Academic Research Institution for Master's, PhD Students, and Postdocs (LAMP) Program of the NRF funded by the Ministry of Education (RS-2024-00443714); Research Base Construction Fund Support Program funded by Jeonbuk National University in 2025.
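The standard GCN propagation rule underlying such frameworks is H' = sigma(D^{-1/2} A_hat D^{-1/2} H W) with A_hat = A + I (self-loops added, then symmetric degree normalization); a minimal numpy rendering of one layer, with random weights standing in for trained parameters:

```python
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One GCN layer: symmetric-normalized neighborhood averaging, linear map, ReLU."""
    A_hat = A + np.eye(len(A))                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))   # D^{-1/2} from node degrees
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)     # ReLU activation

# Toy transaction graph: 4 nodes, 3 input features, 2 output features.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
H = np.random.default_rng(3).normal(size=(4, 3))
W = np.random.default_rng(4).normal(size=(3, 2))
print(gcn_layer(A, H, W).shape)  # (4, 2)
```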
Social bots are automated programs designed to spread rumors and misinformation, posing significant threats to online security. Existing research shows that the structure of a social network significantly affects the behavioral patterns of social bots: a higher number of connected components weakens their collaborative capabilities, thereby reducing their proportion within the overall network. However, current social bot detection methods still make limited use of topological features. Furthermore, both graph neural network (GNN)-based methods that rely on local features and those that leverage global features suffer from their own limitations, and existing studies lack an effective fusion of multi-scale information. To address these issues, this paper proposes a topology-aware multi-scale social bot detection method, which jointly learns local and global representations through a co-training mechanism. At the local level, topological features are effectively embedded into node representations, enhancing expressiveness while alleviating the over-smoothing problem in GNNs. At the global level, a clustering attention mechanism is introduced to learn global node representations, mitigating the over-globalization problem. Experimental results demonstrate that our method effectively overcomes the limitations of single-scale approaches. Our code is publicly available at https://anonymous.4open.science/r/TopoMSG-2C41/ (accessed on 27 October 2025). Funding: Fundamental Research Funds for the Central Universities (CUCAI2511).
With the rapid growth of biomedical data, particularly multi-omics data including genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods. As a consequence, deep learning has emerged as a strong tool for analysing multi-omics data due to its ability to handle complex and non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across omics data. Deep learning has been found to be effective in illness classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements. We then consider future directions, including combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for cross-disciplinary collaboration to advance deep learning-based multi-omics research for precision medicine and for understanding complicated disorders.
Lateral movement represents the most covert and critical phase of Advanced Persistent Threats (APTs), and its detection still faces two primary challenges: sample scarcity and the “cold start” of new entities. To address these challenges, we propose an Uncertainty-Driven Graph Embedding-Enhanced Lateral Movement Detection framework (UGEA-LMD). First, the framework employs event-level incremental encoding on a continuous-time graph to capture fine-grained behavioral evolution, enabling newly appearing nodes to retain temporal contextual awareness even in the absence of historical interactions and thereby fundamentally mitigating the cold-start problem. Second, in the embedding space, we model the dependency structure among feature dimensions using a Gaussian copula to quantify the uncertainty distribution, and generate augmented samples with consistent structural and semantic properties through adaptive sampling, thus expanding the representation space of sparse samples and enhancing the model's generalization under sparse-sample conditions. Unlike static graph methods that cannot model temporal dependencies, or data augmentation techniques that depend on predefined structures, UGEA-LMD offers both superior temporal-dynamic modeling and structural generalization. Experimental results on the large-scale LANL log dataset demonstrate that, under the transductive setting, UGEA-LMD achieves an AUC of 0.9254; even when 10% of nodes or edges are withheld during training, UGEA-LMD significantly outperforms baseline methods on metrics such as recall and AUC, confirming its robustness and generalization capability in sparse-sample and cold-start scenarios. Funding: Zhongyuan University of Technology Discipline Backbone Teacher Support Program Project (GG202417); Key Research and Development Program of Henan (251111212000).
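The copula-based augmentation step can be sketched generically (this illustrates the technique, not the paper's exact adaptive-sampling procedure): transform each embedding dimension to normal scores, estimate their correlation (the Gaussian copula), sample correlated normals, and map them back through empirical quantiles:

```python
import numpy as np
from scipy.stats import norm, rankdata

def copula_augment(X: np.ndarray, n_new: int, seed: int = 0) -> np.ndarray:
    """Sample augmented rows via a Gaussian copula fit to embedding matrix X."""
    n, d = X.shape
    # 1) Per-dimension rank transform to standard-normal scores.
    U = (rankdata(X, axis=0) - 0.5) / n
    Z = norm.ppf(U)
    # 2) Dependency structure among dimensions = correlation of normal scores.
    R = np.corrcoef(Z, rowvar=False)
    # 3) Sample correlated normals, then map back through empirical quantiles.
    Z_new = np.random.default_rng(seed).multivariate_normal(np.zeros(d), R, size=n_new)
    U_new = norm.cdf(Z_new)
    X_sorted = np.sort(X, axis=0)
    idx = np.clip((U_new * n).astype(int), 0, n - 1)
    return np.take_along_axis(X_sorted, idx, axis=0)

X = np.random.default_rng(5).normal(size=(200, 8))   # stand-in node embeddings
print(copula_augment(X, 50).shape)                   # (50, 8)
```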
In the face of a growing number of large-scale data sets, the affinity propagation clustering algorithm must construct a full similarity matrix during calculation, which incurs huge storage and computation costs. Therefore, this paper proposes an improved affinity propagation clustering algorithm. First, subtractive clustering is added, using the density values of the data points to obtain initial cluster centers. Then, similarity distances between the initial cluster centers are calculated and, drawing on the idea of semi-supervised clustering, pairwise constraint information is added to construct a sparse similarity matrix. Finally, AP clustering is performed on the cluster representative points until a suitable cluster division is reached. Experimental results show that the algorithm greatly reduces computation and the storage required for the similarity matrix, and outperforms the original algorithm in both clustering quality and processing speed. Funding: National Natural Science Foundation of China (51175169); National Science and Technology Support Program (2012BAF02B01).
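Subtractive clustering scores each point by a kernel density D_i = sum_j exp(-||x_i - x_j||^2 / (r_a/2)^2) and takes the highest-density point as the first cluster center; a minimal sketch of that density step (the neighborhood radius r_a is an illustrative assumption):

```python
import numpy as np

def subtractive_density(X: np.ndarray, ra: float = 1.0) -> np.ndarray:
    """Density value of each point under subtractive clustering."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return np.exp(-sq / (ra / 2) ** 2).sum(axis=1)

X = np.random.default_rng(6).normal(size=(300, 2))
D = subtractive_density(X)
print("first cluster center:", X[D.argmax()])             # highest-density point
```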
This paper proposes a graph-regularized L_p-smooth non-negative matrix factorization (GSNMF) method by incorporating graph regularization and an L_p smoothing constraint, which considers the intrinsic geometric information of a data set and produces smooth and stable solutions. The main contributions are as follows. First, graph regularization is added into NMF to discover the hidden semantics and simultaneously respect the intrinsic geometric structure information of a data set. Second, the L_p smoothing constraint is incorporated into NMF to combine the merits of isotropic (L_2-norm) and anisotropic (L_1-norm) diffusion smoothing, producing a smooth and more accurate solution to the optimization problem. Finally, the update rules and proof of convergence of GSNMF are given. Experiments on several data sets show that the proposed method outperforms related state-of-the-art methods. Funding: National Natural Science Foundation of China (61702251, 61363049, 11571011); State Scholarship Fund of the China Scholarship Council (CSC) (201708360040); Natural Science Foundation of Jiangxi Province (20161BAB212033); Natural Science Basic Research Plan in Shaanxi Province of China (2018JM6030); Doctor Scientific Research Starting Foundation of Northwest University (338050050); Youth Academic Talent Support Program of Northwest University.
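A plausible form of the GSNMF objective consistent with this description (the trade-off weights lambda and mu are the method's hyperparameters; the exact formulation is the paper's) combines the NMF reconstruction error, a graph-Laplacian regularizer, and the L_p smoothing term:

```latex
\min_{W \ge 0,\; H \ge 0}\;
\left\| X - W H \right\|_F^{2}
\;+\; \lambda\, \mathrm{Tr}\!\left( H L H^{\top} \right)
\;+\; \mu \sum_{i,j} \left| H_{ij} \right|^{p}, \qquad 1 < p < 2,
```

where L = D - S is the Laplacian of the data-affinity graph; p near 1 emphasizes anisotropic (L_1-like) smoothing and p near 2 isotropic (L_2-like) smoothing.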
Using the advantages of web crawlers in data collection and of distributed storage technologies, we accessed a wealth of forestry-related data. Combined with mature big data technology at its present stage, Hadoop's distributed system was selected to solve the storage problem of massive forestry big data, and the memory-based Spark computing framework was adopted to realize real-time, fast processing of the data. Forestry data contain a wealth of information, and mining this information is of great significance for guiding the development of forestry. We conduct co-word and cluster analyses on the keywords of forestry data, extract the rules hidden in the data, analyze research hotspots more accurately, and grasp the evolution trend of subject topics, which plays an important role in promoting research and development in subject areas. The co-word analysis and clustering algorithms have important practical significance for identifying the topic structure, research hotspots, and development trends in the field of forestry research. The distributed storage framework and parallel computing greatly improve the performance of the data mining algorithms. Therefore, a forestry big data mining system built on big data technology has important practical significance for promoting the development of intelligent forestry. Funding: Fundamental Research Funds for the Central Universities (2572018BH02); Special Funds for Scientific Research in the Forestry Public Welfare Industry (201504307-03).
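Co-word analysis counts how often keyword pairs appear in the same record; a minimal sketch with hypothetical keyword lists (the paper performs this counting at scale on Spark, whereas this sketch is single-machine):

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-paper keyword lists extracted from forestry records.
records = [
    ["forest fire", "remote sensing", "risk"],
    ["forest fire", "remote sensing"],
    ["carbon sink", "remote sensing"],
]

cooc = Counter()
for kws in records:
    for a, b in combinations(sorted(set(kws)), 2):  # each unordered pair once per record
        cooc[(a, b)] += 1

print(cooc.most_common(1))  # [(('forest fire', 'remote sensing'), 2)]
```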
Outlier detection has very important applied value in the data mining literature. Different outlier detection algorithms, based on distinct theories, have different definitions and mining processes. Based on an analysis of existing outlier detection algorithms in terms of criteria and theory, a three-dimensional space graph for constructing applied algorithms and an improved GridOf algorithm were proposed. Key words: outlier detection; three-dimensional space graph; data mining. Funding: National Natural Science Foundation of China (70371015).
Data analysis and visualization are an important area of application for big data, and visual analysis is likewise an important method for analyzing it. Data visualization refers to presenting data in a visual form, such as a chart or map, to help people understand its meaning; it helps people extract meaning from data quickly and easily. Visualization can fully demonstrate the patterns, trends, and dependencies in data that would be hard to find in other displays. Big data visual analysis combines the advantages of computers with interactive analysis methods and interactive technologies, which can be static or interactive, and directly helps people understand the information behind big data effectively. It is indispensable in the era of big data and can be very intuitive when used properly. Graphical analysis also turns the valuable information in complex data relationships into a powerful tool, and it represents a significant business opportunity. With the rise of big data, important technologies suitable for dealing with complex relationships have emerged, and graphics come in a variety of shapes and sizes for a variety of business problems. The first step in graphical analysis is to obtain the right data for the goal in question; in short, choosing the right method requires understanding the relative strengths and weaknesses of each option and understanding the data. The key steps in obtaining data are: target, collect, clean, and connect. Funding: Hunan Provincial Education Science 13th Five-Year Plan (XJK016BXX001); Social Science Foundation of Hunan Province (17YBA049); Hunan Provincial Natural Science Foundation of China (2017JJ2016); National Students' Platform for Innovation and Entrepreneurship Training (201811532010); Open Foundation for University Innovation Platform of Hunan Province (16K013); 2011 Collaborative Innovation Center of Big Data for Financial and Economical Asset Development and Utility in Universities of Hunan Province.
Multiple efforts have been made worldwide around diverse aspects of land administration. However, the notorious heterogeneity of land administration data and systems remains a longstanding challenge to developing a harmonized vision. In this sense, traditional Spatial Data Infrastructure adoption is not enough to overcome this challenge, since the heterogeneity of data sources implies needs related to harmonization, interoperability, sharing, and integration in land administration development. This paper proposes a graph-based representation of knowledge for integrating multiple heterogeneous data sources (tables, shapefiles, geodatabases, and WFS services) belonging to two Colombian agencies within a decentralized land administration scenario. These knowledge graphs are developed on an ontology-based knowledge representation using national and international standards for land administration. Our approach aims to prevent data isolation, enable cross-dataset integration, produce machine-processable data, and facilitate the reuse and exploitation of multi-jurisdictional datasets in a single approach. A real case study demonstrates the applicability of the land administration data cycle deployed. Funding: Colfuturo and Ministerio de Tecnologías de la Información y las Comunicaciones de Colombia; CYTED program 520RT0010 (Red GeoLIBERO - Consolidación de una red de geomática libre aplicada a las necesidades de Iberoamérica); SIP-IPN 20210677 (Generación de grafos de conocimiento sobre eventos meteorológicos urbanos).
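As an illustrative sketch of the ontology-based representation (the namespace, class, and property names below are hypothetical stand-ins for the land-administration standards the paper uses), heterogeneous records from different sources become RDF triples in one queryable graph:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace standing in for a standards-based land-administration ontology.
LA = Namespace("https://example.org/landadmin#")

g = Graph()
parcel, party = LA["parcel/001"], LA["party/ana"]
g.add((parcel, RDF.type, LA.SpatialUnit))       # e.g., from a shapefile source
g.add((party, RDF.type, LA.Party))              # e.g., from an agency table
g.add((party, LA.hasRightOver, parcel))         # cross-dataset link
g.add((parcel, LA.area, Literal(352.5)))        # square meters, illustrative

# The integrated graph is queryable across the originally siloed sources.
for s, p, o in g.triples((None, LA.hasRightOver, None)):
    print(s, "->", o)
```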
With the purpose of making calculation more efficient in practical hydraulic simulations, an improved algorithm was proposed and applied in the practical water distribution field. The methodology was developed by extending traditional loop-equation theory with the efficiency advantages of graph theory. The use of the spanning-tree technique from graph theory makes the proposed algorithm efficient in calculation and simple to implement in computer code. The algorithms for topological generation and practical implementation are presented in detail in this paper. In an application to a practical urban system, CPU time and memory consumption were decreased while accuracy was greatly enhanced compared with existing methods.
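The spanning-tree step can be sketched with networkx: tree edges of the pipe network are identified first, and each remaining (co-tree) edge closes exactly one independent loop, giving E - N + 1 loop equations. The toy network below is illustrative, not the paper's case study:

```python
import networkx as nx

# Toy looped water network: nodes are junctions, edges are pipes.
g = nx.Graph()
g.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)])

tree = nx.minimum_spanning_tree(g)                       # spanning tree of the network
chords = [e for e in g.edges() if not tree.has_edge(*e)]  # one chord per independent loop

print("independent loops:", g.number_of_edges() - g.number_of_nodes() + 1)  # 2
for u, v in chords:                                      # chord + tree path = one loop
    print("loop closed by pipe", (u, v), ":", nx.shortest_path(tree, u, v))
```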
This study aimed at investigating the characteristics of tables and graphs that people perceive and the data types for which people consider each display most appropriate. Participants in the survey were 195 teachers and undergraduates from four universities in Beijing. The results showed people's differing attitudes towards the two forms of display. Funding: National Basic Research Program (973) of China (2002B312103); National Natural Science Foundation of China (3027466); Chinese Academy of Sciences.
Purpose: Our work seeks to overcome data quality issues related to incomplete author affiliation data in bibliographic records in order to support accurate and reliable measurement of international research collaboration (IRC). Design/methodology/approach: We propose, implement, and evaluate a method that leverages the Web-based knowledge graph Wikidata to resolve publication affiliation data to particular countries. The method is tested with general and domain-specific data sets. Findings: Our evaluation covers the magnitude of improvement, accuracy, and consistency. Results suggest the method is beneficial, reliable, and consistent, and thus a viable and improved approach to measuring IRC. Research limitations: Though our evaluation suggests the method works with both general and domain-specific bibliographic data sets, it may perform differently with data sets not tested here. Further limitations stem from the use of the R programming language and R libraries for country identification, as well as imbalanced data coverage and quality in Wikidata that may also change over time. Practical implications: The new method helps to increase accuracy in IRC studies and provides a basis for further development into a general tool that enriches bibliographic data using the Wikidata knowledge graph. Originality: This is the first attempt to enrich bibliographic data using a peer-produced, Web-based knowledge graph like Wikidata.
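As a minimal sketch of the Wikidata lookup (the exact-label query and institution name are illustrative; the paper's R implementation and matching strategy differ), an affiliation string can be resolved to a country via the public SPARQL endpoint by following the organization's country property (P17):

```python
import requests

# Illustrative query: resolve an institution label to its country via Wikidata (P17).
SPARQL = """
SELECT ?countryLabel WHERE {
  ?org rdfs:label "Wuhan University"@en ;
       wdt:P17 ?country .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": SPARQL, "format": "json"},
    headers={"User-Agent": "irc-affiliation-demo/0.1"},
    timeout=30,
)
for row in resp.json()["results"]["bindings"]:
    print(row["countryLabel"]["value"])  # e.g., "People's Republic of China"
```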