Long-term petroleum production forecasting is essential for the effective development and management of oilfields. Due to its ability to extract complex patterns, deep learning has gained popularity for production forecasting. However, existing deep learning models frequently overlook the selective utilization of information from other production wells, resulting in suboptimal performance in long-term production forecasting across multiple wells. To achieve accurate long-term petroleum production forecasts, we propose a spatial-geological perception graph convolutional neural network (SGP-GCN) that accounts for the temporal, spatial, and geological dependencies inherent in petroleum production. Utilizing the attention mechanism, the SGP-GCN effectively captures intricate correlations within production and geological data, forming the representations of each production well. Based on the spatial distances and geological feature correlations, we construct a spatial-geological matrix as the weight matrix to enable differential utilization of information from other wells. Additionally, a matrix sparsification algorithm based on production clustering (SPC) is proposed to optimize the weight distribution within the spatial-geological matrix, thereby enhancing long-term forecasting performance. Empirical evaluations show that the SGP-GCN outperforms existing deep learning models, such as CNN-LSTM-SA, in long-term petroleum production forecasting. This demonstrates the potential of the SGP-GCN as a valuable tool for long-term petroleum production forecasting across multiple wells.
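The abstract does not give the exact construction of the spatial-geological matrix, but the idea of weighting inter-well information by spatial distance and geological similarity can be sketched as follows. The Gaussian distance kernel, the use of Pearson correlation for geological similarity, and the element-wise blend are all assumptions for illustration, not the paper's stated formulation:

```python
import numpy as np

def spatial_geological_matrix(coords, geo_features, sigma=1.0):
    """Hedged sketch: blend a Gaussian kernel of inter-well distances with
    pairwise geological feature correlation, so that nearby wells with
    similar geology receive larger weights. The combination rule
    (element-wise product) is an illustrative assumption."""
    # Pairwise Euclidean distances between well locations
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    spatial = np.exp(-dist ** 2 / (2 * sigma ** 2))
    # Pairwise Pearson correlation of geological feature vectors
    geo = np.abs(np.corrcoef(geo_features))
    w = spatial * geo
    # Row-normalize so each well's incoming weights sum to 1
    return w / w.sum(axis=1, keepdims=True)
```

A sparsification step such as the paper's SPC algorithm would then zero out small entries of this matrix; that step is omitted here.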
Graph Convolutional Neural Networks (GCNs) have been widely used in various fields due to their powerful capabilities in processing graph-structured data. However, GCNs encounter significant challenges when applied to scale-free graphs with power-law distributions, resulting in substantial distortions. Moreover, most existing GCN models are shallow structures, which restricts their ability to capture dependencies among distant nodes and more refined high-order node features in scale-free graphs with hierarchical structures. To more broadly and precisely apply GCNs to real-world graphs exhibiting scale-free or hierarchical structures, and to utilize the multi-level aggregation of GCNs for capturing high-level information in local representations, we propose the Hyperbolic Deep Graph Convolutional Neural Network (HDGCNN), an end-to-end deep graph representation learning framework that can map scale-free graphs from Euclidean space to hyperbolic space. In HDGCNN, we define the fundamental operations of deep graph convolutional neural networks in hyperbolic space. Additionally, we introduce a hyperbolic feature transformation method based on identity mapping and a dense connection scheme based on a novel non-local message passing framework. We also present a neighborhood aggregation method that combines initial structural features with hyperbolic attention coefficients. Through the above methods, HDGCNN effectively leverages both the structural features and node features of graph data, enabling enhanced exploration of non-local structural features and more refined node features in scale-free or hierarchical graphs. Experimental results demonstrate that HDGCNN achieves remarkable performance improvements over state-of-the-art GCNs in node classification and link prediction tasks, even when utilizing low-dimensional embedding representations. Furthermore, when compared to shallow hyperbolic graph convolutional neural network models, HDGCNN exhibits notable advantages and performance enhancements.
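The Euclidean-to-hyperbolic mapping that such models rely on is usually implemented with the exponential and logarithmic maps at the origin of the Poincaré ball. These are standard formulas, shown here only to illustrate the mapping step; HDGCNN's full set of hyperbolic operations is more extensive:

```python
import numpy as np

def exp_map0(v, c=1.0):
    """Exponential map at the origin of the Poincare ball (curvature -c):
    lifts a Euclidean tangent vector into hyperbolic space."""
    sqrt_c = np.sqrt(c)
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def log_map0(x, c=1.0):
    """Logarithmic map at the origin: takes a point on the ball back to
    the Euclidean tangent space (the inverse of exp_map0)."""
    sqrt_c = np.sqrt(c)
    norm = np.linalg.norm(x)
    if norm == 0:
        return x
    return np.arctanh(sqrt_c * norm) * x / (sqrt_c * norm)
```

Features are typically lifted with `exp_map0`, aggregated, and brought back with `log_map0` for Euclidean-style linear operations.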
Pattern recognition is critical to map data handling and their applications. This study presents a model that combines the Shape Context (SC) descriptor and Graph Convolutional Neural Network (GCNN) to classify the patterns of interchanges, which are indispensable parts of urban road networks. In the SC-GCNN model, an interchange is modeled as a graph, wherein nodes and edges represent the interchange segments and their connections, respectively. Then, a novel SC descriptor is implemented to describe the contextual information of each interchange segment and serve as descriptive features of graph nodes. Finally, a GCNN is designed by combining graph convolution and pooling operations to process the constructed graphs and classify the interchange patterns. The SC-GCNN model was validated using interchange samples obtained from the road networks of 15 cities downloaded from OpenStreetMap. The classification accuracy was 87.06%, which was higher than that of the image-based AlexNet, GoogLeNet, and Random Forest models.
A significant advantage of medical image processing is that it allows non-invasive exploration of internal anatomy in great detail. It is possible to create and study 3D models of anatomical structures to improve treatment outcomes, develop more effective medical devices, or arrive at a more accurate diagnosis. This paper presents a fused evolutionary algorithm that takes advantage of both whale optimization and bacterial foraging optimization to optimize feature extraction. The classification process was conducted with the aid of a convolutional neural network (CNN) with dual graphs. The performance of the fused model is evaluated with various methods. From the initial input Computer Tomography (CT) data, 150 images are pre-processed and segmented to identify cancerous and non-cancerous nodules. The geometrical, statistical, structural, and texture features are extracted from the preprocessed segmented images using methods such as the gray-level co-occurrence matrix (GLCM), histogram-oriented gradient (HOG) features, and the gray-level dependence matrix (GLDM). To select the optimal features, a novel fusion approach known as Whale-Bacterial Foraging Optimization is proposed. For the classification of lung cancer, dual graph convolutional neural networks have been employed. A comparison of classification algorithms and optimization algorithms has been conducted. According to the evaluated results, the proposed fused algorithm is successful with an accuracy of 98.72% in predicting lung tumors, and it outperforms other conventional approaches.
Drug-protein interaction (DPI) prediction plays a key role in drug discovery and new drug design, but traditional in vitro experiments incur significant temporal and financial costs and cannot smoothly advance drug-protein interaction research. Many computational prediction models have therefore emerged, and the currently common approach is based on deep learning. In this paper, a deep learning model, CBSG_DPI (Computer-based Drug-Protein Interaction), is proposed to predict drug-protein interactions. This model uses protein features extracted by the CT and BERT methods and drug features extracted by the SMILES2Vec method, which are input into a graph convolutional neural network (GCN) to complete the prediction of drug-protein interactions. The obtained results show that the proposed model can not only predict drug-protein interactions more accurately but also train hundreds of times faster than traditional deep learning models by abandoning the traditional grid-search algorithm for finding the best parameters.
Traditional meteorological downscaling methods face limitations due to the complex distribution of meteorological variables, which can lead to unstable forecasting results, especially in extreme scenarios. To overcome this issue, we propose a convolutional graph neural network (CGNN) model, which we enhance with multilayer feature fusion and a squeeze-and-excitation block. Additionally, we introduce a spatially balanced mean squared error (SBMSE) loss function to address the imbalanced distribution and spatial variability of meteorological variables. The CGNN is capable of extracting essential spatial features and aggregating them from a global perspective, thereby improving the accuracy of prediction and enhancing the model's generalization ability. Based on the experimental results, the CGNN has certain advantages in terms of bias distribution, exhibiting a smaller variance. For precipitation, both UNet and AE also demonstrate relatively small biases; for temperature, AE and CNNdense perform outstandingly during the winter. The time correlation coefficients show an improvement of at least 10% at daily and monthly scales for both temperature and precipitation. Furthermore, the SBMSE loss function displays an advantage over existing loss functions in predicting the 98th percentile and identifying areas where extreme events occur. However, the SBMSE tends to overestimate the distribution of extreme precipitation, which may be due to the theoretical assumptions about the posterior distribution of the data that partially limit the effectiveness of the loss function. In future work, we will further optimize the SBMSE to improve prediction accuracy.
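The abstract does not specify the SBMSE formula, but the general balancing idea, reweighting each grid cell's squared error so that rare or spatially variable regimes are not drowned out by common ones, can be sketched as a weighted MSE. The weight map and its construction here are illustrative assumptions, not the paper's loss:

```python
import numpy as np

def weighted_mse(pred, target, weights):
    """Hedged sketch of a spatially balanced squared-error loss: each grid
    cell contributes in proportion to a weight map (e.g. inverse frequency
    of extreme values at that location). The actual SBMSE formulation is
    more involved; this only illustrates the balancing idea."""
    return float((weights * (pred - target) ** 2).sum() / weights.sum())
```

With uniform weights this reduces to the ordinary MSE; up-weighting cells where extremes are rare shifts the optimum toward fitting those cells better.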
To address the issues that predefined adjacency matrices inadequately represent the information in road networks and insufficiently capture the spatial dependencies of traffic networks, as well as the potential problem of excessive smoothing or neglect of initial node information as the layers of graph convolutional neural networks increase, all of which affect traffic prediction performance, this paper proposes a prediction model based on Adaptive Multi-channel Graph Convolutional Neural Networks (AMGCN). The model utilizes an adaptive adjacency matrix to automatically learn implicit graph structures from data and introduces a mixed skip-propagation graph convolutional neural network, which retains the original node states and selectively acquires the outputs of convolutional layers, thus avoiding the loss of initial node states and comprehensively capturing the spatial correlations of traffic flow. Finally, the output is fed into Long Short-Term Memory networks to capture temporal correlations. Comparative experiments on two real datasets validate the effectiveness of the proposed model.
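One common way to realize an adaptive adjacency matrix of this kind (popularized by Graph WaveNet-style traffic models; whether AMGCN uses exactly this form is an assumption) is to learn two node-embedding tables whose product, after ReLU and a row-wise softmax, yields a normalized directed adjacency:

```python
import numpy as np

def adaptive_adjacency(e1, e2):
    """Sketch of a learnable adjacency matrix: e1 and e2 are trainable
    node-embedding tables of shape (n_nodes, d). ReLU keeps only positive
    affinities; the row-wise softmax normalizes each node's outgoing
    weights. In training, gradients flow into e1 and e2 through this op."""
    scores = np.maximum(e1 @ e2.T, 0.0)          # ReLU on pairwise affinities
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)  # row-wise softmax
```

The resulting matrix then replaces (or supplements) the predefined road-network adjacency in the graph convolution.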
Deep neural network-based relation extraction research has made significant progress in recent years, and it provides data support for many natural language processing downstream tasks such as building knowledge graphs, sentiment analysis, and question-answering systems. However, previous studies ignored much unused structural information in sentences that could enhance the performance of the relation extraction task. Moreover, most existing dependency-based models utilize self-attention to distinguish the importance of context, which hardly deals with multiple-structure information. To efficiently leverage multiple kinds of structure information, this paper proposes a dynamic structure attention mechanism model based on textual structure information, which deeply integrates word embeddings, named entity recognition labels, part of speech, the dependency tree, and dependency types into a graph convolutional network. Specifically, our model extracts text features of different structures from the input sentence. The Textual Structure information Graph Convolutional Network employs the dynamic structure attention mechanism to learn multi-structure attention, effectively distinguishing important contextual features in the various structural information. In addition, multi-structure weights are carefully designed as a merging mechanism in the different structure attentions to dynamically adjust the final attention. This paper combines these features and trains a graph convolutional network for relation extraction. We experiment on supervised relation extraction datasets including SemEval 2010 Task 8, TACRED, TACREV, and Re-TACRED; the results significantly outperform previous work.
It is of great significance to quickly detect underwater cracks, as they can seriously threaten the safety of underwater structures. Research to date has mainly focused on the detection of above-water-level cracks and has not considered large-scale cracks. In this paper, a large-scale underwater crack examination method is proposed based on image stitching and segmentation. In addition, a purpose of this paper is to design a new convolution method to segment underwater images. An improved As-Projective-As-Possible (APAP) algorithm was designed to extract and stitch keyframes from videos. A graph convolutional neural network (GCN) was used to segment the stitched image. The GCN's m-IoU is 24.02% higher than that of fully convolutional networks (FCN), proving that the GCN has great potential for application in image segmentation and underwater image processing. The results show that the improved APAP algorithm and GCN can adapt to complex underwater environments and perform well in different study areas.
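The m-IoU figure quoted above is the standard mean intersection-over-union metric for segmentation; for reference, it can be computed from predicted and ground-truth label maps as follows (a generic implementation, not code from the paper):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union: per-class IoU = |pred ∩ gt| / |pred ∪ gt|,
    averaged over the classes that appear in either map."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```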
Action recognition has been recognized as an activity in which individuals' behaviour can be observed. Assembling profiles of regular activities, such as activities of daily living, can support identifying trends in the data during critical events. A skeleton representation of the human body has been proven to be effective for this task. The skeletons are presented in graph-like form. However, the topology of a graph is not structured like Euclidean-based data. Therefore, a new set of methods to perform the convolution operation upon the skeleton graph is proposed. Our proposal is based on the Spatial Temporal-Graph Convolutional Network (ST-GCN) framework. In this study, we propose an improved set of label mapping methods for the ST-GCN framework. We introduce three split techniques (full distance split, connection split, and index split) as alternative approaches for the convolution operation. The experiments presented in this study were trained using two benchmark datasets, NTU-RGB+D and Kinetics, to evaluate the performance. Our results indicate that our split techniques outperform the previous partition strategies and are more stable during training without using the edge-importance-weighting additional training parameter. Therefore, our proposal can provide a more realistic solution for real-time applications centred on daily living activity recognition systems for indoor environments.
A relation is a semantic expression relevant to two named entities in a sentence. Since a sentence usually contains several named entities, it is essential to learn a structured sentence representation that encodes dependency information specific to the two named entities. In related work, graph convolutional neural networks are widely adopted to learn semantic dependencies, where a dependency tree initializes the adjacency matrix. However, this approach has two main issues. First, parsing a sentence heavily relies on external toolkits, which can be error-prone. Second, the dependency tree only encodes the syntactic structure of a sentence, which may not align with the relational semantic expression. In this paper, we propose an automatic graph learning method to autonomously learn a sentence's structural information. Instead of using a fixed adjacency matrix initialized by a dependency tree, we introduce an Adaptive Adjacency Matrix to encode the semantic dependency between tokens. The elements of this matrix are dynamically learned during the training process and optimized by task-relevant learning objectives, enabling the construction of task-relevant semantic dependencies within a sentence. Our model demonstrates superior performance on the TACRED and SemEval 2010 datasets, surpassing previous works by 1.3% and 0.8%, respectively. These experimental results show that our model excels in the relation extraction task, outperforming prior models.
The classification of point cloud data is a key technology for point cloud information acquisition and 3D reconstruction, with a wide range of applications. However, existing point cloud classification methods have some shortcomings when extracting point cloud features, such as insufficient extraction of local information, overlooking the information in other neighborhood features of the point cloud, and not focusing on the point cloud's channel and spatial information. To solve these problems, a point cloud classification network based on graph convolution and a fused attention mechanism is proposed to achieve more accurate classification results. First, the points of the cloud are regarded as nodes on a graph, the k-nearest neighbor algorithm is used to construct the graph, and the information between points is dynamically captured by stacking multiple graph convolution layers. Then, drawing on experience with attention mechanisms in 2D images, an attention mechanism capable of devoting more attention to point cloud spatial and channel information is introduced to enrich the feature information of the point cloud, aggregate useful local features, and suppress useless features. In classification experiments on the ModelNet40 dataset, the results show that, compared with the PointNet network, which does not consider the local feature information of the point cloud, the average classification accuracy of the proposed model improves by 4.4% and the overall classification accuracy improves by 4.4%. Compared with other networks, the classification accuracy of the proposed model is also improved.
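The graph-construction step described above, connecting each point to its k nearest neighbors by Euclidean distance, can be sketched directly (a minimal dense implementation; real pipelines use spatial indexing for large clouds):

```python
import numpy as np

def knn_graph(points, k):
    """k-nearest-neighbor graph construction: each point becomes a node
    and is connected to its k closest points by Euclidean distance
    (self-loops excluded). Returns an (n, k) index array of neighbors."""
    diff = points[:, None, :] - points[None, :, :]
    dist = (diff ** 2).sum(-1)           # squared pairwise distances
    np.fill_diagonal(dist, np.inf)       # exclude self from neighbors
    return np.argsort(dist, axis=1)[:, :k]
```

Stacked graph convolution layers then aggregate features along these edges; in dynamic variants the graph is recomputed in feature space at each layer.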
The proliferation of rumors on social media has caused serious harm to society. Although previous research has attempted to use deep learning methods for rumor detection, they did not simultaneously consider the two key features of the temporal and spatial domains. More importantly, these methods struggle to automatically generate convincing explanations for the detection results, which is crucial for preventing the further spread of rumors. To address these limitations, this paper proposes a novel method that integrates both temporal and spatial features while leveraging Large Language Models (LLMs) to automatically generate explanations for the detection results. Our method constructs a dynamic graph model to represent the evolving, tree-like propagation structure of rumors across different time periods. Spatial features are extracted using a Graph Convolutional Network, which captures the interactions and relationships between entities within the rumor network. Temporal features are extracted using a Recurrent Neural Network, which accounts for the dynamics of rumor spread over time. To automatically generate explanations, we utilize Llama-3-8B, a large language model, to provide clear and contextually relevant rationales for the detected rumors. We evaluate our method on two real-world datasets and demonstrate that it outperforms current state-of-the-art techniques, achieving superior detection accuracy while also offering the added capability of automatically generating interpretable and convincing explanations. Our results highlight the effectiveness of combining temporal and spatial features, along with LLMs, for improving rumor detection and understanding.
The impact of pesticides on insect pollinators has caused worldwide concern. Both the global bee decline and stopping the use of pesticides may have serious consequences for food security. Automated and accurate prediction of chemical poisoning of honey bees is a challenging task owing to a lack of understanding of chemical toxicity and introspection. Deep learning (DL) shows potential utility for general and highly variable tasks across fields. Here, we developed a new DL model of deep graph attention convolutional neural networks (GACNN), combining an undirected graph (UG) with attention convolutional neural networks (ACNN), to accurately classify chemical poisoning of honey bees. We used a training dataset of 720 pesticides and an external validation dataset of 90 pesticides, which is one order of magnitude larger than previous datasets. We tested its performance in two ways: poisonous versus non-poisonous, and GACNN versus other frequently used machine learning models. The first case represents the accuracy in identifying bee-poisonous chemicals; the second represents performance advantages. The GACNN achieved ~6% higher performance for predicting toxic samples and was more stable, with a ~7% higher Matthews Correlation Coefficient (MCC), compared to all tested models, demonstrating that GACNN is capable of accurately classifying chemicals and has considerable potential in practical applications. In addition, we summarized and evaluated the mechanisms underlying the response of honey bees to chemical exposure based on the mapping of molecular similarity. Moreover, our cloud platform (http://beetox.cn) for this model provides low-cost universal access to information, which could vitally enhance environmental risk assessment.
Graph neural networks have been shown to be very effective in utilizing pairwise relationships across samples. Recently, there have been several successful proposals to generalize graph neural networks to hypergraph neural networks to exploit more complex relationships. In particular, hypergraph collaborative networks yield superior results compared to other hypergraph neural networks for various semi-supervised learning tasks. The collaborative network can provide high-quality vertex embeddings and hyperedge embeddings together by formulating them as a joint optimization problem and by using their consistency in reconstructing the given hypergraph. In this paper, we aim to establish the algorithmic stability of the core layer of the collaborative network and provide generalization guarantees. The analysis sheds light on the design of hypergraph filters in collaborative networks, for instance, how the data and hypergraph filters should be scaled to achieve uniform stability of the learning process. Some experimental results on real-world datasets are presented to illustrate the theory.
In recent years, deep learning methods have developed rapidly and found application in many fields, including natural language processing. In the field of aspect-level sentiment analysis, deep learning methods can also greatly improve the performance of models. However, previous studies did not take into account the relationship between user feature extraction and contextual terms. To address this issue, we combine data feature extraction and deep learning to develop an aspect-level sentiment analysis method. Specifically, we design user comment feature extraction (UCFE) to distill salient features from users' historical comments and transform them into representative user feature vectors. Then, the aspect-sentence graph convolutional neural network (ASGCN) is used to incorporate innovative techniques for calculating adjacency matrices; meanwhile, ASGCN emphasizes capturing nuanced semantics within the relationships among aspect words and syntactic dependency types. Afterward, three embedding methods are devised to embed the user feature vector into the ASGCN model. The empirical validations verify the effectiveness of these models, consistently surpassing conventional benchmarks and reaffirming the indispensable role of deep learning in advancing sentiment analysis methodologies.
Machine learning (ML) integrated with density functional theory (DFT) calculations has recently been used to accelerate the design and discovery of single-atom catalysts (SACs) by establishing deep structure-activity relationships. Traditional ML models often struggle to identify the structural differences among single-atom systems with different modification methods, which limits their range of potential applications. Targeting the structural properties of several typical two-dimensional MA_(2)Z_(4)-based single-atom systems (bare MA_(2)Z_(4) and metal single-atom doped/supported MA_(2)Z_(4)), an improved crystal graph convolutional neural network (CGCNN) classification model was employed, instead of a traditional machine learning regression model, to address the challenge of incompatibility among the studied systems. The CGCNN model was optimized using a crystal graph representation in which the geometric configuration was divided into an active layer, a surface layer, and a bulk layer (ASB-GCNN). Through ML and DFT calculations, five potential single-atom hydrogen evolution reaction (HER) catalysts were screened from a chemical space of 600 MA_(2)Z_(4)-based materials, especially V_(1)/HfSn_(2)N_(4)(S) with high stability and activity (ΔG_(H*) is 0.06 eV). Further projected density of states (pDOS) analysis, in combination with wave function analysis of the SAC-H bond, revealed that the SAC d_(z^2) orbital coincided with the H s orbital around the energy level of −2.50 eV, and orbital analysis confirmed the formation of σ bonds. This study provides an efficient multistep screening design framework for metal single-atom catalysts for HER systems with similar two-dimensional supports but different geometric configurations.
Graph convolutional neural networks (GCNs) have emerged as an effective approach to extending deep learning to graph data analytics, but they are computationally challenging given the irregular graphs and the large number of nodes in a graph. GCNs involve chained sparse-dense matrix multiplications with six loops, which results in a large design space for GCN accelerators. Prior work on GCN acceleration either employs limited loop optimization techniques or determines the design variables based on random sampling, which can hardly exploit data reuse efficiently, thus degrading system efficiency. To overcome this limitation, this paper proposes GShuttle, a GCN acceleration scheme that maximizes memory access efficiency to achieve high performance and energy efficiency. GShuttle systematically explores loop optimization techniques for GCN acceleration and quantitatively analyzes the design objectives (e.g., required DRAM accesses and SRAM accesses) by analytical calculation based on multiple design variables. GShuttle further employs two approaches, pruned search space sweeping and greedy search, to find the optimal design variables under certain design constraints. We demonstrated the efficacy of GShuttle by evaluation on five widely used graph datasets. The experimental simulations show that GShuttle reduces the number of DRAM accesses by a factor of 1.5 and saves energy by a factor of 1.7 compared with state-of-the-art approaches.
Aiming at the problem that existing models for aspect-level sentiment analysis cannot fully and effectively utilize sentence semantic and syntactic structure information, this paper proposes a graph neural network-based aspect-level sentiment classification model. Self-attention, aspect-word multi-head attention, and dependent syntactic relations are fused, and the node representations are enhanced with graph convolutional networks to enable the model to fully learn the global semantic and syntactic structural information of sentences. Experimental results show that the model performs well on three public benchmark datasets, Rest14, Lap14, and Twitter, improving the accuracy of sentiment classification.
In the study of graph convolutional networks, the information aggregation of nodes is important for downstream tasks. However, current graph convolutional networks do not differentiate the importance of different neighboring nodes from the perspective of network topology when aggregating messages from neighboring nodes. Therefore, based on network topology, this paper proposes a weighted graph convolutional network based on network node degree and efficiency (W-GCN) for semi-supervised node classification. To distinguish the importance of nodes, this paper uses the degree and the efficiency of nodes in the network to construct an importance matrix of nodes, rather than the adjacency matrix, which is usually a normalized symmetric Laplacian matrix in a graph convolutional network, so that the weights of neighbor nodes can be assigned individually during the graph convolution operation. The proposed method is examined on several real benchmark datasets (Cora, CiteSeer, and PubMed) and compared with the graph convolutional network method. The experimental results show that the W-GCN model proposed in this paper outperforms the graph convolutional network model in prediction accuracy and achieves better results.
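The degree-and-efficiency weighting described above can be sketched as follows. Node efficiency here follows the standard network-science definition (the mean of 1/shortest-path-length to every other node); how the paper actually combines degree and efficiency into its importance matrix is not stated, so the product used below is an illustrative assumption:

```python
import numpy as np
from collections import deque

def node_importance_weights(adj):
    """Sketch of the W-GCN idea: compute per-node importance from degree
    and efficiency, then rescale each edge by the importance of its source
    neighbor and row-normalize. The combination (degree * efficiency) is
    an assumption for illustration."""
    n = len(adj)
    degree = adj.sum(axis=1)
    eff = np.zeros(n)
    for s in range(n):                       # BFS shortest paths from s
        dist = np.full(n, -1)
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if adj[u, v] and dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        reach = dist > 0
        eff[s] = (1.0 / dist[reach]).sum() / (n - 1)
    importance = degree * eff
    w = adj * importance[None, :]            # weight edges by neighbor importance
    return w / np.maximum(w.sum(axis=1, keepdims=True), 1e-12)
```

This weight matrix would then replace the normalized adjacency in the standard GCN propagation rule.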
Funding: funded by the National Natural Science Foundation of China, grant number 62071491.
Abstract: Long-term petroleum production forecasting is essential for the effective development and management of oilfields. Due to its ability to extract complex patterns, deep learning has gained popularity for production forecasting. However, existing deep learning models frequently overlook the selective utilization of information from other production wells, resulting in suboptimal performance in long-term production forecasting across multiple wells. To achieve accurate long-term petroleum production forecasts, we propose a spatial-geological perception graph convolutional neural network (SGP-GCN) that accounts for the temporal, spatial, and geological dependencies inherent in petroleum production. Utilizing the attention mechanism, the SGP-GCN effectively captures intricate correlations within production and geological data, forming the representation of each production well. Based on spatial distances and geological feature correlations, we construct a spatial-geological matrix as the weight matrix to enable the differential utilization of information from other wells. Additionally, a matrix sparsification algorithm based on production clustering (SPC) is proposed to optimize the weight distribution within the spatial-geological matrix, thereby enhancing long-term forecasting performance. Empirical evaluations show that the SGP-GCN outperforms existing deep learning models, such as CNN-LSTM-SA, in long-term petroleum production forecasting. This demonstrates the potential of the SGP-GCN as a valuable tool for long-term petroleum production forecasting across multiple wells.
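The abstract does not give the exact formula for the spatial-geological matrix, but the general idea of blending spatial proximity with geological similarity can be sketched as follows. The Gaussian distance kernel, the Pearson-correlation similarity, and the `alpha` blend are illustrative assumptions, not the paper's definition:

```python
import numpy as np

def spatial_geological_matrix(coords, geo_feats, sigma=1.0, alpha=0.5):
    """Illustrative weight matrix mixing spatial proximity and geological similarity.

    coords:    (n_wells, 2) well positions
    geo_feats: (n_wells, d) geological features per well
    alpha:     blend between the spatial and the geological term
    """
    # Gaussian kernel on pairwise Euclidean distances between wells
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    spatial = np.exp(-dist ** 2 / (2 * sigma ** 2))

    # Pearson correlation between the wells' geological feature vectors
    geo = np.corrcoef(geo_feats)

    W = alpha * spatial + (1 - alpha) * np.abs(geo)
    # Row-normalize so each well's incoming weights sum to 1
    return W / W.sum(axis=1, keepdims=True)
```

A sparsification step such as the paper's SPC algorithm would then zero out low-weight entries before normalization.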
Funding: supported by the National Natural Science Foundation of China-China State Railway Group Co., Ltd. Railway Basic Research Joint Fund (Grant No. U2268217) and the Scientific Funding for China Academy of Railway Sciences Corporation Limited (No. 2021YJ183).
Abstract: Graph Convolutional Neural Networks (GCNs) have been widely used in various fields due to their powerful capabilities in processing graph-structured data. However, GCNs encounter significant challenges when applied to scale-free graphs with power-law distributions, resulting in substantial distortions. Moreover, most existing GCN models are shallow structures, which restricts their ability to capture dependencies among distant nodes and more refined high-order node features in scale-free graphs with hierarchical structures. To apply GCNs more broadly and precisely to real-world graphs exhibiting scale-free or hierarchical structures, and to exploit the multi-level aggregation of GCNs for capturing high-level information in local representations, we propose the Hyperbolic Deep Graph Convolutional Neural Network (HDGCNN), an end-to-end deep graph representation learning framework that maps scale-free graphs from Euclidean space to hyperbolic space. In HDGCNN, we define the fundamental operations of deep graph convolutional neural networks in hyperbolic space. Additionally, we introduce a hyperbolic feature transformation method based on identity mapping and a dense connection scheme based on a novel non-local message passing framework. We also present a neighborhood aggregation method that combines initial structural features with hyperbolic attention coefficients. Through these methods, HDGCNN effectively leverages both the structural features and node features of graph data, enabling enhanced exploration of non-local structural features and more refined node features in scale-free or hierarchical graphs. Experimental results demonstrate that HDGCNN achieves remarkable performance improvements over state-of-the-art GCNs in node classification and link prediction tasks, even when utilizing low-dimensional embedding representations. Furthermore, compared to shallow hyperbolic graph convolutional neural network models, HDGCNN exhibits notable advantages and performance enhancements.
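As a concrete illustration of moving features between Euclidean and hyperbolic space, the standard exponential and logarithmic maps at the origin of the Poincaré ball can be written as below. This is a generic textbook formulation of the maps, not HDGCNN's specific operators:

```python
import numpy as np

def exp_map_zero(v, c=1.0, eps=1e-15):
    """Exponential map at the origin of the Poincare ball with curvature -c.

    Maps Euclidean tangent vectors v of shape (n, d) into the ball, the usual
    first step before running graph convolutions in hyperbolic space.
    """
    norm = np.clip(np.linalg.norm(v, axis=-1, keepdims=True), eps, None)
    sqrt_c = np.sqrt(c)
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def log_map_zero(x, c=1.0, eps=1e-15):
    """Inverse map: pull points in the ball back to the tangent space at 0."""
    norm = np.clip(np.linalg.norm(x, axis=-1, keepdims=True), eps, 1 - 1e-7)
    sqrt_c = np.sqrt(c)
    return np.arctanh(sqrt_c * norm) * x / (sqrt_c * norm)
```

A hyperbolic layer typically applies `log_map_zero`, performs the Euclidean aggregation, and maps the result back with `exp_map_zero`.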
Funding: supported by the National Natural Science Foundation of China [grant numbers 42071450 and 42001415].
Abstract: Pattern recognition is critical to map data handling and its applications. This study presents a model that combines the Shape Context (SC) descriptor and a Graph Convolutional Neural Network (GCNN) to classify the patterns of interchanges, which are indispensable parts of urban road networks. In the SC-GCNN model, an interchange is modeled as a graph, wherein nodes and edges represent the interchange segments and their connections, respectively. Then, a novel SC descriptor is implemented to describe the contextual information of each interchange segment and serve as the descriptive features of graph nodes. Finally, a GCNN is designed by combining graph convolution and pooling operations to process the constructed graphs and classify the interchange patterns. The SC-GCNN model was validated using interchange samples obtained from the road networks of 15 cities downloaded from OpenStreetMap. The classification accuracy was 87.06%, higher than that of the image-based AlexNet, GoogLeNet, and Random Forest models.
Abstract: A significant advantage of medical image processing is that it allows non-invasive exploration of internal anatomy in great detail. It is possible to create and study 3D models of anatomical structures to improve treatment outcomes, develop more effective medical devices, or arrive at a more accurate diagnosis. This paper presents a fused evolutionary algorithm that takes advantage of both whale optimization and bacterial foraging optimization to optimize feature extraction. The classification process was conducted with the aid of a convolutional neural network (CNN) with dual graphs. The performance of the fused model is evaluated with various methods. From the initial input Computed Tomography (CT) images, 150 images are pre-processed and segmented to identify cancerous and non-cancerous nodules. The geometrical, statistical, structural, and texture features are extracted from the preprocessed segmented images using methods such as the gray-level co-occurrence matrix (GLCM), histogram-oriented gradient (HOG) features, and the gray-level dependence matrix (GLDM). To select the optimal features, a novel fusion approach known as Whale-Bacterial Foraging Optimization is proposed. For the classification of lung cancer, dual graph convolutional neural networks have been employed. A comparison of classification algorithms and optimization algorithms has been conducted. According to the evaluated results, the proposed fused algorithm predicts lung tumors with an accuracy of 98.72% and outperforms other conventional approaches.
Abstract: Drug-protein interaction (DPI) prediction plays a key role in drug discovery and new drug design, but traditional in vitro experiments incur significant temporal and financial costs and cannot smoothly advance drug-protein interaction research, so many computational prediction models have emerged, most commonly based on deep learning methods. In this paper, a deep learning model for computer-based drug-protein interaction prediction, CBSG_DPI, is proposed. This model uses protein features extracted by the Computed Tomography (CT) and Bert methods and drug features extracted by the SMILES2Vec method, which are input into a graph convolutional neural network (GCN) to complete the prediction of drug-protein interactions. The results show that the proposed model not only predicts drug-protein interactions more accurately but also trains hundreds of times faster than traditional deep learning models by abandoning the traditional grid search algorithm for finding the best parameters.
Funding: partially funded by the National Natural Science Foundation of China (U2142205), the Guangdong Major Project of Basic and Applied Basic Research (2020B0301030004), the Special Fund for Forecasters of China Meteorological Administration (CMAYBY2020-094), and the Graduate Student Research and Innovation Program of Central South University (2023ZZTS0347).
Abstract: Traditional meteorological downscaling methods face limitations due to the complex distribution of meteorological variables, which can lead to unstable forecasting results, especially in extreme scenarios. To overcome this issue, we propose a convolutional graph neural network (CGNN) model, which we enhance with multilayer feature fusion and a squeeze-and-excitation block. Additionally, we introduce a spatially balanced mean squared error (SBMSE) loss function to address the imbalanced distribution and spatial variability of meteorological variables. The CGNN is capable of extracting essential spatial features and aggregating them from a global perspective, thereby improving the accuracy of prediction and enhancing the model's generalization ability. Based on the experimental results, the CGNN has certain advantages in terms of bias distribution, exhibiting a smaller variance. For precipitation, both UNet and AE also demonstrate relatively small biases. As for temperature, AE and CNNdense perform outstandingly during the winter. The time correlation coefficients show an improvement of at least 10% at daily and monthly scales for both temperature and precipitation. Furthermore, the SBMSE loss function displays an advantage over existing loss functions in predicting the 98th percentile and identifying areas where extreme events occur. However, the SBMSE tends to overestimate the distribution of extreme precipitation, which may be because the theoretical assumptions about the posterior distribution of the data partially limit the effectiveness of the loss function. In future work, we will further optimize the SBMSE to improve prediction accuracy.
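The core idea behind a spatially balanced loss, namely weighting rare extreme cells more heavily than common ones, can be sketched with a simple quantile-based weighting. This is a generic illustration; the percentile threshold, the `boost` factor, and the weighting scheme are assumptions, not the paper's SBMSE definition:

```python
import numpy as np

def quantile_weights(target, q=0.98, boost=5.0):
    """Up-weight grid cells at or above the q-th percentile of the target
    field (e.g. extreme precipitation), leaving ordinary cells at weight 1."""
    w = np.ones_like(target)
    w[target >= np.quantile(target, q)] = boost
    return w

def weighted_mse(pred, target, weights):
    """Weighted mean squared error: errors in highly weighted (extreme)
    cells contribute more to the loss than errors in ordinary cells."""
    return float((weights * (pred - target) ** 2).sum() / weights.sum())
```

With this weighting, a one-unit error on an extreme cell costs `boost` times as much as the same error on an ordinary cell, pushing the model to fit the tail of the distribution.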
Abstract: Predefined adjacency matrices inadequately represent the information in road networks and insufficiently capture the spatial dependencies of traffic networks, and as the layers of a graph convolutional neural network increase, excessive smoothing or the neglect of initial node information may arise, degrading traffic prediction performance. To address these issues, this paper proposes a prediction model based on Adaptive Multi-channel Graph Convolutional Neural Networks (AMGCN). The model utilizes an adaptive adjacency matrix to automatically learn implicit graph structures from data and introduces a mixed skip-propagation graph convolutional network that retains the original node states and selectively acquires the outputs of convolutional layers, thus avoiding the loss of initial node states and comprehensively capturing the spatial correlations of traffic flow. Finally, the output is fed into Long Short-Term Memory networks to capture temporal correlations. Comparative experiments on two real datasets validate the effectiveness of the proposed model.
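An adaptive adjacency matrix of the kind described above is commonly built from learnable node-embedding tables, as in Graph WaveNet. The sketch below shows that construction; the embedding tables would be trained end-to-end in a real model, and this is not necessarily AMGCN's exact formulation:

```python
import numpy as np

def adaptive_adjacency(E1, E2):
    """Self-adaptive adjacency in the Graph WaveNet style:
    A = softmax(relu(E1 @ E2.T)), with E1, E2 as (n_nodes, d) node-embedding
    tables.  In a real model these embeddings are trainable parameters, so
    the graph structure itself is learned from data."""
    logits = np.maximum(E1 @ E2.T, 0.0)          # ReLU keeps only positive affinity
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability for softmax
    expl = np.exp(logits)
    return expl / expl.sum(axis=1, keepdims=True)  # each row sums to 1
```

The resulting matrix can be used alongside (or instead of) a predefined road-network adjacency in each graph-convolution layer.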
Abstract: Deep neural network-based relation extraction research has made significant progress in recent years, and it provides data support for many natural language processing downstream tasks such as knowledge graph construction, sentiment analysis, and question-answering systems. However, previous studies ignored much unused structural information in sentences that could enhance the performance of the relation extraction task. Moreover, most existing dependency-based models utilize self-attention to distinguish the importance of context, which hardly deals with multiple-structure information. To efficiently leverage multiple structure information, this paper proposes a dynamic structure attention mechanism model based on textual structure information, which deeply integrates word embeddings, named entity recognition labels, part of speech, the dependency tree, and dependency types into a graph convolutional network. Specifically, our model extracts text features of different structures from the input sentence. The Textual Structure information Graph Convolutional Network employs the dynamic structure attention mechanism to learn multi-structure attention, effectively distinguishing important contextual features in the various structural information. In addition, multi-structure weights are carefully designed as a merging mechanism in the different structure attentions to dynamically adjust the final attention. This paper combines these features and trains a graph convolutional network for relation extraction. We experiment on supervised relation extraction datasets including SemEval 2010 Task 8, TACRED, TACREV, and Re-TACRED; the results significantly outperform previous work.
Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 51979027, 52079022, 51769033 and 51779035).
Abstract: It is of great significance to quickly detect underwater cracks, as they can seriously threaten the safety of underwater structures. Research to date has mainly focused on the detection of above-water-level cracks and has not considered large-scale cracks. In this paper, a large-scale underwater crack examination method is proposed based on image stitching and segmentation. In addition, this paper designs a new convolution method to segment underwater images. An improved As-Projective-As-Possible (APAP) algorithm was designed to extract and stitch keyframes from videos. The graph convolutional neural network (GCN) was used to segment the stitched image. The GCN's mIoU is 24.02% higher than that of fully convolutional networks (FCN), proving that the GCN has great potential for application in image segmentation and underwater image processing. The results show that the improved APAP algorithm and the GCN can adapt to complex underwater environments and perform well in different study areas.
Abstract: Action recognition has been recognized as an activity in which individuals' behaviour can be observed. Assembling profiles of regular activities, such as activities of daily living, can support identifying trends in the data during critical events. A skeleton representation of the human body has been proven effective for this task. The skeletons are presented in graph-like form. However, the topology of a graph is not structured like Euclidean-based data. Therefore, a new set of methods to perform the convolution operation upon the skeleton graph is proposed. Our proposal is based on the Spatial Temporal-Graph Convolutional Network (ST-GCN) framework. In this study, we propose an improved set of label-mapping methods for the ST-GCN framework. We introduce three split techniques (full distance split, connection split, and index split) as alternative approaches for the convolution operation. The experiments presented in this study were trained using two benchmark datasets, NTU-RGB+D and Kinetics, to evaluate performance. Our results indicate that our split techniques outperform the previous partition strategies and are more stable during training without using the edge-importance-weighting additional training parameter. Therefore, our proposal can provide a more realistic solution for real-time applications centred on daily living activity recognition systems for indoor environments.
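For context, the baseline partition strategy that such split techniques refine labels each neighbor of a joint by its distance to the skeleton's gravity center. The sketch below shows that baseline distance partitioning from the original ST-GCN work, not the paper's three new splits:

```python
def distance_partition(center_dist, root, neighbors):
    """Label each neighbor of `root` for ST-GCN-style spatial convolution:
    0 = the root joint itself, 1 = neighbors closer to the skeleton's gravity
    center than the root, 2 = neighbors farther from it.  `center_dist[i]`
    is joint i's hop distance to the gravity-center joint.  Each label selects
    a separate weight matrix in the graph convolution."""
    labels = {}
    for n in neighbors:
        if n == root:
            labels[n] = 0
        elif center_dist[n] < center_dist[root]:
            labels[n] = 1
        else:
            labels[n] = 2
    return labels
```

The proposed full-distance, connection, and index splits replace this labeling rule while keeping the rest of the ST-GCN pipeline unchanged.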
Funding: supported by the Technology Projects of Guizhou Province under Grant [2024]003, the National Natural Science Foundation of China (Grant Nos. 62166007, 62066008, 62066007), and the Guizhou Provincial Science and Technology Projects under Grant No. ZK[2023]300.
Abstract: A relation is a semantic expression relevant to two named entities in a sentence. Since a sentence usually contains several named entities, it is essential to learn a structured sentence representation that encodes dependency information specific to the two named entities. In related work, graph convolutional neural networks are widely adopted to learn semantic dependencies, where a dependency tree initializes the adjacency matrix. However, this approach has two main issues. First, parsing a sentence heavily relies on external toolkits, which can be error-prone. Second, the dependency tree only encodes the syntactic structure of a sentence, which may not align with the relational semantic expression. In this paper, we propose an automatic graph learning method to autonomously learn a sentence's structural information. Instead of using a fixed adjacency matrix initialized by a dependency tree, we introduce an Adaptive Adjacency Matrix to encode the semantic dependency between tokens. The elements of this matrix are dynamically learned during the training process and optimized by task-relevant learning objectives, enabling the construction of task-relevant semantic dependencies within a sentence. Our model demonstrates superior performance on the TACRED and SemEval 2010 datasets, surpassing previous works by 1.3% and 0.8%, respectively. These experimental results show that our model excels in the relation extraction task, outperforming prior models.
Abstract: The classification of point cloud data is a key technology for point cloud information acquisition and 3D reconstruction, with a wide range of applications. However, existing point cloud classification methods have some shortcomings when extracting point cloud features, such as insufficient extraction of local information, overlooking the information in other neighborhood features of the point cloud, and not focusing on the channel and spatial information of the point cloud. To solve these problems, a point cloud classification network based on graph convolution and a fused attention mechanism is proposed to achieve more accurate classification results. First, each point in the cloud is regarded as a node on a graph, the k-nearest-neighbor algorithm is used to construct the graph, and the information between points is dynamically captured by stacking multiple graph convolution layers. Then, drawing on the experience of attention mechanisms in 2D images, an attention mechanism capable of integrating more of the point cloud's spatial and channel information is introduced to increase the feature information of the point cloud, aggregate useful local features, and suppress useless features. Classification experiments on the ModelNet40 dataset show that, compared with the PointNet network, which does not consider the local feature information of the point cloud, the proposed model improves both the average classification accuracy and the overall classification accuracy by 4.4%. Compared with other networks, the classification accuracy of the proposed model is also improved.
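The k-nearest-neighbor graph construction that turns an unordered point cloud into graph-structured input can be sketched directly. This is the standard k-NN composition step the abstract refers to, in a minimal brute-force form:

```python
import numpy as np

def knn_graph(points, k):
    """Build the k-nearest-neighbor edge list that turns a point cloud into a
    graph: each point becomes a node connected to its k closest points by
    Euclidean distance.  Brute-force O(n^2) version for clarity; real
    pipelines would use a KD-tree for large clouds."""
    diff = points[:, None, :] - points[None, :, :]
    dist = (diff ** 2).sum(-1)                  # squared pairwise distances
    np.fill_diagonal(dist, np.inf)              # exclude self-loops
    nbrs = np.argsort(dist, axis=1)[:, :k]      # indices of the k closest points
    return [(i, int(j)) for i in range(len(points)) for j in nbrs[i]]
```

Stacked graph-convolution layers then aggregate features along these edges; dynamic variants such as EdgeConv recompute the k-NN graph in feature space at every layer.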
Funding: supported by the General Scientific Research Project of Zhejiang Provincial Department of Education (Y202353247).
Abstract: The proliferation of rumors on social media has caused serious harm to society. Although previous research has attempted to use deep learning methods for rumor detection, it did not simultaneously consider the two key features of the temporal and spatial domains. More importantly, these methods struggle to automatically generate convincing explanations for the detection results, which is crucial for preventing the further spread of rumors. To address these limitations, this paper proposes a novel method that integrates both temporal and spatial features while leveraging Large Language Models (LLMs) to automatically generate explanations for the detection results. Our method constructs a dynamic graph model to represent the evolving, tree-like propagation structure of rumors across different time periods. Spatial features are extracted using a Graph Convolutional Network, which captures the interactions and relationships between entities within the rumor network. Temporal features are extracted using a Recurrent Neural Network, which accounts for the dynamics of rumor spread over time. To automatically generate explanations, we utilize Llama-3-8B, a large language model, to provide clear and contextually relevant rationales for the detected rumors. We evaluate our method on two real-world datasets and demonstrate that it outperforms current state-of-the-art techniques, achieving superior detection accuracy while also offering the added capability of automatically generating interpretable and convincing explanations. Our results highlight the effectiveness of combining temporal and spatial features, along with LLMs, for improving rumor detection and understanding.
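The spatial-then-temporal pipeline can be sketched as a GCN applied to each propagation-graph snapshot, producing one pooled vector per time step for a recurrent model to consume. The layer below is the standard symmetric-normalized GCN; the mean-pooling and snapshot handling are illustrative assumptions about how the two stages connect, not the paper's exact architecture:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step with self-loops and symmetric normalization:
    relu(D^{-1/2} (A + I) D^{-1/2} X W)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

def snapshot_sequence(adjs, feats, W):
    """Run the GCN over each propagation snapshot and mean-pool its node
    states, yielding a (n_steps, d_out) sequence of spatial summaries that
    a recurrent network (e.g. a GRU) would then process for temporal
    features."""
    return np.stack([gcn_layer(A, X, W).mean(axis=0)
                     for A, X in zip(adjs, feats)])
```

In the full model, the recurrent network's final state would feed a classifier, and the detected examples would be passed to the LLM for explanation generation.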
Funding: This work was supported in part by the National Key Research and Development Program of China (2017YFD0200506) and the National Natural Science Foundation of China (21837001 and 21907036).
Abstract: The impact of pesticides on insect pollinators has caused worldwide concern. Both the global bee decline and stopping the use of pesticides may have serious consequences for food security. Automated and accurate prediction of chemical poisoning of honey bees is a challenging task owing to a lack of understanding of chemical toxicity and introspection. Deep learning (DL) shows potential utility for general and highly variable tasks across fields. Here, we developed a new DL model of deep graph attention convolutional neural networks (GACNN), combining an undirected graph (UG) with attention convolutional neural networks (ACNN), to accurately classify chemical poisoning of honey bees. We used a training dataset of 720 pesticides and an external validation dataset of 90 pesticides, which is one order of magnitude larger than the previous datasets. We tested its performance in two ways: poisonous versus nonpoisonous, and GACNN versus other frequently used machine learning models. The first case represents the accuracy in identifying bee-poisonous chemicals; the second represents performance advantages. The GACNN achieved ~6% higher performance for predicting toxic samples and was more stable, with a ~7% higher Matthews Correlation Coefficient (MCC), compared to all tested models, demonstrating that the GACNN is capable of accurately classifying chemicals and has considerable potential in practical applications. In addition, we summarized and evaluated the mechanisms underlying the response of honey bees to chemical exposure based on the mapping of molecular similarity. Moreover, our cloud platform (http://beetox.cn) for this model provides low-cost universal access to this information, which could vitally enhance environmental risk assessment.
Funding: Ng was supported in part by the Hong Kong Research Grant Council General Research Fund (GRF), China (Nos. 12300218, 12300519, 117201020, 17300021, CRF C1013-21GF, C7004-21GF and Joint NSFC-RGC NHKU76921). Wu was supported by the National Natural Science Foundation of China (No. 62206111), the Young Talent Support Project of Guangzhou Association for Science and Technology, China (No. QT-2023-017), the Guangzhou Basic and Applied Basic Research Foundation, China (No. 2023A04J1058), the Fundamental Research Funds for the Central Universities, China (No. 21622326), and the China Postdoctoral Science Foundation (No. 2022M721343).
Abstract: Graph neural networks have been shown to be very effective in utilizing pairwise relationships across samples. Recently, there have been several successful proposals to generalize graph neural networks to hypergraph neural networks to exploit more complex relationships. In particular, hypergraph collaborative networks yield superior results compared to other hypergraph neural networks for various semi-supervised learning tasks. The collaborative network can provide high-quality vertex embeddings and hyperedge embeddings together by formulating them as a joint optimization problem and by using their consistency in reconstructing the given hypergraph. In this paper, we aim to establish the algorithmic stability of the core layer of the collaborative network and provide generalization guarantees. The analysis sheds light on the design of hypergraph filters in collaborative networks, for instance, how the data and hypergraph filters should be scaled to achieve uniform stability of the learning process. Some experimental results on real-world datasets are presented to illustrate the theory.
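For readers unfamiliar with hypergraph convolutions, the classical HGNN-style filter over an incidence matrix can be sketched as follows. Note this is the simpler non-collaborative filter; the collaborative-network layer analyzed in the paper jointly updates vertex and hyperedge embeddings rather than applying this single propagation:

```python
import numpy as np

def hypergraph_conv(H, X, Theta, w=None):
    """One hypergraph-convolution step in the standard HGNN form
        X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta,
    where H is the (n_vertices, n_edges) incidence matrix, W = diag(w) holds
    hyperedge weights, Dv/De are vertex/hyperedge degree matrices, and Theta
    is the learnable feature transform."""
    n, m = H.shape
    w = np.ones(m) if w is None else w
    Dv = (H * w).sum(1)                 # weighted vertex degrees
    De = H.sum(0)                       # hyperedge degrees
    Dv_is = np.diag(1.0 / np.sqrt(Dv))
    De_inv = np.diag(1.0 / De)
    return Dv_is @ H @ np.diag(w) @ De_inv @ H.T @ Dv_is @ X @ Theta
```

The stability analysis in the paper concerns how filters of this general shape, and their collaborative extension, should be scaled so that small perturbations of the training data produce bounded changes in the learned map.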
Funding: This work is partly supported by the Fundamental Research Funds for the Central Universities (CUC230A013), the Natural Science Foundation of Beijing Municipality (No. 4222038), and the National Natural Science Foundation of China (Grant No. 62176240).
Abstract: In recent years, deep learning methods have developed rapidly and found application in many fields, including natural language processing. In the field of aspect-level sentiment analysis, deep learning methods can also greatly improve the performance of models. However, previous studies did not take into account the relationship between user feature extraction and contextual terms. To address this issue, we combine data feature extraction and deep learning to develop an aspect-level sentiment analysis method. Specifically, we design user comment feature extraction (UCFE) to distill salient features from users' historical comments and transform them into representative user feature vectors. Then, the aspect-sentence graph convolutional neural network (ASGCN) is used to incorporate innovative techniques for calculating adjacency matrices; meanwhile, ASGCN emphasizes capturing nuanced semantics within the relationships among aspect words and syntactic dependency types. Afterward, three embedding methods are devised to embed the user feature vector into the ASGCN model. The empirical validations verify the effectiveness of these models, consistently surpassing conventional benchmarks and reaffirming the indispensable role of deep learning in advancing sentiment analysis methodologies.
Funding: supported by the National Key R&D Program of China (2021YFA1500900) and the National Natural Science Foundation of China (U21A20298, 22141001).
Abstract: Machine learning (ML) integrated with density functional theory (DFT) calculations has recently been used to accelerate the design and discovery of single-atom catalysts (SACs) by establishing deep structure–activity relationships. Traditional ML models often fail to identify the structural differences among single-atom systems with different modification methods, limiting their potential range of application. Targeting the structural properties of several typical two-dimensional MA_(2)Z_(4)-based single-atom systems (bare MA_(2)Z_(4) and metal single-atom doped/supported MA_(2)Z_(4)), an improved crystal graph convolutional neural network (CGCNN) classification model was employed, instead of a traditional machine learning regression model, to address the challenge of incompatibility among the studied systems. The CGCNN model was optimized using a crystal graph representation in which the geometric configuration is divided into an active layer, a surface layer, and a bulk layer (ASB-GCNN). Through ML and DFT calculations, five potential single-atom hydrogen evolution reaction (HER) catalysts were screened from a chemical space of 600 MA_(2)Z_(4)-based materials, especially V_(1)/HfSn_(2)N_(4)(S) with high stability and activity (Δ_(GH*) is 0.06 eV). Further projected density of states (pDOS) analysis, in combination with wave-function analysis of the SAC-H bond, revealed that the SAC d_(z^2) orbital coincides with the H s orbital around the energy level of −2.50 eV, and orbital analysis confirmed the formation of σ bonds. This study provides an efficient multistep screening design framework for metal single-atom catalysts in HER systems with similar two-dimensional supports but different geometric configurations.
Funding: supported by the U.S. National Science Foundation under Grant Nos. CCF-2131946, CCF-1953980, and CCF-1702980.
Abstract: Graph convolutional neural networks (GCNs) have emerged as an effective approach to extending deep learning for graph data analytics, but they are computationally challenging given the irregular graphs and the large number of nodes in a graph. GCNs involve chained sparse-dense matrix multiplications with six loops, which results in a large design space for GCN accelerators. Prior work on GCN acceleration either employs limited loop optimization techniques or determines the design variables based on random sampling, which can hardly exploit data reuse efficiently, thus degrading system efficiency. To overcome this limitation, this paper proposes GShuttle, a GCN acceleration scheme that maximizes memory access efficiency to achieve high performance and energy efficiency. GShuttle systematically explores loop optimization techniques for GCN acceleration and quantitatively analyzes the design objectives (e.g., required DRAM accesses and SRAM accesses) by analytical calculation based on multiple design variables. GShuttle further employs two approaches, pruned search space sweeping and greedy search, to find the optimal design variables under certain design constraints. We demonstrated the efficacy of GShuttle by evaluation on five widely used graph datasets. The experimental simulations show that GShuttle reduces the number of DRAM accesses by a factor of 1.5 and saves energy by a factor of 1.7 compared with state-of-the-art approaches.
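The pruned design-space sweep can be illustrated with a toy tiled-matmul cost model. The tile candidates, the SRAM estimate, and the DRAM-access formula below are textbook loop-tiling approximations chosen for illustration; GShuttle's analytical model covers the full six-loop sparse-dense chain:

```python
import itertools

def pruned_sweep(M, N, K, sram_budget):
    """Toy design-space sweep for a tiled (M x K) @ (K x N) accelerator:
    enumerate candidate tile sizes, prune configurations whose on-chip
    buffers exceed the SRAM budget, and keep the one with the fewest
    estimated DRAM accesses."""
    best = None
    for tm, tn, tk in itertools.product([16, 32, 64], repeat=3):
        sram = tm * tk + tk * tn + tm * tn        # A, B and C tiles kept on chip
        if sram > sram_budget:
            continue                              # pruned: does not fit
        # Standard tiling estimate: each A tile is re-read once per tile
        # column of B, each B tile once per tile row of A, C written once.
        dram = (M * K) * (N // tn) + (K * N) * (M // tm) + M * N
        if best is None or dram < best[1]:
            best = ((tm, tn, tk), dram)
    return best
```

Greedy search, the paper's second approach, would instead improve one design variable at a time rather than sweeping the whole pruned space.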
Abstract: Aiming at the problem that existing models in aspect-level sentiment analysis cannot fully and effectively utilize sentence semantic and syntactic structure information, this paper proposes a graph neural network-based aspect-level sentiment classification model. Self-attention, aspect-word multi-head attention, and dependency-syntax relations are fused, and the node representations are enhanced with graph convolutional networks to enable the model to fully learn the global semantic and syntactic structural information of sentences. Experimental results show that the model performs well on three public benchmark datasets, Rest14, Lap14, and Twitter, improving the accuracy of sentiment classification.
Funding: mainly supported by the Fundamental Research Program of Shanxi Province (No. 202203021211305) and the Shanxi Scholarship Council of China (2023-013).
Abstract: In the study of graph convolutional networks, the information aggregation of nodes is important for downstream tasks. However, current graph convolutional networks do not differentiate the importance of different neighboring nodes from the perspective of network topology when aggregating messages from neighboring nodes. Therefore, based on network topology, this paper proposes a weighted graph convolutional network based on network node degree and efficiency (W-GCN) for semi-supervised node classification. To distinguish the importance of nodes, this paper uses the degree and the efficiency of nodes in the network to construct an importance matrix of nodes, rather than the adjacency matrix, which in a graph convolutional network is usually a normalized symmetric Laplacian matrix, so that the weights of neighbor nodes can be assigned individually during the graph convolution operation. The proposed method is examined on several real benchmark datasets (Cora, CiteSeer, and PubMed) and compared with the graph convolutional network method. The experimental results show that the proposed W-GCN model achieves better prediction accuracy than the graph convolutional network model.
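The two topological quantities the abstract relies on can be sketched directly: node efficiency is the average inverse shortest-path length to every other node, and an importance matrix can weight each edge by its endpoint's degree and efficiency. How exactly W-GCN combines degree and efficiency is not specified in the abstract, so the product-and-normalize rule below is a hypothetical combination for illustration:

```python
import numpy as np
from collections import deque

def node_efficiency(adj_list, i):
    """Efficiency of node i: the average inverse shortest-path length from i
    to every other node (unreachable pairs contribute 0).  Distances are
    computed by breadth-first search on the unweighted graph."""
    n = len(adj_list)
    dist = {i: 0}
    q = deque([i])
    while q:
        u = q.popleft()
        for v in adj_list[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return sum(1.0 / d for v, d in dist.items() if v != i) / (n - 1)

def importance_matrix(adj_list):
    """Hypothetical W-GCN-style weighting: score each node by degree times
    efficiency, put each neighbor's score on the corresponding edge, add a
    self-loop carrying the node's own score (as self-loops do in GCNs), and
    row-normalize.  The paper's exact combination may differ."""
    n = len(adj_list)
    score = np.array([len(adj_list[v]) * node_efficiency(adj_list, v)
                      for v in range(n)])
    W = np.zeros((n, n))
    for u in range(n):
        for v in adj_list[u]:
            W[u, v] = score[v]
    W += np.eye(n) * score
    return W / W.sum(axis=1, keepdims=True)
```

On a path graph 0-1-2, the central node gets degree 2 and efficiency 1, so its messages dominate the normalized rows of its neighbors, which is exactly the differentiation by topology the model aims for.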