Journal Articles
2 articles found
1. Semantic and spatial-spectral feature fusion transformer network for the classification of hyperspectral image
Authors: Erxin Xie, Na Chen, Jiangtao Peng, Weiwei Sun, Qian Du, Xinge You. CAAI Transactions on Intelligence Technology (SCIE, EI), 2023, Issue 4, pp. 1308-1322 (15 pages).
Recently, transformer-based networks have been introduced for the classification of hyperspectral images (HSI). Although transformer-based methods can capture spectral sequence information well, their ability to fuse the different types of information contained in HSI is still insufficient. To exploit the rich spectral, spatial and semantic information in HSI, a novel semantic and spatial-spectral feature fusion transformer (S3FFT) network is proposed in this study. In the proposed S3FFT method, spatial attention and efficient channel attention (ECA) modules are employed to extract shallow spatial-spectral features. Then, a transformer-based module is designed to extract advanced fused features and to produce the pseudo-label and class probability of each pixel for semantic feature extraction. Finally, the semantic, spatial and spectral features are combined by the transformer for classification. Compared with traditional deep learning methods and recent transformer-based methods, the proposed S3FFT shows relatively better results on three HSI datasets.
Keywords: image classification; machine learning
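The shallow feature-extraction step in this abstract uses efficient channel attention (ECA): pool each channel to a single descriptor, model local cross-channel interaction with a small 1-D convolution, and gate the feature map with sigmoid weights. As an illustration only (not the authors' code), a minimal NumPy sketch of such a gate might look like this; the function name `eca_attention` and the fixed averaging kernel are assumptions for the example:

```python
import numpy as np

def eca_attention(feat, k=3):
    """ECA-style channel gate over a (C, H, W) feature map.

    Global average pooling yields one descriptor per channel; a 1-D
    convolution of kernel size k over the channel axis captures local
    cross-channel interaction; a sigmoid turns the result into
    per-channel weights that rescale the input.
    """
    c, _, _ = feat.shape
    desc = feat.mean(axis=(1, 2))                 # (C,) channel descriptors
    pad = k // 2
    padded = np.pad(desc, pad, mode="edge")       # pad channel axis at both ends
    kernel = np.full(k, 1.0 / k)                  # illustrative fixed 1-D kernel
    conv = np.array([padded[i:i + k] @ kernel for i in range(c)])
    weights = 1.0 / (1.0 + np.exp(-conv))         # sigmoid gate in (0, 1)
    return feat * weights[:, None, None]          # rescale each channel

x = np.random.rand(8, 5, 5)
y = eca_attention(x)
print(y.shape)
```

In a trained network the 1-D kernel would be learned rather than fixed; the sketch only shows the data flow of the attention gate.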
2. Robust training of open-set graph neural networks on graphs with in-distribution and out-of-distribution noise
Authors: Sichao Fu, Qinmu Peng, Weihua Ou, Bin Zou, Xiao-Yuan Jing, Xinge You. Science China (Technological Sciences), 2026, Issue 3, pp. 225-240 (16 pages).
The node labels collected from real-world applications are often accompanied by in-distribution noise (seen-class nodes with wrong labels) and out-of-distribution noise (unseen-class nodes with seen-class labels), which significantly degrades the performance of recently emerged open-set graph neural networks (GNNs). To date, only a few researchers have attempted to introduce sample selection strategies developed in non-graph areas to limit the influence of noisy node labels. These studies often neglect the impact of inaccurate graph structure relationships and the ineffective use of noisy and unlabeled nodes' self-supervision information for constraining noisy node labels. More importantly, simply improving the accuracy of graph structure relationships or the utilization of nodes' self-supervision information still cannot minimize the influence of noisy node labels on open-set GNNs. In this paper, we propose a novel RT-OGNN (robust training of open-set GNN) framework to solve the above-mentioned issues. Specifically, an effective graph structure learning module is proposed to weaken the impact of structure noise and extend the receptive field of nodes. Then, the augmented graph is sent to a pair of peer GNNs to accurately distinguish the noisy node labels of labeled nodes. Third, label propagation and multilayer perceptron-based decoder modules are simultaneously introduced to discover more supervision information from the remaining nodes apart from the clean nodes. Finally, we jointly optimize the above modules and the open-set GNN in an end-to-end way via a consistency regularization loss and a cross-entropy loss, which minimizes the influence of noisy node labels and provides more supervision guidance for open-set GNN optimization. Extensive experiments on three benchmarks and various noise rates validate the superiority of RT-OGNN over state-of-the-art models.
Keywords: graph neural networks; open-set recognition; in-distribution noise; out-of-distribution noise
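One component this abstract names is label propagation, which diffuses trusted labels from a clean subset of nodes over the graph structure to recover supervision for the remaining nodes. The following NumPy sketch is a generic illustration of that idea, not the RT-OGNN module itself; the function name, the toy graph, and the damping factor `alpha` are assumptions for the example:

```python
import numpy as np

def label_propagation(adj, labels, mask, n_iter=20, alpha=0.9):
    """Diffuse labels from a trusted subset of nodes over a graph.

    adj: (N, N) symmetric adjacency matrix; labels: (N,) integer class
    ids, trusted only where mask is True. Returns (N, C) soft labels.
    """
    n = adj.shape[0]
    c = labels.max() + 1
    # Symmetric normalization D^{-1/2} A D^{-1/2}
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    s = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # One-hot seeds on the trusted (clean) nodes only
    y0 = np.zeros((n, c))
    y0[np.arange(n)[mask], labels[mask]] = 1.0
    y = y0.copy()
    for _ in range(n_iter):
        y = alpha * (s @ y) + (1 - alpha) * y0  # diffuse, re-inject seeds
    return y / np.clip(y.sum(axis=1, keepdims=True), 1e-12, None)

# Two triangles joined by one bridge edge; one trusted label per triangle.
adj = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[a, b] = adj[b, a] = 1.0
labels = np.array([0, 0, 0, 1, 1, 1])
mask = np.array([True, False, False, False, False, True])
soft = label_propagation(adj, labels, mask)
print(soft.argmax(axis=1))
```

In the toy graph, the unlabeled nodes in each triangle inherit the class of the nearby trusted seed, which is the behavior the framework exploits to pull extra supervision out of nodes it would otherwise discard.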