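The concatenative fusion and SVM classification steps described in the abstract can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the feature dimensions, sample counts, and random stand-ins for the CNN and miniGCN branch outputs are all hypothetical placeholders.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-pixel feature vectors from the two branches
# (stand-ins for real CNN / miniGCN outputs on an HS image).
n_samples = 200
cnn_feats = rng.standard_normal((n_samples, 64))   # spatial-spectral features
gcn_feats = rng.standard_normal((n_samples, 64))   # graph-relational features
labels = rng.integers(0, 3, size=n_samples)        # 3 dummy land-cover classes

# Concatenative fusion: stack the two feature blocks side by side.
fused = np.concatenate([cnn_feats, gcn_feats], axis=1)  # shape (200, 128)

# An SVM replaces the usual softmax layer in the classification stage.
clf = SVC(kernel="rbf").fit(fused, labels)
preds = clf.predict(fused[:5])
print(fused.shape, preds.shape)
```

With real branch outputs, the fused vectors would be produced per pixel and the SVM trained only on labeled training pixels; the sketch uses random features purely to show the data flow.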
Funding: Supported by the Research Start-up Fund for High-level Talents of Fuzhou University of International Studies and Trade [grant no. FWKQJ202006] and the 2022 Guiding Project of the Fujian Science and Technology Department [grant no. 2022H0026].
Abstract: Convolutional neural networks (CNNs) have gained popularity for classifying hyperspectral (HS) images due to their ability to capture spatial-spectral feature representations. However, their ability to model relationships between data samples is limited. Graph convolutional networks (GCNs) have been introduced as an alternative, as they are effective at representing and analyzing irregular data beyond grid-sampling constraints. While GCNs have traditionally been computationally intensive, minibatch GCNs (miniGCNs) enable minibatch training of large-scale GCNs. We improve classification performance by using miniGCNs to infer out-of-sample data without retraining the network. In addition, fusing the capabilities of CNNs and GCNs through concatenative fusion has been shown to improve performance compared with using CNNs or GCNs individually. Finally, a support vector machine (SVM) is employed instead of softmax in the classification stage. These techniques were tested on two HS datasets, achieving an average accuracy of 92.80% on the Indian Pines dataset and demonstrating the effectiveness of miniGCNs and the fusion strategy.