Images obtained from hyperspectral sensors provide information about the target area that extends beyond the visible portions of the electromagnetic spectrum. However, due to sensor limitations and imperfections during the image acquisition and transmission phases, noise is introduced into the acquired image, which can have a negative impact on downstream analyses such as classification, target tracking, and spectral unmixing. Noise in hyperspectral images (HSI) is modelled as a combination of several sources, including Gaussian/impulse noise, stripes, and deadlines. An HSI restoration method for such a mixed noise model is proposed. First, a joint optimisation framework is proposed for recovering hyperspectral data corrupted by mixed Gaussian-impulse noise by estimating both the clean data and the sparse/impulse noise levels. Second, a hyper-Laplacian prior is used along both the spatial and spectral dimensions to express sparsity in clean image gradients. Third, to model the sparse nature of impulse noise, an ℓ_1-norm over the impulse noise gradient is used. Because the proposed methodology employs two distinct priors, the authors refer to it as the hyperspectral dual-prior (HySpDualP) denoiser. To the best of the authors' knowledge, this joint optimisation framework is the first attempt in this direction. To handle the non-smooth and non-convex nature of the general ℓ_p-norm-based regularisation term, a generalised shrinkage/thresholding (GST) solver is employed. Finally, an efficient split-Bregman approach is used to solve the resulting optimisation problem. Experimental results on synthetic data and a real HSI datacube demonstrate that the proposed model outperforms state-of-the-art methods, both visually and in terms of various image quality assessment metrics.
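The GST step referenced above can be illustrated with a minimal scalar iteration for the proximal problem argmin_x 0.5(x − y)² + λ|x|^p, following the generalised soft-thresholding scheme of Zuo et al.; this is an illustrative sketch, not the authors' solver, and the function name is invented here.

```python
import numpy as np

def gst(y, lam, p, iters=10):
    """Generalised soft-thresholding: approximately solves
    argmin_x 0.5*(x - y)**2 + lam*|x|**p elementwise for 0 < p < 1."""
    y = np.asarray(y, dtype=float)
    # threshold below which the minimiser is exactly zero
    tau = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p)) \
          + lam * p * (2.0 * lam * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    x = np.abs(y).copy()
    for _ in range(iters):
        # fixed-point iteration on the stationarity condition
        x = np.abs(y) - lam * p * np.maximum(x, 1e-12) ** (p - 1.0)
        x = np.maximum(x, 0.0)
    x[np.abs(y) <= tau] = 0.0
    return np.sign(y) * x

out = gst(np.array([5.0, 0.001, -5.0]), lam=0.1, p=0.5)
```

Small inputs fall below the threshold and are set exactly to zero, while large inputs are only mildly shrunk; this is what makes the ℓ_p penalty a sparser prior than ℓ_1.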
Due to the limitations of existing imaging hardware, obtaining high-resolution hyperspectral images is challenging. Hyperspectral image super-resolution (HSI SR) has therefore been a very attractive research topic in computer vision, attracting the attention of many researchers. However, most HSI SR methods focus on the trade-off between spatial resolution and spectral information and cannot guarantee the efficient extraction of image information. In this paper, a multidimensional features network (MFNet) for HSI SR is proposed, which simultaneously learns and fuses the spatial, spectral, and frequency features of HSI. Spatial features contain rich local details, spectral features contain the information in and correlations between spectral bands, and frequency features reflect the global information of the image and can be used to obtain the global context of HSI. Fusing the three features better guides image super-resolution and yields higher-quality high-resolution hyperspectral images. In MFNet, a frequency feature extraction module (FFEM) extracts the frequency feature; on this basis, a multidimensional features extraction module (MFEM) is designed to learn and fuse multidimensional features. Experimental results on two public datasets demonstrate that MFNet achieves state-of-the-art performance.
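Why a frequency feature carries global context can be shown with a plain per-band 2-D FFT amplitude spectrum; this is an illustrative stand-in, not the FFEM itself, and the function name is hypothetical.

```python
import numpy as np

def frequency_feature(hsi):
    """Per-band log-amplitude spectrum of an HSI cube of shape (H, W, B).
    Every FFT coefficient depends on every pixel of the band, which is
    why frequency features encode global image structure."""
    spec = np.fft.fft2(hsi, axes=(0, 1))
    return np.log1p(np.abs(np.fft.fftshift(spec, axes=(0, 1))))

cube = np.random.rand(8, 8, 4)   # toy HSI cube: 8x8 pixels, 4 bands
feat = frequency_feature(cube)
```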
In Hyperspectral Imaging (HSI), the detrimental influence of noise and distortions on data quality is profound and has severely affected follow-on analytics and decision-making such as land mapping. This study presents an innovative framework for assessing HSI band quality and reconstructing the low-quality bands, based on the Prophet model. By first introducing a comprehensive quality metric, the authors' approach factors in both spatial and spectral characteristics across local and global scales. This metric effectively captures the intricate noise and distortions inherent in HSI data. Subsequently, the authors employ the Prophet model to forecast the information within low-quality bands, leveraging insights from neighbouring high-quality bands. To validate the effectiveness of the proposed model, extensive experiments are conducted on three publicly available uncorrected datasets. In a head-to-head comparison, the framework is benchmarked against six state-of-the-art band reconstruction algorithms, including three spectral methods, two spatial-spectral methods, and one deep learning method. The experiments also delve into strategies for band selection based on the quality metric and into the quality evaluation of the reconstructed bands. In addition, the authors assess the classification accuracy achieved with these reconstructed bands. Across experiments, the results consistently affirm the efficacy of the method in HSI quality assessment and band reconstruction. Notably, the approach obviates the need for manual pre-filtering of noisy bands. This comprehensive framework holds promise for addressing HSI data quality concerns while enhancing the overall utility of HSI.
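The idea of scoring a band by both its spatial and spectral behaviour can be sketched with a toy metric that rewards spatial smoothness and correlation with neighbouring bands; this is not the paper's metric, merely an assumed illustrative combination.

```python
import numpy as np

def band_quality(hsi):
    """Toy per-band quality score (illustrative only): a cleaner band
    has low gradient energy (spatial term) and correlates strongly
    with its neighbouring bands (spectral term)."""
    H, W, B = hsi.shape
    scores = np.zeros(B)
    for b in range(B):
        band = hsi[:, :, b]
        # spatial term: mean absolute finite-difference response
        grad = (np.abs(np.diff(band, axis=0)).mean()
                + np.abs(np.diff(band, axis=1)).mean())
        # spectral term: mean correlation with adjacent bands
        nbrs = [n for n in (b - 1, b + 1) if 0 <= n < B]
        corr = np.mean([np.corrcoef(band.ravel(), hsi[:, :, n].ravel())[0, 1]
                        for n in nbrs])
        scores[b] = corr / (1.0 + grad)
    return scores

rng = np.random.default_rng(0)
base = np.linspace(0.0, 1.0, 16)[:, None] * np.ones((1, 16))
cube = np.stack([base + 0.01 * rng.standard_normal((16, 16))
                 for _ in range(5)], axis=-1)
cube[:, :, 2] += rng.standard_normal((16, 16))   # corrupt the middle band
q = band_quality(cube)
```

The heavily corrupted band receives the lowest score, which is the behaviour a quality metric needs before forecasting that band from its high-quality neighbours.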
Among hyperspectral imaging technologies, interferometric spectral imaging is widely used in remote sensing owing to its large luminous flux and high resolution. However, with its complicated mechanism, interferometric imaging suffers multi-stage degradation. Most existing interferometric spectrum reconstruction methods follow a traditional model-based framework with multiple steps, showing poor efficiency and restricted performance. Thus, we propose an interferometric spectrum reconstruction method based on degradation synthesis and deep learning. Firstly, based on the imaging mechanism, we propose a mathematical model of interferometric imaging to analyse the degradation components, as noises and trends, introduced during imaging. The model consists of three stages, namely instrument degradation, sensing degradation, and a signal-independent degradation process. Then, we design a calibration-based method to estimate the parameters in the model, whose results are used to synthesise a realistic dataset for learning-based algorithms. In addition, we propose a dual-stage interferogram spectrum reconstruction framework, which supports pre-training and integration of denoising DNNs. Experiments demonstrate the reliability of our degradation model and synthesised data, and the effectiveness of the proposed reconstruction method.
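The clean forward model underlying interferometric imaging can be written as the cosine transform of the spectrum, I(δ) = Σ_ν S(ν)·cos(2πνδ); a sketch of this ideal (degradation-free) stage, onto which the noise and trend components described above would be added, might look like:

```python
import numpy as np

def ideal_interferogram(spectrum, wavenumbers, opd):
    """Ideal interferogram of a spectrum S(nu) sampled at the given
    optical path differences (OPD):  I(delta) = sum_nu S(nu)*cos(2*pi*nu*delta).
    Instrument, sensing, and signal-independent degradations would be
    applied on top of this signal."""
    return np.cos(2.0 * np.pi * np.outer(opd, wavenumbers)) @ spectrum

nu = np.linspace(1.0, 2.0, 64)             # wavenumbers (arbitrary units)
delta = np.linspace(0.0, 4.0, 128)         # optical path differences
spec = np.exp(-((nu - 1.5) ** 2) / 0.01)   # toy single-peak spectrum
igm = ideal_interferogram(spec, nu, delta)
```

At zero OPD all cosines equal one, so the interferogram's first sample equals the total spectral energy, the familiar centre burst of an interferogram.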
Convolutional neural networks (CNNs) have an excellent ability to model locally contextual information. However, CNNs face challenges in describing long-range semantic features, which leads to relatively low classification accuracy on hyperspectral images. To address this problem, this article proposes an algorithm based on multiscale fusion and a transformer network for hyperspectral image classification. Firstly, low-level spatial-spectral features are extracted by a multi-scale residual structure. Secondly, an attention module is introduced to focus on the more important spatial-spectral information. Finally, high-level semantic features are represented and learned by a token learner and an improved transformer encoder. The proposed algorithm is compared with six classical hyperspectral classification algorithms on real hyperspectral images. The experimental results show that the proposed algorithm effectively improves the land-cover classification accuracy of hyperspectral images.
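The mechanism that gives a transformer encoder its long-range reach is scaled dot-product attention, in which every token attends to every other token; a minimal NumPy sketch (not the article's encoder) is:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention.  Each output row is a convex
    combination of all value rows, so dependencies are global rather
    than limited to a convolutional receptive field."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)        # softmax over keys
    return w @ V

tokens = np.random.rand(6, 16)   # e.g. 6 spatial-spectral tokens, dim 16
out = attention(tokens, tokens, tokens)
```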
Hyperspectral images typically have high spectral resolution but low spatial resolution, which impacts the reliability and accuracy of subsequent applications, for example, remote sensing classification and mineral identification. In traditional methods based on deep convolutional neural networks, however, indiscriminately extracting and fusing spectral and spatial features makes it challenging to utilize the differentiated information across adjacent spectral channels. Thus, we propose a multi-branch interleaved iterative upsampling hyperspectral image super-resolution reconstruction network (MIIUSR) to address the above problems. We reinforce spatial feature extraction by integrating detailed features from different receptive fields across adjacent channels. Furthermore, we propose an interleaved iterative upsampling process during the reconstruction stage, which progressively fuses incremental information among adjacent frequency bands. Additionally, we add two parallel three-dimensional (3D) feature extraction branches to the backbone network to extract spectral and spatial features of varying granularity. We further enhance the backbone network's reconstruction results by leveraging the difference between two-dimensional (2D) channel-grouping spatial features and 3D multi-granularity features. The results obtained by applying the proposed network to the CAVE test set show that, at a scaling factor of ×4, the peak signal-to-noise ratio, spectral angle mapping, and structural similarity are 37.310 dB, 3.525, and 0.9438, respectively. Moreover, extensive experiments conducted on the Harvard and Foster datasets demonstrate the superior potential of the proposed model in hyperspectral super-resolution reconstruction.
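The two metrics quoted above, peak signal-to-noise ratio (PSNR) and spectral angle mapping (SAM), have standard definitions that can be computed directly; a sketch under the usual conventions (PSNR in dB for a given peak value, SAM as the mean per-pixel spectral angle in degrees):

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def sam(ref, est, eps=1e-12):
    """Mean spectral angle (degrees) between per-pixel spectra of two
    (H, W, B) cubes; lower is better."""
    a = ref.reshape(-1, ref.shape[-1])
    b = est.reshape(-1, est.shape[-1])
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps)
    return np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

rng = np.random.default_rng(1)
ref = rng.random((4, 4, 8))
```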
Hyperspectral image classification stands as a pivotal task within the field of remote sensing, yet achieving high-precision classification remains a significant challenge. In response to this challenge, a Spectral Convolutional Neural Network model based on the Adaptive Fick's Law Algorithm (AFLA-SCNN) is proposed. The Adaptive Fick's Law Algorithm (AFLA) constitutes a novel metaheuristic algorithm introduced herein, encompassing three new strategies: an adaptive weight factor, Gaussian mutation, and a probability update policy. With the adaptive weight factor, the algorithm can adjust the weights according to the change in the number of iterations to improve its performance. Gaussian mutation helps the algorithm avoid falling into local optimal solutions and improves its searchability. The probability update strategy helps to improve the exploitability and adaptability of the algorithm. Within the AFLA-SCNN model, AFLA is employed to optimize two hyperparameters of the SCNN model, namely "numEpochs" and "miniBatchSize", to attain their optimal values. AFLA's performance is initially validated across 28 functions in 10D, 30D, and 50D for CEC2013 and 29 functions in 10D, 30D, and 50D for CEC2017. Experimental results indicate AFLA's marked performance superiority over nine other prominent optimization algorithms. Subsequently, the AFLA-SCNN model was compared with the Spectral Convolutional Neural Network model based on Fick's Law Algorithm (FLA-SCNN), the Spectral Convolutional Neural Network model based on Harris Hawks Optimization (HHO-SCNN), the Spectral Convolutional Neural Network model based on Differential Evolution (DE-SCNN), the Spectral Convolutional Neural Network (SCNN) model, and the Support Vector Machines (SVM) model on the Indian Pines and Pavia University datasets. The experimental results show that the AFLA-SCNN model outperforms the other models in terms of Accuracy, Precision, Recall, and F1-score on Indian Pines and Pavia University. In particular, the Accuracy of the AFLA-SCNN model reached 99.875% on Indian Pines and 98.022% on Pavia University. In conclusion, the proposed AFLA-SCNN model significantly enhances the precision of hyperspectral image classification.
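The Gaussian-mutation strategy can be illustrated with a stripped-down search over the two hyperparameters named above; this is only a sketch of the mutation idea, not the full AFLA, and the toy loss function stands in for a real validation run.

```python
import numpy as np

def gaussian_mutation_search(objective, bounds, iters=200, sigma=0.1, seed=0):
    """Minimal sketch: perturb the incumbent with Gaussian noise scaled
    to the search range and keep improvements.  Gaussian jumps let the
    search escape shallow local minima that greedy steps would keep."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    best = rng.uniform(lo, hi)
    best_f = objective(best)
    for _ in range(iters):
        cand = np.clip(best + rng.normal(0.0, sigma * (hi - lo)), lo, hi)
        f = objective(cand)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f

# toy stand-in for validation loss over (numEpochs, miniBatchSize)
loss = lambda x: (x[0] - 30.0) ** 2 + (x[1] - 64.0) ** 2
best, best_f = gaussian_mutation_search(loss, [(1, 100), (1, 256)])
```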
Disjoint sampling is critical for the rigorous and unbiased evaluation of state-of-the-art (SOTA) models, e.g., Attention Graph and Vision Transformer. When training, validation, and test sets overlap or share data, a bias is introduced that inflates performance metrics and prevents accurate assessment of a model's true ability to generalize to new examples. This paper presents an innovative disjoint sampling approach for training SOTA models for Hyperspectral Image Classification (HSIC). By separating training, validation, and test data without overlap, the proposed method facilitates a fairer evaluation of how well a model can classify pixels it was not exposed to during training or validation. Experiments demonstrate that the approach significantly improves a model's generalization compared to alternatives that include training and validation data in the test data (a trivial approach involves testing the model on the entire hyperspectral dataset to generate the ground-truth maps; this produces higher accuracy but ultimately results in low generalization performance). Disjoint sampling eliminates data leakage between sets and provides reliable metrics for benchmarking progress in HSIC, and it is critical for advancing SOTA models and their real-world application to large-scale land mapping with hyperspectral sensors. Overall, with the disjoint test set, the deep models achieve 96.36% accuracy on Indian Pines data, 99.73% on Pavia University data, 98.29% on University of Houston data, 99.43% on Botswana data, and 99.88% on Salinas data.
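A per-class disjoint split of labelled pixels can be sketched as follows (an assumed generic implementation of the idea, not the paper's code):

```python
import numpy as np

def disjoint_split(labels, train=0.6, val=0.2, seed=0):
    """Per-class disjoint train/val/test split of labelled pixel indices.
    No pixel appears in more than one set, so test accuracy measures
    generalization rather than memorization."""
    rng = np.random.default_rng(seed)
    tr, va, te = [], [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        n_tr = int(train * len(idx))
        n_va = int(val * len(idx))
        tr += list(idx[:n_tr])
        va += list(idx[n_tr:n_tr + n_va])
        te += list(idx[n_tr + n_va:])
    return np.array(tr), np.array(va), np.array(te)

labels = np.repeat([0, 1, 2], 50)     # toy ground-truth map, flattened
tr, va, te = disjoint_split(labels)
```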
Graph learning, when used as a semi-supervised learning (SSL) method, performs well for classification tasks with a low label rate. We provide a graph-based batch active learning pipeline for pixel/patch-neighborhood multi- or hyperspectral image segmentation. Our batch active learning approach selects a collection of unlabeled pixels that satisfy a graph local-maximum constraint for the active learning acquisition function that determines the relative importance of each pixel to the classification. This work builds on recent advances in the design of novel active learning acquisition functions (e.g., the Model Change approach in arXiv:2110.07739) while adding important further developments, including patch-neighborhood image analysis and batch active learning methods, to further increase the accuracy and greatly increase the computational efficiency of these methods. In addition to improving accuracy, our approach can greatly reduce the number of labeled pixels needed to achieve the same level of accuracy obtained with randomly selected labeled pixels.
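The graph local-maximum constraint can be sketched on a toy graph: a pixel joins the batch only if its acquisition value is at least that of all its graph neighbours, which spreads one batch of queries across distinct uncertain regions. This is an assumed minimal reading of the constraint, not the authors' pipeline.

```python
import numpy as np

def local_max_batch(acq, neighbors, k):
    """Select up to k nodes whose acquisition value is a local maximum
    over their graph neighbourhood, visiting nodes in decreasing
    acquisition order."""
    batch = []
    for i in np.argsort(-acq):
        if all(acq[i] >= acq[j] for j in neighbors[i]):
            batch.append(int(i))
        if len(batch) == k:
            break
    return batch

# toy 6-node path graph with an acquisition value per node
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
acq = np.array([0.1, 0.9, 0.3, 0.2, 0.8, 0.4])
batch = local_max_batch(acq, nbrs, k=2)
```

Nodes 1 and 4 are both selected even though node 2 scores higher than node 4's neighbours elsewhere, because the constraint forbids picking two adjacent peaks from the same uncertain region.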
The extraction of vegetation pigment concentration and physiological status has been studied in two test areas covered with swamp and flourishing vegetation, using pushbroom hyperspectral imager (PHI) data flown in September 2000 over the Daxing'anling district of Heilongjiang Province, China. The ratio analysis of reflectance spectra (RARS) indices, put forward by Chappelle et al (1992), are chosen in this paper owing to their effectiveness and simplicity in comparison with various other methods and techniques for estimating pigment concentration, and owing to the characteristics of PHI data. The correlation coefficients between the RARS indices and vegetation pigment concentrations were up to 0.8. New RARS index models are established for the two test areas using both PHI data and spectra of different vegetation types measured in the field. Index parameter images of chlorophyll a (Chl a), chlorophyll b (Chl b), and carotenoids (Cars) of the two test areas are derived from the new RARS index models. Furthermore, the regional concentrations of Chl a and Chl b are extracted and quantified using the regression equations between RARS indices and pigment concentrations built by Blackburn (1998). The results clearly show the physiological status and its variation, and are in good agreement with the distribution of vegetation in the field.
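The RARS indices are simple band ratios; a sketch using the forms commonly quoted from Chappelle et al. (1992) follows. The band assignments (675, 700, 650, 760, 500 nm) are the commonly cited ones and should be checked against the original paper before use.

```python
import numpy as np

def rars_indices(wl, refl):
    """RARS band-ratio indices:
    RARSa = R675/R700          (chlorophyll a)
    RARSb = R675/(R650*R700)   (chlorophyll b)
    RARSc = R760/R500          (carotenoids)
    where R(x) is reflectance at the band centre nearest x nm."""
    R = lambda x: refl[np.argmin(np.abs(wl - x))]
    return {"RARSa": R(675) / R(700),
            "RARSb": R(675) / (R(650) * R(700)),
            "RARSc": R(760) / R(500)}

wl = np.arange(400, 1000, 5, dtype=float)   # band centres in nm
refl = 0.2 + 0.3 * (wl > 720)               # toy red-edge reflectance
idx = rars_indices(wl, refl)
```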
Graph learning is an effective means of analyzing the intrinsic properties of data. It has been widely used for dimensionality reduction and classification of data. In this paper, we focus on graph learning-based dimensionality reduction for hyperspectral images. Firstly, we review the development of graph learning and its application to hyperspectral images. Then, we discuss several representative graph methods, including two manifold learning methods, two sparse graph learning methods, and two hypergraph learning methods. For manifold learning, we analyze neighborhood preserving embedding and locality preserving projections, two classic manifold learning methods that can be transformed into the form of a graph. For sparse graphs, we introduce sparsity preserving graph embedding and sparse graph-based discriminant analysis, which can adaptively reveal the data structure to construct a graph. For hypergraph learning, we review the binary hypergraph and discriminant hyper-Laplacian projection, which can represent high-order relationships in data.
Hyperspectral images (HSI) provide a new way to exploit the internal physical composition of a land scene. The basic platforms for acquiring HSI datasets are airborne or spaceborne spectral imagers. Retrieving useful information from hyperspectral images can be grouped into four categories. (1) Classification: hyperspectral images provide so much spectral and spatial information that remotely sensed image classification has become a complex task. (2) Endmember extraction and spectral unmixing: among image types, only HSI have a complete model to represent the internal structure of each pixel, in which the endmembers are the elements. Identification of endmembers from HSI thus becomes the foremost step in interpreting each pixel; with proper endmembers, the corresponding abundances can also be exactly calculated. (3) Target detection: another practical problem is how to determine the existence of certain resolved or full-pixel objects against a complex background. Constructing a reliable rule for separating target signals from all other background signals, even in the case of low target occurrence and high spectral variation, is the key to this problem. (4) Change detection: although change detection is not a new problem, detecting changes from hyperspectral images has brought new challenges, since the spectral bands are so numerous that accurate band-to-band correspondences are required, and minor changes in subclass land objects can be depicted in HSI. In this paper, the basic theory and the most canonical works are discussed, along with the most recent advances in each aspect of hyperspectral image processing.
Hyperspectral image super-resolution, which refers to reconstructing a high-resolution hyperspectral image from an input low-resolution observation, aims to improve the spatial resolution of the hyperspectral image, which is beneficial for subsequent applications. The development of deep learning has promoted significant progress in hyperspectral image super-resolution, and the powerful expressive capability of deep neural networks makes the predicted results more reliable. Recently, several of the latest deep learning technologies have driven an explosion of hyperspectral image super-resolution methods. However, a comprehensive review and analysis of the latest deep learning methods from the hyperspectral image super-resolution perspective has been absent. To this end, in this survey, we first introduce the concept of hyperspectral image super-resolution and classify the methods according to whether auxiliary information is used. Then, we review the learning-based methods in three categories: single hyperspectral image super-resolution, panchromatic-based hyperspectral image super-resolution, and multispectral-based hyperspectral image super-resolution. Subsequently, we summarize the commonly used hyperspectral datasets and evaluate representative methods in the three categories qualitatively and quantitatively. Moreover, we briefly introduce several typical applications of hyperspectral image super-resolution, including ground-object classification, urban change detection, and ecosystem monitoring. Finally, we provide conclusions and discuss the challenges of existing learning-based methods, looking forward to potential future research directions.
Hyperspectral image (HSI) classification has been one of the most important tasks in the remote sensing community over the last few decades. Due to the presence of highly correlated bands and limited training samples in HSI, discriminative feature extraction was challenging for traditional machine learning methods. Recently, deep learning-based methods have been recognized as powerful feature extraction tools and have drawn a significant amount of attention in HSI classification. Among various deep learning models, convolutional neural networks (CNNs) have shown huge success and offer great potential to yield high performance in HSI classification. Motivated by this successful performance, this paper presents a systematic review of different CNN architectures for HSI classification and provides some future guidelines. To accomplish this, our study takes a few important steps. First, we focus on the different CNN architectures that are able to extract spectral, spatial, and joint spectral-spatial features. Then, many publications related to CNN-based HSI classification are reviewed systematically. Further, a detailed comparative performance analysis is presented between four CNN models, namely the 1D CNN, 2D CNN, 3D CNN, and feature-fusion-based CNN (FFCNN). Four benchmark HSI datasets are used in our experiments to evaluate performance. Finally, we conclude the paper with the challenges of CNN-based HSI classification and future guidelines that may help researchers working on HSI classification with CNNs.
With a limited number of labeled samples, hyperspectral image (HSI) classification is a difficult problem in current research. The graph neural network (GNN) has emerged as an approach to semi-supervised classification, and the application of GNNs to hyperspectral images has attracted much attention. However, existing GNN-based methods mainly use a single graph neural network or graph filter to extract HSI features, which does not take full advantage of the variety of graph neural networks (graph filters). Moreover, traditional GNNs suffer from oversmoothing. To alleviate these shortcomings, we introduce a deep hybrid multi-graph neural network (DHMG), in which two different graph filters, i.e., the spectral filter and the autoregressive moving average (ARMA) filter, are utilized in two branches. The former extracts the spectral features of the nodes well, and the latter has a good suppression effect on graph noise. The network realizes information interaction between the two branches and takes good advantage of the different graph filters. In addition, to address the problem of oversmoothing, a dense network is proposed in which the local graph features are preserved. The dense structure satisfies the needs of different classification targets presenting different features. Finally, we introduce a GraphSAGE-based network to refine the graph features produced by the deep hybrid network. Extensive experiments on three public HSI datasets strongly demonstrate that DHMG dramatically outperforms state-of-the-art models.
The Low-Rank and Sparse Representation (LRSR) method has gained popularity in Hyperspectral Image (HSI) processing. However, existing LRSR models rarely exploit spectral-spatial classification of HSI. In this paper, we propose a novel Low-Rank and Sparse Representation with Adaptive Neighborhood Regularization (LRSR-ANR) method for HSI classification. In the proposed method, we first represent the hyperspectral data via LRSR, since it combines both sparsity and low-rankness to maintain global and local data structures simultaneously. The LRSR is optimized using a mixed Gauss-Seidel and Jacobian Alternating Direction Method of Multipliers (M-ADMM), which converges faster than ADMM. Then, to incorporate spatial information, an ANR scheme is designed by combining Euclidean and Cosine distance metrics to reduce the influence of mixed pixels within a neighborhood. Lastly, the predicted labels are determined by jointly considering the homogeneous pixels in a minimum-reconstruction-error classification rule. Experimental results on three popular hyperspectral images demonstrate that the proposed method outperforms related methods in terms of classification accuracy and generalization performance.
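The idea of combining Euclidean and Cosine distances within a neighbourhood can be sketched as a weighting that down-weights mixed pixels; the exact combination rule in LRSR-ANR may differ, so the form below (a convex blend of the two normalized distances, turned into softmax-style weights) is an assumption for illustration.

```python
import numpy as np

def anr_weights(center, neighborhood, alpha=0.5):
    """Illustrative adaptive-neighbourhood weights: Euclidean distance
    reacts to brightness differences, cosine distance to spectral-shape
    differences; pixels unlike the centre get small weights."""
    eu = np.linalg.norm(neighborhood - center, axis=1)
    cos = 1.0 - (neighborhood @ center) / (
        np.linalg.norm(neighborhood, axis=1) * np.linalg.norm(center) + 1e-12)
    d = alpha * eu / (eu.max() + 1e-12) + (1.0 - alpha) * cos
    w = np.exp(-d)
    return w / w.sum()

center = np.array([1.0, 2.0, 3.0])
nbhd = np.stack([center,                      # identical spectrum
                 1.1 * center,                # same shape, brighter
                 np.array([3.0, 2.0, 1.0])])  # different shape (mixed pixel)
w = anr_weights(center, nbhd)
```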
To compress hyperspectral images, a low-complexity discrete cosine transform (DCT)-based distributed source coding (DSC) scheme with Gray code is proposed. Unlike most existing DSC schemes, which apply the transform in the spatial domain, the proposed algorithm applies the transform in the spectral domain. A set-partitioning-based approach is applied to reorganize the DCT coefficients into a wavelet-like tree structure and extract the sign, refinement, and significance bitplanes. The extracted refinement bits are Gray encoded. Because of the dependency along the line dimension of hyperspectral images, a low-density parity-check (LDPC)-based Slepian-Wolf coder is adopted to implement the DSC strategy. Experimental results on an airborne visible/infrared imaging spectrometer (AVIRIS) dataset show that the proposed paradigm achieves up to 6 dB improvement over DSC-based coders that apply the transform in the spatial domain, with significantly reduced computational complexity and memory storage.
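The Gray encoding of refinement values follows the standard binary-reflected construction, g = b XOR (b >> 1); adjacent integers then differ in exactly one bit, which lowers the bitplane crossover rate seen by the Slepian-Wolf (LDPC) coder. A self-contained sketch:

```python
import numpy as np

def gray_encode(b):
    """Binary-reflected Gray code of non-negative integers."""
    return b ^ (b >> 1)

def gray_decode(g):
    """Inverse mapping: XOR of all right shifts of the Gray codeword."""
    b = g.copy()
    mask = g >> 1
    while np.any(mask):
        b ^= mask
        mask >>= 1
    return b

x = np.arange(16)
g = gray_encode(x)
```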
Hyperspectral images (HSI) contain a wealth of spectral information, which makes fine classification of ground objects possible. Meanwhile, the overly redundant information in HSI brings many challenges: specifically, the lack of training samples and the high computational cost are inevitable obstacles in classifier design. To solve these problems, dimensionality reduction is usually adopted, and graph-based dimensionality reduction has recently become a hot topic. In this paper, graph-based methods for HSI dimensionality reduction are summarized from the following aspects. 1) Traditional graph-based methods employ Euclidean distance to explore the local information of samples in the spectral feature space. 2) Dimensionality-reduction methods based on sparse or collaborative representation regard the sparse or collaborative coefficients as graph weights, effectively reducing reconstruction errors and representing the most important information of the HSI in a dictionary. 3) Improved methods based on sparse or collaborative graphs have made great progress by considering global low-rank information, local intra-class information, and spatial information. To compare typical techniques, three real HSI datasets were used to carry out relevant experiments, and the experimental results are analysed and discussed. Finally, the future development of this research field is considered.
Most methods for classifying hyperspectral data only consider the local spatial relationship among samples, ignoring the important non-local topological relationship, which is better at representing the structure of hyperspectral data. This paper proposes a deep learning model called the topology and semantic information fusion classification network (TSFnet), which incorporates a topology structure and a semantic information transmission network to accurately classify traditional Chinese medicine in hyperspectral images. TSFnet uses a convolutional neural network (CNN) to extract features and a graph convolutional network (GCN) to capture potential topological relationships among different types of Chinese herbal medicines. The results show that TSFnet outperforms other state-of-the-art deep learning classification algorithms in two different scenarios of herbal medicine datasets. Additionally, the proposed TSFnet model is lightweight and can easily be deployed for mobile herbal medicine classification.
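How a GCN propagates non-local topological relationships can be shown with the standard single-layer update H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W); this is the generic Kipf-style layer, not TSFnet's specific architecture.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: each node aggregates features from
    its topological neighbours (via the normalized adjacency) before
    the shared linear transform and ReLU."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))  # D^-1/2 (A+I) D^-1/2
    return np.maximum(A_norm @ H @ W, 0.0)

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # toy 3-node topology
H = np.random.rand(3, 4)                 # node features
W = np.random.rand(4, 2)                 # learnable weights
out = gcn_layer(A, H, W)
```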
Accurate histopathology classification is a crucial factor in the diagnosis and treatment of cholangiocarcinoma (CCA). Hyperspectral images (HSI) provide richer spectral information than ordinary RGB images, making them more useful for medical diagnosis. The convolutional neural network (CNN) is commonly employed in hyperspectral image classification due to its remarkable capacity for feature extraction and image classification. However, many existing CNN-based HSI classification methods tend to ignore the importance of image spatial context information and the interdependence between spectral channels, leading to unsatisfactory classification performance. To address these issues, this paper proposes a Spatial-Spectral Joint Network (SSJN) model for hyperspectral image classification that utilizes spatial self-attention and spectral feature extraction. The SSJN model is derived from the ResNet18 network and implemented with non-local and Coordinate Attention (CA) modules, which extract long-range dependencies in image space, and it enhances spatial features through a Branch Attention (BA) module to emphasize the region of interest. Furthermore, the SSJN model employs Conv-LSTM modules to extract long-range dependencies in the image spectral domain. This addresses the gradient vanishing/exploding phenomena and enhances the model's classification accuracy. The experimental results show that the proposed SSJN model is more efficient in leveraging the spatial and spectral information of hyperspectral images on multidimensional microspectral datasets of CCA, leading to higher classification accuracy, and may provide a useful reference for the medical diagnosis of CCA.
Abstract: Images obtained from hyperspectral sensors provide information about the target area that extends beyond the visible portions of the electromagnetic spectrum. However, due to sensor limitations and imperfections during the image acquisition and transmission phases, noise is introduced into the acquired image, which can have a negative impact on downstream analyses such as classification, target tracking, and spectral unmixing. Noise in hyperspectral images (HSI) is modelled as a combination of several sources, including Gaussian/impulse noise, stripes, and deadlines. An HSI restoration method for such a mixed noise model is proposed. First, a joint optimisation framework is proposed for recovering hyperspectral data corrupted by mixed Gaussian-impulse noise by estimating both the clean data and the sparse/impulse noise levels. Second, a hyper-Laplacian prior is used along both the spatial and spectral dimensions to express sparsity in clean image gradients. Third, to model the sparse nature of impulse noise, an ℓ1-norm over the impulse noise gradient is used. Because the proposed methodology employs two distinct priors, the authors refer to it as the hyperspectral dual-prior (HySpDualP) denoiser. To the best of the authors' knowledge, this joint optimisation framework is the first attempt in this direction. To handle the non-smooth and non-convex nature of the general ℓp-norm-based regularisation term, a generalised shrinkage/thresholding (GST) solver is employed. Finally, an efficient split-Bregman approach is used to solve the resulting optimisation problem. Experimental results on synthetic data and a real HSI datacube obtained from hyperspectral sensors demonstrate that the authors' proposed model outperforms state-of-the-art methods, both visually and in terms of various image quality assessment metrics.
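The ℓp shrinkage step mentioned above can be sketched with the generalized shrinkage/thresholding (GST) fixed-point iteration of Zuo et al.; this is an illustrative solver for the scalar proximal problem min_x 0.5(x−y)² + λ|x|^p with 0 < p ≤ 1, not the authors' exact HySpDualP implementation:

```python
import numpy as np

def gst(y, lam, p, n_iter=10):
    """Generalized shrinkage/thresholding for 0 < p <= 1:
    approximately solves min_x 0.5*(x - y)**2 + lam*|x|**p elementwise.
    Illustrative sketch only, not the paper's solver."""
    y = np.asarray(y, dtype=float)
    # threshold below which the minimizer is exactly zero
    tau = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p)) \
          + lam * p * (2.0 * lam * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    x = np.abs(y)
    out = np.zeros_like(x)
    mask = x > tau
    xk = x[mask]                       # initialize at |y|
    for _ in range(n_iter):            # fixed-point iteration toward the root
        xk = x[mask] - lam * p * xk ** (p - 1.0)
    out[mask] = np.sign(y)[mask] * xk
    return out
```

At p = 1 the threshold reduces to λ and the iteration reduces to ordinary soft thresholding, which gives a quick sanity check.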
Funding: Supported by the Fundamental Research Funds for the Provincial Universities of Zhejiang (No. GK249909299001-036); the National Key Research and Development Program of China (No. 2023YFB4502803); the Zhejiang Provincial Natural Science Foundation of China (No. LDT23F01014F01).
Abstract: Due to the limitations of existing imaging hardware, obtaining high-resolution hyperspectral images is challenging. Hyperspectral image super-resolution (HSI SR) has been a very attractive research topic in computer vision, attracting the attention of many researchers. However, most HSI SR methods focus on the trade-off between spatial resolution and spectral information and cannot guarantee the efficient extraction of image information. In this paper, a multidimensional features network (MFNet) for HSI SR is proposed, which simultaneously learns and fuses the spatial, spectral, and frequency multidimensional features of HSI. Spatial features contain rich local details, spectral features contain the information and correlation between spectral bands, and frequency features reflect the global information of the image and can be used to obtain the global context of HSI. Fusing the three features better guides image super-resolution, yielding higher-quality high-resolution hyperspectral images. In MFNet, we use a frequency feature extraction module (FFEM) to extract the frequency feature. On this basis, a multidimensional features extraction module (MFEM) is designed to learn and fuse multidimensional features. In addition, experimental results on two public datasets demonstrate that MFNet achieves state-of-the-art performance.
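As one hedged illustration of what a global frequency feature can look like (the paper's FFEM is not reproduced here; the amplitude-spectrum split and the `radius` parameter are assumptions for this sketch):

```python
import numpy as np

def frequency_feature(band, radius=8):
    """Centred 2-D FFT amplitude spectrum of one image band, split into
    low- and high-frequency energy. `radius` (an assumed parameter)
    delimits the low-frequency disc around the spectrum centre."""
    spec = np.fft.fftshift(np.fft.fft2(band))   # DC component moved to centre
    amp = np.abs(spec)
    h, w = amp.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h // 2, xx - w // 2)   # distance to spectrum centre
    low = amp[dist <= radius].sum()
    high = amp[dist > radius].sum()
    return amp, low, high
```

A constant image concentrates all its energy in the DC bin, so its high-frequency energy is zero, which gives a simple check of the split.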
Funding: National Natural Science Foundation Major Project of China, Grant/Award Number: 42192580; Guangdong Province Key Construction Discipline Scientific Research Ability Promotion Project, Grant/Award Number: 2022ZDJS015.
Abstract: In Hyperspectral Imaging (HSI), the detrimental influence of noise and distortions on data quality is profound, severely affecting follow-on analytics and decision-making such as land mapping. This study presents an innovative framework for assessing HSI band quality and reconstructing the low-quality bands, based on the Prophet model. By first introducing a comprehensive quality metric, the authors' approach factors in both spatial and spectral characteristics across local and global scales. This metric effectively captures the intricate noise and distortions inherent in HSI data. Subsequently, the authors employ the Prophet model to forecast information within the low-quality bands, leveraging insights from neighbouring high-quality bands. To validate the effectiveness of the proposed model, extensive experiments are conducted on three publicly available uncorrected datasets. In a head-to-head comparison, the framework is benchmarked against six state-of-the-art band reconstruction algorithms, including three spectral methods, two spatial-spectral methods, and one deep learning method. The authors' experiments also delve into strategies for band selection based on quality metrics and the quality evaluation of the reconstructed bands. In addition, the authors assess the classification accuracy obtained with these reconstructed bands. Across the experiments, the results consistently affirm the efficacy of the method in HSI quality assessment and band reconstruction. Notably, the approach obviates the need for manual pre-filtering of noisy bands. This comprehensive framework holds promise for addressing HSI data quality concerns whilst enhancing the overall utility of HSI.
Abstract: Among hyperspectral imaging technologies, interferometric spectral imaging is widely used in remote sensing due to its advantages of large luminous flux and high resolution. However, owing to its complicated mechanism, interferometric imaging suffers multi-stage degradation. Most existing interferometric spectrum reconstruction methods are based on a traditional model-based framework with multiple steps, showing poor efficiency and restricted performance. Thus, we propose an interferometric spectrum reconstruction method based on degradation synthesis and deep learning. Firstly, based on the imaging mechanism, we propose a mathematical model of interferometric imaging to analyse the degradation components, as noises and trends, introduced during imaging. The model consists of three stages, namely instrument degradation, sensing degradation, and a signal-independent degradation process. Then, we design a calibration-based method to estimate the parameters in the model, the results of which are used to synthesize a realistic dataset for learning-based algorithms. In addition, we propose a dual-stage interferogram spectrum reconstruction framework, which supports pre-training and the integration of denoising DNNs. Experiments demonstrate the reliability of our degradation model and synthesized data, and the effectiveness of the proposed reconstruction method.
Funding: National Natural Science Foundation of China (No. 62201457); Natural Science Foundation of Shaanxi Province (Nos. 2022JQ-668, 2022JQ-588).
Abstract: Convolutional neural networks (CNNs) have an excellent ability to model locally contextual information. However, CNNs face challenges in describing long-range semantic features, which leads to relatively low classification accuracy on hyperspectral images. To address this problem, this article proposes an algorithm based on multiscale fusion and a transformer network for hyperspectral image classification. Firstly, low-level spatial-spectral features are extracted by a multiscale residual structure. Secondly, an attention module is introduced to focus on the more important spatial-spectral information. Finally, high-level semantic features are represented and learned by a token learner and an improved transformer encoder. The proposed algorithm is compared with six classical hyperspectral classification algorithms on real hyperspectral images. The experimental results show that the proposed algorithm effectively improves the land-cover classification accuracy of hyperspectral images.
Funding: The National Natural Science Foundation of China (Nos. 61471263, 61872267 and U21B2024); the Natural Science Foundation of Tianjin, China (No. 16JCZDJC31100); Tianjin University Innovation Foundation (No. 2021XZC0024).
Abstract: Hyperspectral images typically have high spectral resolution but low spatial resolution, which impacts the reliability and accuracy of subsequent applications, for example, remote sensing classification and mineral identification. In traditional methods based on deep convolutional neural networks, however, indiscriminately extracting and fusing spectral and spatial features makes it challenging to utilize the differentiated information across adjacent spectral channels. Thus, we propose a multi-branch interleaved iterative upsampling hyperspectral image super-resolution reconstruction network (MIIUSR) to address the above problems. We reinforce spatial feature extraction by integrating detailed features from different receptive fields across adjacent channels. Furthermore, we propose an interleaved iterative upsampling process during the reconstruction stage, which progressively fuses incremental information among adjacent frequency bands. Additionally, we add two parallel three-dimensional (3D) feature extraction branches to the backbone network to extract spectral and spatial features of varying granularity. We further enhance the backbone network's reconstruction results by leveraging the difference between two-dimensional (2D) channel-grouping spatial features and 3D multi-granularity features. The results obtained by applying the proposed network model to the CAVE test set show that, at a scaling factor of ×4, the peak signal-to-noise ratio, spectral angle mapping, and structural similarity are 37.310 dB, 3.525 and 0.9438, respectively. Moreover, extensive experiments conducted on the Harvard and Foster datasets demonstrate the superior potential of the proposed model for hyperspectral super-resolution reconstruction.
Funding: Natural Science Foundation of Shandong Province, China (Grant No. ZR202111230202).
Abstract: Hyperspectral image classification stands as a pivotal task within the field of remote sensing, yet achieving high-precision classification remains a significant challenge. In response to this challenge, a Spectral Convolutional Neural Network model based on the Adaptive Fick's Law Algorithm (AFLA-SCNN) is proposed. The Adaptive Fick's Law Algorithm (AFLA) is a novel metaheuristic algorithm introduced herein, encompassing three new strategies: an adaptive weight factor, Gaussian mutation, and a probability update policy. With the adaptive weight factor, the algorithm can adjust the weights as the number of iterations changes to improve its performance. Gaussian mutation helps the algorithm avoid falling into local optima and improves its search ability. The probability update strategy improves the exploitability and adaptability of the algorithm. Within the AFLA-SCNN model, AFLA is employed to optimize two hyperparameters of the SCNN model, namely "numEpochs" and "miniBatchSize", to attain their optimal values. AFLA's performance is initially validated across 28 functions in 10D, 30D, and 50D for CEC2013 and 29 functions in 10D, 30D, and 50D for CEC2017. Experimental results indicate AFLA's marked superiority over nine other prominent optimization algorithms. Subsequently, the AFLA-SCNN model was compared with the Spectral Convolutional Neural Network model based on Fick's Law Algorithm (FLA-SCNN), the Spectral Convolutional Neural Network model based on Harris Hawks Optimization (HHO-SCNN), the Spectral Convolutional Neural Network model based on Differential Evolution (DE-SCNN), the Spectral Convolutional Neural Network (SCNN) model, and the Support Vector Machines (SVM) model on the Indian Pines and Pavia University datasets. The experimental results show that the AFLA-SCNN model outperforms the other models in terms of Accuracy, Precision, Recall, and F1-score on Indian Pines and Pavia University. In particular, the Accuracy of the AFLA-SCNN model reached 99.875% on Indian Pines and 98.022% on Pavia University. In conclusion, the proposed AFLA-SCNN model significantly enhances the precision of hyperspectral image classification.
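The three strategies are described only at a high level above; a minimal sketch of what a Gaussian-mutation step and a linearly decaying adaptive weight might look like follows (every detail here, including the decay schedule and the noise scaling, is an assumption for illustration, not the AFLA update rules):

```python
import numpy as np

def gaussian_mutation(position, bounds, sigma=0.1, rng=None):
    """Perturb a candidate hyperparameter vector with zero-mean Gaussian
    noise scaled to each bound's range, then clip back into the bounds.
    A generic metaheuristic mutation, not AFLA's exact operator."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    step = sigma * (hi - lo) * rng.standard_normal(position.shape)
    return np.clip(position + step, lo, hi)

def adaptive_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Hypothetical adaptive weight factor decaying linearly with iteration t."""
    return w_max - (w_max - w_min) * t / t_max
```

For example, mutating a (numEpochs, miniBatchSize) candidate within illustrative bounds keeps it feasible while injecting exploration noise.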
Funding: The Researchers Supporting Project number (RSPD2024R848), King Saud University, Riyadh, Saudi Arabia.
Abstract: Disjoint sampling is critical for rigorous and unbiased evaluation of state-of-the-art (SOTA) models, e.g., Attention Graph and Vision Transformer. When training, validation, and test sets overlap or share data, a bias is introduced that inflates performance metrics and prevents accurate assessment of a model's true ability to generalize to new examples. This paper presents an innovative disjoint sampling approach for training SOTA models for Hyperspectral Image Classification (HSIC). By separating training, validation, and test data without overlap, the proposed method facilitates a fairer evaluation of how well a model can classify pixels it was not exposed to during training or validation. Experiments demonstrate the approach significantly improves a model's generalization compared to alternatives that include training and validation data in the test data. (A trivial approach involves testing the model on the entire hyperspectral dataset to generate the ground-truth maps; this produces higher accuracy but ultimately results in low generalization performance.) Disjoint sampling eliminates data leakage between sets and provides reliable metrics for benchmarking progress in HSIC. Disjoint sampling is critical for advancing SOTA models and their real-world application to large-scale land mapping with hyperspectral sensors. Overall, with the disjoint test set, the deep models achieve 96.36% accuracy on Indian Pines data, 99.73% on Pavia University data, 98.29% on University of Houston data, 99.43% on Botswana data, and 99.88% on Salinas data.
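The core idea, that every labeled pixel belongs to exactly one of the three sets, can be sketched as follows (the per-class stratified split and the 60/20/20 proportions are illustrative assumptions, not the paper's protocol):

```python
import numpy as np

def disjoint_split(labels, train=0.6, val=0.2, seed=0):
    """Split labeled pixel indices into disjoint train/val/test sets,
    stratified per class, with no index appearing in more than one set."""
    rng = np.random.default_rng(seed)
    tr, va, te = [], [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))  # shuffle one class
        n_tr = int(round(train * idx.size))
        n_va = int(round(val * idx.size))
        tr.extend(idx[:n_tr])                # training indices
        va.extend(idx[n_tr:n_tr + n_va])     # validation indices
        te.extend(idx[n_tr + n_va:])         # remaining indices form the test set
    return np.array(tr), np.array(va), np.array(te)
```

Because each class's shuffled index array is partitioned by slicing, disjointness holds by construction, which is precisely the leakage-free property the abstract argues for.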
Funding: Supported by the UC-National Lab In-Residence Graduate Fellowship Grant L21GF3606; a DOD National Defense Science and Engineering Graduate (NDSEG) Research Fellowship; the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project numbers 20170668PRD1 and 20210213ER; and the NGA under Contract No. HM04762110003.
Abstract: Graph learning, when used as a semi-supervised learning (SSL) method, performs well on classification tasks with a low label rate. We provide a graph-based batch active learning pipeline for pixel/patch-neighborhood multi- or hyperspectral image segmentation. Our batch active learning approach selects a collection of unlabeled pixels that satisfy a graph local-maximum constraint on the active learning acquisition function, which determines the relative importance of each pixel to the classification. This work builds on recent advances in the design of novel active learning acquisition functions (e.g., the Model Change approach in arXiv:2110.07739) while adding important further developments, including patch-neighborhood image analysis and batch active learning methods, to further increase the accuracy and greatly increase the computational efficiency of these methods. In addition to improving accuracy, our approach can greatly reduce the number of labeled pixels needed to achieve the accuracy obtained with randomly selected labeled pixels.
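The graph local-maximum constraint can be sketched as follows (the acquisition function values and the adjacency-list graph representation are assumed inputs; this is not the authors' full pipeline):

```python
def local_max_batch(acquisition, neighbors):
    """Select the batch of nodes whose acquisition value is a local maximum
    over their graph neighborhood, so one query batch spreads across the
    graph instead of clustering at the single global maximum.
    `neighbors` maps each node index to a list of adjacent node indices."""
    batch = []
    for i, a in enumerate(acquisition):
        # keep node i only if no neighbor has a strictly larger value
        if all(a >= acquisition[j] for j in neighbors[i]):
            batch.append(i)
    return batch
```

On a path graph, for instance, two separated peaks of the acquisition function are both selected, illustrating why the constraint diversifies a batch.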
Abstract: Vegetation pigment concentration and physiological status were extracted for two test areas covered with swamp and flourishing vegetation, using pushbroom hyperspectral imager (PHI) data flown in September 2000 over the Daxing'anling district of Heilongjiang Province, China. The ratio analysis of reflectance spectra (RARS) indices, put forward by Chappelle et al (1992), are chosen in this paper owing to their effectiveness and simplicity, based on a comparison of various methods and techniques for estimating pigment concentration and on the characteristics of PHI data. The correlation coefficients between RARS indices and vegetation pigment concentrations were up to 0.8. New RARS index models are established for the two test areas using both PHI data and spectra of different vegetation types measured in the field. Index parameter images of chlorophyll a (Chl a), chlorophyll b (Chl b) and carotenoids (Cars) for the test areas covered with swamp and flourishing vegetation are derived from the new RARS index models. Furthermore, the regional concentrations of Chl a and Chl b are extracted and quantified using regression equations between RARS indices and pigment concentrations built by Blackburn (1998). The results clearly show the physiological status and its variation, and are in good agreement with the distribution of vegetation in the field.
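A RARS-style index is, at heart, a per-pixel ratio of reflectance at two bands. The sketch below is generic: the wavelength pair is left as a caller-supplied assumption rather than reproducing Chappelle et al.'s exact band choices:

```python
import numpy as np

def ratio_index(cube, wavelengths, num_nm, den_nm):
    """Per-pixel ratio of the band nearest `num_nm` to the band nearest
    `den_nm` in a (rows, cols, bands) reflectance cube. Generic sketch of a
    ratio-of-reflectance index; the wavelength pair is an assumption."""
    def nearest(nm):
        # index of the band whose centre wavelength is closest to nm
        return int(np.argmin(np.abs(np.asarray(wavelengths) - nm)))
    num = cube[..., nearest(num_nm)].astype(float)
    den = cube[..., nearest(den_nm)].astype(float)
    # guard against division by zero in masked or shadowed pixels
    return np.divide(num, den, out=np.zeros_like(num), where=den != 0)
```

The resulting index image can then be regressed against field-measured pigment concentrations, as the abstract describes.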
Funding: This work is supported by the National Natural Science Foundation of China [grant number 61801336]; the China Postdoctoral Science Foundation [grant numbers 2019M662717 and 2017M622521]; and the China Postdoctoral Program for Innovative Talent [grant number BX201700182].
Abstract: Graph learning is an effective way to analyze the intrinsic properties of data. It has been widely used for dimensionality reduction and classification. In this paper, we focus on graph learning-based dimensionality reduction for hyperspectral images. Firstly, we review the development of graph learning and its application to hyperspectral images. Then, we discuss several representative graph methods, including two manifold learning methods, two sparse graph learning methods, and two hypergraph learning methods. For manifold learning, we analyze neighborhood preserving embedding and locality preserving projections, two classic manifold learning methods that can be cast in graph form. For sparse graphs, we introduce sparsity preserving graph embedding and sparse graph-based discriminant analysis, which can adaptively reveal the data structure to construct a graph. For hypergraph learning, we review the binary hypergraph and discriminant hyper-Laplacian projection, which can represent high-order relationships in data.
Funding: This work was supported in part by the National Basic Research Program of China (973 Program) under Grants 2012CB719905 and 2011CB707105; the National Natural Science Foundation of China under Grant 61102128; the HuBei Province Natural Science Foundation under Grant No. 2011CDB455; China's Post-doctoral Science Foundation under 211–180,788; and the Fundamental Research Funds for the Central Universities under 211-274633.
Abstract: Hyperspectral images (HSI) provide a new way to exploit the internal physical composition of a land scene. The basic platforms for acquiring HSI datasets are airborne or spaceborne spectral imagers. Retrieving useful information from hyperspectral images can be grouped into four categories. (1) Classification: hyperspectral images provide so much spectral and spatial information that remotely sensed image classification has become a complex task. (2) Endmember extraction and spectral unmixing: among image types, only HSI have a complete model to represent the internal structure of each pixel, where the endmembers are the elements. Identification of endmembers from HSI thus becomes the foremost step in interpreting each pixel; with proper endmembers, the corresponding abundances can also be exactly calculated. (3) Target detection: another practical problem is how to determine the existence of certain resolved or full-pixel objects against a complex background. Constructing a reliable rule for separating target signals from all other background signals, even in the case of low target occurrence and high spectral variation, is the key to this problem. (4) Change detection: although change detection is not a new problem, detecting changes from hyperspectral images brings new challenges, since the spectral bands are so numerous that accurate band-to-band correspondences and minor changes in subclass land objects can be depicted in HSI. In this paper, the basic theory and the most canonical works are discussed, along with the most recent advances in each aspect of hyperspectral image processing.
基金supported in part by the National Natural Science Foundation of China(62276192)。
Abstract: Hyperspectral image super-resolution, which refers to reconstructing a high-resolution hyperspectral image from a low-resolution observation, aims to improve the spatial resolution of the hyperspectral image, which benefits subsequent applications. The development of deep learning has brought significant progress to hyperspectral image super-resolution, and the powerful expressive capability of deep neural networks makes the predicted results more reliable. Recently, the latest deep learning technologies have driven an explosion of hyperspectral image super-resolution methods. However, a comprehensive review and analysis of the latest deep learning methods from the hyperspectral image super-resolution perspective has been absent. To this end, in this survey, we first introduce the concept of hyperspectral image super-resolution and classify the methods according to whether auxiliary information is used. Then, we review the learning-based methods in three categories: single hyperspectral image super-resolution, panchromatic-based hyperspectral image super-resolution, and multispectral-based hyperspectral image super-resolution. Subsequently, we summarize the commonly used hyperspectral datasets, and evaluations of some representative methods in the three categories are performed qualitatively and quantitatively. Moreover, we briefly introduce several typical applications of hyperspectral image super-resolution, including ground-object classification, urban change detection, and ecosystem monitoring. Finally, we provide conclusions and the challenges in existing learning-based methods, looking forward to potential future research directions.
Abstract: Hyperspectral image (HSI) classification has been one of the most important tasks in the remote sensing community over the last few decades. Due to the presence of highly correlated bands and limited training samples in HSI, discriminative feature extraction was challenging for traditional machine learning methods. Recently, deep learning-based methods have been recognized as powerful feature extraction tools and have drawn a significant amount of attention in HSI classification. Among various deep learning models, convolutional neural networks (CNNs) have shown huge success and offer great potential to yield high performance in HSI classification. Motivated by this successful performance, this paper presents a systematic review of different CNN architectures for HSI classification and provides some future guidelines. To accomplish this, our study has taken a few important steps. First, we have focused on different CNN architectures, which are able to extract spectral, spatial, and joint spectral-spatial features. Then, many publications related to CNN-based HSI classification have been reviewed systematically. Further, a detailed comparative performance analysis has been presented between four CNN models, namely 1D CNN, 2D CNN, 3D CNN, and feature-fusion-based CNN (FFCNN). Four benchmark HSI datasets have been used in our experiments for evaluating the performance. Finally, we conclude the paper with the challenges of CNN-based HSI classification and future guidelines that may help researchers working on HSI classification using CNNs.
Abstract: With a limited number of labeled samples, hyperspectral image (HSI) classification is a difficult problem in current research. The graph neural network (GNN) has emerged as an approach to semi-supervised classification, and the application of GNNs to hyperspectral images has attracted much attention. However, existing GNN-based methods mainly use a single graph neural network or graph filter to extract HSI features, which does not take full advantage of various graph neural networks (graph filters). Moreover, traditional GNNs have the problem of oversmoothing. To alleviate these shortcomings, we introduce a deep hybrid multi-graph neural network (DHMG), where two different graph filters, i.e., the spectral filter and the autoregressive moving average (ARMA) filter, are utilized in two branches. The former can well extract the spectral features of the nodes, and the latter has a good suppression effect on graph noise. The network realizes information interaction between the two branches and takes good advantage of different graph filters. In addition, to address the problem of oversmoothing, a dense network is proposed, where the local graph features are preserved. The dense structure satisfies the needs of different classification targets presenting different features. Finally, we introduce a GraphSAGE-based network to refine the graph features produced by the deep hybrid network. Extensive experiments on three public HSI datasets strongly demonstrate that DHMG dramatically outperforms the state-of-the-art models.
Funding: National Natural Science Foundation of China (No. 41971279); Fundamental Research Funds of the Central Universities (No. B200202012).
Abstract: The Low-Rank and Sparse Representation (LRSR) method has gained popularity in Hyperspectral Image (HSI) processing. However, existing LRSR models rarely exploit spectral-spatial classification of HSI. In this paper, we propose a novel Low-Rank and Sparse Representation with Adaptive Neighborhood Regularization (LRSR-ANR) method for HSI classification. In the proposed method, we first represent the hyperspectral data via LRSR, since it combines both sparsity and low-rankness to maintain global and local data structures simultaneously. The LRSR is optimized using a mixed Gauss-Seidel and Jacobian Alternating Direction Method of Multipliers (M-ADMM), which converges faster than ADMM. Then, to incorporate spatial information, an ANR scheme is designed by combining Euclidean and cosine distance metrics to reduce the mixed pixels within a neighborhood. Lastly, the predicted labels are determined by jointly considering the homogeneous pixels in a classification rule of minimum reconstruction error. Experimental results on three popular hyperspectral images demonstrate that the proposed method outperforms related methods in terms of classification accuracy and generalization performance.
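Mixing Euclidean and cosine distance for neighborhood selection can be sketched as below (the exact weighting in LRSR-ANR is not reproduced; `alpha` and the range normalization are illustrative assumptions):

```python
import numpy as np

def combined_distance(x, Y, alpha=0.5):
    """Distance from pixel spectrum x to each row of Y, mixing a
    range-normalized Euclidean term with cosine distance. A generic
    sketch of a combined metric, not the paper's exact formula."""
    Y = np.atleast_2d(Y).astype(float)
    x = np.asarray(x, dtype=float)
    euc = np.linalg.norm(Y - x, axis=1)
    cos = 1.0 - (Y @ x) / (np.linalg.norm(Y, axis=1) * np.linalg.norm(x) + 1e-12)
    # normalize the Euclidean term to [0, 1] so the two metrics are comparable
    euc = euc / (euc.max() + 1e-12)
    return alpha * euc + (1.0 - alpha) * cos
```

The Euclidean term is sensitive to brightness differences while the cosine term captures spectral shape, so the mix down-weights mixed pixels whose shape diverges even when their magnitude is close.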
Funding: Supported by the National Natural Science Foundation of China (60702012) and the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.
Abstract: To compress hyperspectral images, a low-complexity discrete cosine transform (DCT)-based distributed source coding (DSC) scheme with Gray code is proposed. Unlike most existing DSC schemes, which apply a transform in the spatial domain, the proposed algorithm applies the transform in the spectral domain. A set-partitioning-based approach is applied to reorganize the DCT coefficients into a wavelet-like tree structure and extract the sign, refinement, and significance bitplanes. The extracted refinement bits are Gray encoded. Because of the dependency along the line dimension of hyperspectral images, a low-density parity-check (LDPC)-based Slepian-Wolf coder is adopted to implement the DSC strategy. Experimental results on an airborne visible/infrared imaging spectrometer (AVIRIS) dataset show that the proposed paradigm achieves up to 6 dB improvement over DSC-based coders that apply the transform in the spatial domain, with significantly reduced computational complexity and memory storage.
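The Gray encoding of refinement bits relies on the standard binary-reflected mapping, in which adjacent integer values differ in exactly one bit; a minimal sketch of that mapping (not the paper's full codec) is:

```python
def gray_encode(b):
    """Binary-reflected Gray code of a non-negative integer:
    consecutive values differ in exactly one bit."""
    return b ^ (b >> 1)

def gray_decode(g):
    """Invert the Gray mapping: the original bits are the running XOR
    of all right-shifted copies of the code word."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b
```

Because small coefficient changes flip only one refinement bit, the encoded bitplanes correlate better across lines, which is what the Slepian-Wolf coder exploits.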
Funding: Supported by the National Key Research and Development Project (No. 2020YFC1512000); the National Natural Science Foundation of China (No. 41601344); the Fundamental Research Funds for the Central Universities (Nos. 300102320107 and 201924); in part by the General Projects of Key R&D Programs in Shaanxi Province (No. 2020GY-060); and Xi'an Science & Technology Projects (Nos. 2020KJRC0126 and 202018).
Abstract: Hyperspectral images (HSI) contain a wealth of spectral information, which makes fine classification of ground objects possible. Meanwhile, the high redundancy of information in HSI brings many challenges; specifically, the lack of training samples and the high computational cost are inevitable obstacles in classifier design. To solve these problems, dimensionality reduction is usually adopted. Recently, graph-based dimensionality reduction has become a hot topic. In this paper, graph-based methods for HSI dimensionality reduction are summarized from the following aspects. (1) Traditional graph-based methods employ Euclidean distance to explore the local information of samples in the spectral feature space. (2) Dimensionality-reduction methods based on sparse or collaborative representation regard the sparse or collaborative coefficients as graph weights to effectively reduce reconstruction errors and represent the most important information of HSI in a dictionary. (3) Improved methods based on sparse or collaborative graphs have made great progress by considering global low-rank information, local intra-class information, and spatial information. To compare representative techniques, three real HSI datasets were used to carry out relevant experiments, and the experimental results are analysed and discussed. Finally, the future development of this research field is considered.
Funding: Supported by the National Natural Science Foundation of China (No. 62001023) and the Beijing Natural Science Foundation (No. JQ20021).
Abstract: Most methods for classifying hyperspectral data consider only the local spatial relationships among samples, ignoring the important non-local topological relationships. However, the non-local topological relationship is better at representing the structure of hyperspectral data. This paper proposes a deep learning model called the topology and semantic information fusion classification network (TSFnet), which incorporates a topology structure and a semantic information transmission network to accurately classify traditional Chinese medicine in hyperspectral images. TSFnet uses a convolutional neural network (CNN) to extract features and a graph convolution network (GCN) to capture potential topological relationships among different types of Chinese herbal medicines. The results show that TSFnet outperforms other state-of-the-art deep learning classification algorithms in two different herbal medicine dataset scenarios. Additionally, the proposed TSFnet model is lightweight and can be easily deployed for mobile herbal medicine classification.
Funding: Supported by the National Natural Science Foundation of China (No. 62101040).
Abstract: Accurate histopathology classification is a crucial factor in the diagnosis and treatment of cholangiocarcinoma (CCA). Hyperspectral images (HSI) provide richer spectral information than ordinary RGB images, making them more useful for medical diagnosis. The convolutional neural network (CNN) is commonly employed in hyperspectral image classification due to its remarkable capacity for feature extraction and image classification. However, many existing CNN-based HSI classification methods tend to ignore the importance of image spatial context information and the interdependence between spectral channels, leading to unsatisfactory classification performance. To address these issues, this paper proposes a Spatial-Spectral Joint Network (SSJN) model for hyperspectral image classification that utilizes spatial self-attention and spectral feature extraction. The SSJN model is derived from the ResNet18 network and implemented with non-local and Coordinate Attention (CA) modules, which extract long-range dependencies in image space, and it enhances spatial features through a Branch Attention (BA) module to emphasize the region of interest. Furthermore, the SSJN model employs Conv-LSTM modules to extract long-range dependencies in the image spectral domain. This addresses gradient disappearance/explosion phenomena and enhances classification accuracy. The experimental results show that the proposed SSJN model leverages the spatial and spectral information of hyperspectral images more efficiently on multidimensional microspectral datasets of CCA, leading to higher classification accuracy, and it may provide a useful reference for the medical diagnosis of CCA.