We present a novel sea-ice classification framework based on locality-preserving fusion of multi-source image information. The locality preservation is two-fold, i.e., local characterization in both the spatial and feature domains. We commence by simultaneously learning a projection matrix, which preserves spatial localities, and a similarity matrix, which encodes feature similarities. We map the pixels of the multi-source images by the projection matrix to a set of fusion vectors that preserve the spatial localities of the image. On the other hand, by applying Laplacian eigen-decomposition to the similarity matrix, we obtain another set of fusion vectors that preserve the local feature similarities. We concatenate the fusion vectors for both spatial and feature locality preservation and obtain the fusion image. Finally, we classify the fusion image pixels by a novel sliding ensemble strategy, which enhances the locality preservation in classification. Our locality-preserving fusion framework is effective in classifying multi-source sea-ice images (e.g., multi-spectral and synthetic aperture radar (SAR) images) because it not only comprehensively captures the spatial neighboring relationships but also intrinsically characterizes the feature associations between different types of sea ice. Experimental evaluations validate the effectiveness of our framework.
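As a rough illustration of the feature-locality half of this pipeline, the sketch below builds a Gaussian-kernel similarity matrix over stacked multi-source pixel features and takes the leading non-trivial eigenvectors of its graph Laplacian as fusion vectors (standard Laplacian eigenmaps). The jointly learned projection and similarity matrices and the sliding ensemble classifier of the paper are not reproduced; the kernel, dimensions, and toy data are assumptions made only for illustration.

```python
import numpy as np
from scipy.spatial.distance import cdist

def laplacian_fusion_vectors(features, n_dims=8, sigma=1.0):
    """Laplacian-eigenmap embedding of stacked multi-source pixel features:
    pixels that are similar in feature space stay close in the embedding,
    i.e. feature localities are preserved."""
    # Gaussian-kernel similarity matrix (a stand-in for the learned matrix).
    d2 = cdist(features, features, "sqeuclidean")
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Unnormalized graph Laplacian L = D - W and its smallest eigenvectors.
    L = np.diag(W.sum(axis=1)) - W
    _, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, 1:n_dims + 1]     # skip the trivial constant eigenvector

# Toy data: 200 pixels with 7 stacked bands (e.g., multi-spectral + SAR).
rng = np.random.default_rng(0)
pixels = rng.normal(size=(200, 7))
feature_vecs = laplacian_fusion_vectors(pixels)
# The paper concatenates these with spatially projected fusion vectors; here
# we simply append them to the raw stacked features as a placeholder.
fused = np.concatenate([pixels, feature_vecs], axis=1)
print(fused.shape)                      # (200, 15)
```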
Reversible data hiding (RDH) enables secret data embedding while preserving complete cover image recovery, making it crucial for applications requiring image integrity. The pixel value ordering (PVO) technique used in multi-stego images provides good image quality but often results in low embedding capacity. To address these challenges, this paper proposes a high-capacity RDH scheme based on PVO that generates three stego images from a single cover image. The cover image is partitioned into non-overlapping blocks with pixels sorted in ascending order. Four secret bits are embedded into each block's maximum pixel value, while three additional bits are embedded into the second-largest value when the pixel difference exceeds a predefined threshold. A similar embedding strategy is also applied to the minimum side of the block, including the second-smallest pixel value. This design enables each block to embed up to 14 bits of secret data. Experimental results demonstrate that the proposed method achieves significantly higher embedding capacity and improved visual quality compared to existing triple-stego RDH approaches, advancing the field of reversible steganography.
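The minimal sketch below only estimates the per-block embedding capacity implied by the rules quoted above: 4 bits at the block maximum, 3 more at the second-largest pixel when the difference exceeds a threshold, mirrored on the minimum side, for at most 14 bits per block. The actual pixel shifting, extraction, and recovery logic of the proposed scheme is not reproduced; the block size and threshold are illustrative assumptions.

```python
import numpy as np

def block_capacity(block, threshold=5):
    """Bits embeddable in one block under the rules described above (14 at
    most). The embedding/extraction and recovery logic is omitted."""
    p = np.sort(block.ravel())          # pixel value ordering (ascending)
    bits = 4                            # maximum-side primary embedding
    if p[-1] - p[-2] > threshold:
        bits += 3                       # second-largest pixel qualifies
    bits += 4                           # minimum-side primary embedding
    if p[1] - p[0] > threshold:
        bits += 3                       # second-smallest pixel qualifies
    return bits

cover = np.random.default_rng(1).integers(0, 256, size=(512, 512), dtype=np.uint8)
bs = 4                                  # 4x4 non-overlapping blocks (assumed size)
total = sum(block_capacity(cover[i:i + bs, j:j + bs])
            for i in range(0, cover.shape[0], bs)
            for j in range(0, cover.shape[1], bs))
print(f"estimated capacity: {total} bits")
```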
Remote sensing image super-resolution technology is pivotal for enhancing image quality in critical applications including environmental monitoring, urban planning, and disaster assessment. However, traditional methods exhibit deficiencies in detail recovery and noise suppression, particularly when processing complex landscapes (e.g., forests, farmlands), leading to artifacts and spectral distortions that limit practical utility. To address this, we propose an enhanced Super-Resolution Generative Adversarial Network (SRGAN) framework featuring three key innovations: (1) replacement of the L1/L2 loss with a robust Charbonnier loss to suppress noise while preserving edge details via adaptive gradient balancing; (2) a multi-loss joint optimization strategy dynamically weighting the Charbonnier loss (β=0.5), Visual Geometry Group (VGG) perceptual loss (α=1), and adversarial loss (γ=0.1) to synergize pixel-level accuracy and perceptual quality; (3) a multi-scale residual network (MSRN) capturing cross-scale texture features (e.g., forest canopies, mountain contours). Validated on Sentinel-2 (10 m) and SPOT-6/7 (2.5 m) datasets covering 904 km² in Motuo County, Xizang, our method outperforms the SRGAN baseline (SR4RS) with Peak Signal-to-Noise Ratio (PSNR) gains of 0.29 dB and Structural Similarity Index (SSIM) improvements of 3.08% on forest imagery. Visual comparisons confirm enhanced texture continuity despite marginal Learned Perceptual Image Patch Similarity (LPIPS) increases. The method significantly improves noise robustness and edge retention in complex geomorphology, demonstrating an 18% faster response in forest fire early warning and providing high-resolution support for agricultural/urban monitoring. Future work will integrate spectral constraints and lightweight architectures.
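The Charbonnier penalty and the quoted loss weights are concrete enough to write down. The sketch below shows the loss formula and the weighted joint objective with α=1, β=0.5, γ=0.1 as stated in the abstract; the VGG perceptual and adversarial terms are passed in as precomputed placeholders rather than implemented.

```python
import numpy as np

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier penalty: mean of sqrt((x - y)^2 + eps^2). It behaves like
    L2 near zero and like L1 for large residuals, which is what makes it more
    robust to noise and outliers than a plain L1 or L2 loss."""
    return np.mean(np.sqrt((pred - target) ** 2 + eps ** 2))

def generator_loss(pred, target, vgg_loss, adv_loss,
                   alpha=1.0, beta=0.5, gamma=0.1):
    """Weighted joint objective with the weights quoted in the abstract:
    alpha * VGG perceptual + beta * Charbonnier + gamma * adversarial.
    The perceptual and adversarial terms are assumed computed elsewhere."""
    return (alpha * vgg_loss
            + beta * charbonnier_loss(pred, target)
            + gamma * adv_loss)

sr = np.random.default_rng(0).random((64, 64))
hr = np.random.default_rng(1).random((64, 64))
print(generator_loss(sr, hr, vgg_loss=0.23, adv_loss=0.90))
```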
Alzheimer's Disease (AD) is a progressive neurodegenerative disorder that significantly affects cognitive function, making early and accurate diagnosis essential. Traditional Deep Learning (DL)-based approaches often struggle with low-contrast MRI images, class imbalance, and suboptimal feature extraction. This paper develops a hybrid DL system that unites MobileNetV2 with adaptive classification methods to improve Alzheimer's diagnosis from MRI scans. Image enhancement is performed using Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN). A classification robustness enhancement scheme integrates class weighting techniques and a Matthews Correlation Coefficient (MCC)-based evaluation method into the design. The trained and validated model achieves a 98.88% accuracy rate and an MCC score of 0.9614. We also performed a 10-fold cross-validation experiment, with an average accuracy of 96.52% (±1.51), a loss of 0.1671, and an MCC score of 0.9429 across folds. The proposed framework outperforms state-of-the-art models with a 98% weighted F1-score while reducing misdiagnoses for every AD stage. Confusion matrix analysis shows that the model clearly separates AD progression stages. These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer's diagnosis, contributing to improved computer-aided diagnosis (CAD) systems in clinical practice.
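Two of the named building blocks, CLAHE preprocessing and the MCC metric, are available off the shelf. The snippet below applies OpenCV's CLAHE to a simulated low-contrast slice and computes a multiclass MCC with scikit-learn; the clip limit, tile size, and toy labels are illustrative assumptions, and the ESRGAN and MobileNetV2 stages are omitted.

```python
import cv2
import numpy as np
from sklearn.metrics import matthews_corrcoef

# CLAHE contrast enhancement on a simulated low-contrast grayscale slice.
rng = np.random.default_rng(0)
mri = (rng.random((256, 256)) * 80 + 60).astype(np.uint8)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(mri)
print("std before/after CLAHE:", mri.std(), enhanced.std())

# MCC as a class-imbalance-aware metric for a four-stage classifier (toy labels).
y_true = [0, 0, 1, 2, 3, 3, 1, 2, 0, 1]
y_pred = [0, 0, 1, 2, 3, 2, 1, 2, 0, 1]
print("MCC:", matthews_corrcoef(y_true, y_pred))
```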
This paper aims at providing multi-source remote sensing images registered in geometric space for image fusion. Focusing on the characteristics and differences of multi-source remote sensing images, a feature-based registration algorithm is implemented. The key technologies include image scale-space construction for multi-scale properties, Harris corner detection for keypoint extraction, and the partial intensity invariant feature descriptor (PIIFD) for keypoint description. On this basis, a multi-scale Harris-PIIFD image registration framework is proposed. Experimental results on fifteen sets of representative real data show that the algorithm performs stably and accurately in multi-source remote sensing image registration and achieves precise spatial alignment, demonstrating strong practical value and a degree of generalization ability.
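A minimal sketch of the keypoint-extraction step follows, using OpenCV's Harris corner response on a synthetic image; the scale-space construction and PIIFD description stages of the framework are not shown, and the detector parameters are illustrative defaults rather than the paper's settings.

```python
import cv2
import numpy as np

def harris_keypoints(gray, max_points=500, block_size=2, ksize=3, k=0.04):
    """Harris corner keypoint extraction: returns (row, col) coordinates of
    the strongest corner responses."""
    response = cv2.cornerHarris(np.float32(gray), block_size, ksize, k)
    strongest = np.argsort(response.ravel())[::-1][:max_points]
    return np.column_stack(np.unravel_index(strongest, response.shape))

# A synthetic checkerboard stands in for one remote sensing band.
img = ((np.indices((256, 256)).sum(axis=0) // 32) % 2 * 255).astype(np.uint8)
print(len(harris_keypoints(img)), "candidate keypoints")
```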
The automatic registration of multi-source remote sensing images (RSI) is currently a research hotspot in remote sensing image preprocessing. A dedicated automatic image registration module named Image Autosync has been embedded in ERDAS IMAGINE software version 9.0 and above. The registration accuracy of this module was verified for remote sensing images obtained from different platforms and at different spatial resolutions. Four registration experiments are discussed in this article to analyze the accuracy differences among remote sensing data with different spatial resolutions. The factors that cause these differences in registration accuracy are also analyzed.
Snow depth (SD) is a key parameter for research into global climate change and land surface processes. A method was developed to obtain daily SD images at a higher 4 km spatial resolution and with higher precision, using SD measurements from in situ observations, passive microwave remote sensing from the Advanced Microwave Scanning Radiometer-EOS (AMSR-E), and snow cover measurements from the Interactive Multisensor Snow and Ice Mapping System (IMS). AMSR-E SD at 25 km spatial resolution was retrieved from AMSR-E products of snow density and snow water equivalent and then corrected using SD from in situ observations and IMS snow cover. The corrected AMSR-E SD images were then resampled to act as "virtual" in situ observations, which were combined with the real in situ observations to interpolate SD at 4 km spatial resolution using the Cressman method. Daily SD data generation for several regions of China demonstrated that the method is well suited to producing higher spatial resolution SD data in regions with lower Digital Elevation Model (DEM) values, but less well suited to high-altitude regions with undulating terrain, such as the Tibetan Plateau. Analysis of longer-term SD data generation for January of each year between 2003 and 2010 in northern Xinjiang also demonstrated the feasibility of the method.
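The Cressman step is a standard objective-analysis formula, so a compact sketch is possible: each 4 km grid cell becomes a weighted mean of nearby stations (real and "virtual") with weights w = (R^2 - d^2) / (R^2 + d^2). The single influence radius and the synthetic station data below are assumptions; operational Cressman analyses usually iterate over several decreasing radii.

```python
import numpy as np

def cressman(obs_xy, obs_val, grid_xy, radius):
    """One-pass Cressman analysis: each grid point is a weighted mean of the
    observations within `radius`, with weights w = (R^2 - d^2) / (R^2 + d^2)."""
    out = np.full(len(grid_xy), np.nan)
    for g, (gx, gy) in enumerate(grid_xy):
        d2 = (obs_xy[:, 0] - gx) ** 2 + (obs_xy[:, 1] - gy) ** 2
        near = d2 < radius ** 2
        if near.any():
            w = (radius ** 2 - d2[near]) / (radius ** 2 + d2[near])
            out[g] = np.sum(w * obs_val[near]) / np.sum(w)
    return out

# Synthetic stations: real in situ sites plus "virtual" sites resampled from
# corrected AMSR-E snow depth (values here are random placeholders, in cm).
rng = np.random.default_rng(0)
stations = rng.uniform(0, 100, size=(300, 2))           # km coordinates
sd_cm = rng.uniform(0, 40, size=300)
grid = np.array([(x, y) for x in range(0, 100, 4) for y in range(0, 100, 4)])
print(np.nanmean(cressman(stations, sd_cm, grid, radius=25.0)))
```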
Based on the homography between a multi-source image and three-dimensional (3D) measurement points, this letter proposes a novel 3D registration and integration method based on scale-invariant feature matching. The matching relationships between two-dimensional (2D) texture gray images and two-and-a-half-dimensional (2.5D) range images are constructed using the scale-invariant feature transform (SIFT) algorithm. Then, at least three non-collinear 3D measurement points corresponding to image feature points are used to accurately establish the registration relationship. Using the index of overlapping images and a local 3D border search method, multi-view registration data are rapidly and accurately integrated. Experimental results on real models demonstrate that the algorithm is robust and effective.
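Once at least three non-collinear 3D correspondences are available, the rigid registration can be solved in closed form. The sketch below uses the standard Kabsch/SVD solution on synthetic correspondences; the SIFT matching between texture and range images is not shown.

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Closed-form least-squares rotation R and translation t with
    dst_i ≈ R @ src_i + t, estimated from >= 3 non-collinear 3D point
    correspondences via the Kabsch/SVD construction."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic correspondences: 3D measurement points and their transformed copies.
rng = np.random.default_rng(0)
src = rng.normal(size=(5, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                       # make it a proper rotation
dst = src @ Q.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_transform_3d(src, dst)
print(np.allclose(src @ R.T + t, dst))  # True: the points are registered
```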
Image registration is an indispensable component of multi-source remote sensing image processing. In this paper, we propose a remote sensing image registration method that combines an improved multi-scale, multi-direction Harris algorithm with a novel compound feature. Multi-scale circle Gaussian combined invariant moments and multi-direction gray-level co-occurrence matrices are extracted as features for image matching. The proposed algorithm is evaluated on numerous multi-source remote sensing images with noise and illumination changes. Extensive experimental studies show that our method yields a stable and even distribution of keypoints as well as robust and accurate correspondence matches. It is a promising scheme for multi-source remote sensing image registration.
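One half of the compound feature, the gray-level co-occurrence matrix statistics, can be illustrated directly with scikit-image (function names follow recent releases). The distances, angles, and chosen properties below are assumptions; the circle Gaussian combined invariant moments are not shown.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch, distances=(1,), angles=(0.0, np.pi / 2)):
    """Gray-level co-occurrence matrix statistics for one image patch; a
    multi-direction variant would add more angles (e.g., pi/4, 3*pi/4)."""
    glcm = graycomatrix(np.uint8(patch), distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

patch = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
print(glcm_features(patch))
```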
In recent years, guided image fusion algorithms have become increasingly popular. However, current algorithms cannot eliminate halo artifacts. We propose an image fusion algorithm based on a fast weighted guided filter. First, the source images are separated into a series of high- and low-frequency components. Second, three visual features of the source image are extracted to construct a decision map model. Third, a fast weighted guided filter is introduced to optimize the result of the previous step and reduce time complexity by considering the correlation among neighboring pixels. Finally, the image obtained in the previous step is combined with the weight map to realize the image fusion. The proposed algorithm is applied to multi-focus, visible-infrared, and multi-modal images, and the results show that it effectively removes halo artifacts from the merged images with higher efficiency and outperforms traditional methods in both subjective visual quality and objective evaluation.
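For reference, the plain (unweighted) guided filter that the method builds on can be written with a few box filters, as sketched below; the paper's fast weighted variant, the three visual features, and the decision-map construction are not reproduced, and the radius and regularization values are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-2):
    """Plain guided filter (He et al.): smooth the input `p` while
    transferring edge structure from the guidance image `I`. The paper's
    fast *weighted* variant adds an edge-aware weighting of eps, omitted here."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    var_I = uniform_filter(I * I, size) - mean_I ** 2
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

rng = np.random.default_rng(0)
guide = rng.random((128, 128))
weight_map = rng.random((128, 128))     # e.g., an initial fusion decision map
refined = guided_filter(guide, weight_map)
print(refined.shape)
```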
Remote sensing image fusion is an effective way to exploit the large volume of data from multi-source images. This paper introduces a new method of remote sensing image fusion based on the support vector machine (SVM), using high spatial resolution SPIN-2 data and multi-spectral SPOT-4 remote sensing data. First, the new method is established by building a model of remote sensing image fusion based on the SVM. Then, image classification fusion is tested using the SPIN-2 and SPOT-4 data. Finally, the fusion result is evaluated in two ways. 1) In the subjective assessment, the spatial resolution of the fused image is improved compared to SPOT-4, and the texture of the fused image is clearly distinctive. 2) In the quantitative analysis, the classification fusion performs better. Overall, the results show that the accuracy of SVM-based image fusion is high and that the SVM algorithm can be recommended for application in remote sensing image fusion processes.
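A minimal sketch of the classification-fusion idea: stack the co-registered panchromatic and multi-spectral pixel values into one feature vector per sample and classify with an SVM. The random data, class count, and SVM hyperparameters below are placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy feature vectors: one panchromatic value (stand-in for SPIN-2) plus four
# multi-spectral values (stand-in for SPOT-4) per co-registered pixel sample.
rng = np.random.default_rng(0)
X = rng.random((2000, 5))
y = rng.integers(0, 4, size=2000)        # four land-cover classes (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```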
The presence of a positive deep surgical margin in tongue squamous cell carcinoma (TSCC) significantly elevates the risk of local recurrence. Therefore, a prompt and precise intraoperative assessment of margin status is imperative to ensure thorough tumor resection. In this study, we integrate Raman imaging technology with an artificial intelligence (AI) generative model, proposing an innovative approach for intraoperative margin status diagnosis. This method utilizes Raman imaging to swiftly and non-invasively capture tissue Raman images, which are then transformed into hematoxylin-eosin (H&E)-stained histopathological images using an AI generative model for histopathological diagnosis. The generated H&E-stained images clearly illustrate the tissue's pathological conditions. Independently reviewed by three pathologists, the overall diagnostic accuracy for distinguishing between tumor tissue and normal muscle tissue reaches 86.7%. Notably, it outperforms current clinical practices, especially in TSCC with positive lymph node metastasis or moderately differentiated grades. This advancement highlights the potential of AI-enhanced Raman imaging to significantly improve intraoperative assessments and surgical margin evaluations, promising a versatile diagnostic tool beyond TSCC.
The application of deep learning to target detection in aerial images captured by Unmanned Aerial Vehicles (UAVs) has emerged as a prominent research focus. Due to the considerable distance between UAVs and the photographed objects, coupled with complex shooting environments, existing models often struggle to achieve accurate real-time target detection. In this paper, a You Only Look Once v8 (YOLOv8) model is modified in four aspects: the detection head, the up-sampling module, the feature extraction module, and the parameter optimization of positive sample screening; the resulting YOLO-S3DT model is proposed to improve detection of small targets in aerial images. Experimental results show that all detection metrics of the proposed model are significantly improved without increasing the number of model parameters and with only limited growth in computation. Moreover, the model outperforms other detection models, demonstrating its advancement in this category of tasks.
The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible image. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which extract low-frequency and high-frequency information from the image, respectively. Because this extraction may leave some information uncaptured, a compensation encoder is proposed to supplement the missing information. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines the low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module are introduced to perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
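A conceptual stand-in for the base/detail split that the learned encoders perform is the classic hand-crafted two-scale decomposition sketched below: Gaussian-smoothed base layers are averaged and high-frequency detail layers are fused by maximum absolute value. The learned encoders, compensation branch, multi-scale decomposition depth, and attention fusion of the actual network are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_fusion(ir, vis, sigma=4.0):
    """Hand-crafted stand-in for the learned base/detail split: Gaussian
    smoothing gives the low-frequency (base) layer and the residual is the
    high-frequency (detail) layer. Bases are averaged; details are fused by
    maximum absolute value so salient infrared targets and visible textures
    both survive."""
    base_ir, base_vis = gaussian_filter(ir, sigma), gaussian_filter(vis, sigma)
    detail_ir, detail_vis = ir - base_ir, vis - base_vis
    base = 0.5 * (base_ir + base_vis)
    detail = np.where(np.abs(detail_ir) >= np.abs(detail_vis),
                      detail_ir, detail_vis)
    return base + detail

rng = np.random.default_rng(0)
ir_img, vis_img = rng.random((128, 128)), rng.random((128, 128))
print(two_scale_fusion(ir_img, vis_img).shape)
```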
Objective: Early prediction of response before neoadjuvant chemotherapy (NAC) is crucial for personalized treatment planning in locally advanced breast cancer patients. We aim to develop a multi-task model using multiscale whole slide image (WSI) features to predict the response to breast cancer NAC more finely. Methods: This work collected 1,670 whole slide images for the training and validation sets, internal testing sets, external testing sets, and prospective testing sets of the weakly-supervised deep learning-based multi-task model (DLMM) for predicting treatment response and pCR to NAC. Our approach models two-by-two feature interactions across scales by employing concatenate fusion of single-scale feature representations, and controls the expressiveness of each representation via a gating-based attention mechanism. Results: In the retrospective analysis, DLMM exhibited excellent predictive performance for treatment response, with areas under the receiver operating characteristic curve (AUCs) of 0.869 [95% confidence interval (95% CI): 0.806−0.933] in the internal testing set and 0.841 (95% CI: 0.814−0.867) in the external testing sets. For the pCR prediction task, DLMM reached AUCs of 0.865 (95% CI: 0.763−0.964) in the internal testing set and 0.821 (95% CI: 0.763−0.878) in the pooled external testing set. In the prospective testing study, DLMM also demonstrated favorable predictive performance, with AUCs of 0.829 (95% CI: 0.754−0.903) and 0.821 (95% CI: 0.692−0.949) for treatment response and pCR prediction, respectively. DLMM significantly outperformed the baseline models in all testing sets (P<0.05). Heatmaps were employed to interpret the decision-making basis of the model. Furthermore, exploration of the biological basis revealed that high DLMM scores were associated with immune-related pathways and cells in the microenvironment. Conclusions: The DLMM represents a valuable tool that aids clinicians in selecting personalized treatment strategies for breast cancer patients.
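The gating-plus-concatenation idea can be sketched with untrained placeholder weights: each single-scale slide-level feature is element-wise gated by a sigmoid-tanh product before the per-scale representations are concatenated. This loosely follows gated-attention pooling and is only an assumption about the mechanism's form, not the DLMM implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_concat_fusion(scale_feats, rng):
    """Gate each single-scale feature vector with a sigmoid-tanh product
    (untrained placeholder weights), then concatenate the gated
    representations; downstream layers would model cross-scale interactions.
    None of this reuses DLMM's actual parameters."""
    gated = []
    for f in scale_feats:                          # f: (dim,) per-scale feature
        U = rng.normal(scale=0.01, size=(f.size, f.size))
        V = rng.normal(scale=0.01, size=(f.size, f.size))
        gate = sigmoid(U @ f) * np.tanh(V @ f)     # element-wise gate
        gated.append(gate * f)
    return np.concatenate(gated)

rng = np.random.default_rng(0)
scales = [rng.normal(size=128) for _ in range(2)]  # e.g., 5x and 20x WSI features
print(gated_concat_fusion(scales, rng).shape)      # (256,)
```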
The precise identification of quartz minerals is crucial in mineralogy and geology due to their widespread occurrence and industrial significance. Traditional methods of quartz identification in thin sections are labor-intensive, require significant expertise, and are often complicated by the coexistence of other minerals. This study presents a novel approach leveraging deep learning techniques combined with hyperspectral imaging to automate the identification of quartz minerals. Four advanced deep learning models (PSPNet, U-Net, FPN, and LinkNet) were utilized, yielding significant gains in efficiency and accuracy. Among these models, PSPNet exhibited superior performance, achieving the highest intersection over union (IoU) scores and demonstrating exceptional reliability in segmenting quartz minerals, even in complex scenarios. The study involved a comprehensive dataset of 120 thin sections, encompassing 2470 hyperspectral images prepared from 20 rock samples. Expert-reviewed masks were used for model training, ensuring robust segmentation results. This automated approach not only expedites the recognition process but also enhances reliability, providing a valuable tool for geologists and advancing the field of mineralogical analysis.
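Since IoU is the headline metric here, a short reference implementation for binary masks follows; the toy masks stand in for a predicted quartz segmentation and its expert-reviewed ground truth.

```python
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection over union for one binary segmentation mask."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0                      # both masks empty: define IoU as 1
    return np.logical_and(pred, true).sum() / union

pred = np.zeros((64, 64), dtype=bool)
pred[10:40, 10:40] = True               # toy predicted quartz pixels
truth = np.zeros((64, 64), dtype=bool)
truth[15:45, 15:45] = True              # toy expert-reviewed mask
print(f"IoU = {iou(pred, truth):.3f}")
```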
Mudflat vegetation plays a crucial role in the ecological function of wetland environments, and obtaining its fine spatial distribution is of great significance for wetland protection and management. Remote sensing techniques enable rapid extraction of wetland vegetation over large areas. However, the imaging of optical sensors is easily restricted by weather conditions, and the backscattered information in Synthetic Aperture Radar (SAR) images is easily disturbed by many factors. Although both data sources have been applied to wetland vegetation classification, there is a lack of comparative study on how the selection of data sources affects the classification result. This study takes the vegetation of the tidal flat wetland of Chongming Island, Shanghai, China, in 2019 as the research subject. A total of 22 optical feature parameters and 11 SAR feature parameters were extracted from the optical data source (Sentinel-2) and the SAR data source (Sentinel-1), respectively. The performance of optical and SAR data and their feature parameters in wetland vegetation classification was quantitatively compared and analyzed using different feature combinations. Furthermore, by simulating the scenario of missing optical images, the impact of missing optical imagery on vegetation classification accuracy and the compensatory effect of integrating SAR data were revealed. The results show that: 1) under the same classification algorithm, the Overall Accuracy (OA) of the combined use of optical and SAR images was the highest, reaching 95.50%; the OA of using only optical images was slightly lower, while using only SAR images yielded the lowest accuracy, though it still reached 86.48%. 2) Compared to directly using the spectral reflectance of the optical data and the backscattering coefficient of the SAR data, the constructed optical and SAR feature parameters contributed to improving classification accuracy. Including the optical feature parameters (vegetation index, spatial texture, and phenology features) and the SAR feature parameters (SAR index and SAR texture features) in the classification algorithm improved the OA by 4.56% and 9.47%, respectively. SAR backscatter, SAR index, optical phenological features, and vegetation index were identified as the top-ranking important features. 3) When the optical data were missing continuously for six months, the OA dropped to a minimum of 41.56%; however, when combined with SAR data, the OA could be improved to 71.62%. This indicates that incorporating SAR features can effectively compensate for the accuracy loss caused by missing optical images, especially in regions with long-term cloud cover.
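The feature-combination comparison can be mimicked on toy data as shown below: 22 "optical" and 11 "SAR" feature columns are classified separately and jointly, and the overall accuracy is reported for each combination. The random features, class count, and random forest classifier are stand-ins; the study's actual algorithm, features, and accuracies are as reported above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy stand-ins for the per-pixel feature sets: 22 optical features
# (reflectance, vegetation indices, texture, phenology) and 11 SAR features
# (backscatter, SAR indices, SAR texture).
rng = np.random.default_rng(0)
n = 3000
optical = rng.random((n, 22))
sar = rng.random((n, 11))
labels = rng.integers(0, 5, size=n)            # vegetation classes (toy)

for name, X in [("optical only", optical), ("SAR only", sar),
                ("optical + SAR", np.hstack([optical, sar]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                              random_state=0)
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(f"{name}: OA = {accuracy_score(y_te, rf.predict(X_te)):.3f}")
```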
The study of evaluation systems for multi-source image fusion is an important and necessary part of image fusion research. Qualitative and quantitative evaluation indexes were studied. A series of new concepts, such as the independent single evaluation index, the union single evaluation index, and the synthetic evaluation index, were proposed. Based on these concepts, a synthetic evaluation system for digital image fusion was formed. Experiments with the wavelet fusion method, applied to fuse multi-spectral and panchromatic remote sensing images, IR and visible images, CT and MRI images, and multi-focus images, show that the system is an objective, uniform, and effective quantitative method for image fusion evaluation.
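To make the index taxonomy concrete, the sketch below computes two common fusion metrics: image entropy (an index needing only the fused image, in the spirit of an "independent single" index) and mutual information with a source image (a "union single" index), then combines them with arbitrary weights as a toy "synthetic" index. The specific indexes and weights of the proposed evaluation system are not reproduced.

```python
import numpy as np

def entropy(img, bins=256):
    """Information entropy of an image, computed from its gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def mutual_information(fused, source, bins=64):
    """Mutual information between the fused image and one source image."""
    joint, _, _ = np.histogram2d(fused.ravel(), source.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return (pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum()

rng = np.random.default_rng(0)
source = rng.integers(0, 256, size=(128, 128)).astype(float)
fused = np.clip(source + rng.normal(0, 10, source.shape), 0, 255)
# A toy "synthetic" index: an (arbitrary) weighted mix of single indexes.
print(0.5 * entropy(fused) + 0.5 * mutual_information(fused, source))
```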
Geological data are constructed in vector format in a geographical information system (GIS), while other data such as remote sensing images, geographical data, and geochemical data are stored in raster format. This paper converts the vector data into 8-bit images programmatically, according to the importance of each layer to mineralization. With this method, the geological meaning can be conveyed through the raster images. The paper also fuses geographical and geochemical data with the programmed strata data. The results show that image fusion can express different intensities effectively and visualize structural characteristics in two dimensions. Furthermore, it can also produce optimized information from multi-source data and express it more directly.
To elucidate the fracturing mechanism of deep hard rock under complex disturbance environments, this study investigates the dynamic failure behavior of pre-damaged granite subjected to multi-source dynamic disturbances. Blasting vibration monitoring was conducted in a deep-buried drill-and-blast tunnel to characterize in-situ dynamic loading conditions. Subsequently, true triaxial compression tests incorporating multi-source disturbances were performed using a self-developed wide-low-frequency true triaxial system to simulate disturbance accumulation and damage evolution in granite. The results demonstrate that combined dynamic disturbances and unloading damage significantly accelerate strength degradation and trigger shear-slip failure along preferentially oriented blast-induced fractures, with strength reductions up to 16.7%. Layered failure was observed on the free surface of pre-damaged granite under biaxial loading, indicating a disturbance-induced fracture localization mechanism. Time-stress-fracture-energy coupling fields were constructed to reveal the spatiotemporal characteristics of fracture evolution. Critical precursor frequency bands (105-150, 185-225, and 300-325 kHz) were identified, which serve as diagnostic signatures of impending failure. A dynamic instability mechanism driven by multi-source disturbance superposition and pre-damage evolution was established. Furthermore, a grouting-based wave-absorption control strategy was proposed to mitigate deep dynamic disasters by attenuating disturbance amplitude and reducing excitation frequency.
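As an illustration of how the reported precursor bands could be monitored, the sketch below estimates the fraction of signal power falling in each band from a Welch spectrum of a synthetic waveform; the sampling rate, waveform, and use of a Welch PSD are assumptions, not the study's processing chain.

```python
import numpy as np
from scipy.signal import welch

def band_energy_fractions(signal, fs, bands_hz):
    """Fraction of signal power in each frequency band, from a Welch PSD."""
    f, psd = welch(signal, fs=fs, nperseg=4096)
    out = {}
    for lo, hi in bands_hz:
        mask = (f >= lo) & (f <= hi)
        out[f"{lo / 1e3:.0f}-{hi / 1e3:.0f} kHz"] = psd[mask].sum() / psd.sum()
    return out

# Synthetic acoustic-emission-like waveform sampled at 2 MHz (assumed rate).
fs = 2_000_000
t = np.arange(0, 0.05, 1 / fs)
rng = np.random.default_rng(0)
sig = (np.sin(2 * np.pi * 120e3 * t) + 0.5 * np.sin(2 * np.pi * 310e3 * t)
       + 0.2 * rng.normal(size=t.size))
# Precursor bands reported in the abstract (kHz converted to Hz).
bands = [(105e3, 150e3), (185e3, 225e3), (300e3, 325e3)]
print(band_energy_fractions(sig, fs, bands))
```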
Funding (sea-ice classification): The National Natural Science Foundation of China under contract No. 61671481; the Qingdao Applied Fundamental Research under contract No. 16-5-1-11-jch; the Fundamental Research Funds for Central Universities under contract No. 18CX05014A.
Funding (reversible data hiding): Funded by the University of Transport and Communications (UTC) under grant number T2025-CN-004.
Funding (remote sensing super-resolution): This study was supported by the Inner Mongolia Academy of Forestry Sciences Open Research Project (Grant No. KF2024MS03); the Project to Improve the Scientific Research Capacity of the Inner Mongolia Academy of Forestry Sciences (Grant No. 2024NLTS04); and the Innovation and Entrepreneurship Training Program for Undergraduates of Beijing Forestry University (Grant No. X202410022268).
Funding (Alzheimer's diagnosis): Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under grant No. DGSSR-2025-02-01295.
Funding (snow depth): Meteorological Research in the Public Interest, No. GYHY201106014; Beijing Nova Program, No. 2010B037; China Special Fund for the National High Technology Research and Development Program of China (863 Program), No. 412230.
Funding (3D registration and integration): Supported by the National "863" Program of China (No. 2009BAI81B02) and the Doctoral Foundation of the Ministry of Education (No. 20070287055).
Funding (multi-source remote sensing image registration): Supported by the National Natural Science Foundation of China (Nos. 61462046 and 61762052); the Natural Science Foundation of Jiangxi Province (Nos. 20161BAB202049 and 20161BAB204172); the Bidding Project of the Key Laboratory of Watershed Ecology and Geographical Environment Monitoring, NASG (Nos. WE2016003, WE2016013 and WE2016015); the Science and Technology Research Projects of Jiangxi Province Education Department (Nos. GJJ160741, GJJ170632 and GJJ170633); and the Art Planning Project of Jiangxi Province (Nos. YG2016250 and YG2017381).
Funding (guided image fusion): Supported by the National Natural Science Foundation of China (61472324, 61671383) and the Shaanxi Key Industry Innovation Chain Project (2018ZDCXL-G-12-2, 2019ZDLGY14-02-02).
Funding (intraoperative Raman imaging): Supported by the National Natural Science Foundation of China (Grant Nos. 82272955 and 22203057) and the Natural Science Foundation of Fujian Province (Grant No. 2021J011361).
Funding (infrared and visible image fusion): Supported by the Henan Province Key Research and Development Project (231111211300); the Central Government of Henan Province Guides Local Science and Technology Development Funds (Z20231811005); the Henan Province Key Research and Development Project (231111110100); the Henan Provincial Outstanding Foreign Scientist Studio (GZS2024006); and the Henan Provincial Joint Fund for Scientific and Technological Research and Development Plan (Application and Overcoming Technical Barriers) (242103810028).
Funding (breast cancer NAC response prediction): Supported by the National Natural Science Foundation of China (No. 82371933); the National Natural Science Foundation of Shandong Province of China (No. ZR2021MH120); the Taishan Scholars Project (No. tsqn202211378); and the Shandong Provincial Natural Science Foundation for Excellent Young Scholars (No. ZR2024YQ075).
Funding (mudflat vegetation classification): Under the auspices of the National Key Research and Development Program of China (No. 2023YFC3208500); the Shanghai Municipal Natural Science Foundation (No. 22ZR1421500); the National Natural Science Foundation of China (No. U2243207); the National Science and Technology Basic Resources Survey Project (No. 2023FY01001); the Open Research Fund of the State Key Laboratory of Estuarine and Coastal Research (No. SKLEC-KF202406); and a Project from the Science and Technology Commission of Shanghai Municipality (No. 22DZ1202700).
Funding (image fusion evaluation system): National Natural Science Foundation of China (No. 60375008); Shanghai EXPO Special Project (No. 2004BA908B07); Shanghai NRC International Cooperation Project (No. 05SN07118).
Funding (deep hard rock fracturing): Supported by the National Key R&D Program of China (No. 2023YFB2603602) and the National Natural Science Foundation of China (Nos. 52222810 and 52178383).