Funding: The National Natural Science Foundation of China under contract No. 61671481; the Qingdao Applied Fundamental Research under contract No. 16-5-1-11-jch; the Fundamental Research Funds for Central Universities under contract No. 18CX05014A.
Abstract: We present a novel sea-ice classification framework based on locality-preserving fusion of multi-source image information. The locality preservation is twofold, covering local characterization in both the spatial and feature domains. We begin by simultaneously learning a projection matrix, which preserves spatial localities, and a similarity matrix, which encodes feature similarities. The projection matrix maps the pixels of the multi-source images to a set of fusion vectors that preserve the spatial localities of the image. In parallel, applying Laplacian eigen-decomposition to the similarity matrix yields another set of fusion vectors that preserve local feature similarities. We concatenate the fusion vectors from both the spatial and the feature branches to obtain the fused image. Finally, we classify the fused-image pixels with a novel sliding ensemble strategy, which further enhances locality preservation during classification. Our locality-preserving fusion framework is effective for classifying multi-source sea-ice images (e.g., multi-spectral and synthetic aperture radar (SAR) images) because it not only captures the spatial neighboring relationships comprehensively but also intrinsically characterizes the feature associations between different types of sea ice. Experimental evaluations validate the effectiveness of the framework.
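As an illustration of the feature-locality branch described above, the following is a minimal sketch that builds a Gaussian k-nearest-neighbour similarity matrix over stacked multi-source pixel features, forms its graph Laplacian, and takes the leading non-trivial eigenvectors as fusion vectors (essentially Laplacian eigenmaps). The paper learns the similarity matrix jointly with the spatial projection; the fixed affinity, the function name, and all parameter values here are assumptions for illustration only.

```python
# Hedged sketch: feature-locality fusion vectors via Laplacian eigen-decomposition.
# A fixed Gaussian kNN affinity stands in for the paper's learned similarity matrix.
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.linalg import eigh

def feature_locality_fusion(pixels, n_dims=3, k=10, sigma=1.0):
    """pixels: (N, D) stacked multi-source features for N pixels (small N assumed)."""
    d2 = ((pixels[:, None, :] - pixels[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    w = np.exp(-d2 / (2 * sigma ** 2))
    # Keep only the k nearest neighbours per pixel, then symmetrise.
    far = np.argsort(d2, axis=1)[:, k + 1:]
    np.put_along_axis(w, far, 0.0, axis=1)
    w = np.maximum(w, w.T)
    # Smallest non-trivial eigenvectors of the normalised Laplacian keep similar
    # pixels close in the fused representation (Laplacian eigenmaps).
    lap = laplacian(w, normed=True)
    vals, vecs = eigh(lap)
    return vecs[:, 1:n_dims + 1]   # skip the (near-)constant eigenvector
```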
Funding: Supported by the National Natural Science Foundation of China (61472324, 61671383) and the Shaanxi Key Industry Innovation Chain Project (2018ZDCXL-G-12-2, 2019ZDLGY14-02-02).
Abstract: In recent years, guided image fusion algorithms have become increasingly popular; however, current algorithms cannot eliminate halo artifacts. We propose an image fusion algorithm based on a fast weighted guided filter. Firstly, the source images are separated into a series of high- and low-frequency components. Secondly, three visual features of the source image are extracted to construct a decision-graph model. Thirdly, a fast weighted guided filter is proposed to optimize the result of the previous step and reduce the time complexity by exploiting the correlation among neighboring pixels. Finally, the image obtained in the previous step is combined with the weight map to realize the fusion. The proposed algorithm is applied to multi-focus, visible-infrared, and multi-modal images, and the results show that it effectively removes the halo artifacts of the merged images with higher efficiency, outperforming traditional methods in both subjective visual quality and objective evaluation.
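For context, below is a minimal sketch of a plain box-filter guided filter (He et al.), the building block that the paper extends into a fast weighted variant, together with a toy two-source blend that refines a saliency-based weight map with it. The weighting scheme, the saliency measure, and all parameter values are assumptions, not the paper's algorithm.

```python
# Hedged sketch: an unweighted guided filter plus a toy weight-map-based fusion.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=8, eps=1e-3):
    """I: guidance image, p: input to be filtered, both float arrays in [0, 1]."""
    box = lambda x: uniform_filter(x, size=2 * radius + 1)
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I ** 2
    a = cov_Ip / (var_I + eps)          # per-window linear coefficients
    b = mean_p - a * mean_I
    return box(a) * I + box(b)          # edge-preserving smoothed output

def fuse_two_sources(src_a, src_b, radius=8):
    """Toy fusion: pick the more salient source per pixel, smooth the weights."""
    sal_a = np.abs(src_a - uniform_filter(src_a, 31))
    sal_b = np.abs(src_b - uniform_filter(src_b, 31))
    w = np.clip(guided_filter(src_a, (sal_a >= sal_b).astype(float), radius), 0, 1)
    return w * src_a + (1 - w) * src_b
```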
Funding: National Natural Science Foundation of China (No. 60375008); Shanghai EXPO Special Project (No. 2004BA908B07); Shanghai NRC International Cooperation Project (No. 05SN07118).
Abstract: Studying the evaluation system for multi-source image fusion is an important and necessary part of image fusion. Both qualitative and quantitative evaluation indexes were studied. A series of new concepts, such as the independent single evaluation index, the union single evaluation index, and the synthetic evaluation index, were proposed, and a synthetic evaluation system for digital image fusion was formed on this basis. Experiments with the wavelet fusion method, applied to fuse multi-spectral and panchromatic remote sensing images, IR and visible images, CT and MRI images, and multi-focus images, show that it is an objective, uniform, and effective quantitative method for image fusion evaluation.
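The abstract does not define the individual indexes, so as a stand-in the sketch below computes a few standard single-image fusion quality measures (entropy, standard deviation, spatial frequency) and combines them into one illustrative synthetic score. The choice of indices and the weights are assumptions, not the paper's definitions.

```python
# Hedged sketch: common quantitative fusion indices and a toy synthetic score.
import numpy as np

def entropy(img, bins=256):
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def spatial_frequency(img):
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))   # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def synthetic_index(fused, weights=(0.4, 0.3, 0.3)):
    """Combine single indices into one score; the weights are illustrative only."""
    scores = np.array([entropy(fused), fused.std(), spatial_frequency(fused)])
    scores = scores / (scores.max() + 1e-12)           # crude normalisation
    return float((np.asarray(weights) * scores).sum())
```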
Funding: The National Natural Science Foundation of China (61671383); Shaanxi Key Industry Innovation Chain Project (2018ZDCXL-G-12-2, 2019ZDLGY14-02-02, 2019ZDLGY14-02-03).
Abstract: Image fusion based on sparse representation (SR) has become the primary research direction among transform-domain methods. However, SR-based image fusion algorithms suffer from high computational complexity and neglect the local features of an image, resulting in limited retention of image detail and high sensitivity to registration misalignment. To overcome these shortcomings, as well as the noise present in the images during fusion, this paper proposes a new signal decomposition model: a multi-source image fusion algorithm based on gradient-regularized convolutional sparse representation (CSR). The main innovation of this work is using a sparse optimization function to perform a two-scale decomposition of the source images into high-frequency and low-frequency components. The sparse coefficients are obtained with the gradient-regularized CSR model, and the coefficient with the maximum value is taken to obtain the optimal high-frequency component of the fused image. The optimal low-frequency component is obtained using an extreme-value or averaging fusion strategy. The final fused image is obtained by adding the two optimal components. Experimental results demonstrate that this method greatly improves the ability to preserve image details and reduces sensitivity to image registration.
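To make the surrounding pipeline concrete, the following is a minimal sketch of a two-scale decompose-fuse-recombine loop: a simple max-absolute rule on the detail layers stands in for the paper's gradient-regularized CSR coefficient selection, and the low-frequency layers are fused by averaging or an extreme-value rule. Filter sizes and the stand-in rule are assumptions.

```python
# Hedged sketch of the two-scale fusion pipeline; the max-absolute rule on the
# detail layers is a stand-in for the paper's CSR coefficient selection.
import numpy as np
from scipy.ndimage import uniform_filter

def two_scale_fuse(src_a, src_b, size=31, low_rule="mean"):
    low_a, low_b = uniform_filter(src_a, size), uniform_filter(src_b, size)
    high_a, high_b = src_a - low_a, src_b - low_b        # detail (high-frequency) layers
    high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    if low_rule == "mean":
        low = 0.5 * (low_a + low_b)                      # averaging strategy
    else:
        low = np.maximum(low_a, low_b)                   # extreme-value strategy
    return low + high                                    # recombined fused image
```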
基金The Key R&D Project of Hainan Province under contract No.ZDYF2023SHFZ097the National Natural Science Foundation of China under contract No.42376180。
文摘Mangroves are indispensable to coastlines,maintaining biodiversity,and mitigating climate change.Therefore,improving the accuracy of mangrove information identification is crucial for their ecological protection.Aiming at the limited morphological information of synthetic aperture radar(SAR)images,which is greatly interfered by noise,and the susceptibility of optical images to weather and lighting conditions,this paper proposes a pixel-level weighted fusion method for SAR and optical images.Image fusion enhanced the target features and made mangrove monitoring more comprehensive and accurate.To address the problem of high similarity between mangrove forests and other forests,this paper is based on the U-Net convolutional neural network,and an attention mechanism is added in the feature extraction stage to make the model pay more attention to the mangrove vegetation area in the image.In order to accelerate the convergence and normalize the input,batch normalization(BN)layer and Dropout layer are added after each convolutional layer.Since mangroves are a minority class in the image,an improved cross-entropy loss function is introduced in this paper to improve the model’s ability to recognize mangroves.The AttU-Net model for mangrove recognition in high similarity environments is thus constructed based on the fused images.Through comparison experiments,the overall accuracy of the improved U-Net model trained from the fused images to recognize the predicted regions is significantly improved.Based on the fused images,the recognition results of the AttU-Net model proposed in this paper are compared with its benchmark model,U-Net,and the Dense-Net,Res-Net,and Seg-Net methods.The AttU-Net model captured mangroves’complex structures and textural features in images more effectively.The average OA,F1-score,and Kappa coefficient in the four tested regions were 94.406%,90.006%,and 84.045%,which were significantly higher than several other methods.This method can provide some technical support for the monitoring and protection of mangrove ecosystems.
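The abstract does not spell out the improved loss, so the sketch below shows one common way to handle a minority class in segmentation: an inverse-frequency class-weighted cross-entropy in PyTorch. The weighting rule and the example pixel counts are assumptions, not the paper's exact formulation.

```python
# Hedged sketch: class-weighted cross-entropy for a minority "mangrove" class.
import torch
import torch.nn as nn

def weighted_ce_loss(class_pixel_counts, device="cpu"):
    counts = torch.tensor(class_pixel_counts, dtype=torch.float32)
    weights = counts.sum() / (len(counts) * counts)      # inverse-frequency weights
    return nn.CrossEntropyLoss(weight=weights.to(device))

# Usage with a 2-class (background vs. mangrove) segmentation head; counts are made up:
# criterion = weighted_ce_loss([9_000_000, 400_000])
# loss = criterion(logits, target)   # logits: (N, 2, H, W), target: (N, H, W) long
```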
Abstract: The integration of image analysis through deep learning (DL) into rock classification represents a significant leap forward in geological research. While traditional methods remain invaluable for their expertise and historical context, DL offers a powerful complement by enhancing the speed, objectivity, and precision of the classification process. This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks (CNNs) for geological image analysis, particularly for classifying igneous, metamorphic, and sedimentary rock types from rock thin section (RTS) images. The study focuses primarily on classic image augmentation techniques and evaluates their impact on model accuracy and precision. Results demonstrate that augmentation techniques such as Equalize significantly enhance the model's classification capability, achieving F1-scores of 0.9869 for igneous, 0.9884 for metamorphic, and 0.9929 for sedimentary rocks, improvements over the original baseline results. Moreover, the weighted average F1-score across all classes and techniques is 0.9886, indicating an overall enhancement. Conversely, methods such as Distort decrease accuracy and F1-score, with F1-scores of 0.949 for igneous, 0.954 for metamorphic, and 0.9416 for sedimentary rocks, degrading performance relative to the baseline. The study underscores the practicality of image data augmentation in geological image classification and advocates the adoption of DL methods in this domain for automation and improved results. The findings can benefit fields such as remote sensing, mineral exploration, and environmental monitoring by improving the accuracy of geological image analysis for both scientific research and industrial applications.
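As an illustration of the Equalize-style augmentation discussed above, here is a hedged torchvision pipeline that applies histogram equalization alongside a couple of generic geometric transforms. The companion transforms, probabilities, and the "rts_images/" folder path are assumptions, not the paper's actual configuration.

```python
# Hedged sketch: a training-time augmentation pipeline featuring the Equalize operation.
from torchvision import transforms

train_tf = transforms.Compose([
    transforms.RandomEqualize(p=1.0),        # histogram equalisation ("Equalize")
    transforms.RandomHorizontalFlip(p=0.5),  # generic geometric augmentation
    transforms.RandomRotation(degrees=15),
    transforms.ToTensor(),
])
# Hypothetical usage with a folder of rock thin section images:
# dataset = torchvision.datasets.ImageFolder("rts_images/", transform=train_tf)
```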
Funding: Supported by the National Key Research and Development Project of China (No. 2023YFB3709605), the National Natural Science Foundation of China (No. 62073193), and the National College Student Innovation Training Program (No. 202310422122).
Abstract: Potential high-temperature risks exist in the heat-prone components of electric moped charging devices, such as sockets, interfaces, and controllers. Traditional detection methods have limitations in real-time performance and monitoring scope. To address this, a temperature detection method based on infrared image processing is proposed: a median filtering algorithm denoises the original infrared image, and an image segmentation algorithm then partitions the image.
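The abstract names median filtering followed by segmentation without specifying the segmentation method; the sketch below uses Otsu thresholding and contour extraction in OpenCV as a plausible stand-in. The kernel size and threshold choice are assumptions.

```python
# Hedged sketch: median-filter denoising followed by a simple threshold segmentation
# of hot regions; Otsu thresholding stands in for the unspecified segmentation step.
import cv2

def segment_hot_regions(infrared_gray):
    """infrared_gray: uint8 single-channel infrared image."""
    denoised = cv2.medianBlur(infrared_gray, 5)               # median filtering
    _, mask = cv2.threshold(denoised, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return mask, contours                                      # hot-region mask and outlines
```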
Funding: The National Natural Science Foundation of China (42472194, 42302153, and 42002144); the Fundamental Research Funds for the Central Universities (22CX06002A).
Abstract: Karst fractures serve as crucial seepage channels and storage spaces for carbonate natural gas reservoirs, and electrical image logs are vital data for visualizing and characterizing such fractures. However, the conventional approach of identifying fractures from electrical image logs relies predominantly on manual processes that are not only time-consuming but also highly subjective. In addition, the heterogeneity and strong dissolution tendency of karst carbonate reservoirs lead to complex and varied fracture geometry, which makes fractures difficult to identify accurately. In this paper, the electrical image logs network (EILnet), a deep-learning-based intelligent semantic segmentation model with a selective attention mechanism and a selective feature fusion module, was created to enable the intelligent identification and segmentation of different types of fractures from electrical logging images. Data from electrical image logs representing structural and induced fractures were first selected using a sliding-window technique, and image inpainting and data augmentation were then applied to improve the generalizability of the model. Various image-processing tools, including the bilateral filter, Laplace operator, and Gaussian low-pass filter, were also applied to the electrical logging images to generate a multi-attribute dataset that helps the model learn the semantic features of the fractures. The results demonstrate that the EILnet model outperforms mainstream deep-learning semantic segmentation models, such as Fully Convolutional Networks (FCN-8s), U-Net, and SegNet, on both the single-channel and the multi-attribute dataset. EILnet provided a clear advantage on the single-channel dataset, with a mean intersection over union (MIoU) of 81.32% and a pixel accuracy (PA) of 89.37%. On the multi-attribute dataset, the identification capability of all models improved to varying degrees, with EILnet achieving the highest MIoU and PA of 83.43% and 91.11%, respectively. Furthermore, applying the EILnet model to several blind wells demonstrated its ability to provide reliable fracture identification, indicating promising potential applications.
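To illustrate the multi-attribute construction mentioned above, the sketch below stacks a bilateral-filtered, a Laplacian, and a Gaussian low-pass version of a logging image patch as extra channels. Kernel sizes and filter parameters are assumptions, not the paper's settings.

```python
# Hedged sketch: building a multi-attribute input from an electrical image log patch.
import cv2
import numpy as np

def multi_attribute_stack(image_gray):
    """image_gray: uint8 single-channel electrical image log patch."""
    bilateral = cv2.bilateralFilter(image_gray, 9, 75, 75)            # edge-preserving smoothing
    laplace = cv2.convertScaleAbs(cv2.Laplacian(image_gray, cv2.CV_16S, ksize=3))
    gaussian = cv2.GaussianBlur(image_gray, (5, 5), 0)                 # low-pass attribute
    return np.stack([image_gray, bilateral, laplace, gaussian], axis=-1)
```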
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 82272955 and 22203057) and the Natural Science Foundation of Fujian Province (Grant No. 2021J011361).
Abstract: The presence of a positive deep surgical margin in tongue squamous cell carcinoma (TSCC) significantly elevates the risk of local recurrence. Therefore, a prompt and precise intraoperative assessment of margin status is imperative to ensure thorough tumor resection. In this study, we integrate Raman imaging technology with an artificial intelligence (AI) generative model, proposing an innovative approach for intraoperative diagnosis of margin status. This method uses Raman imaging to swiftly and non-invasively capture tissue Raman images, which are then transformed into hematoxylin-eosin (H&E)-stained histopathological images by an AI generative model for histopathological diagnosis. The generated H&E-stained images clearly illustrate the tissue's pathological condition. Independently reviewed by three pathologists, the overall diagnostic accuracy for distinguishing tumor tissue from normal muscle tissue reaches 86.7%. Notably, the approach outperforms current clinical practice, especially in TSCC with positive lymph node metastasis or moderately differentiated grades. This advancement highlights the potential of AI-enhanced Raman imaging to significantly improve intraoperative assessment and surgical margin evaluation, promising a versatile diagnostic tool beyond TSCC.
Funding: Supported by the National Natural Science Foundation of China (NSFC) 12333010; the National Key R&D Program of China 2022YFF0503002; the Strategic Priority Research Program of the Chinese Academy of Sciences (grant No. XDB0560000); the NSFC 11921003; the Prominent Postdoctoral Project of Jiangsu Province (2023ZB304); and the Strategic Priority Research Program on Space Science, the Chinese Academy of Sciences, grant No. XDA15320000.
Abstract: Indirect X-ray modulation imaging has been adopted in a number of solar missions and has provided reconstructed X-ray images of solar flares of great scientific importance. However, assessing the image quality of the reconstructions remains difficult, even though such assessment is particularly useful for the scheme design of X-ray imaging systems, for testing and improving imaging algorithms, and for scientific research on X-ray sources. Currently, there is no established method to quantitatively evaluate the quality of X-ray image reconstruction or the point-spread function (PSF) of an X-ray imager. In this paper, we propose the percentage proximity degree (PPD), which accounts for the imaging characteristics of X-ray image reconstruction, in particular sidelobes and their effects on imaging quality. After testing a variety of imaging quality assessments in six aspects, we applied the technique for order preference by similarity to ideal solution to the indices that meet the requirements. We then develop the final quality index for X-ray image reconstruction, QuIX, which consists of the selected indices and the new PPD. QuIX performs well in a series of tests, including assessment of the instrument PSF and simulation tests under different grid configurations, as well as imaging tests with RHESSI data. It is also a useful tool for testing imaging algorithms and determining imaging parameters for both RHESSI and the ASO-S/Hard X-ray Imager, such as the field of view, beam width factor, and detector selection.
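The index screening relies on the technique for order preference by similarity to ideal solution (TOPSIS); a minimal generic implementation is sketched below. The criteria weights and the benefit/cost orientation of each criterion are assumptions supplied by the caller, not values from the paper.

```python
# Hedged sketch: a standard TOPSIS ranking, as used to screen candidate quality indices.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: (alternatives, criteria); benefit[j] is True if larger is better."""
    m = matrix / np.sqrt((matrix ** 2).sum(axis=0, keepdims=True))  # vector normalisation
    v = m * np.asarray(weights)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))         # positive ideal solution
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))          # negative ideal solution
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)      # closeness coefficient, higher ranks better
```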
Funding: The Deanship of Scientific Research at King Khalid University, Large Group Research Project under grant number RGP2/421/45; funding from Prince Sattam bin Abdulaziz University, project number PSAU/2024/R/1446; the Researchers Supporting Project Number (UM-DSR-IG-2023-07), Almaarefa University, Riyadh, Saudi Arabia; and the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2021R1F1A1055408).
Abstract: Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws or diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Many of the extracted image features are irrelevant and increase computation time. Therefore, this article uses an analytical learning paradigm to design a Congruent Feature Selection Method that selects the most relevant image features. The process trains the learning paradigm using similarity- and correlation-based features over different textural intensities and pixel distributions. Pixel similarities with high indexes across the various distribution patterns are recommended for disease diagnosis. The correlation based on intensity and distribution is then analyzed to improve feature selection congruency, and the most congruent pixels are sorted in descending order of selection, which identifies better regions than the raw distribution. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection, improving the probability of feature selection regardless of texture or medical image pattern and thereby enhancing the performance of ML applications in medical image processing. The proposed method improves accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared with other models on the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared with the same models and dataset.
Funding: Supported by the National Key R&D Program of China 2022YFF0503002; the National Natural Science Foundation of China (NSFC, Grant Nos. 12333010 and 12233012); the Strategic Priority Research Program of the Chinese Academy of Sciences (grant No. XDB0560000); the Prominent Postdoctoral Project of Jiangsu Province (2023ZB304); and the Strategic Priority Research Program on Space Science, the Chinese Academy of Sciences, grant No. XDA15320000.
Abstract: Imaging observations of solar X-ray bursts can reveal details of the energy release process and particle acceleration in flares. Most hard X-ray imagers use the modulation-based Fourier transform imaging method, an indirect imaging technique that requires algorithms to reconstruct and optimize images. Over the last decade, a variety of such algorithms have been developed and improved. However, it is difficult to quantitatively evaluate the image quality of different solutions without a true reference image of the observation, and how to choose the imaging parameters of these algorithms for the best performance is also an open question. In this study, we present a detailed test of the characteristics of these algorithms, of the imaging dynamic range, and of a crucial parameter of the CLEAN method, the clean beam width factor (CBWF). We first used SDO/AIA EUV images to compute DEM maps and calculate thermal X-ray maps. These realistic sources and several types of simulated sources are then used as the ground truth in imaging simulations for both RHESSI and ASO-S/HXI. The different solutions are evaluated quantitatively by a number of means. The overall results suggest that EM, PIXON, and CLEAN are exceptional methods for sidelobe elimination, producing images with clear source details. Although MEM_GE, MEM_NJIT, VIS_WV, and VIS_CS offer fast imaging and generate good images, each has its own associated imperfections. The two forward-fit algorithms, VF and FF, perform differently, with VF appearing more robust and useful. We also demonstrate the imaging capability of HXI and the available HXI algorithms. Furthermore, the effect of the CBWF on image quality was investigated, and optimal settings for both RHESSI and HXI are proposed.
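To show where the clean beam width factor enters, the sketch below is a bare-bones Högbom-style CLEAN loop: point components are iteratively subtracted from the dirty map using the dirty beam and then restored with a Gaussian clean beam whose width is the nominal resolution scaled by the CBWF. The wrap-around subtraction, gain, and iteration count are simplifications and assumptions, not the RHESSI/HXI implementation.

```python
# Hedged sketch: minimal Hoegbom CLEAN showing the role of a clean-beam width factor.
import numpy as np
from scipy.ndimage import gaussian_filter

def hogbom_clean(dirty_map, dirty_beam, gain=0.1, n_iter=500,
                 resolution_sigma=2.0, cbwf=1.0):
    """dirty_beam: PSF, same shape as dirty_map, peak at the array centre."""
    residual = dirty_map.astype(float).copy()
    components = np.zeros_like(residual)
    cy, cx = residual.shape[0] // 2, residual.shape[1] // 2
    for _ in range(n_iter):
        y, x = np.unravel_index(np.argmax(residual), residual.shape)
        peak = residual[y, x] * gain
        components[y, x] += peak
        # Subtract a shifted copy of the dirty beam (periodic wrap-around is a simplification).
        residual -= peak * np.roll(np.roll(dirty_beam, y - cy, axis=0), x - cx, axis=1)
    # Restore: convolve components with a Gaussian clean beam; the CBWF rescales its width.
    return gaussian_filter(components, sigma=resolution_sigma * cbwf) + residual
```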
Abstract: Large language models (LLMs), such as ChatGPT developed by OpenAI, represent a significant advancement in artificial intelligence (AI), designed to understand, generate, and interpret human language by analyzing extensive text data. Their potential integration into clinical settings offers a promising avenue that could transform clinical diagnosis and decision-making processes in the future (Thirunavukarasu et al., 2023). This article aims to provide an in-depth analysis of LLMs' current and potential impact on clinical practice. Their ability to generate differential diagnosis lists underscores their potential as invaluable tools in medical practice and education (Hirosawa et al., 2023; Koga et al., 2023).
Funding: Supported by the Henan Province Key Research and Development Project (231111211300); the Central Government of Henan Province Guides Local Science and Technology Development Funds (Z20231811005); the Henan Province Key Research and Development Project (231111110100); the Henan Provincial Outstanding Foreign Scientist Studio (GZS2024006); and the Henan Provincial Joint Fund for Scientific and Technological Research and Development Plan (Application and Overcoming Technical Barriers) (242103810028).
Abstract: The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible image. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which extract low-frequency and high-frequency information from the image. Because this extraction may fail to capture some information, a compensation encoder is proposed to supplement the missing information. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines the low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
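A minimal PyTorch skeleton of the base/detail/compensation split is sketched below: a pooled low-frequency path, a full-resolution detail path, and a compensation path are concatenated and decoded back to an image. Channel widths, depths, and the single-image reconstruction setting are assumptions; the attention-based fusion module used to merge two source images is omitted.

```python
# Hedged sketch: base/detail/compensation encoders and a decoder that recombines them.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class FusionAutoencoder(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.base = nn.Sequential(nn.AvgPool2d(2), conv_block(1, ch),
                                  nn.Upsample(scale_factor=2))   # low-frequency path
        self.detail = conv_block(1, ch)                          # high-frequency path
        self.compensate = conv_block(1, ch)                      # missed-information path
        self.decoder = nn.Sequential(conv_block(3 * ch, ch),
                                     nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, x):
        # x: (N, 1, H, W) grayscale input with even H and W.
        feats = torch.cat([self.base(x), self.detail(x), self.compensate(x)], dim=1)
        return self.decoder(feats)
```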
Funding: National Natural Science Foundation of China (No. 42301518); Hubei Key Laboratory of Regional Development and Environmental Response (No. 2023(A)002); Key Laboratory of the Evaluation and Monitoring of Southwest Land Resources (Ministry of Education) (No. TDSYS202304).
Abstract: Image-maps, a hybrid design with satellite images as the background and map symbols overlaid, aim to combine the high interpretation efficiency of maps with the realism of satellite images. The usability of image-maps is influenced by how the background images and map symbols are represented. Many researchers have explored optimizations of background images and symbolization techniques to reduce the complexity of image-maps and improve usability, but little literature addresses the optimum amount of symbol loading. This study focuses on the effects of background image complexity and map symbol load on the usability (i.e., effectiveness and efficiency) of image-maps. Experiments were conducted as user studies with eye-tracking equipment and an online questionnaire survey. The experimental data sets included image-maps with ten levels of map symbol load in ten areas, and forty volunteers took part in target-searching experiments. The results show that usability, measured as the average viewing time of targets (efficiency) and the average revisits (effectiveness), is influenced by the complexity of the background images, and that there is a peak corresponding to the optimum symbol load for an image-map. The optimum symbol load also peaks as the complexity of the background image or image-map increases. The complexity of the background images can therefore serve as a guideline for the optimum map symbol load in image-map design. This study enhances the user experience by optimizing visual clarity and managing cognitive load. Understanding how these factors interact can help create adaptive maps that maintain clarity and usability, guiding AI algorithms to adjust symbol density based on user context. This research establishes practices for map design, making cartographic tools more innovative and more user-centric.
Funding: Supported by the National Natural Science Foundation of China (Nos. 62201454 and 62306235) and the Xi’an Science and Technology Program of the Xi’an Science and Technology Bureau (No. 23SFSF0004).
Abstract: Unmanned aerial vehicle (UAV) images captured under low-light conditions often suffer from noise and uneven illumination. To address these issues, we propose a low-light image enhancement algorithm for UAV images that is inspired by the Retinex theory and guided by a light weighted map. Firstly, we propose a new network for reflectance component processing that suppresses the noise in images. Secondly, we construct an illumination enhancement module that uses a light weighted map to guide the enhancement process. Finally, the processed reflectance and illumination components are recombined to obtain the enhanced result. Experimental results show that our method suppresses noise while enhancing image brightness and prevents over-enhancement in bright regions. Code and data are available at https://gitee.com/baixiaotong2/uav-images.git.
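For orientation, the sketch below is a classical Retinex-style decomposition standing in for the paper's learned reflectance and illumination modules: the illumination is estimated by blurring the max channel, the reflectance is obtained by division, and a spatially varying gamma (a crude analogue of a light weighted map) lifts dark regions more than bright ones. All parameter values and the weighting rule are assumptions.

```python
# Hedged sketch: classical Retinex decomposition with a spatially varying gamma that
# approximates light-weighted-map guidance; not the paper's learned network.
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_enhance(img, sigma=25, gamma=0.5, eps=1e-4):
    """img: float RGB array in [0, 1]."""
    illumination = gaussian_filter(img.max(axis=-1), sigma) + eps   # smooth max channel
    reflectance = img / illumination[..., None]                     # Retinex model: I = R * L
    weight = 1.0 - illumination                                     # large where the scene is dark
    lifted = illumination ** (gamma * weight + (1.0 - weight))      # lift dark regions only
    return np.clip(reflectance * lifted[..., None], 0.0, 1.0)
```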
Funding: Supported by the National Natural Science Foundation of China (No. 82371933); the National Natural Science Foundation of Shandong Province of China (No. ZR2021MH120); the Taishan Scholars Project (No. tsqn202211378); and the Shandong Provincial Natural Science Foundation for Excellent Young Scholars (No. ZR2024YQ075).
Abstract: Objective: Early prediction of response before neoadjuvant chemotherapy (NAC) is crucial for personalized treatment plans for locally advanced breast cancer patients. We aim to develop a multi-task model using multi-scale whole slide image (WSI) features to predict the response of breast cancer to NAC more finely. Methods: This work collected 1,670 whole slide images for the training and validation sets, internal testing sets, external testing sets, and prospective testing sets of a weakly supervised deep learning-based multi-task model (DLMM) for predicting treatment response and pCR to NAC. Our approach models pairwise feature interactions across scales by employing concatenate fusion of single-scale feature representations and controls the expressiveness of each representation via a gating-based attention mechanism. Results: In the retrospective analysis, DLMM exhibited excellent performance for the prediction of treatment response, with areas under the receiver operating characteristic curve (AUCs) of 0.869 [95% confidence interval (95% CI): 0.806−0.933] in the internal testing set and 0.841 (95% CI: 0.814−0.867) in the external testing sets. For the pCR prediction task, DLMM reached AUCs of 0.865 (95% CI: 0.763−0.964) in the internal testing set and 0.821 (95% CI: 0.763−0.878) in the pooled external testing set. In the prospective testing study, DLMM also demonstrated favorable performance, with AUCs of 0.829 (95% CI: 0.754−0.903) and 0.821 (95% CI: 0.692−0.949) for treatment response and pCR prediction, respectively. DLMM significantly outperformed the baseline models in all testing sets (P<0.05). Heatmaps were employed to interpret the decision-making basis of the model. Furthermore, high DLMM scores were found to be associated with immune-related pathways and cells in the microenvironment during exploration of the biological basis. Conclusions: The DLMM represents a valuable tool that aids clinicians in selecting personalized treatment strategies for breast cancer patients.
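The fusion mechanism described in the Methods, concatenate fusion of single-scale representations gated by attention, can be sketched as follows in PyTorch. The feature dimension, the two-scale setup, and the two task heads are assumptions about the multi-task (treatment response and pCR) design, not the paper's exact architecture.

```python
# Hedged sketch: gating-based attention over two single-scale WSI feature vectors,
# followed by concatenate fusion and two task heads.
import torch
import torch.nn as nn

class GatedConcatFusion(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.gate_lo = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())  # gate for low-scale features
        self.gate_hi = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())  # gate for high-scale features
        self.head_response = nn.Linear(2 * dim, 1)   # treatment-response logit
        self.head_pcr = nn.Linear(2 * dim, 1)        # pCR logit

    def forward(self, feat_lo, feat_hi):
        fused = torch.cat([feat_lo * self.gate_lo(feat_lo),
                           feat_hi * self.gate_hi(feat_hi)], dim=-1)
        return self.head_response(fused), self.head_pcr(fused)
```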
Abstract: Unmanned aerial vehicle (UAV) imagery poses significant challenges for object detection due to extreme scale variations, high-density small targets (68% in the VisDrone dataset), and complex backgrounds. While YOLO-series models achieve speed-accuracy trade-offs via fixed convolution kernels and manual feature fusion, their rigid architectures struggle with multi-scale adaptability, as exemplified by YOLOv8n's 36.4% mAP and 13.9% small-object AP on VisDrone2019. This paper presents YOLO-LE, a lightweight framework addressing these limitations through three novel designs: (1) We introduce the C2f-Dy and LDown modules to enhance the backbone's sensitivity to small-object features while reducing backbone parameters, thereby improving model efficiency. (2) An adaptive feature fusion module is designed to dynamically integrate multi-scale feature maps, optimizing the neck structure, reducing neck complexity, and enhancing overall model performance. (3) We replace the original loss function with a distributed focal loss and incorporate a lightweight self-attention mechanism to improve small-object recognition and bounding-box regression accuracy. Experimental results demonstrate that YOLO-LE achieves 39.9% mAP@0.5 on VisDrone2019, a 9.6% improvement over YOLOv8n, while maintaining 8.5 GFLOPs computational efficiency. This provides an efficient solution for UAV object detection in complex scenarios.
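One common reading of the "distributed focal loss" mentioned above is the Distribution Focal Loss (DFL) used for box regression in the Generalized Focal Loss family; a minimal sketch is given below under that assumption. The bin count and the interpretation itself are assumptions, not confirmed by the abstract.

```python
# Hedged sketch: a Distribution Focal Loss (DFL)-style regression loss, assuming the
# paper's "distributed focal loss" follows the common DFL formulation.
import torch
import torch.nn.functional as F

def distribution_focal_loss(pred_logits, target, reg_max=16):
    """pred_logits: (N, reg_max+1) per box edge; target: (N,) continuous in [0, reg_max]."""
    left = target.long().clamp(0, reg_max - 1)           # lower discrete bin
    right = left + 1                                      # upper discrete bin
    w_left = right.float() - target                       # linear interpolation weights
    w_right = target - left.float()
    log_p = F.log_softmax(pred_logits, dim=-1)
    loss = -(w_left * log_p.gather(1, left[:, None]).squeeze(1) +
             w_right * log_p.gather(1, right[:, None]).squeeze(1))
    return loss.mean()
```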
Funding: Supported by the Natural Science Foundation of Shandong Province (Nos. ZR2023MF047, ZR2024MA055, and ZR2023QF139); the Enterprise Commissioned Project (Nos. 2024HX104 and 2024HX140); the China University Industry-University-Research Innovation Foundation (Nos. 2021ZYA11003 and 2021ITA05032); and the Science and Technology Plan for Youth Innovation of Shandong's Universities (No. 2019KJN012).
Abstract: In low-light environments, captured images often exhibit insufficient clarity and detail loss, which significantly degrade the accuracy of subsequent target recognition tasks. To tackle these challenges, this study presents a novel low-light image enhancement algorithm that leverages virtual hazy image generation through dehazing models based on statistical analysis. The proposed algorithm starts the enhancement process by transforming the low-light image into a virtual hazy image, followed by image segmentation using a quadtree method. To improve the accuracy and robustness of atmospheric light estimation, the algorithm incorporates a genetic algorithm to optimize the quadtree-based estimation of the atmospheric light region. Additionally, the method employs an adaptive window adjustment mechanism to derive the dark channel prior image, which is subsequently refined with morphological operations and guided filtering. The final enhanced image is reconstructed through the hazy image degradation model. Extensive experimental evaluations across multiple datasets verify the superiority of the designed framework, which achieves a peak signal-to-noise ratio (PSNR) of 17.09 and a structural similarity index (SSIM) of 0.74. These results indicate that the proposed algorithm not only effectively enhances image contrast and brightness but also outperforms traditional methods in both subjective and objective evaluation metrics.
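The overall low-light-via-dehazing idea can be sketched with the classical dark-channel pipeline: invert the image into a virtual hazy image, dehaze it, and invert back. In the sketch below a simple brightest-pixel rule stands in for the paper's genetic-algorithm-optimized quadtree atmospheric light estimate, the window is fixed rather than adaptive, and the morphological and guided-filter refinements are omitted; parameter values are assumptions.

```python
# Hedged sketch of the classical low-light-via-dehazing pipeline (dark channel prior).
import numpy as np
from scipy.ndimage import minimum_filter

def enhance_low_light(img, window=15, omega=0.95, t_min=0.1):
    """img: float RGB array in [0, 1]."""
    hazy = 1.0 - img                                            # virtual hazy image
    dark = minimum_filter(hazy.min(axis=-1), size=window)       # dark channel prior
    top = np.argsort(dark, axis=None)[-max(dark.size // 1000, 1):]
    atmos = hazy.reshape(-1, 3)[top].max(axis=0)                # atmospheric light A (simple rule)
    norm_dark = minimum_filter((hazy / atmos).min(axis=-1), size=window)
    t = np.clip(1.0 - omega * norm_dark, t_min, 1.0)            # transmission map
    recovered = (hazy - atmos) / t[..., None] + atmos           # hazy image degradation model
    return np.clip(1.0 - recovered, 0.0, 1.0)                   # invert back to the enhanced image
```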