In this work, we propose a new variational model for multi-modal image registration and present an efficient numerical implementation. The model minimizes a new functional based on using reformulated normalized gradie...In this work, we propose a new variational model for multi-modal image registration and present an efficient numerical implementation. The model minimizes a new functional based on using reformulated normalized gradients of the images as the fidelity term and higher-order derivatives as the regularizer. A key feature of the model is its ability of guaranteeing a diffeomorphic transformation which is achieved by a control term motivated by the quasi-conformal map and Beltrami coefficient. The existence of the solution of this model is established. To solve the model numerically, we design a Gauss-Newton method to solve the resulting discrete optimization problem and prove its convergence;a multilevel technique is employed to speed up the initialization and avoid likely local minima of the underlying functional. Finally, numerical experiments demonstrate that this new model can deliver good performances for multi-modal image registration and simultaneously generate an accurate diffeomorphic transformation.展开更多
Integrating multiple medical imaging techniques,including Magnetic Resonance Imaging(MRI),Computed Tomography,Positron Emission Tomography(PET),and ultrasound,provides a comprehensive view of the patient health status...Integrating multiple medical imaging techniques,including Magnetic Resonance Imaging(MRI),Computed Tomography,Positron Emission Tomography(PET),and ultrasound,provides a comprehensive view of the patient health status.Each of these methods contributes unique diagnostic insights,enhancing the overall assessment of patient condition.Nevertheless,the amalgamation of data from multiple modalities presents difficulties due to disparities in resolution,data collection methods,and noise levels.While traditional models like Convolutional Neural Networks(CNNs)excel in single-modality tasks,they struggle to handle multi-modal complexities,lacking the capacity to model global relationships.This research presents a novel approach for examining multi-modal medical imagery using a transformer-based system.The framework employs self-attention and cross-attention mechanisms to synchronize and integrate features across various modalities.Additionally,it shows resilience to variations in noise and image quality,making it adaptable for real-time clinical use.To address the computational hurdles linked to transformer models,particularly in real-time clinical applications in resource-constrained environments,several optimization techniques have been integrated to boost scalability and efficiency.Initially,a streamlined transformer architecture was adopted to minimize the computational load while maintaining model effectiveness.Methods such as model pruning,quantization,and knowledge distillation have been applied to reduce the parameter count and enhance the inference speed.Furthermore,efficient attention mechanisms such as linear or sparse attention were employed to alleviate the substantial memory and processing requirements of traditional self-attention operations.For further deployment optimization,researchers have implemented hardware-aware acceleration strategies,including the use of TensorRT and ONNX-based model compression,to ensure efficient execution on edge devices.These optimizations allow the approach to function effectively in real-time clinical settings,ensuring viability even in environments with limited resources.Future research directions include integrating non-imaging data to facilitate personalized treatment and enhancing computational efficiency for implementation in resource-limited environments.This study highlights the transformative potential of transformer models in multi-modal medical imaging,offering improvements in diagnostic accuracy and patient care outcomes.展开更多
The multi-modal characteristics of mineral particles play a pivotal role in enhancing the classification accuracy,which is critical for obtaining a profound understanding of the Earth's composition and ensuring ef...The multi-modal characteristics of mineral particles play a pivotal role in enhancing the classification accuracy,which is critical for obtaining a profound understanding of the Earth's composition and ensuring effective exploitation utilization of its resources.However,the existing methods for classifying mineral particles do not fully utilize these multi-modal features,thereby limiting the classification accuracy.Furthermore,when conventional multi-modal image classification methods are applied to planepolarized and cross-polarized sequence images of mineral particles,they encounter issues such as information loss,misaligned features,and challenges in spatiotemporal feature extraction.To address these challenges,we propose a multi-modal mineral particle polarization image classification network(MMGC-Net)for precise mineral particle classification.Initially,MMGC-Net employs a two-dimensional(2D)backbone network with shared parameters to extract features from two types of polarized images to ensure feature alignment.Subsequently,a cross-polarized intra-modal feature fusion module is designed to refine the spatiotemporal features from the extracted features of the cross-polarized sequence images.Ultimately,the inter-modal feature fusion module integrates the two types of modal features to enhance the classification precision.Quantitative and qualitative experimental results indicate that when compared with the current state-of-the-art multi-modal image classification methods,MMGC-Net demonstrates marked superiority in terms of mineral particle multi-modal feature learning and four classification evaluation metrics.It also demonstrates better stability than the existing models.展开更多
For the analysis of spinal and disc diseases,automated tissue segmentation of the lumbar spine is vital.Due to the continuous and concentrated location of the target,the abundance of edge features,and individual diffe...For the analysis of spinal and disc diseases,automated tissue segmentation of the lumbar spine is vital.Due to the continuous and concentrated location of the target,the abundance of edge features,and individual differences,conventional automatic segmentation methods perform poorly.Since the success of deep learning in the segmentation of medical images has been shown in the past few years,it has been applied to this task in a number of ways.The multi-scale and multi-modal features of lumbar tissues,however,are rarely explored by methodologies of deep learning.Because of the inadequacies in medical images availability,it is crucial to effectively fuse various modes of data collection for model training to alleviate the problem of insufficient samples.In this paper,we propose a novel multi-modality hierarchical fusion network(MHFN)for improving lumbar spine segmentation by learning robust feature representations from multi-modality magnetic resonance images.An adaptive group fusion module(AGFM)is introduced in this paper to fuse features from various modes to extract cross-modality features that could be valuable.Furthermore,to combine features from low to high levels of cross-modality,we design a hierarchical fusion structure based on AGFM.Compared to the other feature fusion methods,AGFM is more effective based on experimental results on multi-modality MR images of the lumbar spine.To further enhance segmentation accuracy,we compare our network with baseline fusion structures.Compared to the baseline fusion structures(input-level:76.27%,layer-level:78.10%,decision-level:79.14%),our network was able to segment fractured vertebrae more accurately(85.05%).展开更多
The unsupervised multi-modal image translation is an emerging domain of computer vision whose goal is to transform an image from the source domain into many diverse styles in the target domain.However,the multi-genera...The unsupervised multi-modal image translation is an emerging domain of computer vision whose goal is to transform an image from the source domain into many diverse styles in the target domain.However,the multi-generator mechanism is employed among the advanced approaches available to model different domain mappings,which results in inefficient training of neural networks and pattern collapse,leading to inefficient generation of image diversity.To address this issue,this paper introduces a multi-modal unsupervised image translation framework that uses a generator to perform multi-modal image translation.Specifically,firstly,the domain code is introduced in this paper to explicitly control the different generation tasks.Secondly,this paper brings in the squeeze-and-excitation(SE)mechanism and feature attention(FA)module.Finally,the model integrates multiple optimization objectives to ensure efficient multi-modal translation.This paper performs qualitative and quantitative experiments on multiple non-paired benchmark image translation datasets while demonstrating the benefits of the proposed method over existing technologies.Overall,experimental results have shown that the proposed method is versatile and scalable.展开更多
Prostate cancer(PCa)is characterized by high incidence and propensity for easy metastasis,presenting significant challenges in clinical diagnosis and treatment.Tumor microenvironment(TME)-responsive nanomaterials prov...Prostate cancer(PCa)is characterized by high incidence and propensity for easy metastasis,presenting significant challenges in clinical diagnosis and treatment.Tumor microenvironment(TME)-responsive nanomaterials provide a promising prospect for imaging-guided precision therapy.Considering that tumor-derived alkaline phosphatase(ALP)is over-expressed in metastatic PCa,it makes a great chance to develop a theranostics system with ALP responsive in the TME.Herein,an ALP-responsive aggregationinduced emission luminogens(AIEgens)nanoprobe AMNF self-assembly was designed for enhancing the diagnosis and treatment of metastatic PCa.The nanoprobe exhibited self-aggregation in the presence of ALP resulted in aggregation-induced fluorescence,and enhanced accumulation and prolonged retention period at the tumor site.In terms of detection,the fluorescence(FL)/computed tomography(CT)/magnetic resonance(MR)multi-mode imaging effect of nanoprobe was significantly improved post-aggregation,enabling precise diagnosis through the amalgamation of multiple imaging modes.Enhanced CT/MR imaging can achieve assist preoperative tumor diagnosis,and enhanced FL imaging technology can achieve“intraoperative visual navigation”,showing its potential application value in clinical tumor detection and surgical guidance.In terms of treatment,AMNF showed strong absorption in the near infrared region after aggregation,which improved the photothermal treatment effect.Overall,our work developed an effective aggregation-enhanced theranostic strategy for ALP-related cancers.展开更多
The integration of image analysis through deep learning(DL)into rock classification represents a significant leap forward in geological research.While traditional methods remain invaluable for their expertise and hist...The integration of image analysis through deep learning(DL)into rock classification represents a significant leap forward in geological research.While traditional methods remain invaluable for their expertise and historical context,DL offers a powerful complement by enhancing the speed,objectivity,and precision of the classification process.This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks(CNNs)for geological image analysis,particularly in the classification of igneous,metamorphic,and sedimentary rock types from rock thin section(RTS)images.This study primarily focuses on classic image augmentation techniques and evaluates their impact on model accuracy and precision.Results demonstrate that augmentation techniques like Equalize significantly enhance the model's classification capabilities,achieving an F1-Score of 0.9869 for igneous rocks,0.9884 for metamorphic rocks,and 0.9929 for sedimentary rocks,representing improvements compared to the baseline original results.Moreover,the weighted average F1-Score across all classes and techniques is 0.9886,indicating an enhancement.Conversely,methods like Distort lead to decreased accuracy and F1-Score,with an F1-Score of 0.949 for igneous rocks,0.954 for metamorphic rocks,and 0.9416 for sedimentary rocks,exacerbating the performance compared to the baseline.The study underscores the practicality of image data augmentation in geological image classification and advocates for the adoption of DL methods in this domain for automation and improved results.The findings of this study can benefit various fields,including remote sensing,mineral exploration,and environmental monitoring,by enhancing the accuracy of geological image analysis both for scientific research and industrial applications.展开更多
Potential high-temperature risks exist in heat-prone components of electric moped charging devices,such as sockets,interfaces,and controllers.Traditional detection methods have limitations in terms of real-time perfor...Potential high-temperature risks exist in heat-prone components of electric moped charging devices,such as sockets,interfaces,and controllers.Traditional detection methods have limitations in terms of real-time performance and monitoring scope.To address this,a temperature detection method based on infrared image processing has been proposed:utilizing the median filtering algorithm to denoise the original infrared image,then applying an image segmentation algorithm to divide the image.展开更多
The presence of a positive deep surgical margin in tongue squamous cell carcinoma(TSCC)significantly elevates the risk of local recurrence.Therefore,a prompt and precise intraoperative assessment of margin status is i...The presence of a positive deep surgical margin in tongue squamous cell carcinoma(TSCC)significantly elevates the risk of local recurrence.Therefore,a prompt and precise intraoperative assessment of margin status is imperative to ensure thorough tumor resection.In this study,we integrate Raman imaging technology with an artificial intelligence(AI)generative model,proposing an innovative approach for intraoperative margin status diagnosis.This method utilizes Raman imaging to swiftly and non-invasively capture tissue Raman images,which are then transformed into hematoxylin-eosin(H&E)-stained histopathological images using an AI generative model for histopathological diagnosis.The generated H&E-stained images clearly illustrate the tissue’s pathological conditions.Independently reviewed by three pathologists,the overall diagnostic accuracy for distinguishing between tumor tissue and normal muscle tissue reaches 86.7%.Notably,it outperforms current clinical practices,especially in TSCC with positive lymph node metastasis or moderately differentiated grades.This advancement highlights the potential of AI-enhanced Raman imaging to significantly improve intraoperative assessments and surgical margin evaluations,promising a versatile diagnostic tool beyond TSCC.展开更多
Indirect X-ray modulation imaging has been adopted in a number of solar missions and provided reconstructed X-ray images of solar flares that are of great scientific importance.However,the assessment of the image qual...Indirect X-ray modulation imaging has been adopted in a number of solar missions and provided reconstructed X-ray images of solar flares that are of great scientific importance.However,the assessment of the image quality of the reconstruction is still difficult,which is particularly useful for scheme design of X-ray imaging systems,testing and improvement of imaging algorithms,and scientific research of X-ray sources.Currently,there is no specified method to quantitatively evaluate the quality of X-ray image reconstruction and the point-spread function(PSF)of an X-ray imager.In this paper,we propose percentage proximity degree(PPD)by considering the imaging characteristics of X-ray image reconstruction and in particular,sidelobes and their effects on imaging quality.After testing a variety of imaging quality assessments in six aspects,we utilized the technique for order preference by similarity to ideal solution to the indices that meet the requirements.Then we develop the final quality index for X-ray image reconstruction,QuIX,which consists of the selected indices and the new PPD.QuIX performs well in a series of tests,including assessment of instrument PSF and simulation tests under different grid configurations,as well as imaging tests with RHESSI data.It is also a useful tool for testing of imaging algorithms,and determination of imaging parameters for both RHESSI and ASO-S/Hard X-ray Imager,such as field of view,beam width factor,and detector selection.展开更多
Machine learning(ML)is increasingly applied for medical image processing with appropriate learning paradigms.These applications include analyzing images of various organs,such as the brain,lung,eye,etc.,to identify sp...Machine learning(ML)is increasingly applied for medical image processing with appropriate learning paradigms.These applications include analyzing images of various organs,such as the brain,lung,eye,etc.,to identify specific flaws/diseases for diagnosis.The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification.Most of the extracted image features are irrelevant and lead to an increase in computation time.Therefore,this article uses an analytical learning paradigm to design a Congruent Feature Selection Method to select the most relevant image features.This process trains the learning paradigm using similarity and correlation-based features over different textural intensities and pixel distributions.The similarity between the pixels over the various distribution patterns with high indexes is recommended for disease diagnosis.Later,the correlation based on intensity and distribution is analyzed to improve the feature selection congruency.Therefore,the more congruent pixels are sorted in the descending order of the selection,which identifies better regions than the distribution.Now,the learning paradigm is trained using intensity and region-based similarity to maximize the chances of selection.Therefore,the probability of feature selection,regardless of the textures and medical image patterns,is improved.This process enhances the performance of ML applications for different medical image processing.The proposed method improves the accuracy,precision,and training rate by 13.19%,10.69%,and 11.06%,respectively,compared to other models for the selected dataset.The mean error and selection time is also reduced by 12.56%and 13.56%,respectively,compared to the same models and dataset.展开更多
Imaging observations of solar X-ray bursts can reveal details of the energy release process and particle acceleration in flares.Most hard X-ray imagers make use of the modulation-based Fourier transform imaging method...Imaging observations of solar X-ray bursts can reveal details of the energy release process and particle acceleration in flares.Most hard X-ray imagers make use of the modulation-based Fourier transform imaging method,an indirect imaging technique that requires algorithms to reconstruct and optimize images.During the last decade,a variety of algorithms have been developed and improved.However,it is difficult to quantitatively evaluate the image quality of different solutions without a true,reference image of observation.How to choose the values of imaging parameters for these algorithms to get the best performance is also an open question.In this study,we present a detailed test of the characteristics of these algorithms,imaging dynamic range and a crucial parameter for the CLEAN method,clean beam width factor(CBWF).We first used SDO/AIA EUV images to compute DEM maps and calculate thermal X-ray maps.Then these realistic sources and several types of simulated sources are used as the ground truth in the imaging simulations for both RHESSI and ASO-S/HXI.The different solutions are evaluated quantitatively by a number of means.The overall results suggest that EM,PIXON,and CLEAN are exceptional methods for sidelobe elimination,producing images with clear source details.Although MEM_GE,MEM_NJIT,VIS_WV and VIS_CS possess fast imaging processes and generate good images,they too possess associated imperfections unique to each method.The two forward fit algorithms,VF and FF,perform differently,and VF appears to be more robust and useful.We also demonstrated the imaging capability of HXI and available HXI algorithms.Furthermore,the effect of CBWF on image quality was investigated,and the optimal settings for both RHESSI and HXI were proposed.展开更多
Large language models(LLMs),such as ChatGPT developed by OpenAI,represent a significant advancement in artificial intelligence(AI),designed to understand,generate,and interpret human language by analyzing extensive te...Large language models(LLMs),such as ChatGPT developed by OpenAI,represent a significant advancement in artificial intelligence(AI),designed to understand,generate,and interpret human language by analyzing extensive text data.Their potential integration into clinical settings offers a promising avenue that could transform clinical diagnosis and decision-making processes in the future(Thirunavukarasu et al.,2023).This article aims to provide an in-depth analysis of LLMs’current and potential impact on clinical practices.Their ability to generate differential diagnosis lists underscores their potential as invaluable tools in medical practice and education(Hirosawa et al.,2023;Koga et al.,2023).展开更多
The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible images.To meet these requirements,an autoencoder-based method f...The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible images.To meet these requirements,an autoencoder-based method for infrared and visible image fusion is proposed.The encoder designed according to the optimization objective consists of a base encoder and a detail encoder,which is used to extract low-frequency and high-frequency information from the image.This extraction may lead to some information not being captured,so a compensation encoder is proposed to supplement the missing information.Multi-scale decomposition is also employed to extract image features more comprehensively.The decoder combines low-frequency,high-frequency and supplementary information to obtain multi-scale features.Subsequently,the attention strategy and fusion module are introduced to perform multi-scale fusion for image reconstruction.Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.展开更多
Image-maps,a hybrid design with satellite images as background and map symbols uploaded,aim to combine the advantages of maps’high interpretation efficiency and satellite images’realism.The usability of image-maps i...Image-maps,a hybrid design with satellite images as background and map symbols uploaded,aim to combine the advantages of maps’high interpretation efficiency and satellite images’realism.The usability of image-maps is influenced by the representations of background images and map symbols.Many researchers explored the optimizations for background images and symbolization techniques for symbols to reduce the complexity of image-maps and improve the usability.However,little literature was found for the optimum amount of symbol loading.This study focuses on the effects of background image complexity and map symbol load on the usability(i.e.,effectiveness and efficiency)of image-maps.Experiments were conducted by user studies via eye-tracking equipment and an online questionnaire survey.Experimental data sets included image-maps with ten levels of map symbol load in ten areas.Forty volunteers took part in the target searching experiments.It has been found that the usability,i.e.,average time viewed(efficiency)and average revisits(effectiveness)of targets recorded,is influenced by the complexity of background images,a peak exists for optimum symbol load for an image-map.The optimum levels for symbol load for different image-maps also have a peak when the complexity of the background image/image map increases.The complexity of background images serves as a guideline for optimum map symbol load in image-map design.This study enhanced user experience by optimizing visual clarity and managing cognitive load.Understanding how these factors interact can help create adaptive maps that maintain clarity and usability,guiding AI algorithms to adjust symbol density based on user context.This research establishes the practices for map design,making cartographic tools more innovative and more user-centric.展开更多
Objective:Early predicting response before neoadjuvant chemotherapy(NAC)is crucial for personalized treatment plans for locally advanced breast cancer patients.We aim to develop a multi-task model using multiscale who...Objective:Early predicting response before neoadjuvant chemotherapy(NAC)is crucial for personalized treatment plans for locally advanced breast cancer patients.We aim to develop a multi-task model using multiscale whole slide images(WSIs)features to predict the response to breast cancer NAC more finely.Methods:This work collected 1,670 whole slide images for training and validation sets,internal testing sets,external testing sets,and prospective testing sets of the weakly-supervised deep learning-based multi-task model(DLMM)in predicting treatment response and pCR to NAC.Our approach models two-by-two feature interactions across scales by employing concatenate fusion of single-scale feature representations,and controls the expressiveness of each representation via a gating-based attention mechanism.Results:In the retrospective analysis,DLMM exhibited excellent predictive performance for the prediction of treatment response,with area under the receiver operating characteristic curves(AUCs)of 0.869[95%confidence interval(95%CI):0.806−0.933]in the internal testing set and 0.841(95%CI:0.814−0.867)in the external testing sets.For the pCR prediction task,DLMM reached AUCs of 0.865(95%CI:0.763−0.964)in the internal testing and 0.821(95%CI:0.763−0.878)in the pooled external testing set.In the prospective testing study,DLMM also demonstrated favorable predictive performance,with AUCs of 0.829(95%CI:0.754−0.903)and 0.821(95%CI:0.692−0.949)in treatment response and pCR prediction,respectively.DLMM significantly outperformed the baseline models in all testing sets(P<0.05).Heatmaps were employed to interpret the decision-making basis of the model.Furthermore,it was discovered that high DLMM scores were associated with immune-related pathways and cells in the microenvironment during biological basis exploration.Conclusions:The DLMM represents a valuable tool that aids clinicians in selecting personalized treatment strategies for breast cancer patients.展开更多
Unmanned aerial vehicle(UAV)imagery poses significant challenges for object detection due to extreme scale variations,high-density small targets(68%in VisDrone dataset),and complex backgrounds.While YOLO-series models...Unmanned aerial vehicle(UAV)imagery poses significant challenges for object detection due to extreme scale variations,high-density small targets(68%in VisDrone dataset),and complex backgrounds.While YOLO-series models achieve speed-accuracy trade-offs via fixed convolution kernels and manual feature fusion,their rigid architectures struggle with multi-scale adaptability,as exemplified by YOLOv8n’s 36.4%mAP and 13.9%small-object AP on VisDrone2019.This paper presents YOLO-LE,a lightweight framework addressing these limitations through three novel designs:(1)We introduce the C2f-Dy and LDown modules to enhance the backbone’s sensitivity to small-object features while reducing backbone parameters,thereby improving model efficiency.(2)An adaptive feature fusion module is designed to dynamically integrate multi-scale feature maps,optimizing the neck structure,reducing neck complexity,and enhancing overall model performance.(3)We replace the original loss function with a distributed focal loss and incorporate a lightweight self-attention mechanism to improve small-object recognition and bounding box regression accuracy.Experimental results demonstrate that YOLO-LE achieves 39.9%mAP@0.5 on VisDrone2019,representing a 9.6%improvement over YOLOv8n,while maintaining 8.5 GFLOPs computational efficiency.This provides an efficient solution for UAV object detection in complex scenarios.展开更多
In low-light environments,captured images often exhibit issues such as insufficient clarity and detail loss,which significantly degrade the accuracy of subsequent target recognition tasks.To tackle these challenges,th...In low-light environments,captured images often exhibit issues such as insufficient clarity and detail loss,which significantly degrade the accuracy of subsequent target recognition tasks.To tackle these challenges,this study presents a novel low-light image enhancement algorithm that leverages virtual hazy image generation through dehazing models based on statistical analysis.The proposed algorithm initiates the enhancement process by transforming the low-light image into a virtual hazy image,followed by image segmentation using a quadtree method.To improve the accuracy and robustness of atmospheric light estimation,the algorithm incorporates a genetic algorithm to optimize the quadtree-based estimation of atmospheric light regions.Additionally,this method employs an adaptive window adjustment mechanism to derive the dark channel prior image,which is subsequently refined using morphological operations and guided filtering.The final enhanced image is reconstructed through the hazy image degradation model.Extensive experimental evaluations across multiple datasets verify the superiority of the designed framework,achieving a peak signal-to-noise ratio(PSNR)of 17.09 and a structural similarity index(SSIM)of 0.74.These results indicate that the proposed algorithm not only effectively enhances image contrast and brightness but also outperforms traditional methods in terms of subjective and objective evaluation metrics.展开更多
In the field of image processing,the analysis of Synthetic Aperture Radar(SAR)images is crucial due to its broad range of applications.However,SAR images are often affected by coherent speckle noise,which significantl...In the field of image processing,the analysis of Synthetic Aperture Radar(SAR)images is crucial due to its broad range of applications.However,SAR images are often affected by coherent speckle noise,which significantly degrades image quality.Traditional denoising methods,typically based on filter techniques,often face challenges related to inefficiency and limited adaptability.To address these limitations,this study proposes a novel SAR image denoising algorithm based on an enhanced residual network architecture,with the objective of enhancing the utility of SAR imagery in complex electromagnetic environments.The proposed algorithm integrates residual network modules,which directly process the noisy input images to generate denoised outputs.This approach not only reduces computational complexity but also mitigates the difficulties associated with model training.By combining the Transformer module with the residual block,the algorithm enhances the network's ability to extract global features,offering superior feature extraction capabilities compared to CNN-based residual modules.Additionally,the algorithm employs the adaptive activation function Meta-ACON,which dynamically adjusts the activation patterns of neurons,thereby improving the network's feature extraction efficiency.The effectiveness of the proposed denoising method is empirically validated using real SAR images from the RSOD dataset.The proposed algorithm exhibits remarkable performance in terms of EPI,SSIM,and ENL,while achieving a substantial enhancement in PSNR when compared to traditional and deep learning-based algorithms.The PSNR performance is enhanced by over twofold.Moreover,the evaluation of the MSTAR SAR dataset substantiates the algorithm's robustness and applicability in SAR denoising tasks,with a PSNR of 25.2021 being attained.These findings underscore the efficacy of the proposed algorithm in mitigating speckle noise while preserving critical features in SAR imagery,thereby enhancing its quality and usability in practical scenarios.展开更多
文摘In this work, we propose a new variational model for multi-modal image registration and present an efficient numerical implementation. The model minimizes a new functional based on using reformulated normalized gradients of the images as the fidelity term and higher-order derivatives as the regularizer. A key feature of the model is its ability of guaranteeing a diffeomorphic transformation which is achieved by a control term motivated by the quasi-conformal map and Beltrami coefficient. The existence of the solution of this model is established. To solve the model numerically, we design a Gauss-Newton method to solve the resulting discrete optimization problem and prove its convergence;a multilevel technique is employed to speed up the initialization and avoid likely local minima of the underlying functional. Finally, numerical experiments demonstrate that this new model can deliver good performances for multi-modal image registration and simultaneously generate an accurate diffeomorphic transformation.
基金supported by the Deanship of Research and Graduate Studies at King Khalid University under Small Research Project grant number RGP1/139/45.
文摘Integrating multiple medical imaging techniques,including Magnetic Resonance Imaging(MRI),Computed Tomography,Positron Emission Tomography(PET),and ultrasound,provides a comprehensive view of the patient health status.Each of these methods contributes unique diagnostic insights,enhancing the overall assessment of patient condition.Nevertheless,the amalgamation of data from multiple modalities presents difficulties due to disparities in resolution,data collection methods,and noise levels.While traditional models like Convolutional Neural Networks(CNNs)excel in single-modality tasks,they struggle to handle multi-modal complexities,lacking the capacity to model global relationships.This research presents a novel approach for examining multi-modal medical imagery using a transformer-based system.The framework employs self-attention and cross-attention mechanisms to synchronize and integrate features across various modalities.Additionally,it shows resilience to variations in noise and image quality,making it adaptable for real-time clinical use.To address the computational hurdles linked to transformer models,particularly in real-time clinical applications in resource-constrained environments,several optimization techniques have been integrated to boost scalability and efficiency.Initially,a streamlined transformer architecture was adopted to minimize the computational load while maintaining model effectiveness.Methods such as model pruning,quantization,and knowledge distillation have been applied to reduce the parameter count and enhance the inference speed.Furthermore,efficient attention mechanisms such as linear or sparse attention were employed to alleviate the substantial memory and processing requirements of traditional self-attention operations.For further deployment optimization,researchers have implemented hardware-aware acceleration strategies,including the use of TensorRT and ONNX-based model compression,to ensure efficient execution on edge devices.These optimizations allow the approach to function effectively in real-time clinical settings,ensuring viability even in environments with limited resources.Future research directions include integrating non-imaging data to facilitate personalized treatment and enhancing computational efficiency for implementation in resource-limited environments.This study highlights the transformative potential of transformer models in multi-modal medical imaging,offering improvements in diagnostic accuracy and patient care outcomes.
基金supported by the National Natural Science Foundation of China(Grant Nos.62071315 and 62271336).
文摘The multi-modal characteristics of mineral particles play a pivotal role in enhancing the classification accuracy,which is critical for obtaining a profound understanding of the Earth's composition and ensuring effective exploitation utilization of its resources.However,the existing methods for classifying mineral particles do not fully utilize these multi-modal features,thereby limiting the classification accuracy.Furthermore,when conventional multi-modal image classification methods are applied to planepolarized and cross-polarized sequence images of mineral particles,they encounter issues such as information loss,misaligned features,and challenges in spatiotemporal feature extraction.To address these challenges,we propose a multi-modal mineral particle polarization image classification network(MMGC-Net)for precise mineral particle classification.Initially,MMGC-Net employs a two-dimensional(2D)backbone network with shared parameters to extract features from two types of polarized images to ensure feature alignment.Subsequently,a cross-polarized intra-modal feature fusion module is designed to refine the spatiotemporal features from the extracted features of the cross-polarized sequence images.Ultimately,the inter-modal feature fusion module integrates the two types of modal features to enhance the classification precision.Quantitative and qualitative experimental results indicate that when compared with the current state-of-the-art multi-modal image classification methods,MMGC-Net demonstrates marked superiority in terms of mineral particle multi-modal feature learning and four classification evaluation metrics.It also demonstrates better stability than the existing models.
基金supported in part by the Technology Innovation 2030 under Grant 2022ZD0211700.
文摘For the analysis of spinal and disc diseases,automated tissue segmentation of the lumbar spine is vital.Due to the continuous and concentrated location of the target,the abundance of edge features,and individual differences,conventional automatic segmentation methods perform poorly.Since the success of deep learning in the segmentation of medical images has been shown in the past few years,it has been applied to this task in a number of ways.The multi-scale and multi-modal features of lumbar tissues,however,are rarely explored by methodologies of deep learning.Because of the inadequacies in medical images availability,it is crucial to effectively fuse various modes of data collection for model training to alleviate the problem of insufficient samples.In this paper,we propose a novel multi-modality hierarchical fusion network(MHFN)for improving lumbar spine segmentation by learning robust feature representations from multi-modality magnetic resonance images.An adaptive group fusion module(AGFM)is introduced in this paper to fuse features from various modes to extract cross-modality features that could be valuable.Furthermore,to combine features from low to high levels of cross-modality,we design a hierarchical fusion structure based on AGFM.Compared to the other feature fusion methods,AGFM is more effective based on experimental results on multi-modality MR images of the lumbar spine.To further enhance segmentation accuracy,we compare our network with baseline fusion structures.Compared to the baseline fusion structures(input-level:76.27%,layer-level:78.10%,decision-level:79.14%),our network was able to segment fractured vertebrae more accurately(85.05%).
基金the National Natural Science Foundation of China(No.61976080)the Academic Degrees&Graduate Education Reform Project of Henan Province(No.2021SJGLX195Y)+1 种基金the Teaching Reform Research and Practice Project of Henan Undergraduate Universities(No.2022SYJXLX008)the Key Project on Research and Practice of Henan University Graduate Education and Teaching Reform(No.YJSJG2023XJ006)。
文摘The unsupervised multi-modal image translation is an emerging domain of computer vision whose goal is to transform an image from the source domain into many diverse styles in the target domain.However,the multi-generator mechanism is employed among the advanced approaches available to model different domain mappings,which results in inefficient training of neural networks and pattern collapse,leading to inefficient generation of image diversity.To address this issue,this paper introduces a multi-modal unsupervised image translation framework that uses a generator to perform multi-modal image translation.Specifically,firstly,the domain code is introduced in this paper to explicitly control the different generation tasks.Secondly,this paper brings in the squeeze-and-excitation(SE)mechanism and feature attention(FA)module.Finally,the model integrates multiple optimization objectives to ensure efficient multi-modal translation.This paper performs qualitative and quantitative experiments on multiple non-paired benchmark image translation datasets while demonstrating the benefits of the proposed method over existing technologies.Overall,experimental results have shown that the proposed method is versatile and scalable.
基金supported by Natural Science Foundation of Jilin Province(No.SKL202302002)Key Research and Development project of Jilin Provincial Science and Technology Department(No.20210204142YY)+2 种基金The Science and Technology Development Program of Jilin Province(No.2020122256JC)Beijing Kechuang Medical Development Foundation Fund of China(No.KC2023-JX-0186BQ079)Talent Reserve Program(TRP),the First Hospital of Jilin University(No.JDYY-TRP-2024007)。
文摘Prostate cancer(PCa)is characterized by high incidence and propensity for easy metastasis,presenting significant challenges in clinical diagnosis and treatment.Tumor microenvironment(TME)-responsive nanomaterials provide a promising prospect for imaging-guided precision therapy.Considering that tumor-derived alkaline phosphatase(ALP)is over-expressed in metastatic PCa,it makes a great chance to develop a theranostics system with ALP responsive in the TME.Herein,an ALP-responsive aggregationinduced emission luminogens(AIEgens)nanoprobe AMNF self-assembly was designed for enhancing the diagnosis and treatment of metastatic PCa.The nanoprobe exhibited self-aggregation in the presence of ALP resulted in aggregation-induced fluorescence,and enhanced accumulation and prolonged retention period at the tumor site.In terms of detection,the fluorescence(FL)/computed tomography(CT)/magnetic resonance(MR)multi-mode imaging effect of nanoprobe was significantly improved post-aggregation,enabling precise diagnosis through the amalgamation of multiple imaging modes.Enhanced CT/MR imaging can achieve assist preoperative tumor diagnosis,and enhanced FL imaging technology can achieve“intraoperative visual navigation”,showing its potential application value in clinical tumor detection and surgical guidance.In terms of treatment,AMNF showed strong absorption in the near infrared region after aggregation,which improved the photothermal treatment effect.Overall,our work developed an effective aggregation-enhanced theranostic strategy for ALP-related cancers.
文摘The integration of image analysis through deep learning(DL)into rock classification represents a significant leap forward in geological research.While traditional methods remain invaluable for their expertise and historical context,DL offers a powerful complement by enhancing the speed,objectivity,and precision of the classification process.This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks(CNNs)for geological image analysis,particularly in the classification of igneous,metamorphic,and sedimentary rock types from rock thin section(RTS)images.This study primarily focuses on classic image augmentation techniques and evaluates their impact on model accuracy and precision.Results demonstrate that augmentation techniques like Equalize significantly enhance the model's classification capabilities,achieving an F1-Score of 0.9869 for igneous rocks,0.9884 for metamorphic rocks,and 0.9929 for sedimentary rocks,representing improvements compared to the baseline original results.Moreover,the weighted average F1-Score across all classes and techniques is 0.9886,indicating an enhancement.Conversely,methods like Distort lead to decreased accuracy and F1-Score,with an F1-Score of 0.949 for igneous rocks,0.954 for metamorphic rocks,and 0.9416 for sedimentary rocks,exacerbating the performance compared to the baseline.The study underscores the practicality of image data augmentation in geological image classification and advocates for the adoption of DL methods in this domain for automation and improved results.The findings of this study can benefit various fields,including remote sensing,mineral exploration,and environmental monitoring,by enhancing the accuracy of geological image analysis both for scientific research and industrial applications.
基金supported by the National Key Research and Development Project of China(No.2023YFB3709605)the National Natural Science Foundation of China(No.62073193)the National College Student Innovation Training Program(No.202310422122)。
文摘Potential high-temperature risks exist in heat-prone components of electric moped charging devices,such as sockets,interfaces,and controllers.Traditional detection methods have limitations in terms of real-time performance and monitoring scope.To address this,a temperature detection method based on infrared image processing has been proposed:utilizing the median filtering algorithm to denoise the original infrared image,then applying an image segmentation algorithm to divide the image.
基金supported by the National Natural Science Foundation of China(Grant Nos.82272955 and 22203057)the Natural Science Foundation of Fujian Province(Grant No.2021J011361).
文摘The presence of a positive deep surgical margin in tongue squamous cell carcinoma(TSCC)significantly elevates the risk of local recurrence.Therefore,a prompt and precise intraoperative assessment of margin status is imperative to ensure thorough tumor resection.In this study,we integrate Raman imaging technology with an artificial intelligence(AI)generative model,proposing an innovative approach for intraoperative margin status diagnosis.This method utilizes Raman imaging to swiftly and non-invasively capture tissue Raman images,which are then transformed into hematoxylin-eosin(H&E)-stained histopathological images using an AI generative model for histopathological diagnosis.The generated H&E-stained images clearly illustrate the tissue’s pathological conditions.Independently reviewed by three pathologists,the overall diagnostic accuracy for distinguishing between tumor tissue and normal muscle tissue reaches 86.7%.Notably,it outperforms current clinical practices,especially in TSCC with positive lymph node metastasis or moderately differentiated grades.This advancement highlights the potential of AI-enhanced Raman imaging to significantly improve intraoperative assessments and surgical margin evaluations,promising a versatile diagnostic tool beyond TSCC.
基金supported by the National Natural Science Foundation of China(NSFC)12333010the National Key R&D Program of China 2022YFF0503002+3 种基金the Strategic Priority Research Program of the Chinese Academy of Sciences(grant No.XDB0560000)the NSFC 11921003supported by the Prominent Postdoctoral Project of Jiangsu Province(2023ZB304)supported by the Strategic Priority Research Program on Space Science,the Chinese Academy of Sciences,grant No.XDA15320000.
文摘Indirect X-ray modulation imaging has been adopted in a number of solar missions and provided reconstructed X-ray images of solar flares that are of great scientific importance.However,the assessment of the image quality of the reconstruction is still difficult,which is particularly useful for scheme design of X-ray imaging systems,testing and improvement of imaging algorithms,and scientific research of X-ray sources.Currently,there is no specified method to quantitatively evaluate the quality of X-ray image reconstruction and the point-spread function(PSF)of an X-ray imager.In this paper,we propose percentage proximity degree(PPD)by considering the imaging characteristics of X-ray image reconstruction and in particular,sidelobes and their effects on imaging quality.After testing a variety of imaging quality assessments in six aspects,we utilized the technique for order preference by similarity to ideal solution to the indices that meet the requirements.Then we develop the final quality index for X-ray image reconstruction,QuIX,which consists of the selected indices and the new PPD.QuIX performs well in a series of tests,including assessment of instrument PSF and simulation tests under different grid configurations,as well as imaging tests with RHESSI data.It is also a useful tool for testing of imaging algorithms,and determination of imaging parameters for both RHESSI and ASO-S/Hard X-ray Imager,such as field of view,beam width factor,and detector selection.
基金the Deanship of Scientifc Research at King Khalid University for funding this work through large group Research Project under grant number RGP2/421/45supported via funding from Prince Sattam bin Abdulaziz University project number(PSAU/2024/R/1446)+1 种基金supported by theResearchers Supporting Project Number(UM-DSR-IG-2023-07)Almaarefa University,Riyadh,Saudi Arabia.supported by the Basic Science Research Program through the National Research Foundation of Korea(NRF)funded by the Ministry of Education(No.2021R1F1A1055408).
文摘Machine learning(ML)is increasingly applied for medical image processing with appropriate learning paradigms.These applications include analyzing images of various organs,such as the brain,lung,eye,etc.,to identify specific flaws/diseases for diagnosis.The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification.Most of the extracted image features are irrelevant and lead to an increase in computation time.Therefore,this article uses an analytical learning paradigm to design a Congruent Feature Selection Method to select the most relevant image features.This process trains the learning paradigm using similarity and correlation-based features over different textural intensities and pixel distributions.The similarity between the pixels over the various distribution patterns with high indexes is recommended for disease diagnosis.Later,the correlation based on intensity and distribution is analyzed to improve the feature selection congruency.Therefore,the more congruent pixels are sorted in the descending order of the selection,which identifies better regions than the distribution.Now,the learning paradigm is trained using intensity and region-based similarity to maximize the chances of selection.Therefore,the probability of feature selection,regardless of the textures and medical image patterns,is improved.This process enhances the performance of ML applications for different medical image processing.The proposed method improves the accuracy,precision,and training rate by 13.19%,10.69%,and 11.06%,respectively,compared to other models for the selected dataset.The mean error and selection time is also reduced by 12.56%and 13.56%,respectively,compared to the same models and dataset.
基金supported by the National Key R&D Program of China 2022YFF0503002the National Natural Science Foundation of China(NSFC,Grant Nos.12333010 and 12233012)+2 种基金the Strategic Priority Research Program of the Chinese Academy of Sciences(grant No.XDB0560000)supported by the Prominent Postdoctoral Project of Jiangsu Province(2023ZB304)supported by the Strategic Priority Research Program on Space Science,the Chinese Academy of Sciences,grant No.XDA15320000.
文摘Imaging observations of solar X-ray bursts can reveal details of the energy release process and particle acceleration in flares.Most hard X-ray imagers make use of the modulation-based Fourier transform imaging method,an indirect imaging technique that requires algorithms to reconstruct and optimize images.During the last decade,a variety of algorithms have been developed and improved.However,it is difficult to quantitatively evaluate the image quality of different solutions without a true,reference image of observation.How to choose the values of imaging parameters for these algorithms to get the best performance is also an open question.In this study,we present a detailed test of the characteristics of these algorithms,imaging dynamic range and a crucial parameter for the CLEAN method,clean beam width factor(CBWF).We first used SDO/AIA EUV images to compute DEM maps and calculate thermal X-ray maps.Then these realistic sources and several types of simulated sources are used as the ground truth in the imaging simulations for both RHESSI and ASO-S/HXI.The different solutions are evaluated quantitatively by a number of means.The overall results suggest that EM,PIXON,and CLEAN are exceptional methods for sidelobe elimination,producing images with clear source details.Although MEM_GE,MEM_NJIT,VIS_WV and VIS_CS possess fast imaging processes and generate good images,they too possess associated imperfections unique to each method.The two forward fit algorithms,VF and FF,perform differently,and VF appears to be more robust and useful.We also demonstrated the imaging capability of HXI and available HXI algorithms.Furthermore,the effect of CBWF on image quality was investigated,and the optimal settings for both RHESSI and HXI were proposed.
文摘Large language models(LLMs),such as ChatGPT developed by OpenAI,represent a significant advancement in artificial intelligence(AI),designed to understand,generate,and interpret human language by analyzing extensive text data.Their potential integration into clinical settings offers a promising avenue that could transform clinical diagnosis and decision-making processes in the future(Thirunavukarasu et al.,2023).This article aims to provide an in-depth analysis of LLMs’current and potential impact on clinical practices.Their ability to generate differential diagnosis lists underscores their potential as invaluable tools in medical practice and education(Hirosawa et al.,2023;Koga et al.,2023).
基金Supported by the Henan Province Key Research and Development Project(231111211300)the Central Government of Henan Province Guides Local Science and Technology Development Funds(Z20231811005)+2 种基金Henan Province Key Research and Development Project(231111110100)Henan Provincial Outstanding Foreign Scientist Studio(GZS2024006)Henan Provincial Joint Fund for Scientific and Technological Research and Development Plan(Application and Overcoming Technical Barriers)(242103810028)。
文摘The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible images.To meet these requirements,an autoencoder-based method for infrared and visible image fusion is proposed.The encoder designed according to the optimization objective consists of a base encoder and a detail encoder,which is used to extract low-frequency and high-frequency information from the image.This extraction may lead to some information not being captured,so a compensation encoder is proposed to supplement the missing information.Multi-scale decomposition is also employed to extract image features more comprehensively.The decoder combines low-frequency,high-frequency and supplementary information to obtain multi-scale features.Subsequently,the attention strategy and fusion module are introduced to perform multi-scale fusion for image reconstruction.Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
基金National Natural Science Foundation of China(No.42301518)Hubei Key Laboratory of Regional Development and Environmental Response(No.2023(A)002)Key Laboratory of the Evaluation and Monitoring of Southwest Land Resources(Ministry of Education)(No.TDSYS202304).
文摘Image-maps,a hybrid design with satellite images as background and map symbols uploaded,aim to combine the advantages of maps’high interpretation efficiency and satellite images’realism.The usability of image-maps is influenced by the representations of background images and map symbols.Many researchers explored the optimizations for background images and symbolization techniques for symbols to reduce the complexity of image-maps and improve the usability.However,little literature was found for the optimum amount of symbol loading.This study focuses on the effects of background image complexity and map symbol load on the usability(i.e.,effectiveness and efficiency)of image-maps.Experiments were conducted by user studies via eye-tracking equipment and an online questionnaire survey.Experimental data sets included image-maps with ten levels of map symbol load in ten areas.Forty volunteers took part in the target searching experiments.It has been found that the usability,i.e.,average time viewed(efficiency)and average revisits(effectiveness)of targets recorded,is influenced by the complexity of background images,a peak exists for optimum symbol load for an image-map.The optimum levels for symbol load for different image-maps also have a peak when the complexity of the background image/image map increases.The complexity of background images serves as a guideline for optimum map symbol load in image-map design.This study enhanced user experience by optimizing visual clarity and managing cognitive load.Understanding how these factors interact can help create adaptive maps that maintain clarity and usability,guiding AI algorithms to adjust symbol density based on user context.This research establishes the practices for map design,making cartographic tools more innovative and more user-centric.
基金supported by the National Natural Science Foundation of China(No.82371933)the National Natural Science Foundation of Shandong Province of China(No.ZR2021MH120)+1 种基金the Taishan Scholars Project(No.tsqn202211378)the Shandong Provincial Natural Science Foundation for Excellent Young Scholars(No.ZR2024YQ075).
文摘Objective:Early predicting response before neoadjuvant chemotherapy(NAC)is crucial for personalized treatment plans for locally advanced breast cancer patients.We aim to develop a multi-task model using multiscale whole slide images(WSIs)features to predict the response to breast cancer NAC more finely.Methods:This work collected 1,670 whole slide images for training and validation sets,internal testing sets,external testing sets,and prospective testing sets of the weakly-supervised deep learning-based multi-task model(DLMM)in predicting treatment response and pCR to NAC.Our approach models two-by-two feature interactions across scales by employing concatenate fusion of single-scale feature representations,and controls the expressiveness of each representation via a gating-based attention mechanism.Results:In the retrospective analysis,DLMM exhibited excellent predictive performance for the prediction of treatment response,with area under the receiver operating characteristic curves(AUCs)of 0.869[95%confidence interval(95%CI):0.806−0.933]in the internal testing set and 0.841(95%CI:0.814−0.867)in the external testing sets.For the pCR prediction task,DLMM reached AUCs of 0.865(95%CI:0.763−0.964)in the internal testing and 0.821(95%CI:0.763−0.878)in the pooled external testing set.In the prospective testing study,DLMM also demonstrated favorable predictive performance,with AUCs of 0.829(95%CI:0.754−0.903)and 0.821(95%CI:0.692−0.949)in treatment response and pCR prediction,respectively.DLMM significantly outperformed the baseline models in all testing sets(P<0.05).Heatmaps were employed to interpret the decision-making basis of the model.Furthermore,it was discovered that high DLMM scores were associated with immune-related pathways and cells in the microenvironment during biological basis exploration.Conclusions:The DLMM represents a valuable tool that aids clinicians in selecting personalized treatment strategies for breast cancer patients.
文摘Unmanned aerial vehicle(UAV)imagery poses significant challenges for object detection due to extreme scale variations,high-density small targets(68%in VisDrone dataset),and complex backgrounds.While YOLO-series models achieve speed-accuracy trade-offs via fixed convolution kernels and manual feature fusion,their rigid architectures struggle with multi-scale adaptability,as exemplified by YOLOv8n’s 36.4%mAP and 13.9%small-object AP on VisDrone2019.This paper presents YOLO-LE,a lightweight framework addressing these limitations through three novel designs:(1)We introduce the C2f-Dy and LDown modules to enhance the backbone’s sensitivity to small-object features while reducing backbone parameters,thereby improving model efficiency.(2)An adaptive feature fusion module is designed to dynamically integrate multi-scale feature maps,optimizing the neck structure,reducing neck complexity,and enhancing overall model performance.(3)We replace the original loss function with a distributed focal loss and incorporate a lightweight self-attention mechanism to improve small-object recognition and bounding box regression accuracy.Experimental results demonstrate that YOLO-LE achieves 39.9%mAP@0.5 on VisDrone2019,representing a 9.6%improvement over YOLOv8n,while maintaining 8.5 GFLOPs computational efficiency.This provides an efficient solution for UAV object detection in complex scenarios.
基金supported by the Natural Science Foundation of Shandong Province(nos.ZR2023MF047,ZR2024MA055 and ZR2023QF139)the Enterprise Commissioned Project(nos.2024HX104 and 2024HX140)+1 种基金the China University Industry-University-Research Innovation Foundation(nos.2021ZYA11003 and 2021ITA05032)the Science and Technology Plan for Youth Innovation of Shandong's Universities(no.2019KJN012).
文摘In low-light environments,captured images often exhibit issues such as insufficient clarity and detail loss,which significantly degrade the accuracy of subsequent target recognition tasks.To tackle these challenges,this study presents a novel low-light image enhancement algorithm that leverages virtual hazy image generation through dehazing models based on statistical analysis.The proposed algorithm initiates the enhancement process by transforming the low-light image into a virtual hazy image,followed by image segmentation using a quadtree method.To improve the accuracy and robustness of atmospheric light estimation,the algorithm incorporates a genetic algorithm to optimize the quadtree-based estimation of atmospheric light regions.Additionally,this method employs an adaptive window adjustment mechanism to derive the dark channel prior image,which is subsequently refined using morphological operations and guided filtering.The final enhanced image is reconstructed through the hazy image degradation model.Extensive experimental evaluations across multiple datasets verify the superiority of the designed framework,achieving a peak signal-to-noise ratio(PSNR)of 17.09 and a structural similarity index(SSIM)of 0.74.These results indicate that the proposed algorithm not only effectively enhances image contrast and brightness but also outperforms traditional methods in terms of subjective and objective evaluation metrics.
文摘In the field of image processing,the analysis of Synthetic Aperture Radar(SAR)images is crucial due to its broad range of applications.However,SAR images are often affected by coherent speckle noise,which significantly degrades image quality.Traditional denoising methods,typically based on filter techniques,often face challenges related to inefficiency and limited adaptability.To address these limitations,this study proposes a novel SAR image denoising algorithm based on an enhanced residual network architecture,with the objective of enhancing the utility of SAR imagery in complex electromagnetic environments.The proposed algorithm integrates residual network modules,which directly process the noisy input images to generate denoised outputs.This approach not only reduces computational complexity but also mitigates the difficulties associated with model training.By combining the Transformer module with the residual block,the algorithm enhances the network's ability to extract global features,offering superior feature extraction capabilities compared to CNN-based residual modules.Additionally,the algorithm employs the adaptive activation function Meta-ACON,which dynamically adjusts the activation patterns of neurons,thereby improving the network's feature extraction efficiency.The effectiveness of the proposed denoising method is empirically validated using real SAR images from the RSOD dataset.The proposed algorithm exhibits remarkable performance in terms of EPI,SSIM,and ENL,while achieving a substantial enhancement in PSNR when compared to traditional and deep learning-based algorithms.The PSNR performance is enhanced by over twofold.Moreover,the evaluation of the MSTAR SAR dataset substantiates the algorithm's robustness and applicability in SAR denoising tasks,with a PSNR of 25.2021 being attained.These findings underscore the efficacy of the proposed algorithm in mitigating speckle noise while preserving critical features in SAR imagery,thereby enhancing its quality and usability in practical scenarios.