Digital watermarking technology plays an important role in detecting malicious tampering and protecting image copyright. However, in practical applications, this technology faces various problems such as severe image distortion, inaccurate localization of the tampered regions, and difficulty in recovering content. Given these shortcomings, a fragile image watermarking algorithm for blind tampering detection and content self-recovery is proposed. The multi-feature watermarking authentication code (AC) is constructed using the texture feature of local binary patterns (LBP), the direct-current (DC) coefficient of the discrete cosine transform (DCT), and the contrast feature of the gray level co-occurrence matrix (GLCM) for detecting the tampered region, and the recovery code (RC) is designed according to the average grayscale value of pixels in image blocks for recovering the tampered content. The optimal pixel adjustment process (OPAP) and least significant bit (LSB) algorithms are used to embed the recovery code and authentication code into the image in a staggered manner. When checking the integrity of the image, an authentication-code comparison method and a threshold judgment method are used to perform two rounds of tampering detection and to blindly recover the tampered content. Experimental results show that this algorithm has good transparency, strong blind-detection capability, and self-recovery performance against four types of malicious attacks and some conventional signal processing operations. When resisting copy-paste, text addition, cropping, and vector quantization attacks at a tampering rate (TR) of 10%, the average tampering detection rate reaches 94.09%, and the peak signal-to-noise ratios (PSNR) of the watermarked image and the recovered image exceed 41.47 dB and 40.31 dB, respectively, demonstrating clear advantages over other related algorithms from recent years.
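As a rough, illustrative sketch (not the paper's exact scheme), the snippet below shows the two building blocks the abstract names: deriving a recovery code from a block's mean grayscale and hiding bits in least significant bits. The block size, the 6-bit quantization, and the pairing of a source block with a carrier block are assumptions made only for this example.

```python
import numpy as np

def block_mean_recovery_code(block: np.ndarray, n_bits: int = 6) -> list[int]:
    """Quantize the mean grayscale of a block to n_bits and return the bits (MSB first)."""
    mean = int(np.mean(block))                    # 0..255 average intensity of the block
    quantized = mean >> (8 - n_bits)              # keep the n_bits most significant bits
    return [(quantized >> i) & 1 for i in range(n_bits - 1, -1, -1)]

def embed_lsb(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Replace the least significant bit of the first len(bits) pixels with the payload."""
    flat = pixels.flatten().astype(np.uint8)
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b
    return flat.reshape(pixels.shape)

# toy example: derive a recovery code from one 4x4 block and hide it in another block
rng = np.random.default_rng(0)
source_block = rng.integers(0, 256, (4, 4), dtype=np.uint8)
carrier_block = rng.integers(0, 256, (4, 4), dtype=np.uint8)
rc_bits = block_mean_recovery_code(source_block)
stego_block = embed_lsb(carrier_block, rc_bits)
print(rc_bits, np.abs(stego_block.astype(int) - carrier_block.astype(int)).max())  # distortion <= 1
```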
Images taken in dim environments frequently exhibit issues such as insufficient brightness, noise, color shifts, and loss of detail. These problems pose significant challenges for dark image enhancement. Current approaches, while effective at global illumination modeling, often struggle to simultaneously suppress noise and preserve structural details, especially under heterogeneous lighting. Furthermore, misalignment between the luminance and color channels introduces additional challenges to accurate enhancement. In response to these difficulties, we introduce a single-stage framework, M2ATNet, using a multi-scale multi-attention and Transformer architecture. First, to address texture blurring and residual noise, we design a multi-scale multi-attention denoising module (MMAD), which is applied separately to the luminance and color channels to strengthen structural and texture modeling. Secondly, to solve the non-alignment of the luminance and color channels, we introduce a multi-channel feature fusion Transformer (CFFT) module, which recovers dark details and corrects color shifts through cross-channel alignment and deep feature interaction. To guide the model toward more stable and efficient learning, we also fuse multiple loss functions into a hybrid loss term. We extensively evaluate the proposed method on standard datasets including LOL-v1, LOL-v2, DICM, LIME, and NPE. Evaluation in terms of numerical metrics and visual quality demonstrates that M2ATNet consistently outperforms existing advanced approaches. Ablation studies further confirm the critical roles the MMAD and CFFT modules play in detail preservation and visual fidelity under challenging illumination-deficient conditions.
High-resolution remote sensing images (HRSIs) are now an essential data source for gathering surface information due to advancements in remote sensing data capture technologies. However, their significant scale changes and wealth of spatial details pose challenges for semantic segmentation. While convolutional neural networks (CNNs) excel at capturing local features, they are limited in modeling long-range dependencies. Conversely, transformers utilize multi-head self-attention to integrate global context effectively, but this approach often incurs a high computational cost. This paper proposes a global-local multiscale context network (GLMCNet) to extract both global and local multiscale contextual information from HRSIs. A detail-enhanced filtering module (DEFM) is proposed at the end of the encoder to refine the encoder outputs further, thereby enhancing the key details extracted by the encoder and effectively suppressing redundant information. In addition, a global-local multiscale transformer block (GLMTB) is proposed in the decoding stage to enable the modeling of rich multiscale global and local information. We also design a stair fusion mechanism to transmit deep semantic information progressively from deep to shallow layers. Finally, we propose the semantic awareness enhancement module (SAEM), which further enhances the representation of multiscale semantic features through spatial attention and covariance channel attention. Extensive ablation analyses and comparative experiments were conducted to evaluate the performance of the proposed method. Specifically, our method achieved a mean Intersection over Union (mIoU) of 86.89% on the ISPRS Potsdam dataset and 84.34% on the ISPRS Vaihingen dataset, outperforming existing models such as ABCNet and BANet.
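For readers unfamiliar with the headline metric, the following minimal sketch computes mean Intersection over Union (mIoU) from two integer label maps; it is a generic implementation for illustration, not code from the GLMCNet paper.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Compute mean Intersection over Union from integer label maps of equal shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                       # ignore classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# toy 2x3 label maps with 3 classes
pred = np.array([[0, 1, 2], [0, 1, 1]])
target = np.array([[0, 1, 2], [0, 2, 1]])
print(round(mean_iou(pred, target, 3), 3))   # 0.722
```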
Driven by advancements in mobile internet technology, images have become a crucial data medium. Ensuring the security of image information during transmission has thus emerged as an urgent challenge. This study proposes a novel image encryption algorithm specifically designed for grayscale image security. This research introduces a new Cantor diagonal matrix permutation method. The proposed permutation method uses row and column index sequences to control the Cantor diagonal matrix, where the row and column index sequences are generated by a spatiotemporal chaotic system named the coupled map lattice (CML). The high initial-value sensitivity of the CML system makes the permutation method highly sensitive and secure. Additionally, leveraging fractal theory, this study introduces a chaotic fractal matrix and applies this matrix in the diffusion process. This chaotic fractal matrix exhibits self-similarity and irregularity. Using the Cantor diagonal matrix and the chaotic fractal matrix, this paper introduces a fast image encryption algorithm involving two diffusion steps and one permutation step. Moreover, the algorithm achieves robust security with only a single encryption round, ensuring high operational efficiency. Experimental results show that the proposed algorithm features an expansive key space, robust security, high sensitivity, high efficiency, and superior statistical properties for the ciphered images. Thus, the proposed algorithm not only provides a practical solution for secure image transmission but also bridges fractal theory with image encryption techniques, thereby opening new research avenues in chaotic cryptography and advancing the development of information security technology.
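The coupled map lattice mentioned above is a standard spatiotemporal chaotic system; the sketch below shows one common way such a lattice can be iterated and turned into index sequences for permutation. The lattice size, coupling strength, logistic parameter, and seeding are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def logistic(x: np.ndarray, mu: float = 3.99) -> np.ndarray:
    """Logistic map in its fully chaotic regime."""
    return mu * x * (1.0 - x)

def cml_sequence(size: int, steps: int, eps: float = 0.1, seed: float = 0.345) -> np.ndarray:
    """Iterate a ring-coupled map lattice and return the final lattice state in (0, 1)."""
    x = (seed + 0.01 * np.arange(size)) % 1.0       # slightly different initial value per site
    for _ in range(steps):
        left, right = np.roll(x, 1), np.roll(x, -1)
        x = (1.0 - eps) * logistic(x) + 0.5 * eps * (logistic(left) + logistic(right))
    return x

def index_sequence(size: int, steps: int = 200, **kw) -> np.ndarray:
    """Turn the chaotic state into a permutation of 0..size-1 (e.g., row/column indices)."""
    return np.argsort(cml_sequence(size, steps, **kw))

print(index_sequence(8))
```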
Reversible data hiding (RDH) enables secret data embedding while preserving complete cover image recovery, making it crucial for applications requiring image integrity. The pixel value ordering (PVO) technique used in multi-stego images provides good image quality but often results in low embedding capability. To address these challenges, this paper proposes a high-capacity RDH scheme based on PVO that generates three stego images from a single cover image. The cover image is partitioned into non-overlapping blocks with pixels sorted in ascending order. Four secret bits are embedded into each block's maximum pixel value, while three additional bits are embedded into the second-largest value when the pixel difference exceeds a predefined threshold. A similar embedding strategy is also applied to the minimum side of the block, including the second-smallest pixel value. This design enables each block to embed up to 14 bits of secret data. Experimental results demonstrate that the proposed method achieves significantly higher embedding capacity and improved visual quality compared to existing triple-stego RDH approaches, advancing the field of reversible steganography.
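To make the capacity claim concrete, the sketch below estimates how many bits a single block could carry under the described allocation (4 bits at the maximum, 3 more when the gap to the second-largest exceeds a threshold, mirrored on the minimum side). The 4x4 block size and threshold value are assumptions; the full reversible embedding that produces three stego images is not reproduced here.

```python
import numpy as np

def block_capacity(block: np.ndarray, threshold: int = 3) -> int:
    """Estimate how many secret bits one block could carry under the described strategy."""
    p = np.sort(block.flatten())
    bits = 4                                   # maximum pixel always carries 4 bits
    if p[-1] - p[-2] > threshold:
        bits += 3                              # second-largest pixel carries 3 more
    bits += 4                                  # minimum pixel carries 4 bits
    if p[1] - p[0] > threshold:
        bits += 3                              # second-smallest pixel carries 3 more
    return bits                                # at most 14 bits per block

img = np.random.default_rng(1).integers(0, 256, (512, 512), dtype=np.uint8)
blocks = img.reshape(128, 4, 128, 4).swapaxes(1, 2).reshape(-1, 16)   # 4x4 non-overlapping blocks
total_bits = sum(block_capacity(b) for b in blocks)
print(f"estimated capacity: {total_bits} bits")
```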
Remote sensing image super-resolution technology is pivotal for enhancing image quality in critical applications including environmental monitoring, urban planning, and disaster assessment. However, traditional methods exhibit deficiencies in detail recovery and noise suppression, particularly when processing complex landscapes (e.g., forests, farmlands), leading to artifacts and spectral distortions that limit practical utility. To address this, we propose an enhanced Super-Resolution Generative Adversarial Network (SRGAN) framework featuring three key innovations: (1) replacement of the L1/L2 loss with a robust Charbonnier loss to suppress noise while preserving edge details via adaptive gradient balancing; (2) a multi-loss joint optimization strategy dynamically weighting the Charbonnier loss (β=0.5), Visual Geometry Group (VGG) perceptual loss (α=1), and adversarial loss (γ=0.1) to synergize pixel-level accuracy and perceptual quality; (3) a multi-scale residual network (MSRN) capturing cross-scale texture features (e.g., forest canopies, mountain contours). Validated on Sentinel-2 (10 m) and SPOT-6/7 (2.5 m) datasets covering 904 km² in Motuo County, Xizang, our method outperforms the SRGAN baseline (SR4RS) with Peak Signal-to-Noise Ratio (PSNR) gains of 0.29 dB and Structural Similarity Index (SSIM) improvements of 3.08% on forest imagery. Visual comparisons confirm enhanced texture continuity despite marginal Learned Perceptual Image Patch Similarity (LPIPS) increases. The method significantly improves noise robustness and edge retention in complex geomorphology, demonstrating an 18% faster response in forest fire early warning and providing high-resolution support for agricultural/urban monitoring. Future work will integrate spectral constraints and lightweight architectures.
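The Charbonnier loss and the weighted loss combination are simple to express; the hedged sketch below follows the weights quoted in the abstract but passes the perceptual and adversarial terms in as precomputed scalars for brevity, which is an illustrative simplification rather than the authors' training code.

```python
import torch

def charbonnier_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """Charbonnier (smooth L1) loss: behaves like L2 near zero and like L1 for large errors."""
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

def total_loss(pred, target, perceptual, adversarial,
               alpha: float = 1.0, beta: float = 0.5, gamma: float = 0.1) -> torch.Tensor:
    """Weighted sum following the weighting described in the abstract
    (alpha: VGG perceptual, beta: Charbonnier, gamma: adversarial)."""
    return alpha * perceptual + beta * charbonnier_loss(pred, target) + gamma * adversarial

sr = torch.rand(1, 3, 64, 64)   # super-resolved output (toy tensor)
hr = torch.rand(1, 3, 64, 64)   # high-resolution reference (toy tensor)
print(total_loss(sr, hr, perceptual=torch.tensor(0.2), adversarial=torch.tensor(0.7)).item())
```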
Alzheimer's Disease (AD) is a progressive neurodegenerative disorder that significantly affects cognitive function, making early and accurate diagnosis essential. Traditional Deep Learning (DL)-based approaches often struggle with low-contrast MRI images, class imbalance, and suboptimal feature extraction. This paper develops a hybrid DL system that unites MobileNetV2 with adaptive classification methods to improve Alzheimer's diagnosis from MRI scans. Image enhancement is performed using Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN). A classification robustness enhancement system integrates class weighting techniques and a Matthews Correlation Coefficient (MCC)-based evaluation method into the design. The trained and validated model achieves a 98.88% accuracy rate and a 0.9614 MCC score. We also performed a 10-fold cross-validation experiment, obtaining an average accuracy of 96.52% (±1.51), a loss of 0.1671, and an MCC score of 0.9429 across folds. The proposed framework outperforms state-of-the-art models with a 98% weighted F1-score while reducing misdiagnoses at every AD stage. Confusion matrix analysis shows that the model clearly separates the AD progression stages. These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer's diagnosis, contributing to improved computer-aided diagnosis (CAD) systems in clinical practice.
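Two reusable pieces here, CLAHE preprocessing and MCC-based evaluation, can be illustrated with standard libraries. The clip limit, tile size, and toy labels below are assumptions made for the example, and the ESRGAN super-resolution step is omitted.

```python
import cv2
import numpy as np
from sklearn.metrics import matthews_corrcoef

def clahe_enhance(gray: np.ndarray, clip: float = 2.0, tile: int = 8) -> np.ndarray:
    """Contrast-Limited Adaptive Histogram Equalization on an 8-bit grayscale slice."""
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(tile, tile))
    return clahe.apply(gray)

# enhance a synthetic low-contrast "scan" and score a toy multi-class prediction with MCC
slice_ = np.random.default_rng(0).normal(120, 10, (224, 224)).clip(0, 255).astype(np.uint8)
enhanced = clahe_enhance(slice_)
y_true = [0, 1, 2, 3, 2, 1, 0, 3]     # toy AD-stage labels
y_pred = [0, 1, 2, 3, 2, 0, 0, 3]
print(slice_.std(), enhanced.std(), round(matthews_corrcoef(y_true, y_pred), 3))
```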
The integration of image analysis through deep learning (DL) into rock classification represents a significant leap forward in geological research. While traditional methods remain invaluable for their expertise and historical context, DL offers a powerful complement by enhancing the speed, objectivity, and precision of the classification process. This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks (CNNs) for geological image analysis, particularly in the classification of igneous, metamorphic, and sedimentary rock types from rock thin section (RTS) images. The study focuses primarily on classic image augmentation techniques and evaluates their impact on model accuracy and precision. Results demonstrate that augmentation techniques like Equalize significantly enhance the model's classification capabilities, achieving an F1-score of 0.9869 for igneous rocks, 0.9884 for metamorphic rocks, and 0.9929 for sedimentary rocks, an improvement over the baseline results. Moreover, the weighted average F1-score across all classes and techniques is 0.9886, indicating an overall enhancement. Conversely, methods like Distort lead to decreased accuracy and F1-score, with an F1-score of 0.949 for igneous rocks, 0.954 for metamorphic rocks, and 0.9416 for sedimentary rocks, degrading performance relative to the baseline. The study underscores the practicality of image data augmentation in geological image classification and advocates for the adoption of DL methods in this domain for automation and improved results. The findings can benefit various fields, including remote sensing, mineral exploration, and environmental monitoring, by enhancing the accuracy of geological image analysis for both scientific research and industrial applications.
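The Equalize augmentation discussed above corresponds to plain histogram equalization; a minimal sketch with Pillow is shown below, demonstrated on a synthetic low-contrast array rather than a real thin-section image.

```python
from PIL import Image, ImageOps
import numpy as np

def equalize_augment(path: str) -> Image.Image:
    """Histogram-equalize a rock thin-section image (the 'Equalize' augmentation)."""
    img = Image.open(path).convert("RGB")
    return ImageOps.equalize(img)

# demo on a synthetic low-contrast image instead of a real thin-section file
arr = np.random.default_rng(0).integers(100, 140, (128, 128, 3), dtype=np.uint8)
img = Image.fromarray(arr)
eq = ImageOps.equalize(img)
print(np.asarray(img).std(), np.asarray(eq).std())   # equalization spreads the intensities
```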
Potential high-temperature risks exist in heat-prone components of electric moped charging devices, such as sockets, interfaces, and controllers. Traditional detection methods have limitations in terms of real-time performance and monitoring scope. To address this, a temperature detection method based on infrared image processing has been proposed: utilizing the median filtering algorithm to denoise the original infrared image, then applying an image segmentation algorithm to divide the image.
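A minimal sketch of the described pipeline, median filtering followed by a simple segmentation, is given below; the percentile threshold is an assumed stand-in, since the abstract does not specify which segmentation algorithm is used.

```python
import cv2
import numpy as np

def segment_hot_regions(ir_gray: np.ndarray, ksize: int = 5, percentile: float = 95.0):
    """Median-filter an infrared frame, then threshold the hottest pixels into a binary mask."""
    denoised = cv2.medianBlur(ir_gray, ksize)
    thresh = np.percentile(denoised, percentile)      # keep only the top few percent of intensities
    mask = (denoised >= thresh).astype(np.uint8) * 255
    return denoised, mask

frame = np.random.default_rng(0).integers(0, 256, (240, 320), dtype=np.uint8)
frame[100:120, 150:180] = 250                          # simulated hot spot (e.g., a charging socket)
denoised, mask = segment_hot_regions(frame)
print(mask[110, 160] == 255, round(float(mask.mean()), 2))
```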
Karst fractures serve as crucial seepage channels and storage spaces for carbonate natural gas reservoirs, and electrical image logs are vital data for visualizing and characterizing such fractures. However, the conventional approach of identifying fractures using electrical image logs predominantly relies on manual processes that are not only time-consuming but also highly subjective. In addition, the heterogeneity and strong dissolution tendency of karst carbonate reservoirs lead to complexity and variety in fracture geometry, which makes it difficult to accurately identify fractures. In this paper, the electrical image logs network (EILnet), a deep-learning-based intelligent semantic segmentation model with a selective attention mechanism and a selective feature fusion module, was created to enable the intelligent identification and segmentation of different types of fractures through electrical logging images. Data from electrical image logs representing structural and induced fractures were first selected using the sliding window technique before image inpainting and data augmentation were implemented for these images to improve the generalizability of the model. Various image-processing tools, including the bilateral filter, Laplace operator, and Gaussian low-pass filter, were also applied to the electrical logging images to generate a multi-attribute dataset to help the model learn the semantic features of the fractures. The results demonstrated that the EILnet model outperforms mainstream deep-learning semantic segmentation models, such as Fully Convolutional Networks (FCN-8s), U-Net, and SegNet, for both the single-channel dataset and the multi-attribute dataset. The EILnet provided significant advantages for the single-channel dataset, and its mean intersection over union (MIoU) and pixel accuracy (PA) were 81.32% and 89.37%, respectively. In the case of the multi-attribute dataset, the identification capability of all models improved to varying degrees, with EILnet achieving the highest MIoU and PA of 83.43% and 91.11%, respectively. Further, applying the EILnet model to various blind wells demonstrated its ability to provide reliable fracture identification, thereby indicating its promising potential applications.
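Building the multi-attribute dataset from the filters named above (bilateral, Laplace operator, Gaussian low-pass) can be sketched with OpenCV as follows; the filter parameters are illustrative assumptions, not the values used for EILnet.

```python
import cv2
import numpy as np

def multi_attribute_stack(img_gray: np.ndarray) -> np.ndarray:
    """Stack the raw image with bilateral-filtered, Laplacian, and Gaussian low-pass
    attributes into a multi-channel training sample."""
    bilateral = cv2.bilateralFilter(img_gray, d=9, sigmaColor=75, sigmaSpace=75)
    laplacian = cv2.convertScaleAbs(cv2.Laplacian(img_gray, cv2.CV_16S, ksize=3))
    gaussian = cv2.GaussianBlur(img_gray, (5, 5), sigmaX=1.5)
    return np.stack([img_gray, bilateral, laplacian, gaussian], axis=-1)

patch = np.random.default_rng(0).integers(0, 256, (256, 256), dtype=np.uint8)
sample = multi_attribute_stack(patch)
print(sample.shape)   # (256, 256, 4)
```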
The presence of a positive deep surgical margin in tongue squamous cell carcinoma (TSCC) significantly elevates the risk of local recurrence. Therefore, a prompt and precise intraoperative assessment of margin status is imperative to ensure thorough tumor resection. In this study, we integrate Raman imaging technology with an artificial intelligence (AI) generative model, proposing an innovative approach for intraoperative margin status diagnosis. This method utilizes Raman imaging to swiftly and non-invasively capture tissue Raman images, which are then transformed into hematoxylin-eosin (H&E)-stained histopathological images using an AI generative model for histopathological diagnosis. The generated H&E-stained images clearly illustrate the tissue's pathological conditions. Independently reviewed by three pathologists, the overall diagnostic accuracy for distinguishing between tumor tissue and normal muscle tissue reaches 86.7%. Notably, it outperforms current clinical practices, especially in TSCC with positive lymph node metastasis or moderately differentiated grades. This advancement highlights the potential of AI-enhanced Raman imaging to significantly improve intraoperative assessments and surgical margin evaluations, promising a versatile diagnostic tool beyond TSCC.
Indirect X-ray modulation imaging has been adopted in a number of solar missions and has provided reconstructed X-ray images of solar flares of great scientific importance. However, assessing the image quality of the reconstruction remains difficult, even though such assessment is particularly useful for the scheme design of X-ray imaging systems, the testing and improvement of imaging algorithms, and the scientific study of X-ray sources. Currently, there is no established method to quantitatively evaluate the quality of X-ray image reconstruction and the point-spread function (PSF) of an X-ray imager. In this paper, we propose the percentage proximity degree (PPD), which considers the imaging characteristics of X-ray image reconstruction and, in particular, sidelobes and their effects on imaging quality. After testing a variety of imaging quality assessments in six aspects, we applied the technique for order preference by similarity to ideal solution (TOPSIS) to the indices that meet the requirements. We then develop the final quality index for X-ray image reconstruction, QuIX, which consists of the selected indices and the new PPD. QuIX performs well in a series of tests, including the assessment of instrument PSFs and simulation tests under different grid configurations, as well as imaging tests with RHESSI data. It is also a useful tool for testing imaging algorithms and for determining imaging parameters for both RHESSI and the ASO-S/Hard X-ray Imager, such as the field of view, beam width factor, and detector selection.
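TOPSIS itself is a standard multi-criteria ranking procedure; the generic sketch below shows how candidate quality indices could be ranked by closeness to the ideal solution. The criteria, weights, and benefit/cost flags are made up for illustration and are not the paper's configuration.

```python
import numpy as np

def topsis(scores: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Rank alternatives (rows) on several criteria (columns) by closeness to the ideal solution.
    benefit[j] is True if larger values of criterion j are better."""
    norm = scores / np.linalg.norm(scores, axis=0)             # vector-normalize each criterion
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)                        # closeness coefficient in [0, 1]

# toy example: 3 candidate quality indices scored on 3 criteria
scores = np.array([[0.9, 0.2, 0.7],
                   [0.6, 0.1, 0.9],
                   [0.8, 0.4, 0.6]])
print(topsis(scores, weights=np.array([0.5, 0.2, 0.3]), benefit=np.array([True, False, True])))
```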
Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws or diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Many of the extracted image features are irrelevant and increase computation time. Therefore, this article uses an analytical learning paradigm to design a congruent feature selection method that selects the most relevant image features. This process trains the learning paradigm using similarity- and correlation-based features over different textural intensities and pixel distributions. Similarity between pixels across the various distribution patterns with high indexes is recommended for disease diagnosis. The correlation based on intensity and distribution is then analyzed to improve feature selection congruency. The most congruent pixels are sorted in descending order of selection, which identifies better regions than the distribution alone. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection. Therefore, the probability of feature selection, regardless of the textures and medical image patterns, is improved. This process enhances the performance of ML applications for different medical image processing tasks. The proposed method improves the accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared to other models on the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared to the same models and dataset.
Imaging observations of solar X-ray bursts can reveal details of the energy release process and particle acceleration in flares. Most hard X-ray imagers make use of the modulation-based Fourier transform imaging method, an indirect imaging technique that requires algorithms to reconstruct and optimize images. During the last decade, a variety of algorithms have been developed and improved. However, it is difficult to quantitatively evaluate the image quality of different solutions without a true reference image of the observation. How to choose the values of imaging parameters for these algorithms to obtain the best performance is also an open question. In this study, we present a detailed test of the characteristics of these algorithms, their imaging dynamic range, and a crucial parameter of the CLEAN method, the clean beam width factor (CBWF). We first used SDO/AIA EUV images to compute DEM maps and calculate thermal X-ray maps. These realistic sources, together with several types of simulated sources, were then used as the ground truth in imaging simulations for both RHESSI and ASO-S/HXI. The different solutions are evaluated quantitatively by a number of means. The overall results suggest that EM, PIXON, and CLEAN are exceptional methods for sidelobe elimination, producing images with clear source details. Although MEM_GE, MEM_NJIT, VIS_WV, and VIS_CS offer fast imaging and generate good images, each has its own imperfections. The two forward-fit algorithms, VF and FF, perform differently, with VF appearing to be more robust and useful. We also demonstrate the imaging capability of HXI and the available HXI algorithms. Furthermore, the effect of the CBWF on image quality was investigated, and optimal settings for both RHESSI and HXI are proposed.
Large language models (LLMs), such as ChatGPT developed by OpenAI, represent a significant advancement in artificial intelligence (AI), designed to understand, generate, and interpret human language by analyzing extensive text data. Their potential integration into clinical settings offers a promising avenue that could transform clinical diagnosis and decision-making processes in the future (Thirunavukarasu et al., 2023). This article aims to provide an in-depth analysis of LLMs' current and potential impact on clinical practices. Their ability to generate differential diagnosis lists underscores their potential as invaluable tools in medical practice and education (Hirosawa et al., 2023; Koga et al., 2023).
The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible images. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which are used to extract low-frequency and high-frequency information from the image, respectively. Because this extraction may leave some information uncaptured, a compensation encoder is proposed to supplement the missing information. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines the low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module are introduced to perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
Image-maps, a hybrid design with satellite images as the background and map symbols overlaid, aim to combine the advantages of maps' high interpretation efficiency and satellite images' realism. The usability of image-maps is influenced by the representation of the background images and map symbols. Many researchers have explored optimizations of background images and symbolization techniques for symbols to reduce the complexity of image-maps and improve usability. However, little literature addresses the optimum amount of symbol loading. This study focuses on the effects of background image complexity and map symbol load on the usability (i.e., effectiveness and efficiency) of image-maps. Experiments were conducted as user studies with eye-tracking equipment and an online questionnaire survey. The experimental data sets included image-maps with ten levels of map symbol load in ten areas. Forty volunteers took part in the target-searching experiments. It was found that usability, i.e., the average time targets were viewed (efficiency) and the average revisits to targets (effectiveness), is influenced by the complexity of the background images, and that a peak exists at the optimum symbol load for an image-map. The optimum symbol load levels for different image-maps also show a peak as the complexity of the background image/image-map increases. The complexity of the background images therefore serves as a guideline for the optimum map symbol load in image-map design. This study enhances user experience by optimizing visual clarity and managing cognitive load. Understanding how these factors interact can help create adaptive maps that maintain clarity and usability, guiding AI algorithms to adjust symbol density based on user context. This research establishes practices for map design, making cartographic tools more innovative and more user-centric.
Unmanned aerial vehicle (UAV) images captured under low-light conditions often suffer from noise and uneven illumination. To address these issues, we propose a low-light image enhancement algorithm for UAV images that is inspired by the Retinex theory and guided by a light weighted map. First, we propose a new network for reflectance component processing to suppress the noise in images. Second, we construct an illumination enhancement module that uses a light weighted map to guide the enhancement process. Finally, the processed reflectance and illumination components are recombined to obtain the enhancement results. Experimental results show that our method can suppress noise while enhancing image brightness and prevent over-enhancement in bright regions. Code and data are available at https://gitee.com/baixiaotong2/uav-images.git.
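For context, the classical Retinex decomposition that inspires the method can be sketched as below: estimate illumination with a large-scale blur, divide it out to get reflectance, and brighten by adjusting the illumination only. The proposed method replaces these hand-crafted steps with learned networks and a light weighted map, so this is background, not the algorithm itself; the blur scale and gamma value are assumptions.

```python
import cv2
import numpy as np

def retinex_decompose(img_bgr: np.ndarray, sigma: float = 30.0):
    """Split a low-light image into an illumination estimate (large-scale Gaussian blur of the
    luminance) and a reflectance component, following the Retinex model I = R * L."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    illumination = cv2.GaussianBlur(gray, (0, 0), sigma) + 1e-4
    reflectance = gray / illumination
    return reflectance, illumination

def enhance(img_bgr: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Brighten by gamma-correcting the illumination only, then recombining with reflectance."""
    reflectance, illumination = retinex_decompose(img_bgr)
    enhanced = np.clip(reflectance * illumination ** gamma, 0, 1)
    return (enhanced * 255).astype(np.uint8)

dark = (np.random.default_rng(0).random((120, 160, 3)) * 40).astype(np.uint8)   # dim UAV frame stand-in
print(enhance(dark).mean() > cv2.cvtColor(dark, cv2.COLOR_BGR2GRAY).mean())     # brighter on average
```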
Objective: Early prediction of response before neoadjuvant chemotherapy (NAC) is crucial for personalized treatment plans for locally advanced breast cancer patients. We aim to develop a multi-task model using multiscale whole slide image (WSI) features to predict the response to breast cancer NAC at a finer level. Methods: This work collected 1,670 whole slide images for the training and validation sets, internal testing sets, external testing sets, and prospective testing sets of the weakly-supervised deep learning-based multi-task model (DLMM) for predicting treatment response and pathological complete response (pCR) to NAC. Our approach models pairwise feature interactions across scales by employing concatenate fusion of single-scale feature representations, and controls the expressiveness of each representation via a gating-based attention mechanism. Results: In the retrospective analysis, DLMM exhibited excellent predictive performance for treatment response, with areas under the receiver operating characteristic curve (AUC) of 0.869 [95% confidence interval (95% CI): 0.806-0.933] in the internal testing set and 0.841 (95% CI: 0.814-0.867) in the external testing sets. For the pCR prediction task, DLMM reached AUCs of 0.865 (95% CI: 0.763-0.964) in the internal testing set and 0.821 (95% CI: 0.763-0.878) in the pooled external testing set. In the prospective testing study, DLMM also demonstrated favorable predictive performance, with AUCs of 0.829 (95% CI: 0.754-0.903) and 0.821 (95% CI: 0.692-0.949) for treatment response and pCR prediction, respectively. DLMM significantly outperformed the baseline models in all testing sets (P<0.05). Heatmaps were employed to interpret the decision-making basis of the model. Furthermore, exploration of the biological basis revealed that high DLMM scores were associated with immune-related pathways and cells in the microenvironment. Conclusions: The DLMM represents a valuable tool that aids clinicians in selecting personalized treatment strategies for breast cancer patients.
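The concatenate fusion with a gating-based attention mechanism described in Methods can be sketched as a small PyTorch module; the feature dimension, gate form, and two sigmoid task heads below are assumptions for illustration, not the actual DLMM architecture.

```python
import torch
import torch.nn as nn

class GatedConcatFusion(nn.Module):
    """Concatenate two single-scale slide-level feature vectors and weight them with a
    learned sigmoid gate before per-task prediction heads."""
    def __init__(self, dim: int = 256, n_tasks: int = 2):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, 2 * dim), nn.Sigmoid())
        self.heads = nn.ModuleList([nn.Linear(2 * dim, 1) for _ in range(n_tasks)])

    def forward(self, feat_low: torch.Tensor, feat_high: torch.Tensor):
        fused = torch.cat([feat_low, feat_high], dim=-1)      # cross-scale concatenate fusion
        fused = fused * self.gate(fused)                      # gating-based attention
        return [torch.sigmoid(h(fused)) for h in self.heads]  # e.g., treatment response and pCR

model = GatedConcatFusion()
low_mag = torch.randn(4, 256)    # e.g., low-magnification WSI features for a batch of 4 slides
high_mag = torch.randn(4, 256)   # e.g., high-magnification features
response_prob, pcr_prob = model(low_mag, high_mag)
print(response_prob.shape, pcr_prob.shape)
```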
基金supported by Postgraduate Research&Practice Innovation Program of Jiangsu Province,China(Grant No.SJCX24_1332)Jiangsu Province Education Science Planning Project in 2024(Grant No.B-b/2024/01/122)High-Level Talent Scientific Research Foundation of Jinling Institute of Technology,China(Grant No.jit-b-201918).
文摘Digital watermarking technology plays an important role in detecting malicious tampering and protecting image copyright.However,in practical applications,this technology faces various problems such as severe image distortion,inaccurate localization of the tampered regions,and difficulty in recovering content.Given these shortcomings,a fragile image watermarking algorithm for tampering blind-detection and content self-recovery is proposed.The multi-feature watermarking authentication code(AC)is constructed using texture feature of local binary patterns(LBP),direct coefficient of discrete cosine transform(DCT)and contrast feature of gray level co-occurrence matrix(GLCM)for detecting the tampered region,and the recovery code(RC)is designed according to the average grayscale value of pixels in image blocks for recovering the tampered content.Optimal pixel adjustment process(OPAP)and least significant bit(LSB)algorithms are used to embed the recovery code and authentication code into the image in a staggered manner.When detecting the integrity of the image,the authentication code comparison method and threshold judgment method are used to perform two rounds of tampering detection on the image and blindly recover the tampered content.Experimental results show that this algorithm has good transparency,strong and blind detection,and self-recovery performance against four types of malicious attacks and some conventional signal processing operations.When resisting copy-paste,text addition,cropping and vector quantization under the tampering rate(TR)10%,the average tampering detection rate is up to 94.09%,and the peak signal-to-noise ratio(PSNR)of the watermarked image and the recovered image are both greater than 41.47 and 40.31 dB,which demonstrates its excellent advantages compared with other related algorithms in recent years.
基金funded by the National Natural Science Foundation of China,grant numbers 52374156 and 62476005。
文摘Images taken in dim environments frequently exhibit issues like insufficient brightness,noise,color shifts,and loss of detail.These problems pose significant challenges to dark image enhancement tasks.Current approaches,while effective in global illumination modeling,often struggle to simultaneously suppress noise and preserve structural details,especially under heterogeneous lighting.Furthermore,misalignment between luminance and color channels introduces additional challenges to accurate enhancement.In response to the aforementioned difficulties,we introduce a single-stage framework,M2ATNet,using the multi-scale multi-attention and Transformer architecture.First,to address the problems of texture blurring and residual noise,we design a multi-scale multi-attention denoising module(MMAD),which is applied separately to the luminance and color channels to enhance the structural and texture modeling capabilities.Secondly,to solve the non-alignment problem of the luminance and color channels,we introduce the multi-channel feature fusion Transformer(CFFT)module,which effectively recovers the dark details and corrects the color shifts through cross-channel alignment and deep feature interaction.To guide the model to learn more stably and efficiently,we also fuse multiple types of loss functions to form a hybrid loss term.We extensively evaluate the proposed method on various standard datasets,including LOL-v1,LOL-v2,DICM,LIME,and NPE.Evaluation in terms of numerical metrics and visual quality demonstrate that M2ATNet consistently outperforms existing advanced approaches.Ablation studies further confirm the critical roles played by the MMAD and CFFT modules to detail preservation and visual fidelity under challenging illumination-deficient environments.
基金provided by the Science Research Project of Hebei Education Department under grant No.BJK2024115.
文摘High-resolution remote sensing images(HRSIs)are now an essential data source for gathering surface information due to advancements in remote sensing data capture technologies.However,their significant scale changes and wealth of spatial details pose challenges for semantic segmentation.While convolutional neural networks(CNNs)excel at capturing local features,they are limited in modeling long-range dependencies.Conversely,transformers utilize multihead self-attention to integrate global context effectively,but this approach often incurs a high computational cost.This paper proposes a global-local multiscale context network(GLMCNet)to extract both global and local multiscale contextual information from HRSIs.A detail-enhanced filtering module(DEFM)is proposed at the end of the encoder to refine the encoder outputs further,thereby enhancing the key details extracted by the encoder and effectively suppressing redundant information.In addition,a global-local multiscale transformer block(GLMTB)is proposed in the decoding stage to enable the modeling of rich multiscale global and local information.We also design a stair fusion mechanism to transmit deep semantic information from deep to shallow layers progressively.Finally,we propose the semantic awareness enhancement module(SAEM),which further enhances the representation of multiscale semantic features through spatial attention and covariance channel attention.Extensive ablation analyses and comparative experiments were conducted to evaluate the performance of the proposed method.Specifically,our method achieved a mean Intersection over Union(mIoU)of 86.89%on the ISPRS Potsdam dataset and 84.34%on the ISPRS Vaihingen dataset,outperforming existing models such as ABCNet and BANet.
基金supported by the National Natural Science Foundation of China(62376106)The Science and Technology Development Plan of Jilin Province(20250102212JC).
文摘Driven by advancements in mobile internet technology,images have become a crucial data medium.Ensuring the security of image information during transmission has thus emerged as an urgent challenge.This study proposes a novel image encryption algorithm specifically designed for grayscale image security.This research introduces a new Cantor diagonal matrix permutation method.The proposed permutation method uses row and column index sequences to control the Cantor diagonal matrix,where the row and column index sequences are generated by a spatiotemporal chaotic system named coupled map lattice(CML).The high initial value sensitivity of the CML system makes the permutation method highly sensitive and secure.Additionally,leveraging fractal theory,this study introduces a chaotic fractal matrix and applies this matrix in the diffusion process.This chaotic fractal matrix exhibits selfsimilarity and irregularity.Using the Cantor diagonal matrix and chaotic fractal matrix,this paper introduces a fast image encryption algorithm involving two diffusion steps and one permutation step.Moreover,the algorithm achieves robust security with only a single encryption round,ensuring high operational efficiency.Experimental results show that the proposed algorithm features an expansive key space,robust security,high sensitivity,high efficiency,and superior statistical properties for the ciphered images.Thus,the proposed algorithm not only provides a practical solution for secure image transmission but also bridges fractal theory with image encryption techniques,thereby opening new research avenues in chaotic cryptography and advancing the development of information security technology.
基金funded by University of Transport and Communications(UTC)under grant number T2025-CN-004.
文摘Reversible data hiding(RDH)enables secret data embedding while preserving complete cover image recovery,making it crucial for applications requiring image integrity.The pixel value ordering(PVO)technique used in multi-stego images provides good image quality but often results in low embedding capability.To address these challenges,this paper proposes a high-capacity RDH scheme based on PVO that generates three stego images from a single cover image.The cover image is partitioned into non-overlapping blocks with pixels sorted in ascending order.Four secret bits are embedded into each block’s maximum pixel value,while three additional bits are embedded into the second-largest value when the pixel difference exceeds a predefined threshold.A similar embedding strategy is also applied to the minimum side of the block,including the second-smallest pixel value.This design enables each block to embed up to 14 bits of secret data.Experimental results demonstrate that the proposed method achieves significantly higher embedding capacity and improved visual quality compared to existing triple-stego RDH approaches,advancing the field of reversible steganography.
基金This study was supported by:Inner Mongolia Academy of Forestry Sciences Open Research Project(Grant No.KF2024MS03)The Project to Improve the Scientific Research Capacity of the Inner Mongolia Academy of Forestry Sciences(Grant No.2024NLTS04)The Innovation and Entrepreneurship Training Program for Undergraduates of Beijing Forestry University(Grant No.X202410022268).
文摘Remote sensing image super-resolution technology is pivotal for enhancing image quality in critical applications including environmental monitoring,urban planning,and disaster assessment.However,traditional methods exhibit deficiencies in detail recovery and noise suppression,particularly when processing complex landscapes(e.g.,forests,farmlands),leading to artifacts and spectral distortions that limit practical utility.To address this,we propose an enhanced Super-Resolution Generative Adversarial Network(SRGAN)framework featuring three key innovations:(1)Replacement of L1/L2 loss with a robust Charbonnier loss to suppress noise while preserving edge details via adaptive gradient balancing;(2)A multi-loss joint optimization strategy dynamically weighting Charbonnier loss(β=0.5),Visual Geometry Group(VGG)perceptual loss(α=1),and adversarial loss(γ=0.1)to synergize pixel-level accuracy and perceptual quality;(3)A multi-scale residual network(MSRN)capturing cross-scale texture features(e.g.,forest canopies,mountain contours).Validated on Sentinel-2(10 m)and SPOT-6/7(2.5 m)datasets covering 904 km2 in Motuo County,Xizang,our method outperforms the SRGAN baseline(SR4RS)with Peak Signal-to-Noise Ratio(PSNR)gains of 0.29 dB and Structural Similarity Index(SSIM)improvements of 3.08%on forest imagery.Visual comparisons confirm enhanced texture continuity despite marginal Learned Perceptual Image Patch Similarity(LPIPS)increases.The method significantly improves noise robustness and edge retention in complex geomorphology,demonstrating 18%faster response in forest fire early warning and providing high-resolution support for agricultural/urban monitoring.Future work will integrate spectral constraints and lightweight architectures.
基金funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under grant No.(DGSSR-2025-02-01295).
文摘Alzheimer’s Disease(AD)is a progressive neurodegenerative disorder that significantly affects cognitive function,making early and accurate diagnosis essential.Traditional Deep Learning(DL)-based approaches often struggle with low-contrast MRI images,class imbalance,and suboptimal feature extraction.This paper develops a Hybrid DL system that unites MobileNetV2 with adaptive classification methods to boost Alzheimer’s diagnosis by processing MRI scans.Image enhancement is done using Contrast-Limited Adaptive Histogram Equalization(CLAHE)and Enhanced Super-Resolution Generative Adversarial Networks(ESRGAN).A classification robustness enhancement system integrates class weighting techniques and a Matthews Correlation Coefficient(MCC)-based evaluation method into the design.The trained and validated model gives a 98.88%accuracy rate and 0.9614 MCC score.We also performed a 10-fold cross-validation experiment with an average accuracy of 96.52%(±1.51),a loss of 0.1671,and an MCC score of 0.9429 across folds.The proposed framework outperforms the state-of-the-art models with a 98%weighted F1-score while decreasing misdiagnosis results for every AD stage.The model demonstrates apparent separation abilities between AD progression stages according to the results of the confusion matrix analysis.These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer’s diagnosis,contributing to improved computer-aided diagnosis(CAD)systems in clinical practice.
文摘The integration of image analysis through deep learning(DL)into rock classification represents a significant leap forward in geological research.While traditional methods remain invaluable for their expertise and historical context,DL offers a powerful complement by enhancing the speed,objectivity,and precision of the classification process.This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks(CNNs)for geological image analysis,particularly in the classification of igneous,metamorphic,and sedimentary rock types from rock thin section(RTS)images.This study primarily focuses on classic image augmentation techniques and evaluates their impact on model accuracy and precision.Results demonstrate that augmentation techniques like Equalize significantly enhance the model's classification capabilities,achieving an F1-Score of 0.9869 for igneous rocks,0.9884 for metamorphic rocks,and 0.9929 for sedimentary rocks,representing improvements compared to the baseline original results.Moreover,the weighted average F1-Score across all classes and techniques is 0.9886,indicating an enhancement.Conversely,methods like Distort lead to decreased accuracy and F1-Score,with an F1-Score of 0.949 for igneous rocks,0.954 for metamorphic rocks,and 0.9416 for sedimentary rocks,exacerbating the performance compared to the baseline.The study underscores the practicality of image data augmentation in geological image classification and advocates for the adoption of DL methods in this domain for automation and improved results.The findings of this study can benefit various fields,including remote sensing,mineral exploration,and environmental monitoring,by enhancing the accuracy of geological image analysis both for scientific research and industrial applications.
基金supported by the National Key Research and Development Project of China(No.2023YFB3709605)the National Natural Science Foundation of China(No.62073193)the National College Student Innovation Training Program(No.202310422122)。
文摘Potential high-temperature risks exist in heat-prone components of electric moped charging devices,such as sockets,interfaces,and controllers.Traditional detection methods have limitations in terms of real-time performance and monitoring scope.To address this,a temperature detection method based on infrared image processing has been proposed:utilizing the median filtering algorithm to denoise the original infrared image,then applying an image segmentation algorithm to divide the image.
基金the National Natural Science Foundation of China(42472194,42302153,and 42002144)the Fundamental Research Funds for the Central Univer-sities(22CX06002A).
文摘Karst fractures serve as crucial seepage channels and storage spaces for carbonate natural gas reservoirs,and electrical image logs are vital data for visualizing and characterizing such fractures.However,the conventional approach of identifying fractures using electrical image logs predominantly relies on manual processes that are not only time-consuming but also highly subjective.In addition,the heterogeneity and strong dissolution tendency of karst carbonate reservoirs lead to complexity and variety in fracture geometry,which makes it difficult to accurately identify fractures.In this paper,the electrical image logs network(EILnet)da deep-learning-based intelligent semantic segmentation model with a selective attention mechanism and selective feature fusion moduledwas created to enable the intelligent identification and segmentation of different types of fractures through electrical logging images.Data from electrical image logs representing structural and induced fractures were first selected using the sliding window technique before image inpainting and data augmentation were implemented for these images to improve the generalizability of the model.Various image-processing tools,including the bilateral filter,Laplace operator,and Gaussian low-pass filter,were also applied to the electrical logging images to generate a multi-attribute dataset to help the model learn the semantic features of the fractures.The results demonstrated that the EILnet model outperforms mainstream deep-learning semantic segmentation models,such as Fully Convolutional Networks(FCN-8s),U-Net,and SegNet,for both the single-channel dataset and the multi-attribute dataset.The EILnet provided significant advantages for the single-channel dataset,and its mean intersection over union(MIoU)and pixel accuracy(PA)were 81.32%and 89.37%,respectively.In the case of the multi-attribute dataset,the identification capability of all models improved to varying degrees,with the EILnet achieving the highest MIoU and PA of 83.43%and 91.11%,respectively.Further,applying the EILnet model to various blind wells demonstrated its ability to provide reliable fracture identification,thereby indicating its promising potential applications.
基金supported by the National Natural Science Foundation of China(Grant Nos.82272955 and 22203057)the Natural Science Foundation of Fujian Province(Grant No.2021J011361).
文摘The presence of a positive deep surgical margin in tongue squamous cell carcinoma(TSCC)significantly elevates the risk of local recurrence.Therefore,a prompt and precise intraoperative assessment of margin status is imperative to ensure thorough tumor resection.In this study,we integrate Raman imaging technology with an artificial intelligence(AI)generative model,proposing an innovative approach for intraoperative margin status diagnosis.This method utilizes Raman imaging to swiftly and non-invasively capture tissue Raman images,which are then transformed into hematoxylin-eosin(H&E)-stained histopathological images using an AI generative model for histopathological diagnosis.The generated H&E-stained images clearly illustrate the tissue’s pathological conditions.Independently reviewed by three pathologists,the overall diagnostic accuracy for distinguishing between tumor tissue and normal muscle tissue reaches 86.7%.Notably,it outperforms current clinical practices,especially in TSCC with positive lymph node metastasis or moderately differentiated grades.This advancement highlights the potential of AI-enhanced Raman imaging to significantly improve intraoperative assessments and surgical margin evaluations,promising a versatile diagnostic tool beyond TSCC.
基金supported by the National Natural Science Foundation of China(NSFC)12333010the National Key R&D Program of China 2022YFF0503002+3 种基金the Strategic Priority Research Program of the Chinese Academy of Sciences(grant No.XDB0560000)the NSFC 11921003supported by the Prominent Postdoctoral Project of Jiangsu Province(2023ZB304)supported by the Strategic Priority Research Program on Space Science,the Chinese Academy of Sciences,grant No.XDA15320000.
文摘Indirect X-ray modulation imaging has been adopted in a number of solar missions and provided reconstructed X-ray images of solar flares that are of great scientific importance.However,the assessment of the image quality of the reconstruction is still difficult,which is particularly useful for scheme design of X-ray imaging systems,testing and improvement of imaging algorithms,and scientific research of X-ray sources.Currently,there is no specified method to quantitatively evaluate the quality of X-ray image reconstruction and the point-spread function(PSF)of an X-ray imager.In this paper,we propose percentage proximity degree(PPD)by considering the imaging characteristics of X-ray image reconstruction and in particular,sidelobes and their effects on imaging quality.After testing a variety of imaging quality assessments in six aspects,we utilized the technique for order preference by similarity to ideal solution to the indices that meet the requirements.Then we develop the final quality index for X-ray image reconstruction,QuIX,which consists of the selected indices and the new PPD.QuIX performs well in a series of tests,including assessment of instrument PSF and simulation tests under different grid configurations,as well as imaging tests with RHESSI data.It is also a useful tool for testing of imaging algorithms,and determination of imaging parameters for both RHESSI and ASO-S/Hard X-ray Imager,such as field of view,beam width factor,and detector selection.
基金the Deanship of Scientifc Research at King Khalid University for funding this work through large group Research Project under grant number RGP2/421/45supported via funding from Prince Sattam bin Abdulaziz University project number(PSAU/2024/R/1446)+1 种基金supported by theResearchers Supporting Project Number(UM-DSR-IG-2023-07)Almaarefa University,Riyadh,Saudi Arabia.supported by the Basic Science Research Program through the National Research Foundation of Korea(NRF)funded by the Ministry of Education(No.2021R1F1A1055408).
文摘Machine learning(ML)is increasingly applied for medical image processing with appropriate learning paradigms.These applications include analyzing images of various organs,such as the brain,lung,eye,etc.,to identify specific flaws/diseases for diagnosis.The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification.Most of the extracted image features are irrelevant and lead to an increase in computation time.Therefore,this article uses an analytical learning paradigm to design a Congruent Feature Selection Method to select the most relevant image features.This process trains the learning paradigm using similarity and correlation-based features over different textural intensities and pixel distributions.The similarity between the pixels over the various distribution patterns with high indexes is recommended for disease diagnosis.Later,the correlation based on intensity and distribution is analyzed to improve the feature selection congruency.Therefore,the more congruent pixels are sorted in the descending order of the selection,which identifies better regions than the distribution.Now,the learning paradigm is trained using intensity and region-based similarity to maximize the chances of selection.Therefore,the probability of feature selection,regardless of the textures and medical image patterns,is improved.This process enhances the performance of ML applications for different medical image processing.The proposed method improves the accuracy,precision,and training rate by 13.19%,10.69%,and 11.06%,respectively,compared to other models for the selected dataset.The mean error and selection time is also reduced by 12.56%and 13.56%,respectively,compared to the same models and dataset.
基金supported by the National Key R&D Program of China 2022YFF0503002the National Natural Science Foundation of China(NSFC,Grant Nos.12333010 and 12233012)+2 种基金the Strategic Priority Research Program of the Chinese Academy of Sciences(grant No.XDB0560000)supported by the Prominent Postdoctoral Project of Jiangsu Province(2023ZB304)supported by the Strategic Priority Research Program on Space Science,the Chinese Academy of Sciences,grant No.XDA15320000.
Abstract: Imaging observations of solar X-ray bursts can reveal details of the energy release process and particle acceleration in flares. Most hard X-ray imagers use the modulation-based Fourier transform imaging method, an indirect imaging technique that requires algorithms to reconstruct and optimize images. During the last decade, a variety of such algorithms have been developed and improved. However, without a true, reference image of the observation, it is difficult to quantitatively evaluate the image quality delivered by the different solutions, and how to choose the imaging-parameter values that give these algorithms their best performance is also an open question. In this study, we present a detailed test of the characteristics of these algorithms, of the imaging dynamic range, and of a crucial parameter for the CLEAN method, the clean beam width factor (CBWF). We first used SDO/AIA EUV images to compute DEM maps and calculate thermal X-ray maps. These realistic sources, together with several types of simulated sources, were then used as the ground truth in imaging simulations for both RHESSI and ASO-S/HXI, and the different solutions were evaluated quantitatively by a number of means. The overall results suggest that EM, PIXON, and CLEAN are exceptional at sidelobe elimination, producing images with clear source details. Although MEM_GE, MEM_NJIT, VIS_WV, and VIS_CS offer fast imaging and generate good images, each has its own imperfections. The two forward-fit algorithms, VF and FF, perform differently, with VF appearing more robust and useful. We also demonstrate the imaging capability of HXI and the available HXI algorithms. Furthermore, the effect of the CBWF on image quality was investigated, and optimal settings for both RHESSI and HXI are proposed.
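Quantitative evaluation against a known ground truth can be as simple as computing an error measure and a correlation measure between the simulated source and each algorithm's reconstruction, then sweeping a parameter such as the CBWF. The snippet below is a generic sketch of that comparison; `clean_reconstruct`, the metric choice, and the CBWF values are placeholders, not the study's actual evaluation pipeline.

```python
import numpy as np

def reconstruction_metrics(truth, recon):
    """Compare a reconstructed X-ray map with the known ground-truth source.

    Both inputs are 2-D arrays on the same pixel grid (e.g., simulated thermal maps).
    Returns normalized RMSE (lower is better) and Pearson correlation (higher is better).
    """
    t = np.asarray(truth, dtype=float).ravel()
    r = np.asarray(recon, dtype=float).ravel()
    nrmse = np.sqrt(np.mean((r - t) ** 2)) / (t.max() - t.min() + 1e-12)
    corr = np.corrcoef(t, r)[0, 1]
    return nrmse, corr

# Hypothetical sweep over the clean beam width factor for a CLEAN-style reconstruction;
# `clean_reconstruct` stands in for whichever imaging routine is under test.
# for cbwf in (1.0, 1.5, 2.0):
#     recon = clean_reconstruct(visibilities, beam_width_factor=cbwf)
#     print(cbwf, reconstruction_metrics(truth_map, recon))
```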
Abstract: Large language models (LLMs), such as ChatGPT developed by OpenAI, represent a significant advancement in artificial intelligence (AI), designed to understand, generate, and interpret human language by analyzing extensive text data. Their potential integration into clinical settings offers a promising avenue that could transform clinical diagnosis and decision-making processes in the future (Thirunavukarasu et al., 2023). This article aims to provide an in-depth analysis of LLMs' current and potential impact on clinical practices. Their ability to generate differential diagnosis lists underscores their potential as invaluable tools in medical practice and education (Hirosawa et al., 2023; Koga et al., 2023).
Funding: Supported by the Henan Province Key Research and Development Project (231111211300); the Central Government of Henan Province Guides Local Science and Technology Development Funds (Z20231811005); the Henan Province Key Research and Development Project (231111110100); the Henan Provincial Outstanding Foreign Scientist Studio (GZS2024006); and the Henan Provincial Joint Fund for Scientific and Technological Research and Development Plan (Application and Overcoming Technical Barriers) (242103810028).
Abstract: The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible image. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which extract the low-frequency and high-frequency information of the image, respectively. Because this extraction may leave some information uncaptured, a compensation encoder is proposed to supplement the missing information. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines the low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module are introduced to perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
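A schematic of the three-branch encoding idea (base, detail, and compensation encoders feeding a shared decoder) might look like the PyTorch sketch below. The layer sizes, the element-wise-maximum fusion, and the single-scale layout are simplifications for illustration; the paper's network uses multi-scale decomposition and an attention-based fusion module instead.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True))

class FusionAutoencoder(nn.Module):
    """Illustrative autoencoder with base, detail, and compensation branches."""

    def __init__(self, channels=16):
        super().__init__()
        self.base = conv_block(1, channels)          # low-frequency structure
        self.detail = conv_block(1, channels)        # high-frequency texture
        self.compensate = conv_block(1, channels)    # information missed by the branches above
        self.decoder = nn.Sequential(
            conv_block(3 * channels, channels),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, infrared, visible):
        # Encode each modality, then fuse branch features by element-wise maximum,
        # a simple stand-in for the attention-based fusion described in the paper.
        def encode(x):
            return torch.cat([self.base(x), self.detail(x), self.compensate(x)], dim=1)
        fused = torch.maximum(encode(infrared), encode(visible))
        return self.decoder(fused)

# model = FusionAutoencoder()
# fused = model(torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256))
```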
Funding: Supported by the National Natural Science Foundation of China (No. 42301518); the Hubei Key Laboratory of Regional Development and Environmental Response (No. 2023(A)002); and the Key Laboratory of the Evaluation and Monitoring of Southwest Land Resources (Ministry of Education) (No. TDSYS202304).
Abstract: Image-maps, a hybrid design with satellite images as background and map symbols overlaid, aim to combine the high interpretation efficiency of maps with the realism of satellite images. The usability of image-maps is influenced by the representation of both the background images and the map symbols. Many researchers have explored optimizations of background images and symbolization techniques to reduce the complexity of image-maps and improve their usability, but little literature addresses the optimum amount of symbol loading. This study focuses on the effects of background image complexity and map symbol load on the usability (i.e., effectiveness and efficiency) of image-maps. Experiments were conducted through user studies with eye-tracking equipment and an online questionnaire survey. The experimental data sets included image-maps with ten levels of map symbol load in ten areas, and forty volunteers took part in the target-searching experiments. We found that usability, measured as the average time a target was viewed (efficiency) and the average number of revisits (effectiveness), is influenced by the complexity of the background image, and that each image-map has a peak corresponding to an optimum symbol load. This optimum symbol load itself also peaks as the complexity of the background image or image-map increases. The complexity of the background image therefore serves as a guideline for the optimum map symbol load in image-map design. This study enhances user experience by optimizing visual clarity and managing cognitive load; understanding how these factors interact can help create adaptive maps that maintain clarity and usability, guiding AI algorithms to adjust symbol density based on user context. This research establishes practices for map design, making cartographic tools more innovative and more user-centric.
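One way such findings could inform adaptive symbol placement is to estimate background complexity from the image itself and map it to a symbol budget. The sketch below uses edge density as a crude complexity proxy; the function names, thresholds, and the linear mapping are illustrative assumptions, not values fitted from this study's experiments.

```python
import numpy as np

def edge_density(gray_image):
    """A crude complexity proxy: fraction of pixels with strong gradient magnitude."""
    gy, gx = np.gradient(np.asarray(gray_image, dtype=float))
    magnitude = np.hypot(gx, gy)
    return float((magnitude > magnitude.mean() + magnitude.std()).mean())

def suggest_symbol_load(gray_image, min_symbols=10, max_symbols=100, busy_density=0.25):
    """Map background complexity to a symbol budget: busier imagery gets fewer symbols.

    busy_density is an assumed edge density for a 'very busy' background; all
    thresholds here are illustrative rather than derived from the user study.
    """
    complexity = min(edge_density(gray_image) / busy_density, 1.0)
    budget = max_symbols - complexity * (max_symbols - min_symbols)
    return int(round(budget))
```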
Funding: Supported by the National Natural Science Foundation of China (Nos. 62201454 and 62306235) and the Xi'an Science and Technology Program of the Xi'an Science and Technology Bureau (No. 23SFSF0004).
Abstract: Unmanned aerial vehicle (UAV) images captured under low-light conditions often suffer from noise and uneven illumination. To address these issues, we propose a low-light image enhancement algorithm for UAV images that is inspired by the Retinex theory and guided by a light weighted map. First, we propose a new network for reflectance component processing to suppress the noise in images. Second, we construct an illumination enhancement module that uses the light weighted map to guide the enhancement process. Finally, the processed reflectance and illumination components are recombined to obtain the enhanced results. Experimental results show that our method suppresses noise while increasing image brightness and prevents over-enhancement in bright regions. Code and data are available at https://gitee.com/baixiaotong2/uav-images.git.
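For orientation, a classical single-scale Retinex decomposition captures the overall pipeline: estimate illumination by smoothing, divide it out to obtain reflectance, enhance the illumination, and recombine. The sketch below is that baseline only; the proposed method replaces the Gaussian estimate and the fixed gamma with learned networks and the light weighted map.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_enhance(low_light, sigma=15.0, gamma=0.6):
    """Classical single-scale Retinex-style enhancement (a baseline, not the learned model).

    low_light : HxW grayscale image with values in [0, 1]
    """
    eps = 1e-6
    illumination = gaussian_filter(low_light, sigma) + eps      # smooth illumination estimate
    reflectance = np.clip(low_light / illumination, 0.0, 3.0)   # detail-carrying component
    enhanced_illumination = illumination ** gamma               # gamma curve brightens dark areas
    return np.clip(reflectance * enhanced_illumination, 0.0, 1.0)

# enhanced = retinex_enhance(np.random.rand(256, 256) * 0.2)  # toy dark image
```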
Funding: Supported by the National Natural Science Foundation of China (No. 82371933); the National Natural Science Foundation of Shandong Province of China (No. ZR2021MH120); the Taishan Scholars Project (No. tsqn202211378); and the Shandong Provincial Natural Science Foundation for Excellent Young Scholars (No. ZR2024YQ075).
Abstract: Objective: Early prediction of response before neoadjuvant chemotherapy (NAC) is crucial for personalized treatment planning in patients with locally advanced breast cancer. We aim to develop a multi-task model using multiscale whole slide image (WSI) features to predict the response of breast cancer to NAC at a finer level. Methods: This work collected 1,670 whole slide images for the training and validation sets, internal testing sets, external testing sets, and prospective testing sets of a weakly-supervised, deep learning-based multi-task model (DLMM) that predicts treatment response and pathological complete response (pCR) to NAC. Our approach models pairwise feature interactions across scales by concatenating single-scale feature representations and controls the expressiveness of each representation via a gating-based attention mechanism. Results: In the retrospective analysis, DLMM exhibited excellent predictive performance for treatment response, with areas under the receiver operating characteristic curve (AUCs) of 0.869 [95% confidence interval (95% CI): 0.806–0.933] in the internal testing set and 0.841 (95% CI: 0.814–0.867) in the external testing sets. For the pCR prediction task, DLMM reached AUCs of 0.865 (95% CI: 0.763–0.964) in the internal testing set and 0.821 (95% CI: 0.763–0.878) in the pooled external testing set. In the prospective testing study, DLMM also demonstrated favorable predictive performance, with AUCs of 0.829 (95% CI: 0.754–0.903) and 0.821 (95% CI: 0.692–0.949) for treatment response and pCR prediction, respectively. DLMM significantly outperformed the baseline models in all testing sets (P<0.05). Heatmaps were employed to interpret the decision-making basis of the model. Furthermore, exploration of the biological basis showed that high DLMM scores were associated with immune-related pathways and cells in the microenvironment. Conclusions: DLMM represents a valuable tool that aids clinicians in selecting personalized treatment strategies for breast cancer patients.
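A minimal sketch of the gating-based attention over concatenated single-scale features is given below, assuming two 512-dimensional slide-level feature vectors and two sigmoid task heads (treatment response and pCR). All dimensions, the gate design, and the head layout are assumptions for illustration rather than the published DLMM configuration.

```python
import torch
import torch.nn as nn

class GatedScaleFusion(nn.Module):
    """Illustrative gating-based attention over two single-scale WSI feature vectors."""

    def __init__(self, dim_low=512, dim_high=512, hidden=256):
        super().__init__()
        self.embed_low = nn.Linear(dim_low, hidden)
        self.embed_high = nn.Linear(dim_high, hidden)
        # The gate outputs one weight per scale, controlling each representation's contribution.
        self.gate = nn.Sequential(nn.Linear(2 * hidden, 2), nn.Softmax(dim=-1))
        self.head_response = nn.Linear(2 * hidden, 1)   # treatment-response head
        self.head_pcr = nn.Linear(2 * hidden, 1)        # pCR head

    def forward(self, feat_low, feat_high):
        zl = torch.relu(self.embed_low(feat_low))
        zh = torch.relu(self.embed_high(feat_high))
        weights = self.gate(torch.cat([zl, zh], dim=-1))          # per-sample scale weights
        fused = torch.cat([weights[..., :1] * zl, weights[..., 1:] * zh], dim=-1)
        return torch.sigmoid(self.head_response(fused)), torch.sigmoid(self.head_pcr(fused))

# model = GatedScaleFusion()
# p_response, p_pcr = model(torch.rand(4, 512), torch.rand(4, 512))
```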