Multi-label image classification is recognized as an important task within the field of computer vision, a discipline that has experienced a significant escalation in research endeavors in recent years. The widespread adoption of convolutional neural networks (CNNs) has catalyzed the remarkable success of architectures such as ResNet-101 within the domain of image classification. However, in multi-label image classification tasks, it is crucial to consider the correlation between labels. To improve the accuracy and performance of multi-label classification and to fully combine visual and semantic features, many existing studies use graph convolutional networks (GCNs) for modeling. Object detection and multi-label image classification exhibit a degree of conceptual overlap; however, the integration of these two tasks within a unified framework has been relatively underexplored in the existing literature. In this paper, we propose the Object-GCN framework, a model combining the object detection network YOLOv5 with a graph convolutional network, and we carry out a thorough experimental analysis on a range of well-established public datasets. Object-GCN achieves significantly better performance than existing studies on the public datasets COCO2014, VOC2007, and VOC2012, reaching 86.9%, 96.7%, and 96.3% mean Average Precision (mAP), respectively.
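The GCN modeling mentioned above amounts to propagating label embeddings over a label co-occurrence graph. One propagation step can be sketched as follows; this is a minimal illustration, and the symmetric normalization and variable names are common conventions rather than the paper's exact design:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: add self-loops to the label co-occurrence
    adjacency A, symmetrically normalize it, propagate node features H through
    weights W, and apply ReLU. Shapes: A (n, n), H (n, d_in), W (d_in, d_out)."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)
```

Stacking two such layers over word-embedding node features is the usual way such frameworks produce label-aware classifiers.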
To reduce the discrepancy between the source and target domains, a new multi-label adaptation network (ML-ANet) based on multiple kernel variants with maximum mean discrepancies is proposed in this paper. The hidden representations of the task-specific layers in ML-ANet are embedded in the reproducing kernel Hilbert space (RKHS) so that the mean-embeddings of specific features in different domains can be precisely matched. Multiple kernel functions are used to improve feature distribution efficiency for explicit mean embedding matching, which can further reduce domain discrepancy. Adverse weather and cross-camera adaptation experiments are conducted to verify the effectiveness of the proposed ML-ANet. The results show that ML-ANet achieves higher accuracies than the compared state-of-the-art methods for multi-label image classification in both the adverse weather adaptation and cross-camera adaptation experiments. These results indicate that ML-ANet can alleviate the reliance on fully labeled training data and improve the accuracy of multi-label image classification in various domain shift scenarios.
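The multi-kernel maximum mean discrepancy at the heart of ML-ANet can be estimated directly from feature samples. A minimal sketch, assuming RBF kernels at a few fixed bandwidths and the simple biased estimator; the paper's exact kernel family and weighting may differ:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Pairwise squared Euclidean distances, then a Gaussian kernel.
    d2 = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d2)

def mk_mmd2(X, Y, gammas=(0.5, 1.0, 2.0)):
    """Biased multi-kernel MMD^2 estimate between source samples X and target
    samples Y, averaging RBF kernels at several bandwidths (illustrative)."""
    total = 0.0
    for g in gammas:
        total += (rbf_kernel(X, X, g).mean()
                  + rbf_kernel(Y, Y, g).mean()
                  - 2.0 * rbf_kernel(X, Y, g).mean())
    return total / len(gammas)
```

Adding this quantity to the task loss penalizes source/target feature mismatch, which is the adaptation mechanism the abstract describes.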
In this paper, we study the partial multi-label (PML) image classification problem, where each image is annotated with a candidate label set consisting of multiple relevant labels and other noisy labels. Existing PML methods typically design a disambiguation strategy to filter out noisy labels by utilizing prior knowledge with extra assumptions, which unfortunately is unavailable in many real tasks. Furthermore, because the objective function for disambiguation is usually elaborately designed on the whole training set, it can hardly be optimized in a deep model with stochastic gradient descent (SGD) on mini-batches. In this work, for the first time, we propose a deep model for PML to enhance the representation and discrimination ability. On the one hand, we propose a novel curriculum-based disambiguation strategy to progressively identify ground-truth labels by incorporating the varied difficulties of different classes. On the other hand, consistency regularization is introduced for model training to balance fitting identified easy labels and exploiting potential relevant labels. Extensive experimental results on the commonly used benchmark datasets show that the proposed method significantly outperforms the SOTA methods.
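The curriculum-based disambiguation idea can be sketched as a confidence threshold that tightens as training progresses, so confidently predicted (easy) candidate labels are identified first. The linear schedule and all names below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def curriculum_select(probs, candidates, epoch, max_epoch, t0=0.5, t1=0.9):
    """From a sample's candidate label set, keep labels whose predicted
    probability clears a threshold that rises from t0 to t1 over training.
    probs: per-class probabilities; candidates: boolean candidate mask."""
    t = t0 + (t1 - t0) * epoch / max_epoch
    return candidates & (probs >= t)
```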
At present, research on multi-label image classification mainly focuses on exploring the correlation between labels to improve classification accuracy. However, in existing methods, label correlation is calculated from the statistical information of the data; this correlation is global, depends on the dataset, and is not suitable for all samples. Moreover, in the process of extracting image features, the characteristic information of small objects is easily lost, resulting in low classification accuracy for small objects. To this end, this paper proposes a multi-label image classification model based on multi-scale fusion and adaptive label correlation. The main idea is as follows: first, feature maps at multiple scales are fused to enhance the feature information of small objects; semantic guidance then decomposes the fused feature map into per-category feature vectors, and the self-attention mechanism of a graph attention network adaptively mines the correlations between categories in the image, producing feature vectors containing category-related information for the final classification. The mean average precision of the model on the two public datasets VOC 2007 and MS COCO 2014 reached 95.6% and 83.6%, respectively, and most indicators are better than those of the latest existing methods.
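The adaptive label-correlation step relies on graph-attention coefficients computed per image. A dense single-head sketch following the standard GAT recipe (projection, pairwise LeakyReLU scoring, row-wise softmax); the paper's exact parameterization is an assumption here:

```python
import numpy as np

def gat_attention(H, W, a):
    """Graph-attention coefficients over per-category feature vectors H:
    project with W, score each ordered pair via a LeakyReLU of the
    concatenated features dotted with a, then softmax each row."""
    Z = H @ W
    n = Z.shape[0]
    scores = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            s = np.concatenate([Z[i], Z[j]]) @ a
            scores[i, j] = s if s > 0 else 0.2 * s   # LeakyReLU
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)          # rows sum to 1
```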
The integration of image analysis through deep learning (DL) into rock classification represents a significant leap forward in geological research. While traditional methods remain invaluable for their expertise and historical context, DL offers a powerful complement by enhancing the speed, objectivity, and precision of the classification process. This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks (CNNs) for geological image analysis, particularly in the classification of igneous, metamorphic, and sedimentary rock types from rock thin section (RTS) images. This study primarily focuses on classic image augmentation techniques and evaluates their impact on model accuracy and precision. Results demonstrate that augmentation techniques like Equalize significantly enhance the model's classification capabilities, achieving an F1-Score of 0.9869 for igneous rocks, 0.9884 for metamorphic rocks, and 0.9929 for sedimentary rocks, improvements over the original baseline results. Moreover, the weighted average F1-Score across all classes and techniques is 0.9886, indicating an overall enhancement. Conversely, methods like Distort lead to decreased accuracy, with an F1-Score of 0.949 for igneous rocks, 0.954 for metamorphic rocks, and 0.9416 for sedimentary rocks, degrading performance relative to the baseline. The study underscores the practicality of image data augmentation in geological image classification and advocates for the adoption of DL methods in this domain for automation and improved results. The findings can benefit various fields, including remote sensing, mineral exploration, and environmental monitoring, by enhancing the accuracy of geological image analysis for both scientific research and industrial applications.
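The Equalize augmentation highlighted above is, in its classic formulation, histogram equalization. A NumPy sketch for an 8-bit grayscale image (it assumes the image is not constant, which would make the normalization degenerate):

```python
import numpy as np

def equalize(img):
    """Histogram equalization for a uint8 grayscale image: build the
    cumulative histogram, rescale it to [0, 255], and remap pixels."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                     # first nonzero CDF value
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]
```

Applied as augmentation, this stretches low-contrast thin-section images to the full intensity range before training.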
Potential high-temperature risks exist in heat-prone components of electric moped charging devices, such as sockets, interfaces, and controllers. Traditional detection methods have limitations in terms of real-time performance and monitoring scope. To address this, a temperature detection method based on infrared image processing is proposed: the original infrared image is denoised with a median filtering algorithm, and an image segmentation algorithm is then applied to partition the image.
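The two steps named in the abstract, median-filter denoising followed by segmentation of hot regions, can be sketched as follows. The window size and the threshold-based segmentation are illustrative choices; the paper does not specify its segmentation algorithm beyond "image segmentation":

```python
import numpy as np

def median_filter(img, k=3):
    """k x k median filter with edge padding (pure-NumPy sketch)."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def segment_hot(img, thresh):
    """Binary mask of pixels at or above a temperature threshold."""
    return img >= thresh
```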
The presence of a positive deep surgical margin in tongue squamous cell carcinoma (TSCC) significantly elevates the risk of local recurrence. Therefore, a prompt and precise intraoperative assessment of margin status is imperative to ensure thorough tumor resection. In this study, we integrate Raman imaging technology with an artificial intelligence (AI) generative model, proposing an innovative approach for intraoperative margin status diagnosis. This method utilizes Raman imaging to swiftly and non-invasively capture tissue Raman images, which are then transformed into hematoxylin-eosin (H&E)-stained histopathological images using an AI generative model for histopathological diagnosis. The generated H&E-stained images clearly illustrate the tissue's pathological conditions. Independently reviewed by three pathologists, the overall diagnostic accuracy for distinguishing between tumor tissue and normal muscle tissue reaches 86.7%. Notably, it outperforms current clinical practices, especially in TSCC with positive lymph node metastasis or moderately differentiated grades. This advancement highlights the potential of AI-enhanced Raman imaging to significantly improve intraoperative assessments and surgical margin evaluations, promising a versatile diagnostic tool beyond TSCC.
Indirect X-ray modulation imaging has been adopted in a number of solar missions and has provided reconstructed X-ray images of solar flares that are of great scientific importance. However, assessing the image quality of the reconstruction remains difficult; such assessment is particularly useful for the scheme design of X-ray imaging systems, the testing and improvement of imaging algorithms, and scientific research on X-ray sources. Currently, there is no specified method to quantitatively evaluate the quality of X-ray image reconstruction or the point-spread function (PSF) of an X-ray imager. In this paper, we propose the percentage proximity degree (PPD), which considers the imaging characteristics of X-ray image reconstruction and, in particular, sidelobes and their effects on imaging quality. After testing a variety of imaging quality assessments in six aspects, we applied the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to the indices that meet the requirements. We then developed the final quality index for X-ray image reconstruction, QuIX, which consists of the selected indices and the new PPD. QuIX performs well in a series of tests, including assessment of the instrument PSF and simulation tests under different grid configurations, as well as imaging tests with RHESSI data. It is also a useful tool for testing imaging algorithms and for determining imaging parameters, such as field of view, beam width factor, and detector selection, for both RHESSI and the ASO-S/Hard X-ray Imager.
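The TOPSIS step used to shortlist quality indices is a standard multi-criteria ranking: normalize the decision matrix, weight it, and score each alternative by its closeness to the ideal solution. A sketch for benefit criteria (higher is better); the equal weighting is an assumption:

```python
import numpy as np

def topsis(M, weights):
    """Rank alternatives (rows of M) over benefit criteria (columns):
    vector-normalize, weight, then score by relative closeness to the
    ideal (column max) versus the anti-ideal (column min)."""
    V = (M / np.linalg.norm(M, axis=0)) * weights
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)       # 1 = best, 0 = worst
```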
Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws or diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Many extracted image features are irrelevant and increase computation time. Therefore, this article uses an analytical learning paradigm to design a Congruent Feature Selection Method that selects the most relevant image features. This process trains the learning paradigm using similarity- and correlation-based features over different textural intensities and pixel distributions. Similarity between pixels across distribution patterns with high indexes is recommended for disease diagnosis. The correlation based on intensity and distribution is then analyzed to improve feature selection congruency, and the most congruent pixels are sorted in descending order for selection, identifying regions more reliably than the distribution alone. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection, improving the probability of feature selection regardless of texture and medical image pattern. This process enhances the performance of ML applications across different medical image processing tasks. The proposed method improves the accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared with other models on the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared with the same models and dataset.
Imaging observations of solar X-ray bursts can reveal details of the energy release process and particle acceleration in flares. Most hard X-ray imagers use the modulation-based Fourier transform imaging method, an indirect imaging technique that requires algorithms to reconstruct and optimize images. During the last decade, a variety of algorithms have been developed and improved. However, it is difficult to quantitatively evaluate the image quality of different solutions without a true reference image of the observation. How to choose the values of imaging parameters for these algorithms to get the best performance is also an open question. In this study, we present a detailed test of the characteristics of these algorithms, the imaging dynamic range, and a crucial parameter for the CLEAN method, the clean beam width factor (CBWF). We first used SDO/AIA EUV images to compute DEM maps and calculate thermal X-ray maps. These realistic sources and several types of simulated sources were then used as the ground truth in imaging simulations for both RHESSI and ASO-S/HXI. The different solutions are evaluated quantitatively by a number of means. The overall results suggest that EM, PIXON, and CLEAN are exceptional methods for sidelobe elimination, producing images with clear source details. Although MEM_GE, MEM_NJIT, VIS_WV, and VIS_CS offer fast imaging and generate good images, each has imperfections unique to the method. The two forward-fit algorithms, VF and FF, perform differently, with VF appearing more robust and useful. We also demonstrate the imaging capability of HXI and the available HXI algorithms. Furthermore, the effect of the CBWF on image quality was investigated, and optimal settings for both RHESSI and HXI are proposed.
Large language models (LLMs), such as ChatGPT developed by OpenAI, represent a significant advancement in artificial intelligence (AI), designed to understand, generate, and interpret human language by analyzing extensive text data. Their potential integration into clinical settings offers a promising avenue that could transform clinical diagnosis and decision-making processes in the future (Thirunavukarasu et al., 2023). This article aims to provide an in-depth analysis of LLMs' current and potential impact on clinical practices. Their ability to generate differential diagnosis lists underscores their potential as invaluable tools in medical practice and education (Hirosawa et al., 2023; Koga et al., 2023).
The fusion of infrared and visible images should emphasize the salient targets of the infrared image while preserving the textural details of the visible image. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which extract low-frequency and high-frequency information from the image, respectively. Because this extraction may leave some information uncaptured, a compensation encoder is proposed to supplement the missing information. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines the low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
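The base/detail split the two encoders target corresponds to a low-/high-frequency decomposition of the image. A simple box-blur sketch; the actual encoders are learned networks, so this only illustrates the decomposition objective:

```python
import numpy as np

def base_detail_split(img, k=3):
    """Split a grayscale image into a low-frequency 'base' (k x k box blur
    with edge padding) and a high-frequency 'detail' residual."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    h, w = img.shape
    base = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            base[i, j] = padded[i:i + k, j:j + k].mean()
    return base, img - base
```

By construction the two parts sum back to the input, which is what lets a decoder recombine them losslessly.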
Image-maps, a hybrid design with satellite images as background and map symbols overlaid, aim to combine the advantages of maps' high interpretation efficiency and satellite images' realism. The usability of image-maps is influenced by the representation of the background images and map symbols. Many researchers have explored optimizations of background images and symbolization techniques to reduce the complexity of image-maps and improve usability. However, little literature addresses the optimum amount of symbol loading. This study focuses on the effects of background image complexity and map symbol load on the usability (i.e., effectiveness and efficiency) of image-maps. Experiments were conducted as user studies via eye-tracking equipment and an online questionnaire survey. The experimental data sets included image-maps with ten levels of map symbol load in ten areas, and forty volunteers took part in the target-searching experiments. It was found that usability, measured as the average time viewed (efficiency) and average revisits (effectiveness) of recorded targets, is influenced by the complexity of the background images, and that a peak exists for the optimum symbol load of an image-map. The optimum symbol load for different image-maps also peaks as the complexity of the background image/image-map increases. The complexity of background images can therefore serve as a guideline for optimum map symbol load in image-map design. This study enhances user experience by optimizing visual clarity and managing cognitive load. Understanding how these factors interact can help create adaptive maps that maintain clarity and usability, guiding AI algorithms to adjust symbol density based on user context. This research establishes practices for map design, making cartographic tools more innovative and more user-centric.
Objective: Early prediction of response before neoadjuvant chemotherapy (NAC) is crucial for personalized treatment plans for locally advanced breast cancer patients. We aim to develop a multi-task model using multiscale whole slide image (WSI) features to predict the response to breast cancer NAC more finely. Methods: This work collected 1,670 whole slide images for the training and validation sets, internal testing sets, external testing sets, and prospective testing sets of the weakly-supervised deep learning-based multi-task model (DLMM) for predicting treatment response and pCR to NAC. Our approach models two-by-two feature interactions across scales by employing concatenated fusion of single-scale feature representations, and controls the expressiveness of each representation via a gating-based attention mechanism. Results: In the retrospective analysis, DLMM exhibited excellent performance in predicting treatment response, with areas under the receiver operating characteristic curve (AUCs) of 0.869 [95% confidence interval (95% CI): 0.806-0.933] in the internal testing set and 0.841 (95% CI: 0.814-0.867) in the external testing sets. For the pCR prediction task, DLMM reached AUCs of 0.865 (95% CI: 0.763-0.964) in internal testing and 0.821 (95% CI: 0.763-0.878) in the pooled external testing set. In the prospective testing study, DLMM also demonstrated favorable predictive performance, with AUCs of 0.829 (95% CI: 0.754-0.903) and 0.821 (95% CI: 0.692-0.949) for treatment response and pCR prediction, respectively. DLMM significantly outperformed the baseline models in all testing sets (P<0.05). Heatmaps were employed to interpret the decision-making basis of the model. Furthermore, during exploration of the biological basis, high DLMM scores were found to be associated with immune-related pathways and cells in the microenvironment. Conclusions: The DLMM represents a valuable tool that aids clinicians in selecting personalized treatment strategies for breast cancer patients.
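A gating-based attention over single-scale representations might look like the following sketch, where a scalar sigmoid gate computed from the concatenated features weighs two scales against each other. The gate's parameterization (w, b) and the convex combination are assumptions for illustration, not DLMM's actual design:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fusion(f_low, f_high, w, b=0.0):
    """Fuse two scale features: a sigmoid gate on their concatenation
    controls how much each representation contributes to the output."""
    g = sigmoid(np.concatenate([f_low, f_high]) @ w + b)
    return g * f_low + (1.0 - g) * f_high
```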
Unmanned aerial vehicle (UAV) imagery poses significant challenges for object detection due to extreme scale variations, high-density small targets (68% in the VisDrone dataset), and complex backgrounds. While YOLO-series models achieve speed-accuracy trade-offs via fixed convolution kernels and manual feature fusion, their rigid architectures struggle with multi-scale adaptability, as exemplified by YOLOv8n's 36.4% mAP and 13.9% small-object AP on VisDrone2019. This paper presents YOLO-LE, a lightweight framework addressing these limitations through three novel designs: (1) we introduce the C2f-Dy and LDown modules to enhance the backbone's sensitivity to small-object features while reducing backbone parameters, thereby improving model efficiency; (2) an adaptive feature fusion module is designed to dynamically integrate multi-scale feature maps, optimizing the neck structure, reducing neck complexity, and enhancing overall model performance; (3) we replace the original loss function with a distributed focal loss and incorporate a lightweight self-attention mechanism to improve small-object recognition and bounding-box regression accuracy. Experimental results demonstrate that YOLO-LE achieves 39.9% mAP@0.5 on VisDrone2019, a 9.6% improvement over YOLOv8n, while maintaining 8.5 GFLOPs computational efficiency. This provides an efficient solution for UAV object detection in complex scenarios.
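For context on design (3): the classic binary focal loss down-weights easy examples so training concentrates on hard, often small, objects. YOLO-LE's "distributed focal loss" is a distribution-based variant used for box regression; the sketch below shows only the basic focusing idea, not that variant:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0):
    """Classic binary focal loss: scale cross-entropy by (1 - p_t)^gamma so
    well-classified examples contribute little. p: predicted probabilities
    of the positive class; y: binary labels."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    pt = np.where(y == 1, p, 1 - p)
    return -np.mean((1 - pt) ** gamma * np.log(pt))
```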
In low-light environments, captured images often exhibit issues such as insufficient clarity and detail loss, which significantly degrade the accuracy of subsequent target recognition tasks. To tackle these challenges, this study presents a novel low-light image enhancement algorithm that leverages virtual hazy image generation through dehazing models based on statistical analysis. The proposed algorithm initiates the enhancement process by transforming the low-light image into a virtual hazy image, followed by image segmentation using a quadtree method. To improve the accuracy and robustness of atmospheric light estimation, the algorithm incorporates a genetic algorithm to optimize the quadtree-based estimation of atmospheric light regions. Additionally, the method employs an adaptive window adjustment mechanism to derive the dark channel prior image, which is subsequently refined using morphological operations and guided filtering. The final enhanced image is reconstructed through the hazy image degradation model. Extensive experimental evaluations across multiple datasets verify the superiority of the designed framework, which achieves a peak signal-to-noise ratio (PSNR) of 17.09 and a structural similarity index (SSIM) of 0.74. These results indicate that the proposed algorithm not only effectively enhances image contrast and brightness but also outperforms traditional methods in terms of subjective and objective evaluation metrics.
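The dark channel prior and the atmospheric light estimate at the core of such dehazing pipelines can be sketched directly. This uses a fixed patch size and a simple brightest-dark-pixels vote; the paper instead uses an adaptive window and a genetic-algorithm-optimized quadtree:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel prior: per-pixel minimum over RGB, then a min-filter
    over a patch. img: H x W x 3 float array in [0, 1]."""
    mins = img.min(axis=2)
    p = patch // 2
    padded = np.pad(mins, p, mode="edge")
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_atmospheric_light(img, dark, frac=0.001):
    """Average the image colors at the brightest dark-channel pixels."""
    n = max(1, int(dark.size * frac))
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)
```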
Multi-label image classification is a challenging task due to the diverse sizes and complex backgrounds of objects in images. Obtaining class-specific precise representations at different scales is a key aspect of feature representation. However, existing methods often rely on single-scale deep features, neglecting shallow and deeper layer features, which poses challenges when predicting objects of varying scales within the same image. Although some studies have explored multi-scale features, they rarely address the flow of information between scales or efficiently obtain class-specific precise representations for features at different scales. To address these issues, we propose a two-stage, three-branch Transformer-based framework. The first stage incorporates multi-scale image feature extraction and hierarchical scale attention. This design enables the model to consider objects at various scales while enhancing the flow of information across different feature scales, improving the model's generalization to diverse object scales. The second stage includes a global feature enhancement module and a region selection module. The global feature enhancement module strengthens interconnections between different image regions, mitigating the issue of incomplete representations, while the region selection module models the cross-modal relationships between image features and labels. Together, these components enable the efficient acquisition of class-specific precise feature representations. Extensive experiments on public datasets, including COCO2014, VOC2007, and VOC2012, demonstrate the effectiveness of our proposed method. Our approach achieves consistent performance gains of 0.3%, 0.4%, and 0.2% over state-of-the-art methods on the three datasets, respectively. These results validate the reliability and superiority of our approach for multi-label image classification.
In the field of image processing, the analysis of Synthetic Aperture Radar (SAR) images is crucial due to its broad range of applications. However, SAR images are often affected by coherent speckle noise, which significantly degrades image quality. Traditional denoising methods, typically based on filtering techniques, often face challenges related to inefficiency and limited adaptability. To address these limitations, this study proposes a novel SAR image denoising algorithm based on an enhanced residual network architecture, with the objective of enhancing the utility of SAR imagery in complex electromagnetic environments. The proposed algorithm integrates residual network modules that directly process the noisy input images to generate denoised outputs. This approach not only reduces computational complexity but also mitigates the difficulties associated with model training. By combining a Transformer module with the residual block, the algorithm enhances the network's ability to extract global features, offering superior feature extraction capabilities compared with CNN-based residual modules. Additionally, the algorithm employs the adaptive activation function Meta-ACON, which dynamically adjusts the activation patterns of neurons, thereby improving the network's feature extraction efficiency. The effectiveness of the proposed denoising method is empirically validated on real SAR images from the RSOD dataset. The proposed algorithm exhibits remarkable performance in terms of EPI, SSIM, and ENL, while more than doubling the PSNR compared with traditional and deep learning-based algorithms. Moreover, evaluation on the MSTAR SAR dataset substantiates the algorithm's robustness and applicability to SAR denoising tasks, with a PSNR of 25.2021 attained. These findings underscore the efficacy of the proposed algorithm in mitigating speckle noise while preserving critical features in SAR imagery, thereby enhancing its quality and usability in practical scenarios.
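The Meta-ACON activation the algorithm adopts builds on ACON-C, which smoothly interpolates between linear and ReLU-like behavior; Meta-ACON additionally learns the switching factor β from the input. A fixed-β sketch of the underlying ACON-C form:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def acon_c(x, p1=1.0, p2=0.0, beta=1.0):
    """ACON-C: (p1 - p2) * x * sigmoid(beta * (p1 - p2) * x) + p2 * x.
    With p1=1, p2=0 and large beta it approaches ReLU; beta -> 0 gives a
    linear map. Meta-ACON would compute beta per channel from x."""
    d = (p1 - p2) * x
    return d * sigmoid(beta * d) + p2 * x
```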
Existing semi-supervised medical image segmentation algorithms use copy-paste data augmentation to correct the labeled-unlabeled data distribution mismatch. However, current copy-paste methods have three limitations: (1) training the model solely with copy-paste mixed images from labeled and unlabeled input loses much of the labeled information; (2) low-quality pseudo-labels can cause confirmation bias in pseudo-supervised learning on unlabeled data; (3) segmentation performance in low-contrast and local regions is less than optimal. We design a Stochastic Augmentation-Based Dual-Teaching Auxiliary Training Strategy (SADT), which enhances feature diversity and learns high-quality features to overcome these problems. More precisely, SADT trains the Student Network using pseudo-label-based training from Teacher Network 1 and supervised learning with labeled data, which prevents the loss of rare labeled data. We introduce a bi-directional copy-paste mask with progressive high-entropy filtering to reduce data distribution disparities and mitigate confirmation bias in pseudo-supervision. For the mixed images, Deep-Shallow Spatial Contrastive Learning (DSSCL) is proposed in the feature spaces of Teacher Network 2 and the Student Network to improve segmentation in low-contrast and local areas. In this procedure, the features retrieved by the Student Network are subjected to a random feature perturbation technique. Extensive trials on two openly available datasets show that the proposed SADT performs much better than state-of-the-art semi-supervised medical segmentation techniques. Using only 10% of the labeled data for training, SADT achieved a Dice score of 90.10% on the ACDC (Automatic Cardiac Diagnosis Challenge) dataset.
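The bi-directional copy-paste at the core of the mixing step can be sketched with a single mask applied in both directions; the progressive high-entropy filtering that selects the mask is omitted here:

```python
import numpy as np

def copy_paste_mix(labeled, unlabeled, mask):
    """Bi-directional copy-paste: paste the masked region of each image
    into the other, yielding two mixed images (same mixing is applied to
    the corresponding labels/pseudo-labels in practice)."""
    mixed_a = np.where(mask, labeled, unlabeled)   # labeled patch onto unlabeled
    mixed_b = np.where(mask, unlabeled, labeled)   # unlabeled patch onto labeled
    return mixed_a, mixed_b
```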
Abstract: Multi-label image classification is recognized as an important task within the field of computer vision, a discipline that has experienced a significant escalation in research endeavors in recent years. The widespread adoption of convolutional neural networks (CNNs) has catalyzed the remarkable success of architectures such as ResNet-101 within the domain of image classification. However, in multi-label image classification tasks, it is crucial to consider the correlation between labels. To improve the accuracy and performance of multi-label classification and to fully combine visual and semantic features, many existing studies use graph convolutional networks (GCNs) for modeling. Object detection and multi-label image classification exhibit a degree of conceptual overlap; however, the integration of these two tasks within a unified framework has been relatively underexplored in the existing literature. In this paper, we propose the Object-GCN framework, a model combining the object detection network YOLOv5 with a graph convolutional network, and we carry out a thorough experimental analysis on a range of well-established public datasets. The designed framework achieves significantly better performance than existing methods on the public datasets COCO2014, VOC2007, and VOC2012, reaching 86.9%, 96.7%, and 96.3% mean Average Precision (mAP) across the three datasets, respectively.
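Many GCN-based multi-label classifiers build the label graph from training-set co-occurrence statistics. The sketch below is illustrative only, not the Object-GCN implementation; the function name and the 0.5 threshold are assumptions. It computes a binarized conditional-probability adjacency matrix from a binary label matrix:

```python
import numpy as np

def cooccurrence_adjacency(labels, threshold=0.5):
    """Build a binarized conditional-probability adjacency matrix from a
    binary label matrix (n_samples x n_classes): A[i, j] thresholds
    P(label j | label i), in the style of ML-GCN-like models."""
    labels = np.asarray(labels, dtype=float)
    counts = labels.sum(axis=0)                      # occurrences of each label
    co = labels.T @ labels                           # co-occurrence counts
    with np.errstate(divide="ignore", invalid="ignore"):
        cond = np.where(counts[:, None] > 0, co / counts[:, None], 0.0)
    adj = (cond >= threshold).astype(float)
    np.fill_diagonal(adj, 1.0)                       # self-loops for GCN propagation
    return adj

# toy example: 4 images, 3 labels
Y = np.array([[1, 1, 0],
              [1, 1, 0],
              [1, 0, 1],
              [0, 0, 1]])
A = cooccurrence_adjacency(Y, threshold=0.5)
```

The resulting matrix would then weight message passing between label embeddings in the GCN branch.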
Funding: Supported by the Shenzhen Fundamental Research Fund of China (Grant No. JCYJ20190808142613246), the National Natural Science Foundation of China (Grant No. 51805332), and the Young Elite Scientists Sponsorship Program funded by the China Society of Automotive Engineers.
Abstract: To reduce the discrepancy between the source and target domains, a new multi-label adaptation network (ML-ANet) based on multiple kernel variants with maximum mean discrepancies is proposed in this paper. The hidden representations of the task-specific layers in ML-ANet are embedded in the reproducing kernel Hilbert space (RKHS) so that the mean embeddings of specific features in different domains can be precisely matched. Multiple kernel functions are used to improve feature distribution efficiency for explicit mean embedding matching, which can further reduce domain discrepancy. Adverse weather and cross-camera adaptation experiments are conducted to verify the effectiveness of the proposed ML-ANet. The results show that ML-ANet achieves higher accuracies than the compared state-of-the-art methods for multi-label image classification in both the adverse weather adaptation and cross-camera adaptation experiments. These results indicate that ML-ANet can alleviate the reliance on fully labeled training data and improve the accuracy of multi-label image classification in various domain shift scenarios.
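The multi-kernel maximum mean discrepancy can be made concrete with a small example. The sketch below is an illustration under assumptions (the bandwidths and sample data are invented), not the ML-ANet code: it estimates the squared MMD between two sample sets using a sum of Gaussian RBF kernels, which grows as the two distributions drift apart.

```python
import numpy as np

def mk_mmd(X, Y, sigmas=(1.0, 2.0, 4.0)):
    """Biased (V-statistic) estimate of squared MMD between samples X and Y
    using a sum of Gaussian RBF kernels (a simple multi-kernel variant)."""
    def kernel(A, B):
        # pairwise squared Euclidean distances, then sum of RBF responses
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sum(np.exp(-d2 / (2.0 * s ** 2)) for s in sigmas)
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2.0 * kernel(X, Y).mean()

rng = np.random.default_rng(0)
same = mk_mmd(rng.normal(size=(100, 3)), rng.normal(size=(100, 3)))
shifted = mk_mmd(rng.normal(size=(100, 3)), rng.normal(3.0, 1.0, size=(100, 3)))
```

Minimizing such an estimate over task-specific layer activations is the usual way domain-adaptation networks align source and target feature distributions.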
Abstract: In this paper, we study the partial multi-label (PML) image classification problem, where each image is annotated with a candidate label set consisting of multiple relevant labels and other noisy labels. Existing PML methods typically design a disambiguation strategy to filter out noisy labels by utilizing prior knowledge with extra assumptions, which unfortunately is unavailable in many real tasks. Furthermore, because the objective function for disambiguation is usually elaborately designed on the whole training set, it can hardly be optimized in a deep model with stochastic gradient descent (SGD) on mini-batches. In this paper, for the first time, we propose a deep model for PML to enhance the representation and discrimination ability. On the one hand, we propose a novel curriculum-based disambiguation strategy to progressively identify ground-truth labels by incorporating the varied difficulties of different classes. On the other hand, consistency regularization is introduced for model training to balance fitting the identified easy labels and exploiting potentially relevant labels. Extensive experimental results on commonly used benchmark datasets show that the proposed method significantly outperforms the SOTA methods.
Funding: Supported by the National Natural Science Foundation of China (Nos. 62167005 and 61966018) and the Key Research Projects of the Jiangxi Provincial Department of Education (No. GJJ200302).
Abstract: At present, research on multi-label image classification mainly focuses on exploring the correlation between labels to improve the classification accuracy of multi-label images. However, in existing methods, label correlation is calculated from the statistical information of the data; this correlation is global, depends on the dataset, and is not suitable for all samples. Moreover, in the process of extracting image features, the characteristic information of small objects in the image is easily lost, resulting in low classification accuracy for small objects. To this end, this paper proposes a multi-label image classification model based on multi-scale fusion and adaptive label correlation. The main idea is as follows: first, feature maps at multiple scales are fused to enhance the feature information of small objects; semantic guidance then decomposes the fused feature map into feature vectors for each category, after which the self-attention mechanism of a graph attention network adaptively mines the correlation between categories in the image, yielding feature vectors containing category-related information for the final classification. The mean average precision of the model on the two public datasets VOC 2007 and MS COCO 2014 reached 95.6% and 83.6%, respectively, and most indicators are better than those of the latest existing methods.
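For intuition, the adaptive per-image weighting that a graph attention layer performs can be approximated by a single-head scaled dot-product self-attention over per-class feature vectors. This NumPy sketch is illustrative only and is not the model's actual GAT layer; all names are assumptions:

```python
import numpy as np

def label_self_attention(F):
    """Single-head scaled dot-product self-attention over per-class feature
    vectors F (n_classes x d): each class vector becomes an attention-weighted
    mix of all class vectors, mimicking adaptive inter-label correlation."""
    d = F.shape[1]
    scores = F @ F.T / np.sqrt(d)                    # pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    return weights @ F, weights

F = np.random.default_rng(1).normal(size=(5, 8))    # 5 classes, 8-dim features
out, W = label_self_attention(F)
```

Unlike a dataset-level co-occurrence matrix, the attention weights here are recomputed per image, which is what makes the correlation "adaptive".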
Abstract: The integration of image analysis through deep learning (DL) into rock classification represents a significant leap forward in geological research. While traditional methods remain invaluable for their expertise and historical context, DL offers a powerful complement by enhancing the speed, objectivity, and precision of the classification process. This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks (CNNs) for geological image analysis, particularly in the classification of igneous, metamorphic, and sedimentary rock types from rock thin section (RTS) images. The study primarily focuses on classic image augmentation techniques and evaluates their impact on model accuracy and precision. Results demonstrate that augmentation techniques like Equalize significantly enhance the model's classification capabilities, achieving an F1-score of 0.9869 for igneous rocks, 0.9884 for metamorphic rocks, and 0.9929 for sedimentary rocks, representing improvements over the baseline results. Moreover, the weighted average F1-score across all classes and techniques is 0.9886, indicating an enhancement. Conversely, methods like Distort lead to decreased accuracy, with F1-scores of 0.949 for igneous rocks, 0.954 for metamorphic rocks, and 0.9416 for sedimentary rocks, worsening performance compared to the baseline. The study underscores the practicality of image data augmentation in geological image classification and advocates for the adoption of DL methods in this domain for automation and improved results. The findings can benefit various fields, including remote sensing, mineral exploration, and environmental monitoring, by enhancing the accuracy of geological image analysis for both scientific research and industrial applications.
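Equalize, the most effective augmentation reported above, is classic histogram equalization. A minimal NumPy version for 8-bit grayscale images is sketched below for illustration; real augmentation pipelines typically use a library implementation such as Pillow's `ImageOps.equalize`:

```python
import numpy as np

def equalize(gray):
    """Histogram equalization for an 8-bit grayscale image (H x W, uint8):
    remap intensities via the cumulative histogram so the output
    distribution is approximately flat, boosting global contrast."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # lookup table mapping each input intensity to its equalized value
    lut = np.round((cdf - cdf_min) / (gray.size - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[gray]

img = np.tile(np.arange(0, 128, dtype=np.uint8), (64, 1))  # low-contrast ramp
eq = equalize(img)
```

On the low-contrast ramp above, the output spans the full 0-255 range, which is exactly the contrast stretch that helps a CNN separate visually similar thin-section textures.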
Funding: Supported by the National Key Research and Development Project of China (No. 2023YFB3709605), the National Natural Science Foundation of China (No. 62073193), and the National College Student Innovation Training Program (No. 202310422122).
Abstract: Potential high-temperature risks exist in heat-prone components of electric moped charging devices, such as sockets, interfaces, and controllers. Traditional detection methods have limitations in terms of real-time performance and monitoring scope. To address this, a temperature detection method based on infrared image processing is proposed: the median filtering algorithm is used to denoise the original infrared image, and an image segmentation algorithm is then applied to partition the image.
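Median filtering, the denoising step described above, replaces each pixel with the median of its neighborhood, which suppresses impulse-like hot pixels while preserving edges. A minimal NumPy sketch (illustrative, not the paper's code):

```python
import numpy as np

def median_filter(img, k=3):
    """k x k median filter via edge-padded sliding windows: each output
    pixel is the median of its k x k neighborhood in the input."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return np.median(windows, axis=(-2, -1))

noisy = np.full((5, 5), 10.0)
noisy[2, 2] = 255.0                      # single impulse ("hot pixel")
clean = median_filter(noisy)
```

The isolated spike is removed entirely because eight of the nine values in its window carry the background level, while a genuine hot region larger than half the window would survive, which is why the median filter suits infrared pre-processing.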
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 82272955 and 22203057) and the Natural Science Foundation of Fujian Province (Grant No. 2021J011361).
Abstract: The presence of a positive deep surgical margin in tongue squamous cell carcinoma (TSCC) significantly elevates the risk of local recurrence. Therefore, a prompt and precise intraoperative assessment of margin status is imperative to ensure thorough tumor resection. In this study, we integrate Raman imaging technology with an artificial intelligence (AI) generative model, proposing an innovative approach for intraoperative margin status diagnosis. This method utilizes Raman imaging to swiftly and non-invasively capture tissue Raman images, which are then transformed into hematoxylin-eosin (H&E)-stained histopathological images using an AI generative model for histopathological diagnosis. The generated H&E-stained images clearly illustrate the tissue's pathological conditions. Independently reviewed by three pathologists, the overall diagnostic accuracy for distinguishing between tumor tissue and normal muscle tissue reaches 86.7%. Notably, it outperforms current clinical practices, especially in TSCC with positive lymph node metastasis or moderately differentiated grades. This advancement highlights the potential of AI-enhanced Raman imaging to significantly improve intraoperative assessments and surgical margin evaluations, promising a versatile diagnostic tool beyond TSCC.
Funding: Supported by the National Natural Science Foundation of China (NSFC) 12333010, the National Key R&D Program of China 2022YFF0503002, the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB0560000), NSFC 11921003, the Prominent Postdoctoral Project of Jiangsu Province (2023ZB304), and the Strategic Priority Research Program on Space Science, Chinese Academy of Sciences (Grant No. XDA15320000).
Abstract: Indirect X-ray modulation imaging has been adopted in a number of solar missions and has provided reconstructed X-ray images of solar flares that are of great scientific importance. However, assessing the image quality of the reconstruction is still difficult; such assessment is particularly useful for the scheme design of X-ray imaging systems, the testing and improvement of imaging algorithms, and scientific research on X-ray sources. Currently, there is no specified method to quantitatively evaluate the quality of X-ray image reconstruction or the point-spread function (PSF) of an X-ray imager. In this paper, we propose the percentage proximity degree (PPD), which considers the imaging characteristics of X-ray image reconstruction and, in particular, sidelobes and their effects on imaging quality. After testing a variety of imaging quality assessments in six aspects, we applied the technique for order preference by similarity to ideal solution (TOPSIS) to the indices that meet the requirements. We then develop the final quality index for X-ray image reconstruction, QuIX, which consists of the selected indices and the new PPD. QuIX performs well in a series of tests, including assessment of the instrument PSF and simulation tests under different grid configurations, as well as imaging tests with RHESSI data. It is also a useful tool for testing imaging algorithms and determining imaging parameters for both RHESSI and the ASO-S/Hard X-ray Imager, such as the field of view, beam width factor, and detector selection.
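TOPSIS, the technique used above to screen candidate indices, ranks alternatives by their closeness to an ideal solution. The generic sketch below is illustrative only; the criteria, weights, and data are invented and bear no relation to the paper's actual index selection:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """TOPSIS ranking: score alternatives (rows) on criteria (columns) by
    relative closeness to the ideal solution. benefit[j] is True when a
    larger value is better for criterion j, False for cost criteria."""
    M = np.asarray(matrix, dtype=float)
    norm = M / np.sqrt((M ** 2).sum(axis=0))          # vector-normalize each criterion
    V = norm * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_worst = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)               # closeness in [0, 1]

# hypothetical example: 3 candidate indices scored on two criteria
scores = topsis([[0.9, 0.1],   # high quality score, low sidelobe level
                 [0.5, 0.5],
                 [0.2, 0.8]],
                weights=np.array([0.6, 0.4]),
                benefit=np.array([True, False]))
```

The alternative dominating on both criteria receives closeness 1.0, and the dominated one 0.0, so the closeness values give a full ranking rather than a mere pass/fail screen.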
Funding: The authors thank the Deanship of Scientific Research at King Khalid University for funding this work through the Large Group Research Project (grant number RGP2/421/45). This work was also supported by Prince Sattam bin Abdulaziz University (project number PSAU/2024/R/1446); by the Researchers Supporting Project (No. UM-DSR-IG-2023-07), Almaarefa University, Riyadh, Saudi Arabia; and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2021R1F1A1055408).
Abstract: Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws/diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Most of the extracted image features are irrelevant and lead to increased computation time. Therefore, this article uses an analytical learning paradigm to design a congruent feature selection method that selects the most relevant image features. This process trains the learning paradigm using similarity- and correlation-based features over different textural intensities and pixel distributions. The similarity between pixels over the various distribution patterns with high indexes is recommended for disease diagnosis. Later, the correlation based on intensity and distribution is analyzed to improve the feature selection congruency. The more congruent pixels are thus sorted in descending order of selection, which identifies better regions than the distribution. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection, improving the probability of feature selection regardless of the textures and patterns of medical images. This process enhances the performance of ML applications for different kinds of medical image processing. The proposed method improves the accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared to other models on the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared to the same models and dataset.
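Correlation-based relevance scoring of features, one ingredient of the congruency analysis described above, can be sketched generically. This is a stand-in illustration, not the paper's method; the function name and synthetic data are invented:

```python
import numpy as np

def rank_features_by_correlation(X, y, top_k=2):
    """Rank features (columns of X) by absolute Pearson correlation with the
    target y and return the indices of the top_k most relevant features."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    corr = np.abs(Xc.T @ yc) / np.where(denom > 0, denom, 1.0)
    order = np.argsort(-corr)                 # descending relevance
    return order[:top_k], corr

rng = np.random.default_rng(2)
y = rng.normal(size=200)
X = np.column_stack([y + 0.1 * rng.normal(size=200),   # strongly relevant
                     rng.normal(size=200),             # pure noise
                     -y + 0.5 * rng.normal(size=200)]) # relevant, weaker
top, scores = rank_features_by_correlation(X, y, top_k=2)
```

Dropping the low-correlation noise feature is precisely the mechanism by which such selection cuts computation time without hurting accuracy.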
Funding: Supported by the National Key R&D Program of China 2022YFF0503002, the National Natural Science Foundation of China (NSFC, Grant Nos. 12333010 and 12233012), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB0560000), the Prominent Postdoctoral Project of Jiangsu Province (2023ZB304), and the Strategic Priority Research Program on Space Science, Chinese Academy of Sciences (Grant No. XDA15320000).
Abstract: Imaging observations of solar X-ray bursts can reveal details of the energy release process and particle acceleration in flares. Most hard X-ray imagers make use of the modulation-based Fourier transform imaging method, an indirect imaging technique that requires algorithms to reconstruct and optimize images. During the last decade, a variety of algorithms have been developed and improved. However, it is difficult to quantitatively evaluate the image quality of different solutions without a true, reference image of the observation. How to choose the values of imaging parameters for these algorithms to get the best performance is also an open question. In this study, we present a detailed test of the characteristics of these algorithms, the imaging dynamic range, and a crucial parameter for the CLEAN method, the clean beam width factor (CBWF). We first used SDO/AIA EUV images to compute DEM maps and calculate thermal X-ray maps. These realistic sources and several types of simulated sources are then used as the ground truth in imaging simulations for both RHESSI and ASO-S/HXI. The different solutions are evaluated quantitatively by a number of means. The overall results suggest that EM, PIXON, and CLEAN are exceptional methods for sidelobe elimination, producing images with clear source details. Although MEM_GE, MEM_NJIT, VIS_WV, and VIS_CS possess fast imaging processes and generate good images, each has its own associated imperfections. The two forward-fit algorithms, VF and FF, perform differently, and VF appears to be more robust and useful. We also demonstrate the imaging capability of HXI and the available HXI algorithms. Furthermore, the effect of the CBWF on image quality was investigated, and optimal settings for both RHESSI and HXI are proposed.
Abstract: Large language models (LLMs), such as ChatGPT developed by OpenAI, represent a significant advancement in artificial intelligence (AI), designed to understand, generate, and interpret human language by analyzing extensive text data. Their potential integration into clinical settings offers a promising avenue that could transform clinical diagnosis and decision-making processes in the future (Thirunavukarasu et al., 2023). This article aims to provide an in-depth analysis of LLMs' current and potential impact on clinical practices. Their ability to generate differential diagnosis lists underscores their potential as invaluable tools in medical practice and education (Hirosawa et al., 2023; Koga et al., 2023).
Funding: Supported by the Henan Province Key Research and Development Project (231111211300), the Central Government of Henan Province Funds Guiding Local Science and Technology Development (Z20231811005), the Henan Province Key Research and Development Project (231111110100), the Henan Provincial Outstanding Foreign Scientist Studio (GZS2024006), and the Henan Provincial Joint Fund for Scientific and Technological Research and Development Plan (Application and Overcoming Technical Barriers) (242103810028).
Abstract: The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible image. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which are used to extract low-frequency and high-frequency information from the image. This extraction may leave some information uncaptured, so a compensation encoder is proposed to supplement the missing information. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines the low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module are introduced to perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
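The base/detail split that the base and detail encoders target can be illustrated with a classical two-scale decomposition: a low-pass (box) filter yields the low-frequency base layer and the residual carries the high-frequency detail. This is a sketch under that assumption, not the learned encoders of the paper:

```python
import numpy as np

def base_detail_split(img, k=5):
    """Two-scale decomposition: an edge-padded k x k box filter gives the
    'base' (low-frequency) layer; the residual is the 'detail'
    (high-frequency) layer. By construction, base + detail == img."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    base = windows.mean(axis=(-2, -1))       # local average = low-pass
    detail = img - base                       # residual = high-pass
    return base, detail

img = np.add.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
base, detail = base_detail_split(img)
```

In a fusion pipeline, the base layers of the infrared and visible inputs would be merged for salient-target energy and the detail layers for texture, before reconstruction.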
Funding: Supported by the National Natural Science Foundation of China (No. 42301518), the Hubei Key Laboratory of Regional Development and Environmental Response (No. 2023(A)002), and the Key Laboratory of the Evaluation and Monitoring of Southwest Land Resources (Ministry of Education) (No. TDSYS202304).
Abstract: Image-maps, a hybrid design with satellite images as the background and map symbols overlaid, aim to combine the advantages of maps' high interpretation efficiency and satellite images' realism. The usability of image-maps is influenced by the representation of the background images and map symbols. Many researchers have explored optimizations of background images and symbolization techniques for symbols to reduce the complexity of image-maps and improve usability. However, little literature addresses the optimum amount of symbol loading. This study focuses on the effects of background image complexity and map symbol load on the usability (i.e., effectiveness and efficiency) of image-maps. Experiments were conducted through user studies with eye-tracking equipment and an online questionnaire survey. The experimental data sets included image-maps with ten levels of map symbol load in ten areas. Forty volunteers took part in the target-searching experiments. It was found that usability, i.e., the recorded average time viewed (efficiency) and average revisits (effectiveness) for targets, is influenced by the complexity of the background images, and that a peak exists for the optimum symbol load of an image-map. The optimum symbol load levels for different image-maps also peak as the complexity of the background image/image-map increases. The complexity of background images thus serves as a guideline for the optimum map symbol load in image-map design. This study enhances user experience by optimizing visual clarity and managing cognitive load. Understanding how these factors interact can help create adaptive maps that maintain clarity and usability, guiding AI algorithms to adjust symbol density based on user context. This research establishes practices for map design, making cartographic tools more innovative and more user-centric.
Funding: Supported by the National Natural Science Foundation of China (No. 82371933), the National Natural Science Foundation of Shandong Province of China (No. ZR2021MH120), the Taishan Scholars Project (No. tsqn202211378), and the Shandong Provincial Natural Science Foundation for Excellent Young Scholars (No. ZR2024YQ075).
Abstract: Objective: Early prediction of response before neoadjuvant chemotherapy (NAC) is crucial for personalized treatment plans for locally advanced breast cancer patients. We aim to develop a multi-task model using multi-scale whole slide image (WSI) features to predict the response to breast cancer NAC more finely. Methods: This work collected 1,670 whole slide images for the training and validation sets, internal testing sets, external testing sets, and prospective testing sets of the weakly supervised deep learning-based multi-task model (DLMM) for predicting treatment response and pCR to NAC. Our approach models two-by-two feature interactions across scales by employing concatenated fusion of single-scale feature representations and controls the expressiveness of each representation via a gating-based attention mechanism. Results: In the retrospective analysis, DLMM exhibited excellent predictive performance for treatment response, with areas under the receiver operating characteristic curve (AUCs) of 0.869 [95% confidence interval (95% CI): 0.806-0.933] in the internal testing set and 0.841 (95% CI: 0.814-0.867) in the external testing sets. For the pCR prediction task, DLMM reached AUCs of 0.865 (95% CI: 0.763-0.964) in internal testing and 0.821 (95% CI: 0.763-0.878) in the pooled external testing set. In the prospective testing study, DLMM also demonstrated favorable predictive performance, with AUCs of 0.829 (95% CI: 0.754-0.903) and 0.821 (95% CI: 0.692-0.949) for treatment response and pCR prediction, respectively. DLMM significantly outperformed the baseline models in all testing sets (P<0.05). Heatmaps were employed to interpret the decision-making basis of the model. Furthermore, during exploration of the biological basis, high DLMM scores were found to be associated with immune-related pathways and cells in the microenvironment. Conclusions: The DLMM represents a valuable tool that aids clinicians in selecting personalized treatment strategies for breast cancer patients.
Abstract: Unmanned aerial vehicle (UAV) imagery poses significant challenges for object detection due to extreme scale variations, high-density small targets (68% in the VisDrone dataset), and complex backgrounds. While YOLO-series models achieve speed-accuracy trade-offs via fixed convolution kernels and manual feature fusion, their rigid architectures struggle with multi-scale adaptability, as exemplified by YOLOv8n's 36.4% mAP and 13.9% small-object AP on VisDrone2019. This paper presents YOLO-LE, a lightweight framework addressing these limitations through three novel designs: (1) we introduce the C2f-Dy and LDown modules to enhance the backbone's sensitivity to small-object features while reducing backbone parameters, thereby improving model efficiency; (2) an adaptive feature fusion module is designed to dynamically integrate multi-scale feature maps, optimizing the neck structure, reducing neck complexity, and enhancing overall model performance; (3) we replace the original loss function with a distributed focal loss and incorporate a lightweight self-attention mechanism to improve small-object recognition and bounding-box regression accuracy. Experimental results demonstrate that YOLO-LE achieves 39.9% mAP@0.5 on VisDrone2019, representing a 9.6% improvement over YOLOv8n, while maintaining 8.5 GFLOPs computational efficiency. This provides an efficient solution for UAV object detection in complex scenarios.
Funding: Supported by the Natural Science Foundation of Shandong Province (Nos. ZR2023MF047, ZR2024MA055 and ZR2023QF139), the Enterprise Commissioned Project (Nos. 2024HX104 and 2024HX140), the China University Industry-University-Research Innovation Foundation (Nos. 2021ZYA11003 and 2021ITA05032), and the Science and Technology Plan for Youth Innovation of Shandong's Universities (No. 2019KJN012).
Abstract: In low-light environments, captured images often exhibit issues such as insufficient clarity and detail loss, which significantly degrade the accuracy of subsequent target recognition tasks. To tackle these challenges, this study presents a novel low-light image enhancement algorithm that leverages virtual hazy image generation through dehazing models based on statistical analysis. The proposed algorithm initiates the enhancement process by transforming the low-light image into a virtual hazy image, followed by image segmentation using a quadtree method. To improve the accuracy and robustness of atmospheric light estimation, the algorithm incorporates a genetic algorithm to optimize the quadtree-based estimation of atmospheric light regions. Additionally, the method employs an adaptive window adjustment mechanism to derive the dark channel prior image, which is subsequently refined using morphological operations and guided filtering. The final enhanced image is reconstructed through the hazy image degradation model. Extensive experimental evaluations across multiple datasets verify the superiority of the designed framework, achieving a peak signal-to-noise ratio (PSNR) of 17.09 and a structural similarity index (SSIM) of 0.74. These results indicate that the proposed algorithm not only effectively enhances image contrast and brightness but also outperforms traditional methods in terms of subjective and objective evaluation metrics.
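The dark channel prior at the heart of such dehazing models is the per-pixel channel minimum followed by a local window minimum. A fixed-window NumPy sketch is given below for illustration; the paper's adaptive window adjustment is not reproduced here:

```python
import numpy as np

def dark_channel(img, k=15):
    """Dark channel prior of an RGB image (H x W x 3, floats in [0, 1]):
    take the per-pixel minimum over color channels, then the minimum over
    each k x k spatial window (edge-padded)."""
    per_pixel_min = img.min(axis=2)
    pad = k // 2
    padded = np.pad(per_pixel_min, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return windows.min(axis=(-2, -1))

rng = np.random.default_rng(3)
haze_free = rng.uniform(0.0, 1.0, size=(32, 32, 3))
haze_free[..., 0] *= 0.05       # dark/colored pixels push the dark channel toward 0
dc = dark_channel(haze_free, k=15)
```

For haze-free outdoor images the dark channel is close to zero almost everywhere; the gap between the observed dark channel and zero is what drives the transmission estimate during dehazing.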
Funding: Supported by the National Natural Science Foundation of China (62302167, 62477013), the Natural Science Foundation of Shanghai (No. 24ZR1456100), the Science and Technology Commission of Shanghai Municipality (No. 24DZ2305900), and the Shanghai Municipal Special Fund for Promoting High-Quality Development of Industries (2211106).
Abstract: Multi-label image classification is a challenging task due to the diverse sizes and complex backgrounds of objects in images. Obtaining class-specific precise representations at different scales is a key aspect of feature representation. However, existing methods often rely on single-scale deep features, neglecting shallow and deeper layer features, which poses challenges when predicting objects of varying scales within the same image. Although some studies have explored multi-scale features, they rarely address the flow of information between scales or efficiently obtain class-specific precise representations for features at different scales. To address these issues, we propose a two-stage, three-branch Transformer-based framework. The first stage incorporates multi-scale image feature extraction and hierarchical scale attention. This design enables the model to consider objects at various scales while enhancing the flow of information across different feature scales, improving the model's generalization to diverse object scales. The second stage includes a global feature enhancement module and a region selection module. The global feature enhancement module strengthens interconnections between different image regions, mitigating the issue of incomplete representations, while the region selection module models the cross-modal relationships between image features and labels. Together, these components enable the efficient acquisition of class-specific precise feature representations. Extensive experiments on public datasets, including COCO2014, VOC2007, and VOC2012, demonstrate the effectiveness of our proposed method. Our approach achieves consistent performance gains of 0.3%, 0.4%, and 0.2% over state-of-the-art methods on the three datasets, respectively. These results validate the reliability and superiority of our approach for multi-label image classification.
Abstract: In the field of image processing, the analysis of synthetic aperture radar (SAR) images is crucial due to its broad range of applications. However, SAR images are often affected by coherent speckle noise, which significantly degrades image quality. Traditional denoising methods, typically based on filtering techniques, often face challenges related to inefficiency and limited adaptability. To address these limitations, this study proposes a novel SAR image denoising algorithm based on an enhanced residual network architecture, with the objective of enhancing the utility of SAR imagery in complex electromagnetic environments. The proposed algorithm integrates residual network modules, which directly process the noisy input images to generate denoised outputs. This approach not only reduces computational complexity but also mitigates the difficulties associated with model training. By combining the Transformer module with the residual block, the algorithm enhances the network's ability to extract global features, offering superior feature extraction capabilities compared to CNN-based residual modules. Additionally, the algorithm employs the adaptive activation function Meta-ACON, which dynamically adjusts the activation patterns of neurons, thereby improving the network's feature extraction efficiency. The effectiveness of the proposed denoising method is empirically validated using real SAR images from the RSOD dataset. The proposed algorithm exhibits remarkable performance in terms of EPI, SSIM, and ENL, while achieving a substantial enhancement in PSNR compared to traditional and deep learning-based algorithms; PSNR performance is enhanced by over twofold. Moreover, evaluation on the MSTAR SAR dataset substantiates the algorithm's robustness and applicability in SAR denoising tasks, with a PSNR of 25.2021 being attained. These findings underscore the efficacy of the proposed algorithm in mitigating speckle noise while preserving critical features in SAR imagery, thereby enhancing its quality and usability in practical scenarios.
Funding: Supported by the Natural Science Foundation of China (No. 41804112; author: Chengyun Song).
Abstract: Existing semi-supervised medical image segmentation algorithms use copy-paste data augmentation to correct the labeled-unlabeled data distribution mismatch. However, current copy-paste methods have three limitations: (1) training the model solely with copy-paste mixed pictures from labeled and unlabeled input loses a lot of labeled information; (2) low-quality pseudo-labels can cause confirmation bias in pseudo-supervised learning on unlabeled data; (3) the segmentation performance in low-contrast and local regions is less than optimal. We design a Stochastic Augmentation-Based Dual-Teaching Auxiliary Training Strategy (SADT), which enhances feature diversity and learns high-quality features to overcome these problems. To be more precise, SADT trains the Student Network using pseudo-label-based training from Teacher Network 1 and supervised learning with labeled data, which prevents the loss of rare labeled data. We introduce a bi-directional copy-paste mask with progressive high-entropy filtering to reduce data distribution disparities and mitigate confirmation bias in pseudo-supervision. For the mixed images, Deep-Shallow Spatial Contrastive Learning (DSSCL) is proposed in the feature spaces of Teacher Network 2 and the Student Network to improve the segmentation capabilities in low-contrast and local areas. In this procedure, the features retrieved by the Student Network are subjected to a random feature perturbation technique. Extensive trials on two openly available datasets show that our proposed SADT performs much better than state-of-the-art semi-supervised medical segmentation techniques. Using only 10% of the labeled data for training, SADT achieved a Dice score of 90.10% on the ACDC (Automatic Cardiac Diagnosis Challenge) dataset.
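The bi-directional copy-paste operation can be sketched with plain array mixing: a binary mask pastes a region of the labeled image into the unlabeled image and vice versa. This is an illustrative sketch only; the progressive high-entropy filtering that SADT uses to select the mask is omitted:

```python
import numpy as np

def bidirectional_copy_paste(labeled, unlabeled, mask):
    """Bi-directional copy-paste mixing: where mask == 1, one direction takes
    the labeled image and the other the unlabeled image, yielding the two
    mixed inputs used in dual-teacher training (labels are mixed the same way)."""
    mixed_in = mask * labeled + (1 - mask) * unlabeled   # labeled patch pasted in
    mixed_out = mask * unlabeled + (1 - mask) * labeled  # unlabeled patch pasted in
    return mixed_in, mixed_out

a = np.zeros((8, 8))                      # stand-in labeled image
b = np.ones((8, 8))                       # stand-in unlabeled image
m = np.zeros((8, 8))
m[2:6, 2:6] = 1.0                         # central paste region
ab, ba = bidirectional_copy_paste(a, b, m)
```

Because both mixing directions are used, every training batch contains pixels from both distributions in both foreground and background roles, which is what narrows the labeled-unlabeled distribution gap.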