Journal Articles
8,371 articles found
1. Dual networks with hierarchical attention for fine-grained image classification
Authors: YANG Tao, WANG Gaihua. 《中国科学院大学学报(中英文)》 (PKU Core), 2025, Issue 6, pp. 806-813 (8 pages)
In this paper, we propose a hierarchical attention dual network (DNet) for fine-grained image classification. The DNet randomly selects pairs of inputs from the dataset and compares the differences between them through hierarchical attention feature learning, which simultaneously removes noise and retains salient features. The loss function considers the difference losses of paired images according to intra-class and inter-class variance. In addition, we collect a disaster scene dataset from remote sensing images, containing complex scenes and multiple types of disasters, and apply the proposed method to disaster scene classification. Compared with other methods, experimental results show that the DNet with hierarchical attention is robust across different datasets and performs better.
Keywords: dual network (DNet); fine-grained image classification; hierarchical attention features
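The pairwise comparison described above can be illustrated with a contrastive-style loss that shrinks intra-class distances and enforces a margin on inter-class distances. This is a hypothetical PyTorch sketch, not the authors' implementation; the margin value, embedding size, and pairing scheme are assumptions.

```python
import torch
import torch.nn.functional as F

def pairwise_variance_loss(feat_a: torch.Tensor,
                           feat_b: torch.Tensor,
                           same_class: torch.Tensor,
                           margin: float = 1.0) -> torch.Tensor:
    """Contrastive-style loss over randomly paired images.

    feat_a, feat_b: (B, D) embeddings of the two images in each pair.
    same_class:     (B,) float tensor, 1.0 if the pair shares a label, else 0.0.
    Same-class pairs are pulled together (intra-class variance); different-class
    pairs are pushed apart by at least `margin` (inter-class variance).
    """
    dist = F.pairwise_distance(feat_a, feat_b)                  # (B,)
    intra = same_class * dist.pow(2)                            # shrink same-class distance
    inter = (1.0 - same_class) * F.relu(margin - dist).pow(2)   # enforce margin otherwise
    return (intra + inter).mean()

# Toy usage with random embeddings standing in for hierarchical-attention features.
if __name__ == "__main__":
    a, b = torch.randn(8, 128), torch.randn(8, 128)
    same = torch.randint(0, 2, (8,)).float()
    print(pairwise_variance_loss(a, b, same).item())
```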
2. A Fine-Grained Image Classification Model Based on Hybrid Attention and Pyramidal Convolution
Authors: Sifeng Wang, Shengxiang Li, Anran Li, Zhaoan Dong, Guangshun Li, Chao Yan. 《Tsinghua Science and Technology》, 2025, Issue 3, pp. 1283-1293 (11 pages)
The goal of fine-grained image classification (FGIC) is to find more specific subcategories within a larger category, and the key is to locate local discriminative regions of visual features. Most existing methods rely on traditional convolutional operations, which cannot extract multi-scale features of an image, and they are susceptible to interference from image background information. To address these problems, this paper proposes an FGIC model (Attention-PCNN) based on a hybrid attention mechanism and pyramidal convolution. The model feeds the multi-scale features extracted by a pyramidal convolutional neural network into two branches that capture global and local information, respectively. A hybrid attention mechanism is added to the global branch to reduce the interference of background information and make the model pay more attention to target regions with fine-grained features. In addition, the mutual-channel loss (MC-Loss) is introduced in the local branch to capture fine-grained features. We evaluated the model on three publicly available datasets: CUB-200-2011, Stanford Cars, and FGVC-Aircraft. The results show that Attention-PCNN outperforms state-of-the-art methods.
Keywords: fine-grained image classification; pyramidal convolution; hybrid attention
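A pyramidal convolution of the kind the abstract relies on can be sketched as parallel convolutions with different kernel sizes whose outputs are concatenated. The split of output channels across branches and the chosen kernel sizes below are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PyramidalConv(nn.Module):
    """Parallel convolutions with different kernel sizes, concatenated
    channel-wise, so one block sees the input at several receptive fields."""

    def __init__(self, in_ch: int, out_ch: int, kernel_sizes=(3, 5, 7, 9)):
        super().__init__()
        assert out_ch % len(kernel_sizes) == 0
        branch_ch = out_ch // len(kernel_sizes)
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, branch_ch, k, padding=k // 2) for k in kernel_sizes
        )
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(torch.cat([b(x) for b in self.branches], dim=1)))

# Quick shape check on a dummy batch.
if __name__ == "__main__":
    block = PyramidalConv(3, 64)
    print(block(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 64, 224, 224])
```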
3. A Hybrid Deep Learning Multi-Class Classification Model for Alzheimer’s Disease Using Enhanced MRI Images
Authors: Ghadah Naif Alwakid. 《Computers, Materials & Continua》, 2026, Issue 1, pp. 797-821 (25 pages)
Alzheimer’s Disease (AD) is a progressive neurodegenerative disorder that significantly affects cognitive function, making early and accurate diagnosis essential. Traditional deep learning (DL)-based approaches often struggle with low-contrast MRI images, class imbalance, and suboptimal feature extraction. This paper develops a hybrid DL system that unites MobileNetV2 with adaptive classification methods to improve Alzheimer’s diagnosis from MRI scans. Image enhancement is performed using Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN). A classification robustness enhancement scheme integrates class weighting and a Matthews Correlation Coefficient (MCC)-based evaluation method into the design. The trained and validated model achieves 98.88% accuracy and an MCC of 0.9614. A 10-fold cross-validation experiment yields an average accuracy of 96.52% (±1.51), a loss of 0.1671, and an average MCC of 0.9429 across folds. The proposed framework outperforms state-of-the-art models with a 98% weighted F1-score while reducing misdiagnosis at every AD stage. Confusion matrix analysis shows clear separation between AD progression stages. These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer’s diagnosis, contributing to improved computer-aided diagnosis (CAD) systems in clinical practice.
Keywords: Alzheimer’s disease; deep learning; MRI images; MobileNetV2; contrast-limited adaptive histogram equalization (CLAHE); enhanced super-resolution generative adversarial networks (ESRGAN); multi-class classification
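The CLAHE enhancement and class-weighting steps mentioned above can be sketched with OpenCV and scikit-learn; the clip limit, tile size, and label counts below are illustrative assumptions, and the ESRGAN super-resolution stage is omitted.

```python
import cv2
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

def clahe_enhance(gray: np.ndarray, clip_limit: float = 2.0,
                  tile_grid: tuple = (8, 8)) -> np.ndarray:
    """Apply Contrast-Limited Adaptive Histogram Equalization to a
    single-channel (grayscale) MRI slice."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(gray)

if __name__ == "__main__":
    # Synthetic low-contrast slice standing in for an MRI image.
    slice_ = (np.random.rand(256, 256) * 60 + 90).astype(np.uint8)
    enhanced = clahe_enhance(slice_)
    print(enhanced.shape, enhanced.dtype)

    # Balanced class weights for an imbalanced 4-stage AD label set (counts are made up).
    labels = np.repeat([0, 1, 2, 3], [3200, 900, 640, 250])
    weights = compute_class_weight("balanced", classes=np.unique(labels), y=labels)
    print(dict(zip(range(4), np.round(weights, 3))))
```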
4. Fine-Grained Classification of Remote Sensing Ship Images Based on Improved VAN
Authors: Guoqing Zhou, Liang Huang, Qiao Sun. 《Computers, Materials & Continua》 (SCIE, EI), 2023, Issue 11, pp. 1985-2007 (23 pages)
Fine-grained classification of remote sensing ships makes it possible to identify specific ship types in remote sensing images and has broad application prospects in civil and military fields. However, current models do not account for the mixed multi-granularity features and complex backgrounds of ship targets in remote sensing images, leaving room for improvement in classification performance. To address these characteristics, this paper proposes a Metaformer and Residual fusion network based on the Visual Attention Network (VAN-MR) for fine-grained classification tasks. For the complex backgrounds of remote sensing images, VAN-MR adopts a parallel structure of large kernel attention and spatial attention to enhance feature extraction for targets of interest and improve the classification performance for remote sensing ship targets. For the mixing of multi-granularity features, VAN-MR uses a Metaformer structure in parallel with residual modules of different depths to extract ship features, considering both high-level and low-level semantic information, and thus achieves better classification performance on remote sensing ship images with mixed granularity. The model achieves 88.73% and 94.56% accuracy on the public fine-grained ship collection-23 (FGSC-23) and FGSCR-42 datasets, respectively, with only 53.47 M parameters and 9.9 G floating-point operations. The experimental results show that VAN-MR outperforms traditional CNN models and Transformer-based visual models with comparable parameter counts.
Keywords: fine-grained classification; metaformer; remote sensing; residual; ship image
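The large kernel attention used in VAN decomposes a large receptive field into a depth-wise convolution, a dilated depth-wise convolution, and a point-wise convolution whose output gates the input. The sketch below follows that published decomposition; the channel count and its pairing with a separate spatial-attention branch in VAN-MR are assumptions.

```python
import torch
import torch.nn as nn

class LargeKernelAttention(nn.Module):
    """VAN-style large kernel attention: a 5x5 depth-wise conv, a 7x7
    depth-wise conv with dilation 3 (together approximating a 21x21 kernel),
    and a 1x1 conv produce an attention map that gates the input."""

    def __init__(self, channels: int):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        self.dw_dilated = nn.Conv2d(channels, channels, 7, padding=9,
                                    dilation=3, groups=channels)
        self.pw = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return x * attn  # element-wise gating of the input features

if __name__ == "__main__":
    lka = LargeKernelAttention(64)
    print(lka(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```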
5. Experiments on image data augmentation techniques for geological rock type classification with convolutional neural networks (cited by 1)
Authors: Afshin Tatar, Manouchehr Haghighi, Abbas Zeinijahromi. 《Journal of Rock Mechanics and Geotechnical Engineering》, 2025, Issue 1, pp. 106-125 (20 pages)
The integration of image analysis through deep learning (DL) into rock classification represents a significant leap forward in geological research. While traditional methods remain invaluable for their expertise and historical context, DL offers a powerful complement by enhancing the speed, objectivity, and precision of the classification process. This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks (CNNs) for geological image analysis, particularly the classification of igneous, metamorphic, and sedimentary rock types from rock thin section (RTS) images. The study focuses on classic image augmentation techniques and evaluates their impact on model accuracy and precision. Results demonstrate that augmentation techniques such as Equalize significantly enhance the model's classification capabilities, achieving F1-scores of 0.9869 for igneous, 0.9884 for metamorphic, and 0.9929 for sedimentary rocks, improvements over the baseline results; the weighted average F1-score across all classes and techniques is 0.9886. Conversely, methods such as Distort decrease accuracy and F1-score, yielding 0.949 for igneous, 0.954 for metamorphic, and 0.9416 for sedimentary rocks, degrading performance relative to the baseline. The study underscores the practicality of image data augmentation in geological image classification and advocates the adoption of DL methods in this domain for automation and improved results. The findings can benefit fields including remote sensing, mineral exploration, and environmental monitoring by enhancing the accuracy of geological image analysis for both scientific research and industrial applications.
Keywords: deep learning (DL); image analysis; image data augmentation; convolutional neural networks (CNNs); geological image analysis; rock classification; rock thin section (RTS) images
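The Equalize augmentation highlighted in the results can be reproduced approximately with torchvision's RandomEqualize; the crop size, probabilities, and normalization statistics below are assumptions, and the study's exact augmentation settings are not claimed.

```python
import torch
from torchvision import transforms

# A minimal augmentation pipeline for rock thin-section crops.
# RandomEqualize applies histogram equalization with probability p,
# mirroring the "Equalize" technique highlighted in the study.
train_tfms = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomEqualize(p=0.5),      # histogram equalization on PIL/uint8 images
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

if __name__ == "__main__":
    from PIL import Image
    import numpy as np
    dummy = Image.fromarray((np.random.rand(300, 300, 3) * 255).astype("uint8"))
    print(train_tfms(dummy).shape)  # torch.Size([3, 224, 224])
```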
6. Congruent Feature Selection Method to Improve the Efficacy of Machine Learning-Based Classification in Medical Image Processing
Authors: Mohd Anjum, Naoufel Kraiem, Hong Min, Ashit Kumar Dutta, Yousef Ibrahim Daradkeh. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2025, Issue 1, pp. 357-384 (28 pages)
Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws or diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Many of the extracted image features are irrelevant and increase computation time. This article therefore uses an analytical learning paradigm to design a Congruent Feature Selection Method that selects the most relevant image features. The learning paradigm is trained using similarity- and correlation-based features over different textural intensities and pixel distributions. Pixel similarity across distribution patterns with high indexes is recommended for disease diagnosis, and the correlation based on intensity and distribution is then analyzed to improve feature selection congruency. The most congruent pixels are sorted in descending order of selection, identifying better regions than the raw distribution. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection, improving the probability of feature selection regardless of textures and medical image patterns. This process enhances the performance of ML applications for different medical image processing tasks. The proposed method improves accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared with other models on the selected dataset; the mean error and selection time are also reduced by 12.56% and 13.56%, respectively, for the same models and dataset.
Keywords: computer vision; feature selection; machine learning; region detection; texture analysis; image classification; medical images
7. Step-by-step to success: Multi-stage learning driven robust audiovisual fusion network for fine-grained bird species classification
Authors: Shanshan Xie, Jiangjian Xie, Yang Liu, Lianshuai Sha, Ye Tian, Jiahua Dong, Diwen Liang, Kaijun Pan, Junguo Zhang. 《Avian Research》, 2025, Issue 4, pp. 818-831 (14 pages)
Bird monitoring and protection are essential for maintaining biodiversity, and fine-grained bird classification has become a key focus in this field. Audio-visual modalities provide critical cues for this task, but robust feature extraction and efficient fusion remain major challenges. We introduce a multi-stage fine-grained audiovisual fusion network (MSFG-AVFNet) for fine-grained bird species classification, which addresses these challenges through two key components: (1) an audiovisual feature extraction module that adopts a multi-stage fine-tuning strategy to provide high-quality unimodal features, laying a solid foundation for modality fusion; and (2) an audiovisual feature fusion module that combines a max-pooling aggregation strategy with a novel audiovisual loss function to achieve effective and robust feature fusion. Experiments were conducted on the self-built AVB81 dataset and the publicly available SSW60 dataset, which contain 81 and 60 bird species, respectively. Comprehensive experiments demonstrate that our approach achieves notable performance gains, outperforming existing state-of-the-art methods. These results highlight its effectiveness in leveraging audiovisual modalities for fine-grained bird classification and its potential to support ecological monitoring and biodiversity research.
Keywords: audiovisual modality; bird species classification; feature fusion; fine-grained
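The max-pooling aggregation fusion can be sketched as an element-wise max over audio and visual embeddings projected to a shared dimension. The projection sizes and classifier head below are assumptions, and the paper's audiovisual loss function is not reproduced.

```python
import torch
import torch.nn as nn

class MaxPoolFusion(nn.Module):
    """Project audio and visual embeddings to a shared space, then fuse them
    by element-wise max pooling across modalities before classification."""

    def __init__(self, audio_dim: int, visual_dim: int,
                 fused_dim: int = 512, num_classes: int = 81):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, fused_dim)
        self.visual_proj = nn.Linear(visual_dim, fused_dim)
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, audio_feat, visual_feat):
        stacked = torch.stack([self.audio_proj(audio_feat),
                               self.visual_proj(visual_feat)], dim=0)  # (2, B, D)
        fused, _ = stacked.max(dim=0)                                  # (B, D)
        return self.classifier(fused)

if __name__ == "__main__":
    model = MaxPoolFusion(audio_dim=768, visual_dim=2048)
    logits = model(torch.randn(4, 768), torch.randn(4, 2048))
    print(logits.shape)  # torch.Size([4, 81])
```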
8. A teacher-student based attention network for fine-grained image recognition
Authors: Ang Li, Xueyi Zhang, Peilin Li, Bin Kang. 《Digital Communications and Networks》, 2025, Issue 1, pp. 52-59 (8 pages)
The fine-grained image recognition (FGIR) task is dedicated to distinguishing similar sub-categories that belong to the same super-category, such as bird species or car types. To highlight visual differences, existing FGIR works often follow two steps: discriminative sub-region localization and local feature representation. However, these works pay less attention to global context information, neglecting the fact that subtle visual differences in challenging scenarios can be highlighted by exploiting the spatial relationships among sub-regions from a global viewpoint. In this paper, we therefore consider both global and local information for FGIR and propose a collaborative teacher-student strategy to reinforce and unify the two types of information. Our framework is implemented mainly with convolutional neural networks and is referred to as the Teacher-Student Based Attention Convolutional Neural Network (T-S-ACNN). For fine-grained local information, we choose the classic Multi-Attention Network (MA-Net) as our baseline and propose a boundary constraint to further reduce background noise in the local attention maps; the discriminative sub-regions then tend to appear in the area occupied by the fine-grained object, leading to more accurate sub-region localization. For fine-grained global information, we design a graph-convolution-based Global Attention Network (GA-Net), which combines the local attention maps extracted by MA-Net with non-local techniques to explore spatial relationships among sub-regions. Finally, we develop a collaborative teacher-student strategy that adaptively determines the attended roles and optimization modes, enhancing the cooperative reinforcement of MA-Net and GA-Net. Extensive experiments on the CUB-200-2011, Stanford Cars, and FGVC Aircraft datasets illustrate the promising performance of our framework.
Keywords: fine-grained image recognition; collaborative teacher-student strategy; multi-attention; global attention
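The collaborative teacher-student idea can be illustrated with a mutual distillation loss in which, for each sample, the branch that is more confident about the true class acts as the teacher. The role-assignment rule and temperature below are assumptions, not the paper's exact optimization scheme.

```python
import torch
import torch.nn.functional as F

def collaborative_distill_loss(logits_local: torch.Tensor,
                               logits_global: torch.Tensor,
                               targets: torch.Tensor,
                               temperature: float = 2.0) -> torch.Tensor:
    """Per sample, the branch assigning higher probability to the true class
    plays the teacher; the other branch is pulled toward its (detached)
    softened prediction, on top of the usual cross-entropy terms."""
    ce = F.cross_entropy(logits_local, targets) + F.cross_entropy(logits_global, targets)

    p_local = F.softmax(logits_local, dim=1).gather(1, targets[:, None]).squeeze(1)
    p_global = F.softmax(logits_global, dim=1).gather(1, targets[:, None]).squeeze(1)
    local_is_teacher = (p_local >= p_global).float()          # (B,)

    def kd(student, teacher):
        return F.kl_div(F.log_softmax(student / temperature, dim=1),
                        F.softmax(teacher.detach() / temperature, dim=1),
                        reduction="none").sum(dim=1) * temperature ** 2

    kd_term = local_is_teacher * kd(logits_global, logits_local) \
        + (1.0 - local_is_teacher) * kd(logits_local, logits_global)
    return ce + kd_term.mean()

if __name__ == "__main__":
    B, C = 4, 200
    loss = collaborative_distill_loss(torch.randn(B, C), torch.randn(B, C),
                                      torch.randint(0, C, (B,)))
    print(loss.item())
```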
9. An EfficientNet integrated ResNet deep network and explainable AI for breast lesion classification from ultrasound images
Authors: Kiran Jabeen, Muhammad Attique Khan, Ameer Hamza, Hussain Mobarak Albarakati, Shrooq Alsenan, Usman Tariq, Isaac Ofori. 《CAAI Transactions on Intelligence Technology》, 2025, Issue 3, pp. 842-857 (16 pages)
Breast cancer is one of the major causes of death in women, and early diagnosis is important for screening and controlling the mortality rate; a computer-aided diagnosis system is therefore highly desirable. Ultrasound is an important examination technique for breast cancer diagnosis due to its low cost. Recently, many learning-based techniques have been introduced to classify breast cancer using the breast ultrasound imaging (BUSI) dataset; however, manual handling is not an easy process and is time consuming. The authors propose an EfficientNet-integrated ResNet deep network and an XAI-based framework for accurately classifying breast cancer (malignant and benign). In the initial step, data augmentation is performed to increase the number of training samples, using three pixel-flip operations: horizontal, vertical, and 90°. Two pretrained deep learning models are then employed, with some layers skipped and the rest fine-tuned. Both fine-tuned models are trained using deep transfer learning, and features are extracted from a deeper layer. Explainable artificial intelligence is used to analyse the performance of the trained models. A new feature selection technique based on the cuckoo search algorithm, called cuckoo search controlled standard error mean, then selects the best features, which are fused using a new parallel zero-padding maximum correlated coefficient scheme. Finally, the selection algorithm is applied again to the fused feature vector, and the result is classified using machine learning algorithms. The experimental process of the proposed framework is conducted on the publicly available BUSI dataset, obtaining 98.4% and 98% accuracy in two different experiments. Comparison with recent techniques shows improved accuracy, and the proposed framework executes in less time than the original deep learning models.
Keywords: augmentation; breast cancer; classification; deep learning; optimization; ultrasound images
10. CloudViT: A Lightweight Ground-Based Cloud Image Classification Model with the Ability to Capture Global Features
Authors: Daoming Wei, Fangyan Ge, Bopeng Zhang, Zhiqiang Zhao, Dequan Li, Lizong Xi, Jinrong Hu, Xin Wang. 《Computers, Materials & Continua》, 2025, Issue 6, pp. 5729-5746 (18 pages)
Accurate cloud classification plays a crucial role in aviation safety, climate monitoring, and localized weather forecasting. Current research focuses on machine learning techniques, particularly deep-learning-based models, for cloud type identification. However, traditional approaches such as convolutional neural networks (CNNs) have difficulty capturing global contextual information and are computationally expensive, which restricts their usability in resource-limited environments. To tackle these issues, we present the Cloud Vision Transformer (CloudViT), a lightweight model that integrates CNNs with Transformers, enabling an effective balance between local and global feature extraction. Specifically, CloudViT comprises two innovative modules: a Feature Extraction module (E_Module) and a Downsampling module (D_Module). These modules significantly reduce the number of model parameters and the computational complexity while maintaining translation invariance and enhancing contextual comprehension. Overall, CloudViT has 0.93×10^6 parameters, more than ten times fewer than the state-of-the-art (SOTA) model CloudNet. Comprehensive evaluations on the HBMCD and SWIMCAT datasets showcase the outstanding performance of CloudViT, which achieves classification accuracies of 98.45% and 100%, respectively. Moreover, the efficiency and scalability of CloudViT make it an ideal candidate for deployment in mobile cloud observation systems, enabling real-time cloud image classification. The proposed hybrid architecture offers a promising approach for advancing ground-based cloud image classification, with significant potential for both optimizing performance and facilitating practical deployment.
Keywords: image classification; ground-based cloud images; lightweight neural networks; attention mechanism; deep learning; vision transformer
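A lightweight hybrid block of the kind described, convolutional downsampling followed by a small Transformer encoder over the resulting tokens, can be sketched as follows; the layer sizes and the exact composition of CloudViT's E_Module and D_Module are assumptions.

```python
import torch
import torch.nn as nn

class HybridDownBlock(nn.Module):
    """Strided depth-wise conv halves the spatial size (local features,
    translation invariance); a small Transformer encoder then mixes the
    resulting tokens globally."""

    def __init__(self, in_ch: int, out_ch: int, heads: int = 4):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, stride=2, padding=1, groups=in_ch),  # depth-wise
            nn.Conv2d(in_ch, out_ch, 1),                                    # point-wise
            nn.BatchNorm2d(out_ch), nn.GELU(),
        )
        self.encoder = nn.TransformerEncoderLayer(
            d_model=out_ch, nhead=heads, dim_feedforward=2 * out_ch,
            batch_first=True, norm_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.down(x)                              # (B, C, H/2, W/2)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)         # (B, H*W, C)
        tokens = self.encoder(tokens)                 # global token mixing
        return tokens.transpose(1, 2).reshape(b, c, h, w)

if __name__ == "__main__":
    block = HybridDownBlock(32, 64)
    print(block(torch.randn(2, 32, 64, 64)).shape)    # torch.Size([2, 64, 32, 32])
```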
11. UltraSegNet: A Hybrid Deep Learning Framework for Enhanced Breast Cancer Segmentation and Classification on Ultrasound Images
Authors: Suhaila Abuowaida, Hamza Abu Owida, Deema Mohammed Alsekait, Nawaf Alshdaifat, Diaa Salama Abd Elminaam, Mohammad Alshinwan. 《Computers, Materials & Continua》, 2025, Issue 5, pp. 3303-3333 (31 pages)
Segmenting a breast ultrasound image remains challenging due to speckle noise, operator dependency, and variation in image quality. This paper presents the UltraSegNet architecture, which addresses these challenges through three key technical innovations: (1) a modified ResNet-50 backbone with sequential 3×3 convolutions to preserve the fine anatomical details needed for finding lesion boundaries; (2) a computationally efficient regional attention mechanism that operates on high-resolution features without a transformer's memory overhead; and (3) an adaptive feature fusion strategy that adjusts local and global features based on how the image is being used. Extensive evaluation on two distinct datasets demonstrates UltraSegNet's superior performance: on the BUSI dataset it obtains a precision of 0.915, a recall of 0.908, and an F1-score of 0.911, and on the UDAIT dataset it achieves robust performance across the board, with a precision of 0.901 and a recall of 0.894. Importantly, these improvements are achieved at clinically feasible computation times, taking 235 ms per image on standard GPU hardware. Notably, UltraSegNet performs remarkably well on difficult small lesions (less than 10 mm), achieving a detection accuracy of 0.891, a large improvement over traditional methods, which struggle with small-scale features and only achieve 0.63-0.71 accuracy. This improvement in small-lesion detection is particularly crucial for early-stage breast cancer identification. These results demonstrate that UltraSegNet can be practically deployed in clinical workflows to improve breast cancer screening accuracy.
Keywords: breast cancer; ultrasound image; segmentation; classification; deep learning
12. Compressed meta-optical encoder for image classification
Authors: Anna Wirth-Singh, Jinlin Xiang, Minho Choi, Johannes E. Fröch, Luocheng Huang, Shane Colburn, Eli Shlizerman, Arka Majumdar. 《Advanced Photonics Nexus》, 2025, Issue 2, pp. 87-96 (10 pages)
Optical and hybrid convolutional neural networks (CNNs) have recently become of increasing interest for low-latency, low-power image classification and computer-vision tasks. However, implementing optical nonlinearity is challenging, and omitting the nonlinear layers in a standard CNN comes with a significant reduction in accuracy. We use knowledge distillation to compress a modified AlexNet to a single linear convolutional layer and an electronic backend (two fully connected layers), obtaining performance comparable to a purely electronic CNN with five convolutional layers and three fully connected layers. We implement the convolution optically by engineering the point spread function of an inverse-designed meta-optic. Using this hybrid approach, we estimate a reduction in multiply-accumulate operations from 17M in a conventional electronic modified AlexNet to only 86K in the hybrid compressed network enabled by the optical front end, constituting over two orders of magnitude of reduction in latency and power consumption. Furthermore, we experimentally demonstrate that the classification accuracy of the system exceeds 93% on the MNIST dataset of handwritten digits.
Keywords: neural network; meta-optics; image classification; knowledge distillation; optical computing
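A minimal sketch of the distillation setup: a student with a single bias-free, activation-free convolution (standing in for the optical front end) and a two-layer electronic head, trained against a teacher's softened logits. The channel count, kernel size, temperature, and loss weighting are assumptions, and the meta-optic point spread function is not modeled.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearConvStudent(nn.Module):
    """Student for MNIST: one linear convolution (no bias, no nonlinearity),
    mimicking an optical front end, followed by a two-layer electronic head."""

    def __init__(self, channels: int = 8, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(1, channels, kernel_size=7, padding=3, bias=False)
        self.fc1 = nn.Linear(channels * 28 * 28, 128)
        self.fc2 = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.conv(x).flatten(1)           # strictly linear "optical" stage
        return self.fc2(F.relu(self.fc1(x)))  # nonlinearity only in the backend

def distillation_loss(student_logits, teacher_logits, targets,
                      T: float = 4.0, alpha: float = 0.7):
    """Blend soft-target KL divergence (teacher knowledge) with hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard

if __name__ == "__main__":
    student = LinearConvStudent()
    x = torch.randn(16, 1, 28, 28)
    teacher_logits = torch.randn(16, 10)      # stand-in for a trained AlexNet teacher
    loss = distillation_loss(student(x), teacher_logits, torch.randint(0, 10, (16,)))
    print(loss.item())
```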
13. Central-Pixel Guiding Sub-Pixel and Sub-Channel Convolution Network for Hyperspectral Image Classification
Authors: Xin Guan, Shan Wang, Qiang Li. 《Journal of Beijing Institute of Technology》, 2025, Issue 5, pp. 510-525 (16 pages)
In hyperspectral image classification (HSIC), accurately extracting spatial and spectral information from hyperspectral images (HSIs) is crucial for achieving precise classification. However, due to low spatial resolution and complex category boundaries, mixed pixels containing features from multiple classes are inevitable in HSIs. Additionally, spectral similarity among different classes makes it challenging to extract the distinctive spectral features essential for HSIC. To address the impact of mixed pixels and spectral similarity, we propose a central-pixel guiding sub-pixel and sub-channel convolution network (CP-SPSC) that extracts more precise spatial and spectral features. First, we design spatial attention (CP-SPA) and spectral attention (CP-SPE) informed by the central pixel to effectively reduce spectral interference from irrelevant categories within the same patch. CP-SPA then guides 2D sub-pixel convolution (SPConv2d) to capture spatial features finer than the pixel level, while CP-SPE guides 1D sub-channel convolution (SCConv1d) in selecting more precise spectral channels. For fusing spatial and spectral information at the feature level, the spectral feature extension transformation module (SFET) adopts mirror padding and snake permutation to transform the 1D spectral information of the center pixel into 2D spectral features. Experiments on three popular datasets demonstrate that our method outperforms several state-of-the-art methods in accuracy.
Keywords: hyperspectral image classification; similar spectra; mixed pixel; attention
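2D sub-pixel convolution is commonly realized as a convolution followed by PixelShuffle. The sketch below shows that building block with a simple gate computed from the patch's center pixel standing in for the CP-SPA guidance; the gating form and channel sizes are assumptions, not the paper's module.

```python
import torch
import torch.nn as nn

class SubPixelConv2d(nn.Module):
    """Conv + PixelShuffle: features are predicted at r^2 channels and
    rearranged into an r-times finer spatial grid (sub-pixel resolution)."""

    def __init__(self, in_ch: int, out_ch: int, upscale: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch * upscale ** 2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(upscale)
        # A crude stand-in for central-pixel guidance: a gate computed from
        # the patch's center spectrum, broadcast over the upsampled map.
        self.center_gate = nn.Sequential(nn.Linear(in_ch, out_ch), nn.Sigmoid())

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        b, c, h, w = patch.shape
        up = self.shuffle(self.conv(patch))                   # (B, out_ch, 2H, 2W)
        center = patch[:, :, h // 2, w // 2]                  # (B, C) center pixel
        gate = self.center_gate(center)[:, :, None, None]     # (B, out_ch, 1, 1)
        return up * gate

if __name__ == "__main__":
    block = SubPixelConv2d(in_ch=30, out_ch=64)               # e.g. 30 spectral bands
    print(block(torch.randn(2, 30, 9, 9)).shape)              # torch.Size([2, 64, 18, 18])
```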
14. Multi-Scale Feature Fusion and Advanced Representation Learning for Multi-Label Image Classification
Authors: Naikang Zhong, Xiao Lin, Wen Du, Jin Shi. 《Computers, Materials & Continua》, 2025, Issue 3, pp. 5285-5306 (22 pages)
Multi-label image classification is a challenging task due to the diverse sizes and complex backgrounds of objects in images. Obtaining class-specific precise representations at different scales is a key aspect of feature representation. However, existing methods often rely on single-scale deep features, neglecting shallow and deeper layer features, which poses challenges when predicting objects of varying scales within the same image. Although some studies have explored multi-scale features, they rarely address the flow of information between scales or efficiently obtain class-specific precise representations at different scales. To address these issues, we propose a two-stage, three-branch Transformer-based framework. The first stage incorporates multi-scale image feature extraction and hierarchical scale attention, enabling the model to consider objects at various scales while enhancing the flow of information across feature scales and improving generalization to diverse object scales. The second stage includes a global feature enhancement module and a region selection module: the former strengthens interconnections between different image regions, mitigating incomplete representations, while the latter models the cross-modal relationships between image features and labels. Together, these components enable the efficient acquisition of class-specific precise feature representations. Extensive experiments on the public COCO2014, VOC2007, and VOC2012 datasets demonstrate the effectiveness of our method, which achieves consistent gains of 0.3%, 0.4%, and 0.2% over state-of-the-art methods on the three datasets, respectively. These results validate the reliability and superiority of our approach for multi-label image classification.
Keywords: image classification; multi-label; multi-scale; attention mechanisms; feature fusion
15. Hybrid Fusion Net with Explainability: A Novel Explainable Deep Learning-Based Hybrid Framework for Enhanced Skin Lesion Classification Using Dermoscopic Images
Authors: Mohamed Hammad, Mohammed El Affendi, Souham Meshoul. 《Computer Modeling in Engineering & Sciences》, 2025, Issue 10, pp. 1055-1086 (32 pages)
Skin cancer is among the most common malignancies worldwide, but its mortality burden is largely driven by aggressive subtypes such as melanoma, with outcomes varying across regions and healthcare settings. These variations emphasize the importance of reliable diagnostic technologies that support clinicians in detecting skin malignancies with higher accuracy. Traditional diagnostic methods often rely on subjective visual assessment, which can lead to misdiagnosis. This study addresses these challenges by developing HybridFusionNet, a novel model that integrates convolutional neural networks (CNNs) with 1D feature extraction techniques to enhance diagnostic accuracy. Using two extensive datasets, BCN20000 and HAM10000, the methodology includes data preprocessing, the Synthetic Minority Oversampling Technique combined with Edited Nearest Neighbors (SMOTEENN) for data balancing, and feature selection optimized with the Tree-based Pipeline Optimization Tool (TPOT). The results demonstrate significant improvements over traditional CNN models, with an accuracy of 0.9693 on BCN20000 and 0.9909 on HAM10000. HybridFusionNet not only outperforms conventional methods but also effectively addresses class imbalance. To enhance transparency, it integrates post-hoc explanation techniques such as LIME, which highlight the features influencing predictions. These findings highlight the potential of HybridFusionNet to support real-world applications, including physician-assist systems, teledermatology, and large-scale skin cancer screening programs. By improving diagnostic efficiency and enabling access to expert-level analysis, the model may enhance patient outcomes and foster greater trust in artificial intelligence (AI)-assisted clinical decision-making.
Keywords: AI; CNN; deep learning; image classification; model optimization; skin cancer detection
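The SMOTEENN balancing step can be sketched with imbalanced-learn; it operates on tabular feature vectors, so here it is applied to synthetic features standing in for extracted lesion descriptors, and the class counts are assumptions.

```python
import numpy as np
from collections import Counter
from imblearn.combine import SMOTEENN

if __name__ == "__main__":
    # Stand-in for CNN-extracted lesion features: 7 imbalanced classes.
    rng = np.random.default_rng(0)
    counts = [3000, 900, 450, 300, 220, 150, 90]
    X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n, 64))
                   for i, n in enumerate(counts)])
    y = np.concatenate([np.full(n, i) for i, n in enumerate(counts)])

    # SMOTE oversamples minority classes, then Edited Nearest Neighbours
    # removes samples whose neighbourhood disagrees with their label.
    X_res, y_res = SMOTEENN(random_state=42).fit_resample(X, y)
    print("before:", Counter(y))
    print("after: ", Counter(y_res))
```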
16. MMGC-Net: Deep neural network for classification of mineral grains using multi-modal polarization images
Authors: Jun Shu, Xiaohai He, Qizhi Teng, Pengcheng Yan, Haibo He, Honggang Chen. 《Journal of Rock Mechanics and Geotechnical Engineering》, 2025, Issue 6, pp. 3894-3909 (16 pages)
The multi-modal characteristics of mineral particles play a pivotal role in improving classification accuracy, which is critical for a profound understanding of the Earth's composition and effective exploitation of its resources. However, existing methods for classifying mineral particles do not fully utilize these multi-modal features, limiting classification accuracy. Furthermore, when conventional multi-modal image classification methods are applied to plane-polarized and cross-polarized sequence images of mineral particles, they encounter issues such as information loss, misaligned features, and difficulties in spatiotemporal feature extraction. To address these challenges, we propose a multi-modal mineral particle polarization image classification network (MMGC-Net) for precise mineral particle classification. MMGC-Net first employs a two-dimensional (2D) backbone network with shared parameters to extract features from the two types of polarized images, ensuring feature alignment. A cross-polarized intra-modal feature fusion module then refines spatiotemporal features from the extracted cross-polarized sequence features, and an inter-modal feature fusion module finally integrates the two types of modal features to improve classification precision. Quantitative and qualitative experimental results indicate that, compared with current state-of-the-art multi-modal image classification methods, MMGC-Net is markedly superior in mineral particle multi-modal feature learning and in four classification evaluation metrics, and it also demonstrates better stability than existing models.
Keywords: mineral particles; multi-modal image classification; shared parameters; feature fusion; spatiotemporal feature
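The shared-parameter idea, one 2D backbone encoding both plane-polarized and cross-polarized images so their features stay aligned, can be sketched with a torchvision ResNet trunk. The backbone choice, the averaging over the cross-polarized sequence, and fusion by concatenation are assumptions rather than MMGC-Net's actual modules.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SharedBackboneFusion(nn.Module):
    """One ResNet-18 trunk (shared weights) encodes both polarization
    modalities; cross-polarized sequence frames are averaged, then the two
    modal features are concatenated for classification."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        trunk = resnet18(weights=None)
        self.backbone = nn.Sequential(*list(trunk.children())[:-1])  # drop the fc head
        self.classifier = nn.Linear(512 * 2, num_classes)

    def forward(self, plane_img, cross_seq):
        # plane_img: (B, 3, H, W); cross_seq: (B, T, 3, H, W) rotation sequence.
        f_plane = self.backbone(plane_img).flatten(1)                 # (B, 512)
        b, t = cross_seq.shape[:2]
        f_cross = self.backbone(cross_seq.flatten(0, 1)).flatten(1)   # (B*T, 512)
        f_cross = f_cross.view(b, t, -1).mean(dim=1)                  # (B, 512)
        return self.classifier(torch.cat([f_plane, f_cross], dim=1))

if __name__ == "__main__":
    model = SharedBackboneFusion()
    logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 4, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 10])
```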
17. Multi-Label Image Classification Model Based on Multiscale Fusion and Adaptive Label Correlation
Authors: YE Jihua, JIANG Lu, XIAO Shunjie, ZONG Yi, JIANG Aiwen. 《Journal of Shanghai Jiaotong University (Science)》, 2025, Issue 5, pp. 889-898 (10 pages)
Current research on multi-label image classification mainly focuses on exploiting the correlation between labels to improve classification accuracy. In existing methods, however, label correlation is calculated from the statistical information of the data; this correlation is global and dataset-dependent, and is not suitable for all samples. Moreover, in the process of extracting image features, the information of small objects is easily lost, resulting in low classification accuracy for small objects. To this end, this paper proposes a multi-label image classification model based on multiscale fusion and adaptive label correlation. The main idea is as follows: first, feature maps at multiple scales are fused to enhance the feature information of small objects; semantic guidance then decomposes the fused feature map into per-category feature vectors; finally, the correlation between categories in the image is adaptively mined through the self-attention mechanism of a graph attention network, yielding feature vectors containing category-related information for the final classification. The mean average precision of the model on the two public datasets VOC 2007 and MS COCO 2014 reaches 95.6% and 83.6%, respectively, and most indicators are better than those of the latest existing methods.
Keywords: image classification; label correlation; graph attention network; small object; multi-scale fusion
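The adaptive, per-image label correlation can be approximated with self-attention over the per-category feature vectors produced by semantic decomposition. Using nn.MultiheadAttention in place of a graph attention network, and the per-class scoring head, are simplifying assumptions.

```python
import torch
import torch.nn as nn

class AdaptiveLabelCorrelation(nn.Module):
    """Self-attention over per-category feature vectors: each category's
    vector attends to the others, so label correlations are mined per image
    rather than taken from dataset-level co-occurrence statistics."""

    def __init__(self, num_classes: int, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.score = nn.Linear(dim, 1)     # one logit per category vector

    def forward(self, class_feats: torch.Tensor) -> torch.Tensor:
        # class_feats: (B, num_classes, dim) from semantic-guided decomposition.
        mixed, _ = self.attn(class_feats, class_feats, class_feats)
        mixed = self.norm(class_feats + mixed)          # residual + norm
        return self.score(mixed).squeeze(-1)            # (B, num_classes) logits

if __name__ == "__main__":
    head = AdaptiveLabelCorrelation(num_classes=20)
    print(head(torch.randn(2, 20, 256)).shape)          # torch.Size([2, 20])
```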
18. New MDA Transformation Process from Urban Satellite Image Classification to Specific Urban Landsat Satellite Image Classification
Authors: Hafsa Ouchra, Abdessamad Belangour, Allae Erraissi, Maria Labied. 《Journal of Environmental & Earth Sciences》, 2025, Issue 1, pp. 81-91 (11 pages)
In a context where urban satellite image processing technologies are undergoing rapid evolution, this article presents an innovative and rigorous approach to satellite image classification applied to urban planning. This research proposes an integrated methodological framework, based on the principles of model-driven engineering (MDE), to transform a generic meta-model into a meta-model specifically dedicated to urban satellite image classification. We implemented this transformation using the Atlas Transformation Language (ATL), guaranteeing a smooth and consistent transition from a platform-independent model (PIM) to a platform-specific model (PSM), following the principles of model-driven architecture (MDA). Applying this MDE methodology enables advanced structuring of satellite data for targeted urban planning analyses, making it possible to classify various urban zones such as built-up, cultivated, arid, and water areas. The novelty of this approach lies in the automation and standardization of the classification process, which significantly reduces the need for manual intervention and thus improves the reliability, reproducibility, and efficiency of urban data analysis. By adopting this method, decision-makers and urban planners gain a powerful tool for systematically and consistently analyzing and interpreting satellite images, facilitating decision-making in critical areas such as urban space management, infrastructure planning, and environmental preservation.
Keywords: model-driven engineering; meta-model; ATL transformation; urban satellite image classification; meta-model
19. Dual-Classifier Label Correction Network for Carotid Plaque Classification on Multi-Center Ultrasound Images
Authors: Louyi Jiang, Sulei Wang, Jiang Xie, Haiya Wang, Wei Shao. 《Computers, Materials & Continua》, 2025, Issue 6, pp. 5445-5460 (16 pages)
Carotid artery plaques are a major contributor to the morbidity and mortality associated with cerebrovascular disease, and their clinical significance is largely determined by the risk linked to plaque vulnerability. Classifying plaque risk is therefore one of the most critical tasks in the clinical management of this condition. While classification models derived from individual medical centers have been extensively investigated, these single-center models often fail to generalize well to multi-center data due to variations in ultrasound images caused by differences in physician expertise and equipment. To address this limitation, a Dual-Classifier Label Correction Network (DCLCN) is proposed for the classification of carotid plaque ultrasound images across multiple medical centers. The DCLCN designs a multi-center domain adaptation module that leverages a dual-classifier strategy to extract knowledge from both source and target centers, reducing feature discrepancies through a domain adaptation layer. Additionally, to mitigate the impact of image noise, a label modeling and correction module is introduced to generate pseudo-labels for the target centers and iteratively refine them using an end-to-end correction mechanism. Experiments on a carotid plaque dataset collected from three medical centers demonstrate that the DCLCN achieves commendable performance and robustness.
Keywords: deep learning; medical image processing; carotid plaque classification; multi-center data
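The pseudo-label generation step can be illustrated by averaging the two classifiers' predictions on target-center images and keeping only confident, agreeing labels. The confidence threshold and agreement rule are assumptions, not the paper's end-to-end correction mechanism.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def generate_pseudo_labels(logits_c1: torch.Tensor,
                           logits_c2: torch.Tensor,
                           threshold: float = 0.9):
    """Assign pseudo-labels to unlabeled target-center samples only when the
    two classifiers agree and their averaged confidence exceeds `threshold`.

    Returns (pseudo_labels, keep_mask); masked-out samples stay unlabeled."""
    p1, p2 = F.softmax(logits_c1, dim=1), F.softmax(logits_c2, dim=1)
    avg = 0.5 * (p1 + p2)
    conf, labels = avg.max(dim=1)
    agree = p1.argmax(dim=1) == p2.argmax(dim=1)
    keep = agree & (conf >= threshold)
    return labels, keep

if __name__ == "__main__":
    l1, l2 = torch.randn(6, 3) * 3, torch.randn(6, 3) * 3  # two classifier heads
    labels, keep = generate_pseudo_labels(l1, l2)
    print(labels.tolist(), keep.tolist())
```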
20. Enhancing Medical Image Classification with BSDA-Mamba: Integrating Bayesian Random Semantic Data Augmentation and Residual Connections
Authors: Honglin Wang, Yaohua Xu, Cheng Zhu. 《Computers, Materials & Continua》, 2025, Issue 6, pp. 4999-5018 (20 pages)
Medical image classification is crucial in disease diagnosis, treatment planning, and clinical decision-making. We introduce a novel medical image classification approach that integrates Bayesian Random Semantic Data Augmentation (BSDA) with a Vision Mamba-based model for medical image classification (MedMamba), enhanced by residual connection blocks; we name the model BSDA-Mamba. BSDA augments medical image data semantically, enhancing the model's generalization ability and classification performance. MedMamba, a deep-learning-based state space model, excels at capturing long-range dependencies in medical images. By incorporating residual connections, BSDA-Mamba further improves feature extraction capability. Through comprehensive experiments on eight medical image datasets, we demonstrate that BSDA-Mamba outperforms existing models in accuracy, area under the curve, and F1-score. Our results highlight BSDA-Mamba's potential as a reliable tool for medical image analysis, particularly in handling diverse imaging modalities from X-rays to MRI. The open-sourcing of our model's code and datasets will facilitate the reproduction and extension of our work.
Keywords: deep learning; medical image classification; data augmentation; visual state space model
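BSDA perturbs samples along semantic directions in feature space; the block below is a simplified stand-in that adds Gaussian noise scaled by per-dimension feature statistics inside a residual block. The noise model and its placement are assumptions, not the published BSDA formulation.

```python
import torch
import torch.nn as nn

class SemanticAugResidualBlock(nn.Module):
    """Residual MLP block that, during training, perturbs features along
    directions scaled by the batch's per-dimension standard deviation --
    a simplified proxy for semantic (feature-space) data augmentation."""

    def __init__(self, dim: int, noise_strength: float = 0.5):
        super().__init__()
        self.net = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim), nn.GELU(),
                                 nn.Linear(dim, dim))
        self.noise_strength = noise_strength

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            std = x.detach().std(dim=0, keepdim=True)          # per-feature spread
            x = x + self.noise_strength * std * torch.randn_like(x)
        return x + self.net(x)                                 # residual connection

if __name__ == "__main__":
    block = SemanticAugResidualBlock(dim=256)
    block.train()
    print(block(torch.randn(8, 256)).shape)  # torch.Size([8, 256])
```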