Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws or diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Most extracted image features are irrelevant and lead to increased computation time. Therefore, this article uses an analytical learning paradigm to design a Congruent Feature Selection Method that selects the most relevant image features. This process trains the learning paradigm using similarity- and correlation-based features over different textural intensities and pixel distributions. The similarity between pixels over the various distribution patterns with high indexes is recommended for disease diagnosis. Later, the correlation based on intensity and distribution is analyzed to improve the feature selection congruency. The more congruent pixels are then sorted in descending order of selection, which identifies better regions than the distribution alone. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection. Therefore, the probability of feature selection, regardless of the textures and medical image patterns, is improved. This process enhances the performance of ML applications for different medical image processing tasks. The proposed method improves the accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared to other models for the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared to the same models and dataset.
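The abstract does not give the congruency score in closed form. A minimal sketch of the underlying idea, assuming a plain Pearson-correlation ranking of features against the diagnostic label (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def rank_features_by_correlation(X, y, k):
    """Score each feature by |Pearson correlation| with the label vector
    and return the indices of the top-k features, sorted in descending
    order of correlation (most relevant first)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    # Per-feature covariance with the label, normalised to a correlation.
    cov = Xc.T @ yc / len(y)
    std = Xc.std(axis=0) * yc.std() + 1e-12  # guard against zero variance
    corr = np.abs(cov / std)
    return np.argsort(corr)[::-1][:k]

# Toy data: feature 0 tracks the label, feature 1 is pure noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200).astype(float)
X = np.stack([y + 0.1 * rng.normal(size=200),
              rng.normal(size=200)], axis=1)
print(rank_features_by_correlation(X, y, 1))  # → [0]
```

Dropping low-correlation features this way is what cuts the computation time the abstract refers to.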
In this paper, we propose a hierarchical attention dual network (DNet) for fine-grained image classification. The DNet randomly selects pairs of inputs from the dataset and compares the differences between them through hierarchical attention feature learning, which simultaneously removes noise and retains salient features. The loss function accounts for the losses of difference in paired images according to the intra-variance and inter-variance. In addition, we collect a disaster scene dataset from remote sensing images, containing complex scenes and multiple types of disasters, and apply the proposed method to disaster scene classification. Compared to other methods, experimental results show that the DNet with hierarchical attention is robust across datasets and performs better.
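The paired intra-/inter-variance loss is not specified in closed form above. A minimal contrastive-style sketch, assuming a squared distance for same-class pairs and a hinged margin for different-class pairs (the function name, margin value, and toy embeddings are our assumptions):

```python
import numpy as np

def paired_variance_loss(f1, f2, same, margin=1.0):
    """Contrastive-style loss on image pairs: squared distance for
    same-class pairs (penalising intra-variance) and a hinge on the
    margin for different-class pairs (encouraging inter-variance)."""
    d = np.linalg.norm(f1 - f2, axis=1)
    intra = same * d ** 2
    inter = (1 - same) * np.maximum(margin - d, 0.0) ** 2
    return float(np.mean(intra + inter))

f1 = np.array([[0.0, 0.0], [0.0, 0.0]])
f2 = np.array([[0.1, 0.0], [2.0, 0.0]])
same = np.array([1.0, 0.0])  # pair 0: same class; pair 1: different
print(paired_variance_loss(f1, f2, same))
```

Only the close same-class pair contributes here; the different-class pair already exceeds the margin, so its term is zero.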
Accurate cloud classification plays a crucial role in aviation safety, climate monitoring, and localized weather forecasting. Current research has focused on machine learning techniques, particularly deep learning-based models, for cloud type identification. However, traditional approaches such as convolutional neural networks (CNNs) encounter difficulties in capturing global contextual information. In addition, they are computationally expensive, which restricts their usability in resource-limited environments. To tackle these issues, we present the Cloud Vision Transformer (CloudViT), a lightweight model that integrates CNNs with Transformers. The integration enables an effective balance between local and global feature extraction. Specifically, CloudViT comprises two innovative modules: Feature Extraction (E_Module) and Downsampling (D_Module). These modules significantly reduce the number of model parameters and the computational complexity while maintaining translation invariance and enhancing contextual comprehension. Overall, CloudViT contains 0.93×10^6 parameters, a more than ten-fold decrease compared to the SOTA (state-of-the-art) model CloudNet. Comprehensive evaluations conducted on the HBMCD and SWIMCAT datasets showcase the outstanding performance of CloudViT, which achieves classification accuracies of 98.45% and 100%, respectively. Moreover, the efficiency and scalability of CloudViT make it an ideal candidate for deployment in mobile cloud observation systems, enabling real-time cloud image classification. The proposed hybrid architecture offers a promising approach for advancing ground-based cloud image classification, with significant potential for both optimizing performance and facilitating practical deployment.
Optical and hybrid convolutional neural networks (CNNs) have recently become of increasing interest for low-latency, low-power image classification and computer-vision tasks. However, implementing optical nonlinearity is challenging, and omitting the nonlinear layers in a standard CNN comes with a significant reduction in accuracy. We use knowledge distillation to compress a modified AlexNet to a single linear convolutional layer and an electronic backend (two fully connected layers), obtaining performance comparable to a purely electronic CNN with five convolutional layers and three fully connected layers. We implement the convolution optically by engineering the point spread function of an inverse-designed meta-optic. Using this hybrid approach, we estimate a reduction in multiply-accumulate operations from 17 M in a conventional electronic modified AlexNet to only 86 K in the hybrid compressed network enabled by the optical front end, constituting over two orders of magnitude of reduction in latency and power consumption. Furthermore, we experimentally demonstrate that the classification accuracy of the system exceeds 93% on the MNIST dataset of handwritten digits.
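The distillation step can be illustrated with Hinton-style soft targets, where the student matches the teacher's temperature-softened class distribution. A minimal NumPy sketch (the temperature value and function names are illustrative; the paper's actual training recipe is not reproduced here):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T gives softer distributions."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between softened teacher and student distributions,
    scaled by T^2 (the soft-target term of knowledge distillation)."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)) * T * T)

teacher = np.array([[5.0, 1.0, 0.0]])
print(distillation_loss(teacher, teacher))               # identical logits → 0.0
print(distillation_loss(np.zeros((1, 3)), teacher) > 0)  # mismatch → True
```

Minimising this pushes the small linear-convolution student toward the dark knowledge in the full CNN teacher's output distribution.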
Multi-label image classification is a challenging task due to the diverse sizes and complex backgrounds of objects in images. Obtaining class-specific precise representations at different scales is a key aspect of feature representation. However, existing methods often rely on a single-scale deep feature, neglecting shallow and deeper layer features, which poses challenges when predicting objects of varying scales within the same image. Although some studies have explored multi-scale features, they rarely address the flow of information between scales or efficiently obtain class-specific precise representations for features at different scales. To address these issues, we propose a two-stage, three-branch Transformer-based framework. The first stage incorporates multi-scale image feature extraction and hierarchical scale attention. This design enables the model to consider objects at various scales while enhancing the flow of information across different feature scales, improving the model's generalization to diverse object scales. The second stage includes a global feature enhancement module and a region selection module. The global feature enhancement module strengthens interconnections between different image regions, mitigating the issue of incomplete representations, while the region selection module models the cross-modal relationships between image features and labels. Together, these components enable the efficient acquisition of class-specific precise feature representations. Extensive experiments on public datasets, including COCO2014, VOC2007, and VOC2012, demonstrate the effectiveness of our proposed method. Our approach achieves consistent performance gains of 0.3%, 0.4%, and 0.2% over state-of-the-art methods on the three datasets, respectively. These results validate the reliability and superiority of our approach for multi-label image classification.
The multi-modal characteristics of mineral particles play a pivotal role in enhancing classification accuracy, which is critical for obtaining a profound understanding of the Earth's composition and ensuring effective exploitation and utilization of its resources. However, existing methods for classifying mineral particles do not fully utilize these multi-modal features, thereby limiting classification accuracy. Furthermore, when conventional multi-modal image classification methods are applied to plane-polarized and cross-polarized sequence images of mineral particles, they encounter issues such as information loss, misaligned features, and challenges in spatiotemporal feature extraction. To address these challenges, we propose a multi-modal mineral particle polarization image classification network (MMGC-Net) for precise mineral particle classification. Initially, MMGC-Net employs a two-dimensional (2D) backbone network with shared parameters to extract features from the two types of polarized images, ensuring feature alignment. Subsequently, a cross-polarized intra-modal feature fusion module is designed to refine the spatiotemporal features extracted from the cross-polarized sequence images. Ultimately, the inter-modal feature fusion module integrates the two types of modal features to enhance classification precision. Quantitative and qualitative experimental results indicate that, compared with current state-of-the-art multi-modal image classification methods, MMGC-Net demonstrates marked superiority in terms of mineral particle multi-modal feature learning and four classification evaluation metrics, and exhibits better stability than existing models.
At present, research on multi-label image classification mainly focuses on exploring the correlation between labels to improve classification accuracy. However, in existing methods, label correlation is calculated from the statistical information of the data; this correlation is global, depends on the dataset, and is therefore not suitable for all samples. In addition, in the process of extracting image features, the characteristic information of small objects is easily lost, resulting in low classification accuracy for small objects. To this end, this paper proposes a multi-label image classification model based on multi-scale fusion and adaptive label correlation. The main idea is as follows: first, feature maps at multiple scales are fused to enhance the feature information of small objects. Semantic guidance then decomposes the fused feature map into feature vectors for each category, after which the model adaptively mines the correlation between categories in the image through the self-attention mechanism of a graph attention network, obtaining feature vectors containing category-related information for the final classification. The mean average precision of the model on the two public datasets VOC 2007 and MS COCO 2014 reached 95.6% and 83.6%, respectively, and most indicators are better than those of the latest existing methods.
We propose a hierarchical multi-scale attention mechanism-based model in response to the low accuracy and inefficiency of manual classification in existing oceanic biological image classification methods. Firstly, the hierarchical efficient multi-scale attention (H-EMA) module is designed for lightweight feature extraction, achieving outstanding performance at a relatively low cost. Secondly, an improved EfficientNetV2 block is used to better integrate information from different scales and enhance inter-layer message passing. Furthermore, introducing the convolutional block attention module (CBAM) enhances the model's perception of critical features, improving its generalization ability. Lastly, Focal Loss is introduced to adjust the weights of hard samples and address category imbalance in the dataset, further improving the model's performance. The model achieved 96.11% accuracy on the intertidal marine organism dataset of the Nanji Islands and 84.78% accuracy on the CIFAR-100 dataset, demonstrating a generalization ability strong enough to meet the demands of oceanic biological image classification.
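Focal Loss is a standard component and easy to state: it down-weights well-classified examples so training concentrates on hard, minority-class samples. A minimal binary NumPy sketch (default gamma and alpha follow the common convention; the toy probabilities are ours):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: scales cross-entropy by (1 - p_t)^gamma, so
    confident (easy) predictions contribute almost nothing."""
    p_t = np.where(y == 1, p, 1 - p)          # probability of the true class
    a_t = np.where(y == 1, alpha, 1 - alpha)  # class-balancing weight
    return float(np.mean(-a_t * (1 - p_t) ** gamma * np.log(p_t + 1e-12)))

y = np.array([1, 1])
easy, hard = np.array([0.95, 0.95]), np.array([0.55, 0.55])
print(focal_loss(easy, y) < focal_loss(hard, y))  # → True
```

With gamma = 0 and alpha = 0.5 this reduces to (half of) ordinary cross-entropy; increasing gamma sharpens the focus on hard samples.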
The emergence of adversarial examples has revealed inadequacies in the robustness of image classification models based on Convolutional Neural Networks (CNNs). Particularly in recent years, the discovery of natural adversarial examples has posed significant challenges, as traditional defense methods against adversarial attacks have proven largely ineffective against them. This paper explores defenses against natural adversarial examples from three perspectives: adversarial examples, model architecture, and dataset. First, it employs Class Activation Mapping (CAM) to visualize how models classify natural adversarial examples, identifying several typical attack patterns. Next, various common CNN models are analyzed to evaluate their susceptibility to these attacks, revealing that different architectures exhibit varying defensive capabilities; in particular, as the depth of a network increases, its defenses against natural adversarial examples strengthen. Finally, the impact of dataset class distribution on the defense capability of models is examined, focusing on two aspects: the number of classes in the training set and the number of predicted classes. Results indicate that reducing the number of training classes enhances the model's defense against natural adversarial examples. Additionally, under a fixed number of training classes, some CNN models show an optimal range of predicted classes for achieving the best defense performance against these adversarial examples.
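CAM itself is a simple operation: weight each final-layer convolutional feature map by the classifier weight connecting it to the class of interest, then sum over channels. A minimal sketch with synthetic feature maps (the shapes and toy values are ours):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM for one class: channel-weighted sum of the final conv feature
    maps.  feature_maps: (C, H, W); fc_weights: (num_classes, C)."""
    w = fc_weights[class_idx]                         # (C,)
    cam = np.tensordot(w, feature_maps, axes=(0, 0))  # (H, W)
    cam = np.maximum(cam, 0)                          # keep positive evidence
    return cam / (cam.max() + 1e-12)                  # normalise to [0, 1]

fmaps = np.zeros((2, 4, 4))
fmaps[0, 1, 1] = 1.0              # channel 0 fires at spatial location (1, 1)
weights = np.array([[1.0, 0.0]])  # class 0 relies only on channel 0
cam = class_activation_map(fmaps, weights, 0)
peak = np.unravel_index(cam.argmax(), cam.shape)
print(tuple(map(int, peak)))      # → (1, 1)
```

Overlaying such maps on natural adversarial examples is what lets the paper identify where the model's attention is drawn by the attack pattern.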
Real-world data often exhibit an imbalanced and long-tailed distribution, which leads to poor performance for neural network-based classification. Existing methods mainly tackle this problem by reweighting the loss function or rebalancing the classifier. However, one crucial aspect overlooked by previous studies is the imbalanced feature space caused by the imbalanced angle distribution. In this paper, the authors shed light on the significance of the angle distribution in achieving a balanced feature space, which is essential for improving model performance under long-tailed distributions. Nevertheless, it is challenging to effectively balance both the classifier norms and the angle distribution due to problems such as the low feature norm. To tackle these challenges, the authors first thoroughly analyse the classifier and feature space by decoupling the classification logits into three key components: the classifier norm (i.e. the magnitude of the classifier vector), the feature norm (i.e. the magnitude of the feature vector), and the cosine similarity between the classifier vector and the feature vector. In this way, the authors analyse the change of each component during training and reveal three critical problems to be solved: the imbalanced angle distribution, the lack of feature discrimination, and the low feature norm. Drawing from this analysis, the authors propose a novel loss function that incorporates hyperspherical uniformity, an additive angular margin, and feature norm regularisation. Each component of the loss function addresses a specific problem and synergistically contributes to achieving a balanced classifier and feature space. The authors conduct extensive experiments on three popular benchmark datasets: CIFAR-10/100-LT, ImageNet-LT, and iNaturalist 2018. The experimental results demonstrate that the proposed loss function outperforms several previous state-of-the-art methods in addressing the challenges posed by imbalanced and long-tailed datasets, improving upon the best-performing baselines on CIFAR-100-LT by 1.34, 1.41, 1.41, and 1.33 points, respectively.
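The additive angular margin component can be sketched in isolation: add a margin m to the angle between each feature and its ground-truth class vector before scaling, which tightens the per-class angle distribution. A minimal ArcFace-style NumPy sketch (the margin and scale values, and the toy vectors, are our assumptions; the paper's full loss also includes uniformity and norm terms not shown here):

```python
import numpy as np

def margin_logits(features, weights, labels, m=0.5, s=30.0):
    """Additive angular margin on the ground-truth logit.
    features: (N, D); weights: (C, D); labels: (N,) integer classes."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = f @ w.T                                   # cosine similarities, (N, C)
    theta = np.arccos(np.clip(cos, -1 + 1e-7, 1 - 1e-7))
    logits = cos.copy()
    idx = np.arange(len(labels))
    logits[idx, labels] = np.cos(theta[idx, labels] + m)  # penalise target angle
    return s * logits

feats = np.array([[1.0, 0.0]])
W = np.array([[1.0, 0.0], [0.0, 1.0]])
plain = 30.0 * (feats @ W.T)
margined = margin_logits(feats, W, np.array([0]))
print(margined[0, 0] < plain[0, 0])  # margin shrinks the target logit → True
```

Because the target logit is reduced, the softmax cross-entropy on these logits forces features to sit well inside their class's angular region.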
In hyperspectral image classification (HSIC), accurately extracting spatial and spectral information from hyperspectral images (HSIs) is crucial for achieving precise classification. However, due to low spatial resolution and complex category boundaries, mixed pixels containing features from multiple classes are inevitable in HSIs. Additionally, the spectral similarity among different classes makes it challenging to extract the distinctive spectral features essential for HSIC. To address the impact of mixed pixels and spectral similarity on HSIC, we propose a central-pixel guiding sub-pixel and sub-channel convolution network (CP-SPSC) to extract more precise spatial and spectral features. Firstly, we design spatial attention (CP-SPA) and spectral attention (CP-SPE) modules informed by the central pixel to effectively reduce the spectral interference of irrelevant categories in the same patch. Furthermore, we use CP-SPA to guide 2D sub-pixel convolution (SPConv2d) to capture spatial features finer than the pixel level, while CP-SPE guides 1D sub-channel convolution (SCConv1d) in selecting more precise spectral channels. For fusing spatial and spectral information at the feature level, the spectral feature extension transformation module (SFET) adopts mirror-padding and snake permutation to transform the 1D spectral information of the center pixel into 2D spectral features. Experiments on three popular datasets demonstrate that our method outperforms several state-of-the-art methods in accuracy.
In a context where urban satellite image processing technologies are undergoing rapid evolution, this article presents an innovative and rigorous approach to satellite image classification applied to urban planning. This research proposes an integrated methodological framework, based on the principles of model-driven engineering (MDE), to transform a generic meta-model into a meta-model specifically dedicated to urban satellite image classification. We implemented this transformation using the Atlas Transformation Language (ATL), guaranteeing a smooth and consistent transition from a platform-independent model (PIM) to a platform-specific model (PSM), according to the principles of model-driven architecture (MDA). The application of this MDE methodology enables advanced structuring of satellite data for targeted urban planning analyses, making it possible to classify various urban zones such as built-up, cultivated, arid, and water areas. The novelty of this approach lies in the automation and standardization of the classification process, which significantly reduces the need for manual intervention and thus improves the reliability, reproducibility, and efficiency of urban data analysis. By adopting this method, decision-makers and urban planners are provided with a powerful tool for systematically and consistently analyzing and interpreting satellite images, facilitating decision-making in critical areas such as urban space management, infrastructure planning, and environmental preservation.
Medical image classification is crucial in disease diagnosis, treatment planning, and clinical decision-making. We introduce a novel medical image classification approach that integrates Bayesian Random Semantic Data Augmentation (BSDA) with a Vision Mamba-based model for medical image classification (MedMamba), enhanced by residual connection blocks; we name the resulting model BSDA-Mamba. BSDA augments medical image data semantically, enhancing the model's generalization ability and classification performance. MedMamba, a deep learning-based state space model, excels at capturing long-range dependencies in medical images. By incorporating residual connections, BSDA-Mamba further improves feature extraction. Through comprehensive experiments on eight medical image datasets, we demonstrate that BSDA-Mamba outperforms existing models in accuracy, area under the curve, and F1-score. Our results highlight BSDA-Mamba's potential as a reliable tool for medical image analysis, particularly in handling diverse imaging modalities from X-rays to MRI. The open-sourcing of our model's code and datasets will facilitate the reproduction and extension of our work.
Current concrete surface crack detection methods cannot simultaneously achieve high detection accuracy and efficiency. This study therefore focuses on the recognition and classification of crack images and proposes a concrete crack detection method that integrates the Inception module with a quantum convolutional neural network. First, the features of concrete cracks are highlighted by image gray processing, morphological operations, and threshold segmentation; the image is then quantum-coded by angle encoding to transform the classical image information into quantum image information, and quantum circuits are used to implement classical image convolution operations, improving the convergence speed of the model and enhancing the image representation. Second, two image input paths are designed: one with a quantum convolutional layer and the other with a classical convolutional layer. Finally, comparative experiments are conducted with different parameters to determine the optimal parameter values for concrete crack image classification. Experimental results show that the method is suitable for crack classification in different scenarios, and training speed is greatly improved compared with that of existing deep learning models; the two evaluation metrics, accuracy and recall, are considerably enhanced.
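Angle encoding, as commonly defined, maps a normalised pixel intensity onto a single-qubit rotation. A minimal classical simulation of that mapping (the exact rotation convention used by the paper is not stated, so the RY-style form below is an assumption):

```python
import numpy as np

def angle_encode(pixels):
    """Angle encoding: map each normalised pixel value x in [0, 1] to the
    single-qubit state cos(pi*x/2)|0> + sin(pi*x/2)|1> (an RY rotation),
    turning classical intensities into quantum amplitudes."""
    theta = np.pi * np.asarray(pixels, dtype=float) / 2.0
    # Each row is the (|0>, |1>) amplitude pair of one encoded pixel.
    return np.stack([np.cos(theta), np.sin(theta)], axis=-1)

states = angle_encode([0.0, 0.5, 1.0])
print(np.round(states, 3))
print(np.allclose((states ** 2).sum(axis=-1), 1.0))  # states stay normalised
```

Intensity 0 maps to |0>, intensity 1 to |1>, and intermediate gray levels to superpositions, which is what makes the subsequent quantum convolution meaningful.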
Deep learning technology has shown impressive performance in various vision tasks such as image classification, object detection, and semantic segmentation. In particular, recent advances in deep learning techniques bring encouraging performance to fine-grained image classification, which aims to distinguish subordinate-level categories such as bird species or dog breeds. This task is extremely challenging due to high intra-class and low inter-class variance. In this paper, we review four types of deep learning-based fine-grained image classification approaches: general convolutional neural networks (CNNs), part detection-based, ensemble of networks-based, and visual attention-based approaches. Deep learning-based semantic segmentation approaches are also covered: the region proposal-based and fully convolutional network-based approaches are introduced respectively.
The automated interpretation of rock structure can improve the efficiency, accuracy, and consistency of geological risk assessment of the tunnel face. Because of the high uncertainties in geological images resulting from different regional rock types as well as in-situ conditions (e.g., temperature, humidity, and construction procedure), previous automated methods have shown limited performance in classifying the rock structure of the tunnel face during construction. This paper presents a framework for classifying multiple rock structures based on geological images of the tunnel face using convolutional neural networks (CNNs), namely Inception-ResNet-V2 (IRV2). A prototype recognition system is implemented to classify five types of rock structures: mosaic, granular, layered, block, and fragmentation structures. The proposed IRV2 network is trained on over 35,000 of 42,400 images extracted from over 150 sections of tunnel faces and tested on the remaining 7400 images. Furthermore, different hyperparameters of the CNN model are explored to find the most efficient parameter settings. Among all the compared models, i.e., ResNet-50, ResNet-101, and Inception-v4, Inception-ResNet-V2 exhibits the best performance in terms of various indicators, such as precision, recall, F-score, and testing time per image. Meanwhile, the model trained on a large database can capture object features more comprehensively, leading to higher accuracy. Compared with the original image classification method, the sub-image method is closer to reality considering both the accuracy and the perspective of error divergence. The experimental results reveal that the proposed method is optimal and efficient for automated classification of rock structure using geological images of the tunnel face.
Hyperspectral image (HSI) classification has been one of the most important tasks in the remote sensing community over the last few decades. Due to the presence of highly correlated bands and limited training samples in HSI, discriminative feature extraction has been challenging for traditional machine learning methods. Recently, deep learning-based methods have been recognized as powerful feature extraction tools and have drawn significant attention in HSI classification. Among the various deep learning models, convolutional neural networks (CNNs) have shown huge success and offer great potential for high performance in HSI classification. Motivated by this, this paper presents a systematic review of different CNN architectures for HSI classification and provides some future guidelines. To accomplish this, our study takes a few important steps. First, we focus on different CNN architectures that are able to extract spectral, spatial, and joint spectral-spatial features. Then, many publications related to CNN-based HSI classification are reviewed systematically. Further, a detailed comparative performance analysis is presented between four CNN models, namely the 1D CNN, 2D CNN, 3D CNN, and feature fusion-based CNN (FFCNN). Four benchmark HSI datasets are used in our experiments to evaluate performance. Finally, we conclude the paper with the challenges of CNN-based HSI classification and future guidelines that may help researchers working on HSI classification using CNNs.
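Of the four architectures compared, the 1D CNN is the simplest: it convolves purely along the spectral axis of each pixel. A minimal sketch of that core operation (function name, filter, and toy spectrum are illustrative; a real model stacks many such filters with nonlinearities and pooling):

```python
import numpy as np

def spectral_conv1d(pixel_spectrum, kernels):
    """'Valid' 1D convolution along the spectral axis of a single HSI
    pixel, the core op of a 1D-CNN spectral feature extractor.
    pixel_spectrum: (B,) band values; kernels: (K, k) filters."""
    B, (K, k) = len(pixel_spectrum), kernels.shape
    out = np.empty((K, B - k + 1))
    for i in range(B - k + 1):
        out[:, i] = kernels @ pixel_spectrum[i:i + k]
    return out

spectrum = np.array([0.0, 1.0, 0.0, 0.0, 1.0, 1.0])
edge = np.array([[-1.0, 1.0]])  # responds to band-to-band increases
print(spectral_conv1d(spectrum, edge)[0])  # signed band-to-band differences
```

The 2D and 3D variants extend the same idea to spatial and joint spectral-spatial neighbourhoods, respectively.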
Conventional sparse representation-based image classification usually codes the samples independently, ignoring the correlation information existing in the data. Hence, if we can exploit the correlation information hidden in the data, the classification result can be improved significantly. To this end, in this paper, a novel weighted supervised sparse coding method is proposed to address the image classification problem. The proposed method first fully explores the structural information hidden in the data based on the low-rank representation. It then introduces the extracted structural information into a novel weighted sparse representation model to code the samples in a supervised way. Experimental results show that the proposed method is superior to many conventional image classification methods.
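A weighted sparse code of the kind described can be computed with ISTA, where per-atom weights on the l1 penalty let external structural information (e.g. from a low-rank representation) bias which atoms are selected. A minimal sketch (the solver choice and all names are our assumptions; the paper's exact model is not reproduced):

```python
import numpy as np

def weighted_ista(D, x, w, lam=0.1, iters=200):
    """Weighted sparse coding by ISTA: minimise
    0.5*||x - D a||^2 + lam * sum_i w_i |a_i|.
    Larger w_i discourages atom i; the weights carry the structural prior."""
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ a - x)          # gradient of the data-fit term
        z = a - g / L
        thr = lam * w / L              # per-atom soft-threshold levels
        a = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)
    return a

D = np.eye(3)                          # trivial dictionary for illustration
x = np.array([1.0, 0.05, 0.0])
a = weighted_ista(D, x, w=np.ones(3))
print(np.round(a, 2))                  # small coefficient is thresholded away
```

With the identity dictionary this reduces to soft-thresholding x, so only the dominant component survives; a real dictionary makes the iterations do actual work.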
The evolving “Industry 4.0” domain encompasses a collection of future industrial developments involving cyber-physical systems (CPS), the Internet of things (IoT), big data, cloud computing, etc. In addition, the industrial Internet of things (IIoT) directs data from systems that monitor and control the physical world to data processing systems. A major novelty of the IIoT is unmanned aerial vehicles (UAVs), which are treated as an efficient remote sensing technique to gather data from large regions. UAVs are commonly employed in the industrial sector to solve several issues and support decision making. However, strict regulations concerning data privacy may hinder data sharing across autonomous UAVs. Federated learning (FL) is a recent advancement in machine learning (ML) that aims to protect user data. In this respect, this study designs a federated learning with blockchain-assisted image classification model for clustered UAV networks (FLBIC-CUAV) in an IIoT environment. The proposed FLBIC-CUAV technique involves three major processes, namely clustering, blockchain-enabled secure communication, and FL-based image classification. For UAV cluster construction, a beetle swarm optimization (BSO) algorithm with three input parameters is designed to cluster the UAVs for effective communication. In addition, a blockchain-enabled secure data transmission process takes place to transmit the data from UAVs to cloud servers. Finally, the cloud server uses FL with a Residual Network model to carry out the image classification process. A wide range of simulation analyses was performed to validate the FLBIC-CUAV approach, and the experimental outcomes portrayed its superiority over recent state-of-the-art methods.
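The abstract does not detail the FL aggregation rule; FedAvg is the standard choice, so the sketch below should be read as an assumption. The server averages client model updates weighted by local data size, so raw UAV images never leave the clients:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: replace the global model with the average of
    client parameter vectors, weighted by each client's local data size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()  # data-proportional mixing coefficients
    return sum(c * w for c, w in zip(coeffs, client_weights))

# Two UAV clusters: one holding 100 local samples, one holding 300.
w1 = np.array([1.0, 1.0])
w2 = np.array([3.0, 3.0])
print(fed_avg([w1, w2], [100, 300]))  # → [2.5 2.5]
```

In a full system each round would repeat: broadcast the global weights, let each cluster train locally, then aggregate with `fed_avg`; the blockchain layer described above secures the transport of these updates.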
Indian agriculture is striving to achieve sustainable intensification, a system aiming to increase agricultural yield per unit area without harming natural resources and the ecosystem. Modern farming employs technology to improve productivity. Early and accurate analysis and diagnosis of plant disease is very helpful in reducing plant diseases and improving plant health and food crop productivity. Plant disease experts are not available in remote areas, so there is a need for automatic, low-cost, approachable, and reliable solutions that identify plant diseases without laboratory inspection or expert opinion. Deep learning-based computer vision techniques such as the Convolutional Neural Network (CNN) and traditional machine learning-based image classification approaches are being applied to identify plant diseases. In this paper, a CNN model is proposed for the classification of rice and potato plant leaf diseases. Rice leaves are diagnosed with bacterial blight, blast, brown spot, and tungro diseases, while potato leaf images are classified into three classes: healthy leaves, early blight, and late blight. A rice leaf dataset with 5932 images and 1500 potato leaf images are used in the study. The proposed CNN model was able to learn hidden patterns from the raw images and classify rice images with 99.58% accuracy and potato leaves with 97.66% accuracy. The results demonstrate that the proposed CNN model performed better than other machine learning image classifiers such as Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Decision Tree, and Random Forest.
Funding: the Deanship of Scientific Research at King Khalid University for funding this work through large group Research Project under grant number RGP2/421/45; supported via funding from Prince Sattam bin Abdulaziz University project number (PSAU/2024/R/1446); supported by the Researchers Supporting Project Number (UM-DSR-IG-2023-07), Almaarefa University, Riyadh, Saudi Arabia; supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2021R1F1A1055408).
Abstract: Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws/diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Most extracted image features are irrelevant and increase computation time. Therefore, this article uses an analytical learning paradigm to design a Congruent Feature Selection Method that selects the most relevant image features. This process trains the learning paradigm using similarity- and correlation-based features over different textural intensities and pixel distributions. The similarity between pixels across the various distribution patterns with high indexes is recommended for disease diagnosis. Later, the correlation based on intensity and distribution is analyzed to improve feature selection congruency. The most congruent pixels are then sorted in descending order of selection, identifying better regions than the raw distribution. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection. Therefore, the probability of feature selection, regardless of the textures and medical image patterns, is improved. This process enhances the performance of ML applications for different medical image processing tasks. The proposed method improves the accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared to other models for the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared to the same models and dataset.
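The core idea of the abstract above, ranking candidate features by how strongly they correlate with the target and keeping the most congruent ones in descending order, can be illustrated with a minimal sketch. This is a generic correlation-based ranking in pure Python with made-up data, not the paper's actual algorithm:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length feature vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_congruent(features, target, k):
    """Rank features by |correlation| with the target, keep the top k."""
    scored = [(abs(pearson(col, target)), name) for name, col in features.items()]
    scored.sort(reverse=True)  # descending order of congruence
    return [name for _, name in scored[:k]]

# Toy example: 'intensity' tracks the target, 'noise' does not.
features = {
    "intensity": [1.0, 2.0, 3.0, 4.0],
    "noise":     [5.0, 1.0, 4.0, 2.0],
}
target = [2.0, 4.0, 6.0, 8.0]
print(select_congruent(features, target, 1))  # → ['intensity']
```

The actual method additionally conditions the ranking on textural intensity and pixel distribution; the sketch only shows the sort-by-congruence step.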
Funding: Supported by the National Natural Science Foundation of China (61601176).
Abstract: In this paper, we propose a hierarchical attention dual network (DNet) for fine-grained image classification. The DNet randomly selects pairs of inputs from the dataset and compares the differences between them through hierarchical attention feature learning, which simultaneously removes noise and retains salient features. The loss function accounts for the losses of difference in paired images according to the intra-variance and inter-variance. In addition, we collect a disaster scene dataset from remote sensing images, containing complex scenes and multiple types of disasters, and apply the proposed method to disaster scene classification. Compared to other methods, experimental results show that the DNet with hierarchical attention is robust across different datasets and performs better.
Funding: funded by the Innovation and Development Special Project of China Meteorological Administration (CXFZ2022J038, CXFZ2024J035); Sichuan Science and Technology Program (No. 2023YFQ0072); Key Laboratory of Smart Earth (No. KF2023YB03-07); Automatic Software Generation and Intelligent Service Key Laboratory of Sichuan Province (CUIT-SAG202210).
Abstract: Accurate cloud classification plays a crucial role in aviation safety, climate monitoring, and localized weather forecasting. Current research has focused on machine learning techniques, particularly deep learning-based models, for cloud type identification. However, traditional approaches such as convolutional neural networks (CNNs) have difficulty capturing global contextual information. In addition, they are computationally expensive, which restricts their usability in resource-limited environments. To tackle these issues, we present the Cloud Vision Transformer (CloudViT), a lightweight model that integrates CNNs with Transformers. The integration enables an effective balance between local and global feature extraction. Specifically, CloudViT comprises two innovative modules: Feature Extraction (E_Module) and Downsampling (D_Module). These modules significantly reduce the number of model parameters and the computational complexity while maintaining translation invariance and enhancing contextual comprehension. Overall, CloudViT has 0.93×10^6 parameters, more than ten times fewer than the SOTA (state-of-the-art) model CloudNet. Comprehensive evaluations on the HBMCD and SWIMCAT datasets showcase the outstanding performance of CloudViT, which achieves classification accuracies of 98.45% and 100%, respectively. Moreover, the efficiency and scalability of CloudViT make it an ideal candidate for deployment in mobile cloud observation systems, enabling real-time cloud image classification. The proposed hybrid architecture of CloudViT offers a promising approach for advancing ground-based cloud image classification, with significant potential for both optimizing performance and facilitating practical deployment.
Funding: supported by the National Science Foundation (Grant Nos. NSF-ECCS-2127235 and EFRI-BRAID-2223495). Part of this work was conducted at the Washington Nanofabrication Facility/Molecular Analysis Facility, a National Nanotechnology Coordinated Infrastructure (NNCI) site at the University of Washington, with partial support from the National Science Foundation (Grant Nos. NNCI-1542101 and NNCI-2025489).
Abstract: Optical and hybrid convolutional neural networks (CNNs) have recently become of increasing interest for achieving low-latency, low-power image classification and computer-vision tasks. However, implementing optical nonlinearity is challenging, and omitting the nonlinear layers in a standard CNN comes with a significant reduction in accuracy. We use knowledge distillation to compress a modified AlexNet to a single linear convolutional layer and an electronic backend (two fully connected layers). We obtain performance comparable to a purely electronic CNN with five convolutional layers and three fully connected layers. We implement the convolution optically by engineering the point spread function of an inverse-designed meta-optic. Using this hybrid approach, we estimate a reduction in multiply-accumulate operations from 17M in a conventional electronic modified AlexNet to only 86K in the hybrid compressed network enabled by the optical front end. This constitutes over two orders of magnitude of reduction in latency and power consumption. Furthermore, we experimentally demonstrate that the classification accuracy of the system exceeds 93% on the MNIST dataset of handwritten digits.
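Knowledge distillation, the compression technique named in the abstract above, trains a small "student" to match the temperature-softened outputs of a large "teacher". A minimal sketch of the standard soft-target loss follows; the logits are invented, the temperature T=4 is arbitrary, and the usual T² gradient scaling is omitted, so this is generic distillation rather than the authors' training code:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between softened teacher targets and student outputs."""
    p = softmax(teacher_logits, T)   # soft targets from the large CNN
    q = softmax(student_logits, T)   # compressed (hybrid) network outputs
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]   # hypothetical teacher logits for one sample
student = [3.5, 1.2, 0.4]   # hypothetical student logits
print(round(distillation_loss(student, teacher), 4))
```

By Gibbs' inequality the loss is minimized exactly when the student reproduces the teacher's softened distribution, which is what drives the compressed network toward the large one's behavior.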
Funding: supported by the National Natural Science Foundation of China (62302167, 62477013); Natural Science Foundation of Shanghai (No. 24ZR1456100); Science and Technology Commission of Shanghai Municipality (No. 24DZ2305900); the Shanghai Municipal Special Fund for Promoting High-Quality Development of Industries (2211106).
Abstract: Multi-label image classification is a challenging task due to the diverse sizes and complex backgrounds of objects in images. Obtaining class-specific precise representations at different scales is a key aspect of feature representation. However, existing methods often rely on single-scale deep features, neglecting shallow and deeper layer features, which poses challenges when predicting objects of varying scales within the same image. Although some studies have explored multi-scale features, they rarely address the flow of information between scales or efficiently obtain class-specific precise representations for features at different scales. To address these issues, we propose a two-stage, three-branch Transformer-based framework. The first stage incorporates multi-scale image feature extraction and hierarchical scale attention. This design enables the model to consider objects at various scales while enhancing the flow of information across different feature scales, improving the model's generalization to diverse object scales. The second stage includes a global feature enhancement module and a region selection module. The global feature enhancement module strengthens interconnections between different image regions, mitigating the issue of incomplete representations, while the region selection module models the cross-modal relationships between image features and labels. Together, these components enable the efficient acquisition of class-specific precise feature representations. Extensive experiments on public datasets, including COCO2014, VOC2007, and VOC2012, demonstrate the effectiveness of our proposed method. Our approach achieves consistent performance gains of 0.3%, 0.4%, and 0.2% over state-of-the-art methods on the three datasets, respectively. These results validate the reliability and superiority of our approach for multi-label image classification.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 62071315 and 62271336).
Abstract: The multi-modal characteristics of mineral particles play a pivotal role in enhancing classification accuracy, which is critical for obtaining a profound understanding of the Earth's composition and ensuring effective exploitation and utilization of its resources. However, existing methods for classifying mineral particles do not fully utilize these multi-modal features, thereby limiting classification accuracy. Furthermore, when conventional multi-modal image classification methods are applied to plane-polarized and cross-polarized sequence images of mineral particles, they encounter issues such as information loss, misaligned features, and challenges in spatiotemporal feature extraction. To address these challenges, we propose a multi-modal mineral particle polarization image classification network (MMGC-Net) for precise mineral particle classification. Initially, MMGC-Net employs a two-dimensional (2D) backbone network with shared parameters to extract features from the two types of polarized images, ensuring feature alignment. Subsequently, a cross-polarized intra-modal feature fusion module is designed to refine the spatiotemporal features extracted from the cross-polarized sequence images. Ultimately, the inter-modal feature fusion module integrates the two types of modal features to enhance classification precision. Quantitative and qualitative experimental results indicate that, compared with current state-of-the-art multi-modal image classification methods, MMGC-Net demonstrates marked superiority in terms of mineral particle multi-modal feature learning and four classification evaluation metrics. It also demonstrates better stability than existing models.
Funding: the National Natural Science Foundation of China (Nos. 62167005 and 61966018); the Key Research Projects of Jiangxi Provincial Department of Education (No. GJJ200302).
Abstract: At present, research on multi-label image classification mainly focuses on exploring the correlation between labels to improve classification accuracy. However, in existing methods, label correlation is calculated from the statistical information of the data; this correlation is global, depends on the dataset, and is not suitable for all samples. Moreover, in the process of extracting image features, the characteristic information of small objects is easily lost, resulting in low classification accuracy for small objects. To this end, this paper proposes a multi-label image classification model based on multi-scale fusion and adaptive label correlation. The main idea is as follows: first, feature maps at multiple scales are fused to enhance the feature information of small objects. Semantic guidance then decomposes the fused feature map into feature vectors for each category, and the self-attention mechanism of a graph attention network adaptively mines the correlation between categories in the image, yielding feature vectors containing category-related information for the final classification. The mean average precision of the model on the two public datasets VOC 2007 and MS COCO 2014 reached 95.6% and 83.6%, respectively, and most indicators are better than those of the latest existing methods.
Funding: supported by the National Natural Science Foundation of China (Nos. 61806107 and 61702135).
Abstract: We propose a hierarchical multi-scale attention mechanism-based model in response to the low accuracy and inefficient manual classification of existing oceanic biological image classification methods. Firstly, the hierarchical efficient multi-scale attention (H-EMA) module is designed for lightweight feature extraction, achieving outstanding performance at a relatively low cost. Secondly, an improved EfficientNetV2 block is used to better integrate information from different scales and enhance inter-layer message passing. Furthermore, introducing the convolutional block attention module (CBAM) enhances the model's perception of critical features, optimizing its generalization ability. Lastly, Focal Loss is introduced to adjust the weights of complex samples, addressing the imbalanced categories in the dataset and further improving the model's performance. The model achieved 96.11% accuracy on the intertidal marine organism dataset of the Nanji Islands and 84.78% accuracy on the CIFAR-100 dataset, demonstrating strong generalization ability that meets the demands of oceanic biological image classification.
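Focal Loss, used in the abstract above to handle class imbalance, follows the standard formulation FL(p_t) = -α(1-p_t)^γ log(p_t): the (1-p_t)^γ factor down-weights easy, well-classified samples so hard samples dominate training. A minimal per-sample sketch with made-up probabilities (not the paper's implementation):

```python
import math

def focal_loss(p_t, gamma=2.0, alpha=1.0):
    """Focal loss for one sample: down-weights easy examples (p_t near 1).
    p_t is the model's probability for the true class."""
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)

# An easy sample (p_t = 0.9) contributes far less loss than a hard one (p_t = 0.1).
easy = focal_loss(0.9)
hard = focal_loss(0.1)
print(easy < hard)  # → True

# With gamma = 0 the focal loss reduces to plain cross-entropy -log(p_t).
assert abs(focal_loss(0.9, gamma=0.0) - (-math.log(0.9))) < 1e-12
```

The γ=2 default is the value commonly reported in the focal loss literature; whether this paper uses the same γ is not stated in the abstract.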
Abstract: The emergence of adversarial examples has revealed the inadequacies in the robustness of image classification models based on Convolutional Neural Networks (CNNs). Particularly in recent years, the discovery of natural adversarial examples has posed significant challenges, as traditional defense methods against adversarial attacks have proven largely ineffective against them. This paper explores defenses against natural adversarial examples from three perspectives: adversarial examples, model architecture, and dataset. First, it employs Class Activation Mapping (CAM) to visualize how models classify natural adversarial examples, identifying several typical attack patterns. Next, various common CNN models are analyzed to evaluate their susceptibility to these attacks, revealing that different architectures exhibit varying defensive capabilities; as the depth of a network increases, its defense against natural adversarial examples strengthens. Finally, the impact of dataset class distribution on the defense capability of models is examined, focusing on two aspects: the number of classes in the training set and the number of predicted classes. Results indicate that reducing the number of training classes enhances the model's defense against natural adversarial examples. Additionally, under a fixed number of training classes, some CNN models show an optimal range of predicted classes for achieving the best defense performance against these adversarial examples.
Funding: National Key Research and Development Program of China, Grant/Award Numbers: 2022YFB3103900, 2023YFB3106504; Major Key Project of PCL, Grant/Award Numbers: PCL2022A03, PCL2023A09; Shenzhen Basic Research, Grant/Award Number: JCYJ20220531095214031; Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies, Grant/Award Number: 2022B1212010005; Shenzhen International Science and Technology Cooperation Project, Grant/Award Number: GJHZ20220913143008015; Natural Science Foundation of Guangdong Province, Grant/Award Number: 2023A1515011959; Shenzhen-Hong Kong Jointly Funded Project, Grant/Award Number: SGDX20230116091246007; Shenzhen Science and Technology Program, Grant/Award Numbers: RCBS20221008093131089, ZDSYS20210623091809029.
Abstract: Real-world data always exhibit an imbalanced and long-tailed distribution, which leads to poor performance for neural network-based classification. Existing methods mainly tackle this problem by reweighting the loss function or rebalancing the classifier. However, one crucial aspect overlooked by previous research is the imbalanced feature space problem caused by the imbalanced angle distribution. In this paper, the authors shed light on the significance of the angle distribution in achieving a balanced feature space, which is essential for improving model performance under long-tailed distributions. Nevertheless, it is challenging to effectively balance both the classifier norms and the angle distribution due to problems such as the low feature norm. To tackle these challenges, the authors first thoroughly analyse the classifier and feature space by decoupling the classification logits into three key components: classifier norm (i.e. the magnitude of the classifier vector), feature norm (i.e. the magnitude of the feature vector), and cosine similarity between the classifier vector and the feature vector. In this way, the authors analyse the change of each component in the training process and reveal three critical problems that should be solved: the imbalanced angle distribution, the lack of feature discrimination, and the low feature norm. Drawing from this analysis, the authors propose a novel loss function that incorporates hyperspherical uniformity, additive angular margin, and feature norm regularisation. Each component of the loss function addresses a specific problem and synergistically contributes to achieving a balanced classifier and feature space. The authors conduct extensive experiments on three popular benchmark datasets, including CIFAR-10/100-LT, ImageNet-LT, and iNaturalist 2018. The experimental results demonstrate that the authors' loss function outperforms several previous state-of-the-art methods in addressing the challenges posed by imbalanced and long-tailed datasets, improving upon the best-performing baselines on CIFAR-100-LT by 1.34, 1.41, 1.41, and 1.33, respectively.
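The decoupling described in the abstract above rests on the identity logit = w·f = ||w||·||f||·cos(w, f), i.e. a classification logit factors exactly into the classifier norm, the feature norm, and their cosine similarity. A quick numerical check of that identity with invented vectors (not the authors' code):

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def decompose_logit(w, f):
    """Split a classification logit w.f into the three factors the paper names:
    classifier norm, feature norm, and cosine similarity."""
    w_norm, f_norm = norm(w), norm(f)
    cos_sim = dot(w, f) / (w_norm * f_norm)
    return w_norm, f_norm, cos_sim

w = [0.6, -0.8, 0.0]   # hypothetical classifier vector for one class
f = [1.0, 2.0, 2.0]    # hypothetical feature vector
w_norm, f_norm, cos_sim = decompose_logit(w, f)

# The product of the three components recovers the raw logit exactly.
assert abs(w_norm * f_norm * cos_sim - dot(w, f)) < 1e-12
print(w_norm, round(f_norm, 4), round(cos_sim, 4))
```

Because the factorization is exact, rebalancing any one factor (e.g. the angle distribution) can be studied independently of the other two, which is what motivates the paper's three-part analysis.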
Funding: supported by the National Natural Science Foundation of China (No. 62071323).
Abstract: In hyperspectral image classification (HSIC), accurately extracting spatial and spectral information from hyperspectral images (HSIs) is crucial for achieving precise classification. However, due to low spatial resolution and complex category boundaries, mixed pixels containing features from multiple classes are inevitable in HSIs. Additionally, the spectral similarity among different classes makes it challenging to extract the distinctive spectral features essential for HSIC. To address the impact of mixed pixels and spectral similarity on HSIC, we propose a central-pixel guiding sub-pixel and sub-channel convolution network (CP-SPSC) to extract more precise spatial and spectral features. Firstly, we designed spatial attention (CP-SPA) and spectral attention (CP-SPE) informed by the central pixel to effectively reduce the spectral interference of irrelevant categories in the same patch. Furthermore, we use CP-SPA to guide 2D sub-pixel convolution (SPConv2d) to capture spatial features finer than the pixel level, while CP-SPE guides 1D sub-channel convolution (SCConv1d) in selecting more precise spectral channels. For fusing spatial and spectral information at the feature level, the spectral feature extension transformation module (SFET) adopts mirror-padding and snake permutation to transform the 1D spectral information of the center pixel into 2D spectral features. Experiments on three popular datasets demonstrate that our method outperforms several state-of-the-art methods in accuracy.
Abstract: In a context where urban satellite image processing technologies are undergoing rapid evolution, this article presents an innovative and rigorous approach to satellite image classification applied to urban planning. This research proposes an integrated methodological framework, based on the principles of model-driven engineering (MDE), to transform a generic meta-model into a meta-model specifically dedicated to urban satellite image classification. We implemented this transformation using the Atlas Transformation Language (ATL), guaranteeing a smooth and consistent transition from a platform-independent model (PIM) to a platform-specific model (PSM), according to the principles of model-driven architecture (MDA). Applying this MDE methodology enables advanced structuring of satellite data for targeted urban planning analyses, making it possible to classify various urban zones such as built-up, cultivated, arid, and water areas. The novelty of this approach lies in the automation and standardization of the classification process, which significantly reduces the need for manual intervention and thus improves the reliability, reproducibility, and efficiency of urban data analysis. By adopting this method, decision-makers and urban planners are provided with a powerful tool for systematically and consistently analyzing and interpreting satellite images, facilitating decision-making in critical areas such as urban space management, infrastructure planning, and environmental preservation.
Abstract: Medical image classification is crucial in disease diagnosis, treatment planning, and clinical decision-making. We introduce a novel medical image classification approach that integrates Bayesian Random Semantic Data Augmentation (BSDA) with a Vision Mamba-based model for medical image classification (MedMamba), enhanced by residual connection blocks; we name the model BSDA-Mamba. BSDA augments medical image data semantically, enhancing the model's generalization ability and classification performance. MedMamba, a deep learning-based state space model, excels at capturing long-range dependencies in medical images. By incorporating residual connections, BSDA-Mamba further improves feature extraction capabilities. Through comprehensive experiments on eight medical image datasets, we demonstrate that BSDA-Mamba outperforms existing models in accuracy, area under the curve, and F1-score. Our results highlight BSDA-Mamba's potential as a reliable tool for medical image analysis, particularly in handling diverse imaging modalities from X-rays to MRI. The open-sourcing of our model's code and datasets will facilitate the reproduction and extension of our work.
Funding: supported by the 2023 National College Students' Innovation and Entrepreneurship Training Program project "Building Crack Structure Safety Detection based on Quantum Convolutional Neural Network Intelligent Algorithm: A Case Study of Sanzhuang Town, Donggang District, Rizhao City" (No. 202310429224).
Abstract: Current concrete surface crack detection methods cannot simultaneously achieve high detection accuracy and efficiency. Thus, this study focuses on the recognition and classification of crack images and proposes a concrete crack detection method that integrates the Inception module with a quantum convolutional neural network. First, the features of concrete cracks are highlighted by image gray processing, morphological operations, and threshold segmentation, and the image is then quantum-encoded via angle encoding to transform classical image information into quantum image information. Quantum circuits are then used to implement classical image convolution operations, improving the convergence speed of the model and enhancing the image representation. Second, two image input paths are designed: one with a quantum convolutional layer and the other with a classical convolutional layer. Finally, comparative experiments are conducted with different parameters to determine the optimal parameter values for concrete crack image classification. Experimental results show that the method is suitable for crack classification in different scenarios, and training speed is greatly improved compared with that of existing deep learning models. The two evaluation metrics, accuracy and recall, are considerably enhanced.
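Angle encoding, mentioned in the abstract above, maps each normalized pixel value to a single-qubit rotation angle. A common convention is θ = π·x with state cos(θ/2)|0⟩ + sin(θ/2)|1⟩, simulated classically below; conventions vary, so this illustrates the general idea rather than the paper's exact circuit:

```python
import math

def angle_encode(pixel):
    """Map a pixel in [0, 1] to single-qubit amplitudes via a rotation angle.
    Uses theta = pi * pixel, giving cos(theta/2)|0> + sin(theta/2)|1>.
    (One common convention; the paper's exact encoding may differ.)"""
    theta = math.pi * pixel
    amp0 = math.cos(theta / 2.0)
    amp1 = math.sin(theta / 2.0)
    return amp0, amp1

for px in (0.0, 0.5, 1.0):
    a0, a1 = angle_encode(px)
    # Amplitudes always form a valid (normalized) quantum state.
    assert abs(a0 * a0 + a1 * a1 - 1.0) < 1e-12

# A dark pixel stays at |0>; a bright pixel rotates fully to |1>.
print(angle_encode(0.0), angle_encode(1.0))
```

Because the state stays normalized for every pixel value, no rescaling step is needed after encoding, which is one reason angle encoding is a popular choice for image data.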
Funding: supported by the National Natural Science Foundation of China (Nos. 61373121 and 61328205); Program for Sichuan Provincial Science Fund for Distinguished Young Scholars (No. 13QNJJ0149); the Fundamental Research Funds for the Central Universities; China Scholarship Council (No. 201507000032).
Abstract: Deep learning technology has shown impressive performance in various vision tasks such as image classification, object detection, and semantic segmentation. In particular, recent advances in deep learning techniques bring encouraging performance to fine-grained image classification, which aims to distinguish subordinate-level categories such as bird species or dog breeds. This task is extremely challenging due to high intra-class and low inter-class variance. In this paper, we review four types of deep learning-based fine-grained image classification approaches: general convolutional neural networks (CNNs), part detection based, ensemble of networks based, and visual attention based approaches. Besides, deep learning-based semantic segmentation approaches are also covered in this paper; the region proposal based and fully convolutional networks based approaches for semantic segmentation are introduced respectively.
Funding: supported by the Natural Science Foundation Committee Program of China (Grant Nos. 1538009 and 51778474); Science and Technology Project of Yunnan Provincial Transportation Department (Grant No. 25 of 2018); the Fundamental Research Funds for the Central Universities in China (Grant No. 0200219129); Key Innovation Team Program of the Innovation Talents Promotion Plan by MOST of China (Grant No. 2016RA4059).
Abstract: The automated interpretation of rock structure can improve the efficiency, accuracy, and consistency of the geological risk assessment of tunnel faces. Because of the high uncertainties in geological images resulting from different regional rock types and in-situ conditions (e.g., temperature, humidity, and construction procedure), previous automated methods have shown limited performance in classifying the rock structure of tunnel faces during construction. This paper presents a framework for classifying multiple rock structures from geological images of tunnel faces using a convolutional neural network (CNN), namely Inception-ResNet-V2 (IRV2). A prototype recognition system is implemented to classify five types of rock structures: mosaic, granular, layered, block, and fragmentation structures. The proposed IRV2 network is trained on over 35,000 of 42,400 images extracted from over 150 tunnel-face sections and tested on the remaining 7,400 images. Furthermore, different hyperparameters of the CNN model are explored to identify the most efficient settings. Among all the models discussed, i.e., ResNet-50, ResNet-101, and Inception-v4, Inception-ResNet-V2 exhibits the best performance in terms of precision, recall, F-score, and testing time per image. Meanwhile, a model trained on a large database can capture object features more comprehensively, leading to higher accuracy. Compared with the original image classification method, the sub-image method is closer to reality in terms of both accuracy and the dispersion of errors. The experimental results reveal that the proposed method is optimal and efficient for the automated classification of rock structure from geological images of the tunnel face.
Abstract: Hyperspectral image (HSI) classification has been one of the most important tasks in the remote sensing community over the last few decades. Due to the presence of highly correlated bands and limited training samples in HSI, discriminative feature extraction was challenging for traditional machine learning methods. Recently, deep learning-based methods have been recognized as powerful feature extraction tools and have drawn significant attention in HSI classification. Among various deep learning models, convolutional neural networks (CNNs) have shown huge success and offer great potential to yield high performance in HSI classification. Motivated by this success, this paper presents a systematic review of different CNN architectures for HSI classification and provides some future guidelines. To accomplish this, our study takes a few important steps. First, we focus on different CNN architectures, which are able to extract spectral, spatial, and joint spectral-spatial features. Then, many publications related to CNN-based HSI classification are reviewed systematically. Further, a detailed comparative performance analysis is presented between four CNN models, namely 1D CNN, 2D CNN, 3D CNN, and feature fusion-based CNN (FFCNN). Four benchmark HSI datasets are used in our experiments to evaluate the performance. Finally, we conclude the paper with the challenges of CNN-based HSI classification and future guidelines that may help researchers working on HSI classification using CNNs.
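The 1D/2D/3D CNN distinction in the review above comes down to which axes of a hyperspectral patch the convolution slides over. The standard valid-convolution shape rule makes this concrete; the patch size (9×9 spatial window, 200 bands) and kernel sizes below are hypothetical, chosen only to show the three cases:

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Output length along one dimension after a standard convolution."""
    return (size + 2 * padding - kernel) // stride + 1

# Hypothetical HSI patch: 9 x 9 spatial window, 200 spectral bands.
bands, height, width = 200, 9, 9

# 1D CNN: convolves along the spectral axis only (spectral features).
spec_out = conv_out(bands, kernel=7)              # -> 194
# 2D CNN: convolves over the spatial window; bands act as input channels.
spat_out = conv_out(height, kernel=3)             # -> 7
# 3D CNN: convolves jointly over spectral and spatial axes.
joint = (conv_out(bands, 7), conv_out(height, 3), conv_out(width, 3))
print(spec_out, spat_out, joint)  # 194 7 (194, 7, 7)
```

The same rule explains why 3D CNNs are the most expensive of the three: the kernel sweeps all three axes at once, multiplying the number of positions it must evaluate.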
Funding: This research is funded by the National Natural Science Foundation of China (61771154).
Abstract: Conventional sparse representation-based image classification usually codes the samples independently, ignoring the correlation information present in the data. If this hidden correlation information can be explored, the classification result will be improved significantly. To this end, in this paper, a novel weighted supervised sparse coding method is proposed to address the image classification problem. The proposed method first explores the structural information hidden in the data based on low-rank representation, and then introduces the extracted structural information into a novel weighted sparse representation model to code the samples in a supervised way. Experimental results show that the proposed method is superior to many conventional image classification methods.
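Weighted sparse coding of the kind described above typically minimizes a reconstruction error plus a weighted l1 penalty, and the workhorse primitive of such solvers is the (weighted) soft-thresholding operator. A minimal sketch of that operator with invented coefficients and weights, not the paper's actual solver:

```python
def soft_threshold(a, t):
    """Proximal operator of the l1 norm: shrinks a toward zero by t."""
    if a > t:
        return a - t
    if a < -t:
        return a + t
    return 0.0

def weighted_shrink(alpha, lam, weights):
    """One weighted-l1 shrinkage step: per-coefficient threshold lam * w_i.
    Larger weights push the corresponding codes harder toward zero."""
    return [soft_threshold(a, lam * w) for a, w in zip(alpha, weights)]

alpha = [0.9, -0.05, 0.3]      # hypothetical sparse codes before shrinkage
weights = [1.0, 1.0, 4.0]      # hypothetical supervision-derived weights
print(weighted_shrink(alpha, 0.1, weights))  # first code shrinks, the other two vanish
```

This shows how supervision can act through the weights: a large weight (here on the third coefficient) raises its threshold, zeroing codes the supervision deems uninformative.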
Funding: We deeply acknowledge Taif University for supporting this research through Taif University Researchers Supporting Project Number (TURSP-2020/328), Taif University, Taif, Saudi Arabia.
Abstract: The evolving "Industry 4.0" domain encompasses a collection of future industrial developments involving cyber-physical systems (CPS), the Internet of Things (IoT), big data, cloud computing, etc. The industrial Internet of Things (IIoT) directs data from systems that monitor and control the physical world to the data processing system. A major novelty of the IIoT is the use of unmanned aerial vehicles (UAVs), which are treated as an efficient remote sensing technique to gather data from large regions. UAVs are commonly employed in the industrial sector to solve several issues and aid decision making. However, strict regulations concerning data privacy may hinder data sharing across autonomous UAVs. Federated learning (FL) is a recent advancement in machine learning (ML) that aims to protect user data. In this respect, this study designs a federated learning with blockchain-assisted image classification model for clustered UAV networks (FLBIC-CUAV) in the IIoT environment. The proposed FLBIC-CUAV technique involves three major processes, namely clustering, blockchain-enabled secure communication, and FL-based image classification. For the UAV cluster construction process, a beetle swarm optimization (BSO) algorithm with three input parameters is designed to cluster the UAVs for effective communication. In addition, a blockchain-enabled secure data transmission process takes place to transmit the data from UAVs to cloud servers. Finally, the cloud server uses FL with a Residual Network model to carry out the image classification process. A wide range of simulation analyses was carried out to demonstrate the benefits of the FLBIC-CUAV approach. The experimental outcomes portrayed the superiority of the FLBIC-CUAV approach over recent state-of-the-art methods.
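Federated learning as used in the abstract above is commonly realized with FedAvg-style aggregation: each client (here, a UAV cluster) trains locally, and the server averages the parameters weighted by local sample counts. A minimal sketch with flattened two-parameter models and made-up sample sizes; the abstract does not state that this exact aggregation rule is used:

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg-style aggregation: average each model parameter across clients,
    weighted by how many local samples each client (UAV) trained on."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    merged = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            merged[i] += w * (size / total)
    return merged

# Three hypothetical UAV clusters with flattened 2-parameter local models.
clients = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
sizes = [100, 100, 200]
print(fed_avg(clients, sizes))  # → [0.75, 0.75]
```

Only the parameter vectors travel to the cloud server; the raw UAV imagery stays local, which is the privacy property motivating FL in the abstract.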
Funding: This research is supported by KAU Scientific Endowment, King Abdulaziz University, Jeddah, Saudi Arabia, under Grant Number KAU 2020/251.
Abstract: Indian agriculture is striving to achieve sustainable intensification, a system aiming to increase agricultural yield per unit area without harming natural resources and the ecosystem. Modern farming employs technology to improve productivity. Early and accurate analysis and diagnosis of plant disease is very helpful in reducing plant diseases and improving plant health and food crop productivity. Plant disease experts are not available in remote areas, so there is a need for automatic, low-cost, approachable, and reliable solutions to identify plant diseases without laboratory inspection and expert opinion. Deep learning-based computer vision techniques like the Convolutional Neural Network (CNN) and traditional machine learning-based image classification approaches are being applied to identify plant diseases. In this paper, a CNN model is proposed for the classification of rice and potato plant leaf diseases. Rice leaves are diagnosed with bacterial blight, blast, brown spot, and tungro diseases. Potato leaf images are classified into three classes: healthy leaves, early blight, and late blight diseases. A rice leaf dataset with 5,932 images and 1,500 potato leaf images are used in the study. The proposed CNN model was able to learn hidden patterns from the raw images and classify rice images with 99.58% accuracy and potato leaves with 97.66% accuracy. The results demonstrate that the proposed CNN model performed better than other machine learning image classifiers such as Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Decision Tree, and Random Forest.