Abstract: Detecting pavement cracks is critical for road safety and infrastructure management. Traditional methods, relying on manual inspection and basic image processing, are time-consuming and prone to errors. Recent deep-learning (DL) methods automate crack detection, but many still struggle with variable crack patterns and environmental conditions. This study aims to address these limitations by introducing the Masker Transformer, a novel hybrid deep learning model that integrates the precise localization capabilities of the Mask Region-based Convolutional Neural Network (Mask R-CNN) with the global contextual awareness of the Vision Transformer (ViT). The research focuses on leveraging the strengths of both architectures to enhance segmentation accuracy and adaptability across different pavement conditions. We evaluated the performance of the Masker Transformer against other state-of-the-art models such as U-Net, Transformer U-Net (TransUNet), U-Net Transformer (UNETR), Swin U-Net Transformer (Swin-UNETR), You Only Look Once version 8 (YOLOv8), and Mask R-CNN using two benchmark datasets: Crack500 and DeepCrack. The findings reveal that the Masker Transformer significantly outperforms the existing models, achieving the highest Dice Similarity Coefficient (DSC), precision, recall, and F1-score across both datasets. Specifically, the model attained a DSC of 80.04% on Crack500 and 91.37% on DeepCrack, demonstrating superior segmentation accuracy and reliability. The high precision and recall rates further substantiate its effectiveness in real-world applications, suggesting that the Masker Transformer can serve as a robust tool for automated pavement crack detection, potentially replacing more traditional methods.
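Segmentation quality above is reported mainly through the Dice Similarity Coefficient (DSC). As a minimal illustration of how that metric is computed for a predicted crack mask against ground truth (the sketch below is not from the paper; the function and variable names are placeholders):

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient between two binary masks.

    DSC = 2 * |P ∩ G| / (|P| + |G|), where P and G are the sets of
    predicted and ground-truth crack pixels.
    """
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# Toy example: the predicted crack overlaps most of the true crack pixels.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0]])
gt = np.array([[0, 1, 1, 0],
               [0, 1, 1, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 0]])
print(f"DSC = {dice_coefficient(pred, gt):.3f}")  # 2*4/(4+5) ≈ 0.889
```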
Funding: Woosong University Academic Research 2024.
Abstract: This study investigates the application of Learnable Memory Vision Transformers (LMViT) for detecting metal surface flaws, comparing their performance with traditional CNNs, specifically ResNet18 and ResNet50, as well as other transformer-based models including Token-to-Token ViT, ViT without memory, and Parallel ViT. Leveraging a widely used steel surface defect dataset, the research applies data augmentation and t-distributed stochastic neighbor embedding (t-SNE) to enhance feature extraction and understanding. These techniques mitigated overfitting, stabilized training, and improved generalization capabilities. The LMViT model achieved a test accuracy of 97.22%, significantly outperforming ResNet18 (88.89%) and ResNet50 (88.90%), as well as Token-to-Token ViT (88.46%), ViT without memory (87.18%), and Parallel ViT (91.03%). Furthermore, LMViT exhibited superior training and validation performance, attaining a validation accuracy of 98.2% compared to 91.0% for ResNet18, 96.0% for ResNet50, and 89.12%, 87.51%, and 91.21% for Token-to-Token ViT, ViT without memory, and Parallel ViT, respectively. The findings highlight LMViT’s ability to capture long-range dependencies in images, an area where CNNs struggle due to their reliance on local receptive fields and hierarchical feature extraction. The additional transformer-based models also demonstrate improved performance in capturing complex features over CNNs, with LMViT excelling particularly at detecting subtle and complex defects, which is critical for maintaining product quality and operational efficiency in industrial applications. For instance, the LMViT model successfully identified fine scratches and minor surface irregularities that CNNs often misclassify. This study not only demonstrates LMViT’s potential for real-world defect detection but also underscores the promise of other transformer-based architectures such as Token-to-Token ViT, ViT without memory, and Parallel ViT in industrial scenarios where complex spatial relationships are key. Future research may focus on enhancing LMViT’s computational efficiency for deployment in real-time quality control systems.
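The abstract does not describe how the learnable memory is implemented. One common reading of the idea, sketched below under that assumption, is to append a set of trainable memory tokens to the patch-token sequence so that the self-attention layers can attend to them; all layer sizes, token counts, and class counts are illustrative placeholders, not the paper’s configuration.

```python
import torch
import torch.nn as nn

class MemoryViTEncoder(nn.Module):
    """Toy transformer encoder with learnable memory tokens (hedged sketch).

    Extra trainable tokens are concatenated to the patch tokens so that
    self-attention can read from and write to them alongside the image
    content — one possible interpretation of a learnable-memory ViT.
    """
    def __init__(self, dim=256, depth=4, heads=8, num_memory_tokens=4, num_classes=6):
        super().__init__()
        self.memory = nn.Parameter(torch.zeros(1, num_memory_tokens, dim))
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, patch_tokens):            # patch_tokens: (B, N, dim)
        b = patch_tokens.size(0)
        cls = self.cls_token.expand(b, -1, -1)
        mem = self.memory.expand(b, -1, -1)     # shared learnable memory
        x = torch.cat([cls, mem, patch_tokens], dim=1)
        x = self.encoder(x)
        return self.head(x[:, 0])               # classify from the CLS token

# Usage with dummy patch embeddings (e.g., 196 patches of a 224x224 image).
tokens = torch.randn(2, 196, 256)
print(MemoryViTEncoder()(tokens).shape)  # torch.Size([2, 6])
```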
Funding: National Natural Science Foundation of China (No. 62176052).
Abstract: In the Vision Transformer (ViT) architecture, image data are transformed into sequential data for processing, which may result in the loss of spatial positional information. While the self-attention mechanism enhances the capacity of ViT to capture global features, it compromises the preservation of fine-grained local feature information. To address these challenges, we propose a spatial positional enhancement module and a wavelet transform enhancement module tailored for ViT models. These modules aim to reduce spatial positional information loss during the patch embedding process and enhance the model’s feature extraction capabilities. The spatial positional enhancement module reinforces spatial information in sequential data through convolutional operations and multi-scale feature extraction. Meanwhile, the wavelet transform enhancement module uses multi-scale analysis and frequency decomposition to improve the ViT’s understanding of global and local image structures. This enhancement also improves the ViT’s ability to process complex structures and intricate image details. Experiments on the CIFAR-10, CIFAR-100, and ImageNet-1k datasets compare the proposed method with advanced classification methods. The results show that the proposed model achieves higher classification accuracy, confirming its effectiveness and competitive advantage.
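For readers unfamiliar with the frequency-decomposition step mentioned above, the sketch below performs a single-level 2D Haar wavelet decomposition of a feature tensor, producing a low-frequency approximation and three detail sub-bands. How the proposed module actually combines these sub-bands with ViT features is not specified in the abstract, so this is only a generic illustration.

```python
import torch

def haar_dwt2(x: torch.Tensor):
    """Single-level 2D Haar wavelet transform of a (B, C, H, W) tensor.

    Returns the low-frequency approximation (LL) and three detail
    sub-bands (LH, HL, HH), each at half the spatial resolution.
    """
    x00 = x[:, :, 0::2, 0::2]
    x01 = x[:, :, 0::2, 1::2]
    x10 = x[:, :, 1::2, 0::2]
    x11 = x[:, :, 1::2, 1::2]
    ll = (x00 + x01 + x10 + x11) / 2
    lh = (x00 + x01 - x10 - x11) / 2
    hl = (x00 - x01 + x10 - x11) / 2
    hh = (x00 - x01 - x10 + x11) / 2
    return ll, lh, hl, hh

# A 32x32 feature map decomposes into four 16x16 sub-bands, which could be
# processed by separate branches of a wavelet-enhancement module.
feat = torch.randn(1, 64, 32, 32)
print([t.shape for t in haar_dwt2(feat)])  # four tensors of shape (1, 64, 16, 16)
```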
Funding: University of Auckland Faculty Research Development Fund (3716476).
Abstract: Deep learning techniques have recently become the most popular method for automatically detecting bridge damage captured by unmanned aerial vehicles (UAVs). However, their wider application to real-world scenarios is hindered by three challenges: ① defect scale variance, motion blur, and strong illumination significantly affect the accuracy and reliability of damage detectors; ② existing, commonly used anchor-based damage detectors struggle to generalize effectively to harsh real-world scenarios; and ③ convolutional neural networks (CNNs) lack the capability to model long-range dependencies across the entire image. This paper presents an efficient Vision Transformer-enhanced anchor-free YOLO (You Only Look Once) method to address these challenges. First, a concrete bridge damage dataset was established, augmented with motion blur and varying brightness. Four key enhancements were then applied to an anchor-based YOLO method: ① four detection heads were introduced to alleviate the multi-scale damage detection issue; ② decoupled heads were employed to resolve the conflict between the classification and bounding-box regression tasks inherent in the original coupled head design; ③ an anchor-free mechanism was incorporated to reduce computational complexity and improve generalization to real-world scenarios; and ④ a novel Vision Transformer block, C3MaxViT, was added to enable CNNs to model long-range dependencies. These enhancements were integrated into an advanced anchor-based YOLOv5l algorithm, and the proposed Vision Transformer-enhanced anchor-free YOLO method was then compared against cutting-edge damage detection methods. The experimental results demonstrated the effectiveness of the proposed method, with an increase of 8.1% in mean average precision at an intersection-over-union threshold of 0.5 (mAP50) and an improvement of 8.4% in mAP@[0.5:0.05:0.95]. Furthermore, extensive ablation studies revealed that the four detection heads, the decoupled head design, the anchor-free mechanism, and C3MaxViT contributed improvements of 2.4%, 1.2%, 2.6%, and 1.9% in mAP50, respectively.
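To make the decoupled-head enhancement concrete, the hedged sketch below shows an anchor-free head with separate classification and regression branches, in contrast to a coupled head that predicts both from one shared convolution. Channel counts, branch depths, and the output parameterization are placeholders rather than the paper’s actual design.

```python
import torch
import torch.nn as nn

class DecoupledHead(nn.Module):
    """Anchor-free detection head with separate classification and regression branches.

    Instead of one shared convolution emitting class scores and box offsets
    together (the coupled design), each task gets its own lightweight branch,
    reducing interference between the two objectives.
    """
    def __init__(self, in_channels=256, num_classes=4):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(in_channels, in_channels, 3, padding=1), nn.SiLU(),
                nn.Conv2d(in_channels, in_channels, 3, padding=1), nn.SiLU(),
            )
        self.cls_branch = branch()
        self.reg_branch = branch()
        self.cls_pred = nn.Conv2d(in_channels, num_classes, 1)  # per-cell class scores
        self.box_pred = nn.Conv2d(in_channels, 4, 1)             # per-cell box offsets
        self.obj_pred = nn.Conv2d(in_channels, 1, 1)             # per-cell objectness

    def forward(self, feat):                                      # feat: (B, C, H, W)
        cls_feat = self.cls_branch(feat)
        reg_feat = self.reg_branch(feat)
        return self.cls_pred(cls_feat), self.box_pred(reg_feat), self.obj_pred(reg_feat)

# One FPN level of a 640x640 input at stride 16 gives a 40x40 grid of predictions.
cls, box, obj = DecoupledHead()(torch.randn(1, 256, 40, 40))
print(cls.shape, box.shape, obj.shape)  # (1,4,40,40) (1,4,40,40) (1,1,40,40)
```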
Abstract: Accurate plant species classification is essential for many applications, such as biodiversity conservation, ecological research, and sustainable agricultural practices. Traditional morphological classification methods are inherently slow, labour-intensive, and prone to inaccuracies, especially when distinguishing between species exhibiting visual similarities or high intra-species variability. To address these limitations and to overcome the constraints of image-only approaches, we introduce a novel Artificial Intelligence-driven framework. This approach integrates robust Vision Transformer (ViT) models for advanced visual analysis with a multi-modal data fusion strategy, incorporating contextual metadata such as precise environmental conditions, geographic location, and phenological traits. This combination of visual and ecological cues significantly enhances classification accuracy and robustness, proving especially vital in complex, heterogeneous real-world environments. The proposed model achieves a test accuracy of 97.27% and a Mean Reciprocal Rank (MRR) of 0.9842, demonstrating strong generalization capabilities. Furthermore, efficient utilization of high-performance GPU resources (RTX 3090, 18 GB memory) ensures scalable processing of high-dimensional data. Comparative analysis consistently confirms that our metadata fusion approach substantially improves classification performance, particularly for morphologically similar species, and through principled self-supervised and transfer learning from ImageNet, the model adapts efficiently to new species, ensuring enhanced generalization. This comprehensive approach holds profound practical implications for precise conservation initiatives, rigorous ecological monitoring, and advanced agricultural management.
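The abstract states that ViT image features are fused with contextual metadata but does not give the fusion mechanism. A simple late-fusion sketch is shown below, assuming an MLP-encoded metadata vector concatenated with the image embedding before the classifier; the backbone choice, metadata dimensionality, and class count are assumptions, not the paper’s configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

class MetadataFusionClassifier(nn.Module):
    """Late fusion of a ViT image embedding with tabular metadata (hedged sketch).

    Metadata (e.g., location, environmental readings, and phenological traits
    encoded as a numeric vector) is embedded by a small MLP and concatenated
    with the image embedding before the final classification layer.
    """
    def __init__(self, num_classes=100, meta_dim=12, meta_hidden=64):
        super().__init__()
        self.backbone = vit_b_16(weights="IMAGENET1K_V1")  # ImageNet-pretrained ViT
        self.backbone.heads = nn.Identity()                 # keep the 768-d CLS embedding
        self.meta_encoder = nn.Sequential(
            nn.Linear(meta_dim, meta_hidden), nn.ReLU(),
            nn.Linear(meta_hidden, meta_hidden), nn.ReLU(),
        )
        self.classifier = nn.Linear(768 + meta_hidden, num_classes)

    def forward(self, image, metadata):
        img_emb = self.backbone(image)                       # (B, 768)
        meta_emb = self.meta_encoder(metadata)               # (B, meta_hidden)
        return self.classifier(torch.cat([img_emb, meta_emb], dim=1))

# Dummy batch: two 224x224 RGB images plus a 12-dimensional metadata vector each.
model = MetadataFusionClassifier()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 12))
print(logits.shape)  # torch.Size([2, 100])
```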
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R346), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Mango farming contributes significantly to the economy, particularly in developing countries. However, mango trees are susceptible to various diseases caused by fungi, viruses, and bacteria, and diagnosing these diseases at an early stage is crucial to prevent their spread, which can lead to substantial losses. The development of deep learning models for detecting crop diseases is an active area of research in smart agriculture. This study focuses on mango plant diseases and employs the ConvNeXt and Vision Transformer (ViT) architectures. Two datasets were used. The first, MangoLeafBD, contains data for mango leaf diseases such as anthracnose, bacterial canker, gall midge, and powdery mildew. The second, SenMangoFruitDDS, covers mango fruit conditions such as Alternaria, Anthracnose, Black Mould Rot, Stem End Rot, and Healthy. Both datasets were obtained from publicly available sources. The proposed model achieved an accuracy of 99.87% on the MangoLeafBD dataset and 98.40% on the SenMangoFruitDDS dataset. The results demonstrate that ConvNeXt and ViT models can effectively diagnose mango diseases, enabling farmers to identify these conditions more efficiently. The system contributes to increased mango production and minimizes economic losses by reducing the time and effort needed for manual diagnostics. Additionally, the proposed system is integrated into a mobile application that uses the model as a backend to detect mango diseases instantly.
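As a rough illustration of the transfer-learning setup described above (not the authors’ exact pipeline), the sketch below loads an ImageNet-pretrained ConvNeXt-Tiny from torchvision and replaces its classification layer for a mango-leaf-disease task; the class count and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 8  # placeholder: set to the number of leaf-disease categories in the dataset

# Load an ImageNet-pretrained ConvNeXt-Tiny and swap in a new classification layer.
model = models.convnext_tiny(weights="IMAGENET1K_V1")
in_features = model.classifier[2].in_features           # final Linear layer of ConvNeXt's head
model.classifier[2] = nn.Linear(in_features, NUM_CLASSES)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of leaf images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with a dummy batch of four 224x224 RGB images.
print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, NUM_CLASSES, (4,))))
```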