The rapid advancement of large language models (LLMs) has driven the pervasive adoption of AI-generated content (AIGC), while also raising concerns about misinformation, academic misconduct, biased or harmful content, and other risks. Detecting AI-generated text has thus become essential to safeguard the authenticity and reliability of digital information. This survey reviews recent progress in detection methods, categorizing approaches as passive or active based on whether they rely on intrinsic textual features or embedded signals. Passive detection is further divided into surface-linguistic-feature-based and language-model-based methods, whereas active detection encompasses watermarking-based and semantic-retrieval-based approaches. This taxonomy enables systematic comparison of methodological differences in model dependency, applicability, and robustness. A key challenge for AI-generated text detection is that existing detectors are highly vulnerable to adversarial attacks, particularly paraphrasing, which substantially compromises their effectiveness. Addressing this gap highlights the need for future research on enhancing robustness and cross-domain generalization. By synthesizing current advances and limitations, this survey provides a structured reference for the field and outlines pathways toward more reliable and scalable detection solutions.
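To make the language-model-based passive detection idea concrete, here is a deliberately tiny sketch, not any method from the surveyed papers: a unigram perplexity score with add-one smoothing over a toy reference corpus. Real detectors use an actual LLM's token log-probabilities, but the intuition that machine-like text scores as more probable is the same. The function name and the corpus are invented for illustration.

```python
import math
from collections import Counter

def perplexity(text, reference_corpus):
    """Toy unigram perplexity: lower values suggest more 'model-like' text.
    Real detectors replace this with an LLM's token log-probabilities."""
    counts = Counter(reference_corpus.split())
    total = sum(counts.values())
    vocab = len(counts)
    log_prob = 0.0
    tokens = text.split()
    for tok in tokens:
        # add-one smoothing so unseen words do not zero out the product
        p = (counts[tok] + 1) / (total + vocab)
        log_prob += math.log(p)
    return math.exp(-log_prob / max(len(tokens), 1))

corpus = "the model generates fluent text the model is trained on text"
# in-distribution text scores lower than out-of-distribution text
print(perplexity("the model generates text", corpus) <
      perplexity("zebra quantum waffle", corpus))
```

A real passive detector would threshold such a score (or a curvature-based variant of it) to flag likely machine-generated passages.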
Detecting small forest fire targets in unmanned aerial vehicle (UAV) images is difficult, as flames typically cover only a very limited portion of the visual scene. This study proposes the Context-guided Compact Lightweight Network (CCLNet), an end-to-end lightweight model designed to detect small forest fire targets while ensuring efficient inference on devices with constrained computational resources. CCLNet employs a three-stage network architecture built around three key modules. The C3F-Convolutional Gated Linear Unit (C3F-CGLU) performs selective local feature extraction while preserving fine-grained, high-frequency flame details. The Context-Guided Feature Fusion Module (CGFM) replaces plain concatenation with triplet-attention interactions to emphasize subtle flame patterns. The Lightweight Shared Convolution with Separated Batch Normalization Detection head (LSCSBD) reduces parameters through separated batch normalization while maintaining scale-specific statistics. We build TF-11K, an 11,139-image dataset combining 9139 self-collected UAV images from subtropical forests and 2000 re-annotated frames from the FLAME dataset. On TF-11K, CCLNet attains 85.8% mAP@0.5, 45.5% mean Average Precision (mAP)@[0.5:0.95], 87.4% precision, and 79.1% recall with 2.21 M parameters and 5.7 giga floating-point operations (GFLOPs). The ablation study confirms that each module contributes to both accuracy and efficiency. Cross-dataset evaluation on DFS yields 77.5% mAP@0.5 and 42.3% mAP@[0.5:0.95], indicating good generalization to unseen scenes. These results suggest that CCLNet offers a practical balance between accuracy and speed for small-target forest fire monitoring with UAVs.
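The gated-linear-unit idea behind a CGLU-style block can be sketched as a value path modulated element-wise by a sigmoid gate. This is a minimal illustration with hand-picked weights, not CCLNet's actual C3F-CGLU implementation:

```python
import math

def cglu(x, w_value, w_gate):
    """Gated linear unit (sketch): a linear value path modulated element-wise
    by a sigmoid gate, letting the block pass or suppress fine-grained
    details (e.g., high-frequency flame texture) per position."""
    value = [sum(wi * xi for wi, xi in zip(row, x)) for row in w_value]
    gate = [1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(row, x))))
            for row in w_gate]
    return [v * g for v, g in zip(value, gate)]

x = [0.5, -1.0, 2.0]
w_value = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]    # identity value path
w_gate = [[10, 0, 0], [0, 10, 0], [0, 0, 10]]  # sharp gates for illustration
out = cglu(x, w_value, w_gate)
print(out)  # the negative input is gated toward zero, positives pass through
```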
In recent years, the rapid advancement of artificial intelligence (AI) technology has enabled AI-assisted negative screening to significantly enhance physicians' efficiency through image feature analysis and multimodal data modeling, allowing them to focus more on diagnosing positive cases. Meanwhile, multispectral imaging (MSI) integrates spectral and spatial resolution to capture subtle tissue features invisible to the human eye, providing high-resolution data support for pathological analysis. Combining AI technology with MSI and employing quantitative methods to analyze multiband biomarkers (such as absorbance differences in keratin pearls) can effectively improve diagnostic specificity and reduce subjective errors in manual slide interpretation. To address the challenge of identifying negative tissue sections, we developed a discrimination algorithm powered by MSI and demonstrated its efficacy using cutaneous squamous cell carcinoma (cSCC) as a representative case study. The algorithm achieved 100% accuracy in excluding negative cases and effectively mitigated the false-positive problem caused by cSCC heterogeneity. We constructed an MSI dataset acquired at the 520 nm, 600 nm, and 630 nm wavelengths. Subsequently, we employed an optimized MobileViT model for tissue classification and performed comparative analyses against other models. The experimental results showed that our optimized MobileViT model achieved superior performance in identifying negative tissue sections, with a perfect accuracy rate of 100%. Thus, our results confirm the feasibility of integrating MSI with AI to exclude negative cases with perfect accuracy, offering a novel solution to alleviate the workload of pathologists.
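The kind of multiband quantity the abstract mentions can be illustrated with the Beer-Lambert absorbance, A = -log10(I/I0), evaluated at the three acquisition wavelengths. The intensity values below are made up for illustration; they are not from the study's data:

```python
import math

def absorbance(intensity, reference):
    """Beer-Lambert absorbance A = -log10(I / I0): higher values mean the
    tissue transmitted less light at that wavelength."""
    return -math.log10(intensity / reference)

# hypothetical transmitted intensities at the three acquisition wavelengths
reference = 1000.0
for wavelength, intensity in [(520, 250.0), (600, 500.0), (630, 790.0)]:
    print(f"{wavelength} nm: A = {absorbance(intensity, reference):.3f}")
```

Per-band absorbance vectors like these are the sort of features a classifier can use to separate tissue types quantitatively rather than by visual impression.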
Generative Adversarial Networks (GANs) have become valuable tools in medical imaging, enabling realistic image synthesis for enhancement, augmentation, and restoration. However, their integration into clinical workflows raises concerns, particularly the risk of subtle distortions or hallucinations that may undermine diagnostic accuracy and weaken trust in AI-assisted decision-making. To address this challenge, we propose a hybrid deep learning framework designed to detect GAN-induced artifacts in medical images, thereby reinforcing the reliability of AI-driven diagnostics. The framework integrates low-level statistical descriptors, including high-frequency residuals and Gray-Level Co-occurrence Matrix (GLCM) texture features, with high-level semantic representations extracted from a pre-trained ResNet18. This dual-stream approach enables detection of both pixel-level anomalies and structural inconsistencies introduced by GAN-based manipulation. We validated the framework on a curated dataset of 10,000 medical images, evenly split between authentic and GAN-generated samples across four modalities: MRI, CT, X-ray, and fundus photography. To improve generalizability to real-world clinical settings, we incorporated domain adaptation strategies such as adversarial training and style transfer, reducing domain shift by 15%. Experimental results demonstrate robust performance, achieving 92.6% accuracy and an F1-score of 0.91 on synthetic test data, and maintaining strong performance on real-world GAN-modified images with 87.3% accuracy and an F1-score of 0.85. Additionally, the model attained an AUC of 0.96 and an average precision of 0.92, outperforming conventional GAN detection pipelines and baseline Convolutional Neural Network (CNN) architectures. These findings establish the proposed framework as an effective and reliable solution for detecting GAN-induced hallucinations in medical imaging, representing an important step toward building trustworthy and clinically deployable AI systems.
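The GLCM descriptors in the low-level stream can be sketched as follows. This toy version computes a single-offset co-occurrence matrix and its contrast statistic; real pipelines aggregate several offsets and angles, and the images here are tiny invented examples:

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Gray-Level Co-occurrence Matrix for one pixel offset: counts how
    often gray level i is followed by gray level j, normalized to a
    probability table."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y][x]][image[y + dy][x + dx]] += 1
    total = sum(sum(row) for row in m)
    return [[c / total for c in row] for row in m]

def contrast(p):
    """GLCM contrast: sum of p(i, j) * (i - j)^2 -- high for noisy textures,
    low for smooth ones."""
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

smooth = [[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]
noisy = [[0, 3, 0, 3], [3, 0, 3, 0], [0, 3, 0, 3], [3, 0, 3, 0]]
print(contrast(glcm(smooth)) < contrast(glcm(noisy)))  # True
```

GAN upsampling tends to leave statistical fingerprints in exactly such second-order texture statistics, which is why they complement the semantic ResNet18 stream.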
The application of deep learning for target detection in aerial images captured by unmanned aerial vehicles (UAVs) has emerged as a prominent research focus. Due to the considerable distance between UAVs and the photographed objects, coupled with complex shooting environments, existing models often struggle to achieve accurate real-time target detection. In this paper, a You Only Look Once v8 (YOLOv8) model is modified in four aspects: the detection head, the up-sampling module, the feature extraction module, and the parameter optimization of positive sample screening. The resulting YOLO-S3DT model is proposed to improve the detection of small targets in aerial images. Experimental results show that all detection indexes of the proposed model are significantly improved without increasing the number of model parameters and with only limited growth in computation. Moreover, the model outperforms other detection models, demonstrating its advancement within this category of tasks.
Unmanned aerial vehicle (UAV) imagery poses significant challenges for object detection due to extreme scale variations, high-density small targets (68% in the VisDrone dataset), and complex backgrounds. While YOLO-series models achieve speed-accuracy trade-offs via fixed convolution kernels and manual feature fusion, their rigid architectures struggle with multi-scale adaptability, as exemplified by YOLOv8n's 36.4% mAP and 13.9% small-object AP on VisDrone2019. This paper presents YOLO-LE, a lightweight framework addressing these limitations through three novel designs: (1) We introduce the C2f-Dy and LDown modules to enhance the backbone's sensitivity to small-object features while reducing backbone parameters, thereby improving model efficiency. (2) An adaptive feature fusion module is designed to dynamically integrate multi-scale feature maps, optimizing the neck structure, reducing neck complexity, and enhancing overall model performance. (3) We replace the original loss function with a distribution focal loss and incorporate a lightweight self-attention mechanism to improve small-object recognition and bounding-box regression accuracy. Experimental results demonstrate that YOLO-LE achieves 39.9% mAP@0.5 on VisDrone2019, a 9.6% improvement over YOLOv8n, while maintaining a computational cost of 8.5 GFLOPs. This provides an efficient solution for UAV object detection in complex scenarios.
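The focal loss variant mentioned in design (3) can be sketched under the standard Distribution Focal Loss (DFL) formulation: a box edge is regressed as a discrete distribution over bins, and the loss pulls probability mass toward the two integer bins bracketing the continuous target. The probability vectors below are invented, and this may differ in detail from YOLO-LE's exact loss:

```python
import math

def distribution_focal_loss(probs, target):
    """Distribution Focal Loss (sketch): cross-entropy against the two
    integer bins bracketing the continuous regression target, weighted by
    the target's distance to each bin."""
    left = int(target)        # lower bracketing bin
    right = left + 1          # upper bracketing bin
    w_left = right - target   # weight toward the lower bin
    w_right = target - left   # weight toward the upper bin
    return -(w_left * math.log(probs[left]) + w_right * math.log(probs[right]))

sharp = [0.05, 0.60, 0.30, 0.05]  # mass concentrated near the target 1.3
flat = [0.25, 0.25, 0.25, 0.25]   # uninformative distribution
print(distribution_focal_loss(sharp, 1.3) < distribution_focal_loss(flat, 1.3))
```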
A measurement system for the scattering characteristics of warhead fragments based on high-speed imaging offers advantages such as simple deployment, flexible maneuverability, and high spatiotemporal resolution, enabling the acquisition of full-process data of the fragment scattering process. However, mismatches between camera frame rates and target velocities can lead to long motion-blur tails on high-speed fragment targets, resulting in low signal-to-noise ratios and rendering conventional detection algorithms ineffective in dynamic, strong-interference testing environments. In this study, we propose a detection framework centered on separating and suppressing dynamic strong-interference disturbance signals. We introduce a Gaussian mixture model constrained under a joint spatial-temporal-transform-domain Dirichlet process, combined with total variation regularization, to achieve disturbance signal suppression. Experimental results demonstrate that the proposed disturbance suppression method can be integrated with conventional motion target detection tasks, enabling adaptation to real-world data to a certain extent. Moreover, we provide a specific implementation of this process, which achieves a detection rate close to 100% with an approximately 0% false alarm rate on multiple sets of real target field test data. This research effectively advances the field of damage parameter testing.
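The total variation regularizer used in the suppression step can be illustrated with its anisotropic form, the sum of absolute neighbor differences. The images below are toy examples, not field data:

```python
def total_variation(image):
    """Anisotropic total variation: sum of absolute horizontal and vertical
    neighbor differences. As a regularizer it penalizes noisy, rapidly
    varying estimates while leaving piecewise-smooth regions cheap."""
    h, w = len(image), len(image[0])
    tv = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                tv += abs(image[y][x + 1] - image[y][x])
            if y + 1 < h:
                tv += abs(image[y + 1][x] - image[y][x])
    return tv

smooth = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]   # constant patch: TV = 0
noisy = [[0, 9, 0], [9, 0, 9], [0, 9, 0]]    # checkerboard: heavily penalized
print(total_variation(smooth), total_variation(noisy))
```

Minimizing a data term plus such a TV term drives the estimated disturbance field toward piecewise-smooth solutions, separating it from blur-tailed fragment signatures.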
In recent years, with the development of synthetic aperture radar (SAR) technology and the widespread application of deep learning, lightweight detection of SAR images has emerged as a research direction. The ultimate goal is to reduce computational and storage requirements while ensuring detection accuracy and reliability, making such models an ideal choice for rapid response and efficient processing. To this end, a lightweight SAR ship target detection algorithm based on YOLOv8 is proposed in this study. Firstly, the C2f-Sc module was designed by fusing the C2f module in the backbone network with ScConv to reduce spatial and channel redundancy between features in convolutional neural networks. At the same time, the Ghost module was introduced into the neck network to effectively reduce model parameters and computational complexity. A relatively lightweight EMA attention mechanism was added to the neck network to promote the effective fusion of features at different levels. Experimental results showed that the parameters and GFLOPs of the improved model are reduced by 8.5% and 7.0%, respectively, while mAP@0.5 and mAP@0.5:0.95 increase by 0.7% and 1.8%. The model is thus lighter while achieving higher detection accuracy, giving it clear application value.
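The parameter savings of a Ghost module can be sketched by counting weights, assuming the standard GhostNet formulation (a fraction of "intrinsic" maps from a normal convolution, the rest generated by cheap depthwise operations). The layer sizes below are illustrative, not taken from the paper's network:

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a plain k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, ratio=2, cheap_k=3):
    """Ghost module parameter count (sketch): c_out // ratio intrinsic maps
    from a normal convolution, the rest from cheap depthwise filters."""
    intrinsic = c_out // ratio
    cheap = intrinsic * (ratio - 1) * cheap_k * cheap_k  # depthwise filters
    return conv_params(c_in, intrinsic, k) + cheap

plain = conv_params(64, 128, 3)
ghost = ghost_params(64, 128, 3)
print(plain, ghost)  # the ghost variant needs roughly half the parameters
```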
In the field of image forensics, image tampering detection is a critical and challenging task. Traditional methods based on manually designed feature extraction typically focus on a specific type of tampering operation, which limits their effectiveness in complex scenarios involving multiple forms of tampering. Although deep learning-based methods offer the advantage of automatic feature learning, current approaches still require further improvements in detection accuracy and computational efficiency. To address these challenges, this study applies the UNet 3+ model to image tampering detection and proposes a hybrid framework, referred to as DDT-Net (Deep Detail Tracking Network), which integrates deep learning with traditional detection techniques. In contrast to traditional additive methods, this approach innovatively applies a multiplicative fusion technique during downsampling, effectively combining the deep learning feature maps at each layer with those generated by the Bayar noise stream. This design enables noise residual features to guide the learning of semantic features more precisely and efficiently, thus facilitating comprehensive feature-level interaction. Furthermore, by leveraging the complementary strengths of deep networks in capturing large-scale semantic manipulations and traditional algorithms' proficiency in detecting fine-grained local traces, the method significantly enhances the accuracy and robustness of tampered-region detection. Compared with other approaches, the proposed method achieves an F1 score improvement exceeding 30% on the DEFACTO and DIS25k datasets. In addition, it has been extensively validated on other datasets, including CASIA and DIS25k. Experimental results demonstrate that this method achieves outstanding performance across various types of image tampering detection tasks.
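The Bayar noise stream relies on a constrained convolution. A sketch of the standard Bayar constraint, which may differ in detail from DDT-Net's implementation, fixes the center weight to -1 and rescales the remaining weights to sum to 1, so the filter predicts each pixel from its neighbors and outputs a noise residual:

```python
def bayar_constrain(kernel):
    """Bayar-style constraint (sketch): center weight fixed to -1 and the
    remaining weights rescaled to sum to 1, so the whole filter sums to
    zero and acts as a prediction-error (noise residual) filter."""
    n = len(kernel)
    c = n // 2
    rest = sum(sum(row) for row in kernel) - kernel[c][c]
    out = [[w / rest for w in row] for row in kernel]
    out[c][c] = -1.0
    return out

k = bayar_constrain([[1, 2, 1], [2, 5, 2], [1, 2, 1]])
center = k[1][1]
total = sum(sum(row) for row in k)
print(center, round(total, 6))  # -1.0 and 0.0: the residual filter sums to zero
```

Applied to an image, such a filter suppresses content and keeps the local prediction error, which is where splicing and inpainting traces tend to live.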
BACKGROUND Optical coherence tomography (OCT) enables high-resolution, non-invasive visualization of retinal structures. Recent evidence suggests that retinal layer alterations may reflect central nervous system changes associated with psychiatric disorders such as schizophrenia (SZ). AIM To develop an advanced deep learning model to classify OCT images and distinguish patients with SZ from healthy controls using retinal biomarkers. METHODS A novel convolutional neural network, Self-AttentionNeXt, was designed by integrating grouped self-attention mechanisms, residual and inverted bottleneck blocks, and a final 1×1 convolution for feature refinement. The model was trained and tested on both a custom OCT dataset collected from patients with SZ and a publicly available OCT dataset (OCT2017). RESULTS Self-AttentionNeXt achieved 97.0% accuracy on the collected SZ OCT dataset and over 95% accuracy on the public OCT2017 dataset. Gradient-weighted class activation mapping visualizations confirmed the model's attention to clinically relevant retinal regions, suggesting effective feature localization. CONCLUSION Self-AttentionNeXt effectively combines transformer-inspired attention mechanisms with a convolutional neural network architecture to support the early and accurate detection of SZ from OCT images. This approach offers a promising direction for artificial intelligence-assisted psychiatric diagnostics and clinical decision support.
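The self-attention at the core of Self-AttentionNeXt can be sketched with plain scaled dot-product attention over a short token sequence; the grouped variant described in METHODS would run this independently per channel group. For simplicity Q = K = V here, and the token values are invented:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(tokens):
    """Scaled dot-product self-attention (sketch): each token's output is a
    similarity-weighted average of all tokens. Q = K = V = tokens here;
    a grouped variant would apply this per channel group."""
    d = len(tokens[0])
    out = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, tokens))
                    for j in range(d)])
    return out

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]]
result = self_attention(tokens)
print(result[0] == result[2])  # identical queries attend identically: True
```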
To address the long, narrow geometric features of damage in conveyor belts with steel rope cores and the poor generalization ability of existing X-ray image detection, a damage detection method for X-ray images is proposed based on an improved fully convolutional one-stage object detection (FCOS) algorithm. The regression performance of bounding boxes is optimized by introducing the complete intersection over union (CIoU) loss function into the improved algorithm. The feature fusion network structure is modified by adding adaptive fusion paths, making full use of the accurate localization and semantic features of multi-scale feature fusion networks. Finally, the network was trained and validated using an X-ray image dataset of damage in conveyor belts with steel rope cores provided by a flaw detection equipment manufacturer. In addition, data enhancement methods such as rotating, mirroring, and scaling were employed to enrich the image dataset so that the model is adequately trained. Experimental results showed that the improved FCOS algorithm raises the precision rate and recall rate by 20.9% and 14.8%, respectively, compared with the original algorithm. Meanwhile, compared with Fast R-CNN, Faster R-CNN, SSD, and YOLOv3, the improved FCOS algorithm has obvious advantages: the detection precision rate and recall rate of the modified network reached 95.8% and 97.0%, respectively, demonstrating higher detection accuracy without affecting speed. These results have reference significance for the automatic identification and detection of steel-core conveyor belt damage.
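The complete IoU loss introduced into the improved FCOS can be sketched as IoU plus a center-distance penalty and an aspect-ratio penalty. The box coordinates below are illustrative:

```python
import math

def ciou_loss(box_a, box_b):
    """Complete IoU loss (sketch): 1 - IoU + center-distance term
    + aspect-ratio term. Boxes are (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection and union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)
    # squared center distance over the enclosing box's squared diagonal
    rho2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((ax2 - ax1) / (ay2 - ay1))
                              - math.atan((bx2 - bx1) / (by2 - by1))) ** 2
    alpha = v / (1 - iou + v) if iou < 1 else 0.0
    return 1 - iou + rho2 / c2 + alpha * v

perfect = ciou_loss((0, 0, 2, 2), (0, 0, 2, 2))
shifted = ciou_loss((0, 0, 2, 2), (1, 0, 3, 2))
print(perfect, shifted)  # 0.0 for identical boxes, larger when shifted
```

The extra penalty terms keep gradients informative for long, narrow boxes even when the plain IoU barely changes, which is why CIoU suits elongated damage regions.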
Given the challenges of underwater garbage detection, including insufficient lighting, low visibility, high noise levels, and high misclassification rates, this paper proposes CSC-YOLO, a model for detecting garbage in complex underwater environments characterized by murky water and strong hydrodynamic conditions. The model incorporates the Content-Guided Attention (CGA) mechanism into the SPPF module of the YOLOv8 backbone network to enhance dehazing, reduce noise interference, and fuse multi-scale feature information. Additionally, a Single-Head Self-Attention (SHSA) mechanism is introduced in the final layer of the backbone network to achieve local and global feature fusion in a lightweight manner, improving the accuracy of garbage detection. In the detection head, the CBAM attention mechanism is added to further enhance feature representation, improve the model's target localization, and increase robustness against complex backgrounds and noise. Furthermore, the anchor box coordinates from CSC-YOLO are fed into Mobile_SAM to achieve precise segmentation of underwater garbage. Experimental results show that CSC-YOLO achieves a precision of 0.962, recall of 0.898, F1-score of 0.929, and mAP@0.5 of 0.960 on the ICRA19 trash dataset, representing improvements of 2.9%, 1.7%, 2.3%, and 2.0% over YOLOv8n, respectively. The combination of CSC-YOLO and Mobile_SAM not only enables garbage detection in complex underwater environments but also achieves segmentation. This approach generates additional garbage segmentation masks without manual annotations, facilitating rapid expansion of labeled underwater garbage datasets for training. As an emerging model for intelligent underwater garbage detection, the proposed method holds significant potential for practical applications and academic research, offering an effective solution to the challenges of intelligent garbage detection in complex underwater environments.
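The CBAM channel-attention step added to the detection head can be sketched as follows: a shared MLP scores the average- and max-pooled channel descriptors, and a sigmoid of their sum reweights each channel. The weights below are tiny hand-picked values for illustration, not learned parameters:

```python
import math

def channel_attention(feature_maps, w1, w2):
    """CBAM-style channel attention (sketch): a shared two-layer MLP scores
    the average- and max-pooled channel descriptors, and a sigmoid of their
    sum gates each channel."""
    def mlp(desc):
        hidden = [max(0.0, sum(a * d for a, d in zip(row, desc))) for row in w1]
        return [sum(b * h for b, h in zip(row, hidden)) for row in w2]
    avg = [sum(fm) / len(fm) for fm in feature_maps]
    mx = [max(fm) for fm in feature_maps]
    scores = [a + m for a, m in zip(mlp(avg), mlp(mx))]
    gates = [1 / (1 + math.exp(-s)) for s in scores]
    return [[g * v for v in fm] for g, fm in zip(gates, feature_maps)]

# two channels, each flattened to two spatial positions
feature_maps = [[1.0, 3.0], [0.5, 0.5]]
w1 = [[1.0, -1.0]]     # channels -> hidden (reduction to size 1)
w2 = [[2.0], [-2.0]]   # hidden -> channels
out = channel_attention(feature_maps, w1, w2)
print(out)  # the informative first channel is kept, the flat one suppressed
```

CBAM follows this with an analogous spatial-attention step, omitted here for brevity.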
Electronic noses and thermal images are effective ways to diagnose the presence of gases in real time, and multimodal fusion of these modalities can yield highly accurate diagnostic systems. Low-cost thermal imaging software produces low-resolution thermal images in grayscale format, necessitating methods for improving resolution and colorizing the images. The objective of this paper is to develop and train a super-resolution generative adversarial network for improving the resolution of thermal images, followed by a sparse autoencoder for colorization of thermal images and a multimodal convolutional neural network for gas detection using electronic nose measurements and thermal images. The dataset used comprises 6400 thermal images and electronic nose measurements for four classes. A multimodal Convolutional Neural Network (CNN) built on a pre-trained EfficientNetB2 model was developed using both early and late feature fusion. The Super-Resolution Generative Adversarial Network (SRGAN) model was trained on low- and high-resolution thermal images, achieving a Structural Similarity Index (SSIM) of 90.28, a Peak Signal-to-Noise Ratio (PSNR) of 68.74, and a Mean Absolute Error (MAE) of 0.066. A sparse autoencoder was trained on the grayscale and colorized thermal images, producing an MAE of 0.035, a Mean Squared Error (MSE) of 0.006, and a Root Mean Squared Error (RMSE) of 0.0705. The multimodal CNN, trained on these images and the electronic nose measurements using early and late fusion techniques, achieved accuracies of 97.89% and 98.55%, respectively. Hence, the proposed framework can greatly aid integration with low-cost software to generate high-quality thermal camera images and highly accurate real-time gas detection.
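PSNR, one of the super-resolution metrics reported above, follows directly from the mean squared error between reference and reconstruction. The pixel values below are made up for illustration:

```python
import math

def psnr(reference, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(peak^2 / MSE).
    Higher is better; identical images give infinity."""
    mse = sum((r - x) ** 2
              for r, x in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)

ref = [50, 100, 150, 200]
close = [51, 99, 151, 199]   # small errors -> high PSNR
far = [80, 70, 120, 230]     # large errors -> low PSNR
print(psnr(ref, close), psnr(ref, far))
```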
Visible-infrared object detection leverages the day-night stable object perception capability of infrared images to enhance detection robustness in low-light environments by fusing the complementary information of visible and infrared images. However, the inherent differences in the imaging mechanisms of visible and infrared modalities make effective cross-modal fusion challenging. Furthermore, constrained by the physical characteristics of sensors and thermal diffusion effects, infrared images generally suffer from blurred object contours and missing details, making it difficult to extract object features effectively. To address these issues, we propose an infrared-visible image fusion network that realizes multimodal information fusion of infrared and visible images through a carefully designed multiscale fusion strategy. First, we design an adaptive gray-radiance enhancement (AGRE) module to strengthen the detail representation in infrared images, improving their usability in complex lighting scenarios. Next, we introduce a channel-spatial feature interaction (CSFI) module, which achieves efficient complementarity between the RGB and infrared (IR) modalities via dynamic channel switching and a spatial attention mechanism. Finally, we propose a multi-scale enhanced cross-attention fusion (MSECA) module, which optimizes the fusion of multi-level features through dynamic convolution and gating mechanisms and captures long-range complementary relationships of cross-modal features on a global scale, thereby enhancing the expressiveness of the fused features. Experiments on the KAIST, M3FD, and FLIR datasets demonstrate that our method delivers outstanding performance in both daytime and nighttime scenarios. On the KAIST dataset, the miss rate drops to 5.99%, and further to 4.26% in night scenes. On the FLIR and M3FD datasets, it achieves AP50 scores of 79.4% and 88.9%, respectively.
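As a simplified stand-in for the cross-modal fusion the CSFI module performs (the real module uses dynamic channel switching and learned spatial attention), a per-pixel sigmoid gate over the two modalities can be sketched as follows; all values are invented:

```python
import math

def spatial_fusion(rgb, ir):
    """Cross-modal fusion via a per-pixel gate (sketch): a sigmoid weight
    derived from the local intensity difference decides how much each
    modality contributes at every position."""
    fused = []
    for row_rgb, row_ir in zip(rgb, ir):
        row = []
        for v_rgb, v_ir in zip(row_rgb, row_ir):
            # favor whichever modality has the stronger local response
            w = 1 / (1 + math.exp(-(v_rgb - v_ir)))
            row.append(w * v_rgb + (1 - w) * v_ir)
        fused.append(row)
    return fused

rgb = [[0.9, 0.1], [0.8, 0.0]]   # daytime detail strong in RGB
ir = [[0.1, 0.9], [0.0, 0.7]]    # thermal signature strong in IR
fused = spatial_fusion(rgb, ir)
print(fused)  # each position leans toward the stronger signal
```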
Efficient banana crop detection is crucial for precision agriculture; however, traditional remote sensing methods often lack the spatial resolution required for accurate identification. This study utilizes low-altitude unmanned aerial vehicle (UAV) images and deep learning-based object detection models to enhance banana plant detection. A comparative analysis of the Faster Region-Based Convolutional Neural Network (Faster R-CNN), You Only Look Once version 3 (YOLOv3), Retina Network (RetinaNet), and Single Shot MultiBox Detector (SSD) was conducted to evaluate their effectiveness. Results show that RetinaNet achieved the highest detection accuracy, with a precision of 96.67%, a recall of 71.67%, and an F1 score of 81.33%. The study further highlights the impact of scale variation, occlusion, and vegetation density on detection performance. Unlike previous studies, this research systematically evaluates multi-scale object detection models for banana plant identification, offering insights into the advantages of UAV-based deep learning applications in agriculture. In addition, this study compares five evaluation metrics across the four detection models using both RGB and grayscale images. Specifically, RetinaNet exhibited the best overall performance with grayscale images, achieving the highest values across all five metrics. Compared with its performance on RGB images, these results represent a marked improvement, confirming the potential of grayscale preprocessing to enhance detection capability.
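The precision, recall, and F1 metrics used to compare the four detectors follow directly from detection counts. The counts below are hypothetical, not the study's actual tallies:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive, and
    false-negative detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# hypothetical counts: 86 plants found correctly, 3 false boxes, 34 missed
p, r, f1 = detection_metrics(tp=86, fp=3, fn=34)
print(f"precision={p:.4f} recall={r:.4f} f1={f1:.4f}")
```

Note how a high precision can coexist with a modest recall when many plants are missed, the same pattern seen in the RetinaNet figures above.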
Underwater target detection in forward-looking sonar (FLS) images is a challenging but promising endeavor. Existing neural-based methods yield notable progress, but there remains room for improvement because they overlook the unique characteristics of underwater environments. Considering the low imaging resolution, complex background environments, and large changes in target imaging of underwater sonar images, this paper designs a sonar image target detection network based on progressive sensitivity capture, named ProNet. It progressively captures the sensitive regions in the current image where potentially effective targets may exist. Guided by this idea, the primary technical innovation of this paper is a foundational module structure for constructing a sonar target detection backbone network. This structure employs a multi-subspace mixed convolution module that first maps sonar images into different subspaces and extracts local contextual features using varying convolutional receptive fields within these heterogeneous subspaces. Subsequently, a scale-aware aggregation module effectively aggregates the heterogeneous features extracted from the different subspaces. Finally, a multi-scale attention structure further enhances the relational perception of the aggregated features. We evaluated ProNet on three FLS datasets covering varying scenes, and experimental results indicate that ProNet outperforms current state-of-the-art sonar image and general target detectors.
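The multi-subspace mixed convolution idea can be sketched in 1D: split channels into subspaces, filter each with a different receptive field, and aggregate. ProNet's actual module additionally includes scale-aware aggregation and attention, omitted here; all values and kernels are invented:

```python
def conv1d(signal, kernel):
    """Valid-mode 1D correlation used as a stand-in for 2D convolution."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def mixed_subspace_conv(channels, kernels):
    """Multi-subspace mixed convolution (sketch): each channel subspace is
    filtered with its own receptive field, then the aligned outputs are
    averaged as a simple aggregation."""
    filtered = [conv1d(ch, k) for ch, k in zip(channels, kernels)]
    width = min(len(f) for f in filtered)  # align to the shortest output
    return [sum(f[i] for f in filtered) / len(filtered) for i in range(width)]

channels = [[1, 2, 3, 4, 5, 6], [6, 5, 4, 3, 2, 1]]
kernels = [[1, 1, 1], [1, 0, -1, 0, 1]]  # small vs. large receptive field
print(mixed_subspace_conv(channels, kernels))
```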
To address the challenges posed by complex backgrounds, diverse target angles, and numerous small targets in remote sensing images, alongside the high resource consumption that hinders model deployment, we propose an enhanced, lightweight You Only Look Once version 8 small (YOLOv8s) detection algorithm. Regarding network improvements, we first replace traditional horizontal boxes with rotated boxes for target detection, effectively addressing difficulties in feature extraction caused by varying target angles. Second, we design a module integrating convolutional neural network (CNN) and Transformer components to replace specific C2f modules in the backbone network, thereby expanding the model's receptive field and enhancing feature extraction in complex backgrounds. Finally, we introduce a feature calibration structure to mitigate potential feature mismatches during feature fusion. For model compression, we employ a lightweight channel pruning technique based on localized mean average precision (LMAP) to eliminate redundancies in the enhanced model. Although this approach results in some loss of detection accuracy, it effectively reduces the number of parameters, the computational load, and the model size. Additionally, we employ channel-level knowledge distillation to recover accuracy in the pruned model, further enhancing detection performance. Experimental results indicate that the enhanced algorithm achieves a 6.1% increase in mAP50 compared with YOLOv8s, while reducing parameters, computational load, and model size by 57.7%, 28.8%, and 52.3%, respectively.
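Channel pruning in general can be sketched as ranking channels by an importance score and keeping the top fraction. Note that the paper's LMAP criterion is mAP-based; the toy below uses the common batch-norm-scale proxy instead, purely for illustration:

```python
def prune_channels(bn_scales, keep_ratio=0.5):
    """Channel pruning (sketch): rank channels by the magnitude of their
    batch-norm scale factor (a common importance proxy; LMAP in the paper
    is mAP-based) and keep the top fraction, returned as sorted indices."""
    k = max(1, int(len(bn_scales) * keep_ratio))
    ranked = sorted(range(len(bn_scales)),
                    key=lambda i: abs(bn_scales[i]), reverse=True)
    return sorted(ranked[:k])

scales = [0.01, 0.80, 0.03, 0.55, 0.002, 0.90, 0.20, 0.04]
print(prune_channels(scales))  # indices of the retained channels
```

After pruning, the retained sub-network is fine-tuned (here, with channel-level distillation from the unpruned teacher) to recover the lost accuracy.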
Glaucoma, a chronic eye disease affecting millions worldwide, poses a substantial threat to eyesight and can result in permanent vision loss if left untreated. Manual identification of glaucoma is a complicated and time-consuming practice requiring specialized expertise, and results may be subjective. To address these challenges, this research proposes a computer-aided diagnosis (CAD) approach using Artificial Intelligence (AI) techniques for binary and multiclass classification of glaucoma stages. An ensemble fusion mechanism that combines the outputs of three pre-trained convolutional neural network (ConvNet) models, ResNet-50, VGG-16, and InceptionV3, is utilized in this paper. This fusion technique enhances diagnostic accuracy and robustness by ensemble-averaging the predictions from the individual models, leveraging their complementary strengths. The objective of this work is to assess the model's capability for early-stage glaucoma diagnosis. Classification is performed on a dataset collected from the Harvard Dataverse repository. With the proposed technique, for Normal vs. Advanced glaucoma classification, a validation accuracy of 98.04% and a testing accuracy of 98.03% are achieved, with a specificity of 100%, outperforming state-of-the-art methods. For multiclass classification, the suggested ensemble approach achieved a precision and sensitivity of 97%, and a specificity and testing accuracy of 98.57% and 96.82%, respectively. The proposed E-GlauNet model has significant potential to assist ophthalmologists in the screening and fast diagnosis of glaucoma, leading to more reliable, efficient, and timely diagnosis, particularly for early-stage detection and staging of the disease. While the proposed method demonstrates high accuracy and robustness, the study is limited by its evaluation on a single dataset. Future work will focus on external validation across diverse datasets and enhancing interpretability using explainable AI techniques.
Aflatoxin B1 (AFB1) is a toxic fungal metabolite that contaminates almonds from cultivation to harvesting. It leads to chronic health problems and significant economic loss to producers. Therefore, a fast and non-invasive detection technique is crucial for safeguarding food safety by swiftly identifying and eliminating contaminated almonds from the supply chain. Hyperspectral imaging has been explored as a potential non-destructive technology for detecting AFB1. However, the diverse geometries of almonds present a significant challenge in acquired images, thereby impacting the accuracy of the developed prediction and classification models. This study investigates the effectiveness of short-wave infrared (SWIR) hyperspectral imaging combined with deep learning for detecting AFB1 in almonds of varying geometries. Initially, partial least squares regression (PLSR) and support vector machine (SVM) regression models were evaluated for quantification, while SVM and quadratic discriminant analysis (QDA) classifiers were applied for classification. The results indicated that spectral responses varied with almond thickness, making quantification models unreliable for industrial applications. The Competitive Adaptive Reweighted Sampling (CARS) algorithm was employed to identify key spectral features for developing multi-spectral AFB1 classification models and to evaluate the feasibility of high-speed, accurate in-line detection. The deep learning approach significantly outperformed traditional machine learning models, with the pre-trained Inception V3 network achieving a cross-validation accuracy of 84.82%, an F1-score of 0.8522, and an area under the curve of 0.893. These findings highlight the superiority of deep learning-based hyperspectral imaging for accurate and reliable AFB1 detection in almonds with diverse shapes and thicknesses.
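The abstract above reports classification quality as accuracy, F1-score, and area under the curve. As a minimal illustration of how the first two are derived from confusion counts, the following sketch uses made-up counts (not the paper's data) for a binary contaminated-vs-clean split:

```python
def classification_metrics(tp, fp, fn, tn):
    """Derive accuracy, precision, recall and F1 from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Illustrative counts only, not taken from the study.
acc, p, r, f1 = classification_metrics(tp=85, fp=10, fn=15, tn=90)
```

The harmonic mean in F1 penalizes imbalance: a classifier with high precision but poor recall scores lower than the arithmetic mean would suggest.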
It is important to understand the development of joints and fractures in rock masses to ensure drilling stability and blasting effectiveness. Traditional manual observation techniques for identifying and extracting fracture characteristics have proven inefficient and prone to subjective interpretation. Moreover, conventional image processing algorithms and classical deep learning models often have difficulty accurately identifying fracture areas, resulting in unclear contours. This study proposes an intelligent method for detecting internal fractures in mine rock masses to address these challenges. The proposed approach captures a nodal fracture map within the targeted blast area and integrates channel and spatial attention mechanisms into the ResUnet (RU) model. The channel attention mechanism dynamically recalibrates the importance of each feature channel, and the spatial attention mechanism enhances feature representation in key areas while minimizing background noise, thus improving segmentation accuracy. A dynamic serpentine convolution module is also introduced that adaptively adjusts the shape and orientation of the convolution kernel based on the local structure of the input feature map. Furthermore, this method enables the automatic extraction and quantification of borehole nodal fracture information by fitting sinusoidal curves to the boundaries of the fracture contours using the least squares method. In comparison to other advanced deep learning models, our enhanced RU demonstrates superior performance across evaluation metrics, including accuracy, pixel accuracy (PA), and intersection over union (IoU). Unlike traditional manual extraction methods, our intelligent detection approach provides considerable time and cost savings, with an average error rate of approximately 4%. This approach has the potential to greatly improve the efficiency of geological surveys of borehole fractures.
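The least-squares sinusoid fitting mentioned above has a well-known geometric basis: a planar fracture intersecting a cylindrical borehole unrolls to a sinusoid on the borehole wall image. A minimal sketch (not the paper's implementation) fits y = A·sin(x) + B·cos(x) + C by solving the 3×3 normal equations directly:

```python
import math

def fit_sinusoid(xs, ys):
    """Least-squares fit of y = A*sin(x) + B*cos(x) + C.
    A planar fracture crossing a cylindrical borehole unrolls to exactly
    this curve, so A, B, C encode the fracture's dip and depth."""
    basis = [(math.sin(x), math.cos(x), 1.0) for x in xs]
    # Normal equations G @ [A, B, C] = d.
    G = [[sum(b[i] * b[j] for b in basis) for j in range(3)] for i in range(3)]
    d = [sum(b[i] * y for b, y in zip(basis, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(G[r][col]))
        G[col], G[piv] = G[piv], G[col]
        d[col], d[piv] = d[piv], d[col]
        for r in range(col + 1, 3):
            f = G[r][col] / G[col][col]
            for c in range(col, 3):
                G[r][c] -= f * G[col][c]
            d[r] -= f * d[col]
    sol = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back substitution
        sol[r] = (d[r] - sum(G[r][c] * sol[c] for c in range(r + 1, 3))) / G[r][r]
    return sol  # [A, B, C]

# Synthetic boundary trace: 12 points sampled around the borehole wall.
xs = [i * math.pi / 6 for i in range(12)]
ys = [2.0 * math.sin(x) - 1.0 * math.cos(x) + 5.0 for x in xs]
A, B, C = fit_sinusoid(xs, ys)
```

Because the model is linear in A, B, C, no iterative optimization is needed; amplitude sqrt(A²+B²) and phase atan2(B, A) then give the fracture's apparent dip and orientation.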
Funding: Supported in part by the Science and Technology Innovation Program of Hunan Province under Grant 2025RC3166, the National Natural Science Foundation of China under Grant 62572176, and the National Key R&D Program of China under Grant 2024YFF0618800.
Abstract: The rapid advancement of large language models (LLMs) has driven the pervasive adoption of AI-generated content (AIGC), while also raising concerns about misinformation, academic misconduct, biased or harmful content, and other risks. Detecting AI-generated text has thus become essential to safeguard the authenticity and reliability of digital information. This survey reviews recent progress in detection methods, grouping approaches into passive and active categories based on whether they rely on intrinsic textual features or embedded signals. Passive detection is further divided into surface-linguistic-feature-based and language-model-based methods, whereas active detection encompasses watermarking-based and semantic-retrieval-based approaches. This taxonomy enables systematic comparison of methodological differences in model dependency, applicability, and robustness. A key challenge for AI-generated text detection is that existing detectors are highly vulnerable to adversarial attacks, particularly paraphrasing, which substantially compromises their effectiveness. Addressing this gap highlights the need for future research on enhancing robustness and cross-domain generalization. By synthesizing current advances and limitations, this survey provides a structured reference for the field and outlines pathways toward more reliable and scalable detection solutions.
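The passive, language-model-based family of detectors mentioned in this survey typically scores text by how probable it looks under a language model, flagging unusually high-likelihood text as machine-generated. A deliberately toy sketch of that idea, using an add-one-smoothed unigram "model" in place of a real LLM (all names and the corpus are illustrative, not from the survey):

```python
import math
from collections import Counter

def avg_log_prob(text, model_counts, total):
    """Mean per-word log-probability under a toy unigram 'language model'.
    Real passive detectors apply the same scoring idea with an LLM:
    machine text tends to be less surprising than human text."""
    words = text.lower().split()
    vocab = len(model_counts)
    logp = 0.0
    for w in words:
        p = (model_counts.get(w, 0) + 1) / (total + vocab + 1)  # add-one smoothing
        logp += math.log(p)
    return logp / len(words)

# Toy reference corpus, purely illustrative.
corpus = "the model generates fluent text the model predicts the next word".split()
counts = Counter(corpus)
total = len(corpus)
typical = avg_log_prob("the model predicts the next word", counts, total)
atypical = avg_log_prob("zyzzyva quokka perambulates obliquely", counts, total)
```

A threshold on this score separates the two regimes; the survey's point about paraphrasing attacks is precisely that rewording shifts such likelihood-based scores toward the human range.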
Funding: Funded by the Natural Science Foundation of Hunan Province (Grant No. 2025JJ80352) and the National Natural Science Foundation of China (Grant No. 32271879).
Abstract: Detecting small forest fire targets in unmanned aerial vehicle (UAV) images is difficult, as flames typically cover only a very limited portion of the visual scene. This study proposes the Context-guided Compact Lightweight Network (CCLNet), an end-to-end lightweight model designed to detect small forest fire targets while ensuring efficient inference on devices with constrained computational resources. CCLNet employs a three-stage network architecture built around three key modules. The C3F-Convolutional Gated Linear Unit (C3F-CGLU) performs selective local feature extraction while preserving fine-grained high-frequency flame details. The Context-Guided Feature Fusion Module (CGFM) replaces plain concatenation with triplet-attention interactions to emphasize subtle flame patterns. Lightweight Shared Convolution with Separated Batch Normalization Detection (LSCSBD) reduces parameters through separated batch normalization while maintaining scale-specific statistics. We build TF-11K, an 11,139-image dataset combining 9139 self-collected UAV images from subtropical forests and 2000 re-annotated frames from the FLAME dataset. On TF-11K, CCLNet attains 85.8% mAP@0.5, 45.5% mean Average Precision (mAP)@[0.5:0.95], 87.4% precision, and 79.1% recall with 2.21 M parameters and 5.7 giga floating-point operations (GFLOPs). The ablation study confirms that each module contributes to both accuracy and efficiency. Cross-dataset evaluation on DFS yields 77.5% mAP@0.5 and 42.3% mAP@[0.5:0.95], indicating good generalization to unseen scenes. These results suggest that CCLNet offers a practical balance between accuracy and speed for small-target forest fire monitoring with UAVs.
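The mAP@0.5 and mAP@[0.5:0.95] figures quoted throughout these abstracts rest on Intersection-over-Union matching: a detection counts as correct when its IoU with a ground-truth box exceeds the threshold. A minimal sketch of the IoU computation (standard definition, not code from any of these papers):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2).
    mAP@0.5 accepts a detection when IoU > 0.5; mAP@[0.5:0.95] averages
    average precision over thresholds 0.5, 0.55, ..., 0.95."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 10, 10), (5, 5, 15, 15))  # 25 / (100 + 100 - 25)
```

The stricter [0.5:0.95] average explains why it is always the lower of the two numbers: small targets like distant flames lose IoU rapidly under even a few pixels of localization error.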
Funding: Funded by the Natural Science Foundation of Shanghai Municipality (No. 21ZR1440500) and the Shanghai Science and Technology Commission (Grant No. 21S31902700).
Abstract: In recent years, the rapid advancement of artificial intelligence (AI) technology has enabled AI-assisted negative screening to significantly enhance physicians' efficiency through image feature analysis and multimodal data modeling, allowing them to focus more on diagnosing positive cases. Meanwhile, multispectral imaging (MSI) integrates spectral and spatial resolution to capture subtle tissue features invisible to the human eye, providing high-resolution data support for pathological analysis. Combining AI technology with MSI and employing quantitative methods to analyze multiband biomarkers (such as absorbance differences in keratin pearls) can effectively improve diagnostic specificity and reduce subjective errors in manual slide interpretation. To address the challenge of identifying negative tissue sections, we developed a discrimination algorithm powered by MSI. We demonstrated its efficacy using cutaneous squamous cell carcinoma (cSCC) as a representative case study. The algorithm achieved 100% accuracy in excluding negative cases and effectively mitigated the false-positive problem caused by cSCC heterogeneity. We constructed an MSI dataset acquired at 520 nm, 600 nm, and 630 nm wavelengths. Subsequently, we employed an optimized MobileViT model for tissue classification and performed comparative analyses against other models. The experimental results showed that our optimized MobileViT model achieved superior performance in identifying negative tissue sections, with a perfect accuracy rate of 100%. Thus, our results confirm the feasibility of integrating MSI with AI to exclude negative cases with perfect accuracy, offering a novel solution to alleviate the workload of pathologists.
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2601).
Abstract: Generative Adversarial Networks (GANs) have become valuable tools in medical imaging, enabling realistic image synthesis for enhancement, augmentation, and restoration. However, their integration into clinical workflows raises concerns, particularly the risk of subtle distortions or hallucinations that may undermine diagnostic accuracy and weaken trust in AI-assisted decision-making. To address this challenge, we propose a hybrid deep learning framework designed to detect GAN-induced artifacts in medical images, thereby reinforcing the reliability of AI-driven diagnostics. The framework integrates low-level statistical descriptors, including high-frequency residuals and Gray-Level Co-occurrence Matrix (GLCM) texture features, with high-level semantic representations extracted from a pre-trained ResNet18. This dual-stream approach enables detection of both pixel-level anomalies and structural inconsistencies introduced by GAN-based manipulation. We validated the framework on a curated dataset of 10,000 medical images, evenly split between authentic and GAN-generated samples across four modalities: MRI, CT, X-ray, and fundus photography. To improve generalizability to real-world clinical settings, we incorporated domain adaptation strategies such as adversarial training and style transfer, reducing domain shift by 15%. Experimental results demonstrate robust performance, achieving 92.6% accuracy and an F1-score of 0.91 on synthetic test data, and maintaining strong performance on real-world GAN-modified images with 87.3% accuracy and an F1-score of 0.85. Additionally, the model attained an AUC of 0.96 and an average precision of 0.92, outperforming conventional GAN detection pipelines and baseline Convolutional Neural Network (CNN) architectures. These findings establish the proposed framework as an effective and reliable solution for detecting GAN-induced hallucinations in medical imaging, representing an important step toward building trustworthy and clinically deployable AI systems.
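The GLCM texture features this framework uses as low-level descriptors are classic and easy to sketch: count co-occurrences of gray levels at a fixed pixel offset, then summarize the matrix with statistics such as contrast and homogeneity. A minimal pure-Python version for the horizontal neighbor offset (illustrative only; real pipelines use library implementations over quantized images):

```python
def glcm_features(img, levels):
    """Gray-Level Co-occurrence Matrix for the (0, 1) horizontal offset,
    plus two Haralick-style statistics. Low-level texture descriptors like
    these can surface statistical regularities that generators leave behind."""
    glcm = [[0] * levels for _ in range(levels)]
    pairs = 0
    for row in img:
        for a, b in zip(row, row[1:]):  # horizontally adjacent pixel pairs
            glcm[a][b] += 1
            pairs += 1
    contrast = sum(glcm[i][j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels)) / pairs
    homogeneity = sum(glcm[i][j] / (1 + abs(i - j))
                      for i in range(levels) for j in range(levels)) / pairs
    return contrast, homogeneity

# Tiny 4-level patches: uniform vs alternating texture (illustrative only).
smooth = [[1, 1, 1, 1], [1, 1, 1, 1]]
noisy = [[0, 3, 0, 3], [3, 0, 3, 0]]
c_smooth, h_smooth = glcm_features(smooth, 4)
c_noisy, h_noisy = glcm_features(noisy, 4)
```

Smooth regions concentrate mass on the GLCM diagonal (zero contrast, unit homogeneity), while high-frequency texture pushes mass off-diagonal, which is why such statistics pair naturally with the high-frequency residual stream described above.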
Abstract: The application of deep learning for target detection in aerial images captured by unmanned aerial vehicles (UAVs) has emerged as a prominent research focus. Due to the considerable distance between UAVs and the photographed objects, coupled with complex shooting environments, existing models often struggle to achieve accurate real-time target detection. In this paper, a You Only Look Once v8 (YOLOv8) model is modified in four respects: the detection head, the up-sampling module, the feature extraction module, and the parameter optimization of positive sample screening; the resulting YOLO-S3DT model is proposed to improve the detection of small targets in aerial images. Experimental results show that all detection indexes of the proposed model are significantly improved without increasing the number of model parameters and with only limited growth in computation. Moreover, this model also has the best performance compared to other detection models, demonstrating its advancement within this category of tasks.
Abstract: Unmanned aerial vehicle (UAV) imagery poses significant challenges for object detection due to extreme scale variations, high-density small targets (68% in the VisDrone dataset), and complex backgrounds. While YOLO-series models achieve speed-accuracy trade-offs via fixed convolution kernels and manual feature fusion, their rigid architectures struggle with multi-scale adaptability, as exemplified by YOLOv8n's 36.4% mAP and 13.9% small-object AP on VisDrone2019. This paper presents YOLO-LE, a lightweight framework addressing these limitations through three novel designs: (1) We introduce the C2f-Dy and LDown modules to enhance the backbone's sensitivity to small-object features while reducing backbone parameters, thereby improving model efficiency. (2) An adaptive feature fusion module is designed to dynamically integrate multi-scale feature maps, optimizing the neck structure, reducing neck complexity, and enhancing overall model performance. (3) We replace the original loss function with a distributed focal loss and incorporate a lightweight self-attention mechanism to improve small-object recognition and bounding-box regression accuracy. Experimental results demonstrate that YOLO-LE achieves 39.9% mAP@0.5 on VisDrone2019, representing a 9.6% improvement over YOLOv8n, while maintaining 8.5 GFLOPs computational efficiency. This provides an efficient solution for UAV object detection in complex scenarios.
Abstract: A measurement system for the scattering characteristics of warhead fragments based on high-speed imaging offers advantages such as simple deployment, flexible maneuverability, and high spatiotemporal resolution, enabling the acquisition of full-process data of the fragment scattering process. However, mismatches between camera frame rates and target velocities can lead to long motion-blur tails on high-speed fragment targets, resulting in low signal-to-noise ratios and rendering conventional detection algorithms ineffective in dynamic, strong-interference testing environments. In this study, we propose a detection framework centered on separating and suppressing dynamic strong-interference disturbance signals. We introduce a Gaussian mixture model constrained under a joint spatial-temporal-transform domain Dirichlet process, combined with total variation regularization, to achieve disturbance signal suppression. Experimental results demonstrate that the proposed disturbance suppression method can be integrated with certain conventional moving-target detection tasks, enabling adaptation to real-world data to a certain extent. Moreover, we provide a specific implementation of this process, which achieves a detection rate close to 100% with an approximately 0% false alarm rate on multiple sets of real target field test data. This research effectively advances the development of the field of damage parameter testing.
Abstract: In recent years, with the development of synthetic aperture radar (SAR) technology and the widespread application of deep learning, lightweight detection of SAR images has emerged as a research direction. The ultimate goal is to reduce computational and storage requirements while ensuring detection accuracy and reliability, making it an ideal choice for achieving rapid response and efficient processing. In this regard, a lightweight SAR ship target detection algorithm based on YOLOv8 was proposed in this study. Firstly, the C2f-Sc module was designed by fusing the C2f module in the backbone network with ScConv to reduce spatial and channel redundancy between features in convolutional neural networks. At the same time, the Ghost module was introduced into the neck network to effectively reduce model parameters and computational complexity. A relatively lightweight EMA attention mechanism was added to the neck network to promote the effective fusion of features at different levels. Experimental results showed that the parameters and GFLOPs of the improved model are reduced by 8.5% and 7.0%, respectively, while mAP@0.5 and mAP@0.5:0.95 are increased by 0.7% and 1.8%. This makes the model lightweight while improving detection accuracy, giving it practical application value.
Funding: Supported by the National Natural Science Foundation of China (No. 61502274).
Abstract: In the field of image forensics, image tampering detection is a critical and challenging task. Traditional methods based on manually designed feature extraction typically focus on a specific type of tampering operation, which limits their effectiveness in complex scenarios involving multiple forms of tampering. Although deep learning-based methods offer the advantage of automatic feature learning, current approaches still require further improvements in terms of detection accuracy and computational efficiency. To address these challenges, this study applies the UNet 3+ model to image tampering detection and proposes a hybrid framework, referred to as DDT-Net (Deep Detail Tracking Network), which integrates deep learning with traditional detection techniques. In contrast to traditional additive methods, this approach innovatively applies a multiplicative fusion technique during downsampling, effectively combining the deep learning feature maps at each layer with those generated by the Bayar noise stream. This design enables noise residual features to guide the learning of semantic features more precisely and efficiently, thus facilitating comprehensive feature-level interaction. Furthermore, by leveraging the complementary strengths of deep networks in capturing large-scale semantic manipulations and traditional algorithms' proficiency in detecting fine-grained local traces, the method significantly enhances the accuracy and robustness of tampered-region detection. Compared with other approaches, the proposed method achieves an F1-score improvement exceeding 30% on the DEFACTO and DIS25k datasets. In addition, it has been extensively validated on other datasets, including CASIA and DIS25k. Experimental results demonstrate that this method achieves outstanding performance across various types of image tampering detection tasks.
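The contrast between additive and multiplicative fusion described above is easy to see in miniature. In the sketch below (an illustration of the general idea, not DDT-Net's actual layers; nested lists stand in for tensors), the noise-residual map acts as a gate: wherever it is near zero, the semantic response is suppressed rather than merely diluted as in addition:

```python
def multiplicative_fusion(semantic, noise):
    """Element-wise product of a semantic feature map with a noise-residual
    map. Unlike additive fusion, a near-zero residual zeroes out the
    semantic response, so residual evidence gates which semantic features
    survive. Both maps must share the same shape."""
    return [[s * n for s, n in zip(srow, nrow)]
            for srow, nrow in zip(semantic, noise)]

semantic = [[0.9, 0.8], [0.7, 0.6]]
noise    = [[1.0, 0.0], [0.5, 1.0]]   # 0.0 = no residual evidence there
fused = multiplicative_fusion(semantic, noise)
```

With addition, the (0.8, 0.0) cell would still contribute 0.8; with the product it contributes nothing, which is the gating behavior that lets the noise stream steer the semantic stream.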
Abstract: BACKGROUND: Optical coherence tomography (OCT) enables high-resolution, non-invasive visualization of retinal structures. Recent evidence suggests that retinal layer alterations may reflect central nervous system changes associated with psychiatric disorders such as schizophrenia (SZ). AIM: To develop an advanced deep learning model to classify OCT images and distinguish patients with SZ from healthy controls using retinal biomarkers. METHODS: A novel convolutional neural network, Self-AttentionNeXt, was designed by integrating grouped self-attention mechanisms, residual and inverted bottleneck blocks, and a final 1×1 convolution for feature refinement. The model was trained and tested on both a custom OCT dataset collected from patients with SZ and a publicly available OCT dataset (OCT2017). RESULTS: Self-AttentionNeXt achieved 97.0% accuracy on the collected SZ OCT dataset and over 95% accuracy on the public OCT2017 dataset. Gradient-weighted class activation mapping visualizations confirmed the model's attention to clinically relevant retinal regions, suggesting effective feature localization. CONCLUSION: Self-AttentionNeXt effectively combines transformer-inspired attention mechanisms with a convolutional neural network architecture to support the early and accurate detection of SZ using OCT images. This approach offers a promising direction for artificial intelligence-assisted psychiatric diagnostics and clinical decision support.
Abstract: To address the long, narrow geometry of damage in conveyor belts with steel rope cores and the poor generalization ability of existing X-ray image detection methods, a damage detection method for X-ray images is proposed based on an improved fully convolutional one-stage object detection (FCOS) algorithm. The regression performance of bounding boxes was optimized by introducing the complete intersection over union (CIoU) loss function into the improved algorithm. The feature fusion network structure is modified by adding adaptive fusion paths, which makes full use of the accurate localization and semantics of multi-scale feature fusion networks. Finally, the network was trained and validated using an X-ray image dataset of damage in conveyor belts with steel rope cores provided by a flaw detection equipment manufacturer. In addition, data enhancement methods such as rotating, mirroring, and scaling were employed to enrich the image dataset so that the model is adequately trained. Experimental results showed that the improved FCOS algorithm raised the precision rate and the recall rate by 20.9% and 14.8%, respectively, compared with the original algorithm. Meanwhile, compared with Fast R-CNN, Faster R-CNN, SSD, and YOLOv3, the improved FCOS algorithm has obvious advantages; the detection precision rate and recall rate of the modified network reached 95.8% and 97.0%, respectively. Furthermore, it demonstrated higher detection accuracy without affecting speed. The results of this work have reference significance for the automatic identification and detection of steel core conveyor belt damage.
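The complete-IoU loss mentioned above augments plain IoU with two penalties that matter for long, narrow boxes: normalized center distance and aspect-ratio mismatch. A minimal sketch of the standard CIoU formulation (the published definition, not this paper's code):

```python
import math

def ciou_loss(box_p, box_g):
    """Complete-IoU loss for (x1, y1, x2, y2) boxes: 1 - IoU plus a
    centre-distance term (normalised by the enclosing-box diagonal) and
    an aspect-ratio consistency term."""
    def wh(b):
        return b[2] - b[0], b[3] - b[1]
    # Plain IoU.
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    (wp, hp), (wg, hg) = wh(box_p), wh(box_g)
    iou = inter / (wp * hp + wg * hg - inter)
    # Squared centre distance over squared enclosing-box diagonal.
    cpx, cpy = (box_p[0] + box_p[2]) / 2, (box_p[1] + box_p[3]) / 2
    cgx, cgy = (box_g[0] + box_g[2]) / 2, (box_g[1] + box_g[3]) / 2
    ex1, ey1 = min(box_p[0], box_g[0]), min(box_p[1], box_g[1])
    ex2, ey2 = max(box_p[2], box_g[2]), max(box_p[3], box_g[3])
    diag2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    dist2 = (cpx - cgx) ** 2 + (cpy - cgy) ** 2
    # Aspect-ratio consistency term and its trade-off weight.
    v = 4 / math.pi ** 2 * (math.atan(wg / hg) - math.atan(wp / hp)) ** 2
    alpha = v / (1 - iou + v) if v > 0 else 0.0
    return 1 - iou + dist2 / diag2 + alpha * v

perfect = ciou_loss((0, 0, 10, 10), (0, 0, 10, 10))
shifted = ciou_loss((2, 2, 12, 12), (0, 0, 10, 10))
```

Unlike plain IoU loss, CIoU keeps a useful gradient even for non-overlapping boxes (the distance term never vanishes), which helps regression converge on elongated damage regions.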
Funding: The authors acknowledge support of this research from the National Natural Science Foundation of China (No. 12174085) and the Key Research and Development Project of Changzhou, Jiangsu Province (No. CE20235054).
Abstract: Given the challenges of underwater garbage detection, including insufficient lighting, low visibility, high noise levels, and high misclassification rates, this paper proposes CSC-YOLO, a model for detecting garbage in complex underwater environments characterized by murky water and strong hydrodynamic conditions. The model incorporates the Content-Guided Attention (CGA) mechanism into the SPPF module of the YOLOv8 backbone network to enhance dehazing, reduce noise interference, and fuse multi-scale feature information. Additionally, a Single-Head Self-Attention (SHSA) mechanism is introduced in the final layer of the backbone network to achieve local and global feature fusion in a lightweight manner, improving the accuracy of garbage detection. In the detection head, the CBAM attention mechanism is added to further enhance feature representation, improve the model's target localization, and strengthen robustness against complex backgrounds and noise. Furthermore, the anchor box coordinates from CSC-YOLO are fed into Mobile_SAM to achieve precise segmentation of underwater garbage. Experimental results show that CSC-YOLO achieves a precision of 0.962, recall of 0.898, F1-score of 0.929, and mAP@0.5 of 0.960 on the ICRA19 trash dataset, representing improvements of 2.9%, 1.7%, 2.3%, and 2.0% over YOLOv8n, respectively. The combination of CSC-YOLO and Mobile_SAM not only enables garbage detection in complex underwater environments but also achieves segmentation. This approach generates additional garbage segmentation masks without manual annotations, facilitating rapid expansion of labeled underwater garbage datasets for training. As an emerging model for intelligent underwater garbage detection, the proposed method holds significant potential for practical applications and academic research, offering an effective solution to the challenges of intelligent garbage detection in complex underwater environments.
Funding: Funded by the Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and IT, University of Technology Sydney; supported by the Researchers Supporting Project, King Saud University, Riyadh, Saudi Arabia, under Project RSP2025R14.
Abstract: Electronic nose and thermal imaging are effective ways to diagnose the presence of gases in real time. Multimodal fusion of these modalities can result in highly accurate diagnostic systems. Low-cost thermal imaging software produces low-resolution thermal images in grayscale format, hence necessitating methods for improving the resolution and colorizing the images. The objective of this paper is to develop and train a super-resolution generative adversarial network for improving the resolution of thermal images, followed by a sparse autoencoder for colorization of thermal images and a multimodal convolutional neural network for gas detection using electronic nose measurements and thermal images. The dataset used comprises 6400 thermal images and electronic nose measurements for four classes. A multimodal Convolutional Neural Network (CNN) comprising an EfficientNetB2 pre-trained model was developed using both early and late feature fusion. The Super-Resolution Generative Adversarial Network (SRGAN) model was trained on low- and high-resolution thermal images, achieving a Structural Similarity Index (SSIM) of 90.28, a Peak Signal-to-Noise Ratio (PSNR) of 68.74, and a Mean Absolute Error (MAE) of 0.066. A sparse autoencoder was trained on the grayscale and colorized thermal images, producing an MAE of 0.035, a Mean Squared Error (MSE) of 0.006, and a Root Mean Squared Error (RMSE) of 0.0705. The multimodal CNN, trained on these images and electronic nose measurements using both early and late fusion techniques, achieved accuracies of 97.89% and 98.55%, respectively. Hence, the proposed framework can be of great aid when integrated with low-cost software to generate high-quality thermal camera images and highly accurate detection of gases in real time.
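The MAE and PSNR figures quoted for the SRGAN and autoencoder follow standard definitions that are worth making concrete. A minimal sketch over flat pixel lists (illustrative values, not the paper's images):

```python
import math

def mae_psnr(ref, test, peak=255.0):
    """Mean absolute error and peak signal-to-noise ratio between two
    images given as flat pixel lists. PSNR = 10 * log10(peak^2 / MSE),
    so halving the error raises PSNR by about 6 dB."""
    n = len(ref)
    mae = sum(abs(a - b) for a, b in zip(ref, test)) / n
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / n
    psnr = float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)
    return mae, psnr

ref = [100, 120, 140, 160]
test = [101, 119, 141, 159]  # off by one gray level everywhere
mae, psnr = mae_psnr(ref, test)
```

Note that PSNR depends on the assumed peak value (255 for 8-bit images, 1.0 for normalized ones), which is why reported PSNR numbers are only comparable when the dynamic range is stated.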
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62302086), the Natural Science Foundation of Liaoning Province (Grant No. 2023-MSBA-070), and the Fundamental Research Funds for the Central Universities (Grant No. N2317005).
Abstract: Visible-infrared object detection leverages the day-night stable object perception capability of infrared images to enhance detection robustness in low-light environments by fusing the complementary information of visible and infrared images. However, the inherent differences in the imaging mechanisms of the visible and infrared modalities make effective cross-modal fusion challenging. Furthermore, constrained by the physical characteristics of sensors and thermal diffusion effects, infrared images generally suffer from blurred object contours and missing details, making it difficult to extract object features effectively. To address these issues, we propose an infrared-visible image fusion network that realizes multimodal information fusion of infrared and visible images through a carefully designed multiscale fusion strategy. First, we design an adaptive gray-radiance enhancement (AGRE) module to strengthen the detail representation in infrared images, improving their usability in complex lighting scenarios. Next, we introduce a channel-spatial feature interaction (CSFI) module, which achieves efficient complementarity between the RGB and infrared (IR) modalities via dynamic channel switching and a spatial attention mechanism. Finally, we propose a multi-scale enhanced cross-attention fusion (MSECA) module, which optimizes the fusion of multi-level features through dynamic convolution and gating mechanisms and captures long-range complementary relationships of cross-modal features on a global scale, thereby enhancing the expressiveness of the fused features. Experiments on the KAIST, M3FD, and FLIR datasets demonstrate that our method delivers outstanding performance in daytime and nighttime scenarios. On the KAIST dataset, the miss rate drops to 5.99%, and further to 4.26% in night scenes. On the FLIR and M3FD datasets, it achieves AP50 scores of 79.4% and 88.9%, respectively.
Abstract: Efficient banana crop detection is crucial for precision agriculture; however, traditional remote sensing methods often lack the spatial resolution required for accurate identification. This study utilizes low-altitude Unmanned Aerial Vehicle (UAV) images and deep learning-based object detection models to enhance banana plant detection. A comparative analysis of Faster Region-Based Convolutional Neural Network (Faster R-CNN), You Only Look Once Version 3 (YOLOv3), Retina Network (RetinaNet), and Single Shot MultiBox Detector (SSD) was conducted to evaluate their effectiveness. Results show that RetinaNet achieved the highest detection accuracy, with a precision of 96.67%, a recall of 71.67%, and an F1 score of 81.33%. The study further highlights the impact of scale variation, occlusion, and vegetation density on detection performance. Unlike previous studies, this research systematically evaluates multi-scale object detection models for banana plant identification, offering insights into the advantages of UAV-based deep learning applications in agriculture. In addition, this study compares five evaluation metrics across the four detection models using both RGB and grayscale images. Specifically, RetinaNet exhibited the best overall performance with grayscale images, achieving the highest values across all five metrics. Compared to its performance with RGB images, these results represent a marked improvement, confirming the potential of grayscale preprocessing to enhance detection capability.
Funding: Supported in part by the Youth Innovation Promotion Association, Chinese Academy of Sciences, under Grant 2022022; in part by the South China Sea Nova Project of Hainan Province under Grant NHXXRCXM202340; and in part by the Scientific Research Foundation Project of Hainan Acoustics Laboratory under Grant ZKNZ2024001.
Abstract: Underwater target detection in forward-looking sonar (FLS) images is a challenging but promising endeavor. Existing neural-based methods yield notable progress, but there remains room for improvement because they overlook the unique characteristics of underwater environments. Considering the low imaging resolution, complex background environment, and large changes in target appearance of underwater sonar images, this paper designs a sonar image target detection network based on progressive sensitivity capture, named ProNet. It progressively captures the sensitive regions in the current image where potentially effective targets may exist. Guided by this basic idea, the primary technical innovation of this paper is the introduction of a foundational module structure for constructing a sonar target detection backbone network. This structure employs a multi-subspace mixed convolution module that initially maps sonar images into different subspaces and extracts local contextual features using varying convolutional receptive fields within these heterogeneous subspaces. Subsequently, a scale-aware aggregation module effectively aggregates the heterogeneous features extracted from different subspaces. Finally, a multi-scale attention structure further enhances the relational perception of the aggregated features. We evaluated ProNet on three FLS datasets of varying scenes, and experimental results indicate that ProNet outperforms current state-of-the-art sonar image and general target detectors.
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 52472334, U2368204).
Abstract: In response to challenges posed by complex backgrounds, diverse target angles, and numerous small targets in remote sensing images, alongside the high resource consumption that hinders model deployment, we propose an enhanced, lightweight You Only Look Once version 8 small (YOLOv8s) detection algorithm. Regarding network improvements, we first replace traditional horizontal boxes with rotated boxes for target detection, effectively addressing difficulties in feature extraction caused by varying target angles. Second, we design a module integrating convolutional neural network (CNN) and Transformer components to replace specific C2f modules in the backbone network, thereby expanding the model's receptive field and enhancing feature extraction in complex backgrounds. Finally, we introduce a feature calibration structure to mitigate potential feature mismatches during feature fusion. For model compression, we employ a lightweight channel pruning technique based on localized mean average precision (LMAP) to eliminate redundancies in the enhanced model. Although this approach results in some loss of detection accuracy, it effectively reduces the number of parameters, the computational load, and the model size. Additionally, we employ channel-level knowledge distillation to recover accuracy in the pruned model, further enhancing detection performance. Experimental results indicate that the enhanced algorithm achieves a 6.1% increase in mAP50 compared to YOLOv8s, while simultaneously reducing parameters, computational load, and model size by 57.7%, 28.8%, and 52.3%, respectively.
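The knowledge-distillation step used to recover accuracy after pruning is commonly formulated as a KL divergence between temperature-softened teacher and student distributions. A minimal Hinton-style sketch over class logits (illustrative only; the paper applies distillation at the channel/feature level rather than on logits):

```python
import math

def softmax(logits, T):
    """Temperature-softened softmax; larger T flattens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between the teacher's and student's softened class
    distributions, scaled by T^2 as in standard distillation so gradient
    magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, T)  # teacher = unpruned model
    q = softmax(student_logits, T)  # student = pruned model
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

match = kd_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])   # identical predictions
off   = kd_loss([0.1, 1.0, 2.0], [2.0, 1.0, 0.1])   # reversed ranking
```

Minimizing this term alongside the ordinary detection loss pulls the pruned student back toward the unpruned teacher's behavior, which is how the accuracy lost to channel pruning is partially recovered.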
Funding: Funded by the Department of Robotics and Mechatronics Engineering, Kennesaw State University, Marietta, GA 30060, USA.
Abstract: Glaucoma, a chronic eye disease affecting millions worldwide, poses a substantial threat to eyesight and can result in permanent vision loss if left untreated. Manual identification of glaucoma is a complicated and time-consuming practice that requires specialized expertise, and the results may be subjective. To address these challenges, this research proposes a computer-aided diagnosis (CAD) approach using artificial intelligence (AI) techniques for binary and multiclass classification of glaucoma stages. An ensemble fusion mechanism that combines the outputs of three pre-trained convolutional neural network (ConvNet) models (ResNet-50, VGG-16, and InceptionV3) is utilized in this paper. This fusion technique enhances diagnostic accuracy and robustness by ensemble-averaging the predictions of the individual models, leveraging their complementary strengths. The objective of this work is to assess the model's capability for early-stage glaucoma diagnosis. Classification is performed on a dataset collected from the Harvard Dataverse repository. With the proposed technique, for Normal vs. Advanced glaucoma classification, a validation accuracy of 98.04% and a testing accuracy of 98.03% are achieved, with a specificity of 100%, which outperforms state-of-the-art methods. For multiclass classification, the suggested ensemble approach achieved a precision and sensitivity of 97%, and a specificity and testing accuracy of 98.57% and 96.82%, respectively. The proposed E-GlauNet model has significant potential to assist ophthalmologists in the screening and fast diagnosis of glaucoma, leading to more reliable, efficient, and timely diagnosis, particularly for early-stage detection and staging of the disease. While the proposed method demonstrates high accuracy and robustness, the study is limited by its evaluation on a single dataset. Future work will focus on external validation across diverse datasets and on enhancing interpretability using explainable AI techniques.
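Ensemble-averaging of per-model predictions, as described above, reduces to averaging class-probability vectors and taking the argmax. The toy logits below are invented for illustration (they are not outputs of the actual ResNet-50/VGG-16/InceptionV3 models), but the fusion step itself is the standard soft-voting scheme the abstract names.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ensemble_average(logit_sets):
    """Average the class probabilities of several models; predict the argmax."""
    probs = np.mean([softmax(z) for z in logit_sets], axis=0)
    return probs, probs.argmax(axis=1)

# Invented logits from three hypothetical backbones: 2 images, 3 stages
resnet = np.array([[2.0, 0.1, 0.1], [0.2, 0.1, 1.5]])
vgg    = np.array([[1.5, 0.3, 0.2], [0.1, 1.9, 0.3]])
incept = np.array([[1.8, 0.2, 0.4], [0.2, 0.4, 2.1]])
probs, pred = ensemble_average([resnet, vgg, incept])
print(pred)   # fused class indices, one per image
```

Averaging probabilities rather than logits keeps each model's contribution bounded, so one over-confident backbone cannot dominate the vote, which is the usual rationale for this fusion choice.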
Funding: Supported by the Research Training Program International (RTPi) scholarship from the Commonwealth of Australia and a top-up scholarship provided by SureNut Ltd.; SureNut Ltd. also supplied all the almonds used in this study.
Abstract: Aflatoxin B1 (AFB1) is a toxic fungal metabolite that contaminates almonds from cultivation to harvesting. It leads to chronic health problems and significant economic losses for producers. A fast, non-invasive detection technique is therefore crucial for safeguarding food safety by swiftly identifying and eliminating contaminated almonds from the supply chain. Hyperspectral imaging has been explored as a potential non-destructive technology for detecting AFB1. However, the diverse geometries of almonds present a significant challenge in the acquired images, impacting the accuracy of the resulting prediction and classification models. This study investigates the effectiveness of short-wave infrared (SWIR) hyperspectral imaging combined with deep learning for detecting AFB1 in almonds of varying geometries. Initially, partial least squares regression (PLSR) and support vector machine (SVM) regression models were evaluated for quantification, while SVM and quadratic discriminant analysis (QDA) classifiers were applied for classification. The results indicated that spectral responses varied with almond thickness, making the quantification models unreliable for industrial applications. The competitive adaptive reweighted sampling (CARS) algorithm was employed to identify key spectral features for developing multi-spectral AFB1 classification models and to evaluate the feasibility of high-speed, accurate in-line detection. The deep learning approach significantly outperformed traditional machine learning models, with the pre-trained Inception V3 network achieving a cross-validation accuracy of 84.82%, an F1-score of 0.8522, and an area under the curve of 0.893. These findings highlight the superiority of deep learning-based hyperspectral imaging for accurate and reliable AFB1 detection in almonds of diverse shapes and thicknesses.
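The core idea behind CARS-style wavelength selection can be sketched without the full algorithm: repeatedly refit a regression on the surviving bands and shrink the set toward the bands with the largest absolute coefficients. This simplification substitutes ordinary least squares for the PLS regression and Monte Carlo sampling that real CARS uses, so it is an illustrative analogue, not the algorithm applied in the study.

```python
import numpy as np

def cars_like_selection(X, y, n_iter=5, final_frac=0.2):
    """Iteratively shrink the wavelength set: refit least squares on the
    kept bands, retain those with the largest |coefficient| each round."""
    kept = np.arange(X.shape[1])
    ratios = np.linspace(1.0, final_frac, n_iter + 1)[1:]  # shrinking schedule
    for r in ratios:
        coef, *_ = np.linalg.lstsq(X[:, kept], y, rcond=None)
        n_keep = max(1, int(round(r * X.shape[1])))
        order = np.argsort(np.abs(coef))[::-1][:n_keep]
        kept = kept[np.sort(order)]
    return kept

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 20))                  # 60 toy spectra, 20 wavelengths
y = 3.0 * X[:, 4] - 2.0 * X[:, 11] + 0.05 * rng.normal(size=60)
bands = cars_like_selection(X, y)
print(bands)                                   # the two informative bands survive
```

Shrinking gradually rather than selecting top-k in one shot lets the coefficients be re-estimated as collinear, uninformative bands drop out, which is the motivation for the iterative schedule in CARS.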
Funding: Supported by the National Natural Science Foundation of China (No. 52474172).
Abstract: Understanding the development of joints and fractures in rock masses is important for ensuring drilling stability and blasting effectiveness. Traditional manual observation techniques for identifying and extracting fracture characteristics have proven inefficient and prone to subjective interpretation. Moreover, conventional image processing algorithms and classical deep learning models often have difficulty accurately identifying fracture areas, resulting in unclear contours. This study proposes an intelligent method for detecting internal fractures in mine rock masses to address these challenges. The proposed approach captures a nodal fracture map within the targeted blast area and integrates channel and spatial attention mechanisms into the ResUnet (RU) model. The channel attention mechanism dynamically recalibrates the importance of each feature channel, and the spatial attention mechanism enhances feature representation in key areas while minimizing background noise, thus improving segmentation accuracy. A dynamic serpentine convolution module is also introduced that adaptively adjusts the shape and orientation of the convolution kernel based on the local structure of the input feature map. Furthermore, the method enables automatic extraction and quantification of borehole nodal fracture information by fitting sinusoidal curves to the boundaries of the fracture contours using the least squares method. Compared with other advanced deep learning models, the enhanced RU demonstrates superior performance across evaluation metrics including accuracy, pixel accuracy (PA), and intersection over union (IoU). Unlike traditional manual extraction methods, the intelligent detection approach provides considerable time and cost savings, with an average error rate of approximately 4%. This approach has the potential to greatly improve the efficiency of geological surveys of borehole fractures.
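The sinusoid fitting step has a closed linear form: the trace of a planar fracture on an unrolled cylindrical borehole wall follows depth = a·sin(θ) + b·cos(θ) + c, which least squares solves directly. The synthetic trace below is invented for illustration; the model form is the standard one for borehole fracture logs, though the paper's exact parameterization is not given in the abstract.

```python
import numpy as np

def fit_sinusoid(theta, depth):
    """Fit depth = a*sin(theta) + b*cos(theta) + c by linear least squares,
    the classic model for a planar fracture trace on an unrolled borehole wall."""
    A = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
    (a, b, c), *_ = np.linalg.lstsq(A, depth, rcond=None)
    return a, b, c

theta = np.linspace(0, 2 * np.pi, 50)          # unrolled angular coordinate
true = 1.2 * np.sin(theta) - 0.7 * np.cos(theta) + 10.0
rng = np.random.default_rng(2)
a, b, c = fit_sinusoid(theta, true + 0.01 * rng.normal(size=theta.size))
print(a, b, c)                                 # recovered amplitude terms and offset
```

From (a, b, c) the fracture's dip direction and dip angle follow from the phase and amplitude of the sinusoid, which is how contour fits translate into quantitative fracture orientation.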