Unmanned aerial vehicle (UAV) imagery poses significant challenges for object detection due to extreme scale variations, high-density small targets (68% in the VisDrone dataset), and complex backgrounds. While YOLO-series models achieve speed-accuracy trade-offs via fixed convolution kernels and manual feature fusion, their rigid architectures struggle with multi-scale adaptability, as exemplified by YOLOv8n's 36.4% mAP and 13.9% small-object AP on VisDrone2019. This paper presents YOLO-LE, a lightweight framework addressing these limitations through three novel designs: (1) We introduce the C2f-Dy and LDown modules to enhance the backbone's sensitivity to small-object features while reducing backbone parameters, thereby improving model efficiency. (2) An adaptive feature fusion module is designed to dynamically integrate multi-scale feature maps, optimizing the neck structure, reducing neck complexity, and enhancing overall model performance. (3) We replace the original loss function with a distributed focal loss and incorporate a lightweight self-attention mechanism to improve small-object recognition and bounding-box regression accuracy. Experimental results demonstrate that YOLO-LE achieves 39.9% mAP@0.5 on VisDrone2019, a 9.6% relative improvement over YOLOv8n, while requiring only 8.5 GFLOPs. This provides an efficient solution for UAV object detection in complex scenarios.
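The abstract names the loss but not its form. Assuming "distributed focal loss" refers to the Distribution Focal Loss (DFL) from Generalized Focal Loss, which YOLO-style detectors commonly adopt, a minimal PyTorch sketch follows; the `reg_max` binning and function name are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def distribution_focal_loss(pred_logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """DFL sketch: each box-edge offset is predicted as a discrete
    distribution over bins; the loss is cross-entropy against the two
    bins adjacent to the continuous target, weighted by proximity.

    pred_logits: (N, reg_max + 1) unnormalized bin scores.
    target:      (N,) continuous offsets in [0, reg_max].
    """
    reg_max = pred_logits.size(1) - 1
    target = target.clamp(0, reg_max - 1e-4)   # keep the upper bin in range
    left = target.floor().long()               # nearest lower bin
    right = left + 1                           # nearest upper bin
    w_left = right.float() - target            # weight toward the lower bin
    w_right = target - left.float()            # weight toward the upper bin
    loss = (F.cross_entropy(pred_logits, left, reduction="none") * w_left
            + F.cross_entropy(pred_logits, right, reduction="none") * w_right)
    return loss.mean()
```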
In recent years, with the development of synthetic aperture radar (SAR) technology and the widespread application of deep learning, lightweight detection of SAR images has emerged as a research direction. The ultimate goal is to reduce computational and storage requirements while ensuring detection accuracy and reliability, making it an ideal choice for achieving rapid response and efficient processing. In this regard, a lightweight SAR ship target detection algorithm based on YOLOv8 was proposed in this study. Firstly, the C2f-Sc module was designed by fusing the C2f module in the backbone network with ScConv to reduce spatial and channel redundancy between features in convolutional neural networks. At the same time, the Ghost module was introduced into the neck network to effectively reduce model parameters and computational complexity. A relatively lightweight EMA attention mechanism was added to the neck network to promote the effective fusion of features at different levels. Experimental results showed that the parameters and GFLOPs of the improved model are reduced by 8.5% and 7.0%, respectively, while mAP@0.5 and mAP@0.5:0.95 are increased by 0.7% and 1.8%. This makes the model lightweight while improving detection accuracy, giving it practical application value.
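For readers unfamiliar with the Ghost module mentioned above, the sketch below shows its standard form from GhostNet (Han et al.): a small primary convolution produces intrinsic feature maps, and cheap depthwise operations generate the remaining "ghost" maps, cutting parameters and FLOPs. The channel ratio used in the paper is not stated; `ratio=2` is the GhostNet default.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """GhostNet-style module: primary 1x1 conv yields intrinsic maps,
    a grouped (depthwise) conv cheaply generates the ghost maps."""

    def __init__(self, in_ch: int, out_ch: int, ratio: int = 2, dw_size: int = 3):
        super().__init__()
        assert out_ch % ratio == 0, "sketch assumes out_ch divisible by ratio"
        init_ch = out_ch // ratio                  # intrinsic channels
        new_ch = out_ch - init_ch                  # ghost channels
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(                # one cheap depthwise op per intrinsic map
            nn.Conv2d(init_ch, new_ch, dw_size, padding=dw_size // 2,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(new_ch), nn.ReLU(inplace=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)
```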
In the field of image forensics, image tampering detection is a critical and challenging task. Traditional methods based on manually designed feature extraction typically focus on a specific type of tampering operation, which limits their effectiveness in complex scenarios involving multiple forms of tampering. Although deep learning-based methods offer the advantage of automatic feature learning, current approaches still require further improvements in detection accuracy and computational efficiency. To address these challenges, this study applies the UNet 3+ model to image tampering detection and proposes a hybrid framework, referred to as DDT-Net (Deep Detail Tracking Network), which integrates deep learning with traditional detection techniques. In contrast to traditional additive methods, this approach innovatively applies a multiplicative fusion technique during downsampling, effectively combining the deep learning feature maps at each layer with those generated by the Bayar noise stream. This design enables noise-residual features to guide the learning of semantic features more precisely and efficiently, facilitating comprehensive feature-level interaction. Furthermore, by leveraging the complementary strengths of deep networks in capturing large-scale semantic manipulations and traditional algorithms' proficiency in detecting fine-grained local traces, the method significantly enhances the accuracy and robustness of tampered-region detection. Compared with other approaches, the proposed method achieves an F1-score improvement exceeding 30% on the DEFACTO and DIS25k datasets. It has also been extensively validated on other datasets, including CASIA. Experimental results demonstrate that this method achieves outstanding performance across various types of image tampering detection tasks.
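DDT-Net's exact fusion design is not given beyond "multiplicative fusion of Bayar noise-stream features with the deep features at each layer", so the following is a hedged sketch: a Bayar-style constrained convolution producing noise residuals, plus an element-wise gating standing in for the multiplicative step. The sigmoid gating and weight-renormalization details are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayarConv2d(nn.Module):
    """Bayar-style constrained convolution: each kernel's centre weight is
    fixed to -1 and the off-centre weights are renormalized to sum to 1,
    so the layer learns prediction-error (noise-residual) features."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 5):
        super().__init__()
        self.k = k
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k * k - 1) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight / (self.weight.sum(dim=-1, keepdim=True) + 1e-8)
        centre = -torch.ones(w.size(0), w.size(1), 1, device=w.device)
        mid = (self.k * self.k) // 2
        kernel = torch.cat([w[..., :mid], centre, w[..., mid:]], dim=-1)
        kernel = kernel.view(w.size(0), w.size(1), self.k, self.k)
        return F.conv2d(x, kernel, padding=self.k // 2)

def multiplicative_fuse(semantic: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
    """Multiplicative fusion sketch: the noise stream gates the semantic
    features element-wise, instead of the usual additive summation."""
    return semantic * torch.sigmoid(noise)   # sigmoid gating is an assumption
```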
The application of deep learning for target detection in aerial images captured by unmanned aerial vehicles (UAVs) has emerged as a prominent research focus. Due to the considerable distance between UAVs and the photographed objects, coupled with complex shooting environments, existing models often struggle to achieve accurate real-time target detection. In this paper, a You Only Look Once v8 (YOLOv8) model is modified in four aspects: the detection head, the up-sampling module, the feature extraction module, and the parameter optimization of positive-sample screening, and the resulting YOLO-S3DT model is proposed to improve the detection of small targets in aerial images. Experimental results show that all detection indexes of the proposed model are significantly improved without increasing the number of model parameters and with only limited growth in computation. Moreover, the model also performs best compared with other detection models, demonstrating its advancement within this category of tasks.
BACKGROUND: Optical coherence tomography (OCT) enables high-resolution, non-invasive visualization of retinal structures. Recent evidence suggests that retinal layer alterations may reflect central nervous system changes associated with psychiatric disorders such as schizophrenia (SZ). AIM: To develop an advanced deep learning model to classify OCT images and distinguish patients with SZ from healthy controls using retinal biomarkers. METHODS: A novel convolutional neural network, Self-AttentionNeXt, was designed by integrating grouped self-attention mechanisms, residual and inverted bottleneck blocks, and a final 1×1 convolution for feature refinement. The model was trained and tested on both a custom OCT dataset collected from patients with SZ and a publicly available OCT dataset (OCT2017). RESULTS: Self-AttentionNeXt achieved 97.0% accuracy on the collected SZ OCT dataset and over 95% accuracy on the public OCT2017 dataset. Gradient-weighted class activation mapping visualizations confirmed the model's attention to clinically relevant retinal regions, suggesting effective feature localization. CONCLUSION: Self-AttentionNeXt effectively combines transformer-inspired attention mechanisms with a convolutional neural network architecture to support the early and accurate detection of SZ from OCT images. This approach offers a promising direction for artificial intelligence-assisted psychiatric diagnostics and clinical decision support.
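The abstract does not specify the grouped self-attention block in detail; below is a plausible minimal sketch that splits channels into groups and applies standard spatial self-attention within each group, which is the usual way grouping reduces attention cost. The group and head counts are placeholders, not Self-AttentionNeXt's actual configuration.

```python
import torch
import torch.nn as nn

class GroupedSelfAttention(nn.Module):
    """Sketch of grouped self-attention over a CNN feature map: channels are
    split into groups and self-attention runs over each group's spatial
    tokens independently, cheaper than full-channel attention."""

    def __init__(self, channels: int, groups: int = 4, heads: int = 2):
        super().__init__()
        assert channels % groups == 0 and (channels // groups) % heads == 0
        self.groups = groups
        dim = channels // groups
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        g, d = self.groups, c // self.groups
        # (B, C, H, W) -> (B*G, H*W, D): spatial tokens per channel group
        t = x.view(b, g, d, h * w).permute(0, 1, 3, 2).reshape(b * g, h * w, d)
        q = self.norm(t)
        t = t + self.attn(q, q, q, need_weights=False)[0]   # pre-norm residual attention
        return t.reshape(b, g, h * w, d).permute(0, 1, 3, 2).reshape(b, c, h, w)
```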
Aimed at the long and narrow geometric features of damage in conveyor belts with steel rope cores and the poor generalization ability of X-ray-based damage detection, a damage detection method for X-ray images is proposed based on an improved fully convolutional one-stage object detection (FCOS) algorithm. The regression performance of bounding boxes was optimized by introducing the complete intersection over union (CIoU) loss function into the improved algorithm. The feature fusion network structure is modified by adding adaptive fusion paths, making full use of the accurate localization and rich semantics of multi-scale feature fusion networks. Finally, the network was trained and validated on an X-ray image dataset of damage in conveyor belts with steel rope cores provided by a flaw-detection equipment manufacturer. In addition, data augmentation methods such as rotating, mirroring, and scaling were employed to enrich the image dataset so that the model is adequately trained. Experimental results showed that the improved FCOS algorithm raised the precision rate and the recall rate by 20.9% and 14.8%, respectively, compared with the original algorithm. Meanwhile, compared with Fast R-CNN, Faster R-CNN, SSD, and YOLOv3, the improved FCOS algorithm has obvious advantages; the precision rate and recall rate of the modified network reached 95.8% and 97.0%, respectively. Furthermore, it demonstrated higher detection accuracy without affecting speed. These results offer a useful reference for the automatic identification and detection of damage in steel-cord conveyor belts.
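The "complete intersection over union" loss referenced above is the standard CIoU loss (Zheng et al.), which augments IoU with a centre-distance penalty and an aspect-ratio consistency term. A self-contained sketch for corner-format boxes:

```python
import math
import torch

def ciou_loss(pred: torch.Tensor, gt: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """CIoU loss for (N, 4) boxes in (x1, y1, x2, y2) format:
    1 - IoU + rho^2/c^2 + alpha*v."""
    # Intersection and union
    x1 = torch.max(pred[:, 0], gt[:, 0]); y1 = torch.max(pred[:, 1], gt[:, 1])
    x2 = torch.min(pred[:, 2], gt[:, 2]); y2 = torch.min(pred[:, 3], gt[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    wp, hp = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    wg, hg = gt[:, 2] - gt[:, 0], gt[:, 3] - gt[:, 1]
    iou = inter / (wp * hp + wg * hg - inter + eps)
    # Squared centre distance over the enclosing box diagonal
    cw = torch.max(pred[:, 2], gt[:, 2]) - torch.min(pred[:, 0], gt[:, 0])
    ch = torch.max(pred[:, 3], gt[:, 3]) - torch.min(pred[:, 1], gt[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps
    rho2 = ((pred[:, 0] + pred[:, 2] - gt[:, 0] - gt[:, 2]) ** 2
            + (pred[:, 1] + pred[:, 3] - gt[:, 1] - gt[:, 3]) ** 2) / 4
    # Aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (torch.atan(wg / (hg + eps)) - torch.atan(wp / (hp + eps))) ** 2
    with torch.no_grad():                     # alpha is treated as a constant weight
        alpha = v / (1 - iou + v + eps)
    return (1 - iou + rho2 / c2 + alpha * v).mean()
```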
Given the challenges of underwater garbage detection, including insufficient lighting, low visibility, high noise levels, and high misclassification rates, this paper proposes CSC-YOLO, a model for detecting garbage in complex underwater environments characterized by murky water and strong hydrodynamic conditions. The model incorporates the Content-Guided Attention (CGA) mechanism into the SPPF module of the YOLOv8 backbone network to enhance dehazing, reduce noise interference, and fuse multi-scale feature information. Additionally, a Single-Head Self-Attention (SHSA) mechanism is introduced in the final layer of the backbone network to achieve local and global feature fusion in a lightweight manner, improving the accuracy of garbage detection. In the detection head, the CBAM attention mechanism is added to further enhance feature representation, improve target localization, and increase robustness against complex backgrounds and noise. Furthermore, the anchor-box coordinates from CSC-YOLO are fed into Mobile_SAM to achieve precise segmentation of underwater garbage. Experimental results show that CSC-YOLO achieves a precision of 0.962, recall of 0.898, F1-score of 0.929, and mAP@0.5 of 0.960 on the ICRA19 trash dataset, improvements of 2.9%, 1.7%, 2.3%, and 2.0% over YOLOv8n, respectively. The combination of CSC-YOLO and Mobile_SAM not only enables garbage detection in complex underwater environments but also achieves segmentation. This approach generates additional garbage segmentation masks without manual annotations, facilitating rapid expansion of labeled underwater garbage datasets for training. As an emerging model for intelligent underwater garbage detection, the proposed method holds significant potential for practical applications and academic research, offering an effective solution to the challenges of intelligent garbage detection in complex underwater environments.
Electronic nose and thermal images are effective ways to diagnose the presence of gases in real time. Multimodal fusion of these modalities can result in the development of highly accurate diagnostic systems. Low-cost thermal imaging software produces low-resolution thermal images in grayscale format, necessitating methods for improving resolution and colorizing the images. The objective of this paper is to develop and train a super-resolution generative adversarial network for improving the resolution of thermal images, followed by a sparse autoencoder for colorization of thermal images and a multimodal convolutional neural network for gas detection using electronic nose measurements and thermal images. The dataset used comprises 6400 thermal images and electronic nose measurements for four classes. A multimodal convolutional neural network (CNN) built on a pre-trained EfficientNetB2 model was developed using both early and late feature fusion. The Super-Resolution Generative Adversarial Network (SRGAN) was trained on low- and high-resolution thermal images, achieving a Structural Similarity Index (SSIM) of 90.28, a Peak Signal-to-Noise Ratio (PSNR) of 68.74, and a Mean Absolute Error (MAE) of 0.066. A sparse autoencoder trained on the grayscale and colorized thermal images produced an MAE of 0.035, a Mean Squared Error (MSE) of 0.006, and a Root Mean Squared Error (RMSE) of 0.0705. The multimodal CNN, trained on these images and the electronic nose measurements using early and late fusion, achieved accuracies of 97.89% and 98.55%, respectively. Hence, the proposed framework can greatly aid integration with low-cost software to generate high-quality thermal camera images and highly accurate real-time gas detection.
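As an illustration of the early/late fusion idea described above, here is a hedged late-fusion sketch built on torchvision's EfficientNet-B2: the image branch's classifier is stripped to yield a 1408-d embedding, an MLP encodes the e-nose reading, and the concatenated embeddings feed one linear head. The 4-sensor input width, MLP sizes, and class count are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class LateFusionGasNet(nn.Module):
    """Late-fusion sketch: EfficientNet-B2 image embedding + e-nose MLP
    embedding, concatenated into a single classification head."""

    def __init__(self, n_sensors: int = 4, n_classes: int = 4):
        super().__init__()
        self.img_branch = models.efficientnet_b2(weights="IMAGENET1K_V1")
        self.img_branch.classifier = nn.Identity()        # 1408-d image embedding
        self.nose_branch = nn.Sequential(
            nn.Linear(n_sensors, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU())
        self.head = nn.Linear(1408 + 64, n_classes)

    def forward(self, image: torch.Tensor, nose: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([self.img_branch(image),
                                    self.nose_branch(nose)], dim=1))
```

An early-fusion variant would instead merge the modalities before the backbone, e.g. by tiling the sensor vector into extra input channels; the late-fusion form above matches the higher-accuracy configuration reported in the abstract.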
A measurement system for the scattering characteristics of warhead fragments based on high-speed imaging offers advantages such as simple deployment, flexible maneuverability, and high spatiotemporal resolution, enabling the acquisition of full-process data of the fragment scattering process. However, mismatches between camera frame rates and target velocities can lead to long motion-blur tails on high-speed fragment targets, resulting in low signal-to-noise ratios and rendering conventional detection algorithms ineffective in dynamic, strong-interference testing environments. In this study, we propose a detection framework centered on the separation and suppression of dynamic strong-interference disturbance signals. We introduce a mixture-of-Gaussians model constrained under a joint spatial-temporal-transform-domain Dirichlet process, combined with total variation regularization, to achieve disturbance signal suppression. Experimental results demonstrate that the proposed disturbance suppression method can be integrated with conventional motion-target detection tasks, enabling adaptation to real-world data to a certain extent. Moreover, we provide a specific implementation of this process, which achieves a detection rate close to 100% with an approximately 0% false alarm rate on multiple sets of real target-field test data. This research effectively advances the field of damage parameter testing.
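The Dirichlet-process mixture component above is paper-specific, but the total-variation regularizer it is combined with has a standard form: penalize absolute differences between neighbouring pixels, so that minimizing it suppresses noise-like disturbance while preserving edges. A minimal sketch:

```python
import torch

def total_variation(x: torch.Tensor) -> torch.Tensor:
    """Anisotropic total variation of a batch of images (B, C, H, W):
    summed absolute differences between vertically and horizontally
    neighbouring pixels. Used as an additive regularization term."""
    dh = (x[..., 1:, :] - x[..., :-1, :]).abs().sum()   # vertical gradients
    dw = (x[..., :, 1:] - x[..., :, :-1]).abs().sum()   # horizontal gradients
    return dh + dw
```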
Visible-infrared object detection leverages the day-night stable object perception capability of infrared images to enhance detection robustness in low-light environments by fusing the complementary information of visible and infrared images. However, the inherent differences in the imaging mechanisms of visible and infrared modalities make effective cross-modal fusion challenging. Furthermore, constrained by the physical characteristics of sensors and thermal diffusion effects, infrared images generally suffer from blurred object contours and missing details, making it difficult to extract object features effectively. To address these issues, we propose an infrared-visible image fusion network that realizes multimodal information fusion of infrared and visible images through a carefully designed multiscale fusion strategy. First, we design an adaptive gray-radiance enhancement (AGRE) module to strengthen the detail representation in infrared images, improving their usability in complex lighting scenarios. Next, we introduce a channel-spatial feature interaction (CSFI) module, which achieves efficient complementarity between the RGB and infrared (IR) modalities via dynamic channel switching and a spatial attention mechanism. Finally, we propose a multi-scale enhanced cross-attention fusion (MSECA) module, which optimizes the fusion of multi-level features through dynamic convolution and gating mechanisms and captures long-range complementary relationships of cross-modal features on a global scale, thereby enhancing the expressiveness of the fused features. Experiments on the KAIST, M3FD, and FLIR datasets demonstrate that our method delivers outstanding performance in daytime and nighttime scenarios. On the KAIST dataset, the miss rate drops to 5.99%, and further to 4.26% in night scenes. On the FLIR and M3FD datasets, it achieves AP50 scores of 79.4% and 88.9%, respectively.
Efficient banana crop detection is crucial for precision agriculture; however, traditional remote sensing methods often lack the spatial resolution required for accurate identification. This study utilizes low-altitude unmanned aerial vehicle (UAV) images and deep learning-based object detection models to enhance banana plant detection. A comparative analysis of Faster Region-Based Convolutional Neural Network (Faster R-CNN), You Only Look Once version 3 (YOLOv3), Retina Network (RetinaNet), and Single Shot MultiBox Detector (SSD) was conducted to evaluate their effectiveness. Results show that RetinaNet achieved the highest detection accuracy, with a precision of 96.67%, a recall of 71.67%, and an F1 score of 81.33%. The study further highlights the impact of scale variation, occlusion, and vegetation density on detection performance. Unlike previous studies, this research systematically evaluates multi-scale object detection models for banana plant identification, offering insights into the advantages of UAV-based deep learning applications in agriculture. In addition, this study compares five evaluation metrics across the four detection models using both RGB and grayscale images. Specifically, RetinaNet exhibited the best overall performance with grayscale images, achieving the highest values across all five metrics. Compared to its performance with RGB images, these results represent a marked improvement, confirming the potential of grayscale preprocessing to enhance detection capability.
Underwater target detection in forward-looking sonar (FLS) images is a challenging but promising endeavor. Existing neural-network-based methods have yielded notable progress, but there remains room for improvement because they overlook the unique characteristics of underwater environments. Considering the low imaging resolution, complex background environment, and large changes in target appearance in underwater sonar images, this paper designs a sonar image target detection network based on progressive sensitivity capture, named ProNet. It progressively captures the sensitive regions in the current image where potentially effective targets may exist. Guided by this basic idea, the primary technical innovation of this paper is a foundational module structure for constructing a sonar target detection backbone network. This structure employs a multi-subspace mixed convolution module that first maps sonar images into different subspaces and extracts local contextual features using varying convolutional receptive fields within these heterogeneous subspaces. A scale-aware aggregation module then effectively aggregates the heterogeneous features extracted from the different subspaces. Finally, a multi-scale attention structure further enhances the relational perception of the aggregated features. We evaluated ProNet on three FLS datasets covering varied scenes, and experimental results indicate that ProNet outperforms current state-of-the-art sonar image and general target detectors.
In response to challenges posed by complex backgrounds, diverse target angles, and numerous small targets in remote sensing images, alongside the issue of high resource consumption hindering model deployment, we propose an enhanced, lightweight you only look once version 8 small (YOLOv8s) detection algorithm. Regarding network improvements, we first replace traditional horizontal boxes with rotated boxes for target detection, effectively addressing difficulties in feature extraction caused by varying target angles. Second, we design a module integrating convolutional neural network (CNN) and Transformer components to replace specific C2f modules in the backbone network, thereby expanding the model's receptive field and enhancing feature extraction in complex backgrounds. Finally, we introduce a feature calibration structure to mitigate potential feature mismatches during feature fusion. For model compression, we employ a lightweight channel pruning technique based on localized mean average precision (LMAP) to eliminate redundancies in the enhanced model. Although this approach results in some loss of detection accuracy, it effectively reduces the number of parameters, the computational load, and the model size. Additionally, we employ channel-level knowledge distillation to recover accuracy in the pruned model, further enhancing detection performance. Experimental results indicate that the enhanced algorithm achieves a 6.1% increase in mAP50 compared to YOLOv8s, while simultaneously reducing parameters, computational load, and model size by 57.7%, 28.8%, and 52.3%, respectively.
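The LMAP pruning criterion is not described in enough detail here to reproduce, so the sketch below substitutes the common network-slimming criterion as a stand-in: rank channels by the magnitude of their BatchNorm scale factors and keep the top fraction. Only the channel-selection step is shown; rebuilding the slim model and the distillation-based accuracy recovery are omitted.

```python
import torch
import torch.nn as nn

def select_channels_by_bn_gamma(model: nn.Module, keep_ratio: float = 0.5):
    """Generic magnitude-based channel pruning (a stand-in, not LMAP):
    a small BatchNorm gamma means the channel contributes little and
    is a candidate for removal."""
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
    threshold = gammas.sort().values[int(len(gammas) * (1 - keep_ratio))]
    masks = {name: (m.weight.detach().abs() >= threshold)
             for name, m in model.named_modules() if isinstance(m, nn.BatchNorm2d)}
    return masks   # per-layer boolean keep-masks, used to rebuild a slim model
```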
Glaucoma, a chronic eye disease affecting millions worldwide, poses a substantial threat to eyesight and can result in permanent vision loss if left untreated. Manual identification of glaucoma is a complicated and time-consuming practice requiring specialized expertise, and results may be subjective. To address these challenges, this research proposes a computer-aided diagnosis (CAD) approach using artificial intelligence (AI) techniques for binary and multiclass classification of glaucoma stages. An ensemble fusion mechanism that combines the outputs of three pre-trained convolutional neural network (ConvNet) models, ResNet-50, VGG-16, and InceptionV3, is utilized in this paper. This fusion technique enhances diagnostic accuracy and robustness by ensemble-averaging the predictions from the individual models, leveraging their complementary strengths. The objective of this work is to assess the model's capability for early-stage glaucoma diagnosis. Classification is performed on a dataset collected from the Harvard Dataverse repository. With the proposed technique, for Normal vs. Advanced glaucoma classification, a validation accuracy of 98.04% and a testing accuracy of 98.03% are achieved, with a specificity of 100%, outperforming state-of-the-art methods. For multiclass classification, the suggested ensemble approach achieved a precision and sensitivity of 97%, with a specificity of 98.57% and a testing accuracy of 96.82%. The proposed E-GlauNet model has significant potential to assist ophthalmologists in the screening and fast diagnosis of glaucoma, leading to more reliable, efficient, and timely diagnosis, particularly for early-stage detection and staging of the disease. While the proposed method demonstrates high accuracy and robustness, the study is limited by evaluation on a single dataset. Future work will focus on external validation across diverse datasets and on enhancing interpretability using explainable AI techniques.
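The ensemble-averaging step has a straightforward realization: fine-tune each backbone's classification head for the glaucoma classes, then average the softmax probabilities at inference. A sketch using torchvision models (head-replacement lines follow torchvision's layer names; the class count is illustrative):

```python
import torch
import torch.nn.functional as F
from torchvision import models

def build_branch(arch: str, n_classes: int) -> torch.nn.Module:
    """Load a pre-trained backbone and swap its classifier head."""
    if arch == "resnet50":
        m = models.resnet50(weights="IMAGENET1K_V1")
        m.fc = torch.nn.Linear(m.fc.in_features, n_classes)
    elif arch == "vgg16":
        m = models.vgg16(weights="IMAGENET1K_V1")
        m.classifier[6] = torch.nn.Linear(m.classifier[6].in_features, n_classes)
    else:  # inception_v3
        m = models.inception_v3(weights="IMAGENET1K_V1", aux_logits=True)
        m.fc = torch.nn.Linear(m.fc.in_features, n_classes)
    return m

@torch.no_grad()
def ensemble_predict(branches, x: torch.Tensor) -> torch.Tensor:
    """Ensemble-average the softmax outputs of the branches (equal weights).
    Feed 299x299 inputs so InceptionV3's minimum size is met; ResNet and
    VGG accept this size via their adaptive pooling layers."""
    for m in branches:
        m.eval()
    probs = torch.stack([F.softmax(m(x), dim=1) for m in branches]).mean(dim=0)
    return probs.argmax(dim=1)
```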
It is important to understand the development of joints and fractures in rock masses to ensure drilling stability and blasting effectiveness. Traditional manual observation techniques for identifying and extracting fracture characteristics have proven inefficient and prone to subjective interpretation. Moreover, conventional image processing algorithms and classical deep learning models often have difficulty accurately identifying fracture areas, resulting in unclear contours. This study proposes an intelligent method for detecting internal fractures in mine rock masses to address these challenges. The proposed approach captures a nodal fracture map within the targeted blast area and integrates channel and spatial attention mechanisms into the ResUnet (RU) model. The channel attention mechanism dynamically recalibrates the importance of each feature channel, and the spatial attention mechanism enhances feature representation in key areas while minimizing background noise, thus improving segmentation accuracy. A dynamic serpentine convolution module is also introduced that adaptively adjusts the shape and orientation of the convolution kernel based on the local structure of the input feature map. Furthermore, the method enables the automatic extraction and quantification of borehole nodal fracture information by fitting sinusoidal curves to the boundaries of the fracture contours using the least-squares method. In comparison with other advanced deep learning models, the enhanced RU demonstrates superior performance across evaluation metrics, including accuracy, pixel accuracy (PA), and intersection over union (IoU). Unlike traditional manual extraction methods, the intelligent detection approach provides considerable time and cost savings, with an average error rate of approximately 4%. This approach has the potential to greatly improve the efficiency of geological surveys of borehole fractures.
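The sinusoid fitting mentioned above exploits a classical geometric fact: a planar fracture intersecting a cylindrical borehole unrolls to a sinusoid in the (azimuth, depth) plane, so the fit is linear least squares in the basis (sin θ, cos θ, 1). A sketch with synthetic contour points (the example numbers and noise level are illustrative):

```python
import numpy as np

def fit_fracture_sinusoid(theta: np.ndarray, depth: np.ndarray):
    """Least-squares fit of depth(theta) = a*sin(theta) + b*cos(theta) + c.
    Amplitude relates to fracture dip (tan(dip) = amplitude / radius)
    and phase relates to the dip direction."""
    A = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
    (a, b, c), *_ = np.linalg.lstsq(A, depth, rcond=None)
    amplitude = np.hypot(a, b)
    phase = np.arctan2(b, a)
    return a, b, c, amplitude, phase

# Example: noisy contour pixels sampled around the borehole wall
theta = np.linspace(0, 2 * np.pi, 200)
depth = 3.0 * np.sin(theta + 0.7) + 12.0 + np.random.normal(0, 0.1, theta.size)
print(fit_fracture_sinusoid(theta, depth))   # recovers amplitude ~3.0, phase ~0.7
```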
It is of great importance to obtain precise trace data, as traces are frequently the sole visible and measurable parameter in most outcrops. Manual recognition and detection of traces on high-resolution three-dimensional (3D) models is relatively straightforward but time-consuming. One potential solution is to use machine learning algorithms to detect the 3D traces. In this study, a unique pixel-wise texture mapper algorithm generates a dense point cloud representation of an outcrop at the precise resolution of the original textured 3D model. Virtual digital image rendering was then employed to capture virtual images of selected regions. This technique helps overcome limitations caused by the surface morphology of the rock mass, such as restricted access, lighting conditions, and shading effects. After AI-powered trace detection on the two-dimensional (2D) images, a 3D data structuring technique was applied to the selected trace pixels. In the 3D data structuring, the trace data were structured through 2D thinning, 3D reprojection, clustering, segmentation, and segment linking. Finally, the linked segments were exported as 3D polylines, with each polyline in the output corresponding to a trace. The efficacy of the proposed method was assessed on a 3D model of a real-world case study, which was used to compare the results of artificial intelligence (AI)-aided and manual trace detection. Rosette diagrams, which visualize the distribution of trace orientations, confirmed the high similarity between the automatically and manually generated trace maps. In conclusion, the proposed semi-automatic method was easy to use, fast, and accurate in detecting the dominant jointing system of the rock mass.
As technologies related to power equipment fault diagnosis and infrared temperature measurement continue to advance, the classification and identification of infrared temperature measurement images have become crucial for effective intelligent fault diagnosis of electrical equipment. In response to the need for richer feature fusion in real-time detection and the low detection accuracy of existing networks for substation fault diagnosis, we introduce an innovative method known as Gather-and-Distribute Mechanism You Only Look Once (GD-YOLO). Firstly, a partial convolution group is designed based on different convolution kernels. We combine the partial convolution group with deep convolution to propose a new Grouped Channel-wise Spatial Convolution (GCSConv) that compensates for the information loss caused by spatial channel convolution. Secondly, a gather-and-distribute mechanism, which addresses the fusion of features of different dimensions, is implemented by aligning and sharing information through aggregation and distribution. Thirdly, considering the limitations of current bounding-box regression and the imbalance between complex and simple samples, Maximum Possible Distance Intersection over Union (MPDIoU) and Adaptive SlideLoss are incorporated into the loss function, allowing samples near the mean Intersection over Union (IoU) to receive more attention as that mean varies dynamically. The GD-YOLO algorithm surpasses YOLOv5, YOLOv7, and YOLOv8 in infrared image detection for electrical equipment, achieving a mean Average Precision (mAP) of 88.9%, with accuracy improvements of 3.7%, 4.3%, and 3.1%, respectively. Additionally, the model delivers a frame rate of 48 FPS, which meets the precision and speed criteria necessary for infrared image detection in power equipment.
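A sketch of the MPDIoU regression term, following the published MPDIoU formulation (IoU minus the squared distances between the two boxes' top-left and bottom-right corners, normalized by the squared image diagonal); the batch layout and corner box format here are assumptions:

```python
import torch

def mpdiou_loss(pred: torch.Tensor, gt: torch.Tensor,
                img_w: int, img_h: int, eps: float = 1e-7) -> torch.Tensor:
    """MPDIoU loss for (N, 4) boxes in (x1, y1, x2, y2) format:
    1 - (IoU - d1^2/(w^2+h^2) - d2^2/(w^2+h^2))."""
    x1 = torch.max(pred[:, 0], gt[:, 0]); y1 = torch.max(pred[:, 1], gt[:, 1])
    x2 = torch.min(pred[:, 2], gt[:, 2]); y2 = torch.min(pred[:, 3], gt[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_g = (gt[:, 2] - gt[:, 0]) * (gt[:, 3] - gt[:, 1])
    iou = inter / (area_p + area_g - inter + eps)
    diag2 = img_w ** 2 + img_h ** 2                                # image diagonal, squared
    d1 = (pred[:, 0] - gt[:, 0]) ** 2 + (pred[:, 1] - gt[:, 1]) ** 2   # top-left corners
    d2 = (pred[:, 2] - gt[:, 2]) ** 2 + (pred[:, 3] - gt[:, 3]) ** 2   # bottom-right corners
    return (1 - (iou - d1 / diag2 - d2 / diag2)).mean()
```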
Pore formation is a significant challenge in the advancement of laser additive manufacturing (LAM) technologies. To address this issue, image-data-driven pore detection techniques have become a research focus. However, existing methods are constrained by their reliance on a single detection environment (e.g., consistent brightness) and fixed input image sizes, limiting their predictive accuracy and application scope. This paper introduces an innovative pore detection method based on a deep learning model for laser-directed energy deposition (L-DED). The proposed method leverages the model's ability to extract feature information from melt-pool images captured by a high-speed camera, enabling efficient pore detection under varying brightness conditions and diverse image sizes. The detection results demonstrate that, under varying brightness levels, the proposed model achieves a pore detection accuracy of approximately 93.5% and a root mean square error (RMSE) of 0.42 for local porosity prediction. Additionally, even with changes in input image size, the model maintains robust performance, achieving a detection accuracy of 96% for pore-status detection and an RMSE of 0.09 for local porosity prediction. This study not only addresses the limitations of traditional detection techniques but also broadens the scope of online detection technologies. It highlights the potential of deep learning in complex industrial settings and provides valuable insights for advancing defect detection research in related fields.
Recent years have seen a surge in interest in object detection on remote sensing images for applications such as surveillance and management. However, challenges like small-object detection, scale variation, and the presence of closely packed objects in these images hinder accurate detection. Additionally, motion blur further complicates the identification of such objects. To address these issues, we propose an enhanced YOLOv9 with a transformer head (YOLOv9-TH). The model introduces an additional prediction head for detecting objects of varying sizes and swaps the original prediction heads for transformer heads to leverage self-attention mechanisms. We further improve YOLOv9-TH using several strategies, including data augmentation, multi-scale testing, multi-model integration, and an additional classifier. The cross-stage partial (CSP) method and the ghost convolution hierarchical graph (GCHG) are combined to improve detection accuracy by better utilizing feature maps, widening the receptive field, and precisely extracting multi-scale objects. Additionally, we incorporate the E-SimAM attention mechanism to address low-resolution feature loss. Extensive experiments on the VisDrone2021 and DIOR datasets demonstrate the effectiveness of YOLOv9-TH, showing clear mAP improvements over the best existing methods. YOLOv9-TH-e achieved 54.2% mAP50 on the VisDrone2021 dataset and 92.3% mAP on the DIOR dataset. The results confirm the model's robustness and suitability for real-world applications, particularly small-object detection in remote sensing images.
The objective of this study is to address the semantic misalignment and insufficient accuracy in edge detail and discrimination detection that are common in deep learning-based change detection methods relying on encoder-decoder frameworks. In response, we propose a model called FlowDual-PixelClsObjectMec (FPCNet), which innovatively incorporates dual-flow alignment technology in the decoding stage to rectify semantic discrepancies through streamlined feature correction and fusion. Furthermore, the model employs an object-level similarity measurement coupled with pixel-level classification in the PixelClsObjectMec (PCOM) module during the final discrimination stage, significantly enhancing edge detail detection and overall accuracy. Experimental evaluations on the change detection dataset (CDD) and a building change detection dataset demonstrate superior performance, with F1 scores of 95.1% and 92.8%, respectively. Our findings indicate that FPCNet outperforms existing algorithms in stability, robustness, and other key metrics.
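FPCNet's dual-flow correction is not publicly specified here, but flow-based alignment in decoders typically follows the semantic-flow pattern sketched below: predict a 2-channel offset field from the concatenated high- and low-level features and warp the low-level map with `grid_sample` before fusing. The single-flow form and residual fusion are simplifications of the paper's dual-flow design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowAlign(nn.Module):
    """Flow-based feature alignment sketch: a learned offset field warps
    the upsampled low-level feature so its semantics line up with the
    high-level feature before fusion."""

    def __init__(self, ch: int):
        super().__init__()
        self.flow = nn.Conv2d(ch * 2, 2, kernel_size=3, padding=1)

    def forward(self, high: torch.Tensor, low: torch.Tensor) -> torch.Tensor:
        b, _, h, w = high.shape
        low_up = F.interpolate(low, size=(h, w), mode="bilinear", align_corners=False)
        flow = self.flow(torch.cat([high, low_up], dim=1))       # (B, 2, H, W) pixel offsets
        # Base sampling grid in [-1, 1], plus the predicted offsets.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=high.device),
                                torch.linspace(-1, 1, w, device=high.device),
                                indexing="ij")
        base = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2)
        norm = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1)],
                            device=high.device)                  # pixels -> normalized units
        grid = base + flow.permute(0, 2, 3, 1) * norm
        aligned = F.grid_sample(low_up, grid, mode="bilinear", align_corners=True)
        return high + aligned                                    # corrected fusion
```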
文摘Unmanned aerial vehicle(UAV)imagery poses significant challenges for object detection due to extreme scale variations,high-density small targets(68%in VisDrone dataset),and complex backgrounds.While YOLO-series models achieve speed-accuracy trade-offs via fixed convolution kernels and manual feature fusion,their rigid architectures struggle with multi-scale adaptability,as exemplified by YOLOv8n’s 36.4%mAP and 13.9%small-object AP on VisDrone2019.This paper presents YOLO-LE,a lightweight framework addressing these limitations through three novel designs:(1)We introduce the C2f-Dy and LDown modules to enhance the backbone’s sensitivity to small-object features while reducing backbone parameters,thereby improving model efficiency.(2)An adaptive feature fusion module is designed to dynamically integrate multi-scale feature maps,optimizing the neck structure,reducing neck complexity,and enhancing overall model performance.(3)We replace the original loss function with a distributed focal loss and incorporate a lightweight self-attention mechanism to improve small-object recognition and bounding box regression accuracy.Experimental results demonstrate that YOLO-LE achieves 39.9%mAP@0.5 on VisDrone2019,representing a 9.6%improvement over YOLOv8n,while maintaining 8.5 GFLOPs computational efficiency.This provides an efficient solution for UAV object detection in complex scenarios.
文摘In recent years,with the development of synthetic aperture radar(SAR)technology and the widespread application of deep learning,lightweight detection of SAR images has emerged as a research direction.The ultimate goal is to reduce computational and storage requirements while ensuring detection accuracy and reliability,making it an ideal choice for achieving rapid response and efficient processing.In this regard,a lightweight SAR ship target detection algorithm based on YOLOv8 was proposed in this study.Firstly,the C2f-Sc module was designed by fusing the C2f in the backbone network with the ScConv to reduce spatial redundancy and channel redundancy between features in convolutional neural networks.At the same time,the Ghost module was introduced into the neck network to effectively reduce model parameters and computational complexity.A relatively lightweight EMA attention mechanism was added to the neck network to promote the effective fusion of features at different levels.Experimental results showed that the Parameters and GFLOPs of the improved model are reduced by 8.5%and 7.0%when mAP@0.5 and mAP@0.5:0.95 are increased by 0.7%and 1.8%,respectively.It makes the model lightweight and improves the detection accuracy,which has certain application value.
基金supported by National Natural Science Foundation of China(No.61502274).
文摘In the field of image forensics,image tampering detection is a critical and challenging task.Traditional methods based on manually designed feature extraction typically focus on a specific type of tampering operation,which limits their effectiveness in complex scenarios involving multiple forms of tampering.Although deep learningbasedmethods offer the advantage of automatic feature learning,current approaches still require further improvements in terms of detection accuracy and computational efficiency.To address these challenges,this study applies the UNet 3+model to image tampering detection and proposes a hybrid framework,referred to as DDT-Net(Deep Detail Tracking Network),which integrates deep learning with traditional detection techniques.In contrast to traditional additive methods,this approach innovatively applies amultiplicative fusion technique during downsampling,effectively combining the deep learning feature maps at each layer with those generated by the Bayar noise stream.This design enables noise residual features to guide the learning of semantic features more precisely and efficiently,thus facilitating comprehensive feature-level interaction.Furthermore,by leveraging the complementary strengths of deep networks in capturing large-scale semantic manipulations and traditional algorithms’proficiency in detecting fine-grained local traces,the method significantly enhances the accuracy and robustness of tampered region detection.Compared with other approaches,the proposed method achieves an F1 score improvement exceeding 30% on the DEFACTO and DIS25k datasets.In addition,it has been extensively validated on other datasets,including CASIA and DIS25k.Experimental results demonstrate that this method achieves outstanding performance across various types of image tampering detection tasks.
文摘The application of deep learning for target detection in aerial images captured by Unmanned Aerial Vehicles(UAV)has emerged as a prominent research focus.Due to the considerable distance between UAVs and the photographed objects,coupled with complex shooting environments,existing models often struggle to achieve accurate real-time target detection.In this paper,a You Only Look Once v8(YOLOv8)model is modified from four aspects:the detection head,the up-sampling module,the feature extraction module,and the parameter optimization of positive sample screening,and the YOLO-S3DT model is proposed to improve the performance of the model for detecting small targets in aerial images.Experimental results show that all detection indexes of the proposed model are significantly improved without increasing the number of model parameters and with the limited growth of computation.Moreover,this model also has the best performance compared to other detecting models,demonstrating its advancement within this category of tasks.
文摘BACKGROUND Optical coherence tomography(OCT)enables high-resolution,non-invasive visualization of retinal structures.Recent evidence suggests that retinal layer alterations may reflect central nervous system changes associated with psychiatric disorders such as schizophrenia(SZ).AIM To develop an advanced deep learning model to classify OCT images and distinguish patients with SZ from healthy controls using retinal biomarkers.METHODS A novel convolutional neural network,Self-AttentionNeXt,was designed by integrating grouped self-attention mechanisms,residual and inverted bottleneck blocks,and a final 1×1 convolution for feature refinement.The model was trained and tested on both a custom OCT dataset collected from patients with SZ and a publicly available OCT dataset(OCT2017).RESULTS Self-AttentionNeXt achieved 97.0%accuracy on the collected SZ OCT dataset and over 95%accuracy on the public OCT2017 dataset.Gradient-weighted class activation mapping visualizations confirmed the model’s attention to clinically relevant retinal regions,suggesting effective feature localization.CONCLUSION Self-AttentionNeXt effectively combines transformer-inspired attention mechanisms with convolutional neural networks architecture to support the early and accurate detection of SZ using OCT images.This approach offers a promising direction for artificial intelligence-assisted psychiatric diagnostics and clinical decision support.
文摘Aimed at the long and narrow geometric features and poor generalization ability of the damage detection in conveyor belts with steel rope cores using the X-ray image,a detection method of damage X-ray image is proposed based on the improved fully convolutional one-stage object detection(FCOS)algorithm.The regression performance of bounding boxes was optimized by introducing the complete intersection over union loss function into the improved algorithm.The feature fusion network structure is modified by adding adaptive fusion paths to the feature fusion network structure,which makes full use of the features of accurate localization and semantics of multi-scale feature fusion networks.Finally,the network structure was trained and validated by using the X-ray image dataset of damages in conveyor belts with steel rope cores provided by a flaw detection equipment manufacturer.In addition,the data enhancement methods such as rotating,mirroring,and scaling,were employed to enrich the image dataset so that the model is adequately trained.Experimental results showed that the improved FCOS algorithm promoted the precision rate and the recall rate by 20.9%and 14.8%respectively,compared with the original algorithm.Meanwhile,compared with Fast R-CNN,Faster R-CNN,SSD,and YOLOv3,the improved FCOS algorithm has obvious advantages;detection precision rate and recall rate of the modified network reached 95.8%and 97.0%respectively.Furthermore,it demonstrated a higher detection accuracy without affecting the speed.The results of this work have some reference significance for the automatic identification and detection of steel core conveyor belt damage.
基金support of this research from the National Natural Science Foundation of China(No.12174085)the Key Research and Devel-opment Project of Changzhou,Jiangsu Province(No.CE 20235054).
文摘Given the challenges of underwater garbage detection,including insufficient lighting,low visibility,high noise levels,and high misclassification rates,this paper proposes a model named CSC-YOLO.CSC-YOLO for detecting garbage in complex un-derwater environments characterized by murky water and strong hydrodynamic conditions.The model incorporates the Content-Guid-ed Attention(CGA)attention mechanism into the SPPF module of the YOLOv8 backbone network to enhance dehazing,reduce noise interference,and fuse multi-scale feature information.Additionally,a Single-Head Self-Attention(SHSA)mechanism is introduced in the final layer of the backbone network to achieve local and global feature fusion in a lightweight manner,improving the accuracy of garbage detection.In the detection head,the CBAM attention mechanism is added to further enhance feature representation,increase the model’s target localization,and improve robustness against complex backgrounds and noise.Furthermore,the anchor box coordi-nates from CSC-YOLO are fed into Mobile_SAM to achieve precise segmentation of underwater garbage.Experimental results show that CSC-YOLO achieves a Precision of 0.962,Recall of 0.898,F1-score of 0.929,and mAP0.5 of 0.960 on the ICRA19 trash dataset,representing improvements of 2.9%,1.7%,2.3%,and 2.0%over YOLOv8n,respectively.The combination of CSC-YOLO and Mo-bile_SAM not only enables garbage detection in complex underwater environments but also achieves segmentation.This approach generates additional garbage segmentation masks without manual annotations,facilitating rapid expansion of labeled underwater garbage datasets for training.As an emerging model for intelligent underwater garbage detection,the proposed method holds signifi-cant potential for practical applications and academic research,offering an effective solution to the challenges of intelligent garbage detection in complex underwater environments.
基金funded by the Centre for Advanced Modelling and Geospatial Information Systems(CAMGIS),Faculty of Engineering and IT,University of Technology Sydneysupported by the Researchers Supporting Project,King Saud University,Riyadh,Saudi Arabia,under Project RSP2025 R14.
文摘Electronic nose and thermal images are effective ways to diagnose the presence of gases in real-time realtime.Multimodal fusion of these modalities can result in the development of highly accurate diagnostic systems.The low-cost thermal imaging software produces low-resolution thermal images in grayscale format,hence necessitating methods for improving the resolution and colorizing the images.The objective of this paper is to develop and train a super-resolution generative adversarial network for improving the resolution of the thermal images,followed by a sparse autoencoder for colorization of thermal images and amultimodal convolutional neural network for gas detection using electronic nose and thermal images.The dataset used comprises 6400 thermal images and electronic nose measurements for four classes.A multimodal Convolutional Neural Network(CNN)comprising an EfficientNetB2 pre-trainedmodel was developed using both early and late feature fusion.The Super Resolution Generative Adversarial Network(SRGAN)model was developed and trained on low and high-resolution thermal images.Asparse autoencoder was trained on the grayscale and colorized thermal images.The SRGAN was trained on lowand high-resolution thermal images,achieving a Structural Similarity Index(SSIM)of 90.28,a Peak Signal-to-Noise Ratio(PSNR)of 68.74,and a Mean Absolute Error(MAE)of 0.066.The autoencoder model produced an MAE of 0.035,a Mean Squared Error(MSE)of 0.006,and a Root Mean Squared Error(RMSE)of 0.0705.The multimodal CNN,trained on these images and electronic nose measurements using both early and late fusion techniques,achieved accuracies of 97.89% and 98.55%,respectively.Hence,the proposed framework can be of great aid for the integration with low-cost software to generate high quality thermal camera images and highly accurate detection of gases in real-time.
文摘A measurement system for the scattering characteristics of warhead fragments based on high-speed imaging systems offers advantages such as simple deployment,flexible maneuverability,and high spatiotemporal resolution,enabling the acquisition of full-process data of the fragment scattering process.However,mismatches between camera frame rates and target velocities can lead to long motion blur tails of high-speed fragment targets,resulting in low signal-to-noise ratios and rendering conventional detection algorithms ineffective in dynamic strong interference testing environments.In this study,we propose a detection framework centered on dynamic strong interference disturbance signal separation and suppression.We introduce a mixture Gaussian model constrained under a joint spatialtemporal-transform domain Dirichlet process,combined with total variation regularization to achieve disturbance signal suppression.Experimental results demonstrate that the proposed disturbance suppression method can be integrated with certain conventional motion target detection tasks,enabling adaptation to real-world data to a certain extent.Moreover,we provide a specific implementation of this process,which achieves a detection rate close to 100%with an approximate 0%false alarm rate in multiple sets of real target field test data.This research effectively advances the development of the field of damage parameter testing.
基金supported by the National Natural Science Foundation of China(Grant No.62302086)the Natural Science Foundation of Liaoning Province(Grant No.2023-MSBA-070)the Fundamental Research Funds for the Central Universities(Grant No.N2317005).
文摘Visible-infrared object detection leverages the day-night stable object perception capability of infrared images to enhance detection robustness in low-light environments by fusing the complementary information of visible and infrared images.However,the inherent differences in the imaging mechanisms of visible and infrared modalities make effective cross-modal fusion challenging.Furthermore,constrained by the physical characteristics of sensors and thermal diffusion effects,infrared images generally suffer from blurred object contours and missing details,making it difficult to extract object features effectively.To address these issues,we propose an infrared-visible image fusion network that realizesmultimodal information fusion of infrared and visible images through a carefully designedmultiscale fusion strategy.First,we design an adaptive gray-radiance enhancement(AGRE)module to strengthen the detail representation in infrared images,improving their usability in complex lighting scenarios.Next,we introduce a channelspatial feature interaction(CSFI)module,which achieves efficient complementarity between the RGB and infrared(IR)modalities via dynamic channel switching and a spatial attention mechanism.Finally,we propose a multi-scale enhanced cross-attention fusion(MSECA)module,which optimizes the fusion ofmulti-level features through dynamic convolution and gating mechanisms and captures long-range complementary relationships of cross-modal features on a global scale,thereby enhancing the expressiveness of the fused features.Experiments on the KAIST,M3FD,and FLIR datasets demonstrate that our method delivers outstanding performance in daytime and nighttime scenarios.On the KAIST dataset,the miss rate drops to 5.99%,and further to 4.26% in night scenes.On the FLIR and M3FD datasets,it achieves AP50 scores of 79.4% and 88.9%,respectively.
文摘Efficient banana crop detection is crucial for precision agriculture;however,traditional remote sensing methods often lack the spatial resolution required for accurate identification.This study utilizes low-altitude Unmanned Aerial Vehicle(UAV)images and deep learning-based object detection models to enhance banana plant detection.A comparative analysis of Faster Region-Based Convolutional Neural Network(Faster R-CNN),You Only Look Once Version 3(YOLOv3),Retina Network(RetinaNet),and Single Shot MultiBox Detector(SSD)was conducted to evaluate their effectiveness.Results show that RetinaNet achieved the highest detection accuracy,with a precision of 96.67%,a recall of 71.67%,and an F1 score of 81.33%.The study further highlights the impact of scale variation,occlusion,and vegetation density on detection performance.Unlike previous studies,this research systematically evaluates multi-scale object detection models for banana plant identification,offering insights into the advantages of UAV-based deep learning applications in agriculture.In addition,this study compares five evaluation metrics across the four detection models using both RGB and grayscale images.Specifically,RetinaNet exhibited the best overall performance with grayscale images,achieving the highest values across all five metrics.Compared to its performance with RGB images,these results represent a marked improvement,confirming the potential of grayscale preprocessing to enhance detection capability.
基金supported in part by Youth Innovation Promotion Association,Chinese Academy of Sciences under Grant 2022022in part by South China Sea Nova project of Hainan Province under Grant NHXXRCXM202340in part by the Scientific Research Foundation Project of Hainan Acoustics Laboratory under grant ZKNZ2024001.
文摘Underwater target detection in forward-looking sonar(FLS)images is a challenging but promising endeavor.The existing neural-based methods yield notable progress but there remains room for improvement due to overlooking the unique characteristics of underwater environments.Considering the problems of low imaging resolution,complex background environment,and large changes in target imaging of underwater sonar images,this paper specifically designs a sonar images target detection Network based on Progressive sensitivity capture,named ProNet.It progressively captures the sensitive regions in the current image where potential effective targets may exist.Guided by this basic idea,the primary technical innovation of this paper is the introduction of a foundational module structure for constructing a sonar target detection backbone network.This structure employs a multi-subspace mixed convolution module that initially maps sonar images into different subspaces and extracts local contextual features using varying convolutional receptive fields within these heterogeneous subspaces.Subsequently,a Scale-aware aggregation module effectively aggregates the heterogeneous features extracted from different subspaces.Finally,the multi-scale attention structure further enhances the relational perception of the aggregated features.We evaluated ProNet on three FLS datasets of varying scenes,and experimental results indicate that ProNet outperforms the current state-of-the-art sonar image and general target detectors.
基金supported in part by the National Natural Foundation of China(Nos.52472334,U2368204)。
文摘In response to challenges posed by complex backgrounds,diverse target angles,and numerous small targets in remote sensing images,alongside the issue of high resource consumption hindering model deployment,we propose an enhanced,lightweight you only look once version 8 small(YOLOv8s)detection algorithm.Regarding network improvements,we first replace tradi-tional horizontal boxes with rotated boxes for target detection,effectively addressing difficulties in feature extraction caused by varying target angles.Second,we design a module integrating convolu-tional neural networks(CNN)and Transformer components to replace specific C2f modules in the backbone network,thereby expanding the model’s receptive field and enhancing feature extraction in complex backgrounds.Finally,we introduce a feature calibration structure to mitigate potential feature mismatches during feature fusion.For model compression,we employ a lightweight channel pruning technique based on localized mean average precision(LMAP)to eliminate redundancies in the enhanced model.Although this approach results in some loss of detection accuracy,it effec-tively reduces the number of parameters,computational load,and model size.Additionally,we employ channel-level knowledge distillation to recover accuracy in the pruned model,further enhancing detection performance.Experimental results indicate that the enhanced algorithm achieves a 6.1%increase in mAP50 compared to YOLOv8s,while simultaneously reducing parame-ters,computational load,and model size by 57.7%,28.8%,and 52.3%,respectively.
Funding: Funded by the Department of Robotics and Mechatronics Engineering, Kennesaw State University, Marietta, GA 30060, USA.
Abstract: Glaucoma, a chronic eye disease affecting millions worldwide, poses a substantial threat to eyesight and can result in permanent vision loss if left untreated. Manual identification of glaucoma is a complicated and time-consuming practice that requires specialized expertise, and its results may be subjective. To address these challenges, this research proposes a computer-aided diagnosis (CAD) approach using artificial intelligence (AI) techniques for binary and multiclass classification of glaucoma stages. An ensemble fusion mechanism that combines the outputs of three pre-trained convolutional neural network (ConvNet) models, ResNet-50, VGG-16, and InceptionV3, is utilized in this paper. This fusion technique enhances diagnostic accuracy and robustness by ensemble-averaging the predictions of the individual models, leveraging their complementary strengths. The objective of this work is to assess the model's capability for early-stage glaucoma diagnosis. Classification is performed on a dataset collected from the Harvard Dataverse repository. With the proposed technique, for Normal vs. Advanced glaucoma classification, a validation accuracy of 98.04% and a testing accuracy of 98.03% are achieved, with a specificity of 100%, outperforming state-of-the-art methods. For multiclass classification, the suggested ensemble approach achieves a precision and sensitivity of 97%, and a specificity and testing accuracy of 98.57% and 96.82%, respectively. The proposed E-GlauNet model has significant potential to assist ophthalmologists in the screening and fast diagnosis of glaucoma, leading to more reliable, efficient, and timely diagnosis, particularly for early-stage detection and staging of the disease. While the proposed method demonstrates high accuracy and robustness, the study is limited by its evaluation on a single dataset. Future work will focus on external validation across diverse datasets and on enhancing interpretability using explainable AI techniques.
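Ensemble averaging of the three ConvNet outputs can be illustrated in a few lines of PyTorch; this is a generic soft-voting sketch, and the random tensors standing in for the ResNet-50, VGG-16, and InceptionV3 logits are placeholders.

```python
import torch
import torch.nn.functional as F

def ensemble_average(logits_list):
    """Average the class probabilities of several models (soft voting),
    mirroring the ensemble-averaging idea described above."""
    probs = [F.softmax(l, dim=1) for l in logits_list]
    return torch.stack(probs).mean(dim=0)

# Stand-ins for ResNet-50, VGG-16, and InceptionV3 outputs on a batch of
# 4 fundus images over 3 glaucoma stages.
fused = ensemble_average([torch.randn(4, 3) for _ in range(3)])
prediction = fused.argmax(dim=1)  # one stage label per image
```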
Funding: Supported by the National Natural Science Foundation of China (No. 52474172).
Abstract: Understanding the development of joints and fractures in rock masses is important for ensuring drilling stability and blasting effectiveness. Traditional manual observation techniques for identifying and extracting fracture characteristics have proven inefficient and prone to subjective interpretation. Moreover, conventional image processing algorithms and classical deep learning models often have difficulty accurately identifying fracture areas, resulting in unclear contours. This study proposes an intelligent method for detecting internal fractures in mine rock masses to address these challenges. The proposed approach captures a nodal fracture map within the targeted blast area and integrates channel and spatial attention mechanisms into the ResUnet (RU) model. The channel attention mechanism dynamically recalibrates the importance of each feature channel, and the spatial attention mechanism enhances feature representation in key areas while minimizing background noise, thereby improving segmentation accuracy. A dynamic serpentine convolution module is also introduced, which adaptively adjusts the shape and orientation of the convolution kernel based on the local structure of the input feature map. Furthermore, the method enables automatic extraction and quantification of borehole nodal fracture information by fitting sinusoidal curves to the boundaries of the fracture contours using the least squares method. Compared with other advanced deep learning models, our enhanced RU demonstrates superior performance across evaluation metrics, including accuracy, pixel accuracy (PA), and intersection over union (IoU). Unlike traditional manual extraction methods, our intelligent detection approach provides considerable time and cost savings, with an average error rate of approximately 4%. This approach has the potential to greatly improve the efficiency of geological surveys of borehole fractures.
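The sinusoid-fitting step has a compact closed form: a planar fracture intersecting a cylindrical borehole wall unrolls to a sine curve, so depth as a function of unrolled angle can be fitted by linear least squares. A minimal NumPy sketch, with synthetic data standing in for the extracted contour boundaries:

```python
import numpy as np

def fit_sinusoid(theta: np.ndarray, depth: np.ndarray):
    """Least-squares fit of depth = a*sin(theta) + b*cos(theta) + c,
    the unrolled trace of a planar fracture on a borehole wall."""
    A = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
    (a, b, c), *_ = np.linalg.lstsq(A, depth, rcond=None)
    amplitude = np.hypot(a, b)   # relates to fracture dip
    phase = np.arctan2(b, a)     # relates to dip direction
    return amplitude, phase, c

# Synthetic fracture boundary: amplitude 1.5, phase 0.4, mean depth 10.
theta = np.linspace(0, 2 * np.pi, 100)
depth = 1.5 * np.sin(theta + 0.4) + 10 + 0.05 * np.random.randn(100)
print(fit_sinusoid(theta, depth))  # recovers ~(1.5, 0.4, 10)
```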
Funding: Supported by grants from the Human Resources Development Program (Grant No. 20204010600250) and the Training Program of CCUS for the Green Growth (Grant No. 20214000000500) of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), funded by the Ministry of Trade, Industry, and Energy of the Korean Government (MOTIE).
Abstract: Obtaining precise trace data is of great importance, as traces are frequently the sole visible and measurable parameter in most outcrops. Manual recognition and detection of traces on high-resolution three-dimensional (3D) models is relatively straightforward but time-consuming. One potential way to enhance this process is to use machine learning algorithms to detect the 3D traces. In this study, a unique pixel-wise texture mapper algorithm generates a dense point cloud representation of an outcrop at the precise resolution of the original textured 3D model. Virtual digital image rendering was then employed to capture virtual images of selected regions. This technique helps overcome limitations imposed by the surface morphology of the rock mass, such as restricted access, lighting conditions, and shading effects. After AI-powered trace detection on the two-dimensional (2D) images, a 3D data structuring technique was applied to the selected trace pixels: the trace data were structured through 2D thinning, 3D reprojection, clustering, segmentation, and segment linking. Finally, the linked segments were exported as 3D polylines, with each polyline in the output corresponding to one trace. The efficacy of the proposed method was assessed on a 3D model from a real-world case study, which was used to compare the results of artificial intelligence (AI)-aided and human trace detection. Rosette diagrams, which visualize the distribution of trace orientations, confirmed the high similarity between the automatically and manually generated trace maps. In conclusion, the proposed semi-automatic method was easy to use, fast, and accurate in detecting the dominant jointing system of the rock mass.
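The clustering stage of the 3D data structuring can be illustrated with an off-the-shelf density-based clusterer; DBSCAN here is an assumed stand-in for the paper's actual clustering step, and the eps radius is a scene-dependent placeholder in model units.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_trace_points(points_3d: np.ndarray, eps: float = 0.05):
    """Group reprojected 3D trace pixels into candidate traces using
    density-based clustering; noise points (label -1) are discarded."""
    labels = DBSCAN(eps=eps, min_samples=10).fit_predict(points_3d)
    return {k: points_3d[labels == k] for k in set(labels) if k != -1}

pts = np.random.rand(500, 3)  # stand-in for trace pixels on the outcrop
traces = cluster_trace_points(pts, eps=0.1)
print(len(traces), "candidate trace clusters")
```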
Funding: Supported by the Science and Technology Department of Jilin Province (No. 20200403075SF) and the Education Department of Jilin Province (No. JJKH20240148KJ).
Abstract: As technologies for power equipment fault diagnosis and infrared temperature measurement continue to advance, the classification and identification of infrared temperature measurement images have become crucial for effective intelligent fault diagnosis of various electrical equipment. In response to the growing demand for sufficient feature fusion in real-time detection and the low detection accuracy of existing networks for substation fault diagnosis, we introduce an innovative method called Gather and Distribution Mechanism-You Only Look Once (GD-YOLO). First, a partial convolution group is designed based on different convolution kernels; we combine this partial convolution group with depthwise convolution to propose a new Grouped Channel-wise Spatial Convolution (GCSConv) that compensates for the information loss caused by spatial channel convolution. Second, the gather and distribute mechanism, which addresses the fusion of features of different dimensions, is implemented by aligning and sharing information through aggregation and distribution. Third, considering the limitations of current bounding box regression and the imbalance between complex and simple samples, Maximum Possible Distance Intersection over Union (MPDIoU) and Adaptive SlideLoss are incorporated into the loss function, allowing samples near the Intersection over Union (IoU) decision boundary to receive more attention through dynamic variation of the mean IoU. The GD-YOLO algorithm surpasses YOLOv5, YOLOv7, and YOLOv8 in infrared image detection for electrical equipment, achieving a mean average precision (mAP) of 88.9%, with accuracy improvements of 3.7%, 4.3%, and 3.1%, respectively. Additionally, the model delivers a frame rate of 48 FPS, meeting the accuracy and speed criteria necessary for detecting infrared images of power equipment.
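MPDIoU has a published closed form: IoU penalized by the normalized squared distances between the two corner pairs of the predicted and ground-truth boxes. A minimal PyTorch sketch follows (without the Adaptive SlideLoss weighting, which the abstract does not detail).

```python
import torch

def mpdiou_loss(pred, target, img_w, img_h, eps=1e-7):
    """MPDIoU regression loss for boxes given as [x1, y1, x2, y2]:
    1 - (IoU - d1/norm - d2/norm), where d1 and d2 are squared
    corner-to-corner distances and norm = img_w^2 + img_h^2."""
    inter_x1 = torch.max(pred[:, 0], target[:, 0])
    inter_y1 = torch.max(pred[:, 1], target[:, 1])
    inter_x2 = torch.min(pred[:, 2], target[:, 2])
    inter_y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (inter_x2 - inter_x1).clamp(0) * (inter_y2 - inter_y1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    d1 = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    d2 = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2
    norm = img_w ** 2 + img_h ** 2
    return (1 - (iou - d1 / norm - d2 / norm)).mean()

loss = mpdiou_loss(torch.tensor([[10., 10., 50., 50.]]),
                   torch.tensor([[12., 8., 48., 52.]]), img_w=640, img_h=640)
```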
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52475155), the Natural Science Foundation of Hunan Province, China (Grant No. 2023JJ30137), the Guangdong Basic and Applied Basic Research Foundation (Grant Nos. 2024A1515010684 and 2023A1515240059), and a program sponsored by the Foundation of Yuelushan Center for Industrial Innovation (Grant No. 2023YCII0138).
Abstract: Pore formation is a significant challenge in the advancement of laser additive manufacturing (LAM) technologies. To address this issue, image data-driven pore detection techniques have become a research focus. However, existing methods are constrained by their reliance on a single detection environment (e.g., consistent brightness) and fixed input image sizes, limiting their predictive accuracy and application scope. This paper introduces an innovative pore detection method based on a deep learning model for laser-directed energy deposition (L-DED). The proposed method leverages the deep learning model's ability to extract feature information from melt pool images captured by a high-speed camera, enabling efficient pore detection under varying brightness conditions and diverse image sizes. The detection results demonstrate that, under varying brightness levels, the proposed model achieves a pore detection accuracy of approximately 93.5% and a root mean square error (RMSE) of 0.42 for local porosity prediction. Even with changes in input image size, the model maintains robust performance, achieving a detection accuracy of 96% for pore status detection and an RMSE of 0.09 for local porosity prediction. This study not only addresses the limitations of traditional detection techniques but also broadens the scope of online detection technologies. It highlights the potential of deep learning in complex industrial settings and provides valuable insights for advancing defect detection research in related fields.
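As a rough illustration of how one network can serve both pore-status detection and local porosity regression across image sizes, here is a hypothetical multi-task sketch; the architecture, layer sizes, and head names are assumptions, with global average pooling supplying the tolerance to varying input resolutions.

```python
import torch
import torch.nn as nn

class PoreMonitor(nn.Module):
    """Hypothetical multi-task sketch: a small CNN over melt-pool frames
    with one head for pore-status classification and one for local
    porosity regression. AdaptiveAvgPool2d(1) collapses any H x W to a
    fixed-length feature, so input image size may vary."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.status = nn.Linear(32, 2)    # pore / no pore
        self.porosity = nn.Linear(32, 1)  # local porosity estimate

    def forward(self, x):
        f = self.backbone(x)
        return self.status(f), self.porosity(f).squeeze(-1)

model = PoreMonitor()
logits, porosity = model(torch.randn(2, 1, 224, 224))  # any H x W works
```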
Abstract: Recent years have seen a surge of interest in object detection on remote sensing images for applications such as surveillance and management. However, challenges such as small object detection, scale variation, and the presence of closely packed objects in these images hinder accurate detection, and motion blur further complicates the identification of such objects. To address these issues, we propose an enhanced YOLOv9 with a transformer head (YOLOv9-TH). The model introduces an additional prediction head for detecting objects of varying sizes and swaps the original prediction heads for transformer heads to leverage self-attention mechanisms. We further improve YOLOv9-TH using several strategies, including data augmentation, multi-scale testing, multi-model integration, and an additional classifier. The cross-stage partial (CSP) method and the ghost convolution hierarchical graph (GCHG) are combined to improve detection accuracy by better utilizing feature maps, widening the receptive field, and precisely extracting multi-scale objects. Additionally, we incorporate the E-SimAM attention mechanism to address low-resolution feature loss. Extensive experiments on the VisDrone2021 and DIOR datasets demonstrate the effectiveness of YOLOv9-TH, showing clear mAP improvements over the best existing methods. YOLOv9-TH-e achieved 54.2% mAP50 on the VisDrone2021 dataset and 92.3% mAP on the DIOR dataset. The results confirm the model's robustness and suitability for real-world applications, particularly for small object detection in remote sensing images.
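The E-SimAM variant is not specified in the abstract, but the SimAM mechanism it extends is parameter-free and compact: each activation is weighted by a sigmoid of its estimated energy relative to the channel mean. A sketch of standard SimAM, assuming the usual stabilizing lambda:

```python
import torch

def simam(x: torch.Tensor, e_lambda: float = 1e-4) -> torch.Tensor:
    """Parameter-free SimAM attention: weight each activation by a
    sigmoid of its inverse energy within its channel."""
    n = x.shape[2] * x.shape[3] - 1
    d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)   # squared deviation
    v = d.sum(dim=(2, 3), keepdim=True) / n             # channel variance
    e_inv = d / (4 * (v + e_lambda)) + 0.5              # inverse energy
    return x * torch.sigmoid(e_inv)

out = simam(torch.randn(1, 64, 40, 40))  # same shape in, same shape out
```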
Abstract: The objective of this study is to address the semantic misalignment and insufficient accuracy in edge detail and discrimination detection that are common in deep learning-based change detection methods built on encoder-decoder frameworks. In response, we propose a model called FlowDual-PixelClsObjectMec (FPCNet), which innovatively incorporates dual flow alignment technology in the decoding stage to rectify semantic discrepancies through streamlined feature correction and fusion. Furthermore, in the final discrimination stage the model employs an object-level similarity measurement coupled with pixel-level classification in the PixelClsObjectMec (PCOM) module, significantly enhancing edge detail detection and overall accuracy. Experimental evaluations on the change detection dataset (CDD) and a building CDD demonstrate superior performance, with F1 scores of 95.1% and 92.8%, respectively. Our findings indicate that FPCNet outperforms existing algorithms in stability, robustness, and other key metrics.
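One common reading of flow-based alignment is to predict a two-channel offset field and warp one feature map onto the other before fusion. The PyTorch sketch below follows that reading; the module name, layer sizes, and single-flow (rather than dual-flow) form are assumptions, not FPCNet's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowAlign(nn.Module):
    """Sketch of flow-based feature alignment: predict a 2-D offset field
    from the concatenated features and warp one map so its semantics line
    up with the other before fusion."""
    def __init__(self, channels: int):
        super().__init__()
        self.flow = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1)

    def forward(self, feat_a, feat_b):
        n, _, h, w = feat_a.shape
        flow = self.flow(torch.cat([feat_a, feat_b], dim=1))  # (n, 2, h, w)
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
        grid = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
        grid = grid + flow.permute(0, 2, 3, 1)  # shift sampling positions
        return F.grid_sample(feat_b, grid, align_corners=True)

aligned = FlowAlign(64)(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```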