In order to solve the problem of small object detection in unmanned aerial vehicle (UAV) aerial images with complex backgrounds, a general detection method for multi-scale small objects based on the Faster region-based convolutional neural network (Faster R-CNN) is proposed. The bird’s nest on the high-voltage tower is taken as the research object. Firstly, we use the improved convolutional neural network ResNet101 to extract object features, and then use multi-scale sliding windows to obtain object region proposals on convolution feature maps with different resolutions. Finally, a deconvolution operation is added to further enhance the selected higher-resolution feature map, which is then taken as the feature mapping layer of the region proposals passed to the object detection sub-network. Detection results for the bird’s nest in UAV aerial images show that the proposed method can precisely detect small objects in aerial images.
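The multi-scale sliding-window step above can be sketched as follows. This is a minimal illustration, not the paper’s implementation; the window sizes and stride are assumed values:

```python
def sliding_windows(fmap_w, fmap_h, scales=((2, 2), (4, 4), (8, 8)), stride=1):
    """Enumerate candidate boxes (x, y, w, h) at several window scales
    over a feature map of size fmap_w x fmap_h."""
    proposals = []
    for w, h in scales:
        for y in range(0, fmap_h - h + 1, stride):
            for x in range(0, fmap_w - w + 1, stride):
                proposals.append((x, y, w, h))
    return proposals

# On an 8x8 feature map, a 2x2 window slides to 49 positions,
# a 4x4 window to 25, and an 8x8 window to 1.
props = sliding_windows(8, 8)
print(len(props))  # 75
```

In the actual detector these windows would be scored by the proposal sub-network; here they only show how proposal counts grow with scale.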
Detecting moving objects against a stationary background is an important problem in visual surveillance systems. However, the traditional background subtraction method fails when the background is not completely stationary and involves certain dynamic changes. In this paper, following the basic steps of the background subtraction method, a novel non-parametric moving object detection method is proposed based on an improved ant colony algorithm using a Markov random field. Concretely, the contributions are as follows: 1) A new non-parametric strategy is utilized to model the background, based on an improved kernel density estimation; this approach uses an adaptive bandwidth, and the fused features combine colours, gradients and positions. 2) A Markov random field method based on this adaptive background model, via the constraint of the spatial context, is proposed to extract objects. 3) The posterior function is maximized efficiently by using an improved ant colony system algorithm. Extensive experiments show that the proposed method demonstrates better performance than many existing state-of-the-art methods.
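The first contribution, a kernel density estimate with adaptive bandwidth, can be sketched for a single feature channel. The Gaussian kernel and the spread-based bandwidth rule below are illustrative assumptions, not the paper’s exact scheme:

```python
import math

def background_probability(pixel, samples, min_bw=1.0):
    """Kernel density estimate of a pixel value under its background
    samples (one feature channel). The bandwidth adapts to the spread
    of the samples, a simplified stand-in for an adaptive-bandwidth KDE."""
    n = len(samples)
    mean = sum(samples) / n
    spread = math.sqrt(sum((s - mean) ** 2 for s in samples) / n)
    bw = max(spread, min_bw)  # adaptive bandwidth
    norm = 1.0 / (n * bw * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((pixel - s) / bw) ** 2) for s in samples)

# A value close to the background samples scores much higher
# than an outlier (a likely foreground pixel).
bg = [100, 102, 98, 101, 99]
print(background_probability(100, bg) > background_probability(180, bg))  # True
```

A full model would fuse colour, gradient and position channels and threshold the resulting probability per pixel.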
Geospatial object detection within complex environments is a challenging problem in remote sensing. In this paper, we derive an extension of the Relevance Vector Machine (RVM) technique to a multiple-kernel version. The proposed method learns an optimal kernel combination and the associated classifier simultaneously. Two feature types are extracted from images, forming basis kernels. These basis kernels are then combined with learned weights, and the resulting composite kernel exploits interest points and appearance information of objects simultaneously. The weights and the detection model are finally learnt by a new algorithm. Experimental results show that the proposed method improves detection accuracy to above 88%, yields good interpretation for the selected subset of features, and is sparser than traditional single-kernel RVMs.
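The weighted kernel combination can be illustrated with two Gaussian RBF basis kernels, one per feature type; the weights and gamma values here are placeholder assumptions:

```python
import math

def rbf(u, v, gamma):
    """Gaussian RBF basis kernel on feature vectors u, v."""
    d2 = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-gamma * d2)

def composite_kernel(u1, v1, u2, v2, w=(0.6, 0.4), gammas=(0.5, 0.1)):
    """Weighted combination of two basis kernels, one per feature type
    (e.g. interest-point features and appearance features). In the
    multiple-kernel RVM the weights w would be learnt, not fixed."""
    k1 = rbf(u1, v1, gammas[0])
    k2 = rbf(u2, v2, gammas[1])
    return w[0] * k1 + w[1] * k2

# Identical samples in both feature spaces give the maximum value
# when the weights sum to one.
print(composite_kernel([1, 2], [1, 2], [0, 1], [0, 1]))  # 1.0
```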
The article deals with experimental studies of the fuzzy radiation structure of the atmosphere. The background for extracting information on the presence of a point-size thermal object in the atmosphere is substantiated. A fuzzy generalization of the technique for the experimentally studied regularities of the space-time irregularity of the radiation structure in the infrared wave range is offered. An approach to detecting a point-size thermal object in the atmosphere with a threshold method under thermodynamic and turbulent process conditions is justified, based on solving the inverse problem in a fuzzy formulation.
Pulmonary nodules represent an early manifestation of lung cancer. However, pulmonary nodules only constitute a small portion of the overall image, posing challenges for physicians in image interpretation and potentially leading to false positives or missed detections. To solve these problems, the YOLOv8 network is enhanced by adding deformable convolution and atrous spatial pyramid pooling (ASPP), along with the integration of a coordinate attention (CA) mechanism. This allows the network to focus on small targets while expanding the receptive field without losing resolution. At the same time, context information on the target is gathered and feature expression is enhanced by attention modules in different directions. The method effectively improves positioning accuracy and achieves good results on the LUNA16 dataset. Compared with other detection algorithms, it improves the accuracy of pulmonary nodule detection to a certain extent.
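The claim that atrous (dilated) convolution expands the receptive field without losing resolution follows from the effective kernel size k + (k - 1)(d - 1); a quick check with ASPP-style dilation rates (the specific rates are assumed):

```python
def effective_kernel(k, dilation):
    """Effective receptive-field width of a k x k convolution with a
    given dilation rate: k + (k - 1) * (dilation - 1)."""
    return k + (k - 1) * (dilation - 1)

# Parallel 3x3 branches with growing dilation cover increasingly wide
# contexts while the feature-map resolution stays unchanged.
print([effective_kernel(3, d) for d in (1, 6, 12, 18)])  # [3, 13, 25, 37]
```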
To maintain the reliability of power systems, routine inspections using drones equipped with advanced object detection algorithms are essential for preempting power-related issues. The increasing resolution of drone-captured images has posed a challenge for traditional target detection methods, especially in identifying small objects in high-resolution images. This study presents an enhanced object detection algorithm based on the Faster Region-based Convolutional Neural Network (Faster R-CNN) framework, specifically tailored for detecting small-scale electrical components such as insulators, shock hammers, and screws on transmission lines. The algorithm features an improved backbone network for Faster R-CNN, which significantly boosts the feature extraction network’s ability to detect fine details. The Region Proposal Network is optimized using a method of guided feature refinement (GFR), which achieves a balance between accuracy and speed. The incorporation of Generalized Intersection over Union (GIOU) and Region of Interest (ROI) Align further refines the model’s accuracy. Experimental results demonstrate a notable improvement in mean Average Precision, reaching 89.3%, an 11.1% increase compared to the standard Faster R-CNN. This highlights the effectiveness of the proposed algorithm in identifying electrical components in high-resolution aerial images.
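Generalized IoU, as incorporated above, extends plain IoU with a penalty based on the smallest enclosing box; a minimal sketch for axis-aligned boxes:

```python
def giou(a, b):
    """Generalized IoU of two boxes (x1, y1, x2, y2): IoU minus the
    fraction of the smallest enclosing box not covered by the union."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest box enclosing both a and b.
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    return iou - (c_area - union) / c_area

print(giou((0, 0, 2, 2), (0, 0, 2, 2)))  # 1.0
print(giou((0, 0, 1, 1), (3, 0, 4, 1)) < 0)  # True: disjoint boxes still get a gradient signal
```

Unlike plain IoU, GIoU stays informative for non-overlapping boxes, which is what makes it useful as a regression loss.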
Lunar impact crater detection is crucial for lunar surface studies and spacecraft landing missions, yet deep learning still struggles with accurately detecting small craters, especially when relying on incomplete catalogs. In this work, we integrate Digital Elevation Model (DEM) data to construct a high-quality dataset enriched with slope information, enabling a detailed analysis of crater features and effectively improving detection performance in complex terrains and low-contrast areas. Based on this foundation, we propose a novel two-stage detection network, MSFNet, which leverages multi-scale adaptive feature fusion and multi-size ROI pooling to enhance the recognition of craters across various scales. Experimental results demonstrate that MSFNet achieves an F1 score of 74.8% on Test Region 1 and a recall rate of 87% for craters with diameters larger than 2 km. Moreover, it shows exceptional performance in detecting sub-kilometer craters by successfully identifying a large number of high-confidence, previously unlabeled targets with a low false detection rate confirmed through manual review. This approach offers an efficient and reliable deep learning solution for lunar impact crater detection.
In this paper, a two-stage light detection and ranging (LiDAR) three-dimensional (3D) object detection framework is presented, namely the point-voxel dual transformer (PV-DT3D), which is a transformer-based method. In the proposed PV-DT3D, point-voxel fusion features are used for proposal refinement. Specifically, keypoints are sampled from the entire point cloud scene and used to encode representative scene features via a proposal-aware voxel set abstraction module. Subsequently, following the generation of proposals by the region proposal network (RPN), the internally encoded keypoints are fed into the dual transformer encoder-decoder architecture. In 3D object detection, the proposed PV-DT3D takes advantage of both the point-wise transformer and the channel-wise architecture to capture contextual information from the spatial and channel dimensions. Experiments conducted on the highly competitive KITTI 3D car detection leaderboard show that the PV-DT3D achieves superior detection accuracy among state-of-the-art point-voxel-based methods.
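Keypoints are often sampled from a point cloud with farthest point sampling; whether PV-DT3D uses exactly this sampler is an assumption here, but the idea can be sketched as:

```python
def farthest_point_sampling(points, k):
    """Greedily pick k points that spread over the cloud: each new
    keypoint is the point farthest from those already chosen."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    chosen = [points[0]]
    dists = [d2(p, chosen[0]) for p in points]
    while len(chosen) < k:
        i = max(range(len(points)), key=lambda j: dists[j])
        chosen.append(points[i])
        # Each point keeps its distance to the nearest chosen keypoint.
        dists = [min(dists[j], d2(points[j], points[i])) for j in range(len(points))]
    return chosen

pts = [(0, 0, 0), (0.1, 0, 0), (5, 0, 0), (0, 5, 0)]
print(farthest_point_sampling(pts, 3))  # [(0, 0, 0), (5, 0, 0), (0, 5, 0)]
```

Note how the near-duplicate point (0.1, 0, 0) is skipped: the sampler favours coverage of the scene over density.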
Aiming at the problems of low detection accuracy and large model size in existing object detection algorithms applied to complex road scenes, an improved you only look once version 8 (YOLOv8) object detection algorithm for infrared images, F-YOLOv8, is proposed. First, a space-to-depth network replaces the strided convolution and pooling layers of the traditional backbone network. At the same time, it is combined with a channel attention mechanism so that the neural network focuses on channels with large weight values to better extract low-resolution image feature information. Then an improved feature pyramid network, the lightweight bidirectional feature pyramid network (L-BiFPN), is proposed, which can efficiently fuse features of different scales. In addition, an intersection over union loss function based on the minimum point distance (MPDIoU) is introduced for bounding box regression, which obtains faster convergence and more accurate regression results. Experimental results on the FLIR dataset show that the improved algorithm can accurately detect infrared road targets in real time, with 3% and 2.2% improvements in mean average precision at 50% IoU (mAP50) and at 50%-95% IoU (mAP50-95), respectively, and 38.1%, 37.3% and 16.9% reductions in the number of model parameters, the model weight, and floating-point operations (FLOPs), respectively. To further demonstrate the detection capability of the improved algorithm, it is tested on the public dataset PASCAL VOC, and the results show that F-YOLOv8 has excellent generalized detection performance.
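The MPDIoU criterion penalizes IoU by the normalized squared distances between corresponding box corners. The sketch below follows the common formulation, with normalization by the squared image dimensions taken as an assumption:

```python
def mpdiou(a, b, img_w, img_h):
    """MPDIoU sketch: IoU penalized by the normalized squared distances
    between the two boxes' top-left and bottom-right corners.
    Boxes are (x1, y1, x2, y2); img_w, img_h normalize the distances."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    iou = inter / union
    norm = img_w ** 2 + img_h ** 2
    d_tl = (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2  # top-left corner distance
    d_br = (a[2] - b[2]) ** 2 + (a[3] - b[3]) ** 2  # bottom-right corner distance
    return iou - d_tl / norm - d_br / norm

# Perfectly aligned boxes score 1; a shifted box is penalized by its
# corner distances on top of the IoU drop.
print(mpdiou((0, 0, 10, 10), (0, 0, 10, 10), 100, 100))  # 1.0
```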
Drone-based small object detection is of great significance in practical applications such as military actions, disaster rescue, transportation, etc. However, the severe scale differences in objects captured by drones and the lack of detail information for small-scale objects make drone-based small object detection a formidable challenge. To address these issues, we first develop a mathematical model to explore how changing receptive fields impacts the polynomial fitting results. Subsequently, based on the obtained conclusions, we propose a simple but effective Hybrid Receptive Field Network (HRFNet), whose modules include Hybrid Feature Augmentation (HFA), Hybrid Feature Pyramid (HFP) and Dual Scale Head (DSH). Specifically, HFA employs parallel dilated convolution kernels of different sizes to extend shallow features with different receptive fields, committed to improving the multi-scale adaptability of the network; HFP enhances the perception of small objects by capturing contextual information across layers, while DSH reconstructs the original prediction head using a set of high-resolution and ultra-high-resolution features. In addition, a corresponding dual-scale loss function is designed to train HRFNet. Finally, comprehensive evaluation results on public benchmarks such as VisDrone-DET and TinyPerson demonstrate the robustness of the proposed method. Most impressively, the proposed HRFNet achieves a mAP of 51.0 on VisDrone-DET with 29.3 M parameters, outperforming extant state-of-the-art detectors. HRFNet also performs excellently in complex scenarios captured by drones, achieving the best performance on the CS-Drone dataset we built.
To address the challenges of low detection accuracy caused by the diverse species, significant size variations, and complex growth environments of wheat pests in natural settings, a PSA-YOLO11n algorithm is proposed to enhance detection precision. Building upon the YOLO11n framework, the proposed improvements include three key components: 1) SimCSPSPPF in the Backbone: an improved Spatial Pyramid Pooling-Fast (SPPF) module, SimCSPSPPF, is integrated into the Backbone to reduce the number of channels in the hidden layers, thereby accelerating model training. 2) PEC in the Neck: the standard convolution layers in the Neck are replaced with Perception Enhancement Convolutions (PEC) to improve multi-scale feature extraction capabilities, enhancing detection speed. 3) AWIoU loss function: the regression loss function is replaced with Adequate Wise IoU (AWIoU), addressing issues of bounding box distortion caused by the diversity in pest species and size variations, thereby improving the precision of bounding box localization. Experimental evaluations on the IP102 dataset demonstrate that PSA-YOLO11n achieves a mean Average Precision (mAP) of 89.10%, surpassing YOLO11n by 0.8%. Comparisons with other mainstream algorithms, including Faster R-CNN, RetinaNet, YOLOv5s, YOLOv8n, YOLOv10n, and YOLO11n, confirm that PSA-YOLO11n outperforms all baselines in terms of detection performance. These results highlight the algorithm’s capability to significantly improve the detection accuracy of multi-scale wheat pests in natural environments, providing an effective solution for pest management in wheat production.
UAV-based object detection is rapidly expanding in both civilian and military applications, including security surveillance, disaster assessment, and border patrol. However, challenges such as small objects, occlusions, complex backgrounds, and variable lighting persist due to the unique perspective of UAV imagery. To address these issues, this paper introduces DAFPN-YOLO, an innovative model based on YOLOv8s (You Only Look Once version 8s). The model strikes a balance between detection accuracy and speed while reducing parameters, making it well-suited for multi-object detection tasks from drone perspectives. A key feature of DAFPN-YOLO is the enhanced Drone-AFPN (Adaptive Feature Pyramid Network), which adaptively fuses multi-scale features to optimize feature extraction and enhance spatial and small-object information. To leverage Drone-AFPN’s multi-scale capabilities fully, a dedicated 160×160 small-object detection head was added, significantly boosting detection accuracy for small targets. In the backbone, the C2f_Dual (Cross Stage Partial with Cross-Stage Feature Fusion Dual) module and SPPELAN (Spatial Pyramid Pooling with Enhanced Local Attention Network) module were integrated. These components improve feature extraction and information aggregation while reducing parameters and computational complexity, enhancing inference efficiency. Additionally, Shape-IoU (Shape Intersection over Union) is used as the loss function for bounding box regression, enabling more precise shape-based object matching. Experimental results on the VisDrone 2019 dataset demonstrate the effectiveness of DAFPN-YOLO. Compared to YOLOv8s, the proposed model achieves a 5.4 percentage point increase in mAP@0.5, a 3.8 percentage point improvement in mAP@0.5:0.95, and a 17.2% reduction in parameter count. These results highlight DAFPN-YOLO’s advantages in UAV-based object detection, offering valuable insights for applying deep learning to UAV-specific multi-object detection tasks.
Forests are vital ecosystems that play a crucial role in sustaining life on Earth and supporting human well-being. Traditional forest mapping and monitoring methods are often costly and limited in scope, necessitating the adoption of advanced, automated approaches for improved forest conservation and management. This study explores the application of deep learning-based object detection techniques for individual tree detection in RGB satellite imagery. A dataset of 3157 images was collected and divided into training (2528), validation (495), and testing (134) sets. To enhance model robustness and generalization, data augmentation was applied to the training part of the dataset. Various YOLO-based models, including YOLOv8, YOLOv9, YOLOv10, YOLOv11, and YOLOv12, were evaluated using different hyperparameters and optimization techniques, such as stochastic gradient descent (SGD) and auto-optimization. These models were assessed in terms of detection accuracy and the number of detected trees. The highest-performing model, YOLOv12m, achieved a mean average precision (mAP@50) of 0.908, mAP@50:95 of 0.581, recall of 0.851, precision of 0.852, and an F1-score of 0.847. The results demonstrate that YOLO-based object detection offers a highly efficient, scalable, and accurate solution for individual tree detection in satellite imagery, facilitating improved forest inventory, monitoring, and ecosystem management. This study underscores the potential of AI-driven tree detection to enhance environmental sustainability and support data-driven decision-making in forestry.
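The reported precision, recall, and F1-score are linked by the standard harmonic mean; small differences from the reported 0.847 can arise from per-class or per-image averaging:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# With the precision and recall reported above, the pooled F1 is ~0.851,
# close to (but not identical with) the averaged 0.847 in the abstract.
print(round(f1_score(0.852, 0.851), 3))  # 0.851
```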
Improving consumer satisfaction with the appearance and surface quality of wood-based products requires inspection methods that are both accurate and efficient. The adoption of artificial intelligence (AI) for surface evaluation has emerged as a promising solution. Since the visual appeal of wooden products directly impacts their market value and overall business success, effective quality control is crucial. However, conventional inspection techniques often fail to meet performance requirements due to limited accuracy and slow processing times. To address these shortcomings, the authors propose a real-time deep learning-based system for evaluating surface appearance quality. The method integrates object detection and classification within an area attention framework and leverages R-ELAN for advanced fine-tuning. This architecture supports precise identification and classification of multiple objects, even under ambiguous or visually complex conditions. Furthermore, the model is computationally efficient and well-suited to moderate or domain-specific datasets commonly found in industrial inspection tasks. Experimental validation on the Zenodo dataset shows that the model achieves an average precision (AP) of 60.6%, outperforming the current state-of-the-art YOLOv12 model (55.3%), with a fast inference time of approximately 70 milliseconds. These results underscore the potential of AI-powered methods to enhance surface quality inspection in the wood manufacturing sector.
Dear Editor, This letter focuses on the fact that small objects with few pixels disappear in feature maps with large receptive fields as the network deepens in object detection tasks. Therefore, the detection of dense small objects is challenging.
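Why few-pixel objects vanish as the network deepens can be made concrete: the receptive field of stacked convolutions grows with the product of the earlier strides. A quick calculation:

```python
def receptive_field(layers):
    """Receptive field of a CNN stack. Each layer is (kernel, stride);
    rf grows by (kernel - 1) times the product of earlier strides."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Five 3x3 stride-2 layers already see a 63-pixel window, so an object
# only a few pixels wide occupies a tiny fraction of one feature cell.
print(receptive_field([(3, 2)] * 5))  # 63
```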
Bees play a crucial role in the global food chain, pollinating over 75% of food crops and producing valuable products such as bee pollen, propolis, and royal jelly. However, the Asian hornet poses a serious threat to bee populations by preying on them and disrupting agricultural ecosystems. To address this issue, this study developed a modified YOLOv7-tiny (You Only Look Once) model for efficient hornet detection. The model incorporated space-to-depth (SPD) and squeeze-and-excitation (SE) attention mechanisms and involved detailed annotation of the hornet’s head and full body, significantly enhancing the detection of small objects. The Taguchi method was also used to optimize the training parameters, resulting in optimal performance. Data for this study were collected from the Roboflow platform using a 640×640 resolution dataset. The YOLOv7-tiny model was trained on this dataset. After optimizing the training parameters using the Taguchi method, significant improvements were observed in accuracy, precision, recall, F1 score, and mean average precision (mAP) for hornet detection. Without the hornet head label, incorporating the SPD attention mechanism resulted in a peak mAP of 98.7%, representing an 8.58% increase over the original YOLOv7-tiny. By including the hornet head label and applying the SPD attention mechanism and Soft-CIOU loss function, the mAP reached 97.3%, a 7.04% increase over the original YOLOv7-tiny. Furthermore, the Soft-CIOU loss function contributed additional performance enhancements during the validation phase.
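The space-to-depth (SPD) operation keeps fine detail by moving spatial positions into channels rather than discarding them as strided downsampling does; a single-channel sketch:

```python
def space_to_depth(x, block=2):
    """Rearrange an H x W single-channel map into (H/block) x (W/block)
    cells, each holding the block*block values it covered, so spatial
    detail moves into channels instead of being thrown away."""
    h, w = len(x), len(x[0])
    out = []
    for i in range(0, h, block):
        row = []
        for j in range(0, w, block):
            cell = [x[i + di][j + dj] for di in range(block) for dj in range(block)]
            row.append(cell)
        out.append(row)
    return out

x = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]
out = space_to_depth(x)
print(len(out), len(out[0]), len(out[0][0]))  # 2 2 4
print(out[0][0])  # [1, 2, 5, 6]
```

Every input value survives in the output, which is exactly the property that helps small-object features reach deeper layers.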
Data augmentation plays an important role in boosting the performance of 3D models, yet very few studies handle 3D point cloud data with this technique. Global augmentation and cut-paste are commonly used augmentation techniques for point clouds, where global augmentation is applied to the entire point cloud of the scene, and cut-paste samples objects from other frames into the current frame. Both types of data augmentation can improve performance, but the cut-paste technique cannot effectively deal with the occlusion relationship between the foreground object and the background scene or the rationality of object sampling, which may be counterproductive and hurt overall performance. In addition, LiDAR is susceptible to signal loss, external occlusion, extreme weather and other factors, which can easily cause object shape changes, while global augmentation and cut-paste cannot effectively enhance the robustness of the model. To this end, we propose Syn-Aug, a synchronous data augmentation framework for LiDAR-based 3D object detection. Specifically, we first propose a novel rendering-based object augmentation technique (Ren-Aug) to enrich training data while enhancing scene realism. Second, we propose a local augmentation technique (Local-Aug) that generates local noise by rotating and scaling objects in the scene while avoiding collisions, which can improve generalisation performance. Finally, we make full use of the structural information of 3D labels to make the model more robust by randomly changing the geometry of objects in the training frames. We verify the proposed framework with four different types of 3D object detectors. Experimental results show that our proposed Syn-Aug significantly improves the performance of various 3D object detectors on the KITTI and nuScenes datasets, proving the effectiveness and generality of Syn-Aug. On KITTI, four different types of baseline models using Syn-Aug improved mAP by 0.89%, 1.35%, 1.61% and 1.14% respectively. On nuScenes, the improvements were 14.93%, 10.42%, 8.47% and 6.81% respectively. The code is available at https://github.com/liuhuaijjin/Syn-Aug.
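The Local-Aug idea, rotating and scaling an object in place while avoiding collisions, can be sketched in a 2D bird’s-eye view. The center-distance collision test is a simplifying assumption, not the paper’s exact procedure:

```python
import math

def local_augment(obj_pts, center, angle, scale, other_centers, min_gap=2.0):
    """Rotate and scale an object's points about its own center (yaw only),
    skipping the transform if the object's center sits within min_gap of
    another object's center (a crude collision check)."""
    for c in other_centers:
        if math.dist(center, c) < min_gap:
            return obj_pts  # collision risk: keep the object unchanged
    cx, cy = center
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    out = []
    for x, y in obj_pts:
        dx, dy = (x - cx) * scale, (y - cy) * scale
        out.append((cx + dx * cos_a - dy * sin_a, cy + dx * sin_a + dy * cos_a))
    return out

# Scale-only transform doubles the point's offset from the center.
print(local_augment([(1.0, 0.0)], (0.0, 0.0), 0.0, 2.0, []))  # [(2.0, 0.0)]
```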
With rapid urbanization and exponential population growth in China, two-wheeled vehicles have become a popular mode of transportation, particularly for short-distance travel. However, due to a lack of safety awareness, traffic violations by two-wheeled vehicle riders have become a widespread concern, contributing to urban traffic risks. Currently, significant human and material resources are being allocated to monitor and intercept non-compliant riders to ensure safe driving behavior. To enhance the safety, efficiency, and cost-effectiveness of traffic monitoring, automated detection systems based on image processing algorithms can be employed to identify traffic violations from eye-level video footage. In this study, we propose a robust detection algorithm specifically designed for two-wheeled vehicles, which serves as a fundamental step toward intelligent traffic monitoring. Our approach integrates a novel convolutional and attention mechanism to improve detection accuracy and efficiency. Additionally, we introduce a semi-supervised training strategy that leverages a large number of unlabeled images to enhance the model’s learning capability by extracting valuable background information. This method enables the model to generalize effectively to diverse urban environments and varying lighting conditions. We evaluate our proposed algorithm on a custom-built dataset, and experimental results demonstrate its superior performance, achieving an average precision (AP) of 95% and a recall (R) of 90.6%. Furthermore, the model maintains a computational efficiency of only 25.7 GFLOPs while achieving a high processing speed of 249 FPS, making it highly suitable for deployment on edge devices. Compared to existing detection methods, our approach significantly enhances the accuracy and robustness of two-wheeled vehicle identification while ensuring real-time performance.
Object detection in occluded environments remains a core challenge in computer vision (CV), especially in domains such as autonomous driving and robotics. While Convolutional Neural Network (CNN)-based two-dimensional (2D) and three-dimensional (3D) object detection methods have made significant progress, they often fall short under severe occlusion due to depth ambiguities in 2D imagery and the high cost and deployment limitations of 3D sensors such as Light Detection and Ranging (LiDAR). This paper presents a comparative review of recent 2D and 3D detection models, focusing on their occlusion-handling capabilities and the impact of sensor modalities such as stereo vision, Time-of-Flight (ToF) cameras, and LiDAR. In this context, we introduce FuDensityNet, our multimodal occlusion-aware detection framework that combines Red-Green-Blue (RGB) images and LiDAR data to enhance detection performance. As a forward-looking direction, we propose a monocular depth-estimation extension to FuDensityNet, aimed at replacing expensive 3D sensors with a more scalable CNN-based pipeline. Although this enhancement is not experimentally evaluated in this manuscript, we describe its conceptual design and potential for future implementation.
Efficient detection of surface defects is essential for ensuring product quality during manufacturing processes. To enhance the performance of deep learning-based methods in practical applications, the authors propose Dense-YOLO, a fast surface defect detection network that combines the strengths of DenseNet and you only look once version 3 (YOLOv3). The authors design a lightweight backbone network with improved densely connected blocks, optimising the utilisation of shallow features while maintaining high detection speeds. Additionally, the authors refine the feature pyramid network of YOLOv3 to increase the recall of tiny defects and the overall positioning accuracy. Furthermore, an online multi-angle template matching technique based on normalised cross-correlation is introduced to precisely locate the detection area. This refined template matching method not only accelerates detection but also mitigates the influence of the background. To validate the effectiveness of the enhancements, the authors conduct comparative experiments across two private datasets and one public dataset. Results show that Dense-YOLO outperforms existing methods, such as faster R-CNN, YOLOv3, YOLOv5s, YOLOv7, and SSD, in terms of mean average precision (mAP) and detection speed. Moreover, Dense-YOLO outperforms networks inherited from VGG and ResNet, including improved faster R-CNN, FCOS, M2Det-320 and FRCN, in mAP.
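Zero-mean normalised cross-correlation, the basis of the template matching step, can be sketched on flattened patches; subtracting the mean makes the match invariant to brightness offset:

```python
import math

def ncc(patch, template):
    """Zero-mean normalised cross-correlation between two equally sized
    patches (flat lists of intensities); 1.0 means a perfect match."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = math.sqrt(sum((p - mp) ** 2 for p in patch))
    dt = math.sqrt(sum((t - mt) ** 2 for t in template))
    return num / (dp * dt)

t = [10, 20, 30, 40]
print(ncc(t, t))             # 1.0 (up to float rounding)
print(ncc([v + 50 for v in t], t))  # also ~1.0: invariant to a brightness offset
```

In the full system this score would be evaluated over many rotated templates and image positions to locate the inspection area.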
Funding: National Defense Pre-research Fund Project (No. KMGY318002531).
Funding: Supported in part by the National Natural Science Foundation of China under Grants 61841103, 61673164, and 61602397; in part by the Natural Science Foundation of Hunan Province under Grants 2016JJ2041 and 2019JJ50106; in part by the Key Project of the Education Department of Hunan Province under Grant 18B385; and in part by the Graduate Research Innovation Projects of Hunan Province under Grants CX2018B805 and CX2018B813.
Abstract: Detecting moving objects against a stationary background is an important problem in visual surveillance systems. However, the traditional background subtraction method fails when the background is not completely stationary and involves certain dynamic changes. In this paper, following the basic steps of the background subtraction method, a novel non-parametric moving object detection method is proposed based on an improved ant colony algorithm using a Markov random field. Concretely, the contributions are as follows: 1) A new non-parametric strategy is utilized to model the background, based on an improved kernel density estimation; this approach uses an adaptive bandwidth, and the fused features combine colours, gradients and positions. 2) A Markov random field method based on this adaptive background model, via the constraint of the spatial context, is proposed to extract objects. 3) The posterior function is maximized efficiently by using an improved ant colony system algorithm. Extensive experiments show that the proposed method demonstrates better performance than many existing state-of-the-art methods.
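To make the non-parametric background model concrete, the following is a minimal per-pixel kernel density estimate with a data-driven bandwidth; the bandwidth heuristic and the intensity-only feature are simplifications of the paper's fused colour/gradient/position features.

```python
import numpy as np

def kde_background_prob(samples, value, bandwidth=None):
    """Non-parametric background density of `value` given past per-pixel
    `samples`, via a Gaussian kernel density estimate. The default
    bandwidth adapts to the median absolute successive difference,
    a common heuristic (not necessarily the paper's rule)."""
    samples = np.asarray(samples, dtype=float)
    if bandwidth is None:
        diffs = np.abs(np.diff(samples))
        bandwidth = max(np.median(diffs) / (0.68 * np.sqrt(2)), 1e-3)
    z = (value - samples) / bandwidth
    kernel = np.exp(-0.5 * z * z) / (bandwidth * np.sqrt(2 * np.pi))
    return kernel.mean()

history = [100, 101, 99, 102, 100, 98]        # recent intensities at one pixel
p_bg = kde_background_prob(history, 100)      # close to history: high density
p_fg = kde_background_prob(history, 200)      # far from history: near zero
print(p_bg > p_fg)  # True
```

A pixel whose new value has low density under this estimate would then be a foreground candidate, to be refined by the spatial MRF step.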
Funding: Supported by the National Natural Science Foundation of China (No. 41001285).
Abstract: Geospatial object detection within complex environments is a challenging problem in the remote sensing area. In this paper, we derive an extension of the Relevance Vector Machine (RVM) technique to a multiple-kernel version. The proposed method learns an optimal kernel combination and the associated classifier simultaneously. Two feature types are extracted from images, forming basis kernels. These basis kernels are then combined with learnt weights, and the resulting composite kernel exploits interest points and appearance information of objects simultaneously. The weights and the detection model are finally learnt by a new algorithm. Experimental results show that the proposed method improves detection accuracy to above 88%, yields a good interpretation for the selected subset of features, and appears sparser than traditional single-kernel RVMs.
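The weighted basis-kernel combination at the heart of the multiple-kernel RVM can be sketched as follows; the RBF basis kernels and fixed weights below stand in for the learnt feature kernels and learnt weights.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix between row-vector sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def composite_kernel(X, Y, gammas, weights):
    """Weighted combination of basis kernels: K = sum_i w_i * K_i.
    Non-negative weights keep the composite positive semi-definite;
    normalising them is a convention, not a requirement."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * rbf_kernel(X, Y, g) for w, g in zip(weights, gammas))

X = np.random.default_rng(0).normal(size=(5, 3))
K = composite_kernel(X, X, gammas=(0.1, 1.0), weights=(2.0, 1.0))
print(K.shape, np.allclose(K, K.T))  # (5, 5) True
```

In the paper's setting, each basis kernel would be built from one feature type (interest points or appearance), and the weights would be learnt jointly with the RVM classifier rather than fixed.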
Abstract: The article deals with experimental studies of the fuzzy radiation structure of the atmosphere. The background for extracting information on the presence of a point-size thermal object in the atmosphere is substantiated. A technique for the fuzzy generalization of experimentally studied regularities of the space-time irregular radiation structure in the infrared wave range is offered. An approach to detecting a point-size thermal object in the atmosphere under thermodynamic and turbulent process conditions is validated with the help of a threshold method, based on solving the inverse problem in its fuzzy statement.
Abstract: Pulmonary nodules represent an early manifestation of lung cancer. However, pulmonary nodules constitute only a small portion of the overall image, posing challenges for physicians in image interpretation and potentially leading to false positives or missed detections. To solve these problems, the YOLOv8 network is enhanced by adding deformable convolution and atrous spatial pyramid pooling (ASPP), along with the integration of a coordinate attention (CA) mechanism. This allows the network to focus on small targets while expanding the receptive field without losing resolution. At the same time, context information on the target is gathered and feature expression is enhanced by attention modules in different directions. The method effectively improves positioning accuracy and achieves good results on the LUNA16 dataset. Compared with other detection algorithms, it improves the accuracy of pulmonary nodule detection to a certain extent.
Funding: Supported by the Shanghai Science and Technology Innovation Action Plan High-Tech Field Project (Grant No. 22511100601) for the year 2022 and the Technology Development Fund for People's Livelihood Research (Research on Transmission Line Deep Foundation Pit Environmental Situation Awareness System Based on Multi-Source Data).
Abstract: To maintain the reliability of power systems, routine inspections using drones equipped with advanced object detection algorithms are essential for preempting power-related issues. The increasing resolution of drone-captured images has posed a challenge for traditional target detection methods, especially in identifying small objects in high-resolution images. This study presents an enhanced object detection algorithm based on the Faster Region-based Convolutional Neural Network (Faster R-CNN) framework, specifically tailored for detecting small-scale electrical components such as insulators, shock hammers, and screws in transmission lines. The algorithm features an improved backbone network for Faster R-CNN, which significantly boosts the feature extraction network's ability to detect fine details. The Region Proposal Network is optimized using a method of guided feature refinement (GFR), which achieves a balance between accuracy and speed. The incorporation of Generalized Intersection over Union (GIoU) and Region of Interest (ROI) Align further refines the model's accuracy. Experimental results demonstrate a notable improvement in mean Average Precision, reaching 89.3%, an 11.1% increase compared to the standard Faster R-CNN. This highlights the effectiveness of the proposed algorithm in identifying electrical components in high-resolution aerial images.
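Generalized IoU, one of the refinements mentioned above, has a standard closed form that is easy to state in code (boxes are axis-aligned `(x1, y1, x2, y2)` tuples):

```python
def giou(a, b):
    """Generalised IoU: GIoU = IoU - |C minus (A union B)| / |C|,
    where C is the smallest box enclosing both A and B.
    Ranges over (-1, 1]; unlike plain IoU it stays informative
    even when the boxes do not overlap."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou = inter / union
    # smallest enclosing box C
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    return iou - (c_area - union) / c_area

print(giou((0, 0, 2, 2), (0, 0, 2, 2)))      # 1.0 for identical boxes
print(giou((0, 0, 1, 1), (2, 2, 3, 3)) < 0)  # disjoint boxes go negative
```

Using 1 - GIoU as a regression loss gives a non-zero gradient even for non-overlapping proposals, which is why it improves localization over an L1 box loss.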
Funding: National Natural Science Foundation of China (12103020, 12363009); Natural Science Foundation of Jiangxi Province (20224BAB211011); Open Project Program of the State Key Laboratory of Lunar and Planetary Sciences (Macao University of Science and Technology) (Macao FDCT grant No. 002/2024/SKL); Youth Talent Project of the Science and Technology Plan of Ganzhou (2022CXRC9191, 2023CYZ26970).
Abstract: Lunar impact crater detection is crucial for lunar surface studies and spacecraft landing missions, yet deep learning still struggles to accurately detect small craters, especially when relying on incomplete catalogs. In this work, we integrate Digital Elevation Model (DEM) data to construct a high-quality dataset enriched with slope information, enabling a detailed analysis of crater features and effectively improving detection performance in complex terrains and low-contrast areas. On this foundation, we propose a novel two-stage detection network, MSFNet, which leverages multi-scale adaptive feature fusion and multi-size ROI pooling to enhance the recognition of craters across various scales. Experimental results demonstrate that MSFNet achieves an F1 score of 74.8% on Test Region 1 and a recall rate of 87% for craters with diameters larger than 2 km. Moreover, it shows exceptional performance in detecting sub-kilometer craters, successfully identifying a large number of high-confidence, previously unlabeled targets with a low false detection rate confirmed through manual review. This approach offers an efficient and reliable deep learning solution for lunar impact crater detection.
Funding: Supported by the National Natural Science Foundation of China (No. 62103298) and the South African National Research Foundation (Nos. 132797 and 137951).
Abstract: In this paper, a two-stage light detection and ranging (LiDAR) three-dimensional (3D) object detection framework is presented, namely the point-voxel dual transformer (PV-DT3D), a transformer-based method. In the proposed PV-DT3D, point-voxel fusion features are used for proposal refinement. Specifically, keypoints are sampled from the entire point-cloud scene and used to encode representative scene features via a proposal-aware voxel set abstraction module. Subsequently, following the generation of proposals by the region proposal network (RPN), the internally encoded keypoints are fed into the dual transformer encoder-decoder architecture. In 3D object detection, the proposed PV-DT3D takes advantage of both the point-wise transformer and the channel-wise architecture to capture contextual information from the spatial and channel dimensions. Experiments conducted on the highly competitive KITTI 3D car detection leaderboard show that the PV-DT3D achieves superior detection accuracy among state-of-the-art point-voxel-based methods.
Funding: Supported by the National Natural Science Foundation of China (No. 62103298).
Abstract: Aiming at the problems of low detection accuracy and large model size in existing object detection algorithms applied to complex road scenes, an improved you only look once version 8 (YOLOv8) object detection algorithm for infrared images, F-YOLOv8, is proposed. First, a space-to-depth network replaces the traditional backbone network's strided convolution or pooling layers; it is combined with a channel attention mechanism so that the neural network focuses on the channels with large weight values to better extract low-resolution image feature information. Then an improved lightweight bidirectional feature pyramid network (L-BiFPN) is proposed, which can efficiently fuse features of different scales. In addition, an intersection-over-union loss function based on the minimum point distance (MPDIoU) is introduced for bounding-box regression, which obtains faster convergence and more accurate regression results. Experimental results on the FLIR dataset show that the improved algorithm can accurately detect infrared road targets in real time, with 3% and 2.2% improvements in mean average precision at 50% IoU (mAP50) and at 50%-95% IoU (mAP50-95), respectively, and 38.1%, 37.3% and 16.9% reductions in the number of model parameters, the model weight, and floating-point operations per second (FLOPs), respectively. To further demonstrate the detection capability of the improved algorithm, it is tested on the public PASCAL VOC dataset, and the results show that F-YOLOv8 has excellent generalized detection performance.
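MPDIoU penalises plain IoU by the distances between corresponding corner points of the predicted and target boxes. The sketch below follows the published MPDIoU formulation as I understand it (IoU minus the squared top-left and bottom-right corner distances, normalised by the image size); treat the exact normalisation as an assumption rather than this paper's definition.

```python
def mpdiou(a, b, img_w, img_h):
    """Minimum-point-distance IoU for boxes (x1, y1, x2, y2):
    IoU minus the squared distances between the boxes' top-left
    and bottom-right corners, each normalised by w^2 + h^2 of
    the image. Equals plain IoU when the boxes coincide."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    iou = inter / union
    d2_tl = (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2   # top-left corners
    d2_br = (a[2] - b[2]) ** 2 + (a[3] - b[3]) ** 2   # bottom-right corners
    norm = img_w ** 2 + img_h ** 2
    return iou - d2_tl / norm - d2_br / norm

print(mpdiou((0, 0, 2, 2), (0, 0, 2, 2), 10, 10))  # 1.0: identical boxes
```

The regression loss would then be `1 - mpdiou(pred, target, W, H)`, which is minimised only when both corner pairs coincide.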
Funding: Supported by the National Natural Science Foundation of China (Nos. 62276204 and 62203343); the Fundamental Research Funds for the Central Universities (No. YJSJ24011); the Natural Science Basic Research Program of Shanxi, China (Nos. 2022JM-340 and 2023-JC-QN-0710); and the China Postdoctoral Science Foundation (Nos. 2020T130494 and 2018M633470).
Abstract: Drone-based small object detection is of great significance in practical applications such as military actions, disaster rescue, and transportation. However, the severe scale differences among objects captured by drones and the lack of detail information for small-scale objects make drone-based small object detection a formidable challenge. To address these issues, we first develop a mathematical model to explore how changing receptive fields impacts polynomial fitting results. Subsequently, based on the obtained conclusions, we propose a simple but effective Hybrid Receptive Field Network (HRFNet), whose modules include Hybrid Feature Augmentation (HFA), Hybrid Feature Pyramid (HFP) and Dual Scale Head (DSH). Specifically, HFA employs parallel dilated convolution kernels of different sizes to extend shallow features with different receptive fields, improving the multi-scale adaptability of the network; HFP enhances the perception of small objects by capturing contextual information across layers, while DSH reconstructs the original prediction head using a set of high-resolution and ultra-high-resolution features. In addition, a corresponding dual-scale loss function is designed to train HRFNet. Finally, comprehensive evaluation results on public benchmarks such as VisDrone-DET and TinyPerson demonstrate the robustness of the proposed method. Most impressively, the proposed HRFNet achieves a mAP of 51.0 on VisDrone-DET with 29.3 M parameters, outperforming extant state-of-the-art detectors. HRFNet also performs excellently in complex scenarios captured by drones, achieving the best performance on the CS-Drone dataset we built.
Abstract: To address the challenges of low detection accuracy caused by the diverse species, significant size variations, and complex growth environments of wheat pests in natural settings, a PSA-YOLO11n algorithm is proposed to enhance detection precision. Building upon the YOLO11n framework, the proposed improvements include three key components: 1) SimCSPSPPF in the Backbone: an improved Spatial Pyramid Pooling-Fast (SPPF) module, SimCSPSPPF, is integrated into the Backbone to reduce the number of channels in the hidden layers, thereby accelerating model training. 2) PEC in the Neck: the standard convolution layers in the Neck are replaced with Perception Enhancement Convolutions (PEC) to improve multi-scale feature extraction capabilities, enhancing detection speed. 3) AWIoU loss function: the regression loss function is replaced with Adequate Wise IoU (AWIoU), addressing issues of bounding-box distortion caused by the diversity in pest species and size variations, thereby improving the precision of bounding-box localization. Experimental evaluations on the IP102 dataset demonstrate that PSA-YOLO11n achieves a mean Average Precision (mAP) of 89.10%, surpassing YOLO11n by 0.8%. Comparisons with other mainstream algorithms, including Faster R-CNN, RetinaNet, YOLOv5s, YOLOv8n, YOLOv10n, and YOLO11n, confirm that PSA-YOLO11n outperforms all baselines in terms of detection performance. These results highlight the algorithm's capability to significantly improve the detection accuracy of multi-scale wheat pests in natural environments, providing an effective solution for pest management in wheat production.
基金supported by the National Natural Science Foundation of China(Grant Nos.62101275 and 62101274).
Abstract: UAV-based object detection is rapidly expanding in both civilian and military applications, including security surveillance, disaster assessment, and border patrol. However, challenges such as small objects, occlusions, complex backgrounds, and variable lighting persist due to the unique perspective of UAV imagery. To address these issues, this paper introduces DAFPN-YOLO, an innovative model based on YOLOv8s (You Only Look Once version 8s). The model strikes a balance between detection accuracy and speed while reducing parameters, making it well-suited for multi-object detection tasks from drone perspectives. A key feature of DAFPN-YOLO is the enhanced Drone-AFPN (Adaptive Feature Pyramid Network), which adaptively fuses multi-scale features to optimize feature extraction and enhance spatial and small-object information. To fully leverage Drone-AFPN's multi-scale capabilities, a dedicated 160×160 small-object detection head was added, significantly boosting detection accuracy for small targets. In the backbone, the C2f_Dual (Cross Stage Partial with Cross-Stage Feature Fusion Dual) module and the SPPELAN (Spatial Pyramid Pooling with Enhanced Local Attention Network) module were integrated. These components improve feature extraction and information aggregation while reducing parameters and computational complexity, enhancing inference efficiency. Additionally, Shape-IoU (Shape Intersection over Union) is used as the loss function for bounding-box regression, enabling more precise shape-based object matching. Experimental results on the VisDrone 2019 dataset demonstrate the effectiveness of DAFPN-YOLO. Compared to YOLOv8s, the proposed model achieves a 5.4 percentage point increase in mAP@0.5, a 3.8 percentage point improvement in mAP@0.5:0.95, and a 17.2% reduction in parameter count. These results highlight DAFPN-YOLO's advantages in UAV-based object detection, offering valuable insights for applying deep learning to UAV-specific multi-object detection tasks.
Funding: This work received funding from the Horizon Europe Framework Programme (HORIZON), call Teaming for Excellence (HORIZON-WIDERA-2022-ACCESS-01-two-stage), Creation of the centre of excellence in smart forestry "Forest 4.0", No. 101059985, funded by the European Union under the project FOREST 4.0, "Ekscelencijos centras tvariai miško bioekonomikai vystyti", No. 10-042-P-0002.
Abstract: Forests are vital ecosystems that play a crucial role in sustaining life on Earth and supporting human well-being. Traditional forest mapping and monitoring methods are often costly and limited in scope, necessitating the adoption of advanced, automated approaches for improved forest conservation and management. This study explores the application of deep learning-based object detection techniques for individual tree detection in RGB satellite imagery. A dataset of 3157 images was collected and divided into training (2528), validation (495), and testing (134) sets. To enhance model robustness and generalization, data augmentation was applied to the training part of the dataset. Various YOLO-based models, including YOLOv8, YOLOv9, YOLOv10, YOLOv11, and YOLOv12, were evaluated using different hyperparameters and optimization techniques, such as stochastic gradient descent (SGD) and auto-optimization. These models were assessed in terms of detection accuracy and the number of detected trees. The highest-performing model, YOLOv12m, achieved a mean average precision (mAP@50) of 0.908, mAP@50:95 of 0.581, recall of 0.851, precision of 0.852, and an F1-score of 0.847. The results demonstrate that YOLO-based object detection offers a highly efficient, scalable, and accurate solution for individual tree detection in satellite imagery, facilitating improved forest inventory, monitoring, and ecosystem management. This study underscores the potential of AI-driven tree detection to enhance environmental sustainability and support data-driven decision-making in forestry.
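The reported precision, recall, and F1-score are tied together by the usual definitions; a small worked example with made-up detection counts:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 85 correct detections, 15 false alarms, 15 misses.
p, r = precision_recall(tp=85, fp=15, fn=15)
print(round(p, 2), round(r, 2), round(f1_score(p, r), 2))  # 0.85 0.85 0.85
```

mAP@50 extends this by averaging precision over the precision-recall curve at an IoU threshold of 0.5, then averaging over classes.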
Abstract: Improving consumer satisfaction with the appearance and surface quality of wood-based products requires inspection methods that are both accurate and efficient. The adoption of artificial intelligence (AI) for surface evaluation has emerged as a promising solution. Since the visual appeal of wooden products directly impacts their market value and overall business success, effective quality control is crucial. However, conventional inspection techniques often fail to meet performance requirements due to limited accuracy and slow processing times. To address these shortcomings, the authors propose a real-time deep learning-based system for evaluating surface appearance quality. The method integrates object detection and classification within an area attention framework and leverages R-ELAN for advanced fine-tuning. This architecture supports precise identification and classification of multiple objects, even under ambiguous or visually complex conditions. Furthermore, the model is computationally efficient and well-suited to moderate or domain-specific datasets commonly found in industrial inspection tasks. Experimental validation on the Zenodo dataset shows that the model achieves an average precision (AP) of 60.6%, outperforming the current state-of-the-art YOLOv12 model (55.3%), with a fast inference time of approximately 70 milliseconds. These results underscore the potential of AI-powered methods to enhance surface quality inspection in the wood manufacturing sector.
Funding: Supported in part by the National Science Foundation of China (52371372); the Project of the Science and Technology Commission of Shanghai Municipality, China (22JC1401400, 21190780300); and the 111 Project, China (D18003).
Abstract: Dear Editor, this letter focuses on the fact that small objects with few pixels disappear in feature maps with large receptive fields as the network deepens in object detection tasks. Therefore, the detection of dense small objects is challenging.
Abstract: Bees play a crucial role in the global food chain, pollinating over 75% of food crops and producing valuable products such as bee pollen, propolis, and royal jelly. However, the Asian hornet poses a serious threat to bee populations by preying on them and disrupting agricultural ecosystems. To address this issue, this study developed a modified YOLOv7-tiny (You Only Look Once) model for efficient hornet detection. The model incorporated space-to-depth (SPD) and squeeze-and-excitation (SE) attention mechanisms and involved detailed annotation of the hornet's head and full body, significantly enhancing the detection of small objects. The Taguchi method was also used to optimize the training parameters, resulting in optimal performance. Data for this study were collected from the Roboflow platform as a 640×640 resolution dataset, on which the YOLOv7-tiny model was trained. After optimizing the training parameters using the Taguchi method, significant improvements were observed in accuracy, precision, recall, F1 score, and mean average precision (mAP) for hornet detection. Without the hornet head label, incorporating the SPD attention mechanism resulted in a peak mAP of 98.7%, representing an 8.58% increase over the original YOLOv7-tiny. By including the hornet head label and applying the SPD attention mechanism and the Soft-CIoU loss function, the mAP was further enhanced to 97.3%, a 7.04% increase over the original YOLOv7-tiny. Furthermore, the Soft-CIoU loss function contributed additional performance enhancements during the validation phase.
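The space-to-depth (SPD) operation mentioned above trades spatial resolution for channels without discarding information, which is why it helps with small objects. A minimal NumPy version of the rearrangement:

```python
import numpy as np

def space_to_depth(x, block=2):
    """Rearrange an (H, W, C) feature map into (H/block, W/block,
    C*block*block): each block x block spatial neighbourhood becomes
    extra channels, so fine detail survives the downsampling instead
    of being discarded by a strided convolution or pooling layer."""
    h, w, c = x.shape
    assert h % block == 0 and w % block == 0
    x = x.reshape(h // block, block, w // block, block, c)
    x = x.transpose(0, 2, 1, 3, 4)           # group the block dims together
    return x.reshape(h // block, w // block, c * block * block)

x = np.arange(16, dtype=float).reshape(4, 4, 1)
y = space_to_depth(x, block=2)
print(y.shape)  # (2, 2, 4): half the resolution, four times the channels
```

In an SPD-style detector this rearrangement is followed by a non-strided convolution, replacing a stride-2 layer while keeping every input pixel's information available.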
Funding: Supported by the National Natural Science Foundation of China (61673186 and 61871196), the Beijing Normal University Education Reform Project (jx2024040), and the Guangdong Undergraduate Universities Teaching Quality and Reform Project (jx2024309).
Abstract: Data augmentation plays an important role in boosting the performance of 3D models, yet very few studies handle 3D point-cloud data with this technique. Global augmentation and cut-paste are commonly used augmentation techniques for point clouds, where global augmentation is applied to the entire point cloud of the scene, and cut-paste samples objects from other frames into the current frame. Both types of data augmentation can improve performance, but the cut-paste technique cannot effectively deal with the occlusion relationship between the foreground object and the background scene or with the rationality of object sampling, which may be counterproductive and hurt overall performance. In addition, LiDAR is susceptible to signal loss, external occlusion, extreme weather and other factors, which can easily cause object shape changes, while global augmentation and cut-paste cannot effectively enhance the robustness of the model. To this end, we propose Syn-Aug, a synchronous data augmentation framework for LiDAR-based 3D object detection. Specifically, we first propose a novel rendering-based object augmentation technique (Ren-Aug) to enrich training data while enhancing scene realism. Second, we propose a local augmentation technique (Local-Aug) to generate local noise by rotating and scaling objects in the scene while avoiding collisions, which can improve generalisation performance. Finally, we make full use of the structural information of 3D labels to make the model more robust by randomly changing the geometry of objects in the training frames. We verify the proposed framework with four different types of 3D object detectors. Experimental results show that our proposed Syn-Aug significantly improves the performance of various 3D object detectors on the KITTI and nuScenes datasets, proving the effectiveness and generality of Syn-Aug. On KITTI, the four baseline models using Syn-Aug improved mAP by 0.89%, 1.35%, 1.61% and 1.14%, respectively. On nuScenes, the four baseline models using Syn-Aug improved mAP by 14.93%, 10.42%, 8.47% and 6.81%, respectively. The code is available at https://github.com/liuhuaijjin/Syn-Aug.
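Global augmentation of a LiDAR frame, the baseline that Local-Aug refines per object, can be sketched as a random yaw rotation plus a uniform scale; the parameter ranges below are illustrative defaults, not the paper's settings.

```python
import numpy as np

def augment_points(points, rng, max_angle=np.pi / 4, scale_range=(0.95, 1.05)):
    """Global point-cloud augmentation: one random yaw rotation about
    the vertical (z) axis plus one random uniform scale, applied to
    an (N, 3) array of xyz points."""
    angle = rng.uniform(-max_angle, max_angle)
    scale = rng.uniform(*scale_range)
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return (points @ rot.T) * scale

rng = np.random.default_rng(42)
pts = np.array([[1.0, 0.0, 0.5], [0.0, 2.0, 1.0]])
out = augment_points(pts, rng)

# A rigid rotation plus uniform scale preserves pairwise distance ratios:
d_in = np.linalg.norm(pts[0] - pts[1])
d_out = np.linalg.norm(out[0] - out[1])
print(abs(d_out / d_in - 1.0) <= 0.05)  # within the 5% scale range
```

The per-object variant would apply a transform like this only to points inside one ground-truth box, with a collision check against neighbouring boxes before committing the change.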
Funding: Supported by the Natural Science Foundation Project of Fujian Province, China (Grant Nos. 2023J011439 and 2019J01859).
Abstract: With rapid urbanization and exponential population growth in China, two-wheeled vehicles have become a popular mode of transportation, particularly for short-distance travel. However, due to a lack of safety awareness, traffic violations by two-wheeled vehicle riders have become a widespread concern, contributing to urban traffic risks. Currently, significant human and material resources are being allocated to monitor and intercept non-compliant riders to ensure safe driving behavior. To enhance the safety, efficiency, and cost-effectiveness of traffic monitoring, automated detection systems based on image processing algorithms can be employed to identify traffic violations from eye-level video footage. In this study, we propose a robust detection algorithm specifically designed for two-wheeled vehicles, which serves as a fundamental step toward intelligent traffic monitoring. Our approach integrates a novel convolution and attention mechanism to improve detection accuracy and efficiency. Additionally, we introduce a semi-supervised training strategy that leverages a large number of unlabeled images to enhance the model's learning capability by extracting valuable background information. This method enables the model to generalize effectively to diverse urban environments and varying lighting conditions. We evaluate the proposed algorithm on a custom-built dataset, and experimental results demonstrate its superior performance, achieving an average precision (AP) of 95% and a recall (R) of 90.6%. Furthermore, the model maintains a computational cost of only 25.7 GFLOPs while achieving a high processing speed of 249 FPS, making it highly suitable for deployment on edge devices. Compared to existing detection methods, our approach significantly enhances the accuracy and robustness of two-wheeled vehicle identification while ensuring real-time performance.
Abstract: Object detection in occluded environments remains a core challenge in computer vision (CV), especially in domains such as autonomous driving and robotics. While Convolutional Neural Network (CNN)-based two-dimensional (2D) and three-dimensional (3D) object detection methods have made significant progress, they often fall short under severe occlusion due to depth ambiguities in 2D imagery and the high cost and deployment limitations of 3D sensors such as Light Detection and Ranging (LiDAR). This paper presents a comparative review of recent 2D and 3D detection models, focusing on their occlusion-handling capabilities and the impact of sensor modalities such as stereo vision, Time-of-Flight (ToF) cameras, and LiDAR. In this context, we introduce FuDensityNet, our multimodal occlusion-aware detection framework that combines Red-Green-Blue (RGB) images and LiDAR data to enhance detection performance. As a forward-looking direction, we propose a monocular depth-estimation extension to FuDensityNet, aimed at replacing expensive 3D sensors with a more scalable CNN-based pipeline. Although this enhancement is not experimentally evaluated in this manuscript, we describe its conceptual design and potential for future implementation.
Funding: Program for Young Excellent Talents in the Universities of Fujian Province, Grant/Award Number: 201847; The National Key Research and Development Program of China, Grant/Award Number: 2022YFB3206605; Natural Science Foundation of Xiamen Municipality, Grant/Award Number: 3502Z20227189.
Abstract: Efficient detection of surface defects is essential for ensuring product quality during manufacturing processes. To enhance the performance of deep learning-based methods in practical applications, the authors propose Dense-YOLO, a fast surface defect detection network that combines the strengths of DenseNet and you only look once version 3 (YOLOv3). The authors design a lightweight backbone network with improved densely connected blocks, optimising the utilisation of shallow features while maintaining high detection speeds. Additionally, the authors refine the feature pyramid network of YOLOv3 to increase the recall of tiny defects and the overall positioning accuracy. Furthermore, an online multi-angle template matching technique based on normalised cross-correlation is introduced to precisely locate the detection area. This refined template matching method not only accelerates detection but also mitigates the influence of the background. To validate the effectiveness of the enhancements, the authors conduct comparative experiments across two private datasets and one public dataset. Results show that Dense-YOLO outperforms existing methods, such as Faster R-CNN, YOLOv3, YOLOv5s, YOLOv7, and SSD, in terms of mean average precision (mAP) and detection speed. Moreover, Dense-YOLO outperforms networks inherited from VGG and ResNet, including improved Faster R-CNN, FCOS, M2Det-320 and FRCN, in mAP.
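Normalised cross-correlation, the similarity score behind the template matching step, can be sketched in a few lines; this brute-force single-angle search is for illustration only (the paper's online multi-angle variant is more elaborate).

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalised cross-correlation of two equal-shape arrays.
    Returns a score in [-1, 1]; invariant to linear brightness and
    contrast changes of either input."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0.0:          # flat patch or template: no correlation signal
        return 0.0
    return float((a * b).sum() / denom)

def match_template(image, template):
    """Exhaustively slide the template over the image and return the
    (row, col) offset with the highest NCC score."""
    th, tw = template.shape
    best_score, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            s = ncc(image[r:r + th, c:c + tw], template)
            if s > best_score:
                best_score, best_pos = s, (r, c)
    return best_pos, best_score

img = np.zeros((6, 6))
img[2:4, 3:5] = [[1.0, 2.0], [3.0, 4.0]]      # embed the pattern at (2, 3)
pos, score = match_template(img, np.array([[1.0, 2.0], [3.0, 4.0]]))
print(pos, round(score, 3))  # (2, 3) 1.0
```

Restricting subsequent defect detection to the best-matching region is what lets this step both cut inference time and suppress background clutter.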