In this paper, a two-stage light detection and ranging (LiDAR) three-dimensional (3D) object detection framework is presented, namely the point-voxel dual transformer (PV-DT3D), which is a transformer-based method. In the proposed PV-DT3D, point-voxel fusion features are used for proposal refinement. Specifically, keypoints are sampled from the entire point cloud scene and used to encode representative scene features via a proposal-aware voxel set abstraction module. Subsequently, following the generation of proposals by the region proposal network (RPN), the internally encoded keypoints are fed into a dual transformer encoder-decoder architecture. For 3D object detection, the proposed PV-DT3D takes advantage of both point-wise and channel-wise transformer attention to capture contextual information from the spatial and channel dimensions. Experiments conducted on the highly competitive KITTI 3D car detection leaderboard show that PV-DT3D achieves superior detection accuracy among state-of-the-art point-voxel-based methods.
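To make the "dual" attention concrete: point-wise attention mixes information across keypoints, while channel-wise attention mixes information across feature channels. The single-head, projection-free sketch below is a minimal illustration of the two directions, not the paper's implementation.

```python
import torch

def point_wise_attention(x):
    """Attend across the N points (rows of x): each point aggregates
    context from all other points. x: (N, C)."""
    attn = torch.softmax(x @ x.T / x.shape[1] ** 0.5, dim=-1)  # (N, N)
    return attn @ x

def channel_wise_attention(x):
    """Attend across the C channels (columns of x): each channel
    aggregates context from all other channels. x: (N, C)."""
    attn = torch.softmax(x.T @ x / x.shape[0] ** 0.5, dim=-1)  # (C, C)
    return x @ attn

x = torch.randn(128, 64)          # 128 keypoints, 64-d features
y = point_wise_attention(x) + channel_wise_attention(x)
print(y.shape)                    # torch.Size([128, 64])
```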
Aiming at the problems of low detection accuracy and large model size in existing object detection algorithms applied to complex road scenes, an improved you only look once version 8 (YOLOv8) object detection algorithm for infrared images, F-YOLOv8, is proposed. First, a space-to-depth network replaces the strided convolution and pooling layers of the traditional backbone network. At the same time, it is combined with a channel attention mechanism so that the neural network focuses on the channels with large weight values and better extracts feature information from low-resolution images. Then an improved lightweight bidirectional feature pyramid network (L-BiFPN) is proposed, which efficiently fuses features of different scales. In addition, an intersection-over-union loss function based on the minimum point distance (MPDIoU) is introduced for bounding box regression, which yields faster convergence and more accurate regression results. Experimental results on the FLIR dataset show that the improved algorithm can accurately detect infrared road targets in real time, with 3% and 2.2% gains in mean average precision at 50% IoU (mAP50) and at 50%-95% IoU (mAP50-95), respectively, and 38.1%, 37.3%, and 16.9% reductions in the number of model parameters, the model weight, and floating-point operations (FLOPs), respectively. To further demonstrate the detection capability of the improved algorithm, it is tested on the public PASCAL VOC dataset, and the results show that F-YOLOv8 has excellent generalized detection performance.
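The space-to-depth idea can be illustrated in a few lines: rearrange each 2×2 spatial block into channels, so downsampling discards no pixels, then mix channels with a 1×1 convolution. The module name and block size are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-depth downsampling: move each 2x2 spatial block into the
    channel dimension (lossless), then mix channels with a 1x1 conv,
    instead of discarding pixels with a strided convolution."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.proj = nn.Conv2d(4 * c_in, c_out, kernel_size=1)

    def forward(self, x):
        x = nn.functional.pixel_unshuffle(x, 2)  # (B, 4C, H/2, W/2)
        return self.proj(x)

x = torch.randn(1, 32, 64, 64)
print(SPDConv(32, 64)(x).shape)  # torch.Size([1, 64, 32, 32])
```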
Drone-based small object detection is of great significance in practical applications such as military actions, disaster rescue, transportation, etc. However, the severe scale differences among objects captured by drones and the lack of detail information for small-scale objects make drone-based small object detection a formidable challenge. To address these issues, we first develop a mathematical model to explore how changing receptive fields impacts polynomial fitting results. Subsequently, based on the obtained conclusions, we propose a simple but effective Hybrid Receptive Field Network (HRFNet), whose modules include Hybrid Feature Augmentation (HFA), Hybrid Feature Pyramid (HFP), and Dual Scale Head (DSH). Specifically, HFA employs parallel dilated convolution kernels of different sizes to extend shallow features with different receptive fields, improving the multi-scale adaptability of the network; HFP enhances the perception of small objects by capturing contextual information across layers; and DSH reconstructs the original prediction head using a set of high-resolution and ultra-high-resolution features. In addition, a corresponding dual-scale loss function is designed to train HRFNet. Finally, comprehensive evaluation results on public benchmarks such as VisDrone-DET and TinyPerson demonstrate the robustness of the proposed method. Most impressively, the proposed HRFNet achieves an mAP of 51.0 on VisDrone-DET with 29.3 M parameters, outperforming existing state-of-the-art detectors. HRFNet also performs excellently in complex scenarios captured by drones, achieving the best performance on the CS-Drone dataset we built.
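The HFA idea of widening shallow receptive fields with parallel dilated kernels can be sketched as follows; the class name, dilation rates, and residual sum are illustrative assumptions rather than the paper's exact module.

```python
import torch
import torch.nn as nn

class HybridReceptiveField(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates give each
    branch a different receptive field; summing them augments shallow
    features with multi-scale context."""
    def __init__(self, channels, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )

    def forward(self, x):
        return x + sum(b(x) for b in self.branches)

x = torch.randn(1, 16, 80, 80)
print(HybridReceptiveField(16)(x).shape)  # torch.Size([1, 16, 80, 80])
```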
UAV-based object detection is rapidly expanding in both civilian and military applications, including security surveillance, disaster assessment, and border patrol. However, challenges such as small objects, occlusions, complex backgrounds, and variable lighting persist due to the unique perspective of UAV imagery. To address these issues, this paper introduces DAFPN-YOLO, an innovative model based on YOLOv8s (You Only Look Once version 8s). The model strikes a balance between detection accuracy and speed while reducing parameters, making it well-suited for multi-object detection tasks from drone perspectives. A key feature of DAFPN-YOLO is the enhanced Drone-AFPN (Adaptive Feature Pyramid Network), which adaptively fuses multi-scale features to optimize feature extraction and enhance spatial and small-object information. To leverage Drone-AFPN's multi-scale capabilities fully, a dedicated 160×160 small-object detection head was added, significantly boosting detection accuracy for small targets. In the backbone, the C2f_Dual (Cross Stage Partial with Cross-Stage Feature Fusion Dual) module and SPPELAN (Spatial Pyramid Pooling with Enhanced Local Attention Network) module were integrated. These components improve feature extraction and information aggregation while reducing parameters and computational complexity, enhancing inference efficiency. Additionally, Shape-IoU (Shape Intersection over Union) is used as the loss function for bounding box regression, enabling more precise shape-based object matching. Experimental results on the VisDrone 2019 dataset demonstrate the effectiveness of DAFPN-YOLO. Compared to YOLOv8s, the proposed model achieves a 5.4 percentage point increase in mAP@0.5, a 3.8 percentage point improvement in mAP@0.5:0.95, and a 17.2% reduction in parameter count. These results highlight DAFPN-YOLO's advantages in UAV-based object detection, offering valuable insights for applying deep learning to UAV-specific multi-object detection tasks.
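Since the abstract ties the new head to a 160×160 grid, a quick way to see which pyramid level that corresponds to is the stride arithmetic below; the 640×640 input size is a common YOLOv8 default assumed here, not stated in the abstract.

```python
# Each detection head predicts on a grid of (input_size / stride) cells.
# For an assumed 640x640 input, a 160x160 head sits on the stride-4 (P2)
# level, where small objects still cover several grid cells.
for stride in (4, 8, 16, 32):
    cells = 640 // stride
    print(f"stride {stride:2d} -> {cells}x{cells} grid")
```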
Top-view fisheye cameras are widely used in personnel surveillance for their broad field of view, but their unique imaging characteristics pose challenges such as distortion, complex scenes, scale variations, and small objects near image edges. To tackle these, we propose peripheral focus you only look once (PF-YOLO), an enhanced YOLOv8n-based method. Firstly, we introduce a cutting-patch data augmentation strategy to mitigate the problem of insufficient small-object samples in various scenes. Secondly, to enhance the model's focus on small objects near the edges, we design the peripheral focus loss, which uses dynamic focus coefficients to provide greater gradient gains for these objects, improving their regression accuracy. Finally, we design a three-dimensional (3D) spatial-channel coordinate attention C2f module, enhancing spatial and channel perception, suppressing noise, and improving personnel detection. Experimental results demonstrate that PF-YOLO achieves strong performance on the challenging events for person detection from overhead fisheye images (CEPDTOF) and in-the-wild events for people detection and tracking from overhead fisheye cameras (WEPDTOF) datasets. Compared to the original YOLOv8n model, PF-YOLO achieves improvements on CEPDTOF of 2.1%, 1.7%, and 2.9% in mean average precision 50 (mAP 50), mAP 50-95, and a further metric, respectively. On WEPDTOF, PF-YOLO achieves substantial improvements of 31.4%, 14.9%, 61.1%, and 21.0%, reaching 91.2% and 57.2%, respectively.
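The abstract does not give the formula for the dynamic focus coefficients, so the sketch below only illustrates the stated idea: boxes whose centers lie closer to the image border receive a larger loss weight, and hence a greater gradient gain. The function name, the radial form, and gamma are assumptions.

```python
import torch

def peripheral_weight(cx, cy, img_w, img_h, gamma=1.0):
    """Illustrative dynamic focus coefficient: boxes whose centers (cx, cy)
    lie nearer the image border get a larger loss weight, so edge objects
    (the most distorted in top-view fisheye images) receive greater
    gradient gain. Not the paper's exact formulation."""
    dx = (cx / img_w - 0.5).abs() * 2        # 0 at center, 1 at border
    dy = (cy / img_h - 0.5).abs() * 2
    r = torch.sqrt((dx ** 2 + dy ** 2) / 2)  # normalized radial distance
    return 1.0 + gamma * r                   # weight in [1, 1 + gamma]

cx = torch.tensor([320.0, 20.0])
cy = torch.tensor([320.0, 630.0])
print(peripheral_weight(cx, cy, 640, 640))  # center box -> ~1.0, edge box -> ~1.95
```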
To maintain the reliability of power systems, routine inspections using drones equipped with advanced object detection algorithms are essential for preempting power-related issues. The increasing resolution of drone-captured images has posed a challenge for traditional target detection methods, especially in identifying small objects in high-resolution images. This study presents an enhanced object detection algorithm based on the Faster Region-based Convolutional Neural Network (Faster R-CNN) framework, specifically tailored for detecting small-scale electrical components such as insulators, shock hammers, and screws on transmission lines. The algorithm features an improved backbone network for Faster R-CNN, which significantly boosts the feature extraction network's ability to detect fine details. The Region Proposal Network is optimized using guided feature refinement (GFR), which achieves a balance between accuracy and speed. The incorporation of Generalized Intersection over Union (GIoU) and Region of Interest (RoI) Align further refines the model's accuracy. Experimental results demonstrate a notable improvement in mean Average Precision, reaching 89.3%, an 11.1% increase over the standard Faster R-CNN. This highlights the effectiveness of the proposed algorithm in identifying electrical components in high-resolution aerial images.
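GIoU itself is a standard quantity, so it can be shown concretely: it subtracts from the IoU the fraction of the smallest enclosing box not covered by the union, which keeps gradients informative even for non-overlapping boxes. A minimal sketch:

```python
import torch

def giou(box1, box2):
    """Generalized IoU for axis-aligned boxes in (x1, y1, x2, y2) format."""
    x1 = torch.max(box1[..., 0], box2[..., 0])
    y1 = torch.max(box1[..., 1], box2[..., 1])
    x2 = torch.min(box1[..., 2], box2[..., 2])
    y2 = torch.min(box1[..., 3], box2[..., 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    a1 = (box1[..., 2] - box1[..., 0]) * (box1[..., 3] - box1[..., 1])
    a2 = (box2[..., 2] - box2[..., 0]) * (box2[..., 3] - box2[..., 1])
    union = a1 + a2 - inter
    iou = inter / union
    # smallest enclosing box
    cw = torch.max(box1[..., 2], box2[..., 2]) - torch.min(box1[..., 0], box2[..., 0])
    ch = torch.max(box1[..., 3], box2[..., 3]) - torch.min(box1[..., 1], box2[..., 1])
    c_area = cw * ch
    return iou - (c_area - union) / c_area

b1 = torch.tensor([[0.0, 0.0, 2.0, 2.0]])
b2 = torch.tensor([[1.0, 1.0, 3.0, 3.0]])
print(giou(b1, b2))  # -> tensor([-0.0794]); IoU alone would be 0.1429
```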
Data augmentation plays an important role in boosting the performance of 3D models, yet very few studies apply this technique to 3D point cloud data. Global augmentation and cut-paste are commonly used augmentation techniques for point clouds: global augmentation is applied to the entire point cloud of the scene, while cut-paste samples objects from other frames into the current frame. Both types of data augmentation can improve performance, but the cut-paste technique cannot effectively handle the occlusion relationship between foreground objects and the background scene, or guarantee the plausibility of object sampling, which may be counterproductive and hurt overall performance. In addition, LiDAR is susceptible to signal loss, external occlusion, extreme weather, and other factors that can easily cause object shape changes, and neither global augmentation nor cut-paste can effectively enhance the robustness of the model to such changes. To this end, we propose Syn-Aug, a synchronous data augmentation framework for LiDAR-based 3D object detection. Specifically, we first propose a novel rendering-based object augmentation technique (Ren-Aug) to enrich training data while enhancing scene realism. Second, we propose a local augmentation technique (Local-Aug) that generates local noise by rotating and scaling objects in the scene while avoiding collisions, which improves generalisation performance. Finally, we make full use of the structural information of 3D labels to make the model more robust by randomly changing the geometry of objects in the training frames. We verify the proposed framework with four different types of 3D object detectors. Experimental results show that Syn-Aug significantly improves the performance of various 3D object detectors on the KITTI and nuScenes datasets, proving its effectiveness and generality. On KITTI, four different types of baseline models using Syn-Aug improved mAP by 0.89%, 1.35%, 1.61%, and 1.14%, respectively. On nuScenes, the four baseline models improved mAP by 14.93%, 10.42%, 8.47%, and 6.81%, respectively. The code is available at https://github.com/liuhuaijjin/Syn-Aug.
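As a rough illustration of what a Local-Aug-style perturbation does, the sketch below rotates an object's points around its own center and rescales them slightly, leaving the rest of the scene untouched; the angle and scale ranges are assumptions, and the collision checking the paper performs is omitted for brevity.

```python
import numpy as np

def local_aug(points, max_angle=np.pi / 18, scale_range=(0.95, 1.05)):
    """Rotate one object's points around its own center (yaw only) and
    rescale them slightly; the surrounding scene is left untouched."""
    center = points.mean(axis=0, keepdims=True)
    a = np.random.uniform(-max_angle, max_angle)
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    s = np.random.uniform(*scale_range)
    return (points - center) @ rot.T * s + center

obj = np.random.rand(100, 3)   # xyz points of one object
print(local_aug(obj).shape)    # (100, 3)
```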
Dear Editor, This letter focuses on the fact that, in object detection tasks, small objects with few pixels disappear from feature maps with large receptive fields as the network deepens. The detection of dense small objects is therefore challenging.
Object detection in occluded environments remains a core challenge in computer vision (CV), especially in domains such as autonomous driving and robotics. While Convolutional Neural Network (CNN)-based two-dimensional (2D) and three-dimensional (3D) object detection methods have made significant progress, they often fall short under severe occlusion due to depth ambiguities in 2D imagery and the high cost and deployment limitations of 3D sensors such as Light Detection and Ranging (LiDAR). This paper presents a comparative review of recent 2D and 3D detection models, focusing on their occlusion-handling capabilities and the impact of sensor modalities such as stereo vision, Time-of-Flight (ToF) cameras, and LiDAR. In this context, we introduce FuDensityNet, our multimodal occlusion-aware detection framework that combines Red-Green-Blue (RGB) images and LiDAR data to enhance detection performance. As a forward-looking direction, we propose a monocular depth-estimation extension to FuDensityNet, aimed at replacing expensive 3D sensors with a more scalable CNN-based pipeline. Although this enhancement is not experimentally evaluated in this manuscript, we describe its conceptual design and potential for future implementation.
The YOLO (You Only Look Once) series, a leading single-stage object detection framework, has gained significant prominence in medical-image analysis due to its real-time efficiency and robust performance. Recent iterations of YOLO have further enhanced its accuracy and reliability in critical clinical tasks such as tumor detection, lesion segmentation, and microscopic image analysis, thereby accelerating the development of clinical decision support systems. This paper systematically reviews advances in YOLO-based medical object detection from 2018 to 2024. It compares YOLO's performance with other models (e.g., Faster R-CNN, RetinaNet) in medical contexts, summarizes standard evaluation metrics (e.g., mean Average Precision (mAP), sensitivity), and analyzes hardware deployment strategies using public datasets such as LUNA16, BraTS, and CheXpert. The review highlights the impressive performance of YOLO models, particularly from YOLOv5 to YOLOv8, in achieving high precision (up to 99.17%), sensitivity (up to 97.5%), and mAP exceeding 95% in tasks such as lung nodule, breast cancer, and polyp detection. These results demonstrate the significant potential of YOLO models for early disease detection and real-time clinical applications, indicating their ability to enhance clinical workflows. However, the study also identifies key challenges, including high small-object miss rates, limited generalization in low-contrast images, scarcity of annotated data, and model interpretability issues. Finally, potential future research directions are proposed to address these challenges and further advance the application of YOLO models in healthcare.
In foggy traffic scenarios, existing object detection algorithms face challenges such as low detection accuracy, poor robustness, occlusion, missed detections, and false detections. To address these issues, a multi-scale object detection algorithm based on an improved YOLOv8 is proposed. Firstly, a lightweight attention mechanism, Triplet Attention, is introduced to enhance the algorithm's ability to extract multi-dimensional and multi-scale features, thereby improving the receptive capability of the feature maps. Secondly, the Diverse Branch Block (DBB) is integrated into the CSP Bottleneck with two Convolutions (C2f) module to strengthen the fusion of semantic information across different layers. Thirdly, a new decoupled detection head is proposed by redesigning the original network head based on the Diverse Branch Block module, improving detection accuracy and reducing missed and false detections. Finally, the Minimum Point Distance based Intersection-over-Union (MPDIoU) replaces the original YOLOv8 Complete Intersection-over-Union (CIoU) to accelerate training convergence. Comparative experiments and dehazing pre-processing tests were conducted on the RTTS and VOC-Fog datasets. Compared to the baseline YOLOv8 model, the improved algorithm achieved mean Average Precision (mAP) improvements of 4.6% and 3.8%, respectively. After defogging pre-processing, the mAP increased by 5.3% and 4.4%, respectively. The experimental results demonstrate that the improved algorithm exhibits high practicality and effectiveness in foggy traffic scenarios.
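MPDIoU has a published closed form: the IoU penalized by the squared distances between the two boxes' top-left and bottom-right corners, normalized by the squared image diagonal. A minimal sketch of the loss, with boxes in (x1, y1, x2, y2) format:

```python
import torch

def mpdiou_loss(pred, gt, img_w, img_h):
    """Minimum-point-distance IoU loss (sketch): 1 - (IoU - corner-distance
    penalties), where the penalties are squared top-left and bottom-right
    corner distances normalized by the squared image diagonal."""
    x1 = torch.max(pred[..., 0], gt[..., 0]); y1 = torch.max(pred[..., 1], gt[..., 1])
    x2 = torch.min(pred[..., 2], gt[..., 2]); y2 = torch.min(pred[..., 3], gt[..., 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_g = (gt[..., 2] - gt[..., 0]) * (gt[..., 3] - gt[..., 1])
    iou = inter / (area_p + area_g - inter)
    diag2 = img_w ** 2 + img_h ** 2
    d1 = (pred[..., 0] - gt[..., 0]) ** 2 + (pred[..., 1] - gt[..., 1]) ** 2
    d2 = (pred[..., 2] - gt[..., 2]) ** 2 + (pred[..., 3] - gt[..., 3]) ** 2
    return 1 - (iou - d1 / diag2 - d2 / diag2)

pred = torch.tensor([[10.0, 10.0, 50.0, 50.0]])
gt = torch.tensor([[12.0, 12.0, 48.0, 52.0]])
print(mpdiou_loss(pred, gt, 640, 640))  # small loss for well-aligned boxes
```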
To address the challenges of low accuracy and insufficient real-time performance in dynamic object detection for UAV surveillance, this paper proposes a novel tracking framework that integrates a lightweight improved YOLOv5s model with adaptive motion compensation. A UAV-view dynamic feature enhancement strategy is introduced, and a lightweight detection network combining attention mechanisms and multi-scale fusion is constructed. The robustness of tracking under motion blur is also optimized. Experimental results demonstrate that the proposed method achieves a mAP@0.5 of 68.2% on the VisDrone dataset and reaches an inference speed of 32 FPS on the NVIDIA Jetson TX2 platform. This significantly improves the balance between accuracy and efficiency in complex scenes, offering reliable technical support for real-time applications such as emergency response.
Although conventional object detection methods achieve high accuracy through extensively annotated datasets, acquiring such large-scale labeled data remains challenging and cost-prohibitive in numerous real-world applications. Few-shot object detection offers a new research direction that aims to localize and classify objects in images using only limited annotated examples. However, the inherent challenge in few-shot object detection lies in the insufficient sample diversity to fully characterize the sample feature distribution, which consequently impacts model performance. Inspired by contrastive learning principles, we propose an Implicit Feature Contrastive Learning (IFCL) module to address this limitation and augment feature diversity for more robust representational learning. This module generates augmented support sample features in a mixed feature space and implicitly contrasts them with query Region of Interest (RoI) features. This approach facilitates more comprehensive learning of both intra-class feature similarity and inter-class feature diversity, thereby enhancing the model's object classification and localization capabilities. Extensive experiments on PASCAL VOC show that our method achieves improvements of 3.2%, 1.8%, and 2.3%, respectively, on the 10-shot setting of three Novel Sets compared to the baseline model FPD.
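The exact IFCL module is not spelled out in the abstract, so the following is only a sketch of the general idea: synthesize augmented support features by mixing within the support set, then contrast query RoI features against them with an InfoNCE-style loss. The mixing range, temperature, and all names are assumptions.

```python
import torch
import torch.nn.functional as F

def implicit_contrast(support, query, labels_s, labels_q, tau=0.1):
    """Mix each support feature with a random other support (lambda kept
    high so the dominant label is preserved), then pull query RoI features
    toward same-class mixtures and away from other classes."""
    lam = 0.9 + 0.1 * torch.rand(support.size(0), 1)
    idx = torch.randperm(support.size(0))
    mixed = lam * support + (1 - lam) * support[idx]        # augmented supports
    sim = F.normalize(query, dim=1) @ F.normalize(mixed, dim=1).T / tau
    pos = (labels_q[:, None] == labels_s[None, :]).float()  # same-class pairs
    log_prob = sim.log_softmax(dim=1)
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

q = torch.randn(8, 128); s = torch.randn(20, 128)
lq = torch.randint(0, 5, (8,)); ls = torch.randint(0, 5, (20,))
print(implicit_contrast(s, q, ls, lq))  # scalar contrastive loss
```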
Currently, challenges such as small object size and occlusion lead to a lack of accuracy and robustness in small object detection. Since small objects occupy only a few pixels in an image, the extracted features are limited, and mainstream downsampling convolution operations further exacerbate feature loss. Additionally, because small objects are prone to occlusion and highly sensitive to localization deviations, conventional Intersection over Union (IoU) loss functions struggle to achieve stable convergence. To address these limitations, LR-Net is proposed for small object detection. Specifically, the proposed Lossless Feature Fusion (LFF) method transfers spatial features into the channel domain while leveraging a hybrid attention mechanism to focus on critical features, mitigating the feature loss caused by downsampling. Furthermore, RSIoU is proposed to enhance the convergence of IoU-based losses for small objects. RSIoU corrects the inherent convergence-direction issues in SIoU and introduces a penalty term as a Dynamic Focusing Mechanism parameter, enabling it to dynamically emphasize the loss contribution of small object samples, particularly under occlusion. Experiments demonstrate that LR-Net achieves significant improvements across various metrics on multiple datasets compared with YOLOv8n, including a 3.7% increase in mean Average Precision (mAP) on the VisDrone2019 dataset, along with improvements of 3.3% on the AI-TOD dataset and 1.2% on the COCO dataset.
Accurate defect detection plays a critical role in ensuring product quality and equipment reliability. Small-object detection poses unique challenges due to weak feature representation and significant background interference. To address these issues, this study incorporates three key innovations into the YOLOv8 framework: the use of GhostNet convolution for lightweight and efficient feature extraction, the addition of a P2 detection layer to enhance small-object detection capabilities, and the integration of the Triplet Attention mechanism to capture comprehensive spatial and channel dependencies. These improvements collectively optimize detection performance for small objects while reducing computational complexity. Experimental results demonstrate that the enhanced model achieves a mean average precision (mAP@0.5) of 97.46% and a mAP@0.5:0.95 of 61.84%, representing performance improvements of 1.9% and 3.2%, respectively, compared to the baseline YOLOv8 model. Additionally, the model achieves a frame rate of 158 FPS, maintaining real-time detection capabilities while reducing the parameter count by 50%, further underscoring its efficiency and suitability for small-object detection in complex scenarios.
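The GhostNet convolution the study adopts is a published module whose core trick fits in a few lines: produce half the output channels with an ordinary convolution and the rest with cheap depthwise operations on those primary features. A simplified sketch, assuming an even output channel count:

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Ghost convolution (sketch): half the output channels come from a
    regular convolution, the other half from a cheap depthwise convolution
    applied to those primary features, roughly halving the parameters."""
    def __init__(self, c_in, c_out, k=1):
        super().__init__()
        c_half = c_out // 2
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_half, k, padding=k // 2, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())
        self.cheap = nn.Sequential(
            nn.Conv2d(c_half, c_half, 5, padding=2, groups=c_half, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

print(GhostConv(32, 64)(torch.randn(1, 32, 40, 40)).shape)  # (1, 64, 40, 40)
```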
A novel dual-branch decoding fusion convolutional neural network model (DDFNet), specifically designed for real-time salient object detection (SOD) on steel surfaces, is proposed. DDFNet is based on a standard encoder–decoder architecture and integrates three key innovations. First, we introduce a novel, lightweight multi-scale progressive aggregation residual network that effectively suppresses background interference and refines defect details, enabling efficient salient feature extraction. Second, we propose an innovative dual-branch decoding fusion structure, comprising the refined defect representation branch and the enhanced defect representation branch, which enhances accuracy in defect region identification and feature representation. Additionally, to further improve the detection of small and complex defects, we incorporate a multi-scale attention fusion module. Experimental results on the public ESDIs-SOD dataset show that DDFNet, with only 3.69 million parameters, achieves detection performance comparable to current state-of-the-art models, demonstrating its potential for real-time industrial applications. Furthermore, our DDFNet-L variant consistently outperforms leading methods in detection performance. The code is available at https://github.com/13140W/DDFNet.
Recent years have seen a surge in interest in object detection on remote sensing images for applications such as surveillance and management. However, challenges like small object detection, scale variation, and the presence of closely packed objects in these images hinder accurate detection. Additionally, the motion blur effect further complicates the identification of such objects. To address these issues, we propose an enhanced YOLOv9 with a transformer head (YOLOv9-TH). The model introduces an additional prediction head for detecting objects of varying sizes and swaps the original prediction heads for transformer heads to leverage self-attention mechanisms. We further improve YOLOv9-TH using several strategies, including data augmentation, multi-scale testing, multi-model integration, and the introduction of an additional classifier. The cross-stage partial (CSP) method and the ghost convolution hierarchical graph (GCHG) are combined to improve detection accuracy by better utilizing feature maps, widening the receptive field, and precisely extracting multi-scale objects. Additionally, we incorporate the E-SimAM attention mechanism to address low-resolution feature loss. Extensive experiments on the VisDrone2021 and DIOR datasets demonstrate the effectiveness of YOLOv9-TH, showing good improvements in mAP compared to the best existing methods. YOLOv9-TH-e achieved an mAP50 of 54.2% on the VisDrone2021 dataset and an mAP of 92.3% on the DIOR dataset. The results confirm the model's robustness and suitability for real-world applications, particularly for small object detection in remote sensing images.
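E-SimAM is described only as an enhanced variant, but the base SimAM it builds on is a known parameter-free attention and can be sketched directly: each activation is scored by how much it deviates from its channel mean, and the feature map is gated by a sigmoid of that energy.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Base SimAM: parameter-free attention that scores each activation by
    an energy term (its squared deviation from the channel mean, scaled by
    the channel variance) and gates the features with a sigmoid of it."""
    def __init__(self, e_lambda=1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        v = d.sum(dim=(2, 3), keepdim=True) / n          # channel variance
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5      # inverse energy
        return x * torch.sigmoid(e_inv)

print(SimAM()(torch.randn(1, 8, 20, 20)).shape)  # torch.Size([1, 8, 20, 20])
```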
Knowledge distillation (KD) is an emerging model compression technique for learning compact object detector models. Previous KD approaches often focused solely on distilling from the logits layer or from intermediate feature layers, which may limit the comprehensive learning of the student network. Additionally, the imbalance between foreground and background also affects model performance. To address these issues, this paper employs feature-based distillation to enhance the detection performance of the bounding box localization part, and logit-based distillation to improve the detection performance of the category prediction part. Specifically, for intermediate-layer feature distillation, we introduce feature resampling to reduce the risk of the student model merely imitating the teacher model. At the same time, we incorporate a Spatial Attention Mechanism (SAM) to highlight the foreground features learned by the student model. For output-layer distillation, we divide the traditional distillation targets into target-class objects and non-target-class objects, aiming to improve overall distillation performance. Furthermore, we introduce a one-to-many matching distillation strategy based on a Feature Alignment Module (FAM), which further enhances the student model's feature representation ability, making its feature distribution closer to that of the teacher model and thus demonstrating superior localization and classification capabilities in object detection tasks. Experimental results demonstrate that our proposed methodology outperforms conventional distillation techniques in terms of object detection performance.
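The split into target-class and non-target-class distillation mirrors decoupled knowledge distillation; the sketch below shows that style of logit loss, with a binary target-vs-rest KL term and a renormalized KL term over the remaining classes. The weights alpha, beta, and the temperature are illustrative assumptions, and this is not claimed to be the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def decoupled_kd(logits_s, logits_t, target, T=4.0, alpha=1.0, beta=2.0):
    """Decoupled logit distillation (sketch): a target-class term over the
    binary [target, rest] distribution, plus a non-target term over the
    remaining classes, weighted separately and scaled by T^2."""
    mask = F.one_hot(target, logits_s.size(1)).bool()
    p_s = (logits_s / T).softmax(1)
    p_t = (logits_t / T).softmax(1)
    # target-class term: 2-way distribution [p_target, 1 - p_target]
    pt_s = torch.stack([p_s[mask], 1 - p_s[mask]], 1)
    pt_t = torch.stack([p_t[mask], 1 - p_t[mask]], 1)
    tckd = F.kl_div(pt_s.log(), pt_t, reduction="batchmean")
    # non-target term: renormalized distribution over the other classes
    nt_s = (logits_s / T).masked_fill(mask, -1e9).log_softmax(1)
    nt_t = (logits_t / T).masked_fill(mask, -1e9).softmax(1)
    nckd = F.kl_div(nt_s, nt_t, reduction="batchmean")
    return (alpha * tckd + beta * nckd) * T * T

ls = torch.randn(4, 10); lt = torch.randn(4, 10)
y = torch.randint(0, 10, (4,))
print(decoupled_kd(ls, lt, y))  # scalar distillation loss
```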
At present, salient object detection (SOD) has achieved considerable progress. However, even the methods that perform well still face the issue of inadequate detection accuracy, such as missed and false detections. Effectively optimizing features to capture key information and better integrating different levels of features to enhance their complementarity are two significant challenges in the domain of SOD. In response to these challenges, this study proposes a novel SOD method based on multi-strategy feature optimization. We propose the multi-size feature extraction module (MSFEM), which uses the attention mechanism, multi-level feature fusion, and the residual block to obtain finer features. This module provides robust support for the subsequent accurate detection of the salient object. In addition, we use two rounds of feature fusion and a feedback mechanism to optimize the features obtained by the MSFEM and improve detection accuracy. The first round of feature fusion integrates the features extracted by the MSFEM to obtain more refined features. Subsequently, the feedback mechanism and the second round of feature fusion refine the features further, thereby providing a stronger foundation for accurately detecting salient objects. To improve the fusion effect, we propose the feature enhancement module (FEM) and the feature optimization module (FOM). The FEM integrates the upper and lower features with the optimized features obtained by the FOM to enhance feature complementarity. The FOM uses different receptive fields, the attention mechanism, and the residual block to more effectively capture key information. Experimental results demonstrate that our method outperforms 10 state-of-the-art SOD methods.
Object detection plays a critical role in drone imagery analysis, especially in remote sensing applications where accurate and efficient detection of small objects is essential. Despite significant advancements in drone imagery detection, most models still struggle with small object detection due to challenges such as object size and complex backgrounds. To address these issues, we propose a robust detection model based on You Only Look Once (YOLO) that balances accuracy and efficiency. The model contains several major innovations: a feature selection pyramid network, an Inner-Shape Intersection over Union (ISIoU) loss function, and a small object detection head. To overcome the limitations of traditional fusion methods in handling multi-level features, we introduce a Feature Selection Pyramid Network integrated into the Neck component, which preserves the shallow feature details critical for detecting small objects. Additionally, recognizing that deep network structures often neglect or degrade small object features, we design a specialized small object detection head in the shallow layers to enhance detection accuracy for these challenging targets. To effectively model both local and global dependencies, we introduce a Conv-Former module that simulates Transformer mechanisms using a convolutional structure, thereby improving feature enhancement. Furthermore, we employ ISIoU to address object imbalance and scale variation; this accelerates model convergence and improves regression accuracy. Experimental results show that, compared to the baseline model, the proposed method significantly improves small object detection performance on the VisDrone2019 dataset, with mAP@50 increasing by 4.9% and mAP@50-95 rising by 6.7%. The model also outperforms other state-of-the-art algorithms, demonstrating its reliability and effectiveness in both small object detection and remote sensing image fusion tasks.
基金supported by the Natural Science Foundation of China (No.62103298)the South African National Research Foundation (Nos.132797 and 137951)。
文摘In this paper,a two-stage light detection and ranging(LiDAR) three-dimensional(3D) object detection framework is presented,namely point-voxel dual transformer(PV-DT3D),which is a transformer-based method.In the proposed PV-DT3D,point-voxel fusion features are used for proposal refinement.Specifically,keypoints are sampled from entire point cloud scene and used to encode representative scene features via a proposal-aware voxel set abstraction module.Subsequently,following the generation of proposals by the region proposal networks(RPN),the internal encoded keypoints are fed into the dual transformer encoder-decoder architecture.In 3D object detection,the proposed PV-DT3D takes advantage of both point-wise transformer and channel-wise architecture to capture contextual information from the spatial and channel dimensions.Experiments conducted on the highly competitive KITTI 3D car detection leaderboard show that the PV-DT3D achieves superior detection accuracy among state-of-the-art point-voxel-based methods.
基金supported by the National Natural Science Foundation of China(No.62103298)。
文摘Aiming at the problems of low detection accuracy and large model size of existing object detection algorithms applied to complex road scenes,an improved you only look once version 8(YOLOv8)object detection algorithm for infrared images,F-YOLOv8,is proposed.First,a spatial-to-depth network replaces the traditional backbone network's strided convolution or pooling layer.At the same time,it combines with the channel attention mechanism so that the neural network focuses on the channels with large weight values to better extract low-resolution image feature information;then an improved feature pyramid network of lightweight bidirectional feature pyramid network(L-BiFPN)is proposed,which can efficiently fuse features of different scales.In addition,a loss function of insertion of union based on the minimum point distance(MPDIoU)is introduced for bounding box regression,which obtains faster convergence speed and more accurate regression results.Experimental results on the FLIR dataset show that the improved algorithm can accurately detect infrared road targets in real time with 3%and 2.2%enhancement in mean average precision at 50%IoU(mAP50)and mean average precision at 50%—95%IoU(mAP50-95),respectively,and 38.1%,37.3%and 16.9%reduction in the number of model parameters,the model weight,and floating-point operations per second(FLOPs),respectively.To further demonstrate the detection capability of the improved algorithm,it is tested on the public dataset PASCAL VOC,and the results show that F-YOLO has excellent generalized detection performance.
基金supported by the National Natural Science Foundation of China(Nos.62276204 and 62203343)the Fundamental Research Funds for the Central Universities(No.YJSJ24011)+1 种基金the Natural Science Basic Research Program of Shanxi,China(Nos.2022JM-340 and 2023-JC-QN-0710)the China Postdoctoral Science Foundation(Nos.2020T130494 and 2018M633470).
文摘Drone-based small object detection is of great significance in practical applications such as military actions, disaster rescue, transportation, etc. However, the severe scale differences in objects captured by drones and lack of detail information for small-scale objects make drone-based small object detection a formidable challenge. To address these issues, we first develop a mathematical model to explore how changing receptive fields impacts the polynomial fitting results. Subsequently, based on the obtained conclusions, we propose a simple but effective Hybrid Receptive Field Network (HRFNet), whose modules include Hybrid Feature Augmentation (HFA), Hybrid Feature Pyramid (HFP) and Dual Scale Head (DSH). Specifically, HFA employs parallel dilated convolution kernels of different sizes to extend shallow features with different receptive fields, committed to improving the multi-scale adaptability of the network;HFP enhances the perception of small objects by capturing contextual information across layers, while DSH reconstructs the original prediction head utilizing a set of high-resolution features and ultrahigh-resolution features. In addition, in order to train HRFNet, the corresponding dual-scale loss function is designed. Finally, comprehensive evaluation results on public benchmarks such as VisDrone-DET and TinyPerson demonstrate the robustness of the proposed method. Most impressively, the proposed HRFNet achieves a mAP of 51.0 on VisDrone-DET with 29.3 M parameters, which outperforms the extant state-of-the-art detectors. HRFNet also performs excellently in complex scenarios captured by drones, achieving the best performance on the CS-Drone dataset we built.
基金supported by the National Natural Science Foundation of China(Grant Nos.62101275 and 62101274).
文摘UAV-based object detection is rapidly expanding in both civilian and military applications,including security surveillance,disaster assessment,and border patrol.However,challenges such as small objects,occlusions,complex backgrounds,and variable lighting persist due to the unique perspective of UAV imagery.To address these issues,this paper introduces DAFPN-YOLO,an innovative model based on YOLOv8s(You Only Look Once version 8s).Themodel strikes a balance between detection accuracy and speed while reducing parameters,making itwell-suited for multi-object detection tasks from drone perspectives.A key feature of DAFPN-YOLO is the enhanced Drone-AFPN(Adaptive Feature Pyramid Network),which adaptively fuses multi-scale features to optimize feature extraction and enhance spatial and small-object information.To leverage Drone-AFPN’smulti-scale capabilities fully,a dedicated 160×160 small-object detection head was added,significantly boosting detection accuracy for small targets.In the backbone,the C2f_Dual(Cross Stage Partial with Cross-Stage Feature Fusion Dual)module and SPPELAN(Spatial Pyramid Pooling with Enhanced LocalAttentionNetwork)modulewere integrated.These components improve feature extraction and information aggregationwhile reducing parameters and computational complexity,enhancing inference efficiency.Additionally,Shape-IoU(Shape Intersection over Union)is used as the loss function for bounding box regression,enabling more precise shape-based object matching.Experimental results on the VisDrone 2019 dataset demonstrate the effectiveness ofDAFPN-YOLO.Compared to YOLOv8s,the proposedmodel achieves a 5.4 percentage point increase inmAP@0.5,a 3.8 percentage point improvement in mAP@0.5:0.95,and a 17.2%reduction in parameter count.These results highlight DAFPN-YOLO’s advantages in UAV-based object detection,offering valuable insights for applying deep learning to UAV-specific multi-object detection tasks.
基金supported by National Natural Science Foundation of China(Nos.62171042,62102033,U24A20331)the R&D Program of Beijing Municipal Education Commission(No.KZ202211417048)+2 种基金the Project of Construction and Support for High-Level Innovative Teams of Beijing Municipal Institutions(No.BPHR20220121)Beijing Natural Science Foundation(Nos.4232026,4242020)the Academic Research Projects of Beijing Union University(Nos.ZKZD202302,ZK20202403)。
文摘Top-view fisheye cameras are widely used in personnel surveillance for their broad field of view,but their unique imaging characteristics pose challenges like distortion,complex scenes,scale variations,and small objects near image edges.To tackle these,we proposed peripheral focus you only look once(PF-YOLO),an enhanced YOLOv8n-based method.Firstly,we introduced a cutting-patch data augmentation strategy to mitigate the problem of insufficient small-object samples in various scenes.Secondly,to enhance the model's focus on small objects near the edges,we designed the peripheral focus loss,which uses dynamic focus coefficients to provide greater gradient gains for these objects,improving their regression accuracy.Finally,we designed the three dimensional(3D)spatial-channel coordinate attention C2f module,enhancing spatial and channel perception,suppressing noise,and improving personnel detection.Experimental results demonstrate that PF-YOLO achieves strong performance on the challenging events for person detection from overhead fisheye images(CEPDTOF)and in-the-wild events for people detection and tracking from overhead fisheye cameras(WEPDTOF)datasets.Compared to the original YOLOv8n model,PFYOLO achieves improvements on CEPDTOF with increases of 2.1%,1.7%and 2.9%in mean average precision 50(mAP 50),mAP 50-95,and tively.On WEPDTOF,PF-YOLO achieves substantial improvements with increases of 31.4%,14.9%,61.1%and 21.0%in 91.2%and 57.2%,respectively.
基金supported by the Shanghai Science and Technology Innovation Action Plan High-Tech Field Project(Grant No.22511100601)for the year 2022 and Technology Development Fund for People’s Livelihood Research(Research on Transmission Line Deep Foundation Pit Environmental Situation Awareness System Based on Multi-Source Data).
文摘To maintain the reliability of power systems,routine inspections using drones equipped with advanced object detection algorithms are essential for preempting power-related issues.The increasing resolution of drone-captured images has posed a challenge for traditional target detection methods,especially in identifying small objects in high-resolution images.This study presents an enhanced object detection algorithm based on the Faster Regionbased Convolutional Neural Network(Faster R-CNN)framework,specifically tailored for detecting small-scale electrical components like insulators,shock hammers,and screws in transmission line.The algorithm features an improved backbone network for Faster R-CNN,which significantly boosts the feature extraction network’s ability to detect fine details.The Region Proposal Network is optimized using a method of guided feature refinement(GFR),which achieves a balance between accuracy and speed.The incorporation of Generalized Intersection over Union(GIOU)and Region of Interest(ROI)Align further refines themodel’s accuracy.Experimental results demonstrate a notable improvement in mean Average Precision,reaching 89.3%,an 11.1%increase compared to the standard Faster R-CNN.This highlights the effectiveness of the proposed algorithm in identifying electrical components in high-resolution aerial images.
基金supported by National Natural Science Foundation of China(61673186 and 61871196)Beijing Normal University Education Reform Project(jx2024040)Guangdong Undergraduate Universities Teaching Quality and Reform Project(jx2024309).
文摘Data augmentation plays an important role in boosting the performance of 3D models,while very few studies handle the 3D point cloud data with this technique.Global augmentation and cut-paste are commonly used augmentation techniques for point clouds,where global augmentation is applied to the entire point cloud of the scene,and cut-paste samples objects from other frames into the current frame.Both types of data augmentation can improve performance,but the cut-paste technique cannot effectively deal with the occlusion relationship between the foreground object and the background scene and the rationality of object sampling,which may be counterproductive and may hurt the overall performance.In addition,LiDAR is susceptible to signal loss,external occlusion,extreme weather and other factors,which can easily cause object shape changes,while global augmentation and cut-paste cannot effectively enhance the robustness of the model.To this end,we propose Syn-Aug,a synchronous data augmentation framework for LiDAR-based 3D object detection.Specifically,we first propose a novel rendering-based object augmentation technique(Ren-Aug)to enrich training data while enhancing scene realism.Second,we propose a local augmentation technique(Local-Aug)to generate local noise by rotating and scaling objects in the scene while avoiding collisions,which can improve generalisation performance.Finally,we make full use of the structural information of 3D labels to make the model more robust by randomly changing the geometry of objects in the training frames.We verify the proposed framework with four different types of 3D object detectors.Experimental results show that our proposed Syn-Aug significantly improves the performance of various 3D object detectors in the KITTI and nuScenes datasets,proving the effectiveness and generality of Syn-Aug.On KITTI,four different types of baseline models using Syn-Aug improved mAP by 0.89%,1.35%,1.61%and 1.14%respectively.On nuScenes,four different types of baseline models using Syn-Aug improved mAP by 14.93%,10.42%,8.47%and 6.81%respectively.The code is available at https://github.com/liuhuaijjin/Syn-Aug.
基金supported in part by the National Science Foundation of China(52371372)the Project of Science and Technology Commission of Shanghai Municipality,China(22JC1401400,21190780300)the 111 Project,China(D18003)
文摘Dear Editor,This letter focuses on the fact that small objects with few pixels disappear in feature maps with large receptive fields, as the network deepens, in object detection tasks. Therefore, the detection of dense small objects is challenging.
文摘Object detection in occluded environments remains a core challenge in computer vision(CV),especially in domains such as autonomous driving and robotics.While Convolutional Neural Network(CNN)-based twodimensional(2D)and three-dimensional(3D)object detection methods havemade significant progress,they often fall short under severe occlusion due to depth ambiguities in 2D imagery and the high cost and deployment limitations of 3D sensors such as Light Detection and Ranging(LiDAR).This paper presents a comparative review of recent 2D and 3D detection models,focusing on their occlusion-handling capabilities and the impact of sensor modalities such as stereo vision,Time-of-Flight(ToF)cameras,and LiDAR.In this context,we introduce FuDensityNet,our multimodal occlusion-aware detection framework that combines Red-Green-Blue(RGB)images and LiDAR data to enhance detection performance.As a forward-looking direction,we propose a monocular depth-estimation extension to FuDensityNet,aimed at replacing expensive 3D sensors with a more scalable CNN-based pipeline.Although this enhancement is not experimentally evaluated in this manuscript,we describe its conceptual design and potential for future implementation.
基金supported by the National Natural Science Foundation of China under grant number 62066016the Natural Science Foundation of Hunan Province of China under grant number 2024JJ7395+2 种基金the Scientific Research Project of Education Department of Hunan Province of China under grant number 22B0549International and Regional Science and Technology Cooperation and Exchange Program of the Hunan Association for Science and Technology under grant number 025SKX-KJ-04Hunan Province Undergraduate Innovation and Entrepreneurship Training Program(grant number S202410531015).
文摘The YOLO(You Only Look Once)series,a leading single-stage object detection framework,has gained significant prominence in medical-image analysis due to its real-time efficiency and robust performance.Recent iterations of YOLO have further enhanced its accuracy and reliability in critical clinical tasks such as tumor detection,lesion segmentation,and microscopic image analysis,thereby accelerating the development of clinical decision support systems.This paper systematically reviews advances in YOLO-based medical object detection from 2018 to 2024.It compares YOLO’s performance with othermodels(e.g.,Faster R-CNN,RetinaNet)inmedical contexts,summarizes standard evaluation metrics(e.g.,mean Average Precision(mAP),sensitivity),and analyzes hardware deployment strategies using public datasets such as LUNA16,BraTS,andCheXpert.Thereviewhighlights the impressive performance of YOLO models,particularly from YOLOv5 to YOLOv8,in achieving high precision(up to 99.17%),sensitivity(up to 97.5%),and mAP exceeding 95%in tasks such as lung nodule,breast cancer,and polyp detection.These results demonstrate the significant potential of YOLO models for early disease detection and real-time clinical applications,indicating their ability to enhance clinical workflows.However,the study also identifies key challenges,including high small-object miss rates,limited generalization in low-contrast images,scarcity of annotated data,and model interpretability issues.Finally,the potential future research directions are also proposed to address these challenges and further advance the application of YOLO models in healthcare.
基金supported by the National Natural Science Foundation of China(Grant Nos.62101275 and 62101274).
文摘In foggy traffic scenarios, existing object detection algorithms face challenges such as low detection accuracy, poor robustness, occlusion, missed detections, and false detections. To address this issue, a multi-scale object detection algorithm based on an improved YOLOv8 has been proposed. Firstly, a lightweight attention mechanism, Triplet Attention, is introduced to enhance the algorithm’s ability to extract multi-dimensional and multi-scale features, thereby improving the receptive capability of the feature maps. Secondly, the Diverse Branch Block (DBB) is integrated into the CSP Bottleneck with two Convolutions (C2F) module to strengthen the fusion of semantic information across different layers. Thirdly, a new decoupled detection head is proposed by redesigning the original network head based on the Diverse Branch Block module to improve detection accuracy and reduce missed and false detections. Finally, the Minimum Point Distance based Intersection-over-Union (MPDIoU) is used to replace the original YOLOv8 Complete Intersection-over-Union (CIoU) to accelerate the network’s training convergence. Comparative experiments and dehazing pre-processing tests were conducted on the RTTS and VOC-Fog datasets. Compared to the baseline YOLOv8 model, the improved algorithm achieved mean Average Precision (mAP) improvements of 4.6% and 3.8%, respectively. After defogging pre-processing, the mAP increased by 5.3% and 4.4%, respectively. The experimental results demonstrate that the improved algorithm exhibits high practicality and effectiveness in foggy traffic scenarios.
文摘To address the challenges of low accuracy and insufficient real-time performance in dynamic object detection for UAV surveillance,this paper proposes a novel tracking framework that integrates a lightweight improved YOLOv5s model with adaptive motion compensation.A UAV-view dynamic feature enhancement strategy is innovatively introduced,and a lightweight detection network combining attention mechanisms and multi-scale fusion is constructed.The robustness of tracking under motion blur scenarios is also optimized.Experimental results demonstrate that the proposed method achieves a mAP@0.5 of 68.2%on the VisDrone dataset and reaches an inference speed of 32 FPS on the NVIDIA Jetson TX2 platform.This significantly improves the balance between accuracy and efficiency in complex scenes,offering reliable technical support for real-time applications such as emergency response.
基金funded by the China Chongqing Municipal Science and Technology Bureau,grant numbers CSTB2024TIAD-CYKJCXX0009,CSTB2024NSCQ-LZX0043,CSTB2022NSCQ-MSX0288Chongqing Municipal Commission of Housing and Urban-Rural Development,grant number CKZ2024-87+3 种基金the Chongqing University of Technology Graduate Education High-Quality Development Project,grant number gzlsz202401the Chongqing University of Technology—Chongqing LINGLUE Technology Co.,Ltd.Electronic Information(Artificial Intelligence)Graduate Joint Training Basethe Postgraduate Education and Teaching Reform Research Project in Chongqing,grant number yjg213116the Chongqing University of Technology-CISDI Chongqing Information Technology Co.,Ltd.Computer Technology Graduate Joint Training Base.
文摘Although conventional object detection methods achieve high accuracy through extensively annotated datasets,acquiring such large-scale labeled data remains challenging and cost-prohibitive in numerous real-world applications.Few-shot object detection presents a new research idea that aims to localize and classify objects in images using only limited annotated examples.However,the inherent challenge in few-shot object detection lies in the insufficient sample diversity to fully characterize the sample feature distribution,which consequently impacts model performance.Inspired by contrastive learning principles,we propose an Implicit Feature Contrastive Learning(IFCL)module to address this limitation and augment feature diversity for more robust representational learning.This module generates augmented support sample features in a mixed feature space and implicitly contrasts them with query Region of Interest(RoI)features.This approach facilitates more comprehensive learning of both intra-class feature similarity and inter-class feature diversity,thereby enhancing the model’s object classification and localization capabilities.Extensive experiments on PASCAL VOC show that our method achieves a respective improvement of 3.2%,1.8%,and 2.3%on 10-shot of three Novel Sets compared to the baseline model FPD.
基金supported by Chongqing Municipal Commission of Housing and Urban-Rural Development(Grant No.CKZ2024-87)China Chongqing Municipal Science and Technology Bureau(Grant No.2024TIAD-CYKJCXX0121).
文摘Currently,challenges such as small object size and occlusion lead to a lack of accuracy and robustness in small object detection.Since small objects occupy only a few pixels in an image,the extracted features are limited,and mainstream downsampling convolution operations further exacerbate feature loss.Additionally,due to the occlusionprone nature of small objects and their higher sensitivity to localization deviations,conventional Intersection over Union(IoU)loss functions struggle to achieve stable convergence.To address these limitations,LR-Net is proposed for small object detection.Specifically,the proposed Lossless Feature Fusion(LFF)method transfers spatial features into the channel domain while leveraging a hybrid attentionmechanism to focus on critical features,mitigating feature loss caused by downsampling.Furthermore,RSIoU is proposed to enhance the convergence performance of IoU-based losses for small objects.RSIoU corrects the inherent convergence direction issues in SIoU and proposes a penalty term as a Dynamic Focusing Mechanism parameter,enabling it to dynamically emphasize the loss contribution of small object samples.Ultimately,RSIoU significantly improves the convergence performance of the loss function for small objects,particularly under occlusion scenarios.Experiments demonstrate that LR-Net achieves significant improvements across variousmetrics onmultiple datasets compared with YOLOv8n,achieving a 3.7% increase in mean Average Precision(AP)on the VisDrone2019 dataset,along with improvements of 3.3% on the AI-TOD dataset and 1.2% on the COCO dataset.
文摘Accurate defect detection plays a critical role in ensuring product quality and equipment reliability.Small-object detection poses unique challenges due to weak feature representation and significant background interference.To address these issues,this study incorporates three key innovations into the YOLOv8 framework:the use of GhostNet convolution for lightweight and efficient feature extraction,the addition of a P2 detection layer to enhance small-object detection capabilities,and the integration of the Triplet Attention mechanism to capture comprehensive spatial and channel dependencies.These improvements collectively optimize detection performance for small objects while reducing computational complexity.Experimental results demonstrate that the enhanced model achieves a mean average precision(mAP@0.5)of 97.46%and a mAP@0.5∶0.95 of 61.84%,representing a performance improvement of 1.9%and 3.2%,respectively,compared to the baseline YOLOv8 model.Additionally,the model achieves a frame rate of 158 FPS,maintaining real-time detection capabilities while reducing the parameter count by 50%,further underscoring its efficiency and suitability for smallobject detection in complex scenarios.
Funding: Supported in part by the National Key R&D Program of China (Grant No. 2023YFB3307604), the Shanxi Province Basic Research Program Youth Science Research Project (Grant Nos. 202303021212054 and 202303021212046), the Key Projects Supported by the Hebei Natural Science Foundation (Grant No. E2024203125), the National Science Foundation of China (Grant No. 52105391), the Hebei Provincial Science and Technology Major Project (Grant No. 23280101Z), and the National Key Laboratory of Metal Forming Technology and Heavy Equipment Open Fund (Grant No. S2308100.W17).
Abstract: A novel dual-branch decoding fusion convolutional neural network (DDFNet), specifically designed for real-time salient object detection (SOD) on steel surfaces, is proposed. DDFNet is built on a standard encoder–decoder architecture and integrates three key innovations. First, we introduce a novel, lightweight multi-scale progressive aggregation residual network that effectively suppresses background interference and refines defect details, enabling efficient salient feature extraction. Second, we propose a dual-branch decoding fusion structure, comprising a refined defect representation branch and an enhanced defect representation branch, which improves accuracy in defect region identification and feature representation. Third, to further improve the detection of small and complex defects, we incorporate a multi-scale attention fusion module. Experimental results on the public ESDIs-SOD dataset show that DDFNet, with only 3.69 million parameters, achieves detection performance comparable to current state-of-the-art models, demonstrating its potential for real-time industrial applications. Furthermore, our DDFNet-L variant consistently outperforms leading methods in detection performance. The code is available at https://github.com/13140W/DDFNet.
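As a loose reading of the dual-branch decoding fusion idea, one can picture two parallel decoder branches, one refining local detail and one enhancing region-level context, fused into a single saliency map. Every layer choice below is an assumption, since the abstract does not specify DDFNet's internals.

```python
# Schematic dual-branch decoding fusion (assumed layout, not DDFNet's).
import torch
import torch.nn as nn

class DualBranchDecoder(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.refine = nn.Sequential(    # detail-refining branch
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(True))
        self.enhance = nn.Sequential(   # context-enhancing branch (dilated)
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2),
            nn.BatchNorm2d(ch), nn.ReLU(True))
        self.fuse = nn.Conv2d(2 * ch, 1, 1)  # single-channel saliency map

    def forward(self, feat):
        both = torch.cat([self.refine(feat), self.enhance(feat)], dim=1)
        return torch.sigmoid(self.fuse(both))
```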
Abstract: Recent years have seen a surge in interest in object detection on remote sensing images for applications such as surveillance and management. However, challenges like small object detection, scale variation, and the presence of closely packed objects in these images hinder accurate detection, and motion blur further complicates the identification of such objects. To address these issues, we propose an enhanced YOLOv9 with a transformer head (YOLOv9-TH). The model introduces an additional prediction head for detecting objects of varying sizes and swaps the original prediction heads for transformer heads to leverage self-attention mechanisms. We further improve YOLOv9-TH using several strategies, including data augmentation, multi-scale testing, multi-model integration, and the introduction of an additional classifier. The cross-stage partial (CSP) method and the ghost convolution hierarchical graph (GCHG) are combined to improve detection accuracy by better utilizing feature maps, widening the receptive field, and precisely extracting multi-scale objects. Additionally, we incorporate the E-SimAM attention mechanism to address low-resolution feature loss. Extensive experiments on the VisDrone2021 and DIOR datasets demonstrate the effectiveness of YOLOv9-TH, showing clear mAP improvements over the best existing methods. YOLOv9-TH-e achieves 54.2% mAP50 on the VisDrone2021 dataset and 92.3% mAP on the DIOR dataset. The results confirm the model's robustness and suitability for real-world applications, particularly for small object detection in remote sensing images.
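The head swap can be pictured as follows: flatten the feature map into tokens, run a standard self-attention encoder layer, and reshape back before the prediction convolution. This is a generic transformer-head sketch, not the YOLOv9-TH implementation; all hyperparameters are placeholders (ch must be divisible by heads).

```python
# Generic transformer prediction head (illustrative placeholder values).
import torch
import torch.nn as nn

class TransformerHead(nn.Module):
    def __init__(self, ch, num_outputs, heads=8):
        super().__init__()
        self.encoder = nn.TransformerEncoderLayer(
            d_model=ch, nhead=heads, dim_feedforward=2 * ch, batch_first=True)
        self.pred = nn.Conv2d(ch, num_outputs, 1)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        tokens = self.encoder(tokens)          # global self-attention
        x = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.pred(x)                    # per-cell box/class outputs
```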
Funding: Funded by the National Natural Science Foundation of China (Grant No. 61603245).
Abstract: Knowledge distillation (KD) is an emerging model compression technique for learning compact object detectors. Previous KD methods often focused solely on distilling from the logits layer or from intermediate feature layers, which may limit the comprehensiveness of what the student network learns. Additionally, the imbalance between foreground and background also affects model performance. To address these issues, this paper employs feature-based distillation to enhance the bounding box localization part and logit-based distillation to improve the category prediction part. Specifically, for intermediate-layer feature distillation, we introduce feature resampling to reduce the risk of the student model merely imitating the teacher model, and we incorporate a Spatial Attention Mechanism (SAM) to highlight the foreground features learned by the student. For output-layer distillation, we divide the traditional distillation targets into target-class and non-target-class objects, aiming to improve overall distillation performance. Furthermore, we introduce a one-to-many matching distillation strategy based on a Feature Alignment Module (FAM), which further enhances the student model's feature representation, making its feature distribution closer to the teacher's and thus yielding superior localization and classification in object detection tasks. Experimental results demonstrate that the proposed methodology outperforms conventional distillation techniques in object detection performance.
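A hedged sketch of how the two signals can combine is given below: feature-based distillation through a 1x1 channel adapter plus temperature-scaled logit distillation. The weights, adapter, and temperature are illustrative choices; the paper's resampling, SAM, target/non-target split, and FAM components are omitted.

```python
# Combined feature- and logit-based distillation loss (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

def kd_loss(student_feat, teacher_feat, student_logits, teacher_logits,
            adapter, T=4.0, w_feat=1.0, w_logit=1.0):
    # Feature imitation: 1x1 adapter aligns student/teacher channel widths.
    feat_loss = F.mse_loss(adapter(student_feat), teacher_feat)
    # Logit distillation: soft targets at temperature T, scaled by T^2.
    logit_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean") * T * T
    return w_feat * feat_loss + w_logit * logit_loss

# Example adapter for a student with 128 channels and a teacher with 256:
adapter = nn.Conv2d(128, 256, kernel_size=1)
```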
Abstract: At present, salient object detection (SOD) has achieved considerable progress. However, even methods that perform well still suffer from inadequate detection accuracy; for example, missed and false detections still occur. Effectively optimizing features to capture key information and better integrating features at different levels to enhance their complementarity are two significant challenges in SOD. In response, this study proposes a novel SOD method based on multi-strategy feature optimization. We propose the multi-size feature extraction module (MSFEM), which uses an attention mechanism, multi-level feature fusion, and a residual block to obtain finer features, providing robust support for the subsequent accurate detection of salient objects. In addition, we use two rounds of feature fusion and a feedback mechanism to optimize the features obtained by the MSFEM and improve detection accuracy. The first round of feature fusion integrates the features extracted by the MSFEM to obtain more refined features. Subsequently, the feedback mechanism and the second round of feature fusion refine the features further, providing a stronger foundation for accurately detecting salient objects. To improve the fusion effect, we propose the feature enhancement module (FEM) and the feature optimization module (FOM). The FEM integrates upper- and lower-level features with the optimized features produced by the FOM to enhance feature complementarity. The FOM uses different receptive fields, an attention mechanism, and a residual block to capture key information more effectively. Experimental results demonstrate that our method outperforms 10 state-of-the-art SOD methods.
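To ground the FOM description, the block below sketches one plausible reading: parallel dilated convolutions for different receptive fields, a squeeze-and-excitation-style channel attention gate, and a residual connection. All internals are assumptions; the abstract does not disclose the module's exact layout.

```python
# Assumed feature optimization block: multi-receptive-field + attention + residual.
import torch
import torch.nn as nn

class FeatureOptBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        # Parallel branches with growing dilation = growing receptive field.
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 4))
        self.merge = nn.Conv2d(3 * ch, ch, 1)
        self.attn = nn.Sequential(             # channel attention gate
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch // 4, 1), nn.ReLU(True),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid())

    def forward(self, x):
        y = self.merge(torch.cat([b(x) for b in self.branches], dim=1))
        return x + y * self.attn(y)            # residual + gated features
```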
Abstract: Object detection plays a critical role in drone imagery analysis, especially in remote sensing applications where accurate and efficient detection of small objects is essential. Despite significant advancements in drone imagery detection, most models still struggle with small objects due to challenges such as small object size and complex backgrounds. To address these issues, we propose a robust detection model based on You Only Look Once (YOLO) that balances accuracy and efficiency. The model contains several major innovations: a feature selection pyramid network, an Inner-Shape Intersection over Union (ISIoU) loss function, and a small object detection head. To overcome the limitations of traditional fusion methods in handling multi-level features, we introduce the Feature Selection Pyramid Network into the neck component, preserving the shallow feature details critical for detecting small objects. Recognizing that deep network structures often neglect or degrade small object features, we design a specialized small object detection head in the shallow layers to improve accuracy on these challenging targets. To model both local and global dependencies, we introduce a Conv-Former module that simulates Transformer mechanisms using a convolutional structure, thereby improving feature enhancement. Furthermore, we employ ISIoU to address object imbalance and scale variation; this accelerates model convergence and improves regression accuracy. Experimental results show that, compared to the baseline model, the proposed method significantly improves small object detection performance on the VisDrone2019 dataset, with mAP@50 increasing by 4.9% and mAP@50-95 rising by 6.7%. The model also outperforms other state-of-the-art algorithms, demonstrating its reliability and effectiveness in both small object detection and remote sensing image fusion tasks.
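The Conv-Former notion of simulating Transformer mechanisms with convolutions can be sketched as a transformer-shaped block whose token mixing is a large-kernel depthwise convolution and whose feed-forward part is a 1x1-convolution MLP. This is our guess at the general pattern, not the paper's module; the kernel size and expansion factor are assumptions.

```python
# Guessed Conv-Former-style block: conv stand-ins for attention and MLP.
import torch
import torch.nn as nn

class ConvFormerBlock(nn.Module):
    def __init__(self, ch, expansion=2):
        super().__init__()
        self.token_mix = nn.Sequential(   # spatial mixing (attention stand-in)
            nn.Conv2d(ch, ch, 7, padding=3, groups=ch), nn.BatchNorm2d(ch))
        self.mlp = nn.Sequential(         # per-location feed-forward network
            nn.Conv2d(ch, expansion * ch, 1), nn.GELU(),
            nn.Conv2d(expansion * ch, ch, 1))

    def forward(self, x):
        x = x + self.token_mix(x)         # residual spatial mixing
        return x + self.mlp(x)            # residual channel MLP
```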