The ubiquity of mobile devices has driven advancements in mobile object detection. However, challenges in multi-scale object detection in open, complex environments persist due to limited computational resources. Traditional approaches like network compression, quantization, and lightweight design often sacrifice accuracy or feature representation robustness. This article introduces the Fast Multi-scale Channel Shuffling Network (FMCSNet), a novel lightweight detection model optimized for mobile devices. FMCSNet integrates a fully convolutional Multilayer Perceptron (MLP) module, offering global perception without significantly increasing parameters and effectively bridging the gap between CNNs and Vision Transformers. FMCSNet achieves a delicate balance between computation and accuracy mainly through two key modules: the ShiftMLP module, comprising a shift operation and an MLP module, and a Partial group Convolutional (PGConv) module, which reduces computation while enhancing information exchange between channels. With a computational complexity of 1.4G FLOPs and 1.3M parameters, FMCSNet outperforms CNN-based and DWConv-based ShuffleNetv2 by 1% and 4.5% mAP on the Pascal VOC 2007 dataset, respectively. Additionally, FMCSNet achieves an mAP of 30.0 (0.5:0.95 IoU threshold) with only 2.5G FLOPs and 2.0M parameters, and runs at 32 FPS on low-performance i5-series CPUs, meeting real-time detection requirements. The adaptability of the PGConv module across scenarios further highlights FMCSNet as a promising solution for real-time mobile object detection.
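The abstract names PGConv but does not specify its internals. As a rough sketch of the general idea it gestures at, convolving only part of the channels to save FLOPs and then shuffling channel groups so the untouched channels still exchange information, consider the following PyTorch module; the split ratio, kernel size, and shuffle grouping are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class PartialGroupConv(nn.Module):
    """Illustrative sketch: convolve only a fraction of the channels
    (saving FLOPs), then shuffle channel groups so the untouched
    channels still exchange information. Ratio and grouping are
    assumed, not taken from the FMCSNet paper."""
    def __init__(self, channels: int, ratio: float = 0.25, groups: int = 4):
        super().__init__()
        self.conv_ch = max(groups, int(channels * ratio))  # channels actually convolved
        self.groups = groups
        self.conv = nn.Conv2d(self.conv_ch, self.conv_ch, kernel_size=3, padding=1)

    def channel_shuffle(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x = x.view(b, self.groups, c // self.groups, h, w)
        return x.transpose(1, 2).reshape(b, c, h, w)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, rest = x[:, :self.conv_ch], x[:, self.conv_ch:]
        a = self.conv(a)                   # cheap partial convolution
        out = torch.cat([a, rest], dim=1)  # identity path for the remaining channels
        return self.channel_shuffle(out)   # mix groups across the split

x = torch.randn(1, 32, 56, 56)
print(PartialGroupConv(32)(x).shape)  # torch.Size([1, 32, 56, 56])
```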
To address the issues of frequent identity switches (IDs) and degraded identification accuracy in multi-object tracking (MOT) under complex occlusion scenarios, this study proposes an occlusion-robust tracking framework based on face-pedestrian joint feature modeling. By constructing a joint tracking model centered on "intra-class independent tracking + cross-category dynamic binding", designing a multi-modal matching metric with spatio-temporal and appearance constraints, and innovatively introducing a cross-category feature mutual verification mechanism and a dual matching strategy, this work effectively resolves the performance degradation that short-term occlusion, cross-camera tracking, and crowded environments cause in traditional single-category tracking methods. Experiments on the Chokepoint_Face_Pedestrian_Track test set demonstrate that in complex scenes, the proposed method improves Face-Pedestrian Matching F1 area under the curve (F1 AUC) by approximately 4 to 43 percentage points compared to several traditional methods. The joint tracking model achieves overall performance metrics of IDF1: 85.1825% and MOTA: 86.5956%, representing improvements of 0.91 and 0.06 percentage points, respectively, over the baseline model. Ablation studies confirm the effectiveness of key modules such as the Intersection over Area (IoA)/Intersection over Union (IoU) joint metric and dynamic threshold adjustment, validating the significant role of the cross-category identity matching mechanism in enhancing tracking stability. Our model shows a 16.7% frames-per-second (FPS) drop versus FairMOT (Fairness of Detection and Re-Identification in Multiple Object Tracking), with its cross-category binding module adding about 10% overhead, yet maintains near-real-time performance for essential face-pedestrian tracking at small resolutions.
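The IoA/IoU joint metric is named but not defined here. One natural reading, assumed below rather than reproduced from the paper, is that IoU handles same-category matching while Intersection over Area tests whether a face box is contained in a pedestrian box: a face fully inside a person yields an IoA near 1 even when the IoU between the two boxes is tiny.

```python
def box_intersection(a, b):
    """Intersection area of two boxes given as (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0.0, w) * max(0.0, h)

def area(box):
    return max(0.0, box[2] - box[0]) * max(0.0, box[3] - box[1])

def iou(a, b):
    inter = box_intersection(a, b)
    return inter / (area(a) + area(b) - inter + 1e-9)

def ioa(inner, outer):
    """Intersection over Area of the *inner* box: close to 1 when the
    face box lies inside the pedestrian box, even if IoU is small."""
    return box_intersection(inner, outer) / (area(inner) + 1e-9)

face = (110, 50, 140, 90)
person = (100, 40, 180, 260)
print(f"IoU={iou(face, person):.3f}, IoA={ioa(face, person):.3f}")
# IoU=0.068, IoA=1.000 -> IoA flags the containment that IoU misses
```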
With the rapid expansion of drone applications, accurate detection of objects in aerial imagery has become crucial for intelligent transportation, urban management, and emergency rescue missions. However, existing methods face numerous challenges in practical deployment, including scale variation handling, feature degradation, and complex backgrounds. To address these issues, we propose Edge-enhanced and Detail-Capturing You Only Look Once (EHDC-YOLO), a novel framework for object detection in Unmanned Aerial Vehicle (UAV) imagery. Based on the You Only Look Once version 11 nano (YOLOv11n) baseline, EHDC-YOLO systematically introduces several architectural enhancements: (1) a Multi-Scale Edge Enhancement (MSEE) module that leverages multi-scale pooling and edge information to enhance boundary feature extraction; (2) an Enhanced Feature Pyramid Network (EFPN) that integrates P2-level features with Cross Stage Partial (CSP) structures and OmniKernel convolutions for better fine-grained representation; and (3) Dynamic Head (DyHead) with multi-dimensional attention mechanisms for enhanced cross-scale modeling and perspective adaptability. Comprehensive experiments on the Vision meets Drones for Detection (VisDrone-DET) 2019 dataset demonstrate that EHDC-YOLO achieves significant improvements, increasing mean Average Precision (mAP)@0.5 from 33.2% to 46.1% (an absolute improvement of 12.9 percentage points) and mAP@0.5:0.95 from 19.5% to 28.0% (an absolute improvement of 8.5 percentage points) compared with the YOLOv11n baseline, while maintaining a reasonable parameter count (2.81M vs. the baseline's 2.58M). Further ablation studies confirm the effectiveness of each proposed component, while visualization results highlight EHDC-YOLO's superior performance in detecting objects and handling occlusions in complex drone scenarios.
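MSEE is described only as combining multi-scale pooling with edge information. As one hedged illustration of the edge-enhancement half, the module below extracts fixed Sobel edge maps depthwise and folds them back into the features through a learnable 1x1 projection; the real MSEE module's pooling scales and fusion are not specified here, and this sketch does not claim to match them.

```python
import torch
import torch.nn as nn

class SobelEdgeEnhance(nn.Module):
    """Rough sketch of the edge-enhancement idea: compute horizontal
    and vertical Sobel responses per channel with fixed depthwise
    kernels, then add them back through a learnable 1x1 projection.
    Not the actual MSEE module from EHDC-YOLO."""
    def __init__(self, channels: int):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        k = torch.stack([gx, gx.t()]).unsqueeze(1)           # (2, 1, 3, 3)
        self.register_buffer("kernel", k.repeat(channels, 1, 1, 1))
        self.channels = channels
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        edges = nn.functional.conv2d(
            x, self.kernel, padding=1, groups=self.channels)  # depthwise Sobel
        return x + self.proj(edges)                           # residual enhancement

x = torch.randn(1, 16, 64, 64)
print(SobelEdgeEnhance(16)(x).shape)  # torch.Size([1, 16, 64, 64])
```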
Modern manufacturing processes have become more reliant on automation because of the accelerated transition from Industry 3.0 to Industry 4.0. Manual inspection of products on assembly lines remains inefficient, prone to errors, and lacking in consistency, emphasizing the need for a reliable and automated inspection system. Leveraging both object detection and image segmentation approaches, this research proposes a vision-based solution for detecting various kinds of tools in a toolkit using deep learning (DL) models. Two Intel RealSense D455f depth cameras were arranged in a top-down configuration to capture both RGB and depth images of the toolkits. After applying multiple constraints and enhancing the images through preprocessing and augmentation, a dataset consisting of 3300 annotated RGB-D photos was generated. Several DL models were selected through a comprehensive assessment of mean Average Precision (mAP), precision-recall equilibrium, inference latency (target ≥30 FPS), and computational burden, resulting in a preference for YOLO and Region-based Convolutional Neural Network (R-CNN) variants over ViT-based models due to the latter's increased latency and resource requirements. YOLOv5, YOLOv8, YOLOv11, Faster R-CNN, and Mask R-CNN were trained on the annotated dataset and evaluated using key performance metrics (Recall, Accuracy, F1-score, and Precision). YOLOv11 demonstrated balanced excellence with 93.0% precision, 89.9% recall, and a 90.6% F1-score in object detection, as well as 96.9% precision, 95.3% recall, and a 96.5% F1-score in instance segmentation, with an average inference time of 25 ms per frame (≈40 FPS), demonstrating real-time performance. Leveraging these results, a YOLOv11-based Windows application was successfully deployed in a real-time assembly line environment, where it accurately processed live video streams to detect and segment tools within toolkits, demonstrating its practical effectiveness in industrial automation. Beyond detection and segmentation, the application precisely measures socket dimensions by applying edge detection techniques to YOLOv11 segmentation masks. This enables specification-level quality control directly on the assembly line, improving real-time inspection capability. The implementation is a significant step toward intelligent manufacturing in the Industry 4.0 paradigm, providing a scalable, efficient, and accurate approach to automated inspection and dimensional verification.
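The abstract states that socket dimensions are measured by applying edge detection to YOLOv11 segmentation masks but gives no procedure. A plausible OpenCV sketch, fitting a minimum-area rotated rectangle to the mask contour and converting pixels to millimetres with a calibration factor, is shown below; the MM_PER_PIXEL constant and the synthetic mask are stand-ins, not values from the paper.

```python
import cv2
import numpy as np

MM_PER_PIXEL = 0.21  # hypothetical calibration from a known-size reference object

def socket_dimensions(mask: np.ndarray) -> tuple[float, float]:
    """Estimate (width_mm, height_mm) of a socket from a binary
    segmentation mask by fitting a minimum-area rotated rectangle
    to the largest contour (OpenCV 4 API)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("empty mask")
    largest = max(contours, key=cv2.contourArea)
    (_, _), (w_px, h_px), _ = cv2.minAreaRect(largest)
    return w_px * MM_PER_PIXEL, h_px * MM_PER_PIXEL

# Synthetic circular mask standing in for a YOLOv11 instance mask.
mask = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(mask, (320, 240), 60, 255, thickness=-1)
w_mm, h_mm = socket_dimensions(mask)
print(f"~{w_mm:.1f} mm x {h_mm:.1f} mm")  # roughly 25 mm diameter
```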
In recent years, Turkey has turned from merging with the West to "returning to the Middle East" and become an ambitious but indispensable force in regional affairs. On the one hand, the extent of Turkey's involvement in the regional affairs of the Middle East has never been seen before: supporting Islamic forces such as the "Muslim Brotherhood", which has made Turkey fall into enmity with the Sisi…
Deep learning-based object detection has revolutionized various fields, including agriculture. This paper presents a systematic review, based on the PRISMA 2020 approach, of object detection techniques in agriculture, exploring the evolution of different methods and applications over the past three years and highlighting the shift from conventional computer vision to deep learning-based methodologies owing to their enhanced real-time efficacy. The review emphasizes the integration of advanced models, such as You Only Look Once (YOLO) v9 and v10, EfficientDet, Transformer-based models, and hybrid frameworks, that improve precision, accuracy, and scalability for crop monitoring and disease detection. The review also highlights benchmark datasets and evaluation metrics. It addresses limitations such as domain adaptation challenges, dataset heterogeneity, and occlusion, while offering insights into prospective research avenues such as multimodal learning, explainable AI, and federated learning. Furthermore, the main aim of this paper is to serve as a thorough resource guide for scientists, researchers, and stakeholders implementing deep learning-based object detection methods for the development of intelligent, robust, and sustainable agricultural systems.
To maintain the reliability of power systems, routine inspections using drones equipped with advanced object detection algorithms are essential for preempting power-related issues. The increasing resolution of drone-captured images has posed a challenge for traditional target detection methods, especially in identifying small objects in high-resolution images. This study presents an enhanced object detection algorithm based on the Faster Region-based Convolutional Neural Network (Faster R-CNN) framework, specifically tailored for detecting small-scale electrical components such as insulators, shock hammers, and screws on transmission lines. The algorithm features an improved backbone network for Faster R-CNN, which significantly boosts the feature extraction network's ability to detect fine details. The Region Proposal Network is optimized using a guided feature refinement (GFR) method, which achieves a balance between accuracy and speed. The incorporation of Generalized Intersection over Union (GIoU) and Region of Interest (RoI) Align further refines the model's accuracy. Experimental results demonstrate a notable improvement in mean Average Precision, reaching 89.3%, an 11.1% increase compared to the standard Faster R-CNN. This highlights the effectiveness of the proposed algorithm in identifying electrical components in high-resolution aerial images.
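Unlike the paper-specific modules, GIoU is a published, well-defined quantity: IoU minus the fraction of the smallest enclosing box left uncovered by the union, which keeps the loss informative even when boxes do not overlap. A minimal reference implementation:

```python
def giou(a, b):
    """Generalized IoU of axis-aligned boxes (x1, y1, x2, y2).
    Ranges over (-1, 1]; unlike IoU it still provides a gradient
    signal for disjoint boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / (union + 1e-9)
    # Smallest enclosing box C
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c_area = cw * ch
    return iou - (c_area - union) / (c_area + 1e-9)

print(giou((0, 0, 2, 2), (0, 0, 2, 3)))  # high overlap: ~0.667
print(giou((0, 0, 1, 1), (2, 2, 3, 3)))  # disjoint: ~ -0.778, still carries signal
```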
The gears of new energy vehicles are required to withstand higher rotational speeds and greater loads, which imposes higher precision requirements on gear manufacturing. However, machining process parameters can cause changes in cutting force and heat, thereby affecting gear machining precision. Therefore, this paper studies the effect of different process parameters on gear machining precision. A multi-objective optimization model is established for the relationship between process parameters and tooth surface deviations, tooth profile deviations, and tooth lead deviations through the cutting speed, feed rate, and cutting depth of the worm wheel gear grinding machine. The response surface method (RSM) is used for experimental design, and the corresponding experimental results and optimal process parameters are obtained. Subsequently, gray relational analysis-principal component analysis (GRA-PCA), particle swarm optimization (PSO), and genetic algorithm-particle swarm optimization (GA-PSO) methods are used to analyze the experimental results and obtain different optimal process parameters. The results show that the optimal process parameters obtained by the GRA-PCA, PSO, and GA-PSO methods all improve gear machining precision, with GA-PSO outperforming the other methods.
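The paper's surrogate model and PSO settings are not given in the abstract. The sketch below shows a bare-bones particle swarm minimizing a placeholder quadratic standing in for an RSM-fitted deviation model; the objective, normalized bounds, and hyperparameters are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def deviation(p):
    """Hypothetical stand-in for an RSM surrogate mapping normalized
    process parameters (cutting speed, feed rate, cutting depth) to a
    scalar machining-deviation score; NOT the paper's fitted model."""
    spd, feed, depth = p.T
    return (spd - 0.6) ** 2 + 2 * (feed - 0.3) ** 2 + (depth - 0.5) ** 2 + 0.5 * feed * depth

n, dims, iters = 30, 3, 100
w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and attraction weights
x = rng.uniform(0, 1, (n, dims))          # particle positions
v = np.zeros_like(x)                      # particle velocities
pbest, pbest_val = x.copy(), deviation(x)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n, dims))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 1.0)
    val = deviation(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best normalized (speed, feed, depth):", gbest.round(3))
```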
Augmented reality (AR) is an emerging dynamic technology that effectively supports education across different levels, and the increased use of mobile devices has amplified its impact. As the demand for AR applications in education continues to increase, educators actively seek innovative and immersive methods to engage students in learning. However, exploring these possibilities also entails identifying and overcoming existing barriers to optimal educational integration. Concurrently, this surge in demand has prompted the identification of specific barriers, one of which is three-dimensional (3D) modeling: creating 3D objects for augmented reality education applications can be challenging and time-consuming for educators. To address this, we have developed a pipeline that creates realistic 3D objects from two-dimensional (2D) photographs. Applications for augmented and virtual reality can then utilize these 3D objects. We evaluated the proposed pipeline based on the usability of the 3D objects and performance metrics. Quantitatively, 117 respondents from the co-creation team were surveyed with open-ended questions to evaluate the precision of the 3D objects created by the proposed photogrammetry pipeline. We analyzed the survey data using descriptive-analytical methods and found that the proposed pipeline produces 3D models that respondents rated as accurate relative to real-world objects, with an average mean score above 8. This study adds new knowledge on creating 3D objects for augmented reality applications using the photogrammetry technique; finally, it discusses potential problems and future research directions for 3D objects in the education sector.
The subcortical visual pathway is generally thought to be involved in processing dangerous information, such as fear processing and defensive behavior. A recent study published in Human Brain Mapping reveals a new function of the subcortical pathway: fast processing of non-emotional object perception. Rapid object processing is a critical function of the visual system. Topological perception theory proposes that the initial perception of objects begins with the extraction of topological properties (TP). However, the mechanism of rapid TP processing remains unclear. The researchers investigated the subcortical mechanism of TP processing with transcranial magnetic stimulation (TMS). They found that a subcortical magnocellular pathway is responsible for the early processing of TP, and that this subcortical processing of TP accelerates object recognition. Based on their findings, we propose a novel training approach called subcortical magnocellular pathway training (SMPT), aimed at improving the efficiency of the subcortical M pathway to restore visual and attentional functions in disorders associated with subcortical pathway dysfunction.
Dear Editor, This letter focuses on the fact that, in object detection tasks, small objects with few pixels disappear from feature maps with large receptive fields as the network deepens. The detection of dense small objects is therefore challenging.
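The letter's premise can be made concrete with the standard receptive-field recursion r_new = r + (k - 1) * jump, where jump is the product of all preceding strides: a handful of stride-2 stages makes the receptive field dwarf a small object. A worked sketch over a generic, hypothetical stack of 3x3 convolutions and stride-2 downsamplings:

```python
def receptive_field(layers):
    """layers: list of (kernel, stride) pairs, input to output.
    Standard recursion: r grows by (k - 1) * jump, where jump is
    the product of all preceding strides."""
    r, jump = 1, 1
    for k, s in layers:
        r += (k - 1) * jump
        jump *= s
    return r

# Hypothetical backbone: five repeats of a 3x3 conv followed by a
# 3x3 stride-2 downsampling conv (stride 32 overall).
backbone = [(3, 1), (3, 2)] * 5
print(receptive_field(backbone))  # 125: a 20-pixel object covers only a
                                  # small fraction of each deep cell's view
```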
To investigate the applicability of four color difference formulas commonly used in the printing field (CIELAB, CIE94, CMC(1:1), and CIEDE2000) to 3D objects, as well as the impact of four standard light sources (D65, D50, A, and TL84) on 3D color difference evaluations, 50 glossy spheres with a diameter of 2 cm were created on the Sailner J400 3D color printing device. These spheres were centered around the five colors recommended by the CIE (gray, red, yellow, green, and blue). Color differences were calculated according to the four formulas, and 111 pairs of experimental samples meeting the CIELAB gray-scale color difference requirements (1.0-14.0) were selected. Ten observers, aged between 22 and 27 with normal color vision, participated in this study, using the gray-scale method from psychophysical experiments to conduct color difference evaluations under the four light sources, with repeated experiments for each observer. The results indicated that the overall effect of the D65 light source on the color difference of 3D objects was minimal. In contrast, the D50 and A light sources had a significant impact within the small color difference range, while the TL84 light source considerably influenced both large and small color differences. Among the four color difference formulas, CIEDE2000 demonstrated the best predictive performance for color difference in 3D objects, followed by CMC(1:1), CIE94, and CIELAB.
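For reference, the simplest of the four formulas, CIELAB (CIE76), is plain Euclidean distance in L*a*b*, and CIE94 adds chroma-dependent weighting; CIEDE2000 layers on several further correction terms and is omitted for brevity. A minimal implementation of the first two, using the standard graphic-arts constants for CIE94:

```python
import math

def delta_e76(lab1, lab2):
    """CIELAB (CIE76) difference: plain Euclidean distance in L*a*b*."""
    return math.dist(lab1, lab2)

def delta_e94(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
    """CIE94 difference: chroma and hue terms are divided by weights
    that grow with chroma, matching perception better than CIE76."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    dC = C1 - C2
    dH2 = max(0.0, (a1 - a2) ** 2 + (b1 - b2) ** 2 - dC ** 2)
    SL, SC, SH = 1.0, 1.0 + 0.045 * C1, 1.0 + 0.015 * C1
    return math.sqrt((dL / (kL * SL)) ** 2 + (dC / (kC * SC)) ** 2
                     + dH2 / (kH * SH) ** 2)

pair = ((50.0, 2.5, 0.0), (50.0, 0.0, -2.5))
print(round(delta_e76(*pair), 3), round(delta_e94(*pair), 3))  # 3.536 3.408
```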
Drone-based small object detection is of great significance in practical applications such as military actions, disaster rescue, and transportation. However, the severe scale differences among objects captured by drones and the lack of detail information for small-scale objects make drone-based small object detection a formidable challenge. To address these issues, we first develop a mathematical model to explore how changing receptive fields impacts polynomial fitting results. Subsequently, based on the obtained conclusions, we propose a simple but effective Hybrid Receptive Field Network (HRFNet), whose modules include Hybrid Feature Augmentation (HFA), Hybrid Feature Pyramid (HFP), and Dual Scale Head (DSH). Specifically, HFA employs parallel dilated convolution kernels of different sizes to extend shallow features with different receptive fields, aiming to improve the multi-scale adaptability of the network; HFP enhances the perception of small objects by capturing contextual information across layers; and DSH reconstructs the original prediction head using a set of high-resolution and ultrahigh-resolution features. In addition, a corresponding dual-scale loss function is designed to train HRFNet. Finally, comprehensive evaluation results on public benchmarks such as VisDrone-DET and TinyPerson demonstrate the robustness of the proposed method. Most impressively, HRFNet achieves an mAP of 51.0 on VisDrone-DET with 29.3M parameters, outperforming existing state-of-the-art detectors. HRFNet also performs excellently in complex scenarios captured by drones, achieving the best performance on the CS-Drone dataset we built.
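HFA is described as parallel dilated convolution kernels over shallow features. The PyTorch sketch below shows that parallel-dilation pattern, where 3x3 kernels at dilation rates 1, 3, and 7 give effective receptive fields of 3, 7, and 15 pixels before a 1x1 fusion; the branch count, rates, and fusion step are assumptions rather than HRFNet's actual configuration.

```python
import torch
import torch.nn as nn

class ParallelDilatedBlock(nn.Module):
    """Sketch of an HFA-like branch: the same input processed by 3x3
    convolutions at several dilation rates (effective kernels of
    3, 7, and 15 pixels), then fused by a 1x1 convolution."""
    def __init__(self, c_in: int, c_out: int, rates=(1, 3, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(c_in, c_out, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(c_out * len(rates), c_out, 1)

    def forward(self, x):
        # padding=r keeps every branch the same spatial size for concat
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 64, 80, 80)
print(ParallelDilatedBlock(64, 64)(x).shape)  # torch.Size([1, 64, 80, 80])
```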
Top-view fisheye cameras are widely used in personnel surveillance for their broad field of view, but their unique imaging characteristics pose challenges such as distortion, complex scenes, scale variations, and small objects near image edges. To tackle these, we propose peripheral focus you only look once (PF-YOLO), an enhanced YOLOv8n-based method. Firstly, we introduce a cutting-patch data augmentation strategy to mitigate the problem of insufficient small-object samples in various scenes. Secondly, to enhance the model's focus on small objects near the edges, we design the peripheral focus loss, which uses dynamic focus coefficients to provide greater gradient gains for these objects, improving their regression accuracy. Finally, we design the three-dimensional (3D) spatial-channel coordinate attention C2f module, enhancing spatial and channel perception, suppressing noise, and improving personnel detection. Experimental results demonstrate that PF-YOLO achieves strong performance on the challenging events for person detection from overhead fisheye images (CEPDTOF) and in-the-wild events for people detection and tracking from overhead fisheye cameras (WEPDTOF) datasets. Compared to the original YOLOv8n model, PF-YOLO achieves gains of 2.1%, 1.7%, and 2.9% on CEPDTOF in mean average precision 50 (mAP 50), mAP 50-95, and a third metric, respectively, and substantial gains of 31.4%, 14.9%, 61.1%, and 21.0% on WEPDTOF, where its top scores reach 91.2% and 57.2%.
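The peripheral focus loss is only characterized as giving greater gradient gains to edge objects via dynamic focus coefficients. Purely as an illustrative guess at that shape, the helper below scales a box's regression loss by its normalized radial distance from the image center; PF-YOLO's actual coefficient is not specified in the abstract.

```python
import math

def peripheral_weight(cx, cy, img_w, img_h, alpha=1.0):
    """Illustrative guess at a 'focus coefficient': scale a box's
    regression loss by its normalized radial distance from the image
    center, so edge objects (where fisheye distortion is worst) get
    larger gradients. Not PF-YOLO's published formulation."""
    dx = (cx - img_w / 2) / (img_w / 2)
    dy = (cy - img_h / 2) / (img_h / 2)
    r = min(1.0, math.hypot(dx, dy))  # 0 at center, ~1 at the border
    return 1.0 + alpha * r

# Center object vs. edge object in a 1280x1280 fisheye frame:
print(peripheral_weight(640, 640, 1280, 1280))   # 1.0
print(peripheral_weight(1200, 120, 1280, 1280))  # ~2.0
```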
To improve small object detection and trajectory estimation from an aerial moving perspective, we propose the Aerial View Attention-PRB (AVA-PRB) model. AVA-PRB integrates two attention mechanisms, Coordinate Attention (CA) and the Convolutional Block Attention Module (CBAM), to enhance detection accuracy. Additionally, Shape-IoU is employed as the loss function to refine localization precision. Our model further incorporates an adaptive feature fusion mechanism, which optimizes multi-scale object representation, ensuring robust tracking in complex aerial environments. We evaluate the performance of AVA-PRB on two benchmark datasets: Aerial Person Detection and VisDrone2019-Det. The model achieves 60.9% mAP@0.5 on the Aerial Person Detection dataset and 51.2% mAP@0.5 on VisDrone2019-Det, demonstrating its effectiveness in aerial object detection. Beyond detection, we propose a novel trajectory estimation method that improves movement path prediction under aerial motion. Experimental results indicate that our approach reduces path deviation by up to 64%, effectively mitigating errors caused by rapid camera movements and background variations. By optimizing feature extraction and enhancing spatial-temporal coherence, our method significantly improves object tracking under aerial moving perspectives. This research addresses the limitations of fixed-camera tracking, enhancing flexibility and accuracy in aerial tracking applications. The proposed approach has broad potential for real-world applications, including surveillance, traffic monitoring, and environmental observation.
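The abstract reports up to 64% lower path deviation without defining the measure. A simple stand-in, assumed here rather than taken from the paper, is the mean pointwise Euclidean distance between a time-aligned estimated trajectory and a reference one:

```python
import numpy as np

def mean_path_deviation(est, ref):
    """Mean pointwise Euclidean distance between two time-aligned
    trajectories of shape (T, 2); a simple stand-in for the paper's
    unspecified path-deviation measure."""
    est, ref = np.asarray(est, float), np.asarray(ref, float)
    return float(np.linalg.norm(est - ref, axis=1).mean())

# Reference path plus synthetic jitter standing in for tracker output.
t = np.linspace(0, 1, 50)
ref = np.stack([t * 100, 20 * np.sin(2 * np.pi * t)], axis=1)
jitter = np.random.default_rng(1).normal(0, 3, ref.shape)
print(round(mean_path_deviation(ref + jitter, ref), 2))  # a few units of deviation
```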
AIM: To compare objective dry retinoscopy and subjective refraction measurements in patients with mild keratoconus (KCN) and quantify any differences. METHODS: This cross-sectional study was conducted on 68 eyes of 68 patients diagnosed with mild KCN. Objective dry retinoscopy using an autorefractometer and subjective refraction measurements were performed. Sphere, cylinder, J0, J45, and spherical equivalent values were compared between the two techniques. RESULTS: The mean age of the 68 patients with mild KCN was 21.32±5.03 years (range 12–35 years); 37 (54.4%) were male. Objective refraction yielded a significantly more myopic sphere (-1.44 D vs. -0.57 D), higher cylinder magnitude (-2.24 D vs. -1.48 D), and a more myopic spherical equivalent (-2.56 D vs. -1.31 D) compared to subjective refraction (all P<0.05). The mean differences were -0.87 D for sphere, -0.76 D for cylinder, and -1.25 D for spherical equivalent. No significant differences were found for J0 and J45 values, indicating agreement in astigmatism axis (P>0.05). CONCLUSION: In patients with mild KCN, objective dry retinoscopy overestimates the degree of myopia and astigmatism compared to subjective refraction. The irregular cornea in KCN likely affects objective measurements, whereas subjective refraction allows compensation for the irregularity, providing a more accurate correction. When determining refractive targets, the tendency of objective methods to overcorrect should be considered.
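The quantities compared here follow standard power-vector algebra (Thibos notation): spherical equivalent M = S + C/2, with the cylinder decomposed into J0 = -(C/2)cos 2α and J45 = -(C/2)sin 2α. A small helper reproduces the abstract's spherical equivalents (-2.56 D and -1.31 D) from its mean sphere and cylinder values; the 90° axis is an arbitrary choice for illustration.

```python
import math

def power_vector(sphere, cyl, axis_deg):
    """Convert a refraction (S, C, axis) to Thibos power-vector
    components: spherical equivalent M and Jackson-cross terms
    J0 (with/against the rule) and J45 (oblique)."""
    a = math.radians(axis_deg)
    M = sphere + cyl / 2.0
    J0 = -(cyl / 2.0) * math.cos(2 * a)
    J45 = -(cyl / 2.0) * math.sin(2 * a)
    return M, J0, J45

# Mean objective vs. subjective values from the abstract (axis set to 90°):
for label, (s, c) in [("objective", (-1.44, -2.24)), ("subjective", (-0.57, -1.48))]:
    M, J0, J45 = power_vector(s, c, 90)
    print(f"{label}: M={M:+.2f} D, J0={J0:+.2f} D, J45={J45:+.2f} D")
# objective M = -2.56 D and subjective M = -1.31 D match the reported values
```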
In this paper, a two-stage light detection and ranging (LiDAR) three-dimensional (3D) object detection framework is presented, namely the point-voxel dual transformer (PV-DT3D), a transformer-based method. In the proposed PV-DT3D, point-voxel fusion features are used for proposal refinement. Specifically, keypoints are sampled from the entire point cloud scene and used to encode representative scene features via a proposal-aware voxel set abstraction module. Subsequently, following the generation of proposals by the region proposal network (RPN), the internally encoded keypoints are fed into the dual transformer encoder-decoder architecture. For 3D object detection, the proposed PV-DT3D takes advantage of both point-wise and channel-wise transformer architectures to capture contextual information from the spatial and channel dimensions. Experiments conducted on the highly competitive KITTI 3D car detection leaderboard show that PV-DT3D achieves superior detection accuracy among state-of-the-art point-voxel-based methods.
Three-dimensional (3D) object detection is crucial for applications such as robotic control and autonomous driving. While high-precision sensors like LiDAR are expensive, RGB-D sensors (e.g., Kinect) offer a cost-effective alternative, especially for indoor environments. However, RGB-D sensors still face limitations in accuracy and depth perception. This paper proposes an enhanced method that integrates attention-driven YOLOv9 with xLSTM into the F-ConvNet framework. By improving the precision of the 2D bounding boxes generated for 3D object detection, this method addresses issues in indoor environments with complex structures and occlusions. The proposed approach enhances detection accuracy and robustness by combining RGB images and depth data, offering improved indoor 3D object detection performance.
Transorbital craniocerebral injury is a relatively rare type of penetrating head injury that poses a significant threat to the ocular and cerebral structures [1]. The clinical prognosis of transorbital craniocerebral injury is closely related to the size, shape, speed, nature, and trajectory of the foreign object, as well as the incidence of central nervous system damage and secondary complications. The foreign objects reported to have caused these injuries are categorized into wooden items, metallic items [2-8], and other materials, which penetrate the intracranial region via five major pathways: the orbital roof (OR), superior orbital fissure (SOF), inferior orbital fissure (IOF), optic canal (OC), and sphenoid wing. Herein, we present eight cases of transorbital craniocerebral injury caused by an unusual metallic foreign body.
Detecting oriented targets in remote sensing images amidst complex and heterogeneous backgrounds remains a formidable challenge in the field of object detection. Current frameworks for oriented detection are constrained by intrinsic limitations, including excessive computational and memory overheads, discrepancies between predefined anchors and ground-truth bounding boxes, intricate training processes, and feature alignment inconsistencies. To overcome these challenges, we present ASL-OOD (Angle-based SIoU Loss for Oriented Object Detection), a novel, efficient, and robust one-stage framework tailored for oriented object detection. The ASL-OOD framework comprises three core components: the Transformer-based Backbone (TB), the Transformer-based Neck (TN), and the Angle-SIoU (Scylla Intersection over Union) based Decoupled Head (ASDH). By leveraging the Swin Transformer, the TB and TN modules offer several key advantages, such as the capacity to model long-range dependencies, preserve high-resolution feature representations, seamlessly integrate multi-scale features, and enhance parameter efficiency. These improvements empower the model to accurately detect objects across varying scales. The ASDH module further enhances detection performance by incorporating angle-aware optimization based on SIoU, ensuring precise angular consistency and bounding box coherence. This approach effectively harmonizes shape loss and distance loss during optimization, thereby significantly boosting detection accuracy. Comprehensive evaluations and ablation studies on standard benchmarks, with a mean Average Precision (mAP) of 80.16% on DOTA, 91.07% on HRSC2016, 85.45% on MAR20, and 39.7% on UAVDT, demonstrate the clear superiority of ASL-OOD over state-of-the-art oriented object detection models. These findings underscore the model's efficacy as an advanced solution for challenging remote sensing object detection tasks.