Advanced traffic monitoring systems encounter substantial challenges in vehicle detection and classification due to the limitations of conventional methods, which often demand extensive computational resources and struggle with diverse data acquisition techniques. This research presents a novel approach for vehicle classification and recognition in aerial image sequences, integrating multiple advanced techniques to enhance detection accuracy. The proposed model begins with preprocessing using Multiscale Retinex (MSR) to enhance image quality, followed by Expectation-Maximization (EM) Segmentation for precise foreground object identification. Vehicle detection is performed using the state-of-the-art YOLOv10 framework, while feature extraction incorporates Maximally Stable Extremal Regions (MSER), Dense Scale-Invariant Feature Transform (Dense SIFT), and Zernike Moments Features to capture distinct object characteristics. Feature optimization is further refined through a Hybrid Swarm-based Optimization algorithm, ensuring optimal feature selection for improved classification performance. The final classification is conducted using a Vision Transformer, leveraging its robust learning capabilities for enhanced accuracy. Experimental evaluations on benchmark datasets, including UAVDT and the Unmanned Aerial Vehicle Intruder Dataset (UAVID), demonstrate the superiority of the proposed approach, achieving an accuracy of 94.40% on UAVDT and 93.57% on UAVID. The results highlight the efficacy of the model in significantly enhancing vehicle detection and classification in aerial imagery, outperforming existing methodologies and offering a statistically validated improvement for intelligent traffic monitoring systems.
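As a reference for the Multiscale Retinex preprocessing step named above, the following is a minimal MSR sketch in Python rather than the paper's own code; the Gaussian scales, equal scale weights, output normalization, and file name are assumptions chosen for illustration.

```python
# Illustrative Multiscale Retinex (MSR) sketch; scales and weights are
# assumptions, not values taken from the paper.
import cv2
import numpy as np

def multiscale_retinex(image, sigmas=(15, 80, 250), eps=1.0):
    """MSR: weighted sum of log(image) - log(Gaussian-smoothed image)."""
    img = image.astype(np.float64) + eps          # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        blurred = cv2.GaussianBlur(img, (0, 0), sigma)
        msr += np.log(img) - np.log(blurred + eps)
    msr /= len(sigmas)                            # equal weight per scale
    # Stretch back to 8-bit range for the downstream detector.
    msr = cv2.normalize(msr, None, 0, 255, cv2.NORM_MINMAX)
    return msr.astype(np.uint8)

# Usage (hypothetical file name):
# enhanced = multiscale_retinex(cv2.imread("aerial_frame.png", cv2.IMREAD_GRAYSCALE))
```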
Traffic sign detection is an important part of autonomous driving, and its recognition accuracy and speed are directly related to road traffic safety. Although convolutional neural networks (CNNs) have made certain breakthroughs in this field, in the face of complex scenes, such as image blur and target occlusion, traffic sign detection continues to exhibit limited accuracy, accompanied by false positives and missed detections. To address these problems, a traffic sign detection algorithm, You Only Look Once-based Skip Dynamic Way (YOLO-SDW), built on You Only Look Once version 8 small (YOLOv8s), is proposed. Firstly, a Skip Connection Reconstruction (SCR) module is introduced to efficiently integrate fine-grained feature information and enhance the detection accuracy of the algorithm in complex scenes. Secondly, a C2f module based on Dynamic Snake Convolution (C2f-DySnake) is proposed to dynamically adjust the receptive field information, improve the algorithm's feature extraction ability for blurred or occluded targets, and reduce the occurrence of false detections and missed detections. Finally, the Wise Powerful IoU v2 (WPIoUv2) loss function is proposed to further improve the detection accuracy of the algorithm. Experimental results show that the average precision mAP@0.5 of YOLO-SDW on the TT100K dataset is 89.2%, and mAP@0.5:0.95 is 68.5%, which are 4% and 3.3% higher than the YOLOv8s baseline, respectively. YOLO-SDW ensures real-time performance while achieving higher accuracy.
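The YOLO-SDW modules themselves (SCR, C2f-DySnake, WPIoUv2) are not reproduced here, but the YOLOv8s baseline it is compared against can be trained and evaluated with the Ultralytics API roughly as sketched below; the dataset configuration file name is a hypothetical placeholder.

```python
# Baseline-only sketch: trains and evaluates a stock YOLOv8s model with the
# Ultralytics API. "tt100k.yaml" is a hypothetical dataset config file.
from ultralytics import YOLO

model = YOLO("yolov8s.pt")                 # pretrained YOLOv8s weights
model.train(data="tt100k.yaml", epochs=100, imgsz=640, batch=16)
metrics = model.val()                      # reports detection metrics on the val split
print(metrics.box.map50, metrics.box.map)  # map50 = mAP@0.5, map = mAP@0.5:0.95
```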
The ubiquity of mobile devices has driven advancements in mobile object detection. However, challenges in multi-scale object detection in open, complex environments persist due to limited computational resources. Traditional approaches like network compression, quantization, and lightweight design often sacrifice accuracy or feature representation robustness. This article introduces the Fast Multi-scale Channel Shuffling Network (FMCSNet), a novel lightweight detection model optimized for mobile devices. FMCSNet integrates a fully convolutional Multilayer Perceptron (MLP) module, offering global perception without significantly increasing parameters and effectively bridging the gap between CNNs and Vision Transformers. FMCSNet achieves a delicate balance between computation and accuracy mainly through two key modules: the ShiftMLP module, comprising a shift operation and an MLP module, and a Partial group Convolutional (PGConv) module, which reduces computation while enhancing information exchange between channels. With a computational complexity of 1.4G FLOPs and 1.3M parameters, FMCSNet outperforms CNN-based and DWConv-based ShuffleNetv2 by 1% and 4.5% mAP on the Pascal VOC 2007 dataset, respectively. Additionally, FMCSNet achieves a mAP of 30.0 (0.5:0.95 IoU threshold) with only 2.5G FLOPs and 2.0M parameters. It achieves 32 FPS on low-performance i5-series CPUs, meeting real-time detection requirements. The PGConv module's adaptability across scenarios further highlights FMCSNet as a promising solution for real-time mobile object detection.
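To illustrate the general idea behind computation-reducing partial convolutions such as PGConv (convolving only a fraction of the channels and passing the rest through), here is a minimal PyTorch sketch; the channel ratio and kernel size are assumptions, not the paper's design.

```python
# Illustrative "partial convolution" block: only a fraction of the channels is
# convolved, the rest pass through untouched. This sketches the general idea,
# not the paper's exact PGConv definition.
import torch
import torch.nn as nn

class PartialConv(nn.Module):
    def __init__(self, channels, ratio=0.25, kernel_size=3):
        super().__init__()
        self.conv_channels = max(1, int(channels * ratio))
        self.conv = nn.Conv2d(self.conv_channels, self.conv_channels,
                              kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        x1 = x[:, :self.conv_channels]           # channels that get convolved
        x2 = x[:, self.conv_channels:]           # channels passed through
        return torch.cat([self.conv(x1), x2], dim=1)

# x = torch.randn(1, 64, 32, 32); y = PartialConv(64)(x)  # y keeps shape (1, 64, 32, 32)
```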
Underwater pipeline inspection plays a vital role in the proactive maintenance and management of critical marine infrastructure and subaquatic systems. However, the inspection of underwater pipelines presents a challenge due to factors such as light scattering, absorption, restricted visibility, and ambient noise. The advancement of deep learning has introduced powerful techniques for processing large amounts of unstructured and imperfect data collected from underwater environments. This study evaluated the efficacy of the You Only Look Once (YOLO) algorithm, a real-time object detection and localization model based on convolutional neural networks, in identifying and classifying various types of pipeline defects in underwater settings. YOLOv8, the latest evolution in the YOLO family, integrates advanced capabilities, such as anchor-free detection, a cross-stage partial network backbone for efficient feature extraction, and a feature pyramid network + path aggregation network neck for robust multi-scale object detection, which make it particularly well-suited for complex underwater environments. Due to the lack of suitable open-access datasets for underwater pipeline defects, a custom dataset was captured using a remotely operated vehicle in a controlled environment. Extensive experimentation demonstrated that YOLOv8 X-Large consistently outperformed other models in pipe defect detection and classification and achieved a strong balance between precision and recall in identifying pipeline cracks, rust, corners, defective welds, flanges, tapes, and holes. This research establishes the baseline performance of YOLOv8 for underwater defect detection and showcases its potential to enhance the reliability and efficiency of pipeline inspection tasks in challenging underwater environments.
To address the issues of frequent identity switches (IDs) and degraded identification accuracy in multi-object tracking (MOT) under complex occlusion scenarios, this study proposes an occlusion-robust tracking framework based on face-pedestrian joint feature modeling. By constructing a joint tracking model centered on "intra-class independent tracking + cross-category dynamic binding", designing a multi-modal matching metric with spatio-temporal and appearance constraints, and innovatively introducing a cross-category feature mutual verification mechanism and a dual matching strategy, this work effectively resolves the performance degradation of traditional single-category tracking methods caused by short-term occlusion, cross-camera tracking, and crowded environments. Experiments on the Chokepoint_Face_Pedestrian_Track test set demonstrate that in complex scenes, the proposed method improves Face-Pedestrian Matching F1 area under the curve (F1 AUC) by approximately 4 to 43 percentage points compared to several traditional methods. The joint tracking model achieves overall performance metrics of IDF1 85.1825% and MOTA 86.5956%, representing improvements of 0.91 and 0.06 percentage points, respectively, over the baseline model. Ablation studies confirm the effectiveness of key modules such as the Intersection over Area (IoA)/Intersection over Union (IoU) joint metric and dynamic threshold adjustment, validating the significant role of the cross-category identity matching mechanism in enhancing tracking stability. The proposed model shows a 16.7% frames-per-second (FPS) drop versus FairMOT (fairness of detection and re-identification in multiple object tracking), with its cross-category binding module adding about 10% overhead, yet it maintains near-real-time performance for essential face-pedestrian tracking at small resolutions.
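The IoA/IoU joint metric builds on two standard box-overlap measures. The sketch below computes IoU and Intersection over Area for axis-aligned boxes, with the face/pedestrian naming used only as an illustration; the paper's specific weighting and thresholding of the two measures are not reproduced.

```python
# Standard IoU and Intersection-over-Area (IoA) for axis-aligned boxes
# in (x1, y1, x2, y2) format.
def box_intersection(a, b):
    w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    return w * h

def iou(a, b):
    inter = box_intersection(a, b)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def ioa(face_box, body_box):
    # Intersection normalized by the face-box area: useful for testing
    # whether a face lies inside a pedestrian box under occlusion.
    area_face = (face_box[2] - face_box[0]) * (face_box[3] - face_box[1])
    return box_intersection(face_box, body_box) / (area_face + 1e-9)
```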
With the rapid expansion of drone applications, accurate detection of objects in aerial imagery has become crucial for intelligent transportation, urban management, and emergency rescue missions. However, existing methods face numerous challenges in practical deployment, including scale variation handling, feature degradation, and complex backgrounds. To address these issues, we propose Edge-enhanced and Detail-Capturing You Only Look Once (EHDC-YOLO), a novel framework for object detection in Unmanned Aerial Vehicle (UAV) imagery. Based on the You Only Look Once version 11 nano (YOLOv11n) baseline, EHDC-YOLO systematically introduces several architectural enhancements: (1) a Multi-Scale Edge Enhancement (MSEE) module that leverages multi-scale pooling and edge information to enhance boundary feature extraction; (2) an Enhanced Feature Pyramid Network (EFPN) that integrates P2-level features with Cross Stage Partial (CSP) structures and OmniKernel convolutions for better fine-grained representation; and (3) Dynamic Head (DyHead) with multi-dimensional attention mechanisms for enhanced cross-scale modeling and perspective adaptability. Comprehensive experiments on the Vision meets Drones for Detection (VisDrone-DET) 2019 dataset demonstrate that EHDC-YOLO achieves significant improvements, increasing mean Average Precision (mAP)@0.5 from 33.2% to 46.1% (an absolute improvement of 12.9 percentage points) and mAP@0.5:0.95 from 19.5% to 28.0% (an absolute improvement of 8.5 percentage points) compared with the YOLOv11n baseline, while maintaining a reasonable parameter count (2.81M vs. the baseline's 2.58M). Further ablation studies confirm the effectiveness of each proposed component, while visualization results highlight EHDC-YOLO's superior performance in detecting objects and handling occlusions in complex drone scenarios.
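As an illustration of the edge-enhancement idea behind the MSEE module (not the module itself), the following PyTorch sketch adds a fixed-Sobel edge branch residually to a feature map; the depthwise Sobel filtering and the 1x1 fusion convolution are assumptions made for this sketch.

```python
# Illustrative edge-enhancement branch: fixed Sobel filters extract boundary
# responses that are fused back into the feature map. This only sketches the
# "use edge information to sharpen boundaries" idea, not MSEE itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelEdgeBranch(nn.Module):
    def __init__(self, channels):
        super().__init__()
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        ky = kx.t()
        # Depthwise weights: two Sobel responses (x and y) per input channel.
        weight = torch.stack([kx, ky]).unsqueeze(1).repeat(channels, 1, 1, 1)
        self.register_buffer("weight", weight)        # shape (2*C, 1, 3, 3)
        self.channels = channels
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        edges = F.conv2d(x, self.weight, padding=1, groups=self.channels)
        return x + self.fuse(edges)                    # residual edge enhancement

# y = SobelEdgeBranch(64)(torch.randn(1, 64, 40, 40))
```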
This study proposes a lightweight rice disease detection model optimized for edge computing environments. The goal is to enhance the You Only Look Once (YOLO) v5 architecture to achieve a balance between real-time diagnostic performance and computational efficiency. To this end, a total of 3234 high-resolution images (2400×1080) were collected for three major rice diseases (Rice Blast, Bacterial Blight, and Brown Spot) frequently found in actual rice cultivation fields. These images served as the training dataset. The proposed YOLOv5-V2 model removes the Focus layer from the original YOLOv5s and integrates ShuffleNet V2 into the backbone, thereby achieving both model compression and improved inference speed. Additionally, YOLOv5-P, based on PP-PicoDet, was configured as a comparative model to quantitatively evaluate performance. Experimental results demonstrated that YOLOv5-V2 achieved excellent detection performance, with an mAP 0.5 of 89.6%, mAP 0.5–0.95 of 66.7%, precision of 91.3%, and recall of 85.6%, while maintaining a lightweight model size of 6.45 MB. In contrast, YOLOv5-P exhibited a smaller model size of 4.03 MB but showed lower performance, with an mAP 0.5 of 70.3%, mAP 0.5–0.95 of 35.2%, precision of 62.3%, and recall of 74.1%. This study lays a technical foundation for the implementation of smart agriculture and real-time disease diagnosis systems by proposing a model that satisfies both accuracy and lightweight requirements.
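The ShuffleNet V2 backbone that replaces part of YOLOv5s relies on the channel-shuffle operation; a standard implementation is sketched below for reference (this is the generic operation, not the paper's full backbone).

```python
# The channel-shuffle operation at the core of ShuffleNet V2: reshape the
# channel dimension into (groups, channels_per_group), transpose, and flatten,
# so information mixes across grouped convolutions at almost no extra cost.
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    n, c, h, w = x.shape
    assert c % groups == 0
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

# x = torch.randn(2, 64, 20, 20); y = channel_shuffle(x, groups=2)
```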
In printed circuit board (PCB) manufacturing, surface defects can significantly affect product quality. To address the performance degradation, high false detection rates, and missed detections caused by complex backgrounds in current intelligent inspection algorithms, this paper proposes CG-YOLOv8, a lightweight and improved model based on YOLOv8n for PCB surface defect detection. The proposed method optimizes the network architecture and compresses parameters to reduce model complexity while maintaining high detection accuracy, thereby enhancing the capability of identifying diverse defects under complex conditions. Specifically, a cascaded multi-receptive field (CMRF) module is adopted to replace the SPPF module in the backbone to improve feature perception, and an inverted residual mobile block (IRMB) is integrated into the C2f module to further enhance performance. Additionally, conventional convolution layers are replaced with Grouped Spatial Convolution (GSConv) to reduce computational cost, and a lightweight Convolutional Block Attention Module based Convolution (CBAMConv) module is introduced after GSConv to preserve accuracy through attention mechanisms. The detection head is also optimized by removing the medium- and large-scale detection layers, thereby enhancing the model's ability to detect small-scale defects and further reducing complexity. Experimental results show that, compared to the original YOLOv8n, the proposed CG-YOLOv8 reduces parameter count by 53.9%, improves mAP@0.5 by 2.2%, and increases precision and recall by 2.0% and 1.8%, respectively. These improvements demonstrate that CG-YOLOv8 offers an efficient and lightweight solution for PCB surface defect detection.
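For reference, a minimal version of the CBAM attention named in the CBAMConv module is sketched below (channel attention followed by spatial attention); the reduction ratio and 7x7 spatial kernel are common defaults and not necessarily the paper's settings.

```python
# Minimal CBAM sketch: channel attention from pooled descriptors, then spatial
# attention from channel-wise average/max maps. The paper's CBAMConv wrapper
# around GSConv is not reproduced here.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Channel attention from average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention from channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```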
Modern manufacturing processes have become more reliant on automation because of the accelerated transition from Industry 3.0 to Industry 4.0. Manual inspection of products on assembly lines remains inefficient, prone to errors, and lacking in consistency, emphasizing the need for a reliable and automated inspection system. Leveraging both object detection and image segmentation approaches, this research proposes a vision-based solution for detecting various kinds of tools in a toolkit using deep learning (DL) models. Two Intel RealSense D455f depth cameras were arranged in a top-down configuration to capture both RGB and depth images of the toolkits. After applying multiple constraints and enhancing the images through preprocessing and augmentation, a dataset consisting of 3300 annotated RGB-D photos was generated. Several DL models were selected through a comprehensive assessment of mean Average Precision (mAP), precision-recall equilibrium, inference latency (target ≥ 30 FPS), and computational burden, resulting in a preference for YOLO and Region-based Convolutional Neural Network (R-CNN) variants over ViT-based models due to the latter's increased latency and resource requirements. YOLOv5, YOLOv8, YOLOv11, Faster R-CNN, and Mask R-CNN were trained on the annotated dataset and evaluated using key performance metrics (Recall, Accuracy, F1-score, and Precision). YOLOv11 demonstrated balanced excellence with 93.0% precision, 89.9% recall, and a 90.6% F1-score in object detection, as well as 96.9% precision, 95.3% recall, and a 96.5% F1-score in instance segmentation, with an average inference time of 25 ms per frame (≈40 FPS), demonstrating real-time performance. Leveraging these results, a YOLOv11-based Windows application was successfully deployed in a real-time assembly line environment, where it accurately processed live video streams to detect and segment tools within toolkits, demonstrating its practical effectiveness in industrial automation. In addition to detection and segmentation, the application can precisely measure socket dimensions by applying edge detection techniques to the YOLOv11 segmentation masks. This enables specification-level quality control directly on the assembly line and improves real-time inspection capability. The implementation represents a significant step forward for intelligent manufacturing in the Industry 4.0 paradigm, providing a scalable, efficient, and accurate way to perform automated inspection and dimensional verification tasks.
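A plausible sketch of the dimension-measurement step (contour extraction on a segmentation mask followed by a rotated bounding box) is shown below; the mask source and the millimetres-per-pixel factor are placeholders, since the deployed application's calibration procedure is not described in detail.

```python
# Sketch of dimension measurement from a binary segmentation mask: find the
# largest contour, fit a rotated rectangle, and convert pixel side lengths to
# millimetres with an assumed calibration factor.
import cv2
import numpy as np

def socket_dimensions_mm(mask: np.ndarray, mm_per_pixel: float):
    """mask: uint8 binary mask (0/255) for one detected socket."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (_, _), (w_px, h_px), _ = cv2.minAreaRect(largest)      # rotated bounding box
    return sorted((w_px * mm_per_pixel, h_px * mm_per_pixel))  # (width, length) in mm

# dims = socket_dimensions_mm(mask, mm_per_pixel=0.21)  # scale factor is hypothetical
```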
Dear Editor, This letter focuses on the fact that small objects with few pixels disappear in feature maps with large receptive fields, as the network deepens, in object detection tasks. Therefore, the detection of dense small objects is challenging.
Objective weather classification methods have been extensively applied to identify dominant ozone-favorable synoptic weather patterns (SWPs); however, the consistency of different classification methods is rarely examined. In this study, we apply two widely used objective methods, the self-organizing map (SOM) and K-means clustering analysis, to derive ozone-favorable SWPs at four Chinese megacities in 2015-2022. We find that the two algorithms are largely consistent in recognizing dominant ozone-favorable SWPs for the four Chinese megacities. In the case of classifying six SWPs, the derived circulation fields are highly similar, with a spatial correlation of 0.99 between the two methods, and the difference in the mean frequency of each SWP is less than 7%. The six dominant ozone-favorable SWPs in Guangzhou are all characterized by anomalously higher radiation and temperature, lower cloud cover, relative humidity, and wind speed, and stronger subsidence compared to the climatological mean. We find that during 2015-2022, the occurrence of ozone-favorable SWP days increased significantly at a rate of 3.2 days/year, faster than the increase in ozone exceedance days (3.0 days/year). The interannual variability of the occurrence of ozone-favorable SWPs is generally consistent with that of ozone exceedance days, with a temporal correlation coefficient of 0.6. In particular, the significant increase in ozone-favorable SWPs in 2022, especially the Subtropical High type, which typically occurs in September, is consistent with a long-lasting ozone pollution episode in Guangzhou during September 2022. Our results thus reveal that the enhanced frequency of ozone-favorable SWPs plays an important role in the observed 2015-2022 ozone increase in Guangzhou.
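A minimal sketch of the K-means step is shown below, clustering daily circulation anomaly fields into six SWPs with scikit-learn; the choice of 500-hPa geopotential height anomalies and the array layout are assumptions for illustration, not the study's exact configuration.

```python
# Sketch of the K-means classification step: cluster daily circulation anomaly
# fields into six SWPs. Input is assumed to be shaped (n_days, n_lat, n_lon).
import numpy as np
from sklearn.cluster import KMeans

def classify_swps(z500_anom: np.ndarray, n_patterns: int = 6, seed: int = 0):
    n_days = z500_anom.shape[0]
    fields = z500_anom.reshape(n_days, -1)          # flatten each daily map
    km = KMeans(n_clusters=n_patterns, n_init=20, random_state=seed).fit(fields)
    centroids = km.cluster_centers_.reshape(n_patterns, *z500_anom.shape[1:])
    return km.labels_, centroids                    # daily SWP label + mean pattern

# labels, patterns = classify_swps(z500_anom)
# frequency = np.bincount(labels) / labels.size     # occurrence frequency of each SWP
```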
Drone-based small object detection is of great significance in practical applications such as military actions, disaster rescue, transportation, etc. However, the severe scale differences in objects captured by drones and lack of detail information for small-scale objects make drone-based small object detection a formidable challenge. To address these issues, we first develop a mathematical model to explore how changing receptive fields impacts the polynomial fitting results. Subsequently, based on the obtained conclusions, we propose a simple but effective Hybrid Receptive Field Network (HRFNet), whose modules include Hybrid Feature Augmentation (HFA), Hybrid Feature Pyramid (HFP) and Dual Scale Head (DSH). Specifically, HFA employs parallel dilated convolution kernels of different sizes to extend shallow features with different receptive fields, committed to improving the multi-scale adaptability of the network; HFP enhances the perception of small objects by capturing contextual information across layers, while DSH reconstructs the original prediction head utilizing a set of high-resolution features and ultrahigh-resolution features. In addition, in order to train HRFNet, the corresponding dual-scale loss function is designed. Finally, comprehensive evaluation results on public benchmarks such as VisDrone-DET and TinyPerson demonstrate the robustness of the proposed method. Most impressively, the proposed HRFNet achieves a mAP of 51.0 on VisDrone-DET with 29.3M parameters, which outperforms the extant state-of-the-art detectors. HRFNet also performs excellently in complex scenarios captured by drones, achieving the best performance on the CS-Drone dataset we built.
Deep learning-based object detection has revolutionized various fields, including agriculture. This paper presents a systematic review based on the PRISMA 2020 approach for object detection techniques in agriculture by exploring the evolution of different methods and applications over the past three years, highlighting the shift from conventional computer vision to deep learning-based methodologies owing to their enhanced efficacy in real time. The review emphasizes the integration of advanced models, such as You Only Look Once (YOLO) v9, v10, EfficientDet, Transformer-based models, and hybrid frameworks that improve the precision, accuracy, and scalability for crop monitoring and disease detection. The review also highlights benchmark datasets and evaluation metrics. It addresses limitations, like domain adaptation challenges, dataset heterogeneity, and occlusion, while offering insights into prospective research avenues, such as multimodal learning, explainable AI, and federated learning. Furthermore, the main aim of this paper is to serve as a thorough resource guide for scientists, researchers, and stakeholders for implementing deep learning-based object detection methods for the development of intelligent, robust, and sustainable agricultural systems.
Extreme ozone pollution events (EOPEs) are associated with synoptic weather patterns (SWPs) and pose severe health and ecological risks. However, a systematic investigation of the meteorological causes, transport pathways, and source contributions to historical EOPEs is still lacking. In this paper, the K-means clustering method is applied to identify six dominant SWPs during the warm season in the Yangtze River Delta (YRD) region from 2016 to 2022. It provides an integrated analysis of the meteorological factors affecting ozone pollution in Hefei under different SWPs. Using the WRF-FLEXPART model, the transport pathways (TPPs) and geographical sources of the near-surface air masses in Hefei during EOPEs are investigated. The results reveal that Hefei experienced the highest ozone concentration (134.77 ± 42.82 μg/m^3), exceedance frequency (46 days, 23.23%), and proportion of EOPEs (21 instances, 47.7%) under the control of the peripheral subsidence of typhoons (Type 5). Regional southeast winds correlated with the ozone pollution in Hefei. During EOPEs, a high boundary layer height, solar radiation, and temperature; low humidity and cloud cover; and pronounced subsidence airflow occurred over Hefei and the broader YRD region. The East-South (E_S) patterns exhibited the highest frequency (28 instances, 65.11%). Regarding the TPPs and geographical sources of the near-surface air masses during historical EOPEs, the YRD was the main source of land-originating air masses under E_S patterns (50.28%), with Hefei, southern Anhui, southern Jiangsu, and northern Zhejiang being key contributors. These findings can help improve ozone pollution early-warning and control mechanisms at urban and regional scales.
Existing single-pixel imaging (SPI) and sensing techniques suffer from poor reconstruction quality and heavy computation cost, limiting their widespread application. To tackle these challenges, we propose a large-scale single-pixel imaging and sensing (SPIS) technique that enables high-quality megapixel SPI and highly efficient image-free sensing with a low sampling rate. Specifically, we first scan and sample the entire scene using small-size optimized patterns to obtain information-coupled measurements. Compared with the conventional full-sized patterns, small-sized optimized patterns achieve higher imaging fidelity and sensing accuracy with one order of magnitude fewer pattern parameters. Next, the coupled measurements are processed through a transformer-based encoder to extract high-dimensional features, followed by a task-specific plug-and-play decoder for imaging or image-free sensing. Considering that regions with rich textures and edges are more difficult to reconstruct, we use an uncertainty-driven self-adaptive loss function to reinforce the network's attention to these regions, thereby improving the imaging and sensing performance. Extensive experiments demonstrate that the reported technique achieves 24.13 dB megapixel SPI at a sampling rate of 3% within 1 s. In terms of sensing, it outperforms existing methods by 12% on image-free segmentation accuracy and achieves state-of-the-art image-free object detection accuracy with an order of magnitude less data bandwidth.
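To make the single-pixel measurement model concrete, the toy sketch below simulates pattern-wise measurements of a small scene and a plain least-squares reconstruction; the paper's optimized small-size patterns and transformer decoder are not reproduced, and the scene size and sampling rate here are illustrative only.

```python
# Toy single-pixel imaging forward model: each measurement is the inner product
# of the scene with one illumination pattern. Reconstruction here is a plain
# minimum-norm least-squares solve on a 32x32 scene, standing in for the
# paper's learned decoder.
import numpy as np

rng = np.random.default_rng(0)
h = w = 32
n_pixels = h * w
sampling_rate = 0.25
n_meas = int(sampling_rate * n_pixels)

scene = rng.random((h, w))                        # unknown scene (stand-in)
patterns = rng.integers(0, 2, size=(n_meas, n_pixels)).astype(float)  # binary patterns
measurements = patterns @ scene.ravel()           # single-pixel detector readings

recon = np.linalg.lstsq(patterns, measurements, rcond=None)[0].reshape(h, w)
print("relative error:", np.linalg.norm(recon - scene) / np.linalg.norm(scene))
```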
This paper introduces an advanced and efficient method for distributed drone-based fruit recognition and localization, tailored to satisfy the precision and security requirements of autonomous agricultural operations. Our method incorporates depth information to ensure precise localization and utilizes a streamlined detection network centered on the RepVGG module. This module replaces the traditional C2f module, enhancing detection performance while maintaining speed. To bolster the detection of small, distant fruits in complex settings, we integrate Selective Kernel Attention (SKAttention) and a specialized small-target detection layer. This adaptation allows the system to manage difficult conditions, such as variable lighting and obstructive foliage. To reinforce security, the tasks of recognition and localization are distributed among multiple drones, enhancing resilience against tampering and data manipulation. This distribution also optimizes resource allocation through collaborative processing. The model remains lightweight and is optimized for rapid and accurate detection, which is essential for real-time applications. Our proposed system, validated with a D435 depth camera, achieves a mean Average Precision (mAP) of 0.943 and a frame rate of 169 FPS, which represents a significant improvement over the baseline by 0.039 percentage points and 25 FPS, respectively. Additionally, the average localization error is reduced to 0.82 cm, highlighting the model's high precision. These enhancements render our system highly effective for secure, autonomous fruit-picking operations, effectively addressing significant performance and cybersecurity challenges in agriculture. This approach establishes a foundation for reliable, efficient, and secure distributed fruit-picking applications, facilitating the advancement of autonomous systems in contemporary agricultural practices.
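Depth-based localization of a detected fruit reduces to back-projecting the detection center and its depth reading through the pinhole camera model; a sketch is shown below, with hypothetical intrinsics standing in for the D435's calibrated values.

```python
# Pinhole back-projection of a detection center (u, v) and its depth reading
# to a 3D point in the camera frame. The intrinsics are placeholders; in
# practice they come from the depth camera's calibration.
def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

# Example with hypothetical intrinsics for a 640x480 stream:
# xyz = pixel_to_camera_xyz(u=410, v=226, depth_m=0.87,
#                           fx=615.0, fy=615.0, cx=320.0, cy=240.0)
```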
The gears of new energy vehicles are required to withstand higher rotational speeds and greater loads, which imposes higher precision requirements on gear manufacturing. However, machining process parameters can cause changes in cutting force and heat, which in turn affect gear machining precision. Therefore, this paper studies the effect of different process parameters on gear machining precision. A multi-objective optimization model is established for the relationship between process parameters and tooth surface deviations, tooth profile deviations, and tooth lead deviations through the cutting speed, feed rate, and cutting depth of the worm wheel gear grinding machine. The response surface method (RSM) is used for experimental design, and the corresponding experimental results and optimal process parameters are obtained. Subsequently, gray relational analysis-principal component analysis (GRA-PCA), particle swarm optimization (PSO), and genetic algorithm-particle swarm optimization (GA-PSO) methods are used to analyze the experimental results and obtain different optimal process parameters. The results show that the optimal process parameters obtained by the GRA-PCA, PSO, and GA-PSO methods all improve the gear machining precision. Moreover, the gear machining precision obtained by GA-PSO is superior to that of the other methods.
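As a sketch of how a PSO-type optimizer searches a fitted response surface for process parameters (not the paper's GA-PSO variant or its fitted coefficients), the following minimal implementation minimizes a hypothetical quadratic RSM model over normalized cutting speed, feed rate, and cutting depth.

```python
# Compact particle swarm optimization (PSO) sketch minimizing a quadratic
# response-surface stand-in for tooth deviation vs. (cutting speed, feed rate,
# cutting depth). The objective coefficients and bounds are placeholders.
import numpy as np

def response_surface(x):                       # x: (n_particles, 3), normalized
    # Hypothetical fitted RSM model, not the paper's coefficients.
    return (1.0 + 0.3 * x[:, 0] ** 2 + 0.5 * x[:, 1] ** 2
            + 0.2 * x[:, 2] ** 2 - 0.1 * x[:, 0] * x[:, 1])

def pso(obj, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), obj(pos)
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = obj(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

best_params, best_dev = pso(response_surface,
                            bounds=(np.array([-1., -1., -1.]), np.array([1., 1., 1.])))
```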
To maintain the reliability of power systems, routine inspections using drones equipped with advanced object detection algorithms are essential for preempting power-related issues. The increasing resolution of drone-captured images has posed a challenge for traditional target detection methods, especially in identifying small objects in high-resolution images. This study presents an enhanced object detection algorithm based on the Faster Region-based Convolutional Neural Network (Faster R-CNN) framework, specifically tailored for detecting small-scale electrical components such as insulators, shock hammers, and screws on transmission lines. The algorithm features an improved backbone network for Faster R-CNN, which significantly boosts the feature extraction network's ability to detect fine details. The Region Proposal Network is optimized using a guided feature refinement (GFR) method, which achieves a balance between accuracy and speed. The incorporation of Generalized Intersection over Union (GIoU) and Region of Interest (ROI) Align further refines the model's accuracy. Experimental results demonstrate a notable improvement in mean Average Precision, reaching 89.3%, an 11.1% increase compared to the standard Faster R-CNN. This highlights the effectiveness of the proposed algorithm in identifying electrical components in high-resolution aerial images.
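The GIoU term mentioned above follows the standard formula (IoU minus the fraction of the smallest enclosing box not covered by the union); for reference, a scalar implementation for two axis-aligned boxes is given below.

```python
# Generalized IoU (GIoU) for two axis-aligned boxes in (x1, y1, x2, y2) format.
def giou(a, b, eps=1e-9):
    inter_w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    inter_h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = inter_w * inter_h
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou = inter / (union + eps)
    # Smallest axis-aligned box enclosing both a and b.
    enc_w = max(a[2], b[2]) - min(a[0], b[0])
    enc_h = max(a[3], b[3]) - min(a[1], b[1])
    enc = enc_w * enc_h
    return iou - (enc - union) / (enc + eps)
```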
Dear Editor, This letter addresses the impulse game problem for a general scope of deterministic, multi-player, nonzero-sum differential games wherein all participants adopt impulse controls. Our objective is to formulate this impulse game problem with the modified objective function including interaction costs among the players in a discontinuous fashion, and subsequently, to derive a verification theorem for identifying the feedback Nash equilibrium strategy.
Pulmonary nodules represent an early manifestation of lung cancer. However, pulmonary nodules only constitute a small portion of the overall image, posing challenges for physicians in image interpretation and potentially leading to false positives or missed detections. To solve these problems, the YOLOv8 network is enhanced by adding deformable convolution and atrous spatial pyramid pooling (ASPP), along with the integration of a coordinate attention (CA) mechanism. This allows the network to focus on small targets while expanding the receptive field without losing resolution. At the same time, context information on the target is gathered and feature expression is enhanced by attention modules in different directions. It effectively improves the positioning accuracy and achieves good results on the LUNA16 dataset. Compared with other detection algorithms, it improves the accuracy of pulmonary nodule detection to a certain extent.
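For reference, a compact ASPP block of the kind added to the YOLOv8 network is sketched below in PyTorch; the dilation rates are common defaults and an assumption, not the paper's configuration.

```python
# Compact atrous spatial pyramid pooling (ASPP) block: parallel dilated
# convolutions widen the receptive field without reducing resolution.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.project(torch.cat(feats, dim=1))

# y = ASPP(256, 256)(torch.randn(1, 256, 20, 20))  # y: (1, 256, 20, 20)
```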
文摘Advanced traffic monitoring systems encounter substantial challenges in vehicle detection and classification due to the limitations of conventional methods,which often demand extensive computational resources and struggle with diverse data acquisition techniques.This research presents a novel approach for vehicle classification and recognition in aerial image sequences,integrating multiple advanced techniques to enhance detection accuracy.The proposed model begins with preprocessing using Multiscale Retinex(MSR)to enhance image quality,followed by Expectation-Maximization(EM)Segmentation for precise foreground object identification.Vehicle detection is performed using the state-of-the-art YOLOv10 framework,while feature extraction incorporates Maximally Stable Extremal Regions(MSER),Dense Scale-Invariant Feature Transform(Dense SIFT),and Zernike Moments Features to capture distinct object characteristics.Feature optimization is further refined through a Hybrid Swarm-based Optimization algorithm,ensuring optimal feature selection for improved classification performance.The final classification is conducted using a Vision Transformer,leveraging its robust learning capabilities for enhanced accuracy.Experimental evaluations on benchmark datasets,including UAVDT and the Unmanned Aerial Vehicle Intruder Dataset(UAVID),demonstrate the superiority of the proposed approach,achieving an accuracy of 94.40%on UAVDT and 93.57%on UAVID.The results highlight the efficacy of the model in significantly enhancing vehicle detection and classification in aerial imagery,outperforming existing methodologies and offering a statistically validated improvement for intelligent traffic monitoring systems compared to existing approaches.
基金funded by Key research and development Program of Henan Province(No.251111211200)National Natural Science Foundation of China(Grant No.U2004163).
文摘Traffic sign detection is an important part of autonomous driving,and its recognition accuracy and speed are directly related to road traffic safety.Although convolutional neural networks(CNNs)have made certain breakthroughs in this field,in the face of complex scenes,such as image blur and target occlusion,the traffic sign detection continues to exhibit limited accuracy,accompanied by false positives and missed detections.To address the above problems,a traffic sign detection algorithm,You Only Look Once-based Skip Dynamic Way(YOLO-SDW)based on You Only Look Once version 8 small(YOLOv8s),is proposed.Firstly,a Skip Connection Reconstruction(SCR)module is introduced to efficiently integrate fine-grained feature information and enhance the detection accuracy of the algorithm in complex scenes.Secondly,a C2f module based on Dynamic Snake Convolution(C2f-DySnake)is proposed to dynamically adjust the receptive field information,improve the algorithm’s feature extraction ability for blurred or occluded targets,and reduce the occurrence of false detections and missed detections.Finally,the Wise Powerful IoU v2(WPIoUv2)loss function is proposed to further improve the detection accuracy of the algorithm.Experimental results show that the average precision mAP@0.5 of YOLO-SDW on the TT100K dataset is 89.2%,and mAP@0.5:0.95 is 68.5%,which is 4%and 3.3%higher than the YOLOv8s baseline,respectively.YOLO-SDW ensures real-time performance while having higher accuracy.
基金funded by the National Natural Science Foundation of China under Grant No.62371187the Open Program of Hunan Intelligent Rehabilitation Robot and Auxiliary Equipment Engineering Technology Research Center under Grant No.2024JS101.
文摘The ubiquity of mobile devices has driven advancements in mobile object detection.However,challenges in multi-scale object detection in open,complex environments persist due to limited computational resources.Traditional approaches like network compression,quantization,and lightweight design often sacrifice accuracy or feature representation robustness.This article introduces the Fast Multi-scale Channel Shuffling Network(FMCSNet),a novel lightweight detection model optimized for mobile devices.FMCSNet integrates a fully convolutional Multilayer Perceptron(MLP)module,offering global perception without significantly increasing parameters,effectively bridging the gap between CNNs and Vision Transformers.FMCSNet achieves a delicate balance between computation and accuracy mainly by two key modules:the ShiftMLP module,including a shift operation and an MLP module,and a Partial group Convolutional(PGConv)module,reducing computation while enhancing information exchange between channels.With a computational complexity of 1.4G FLOPs and 1.3M parameters,FMCSNet outperforms CNN-based and DWConv-based ShuffleNetv2 by 1%and 4.5%mAP on the Pascal VOC 2007 dataset,respectively.Additionally,FMCSNet achieves a mAP of 30.0(0.5:0.95 IoU threshold)with only 2.5G FLOPs and 2.0M parameters.It achieves 32 FPS on low-performance i5-series CPUs,meeting real-time detection requirements.The versatility of the PGConv module’s adaptability across scenarios further highlights FMCSNet as a promising solution for real-time mobile object detection.
文摘Underwater pipeline inspection plays a vital role in the proactive maintenance and management of critical marine infrastructure and subaquatic systems.However,the inspection of underwater pipelines presents a challenge due to factors such as light scattering,absorption,restricted visibility,and ambient noise.The advancement of deep learning has introduced powerful techniques for processing large amounts of unstructured and imperfect data collected from underwater environments.This study evaluated the efficacy of the You Only Look Once(YOLO)algorithm,a real-time object detection and localization model based on convolutional neural networks,in identifying and classifying various types of pipeline defects in underwater settings.YOLOv8,the latest evolution in the YOLO family,integrates advanced capabilities,such as anchor-free detection,a cross-stage partial network backbone for efficient feature extraction,and a feature pyramid network+path aggregation network neck for robust multi-scale object detection,which make it particularly well-suited for complex underwater environments.Due to the lack of suitable open-access datasets for underwater pipeline defects,a custom dataset was captured using a remotely operated vehicle in a controlled environment.This application has the following assets available for use.Extensive experimentation demonstrated that YOLOv8 X-Large consistently outperformed other models in terms of pipe defect detection and classification and achieved a strong balance between precision and recall in identifying pipeline cracks,rust,corners,defective welds,flanges,tapes,and holes.This research establishes the baseline performance of YOLOv8 for underwater defect detection and showcases its potential to enhance the reliability and efficiency of pipeline inspection tasks in challenging underwater environments.
基金supported by the confidential research grant No.a8317。
文摘To address the issues of frequent identity switches(IDs)and degraded identification accuracy in multi object tracking(MOT)under complex occlusion scenarios,this study proposes an occlusion-robust tracking framework based on face-pedestrian joint feature modeling.By constructing a joint tracking model centered on“intra-class independent tracking+cross-category dynamic binding”,designing a multi-modal matching metric with spatio-temporal and appearance constraints,and innovatively introducing a cross-category feature mutual verification mechanism and a dual matching strategy,this work effectively resolves performance degradation in traditional single-category tracking methods caused by short-term occlusion,cross-camera tracking,and crowded environments.Experiments on the Chokepoint_Face_Pedestrian_Track test set demonstrate that in complex scenes,the proposed method improves Face-Pedestrian Matching F1 area under the curve(F1 AUC)by approximately 4 to 43 percentage points compared to several traditional methods.The joint tracking model achieves overall performance metrics of IDF1:85.1825%and MOTA:86.5956%,representing improvements of 0.91 and 0.06 percentage points,respectively,over the baseline model.Ablation studies confirm the effectiveness of key modules such as the Intersection over Area(IoA)/Intersection over Union(IoU)joint metric and dynamic threshold adjustment,validating the significant role of the cross-category identity matching mechanism in enhancing tracking stability.Our_model shows a 16.7%frame per second(FPS)drop vs.fairness of detection and re-identification in multiple object tracking(FairMOT),with its cross-category binding module adding aboute 10%overhead,yet maintains near-real-time performance for essential face-pedestrian tracking at small resolutions.
文摘With the rapid expansion of drone applications,accurate detection of objects in aerial imagery has become crucial for intelligent transportation,urban management,and emergency rescue missions.However,existing methods face numerous challenges in practical deployment,including scale variation handling,feature degradation,and complex backgrounds.To address these issues,we propose Edge-enhanced and Detail-Capturing You Only Look Once(EHDC-YOLO),a novel framework for object detection in Unmanned Aerial Vehicle(UAV)imagery.Based on the You Only Look Once version 11 nano(YOLOv11n)baseline,EHDC-YOLO systematically introduces several architectural enhancements:(1)a Multi-Scale Edge Enhancement(MSEE)module that leverages multi-scale pooling and edge information to enhance boundary feature extraction;(2)an Enhanced Feature Pyramid Network(EFPN)that integrates P2-level features with Cross Stage Partial(CSP)structures and OmniKernel convolutions for better fine-grained representation;and(3)Dynamic Head(DyHead)with multi-dimensional attention mechanisms for enhanced cross-scale modeling and perspective adaptability.Comprehensive experiments on the Vision meets Drones for Detection(VisDrone-DET)2019 dataset demonstrate that EHDC-YOLO achieves significant improvements,increasing mean Average Precision(mAP)@0.5 from 33.2%to 46.1%(an absolute improvement of 12.9 percentage points)and mAP@0.5:0.95 from 19.5%to 28.0%(an absolute improvement of 8.5 percentage points)compared with the YOLOv11n baseline,while maintaining a reasonable parameter count(2.81 M vs the baseline’s 2.58 M).Further ablation studies confirm the effectiveness of each proposed component,while visualization results highlight EHDC-YOLO’s superior performance in detecting objects and handling occlusions in complex drone scenarios.
文摘This study proposes a lightweight rice disease detection model optimized for edge computing environments.The goal is to enhance the You Only Look Once(YOLO)v5 architecture to achieve a balance between real-time diagnostic performance and computational efficiency.To this end,a total of 3234 high-resolution images(2400×1080)were collected from three major rice diseases Rice Blast,Bacterial Blight,and Brown Spot—frequently found in actual rice cultivation fields.These images served as the training dataset.The proposed YOLOv5-V2 model removes the Focus layer from the original YOLOv5s and integrates ShuffleNet V2 into the backbone,thereby resulting in both model compression and improved inference speed.Additionally,YOLOv5-P,based on PP-PicoDet,was configured as a comparative model to quantitatively evaluate performance.Experimental results demonstrated that YOLOv5-V2 achieved excellent detection performance,with an mAP 0.5 of 89.6%,mAP 0.5–0.95 of 66.7%,precision of 91.3%,and recall of 85.6%,while maintaining a lightweight model size of 6.45 MB.In contrast,YOLOv5-P exhibited a smaller model size of 4.03 MB,but showed lower performance with an mAP 0.5 of 70.3%,mAP 0.5–0.95 of 35.2%,precision of 62.3%,and recall of 74.1%.This study lays a technical foundation for the implementation of smart agriculture and real-time disease diagnosis systems by proposing a model that satisfies both accuracy and lightweight requirements.
基金funded by the Joint Funds of the National Natural Science Foundation of China(U2341223)the Beijing Municipal Natural Science Foundation(No.4232067).
文摘In printed circuit board(PCB)manufacturing,surface defects can significantly affect product quality.To address the performance degradation,high false detection rates,and missed detections caused by complex backgrounds in current intelligent inspection algorithms,this paper proposes CG-YOLOv8,a lightweight and improved model based on YOLOv8n for PCB surface defect detection.The proposed method optimizes the network architecture and compresses parameters to reduce model complexity while maintaining high detection accuracy,thereby enhancing the capability of identifying diverse defects under complex conditions.Specifically,a cascaded multi-receptive field(CMRF)module is adopted to replace the SPPF module in the backbone to improve feature perception,and an inverted residual mobile block(IRMB)is integrated into the C2f module to further enhance performance.Additionally,conventional convolution layers are replaced with GSConv to reduce computational cost,and a lightweight Convolutional Block Attention Module based Convolution(CBAMConv)module is introduced after Grouped Spatial Convolution(GSConv)to preserve accuracy through attention mechanisms.The detection head is also optimized by removing medium and large-scale detection layers,thereby enhancing the model’s ability to detect small-scale defects and further reducing complexity.Experimental results show that,compared to the original YOLOv8n,the proposed CG-YOLOv8 reduces parameter count by 53.9%,improves mAP@0.5 by 2.2%,and increases precision and recall by 2.0%and 1.8%,respectively.These improvements demonstrate that CG-YOLOv8 offers an efficient and lightweight solution for PCB surface defect detection.
基金National Science and Technology Council,the Republic of China,under grants NSTC 113-2221-E-194-011-MY3 and Research Center on Artificial Intelligence and Sustainability,National Chung Cheng University under the research project grant titled“Generative Digital Twin System Design for Sustainable Smart City Development in Taiwan.
文摘Modern manufacturing processes have become more reliant on automation because of the accelerated transition from Industry 3.0 to Industry 4.0.Manual inspection of products on assembly lines remains inefficient,prone to errors and lacks consistency,emphasizing the need for a reliable and automated inspection system.Leveraging both object detection and image segmentation approaches,this research proposes a vision-based solution for the detection of various kinds of tools in the toolkit using deep learning(DL)models.Two Intel RealSense D455f depth cameras were arranged in a top down configuration to capture both RGB and depth images of the toolkits.After applying multiple constraints and enhancing them through preprocessing and augmentation,a dataset consisting of 3300 annotated RGB-D photos was generated.Several DL models were selected through a comprehensive assessment of mean Average Precision(mAP),precision-recall equilibrium,inference latency(target≥30 FPS),and computational burden,resulting in a preference for YOLO and Region-based Convolutional Neural Networks(R-CNN)variants over ViT-based models due to the latter’s increased latency and resource requirements.YOLOV5,YOLOV8,YOLOV11,Faster R-CNN,and Mask R-CNN were trained on the annotated dataset and evaluated using key performance metrics(Recall,Accuracy,F1-score,and Precision).YOLOV11 demonstrated balanced excellence with 93.0%precision,89.9%recall,and a 90.6%F1-score in object detection,as well as 96.9%precision,95.3%recall,and a 96.5%F1-score in instance segmentation with an average inference time of 25 ms per frame(≈40 FPS),demonstrating real-time performance.Leveraging these results,a YOLOV11-based windows application was successfully deployed in a real-time assembly line environment,where it accurately processed live video streams to detect and segment tools within toolkits,demonstrating its practical effectiveness in industrial automation.The application is capable of precisely measuring socket dimensions by utilising edge detection techniques on YOLOv11 segmentation masks,in addition to detection and segmentation.This makes it possible to do specification-level quality control right on the assembly line,which improves the ability to examine things in real time.The implementation is a big step forward for intelligent manufacturing in the Industry 4.0 paradigm.It provides a scalable,efficient,and accurate way to do automated inspection and dimensional verification activities.
基金supported in part by the National Science Foundation of China(52371372)the Project of Science and Technology Commission of Shanghai Municipality,China(22JC1401400,21190780300)the 111 Project,China(D18003)
文摘Dear Editor,This letter focuses on the fact that small objects with few pixels disappear in feature maps with large receptive fields, as the network deepens, in object detection tasks. Therefore, the detection of dense small objects is challenging.
基金supported by the Guangdong Basic and Applied Basic Research project (No.2020B0301030004)the Key-Area Research and Development Program of Guangdong Province (No.2020B1111360003)+1 种基金the National Natural Science Foundation of China (No.42105103)the Guangdong Basic and Applied Basic Research Foundation (No.2022A1515011554).
文摘Objective weather classification methods have been extensively applied to identify dominant ozone-favorable synoptic weather patterns(SWPs),however,the consistency of different classification methods is rarely examined.In this study,we apply two widely-used objective methods,the self-organizing map(SOM)and K-means clustering analysis,to derive ozone-favorable SWPs at four Chinese megacities in 2015-2022.We find that the two algorithms are largely consistent in recognizing dominant ozone-favorable SWPs for four Chinese megacities.In the case of classifying six SWPs,the derived circulation fields are highly similar with a spatial correlation of 0.99 between the two methods,and the difference in themean frequency of each SWP is less than 7%.The six dominant ozone-favorable SWPs in Guangzhou are all characterized by anomaly higher radiation and temperature,lower cloud cover,relative humidity,and wind speed,and stronger subsidence compared to climatology mean.We find that during 2015-2022,the occurrence of ozone-favorable SWPs days increases significantly at a rate of 3.2 days/year,faster than the increases in the ozone exceedance days(3.0 days/year).The interannual variability between the occurrence of ozone-favorable SWPs and ozone exceedance days are generally consistent with a temporal correlation coefficient of 0.6.In particular,the significant increase in ozone-favorable SWPs in 2022,especially the Subtropical High type which typically occurs in September,is consistent with a long-lasting ozone pollution episode in Guangzhou during September 2022.Our results thus reveal that enhanced frequency of ozone-favorable SWPs plays an important role in the observed 2015-2022 ozone increase in Guangzhou.
基金supported by the National Natural Science Foundation of China(Nos.62276204 and 62203343)the Fundamental Research Funds for the Central Universities(No.YJSJ24011)+1 种基金the Natural Science Basic Research Program of Shanxi,China(Nos.2022JM-340 and 2023-JC-QN-0710)the China Postdoctoral Science Foundation(Nos.2020T130494 and 2018M633470).
文摘Drone-based small object detection is of great significance in practical applications such as military actions, disaster rescue, transportation, etc. However, the severe scale differences in objects captured by drones and lack of detail information for small-scale objects make drone-based small object detection a formidable challenge. To address these issues, we first develop a mathematical model to explore how changing receptive fields impacts the polynomial fitting results. Subsequently, based on the obtained conclusions, we propose a simple but effective Hybrid Receptive Field Network (HRFNet), whose modules include Hybrid Feature Augmentation (HFA), Hybrid Feature Pyramid (HFP) and Dual Scale Head (DSH). Specifically, HFA employs parallel dilated convolution kernels of different sizes to extend shallow features with different receptive fields, committed to improving the multi-scale adaptability of the network;HFP enhances the perception of small objects by capturing contextual information across layers, while DSH reconstructs the original prediction head utilizing a set of high-resolution features and ultrahigh-resolution features. In addition, in order to train HRFNet, the corresponding dual-scale loss function is designed. Finally, comprehensive evaluation results on public benchmarks such as VisDrone-DET and TinyPerson demonstrate the robustness of the proposed method. Most impressively, the proposed HRFNet achieves a mAP of 51.0 on VisDrone-DET with 29.3 M parameters, which outperforms the extant state-of-the-art detectors. HRFNet also performs excellently in complex scenarios captured by drones, achieving the best performance on the CS-Drone dataset we built.
文摘Deep learning-based object detection has revolutionized various fields,including agriculture.This paper presents a systematic review based on the PRISMA 2020 approach for object detection techniques in agriculture by exploring the evolution of different methods and applications over the past three years,highlighting the shift from conventional computer vision to deep learning-based methodologies owing to their enhanced efficacy in real time.The review emphasizes the integration of advanced models,such as You Only Look Once(YOLO)v9,v10,EfficientDet,Transformer-based models,and hybrid frameworks that improve the precision,accuracy,and scalability for crop monitoring and disease detection.The review also highlights benchmark datasets and evaluation metrics.It addresses limitations,like domain adaptation challenges,dataset heterogeneity,and occlusion,while offering insights into prospective research avenues,such as multimodal learning,explainable AI,and federated learning.Furthermore,the main aim of this paper is to serve as a thorough resource guide for scientists,researchers,and stakeholders for implementing deep learning-based object detection methods for the development of intelligent,robust,and sustainable agricultural systems.
Funding: Supported by the National Natural Science Foundation of China (Nos. U19A2044, 42105132, 42030609, and 41975037) and the National Key Research and Development Program of China (No. 2022YFC3700303).
Abstract: Extreme ozone pollution events (EOPEs) are associated with synoptic weather patterns (SWPs) and pose severe health and ecological risks. However, a systematic investigation of the meteorological causes, transport pathways, and source contributions of historical EOPEs is still lacking. In this paper, the K-means clustering method is applied to identify six dominant SWPs during the warm season in the Yangtze River Delta (YRD) region from 2016 to 2022, providing an integrated analysis of the meteorological factors affecting ozone pollution in Hefei under different SWPs. Using the WRF-FLEXPART model, the transport pathways (TPPs) and geographical sources of the near-surface air masses in Hefei during EOPEs are investigated. The results reveal that Hefei experienced the highest ozone concentration (134.77 ± 42.82 μg/m³), exceedance frequency (46 days, 23.23%), and proportion of EOPEs (21 instances, 47.7%) under the control of the peripheral subsidence of a typhoon (Type 5). Regional southeasterly winds were correlated with ozone pollution in Hefei. During EOPEs, a high boundary layer height, strong solar radiation and high temperature, low humidity and cloud cover, and pronounced subsidence airflow occurred over Hefei and the broader YRD region. The East-South (E_S) patterns exhibited the highest frequency (28 instances, 65.11%). Regarding the TPPs and geographical sources of the near-surface air masses during historical EOPEs, the YRD was the main source of land-originating air masses under E_S patterns (50.28%), with Hefei, southern Anhui, southern Jiangsu, and northern Zhejiang being key contributors. These findings can help improve ozone pollution early warning and control mechanisms at urban and regional scales.
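To make the bookkeeping concrete, the toy sketch below counts days and EOPEs per SWP type once every warm-season day has been labeled; the arrays are random placeholders, not the study data.

```python
# Minimal sketch, assuming each warm-season day has already been assigned an
# SWP label by K-means and flagged as an extreme ozone pollution event (EOPE)
# or not. Both arrays below are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
swp_label = rng.integers(0, 6, size=1284)    # SWP type (0..5) per day
is_eope = rng.random(1284) < 0.05            # EOPE flag per day

for k in range(6):
    days = np.sum(swp_label == k)
    eopes = np.sum(is_eope & (swp_label == k))
    share = 100 * eopes / max(is_eope.sum(), 1)
    print(f"SWP type {k + 1}: {days} days, {eopes} EOPEs ({share:.1f}% of all EOPEs)")
```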
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62322502, 62131003, and 62088101) and the Guangdong Province Key Laboratory of Intelligent Detection in Complex Environment of Aerospace, Land and Sea (Grant No. 2022KSYS016).
Abstract: Existing single-pixel imaging (SPI) and sensing techniques suffer from poor reconstruction quality and heavy computation cost, limiting their widespread application. To tackle these challenges, we propose a large-scale single-pixel imaging and sensing (SPIS) technique that enables high-quality megapixel SPI and highly efficient image-free sensing at a low sampling rate. Specifically, we first scan and sample the entire scene using small-size optimized patterns to obtain information-coupled measurements. Compared with conventional full-sized patterns, the small-sized optimized patterns achieve higher imaging fidelity and sensing accuracy with an order of magnitude fewer pattern parameters. Next, the coupled measurements are processed through a transformer-based encoder to extract high-dimensional features, followed by a task-specific plug-and-play decoder for imaging or image-free sensing. Considering that regions with rich textures and edges are more difficult to reconstruct, we use an uncertainty-driven self-adaptive loss function to reinforce the network's attention to these regions, thereby improving the imaging and sensing performance. Extensive experiments demonstrate that the reported technique achieves 24.13 dB megapixel SPI at a sampling rate of 3% within 1 s. In terms of sensing, it outperforms existing methods by 12% in image-free segmentation accuracy and achieves state-of-the-art image-free object detection accuracy with an order of magnitude less data bandwidth.
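The block-wise sampling idea can be illustrated as follows; the pattern size, binary patterns, and 3%-per-block sampling rate are assumptions used only to show how small patterns couple measurements across a megapixel scene, not the authors' optimized patterns.

```python
# Conceptual sketch of block-wise single-pixel sampling with small patterns:
# the image is split into non-overlapping blocks and each block is measured
# with the same small pattern set.
import numpy as np

block = 32                                   # small pattern size (assumption)
n_meas = int(0.03 * block * block)           # ~3% sampling rate per block
patterns = np.random.choice([0.0, 1.0], size=(n_meas, block * block))

image = np.random.rand(1024, 1024)           # stand-in megapixel scene
h, w = image.shape
blocks = image.reshape(h // block, block, w // block, block)
blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, block * block)

measurements = blocks @ patterns.T           # coupled single-pixel measurements
print(measurements.shape)                    # (num_blocks, n_meas)
```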
Funding: Supported by the Guangdong Province Rural Science and Technology Commissioner Project, Zen Tea Reliable Traceability and Intelligent Planting Key Technology Research and Development, Promotion and Application (KTP20210199); the Special Project of the Guangdong Provincial Education Department, Research on Abnormal Behavior Recognition Technology of Pregnant Sows Based on Graph Convolution (2021ZDZX1091); the Guangdong Basic and Applied Basic Research Foundation under Grant 2023A1515110729; the Shenzhen Science and Technology Program under Grant 20231128093642002; and the Research Foundation of Shenzhen Polytechnic University under Grant 6023312007K.
Abstract: This paper introduces an advanced and efficient method for distributed drone-based fruit recognition and localization, tailored to satisfy the precision and security requirements of autonomous agricultural operations. Our method incorporates depth information to ensure precise localization and utilizes a streamlined detection network centered on the RepVGG module. This module replaces the traditional C2f module, enhancing detection performance while maintaining speed. To bolster the detection of small, distant fruits in complex settings, we integrate Selective Kernel Attention (SKAttention) and a specialized small-target detection layer. This adaptation allows the system to manage difficult conditions, such as variable lighting and obstructive foliage. To reinforce security, the tasks of recognition and localization are distributed among multiple drones, enhancing resilience against tampering and data manipulation. This distribution also optimizes resource allocation through collaborative processing. The model remains lightweight and is optimized for rapid and accurate detection, which is essential for real-time applications. Our proposed system, validated with a D435 depth camera, achieves a mean Average Precision (mAP) of 0.943 and a frame rate of 169 FPS, improvements of 0.039 in mAP and 25 FPS over the baseline, respectively. Additionally, the average localization error is reduced to 0.82 cm, highlighting the model's high precision. These enhancements render our system highly effective for secure, autonomous fruit-picking operations, effectively addressing significant performance and cybersecurity challenges in agriculture. This approach establishes a foundation for reliable, efficient, and secure distributed fruit-picking applications, facilitating the advancement of autonomous systems in contemporary agricultural practices.
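As a hedged sketch of the depth-based localization step, the helper below back-projects a detected fruit's pixel center and depth reading into camera-frame coordinates with a pinhole model; the intrinsic parameters are placeholders rather than calibrated D435 values.

```python
# Back-project a detection centre (u, v) with a depth reading (metres) into
# camera-frame 3D coordinates using the standard pinhole model.
def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Return (X, Y, Z) in the camera frame for pixel (u, v) at depth_m."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

# Example: detection centre at (640, 360) with 1.25 m depth and placeholder intrinsics.
print(pixel_to_camera_xyz(640, 360, 1.25, fx=615.0, fy=615.0, cx=640.0, cy=360.0))
```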
Funding: Projects U22B2084, 52275483, and 52075142 supported by the National Natural Science Foundation of China; Project 2023ZY01050 supported by the Ministry of Industry and Information Technology High Quality Development, China.
Abstract: The gears of new energy vehicles are required to withstand higher rotational speeds and greater loads, which imposes higher precision requirements on gear manufacturing. However, machining process parameters can cause changes in cutting force and heat, which in turn affect gear machining precision. Therefore, this paper studies the effect of different process parameters on gear machining precision. A multi-objective optimization model is established for the relationship between the process parameters of the worm wheel gear grinding machine (cutting speed, feed rate, and cutting depth) and the tooth surface, tooth profile, and tooth lead deviations. The response surface method (RSM) is used for experimental design, and the corresponding experimental results and optimal process parameters are obtained. Subsequently, gray relational analysis-principal component analysis (GRA-PCA), particle swarm optimization (PSO), and genetic algorithm-particle swarm optimization (GA-PSO) methods are used to analyze the experimental results and obtain different optimal process parameters. The results show that the optimal process parameters obtained by the GRA-PCA, PSO, and GA-PSO methods all improve gear machining precision, and the precision obtained with GA-PSO is superior to that of the other methods.
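A minimal particle swarm optimization loop of the kind used here is sketched below; the quadratic surrogate stands in for the fitted response-surface model, and all bounds and coefficients are illustrative assumptions rather than values from the paper.

```python
# Minimal PSO sketch over three process parameters (cutting speed, feed rate,
# cutting depth) minimizing a placeholder deviation surrogate.
import numpy as np

def surrogate_deviation(p):
    # Placeholder response surface: squared relative distance to a nominal optimum.
    target = np.array([35.0, 0.2, 0.05])
    return np.sum(((p - target) / target) ** 2, axis=-1)

rng = np.random.default_rng(1)
lo, hi = np.array([20.0, 0.05, 0.01]), np.array([60.0, 0.5, 0.1])
pos = rng.uniform(lo, hi, size=(30, 3))      # 30 particles, 3 parameters
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), surrogate_deviation(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = surrogate_deviation(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("Best parameters (speed, feed, depth):", np.round(gbest, 4))
```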
Funding: Supported by the Shanghai Science and Technology Innovation Action Plan High-Tech Field Project (Grant No. 22511100601) for the year 2022 and the Technology Development Fund for People's Livelihood Research (Research on Transmission Line Deep Foundation Pit Environmental Situation Awareness System Based on Multi-Source Data).
Abstract: To maintain the reliability of power systems, routine inspections using drones equipped with advanced object detection algorithms are essential for preempting power-related issues. The increasing resolution of drone-captured images has posed a challenge for traditional target detection methods, especially in identifying small objects in high-resolution images. This study presents an enhanced object detection algorithm based on the Faster Region-based Convolutional Neural Network (Faster R-CNN) framework, specifically tailored for detecting small-scale electrical components such as insulators, shock hammers, and screws on transmission lines. The algorithm features an improved backbone network for Faster R-CNN, which significantly boosts the feature extraction network's ability to detect fine details. The Region Proposal Network is optimized using a guided feature refinement (GFR) method, which achieves a balance between accuracy and speed. The incorporation of Generalized Intersection over Union (GIoU) and Region of Interest (ROI) Align further refines the model's accuracy. Experimental results demonstrate a notable improvement in mean Average Precision, reaching 89.3%, an 11.1% increase compared with the standard Faster R-CNN. This highlights the effectiveness of the proposed algorithm in identifying electrical components in high-resolution aerial images.
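For reference, the GIoU score mentioned above can be computed as in the following self-contained sketch, which implements the standard definition rather than code from the reviewed paper.

```python
# Generalized IoU (GIoU) between two axis-aligned boxes given as (x1, y1, x2, y2).
def giou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union

    # Smallest enclosing box of the two boxes penalizes non-overlapping layouts.
    encl = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return iou - (encl - union) / encl

print(giou((0, 0, 10, 10), (5, 5, 15, 15)))   # partially overlapping boxes
```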
Funding: Supported in part by the National Natural Science Foundation of China (62173051); the Fundamental Research Funds for the Central Universities (2024CDJCGJ012 and 2023CDJXY-010); the Chongqing Technology Innovation and Application Development Special Key Project (CSTB2022TIAD-CUX0015 and CSTB2022TIAD-KPX0162); and the China Postdoctoral Science Foundation (2024M763865).
Abstract: Dear Editor, This letter addresses the impulse game problem for a general class of deterministic, multi-player, nonzero-sum differential games in which all participants adopt impulse controls. Our objective is to formulate this impulse game problem with a modified objective function that includes interaction costs among the players in a discontinuous fashion and, subsequently, to derive a verification theorem for identifying the feedback Nash equilibrium strategy.
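As a purely illustrative sketch (not the letter's exact formulation), a player's cost functional in such an impulse game can be written with a running cost, the player's own impulse costs, and interaction costs triggered by the other players' impulses:

```latex
% Generic cost functional for player $i$; $g_i$ is the running cost, $c_i$ the
% cost of player $i$'s own impulses $\xi_k^{i}$ applied at times $\tau_k^{i}$,
% and $\chi_i$ the (possibly discontinuous) interaction cost incurred when
% other players act. The exact terms in the letter are not reproduced here.
J_i(x_0; u_1, \dots, u_N) =
  \int_{0}^{T} g_i\bigl(t, x(t)\bigr)\,dt
  + \sum_{k} c_i\bigl(\tau_k^{i}, \xi_k^{i}\bigr)
  + \sum_{j \neq i} \sum_{k} \chi_i\bigl(\tau_k^{j}, \xi_k^{j}\bigr)
```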
Abstract: Pulmonary nodules represent an early manifestation of lung cancer. However, pulmonary nodules constitute only a small portion of the overall image, posing challenges for physicians in image interpretation and potentially leading to false positives or missed detections. To solve these problems, the YOLOv8 network is enhanced by adding deformable convolution and atrous spatial pyramid pooling (ASPP), along with the integration of a coordinate attention (CA) mechanism. This allows the network to focus on small targets while expanding the receptive field without losing resolution. At the same time, attention modules oriented along different directions gather contextual information about the target and enhance feature representation. The enhanced network effectively improves positioning accuracy and achieves good results on the LUNA16 dataset. Compared with other detection algorithms, it improves the accuracy of pulmonary nodule detection to a certain extent.
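An illustrative ASPP block of the kind described above is sketched below in PyTorch; the dilation rates and channel counts are assumptions, not values from the paper.

```python
# Illustrative atrous spatial pyramid pooling (ASPP) block: parallel dilated
# convolutions at several rates, concatenated and projected back down.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.SiLU(),
            )
            for r in rates
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

print(ASPP(256, 256)(torch.randn(1, 256, 20, 20)).shape)  # torch.Size([1, 256, 20, 20])
```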