Journal Articles
3,628 articles found
1. Intelligent Semantic Segmentation with Vision Transformers for Aerial Vehicle Monitoring
Authors: Moneerah Alotaibi. Computers, Materials & Continua, 2026, Issue 1, pp. 1629–1648 (20 pages).
Abstract: Advanced traffic monitoring systems encounter substantial challenges in vehicle detection and classification due to the limitations of conventional methods, which often demand extensive computational resources and struggle with diverse data acquisition techniques. This research presents a novel approach for vehicle classification and recognition in aerial image sequences, integrating multiple advanced techniques to enhance detection accuracy. The proposed model begins with preprocessing using Multiscale Retinex (MSR) to enhance image quality, followed by Expectation-Maximization (EM) segmentation for precise foreground object identification. Vehicle detection is performed using the state-of-the-art YOLOv10 framework, while feature extraction incorporates Maximally Stable Extremal Regions (MSER), Dense Scale-Invariant Feature Transform (Dense SIFT), and Zernike Moments features to capture distinct object characteristics. Feature optimization is further refined through a hybrid swarm-based optimization algorithm, ensuring optimal feature selection for improved classification performance. The final classification is conducted using a Vision Transformer, leveraging its robust learning capabilities for enhanced accuracy. Experimental evaluations on benchmark datasets, including UAVDT and the Unmanned Aerial Vehicle Intruder Dataset (UAVID), demonstrate the superiority of the proposed approach, achieving an accuracy of 94.40% on UAVDT and 93.57% on UAVID. The results highlight the efficacy of the model in significantly enhancing vehicle detection and classification in aerial imagery, outperforming existing methodologies and offering a statistically validated improvement for intelligent traffic monitoring systems.
Keywords: machine learning; semantic segmentation; remote sensors; deep learning; object monitoring system
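As a rough illustration of the Multiscale Retinex preprocessing step named in the abstract above (not the authors' implementation; the scale choices and min-max normalization are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(image, sigmas=(15, 80, 250), eps=1e-6):
    """Multiscale Retinex: average log(image) - log(Gaussian-blurred image)
    over several scales, enhancing local contrast and dynamic range."""
    image = image.astype(np.float64) + eps
    msr = np.zeros_like(image)
    for sigma in sigmas:
        blurred = gaussian_filter(image, sigma=sigma) + eps
        msr += np.log(image) - np.log(blurred)
    msr /= len(sigmas)
    # Stretch the result back to [0, 255] for display
    msr = (msr - msr.min()) / (msr.max() - msr.min() + eps) * 255.0
    return msr.astype(np.uint8)
```

Larger sigmas preserve global tonality while smaller ones sharpen local detail, which is why MSR averages several scales.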
2. YOLO-SDW: Traffic Sign Detection Algorithm Based on YOLOv8s Skip Connection and Dynamic Convolution
Authors: Qing Guo, Juwei Zhang, Bingyi Ren. Computers, Materials & Continua, 2026, Issue 1, pp. 1433–1452 (20 pages).
Abstract: Traffic sign detection is an important part of autonomous driving, and its recognition accuracy and speed are directly related to road traffic safety. Although convolutional neural networks (CNNs) have made certain breakthroughs in this field, in complex scenes with image blur and target occlusion, traffic sign detection continues to exhibit limited accuracy, accompanied by false positives and missed detections. To address these problems, a traffic sign detection algorithm, You Only Look Once-based Skip Dynamic Way (YOLO-SDW), built on You Only Look Once version 8 small (YOLOv8s), is proposed. Firstly, a Skip Connection Reconstruction (SCR) module is introduced to efficiently integrate fine-grained feature information and enhance detection accuracy in complex scenes. Secondly, a C2f module based on Dynamic Snake Convolution (C2f-DySnake) is proposed to dynamically adjust receptive-field information, improve feature extraction for blurred or occluded targets, and reduce false and missed detections. Finally, the Wise Powerful IoU v2 (WPIoUv2) loss function is proposed to further improve detection accuracy. Experimental results show that the average precision mAP@0.5 of YOLO-SDW on the TT100K dataset is 89.2% and mAP@0.5:0.95 is 68.5%, which are 4% and 3.3% higher than the YOLOv8s baseline, respectively. YOLO-SDW ensures real-time performance while achieving higher accuracy.
Keywords: traffic sign detection; YOLOv8; object detection; deep learning
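IoU-based losses such as the WPIoUv2 variant mentioned above all build on the plain intersection-over-union ratio; a minimal sketch for axis-aligned boxes (not the authors' WPIoUv2 formulation) is:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Loss variants (GIoU, CIoU, WIoU, etc.) add penalty terms on top of this ratio, typically as `1 - iou + penalty`.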
3. FMCSNet: Mobile Devices-Oriented Lightweight Multi-Scale Object Detection via Fast Multi-Scale Channel Shuffling Network Model
Authors: Lijuan Huang, Xianyi Liu, Jinping Liu, Pengfei Xu. Computers, Materials & Continua, 2026, Issue 1, pp. 1292–1311 (20 pages).
Abstract: The ubiquity of mobile devices has driven advancements in mobile object detection. However, challenges in multi-scale object detection in open, complex environments persist due to limited computational resources. Traditional approaches such as network compression, quantization, and lightweight design often sacrifice accuracy or feature-representation robustness. This article introduces the Fast Multi-scale Channel Shuffling Network (FMCSNet), a novel lightweight detection model optimized for mobile devices. FMCSNet integrates a fully convolutional Multilayer Perceptron (MLP) module, offering global perception without significantly increasing parameters and effectively bridging the gap between CNNs and Vision Transformers. FMCSNet achieves a delicate balance between computation and accuracy mainly through two key modules: the ShiftMLP module, comprising a shift operation and an MLP module, and a Partial Group Convolution (PGConv) module, which reduces computation while enhancing information exchange between channels. With a computational complexity of 1.4G FLOPs and 1.3M parameters, FMCSNet outperforms CNN-based and DWConv-based ShuffleNetv2 by 1% and 4.5% mAP on the Pascal VOC 2007 dataset, respectively. Additionally, FMCSNet achieves an mAP of 30.0 (0.5:0.95 IoU threshold) with only 2.5G FLOPs and 2.0M parameters. It reaches 32 FPS on low-performance i5-series CPUs, meeting real-time detection requirements. The PGConv module's adaptability across scenarios further highlights FMCSNet as a promising solution for real-time mobile object detection.
Keywords: object detection; lightweight network; partial group convolution; multilayer perceptron
4. Automated Pipe Defect Identification in Underwater Robot Imagery with Deep Learning
Authors: Mansour Taheri Andani, Farhad Ameri. Journal of Harbin Engineering University (English Edition), 2026, Issue 1, pp. 197–215 (19 pages).
Abstract: Underwater pipeline inspection plays a vital role in the proactive maintenance and management of critical marine infrastructure and subaquatic systems. However, the inspection of underwater pipelines presents a challenge due to factors such as light scattering, absorption, restricted visibility, and ambient noise. The advancement of deep learning has introduced powerful techniques for processing large amounts of unstructured and imperfect data collected from underwater environments. This study evaluated the efficacy of the You Only Look Once (YOLO) algorithm, a real-time object detection and localization model based on convolutional neural networks, in identifying and classifying various types of pipeline defects in underwater settings. YOLOv8, the latest evolution in the YOLO family, integrates advanced capabilities such as anchor-free detection, a cross-stage partial network backbone for efficient feature extraction, and a feature pyramid network + path aggregation network neck for robust multi-scale object detection, which make it particularly well suited for complex underwater environments. Due to the lack of suitable open-access datasets for underwater pipeline defects, a custom dataset was captured using a remotely operated vehicle in a controlled environment. Extensive experimentation demonstrated that YOLOv8 X-Large consistently outperformed other models in pipe defect detection and classification and achieved a strong balance between precision and recall in identifying pipeline cracks, rust, corners, defective welds, flanges, tapes, and holes. This research establishes the baseline performance of YOLOv8 for underwater defect detection and showcases its potential to enhance the reliability and efficiency of pipeline inspection tasks in challenging underwater environments.
Keywords: YOLOv8; underwater robot; object detection; underwater pipelines; remotely operated vehicle; deep learning
5. Face-Pedestrian Joint Feature Modeling with Cross-Category Dynamic Matching for Occlusion-Robust Multi-Object Tracking
Authors: Qin Hu, Hongshan Kong. Computers, Materials & Continua, 2026, Issue 1, pp. 870–900 (31 pages).
Abstract: To address the frequent identity switches (IDs) and degraded identification accuracy in multi-object tracking (MOT) under complex occlusion scenarios, this study proposes an occlusion-robust tracking framework based on face-pedestrian joint feature modeling. By constructing a joint tracking model centered on "intra-class independent tracking + cross-category dynamic binding", designing a multi-modal matching metric with spatio-temporal and appearance constraints, and introducing a cross-category feature mutual-verification mechanism and a dual matching strategy, this work effectively resolves the performance degradation of traditional single-category tracking methods caused by short-term occlusion, cross-camera tracking, and crowded environments. Experiments on the Chokepoint_Face_Pedestrian_Track test set demonstrate that in complex scenes, the proposed method improves Face-Pedestrian Matching F1 area under the curve (F1 AUC) by approximately 4 to 43 percentage points compared with several traditional methods. The joint tracking model achieves overall performance of IDF1: 85.1825% and MOTA: 86.5956%, improvements of 0.91 and 0.06 percentage points, respectively, over the baseline model. Ablation studies confirm the effectiveness of key modules such as the Intersection over Area (IoA) / Intersection over Union (IoU) joint metric and dynamic threshold adjustment, validating the significant role of the cross-category identity matching mechanism in enhancing tracking stability. The proposed model shows a 16.7% frames-per-second (FPS) drop versus fairness of detection and re-identification in multiple object tracking (FairMOT), with its cross-category binding module adding about 10% overhead, yet maintains near-real-time performance for essential face-pedestrian tracking at small resolutions.
Keywords: cross-category dynamic binding; joint feature modeling; face-pedestrian association; multi-object tracking; occlusion robustness
6. EHDC-YOLO: Enhancing Object Detection for UAV Imagery via Multi-Scale Edge and Detail Capture
Authors: Zhiyong Deng, Yanchen Ye, Jiangling Guo. Computers, Materials & Continua, 2026, Issue 1, pp. 1665–1682 (18 pages).
Abstract: With the rapid expansion of drone applications, accurate detection of objects in aerial imagery has become crucial for intelligent transportation, urban management, and emergency rescue missions. However, existing methods face numerous challenges in practical deployment, including scale-variation handling, feature degradation, and complex backgrounds. To address these issues, we propose Edge-enhanced and Detail-Capturing You Only Look Once (EHDC-YOLO), a novel framework for object detection in Unmanned Aerial Vehicle (UAV) imagery. Based on the You Only Look Once version 11 nano (YOLOv11n) baseline, EHDC-YOLO systematically introduces several architectural enhancements: (1) a Multi-Scale Edge Enhancement (MSEE) module that leverages multi-scale pooling and edge information to enhance boundary feature extraction; (2) an Enhanced Feature Pyramid Network (EFPN) that integrates P2-level features with Cross Stage Partial (CSP) structures and OmniKernel convolutions for better fine-grained representation; and (3) a Dynamic Head (DyHead) with multi-dimensional attention mechanisms for enhanced cross-scale modeling and perspective adaptability. Comprehensive experiments on the Vision meets Drones for Detection (VisDrone-DET) 2019 dataset demonstrate that EHDC-YOLO achieves significant improvements, increasing mean Average Precision (mAP)@0.5 from 33.2% to 46.1% (an absolute improvement of 12.9 percentage points) and mAP@0.5:0.95 from 19.5% to 28.0% (an absolute improvement of 8.5 percentage points) compared with the YOLOv11n baseline, while maintaining a reasonable parameter count (2.81M vs. the baseline's 2.58M). Further ablation studies confirm the effectiveness of each proposed component, while visualization results highlight EHDC-YOLO's superior performance in detecting objects and handling occlusions in complex drone scenarios.
Keywords: UAV imagery; object detection; multi-scale feature fusion; edge enhancement; detail preservation; YOLO; feature pyramid network; attention mechanism
7. Lightweight YOLOv5 with ShuffleNetV2 for Rice Disease Detection in Edge Computing
Authors: Qingtao Meng, Sang-Hyun Lee. Computers, Materials & Continua, 2026, Issue 1, pp. 1395–1409 (15 pages).
Abstract: This study proposes a lightweight rice disease detection model optimized for edge computing environments. The goal is to enhance the You Only Look Once (YOLO)v5 architecture to achieve a balance between real-time diagnostic performance and computational efficiency. To this end, a total of 3234 high-resolution images (2400×1080) covering three major rice diseases (Rice Blast, Bacterial Blight, and Brown Spot) frequently found in actual rice cultivation fields were collected and served as the training dataset. The proposed YOLOv5-V2 model removes the Focus layer from the original YOLOv5s and integrates ShuffleNet V2 into the backbone, resulting in both model compression and improved inference speed. Additionally, YOLOv5-P, based on PP-PicoDet, was configured as a comparative model to quantitatively evaluate performance. Experimental results demonstrated that YOLOv5-V2 achieved excellent detection performance, with an mAP 0.5 of 89.6%, mAP 0.5–0.95 of 66.7%, precision of 91.3%, and recall of 85.6%, while maintaining a lightweight model size of 6.45 MB. In contrast, YOLOv5-P exhibited a smaller model size of 4.03 MB but showed lower performance, with an mAP 0.5 of 70.3%, mAP 0.5–0.95 of 35.2%, precision of 62.3%, and recall of 74.1%. This study lays a technical foundation for the implementation of smart agriculture and real-time disease diagnosis systems by proposing a model that satisfies both accuracy and lightweight requirements.
Keywords: lightweight object detection; YOLOv5-V2; ShuffleNet V2; edge computing; rice disease detection
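The mAP@0.5 and precision/recall figures reported throughout these abstracts reduce to per-class average precision over a confidence-ranked detection list; a generic sketch (detections assumed already matched to ground truth at the IoU threshold, e.g. 0.5 for mAP@0.5; not any one paper's evaluation code) is:

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """AP from per-detection confidence scores and TP/FP flags."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_tp, dtype=float)[order]
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / n_gt
    precision = cum_tp / (cum_tp + cum_fp)
    # All-point interpolation: make precision monotonically decreasing,
    # then integrate it over recall.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))
```

mAP is then the mean of this quantity over classes (and, for mAP@0.5:0.95, over IoU thresholds as well).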
8. Lightweight Small Defect Detection with YOLOv8 Using Cascaded Multi-Receptive Fields and Enhanced Detection Heads
Authors: Shengran Zhao, Zhensong Li, Xiaotan Wei, Yutong Wang, Kai Zhao. Computers, Materials & Continua, 2026, Issue 1, pp. 1278–1291 (14 pages).
Abstract: In printed circuit board (PCB) manufacturing, surface defects can significantly affect product quality. To address the performance degradation, high false-detection rates, and missed detections caused by complex backgrounds in current intelligent inspection algorithms, this paper proposes CG-YOLOv8, a lightweight and improved model based on YOLOv8n for PCB surface defect detection. The proposed method optimizes the network architecture and compresses parameters to reduce model complexity while maintaining high detection accuracy, thereby enhancing the capability of identifying diverse defects under complex conditions. Specifically, a cascaded multi-receptive field (CMRF) module replaces the SPPF module in the backbone to improve feature perception, and an inverted residual mobile block (IRMB) is integrated into the C2f module to further enhance performance. Additionally, conventional convolution layers are replaced with Grouped Spatial Convolution (GSConv) to reduce computational cost, and a lightweight Convolutional Block Attention Module based Convolution (CBAMConv) module is introduced after GSConv to preserve accuracy through attention mechanisms. The detection head is also optimized by removing the medium- and large-scale detection layers, enhancing the model's ability to detect small-scale defects and further reducing complexity. Experimental results show that, compared with the original YOLOv8n, the proposed CG-YOLOv8 reduces parameter count by 53.9%, improves mAP@0.5 by 2.2%, and increases precision and recall by 2.0% and 1.8%, respectively. These improvements demonstrate that CG-YOLOv8 offers an efficient and lightweight solution for PCB surface defect detection.
Keywords: YOLOv8n; PCB surface defect detection; lightweight model; small object detection
9. Deep Learning-Based Toolkit Inspection: Object Detection and Segmentation in Assembly Lines
Authors: Arvind Mukundan, Riya Karmakar, Devansh Gupta, Hsiang-Chen Wang. Computers, Materials & Continua, 2026, Issue 1, pp. 1255–1277 (23 pages).
Abstract: Modern manufacturing processes have become more reliant on automation because of the accelerated transition from Industry 3.0 to Industry 4.0. Manual inspection of products on assembly lines remains inefficient, prone to errors, and lacking in consistency, emphasizing the need for a reliable, automated inspection system. Leveraging both object detection and image segmentation, this research proposes a vision-based solution for detecting various kinds of tools in a toolkit using deep learning (DL) models. Two Intel RealSense D455f depth cameras were arranged in a top-down configuration to capture both RGB and depth images of the toolkits. After applying multiple constraints and enhancement through preprocessing and augmentation, a dataset of 3300 annotated RGB-D photos was generated. Several DL models were selected through a comprehensive assessment of mean Average Precision (mAP), precision-recall equilibrium, inference latency (target ≥ 30 FPS), and computational burden, resulting in a preference for YOLO and Region-based Convolutional Neural Network (R-CNN) variants over ViT-based models due to the latter's increased latency and resource requirements. YOLOv5, YOLOv8, YOLOv11, Faster R-CNN, and Mask R-CNN were trained on the annotated dataset and evaluated using key performance metrics (Recall, Accuracy, F1-score, and Precision). YOLOv11 demonstrated balanced excellence with 93.0% precision, 89.9% recall, and a 90.6% F1-score in object detection, as well as 96.9% precision, 95.3% recall, and a 96.5% F1-score in instance segmentation, with an average inference time of 25 ms per frame (≈40 FPS), demonstrating real-time performance. Leveraging these results, a YOLOv11-based Windows application was successfully deployed in a real-time assembly-line environment, where it accurately processed live video streams to detect and segment tools within toolkits, demonstrating its practical effectiveness in industrial automation. In addition to detection and segmentation, the application precisely measures socket dimensions by applying edge-detection techniques to YOLOv11 segmentation masks, enabling specification-level quality control directly on the assembly line and improving real-time inspection capability. The implementation is a significant step toward intelligent manufacturing in the Industry 4.0 paradigm, providing a scalable, efficient, and accurate approach to automated inspection and dimensional verification.
Keywords: tool detection; image segmentation; object detection; assembly line automation; Industry 4.0; Intel RealSense; deep learning; toolkit verification; RGB-D imaging; quality assurance
10. MAGPNet: Multi-Domain Attention-Guided Pyramid Network for Infrared Small Object Detection
Authors: DING Leqi, WANG Biyun, YAO Lixiu, CAI Yunze. Journal of Shanghai Jiao Tong University (Science), 2025, Issue 5, pp. 935–951 (17 pages).
Abstract: To overcome the obstacles of poor feature extraction and little prior information on the appearance of infrared dim small targets, we propose a multi-domain attention-guided pyramid network (MAGPNet). Specifically, we design three modules to ensure that salient features of small targets can be acquired and retained in the multi-scale feature maps. To improve the adaptability of the network to targets of different sizes, we design a kernel aggregation attention block with a receptive-field attention branch and weight the feature maps under different perceptual fields with an attention mechanism. Based on research on the human vision system, we further propose an adaptive local contrast measure module to enhance the local features of infrared small targets; with this parameterized component, we can aggregate information from multi-scale contrast saliency maps. Finally, to fully utilize the information in the spatial and channel domains of feature maps at different scales, we propose a mixed spatial-channel attention-guided fusion module to achieve high-quality fusion while ensuring that small-target features are preserved at deep layers. Experiments on public datasets demonstrate that MAGPNet achieves better performance than other state-of-the-art methods in terms of intersection over union, Precision, Recall, and F-measure. In addition, we conduct detailed ablation studies to verify the effectiveness of each component of our network.
Keywords: infrared small object detection; kernel aggregation attention; adaptive local contrast measure; mixed spatial-channel attention
11. DM-Mpedia: A Digital Knowledge Base for Risk Assessment of Contaminating Microorganisms in Pharmaceutical Manufacturing
Authors: 麻鲁鹏, 李珏, 陈欢, 王知坚, 柴惠, 刘程智. Chinese Journal of Pharmaceutical Analysis (PKU Core), 2025, Issue 6, pp. 1056–1066 (11 pages).
Abstract: Objective: To provide an information reference for identifying contaminating microorganisms in pharmaceutical environments, assessing their impact on drug quality, and establishing response strategies. Methods: Information from Bergey's Manual of Systematic Bacteriology, authoritative literature, and regulatory-agency documents was collected, compiled, analyzed, and extracted to summarize microbial characteristics and risk-information annotation patterns for the pharmaceutical industry. Key microbial characteristic information was structured and stored in a MySQL-based knowledge base management system. The front end was built with Vue and Node.js, with ECharts for data visualization, yielding a convenient, statistics-capable comprehensive knowledge base. Results: A query cloud platform, DM-Mpedia (http://dmcloud.dmicrobe.cn/#/preview), containing characteristic and risk information for 20,678 microbial species, was obtained. Conclusion: DM-Mpedia helps microbiology practitioners better assess and control contaminating and objectionable microorganisms.
Keywords: DM-Mpedia; microbial knowledge base; risk assessment; objectionable microorganisms; pharmaceutical environment
12. Interpretation of General Chapter 9212, "Guideline for Risk Assessment and Control of Objectionable Microorganisms in Non-sterile Products," in the 2025 Edition of the Chinese Pharmacopoeia
Authors: 宋明辉, 张宁, 李琼琼, 邵泓, 范一灵, 杨美成, 马仕洪, 张军, 胡昌勤. Drug Standards of China, 2025, Issue 5, pp. 455–461 (7 pages).
Abstract: The microbial limit standards in the 2020 edition of the Chinese Pharmacopoeia control only the quantity of contaminating microorganisms and specified microorganisms in non-sterile products, which is insufficient to fully control the potential risks microorganisms pose to drug efficacy and patient safety. As drug regulatory requirements continue to rise, a scientific, systematic risk assessment and control system for pharmaceutical microorganisms is urgently needed. General chapter 9212, Guideline for Risk Assessment and Control of Objectionable Microorganisms in Non-sterile Products, newly added to the 2025 edition of the Chinese Pharmacopoeia, systematically establishes a risk identification and control framework for objectionable microorganisms, filling a gap in international pharmacopoeias in this area. This article systematically interprets the core framework of general chapter 9212, including its drafting background, core concepts, identification strategies for objectionable microorganisms, risk-characterization factor assessment, risk control measures, and implementation pathways, to guide its rational application by pharmaceutical enterprises and regulatory agencies.
Keywords: non-sterile products; objectionable microorganisms; risk assessment; risk control; standard interpretation
13. Rising frequency of ozone-favorable synoptic weather patterns contributes to 2015-2022 ozone increase in Guangzhou (Cited by 2)
Authors: Nanxi Liu, Guowen He, Haolin Wang, Cheng He, Haofan Wang, Chenxi Liu, Yiming Wang, Haichao Wang, Lei Li, Xiao Lu, Shaojia Fan. Journal of Environmental Sciences, 2025, Issue 2, pp. 502–514 (13 pages).
Abstract: Objective weather classification methods have been extensively applied to identify dominant ozone-favorable synoptic weather patterns (SWPs); however, the consistency of different classification methods is rarely examined. In this study, we apply two widely used objective methods, the self-organizing map (SOM) and K-means clustering analysis, to derive ozone-favorable SWPs at four Chinese megacities in 2015-2022. We find that the two algorithms are largely consistent in recognizing dominant ozone-favorable SWPs for the four Chinese megacities. In the case of classifying six SWPs, the derived circulation fields are highly similar, with a spatial correlation of 0.99 between the two methods, and the difference in the mean frequency of each SWP is less than 7%. The six dominant ozone-favorable SWPs in Guangzhou are all characterized by anomalously higher radiation and temperature, lower cloud cover, relative humidity, and wind speed, and stronger subsidence compared with the climatological mean. We find that during 2015-2022, the occurrence of ozone-favorable SWP days increased significantly at a rate of 3.2 days/year, faster than the increase in ozone exceedance days (3.0 days/year). The interannual variability of the occurrence of ozone-favorable SWPs is generally consistent with that of ozone exceedance days, with a temporal correlation coefficient of 0.6. In particular, the significant increase in ozone-favorable SWPs in 2022, especially the Subtropical High type, which typically occurs in September, is consistent with a long-lasting ozone pollution episode in Guangzhou during September 2022. Our results thus reveal that the enhanced frequency of ozone-favorable SWPs plays an important role in the observed 2015-2022 ozone increase in Guangzhou.
Keywords: Ozone (O3); objective weather classification methods; synoptic weather patterns; trends; Guangzhou
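The K-means clustering used above to derive synoptic weather patterns can be sketched generically; this minimal Lloyd's-algorithm version with deterministic farthest-point initialization is an illustration under assumed inputs (flattened circulation-anomaly fields), not the study's configuration:

```python
import numpy as np

def kmeans_swp(fields, k, n_iter=100):
    """Group flattened circulation-anomaly fields of shape
    (n_days, n_gridpoints) into k synoptic weather patterns."""
    # Farthest-point initialization: start from day 0, then repeatedly
    # add the day farthest from all existing centers (deterministic).
    centers = [fields[0]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(fields - c, axis=1) for c in centers], axis=0)
        centers.append(fields[dists.argmax()])
    centers = np.stack(centers)
    for _ in range(n_iter):
        # Assign each day to its nearest pattern centroid
        d = np.linalg.norm(fields[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep an old center if its cluster empties
        new = np.stack([fields[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

Each resulting centroid is a composite circulation field; days sharing a label belong to the same SWP, so frequencies per year fall out of a simple label count.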
14. DI-YOLOv5: An Improved Dual-Wavelet-Based YOLOv5 for Dense Small Object Detection (Cited by 1)
Authors: Zi-Xin Li, Yu-Long Wang, Fei Wang. IEEE/CAA Journal of Automatica Sinica, 2025, Issue 2, pp. 457–459 (3 pages).
Abstract: Dear Editor, This letter focuses on the fact that small objects with few pixels disappear in feature maps with large receptive fields as the network deepens in object detection tasks. Therefore, the detection of dense small objects is challenging.
Keywords: small objects; receptive fields; feature maps; detection; dense small objects; object detection; dense objects
15. Hybrid receptive field network for small object detection on drone view (Cited by 1)
Authors: Zhaodong CHEN, Hongbing JI, Yongquan ZHANG, Wenke LIU, Zhigang ZHU. Chinese Journal of Aeronautics, 2025, Issue 2, pp. 322–338 (17 pages).
Abstract: Drone-based small object detection is of great significance in practical applications such as military actions, disaster rescue, and transportation. However, the severe scale differences among objects captured by drones and the lack of detail information for small-scale objects make drone-based small object detection a formidable challenge. To address these issues, we first develop a mathematical model to explore how changing receptive fields impacts polynomial fitting results. Subsequently, based on the obtained conclusions, we propose a simple but effective Hybrid Receptive Field Network (HRFNet), whose modules include Hybrid Feature Augmentation (HFA), Hybrid Feature Pyramid (HFP), and Dual Scale Head (DSH). Specifically, HFA employs parallel dilated convolution kernels of different sizes to extend shallow features with different receptive fields, improving the multi-scale adaptability of the network; HFP enhances the perception of small objects by capturing contextual information across layers; and DSH reconstructs the original prediction head using a set of high-resolution and ultrahigh-resolution features. In addition, a corresponding dual-scale loss function is designed to train HRFNet. Finally, comprehensive evaluation results on public benchmarks such as VisDrone-DET and TinyPerson demonstrate the robustness of the proposed method. Most impressively, the proposed HRFNet achieves an mAP of 51.0 on VisDrone-DET with 29.3M parameters, outperforming extant state-of-the-art detectors. HRFNet also performs excellently in complex scenarios captured by drones, achieving the best performance on the CS-Drone dataset we built.
Keywords: drone remote sensing; object detection on drone view; small object detector; hybrid receptive field; feature pyramid network; feature augmentation; multi-scale object detection
16. A Systematic Review of Deep Learning-Based Object Detection in Agriculture: Methods, Challenges, and Future Directions (Cited by 1)
Authors: Mukesh Dalal, Payal Mittal. Computers, Materials & Continua, 2025, Issue 7, pp. 57–91 (35 pages).
Abstract: Deep learning-based object detection has revolutionized various fields, including agriculture. This paper presents a systematic review, based on the PRISMA 2020 approach, of object detection techniques in agriculture, exploring the evolution of methods and applications over the past three years and highlighting the shift from conventional computer vision to deep learning-based methodologies owing to their enhanced real-time efficacy. The review emphasizes the integration of advanced models, such as You Only Look Once (YOLO) v9 and v10, EfficientDet, Transformer-based models, and hybrid frameworks that improve precision, accuracy, and scalability for crop monitoring and disease detection. It also highlights benchmark datasets and evaluation metrics, and addresses limitations such as domain-adaptation challenges, dataset heterogeneity, and occlusion, while offering insights into prospective research avenues such as multimodal learning, explainable AI, and federated learning. The main aim of this paper is to serve as a thorough resource guide for scientists, researchers, and stakeholders implementing deep learning-based object detection methods for the development of intelligent, robust, and sustainable agricultural systems.
Keywords: artificial intelligence; object detection; computer vision; agriculture; deep learning
Impacts of synoptic weather patterns on Hefei's ozone in warm season and analysis of transport pathways during extreme pollution events 被引量:1
17
作者 Feng Hu Pinhua Xie +5 位作者 Jin Xu Ang Li Yinsheng Lv Zhidong Zhang Jiangyi Zheng Xin Tian 《Journal of Environmental Sciences》 2025年第10期371-384,共14页
Extreme ozone pollution events(EOPEs)are associated with synoptic weather patterns(SWPs)and pose severe health and ecological risks.However,a systematic investigation of themeteorological causes,transport pathways,and... Extreme ozone pollution events(EOPEs)are associated with synoptic weather patterns(SWPs)and pose severe health and ecological risks.However,a systematic investigation of themeteorological causes,transport pathways,and source contributions to historical EOPEs is still lacking.In this paper,the K-means clustering method is applied to identify six dominant SWPs during the warm season in the Yangtze River Delta(YRD)region from 2016 to 2022.It provides an integrated analysis of the meteorological factors affecting ozone pollution in Hefei under different SWPs.Using the WRF-FLEXPART model,the transport pathways(TPPs)and geographical sources of the near-surface air masses in Hefei during EOPEs are investigated.The results reveal that Hefei experienced the highest ozone concentration(134.77±42.82μg/m^(3)),exceedance frequency(46 days(23.23%)),and proportion of EOPEs(21 instances,47.7%)under the control of peripheral subsidence of typhoon(Type 5).Regional southeast winds correlated with the ozone pollution in Hefei.During EOPEs,a high boundary layer height,solar radiation,and temperature;lowhumidity and cloud cover;and pronounced subsidence airflow occurred over Hefei and the broader YRD region.The East-South(E_S)patterns exhibited the highest frequency(28 instances,65.11%).Regarding the TPPs and geographical sources of the near-surface air masses during historical EOPEs.The YRD was the main source for land-originating air masses under E_S patterns(50.28%),with Hefei,southern Anhui,southern Jiangsu,and northern Zhejiang being key contributors.These findings can help improve ozone pollution early warning and control mechanisms at urban and regional scales. 展开更多
Keywords: ozone; objective weather classification; transport pathway; source attribution; Hefei
Large-scale single-pixel imaging and sensing (cited by 1)
18
Authors: Lintao Peng, Siyu Xie, Hui Lu, Liheng Bian. 《Advanced Photonics Nexus》 2025, No. 2, pp. 97-110 (14 pages)
Existing single-pixel imaging (SPI) and sensing techniques suffer from poor reconstruction quality and heavy computation cost, limiting their widespread application. To tackle these challenges, we propose a large-scale single-pixel imaging and sensing (SPIS) technique that enables high-quality megapixel SPI and highly efficient image-free sensing at a low sampling rate. Specifically, we first scan and sample the entire scene using small-size optimized patterns to obtain information-coupled measurements. Compared with conventional full-sized patterns, the small-sized optimized patterns achieve higher imaging fidelity and sensing accuracy with an order of magnitude fewer pattern parameters. Next, the coupled measurements are processed through a transformer-based encoder to extract high-dimensional features, followed by a task-specific plug-and-play decoder for imaging or image-free sensing. Considering that regions with rich textures and edges are more difficult to reconstruct, we use an uncertainty-driven self-adaptive loss function to reinforce the network's attention to these regions, thereby improving imaging and sensing performance. Extensive experiments demonstrate that the reported technique achieves 24.13 dB megapixel SPI at a sampling rate of 3% within 1 s. In terms of sensing, it outperforms existing methods by 12% in image-free segmentation accuracy and achieves state-of-the-art image-free object detection accuracy with an order of magnitude less data bandwidth.
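The block-wise small-pattern sampling idea can be sketched as follows. All sizes and patterns here are toy stand-ins: in the paper the patterns are optimized jointly with the network, and the transformer encoder/decoder is not shown.

```python
import numpy as np

# Toy illustration of block-wise single-pixel sampling: a small shared
# pattern bank scans the scene block by block, so the pattern parameter
# count is n_pat*b*b instead of n_meas*H*W for full-frame patterns.
rng = np.random.default_rng(0)
H = W = 64                 # toy scene (SPIS targets megapixel scales)
b, n_pat = 8, 2            # block/pattern size, patterns per block
scene = rng.random((H, W))
patterns = rng.random((n_pat, b, b))   # shared; learned in the real system

# Each single-pixel measurement is the inner product of one pattern with
# one scene block; axes: (block row i, block col j, pixel h, pixel w).
blocks = scene.reshape(H // b, b, W // b, b).transpose(0, 2, 1, 3)
meas = np.einsum('ijhw,phw->ijp', blocks, patterns)

print(meas.shape)          # (8, 8, 2): per-block coupled measurements
print(n_pat / (b * b))     # sampling rate = 2/64, about 3%
```

The `meas` tensor is what a learned decoder would consume, either to reconstruct the scene or to run image-free sensing directly on the measurements.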
Keywords: single-pixel imaging; image-free segmentation; image-free object detection; deep learning
Enhancing Security in Distributed Drone-Based Litchi Fruit Recognition and Localization Systems (cited by 1)
19
Authors: Liang Mao, Yue Li, Linlin Wang, Jie Li, Jiajun Tan, Yang Meng, Cheng Xiong. 《Computers, Materials & Continua》 2025, No. 2, pp. 1985-1999 (15 pages)
This paper introduces an advanced and efficient method for distributed drone-based fruit recognition and localization, tailored to the precision and security requirements of autonomous agricultural operations. Our method incorporates depth information to ensure precise localization and utilizes a streamlined detection network centered on the RepVGG module, which replaces the traditional C2f module, enhancing detection performance while maintaining speed. To bolster the detection of small, distant fruits in complex settings, we integrate Selective Kernel Attention (SKAttention) and a specialized small-target detection layer. This adaptation allows the system to manage difficult conditions such as variable lighting and obstructive foliage. To reinforce security, the tasks of recognition and localization are distributed among multiple drones, enhancing resilience against tampering and data manipulation; this distribution also optimizes resource allocation through collaborative processing. The model remains lightweight and is optimized for rapid and accurate detection, which is essential for real-time applications. Our proposed system, validated with a D435 depth camera, achieves a mean Average Precision (mAP) of 0.943 and a frame rate of 169 FPS, improvements over the baseline of 0.039 in mAP and 25 FPS, respectively. Additionally, the average localization error is reduced to 0.82 cm, highlighting the model's high precision. These enhancements render the system highly effective for secure, autonomous fruit-picking operations, addressing significant performance and cybersecurity challenges in agriculture. This approach establishes a foundation for reliable, efficient, and secure distributed fruit-picking applications, facilitating the advancement of autonomous systems in contemporary agricultural practices.
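The depth-based localization step can be sketched with a minimal pinhole-camera deprojection: a detected fruit's bounding-box center plus the aligned depth reading yields a 3D point in the camera frame. The intrinsics below are illustrative values, not the D435 calibration, and the paper's full localization pipeline is not given in the abstract.

```python
import numpy as np

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Pixel (u, v) with depth in meters -> 3D point in the camera frame."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Illustrative intrinsics (focal lengths and principal point, in pixels);
# real values come from the depth camera's calibration.
fx = fy = 615.0
cx, cy = 320.0, 240.0

# Center of a detected fruit's bounding box, with its aligned depth value.
point = deproject(u=400, v=200, depth_m=0.50, fx=fx, fy=fy, cx=cx, cy=cy)
print(point.round(4))   # x right, y down, z forward, in meters
```

In a distributed setup, each drone could run this deprojection locally on its own detections and share only the resulting 3D coordinates, which keeps the exchanged data small.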
Keywords: object detection; deep learning; machine learning