Journal Articles
100,378 articles found
TQU-GraspingObject: 3D Common Objects Detection, Recognition, and Localization on Point Cloud for Hand Grasping in Sharing Environments
1
Authors: Thi-Loan Nguyen, Huy-Nam Chu, +2 authors, The-Thanh Hua, Trung-Nghia Phung, Van-Hung Le. Computers, Materials & Continua, 2026, Issue 5, pp. 1701-1722 (22 pages)
To support the process of grasping objects on a tabletop for the blind or a robotic arm, it is necessary to address fundamental computer vision tasks, such as detecting, recognizing, and locating objects in space, and determining the position of the grasping information. These results can then be used to guide the visually impaired or to execute grasping tasks with a robotic arm. In this paper, we collected, annotated, and published the benchmark TQU-GraspingObject dataset for testing, validation, and evaluation of deep learning (DL) models for detecting, recognizing, and localizing grasping objects in 2D and 3D space, especially 3D point cloud data. Our dataset is collected in a shared room, with common everyday objects placed on the tabletop in jumbled positions, captured with an Intel RealSense D435 (IR-D435). The dataset includes more than 63k RGB-D pairs and related data such as the segmented 3D object point cloud, the coordinate-system normalization matrix, the normalized 3D object point cloud, and the hand pose for grasping each object. We also conducted experiments on four DL networks with the best performance: SSD-MobileNetV3, ResNet50-Transformer, ResNet101-Transformer, and YOLOv12. The results show that YOLOv12 is the most suitable for detecting and recognizing objects in images. All data, annotations, toolkit, source code, point cloud data, and results are publicly available on our project website: https://github.com/HuaTThanhIT2327Tqu/datasetv2.
Keywords: grasping object for the blind/robot arm; TQU-GraspingObject benchmark dataset; 3D point cloud data; deep learning (DL); object detection/recognition; Intel RealSense D435 (IR-D435)
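The dataset's 3D point clouds come from RGB-D frames. As a rough, hypothetical sketch (not the dataset's published toolkit), a depth map can be back-projected into a camera-frame point cloud with the standard pinhole model; `fx`, `fy`, `cx`, `cy` stand in for the camera intrinsics:

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy, scale=0.001):
    """Back-project a depth map (row-major list of rows, raw units)
    into 3D camera-frame points with the pinhole model:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy."""
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d == 0:          # zero marks invalid depth on sensors like the D435
                continue
            z = d * scale       # e.g. millimetres -> metres
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points
```

With real D435 frames the intrinsics would come from the camera's calibration rather than being hard-coded.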
Transformer-Driven Multimodal for Human-Object Detection and Recognition for Intelligent Robotic Surveillance
2
Authors: Aman Aman Ullah, Yanfeng Wu, +3 authors, Shaheryar Najam, Nouf Abdullah Almujally, Ahmad Jalal, Hui Liu. Computers, Materials & Continua, 2026, Issue 4, pp. 1364-1383 (20 pages)
Human object detection and recognition is essential for elderly monitoring and assisted living; however, models relying solely on pose or scene context often struggle in cluttered or visually ambiguous settings. To address this, we present SCENET-3D, a transformer-driven multimodal framework that unifies human-centric skeleton features with scene-object semantics for intelligent robotic vision through a three-stage pipeline. In the first stage, scene analysis, rich geometric and texture descriptors are extracted from RGB frames, including surface-normal histograms, angles between neighboring normals, Zernike moments, directional standard deviation, and Gabor-filter responses. In the second stage, scene-object analysis, non-human objects are segmented and represented using local feature descriptors and complementary surface-normal information. In the third stage, human-pose estimation, silhouettes are processed through an enhanced MoveNet to obtain 2D anatomical keypoints, which are fused with depth information and converted into RGB-based point clouds to construct pseudo-3D skeletons. Features from all three stages are fused and fed into a transformer encoder with multi-head attention to resolve visually similar activities. Experiments on UCLA (95.8%), ETRI-Activity3D (89.4%), and CAD-120 (91.2%) demonstrate that combining pseudo-3D skeletons with rich scene-object fusion significantly improves generalizable activity recognition, enabling safer elderly care, natural human-robot interaction, and robust context-aware robotic perception in real-world environments.
Keywords: human object detection; elderly care; RGB-based pose estimation; scene context analysis; object recognition; Gabor features; point cloud reconstruction
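One of the stage-one geometric descriptors is the angle between neighboring surface normals. A minimal sketch, assuming the normals are already unit-length 3D vectors (the paper's histogramming over these angles is not reproduced):

```python
import math

def normal_angle(n1, n2):
    """Angle in radians between two unit-length surface normals,
    via the dot product, clamped against floating-point drift."""
    dot = sum(a * b for a, b in zip(n1, n2))
    return math.acos(max(-1.0, min(1.0, dot)))
```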
An Improved Variant of Multi-Population Cooperative Constrained Multi-Objective Optimization (MCCMO) for Multi-Objective Optimization Problems
3
Authors: Muhammad Waqar Khan, Adnan Ahmed Siddiqui, Syed Sajjad Hussain Rizvi. Computers, Materials & Continua, 2026, Issue 2, pp. 1874-1888 (15 pages)
Multi-objective optimization problems, especially in constrained environments such as power distribution planning, demand robust strategies for discovering effective solutions. This work presents an improved variant of the Multi-population Cooperative Constrained Multi-Objective Optimization (MCCMO) algorithm, termed Adaptive Diversity Preservation (ADP). The enhancement focuses on improved constraint-handling strategies, local search integration, hybrid selection approaches, and adaptive parameter control. The improved variant was evaluated on the RWMOP50 power distribution system planning benchmark. The findings show that it outperformed the original MCCMO across the eleven performance metrics, particularly in convergence speed, constraint-handling efficiency, and solution diversity. The results also establish that MCCMO-ADP consistently delivers substantial performance gains over the baseline MCCMO, demonstrating its effectiveness across performance metrics. The new variant also excels at maintaining a balanced trade-off between exploration and exploitation throughout the search process, making it especially suitable for complex optimization problems in multi-constrained power systems. These enhancements make MCCMO-ADP a valuable and promising candidate for problems such as renewable energy scheduling, logistics planning, and power system optimization. Future work will benchmark MCCMO-ADP against widely recognized algorithms such as NSGA-II, NSGA-III, and MOEA/D and will extend its validation to large-scale real-world optimization domains to further consolidate its generalizability.
Keywords: MCCMO algorithms; adaptive diversity preservation; RWMOP50; power distribution system; multi-modal multi-objective optimization; evolutionary algorithm; multi-objective problem
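Multi-objective optimizers like MCCMO-ADP compare candidates by Pareto dominance rather than a single score. A minimal, generic sketch (not the paper's algorithm) of extracting the non-dominated front from a set of objective vectors, assuming all objectives are minimized:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```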
Global-local feature optimization based RGB-IR fusion object detection on drone view (Cited by 1)
4
Authors: Zhaodong CHEN, Hongbing JI, Yongquan ZHANG. Chinese Journal of Aeronautics, 2026, Issue 1, pp. 436-453 (18 pages)
Visible and infrared (RGB-IR) fusion object detection plays an important role in security, disaster relief, etc. In recent years, deep-learning-based RGB-IR fusion detection methods have developed rapidly, but still struggle with the complex and changing scenarios captured by drones, mainly for two reasons: (A) RGB-IR fusion detectors are susceptible to inferior inputs that degrade performance and stability. (B) RGB-IR fusion detectors are susceptible to redundant features that reduce accuracy and efficiency. In this paper, an innovative RGB-IR fusion detection framework based on global-local feature optimization, named GLFDet, is proposed to improve the detection performance and efficiency for drone-captured objects. The key components of GLFDet are a Global Feature Optimization (GFO) module, a Local Feature Optimization (LFO) module, and a Channel Separation Fusion (CSF) module. Specifically, GFO calculates the information content of the input image from the frequency domain and optimizes the features holistically. Then, LFO dynamically selects high-value features and filters out low-value features before fusion, which significantly improves fusion efficiency. Finally, CSF fuses the RGB and IR features across the corresponding channels, which avoids rearranging the channel relationships and enhances model stability. Extensive experimental results show that the proposed method achieves the best performance on three popular RGB-IR datasets: DroneVehicle, VEDAI, and LLVIP. In addition, GLFDet is more lightweight than comparable models, making it more appealing for edge devices such as drones. The code is available at https://github.com/laochen330/GLFDet.
Keywords: object detection; deep learning; RGB-IR fusion; drones; global feature; local feature
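GFO is described as measuring the information content of an input image from the frequency domain. As a loose, hypothetical stand-in for that idea (the paper's actual formulation is not given here), Shannon entropy of an intensity histogram is one common way to quantify how much information an input carries:

```python
import math

def image_entropy(pixels, levels=256):
    """Shannon entropy (bits) of an intensity histogram; a simple
    proxy for the information content of an image. `pixels` is a
    flat list of integer intensities in [0, levels)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)
```

A uniform (flat) image scores 0 bits; a maximally mixed one approaches log2(levels).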
DI-YOLOv5: An Improved Dual-Wavelet-Based YOLOv5 for Dense Small Object Detection (Cited by 1)
5
Authors: Zi-Xin Li, Yu-Long Wang, Fei Wang. IEEE/CAA Journal of Automatica Sinica, 2025, Issue 2, pp. 457-459 (3 pages)
Dear Editor, This letter focuses on the fact that in object detection tasks, small objects with few pixels disappear in feature maps with large receptive fields as the network deepens. The detection of dense small objects is therefore challenging.
Keywords: small objects; receptive fields; feature maps; detection; dense small objects; object detection; dense objects
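The effect the letter describes can be made concrete with the standard receptive-field recurrence (rf += (k - 1) * jump; jump *= stride): after only a few strided layers, the receptive field of one output cell far exceeds a few-pixel object. A small sketch:

```python
def receptive_field(layers):
    """Receptive-field size of one output cell after a stack of
    (kernel, stride) conv/pool layers, using the usual recurrence."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump   # each layer widens the field by (k-1) input-space steps
        jump *= s              # stride compounds the step size
    return rf
```

Two 3x3 stride-1 convolutions already cover a 5x5 patch; adding strides makes the field outgrow a small object within a handful of layers.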
Enhanced Multi-Scale Feature Extraction Lightweight Network for Remote Sensing Object Detection
6
Authors: Xiang Luo, Yuxuan Peng, +2 authors, Renghong Xie, Peng Li, Yuwen Qian. Computers, Materials & Continua, 2026, Issue 3, pp. 2097-2118 (22 pages)
Deep learning has made significant progress in oriented object detection for remote sensing images. However, existing methods still face challenges with difficult tasks such as multi-scale targets, complex backgrounds, and small objects. Keeping models lightweight to fit the resource constraints of remote sensing scenarios while improving task performance remains a research hotspot. We therefore propose EM-YOLO, an enhanced multi-scale feature extraction lightweight network based on the YOLOv8s architecture, specifically optimized for the large target-scale variations, diverse orientations, and numerous small objects in remote sensing images. Our innovations lie in two main aspects. First, a dynamic snake convolution (DSC) is introduced into the backbone network to enhance the model's feature extraction capability for oriented targets. Second, an innovative focusing-diffusion module is designed in the feature fusion neck to effectively integrate multi-scale feature information. Finally, we introduce the Layer-Adaptive Sparsity for magnitude-based Pruning (LASP) method to perform lightweight network pruning for resource-constrained scenarios. Experimental results on the lightweight Orin platform demonstrate that the proposed method significantly outperforms the original YOLOv8s model on oriented remote sensing object detection tasks, and achieves comparable or superior performance to state-of-the-art methods on three authoritative remote sensing datasets (DOTA v1.0, DOTA v1.5, and HRSC2016).
Keywords: deep learning; object detection; feature extraction; feature fusion; remote sensing
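LASP assigns each layer its own sparsity before magnitude pruning. As a simplified single-layer sketch (the layer-adaptive part of LASP lies in how `keep_ratio` is chosen per layer, which is not reproduced here), magnitude pruning keeps the largest-magnitude weights and zeroes the rest:

```python
def prune_layer(weights, keep_ratio):
    """Zero out all but the largest-magnitude weights of one layer.
    `keep_ratio` is the fraction of weights to retain; a
    layer-adaptive scheme would tune this per layer."""
    k = max(1, int(len(weights) * keep_ratio))
    # magnitude of the k-th largest weight becomes the cutoff
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]
```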
An Unsupervised Online Detection Method for Foreign Objects in Complex Environments
7
Authors: YANG Xiaoyang, YANG Yanzhu, DENG Haiping. Journal of Donghua University (English Edition), 2026, Issue 1, pp. 140-151 (12 pages)
In modern industrial production, foreign object detection in complex environments is crucial to ensure product quality and production safety. Detection systems based on deep-learning image processing algorithms often face challenges in handling high-resolution images and achieving accurate detection against complex backgrounds. To address these issues, this study employs the PatchCore unsupervised anomaly detection algorithm combined with data augmentation techniques to enhance the system's generalization across varying lighting conditions, viewing angles, and object scales. The proposed method is evaluated in a complex industrial detection scenario involving the bogie of an electric multiple unit (EMU). A dataset consisting of complex backgrounds, diverse lighting conditions, and multiple viewing angles is constructed to validate the performance of the detection system in real industrial environments. Experimental results show that the proposed model achieves an average area under the receiver operating characteristic curve (AUROC) of 0.92 and an average F1 score of 0.85. Combined with data augmentation, the model improves AUROC by 0.06 and F1 score by 0.03, demonstrating enhanced accuracy and robustness for foreign object detection in complex industrial settings. In addition, the effects of key factors on detection performance are systematically analyzed, providing practical guidance for parameter selection in real industrial applications.
Keywords: foreign object detection; unsupervised learning; data augmentation; complex environment; bogie; dataset
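PatchCore scores a test patch by its distance to a memory bank of features collected from normal (defect-free) samples only: the farther the nearest neighbour, the more anomalous the patch. A minimal sketch of that scoring step, assuming plain Euclidean distance over already-extracted feature vectors (coreset subsampling and the backbone are omitted):

```python
import math

def anomaly_score(feature, memory_bank):
    """Distance from a test patch feature to its nearest neighbour
    in a bank of normal-only features; high = likely foreign object."""
    return min(math.dist(feature, m) for m in memory_bank)
```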
A Comprehensive Literature Review on YOLO-Based Small Object Detection: Methods, Challenges, and Future Trends
8
Authors: Hui Yu, Jun Liu, Mingwei Lin. Computers, Materials & Continua, 2026, Issue 4, pp. 258-309 (52 pages)
Small object detection has been a focus of attention since the emergence of deep learning-based object detection. Although classical object detection frameworks have contributed significantly to the development of the field, many issues remain in detecting small objects due to the inherent complexity and diversity of real-world visual scenes. In particular, the YOLO (You Only Look Once) series of detection models, renowned for their real-time performance, have undergone numerous adaptations aimed at improving the detection of small targets. In this survey, we summarize the state-of-the-art YOLO-based small object detection methods. The review presents a systematic categorization of YOLO-based approaches for small-object detection, organized into four methodological avenues: attention-based feature enhancement, detection-head optimization, loss-function design, and multi-scale feature fusion strategies. We then examine the principal challenges addressed by each category. Finally, we analyze the performance of these methods on public benchmarks and, by comparing current approaches, identify limitations and outline directions for future research.
Keywords: small object detection; YOLO; real-time detection; feature fusion; deep learning
AdvYOLO: An Improved Cross-Conv-Block Feature Fusion-Based YOLO Network for Transferable Adversarial Attacks on ORSIs Object Detection
9
Authors: Leyu Dai, Jindong Wang, +2 authors, Ming Zhou, Song Guo, Hengwei Zhang. Computers, Materials & Continua, 2026, Issue 4, pp. 767-792 (26 pages)
In recent years, with the rapid advancement of artificial intelligence, object detection algorithms have made significant strides in accuracy and computational efficiency. Notably, research and applications of Anchor-Free models have opened new avenues for real-time target detection in optical remote sensing images (ORSIs). However, in the realm of adversarial attacks, developing adversarial techniques tailored to Anchor-Free models remains challenging. Adversarial examples generated from Anchor-Based models often transfer poorly to these new model architectures. Furthermore, the growing diversity of Anchor-Free models poses additional hurdles to achieving robust transferability of adversarial attacks. This study presents an improved cross-conv-block feature fusion You Only Look Once (YOLO) architecture, engineered to extract more comprehensive semantic features during backpropagation. To address the asymmetry between densely distributed objects in ORSIs and the corresponding detector outputs, a novel dense bounding box attack strategy is proposed, which incorporates a dense target bounding-box loss into the adversarial loss function. Furthermore, by integrating translation-invariant (TI) and momentum-iteration (MI) adversarial methodologies, the proposed framework significantly improves the transferability of adversarial attacks. Experimental results demonstrate that our method achieves superior adversarial attack performance, with adversarial transferability rates (ATR) of 67.53% on the NWPU VHR-10 dataset and 90.71% on the HRSC2016 dataset. Compared to ensemble and cascaded adversarial attack approaches, our method generates adversarial examples in an average of 0.64 s, an approximately 14.5% efficiency improvement under equivalent conditions.
Keywords: remote sensing object detection; transferable adversarial attack; feature fusion; cross-conv-block
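The momentum-iteration (MI) methodology stabilizes the attack direction by accumulating normalized gradients across steps before taking a sign step. A scalar, single-pixel sketch of one MI update (the paper applies this jointly with translation invariance and its dense-box loss, neither of which is reproduced here; `alpha` and `mu` are hypothetical step and decay values):

```python
def mi_step(x, grad, momentum, alpha=0.01, mu=1.0):
    """One momentum-iterative update on a single pixel value:
    accumulate a decayed, normalized gradient, then step by its sign."""
    g_norm = abs(grad) or 1.0            # L1 normalization (scalar case)
    momentum = mu * momentum + grad / g_norm
    sign = (momentum > 0) - (momentum < 0)
    return x + alpha * sign, momentum
```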
Hybrid Quantum Gate Enabled CNN Framework with Optimized Features for Human-Object Detection and Recognition
10
Authors: Nouf Abdullah Almujally, Tanvir Fatima Naik Bukht, +3 authors, Shuaa S. Alharbi, Asaad Algarni, Ahmad Jalal, Jeongmin Park. Computers, Materials & Continua, 2026, Issue 4, pp. 2254-2271 (18 pages)
Recognising human-object interactions (HOI) is a challenging task for traditional machine learning models, including convolutional neural networks (CNNs). Existing models show limited transferability across complex datasets such as D3D-HOI and SYSU 3D HOI. The conventional architecture of CNNs restricts their ability to handle highly complex HOI scenarios. HOI recognition requires improved feature extraction methods to overcome the current limitations in accuracy and scalability. This work proposes a novel quantum gate-enabled hybrid CNN (QEH-CNN) for effective HOI recognition. The model enhances CNN performance by integrating quantum computing components. The framework begins with bilateral image filtering, followed by multi-object tracking (MOT) and Felzenszwalb superpixel segmentation. A watershed algorithm refines object boundaries by cleaning merged superpixels. Feature extraction combines a histogram of oriented gradients (HOG), Global Image Statistics for Texture (GIST) descriptors, and a novel 23-joint keypoint extraction method using relative joint angles and joint proximity measures. A fuzzy optimization process refines the extracted features before feeding them into the QEH-CNN model. The proposed model achieves 95.06% accuracy on the D3D-HOI dataset and 97.29% on the SYSU 3D HOI dataset. The integration of quantum computing enhances feature optimization, leading to improved accuracy and overall model efficiency.
Keywords: pattern recognition; image segmentation; computer vision; object detection
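The 23-joint keypoint features include relative joint angles. A small sketch, assuming 2D keypoints, of the angle at a middle joint formed by the two limb vectors meeting there (the paper's full proximity measures and fuzzy refinement are not reproduced):

```python
import math

def joint_angle(a, b, c):
    """Relative angle in degrees at joint b for the keypoint triple
    a-b-c, computed from the two limb vectors b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    # atan2 of (cross, dot) is numerically stabler than acos of the dot
    return abs(math.degrees(math.atan2(cross, dot)))
```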
Superpixel-Aware Transformer with Attention-Guided Boundary Refinement for Salient Object Detection
11
Authors: Burhan Baraklı, Can Yüzkollar, +1 author, Tuğrul Taşçı, İbrahim Yıldırım. Computer Modeling in Engineering & Sciences, 2026, Issue 1, pp. 1092-1129 (38 pages)
Salient object detection (SOD) models struggle to simultaneously preserve global structure, maintain sharp object boundaries, and sustain computational efficiency in complex scenes. In this study, we propose SPSALNet, a task-driven two-stage (macro-micro) architecture that restructures the SOD process around superpixel representations. In the proposed approach, a "split-and-enhance" principle, introduced to our knowledge for the first time in the SOD literature, hierarchically classifies superpixels and then applies targeted refinement only to ambiguous or error-prone regions. At the macro stage, the image is partitioned into content-adaptive superpixel regions, and each superpixel is represented by a high-dimensional region-level feature vector. These representations define a regional decomposition problem in which superpixels are assigned to three classes: background, object interior, and transition regions. Superpixel tokens interact with a global feature vector from a deep network backbone through a cross-attention module and are projected into an enriched embedding space that jointly encodes local topology and global context. At the micro stage, the model employs a U-Net-based refinement process that allocates computational resources only to ambiguous transition regions. The image and distance-similarity maps derived from superpixels are processed through a dual-encoder pathway. Subsequently, channel-aware fusion blocks adaptively combine information from these two sources, producing sharper and more stable object boundaries. Experimental results show that SPSALNet achieves high accuracy with lower computational cost compared to recent competing methods. On the PASCAL-S and DUT-OMRON datasets, SPSALNet exhibits a clear performance advantage across all key metrics, and it ranks first on accuracy-oriented measures on HKU-IS. On the challenging DUT-OMRON benchmark, SPSALNet reaches an MAE of 0.034. Across all datasets, it preserves object boundaries and regional structure in a stable and competitive manner.
Keywords: salient object detection; superpixel segmentation; transformers; attention mechanism; multi-level fusion; edge-preserving refinement; model-driven
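The MAE figure reported for DUT-OMRON (0.034) is the standard saliency metric: the mean absolute difference between the predicted saliency map and the ground-truth mask, both scaled to [0, 1]. A minimal sketch over flattened maps:

```python
def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and the
    ground-truth mask, both flattened to values in [0, 1]."""
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(pred)
```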
FMCSNet: Mobile Devices-Oriented Lightweight Multi-Scale Object Detection via Fast Multi-Scale Channel Shuffling Network Model
12
Authors: Lijuan Huang, Xianyi Liu, +1 author, Jinping Liu, Pengfei Xu. Computers, Materials & Continua, 2026, Issue 1, pp. 1292-1311 (20 pages)
The ubiquity of mobile devices has driven advancements in mobile object detection. However, challenges in multi-scale object detection in open, complex environments persist due to limited computational resources. Traditional approaches such as network compression, quantization, and lightweight design often sacrifice accuracy or the robustness of feature representations. This article introduces the Fast Multi-scale Channel Shuffling Network (FMCSNet), a novel lightweight detection model optimized for mobile devices. FMCSNet integrates a fully convolutional Multilayer Perceptron (MLP) module, offering global perception without significantly increasing parameters and effectively bridging the gap between CNNs and Vision Transformers. FMCSNet balances computation and accuracy mainly through two key modules: the ShiftMLP module, comprising a shift operation and an MLP module, and a Partial Group Convolution (PGConv) module, which reduces computation while enhancing information exchange between channels. With a computational complexity of 1.4G FLOPs and 1.3M parameters, FMCSNet outperforms CNN-based and DWConv-based ShuffleNetv2 by 1% and 4.5% mAP on the Pascal VOC 2007 dataset, respectively. Additionally, FMCSNet achieves an mAP of 30.0 (0.5:0.95 IoU threshold) with only 2.5G FLOPs and 2.0M parameters. It reaches 32 FPS on low-performance i5-series CPUs, meeting real-time detection requirements. The adaptability of the PGConv module across scenarios further highlights FMCSNet as a promising solution for real-time mobile object detection.
Keywords: object detection; lightweight network; partial group convolution; multilayer perceptron
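The shift operation inside a ShiftMLP-style module mixes spatial context with zero extra parameters by sliding channel groups in different directions. A hypothetical pure-Python sketch (real implementations shift partitioned channel groups of a tensor in place; here each channel of a nested-list feature map cycles through four directions, with zero padding at the borders):

```python
def shift_channels(feat):
    """Slide each channel one pixel in a cycling direction
    (right, left, down, up); out-of-range positions become zero.
    `feat` is a list of channels, each a 2D list."""
    def shift2d(ch, dy, dx):
        h, w = len(ch), len(ch[0])
        out = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    out[ny][nx] = ch[y][x]
        return out
    dirs = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    return [shift2d(ch, *dirs[i % 4]) for i, ch in enumerate(feat)]
```

The MLP that follows the shift then mixes these displaced values per pixel, giving it a view of the neighbourhood without any convolution.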
Railway Track Defect Detection Based on Dynamic Multi-Modal Fusion and Challenging Object Enhanced Perception
13
Authors: Yaguan Wang, Linlin Kou, +3 authors, Yang Gao, Qiang Sun, Yong Qin, Genwang Peng. Structural Durability & Health Monitoring, 2026, Issue 2, pp. 195-212 (18 pages)
The fasteners employed in railway tracks are susceptible to defects arising from their intricate composition, and foreign objects are frequently observed on the track bed in an open environment. These two types of defects pose potential threats to high-speed trains, necessitating timely and accurate track inspection. Most existing automatic inspection methods rely on single visible-light data, and their efficacy is affected by complex environments. Furthermore, due to the single information dimension, detection accuracy is low for similar, occluded, and small object categories. To address these issues, this paper proposes a track defect detection method based on dynamic multi-modal fusion and challenging object enhanced perception. First, in light of the differences in how multimodal information is represented, this paper proposes a dynamic weighted multi-modal feature fusion module. The fused multi-modal features are assigned weights and then multiplied with the extracted single-modal features at multiple levels, achieving adaptive adjustment of the response degree of the fusion features. Second, a novel stepwise multi-scale convolution feature aggregation module is proposed for challenging objects. It employs depthwise separable convolution and cross-scale aggregation operations over different receptive fields to enhance feature extraction and reuse, reducing the progressive loss of effective information. Extensive experiments on the constructed RGBD dataset demonstrate the efficacy of the proposed method in comparison to eight established methods, encompassing both single-modal and multi-modal approaches.
Keywords: railway safety; track defect detection; multi-modal data; object detection
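The core of dynamic weighted fusion is gating each modality's features by learned weights before blending them. A generic sketch (the gate logits here are hypothetical inputs; in the paper they are produced by the fusion module itself):

```python
import math

def fuse(rgb_feat, depth_feat, gate_logits):
    """Softmax the two modality gates, then blend the RGB and depth
    feature vectors element-wise by the resulting weights."""
    e = [math.exp(g) for g in gate_logits]
    w_rgb, w_d = e[0] / sum(e), e[1] / sum(e)
    return [w_rgb * r + w_d * d for r, d in zip(rgb_feat, depth_feat)]
```

With equal logits the modalities contribute equally; a degraded modality can be down-weighted by lowering its gate.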
Face-Pedestrian Joint Feature Modeling with Cross-Category Dynamic Matching for Occlusion-Robust Multi-Object Tracking
14
Authors: Qin Hu, Hongshan Kong. Computers, Materials & Continua, 2026, Issue 1, pp. 870-900 (31 pages)
To address the frequent identity switches (IDs) and degraded identification accuracy in multi-object tracking (MOT) under complex occlusion scenarios, this study proposes an occlusion-robust tracking framework based on face-pedestrian joint feature modeling. By constructing a joint tracking model centered on "intra-class independent tracking + cross-category dynamic binding", designing a multi-modal matching metric with spatio-temporal and appearance constraints, and introducing a cross-category feature mutual verification mechanism and a dual matching strategy, this work effectively resolves the performance degradation that traditional single-category tracking methods suffer under short-term occlusion, cross-camera tracking, and crowded environments. Experiments on the Chokepoint_Face_Pedestrian_Track test set demonstrate that in complex scenes, the proposed method improves the area under the Face-Pedestrian Matching F1 curve (F1 AUC) by approximately 4 to 43 percentage points compared to several traditional methods. The joint tracking model achieves overall performance of IDF1: 85.1825% and MOTA: 86.5956%, improvements of 0.91 and 0.06 percentage points, respectively, over the baseline model. Ablation studies confirm the effectiveness of key modules such as the Intersection over Area (IoA)/Intersection over Union (IoU) joint metric and dynamic threshold adjustment, validating the significant role of the cross-category identity matching mechanism in enhancing tracking stability. Our model shows a 16.7% frames-per-second (FPS) drop vs. FairMOT (fairness of detection and re-identification in multiple object tracking), with its cross-category binding module adding about 10% overhead, yet maintains near-real-time performance for essential face-pedestrian tracking at small resolutions.
Keywords: cross-category dynamic binding; joint feature modeling; face-pedestrian association; multi-object tracking; occlusion robustness
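The IoA/IoU joint metric pairs the usual Intersection over Union with Intersection over Area of the smaller box, which stays near 1.0 when a face box sits inside its pedestrian box even though their IoU is tiny. A minimal sketch with (x1, y1, x2, y2) boxes:

```python
def _inter(b1, b2):
    # overlap area of two (x1, y1, x2, y2) boxes
    w = min(b1[2], b2[2]) - max(b1[0], b2[0])
    h = min(b1[3], b2[3]) - max(b1[1], b2[1])
    return max(0, w) * max(0, h)

def iou(b1, b2):
    """Intersection over Union of two boxes."""
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    i = _inter(b1, b2)
    return i / (a1 + a2 - i)

def ioa(face, person):
    """Intersection over Area of the face box: equals 1.0 whenever
    the face lies fully inside the pedestrian box."""
    area = (face[2] - face[0]) * (face[3] - face[1])
    return _inter(face, person) / area
```

This is why a cross-category binding test can use IoA where an IoU threshold alone would reject the pair.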
Ghost-Attention You Only Look Once (GA-YOLO): Enhancing Small Object Detection for Traffic Monitoring
15
Authors: Xinyue Zhang, Yuxuan Zhao, +5 authors, Jeremy S. Smith, Yuechun Wang, Gabriela Mogos, Ka Lok Man, Yutao Yue, Young-Ae Jung. Computers, Materials & Continua, 2026, Issue 5, pp. 1773-1804 (32 pages)
Intelligent Transportation Systems (ITS) are a cornerstone of modern traffic management, leveraging surveillance cameras as primary visual sensors to monitor road conditions. However, the fixed characteristics of public surveillance cameras, coupled with inherent image-resolution limitations, pose significant challenges for Small Object Detection (SOD) in traffic surveillance. To address these challenges, this paper proposes Ghost-Attention YOLO (GA-YOLO), a lightweight model derived from YOLOv8 and specifically designed for traffic SOD. To enhance attention on small targets and critical features, a novel channel-spatial attention mechanism, termed Small-object Extend Attention (SEA), is introduced. In addition, the original C2f module is replaced with a more efficient Cross-Stage Partial (CSP) module, C3k2, to achieve improved feature processing at lower cost. Building on these designs, a CSP-based Ghost Bottleneck with Attention (CGBA) module is developed by integrating SEA into C3k2 and deployed within the FPN-PAN network to strengthen feature extraction and fusion. Compared with the similar-scale baseline models YOLOv8n and YOLOv11n, GA-YOLO demonstrates clear performance advantages on the UA-DETRAC dataset. Specifically, GA-YOLO achieves over 3% improvements in precision and mAP@50, along with a 5.6% gain in mAP@50-95, while reducing the parameter count by nearly 10% and computational complexity by 0.5 GFLOPs compared with YOLOv8n. In addition, GA-YOLO outperforms YOLOv11n by 8.6% in precision and 3.2% in mAP@50-95. These results indicate that GA-YOLO effectively balances detection accuracy and computational efficiency. Furthermore, additional evaluations across varying occlusion levels and representative detection models confirm the effectiveness and practicality of GA-YOLO for traffic-oriented SOD tasks.
Keywords: small object detection (SOD); intelligent transportation system (ITS); attention mechanism; YOLO
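SEA is a channel-spatial attention mechanism; its exact design is not reproduced here, but the channel half of such blocks typically squeezes each channel to a global statistic and gates the channel with it. A generic squeeze-and-gate sketch, not the paper's SEA:

```python
import math

def channel_attention(feat):
    """Channel-attention sketch: squeeze each channel (a 2D list)
    to its global mean, gate it with a sigmoid, and rescale the
    whole channel by that gate."""
    out = []
    for ch in feat:
        mean = sum(map(sum, ch)) / (len(ch) * len(ch[0]))
        gate = 1.0 / (1.0 + math.exp(-mean))   # sigmoid of the squeezed statistic
        out.append([[gate * v for v in row] for row in ch])
    return out
```

Real blocks learn a small MLP between the squeeze and the gate; the fixed sigmoid here just illustrates the rescaling.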
EHDC-YOLO: Enhancing Object Detection for UAV Imagery via Multi-Scale Edge and Detail Capture
16
Authors: Zhiyong Deng, Yanchen Ye, Jiangling Guo. Computers, Materials & Continua, 2026, Issue 1, pp. 1665-1682 (18 pages)
With the rapid expansion of drone applications, accurate detection of objects in aerial imagery has become crucial for intelligent transportation, urban management, and emergency rescue missions. However, existing methods face numerous challenges in practical deployment, including scale variation handling, feature degradation, and complex backgrounds. To address these issues, we propose Edge-enhanced and Detail-Capturing You Only Look Once (EHDC-YOLO), a novel framework for object detection in Unmanned Aerial Vehicle (UAV) imagery. Based on the You Only Look Once version 11 nano (YOLOv11n) baseline, EHDC-YOLO systematically introduces several architectural enhancements: (1) a Multi-Scale Edge Enhancement (MSEE) module that leverages multi-scale pooling and edge information to enhance boundary feature extraction; (2) an Enhanced Feature Pyramid Network (EFPN) that integrates P2-level features with Cross Stage Partial (CSP) structures and OmniKernel convolutions for better fine-grained representation; and (3) a Dynamic Head (DyHead) with multi-dimensional attention mechanisms for enhanced cross-scale modeling and perspective adaptability. Comprehensive experiments on the Vision meets Drones for Detection (VisDrone-DET) 2019 dataset demonstrate that EHDC-YOLO achieves significant improvements, increasing mean Average Precision (mAP)@0.5 from 33.2% to 46.1% (an absolute improvement of 12.9 percentage points) and mAP@0.5:0.95 from 19.5% to 28.0% (an absolute improvement of 8.5 percentage points) compared with the YOLOv11n baseline, while maintaining a reasonable parameter count (2.81 M vs the baseline's 2.58 M). Further ablation studies confirm the effectiveness of each proposed component, while visualization results highlight EHDC-YOLO's superior performance in detecting objects and handling occlusions in complex drone scenarios.
Keywords: UAV imagery; object detection; multi-scale feature fusion; edge enhancement; detail preservation; YOLO; feature pyramid network; attention mechanism
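Multi-scale edge enhancement of the kind MSEE describes can be illustrated by a high-pass trick: average-pool the feature map at several scales, upsample back, and add the residuals (which concentrate at boundaries) onto the original. This is a minimal sketch of the underlying idea, not the paper's exact module.

```python
import numpy as np

def multiscale_edge_enhance(feat, scales=(2, 4)):
    """Enhance boundaries by adding back multi-scale high-pass residuals.

    feat: (H, W) feature map, with H and W divisible by each scale.
    For each scale s, the map is average-pooled by s, upsampled back,
    and the residual (an edge-like signal) is added onto the output.
    """
    H, W = feat.shape
    out = feat.copy()
    for s in scales:
        # Block-average pooling with stride s, then nearest upsampling.
        pooled = feat.reshape(H // s, s, W // s, s).mean(axis=(1, 3))
        upsampled = np.repeat(np.repeat(pooled, s, axis=0), s, axis=1)
        out += feat - upsampled   # high-frequency (edge) residual
    return out
```

On smooth regions the residuals vanish and the map is unchanged; across a step edge the contrast is amplified, which is what makes small-object boundaries easier for later layers to pick up.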
A Robust Damage Identification Method Based on Modified Holistic Swarm Optimization Algorithm and Hybrid Objective Function
17
Authors: Xiansong Xie, Xiaoqian Qian. Structural Durability & Health Monitoring, 2026, Issue 2, pp. 235-259 (25 pages)
Damage identification methods based on the correlation function of acceleration responses have been developed and employed, yet they still face difficulty in identifying local or minor structural damages. To deal with this issue, a robust structural damage identification method is developed, integrating a modified holistic swarm optimization (MHSO) algorithm with a hybrid objective function. The MHSO is developed by combining Hammersley sequence-based population initialization, chaotic search around the worst solution, and Hooke-Jeeves pattern search around the best solution, thereby improving both global exploration and local exploitation capabilities. A hybrid objective function is constructed by merging acceleration correlation function-based and strain correlation function-based objective functions, effectively leveraging the complementary sensitivities of global and local responses. To further suppress spurious solutions and promote sparsity in parameter estimation, an additional L0.5 regularization term is introduced. The effectiveness of the proposed method is validated through numerical simulations on a simply supported beam and a steel girder benchmark structure. Comparative studies with sequential quadratic programming, a genetic algorithm, and HSO demonstrate that the MHSO achieves superior accuracy and convergence efficiency, even with limited sensors and 20% noise-contaminated measurements. Results highlight that the hybrid objective function significantly enhances the detection of both major and minor damages, while the inclusion of sparse regularization improves robustness against noise and model uncertainties. The findings indicate that the proposed framework provides a reliable and computationally efficient solution for simultaneous localization and quantification of structural damages, offering promising applicability to real-world structural health monitoring scenarios.
Keywords: damage identification; holistic swarm optimization algorithm; combined correlation function; hybrid objective function; sparse regularization; grid structure
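The Hammersley sequence mentioned for population initialization is a standard low-discrepancy construction: the first coordinate is i/n, and the remaining coordinates are radical inverses of i in successive prime bases. A minimal sketch (independent of the MHSO algorithm itself):

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def hammersley(n, dim):
    """n points of the dim-dimensional Hammersley set in [0, 1)^dim.

    The first coordinate is i/n; coordinate d+1 uses the radical inverse
    in the d-th prime base. The resulting low-discrepancy spread seeds a
    swarm more evenly than uniform random initialization.
    """
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    assert dim - 1 <= len(primes), "extend the prime list for higher dims"
    return [[i / n] + [radical_inverse(i, primes[d]) for d in range(dim - 1)]
            for i in range(n)]
```

Each point would then be rescaled from the unit cube to the bounds of the damage parameters before the swarm starts iterating.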
Deep Learning-Based Toolkit Inspection: Object Detection and Segmentation in Assembly Lines
18
Authors: Arvind Mukundan, Riya Karmakar, Devansh Gupta, Hsiang-Chen Wang. Computers, Materials & Continua, 2026, Issue 1, pp. 1255-1277 (23 pages)
Modern manufacturing processes have become more reliant on automation because of the accelerated transition from Industry 3.0 to Industry 4.0. Manual inspection of products on assembly lines remains inefficient, prone to errors, and lacking in consistency, emphasizing the need for a reliable, automated inspection system. Leveraging both object detection and image segmentation approaches, this research proposes a vision-based solution for the detection of various kinds of tools in a toolkit using deep learning (DL) models. Two Intel RealSense D455f depth cameras were arranged in a top-down configuration to capture both RGB and depth images of the toolkits. After applying multiple constraints and enhancement through preprocessing and augmentation, a dataset consisting of 3300 annotated RGB-D photos was generated. Several DL models were selected through a comprehensive assessment of mean Average Precision (mAP), precision-recall equilibrium, inference latency (target ≥30 FPS), and computational burden, resulting in a preference for YOLO and Region-based Convolutional Neural Network (R-CNN) variants over ViT-based models due to the latter's increased latency and resource requirements. YOLOv5, YOLOv8, YOLOv11, Faster R-CNN, and Mask R-CNN were trained on the annotated dataset and evaluated using key performance metrics (recall, accuracy, F1-score, and precision). YOLOv11 demonstrated balanced excellence with 93.0% precision, 89.9% recall, and a 90.6% F1-score in object detection, as well as 96.9% precision, 95.3% recall, and a 96.5% F1-score in instance segmentation, with an average inference time of 25 ms per frame (≈40 FPS), demonstrating real-time performance. Leveraging these results, a YOLOv11-based Windows application was successfully deployed in a real-time assembly-line environment, where it accurately processed live video streams to detect and segment tools within toolkits, demonstrating its practical effectiveness in industrial automation. In addition to detection and segmentation, the application can precisely measure socket dimensions by applying edge detection techniques to YOLOv11 segmentation masks. This enables specification-level quality control directly on the assembly line, improving real-time inspection capability. The implementation represents a significant step forward for intelligent manufacturing in the Industry 4.0 paradigm, providing a scalable, efficient, and accurate approach to automated inspection and dimensional verification tasks.
Keywords: tool detection; image segmentation; object detection; assembly line automation; Industry 4.0; Intel RealSense; deep learning; toolkit verification; RGB-D imaging; quality assurance
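Measuring part dimensions from a segmentation mask reduces, in its simplest form, to finding the pixel extent of the mask and converting it with a calibrated pixel pitch. The sketch below is a simplified stand-in for the edge-based socket measurement described above; `mm_per_px` is an assumed calibration input, not a value from the paper.

```python
import numpy as np

def mask_dimensions_mm(mask, mm_per_px):
    """Width and height of a binary instance mask, in millimetres.

    mask: 2-D boolean array from a segmentation model.
    mm_per_px: pixel pitch from camera calibration (assumed known).
    Returns (width_mm, height_mm); (0, 0) for an empty mask.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return 0.0, 0.0
    height_px = ys.max() - ys.min() + 1
    width_px = xs.max() - xs.min() + 1
    return width_px * mm_per_px, height_px * mm_per_px
```

A production system would refine this with sub-pixel edge localization and depth-dependent scaling from the RGB-D stream, but the mask-to-millimetres conversion is the core step.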
OP-SLAM: An RGB-D SLAMMOT Method Leveraging the Constraints of Object Planar Features
19
Authors: WANG Yingli, LIU Yang, GUO Chi. Wuhan University Journal of Natural Sciences, 2026, Issue 1, pp. 45-57 (13 pages)
By integrating self-localization, environment mapping, and dynamic object tracking into a unified framework, visual simultaneous localization and mapping with multiple object tracking (SLAMMOT) enhances decision-making and interaction capabilities in applications such as autonomous driving, robotic navigation, and augmented reality. While numerous outstanding visual SLAMMOT methods have been proposed, the majority rely only on point features, overlooking the abundant and stable planar features in artificial objects that can provide valuable constraints. To address this limitation, we propose OP (object planar)-SLAM, an RGB-D SLAMMOT system that leverages planar features to improve object pose estimation and reconstruction accuracy. Specifically, we introduce an accurate object planar feature extraction and association method using normal images, alongside a novel object bundle adjustment framework that incorporates planar constraints for enhanced optimization. The proposed system is evaluated on both synthetic and public real-world datasets, including the Oxford multimotion dataset (OMD) and the KITTI tracking dataset. Especially on the OMD, where planar features are prominent, our method improves object pose estimation accuracy by approximately 60%. Extensive experiments demonstrate its effectiveness in enhancing object pose estimation and reconstruction, achieving notable performance compared with existing methods. Furthermore, OP-SLAM runs in real time, making it suitable for practical robot and augmented reality applications.
Keywords: visual simultaneous localization and mapping (SLAM); multiple object tracking (MOT); dynamic scenes; planar feature
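Extracting a planar feature from RGB-D data usually starts with a least-squares plane fit to a patch of 3-D points: the plane normal is the singular vector of the centred points with the smallest singular value. A standalone sketch of that first step, outside any SLAM pipeline:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a set of 3-D points.

    Returns (normal, d) such that normal · p + d ≈ 0 for points p on
    the plane. The normal is the right singular vector of the centred
    point cloud with the smallest singular value; (normal, d) is the
    kind of parameterization later used as a planar constraint.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                       # direction of least variance
    return normal, -normal.dot(centroid)
```

In a full system, patches would be grouped from the normal image first, and the fitted (normal, d) pairs associated across frames and fed into bundle adjustment as plane constraints.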
Multi-objective ANN-driven genetic algorithm optimization of energy efficiency measures in an NZEB multi-family house building in Greece
20
Building Energy Efficiency (建筑节能(中英文)), 2026, Issue 2, p. 62 (1 page)
The goal of the present work is to demonstrate the potential of Artificial Neural Network (ANN)-driven Genetic Algorithm (GA) methods for energy-efficiency and economic-performance optimization of energy efficiency measures in a multi-family house building in Greece. The energy efficiency measures include different heating/cooling systems (such as low-temperature and high-temperature heat pumps, natural gas boilers, and split units), building envelope components for the floor, walls, roof, and windows with variable heat transfer coefficients, and the installation of solar thermal collectors and PVs. The calculations of the building loads and of the investment, operating, and maintenance costs of the measures are based on the methodology defined in Directive 2010/31/EU, while economic assumptions are based on the EN 15459-1 standard. Typically, multi-objective optimization of energy efficiency measures requires the simulation of very large numbers of cases involving numerous possible combinations, resulting in an intense computational load. The results of the study indicate that ANN-driven GA methods can be used as an alternative, valuable tool for reliably predicting the optimal measures which minimize the primary energy consumption and life-cycle cost of the building with greatly reduced computational requirements. Through GA methods, the computational time needed to obtain the optimal solutions is reduced by 96.4%-96.8%.
Keywords: energy efficiency measures; gas boilers; split units; building envelope components; economic performance; artificial neural network; multi-objective optimization; ANN-driven GA methods
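The computational saving comes from replacing the expensive building simulation with a cheap trained surrogate inside the GA loop. The schematic sketch below uses a toy quadratic `surrogate` as a stand-in for a trained ANN cost predictor; the GA itself (tournament selection, blend crossover, small Gaussian mutation, elitist survival) is generic, not the paper's specific implementation.

```python
import random

def surrogate(x):
    """Stand-in for a trained ANN predicting building life-cycle cost.
    A toy quadratic with its minimum at x_i = 0.3 for every variable."""
    return sum((xi - 0.3) ** 2 for xi in x)

def ga_minimize(fitness, dim, pop=20, gens=40, seed=0):
    """Tiny real-coded GA over [0, 1]^dim minimizing `fitness`.

    Because `fitness` is the surrogate rather than a full simulation,
    thousands of evaluations cost almost nothing, which is the source
    of the reported speed-up.
    """
    rng = random.Random(seed)
    P = [[rng.random() for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            # Tournament selection of two parents, blend crossover, mutation.
            a = min(rng.sample(P, 3), key=fitness)
            b = min(rng.sample(P, 3), key=fitness)
            w = rng.random()
            child = [max(0.0, min(1.0, w * ai + (1 - w) * bi + rng.gauss(0, 0.02)))
                     for ai, bi in zip(a, b)]
            nxt.append(child)
        P = sorted(P + nxt, key=fitness)[:pop]  # elitist survival
    return min(P, key=fitness)

best = ga_minimize(surrogate, dim=3)
```

In the real workflow, each decision variable would encode a discrete measure (heating system, envelope U-value, collector area), and only the few GA-optimal candidates would be re-checked with the full load calculation.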