Journal Articles
350 articles found
1. AdvYOLO: An Improved Cross-Conv-Block Feature Fusion-Based YOLO Network for Transferable Adversarial Attacks on ORSIs Object Detection
Authors: Leyu Dai, Jindong Wang, Ming Zhou, Song Guo, Hengwei Zhang. Computers, Materials & Continua, 2026(4): 767-792 (26 pages).
In recent years, with the rapid advancement of artificial intelligence, object detection algorithms have made significant strides in accuracy and computational efficiency. Notably, research and applications of Anchor-Free models have opened new avenues for real-time target detection in optical remote sensing images (ORSIs). However, in the realm of adversarial attacks, developing adversarial techniques tailored to Anchor-Free models remains challenging. Adversarial examples generated with Anchor-Based models often exhibit poor transferability to these new model architectures, and the growing diversity of Anchor-Free models poses additional hurdles to achieving robust transferability of adversarial attacks. This study presents an improved cross-conv-block feature fusion You Only Look Once (YOLO) architecture, engineered to facilitate the extraction of more comprehensive semantic features during the backpropagation process. To address the asymmetry between densely distributed objects in ORSIs and the corresponding detector outputs, a novel dense bounding box attack strategy is proposed, which leverages a dense target bounding box loss in the calculation of adversarial loss functions. Furthermore, by integrating translation-invariant (TI) and momentum-iteration (MI) adversarial methodologies, the proposed framework significantly improves the transferability of adversarial attacks. Experimental results demonstrate that our method achieves superior adversarial attack performance, with adversarial transferability rates (ATR) of 67.53% on the NWPU VHR-10 dataset and 90.71% on the HRSC2016 dataset. Compared to ensemble and cascaded adversarial attack approaches, our method generates adversarial examples in an average of 0.64 s, an approximately 14.5% improvement in efficiency under equivalent conditions.
Keywords: remote sensing object detection; transferable adversarial attack; feature fusion; cross-conv-block
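The abstract above combines the momentum-iteration (MI) and translation-invariant (TI) adversarial techniques. As a rough numpy sketch of those two generic ideas only (not the paper's AdvYOLO implementation), the loop below accumulates a normalized-gradient momentum and smooths each gradient with a Gaussian kernel before the sign step; `grad_fn`, the kernel size, and every hyperparameter here are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """2D Gaussian kernel used by the TI method to smooth gradients."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def smooth(grad, kernel):
    """Convolve a 2D gradient map with the kernel (same padding)."""
    h, w = grad.shape
    pad = kernel.shape[0] // 2
    g = np.pad(grad, pad, mode="edge")
    out = np.empty_like(grad)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(g[i:i + kernel.shape[0],
                                 j:j + kernel.shape[1]] * kernel)
    return out

def ti_mi_attack(x, grad_fn, eps=0.05, steps=10, mu=1.0):
    """Momentum-iterative attack with TI gradient smoothing.
    grad_fn(x) returns dLoss/dx; we ascend the loss inside an L_inf ball."""
    alpha = eps / steps                 # per-step perturbation budget
    kernel = gaussian_kernel()
    g = np.zeros_like(x)                # accumulated momentum
    x_adv = x.copy()
    for _ in range(steps):
        grad = smooth(grad_fn(x_adv), kernel)             # TI smoothing
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # MI accumulation
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv
```

With a toy quadratic loss the perturbation stays inside the eps ball while the loss increases, which is all the sketch is meant to show.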
2. Research on Camouflage Target Detection Method Based on Edge Guidance and Multi-Scale Feature Fusion
Authors: Tianze Yu, Jianxun Zhang, Hongji Chen. Computers, Materials & Continua, 2026(4): 1676-1697 (22 pages).
Camouflaged Object Detection (COD) aims to identify objects that share highly similar patterns, such as texture, intensity, and color, with their surrounding environment. Due to their intrinsic resemblance to the background, camouflaged objects often exhibit vague boundaries and varying scales, making it challenging to accurately locate targets and delineate their indistinct edges. To address this, we propose a novel camouflaged object detection network called the Edge-Guided and Multi-scale Fusion Network (EGMFNet), which leverages edge-guided multi-scale integration for enhanced performance. The model incorporates two innovative components: a Multi-scale Fusion Module (MSFM) and an Edge-Guided Attention Module (EGA). These designs exploit multi-scale features to uncover subtle cues between candidate objects and the background while emphasizing camouflaged object boundaries. Moreover, recognizing the rich contextual information in fused features, we introduce a Dual-Branch Global Context Module (DGCM) to refine features using extensive global context, thereby generating more informative representations. Experimental results on four benchmark datasets demonstrate that EGMFNet outperforms state-of-the-art methods across five evaluation metrics. Specifically, on COD10K, our EGMFNet-P improves F_β by 4.8 points and reduces mean absolute error (MAE) by 0.006 compared with ZoomNeXt; on NC4K, it achieves a 3.6-point increase in F_β; on CAMO and CHAMELEON, it obtains 4.5-point increases in F_β. These consistent gains substantiate the superiority and robustness of EGMFNet.
Keywords: camouflaged object detection; multi-scale feature fusion; edge-guided; image segmentation
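EGMFNet's EGA module is a learned component, but the kind of low-level edge prior an edge-guided attention can exploit is easy to illustrate. The sketch below (a hand-rolled Sobel magnitude plus a simple gating step, purely illustrative and not the paper's module) shows how an edge map can re-weight a feature map toward boundaries:

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude: a classic low-level edge prior."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = p[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

def edge_gate(feature, edge_map):
    """Edge-guided gating: boost feature responses near strong edges."""
    w = edge_map / (edge_map.max() + 1e-12)   # normalize edge strength
    return feature * (1.0 + w)
```

On a vertical step image the magnitude peaks on the columns adjacent to the step and is zero in flat regions, which is the cue an edge-guided module would amplify.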
3. An attention mechanism-based multi-domain feature fusion approach for active sonar target recognition
Authors: Tongjing SUN, Haoran XU, Shishuo REN, Denghui ZHANG. ENGINEERING Information Technology & Electronic Engineering, 2026(2): 83-94 (12 pages).
Due to the complex and changeable marine environment, active sonar target recognition has always been a difficult problem in the field of underwater acoustics. Deep learning-based fusion recognition technology provides an effective way to solve this problem, but relying on simple concatenation strategies to fuse multi-domain features can cause information redundancy, and it is not easy to effectively mine correlation information between domains. Therefore, this paper proposes an attention mechanism-based multi-domain feature fusion approach for active sonar target recognition. By preprocessing active sonar echo signals and constructing a multi-domain feature extraction and fusion network, this method uses a one-dimensional convolutional neural network with long short-term memory (1DCNN-LSTM) and a two-dimensional convolutional neural network (2DCNN) with channel attention to extract deep features from different domains. Subsequently, combining feature concatenation with multi-domain cross-attention, intra- and cross-domain feature fusion is performed, which effectively eliminates redundant information and promotes inter-domain information interaction while maximizing the retention of target features. Experimental results show that, compared with single-domain methods, the network using an attention mechanism for multi-domain feature fusion strengthens cross-domain information interaction and significantly improves feature representation capability. Compared with other methods, the proposed method has obvious performance advantages and maintains stable generalization ability in scenarios with low signal-clutter ratios.
Keywords: acoustic target recognition; neural network; attention mechanism; multi-domain feature fusion
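The "multi-domain cross-attention" the abstract describes can be sketched with the standard single-head attention formula, where features from one domain query features from another so the fused output keeps domain A's positions but mixes in correlated information from domain B. This is a generic sketch, not the paper's network; the projection matrices and dimensions are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(fa, fb, wq, wk, wv):
    """Single-head cross-domain attention.
    fa: (na, d_in) domain-A features (queries)
    fb: (nb, d_in) domain-B features (keys/values)
    Returns (na, d) fused features aligned with domain A."""
    q = fa @ wq                                      # (na, d)
    k = fb @ wk                                      # (nb, d)
    v = fb @ wv                                      # (nb, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))   # (na, nb), rows sum to 1
    return attn @ v
```

Each output row is a convex combination of domain-B value vectors, which is what lets the fusion step pull in inter-domain correlations instead of simply concatenating.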
4. Attention Mechanisms and FFM Feature Fusion Module-Based Modification of the Deep Neural Network for Detection of Structural Cracks
Authors: Tao Jin, Zhekun Shou, Hongchao Liu, Yuchun Shao. Computer Modeling in Engineering & Sciences, 2026(2): 345-366 (22 pages).
This research centers on structural health monitoring of bridges, a critical transportation infrastructure. Owing to the cumulative action of heavy vehicle loads, environmental variations, and material aging, bridge components are prone to cracks and other defects, severely compromising structural safety and service life. Traditional inspection methods relying on manual visual assessment or vehicle-mounted sensors suffer from low efficiency, strong subjectivity, and high costs, while conventional image processing techniques and early deep learning models (e.g., UNet, Faster R-CNN) still perform inadequately in complex environments (e.g., varying illumination, noise, false cracks) due to poor perception of fine cracks and multi-scale features, limiting practical application. To address these challenges, this paper proposes CACNN-Net (CBAM-Augmented CNN), a novel dual-encoder architecture that couples a CNN for local detail extraction with a CBAM-Transformer for global context modeling. A key contribution is the dedicated Feature Fusion Module (FFM), which strategically integrates multi-scale features and focuses attention on crack regions while suppressing irrelevant noise. Experiments on bridge crack datasets demonstrate that CACNN-Net achieves a precision of 77.6%, a recall of 79.4%, and an mIoU of 62.7%. These results significantly outperform several typical models (e.g., UNet-ResNet34, Deeplabv3), confirming the method's superior accuracy and robust generalization. The approach provides a high-precision automated solution for bridge crack detection and a novel network design paradigm for structural surface defect identification in complex scenarios; future research may integrate physical features such as depth information to advance intelligent infrastructure maintenance and digital twin management.
Keywords: bridge crack diseases; structural health monitoring; convolutional neural network; feature fusion
5. Federated Semi-Supervised Learning Based on Feature Space Fusion
Authors: Zhe Ding, Hao Yi, Wenrui Xie, Ming Zhang, Yuxuan Xiao, Qixu Wang, Qing Chen, Zhiguang Qin, Dajiang Chen. Computers, Materials & Continua, 2026(5): 2062-2076 (15 pages).
Federated semi-supervised learning (FSSL) has garnered substantial attention for enabling collaborative global model training across multiple clients to address the scarcity of labeled data and to preserve data privacy. However, FSSL faces formidable challenges stemming from cross-client data heterogeneity, as existing methods fail to achieve effective fusion of feature subspaces across distinct clients. To address this issue, we propose a novel FSSL framework, named FedSPQR, which is explicitly tailored for the label-at-server scenario. On the server side, FedSPQR adopts a subspace clustering and fusion method based on the Grassmann manifold to construct a unified global feature space, which is further leveraged to refine the global model. On the client side, the pre-established global feature space acts as a benchmark for aligning the local feature subspaces. Based on the aligned local feature subspaces, integrating self-supervised learning with knowledge distillation facilitates effective local learning to alleviate the local bias caused by data heterogeneity. Extensive experiments on two standard public benchmarks confirm that FedSPQR outperforms state-of-the-art (SOTA) baselines by a significant margin.
Keywords: federated semi-supervised learning; feature space fusion; knowledge distillation
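The abstract mentions subspace fusion "based on the Grassmann manifold" without detail. The standard building block such methods rest on is the principal-angle distance between two feature subspaces, computed from the SVD of the product of their orthonormal bases; the sketch below shows only that building block, not FedSPQR's clustering or fusion procedure:

```python
import numpy as np

def orthonormal_basis(x):
    """Column-orthonormal basis of span(x) via QR decomposition."""
    q, _ = np.linalg.qr(x)
    return q

def principal_angles(a, b):
    """Principal angles between subspaces span(a) and span(b).
    Singular values of Qa^T Qb are the cosines of the angles."""
    qa, qb = orthonormal_basis(a), orthonormal_basis(b)
    s = np.linalg.svd(qa.T @ qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

def grassmann_distance(a, b):
    """Geodesic distance on the Grassmann manifold: norm of the angles."""
    return np.linalg.norm(principal_angles(a, b))
```

Identical subspaces are at distance zero, and two mutually orthogonal planes in R^4 sit at the maximal distance sqrt(2)*pi/2; a fusion method can use this metric to cluster and average client subspaces.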
6. A Lightweight Multiscale Feature Fusion Network for Solar Cell Defect Detection
Authors: Xiaoyun Chen, Lanyao Zhang, Xiaoling Chen, Yigang Cen, Linna Zhang, Fugui Zhang. Computers, Materials & Continua (SCIE, EI), 2025(1): 521-542 (22 pages).
Solar cell defect detection is crucial for quality inspection of photovoltaic power generation modules. In the production process, defect samples occur infrequently and exhibit random shapes and sizes, which makes it challenging to collect defective samples. Additionally, the complex surface background of polysilicon cell wafers complicates the accurate identification and localization of defective regions. This paper proposes a novel Lightweight Multiscale Feature Fusion network (LMFF) to address these challenges. The network comprises a feature extraction network, a multi-scale feature fusion module (MFF), and a segmentation network. Specifically, the feature extraction network produces multi-scale feature outputs, and the MFF module fuses this multi-scale feature information effectively. To capture finer-grained multi-scale information from the fused features, we propose a multi-scale attention module (MSA) in the segmentation network to enhance the network's ability to detect small targets. Moreover, depthwise separable convolutions are introduced to construct depthwise separable residual blocks (DSR) that reduce the model's parameter count. Finally, to validate the proposed method's defect segmentation and localization performance, we constructed three solar cell defect detection datasets: SolarCells, SolarCells-S, and PVEL-S. SolarCells and SolarCells-S are monocrystalline silicon datasets, and PVEL-S is a polycrystalline silicon dataset. Experimental results show that the IoU of our method on these three datasets reaches 68.5%, 51.0%, and 92.7%, respectively, and the F1-Score reaches 81.3%, 67.5%, and 96.2%, respectively, surpassing other commonly used methods and verifying the effectiveness of our LMFF network.
Keywords: defect segmentation; multi-scale feature fusion; multi-scale attention; depthwise separable residual block
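The parameter savings behind depthwise separable convolutions (the abstract's DSR blocks) are easy to make concrete: a standard k x k conv costs C_in * C_out * k^2 weights, while the depthwise-plus-pointwise factorization costs C_in * k^2 + C_in * C_out. The sketch below (a generic illustration, not the paper's block) implements both the operation and the count:

```python
import numpy as np

def depthwise_separable(x, dw, pw):
    """Depthwise separable convolution (valid padding, stride 1):
    a per-channel k x k depthwise pass, then a 1x1 pointwise mix.
    x: (C, H, W); dw: (C, k, k); pw: (C_out, C)."""
    c, h, w = x.shape
    k = dw.shape[1]
    oh, ow = h - k + 1, w - k + 1
    mid = np.zeros((c, oh, ow))
    for ch in range(c):                       # depthwise: channels independent
        for i in range(oh):
            for j in range(ow):
                mid[ch, i, j] = (x[ch, i:i + k, j:j + k] * dw[ch]).sum()
    return np.einsum("oc,chw->ohw", pw, mid)  # pointwise 1x1 channel mixing

def param_counts(c_in, c_out, k):
    """Weight counts: standard conv vs depthwise separable factorization."""
    standard = c_in * c_out * k * k
    separable = c_in * k * k + c_in * c_out
    return standard, separable
```

For a typical 64-to-128-channel 3x3 layer the factorization shrinks the weight count from 73,728 to 8,768, roughly an 8.4x reduction.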
7. Multi-scale feature fusion optical remote sensing target detection method (Cited by 1)
Authors: BAI Liang, DING Xuewen, LIU Ying, CHANG Limei. Optoelectronics Letters, 2025(4): 226-233 (8 pages).
An improved model based on You Only Look Once version 8 (YOLOv8) is proposed to solve the problem of low detection accuracy caused by the diversity of object sizes in optical remote sensing images. Firstly, the feature pyramid network (FPN) structure of the original YOLOv8 model is replaced by the generalized-FPN (GFPN) structure from GiraffeDet to realize "cross-layer" and "cross-scale" adaptive feature fusion, enriching the semantic and spatial information in the feature maps and improving the model's target detection ability. Secondly, a multi atrous spatial pyramid pooling (MASPP) module is designed using the ideas of atrous convolution and feature pyramid structures to extract multi-scale features, improving the model's ability to handle multi-scale objects. The experimental results show that the detection accuracy of the improved YOLOv8 model on the DIOR dataset is 92% and the mean average precision (mAP) is 87.9%, respectively 3.5% and 1.7% higher than those of the original model. This demonstrates that the proposed model's ability to detect and classify multi-scale optical remote sensing targets has been improved.
Keywords: multi-scale feature fusion; optical remote sensing; feature map; target detection
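The core trick behind atrous (dilated) convolution, which the MASPP module above builds on, is that spacing the kernel taps `dilation` apart enlarges the receptive field to (k-1)*dilation + 1 inputs without adding any parameters. A minimal 1D illustration (generic, not the paper's module):

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """1D dilated (atrous) convolution, valid padding: kernel taps are
    spaced `dilation` samples apart, widening coverage at zero extra cost."""
    k = len(w)
    span = (k - 1) * dilation + 1          # inputs covered per output
    n = len(x) - span + 1
    return np.array([sum(w[t] * x[i + t * dilation] for t in range(k))
                     for i in range(n)])

def receptive_field(k, dilation):
    """Receptive field of one dilated layer with kernel size k."""
    return (k - 1) * dilation + 1
```

Stacking branches with dilations 1, 2, and 4 (as spatial-pyramid designs do) gives parallel views at receptive fields 3, 5, and 9 from the same 3-tap kernel, which is how a pyramid-pool module collects multi-scale context cheaply.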
8. Multi-Scale Feature Fusion and Advanced Representation Learning for Multi-Label Image Classification
Authors: Naikang Zhong, Xiao Lin, Wen Du, Jin Shi. Computers, Materials & Continua, 2025(3): 5285-5306 (22 pages).
Multi-label image classification is a challenging task due to the diverse sizes and complex backgrounds of objects in images. Obtaining class-specific precise representations at different scales is a key aspect of feature representation. However, existing methods often rely on single-scale deep features, neglecting shallow and deeper layer features, which poses challenges when predicting objects of varying scales within the same image. Although some studies have explored multi-scale features, they rarely address the flow of information between scales or efficiently obtain class-specific precise representations for features at different scales. To address these issues, we propose a two-stage, three-branch Transformer-based framework. The first stage incorporates multi-scale image feature extraction and hierarchical scale attention. This design enables the model to consider objects at various scales while enhancing the flow of information across different feature scales, improving the model's generalization to diverse object scales. The second stage includes a global feature enhancement module and a region selection module. The global feature enhancement module strengthens interconnections between different image regions, mitigating the issue of incomplete representations, while the region selection module models the cross-modal relationships between image features and labels. Together, these components enable the efficient acquisition of class-specific precise feature representations. Extensive experiments on public datasets, including COCO2014, VOC2007, and VOC2012, demonstrate the effectiveness of our proposed method. Our approach achieves consistent performance gains of 0.3%, 0.4%, and 0.2% over state-of-the-art methods on the three datasets, respectively. These results validate the reliability and superiority of our approach for multi-label image classification.
Keywords: image classification; multi-label; multi-scale; attention mechanisms; feature fusion
9. Detection of Abnormal Cardiac Rhythms Using Feature Fusion Technique with Heart Sound Spectrograms
Authors: Saif Ur Rehman Khan, Zia Khan. Journal of Bionic Engineering, 2025(4): 2030-2049 (20 pages).
A heart attack disrupts the normal flow of blood to the heart muscle, potentially causing severe damage or death if not treated promptly. It can lead to long-term health complications, reduce quality of life, and significantly impact daily activities and overall well-being. Despite the growing popularity of deep learning, several drawbacks persist, such as model complexity and the limitations of single-model learning. In this paper, we introduce a residual learning-based feature fusion technique to achieve high accuracy in differentiating abnormal cardiac rhythms in heart sounds. Combining MobileNet with DenseNet201 for feature fusion leverages MobileNet's lightweight, efficient architecture and DenseNet201's dense connections, resulting in enhanced feature extraction and improved model performance at reduced computational cost. To further enhance the fusion, we employed residual learning to optimize the hierarchical features of abnormal heart sounds during training. The experimental results demonstrate that the proposed fusion method achieved an accuracy of 95.67% on the benchmark PhysioNet-2016 spectrogram dataset. To further validate the performance, we applied it to the BreakHis dataset at a magnification level of 100X. The results indicate that the model maintains robust performance on the second dataset, achieving an accuracy of 96.55%, highlighting its consistent performance and making it suitable for various applications.
Keywords: cardiac rhythms; feature fusion; residual learning; BreakHis; spectrogram; sound
10. Oversampling-Enhanced Feature Fusion-Based Hybrid ViT-1DCNN Model for Ransomware Cyber Attack Detection
Authors: Muhammad Armghan Latif, Zohaib Mushtaq, Saifur Rahman, Saad Arif, Salim Nasar Faraj Mursal, Muhammad Irfan, Haris Aziz. Computer Modeling in Engineering & Sciences, 2025(2): 1667-1695 (29 pages).
Ransomware attacks pose a significant threat to critical infrastructures, demanding robust detection mechanisms. This study introduces a hybrid model that combines vision transformer (ViT) and one-dimensional convolutional neural network (1DCNN) architectures to enhance ransomware detection capabilities. Addressing common challenges in ransomware detection, particularly dataset class imbalance, the synthetic minority oversampling technique (SMOTE) is employed to generate synthetic samples for the minority class, thereby improving detection accuracy. The integration of ViT and 1DCNN through feature fusion enables the model to capture both global contextual and local sequential features, resulting in comprehensive ransomware classification. Tested on the UNSW-NB15 dataset, the proposed ViT-1DCNN model achieved 98% detection accuracy, with precision, recall, and F1-score metrics surpassing conventional methods. This approach not only reduces false positives and negatives but also offers scalability and robustness for real-world cybersecurity applications. The results demonstrate the model's potential as an effective tool for proactive ransomware detection, especially in environments where evolving threats require adaptable and high-accuracy solutions.
Keywords: ransomware attacks; cybersecurity; vision transformer; convolutional neural network; feature fusion; encryption; threat detection
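SMOTE, which the abstract uses to fix class imbalance, generates each synthetic point as a random interpolation between a minority sample and one of its k nearest minority neighbors, so new points stay inside the minority region rather than duplicating existing samples. A small self-contained sketch of that rule (illustrative, not the imbalanced-learn implementation the paper may have used):

```python
import numpy as np

def smote_samples(minority, n_new, k=3, rng=None):
    """SMOTE-style oversampling: synthetic point = x + lam * (neighbor - x)
    for a random minority sample x, random neighbor, and lam in [0, 1)."""
    rng = rng or np.random.default_rng(0)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        d = np.linalg.norm(minority - x, axis=1)
        nbrs = np.argsort(d)[1:k + 1]          # skip the point itself (d = 0)
        neighbor = minority[rng.choice(nbrs)]
        lam = rng.random()
        out.append(x + lam * (neighbor - x))   # convex combination
    return np.array(out)
```

Because every synthetic point is a convex combination of two real minority samples, the augmented set never leaves the minority class's convex hull, which is what makes the technique safer than random noise injection.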
11. An Ochotona Curzoniae Object Detection Model Based on Feature Fusion with SCConv Attention Mechanism
Authors: Haiyan Chen, Rong Li. Computers, Materials & Continua, 2025(9): 5693-5712 (20 pages).
The detection of Ochotona Curzoniae serves as a fundamental component for estimating the population size of this species and for analyzing the dynamics of its population fluctuations. In natural environments, the pixels representing Ochotona Curzoniae constitute a small fraction of the total pixels, and their distinguishing features are often subtle, complicating the target detection process. To effectively extract the characteristics of these small targets, a feature fusion approach that utilizes up-sampling and channel integration from various layers within a CNN can significantly enhance the representation of target features, ultimately improving detection accuracy. However, the top-down fusion of features from different layers may lead to information duplication and semantic bias, resulting in redundancy and high-frequency noise. To address these challenges, we developed a target detection model for Ochotona Curzoniae based on a spatial-channel reconstruction convolution (SCConv) attention mechanism and feature fusion (FFBCA), integrated with the Faster R-CNN framework. It consists of a feature extraction network, an attention mechanism-based feature fusion module, and a jump residual connection fusion module. Initially, we designed a dual attention mechanism feature fusion module that employs spatial-channel reconstruction convolution. In the spatial dimension, the attention mechanism adopts a separation-reconstruction approach, calculating a weight matrix for the spatial information within the feature map through group normalization. This directs the model to concentrate on feature information assigned varying weights, thereby reducing redundancy during feature fusion. In the channel dimension, the attention mechanism utilizes a partition-transpose-fusion method, segmenting the input feature map into high-noise and low-noise components based on the variance of the feature information. The high-noise segment is processed through a low-pass filter constructed from pointwise convolution (PWC) to eliminate some high-frequency noise, while the low-noise segment employs a bottleneck structure with global average pooling (GAP) to generate a weight matrix that emphasizes the significance of channel-dimension feature information. This approach diminishes the model's focus on low-weight feature information, preserving low-frequency semantic information while reducing information redundancy. Furthermore, we developed a novel feature extraction network, ResNeXt-S, by integrating the Sim attention mechanism into ResNeXt50. This configuration assigns three-dimensional attention weights to each position within the feature map, enhancing the local feature information of small targets while reducing background noise. Finally, we constructed a jump residual connection fusion module to minimize the loss of high-level semantic information during feature fusion. Experiments on the Ochotona Curzoniae dataset show that the detection accuracy of the proposed model is 92.3%, higher than FSSD512 (84.6%), TDFSSD512 (81.3%), FPN (86.5%), FFBAM (88.5%), Faster R-CNN (89.6%), and SSD512 (88.6%).
Keywords: Ochotona curzoniae; target detection; SCConv attention; feature fusion
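The abstract's "bottleneck structure with global average pooling (GAP) to generate a weight matrix" over channels follows the familiar squeeze-and-excitation pattern: GAP squeezes each channel to a scalar, a two-layer bottleneck produces per-channel gates in (0, 1), and low-weight channels are suppressed. This is a generic sketch of that pattern with made-up weight shapes, not the paper's module:

```python
import numpy as np

def channel_attention(x, w1, w2):
    """GAP-bottleneck channel gating.
    x: (C, H, W); w1: (C//r, C) squeeze layer; w2: (C, C//r) expand layer."""
    squeeze = x.mean(axis=(1, 2))                    # GAP: one scalar per channel
    hidden = np.maximum(w1 @ squeeze, 0.0)           # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid gate in (0, 1)
    return x * gate[:, None, None]                   # re-weight channels
```

Because the gate is strictly between 0 and 1, channels judged uninformative are attenuated rather than zeroed, which keeps low-frequency semantic content while cutting redundancy.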
12. BAHGRF^(3): Human gait recognition in the indoor environment using deep learning features fusion assisted framework and posterior probability moth flame optimisation
Authors: Muhammad Abrar Ahmad Khan, Muhammad Attique Khan, Ateeq Ur Rehman, Ahmed Ibrahim Alzahrani, Nasser Alalwan, Deepak Gupta, Saima Ahmed Rahin, Yudong Zhang. CAAI Transactions on Intelligence Technology, 2025(2): 387-401 (15 pages).
Biometric characteristics have played a vital role in security for the last few years. Human gait classification in video sequences is an important biometric attribute used for security purposes. A new framework for human gait classification in video sequences using deep learning (DL) fusion and posterior probability-based moth flame optimization (MFO) is proposed. In the first step, the video frames are resized and fine-tuned by two pre-trained lightweight DL models, EfficientNetB0 and MobileNetV2, both selected for their top-5 accuracy and small parameter counts. Later, both models are trained through deep transfer learning, and the extracted deep features are fused using a voting scheme. In the last step, the authors develop a posterior probability-based MFO feature selection algorithm to select the best features, which are then classified using several supervised learning methods. The publicly available CASIA-B dataset has been employed for the experimental process. On this dataset, the authors selected six angles, 0°, 18°, 90°, 108°, 162°, and 180°, and obtained average accuracies of 96.9%, 95.7%, 86.8%, 90.0%, 95.1%, and 99.7%, respectively. Results demonstrate improved accuracy and significantly reduced computational time compared with recent state-of-the-art techniques.
Keywords: deep learning; feature fusion; feature optimization; gait classification; indoor environment; machine learning
13. LViT-Net: a domain generalization person re-identification model combining local semantics and multi-feature cross fusion
Authors: Xintong Hu, Peishun Liu, Xuefang Wang, Peiyao Wu, Ruichun Tang. Visual Computing for Industry, Biomedicine, and Art, 2025(1): 162-176 (15 pages).
In the task of domain generalization person re-identification (ReID), pedestrian image features exhibit significant intra-class variability and inter-class similarity. Existing methods rely on a single feature extraction architecture and struggle to capture both global context and local spatial information, resulting in weaker generalization to unseen domains. To address this issue, an innovative domain generalization person ReID method, LViT-Net, which combines local semantics and multi-feature cross fusion, is proposed. LViT-Net adopts a dual-branch encoder with a parallel hierarchical structure to extract both local and global discriminative features. In the local branch, the local multi-scale feature fusion module is designed to fuse local feature units at different scales to ensure that fine-grained local features at various levels are accurately captured, thereby enhancing the robustness of the features. In the global branch, the dual feature cross fusion module fuses local features and global semantic information, focusing on critical semantic information and enabling the mutual refinement and matching of local and global features. This allows the model to achieve a dynamic balance between detailed and holistic information, forming robust feature representations of pedestrians. Extensive experiments demonstrate the effectiveness of LViT-Net. In both single-source and multi-source comparison experiments, the proposed method outperforms existing state-of-the-art methods.
Keywords: domain generalization; person re-identification; feature fusion; semantic representation; dual-branch network architecture
14. Self-attention and convolutional feature fusion for real-time intelligent fault detection of high-speed railway pantographs
Authors: Xufeng LI, Jien MA, Ping TAN, Lanfen LIN, Lin QIU, Youtong FANG. Journal of Zhejiang University-Science A (Applied Physics & Engineering), 2025(10): 997-1009 (13 pages).
Currently, most trains are equipped with dedicated cameras for capturing pantograph videos. Pantographs are core to the high-speed-railway pantograph-catenary system, and their failure directly affects the normal operation of high-speed trains. However, given the complex and variable real-world operational conditions of high-speed railways, there is no real-time and robust pantograph fault-detection method capable of handling large volumes of surveillance video. Hence, it is of paramount importance to maintain real-time monitoring and analysis of pantographs. Our study presents a real-time intelligent detection technology for identifying faults in high-speed railway pantographs, utilizing a fusion of self-attention and convolution features. We investigated lightweight multi-scale feature-extraction and fault-detection models based on deep learning to detect pantograph anomalies. Compared with traditional methods, this approach achieves high recall and accuracy in pantograph recognition, accurately pinpointing issues such as discharge sparks, pantograph horns, and carbon pantograph-slide malfunctions. After experimentation and validation with actual surveillance videos of electric multiple-unit trains, our algorithmic model demonstrates real-time, high-accuracy performance even under complex operational conditions.
Keywords: high-speed railway pantograph; self-attention; convolutional neural network (CNN); real-time; feature fusion; fault detection
15. LR-Net: Lossless Feature Fusion and Revised SIoU for Small Object Detection
Authors: Gang Li, Ru Wang, Yang Zhang, Chuanyun Xu, Xinyu Fan, Zheng Zhou, Pengfei Lv, Zihan Ruan. Computers, Materials & Continua, 2025(11): 3267-3288 (22 pages).
Currently, challenges such as small object size and occlusion lead to a lack of accuracy and robustness in small object detection. Since small objects occupy only a few pixels in an image, the extracted features are limited, and mainstream downsampling convolution operations further exacerbate feature loss. Additionally, due to the occlusion-prone nature of small objects and their higher sensitivity to localization deviations, conventional Intersection over Union (IoU) loss functions struggle to achieve stable convergence. To address these limitations, LR-Net is proposed for small object detection. Specifically, the proposed Lossless Feature Fusion (LFF) method transfers spatial features into the channel domain while leveraging a hybrid attention mechanism to focus on critical features, mitigating the feature loss caused by downsampling. Furthermore, RSIoU is proposed to enhance the convergence performance of IoU-based losses for small objects. RSIoU corrects the inherent convergence-direction issues in SIoU and introduces a penalty term as a Dynamic Focusing Mechanism parameter, enabling it to dynamically emphasize the loss contribution of small object samples. Ultimately, RSIoU significantly improves the convergence performance of the loss function for small objects, particularly under occlusion scenarios. Experiments demonstrate that LR-Net achieves significant improvements across various metrics on multiple datasets compared with YOLOv8n, achieving a 3.7% increase in mean Average Precision (AP) on the VisDrone2019 dataset, along with improvements of 3.3% on the AI-TOD dataset and 1.2% on the COCO dataset.
Keywords: small object detection; lossless feature fusion; attention mechanisms; loss function; penalty term
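The LFF module itself is not specified in this abstract; the usual way to move spatial features into the channel domain without discarding information is a space-to-depth rearrangement, which can be sketched as follows (a minimal illustration, not the paper's implementation):

```python
import numpy as np

def space_to_depth(x, block=2):
    """Rearrange spatial blocks into channels:
    (C, H, W) -> (C * block**2, H/block, W/block).
    Unlike strided convolution, no pixel is discarded."""
    c, h, w = x.shape
    assert h % block == 0 and w % block == 0
    x = x.reshape(c, h // block, block, w // block, block)
    x = x.transpose(0, 2, 4, 1, 3)  # (C, block, block, H/block, W/block)
    return x.reshape(c * block * block, h // block, w // block)

feat = np.arange(2 * 4 * 4, dtype=np.float32).reshape(2, 4, 4)
out = space_to_depth(feat)
# Shape halves spatially while channels grow 4x: (2, 4, 4) -> (8, 2, 2).
# Losslessness: the output is a pure permutation of the input values.
```

In LR-Net this rearrangement would be followed by the hybrid attention mechanism; here only the lossless step is shown.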
Low-Light Image Enhancement Based on Wavelet Local and Global Feature Fusion Network
16
Authors: Shun Song, Xiangqian Jiang, Dawei Zhao. Journal of Contemporary Educational Research, 2025, Issue 11, pp. 209-214 (6 pages)
A wavelet-based local and global feature fusion network (LAGN) is proposed for low-light image enhancement, aiming to enhance image details and restore colors in dark areas. The study addresses three key issues in low-light image enhancement: enhancing low-light images with LAGN to preserve image details and colors; extracting image edge information via the wavelet transform to enhance image details; and extracting local and global features of images through convolutional neural networks and a Transformer to improve image contrast. Comparisons with state-of-the-art methods on two datasets verify that LAGN achieves the best performance in terms of detail, brightness, and contrast.
Keywords: image enhancement; feature fusion; wavelet transform; Convolutional Neural Network (CNN); Transformer
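The abstract does not say which wavelet LAGN uses; a single-level 2-D Haar transform is the simplest example of how a wavelet decomposition separates edge information into high-frequency sub-bands (an illustrative sketch, not the paper's network):

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar wavelet transform on a grayscale image.
    Returns (LL, LH, HL, HH): LL is the low-pass approximation;
    LH/HL/HH carry horizontal/vertical/diagonal edge detail."""
    a = img[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 4.0
    lh = (a + b - c - d) / 4.0  # horizontal detail
    hl = (a - b + c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh

# A vertical step edge appears only in the vertical-detail band HL.
img = np.zeros((4, 4))
img[:, 1:] = 1.0
ll, lh, hl, hh = haar2d(img)
```

In an enhancement network, the detail sub-bands would be boosted or fused with learned features before the inverse transform.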
Attention Shift-Invariant Cross-Evolutionary Feature Fusion Network for Infrared Small Target Detection
17
Authors: Siqi Zhang, Shengda Pan. Computers, Materials & Continua, 2025, Issue 9, pp. 4655-4676 (22 pages)
Infrared images typically exhibit diverse backgrounds, each potentially containing noise and target-like interference elements. In complex backgrounds, infrared small targets are prone to being submerged by background noise due to their low pixel proportion and limited available features, leading to detection failure. To address this problem, this paper proposes an Attention Shift-Invariant Cross-Evolutionary Feature Fusion Network (ASCFNet) tailored for the detection of weak and small infrared targets. The network first introduces a Multidimensional Lightweight Pixel-level Attention Module (MLPA), which alleviates the suppression of small-target features during deep network propagation by combining channel reshaping, multi-scale parallel subnet architectures, and local cross-channel interactions. A Multidimensional Shift-Invariant Recall Module (MSIR) is then designed to keep the network unaffected by minor input perturbations when processing infrared images, by focusing on the model's shift invariance. Subsequently, a Cross-Evolutionary Feature Fusion structure (CEFF) enables flexible and efficient integration of multidimensional feature information from different network hierarchies, achieving complementarity and enhancement among features. Experimental results on three public datasets, SIRST, NUDT-SIRST, and IRST640, demonstrate that the proposed network outperforms advanced algorithms in the field. Specifically, on the NUDT-SIRST dataset, the mAP50, mAP50-95, and metrics reached 99.26%, 85.22%, and 99.31%, respectively. Visual evaluations of detection results in diverse scenarios indicate an increased detection rate and a reduced false-alarm rate. The method balances accuracy and real-time performance, achieving efficient and stable detection of weak and small infrared targets.
Keywords: deep learning; infrared small target detection; complex scenes; feature fusion; convolution pooling
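The "local cross-channel interaction" that MLPA builds on is commonly realized as ECA-style channel attention: global average pooling per channel, a small 1-D convolution across neighbouring channels, then sigmoid gating. A toy sketch (with a fixed averaging kernel standing in for the learned 1-D convolution, and no claim to match the paper's module):

```python
import numpy as np

def channel_attention(feat, k=3):
    """ECA-style local cross-channel attention on a (C, H, W) feature map.
    A k-wide 1-D mixing across channels produces a per-channel gate."""
    c = feat.shape[0]
    pooled = feat.mean(axis=(1, 2))          # global average pool -> (C,)
    padded = np.pad(pooled, k // 2, mode="edge")
    kernel = np.ones(k) / k                  # fixed toy kernel; learned in practice
    mixed = np.array([padded[i:i + k] @ kernel for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-mixed))      # sigmoid, each gate in (0, 1)
    return feat * gate[:, None, None]        # reweight channels

feat = np.random.rand(4, 8, 8).astype(np.float32)
out = channel_attention(feat)
# Shape is preserved; each channel is scaled by its attention gate.
```

The interaction is "local" because each channel's gate depends only on its k nearest channels, avoiding a full fully-connected squeeze-excite step.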
A lithium-ion battery state-of-health prediction model based on physical information constraints and multimodal feature fusion
18
Authors: XU Hai-ming, YU Tian-jian, FENG En-lai, ZENG Xiao-yan, HU Yu-song, CHEN Lan. Journal of Central South University, 2025, Issue 11, pp. 4593-4612 (20 pages)
Accurate estimation of lithium battery state-of-health (SOH) is essential for ensuring safe operation and efficient utilization. To address the challenges of complex degradation factors and unreliable feature extraction, we develop a novel SOH prediction model integrating physical-information constraints and multimodal feature fusion. The approach employs multi-channel encoders to process heterogeneous data modalities, including health indicators, raw charge/discharge sequences, and incremental capacity data, yielding structured inputs. A physics-informed loss function, derived from an empirical capacity-decay equation, is incorporated to enforce interpretability, while a cross-layer attention mechanism dynamically weights features to handle missing modalities and random noise. Experimental validation on multiple battery types demonstrates that the model reduces mean absolute error (MAE) by at least 51.09% compared to unimodal baselines, maintains robustness under adverse conditions such as partial data loss, and achieves an average MAE of 0.0201 in real-world battery pack applications. The model significantly enhances prediction accuracy and generality, enabling accurate SOH prediction under actual engineering conditions.
Keywords: lithium-ion batteries; state-of-health prediction; multimodal feature fusion; physics-informed neural networks; attention mechanism
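The abstract does not give the empirical capacity-decay equation, so the general shape of a physics-informed loss can only be sketched. Below, a hypothetical power-law fade curve SOH(n) ≈ 1 − a·n^b plays the role of the physics constraint; the curve form, coefficients, and weight λ are illustrative assumptions, not the paper's:

```python
import numpy as np

def physics_informed_loss(pred_soh, true_soh, cycles, a=1e-4, b=1.2, lam=0.1):
    """Data-fit loss plus a penalty pulling predictions toward an
    assumed empirical capacity-fade curve SOH(n) = 1 - a * n**b."""
    data_loss = np.mean((pred_soh - true_soh) ** 2)       # supervised term
    physics_soh = 1.0 - a * cycles ** b                   # empirical decay curve
    physics_loss = np.mean((pred_soh - physics_soh) ** 2) # physics residual
    return data_loss + lam * physics_loss

cycles = np.arange(1.0, 101.0)
true = 1.0 - 1e-4 * cycles ** 1.2     # labels lying exactly on the curve
loss_on_curve = physics_informed_loss(true, true, cycles)
noisy_pred = true + 0.05
loss_off_curve = physics_informed_loss(noisy_pred, true, cycles)
```

Predictions that both fit the data and respect the decay trend are penalized least, which is what lends the model its interpretability.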
A Global-Local Parallel Dual-Branch Deep Learning Model with Attention-Enhanced Feature Fusion for Brain Tumor MRI Classification
19
Authors: Zhiyong Li, Xinlian Zhou. Computers, Materials & Continua, 2025, Issue 4, pp. 739-760 (22 pages)
Brain tumor classification is crucial for personalized treatment planning. Although deep learning-based Artificial Intelligence (AI) models can automatically analyze tumor images, fine details of small tumor regions may be overlooked during global feature extraction. We therefore propose a brain tumor Magnetic Resonance Imaging (MRI) classification model based on a global-local parallel dual-branch structure. The global branch employs ResNet50 with Multi-Head Self-Attention (MHSA) to capture global contextual information from whole-brain images, while the local branch uses VGG16 to extract fine-grained features from segmented tumor regions. The features from both branches are processed by a designed attention-enhanced feature fusion module that filters and integrates important features. Additionally, to address sample imbalance in the dataset, we introduce a category attention block to improve the recognition of minority classes. Experimental results indicate that the method achieved a classification accuracy of 98.04% and a micro-average Area Under the Curve (AUC) of 0.989 in classifying three types of brain tumors, surpassing several existing pre-trained Convolutional Neural Network (CNN) models. Feature-interpretability analysis further validated the effectiveness of the proposed model, suggesting significant potential for brain tumor image classification.
Keywords: deep learning; attention mechanism; feature fusion; dual-branch structure; brain tumor MRI classification
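The fusion module's internals are not detailed in the abstract; the core idea of attention-weighted fusion of two branch features can be sketched as a convex combination whose weights come from the features themselves (a toy mean-activation score replaces the learned scoring network here, purely for illustration):

```python
import numpy as np

def attention_fuse(global_feat, local_feat):
    """Attention-weighted fusion of two branch feature vectors.
    A softmax over per-branch scores decides each branch's contribution."""
    # Toy scoring stand-in for a learned MLP: mean activation per branch.
    scores = np.array([global_feat.mean(), local_feat.mean()])
    weights = np.exp(scores) / np.exp(scores).sum()   # softmax over branches
    return weights[0] * global_feat + weights[1] * local_feat

glob = np.array([1.0, 2.0, 3.0])   # e.g., ResNet50 branch output
loc = np.array([0.5, 0.5, 0.5])    # e.g., VGG16 branch output
fused = attention_fuse(glob, loc)
# Output keeps the branch dimensionality and lies between the two inputs.
```

Because the weights sum to 1, the fused vector is elementwise bounded by the two branch features, so neither branch can be silently discarded.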