Journal Articles
2,680 articles found
1. Research on Camouflage Target Detection Method Based on Edge Guidance and Multi-Scale Feature Fusion
Authors: Tianze Yu, Jianxun Zhang, Hongji Chen. Computers, Materials & Continua, 2026, No. 4, pp. 1676-1697 (22 pages)
Camouflaged Object Detection (COD) aims to identify objects that share highly similar patterns—such as texture, intensity, and color—with their surrounding environment. Due to their intrinsic resemblance to the background, camouflaged objects often exhibit vague boundaries and varying scales, making it challenging to accurately locate targets and delineate their indistinct edges. To address this, we propose a novel camouflaged object detection network called Edge-Guided and Multi-scale Fusion Network (EGMFNet), which leverages edge-guided multi-scale integration for enhanced performance. The model incorporates two innovative components: a Multi-scale Fusion Module (MSFM) and an Edge-Guided Attention Module (EGA). These designs exploit multi-scale features to uncover subtle cues between candidate objects and the background while emphasizing camouflaged object boundaries. Moreover, recognizing the rich contextual information in fused features, we introduce a Dual-Branch Global Context Module (DGCM) to refine features using extensive global context, thereby generating more informative representations. Experimental results on four benchmark datasets demonstrate that EGMFNet outperforms state-of-the-art methods across five evaluation metrics. Specifically, on COD10K, our EGMFNet-P improves F_β by 4.8 points and reduces mean absolute error (MAE) by 0.006 compared with ZoomNeXt; on NC4K, it achieves a 3.6-point increase in F_β. On CAMO and CHAMELEON, it likewise obtains 4.5-point increases in F_β. These consistent gains substantiate the superiority and robustness of EGMFNet.
Keywords: camouflaged object detection; multi-scale feature fusion; edge-guided; image segmentation
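Edge guidance of this kind typically starts from an explicit edge prior. As a hedged illustration (not the authors' EGA module), a Sobel gradient-magnitude map is the simplest such prior that a network could use to weight boundary regions:

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map from 3x3 Sobel filters.

    Returns an (H-2, W-2) map; high values mark candidate boundaries
    that an edge-guided attention module could emphasize.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            window = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(window * kx)  # horizontal gradient
            gy[i, j] = np.sum(window * ky)  # vertical gradient
    return np.hypot(gx, gy)
```

A flat region yields zero response, while an intensity step produces a strong response along its boundary, which is exactly the cue an edge-guided module amplifies.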
2. DL-YOLO: A Multi-Scale Feature Fusion Detection Algorithm for Low-Light Environments
Authors: Yuanmeng Chang, Hongmei Liu. Computers, Materials & Continua, 2026, No. 5, pp. 1901-1915 (15 pages)
Driven by rapid advances in deep learning, object detection has been widely adopted across diverse application scenarios. However, in low-light conditions, critical visual cues of target objects are severely degraded, posing a significant challenge for accurate low-light object detection. Existing methods struggle to preserve discriminative features while maintaining semantic consistency between low-light and normal-light images. For this purpose, this study proposes a DL-YOLO model specially tailored for low-light detection. To mitigate target feature attenuation introduced by repeated downsampling, we design a Multi-Scale Feature Convolution (MSF-Conv) module that captures rich, multi-level details via multi-scale feature learning, thereby reducing model complexity and computational cost. For feature fusion, we integrated the C3k2-DWR module by embedding the Dilation-wise Residual (DWR) mechanism into the 2-core optimized Cross Stage Partial (C3) framework, achieving efficient feature integration. In addition, we replace conventional localization losses with WIoU (Weighted Intersection over Union), which dynamically adjusts gradient gain according to sample quality, thereby improving localization robustness and precision. Experiments on the ExDark dataset demonstrate that DL-YOLO delivers strong low-light detection performance. The relevant code is published at https://github.com/cym0997/DL-YOLO.
Keywords: multi-scale feature extraction; object detection; low-light environments; ExDark dataset
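The WIoU idea—scaling the localization loss by how far the prediction sits from the target—can be sketched numerically. This is a hedged illustration of one published variant (an exponential penalty on the normalized center offset over the smallest enclosing box, whose denominator is treated as a constant during backpropagation in that formulation); the abstract does not specify which WIoU variant DL-YOLO adopts:

```python
import math

def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def wiou_v1(pred, gt):
    """Sketch of a distance-weighted IoU loss: the plain IoU loss is scaled
    by exp(center_distance^2 / enclosing_box_diagonal^2), so poorly located
    predictions receive a larger gradient gain."""
    px, py = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    gx, gy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    wg = max(pred[2], gt[2]) - min(pred[0], gt[0])  # enclosing box width
    hg = max(pred[3], gt[3]) - min(pred[1], gt[1])  # enclosing box height
    r = math.exp(((px - gx) ** 2 + (py - gy) ** 2) / (wg ** 2 + hg ** 2))
    return r * (1.0 - iou(pred, gt))
```

For a perfectly aligned box the loss is zero; for an offset box the exponential factor pushes the loss above the plain IoU loss, which is the "gradient gain" behavior the abstract describes.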
3. Global context-aware multi-scale feature iterative refinement for aviation-road traffic semantic segmentation
Authors: Mengyue ZHANG, Shichun YANG, Xinjie FENG, Yaoguang CAO. Chinese Journal of Aeronautics, 2026, No. 2, pp. 429-441 (13 pages)
Semantic segmentation for mixed scenes of aerial remote sensing and road traffic is one of the key technologies for the visual perception of flying cars. State-of-the-art (SOTA) semantic segmentation methods have made remarkable achievements in both fine-grained segmentation and real-time performance. However, when faced with the huge differences in scale and semantic categories brought about by mixed scenes of aerial remote sensing and road traffic, they still face great challenges, and there is little related research. To address this issue, this paper proposes a semantic segmentation model specifically for mixed datasets of aerial remote sensing and road traffic scenes. First, a novel decoding-recoding multi-scale feature iterative refinement structure is proposed, which utilizes the re-integration and continuous enhancement of multi-scale information to effectively deal with the huge scale differences between cross-domain scenes, while using a fully convolutional structure to ensure lightweight and real-time operation. Second, a well-designed cross-window attention mechanism combined with a global information integration decoding block forms enhanced global context perception, which can effectively capture the long-range dependencies and multi-scale global context information of different scenes, thereby achieving fine-grained semantic segmentation. The proposed method is tested on a large-scale mixed dataset of aerial remote sensing and road traffic scenes. The results confirm that it can effectively deal with the problem of large scale differences in cross-domain scenes, and its segmentation accuracy surpasses that of SOTA methods while meeting real-time requirements.
Keywords: aviation-road traffic; flying cars; global context-aware multi-scale feature iterative refinement; semantic segmentation
4. Cascading Class Activation Mapping: A Counterfactual Reasoning-Based Explainable Method for Comprehensive Feature Discovery
Authors: Seoyeon Choi, Hayoung Kim, Guebin Choi. Computer Modeling in Engineering & Sciences, 2026, No. 2, pp. 1043-1069 (27 pages)
Most Convolutional Neural Network (CNN) interpretation techniques visualize only the dominant cues that the model relies on, but there is no guarantee that these represent all the evidence the model uses for classification. This limitation becomes critical when hidden secondary cues—potentially more meaningful than the visualized ones—remain undiscovered. This study introduces CasCAM (Cascaded Class Activation Mapping) to address this fundamental limitation through counterfactual reasoning. By asking "if this dominant cue were absent, what other evidence would the model use?", CasCAM progressively masks the most salient features and systematically uncovers the hierarchy of classification evidence hidden beneath them. Experimental results demonstrate that CasCAM effectively discovers the full spectrum of reasoning evidence and can be universally applied with nine existing interpretation methods.
Keywords: explainable AI; class activation mapping; counterfactual reasoning; shortcut learning; feature discovery
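The counterfactual loop described above can be sketched independently of any particular CAM method: mask the top-salient pixels, re-run the explainer, repeat. The following is a hedged illustration with a pluggable `explain` callable, not the authors' CasCAM implementation:

```python
import numpy as np

def cascade_maps(image, explain, n_rounds=3, mask_frac=0.2):
    """Counterfactual cascade (sketch): repeatedly hide the most salient
    pixels and re-run the explainer to surface secondary evidence.

    `explain` is any saliency method returning a map shaped like `image`.
    """
    img = np.asarray(image, dtype=float).copy()
    maps = []
    for _ in range(n_rounds):
        sal = np.asarray(explain(img), dtype=float).copy()
        maps.append(sal)
        k = max(1, int(sal.size * mask_frac))
        flat = np.argsort(sal, axis=None)[-k:]         # top-k salient pixels
        img[np.unravel_index(flat, sal.shape)] = 0.0   # "if this cue were absent..."
    return maps
```

With a toy explainer that treats pixel intensity as saliency, each round's map loses its previous maximum, exposing the next tier of "evidence" — the cascading behavior the paper exploits.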
5. Optimized Convolutional Neural Networks with Multi-Scale Pyramid Feature Integration for Efficient Traffic Light Detection in Intelligent Transportation Systems (Cited by 1)
Authors: Yahia Said, Yahya Alassaf, Refka Ghodhbani, Taoufik Saidani, Olfa Ben Rhaiem. Computers, Materials & Continua, 2025, No. 2, pp. 3005-3018 (14 pages)
Transportation systems are experiencing a significant transformation due to the integration of advanced technologies, including artificial intelligence and machine learning. In the context of intelligent transportation systems (ITS) and Advanced Driver Assistance Systems (ADAS), the development of efficient and reliable traffic light detection mechanisms is crucial for enhancing road safety and traffic management. This paper presents an optimized convolutional neural network (CNN) framework designed to detect traffic lights in real time within complex urban environments. Leveraging multi-scale pyramid feature maps, the proposed model addresses key challenges such as the detection of small, occluded, and low-resolution traffic lights amidst complex backgrounds. The integration of dilated convolutions, Region of Interest (ROI) alignment, and Soft Non-Maximum Suppression (Soft-NMS) further improves detection accuracy and reduces false positives. By optimizing computational efficiency and parameter complexity, the framework is designed to operate seamlessly on embedded systems, ensuring robust performance in real-world applications. Extensive experiments using real-world datasets demonstrate that our model significantly outperforms existing methods, providing a scalable solution for ITS and ADAS applications. This research contributes to the advancement of Artificial Intelligence-driven (AI-driven) pattern recognition in transportation systems and offers a mathematical approach to improving efficiency and safety in logistics and transportation networks.
Keywords: intelligent transportation systems (ITS); traffic light detection; multi-scale pyramid feature maps; advanced driver assistance systems (ADAS); real-time detection; AI in transportation
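Of the components above, Soft-NMS is the most self-contained. A minimal sketch of the linear-decay variant follows (the abstract does not say whether the linear or Gaussian decay is used, so treat the decay rule as an assumption):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def soft_nms(boxes, scores, iou_thresh=0.3, score_thresh=0.001):
    """Linear Soft-NMS: instead of discarding overlapping boxes outright,
    decay their scores by (1 - IoU) with the current best box."""
    dets = sorted(zip(boxes, scores), key=lambda d: -d[1])
    kept = []
    while dets:
        box, score = dets.pop(0)          # current highest-scoring box
        if score < score_thresh:
            break                         # everything left is negligible
        kept.append((box, score))
        rescored = []
        for b, s in dets:
            o = iou(box, b)
            if o > iou_thresh:
                s *= (1.0 - o)            # soft decay instead of hard removal
            rescored.append((b, s))
        dets = sorted(rescored, key=lambda d: -d[1])
    return kept
```

Compared with hard NMS, partially overlapping true positives survive with reduced scores rather than disappearing, which is why Soft-NMS reduces false negatives in crowded scenes like clustered traffic lights.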
6. Multi-scale feature fused stacked autoencoder and its application for soft sensor modeling (Cited by 1)
Authors: Zhi Li, Yuchong Xia, Jian Long, Chensheng Liu, Longfei Zhang. Chinese Journal of Chemical Engineering, 2025, No. 5, pp. 241-254 (14 pages)
Deep learning has been widely used to model soft sensors in modern industrial processes with nonlinear variables and uncertainty. Due to its outstanding ability for high-level feature extraction, the stacked autoencoder (SAE) has been widely used to improve the model accuracy of soft sensors. However, as the number of network layers increases, SAE may encounter serious information loss issues, which affect the modeling performance of soft sensors. Besides, there are typically very few labeled samples in the data set, which brings challenges that traditional neural networks struggle to solve. In this paper, a multi-scale feature fused stacked autoencoder (MFF-SAE) is suggested for feature representation related to hierarchical output, where stacked autoencoder, mutual information (MI), and multi-scale feature fusion (MFF) strategies are integrated. Based on correlation analysis between output and input variables, critical hidden variables are extracted from the original variables in each autoencoder's input layer and are correspondingly given varying weights. Besides, an integration strategy based on multi-scale feature fusion is adopted to mitigate the impact of information loss as the network layers deepen. The MFF-SAE method is then designed and stacked to form deep networks. Two practical industrial processes are utilized to evaluate the performance of MFF-SAE. Simulation results indicate that, in comparison with other cutting-edge techniques, the proposed method can considerably enhance the accuracy of soft sensor modeling, reducing the root mean square error (RMSE) on the two processes by 71.8%, 17.1% and 64.7%, 15.1%, respectively.
Keywords: multi-scale feature fusion; soft sensors; stacked autoencoders; computational chemistry; chemical processes; parameter estimation
7. Multi-scale feature fusion optical remote sensing target detection method (Cited by 1)
Authors: BAI Liang, DING Xuewen, LIU Ying, CHANG Limei. Optoelectronics Letters, 2025, No. 4, pp. 226-233 (8 pages)
An improved model based on You Only Look Once version 8 (YOLOv8) is proposed to solve the problem of low detection accuracy caused by the diversity of object sizes in optical remote sensing images. Firstly, the feature pyramid network (FPN) structure of the original YOLOv8 model is replaced by the generalized FPN (GFPN) structure from GiraffeDet to realize "cross-layer" and "cross-scale" adaptive feature fusion, enriching the semantic and spatial information of the feature maps and improving the target detection ability of the model. Secondly, a pyramid pooling module, multi-atrous spatial pyramid pooling (MASPP), is designed using the ideas of atrous convolution and the feature pyramid structure to extract multi-scale features, so as to improve the model's ability to handle multi-scale objects. The experimental results show that the detection accuracy of the improved YOLOv8 model on the DIOR dataset is 92% and the mean average precision (mAP) is 87.9%, respectively 3.5% and 1.7% higher than those of the original model. This demonstrates that the model's detection and classification ability on multi-scale optical remote sensing targets has been improved.
Keywords: multi-scale feature fusion; optical remote sensing; target detection; feature map; semantic information; spatial information
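Atrous (dilated) convolution, the building block behind the MASPP module, enlarges the receptive field by sampling the input with gaps. A minimal 1-D sketch (a hypothetical helper for illustration, not the paper's module):

```python
def dilated_conv1d(x, w, dilation=1):
    """1-D valid convolution (correlation form) with a dilated kernel.

    A kernel of length L with dilation d covers (L - 1) * d + 1 input
    samples, so stacking a few dilation rates yields multi-scale context
    with no extra parameters.
    """
    reach = (len(w) - 1) * dilation
    return [
        sum(w[j] * x[i + j * dilation] for j in range(len(w)))
        for i in range(len(x) - reach)
    ]
```

Running the same kernel at dilations 1, 2, and 4 and pooling the results is the essence of an atrous spatial pyramid: each branch sees the signal at a different scale.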
8. BDMFuse: Multi-scale network fusion for infrared and visible images based on base and detail features
Authors: SI Hai-Ping, ZHAO Wen-Rui, LI Ting-Ting, LI Fei-Tao, Fernando Bacao, SUN Chang-Xia, LI Yan-Ling. Journal of Infrared and Millimeter Waves (PKU Core journal), 2025, No. 2, pp. 289-298 (10 pages)
The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible images. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which are used to extract low-frequency and high-frequency information from the image. This extraction may leave some information uncaptured, so a compensation encoder is proposed to supplement the missing information. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module are introduced to perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
Keywords: infrared image; visible image; image fusion; encoder-decoder; multi-scale features
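The base/detail split that the encoder learns has a classical counterpart: a low-pass filter gives the base layer and the residual gives the detail layer, after which the two layers can be fused with simple rules. The sketch below is that non-learned baseline (mean-filtered base, max-absolute detail selection), offered only as a reference point, not as BDMFuse itself:

```python
import numpy as np

def box_blur(img, k=7):
    """Mean filter with edge-replicated padding (naive O(n^2 k^2) loop)."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def two_scale_fuse(ir, vis, k=7):
    """Classical base/detail fusion baseline: average the base (low-frequency)
    layers, keep the stronger detail (high-frequency) response per pixel."""
    base_ir, base_vis = box_blur(ir, k), box_blur(vis, k)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    base = 0.5 * (base_ir + base_vis)
    detail = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)
    return base + detail
```

A sanity check on the decomposition: fusing an image with itself must return the image, since base + detail reconstructs the input exactly.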
9. AMSFuse: Adaptive Multi-Scale Feature Fusion Network for Diabetic Retinopathy Classification
Authors: Chengzhang Zhu, Ahmed Alasri, Tao Xu, Yalong Xiao, Abdulrahman Noman, Raeed Alsabri, Xuanchu Duan, Monir Abdullah. Computers, Materials & Continua, 2025, No. 3, pp. 5153-5167 (15 pages)
Globally, diabetic retinopathy (DR) is the primary cause of blindness, affecting millions of people worldwide. This widespread impact underscores the critical need for reliable and precise diagnostic techniques to ensure prompt diagnosis and effective treatment. Deep learning-based automated diagnosis for diabetic retinopathy can facilitate early detection and treatment. However, traditional deep learning models that focus on local views often learn feature representations that are less discriminative at the semantic level. On the other hand, models that focus on global semantic-level information might overlook critical, subtle local pathological features. To address this issue, we propose an adaptive multi-scale feature fusion network called AMSFuse, which can adaptively combine multi-scale global and local features without compromising their individual representation. Specifically, our model incorporates global features for extracting high-level contextual information from retinal images. Concurrently, local features capture fine-grained details, such as microaneurysms, hemorrhages, and exudates, which are critical for DR diagnosis. These global and local features are adaptively fused using a fusion block, followed by an Integrated Attention Mechanism (IAM) that refines the fused features by emphasizing relevant regions, thereby enhancing classification accuracy for DR classification. Our model achieves 86.3% accuracy on the APTOS dataset and 96.6% on RFMiD, both comparable to state-of-the-art methods.
Keywords: diabetic retinopathy; multi-scale feature fusion; global features; local features; integrated attention mechanism; retinal images
10. Multi-Scale Feature Fusion Network for Accurate Detection of Cervical Abnormal Cells
Authors: Chuanyun Xu, Die Hu, Yang Zhang, Shuaiye Huang, Yisha Sun, Gang Li. Computers, Materials & Continua, 2025, No. 4, pp. 559-574 (16 pages)
Detecting abnormal cervical cells is crucial for early identification and timely treatment of cervical cancer. However, this task is challenging due to the morphological similarities between abnormal and normal cells and the significant variations in cell size. Pathologists often refer to surrounding cells to identify abnormalities. To emulate this slide examination behavior, this study proposes a Multi-Scale Feature Fusion Network (MSFF-Net) for detecting cervical abnormal cells. MSFF-Net employs a Cross-Scale Pooling Model (CSPM) to effectively capture diverse features and contextual information, ranging from local details to the overall structure. Additionally, a Multi-Scale Fusion Attention (MSFA) module is introduced to mitigate the impact of cell size variations by adaptively fusing local and global information at different scales. To handle the complex environment of cervical cell images, such as cell adhesion and overlapping, the Inner-CIoU loss function is utilized to more precisely measure the overlap between bounding boxes, thereby improving detection accuracy in such scenarios. Experimental results on the Comparison detector dataset demonstrate that MSFF-Net achieves a mean average precision (mAP) of 63.2%, outperforming state-of-the-art methods while maintaining a relatively small number of parameters (26.8 M). This study highlights the effectiveness of multi-scale feature fusion in enhancing the detection of cervical abnormal cells, contributing to more accurate and efficient cervical cancer screening.
Keywords: cervical abnormal cells; image detection; multi-scale feature fusion; contextual information
11. Fake News Detection Based on Cross-Modal Ambiguity Computation and Multi-Scale Feature Fusion
Authors: Jianxiang Cao, Jinyang Wu, Wenqian Shang, Chunhua Wang, Kang Song, Tong Yi, Jiajun Cai, Haibin Zhu. Computers, Materials & Continua, 2025, No. 5, pp. 2659-2675 (17 pages)
With the rapid growth of social media, the spread of fake news has become a growing problem, misleading the public and causing significant harm. As social media content is often composed of both images and text, the use of multimodal approaches for fake news detection has gained significant attention. To solve the problems of previous multimodal fake news detection algorithms, such as insufficient feature extraction and insufficient use of the semantic relations between modalities, this paper proposes the MFFFND-Co (Multimodal Feature Fusion Fake News Detection with Co-Attention Block) model. First, the model deeply explores the textual content, image content, and frequency-domain features. Then, it employs a Co-Attention mechanism for cross-modal fusion. Additionally, a semantic consistency detection module is designed to quantify semantic deviations, thereby enhancing the performance of fake news detection. Experimentally verified on two commonly used datasets, Twitter and Weibo, the model achieved F1 scores of 90.0% and 94.0%, respectively, significantly outperforming the pre-modified MFFFND (Multimodal Feature Fusion Fake News Detection with Attention Block) model and surpassing other baseline models. This improves the accuracy of detecting fake information in artificial intelligence detection and engineering software detection.
Keywords: fake news detection; multimodal; cross-modal ambiguity computation; multi-scale feature fusion
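A co-attention block is built from two scaled dot-product cross-attention passes: text queries attend over image keys/values and vice versa. A minimal single-pass sketch (shapes and names are illustrative, not the MFFFND-Co code):

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(Q, K, V):
    """Scaled dot-product cross-attention: queries come from one modality,
    keys/values from the other, so each query row is rewritten as a
    weighted mix of the other modality's features."""
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))  # rows sum to 1
    return weights @ V
```

Calling it twice — `cross_attention(text, image, image)` and `cross_attention(image, text, text)` — and concatenating the outputs gives the co-attention pattern the model name refers to.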
12. MSFResNet: A ResNeXt50 model based on multi-scale feature fusion for wild mushroom identification
Authors: YANG Yang, JU Tao, YANG Wenjie, ZHAO Yuyang. Journal of Measurement Science and Instrumentation, 2025, No. 1, pp. 66-74 (9 pages)
To solve the problems of redundant feature information, insignificant differences in feature representation, and low recognition accuracy for fine-grained images, an MSFResNet network model fusing multi-scale feature information is proposed based on the ResNeXt50 model. Firstly, a multi-scale feature extraction module is designed to obtain multi-scale information from feature images by using convolution kernels of different scales. Meanwhile, the channel attention mechanism is used to increase the network's acquisition of global information. Secondly, the feature images processed by the multi-scale feature extraction module are fused with the deep feature images through short links to guide the full learning of the network, thus reducing the loss of texture details in the deep network feature images and improving network generalization ability and recognition accuracy. Finally, the validity of the MSFResNet model is verified using public datasets and applied to wild mushroom identification. Experimental results show that compared with the ResNeXt50 network model, the accuracy of the MSFResNet model is improved by 6.01% on the public FGVC-Aircraft dataset. It achieves 99.13% classification accuracy on the wild mushroom dataset, which is 0.47% higher than ResNeXt50. Furthermore, heat-map results show that the MSFResNet model significantly reduces the interference of background information, making the network focus on the location of the main body of the wild mushroom, which can effectively improve the accuracy of wild mushroom identification.
Keywords: multi-scale feature fusion; attention mechanism; ResNeXt50; wild mushroom identification; deep learning
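The multi-scale extraction idea — run the same input through filters of several receptive-field sizes, then stack the outputs — can be sketched without a deep-learning framework by varying the window of a smoothing filter. Purely illustrative, standing in for the multi-kernel convolution branches:

```python
import numpy as np

def multi_scale_features(x, kernel_sizes=(3, 5, 7)):
    """Filter a 1-D signal at several receptive-field sizes and stack the
    results, mimicking channel-wise concatenation of multi-kernel branches."""
    feats = []
    for k in kernel_sizes:
        pad = k // 2
        xp = np.pad(x, pad, mode="edge")                 # keep output length
        feats.append(np.convolve(xp, np.ones(k) / k, mode="valid"))
    return np.stack(feats)                               # (n_scales, len(x))
```

Small windows preserve fine texture while large windows summarize context; feeding both to the next stage is what lets a fusion module choose the scale that discriminates best.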
13. EHDC-YOLO: Enhancing Object Detection for UAV Imagery via Multi-Scale Edge and Detail Capture
Authors: Zhiyong Deng, Yanchen Ye, Jiangling Guo. Computers, Materials & Continua, 2026, No. 1, pp. 1665-1682 (18 pages)
With the rapid expansion of drone applications, accurate detection of objects in aerial imagery has become crucial for intelligent transportation, urban management, and emergency rescue missions. However, existing methods face numerous challenges in practical deployment, including scale variation handling, feature degradation, and complex backgrounds. To address these issues, we propose Edge-enhanced and Detail-Capturing You Only Look Once (EHDC-YOLO), a novel framework for object detection in Unmanned Aerial Vehicle (UAV) imagery. Based on the You Only Look Once version 11 nano (YOLOv11n) baseline, EHDC-YOLO systematically introduces several architectural enhancements: (1) a Multi-Scale Edge Enhancement (MSEE) module that leverages multi-scale pooling and edge information to enhance boundary feature extraction; (2) an Enhanced Feature Pyramid Network (EFPN) that integrates P2-level features with Cross Stage Partial (CSP) structures and OmniKernel convolutions for better fine-grained representation; and (3) a Dynamic Head (DyHead) with multi-dimensional attention mechanisms for enhanced cross-scale modeling and perspective adaptability. Comprehensive experiments on the Vision meets Drones for Detection (VisDrone-DET) 2019 dataset demonstrate that EHDC-YOLO achieves significant improvements, increasing mean Average Precision (mAP)@0.5 from 33.2% to 46.1% (an absolute improvement of 12.9 percentage points) and mAP@0.5:0.95 from 19.5% to 28.0% (an absolute improvement of 8.5 percentage points) compared with the YOLOv11n baseline, while maintaining a reasonable parameter count (2.81 M vs the baseline's 2.58 M). Further ablation studies confirm the effectiveness of each proposed component, while visualization results highlight EHDC-YOLO's superior performance in detecting objects and handling occlusions in complex drone scenarios.
Keywords: UAV imagery; object detection; multi-scale feature fusion; edge enhancement; detail preservation; YOLO; feature pyramid network; attention mechanism
14. YOLO-SPDNet: Multi-Scale Sequence and Attention-Based Tomato Leaf Disease Detection Model
Authors: Meng Wang, Jinghan Cai, Wenzheng Liu, Xue Yang, Jingjing Zhang, Qiangmin Zhou, Fanzhen Wang, Hang Zhang, Tonghai Liu. Phyton-International Journal of Experimental Botany, 2026, No. 1, pp. 290-308 (19 pages)
Tomato is a major economic crop worldwide, and diseases on tomato leaves can significantly reduce both yield and quality. Traditional manual inspection is inefficient and highly subjective, making it difficult to meet the requirements of early disease identification in complex natural environments. To address this issue, this study proposes an improved YOLO11-based model, YOLO-SPDNet (Scale Sequence Fusion, Position-Channel Attention, and Dual Enhancement Network). The model integrates the SEAM (Self-Ensembling Attention Mechanism) semantic enhancement module, the MLCA (Mixed Local Channel Attention) lightweight attention mechanism, and the SPA (Scale-Position-Detail Awareness) module composed of SSFF (Scale Sequence Feature Fusion), TFE (Triple Feature Encoding), and CPAM (Channel and Position Attention Mechanism). These enhancements strengthen fine-grained lesion detection while keeping the model lightweight. Experimental results show that YOLO-SPDNet achieves an accuracy of 91.8%, a recall of 86.5%, and an mAP@0.5 of 90.6% on the test set, with a computational complexity of 12.5 GFLOPs. Furthermore, the model reaches a real-time inference speed of 987 FPS, making it suitable for deployment on mobile agricultural terminals and online monitoring systems. Comparative analysis and ablation studies further validate the reliability and practical applicability of the proposed model in complex natural scenes.
Keywords: tomato disease detection; YOLO; multi-scale feature fusion; attention mechanism; lightweight model
15. OP-SLAM: An RGB-D SLAMMOT Method Leveraging the Constraints of Object Planar Features
Authors: WANG Yingli, LIU Yang, GUO Chi. Wuhan University Journal of Natural Sciences, 2026, No. 1, pp. 45-57 (13 pages)
By integrating self-localization, environment mapping, and dynamic object tracking into a unified framework, visual simultaneous localization and mapping with multiple object tracking (SLAMMOT) enhances decision-making and interaction capabilities in applications such as autonomous driving, robotic navigation, and augmented reality. While numerous outstanding visual SLAMMOT methods have been proposed, the majority rely only on point features, overlooking the abundant and stable planar features in artificial objects that can provide valuable constraints. To address this limitation, we propose OP (object planar)-SLAM, an RGB-D SLAMMOT system that leverages planar features to improve object pose estimation and reconstruction accuracy. Specifically, we introduce an accurate object planar feature extraction and association method using normal images, alongside a novel object bundle adjustment framework that incorporates planar constraints for enhanced optimization. The proposed system is evaluated on both synthetic and public real-world datasets, including the Oxford multimotion dataset (OMD) and the KITTI tracking dataset. Especially on the OMD, where planar features are prominent, our method improves object pose estimation accuracy by approximately 60%. Extensive experiments demonstrate its effectiveness in enhancing object pose estimation and reconstruction, achieving notable performance compared with existing methods. Furthermore, OP-SLAM runs in real time, making it suitable for practical robots and augmented reality applications.
Keywords: visual simultaneous localization and mapping (SLAM); multiple object tracking (MOT); dynamic scenes; planar feature
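Extracting a planar feature from RGB-D points usually reduces to least-squares plane fitting: the plane normal is the direction of least variance of the centered point cloud, obtained from an SVD. A hedged sketch of that single step (not OP-SLAM's normal-image pipeline):

```python
import numpy as np

def fit_plane(points):
    """Fit n . x + d = 0 to an (N, 3) point cloud.

    The normal is the right singular vector with the smallest singular
    value, i.e. the direction of least variance of the centered points.
    """
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]          # unit normal (sign is arbitrary)
    d = -n @ c
    return n, d

def point_plane_residuals(points, n, d):
    """Signed point-to-plane distances, the quantity a planar constraint
    penalizes inside bundle adjustment."""
    return points @ n + d
```

In a plane-aware bundle adjustment, these signed residuals enter the cost alongside reprojection errors, which is how planar constraints tighten object pose estimates.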
16. A multi-scale optimal interpolation method of high computational efficiency for mapping oceanic data
作者 Ying Wen Zhijin Li +1 位作者 Wenlong Ma Xingliang Jiang 《Acta Oceanologica Sinica》 2025年第11期245-258,共14页
Ocean observations are inherently characterized by irregular temporal and spatial distributions, as well as heterogeneous spatial resolutions and error characteristics arising from the use of diverse observational platforms and techniques. To enable their application across a broad range of scientific and practical problems, it is essential to map these heterogeneous datasets into temporally and spatially consistent gridded products. Optimal interpolation remains the most widely adopted algorithm for the mapping of oceanographic data, and two principal implementations are commonly employed. The first, known as basic optimal interpolation, is derived from the theory of optimal estimation and involves computationally intensive matrix operations, posing significant challenges when applied to high-dimensional problems. The second, referred to as point-wise optimal interpolation, reduces computational complexity through point-wise estimation, thereby circumventing high-dimensional operations; however, this approach incurs a substantially higher overall computational cost. In this study, a novel optimal interpolation algorithm is proposed that utilizes the Kronecker product to approximate the background error covariance matrix. This formulation enables the decomposition of high-dimensional matrix operations into smaller, computationally tractable sub-problems, thereby improving the scalability of optimal interpolation for large spatial domains with dense observational coverage. Building upon this framework, a multi-scale optimal interpolation method is further developed to enhance the integration of observational datasets with widely varying spatial resolutions, thereby improving the accuracy and applicability of the resulting gridded products.
Keywords: oceanic observation mapping, optimal interpolation algorithm, multi-scale algorithm, background error covariance, Kronecker product
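The computational gain from the Kronecker approximation rests on a standard linear-algebra identity: products with a large covariance C = C_lon ⊗ C_lat never require forming C explicitly. The sketch below illustrates this with toy matrices (the factor names and sizes are assumptions for illustration, not the paper's configuration):

```python
import numpy as np

# If the background error covariance factorizes as C = C_lon ⊗ C_lat,
# the identity (A ⊗ B) vec(X) = vec(B @ X @ A.T) (column-major vec)
# replaces one O((n_lat*n_lon)^2) product with two small ones.
rng = np.random.default_rng(0)
n_lat, n_lon = 4, 5
C_lat = rng.standard_normal((n_lat, n_lat))
C_lon = rng.standard_normal((n_lon, n_lon))
x = rng.standard_normal(n_lat * n_lon)

# Direct product with the explicit Kronecker matrix (expensive).
y_full = np.kron(C_lon, C_lat) @ x

# Factorized product using only the small factor matrices.
X = x.reshape(n_lon, n_lat).T                  # unvec (column-major)
y_fact = (C_lat @ X @ C_lon.T).T.reshape(-1)   # re-vec (column-major)

assert np.allclose(y_full, y_fact)
```

The same trick applies to solving linear systems with C, since (A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹, which is what makes the decomposed sub-problems tractable.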
A Generative Image Steganography Based on Disentangled Attribute Feature Transformation and Invertible Mapping Rule
17
Authors: Xiang Zhang, Shenyan Han, Wenbin Huang, Daoyong Fu. 《Computers, Materials & Continua》, 2025, No. 4, pp. 1149-1171 (23 pages)
Generative image steganography is a technique that directly generates stego images from secret information. Unlike traditional methods, it theoretically resists steganalysis because there is no cover image. Existing generative image steganography methods generally perform well, but there is still room to improve both the quality of stego images and the accuracy of secret information extraction. This paper therefore proposes a generative image steganography algorithm based on attribute feature transformation and an invertible mapping rule. First, the reference image is disentangled by a content encoder and an attribute encoder to obtain content features and attribute features, respectively. Then, a mean mapping rule is introduced to map the binary secret information into a noise vector conforming to the distribution of the attribute features. This noise vector is fed into the generator to produce an attribute-transformed stego image carrying the content features of the reference image. Additionally, an adversarial loss, a reconstruction loss, and an image diversity loss are designed to train the proposed model. Experimental results demonstrate that the stego images generated by the proposed method are of high quality, with an average extraction accuracy of 99.4% for the hidden information. Furthermore, since the stego image follows a distribution similar to that of an attribute-transformed image carrying no secret information, it effectively resists both subjective and objective steganalysis.
Keywords: image information hiding, generative information hiding, disentangled attribute feature transformation, invertible mapping rule, steganalysis resistance
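The abstract does not spell out the mean mapping rule, but the core requirement is clear: bits must map invertibly into a noise vector whose distribution matches the target feature distribution. The sketch below is one hypothetical such mapping (not the paper's rule): each bit selects the sign of a half-normal sample, so the vector stays marginally N(0, 1) while every bit is exactly recoverable.

```python
import numpy as np

# Hypothetical invertible bit-to-noise mapping in the spirit of the
# paper's "mean mapping rule" (the exact rule is not reproduced here).
def embed_bits(bits, rng):
    z = np.abs(rng.standard_normal(len(bits)))   # half-normal magnitudes
    return np.where(np.asarray(bits) == 1, z, -z)

def extract_bits(noise):
    return (noise > 0).astype(int)               # the sign carries the bit

rng = np.random.default_rng(42)
secret = [1, 0, 1, 1, 0, 0, 1, 0]
noise = embed_bits(secret, rng)
assert extract_bits(noise).tolist() == secret    # lossless extraction
```

Because the embedding only fixes the sign, an observer sampling many such vectors sees the same standard-normal marginal as noise carrying no message, which mirrors the distribution-matching argument in the abstract.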
Radar emitter signal recognition based on multi-scale wavelet entropy and feature weighting (cited by: 16)
18
Authors: 李一兵, 葛娟, 林云, 叶方. 《Journal of Central South University》 (SCIE, EI, CAS), 2014, No. 11, pp. 4254-4260 (7 pages)
In the modern electromagnetic environment, radar emitter signal recognition is an important research topic. Building on multi-resolution wavelet analysis, an adaptive radar emitter signal recognition method based on multi-scale wavelet entropy feature extraction and feature weighting was proposed. With the signal-to-noise ratio (SNR) as the only prior knowledge, multi-scale wavelet entropy features extracted from the wavelet coefficients of the received signals were combined with an uneven weight factor and a stability weight factor computed for the extracted multi-dimensional characteristics. Radar emitter signals of different modulation types and parameters were then recognized through feature weighting and feature fusion. Theoretical analysis and simulation results show that the presented algorithm achieves a high recognition rate: when the SNR is greater than -4 dB, the correct recognition rate exceeds 93%. Hence, the proposed algorithm has considerable application value.
Keywords: emitter recognition, multi-scale wavelet entropy, feature weighting, uneven weight factor, stability weight factor
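A minimal sketch of the multi-scale wavelet entropy feature itself (an illustration with an assumed Haar wavelet and a synthetic test signal, not the authors' exact implementation): one Haar DWT step splits a signal into approximation and detail coefficients, repeating on the approximation yields the multi-scale decomposition, and the Shannon entropy of each scale's normalized detail energies forms one feature per scale.

```python
import numpy as np

def haar_step(x):
    x = x[: len(x) // 2 * 2]                 # even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2)     # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)     # detail coefficients
    return a, d

def wavelet_entropies(x, levels=3):
    feats = []
    for _ in range(levels):
        x, d = haar_step(x)
        p = d**2 / np.sum(d**2)              # normalized energy distribution
        p = p[p > 0]
        feats.append(-np.sum(p * np.log2(p)))  # Shannon entropy, in bits
    return np.array(feats)

t = np.linspace(0, 1, 256, endpoint=False)
sig = np.sin(2 * np.pi * 8 * t)              # stand-in for a received pulse
feats = wavelet_entropies(sig)
assert feats.shape == (3,) and np.all(feats >= 0)
```

Signals with different modulation types concentrate energy differently across scales, which is why such per-scale entropies can separate emitter classes after weighting and fusion.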
Feature Extraction by Multi-Scale Principal Component Analysis and Classification in Spectral Domain (cited by: 2)
19
Authors: Shengkun Xie, Anna T. Lawnizak, Pietro Lio, Sridhar Krishnan. 《Engineering》 (Scientific Research), 2013, No. 10, pp. 268-271 (4 pages)
Feature extraction of signals plays an important role in classification problems because of its data dimension reduction property and potential improvement of classification accuracy. Principal component analysis (PCA), wavelet transform, or Fourier transform methods are often used for feature extraction. In this paper, we propose a multi-scale PCA, which combines the discrete wavelet transform and PCA for feature extraction of signals in both the spatial and temporal domains. Our study shows that the multi-scale PCA combined with the proposed new classification methods leads to high classification accuracy for the considered signals.
Keywords: multi-scale principal component analysis, discrete wavelet transform, feature extraction, signal classification, empirical classification
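The DWT-then-PCA pipeline can be sketched in a few lines (an illustration under assumed choices: a one-level Haar transform, random stand-in signals, and SVD-based PCA; not the authors' exact pipeline): each signal is expanded into wavelet coefficients, and PCA on the resulting coefficient matrix yields low-dimensional features that reflect both scale structure and variance structure.

```python
import numpy as np

def haar_coeffs(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)     # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)     # detail
    return np.concatenate([a, d])            # one-level wavelet expansion

rng = np.random.default_rng(1)
signals = rng.standard_normal((50, 64))              # 50 signals, 64 samples
W = np.array([haar_coeffs(s) for s in signals])      # wavelet-domain matrix
Wc = W - W.mean(axis=0)                              # center before PCA
U, S, Vt = np.linalg.svd(Wc, full_matrices=False)    # principal directions
features = Wc @ Vt[:2].T                             # top-2 PCA scores
assert features.shape == (50, 2)
```

The dimension reduction claimed in the abstract corresponds here to keeping only the leading rows of `Vt`: 64 coefficients per signal collapse to 2 features per signal.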
A Lightweight Convolutional Neural Network with Hierarchical Multi-Scale Feature Fusion for Image Classification (cited by: 2)
20
Authors: Adama Dembele, Ronald Waweru Mwangi, Ananda Omutokoh Kube. 《Journal of Computer and Communications》, 2024, No. 2, pp. 173-200 (28 pages)
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 employs a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn imaging features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract multi-scale feature information from the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining the network performance compared to the MobileNetV1 baseline.
Keywords: MobileNet, image classification, lightweight convolutional neural network, depthwise dilated separable convolution, hierarchical multi-scale feature fusion
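The parameter savings behind depthwise separable designs follow from standard counting formulas (the layer sizes below are assumed for illustration, not the paper's configuration): a standard k x k convolution costs k·k·c_in·c_out parameters, a depthwise separable one costs k·k·c_in + c_in·c_out, and dilation enlarges the receptive field without adding any parameters at all.

```python
# Back-of-the-envelope parameter comparison for one layer.
def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out                 # standard convolution

def depthwise_separable_params(k, c_in, c_out):
    return k * k * c_in + c_in * c_out          # depthwise + 1x1 pointwise

std = conv_params(3, 128, 128)                  # 147456 parameters
dds = depthwise_separable_params(3, 128, 128)   # 17536 parameters
print(f"reduction: {1 - dds / std:.1%}")        # roughly 88% fewer parameters
```

This per-layer arithmetic is why swapping standard convolutions for DDSC layers can cut total parameters by the large margins reported in the abstract.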