Journal Articles
Found 12,749 articles
1. GaitMAFF: Adaptive Multi-Modal Fusion of Skeleton Maps and Silhouettes for Robust Gait Recognition in Complex Scenarios
Authors: Zhongbin Luo, Zhaoyang Guan, Wenxing You, Yunteng Wang, Yanqiu Bi. Computers, Materials & Continua, 2026, Issue 5, pp. 540-558.
Gait recognition is a key biometric for long-distance identification, yet its performance is severely degraded by real-world challenges such as varying clothing, carrying conditions, and changing viewpoints. While combining silhouette and skeleton data is a promising direction, effectively fusing these heterogeneous modalities and adaptively weighting their contributions in response to diverse conditions remains a central problem. This paper introduces GaitMAFF, a novel Multi-modal Adaptive Feature Fusion Network, to address this challenge. Our approach first transforms discrete skeleton joints into a dense SkeletonMap representation to align with silhouettes, then employs an attention-based module to dynamically learn the fusion weights between the two modalities. These fused features are processed by a powerful spatio-temporal backbone with Weighted Global-Local Feature Fusion Modules (WFFM) to learn a discriminative representation. Extensive experiments on the challenging CCPG and Gait3D datasets show that GaitMAFF achieves state-of-the-art performance, with an average Rank-1 accuracy of 84.6% on CCPG and 58.7% on Gait3D. These results demonstrate that our adaptive fusion strategy effectively integrates complementary multimodal information, significantly enhancing gait recognition robustness and accuracy in complex scenes and providing a practical solution for real-world applications.
Keywords: gait recognition; multi-modal fusion; adaptive feature fusion; skeleton map; silhouette
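For readers wanting a concrete picture of the attention-based adaptive weighting described in the GaitMAFF abstract above, the following PyTorch sketch learns per-sample fusion weights for two feature streams and blends them. It illustrates the general technique only, not the authors' implementation; the module name, feature dimension, and batch size are invented for the example.

```python
import torch
import torch.nn as nn

class AdaptiveTwoModalFusion(nn.Module):
    """Toy attention-based fusion: learn per-sample weights for two
    feature streams and return their weighted sum (a sketch of the
    general idea, not the paper's module)."""
    def __init__(self, dim: int):
        super().__init__()
        # Scores both modalities from their concatenation; softmax makes
        # the two weights positive and sum to 1 for each sample.
        self.score = nn.Linear(2 * dim, 2)

    def forward(self, silhouette_feat, skeleton_feat):
        joint = torch.cat([silhouette_feat, skeleton_feat], dim=-1)
        w = torch.softmax(self.score(joint), dim=-1)
        return w[..., 0:1] * silhouette_feat + w[..., 1:2] * skeleton_feat

fusion = AdaptiveTwoModalFusion(dim=256)
sil = torch.randn(8, 256)   # hypothetical silhouette embeddings (batch of 8)
ske = torch.randn(8, 256)   # hypothetical skeleton-map embeddings
print(fusion(sil, ske).shape)  # torch.Size([8, 256])
```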
2. Multi-modal data analysis for autism spectrum disorder in children: State of the art and trends
Authors: Lukai Pang, Xiaoke Zhao, Lulu Zhao, Jianqing Li, Fengyi Kuo, Hongxing Wang, Chengyu Liu. EngMedicine, 2026, Issue 1, pp. 47-56.
Autism spectrum disorder (ASD) is a highly heterogeneous neurodevelopmental disorder. Early diagnosis and intervention are crucial for improving outcomes. Traditional single-modality diagnostic methods are subjective, limited, and struggle to reveal the underlying pathological mechanisms. In contrast, multimodal data analysis integrates behavioral, physiological, and neuroimaging information with advanced machine learning and deep learning algorithms to overcome these limitations. In this review, we surveyed the recent pediatric ASD literature, highlighting artificial intelligence-driven diagnostic techniques, multimodal data fusion strategies, and emerging trends in ASD assessment. We surveyed studies that integrated two or more modalities and summarized the fusion levels, learning paradigms, tasks, datasets, and metrics. Multimodal approaches outperform single-modality baselines in classification, severity estimation, and subtyping by leveraging complementary information and reducing modality-specific biases. They significantly enhance diagnostic accuracy and comprehensiveness, enabling early screening of ASD, symptom subtyping, severity assessment, and personalized interventions. Advances in multimodal fusion techniques have promoted progress in precision medicine for the treatment of ASD.
Keywords: autism spectrum disorder; multi-modal data; machine learning; early screening; symptom subtyping
3. Computing the Planet: Integrating Machine Learning, Remote Sensing, and Sensor Data Fusion for Environmental Insights
Authors: Kai Mao. Journal of Environmental & Earth Sciences, 2026, Issue 1, pp. 277-297.
A range of environmental systems requires timely, spatially explicit, and credible information to support environmental decision-making, yet no single observing system can provide complete and reliable measurements of the Earth system across scales. This review summarizes how the vision of "computing the planet" is being realized through machine learning, remote sensing, and sensor data fusion to generate decision-ready environmental insights. We take an application-first approach that treats remote sensing, in situ and Internet of Things (IoT) sensing, and physics-based models as complementary streams of evidence, each with its own strengths and failure modes. We look critically at how an integrated system can convert heterogeneous observations into actionable products across three high-impact application areas: atmosphere and air quality, water-land-ecosystem dynamics, and hazards. Key products include rapid-response situational awareness, ecosystem condition metrics, drought and flood indicators, exposure maps, and hazard/extreme indicators. Operational requirements often determine real-life value, such as latency, temporal stability, smooth degradation in the presence of missing or degraded inputs, and calibrated uncertainty usable in threshold-based decisions. Common pitfalls cut across fields: scale mismatch between a point sensor and a gridded product, objectives defined on proxies in remotely sensed measurements, domain shift at the extremes and under changing baselines, and evaluation designs that overestimate generalization because of spatiotemporal autocorrelation. Based on these lessons, we present cross-domain proposals for strong validation, uncertainty quantification, provenance and versioning, and fair performance evaluation. We conclude that the next era of environmental intelligence will prioritize robustness, transparency, and operational responsibility over marginal gains in average accuracy, allowing integrated environmental intelligence systems to be deployed and relied on for monitoring human health, allocating resources, and supporting survival in a more climate-adapted world.
Keywords: machine learning; remote sensing; sensor data fusion; environmental monitoring; uncertainty quantification
4. Drive-by spatial offset detection for high-speed railway bridges based on fusion analysis of multi-source data from comprehensive inspection train
Authors: Chuang Wang, Jiawang Zhan, Nan Zhang, Yujie Wang, Xinxiang Xu, Zhihang Wang, Zhen Ni. Railway Engineering Science, 2026, Issue 1, pp. 128-148.
The spatial offset of bridges has a significant impact on the safety, comfort, and durability of high-speed railway (HSR) operations, so it is crucial to rapidly and effectively detect the spatial offset of operational HSR bridges. Drive-by monitoring of bridge uneven settlement demonstrates significant potential due to its practicality, cost-effectiveness, and efficiency. However, existing drive-by methods for detecting bridge offset have limitations such as reliance on a single data source, low detection accuracy, and the inability to identify lateral deformations of bridges. This paper proposes a novel drive-by inspection method for the spatial offset of HSR bridges based on multi-source data fusion from a comprehensive inspection train. Firstly, dung beetle optimizer-variational mode decomposition was employed to achieve adaptive decomposition of non-stationary dynamic signals and explore the hidden temporal relationships in the data. Subsequently, a long short-term memory neural network was developed to achieve feature fusion of multi-source signals and accurate prediction of the spatial settlement of HSR bridges. A dataset of track irregularities and CRH380A high-speed train responses was generated using a 3D train-track-bridge interaction model, and the accuracy and effectiveness of the proposed hybrid deep learning model were numerically validated. Finally, the reliability of the proposed drive-by inspection method was further validated by analyzing actual measurement data obtained from a comprehensive inspection train. The research findings indicate that the proposed approach enables rapid and accurate detection of spatial offset in HSR bridges, supporting their long-term operational safety.
Keywords: high-speed railway bridge; drive-by inspection; spatial offset; multi-source data fusion; deep learning
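The second stage described in the abstract above — an LSTM that fuses multi-source inspection signals and regresses bridge settlement — can be sketched generically in PyTorch. This is an illustration, not the paper's model; the channel count, window length, and hidden size are assumptions.

```python
import torch
import torch.nn as nn

class MultiSourceLSTMRegressor(nn.Module):
    """Sketch: concatenate multi-source signal channels per time step,
    run an LSTM, and regress a spatial-offset value from the last state."""
    def __init__(self, n_channels: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, time, n_channels)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])      # (batch, 1) predicted offset

# Hypothetical example: 4 fused sensor channels, 200 time steps per window.
model = MultiSourceLSTMRegressor(n_channels=4)
window = torch.randn(16, 200, 4)
print(model(window).shape)  # torch.Size([16, 1])
```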
5. Multi-Modal Pre-Synergistic Fusion Entity Alignment Based on Mutual Information Strategy Optimization
Authors: Huayu Li, Xinxin Chen, Lizhuang Tan, Konstantin I. Kostromitin, Athanasios V. Vasilakos, Peiying Zhang. Computers, Materials & Continua, 2025, Issue 11, pp. 4133-4153.
To address the challenge of missing modal information in entity alignment and to mitigate information loss or bias arising from modal heterogeneity during fusion, while also capturing shared information across modalities, this paper proposes a Multi-modal Pre-synergistic Entity Alignment model based on Cross-modal Mutual Information Strategy Optimization (MPSEA). The model first employs independent encoders to process multi-modal features, including text, images, and numerical values. Next, a multi-modal pre-synergistic fusion mechanism integrates graph structural and visual modal features into the textual modality as preparatory information. This pre-fusion strategy enables unified perception of heterogeneous modalities at the model's initial stage, reducing discrepancies during the fusion process. Finally, using cross-modal deep perception reinforcement learning, the model achieves adaptive multilevel feature fusion between modalities, supporting the learning of more effective alignment strategies. Extensive experiments on multiple public datasets show that the MPSEA method achieves gains of up to 7% in Hits@1 and 8.2% in MRR on the FBDB15K dataset, and up to 9.1% in Hits@1 and 7.7% in MRR on the FBYG15K dataset, compared to existing state-of-the-art methods. These results confirm the effectiveness of the proposed model.
Keywords: knowledge graph; multi-modal entity alignment; feature fusion; pre-synergistic fusion
6. Multi-Modal Named Entity Recognition with Auxiliary Visual Knowledge and Word-Level Fusion
Authors: Huansha Wang, Ruiyang Huang, Qinrang Liu, Xinghao Wang. Computers, Materials & Continua, 2025, Issue 6, pp. 5747-5760.
Multi-modal Named Entity Recognition (MNER) aims to better identify meaningful textual entities by integrating information from images. Previous work has focused on extracting visual semantics at a fine-grained level, or on obtaining entity-related external knowledge from knowledge bases or Large Language Models (LLMs). However, these approaches ignore the poor semantic correlation between visual and textual modalities in MNER datasets and do not explore different multi-modal fusion approaches. In this paper, we present MMAVK, a multi-modal named entity recognition model with auxiliary visual knowledge and word-level fusion, which leverages the Multi-modal Large Language Model (MLLM) as an implicit knowledge base. It also extracts vision-based auxiliary knowledge from the image for more accurate and effective recognition. Specifically, we propose vision-based auxiliary knowledge generation, which guides the MLLM to extract external knowledge exclusively derived from images to aid entity recognition by designing target-specific prompts, thus avoiding the redundant recognition and cognitive confusion caused by simultaneous processing of image-text pairs. Furthermore, we employ a word-level multi-modal fusion mechanism to fuse the extracted external knowledge with each word embedding produced by the transformer-based encoder. Extensive experimental results demonstrate that MMAVK outperforms or equals the state-of-the-art methods on two classical MNER datasets, even when the large models employed have significantly fewer parameters than other baselines.
Keywords: multi-modal named entity recognition; large language model; multi-modal fusion
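A minimal sketch of the word-level fusion idea from the MMAVK abstract: a per-token gate controls how much of a sentence-level knowledge vector is mixed into each word embedding. The gating design and all dimensions are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class WordLevelKnowledgeFusion(nn.Module):
    """Sketch of word-level fusion: a sigmoid gate decides, for every
    token, how much of an external knowledge vector to inject."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, words, knowledge):
        # words: (batch, seq, dim); knowledge: (batch, dim)
        k = knowledge.unsqueeze(1).expand_as(words)
        g = torch.sigmoid(self.gate(torch.cat([words, k], dim=-1)))
        return words + g * k  # knowledge injected token by token

fuser = WordLevelKnowledgeFusion(dim=128)
tokens = torch.randn(2, 20, 128)   # hypothetical word embeddings
know = torch.randn(2, 128)         # hypothetical MLLM-derived knowledge vector
print(fuser(tokens, know).shape)   # torch.Size([2, 20, 128])
```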
7. Research Progress on Multi-Modal Fusion Object Detection Algorithms for Autonomous Driving: A Review
Authors: Peicheng Shi, Li Yang, Xinlong Dong, Heng Qi, Aixi Yang. Computers, Materials & Continua, 2025, Issue 6, pp. 3877-3917.
As the number and complexity of sensors in autonomous vehicles continue to rise, multimodal fusion-based object detection algorithms are increasingly being used to detect 3D environmental information, significantly advancing the development of perception technology in autonomous driving. To further promote the development of fusion algorithms and improve detection performance, this paper discusses the advantages and recent advancements of multimodal fusion-based object detection algorithms. Starting from single-modal sensor detection, the paper provides a detailed overview of typical sensors used in autonomous driving and introduces object detection methods based on images and point clouds. Image-based detection methods are categorized into monocular and binocular detection according to input type. Point cloud-based detection methods are classified into projection-based, voxel-based, point cluster-based, pillar-based, and graph structure-based approaches according to how point cloud features are processed. Additionally, multimodal fusion algorithms are divided into Camera-LiDAR fusion, Camera-Radar fusion, Camera-LiDAR-Radar fusion, and other sensor fusion methods based on the types of sensors involved. Furthermore, the paper identifies five key future research directions in this field, aiming to provide insights for researchers engaged in multimodal fusion-based object detection and to encourage broader attention to its research and application.
Keywords: multi-modal fusion; 3D object detection; deep learning; autonomous driving
8. Tomato Growth Height Prediction Method by Phenotypic Feature Extraction Using Multi-modal Data
Authors: GONG Yu, WANG Ling, ZHAO Rongqiang, YOU Haibo, ZHOU Mo, LIU Jie. Smart Agriculture, 2025, Issue 1, pp. 97-110.
[Objective] Accurate prediction of tomato growth height is crucial for optimizing production environments in smart farming. However, current prediction methods predominantly rely on empirical, mechanistic, or learning-based models that utilize either image data or environmental data. These methods fail to fully leverage multi-modal data to capture the diverse aspects of plant growth comprehensively. [Methods] To address this limitation, a two-stage phenotypic feature extraction (PFE) model based on the deep learning algorithms of recurrent neural network (RNN) and long short-term memory (LSTM) was developed. The model integrated environment and plant information to provide a holistic understanding of the growth process, employed phenotypic and temporal feature extractors to comprehensively capture both types of features, and enabled a deeper understanding of the interaction between tomato plants and their environment, ultimately leading to highly accurate predictions of growth height. [Results and Discussions] The experimental results showed the model's effectiveness: when predicting the next two days based on the past five days, the PFE-based RNN and LSTM models achieved mean absolute percentage errors (MAPE) of 0.81% and 0.40%, respectively, significantly lower than the 8.00% MAPE of the large language model (LLM) and the 6.72% MAPE of the Transformer-based model. In longer-term predictions, the 10-day prediction for 4 days ahead and the 30-day prediction for 12 days ahead, the PFE-RNN model continued to outperform the two baseline models, with MAPEs of 2.66% and 14.05%, respectively. [Conclusions] The proposed method, which leverages phenotypic-temporal collaboration, shows great potential for intelligent, data-driven management of tomato cultivation, making it a promising approach for enhancing the efficiency and precision of smart tomato planting management.
Keywords: tomato growth prediction; deep learning; phenotypic feature extraction; multi-modal data; recurrent neural network; long short-term memory; large language model
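The evaluation setup named in the abstract above — predicting the next two days of growth from the past five, scored by MAPE — reduces to a sliding-window framing that any sequence model can consume. The sketch below builds the windows and scores a naive persistence baseline; the synthetic height series is invented for the example.

```python
import numpy as np

def make_windows(series, past=5, ahead=2):
    """Slice a daily series into (past-days input, ahead-days target) pairs."""
    X, y = [], []
    for i in range(len(series) - past - ahead + 1):
        X.append(series[i:i + past])
        y.append(series[i + past:i + past + ahead])
    return np.array(X), np.array(y)

def mape(y_true, y_pred):
    """Mean absolute percentage error, the metric reported in the abstract."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Synthetic 30-day height curve (cm) standing in for real measurements.
heights = np.linspace(10.0, 80.0, 30) + np.random.default_rng(0).normal(0, 0.5, 30)
X, y = make_windows(heights)           # inputs/targets for any sequence model
naive = X[:, -1:].repeat(2, axis=1)    # persistence baseline: repeat last day
print(X.shape, y.shape, f"baseline MAPE: {mape(y, naive):.2f}%")
```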
9. Adaptive Multi-modal Fusion Instance Segmentation for CAEVs in Complex Conditions: Dataset, Framework and Verifications (Cited by 3)
Authors: Pai Peng, Keke Geng, Guodong Yin, Yanbo Lu, Weichao Zhuang, Shuaipeng Liu. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2021, Issue 5, pp. 96-106.
Current works on environmental perception for connected autonomous electrified vehicles (CAEVs) mainly focus on the object detection task in good weather and illumination conditions; they often perform poorly in adverse scenarios and have a vague scene parsing ability. This paper aims to develop an end-to-end sharpening mixture of experts (SMoE) fusion framework to improve the robustness and accuracy of perception systems for CAEVs in complex illumination and weather conditions. Three original contributions make our work distinctive from the existing relevant literature. The Complex KITTI dataset is introduced, which consists of 7481 pairs of modified KITTI RGB images and generated LiDAR dense depth maps; this dataset is finely annotated at the instance level with the proposed semi-automatic annotation method. The SMoE fusion approach is devised to adaptively learn robust kernels from complementary modalities. Comprehensive comparative experiments are implemented, and the results show that the proposed SMoE framework yields significant improvements over other fusion techniques in adverse environmental conditions. This research proposes an SMoE fusion framework that improves the scene parsing ability of perception systems for CAEVs in adverse conditions.
Keywords: connected autonomous electrified vehicles; multi-modal fusion; semi-automatic annotation; sharpening mixture of experts; comparative experiments
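A mixture-of-experts fusion layer of the kind named in the abstract above can be sketched in a few lines: a gating network produces per-sample weights that blend the outputs of several expert transforms of the combined RGB and depth features. This toy version omits the paper's "sharpening" mechanism, and the expert count and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class SimpleMoEFusion(nn.Module):
    """Sketch of mixture-of-experts fusion: per-sample gating weights
    blend expert transforms of concatenated two-modality features."""
    def __init__(self, dim: int, n_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(2 * dim, dim) for _ in range(n_experts))
        self.gate = nn.Linear(2 * dim, n_experts)

    def forward(self, rgb, depth):
        x = torch.cat([rgb, depth], dim=-1)
        w = torch.softmax(self.gate(x), dim=-1)                   # (batch, n_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=1)   # (batch, n_experts, dim)
        return (w.unsqueeze(-1) * outs).sum(dim=1)                # gated blend

moe = SimpleMoEFusion(dim=64)
print(moe(torch.randn(4, 64), torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```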
10. Adaptive multi-modal feature fusion for far and hard object detection
Authors: LI Yang, GE Hongwei. Journal of Measurement Science and Instrumentation (CAS, CSCD), 2021, Issue 2, pp. 232-241.
To solve the difficult detection of far and hard objects caused by the sparseness and insufficient semantic information of LiDAR point clouds, a 3D object detection network with multi-modal data adaptive fusion is proposed, which makes use of multi-neighborhood information of voxels together with image information. Firstly, an improved ResNet is designed that maintains the structure information of far and hard objects in low-resolution feature maps, making it more suitable for the detection task. Meanwhile, the semantics of each image feature map are enhanced by semantic information from all subsequent feature maps. Secondly, multi-neighborhood context information is extracted with different receptive field sizes to make up for the sparseness of the point cloud, which improves the ability of voxel features to represent the spatial structure and semantic information of objects. Finally, a multi-modal feature adaptive fusion strategy is proposed that uses learnable weights to express the contribution of different modal features to the detection task, while voxel attention further enhances the fused feature expression of effective target objects. The experimental results on the KITTI benchmark show that this method outperforms VoxelNet by remarkable margins, increasing the AP by 8.78% and 5.49% on the medium and hard difficulty levels. Meanwhile, our method achieves greater detection performance than many mainstream multi-modal methods, outperforming the AP of MVX-Net by 1% on the medium and hard difficulty levels.
Keywords: 3D object detection; adaptive fusion; multi-modal data fusion; attention mechanism; multi-neighborhood features
11. A Comprehensive Survey on Deep Learning Multi-Modal Fusion: Methods, Technologies and Applications (Cited by 10)
Authors: Tianzhe Jiao, Chaopeng Guo, Xiaoyue Feng, Yuming Chen, Jie Song. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 1-35.
Multi-modal fusion technology has gradually become a fundamental task in many fields, such as autonomous driving, smart healthcare, sentiment analysis, and human-computer interaction. It is rapidly becoming a dominant research direction due to its powerful perception and judgment capabilities. In complex scenes, multi-modal fusion technology utilizes the complementary characteristics of multiple data streams to fuse different data types and achieve more accurate predictions. However, achieving outstanding performance is challenging because of equipment performance limitations, missing information, and data noise. This paper comprehensively reviews existing methods based on multi-modal fusion techniques and provides a detailed and in-depth analysis. According to the data fusion stage, multi-modal fusion has four primary methods: early fusion, deep fusion, late fusion, and hybrid fusion. The paper surveys the three major multi-modal fusion technologies that can significantly enhance the effect of data fusion and further explores the applications of multi-modal fusion technology in various fields. Finally, it discusses the challenges and explores potential research opportunities. Multi-modal tasks still need intensive study because of data heterogeneity and quality. Preserving complementary information and eliminating redundant information between modalities is critical in multi-modal technology. Invalid data fusion methods may introduce extra noise and lead to worse results. This paper provides a comprehensive and detailed summary in response to these challenges.
Keywords: multi-modal fusion; representation; translation; alignment; deep learning; comparative analysis
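The survey's first and third fusion stages are easy to contrast in code. The sketch below shows early fusion (features concatenated before a single classifier) versus late fusion (per-modality classifiers whose decisions are averaged); the dimensions and the equal decision weights are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

dim_a, dim_b, n_classes = 32, 48, 5

# Early fusion: concatenate raw features, then one shared classifier.
early = nn.Linear(dim_a + dim_b, n_classes)

# Late fusion: one classifier per modality, decisions averaged afterwards.
head_a, head_b = nn.Linear(dim_a, n_classes), nn.Linear(dim_b, n_classes)

a, b = torch.randn(4, dim_a), torch.randn(4, dim_b)
early_logits = early(torch.cat([a, b], dim=-1))
late_logits = 0.5 * head_a(a) + 0.5 * head_b(b)  # equal-weight decision averaging
print(early_logits.shape, late_logits.shape)     # both torch.Size([4, 5])
```

Deep fusion interleaves the two (features mixed inside intermediate layers), and hybrid fusion combines stages; both are extensions of the two patterns shown here.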
12. PowerDetector: Malicious PowerShell Script Family Classification Based on Multi-Modal Semantic Fusion and Deep Learning (Cited by 8)
Authors: Xiuzhang Yang, Guojun Peng, Dongni Zhang, Yuhang Gao, Chenguang Li. China Communications (SCIE, CSCD), 2023, Issue 11, pp. 202-224.
PowerShell has been widely deployed in fileless malware and advanced persistent threat (APT) attacks due to its high stealthiness and living-off-the-land technique. However, existing works mainly focus on deobfuscation and malicious detection, lacking malicious PowerShell family classification and behavior analysis. Moreover, the state-of-the-art methods fail to capture fine-grained features and semantic relationships, resulting in low robustness and accuracy. To this end, we propose PowerDetector, a novel malicious PowerShell script detector based on multi-modal semantic fusion and deep learning. Specifically, we design four feature extraction methods to extract key features from characters, tokens, the abstract syntax tree (AST), and a semantic knowledge graph. Then, we design four embeddings (i.e., Char2Vec, Token2Vec, AST2Vec, and Rela2Vec) and construct a multi-modal fusion algorithm to concatenate feature vectors from the different views. Finally, we propose a combined model based on a transformer and CNN-BiLSTM to implement PowerShell family detection. Our experiments with five types of PowerShell attacks show that PowerDetector can accurately detect various obfuscated and stealthy PowerShell scripts, with a 0.9402 precision, a 0.9358 recall, and a 0.9374 F1-score. Furthermore, through single-modal and multi-modal comparison experiments, we demonstrate that PowerDetector's multi-modal embedding and deep learning model achieve better accuracy and even identify more unknown attacks.
Keywords: deep learning; malicious family detection; multi-modal semantic fusion; PowerShell
13. MMGC-Net: Deep neural network for classification of mineral grains using multi-modal polarization images (Cited by 1)
Authors: Jun Shu, Xiaohai He, Qizhi Teng, Pengcheng Yan, Haibo He, Honggang Chen. Journal of Rock Mechanics and Geotechnical Engineering, 2025, Issue 6, pp. 3894-3909.
The multi-modal characteristics of mineral particles play a pivotal role in enhancing classification accuracy, which is critical for obtaining a profound understanding of the Earth's composition and ensuring effective exploitation and utilization of its resources. However, existing methods for classifying mineral particles do not fully utilize these multi-modal features, thereby limiting classification accuracy. Furthermore, when conventional multi-modal image classification methods are applied to plane-polarized and cross-polarized sequence images of mineral particles, they encounter issues such as information loss, misaligned features, and challenges in spatiotemporal feature extraction. To address these challenges, we propose a multi-modal mineral particle polarization image classification network (MMGC-Net) for precise mineral particle classification. Initially, MMGC-Net employs a two-dimensional (2D) backbone network with shared parameters to extract features from the two types of polarized images, ensuring feature alignment. Subsequently, a cross-polarized intra-modal feature fusion module is designed to refine spatiotemporal features from the extracted features of the cross-polarized sequence images. Ultimately, an inter-modal feature fusion module integrates the two types of modal features to enhance classification precision. Quantitative and qualitative experimental results indicate that, compared with the current state-of-the-art multi-modal image classification methods, MMGC-Net demonstrates marked superiority in terms of mineral particle multi-modal feature learning and four classification evaluation metrics. It also demonstrates better stability than the existing models.
Keywords: mineral particles; multi-modal image classification; shared parameters; feature fusion; spatiotemporal features
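The shared-parameter backbone idea in the MMGC-Net abstract — one encoder applied to both polarization modalities so their features stay in a single aligned space — can be sketched as below. The tiny CNN, class count, and image size are stand-ins, not the paper's network.

```python
import torch
import torch.nn as nn

class SharedBackboneFusion(nn.Module):
    """Sketch: the same (weight-shared) CNN encodes both polarization
    modalities, so the two feature vectors are directly comparable
    before concatenation and classification."""
    def __init__(self, n_classes: int = 10):  # 10 hypothetical mineral classes
        super().__init__()
        self.encoder = nn.Sequential(          # stand-in for the 2D backbone
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(2 * 32, n_classes)

    def forward(self, plane_img, cross_img):
        fp = self.encoder(plane_img)   # shared parameters: the same module
        fc = self.encoder(cross_img)   # encodes both inputs
        return self.classifier(torch.cat([fp, fc], dim=-1))

net = SharedBackboneFusion()
print(net(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)).shape)  # (2, 10)
```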
14. Dynamic UAV data fusion and deep learning for improved maize phenological-stage tracking (Cited by 1)
Authors: Ziheng Feng, Jiliang Zhao, Liunan Suo, Heguang Sun, Huiling Long, Hao Yang, Xiaoyu Song, Haikuan Feng, Bo Xu, Guijun Yang, Chunjiang Zhao. The Crop Journal, 2025, Issue 3, pp. 961-974.
Near real-time maize phenology monitoring is crucial for field management, cropping system adjustments, and yield estimation. Most phenological monitoring methods are post-seasonal and rely heavily on high-frequency time-series data. These methods are not applicable on the unmanned aerial vehicle (UAV) platform due to the high cost of acquiring time-series UAV images and the shortage of UAV-based phenological monitoring methods. To address these challenges, we employed the Synthetic Minority Oversampling Technique (SMOTE) for sample augmentation, aiming to resolve the small-sample modelling problem. Moreover, we utilized enhanced "separation" and "compactness" feature selection methods to identify input features from multiple data sources. In this process, we incorporated dynamic multi-source data fusion strategies involving vegetation indices (VI), color indices (CI), and texture features (TF). A two-stage neural network that combines a Convolutional Neural Network (CNN) and a Long Short-Term Memory network (LSTM) is proposed to identify maize phenological stages (including sowing, seedling, jointing, trumpet, tasseling, maturity, and harvesting) on UAV platforms. The results indicate that the dataset generated by SMOTE closely resembles the measured dataset. Among the dynamic data fusion strategies, the VI-TF combination proves most effective, followed by the CI-TF and VI-CI combinations. Notably, as more data sources are integrated, the model's demand for input features declines significantly. In particular, the CNN-LSTM model based on the fusion of three data sources exhibited remarkable reliability when validated on the three datasets. For Dataset 1 (Beijing Xiaotangshan, 2023: data from 12 UAV flight missions), the model achieved an overall accuracy (OA) of 86.53%; its precision (Pre), recall (Rec), F1 score (F1), false acceptance rate (FAR), and false rejection rate (FRR) were 0.89, 0.89, 0.87, 0.11, and 0.11, respectively. The model also showed strong generalizability on Dataset 2 (Beijing Xiaotangshan, 2023: data from 6 UAV flight missions) and Dataset 3 (Beijing Xiaotangshan, 2022: data from 4 UAV flight missions), with OAs of 89.4% and 85%, respectively. Meanwhile, the model has a low demand for input features, requiring only 54.55% (99) of all features. The findings of this study not only offer novel insights into near real-time crop phenology monitoring, but also provide technical support for agricultural field management and cropping system adaptation.
Keywords: near real-time; maize phenology; deep learning; UAV; multi-source data fusion
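The SMOTE augmentation step mentioned in the abstract above is available in the imbalanced-learn library; the sketch below balances a rare phenological stage against a common one by interpolating between minority-class neighbours. The feature matrix here is random stand-in data, not UAV imagery features.

```python
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn

rng = np.random.default_rng(42)
# Stand-in features: 60 samples of a common stage, 8 of a rarer one
# (e.g., tasseling), 10 hypothetical spectral/texture features each.
X = np.vstack([rng.normal(0, 1, (60, 10)), rng.normal(2, 1, (8, 10))])
y = np.array([0] * 60 + [1] * 8)

# SMOTE synthesizes new minority samples along lines between neighbours.
X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))  # {0: 60, 1: 8} -> {0: 60, 1: 60}
```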
15. Design Discussion of a Wireless Fire Alarm System Based on Data Fusion Technology (Cited by 1)
Authors: Qun Wu, Jinyang Wu. Journal of Electronic Research and Application, 2025, Issue 2, pp. 58-64.
This article explores the design of a wireless fire alarm system supported by advanced data fusion technology. It discusses the basic design ideas of the wireless fire alarm system, hardware design analysis, software design analysis, and simulation analysis, all supported by data fusion technology. The analysis is intended to provide a reference for the rational application of data fusion technology to meet the actual design and application requirements of such systems.
Keywords: data fusion technology; fire alarm system; wireless alarm; hardware design; software design
16. Risk Prediction of Tunnel Water and Mud Inrush Based on Decision-Level Fusion of Multisource Data (Cited by 1)
Authors: Shi-shu Zhang, Peng Wang, Hua-bo Xiao, Huai-bing Wang, Yi-guo Xue, Wei-dong Chen, Kai Zhang. Applied Geophysics, 2025, Issue 2, pp. 472-487, 559-560.
This paper addresses the accuracy and timeliness limitations of traditional comprehensive prediction methods by proposing a decision-level fusion approach for multisource data. A risk prediction indicator system was established for water and mud inrush in tunnels by analyzing advance prediction data for specific tunnel segments. Additionally, the indicator weights were determined using the analytic hierarchy process combined with the Huber weighting method. Subsequently, a multisource data decision-layer fusion algorithm was utilized to generate fused imaging results for tunnel water and mud inrush risk prediction. Meanwhile, risk analysis was performed for different tunnel sections to achieve spatial and temporal complementarity within the indicator system and optimize redundant information. Finally, the model's feasibility was validated using the Sejila Mountain Tunnel segment of the CZ Project as a case study, yielding favorable risk prediction results and enabling efficient information fusion and support for construction decision-making.
Keywords: tunnel water and mud inrush; prediction methods; risk indicators; multisource data; decision-level fusion
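The weighting-plus-fusion logic described in the abstract above can be illustrated with a toy decision-level fusion: AHP-style weights derived from a pairwise comparison matrix are applied to per-source risk scores. The comparison values, source names, and scores below are all invented, and the paper's additional Huber weighting step is omitted from this sketch.

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP priority vector via row geometric means
    (a common shortcut for the principal eigenvector)."""
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])
    return gm / gm.sum()

# Hypothetical pairwise importance comparisons of three advance-prediction
# sources, e.g. seismic reflection vs. transient EM vs. borehole data.
A = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 1/2],
              [1/2, 2.0, 1.0]])
w = ahp_weights(A)

scores = np.array([0.7, 0.4, 0.6])  # per-source risk scores in [0, 1], invented
fused_risk = float(w @ scores)      # decision-level weighted fusion
print(np.round(w, 3), round(fused_risk, 3))
```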
17. Test method of laser paint removal based on multi-modal feature fusion (Cited by 2)
Authors: HUANG Hai-peng, HAO Ben-tian, YE De-jun, GAO Hao, LI Liang. Journal of Central South University (SCIE, EI, CAS, CSCD), 2022, Issue 10, pp. 3385-3398.
Laser cleaning is a highly nonlinear physical process, and single-modal (e.g., acoustic or vision) detection of its quality suffers from poor performance and low utilization of inter-modal information. In this study, a multi-modal feature fusion network model was constructed based on a laser paint removal experiment. The alignment of heterogeneous data from different modalities was solved by combining piecewise aggregate approximation and the Gramian angular field. Moreover, an attention mechanism was introduced to optimize the dual-path network and dense connection network, enabling the sampled characteristics to be extracted and integrated. Consequently, multi-modal discriminant detection of laser paint removal was realized. According to the experimental results, the verification accuracy of the constructed model on the experimental dataset was 99.17%, which is 5.77% higher than the best single-modal detection result for laser paint removal. The feature extraction network was optimized by the attention mechanism, and the model accuracy increased by 3.3%. The results verify the improved classification performance of the constructed multi-modal feature fusion model in detecting laser paint removal, the effective integration of acoustic data and visual image data, and the accurate detection of laser paint removal.
Keywords: laser cleaning; multi-modal fusion; image processing; deep learning
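The alignment trick in the abstract above — converting a 1-D acoustic signal into a 2-D image via piecewise aggregate approximation (PAA) and the Gramian angular field (GAF) so it can be fused with camera data — is compact enough to sketch directly. The synthetic signal and segment count are assumptions for the example.

```python
import numpy as np

def paa(x, n_segments):
    """Piecewise aggregate approximation: mean of equal-length segments."""
    x = x[: len(x) // n_segments * n_segments]
    return x.reshape(n_segments, -1).mean(axis=1)

def gramian_angular_field(x):
    """Gramian angular summation field: rescale to [-1, 1], map samples
    to angles, and build the cos(phi_i + phi_j) matrix as a 2-D image."""
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1, 1))
    return np.cos(phi[:, None] + phi[None, :])

# Hypothetical acoustic emission trace from a cleaning pulse (1,000 samples).
t = np.linspace(0, 1, 1000)
signal = np.sin(40 * t) * np.exp(-3 * t) \
         + 0.05 * np.random.default_rng(1).normal(size=1000)

image = gramian_angular_field(paa(signal, 64))  # 64x64 pseudo-image
print(image.shape)  # (64, 64) -- now stackable with camera images for fusion
```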
18. Multi-modality hierarchical fusion network for lumbar spine segmentation with magnetic resonance images (Cited by 1)
Authors: Han Yan, Guangtao Zhang, Wei Cui, Zhuliang Yu. Control Theory and Technology (EI, CSCD), 2024, Issue 4, pp. 612-622.
For the analysis of spinal and disc diseases, automated tissue segmentation of the lumbar spine is vital. Due to the continuous and concentrated location of the target, the abundance of edge features, and individual differences, conventional automatic segmentation methods perform poorly. Since the success of deep learning in the segmentation of medical images has been shown in the past few years, it has been applied to this task in a number of ways. The multi-scale and multi-modal features of lumbar tissues, however, are rarely explored by deep learning methodologies. Because of the limited availability of medical images, it is crucial to effectively fuse various modes of data collection for model training to alleviate the problem of insufficient samples. In this paper, we propose a novel multi-modality hierarchical fusion network (MHFN) for improving lumbar spine segmentation by learning robust feature representations from multi-modality magnetic resonance images. An adaptive group fusion module (AGFM) is introduced to fuse features from various modes and extract cross-modality features that could be valuable. Furthermore, to combine cross-modality features from low to high levels, we design a hierarchical fusion structure based on the AGFM. Experimental results on multi-modality MR images of the lumbar spine show that the AGFM is more effective than other feature fusion methods. To further assess segmentation accuracy, we compare our network with baseline fusion structures. Compared to the baseline fusion structures (input-level: 76.27%, layer-level: 78.10%, decision-level: 79.14%), our network segments fractured vertebrae more accurately (85.05%).
Keywords: lumbar spine segmentation; deep learning; multi-modality fusion; feature fusion
19. Multi-dimension and multi-modal rolling mill vibration prediction model based on multi-level network fusion (Cited by 1)
Authors: CHEN Shu-zong, LIU Yun-xiao, WANG Yun-long, QIAN Cheng, HUA Chang-chun, SUN Jie. Journal of Central South University (SCIE, EI, CAS, CSCD), 2024, Issue 9, pp. 3329-3348.
Mill vibration is a common problem in rolling production; it directly affects the thickness accuracy of the strip and may even lead to strip fracture accidents in serious cases. Existing vibration prediction models do not consider the features contained in the data, resulting in limited improvement of model accuracy. To address these challenges, this paper proposes a multi-dimensional multi-modal cold rolling vibration time series prediction model (MDMMVPM) based on the deep fusion of multi-level networks. In the model, the long-term and short-term modal features of multi-dimensional data are considered, and appropriate prediction algorithms are selected for different data features. Based on the established prediction model, the effects of tension and rolling force on mill vibration are analyzed. Taking the 5th stand of a cold rolling mill in a steel plant as the research object, the innovative model is applied to predict mill vibration for the first time. The experimental results show that the correlation coefficient (R²) of the proposed model is 92.5% and the root-mean-square error (RMSE) is 0.0011, which significantly improves the modeling accuracy compared with existing models. The proposed model is also suitable for the hot rolling process, providing a new method for the prediction of strip rolling vibration.
Keywords: rolling mill vibration; multi-dimension data; multi-modal data; convolutional neural network; time series prediction
20. MMCSD: Multi-Modal Knowledge Graph Completion Based on Super-Resolution and Detailed Description Generation
Authors: Huansha Wang, Ruiyang Huang, Qinrang Liu, Shaomei Li, Jianpeng Zhang. Computers, Materials & Continua, 2025, Issue 4, pp. 761-783.
Multi-modal knowledge graph completion (MMKGC) aims to complete missing entities or relations in multi-modal knowledge graphs, thereby discovering more previously unknown triples. Due to the continuous growth of data and knowledge and the limitations of data sources, the visual knowledge within knowledge graphs is generally of low quality, and some entities suffer from a missing visual modality. Nevertheless, previous studies of MMKGC have primarily focused on facilitating modality interaction and fusion while neglecting the problems of low modality quality and modality missingness. In this case, mainstream MMKGC models only use pre-trained visual encoders to extract features and transfer the semantic information to the joint embeddings through modal fusion, which inevitably suffers from problems such as error propagation and increased uncertainty. To address these problems, we propose a Multi-modal knowledge graph Completion model based on Super-resolution and Detailed Description Generation (MMCSD). Specifically, we leverage a pre-trained residual network to enhance the resolution and improve the quality of the visual modality. Moreover, we design multi-level visual semantic extraction and entity description generation, thereby further extracting entity semantics from structural triples and visual images. Meanwhile, we train a variational multi-modal auto-encoder and utilize a pre-trained multi-modal language model to complement the missing visual features. We conducted experiments on FB15K-237 and DB13K, and the results showed that MMCSD can effectively perform MMKGC and achieve state-of-the-art performance.
Keywords: multi-modal knowledge graph; knowledge graph completion; multi-modal fusion