Journal Articles
79,603 articles found
1. Near-infrared Spectroscopy Detection of Rice Protein Content Based on Stacking Multi-model Fusion
Authors: Shengye WANG, Siting WU, Jinming LIU, Chunqi WANG, Zhijiang LI. 《Agricultural Biotechnology》, 2026, No. 1, pp. 42-46 (5 pages)
[Objectives] This study was conducted to achieve rapid and accurate detection of protein content in rice with a particle size of 1.0 mm. [Methods] A multi-model fusion strategy was proposed on the basis of Stacking ensemble learning. A base learner pool was constructed, containing Partial Least Squares (PLS), Support Vector Machine (SVM), Deep Extreme Learning Machine (DELM), Random Forest (RF), Gradient Boosting Decision Tree (GBDT), and Multilayer Perceptron (MLP). PLS, DELM, and Linear Regression (LR) were used as meta-learner candidates. Employing integer coding, systematic dynamic combinations of base learners and meta-learners were generated, yielding a total of 40 non-repeating fusion models, and the optimal combination was selected through a comprehensive evaluation based on multiple assessment indicators. [Results] The combination "PLS-DELM-MLP-LR" (code 1367) achieved coefficients of determination of 0.9732 and 0.9780 on the validation set and independent test set, respectively, with relative root mean square errors of 2.35% and 2.36%, and residual predictive deviations of 6.1075 and 6.7479, respectively. [Conclusions] The Stacking fusion model significantly enhances the predictive accuracy and robustness of spectral quantitative analysis, providing an efficient and feasible solution for modeling complex agricultural product spectral data.
Keywords: Rice protein; Near-infrared spectroscopy; Stacking ensemble learning; Multi-model fusion; Integer encoding
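The integer-coding step described in the abstract above (each fusion model identified by a code such as 1367) can be sketched in a few lines. The index assignment below is an assumption chosen so that the reported combination PLS-DELM-MLP with meta-learner LR encodes to 1367; the paper's actual digit scheme and its count of 40 models may differ.

```python
from itertools import combinations

# Hypothetical index assignment (assumption, not confirmed by the paper).
BASE = {"PLS": 1, "SVM": 2, "DELM": 3, "RF": 4, "GBDT": 5, "MLP": 6}
META = {"LR": 7, "PLS": 8, "DELM": 9}   # meta-learner candidates

def encode(base_learners, meta_learner):
    """Concatenate sorted base-learner indices and the meta-learner index
    into a single integer code, e.g. {PLS, DELM, MLP} + LR -> 1367."""
    digits = sorted(BASE[b] for b in base_learners)
    digits.append(META[meta_learner])
    return int("".join(map(str, digits)))

def enumerate_codes(base_size):
    """All non-repeating fusion-model codes for a given base-subset size."""
    return sorted(
        encode(combo, meta)
        for combo in combinations(BASE, base_size)
        for meta in META
    )

print(encode(["PLS", "DELM", "MLP"], "LR"))  # -> 1367
print(len(enumerate_codes(3)))               # 20 subsets x 3 metas = 60
```

Under this toy scheme, three-learner subsets alone already give 60 distinct codes; the paper's 40 models presumably come from a different (unstated) restriction on the combinations.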
2. A lightweight physics-conditioned diffusion multi-model for medical image reconstruction
Authors: Raja Vavekanand, Ganesh Kumar, Shakhlokhon Kurbanova. 《Biomedical Engineering Communications》, 2026, No. 2, pp. 50-59 (10 pages)
Background: Medical imaging advancements are constrained by fundamental trade-offs between acquisition speed, radiation dose, and image quality, forcing clinicians to work with noisy, incomplete data. Existing reconstruction methods either compromise on accuracy with iterative algorithms or suffer from limited generalizability with task-specific deep learning approaches. Methods: We present LDM-PIR, a lightweight physics-conditioned diffusion multi-model for medical image reconstruction that addresses key challenges in magnetic resonance imaging (MRI), CT, and low-photon imaging. Unlike traditional iterative methods, which are computationally expensive, or task-specific deep learning approaches lacking generalizability, LDM-PIR integrates three innovations: a physics-conditioned diffusion framework that embeds acquisition operators (Fourier/Radon transforms) and noise models directly into the reconstruction process; a multi-model architecture that unifies denoising, inpainting, and super-resolution via shared weight conditioning; and a lightweight design (2.1M parameters) enabling rapid inference (0.8 s/image on GPU). Through self-supervised fine-tuning with measurement consistency losses, the model adapts to new imaging modalities using fewer annotated samples. Results: LDM-PIR achieves state-of-the-art performance on fastMRI (peak signal-to-noise ratio (PSNR): 34.04 for single-coil / 31.50 for multi-coil) and the Lung Image Database Consortium and Image Database Resource Initiative (28.83 PSNR under Poisson noise). Clinical evaluations demonstrate superior preservation of anatomical structures, with SSIM improvements of 8.8% for single-coil and 4.36% for multi-coil MRI over uDPIR. Conclusion: LDM-PIR offers a flexible, efficient, and scalable solution for medical image reconstruction, addressing the challenges of noise, undersampling, and modality generalization. Its lightweight design allows for rapid inference, while its self-supervised fine-tuning capability minimizes reliance on large annotated datasets, making it suitable for real-world clinical applications.
Keywords: medical image reconstruction; physics-conditioned diffusion; multi-task learning; self-supervised fine-tuning; multimodal fusion; lightweight neural networks
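The PSNR figures reported above follow the standard definition, 10·log10(MAX²/MSE), which is easy to reproduce. A minimal stdlib-only sketch on flat pixel lists (the 1.0 intensity ceiling is an assumption for illustration):

```python
import math

def psnr(reference, reconstruction, max_value=1.0):
    """Peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE).
    Inputs are flat lists of pixel intensities on the same scale."""
    if len(reference) != len(reconstruction):
        raise ValueError("images must have the same number of pixels")
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstruction)) / len(reference)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)

ref = [0.0, 0.5, 1.0, 0.25]
noisy = [0.1, 0.45, 0.9, 0.3]
print(round(psnr(ref, noisy), 2))    # -> 22.04
```

Higher is better; a perfect reconstruction has infinite PSNR, and each halving of the RMS error adds about 6 dB.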
3. Multi-modality hierarchical fusion network for lumbar spine segmentation with magnetic resonance images (Cited by 1)
Authors: Han Yan, Guangtao Zhang, Wei Cui, Zhuliang Yu. 《Control Theory and Technology》, EI CSCD, 2024, No. 4, pp. 612-622 (11 pages)
For the analysis of spinal and disc diseases, automated tissue segmentation of the lumbar spine is vital. Conventional automatic segmentation methods perform poorly because the target is continuous and concentrated in location, edge features are abundant, and individual differences are large. Since deep learning has succeeded in medical image segmentation in recent years, it has been applied to this task in a number of ways. However, deep learning methods rarely explore the multi-scale and multi-modal features of lumbar tissues. Because the availability of medical images is limited, effectively fusing data from various acquisition modes for model training is crucial to alleviate the problem of insufficient samples. In this paper, we propose a novel multi-modality hierarchical fusion network (MHFN) that improves lumbar spine segmentation by learning robust feature representations from multi-modality magnetic resonance images. An adaptive group fusion module (AGFM) is introduced to fuse features from various modes and extract valuable cross-modality features. Furthermore, to combine cross-modality features from low to high levels, we design a hierarchical fusion structure based on AGFM. Experimental results on multi-modality MR images of the lumbar spine show that AGFM is more effective than other feature fusion methods. To further assess segmentation accuracy, we compare our network with baseline fusion structures (input-level: 76.27%, layer-level: 78.10%, decision-level: 79.14%); our network segments fractured vertebrae more accurately (85.05%).
Keywords: Lumbar spine segmentation; Deep learning; Multi-modality fusion; Feature fusion
4. SFMFusion: Infrared and Visible Image Fusion Based on Semantic Feature Mapping Auto-Encoding
Authors: 管芳景, 汪娟, 罗晓清. 《红外技术》, PKU Core, 2026, No. 2, pp. 156-165 (10 pages)
Previous infrared and visible image fusion methods often neglect the relationships among semantic features, so the unique information of infrared images is insufficiently exploited. To fully extract the semantic information and fine-grained discriminative features of images, this paper proposes an infrared and visible image fusion method based on semantic feature mapping auto-encoding (SFMFusion). Because coarse and fine granularities emphasize different information, the method adopts a two-level fusion strategy: for shallow features containing spatial detail and texture, a fusion rule based on content richness is designed; for deep semantic features carrying discriminative content, a least-squares semantic feature mapping fusion rule is designed, which seeks the optimal feature mapping so as to preserve the unique information of infrared images as much as possible. On this basis, a multi-scale enhancement module is designed to further strengthen the contextual correlation of the fused semantic features; the module processes the fused semantic features in parallel with several dilated convolutions of different dilation rates, learning information at different scales. Finally, guided layer by layer by the fused shallow detail information, the final fused image is reconstructed from coarse to fine. Subjective and objective experiments on the standard TNO and RoadScene datasets, with comparative analysis against traditional and recent deep learning fusion methods, show that the proposed method effectively preserves and fuses the complementary information of infrared and visible images, achieving good results in both visual perception and quantitative metrics.
Keywords: feature mapping; semantics; least squares; multi-scale; infrared and visible; image fusion
5. Comparative Serum Pharmacochemistry Analysis of Fructus Ligustri Lucidi Before and After Wine Steaming Based on UPLC-Orbitrap Fusion Lumos Tribrid-MS
Authors: 刘昊霖, 郑历史, 孙淑仃, 赵迪, 李焕茹, 冯素香. 《中华中医药学刊》, PKU Core, 2026, No. 1, pp. 175-186, I0027 (13 pages)
Objective: To comparatively analyze, by ultra performance liquid chromatography-Orbitrap Fusion Lumos Tribrid mass spectrometry (UPLC-Orbitrap Fusion Lumos Tribrid-MS), the components migrating into serum after rats were gavaged with aqueous extracts of Fructus Ligustri Lucidi (FLL) and wine-steamed FLL. Methods: Male Sprague-Dawley (SD) rats were randomly divided into a blank group, an FLL group (10.8 g·kg^(-1)·d^(-1)), and a wine-steamed FLL group (10.8 g·kg^(-1)·d^(-1)), six rats per group. The treatment groups were gavaged with the respective aqueous extracts, and the blank group with an equal volume of purified water, once in the morning and once in the evening for 5 consecutive days. One hour after the last dose, blood was collected from the abdominal aorta and serum samples were prepared. Chromatography used an Accucore™ C_(18) column (100 mm × 2.1 mm, 2.6 μm) with acetonitrile (A)-0.1% aqueous formic acid (B) as the mobile phase under gradient elution (0-5 min, 95%B→85%B; 5-10 min, 85%B→73%B; 10-24 min, 73%B→15%B), a flow rate of 0.2 mL·min^(-1), an injection volume of 5 μL, and scanning in positive and negative ion modes over m/z 120-1200. Using Compound Discoverer 3.3 software, the MS data, and the literature, the prototype components and metabolites of FLL and wine-steamed FLL absorbed into blood were identified; multivariate statistical analysis was used to compare the differential components between the two medicated sera. Results: 64 serum components were identified in rats given the FLL extract, including 40 prototype components and 24 metabolites; 57 were identified for wine-steamed FLL, including 35 prototypes and 22 metabolites. The prototype components were mainly phenylethanoid glycosides, iridoids, triterpenes, and flavonoids; the main metabolic pathways included hydroxylation, methylation, and glucuronidation. With variable importance in projection (VIP) > 1 and Student's t test P < 0.05, 12 differential serum components were screened out, including specnuezhenide (特女贞苷) and 女贞苷酸, comprising 7 prototype components and 5 metabolites. Conclusion: The serum-migrating components of FLL change markedly after wine steaming, providing a theoretical basis for clarifying the pharmacodynamic material basis of raw and wine-steamed FLL.
Keywords: Fructus Ligustri Lucidi; processing (Paozhi); serum pharmacochemistry; UPLC-Orbitrap Fusion Lumos Tribrid-MS; multivariate statistical analysis
6. Construction and evaluation of a predictive model for the degree of coronary artery occlusion based on adaptive weighted multi-modal fusion of traditional Chinese and western medicine data (Cited by 2)
Authors: Jiyu ZHANG, Jiatuo XU, Liping TU, Hongyuan FU. 《Digital Chinese Medicine》, 2025, No. 2, pp. 163-173 (11 pages)
Objective: To develop a non-invasive predictive model for coronary artery stenosis severity based on adaptive multi-modal integration of traditional Chinese and western medicine data. Methods: Clinical indicators, echocardiographic data, traditional Chinese medicine (TCM) tongue manifestations, and facial features were collected from patients who underwent coronary computed tomography angiography (CTA) in the Cardiac Care Unit (CCU) of Shanghai Tenth People's Hospital between May 1, 2023 and May 1, 2024. An adaptive weighted multi-modal data fusion (AWMDF) model based on deep learning was constructed to predict the severity of coronary artery stenosis. The model was evaluated using metrics including accuracy, precision, recall, F1 score, and the area under the receiver operating characteristic (ROC) curve (AUC). Further performance assessment was conducted through comparisons with six ensemble machine learning methods, data ablation, model component ablation, and various decision-level fusion strategies. Results: A total of 158 patients were included in the study. The AWMDF model achieved excellent predictive performance (AUC = 0.973, accuracy = 0.937, precision = 0.937, recall = 0.929, and F1 score = 0.933). Compared with model ablation, data ablation experiments, and various traditional machine learning models, the AWMDF model demonstrated superior performance. Moreover, the adaptive weighting strategy outperformed alternative approaches, including simple weighting, averaging, voting, and fixed-weight schemes. Conclusion: The AWMDF model demonstrates potential clinical value in the non-invasive prediction of coronary artery disease and could serve as a tool for clinical decision support.
Keywords: Coronary artery disease; Deep learning; Multi-modal; Clinical prediction; Traditional Chinese medicine diagnosis
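The adaptive weighting idea above can be illustrated at the decision level: softmax-normalize a per-modality reliability score into fusion weights, then take the weighted average of the modality outputs. In AWMDF the weights are learned end-to-end inside a deep network; here the scores are fixed constants, so this is only a sketch of the weighting arithmetic, with made-up numbers.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def adaptive_fuse(modality_probs, modality_scores):
    """Weight each modality's class-probability vector by a softmax over
    its reliability score (here: given), then sum the weighted vectors."""
    weights = softmax(modality_scores)
    n_classes = len(modality_probs[0])
    fused = [0.0] * n_classes
    for w, probs in zip(weights, modality_probs):
        for k in range(n_classes):
            fused[k] += w * probs[k]
    return weights, fused

# Three hypothetical modalities (clinical, tongue, face), two classes:
probs = [[0.8, 0.2], [0.6, 0.4], [0.3, 0.7]]
scores = [2.0, 1.0, 0.5]         # higher = more reliable (assumed given)
weights, fused = adaptive_fuse(probs, scores)
```

Because the weights sum to one and each modality vector is a distribution, the fused vector is again a valid class distribution, with the most reliable modality contributing most.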
7. Research Progress on Multi-Modal Fusion Object Detection Algorithms for Autonomous Driving: A Review
Authors: Peicheng Shi, Li Yang, Xinlong Dong, Heng Qi, Aixi Yang. 《Computers, Materials & Continua》, 2025, No. 6, pp. 3877-3917 (41 pages)
As the number and complexity of sensors in autonomous vehicles continue to rise, multimodal fusion-based object detection algorithms are increasingly being used to detect 3D environmental information, significantly advancing the development of perception technology in autonomous driving. To further promote the development of fusion algorithms and improve detection performance, this paper discusses the advantages and recent advancements of multimodal fusion-based object detection algorithms. Starting from single-modal sensor detection, the paper provides a detailed overview of typical sensors used in autonomous driving and introduces object detection methods based on images and point clouds. Image-based detection methods are categorized into monocular and binocular detection according to input type. Point cloud-based detection methods are classified into projection-based, voxel-based, point cluster-based, pillar-based, and graph structure-based approaches according to the technical pathways for processing point cloud features. Additionally, multimodal fusion algorithms are divided into Camera-LiDAR fusion, Camera-Radar fusion, Camera-LiDAR-Radar fusion, and other sensor fusion methods based on the types of sensors involved. Furthermore, the paper identifies five key future research directions in this field, aiming to provide insights for researchers engaged in multimodal fusion-based object detection algorithms and to encourage broader attention to the research and application of multimodal fusion-based object detection.
Keywords: multi-modal fusion; 3D object detection; deep learning; autonomous driving
8. VIFusion: A Complementary Fusion Model for Visible and Infrared Images in Low-Light Scenes
Authors: 张晓滨, 牛燕皓, 陈金广. 《西安工程大学学报》, 2026, No. 1, pp. 126-135 (10 pages)
To address the loss of temporal information, channel redundancy in feature maps, and blurred details that afflict visible-infrared image fusion algorithms in low-light scenes, this paper proposes VIFusion, a complementary fusion model for visible and infrared images in low-light scenes built on the Vision Transformer framework. The model improves fused image quality and information expressiveness through a dual temporal feature aggregation (DTFA) module, a feature refinement feedforward network (FRFN) module, and a spatial channel attention (SCA) module. The DTFA module uses grouped convolutions to maintain the integrity of the feature space and then performs temporal alignment and fusion, enhancing temporal consistency and reducing information loss. The FRFN module refines the extracted features layer by layer to reduce channel redundancy. The SCA module adaptively models the spatial and channel relationships of the image to highlight key features, improving information expressiveness and enhancing details such as edges and textures. Experimental results show that on the LLVIP dataset, VIFusion outperforms traditional methods and deep learning models (e.g., GTF, TarDAL, DenseFuse) on objective metrics (AG, CC, EN, SF, SSIM, VIF, MI). In generalization experiments on the TNO dataset, the generated fused images also perform better in detail preservation and target saliency. VIFusion provides an efficient and practical solution for multi-modal image fusion in low-light scenes.
Keywords: dual temporal feature aggregation; feature refinement feedforward network; spatial channel attention; image fusion
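Among the objective metrics listed above, information entropy (EN) is the simplest to reproduce: the Shannon entropy of the fused image's intensity histogram, where higher EN means more information retained. A stdlib-only sketch on a flat list of 8-bit pixels:

```python
import math

def image_entropy(pixels, levels=256):
    """Shannon entropy (EN) of a grayscale image: -sum p * log2(p)
    over the normalized intensity histogram."""
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist if c > 0)

flat = [128] * 64                  # constant image: no information, EN = 0
mixed = [0, 64, 128, 192] * 16     # four equally likely levels: EN = 2 bits
print(image_entropy(flat), image_entropy(mixed))
```

The other metrics in the list (SSIM, VIF, MI, ...) need reference images or structural models; EN is the only one computable from the fused image alone.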
9. Multi-Modal Named Entity Recognition with Auxiliary Visual Knowledge and Word-Level Fusion
Authors: Huansha Wang, Ruiyang Huang, Qinrang Liu, Xinghao Wang. 《Computers, Materials & Continua》, 2025, No. 6, pp. 5747-5760 (14 pages)
Multi-modal Named Entity Recognition (MNER) aims to better identify meaningful textual entities by integrating information from images. Previous work has focused on extracting visual semantics at a fine-grained level, or on obtaining entity-related external knowledge from knowledge bases or Large Language Models (LLMs). However, these approaches ignore the poor semantic correlation between the visual and textual modalities in MNER datasets and do not explore different multi-modal fusion approaches. In this paper, we present MMAVK, a multi-modal named entity recognition model with auxiliary visual knowledge and word-level fusion, which leverages a Multi-modal Large Language Model (MLLM) as an implicit knowledge base and extracts vision-based auxiliary knowledge from the image for more accurate and effective recognition. Specifically, we propose vision-based auxiliary knowledge generation, which guides the MLLM with target-specific prompts to extract external knowledge derived exclusively from images to aid entity recognition, thus avoiding the redundant recognition and cognitive confusion caused by processing image-text pairs simultaneously. Furthermore, we employ a word-level multi-modal fusion mechanism that fuses the extracted external knowledge with each word embedding produced by the transformer-based encoder. Extensive experimental results demonstrate that MMAVK outperforms or equals state-of-the-art methods on two classical MNER datasets, even when the large models employed have significantly fewer parameters than other baselines.
Keywords: multi-modal named entity recognition; large language model; multi-modal fusion
10. Multi-Modal Pre-Synergistic Fusion Entity Alignment Based on Mutual Information Strategy Optimization
Authors: Huayu Li, Xinxin Chen, Lizhuang Tan, Konstantin I. Kostromitin, Athanasios V. Vasilakos, Peiying Zhang. 《Computers, Materials & Continua》, 2025, No. 11, pp. 4133-4153 (21 pages)
To address the challenge of missing modal information in entity alignment and to mitigate the information loss or bias arising from modal heterogeneity during fusion, while also capturing shared information across modalities, this paper proposes a Multi-modal Pre-synergistic Entity Alignment model based on Cross-modal Mutual Information Strategy Optimization (MPSEA). The model first employs independent encoders to process multi-modal features, including text, images, and numerical values. Next, a multi-modal pre-synergistic fusion mechanism integrates graph structural and visual modal features into the textual modality as preparatory information. This pre-fusion strategy enables unified perception of heterogeneous modalities at the model's initial stage, reducing discrepancies during the fusion process. Finally, using cross-modal deep perception reinforcement learning, the model achieves adaptive multilevel feature fusion between modalities, supporting the learning of more effective alignment strategies. Extensive experiments on multiple public datasets show that MPSEA achieves gains of up to 7% in Hits@1 and 8.2% in MRR on the FBDB15K dataset, and up to 9.1% in Hits@1 and 7.7% in MRR on the FBYG15K dataset, compared to existing state-of-the-art methods. These results confirm the effectiveness of the proposed model.
Keywords: Knowledge graph; multi-modal; entity alignment; feature fusion; pre-synergistic fusion
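The cross-modal mutual information that MPSEA's strategy optimization targets can be illustrated with the discrete plug-in estimator over paired samples (the model itself works on learned embeddings; this sketch only shows the definition I(X;Y) = Σ p(x,y)·log2[p(x,y)/(p(x)p(y))]):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))   # joint counts
    px = Counter(xs)             # marginal counts of X
    py = Counter(ys)             # marginal counts of Y
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

a = [0, 0, 1, 1, 0, 1, 0, 1]
c = [0, 0, 0, 0, 1, 1, 1, 1]   # empirically independent of a in this sample
print(mutual_information(a, a))  # identical variables: I = H(X) = 1 bit
print(mutual_information(a, c))  # independent variables: I = 0
```

Identical variables share all their entropy (1 bit for a balanced binary variable), while an empirically independent pairing shares none; cross-modal objectives push aligned entity pairs toward the former regime.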
11. Global-local feature optimization based RGB-IR fusion object detection on drone view (Cited by 1)
Authors: Zhaodong CHEN, Hongbing JI, Yongquan ZHANG. 《Chinese Journal of Aeronautics》, 2026, No. 1, pp. 436-453 (18 pages)
Visible and infrared (RGB-IR) fusion object detection plays an important role in security, disaster relief, and related fields. In recent years, deep-learning-based RGB-IR fusion detection methods have been developing rapidly, but they still struggle with the complex and changing scenarios captured by drones, mainly for two reasons: (A) RGB-IR fusion detectors are susceptible to inferior inputs that degrade performance and stability; (B) they are susceptible to redundant features that reduce accuracy and efficiency. In this paper, an innovative RGB-IR fusion detection framework based on global-local feature optimization, named GLFDet, is proposed to improve the detection performance and efficiency for drone-captured objects. The key components of GLFDet are a Global Feature Optimization (GFO) module, a Local Feature Optimization (LFO) module, and a Channel Separation Fusion (CSF) module. Specifically, GFO calculates the information content of the input image in the frequency domain and optimizes the features holistically. Then, LFO dynamically selects high-value features and filters out low-value features before fusion, which significantly improves fusion efficiency. Finally, CSF fuses the RGB and IR features across the corresponding channels, which avoids rearranging the channel relationships and enhances model stability. Extensive experimental results show that the proposed method achieves the best performance on three popular RGB-IR datasets: DroneVehicle, VEDAI, and LLVIP. In addition, GLFDet is more lightweight than other comparable models, making it more appealing for edge devices such as drones. The code is available at https://github.com/laochen330/GLFDet.
Keywords: Object detection; Deep learning; RGB-IR fusion; Drones; Global feature; Local feature
12. Bearing Fault Diagnosis Based on Multimodal Fusion GRU and Swin-Transformer
Authors: Yingyong Zou, Yu Zhang, Long Li, Tao Liu, Xingkui Zhang. 《Computers, Materials & Continua》, 2026, No. 1, pp. 1587-1610 (24 pages)
Fault diagnosis of rolling bearings is crucial for ensuring the stable operation of mechanical equipment and production safety in industrial environments. However, due to the nonlinearity and non-stationarity of collected vibration signals, single-modal methods struggle to capture fault features fully. This paper proposes a rolling bearing fault diagnosis method based on multi-modal information fusion. The method first employs the Hippopotamus Optimization (HO) algorithm to optimize the number of modes in Variational Mode Decomposition (VMD) for optimal decomposition performance. It combines Convolutional Neural Networks (CNN) and Gated Recurrent Units (GRU) to extract temporal features from one-dimensional time-series signals. Meanwhile, the Markov Transition Field (MTF) is used to transform one-dimensional signals into two-dimensional images for spatial feature mining; the effectiveness of images generated from different parameter combinations is compared through visualization to determine the optimal parameter configuration. A multi-modal network (GSTCN) is constructed by integrating the Swin-Transformer and the Convolutional Block Attention Module (CBAM), where the attention module enhances fault features. Finally, the fault features extracted from the different modalities are deeply fused and fed into a fully connected layer to complete fault classification. Experimental results show that the GSTCN model achieves an average diagnostic accuracy of 99.5% across three datasets, significantly outperforming existing comparison methods. This demonstrates that the proposed model has high diagnostic precision and good generalization ability, providing an efficient and reliable solution for rolling bearing fault diagnosis.
Keywords: Multi-modal; GRU; Swin-Transformer; CBAM; CNN; feature fusion
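The Markov Transition Field used above starts from a bin-quantized Markov transition matrix of the 1-D signal; the MTF then spreads these probabilities over all pairs of time steps to form an image. A sketch of that first step, using equal-width bins as a simplifying assumption (MTF implementations commonly use quantile bins):

```python
def transition_matrix(signal, n_bins=4):
    """Quantize a 1-D signal into equal-width bins, then estimate the
    Markov transition matrix W[i][j] = P(next bin = j | current bin = i)."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / n_bins or 1.0   # avoid div-by-zero for flat signals
    bins = [min(int((v - lo) / width), n_bins - 1) for v in signal]
    counts = [[0] * n_bins for _ in range(n_bins)]
    for cur, nxt in zip(bins, bins[1:]):
        counts[cur][nxt] += 1
    matrix = []
    for row in counts:                  # row-normalize the counts
        total = sum(row)
        matrix.append([c / total for c in row] if total else [0.0] * n_bins)
    return matrix

# A slowly rising ramp mostly transitions to the same or the next bin:
W = transition_matrix([0.0, 0.1, 0.2, 0.4, 0.5, 0.7, 0.8, 1.0], n_bins=4)
```

Each occupied row of `W` is a probability distribution; for the ramp above the mass sits on and just above the diagonal, which is the structure the MTF image makes visible to a 2-D CNN.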
13. Effect of Addition of Er-TiB_(2) Dual-Phase Nanoparticles on Strength-Ductility of Al-Mn-Mg-Sc-Zr Alloy Prepared by Laser Powder Bed Fusion
Authors: Li Suli, Zhang Yanze, Yang Mengjia, Zhang Longbo, Xie Qidong, Yang Laixia, Mao Feng, Chen Zhen. 《稀有金属材料与工程》, PKU Core, 2026, No. 1, pp. 9-17 (9 pages)
A dual-phase synergistic enhancement method was adopted to strengthen an Al-Mn-Mg-Sc-Zr alloy fabricated by laser powder bed fusion (LPBF), leveraging the unique advantages of Er and TiB_(2). Spherical powders of the 0.5wt% Er-1wt% TiB_(2)/Al-Mn-Mg-Sc-Zr nanocomposite were prepared using a vacuum homogenization technique, and the density of samples prepared through the LPBF process reached 99.8%. The strengthening and toughening mechanisms of Er-TiB_(2) were investigated. The results show that Al_(3)Er diffraction peaks are detected by X-ray diffraction analysis, and texture strength decreases according to electron backscatter diffraction results. The added Er and TiB_(2) nano-reinforcing phases act as heterogeneous nucleation sites during LPBF forming, hindering grain growth and effectively refining the grains. After incorporating the Er-TiB_(2) dual-phase nano-reinforcing phases, the tensile strength and elongation at break of the LPBF-deposited samples reach 550 MPa and 18.7%, which are 13.4% and 26.4% higher than those of the matrix material, respectively.
Keywords: Al-Mn-Mg-Sc-Zr alloy; laser powder bed fusion; nano-reinforcing phase; synergistic enhancement
14. Lightweight Design and Additive-Subtractive Manufacturing of a UAV Frame Based on Fusion 360
Author: 张磊. 《现代制造技术与装备》, 2026, No. 2, pp. 22-25 (4 pages)
With the development of the low-altitude economy, lightweight design of UAV frames has become an important research direction. This paper proposes a lightweight design and manufacturing method based on Fusion 360, taking a UAV frame as an example. First, the material and boundary conditions of the original model are set and the initial strength of the frame is analyzed; next, generative design optimization is performed to reconstruct a lightweight geometric model, whose strength and stability are then verified; finally, a hybrid additive-subtractive process of "additive overall forming plus subtractive finishing of key surfaces" is proposed to verify the manufacturability of the optimized part. The study shows that, while meeting structural strength and stability requirements, the optimized frame is 54.2% lighter, the frame assembly is reduced from 15 parts to 1, and manufacturing cost and efficiency are greatly improved. Moreover, after the lightweight design, the maneuverability of the UAV is significantly enhanced.
Keywords: frame; lightweight design; Fusion 360; strength check; hybrid additive-subtractive manufacturing
15. Theory of laser-assisted nuclear fusion
Authors: Jin-Tao Qi, Zhao-Yan Zhou, Xu Wang. 《Nuclear Science and Techniques》, 2026, No. 3, pp. 153-165 (13 pages)
The process of nuclear fusion in the presence of a laser field was theoretically analyzed. The analysis is applicable to most fusion reactions and to different types of currently available intense lasers, from X-ray free-electron lasers to solid-state near-infrared lasers. Laser fields were shown to enhance fusion yields, and the mechanism of this enhancement was explained. Low-frequency lasers are more efficient at enhancing fusion than high-frequency lasers. The calculation results show enhancements of fusion yields by orders of magnitude with currently available intense low-frequency laser fields. The temperature requirement for controlled nuclear fusion may be reduced with the aid of intense laser fields.
Keywords: Nuclear fusion; Intense lasers; Enhancement of fusion
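As background on why a field can change the yield: field-free fusion rates are controlled by Coulomb-barrier tunneling, conventionally written with the Gamow factor (standard textbook material, not taken from the paper above):

```latex
% Barrier penetrability for two nuclei of charges Z_1, Z_2 and reduced mass \mu:
P(E) \approx e^{-2\pi\eta},
\qquad
\eta = \frac{Z_1 Z_2 e^2}{4\pi\varepsilon_0\,\hbar v},
\qquad
v = \sqrt{2E/\mu},
% so the fusion cross section is usually factorized with the astrophysical S-factor:
\sigma(E) = \frac{S(E)}{E}\, e^{-2\pi\eta}.
```

Because the exponent scales as $1/v$, any mechanism that effectively raises the relative energy or deforms the barrier acts on the rate exponentially, which is consistent with the orders-of-magnitude enhancements the abstract reports.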
16. Multi-Modality and Feature Fusion-Based COVID-19 Detection Through Long Short-Term Memory
Authors: Noureen Fatima, Rashid Jahangir, Ghulam Mujtaba, Adnan Akhunzada, Zahid Hussain Shaikh, Faiza Qureshi. 《Computers, Materials & Continua》, SCIE EI, 2022, No. 9, pp. 4357-4374 (18 pages)
The Coronavirus Disease 2019 (COVID-19) pandemic poses worldwide challenges surpassing the boundaries of country, religion, race, and economy. The current benchmark method for the detection of COVID-19 is reverse transcription polymerase chain reaction (RT-PCR) testing. Although this testing method is sufficiently accurate for diagnosing COVID-19, it is time-consuming, expensive, expert-dependent, and violates social distancing. In this paper, we propose an effective multi-modality and feature fusion-based (MMFF) COVID-19 detection technique using deep neural networks. For multi-modality, we utilized cough, breathing, and speech sound samples of healthy subjects as well as COVID-19 patients from the publicly available COSWARA dataset. Several useful features were extracted from these modalities and fed as input to long short-term memory recurrent neural networks for classification. An extensive set of experimental analyses was performed to evaluate the performance of the proposed approach. The experimental results show that our approach outperformed four recently published baseline approaches. We believe that the proposed technique will assist potential users to diagnose COVID-19 without the intervention of any expert in a minimal amount of time.
Keywords: COVID-19 detection; long short-term memory; feature fusion; deep learning; audio classification
17. Green and mild synthesis of Ca-MOF/COF functionalized silica microspheres in an acid-base tunable deep eutectic solvent for multi-mode chromatography
Authors: Yuanfei Liu, Wanjiao Wei, Xu Liu, Rui Hua, Yanjuan Liu, Yuefei Zhang, Wei Chen, Sheng Tang. 《Chinese Chemical Letters》, 2026, No. 1, pp. 547-551 (5 pages)
Metal-organic frameworks (MOFs), assembled with coordination bonds, have the disadvantage of poor stability, which limits their application as stationary phases, while covalent organic frameworks (COFs), assembled through covalent bonds, exhibit excellent structural stability. It has been shown that stationary phases prepared by combining MOF and COF can make up for the poor stability of MOF@SiO_(2), and MOF/COF composites have superior chromatographic separation performance. However, the traditional methods for preparing COF/MOF-based stationary phases generally rely on solvothermal synthesis. In this study, a green and low-cost synthesis method was proposed for preparing a MOF/COF@SiO_(2) stationary phase. Firstly, COF@SiO_(2) was prepared in a choline chloride/ethylene glycol based deep eutectic solvent (DES). Secondly, another acid-base tunable DES, prepared by mixing p-toluenesulfonic acid (PTSA) and 2-methylimidazole in different proportions, was introduced as both the reaction solvent and a reactant for the rapid synthesis of MOF/COF@SiO_(2). Compared with the toxic transition-metal-based MOFs selected in most previous studies, a lightweight and non-toxic s-block metal (calcium) based MOF was employed in this study. PTSA and calcium form a calcium/oxygen-containing organic acid framework in the acidic DES, which assembles with terephthalic acid dissolved in the basic DES to form the MOF. The strong hydrogen bonding of the DES facilitates rapid assembly of the Ca-MOF. The obtained Ca-MOF/COF@SiO_(2) can be used in multi-mode chromatography to efficiently separate multiple isomeric/hydrophilic/hydrophobic analytes. The synthesis of Ca-MOF/COF@SiO_(2) is green and mild; in particular, the acid-base tunable DES promotes the rapid synthesis of non-toxic Ca-MOF/COF@silica composites, offering an innovative approach to greenly synthesizing novel MOF/COF stationary phases and extending their applications in the field of chromatography.
Keywords: Metal organic framework; Covalent organic framework; Deep eutectic solvent; Silica composites; Multi-mode chromatography
18. MDGET-MER: Multi-Level Dynamic Gating and Emotion Transfer for Multi-Modal Emotion Recognition
Authors: Musheng Chen, Qiang Wen, Xiaohong Qiu, Junhua Wu, Wenqing Fu. 《Computers, Materials & Continua》, 2026, No. 3, pp. 872-893 (22 pages)
In multi-modal emotion recognition, excessive reliance on historical context often impedes the detection of emotional shifts, while modality heterogeneity and unimodal noise limit recognition performance. Existing methods struggle to dynamically adjust cross-modal complementary strength to optimize fusion quality, and they lack effective mechanisms to model the dynamic evolution of emotions. To address these issues, we propose a multi-level dynamic gating and emotion transfer framework for multi-modal emotion recognition. A dynamic gating mechanism is applied across unimodal encoding, cross-modal alignment, and emotion transfer modeling, substantially improving noise robustness and feature alignment. First, we construct a unimodal encoder based on gated recurrent units and feature-selection gating to suppress intra-modal noise and enhance contextual representation. Second, we design a gated-attention cross-modal encoder that dynamically calibrates the complementary contributions of the visual and audio modalities to the dominant textual features and eliminates redundant information. Finally, we introduce a gated enhanced emotion transfer module that explicitly models the temporal dependence of emotional evolution in dialogues via transfer gating and optimizes continuity modeling with a contrastive learning loss. Experimental results demonstrate that the proposed method outperforms state-of-the-art models on the public MELD and IEMOCAP datasets.
Keywords: multi-modal emotion recognition; dynamic gating; emotion transfer module; cross-modal dynamic alignment; noise robustness
19. Subtle Micro-Tremor Fusion: A Cross-Modal AI Framework for Early Detection of Parkinson's Disease from Voice and Handwriting Dynamics
Authors: H. Ahmed, Naglaa E. Ghannam, H. Mancy, Esraa A. Mahareek. 《Computer Modeling in Engineering & Sciences》, 2026, No. 2, pp. 1070-1099 (30 pages)
Parkinson's disease remains a major clinical challenge in terms of early detection, especially during its prodromal stage, when symptoms are not evident or distinct. To address this problem, we propose a new deep learning-based approach for detecting Parkinson's disease during the prodromal stage, before overt symptoms develop. We used five publicly accessible datasets, including UCI Parkinson's Voice, Spiral Drawings, PaHaW, NewHandPD, and PPMI, and implemented a dual-stream CNN-BiLSTM architecture with Fisher-weighted feature merging and SHAP-based explanation. The model achieved superior performance: an accuracy of 98.2%, an F1-score of 0.981, and an AUC of 0.991 on the UCI Voice dataset. Its performance on the remaining datasets was comparable, with accuracy improvements of up to 2-7 percentage points over existing strong models such as CNN-RNN-MLP, ILN-GNet, and CASENet. Across the evidence, the findings support the diagnostic promise of micro-tremor assessment and demonstrate that combining temporal and spatial features in a multi-modal approach can provide an effective and scalable platform for an early, interpretable PD screening system.
Keywords: early Parkinson diagnosis; explainable AI (XAI); feature-level fusion; handwriting analysis; micro-tremor detection; multimodal fusion; Parkinson's disease; prodromal detection; voice signal processing
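One common reading of "Fisher-weighted feature merging" is to weight each feature by its Fisher discriminant score before fusing the modality streams; whether the paper uses exactly this ratio is an assumption. A stdlib-only sketch of the score for a two-class problem:

```python
def fisher_score(feature_by_class):
    """Fisher score of one feature from per-class value lists:
    between-class scatter over within-class scatter,
    sum_c n_c*(mu_c - mu)^2  /  sum_c n_c*var_c."""
    all_vals = [v for vals in feature_by_class for v in vals]
    mu = sum(all_vals) / len(all_vals)          # overall mean
    between, within = 0.0, 0.0
    for vals in feature_by_class:
        n = len(vals)
        m = sum(vals) / n                       # class mean
        var = sum((v - m) ** 2 for v in vals) / n
        between += n * (m - mu) ** 2
        within += n * var
    return between / within if within else float("inf")

# Toy data: feature A separates the two classes well, feature B barely does.
score_a = fisher_score([[1.0, 1.1, 0.9], [3.0, 3.1, 2.9]])
score_b = fisher_score([[1.0, 2.0, 3.0], [1.1, 2.1, 3.1]])
```

Normalizing these scores into per-feature weights then lets discriminative features (like A) dominate the merged representation while near-uninformative ones (like B) are damped.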