At present, most experimental teaching systems lack the guidance of an operator, and thus users often do not know what to do during an experiment. The user load is therefore increased, and the learning efficiency of the students is decreased. To solve the problem of insufficient system interactivity and guidance, an experimental navigation system based on multi-mode fusion is proposed in this paper. The system first obtains user information by sensing the hardware devices, intelligently perceives the user's intention and the progress of the experiment according to the information acquired, and finally carries out a multi-modal intelligent navigation process for users. As an innovative aspect of this study, an intelligent multi-mode navigation system is used to guide users in conducting experiments, thereby reducing the user load and enabling users to complete their experiments effectively. The results prove that this system can guide users in completing their experiments, effectively reduce the user load during the interaction process, and improve efficiency.
OBJECTIVE: To develop an automated system for identifying and classifying constitution types in Traditional Chinese Medicine (TCM) by leveraging multi-model fusion algorithms. METHODS: A condensed version of a physical information collection form was designed to facilitate efficient data acquisition. The collected data were analyzed using a multi-model fusion approach, which integrated several machine learning techniques. These included support vector machines, Naive Bayes, decision trees, random forests, logistic regression, multilayer perceptrons, K-nearest neighbors, gradient boosting, adaptive ensemble learning, and recurrent neural networks. A soft voting strategy was used to combine the predictive outputs of each model, enabling the selection of the most effective model combination. RESULTS: The classification models demonstrated consistent and robust performance across most TCM constitution types when enhanced by the multi-model fusion strategy. In particular, high levels of accuracy, precision, recall, and F1-score were achieved for constitution types such as Yang deficiency, Qi deficiency, and Qi stagnation. However, the classification performance for the Yin deficiency constitution was relatively lower, indicating the need for further refinement and optimization in future research. CONCLUSION: This study introduces a novel, automated method for classifying TCM constitution types through the application of multi-model fusion algorithms. The approach simplifies the complex task of constitution identification while offering a practical and theoretical framework for the intelligent diagnosis of TCM body types. The findings have the potential to enhance personalized health management and support clinical decision-making in TCM diagnosis and treatment.
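As a rough illustration of the soft-voting step described above (not the paper's actual models or data — the probability matrices below are made-up placeholders), the per-model class probabilities can be averaged and the highest-scoring class selected:

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Combine per-model class-probability matrices by (weighted) averaging.

    prob_list: list of (n_samples, n_classes) arrays, one per model.
    Returns the index of the highest averaged probability for each sample.
    """
    probs = np.stack(prob_list)                   # (n_models, n_samples, n_classes)
    avg = np.average(probs, axis=0, weights=weights)
    return avg.argmax(axis=1)

# Three hypothetical classifiers scoring two samples over three constitution types
m1 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
m2 = np.array([[0.5, 0.4, 0.1], [0.1, 0.7, 0.2]])
m3 = np.array([[0.7, 0.2, 0.1], [0.3, 0.3, 0.4]])

print(soft_vote([m1, m2, m3]))  # → [0 1]
```

Searching over subsets of models with such a combiner is one simple way to realize the "selection of the most effective model combination" mentioned above.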
[Objectives] This study was conducted to achieve rapid and accurate detection of protein content in rice with a particle size of 1.0 mm. [Methods] A multi-model fusion strategy was proposed on the basis of Stacking ensemble learning. A base learner pool was constructed, containing Partial Least Squares (PLS), Support Vector Machine (SVM), Deep Extreme Learning Machine (DELM), Random Forest (RF), Gradient Boosting Decision Tree (GBDT), and Multilayer Perceptron (MLP). PLS, DELM, and Linear Regression (LR) were used as meta-learner candidates. Employing integer coding technology, systematic dynamic combinations of base learners and meta-learners were generated, resulting in a total of 40 non-repetitive fusion models. The optimal combination was selected through a comprehensive evaluation based on multiple assessment indicators. [Results] The combination "PLS-DELM-MLP-LR" (code 1367) achieved coefficients of determination of 0.9732 and 0.9780 on the validation set and independent test set, respectively, with relative root mean square errors of 2.35% and 2.36%, and residual predictive deviations of 6.1075 and 6.7479, respectively. [Conclusions] The Stacking fusion model significantly enhances the predictive accuracy and robustness of spectral quantitative analysis, providing an efficient and feasible solution for modeling complex agricultural product spectral data.
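A minimal sketch of the Stacking idea — base learners' predictions become the meta-learner's input features — using toy ridge regressors in place of the paper's PLS/SVM/DELM pool, and fitting the meta-learner on in-sample predictions for brevity (real stacking uses out-of-fold predictions to avoid leakage):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=60)

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: two instances serve as toy base learners."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w1 = ridge_fit(X, y, 0.1)
w2 = ridge_fit(X, y, 10.0)

# Level-1 meta-features: each base learner's prediction becomes one column
Z = np.column_stack([X @ w1, X @ w2, np.ones(len(X))])

# Linear-regression meta-learner combines the base predictions
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
y_hat = Z @ beta

r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))
```

The paper's integer coding of base/meta combinations amounts to enumerating which columns enter `Z` and which model plays the role of `beta`.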
Background: Medical imaging advancements are constrained by fundamental trade-offs between acquisition speed, radiation dose, and image quality, forcing clinicians to work with noisy, incomplete data. Existing reconstruction methods either compromise on accuracy with iterative algorithms or suffer from limited generalizability with task-specific deep learning approaches. Methods: We present LDM-PIR, a lightweight physics-conditioned diffusion multi-model for medical image reconstruction that addresses key challenges in magnetic resonance imaging (MRI), CT, and low-photon imaging. Unlike traditional iterative methods, which are computationally expensive, or task-specific deep learning approaches, which lack generalizability, LDM-PIR integrates three innovations: a physics-conditioned diffusion framework that embeds acquisition operators (Fourier/Radon transforms) and noise models directly into the reconstruction process; a multi-model architecture that unifies denoising, inpainting, and super-resolution via shared weight conditioning; and a lightweight design (2.1M parameters) enabling rapid inference (0.8 s/image on GPU). Through self-supervised fine-tuning with measurement consistency losses, the model adapts to new imaging modalities using fewer annotated samples. Results: LDM-PIR achieves state-of-the-art performance on fastMRI (peak signal-to-noise ratio (PSNR): 34.04 for single-coil / 31.50 for multi-coil) and on the Lung Image Database Consortium and Image Database Resource Initiative dataset (28.83 PSNR under Poisson noise). Clinical evaluations demonstrate superior preservation of anatomical structures, with SSIM improvements of 8.8% for single-coil and 4.36% for multi-coil MRI over uDPIR. Conclusion: LDM-PIR offers a flexible, efficient, and scalable solution for medical image reconstruction, addressing the challenges of noise, undersampling, and modality generalization. The model's lightweight design allows for rapid inference, while its self-supervised fine-tuning capability minimizes reliance on large annotated datasets, making it suitable for real-world clinical applications.
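The measurement-consistency idea — embedding the acquisition operator directly into reconstruction — can be sketched on a 1D toy problem, where the "acquisition operator" is a random Fourier-domain undersampling mask (an illustrative assumption, not LDM-PIR's actual operators, loss, or diffusion machinery):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
x_true = np.convolve(rng.normal(size=n), np.ones(8) / 8, mode="same")  # smooth signal

# Acquisition operator A: keep a random half of the Fourier coefficients
mask = rng.random(n) < 0.5
y = mask * np.fft.fft(x_true)          # undersampled measurements

def consistency_loss(x):
    """Squared residual between the re-measured estimate and the data."""
    r = mask * np.fft.fft(x) - y
    return np.sum(np.abs(r) ** 2) / n

# Plain gradient descent on the measurement-consistency loss
x = np.zeros(n)
for _ in range(50):
    grad = 2 * np.fft.ifft(mask * (mask * np.fft.fft(x) - y)).real
    x -= 0.4 * grad

print(consistency_loss(x) < 1e-6)  # → True
```

In the full method this data-consistency term regularizes a learned prior; here it is shown alone to make the role of the embedded operator concrete.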
Gait recognition is a key biometric for long-distance identification, yet its performance is severely degraded by real-world challenges such as varying clothing, carrying conditions, and changing viewpoints. While combining silhouette and skeleton data is a promising direction, effectively fusing these heterogeneous modalities and adaptively weighting their contributions in response to diverse conditions remains a central problem. This paper introduces GaitMAFF, a novel Multi-modal Adaptive Feature Fusion Network, to address this challenge. Our approach first transforms discrete skeleton joints into a dense SkeletonMap representation to align with silhouettes, then employs an attention-based module to dynamically learn the fusion weights between the two modalities. These fused features are processed by a powerful spatio-temporal backbone with Weighted Global-Local Feature Fusion Modules (WFFM) to learn a discriminative representation. Extensive experiments on the challenging CCPG and Gait3D datasets show that GaitMAFF achieves state-of-the-art performance, with an average Rank-1 accuracy of 84.6% on CCPG and 58.7% on Gait3D. These results demonstrate that our adaptive fusion strategy effectively integrates complementary multimodal information, significantly enhancing gait recognition robustness and accuracy in complex scenes and providing a practical solution for real-world applications.
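The attention-based adaptive weighting of two modalities can be sketched as follows; the projection `W`, bias `b`, and feature vectors are illustrative placeholders, not GaitMAFF's actual architecture or learned parameters:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def adaptive_fuse(f_sil, f_skel, W, b):
    """Weight two modality feature vectors by attention scores derived from them.

    f_sil, f_skel: (d,) silhouette / skeleton features.
    W: (2, 2d) score projection, b: (2,) bias — one scalar score per modality.
    """
    scores = W @ np.concatenate([f_sil, f_skel]) + b
    w = softmax(scores)                  # adaptive fusion weights, sum to 1
    return w[0] * f_sil + w[1] * f_skel, w

rng = np.random.default_rng(0)
d = 4
f_sil, f_skel = rng.normal(size=d), rng.normal(size=d)
W, b = 0.1 * rng.normal(size=(2, 2 * d)), np.zeros(2)

fused, w = adaptive_fuse(f_sil, f_skel, W, b)
print(w.sum())  # the two weights form a convex combination
```

Because the weights are computed from the inputs themselves, a degraded modality (e.g. an occluded silhouette) can be down-weighted per sample, which is the point of "adaptive" fusion.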
The fasteners employed in railway tracks are susceptible to defects arising from their intricate composition, and foreign objects are frequently observed on the track bed in an open environment. These two types of defects pose potential threats to high-speed trains, thus necessitating timely and accurate track inspection. The majority of extant automatic inspection methods rely on single visible-light data, and their algorithmic performance is degraded by complex environments. Furthermore, due to the single information dimension, the detection accuracy for similar, occluded, and small object categories is low. To address these issues, this paper proposes a track defect detection method based on dynamic multi-modal fusion and enhanced perception of challenging objects. First, in light of the variances in the representation dimensions of multimodal information, this paper proposes a dynamic weighted multi-modal feature fusion module. The fused multi-modal features are assigned weights and then multiplied with the extracted single-modal features at multiple levels, achieving adaptive adjustment of the response degree of the fusion features. Second, a novel stepwise multi-scale convolution feature aggregation module is proposed for challenging objects. The proposed method employs depthwise separable convolution and cross-scale aggregation operations over different receptive fields to enhance feature extraction and reuse, thereby reducing the progressive loss of effective information. Extensive experiments on the constructed RGBD dataset demonstrate the efficacy of the proposed method in comparison with eight established single-modal and multi-modal methods.
Metal organic framework (MOF) assembled with coordination bonds has the disadvantage of poor stability, which limits its application in the field of stationary phases, while covalent organic framework (COF) assembled through covalent bonds exhibits excellent structural stability. It has been shown that stationary phases prepared by combining MOF and COF can make up for the poor stability of MOF@SiO_(2), and that MOF/COF composites have superior chromatographic separation performance. However, the traditional methods for preparing COF/MOF-based stationary phases generally rely on solvothermal synthesis. In this study, a green and low-cost synthesis method was proposed for the preparation of a MOF/COF@SiO_(2) stationary phase. Firstly, COF@SiO_(2) was prepared in a choline chloride/ethylene glycol based deep eutectic solvent (DES). Secondly, another acid-base tunable DES, prepared by mixing p-toluenesulfonic acid (PTSA) and 2-methylimidazole in different proportions, was introduced as both the reaction solvent and a reactant for the rapid synthesis of MOF/COF@SiO_(2). Compared with the toxic transition-metal-based MOFs selected in most previous studies, a lightweight and non-toxic s-block metal (calcium) based MOF was employed in this study. PTSA and calcium form a calcium/oxygen-containing organic acid framework in the acidic DES, which assembles with terephthalic acid dissolved in the basic DES to form the MOF. The strong hydrogen bonding effect of the DES facilitates rapid assembly of the Ca-MOF. The obtained Ca-MOF/COF@SiO_(2) can be used in multi-mode chromatography to efficiently separate multiple isomeric/hydrophilic/hydrophobic analytes. The synthesis of Ca-MOF/COF@SiO_(2) is green and mild; in particular, the use of the acid-base tunable DES promotes the rapid synthesis of non-toxic Ca-MOF/COF@silica composites, which offers an innovative approach for the green synthesis of novel MOF/COF stationary phases and extends their applications in the field of chromatography.
The flow behavior of molten steel in the thin slab mold under high casting speed conditions was investigated, with a focus on the multi-mode continuous casting and rolling mold. A steel-slag two-phase flow model was established by numerical simulation, using large eddy simulation, the volume of fluid method, and magnetohydrodynamics. The maximum flow velocity and wave height at the steel-slag interface within the mold served as critical evaluation criteria for analyzing asymmetric flow under varying casting speeds and electromagnetic braking. The results indicate that the asymmetric flows within the mold do not occur synchronously. The severity of the asymmetric flow correlates with the velocity difference across the steel-slag interface, and a greater biased flow prolongs the time required to revert to a steady state. When the magnetic field intensity is set to 0.24 T and the magnetic pole is positioned 390 mm from the steel-slag interface, this configuration reduces the velocity of the steel-slag interface, thereby mitigating the asymmetric flow. Additionally, it diminishes the velocity, impact depth, and impact intensity of the jet on the narrow face, thus improving the distribution of velocity and turbulent kinetic energy within the mold. This configuration prolongs the time required for the steel-slag interface to transition from a stable state to its maximum velocity and shortens the time for the interface to return to stability from an unstable state. Moreover, it ensures the positional stability of the steel-slag interface, confining its position to within −3 mm.
Objective To develop a non-invasive predictive model for coronary artery stenosis severity based on adaptive multi-modal integration of traditional Chinese and western medicine data. Methods Clinical indicators, echocardiographic data, traditional Chinese medicine (TCM) tongue manifestations, and facial features were collected from patients who underwent coronary computed tomography angiography (CTA) in the Cardiac Care Unit (CCU) of Shanghai Tenth People's Hospital between May 1, 2023 and May 1, 2024. An adaptive weighted multi-modal data fusion (AWMDF) model based on deep learning was constructed to predict the severity of coronary artery stenosis. The model was evaluated using metrics including accuracy, precision, recall, F1 score, and the area under the receiver operating characteristic (ROC) curve (AUC). Further performance assessment was conducted through comparisons with six ensemble machine learning methods, data ablation, model component ablation, and various decision-level fusion strategies. Results A total of 158 patients were included in the study. The AWMDF model achieved excellent predictive performance (AUC = 0.973, accuracy = 0.937, precision = 0.937, recall = 0.929, and F1 score = 0.933). Compared with model ablation, data ablation experiments, and various traditional machine learning models, the AWMDF model demonstrated superior performance. Moreover, the adaptive weighting strategy outperformed alternative approaches, including simple weighting, averaging, voting, and fixed-weight schemes. Conclusion The AWMDF model demonstrates potential clinical value in the non-invasive prediction of coronary artery disease and could serve as a tool for clinical decision support.
As the number and complexity of sensors in autonomous vehicles continue to rise, multimodal fusion-based object detection algorithms are increasingly being used to detect 3D environmental information, significantly advancing the development of perception technology in autonomous driving. To further promote the development of fusion algorithms and improve detection performance, this paper discusses the advantages and recent advancements of multimodal fusion-based object detection algorithms. Starting from single-modal sensor detection, the paper provides a detailed overview of typical sensors used in autonomous driving and introduces object detection methods based on images and point clouds. Image-based detection methods are categorized into monocular detection and binocular detection based on their input types. Point cloud-based detection methods are classified into projection-based, voxel-based, point cluster-based, pillar-based, and graph structure-based approaches according to the technical pathways for processing point cloud features. Additionally, multimodal fusion algorithms are divided into Camera-LiDAR fusion, Camera-Radar fusion, Camera-LiDAR-Radar fusion, and other sensor fusion methods based on the types of sensors involved. Furthermore, the paper identifies five key future research directions in this field, aiming to provide insights for researchers engaged in multimodal fusion-based object detection algorithms and to encourage broader attention to the research and application of multimodal fusion-based object detection.
Multi-modal Named Entity Recognition (MNER) aims to better identify meaningful textual entities by integrating information from images. Previous work has focused on extracting visual semantics at a fine-grained level, or on obtaining entity-related external knowledge from knowledge bases or Large Language Models (LLMs). However, these approaches ignore the poor semantic correlation between visual and textual modalities in MNER datasets and do not explore different multi-modal fusion approaches. In this paper, we present MMAVK, a multi-modal named entity recognition model with auxiliary visual knowledge and word-level fusion, which leverages a Multi-modal Large Language Model (MLLM) as an implicit knowledge base and extracts vision-based auxiliary knowledge from the image for more accurate and effective recognition. Specifically, we propose vision-based auxiliary knowledge generation, which guides the MLLM to extract external knowledge exclusively derived from images to aid entity recognition by designing target-specific prompts, thus avoiding the redundant recognition and cognitive confusion caused by the simultaneous processing of image-text pairs. Furthermore, we employ a word-level multi-modal fusion mechanism to fuse the extracted external knowledge with each word embedding produced by the transformer-based encoder. Extensive experimental results demonstrate that MMAVK outperforms or equals state-of-the-art methods on two classical MNER datasets, even when the large models employed have significantly fewer parameters than those of other baselines.
To address the challenge of missing modal information in entity alignment and to mitigate the information loss or bias arising from modal heterogeneity during fusion, while also capturing shared information across modalities, this paper proposes a Multi-modal Pre-synergistic Entity Alignment model based on Cross-modal Mutual Information Strategy Optimization (MPSEA). The model first employs independent encoders to process multi-modal features, including text, images, and numerical values. Next, a multi-modal pre-synergistic fusion mechanism integrates graph structural and visual modal features into the textual modality as preparatory information. This pre-fusion strategy enables unified perception of heterogeneous modalities at the model's initial stage, reducing discrepancies during the fusion process. Finally, using cross-modal deep perception reinforcement learning, the model achieves adaptive multilevel feature fusion between modalities, supporting the learning of more effective alignment strategies. Extensive experiments on multiple public datasets show that MPSEA achieves gains of up to 7% in Hits@1 and 8.2% in MRR on the FBDB15K dataset, and up to 9.1% in Hits@1 and 7.7% in MRR on the FBYG15K dataset, compared with existing state-of-the-art methods. These results confirm the effectiveness of the proposed model.
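For reference, the Hits@k and MRR metrics reported above are computed from the rank of each query's true counterpart entity; the ranks below are hypothetical, not results from the paper:

```python
def hits_at_k(ranks, k):
    """Fraction of queries whose correct entity ranks within the top k."""
    return sum(r <= k for r in ranks) / len(ranks)

def mrr(ranks):
    """Mean reciprocal rank of the correct entity (ranks start at 1)."""
    return sum(1.0 / r for r in ranks) / len(ranks)

ranks = [1, 3, 2, 1, 10]     # rank of the true counterpart for five queries
print(hits_at_k(ranks, 1))   # → 0.4
print(round(mrr(ranks), 3))  # → 0.587
```

A "gain of up to 7% in Hits@1" thus means 7 more correct top-1 alignments per 100 query entities.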
To address the difficulties in fusing multi-mode sensor data for complex industrial machinery, an adaptive deep coupling convolutional auto-encoder (ADCCAE) fusion method was proposed. First, the multi-mode features extracted synchronously by the CCAE were stacked and fed to the multi-channel convolution layers for fusion. Then, the fused data were passed to the fully connected layers for compression and fed to the Softmax module for classification. Finally, the coupling loss function coefficients and the network parameters were optimized adaptively using the gray wolf optimization (GWO) algorithm. Experimental comparisons showed that the proposed ADCCAE fusion model was superior to existing models for multi-mode data fusion.
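A minimal sketch of the gray wolf optimization loop used here for parameter tuning — minimizing a simple sphere function rather than the paper's coupling-loss coefficients; the population size, iteration count, and bounds are illustrative assumptions:

```python
import numpy as np

def gwo(f, dim, n_wolves=20, iters=100, lb=-5.0, ub=5.0, seed=0):
    """Minimal gray wolf optimizer: wolves move toward the three best leaders."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))
    for t in range(iters):
        fitness = np.array([f(x) for x in X])
        alpha, beta, delta = X[np.argsort(fitness)[:3]]   # three best wolves
        a = 2 - 2 * t / iters                             # exploration decays to 0
        pulls = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random((2, n_wolves, dim))
            A, C = 2 * a * r1 - a, 2 * r2
            pulls.append(leader - A * np.abs(C * leader - X))
        X = np.clip(np.mean(pulls, axis=0), lb, ub)       # average of the three pulls
    fitness = np.array([f(x) for x in X])
    return X[fitness.argmin()], fitness.min()

best_x, best_f = gwo(lambda x: np.sum(x ** 2), dim=3)
print(round(best_f, 6))
```

In the ADCCAE setting, `f` would evaluate the network's validation loss for a candidate vector of coupling-loss coefficients and hyperparameters.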
Carbon dot (CD)-based composites have shown impressive performance in the fields of information encryption and sensing; however, it remains a great challenge to simultaneously implement multi-mode luminescence and room-temperature phosphorescence (RTP) detection in a single system, due to the formidable synthesis involved. Herein, a multifunctional composite, Eu&CDs@pRHO, has been designed by a co-assembly strategy and prepared via a facile calcination and impregnation treatment. Eu&CDs@pRHO exhibits intense fluorescence (FL) and RTP originating from two individual luminous centers: Eu^(3+) in the free pores and CDs in the interrupted structure of the RHO zeolite. Unique four-mode color outputs, including pink (Eu^(3+), ex. 254 nm), light violet (CDs, ex. 365 nm), blue (CDs, 254 nm off), and green (CDs, 365 nm off), can be realized, and on this basis a preliminary application in advanced information encoding has been demonstrated. Given the free pores of the matrix and the stable RTP in water of the confined CDs, visual RTP detection of Fe^(3+) ions is achieved with a detection limit as low as 9.8 μmol/L. This work opens up a new perspective for the strategic amalgamation of luminescent species with porous zeolites to construct advanced functional materials.
Prostate cancer (PCa) is characterized by a high incidence and a propensity for easy metastasis, presenting significant challenges in clinical diagnosis and treatment. Tumor microenvironment (TME)-responsive nanomaterials provide a promising prospect for imaging-guided precision therapy. Considering that tumor-derived alkaline phosphatase (ALP) is over-expressed in metastatic PCa, there is a great opportunity to develop a theranostics system that responds to ALP in the TME. Herein, a self-assembling ALP-responsive aggregation-induced emission luminogen (AIEgen) nanoprobe, AMNF, was designed to enhance the diagnosis and treatment of metastatic PCa. The nanoprobe self-aggregates in the presence of ALP, resulting in aggregation-induced fluorescence, enhanced accumulation, and a prolonged retention period at the tumor site. In terms of detection, the fluorescence (FL)/computed tomography (CT)/magnetic resonance (MR) multi-mode imaging effect of the nanoprobe was significantly improved post-aggregation, enabling precise diagnosis through the combination of multiple imaging modes. Enhanced CT/MR imaging can assist preoperative tumor diagnosis, and enhanced FL imaging can achieve "intraoperative visual navigation", showing potential application value in clinical tumor detection and surgical guidance. In terms of treatment, AMNF showed strong absorption in the near-infrared region after aggregation, which improved the photothermal treatment effect. Overall, our work developed an effective aggregation-enhanced theranostic strategy for ALP-related cancers.
Visible and infrared (RGB-IR) fusion object detection plays an important role in security, disaster relief, and other fields. In recent years, deep-learning-based RGB-IR fusion detection methods have been developing rapidly, but they still struggle to deal with the complex and changing scenarios captured by drones, mainly for two reasons: (A) RGB-IR fusion detectors are susceptible to inferior inputs that degrade performance and stability; (B) RGB-IR fusion detectors are susceptible to redundant features that reduce accuracy and efficiency. In this paper, an innovative RGB-IR fusion detection framework based on global-local feature optimization, named GLFDet, is proposed to improve the detection performance and efficiency for drone-captured objects. The key components of GLFDet include a Global Feature Optimization (GFO) module, a Local Feature Optimization (LFO) module, and a Channel Separation Fusion (CSF) module. Specifically, GFO calculates the information content of the input image from the frequency domain and optimizes the features holistically. Then, LFO dynamically selects high-value features and filters out low-value features before fusion, which significantly improves the efficiency of fusion. Finally, CSF fuses the RGB and IR features across the corresponding channels, which avoids the rearrangement of the channel relationships and enhances model stability. Extensive experimental results show that the proposed method achieves the best performance on three popular RGB-IR datasets: DroneVehicle, VEDAI, and LLVIP. In addition, GLFDet is more lightweight than other comparable models, making it more appealing to edge devices such as drones. The code is available at https://github.com/laochen330/GLFDet.
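One simple way to score an image's information content from the frequency domain, in the spirit of (but not identical to) the GFO module's scoring, is the Shannon entropy of the normalized Fourier power spectrum:

```python
import numpy as np

def spectral_entropy(img):
    """Shannon entropy (bits) of the normalized 2D Fourier power spectrum."""
    power = np.abs(np.fft.fft2(img)) ** 2
    p = power / power.sum()
    p = p[p > 0]                       # drop zero bins before taking the log
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
flat = np.ones((32, 32))               # uniform image: all energy at DC
noisy = rng.normal(size=(32, 32))      # noise: energy spread over many frequencies

print(spectral_entropy(flat) < 1e-6)                        # → True
print(spectral_entropy(noisy) > spectral_entropy(flat))     # → True
```

A low-entropy (nearly flat or degraded) input carries little usable detail, which is the kind of signal a fusion detector can use to down-weight an inferior modality.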
Funding: the National Key R&D Program of China (No. 2018YFB1004901) and the Independent Innovation Team Project of Jinan City (No. 2019GXRC013).
Funding: Supported by the Traditional Chinese Medicine Standardization Project of the National Administration of Traditional Chinese Medicine: Research on the Physical Characteristics and Pre-disease Health Management of the Elderly in Hubei Province (No. GZY-FJS-2022-046).
Abstract: Background: Medical imaging advancements are constrained by fundamental trade-offs between acquisition speed, radiation dose, and image quality, forcing clinicians to work with noisy, incomplete data. Existing reconstruction methods either compromise on accuracy with iterative algorithms or suffer from limited generalizability with task-specific deep learning approaches. Methods: We present LDM-PIR, a lightweight physics-conditioned diffusion multi-model for medical image reconstruction that addresses key challenges in magnetic resonance imaging (MRI), CT, and low-photon imaging. Unlike traditional iterative methods, which are computationally expensive, or task-specific deep learning approaches lacking generalizability, LDM-PIR integrates three innovations: a physics-conditioned diffusion framework that embeds acquisition operators (Fourier/Radon transforms) and noise models directly into the reconstruction process; a multi-model architecture that unifies denoising, inpainting, and super-resolution via shared weight conditioning; and a lightweight design (2.1M parameters) enabling rapid inference (0.8 s/image on GPU). Through self-supervised fine-tuning with measurement consistency losses, the model adapts to new imaging modalities using fewer annotated samples. Results: LDM-PIR achieves state-of-the-art performance on fastMRI (peak signal-to-noise ratio (PSNR): 34.04 for single-coil/31.50 for multi-coil) and the Lung Image Database Consortium and Image Database Resource Initiative (28.83 PSNR under Poisson noise). Clinical evaluations demonstrate superior preservation of anatomical structures, with SSIM improvements of 8.8% for single-coil and 4.36% for multi-coil MRI over uDPIR. Conclusion: LDM-PIR offers a flexible, efficient, and scalable solution for medical image reconstruction, addressing the challenges of noise, undersampling, and modality generalization. The model's lightweight design allows for rapid inference, while its self-supervised fine-tuning capability minimizes reliance on large annotated datasets, making it suitable for real-world clinical applications.
Funding: Funded by the Natural Science Foundation of Chongqing Municipality, grant number CSTB2022NSCQ-MSX0503.
Abstract: Gait recognition is a key biometric for long-distance identification, yet its performance is severely degraded by real-world challenges such as varying clothing, carrying conditions, and changing viewpoints. While combining silhouette and skeleton data is a promising direction, effectively fusing these heterogeneous modalities and adaptively weighting their contributions in response to diverse conditions remains a central problem. This paper introduces GaitMAFF, a novel Multi-modal Adaptive Feature Fusion Network, to address this challenge. Our approach first transforms discrete skeleton joints into a dense SkeletonMap representation to align with silhouettes, then employs an attention-based module to dynamically learn the fusion weights between the two modalities. These fused features are processed by a powerful spatio-temporal backbone with Weighted Global-Local Feature Fusion Modules (WFFM) to learn a discriminative representation. Extensive experiments on the challenging CCPG and Gait3D datasets show that GaitMAFF achieves state-of-the-art performance, with an average Rank-1 accuracy of 84.6% on CCPG and 58.7% on Gait3D. These results demonstrate that our adaptive fusion strategy effectively integrates complementary multimodal information, significantly enhancing gait recognition robustness and accuracy in complex scenes and providing a practical solution for real-world applications.
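The attention-based weighting between silhouette and skeleton features described above can be sketched as a softmax over per-modality scores, followed by a weighted sum. This is a generic sketch of the pattern, not GaitMAFF's actual module; the scoring vector and feature shapes are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(f_sil, f_ske, W):
    """Score each modality feature, softmax the scores, then weighted-sum fuse."""
    scores = np.stack([f_sil @ W, f_ske @ W], axis=-1)  # (batch, 2)
    alpha = softmax(scores, axis=-1)                    # learned fusion weights
    fused = alpha[..., :1] * f_sil + alpha[..., 1:] * f_ske
    return fused, alpha

rng = np.random.default_rng(1)
f_sil = rng.normal(size=(4, 8))   # silhouette features (toy)
f_ske = rng.normal(size=(4, 8))   # SkeletonMap features (toy)
W = rng.normal(size=8)            # scoring vector; learned in practice, random here

fused, alpha = attention_fuse(f_sil, f_ske, W)
```

Because the weights are computed per sample, a sequence with an occluded silhouette can lean on the skeleton branch and vice versa, which is the adaptive behavior the abstract highlights.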
Funding: Funded by the Beijing Natural Science Foundation, grant number L241078.
Abstract: The fasteners employed in railway tracks are susceptible to defects arising from their intricate composition, and foreign objects are frequently observed on the track bed in an open environment. These two types of defects pose potential threats to high-speed trains, necessitating timely and accurate track inspection. Most existing automatic inspection methods rely on single visible-light data, and their algorithmic efficacy is affected by complex environments. Furthermore, due to the single information dimension, detection accuracy for similar, occluded, and small object categories is low. To address these issues, this paper proposes a track defect detection method based on dynamic multi-modal fusion and enhanced perception of challenging objects. First, in light of the variances in the representation dimensions of multi-modal information, this paper proposes a dynamic weighted multi-modal feature fusion module. The fused multi-modal features are assigned weights and then multiplied with the extracted single-modal features at multiple levels, achieving adaptive adjustment of the response degree of the fusion features. Second, a novel stepwise multi-scale convolution feature aggregation module is proposed for challenging objects. The proposed method employs depthwise separable convolution and cross-scale aggregation operations over different receptive fields to enhance feature extraction and reuse, thereby reducing the progressive loss of effective information. Extensive experiments on the constructed RGBD dataset demonstrate the efficacy of the proposed method in comparison to eight established methods, encompassing both single-modal and multi-modal approaches.
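The dynamic weighted fusion step above, where fused features gate the single-modal features, can be sketched with a sigmoid gate computed from the concatenated RGB and depth features. This is an illustrative simplification (one level, vector features instead of feature maps); all shapes and names are invented.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dynamic_weighted_fusion(f_rgb, f_depth, W_gate):
    """Compute per-modality gates from the joint feature, then reweight and sum."""
    joint = np.concatenate([f_rgb, f_depth], axis=-1)   # (batch, 2C) fused view
    gates = sigmoid(joint @ W_gate)                     # (batch, 2) modal gates
    fused = gates[:, :1] * f_rgb + gates[:, 1:] * f_depth
    return fused, gates

rng = np.random.default_rng(2)
C = 16
f_rgb = rng.normal(size=(8, C))          # visible-light features (toy)
f_depth = rng.normal(size=(8, C))        # depth features (toy)
W_gate = rng.normal(size=(2 * C, 2)) * 0.1  # learned in practice, random here

fused, gates = dynamic_weighted_fusion(f_rgb, f_depth, W_gate)
```

Applying such gates at multiple backbone levels, as the abstract describes, lets the network suppress whichever modality is less informative for a given scene (e.g., depth under texture-less track bed, RGB under poor lighting).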
Funding: Supported by the National Natural Science Foundation of China (Nos. 21906124, 32302202), the Natural Science Foundation of Hubei Province (No. 2017CFB220), and the Natural Science Foundation of Shandong Province (No. ZR2023MH278).
Abstract: Metal-organic frameworks (MOFs), assembled with coordination bonds, have the disadvantage of poor stability, which limits their application as stationary phases, whereas covalent organic frameworks (COFs), assembled through covalent bonds, exhibit excellent structural stability. It has been shown that stationary phases prepared by combining MOFs and COFs can compensate for the poor stability of MOF@SiO_(2), and MOF/COF composites have superior chromatographic separation performance. However, the traditional methods for preparing COF/MOF-based stationary phases are generally solvothermal syntheses. In this study, a green and low-cost synthesis method was proposed for the preparation of a MOF/COF@SiO_(2) stationary phase. Firstly, COF@SiO_(2) was prepared in a choline chloride/ethylene glycol-based deep eutectic solvent (DES). Secondly, another acid-base tunable DES, prepared by mixing p-toluenesulfonic acid (PTSA) and 2-methylimidazole in different proportions, was introduced as the reaction solvent and reactant for rapid synthesis of MOF/COF@SiO_(2). Compared with the toxic transition-metal-based MOFs selected in most previous studies, a lightweight and non-toxic s-block metal (calcium) based MOF was employed in this study. PTSA and calcium form a calcium/oxygen-containing organic acid framework in the acidic DES, which assembles with terephthalic acid dissolved in the basic DES to form the MOF. The strong hydrogen bonding effect of the DES facilitates rapid assembly of the Ca-MOF. The obtained Ca-MOF/COF@SiO_(2) can be used for multi-mode chromatography to efficiently separate multiple isomeric/hydrophilic/hydrophobic analytes. The synthesis of Ca-MOF/COF@SiO_(2) is green and mild; in particular, the use of an acid-base tunable DES promotes the rapid synthesis of non-toxic Ca-MOF/COF@silica composites, which offers an innovative approach to the green synthesis of novel MOF/COF stationary phases and extends their applications in the field of chromatography.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52174313 and 52304350). The authors thank all members of the Hebei High Quality Steel Continuous Casting Engineering Technology Research Center at North China University of Science and Technology, Tangshan, China.
Abstract: The flow behavior of molten steel in a thin slab mold under high casting speed conditions was investigated, with a focus on the multi-mode continuous casting and rolling mold. A steel-slag two-phase flow model was established through numerical simulation using large eddy simulation, volume of fluid, and magnetohydrodynamics methods. The maximum flow velocity and wave height at the steel-slag interface within the mold serve as critical evaluation criteria for analyzing asymmetric flow under varying casting speeds and electromagnetic braking. The results indicate that the asymmetric flows within the mold do not occur synchronously. The severity of the asymmetric flow correlates with the velocity difference across the steel-slag interface, and a greater biased flow prolongs the time required to revert to a steady state. When the magnetic field intensity is set to 0.24 T and the magnetic pole is positioned 390 mm from the steel-slag interface, this configuration can reduce the velocity of the steel-slag interface, thereby mitigating the asymmetric flow. Additionally, it can diminish the velocity, impact depth, and impact intensity of the jet on the narrow face, thus improving the distribution of velocity and turbulent kinetic energy within the mold. This configuration prolongs the time required for the steel-slag interface to transition from a stable state to its maximum velocity and shortens the time for the interface to return to stability from an unstable state. Moreover, it ensures the positional stability of the steel-slag interface, confining its position to within ±3 mm.
Funding: Supported by the Construction Program of the Key Discipline of the State Administration of Traditional Chinese Medicine of China (ZYYZDXK-2023069), the Research Project of the Shanghai Municipal Health Commission (2024QN018), and the Shanghai University of Traditional Chinese Medicine Science and Technology Development Program (23KFL005).
Abstract: Objective To develop a non-invasive predictive model for coronary artery stenosis severity based on adaptive multi-modal integration of traditional Chinese and Western medicine data. Methods Clinical indicators, echocardiographic data, traditional Chinese medicine (TCM) tongue manifestations, and facial features were collected from patients who underwent coronary computed tomography angiography (CTA) in the Cardiac Care Unit (CCU) of Shanghai Tenth People's Hospital between May 1, 2023 and May 1, 2024. An adaptive weighted multi-modal data fusion (AWMDF) model based on deep learning was constructed to predict the severity of coronary artery stenosis. The model was evaluated using metrics including accuracy, precision, recall, F1 score, and the area under the receiver operating characteristic (ROC) curve (AUC). Further performance assessment was conducted through comparisons with six ensemble machine learning methods, data ablation, model component ablation, and various decision-level fusion strategies. Results A total of 158 patients were included in the study. The AWMDF model achieved excellent predictive performance (AUC = 0.973, accuracy = 0.937, precision = 0.937, recall = 0.929, and F1 score = 0.933). Compared with model ablation, data ablation experiments, and various traditional machine learning models, the AWMDF model demonstrated superior performance. Moreover, the adaptive weighting strategy outperformed alternative approaches, including simple weighting, averaging, voting, and fixed-weight schemes. Conclusion The AWMDF model demonstrates potential clinical value in the non-invasive prediction of coronary artery disease and could serve as a tool for clinical decision support.
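The contrast between adaptive weighting and the simpler averaging/fixed-weight schemes mentioned above can be illustrated at the decision level: each modality branch outputs a probability, and weights proportional to each branch's validation accuracy replace a uniform average. This is a generic sketch, not the AWMDF model; the accuracies and probabilities below are made up.

```python
import numpy as np

# Hypothetical per-modality validation accuracies:
# clinical, echocardiography, tongue, face.
val_acc = np.array([0.90, 0.75, 0.60, 0.80])
weights = val_acc / val_acc.sum()        # adaptive weights, summing to 1

# Per-modality predicted probability of severe stenosis for three patients.
p_modal = np.array([
    [0.8, 0.6, 0.5, 0.7],
    [0.2, 0.4, 0.6, 0.3],
    [0.9, 0.8, 0.7, 0.9],
])

p_adaptive = p_modal @ weights           # adaptive weighted fusion
p_average = p_modal.mean(axis=1)         # simple averaging baseline
pred = (p_adaptive > 0.5).astype(int)    # final decision per patient
```

In a learned version the weights would be trainable parameters updated jointly with the branches, which is what lets the adaptive scheme outperform the fixed-weight variants in the abstract's comparison.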
Funding: Funded by the Yangtze River Delta Science and Technology Innovation Community Joint Research Project (2023CSJGG1600), the Natural Science Foundation of Anhui Province (2208085MF173), and the Wuhu "ChiZhu Light" Major Science and Technology Project (2023ZD01, 2023ZD03).
Abstract: As the number and complexity of sensors in autonomous vehicles continue to rise, multimodal fusion-based object detection algorithms are increasingly being used to detect 3D environmental information, significantly advancing the development of perception technology in autonomous driving. To further promote the development of fusion algorithms and improve detection performance, this paper discusses the advantages and recent advancements of multimodal fusion-based object detection algorithms. Starting from single-modal sensor detection, the paper provides a detailed overview of typical sensors used in autonomous driving and introduces object detection methods based on images and point clouds. Image-based detection methods are categorized into monocular and binocular detection according to input type. Point cloud-based detection methods are classified into projection-based, voxel-based, point cluster-based, pillar-based, and graph structure-based approaches according to the technical pathways for processing point cloud features. Additionally, multimodal fusion algorithms are divided into Camera-LiDAR fusion, Camera-Radar fusion, Camera-LiDAR-Radar fusion, and other sensor fusion methods based on the types of sensors involved. Furthermore, the paper identifies five key future research directions in this field, aiming to provide insights for researchers engaged in multimodal fusion-based object detection algorithms and to encourage broader attention to the research and application of multimodal fusion-based object detection.
Funding: Funded by a research project, grant number BHQ090003000X03.
Abstract: Multi-modal Named Entity Recognition (MNER) aims to better identify meaningful textual entities by integrating information from images. Previous work has focused on extracting visual semantics at a fine-grained level, or on obtaining entity-related external knowledge from knowledge bases or Large Language Models (LLMs). However, these approaches ignore the poor semantic correlation between visual and textual modalities in MNER datasets and do not explore different multi-modal fusion approaches. In this paper, we present MMAVK, a multi-modal named entity recognition model with auxiliary visual knowledge and word-level fusion, which leverages a Multi-modal Large Language Model (MLLM) as an implicit knowledge base and extracts vision-based auxiliary knowledge from the image for more accurate and effective recognition. Specifically, we propose vision-based auxiliary knowledge generation, which guides the MLLM with target-specific prompts to extract external knowledge exclusively derived from images to aid entity recognition, thus avoiding the redundant recognition and cognitive confusion caused by simultaneous processing of image-text pairs. Furthermore, we employ a word-level multi-modal fusion mechanism to fuse the extracted external knowledge with each word embedding produced by the transformer-based encoder. Extensive experimental results demonstrate that MMAVK outperforms or equals state-of-the-art methods on two classical MNER datasets, even when the large models employed have significantly fewer parameters than other baselines.
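The word-level fusion mechanism described above can be sketched as a per-token gate that mixes each word embedding with the knowledge vector. This is an illustrative stand-in, not MMAVK's actual mechanism; the gate matrix and dimensions are random placeholders for what would be learned parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def word_level_fuse(tokens, knowledge, W_g):
    """Per-token, per-dimension gate between a word embedding and the knowledge vector."""
    n, d = tokens.shape
    k = np.broadcast_to(knowledge, (n, d))
    g = sigmoid(np.concatenate([tokens, k], axis=-1) @ W_g)  # (n, d) gates in (0, 1)
    return g * tokens + (1.0 - g) * k                        # convex combination

rng = np.random.default_rng(3)
d = 12
tokens = rng.normal(size=(5, d))     # word embeddings from the encoder (toy)
knowledge = rng.normal(size=d)       # vision-based auxiliary knowledge vector (toy)
W_g = rng.normal(size=(2 * d, d)) * 0.1

fused = word_level_fuse(tokens, knowledge, W_g)
```

Because the gate is computed per word, tokens that actually name a visual entity can absorb more of the auxiliary knowledge while function words remain mostly unchanged.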
Funding: Partially supported by the National Natural Science Foundation of China under Grants 62471493 and 62402257 (for conceptualization and investigation); partially supported by the Natural Science Foundation of Shandong Province, China under Grants ZR2023LZH017, ZR2024MF066, and 2023QF025 (for formal analysis and validation); partially supported by the Open Foundation of the Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Qilu University of Technology (Shandong Academy of Sciences) under Grant 2023ZD010 (for methodology and model design); and partially supported by the Russian Science Foundation (RSF) Project under Grant 22-71-10095-P (for validation and results verification).
Abstract: To address the challenge of missing modal information in entity alignment and to mitigate the information loss or bias arising from modal heterogeneity during fusion, while also capturing shared information across modalities, this paper proposes a Multi-modal Pre-synergistic Entity Alignment model based on Cross-modal Mutual Information Strategy Optimization (MPSEA). The model first employs independent encoders to process multi-modal features, including text, images, and numerical values. Next, a multi-modal pre-synergistic fusion mechanism integrates graph structural and visual modal features into the textual modality as preparatory information. This pre-fusion strategy enables unified perception of heterogeneous modalities at the model's initial stage, reducing discrepancies during the fusion process. Finally, using cross-modal deep perception reinforcement learning, the model achieves adaptive multilevel feature fusion between modalities, supporting the learning of more effective alignment strategies. Extensive experiments on multiple public datasets show that MPSEA achieves gains of up to 7% in Hits@1 and 8.2% in MRR on the FBDB15K dataset, and up to 9.1% in Hits@1 and 7.7% in MRR on the FBYG15K dataset, compared to existing state-of-the-art methods. These results confirm the effectiveness of the proposed model.
Abstract: To address the difficulties in fusing multi-mode sensor data for complex industrial machinery, an adaptive deep coupling convolutional auto-encoder (ADCCAE) fusion method was proposed. First, the multi-mode features extracted synchronously by the CCAE were stacked and fed to multi-channel convolution layers for fusion. Then, the fused data were passed to fully connected layers for compression and fed to a Softmax module for classification. Finally, the coupling loss function coefficients and the network parameters were optimized adaptively using the gray wolf optimization (GWO) algorithm. Experimental comparisons showed that the proposed ADCCAE fusion model was superior to existing models for multi-mode data fusion.
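The GWO step used above to tune the loss coefficients and network parameters follows a standard recipe: a population of candidate solutions ("wolves") moves toward the three best solutions found so far, with a step-size parameter that decays over iterations. The compact sketch below minimizes a toy sphere function rather than the paper's actual loss; population size and iteration count are illustrative.

```python
import numpy as np

def gwo(objective, dim, n_wolves=12, iters=80, lo=-5.0, hi=5.0, seed=0):
    """Minimal gray wolf optimizer: minimize `objective` over a box [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(iters):
        fitness = np.array([objective(w) for w in wolves])
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]  # three leading wolves (copies)
        a = 2.0 - 2.0 * t / iters               # exploration factor decays 2 -> 0
        for i in range(n_wolves):
            x_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a
                C = 2 * r2
                D = np.abs(C * leader - wolves[i])
                x_new += leader - A * D         # move toward each leader
            wolves[i] = np.clip(x_new / 3.0, lo, hi)
    fitness = np.array([objective(w) for w in wolves])
    return wolves[fitness.argmin()], fitness.min()

best_x, best_f = gwo(lambda x: float(np.sum(x ** 2)), dim=4)
```

In the ADCCAE setting, `objective` would evaluate the network's validation loss for a candidate vector of coupling-loss coefficients, making GWO a derivative-free alternative to hand-tuning them.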
Funding: Supported by the National Natural Science Foundation of China (No. 22288101) and the 111 Project (No. B17020).
Abstract: Carbon dot (CD)-based composites have shown impressive performance in the fields of information encryption and sensing; however, a great challenge is to simultaneously implement multi-mode luminescence and room-temperature phosphorescence (RTP) detection in a single system, owing to the formidable synthesis involved. Herein, a multifunctional composite, Eu&CDs@pRHO, has been designed via a co-assembly strategy and prepared through facile calcination and impregnation treatment. Eu&CDs@pRHO exhibits intense fluorescence (FL) and RTP originating from two individual luminous centers: Eu^(3+) in the free pores and CDs in the interrupted structure of the RHO zeolite. Unique four-mode color outputs can be realized, including pink (Eu^(3+), ex. 254 nm), light violet (CDs, ex. 365 nm), blue (CDs, 254 nm off), and green (CDs, 365 nm off); on this basis, a preliminary application in advanced information encoding has been demonstrated. Given the free pores of the matrix and the stable RTP in water of the confined CDs, visual RTP detection of Fe^(3+) ions is achieved with a detection limit as low as 9.8 μmol/L. This work opens up a new perspective on the strategic amalgamation of luminescent guests with porous zeolites to construct advanced functional materials.
Funding: Supported by the Natural Science Foundation of Jilin Province (No. SKL202302002), the Key Research and Development Project of the Jilin Provincial Science and Technology Department (No. 20210204142YY), the Science and Technology Development Program of Jilin Province (No. 2020122256JC), the Beijing Kechuang Medical Development Foundation Fund of China (No. KC2023-JX-0186BQ079), and the Talent Reserve Program (TRP), the First Hospital of Jilin University (No. JDYY-TRP-2024007).
Abstract: Prostate cancer (PCa) is characterized by high incidence and a propensity for easy metastasis, presenting significant challenges in clinical diagnosis and treatment. Tumor microenvironment (TME)-responsive nanomaterials provide a promising prospect for imaging-guided precision therapy. Considering that tumor-derived alkaline phosphatase (ALP) is over-expressed in metastatic PCa, there is a great opportunity to develop a theranostic system responsive to ALP in the TME. Herein, an ALP-responsive aggregation-induced emission luminogen (AIEgen) nanoprobe, the AMNF self-assembly, was designed to enhance the diagnosis and treatment of metastatic PCa. The nanoprobe self-aggregated in the presence of ALP, resulting in aggregation-induced fluorescence as well as enhanced accumulation and a prolonged retention period at the tumor site. In terms of detection, the fluorescence (FL)/computed tomography (CT)/magnetic resonance (MR) multi-mode imaging performance of the nanoprobe was significantly improved post-aggregation, enabling precise diagnosis through the combination of multiple imaging modes. Enhanced CT/MR imaging can assist preoperative tumor diagnosis, and enhanced FL imaging can achieve "intraoperative visual navigation", showing potential application value in clinical tumor detection and surgical guidance. In terms of treatment, AMNF showed strong absorption in the near-infrared region after aggregation, which improved the photothermal treatment effect. Overall, our work develops an effective aggregation-enhanced theranostic strategy for ALP-related cancers.
Funding: Supported by the National Natural Science Foundation of China (No. 62276204), the Fundamental Research Funds for the Central Universities, China (No. YJSJ24011), the Natural Science Basic Research Program of Shaanxi, China (Nos. 2022JM-340 and 2023-JC-QN-0710), and the China Postdoctoral Science Foundation (Nos. 2020T130494 and 2018M633470).
Abstract: Visible and infrared (RGB-IR) fusion object detection plays an important role in security, disaster relief, and related fields. In recent years, deep-learning-based RGB-IR fusion detection methods have been developing rapidly but still struggle to deal with the complex and changing scenarios captured by drones, mainly for two reasons: (A) RGB-IR fusion detectors are susceptible to inferior inputs that degrade performance and stability; (B) RGB-IR fusion detectors are susceptible to redundant features that reduce accuracy and efficiency. In this paper, an innovative RGB-IR fusion detection framework based on global-local feature optimization, named GLFDet, is proposed to improve the detection performance and efficiency for drone-captured objects. The key components of GLFDet include a Global Feature Optimization (GFO) module, a Local Feature Optimization (LFO) module, and a Channel Separation Fusion (CSF) module. Specifically, GFO calculates the information content of the input image in the frequency domain and optimizes the features holistically. Then, LFO dynamically selects high-value features and filters out low-value features before fusion, which significantly improves fusion efficiency. Finally, CSF fuses the RGB and IR features across the corresponding channels, which avoids rearranging the channel relationships and enhances model stability. Extensive experimental results show that the proposed method achieves the best performance on three popular RGB-IR datasets: DroneVehicle, VEDAI, and LLVIP. In addition, GLFDet is more lightweight than comparable models, making it more appealing for edge devices such as drones. The code is available at https://github.com/laochen330/GLFDet.