Journal Articles
37,587 articles found
1. Advances in longitudinal studies of amnestic mild cognitive impairment and Alzheimer's disease based on multi-modal MRI techniques (cited 8 times)
Authors: Zhongjie Hu, Liyong Wu, Jianping Jia, Ying Han. Neuroscience Bulletin (SCIE, CAS, CSCD), 2014, No. 2, pp. 198-206 (9 pages).
Amnestic mild cognitive impairment (aMCI) is a prodromal stage of Alzheimer's disease (AD), and 75%-80% of aMCI patients finally develop AD. So, early identification of patients with aMCI or AD is of great significance for prevention and intervention. According to cross-sectional studies, it is known that the hippocampus, posterior cingulate cortex, and corpus callosum are key areas in studies based on structural MRI (sMRI), functional MRI (fMRI), and diffusion tensor imaging (DTI), respectively. Recently, longitudinal studies using each MRI modality have demonstrated that the neuroimaging abnormalities generally involve the posterior brain regions at the very beginning and then gradually affect the anterior areas during the progression of aMCI to AD. However, it is not known whether follow-up studies based on multi-modal neuroimaging techniques (e.g., sMRI, fMRI, and DTI) can help build effective MRI models that can be directly applied to the screening and diagnosis of aMCI and AD. Thus, in the future, large-scale multi-center follow-up studies are urgently needed, not only to build an MRI diagnostic model that can be used on a single person, but also to evaluate the variability and stability of the model in the general population. In this review, we present longitudinal studies using each MRI modality separately, and then discuss the future directions in this field.
Keywords: magnetic resonance imaging; amnestic mild cognitive impairment; Alzheimer's disease; multi-modality; longitudinal studies
2. Developing a multi-modal MRI radiomics-based model to predict the long-term overall survival of patients with hypopharyngeal cancer receiving definitive radiotherapy
Authors: Xi-Wei Zhang, Dilinaer Wusiman, Ye Zhang, Xiao-Duo Yu, Su-Sheng Miao, Zhi Wang, Shao-Yan Liu, Zheng-Jiang Li, Ying Sun, Jun-Lin Yi, Chang-Ming An. World Journal of Otorhinolaryngology-Head and Neck Surgery, 2025, No. 3, pp. 440-448 (9 pages).
Objective: The aim of this study is to develop a multimodal MRI radiomics-based model for predicting long-term overall survival in hypopharyngeal cancer patients undergoing definitive radiotherapy. Methods: We enrolled 207 hypopharyngeal cancer patients who underwent definitive radiotherapy and had 5-year overall survival outcomes from two major cancer centers in China. Pretreatment MRI images and clinical features were collected. Regions of interest (ROIs) for primary tumors and lymph node metastases (LNM) were delineated on T2 and contrast-enhanced T1 (CE-T1) sequences. Principal component analysis (PCA), support vector machine (SVM), and 5-fold cross-validation were used to develop and evaluate the models. Results: Multivariate Cox regression analysis identified age under 50 years, advanced T stage, and N stage as risk factors for overall survival. Predictive models based solely on clinical features (Model A), single radiomics features (Model B), and their combination (Model C) performed poorly, with mean AUC values in the validation set of 0.663, 0.772, and 0.779, respectively. The addition of multimodal LNM and CE-T1 radiomics features significantly improved prediction accuracy (Models D and E), with AUC values of 0.831 and 0.837 in the validation set. Conclusion: We developed a well-discriminating overall survival prediction model based on multimodal MRI radiomics, applicable to patients receiving definitive radiotherapy, which may contribute to personalized treatment strategies.
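The evaluation half of the pipeline above (5-fold cross-validation with AUC as the headline metric) can be sketched in miniature. The sketch below implements a rank-based AUC and a simple fold split on synthetic scores; the labels, scores, and class separation are invented for illustration and do not come from the paper.

```python
import random

def auc(labels, scores):
    # AUC via the Mann-Whitney statistic: the probability that a random
    # positive is scored higher than a random negative (ties count 0.5).
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def kfold_indices(n, k=5, seed=0):
    # Shuffle indices once, then stride them into k near-equal validation folds.
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

# Toy "radiomics" scores: positives tend to score higher than negatives.
rng = random.Random(1)
labels = [i % 2 for i in range(100)]
scores = [y * 0.5 + rng.random() for y in labels]

fold_aucs = []
for fold in kfold_indices(len(labels)):
    ys = [labels[i] for i in fold]
    ss = [scores[i] for i in fold]
    if 0 < sum(ys) < len(ys):          # need both classes present in the fold
        fold_aucs.append(auc(ys, ss))
mean_auc = sum(fold_aucs) / len(fold_aucs)
print(round(mean_auc, 3))
```

In practice the PCA and SVM steps would sit inside each fold so that feature reduction never sees the validation patients.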
Keywords: hypopharyngeal cancer; machine learning; magnetic resonance imaging (MRI); radiomics; survival analysis
3. Construction and evaluation of a predictive model for the degree of coronary artery occlusion based on adaptive weighted multi-modal fusion of traditional Chinese and western medicine data (cited 1 time)
Authors: Jiyu Zhang, Jiatuo Xu, Liping Tu, Hongyuan Fu. Digital Chinese Medicine, 2025, No. 2, pp. 163-173 (11 pages).
Objective: To develop a non-invasive predictive model for coronary artery stenosis severity based on adaptive multi-modal integration of traditional Chinese and western medicine data. Methods: Clinical indicators, echocardiographic data, traditional Chinese medicine (TCM) tongue manifestations, and facial features were collected from patients who underwent coronary computed tomography angiography (CTA) in the Cardiac Care Unit (CCU) of Shanghai Tenth People's Hospital between May 1, 2023 and May 1, 2024. An adaptive weighted multi-modal data fusion (AWMDF) model based on deep learning was constructed to predict the severity of coronary artery stenosis. The model was evaluated using metrics including accuracy, precision, recall, F1 score, and the area under the receiver operating characteristic (ROC) curve (AUC). Further performance assessment was conducted through comparisons with six ensemble machine learning methods, data ablation, model component ablation, and various decision-level fusion strategies. Results: A total of 158 patients were included in the study. The AWMDF model achieved excellent predictive performance (AUC = 0.973, accuracy = 0.937, precision = 0.937, recall = 0.929, and F1 score = 0.933). Compared with model ablation, data ablation experiments, and various traditional machine learning models, the AWMDF model demonstrated superior performance. Moreover, the adaptive weighting strategy outperformed alternative approaches, including simple weighting, averaging, voting, and fixed-weight schemes. Conclusion: The AWMDF model demonstrates potential clinical value in the non-invasive prediction of coronary artery disease and could serve as a tool for clinical decision support.
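The adaptive weighting idea can be illustrated at the decision level: each modality emits class probabilities, and softmax-normalized weights combine them, so the weights stay positive and sum to one. In the paper the weights are learned; here they are fixed, and the modality names and numbers below are hypothetical.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def weighted_fusion(modality_probs, weight_logits):
    # Decision-level fusion: per-modality class probabilities combined
    # with softmax-normalized weights.
    weights = softmax(weight_logits)
    n_classes = len(modality_probs[0])
    fused = [0.0] * n_classes
    for w, probs in zip(weights, modality_probs):
        for c, p in enumerate(probs):
            fused[c] += w * p
    return fused

# Three hypothetical modalities (clinical indicators, tongue image, facial
# image), each emitting probabilities over two stenosis-severity classes.
probs = [[0.8, 0.2], [0.6, 0.4], [0.3, 0.7]]
logits = [2.0, 0.5, 0.5]      # would be learned; fixed here for illustration
fused = weighted_fusion(probs, logits)
print([round(p, 3) for p in fused])
```

Because the most heavily weighted modality favors the first class, the fused prediction does too.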
Keywords: coronary artery disease; deep learning; multi-modal; clinical prediction; traditional Chinese medicine diagnosis
4. Tri-M2MT: Multi-modalities based effective acute bilirubin encephalopathy diagnosis through multi-transformer using neonatal Magnetic Resonance Imaging
Authors: Kumar Perumal, Rakesh Kumar Mahendran, Arfat Ahmad Khan, Seifedine Kadry. CAAI Transactions on Intelligence Technology, 2025, No. 2, pp. 434-449 (16 pages).
Acute Bilirubin Encephalopathy (ABE) is a significant threat to neonates, leading to disability and high mortality rates. Detecting and treating ABE promptly is important to prevent further complications and long-term issues. Recent studies have explored ABE diagnosis; however, they often face limitations in classification due to reliance on a single modality of Magnetic Resonance Imaging (MRI). To tackle this problem, the authors propose a Tri-M2MT model for precise ABE detection using tri-modality MRI scans. The scans include T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), and apparent diffusion coefficient maps to obtain in-depth information. Initially, the tri-modality MRI scans are collected and preprocessed using an Advanced Gaussian Filter for noise reduction and Z-score normalisation for data standardisation. An Advanced Capsule Network was utilised to extract relevant features, with the Snake Optimization Algorithm selecting optimal features based on feature correlation, with the aim of minimising complexity and enhancing detection accuracy. Furthermore, a multi-transformer approach was used for feature fusion and to identify feature correlations effectively. Finally, accurate ABE diagnosis is achieved through a SoftMax layer. The performance of the proposed Tri-M2MT model is evaluated across various metrics, including accuracy, specificity, sensitivity, F1-score, and ROC curve analysis, and the proposed methodology provides better performance compared to existing methodologies.
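The Z-score normalisation step named in the preprocessing stage is straightforward to sketch: each modality's intensities are shifted to zero mean and scaled to unit variance before feature extraction. The voxel intensities below are invented toy values.

```python
import math

def z_score_normalise(values):
    # Standardize to zero mean and unit variance (population std), a common
    # preprocessing step before feeding intensities to a network.
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / std for v in values]

# Toy voxel intensities standing in for one MRI modality.
voxels = [10.0, 12.0, 14.0, 16.0, 18.0]
normed = z_score_normalise(voxels)
print([round(v, 3) for v in normed])
```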
Keywords: acute bilirubin encephalopathy (ABE) diagnosis; feature extraction; MRI; multi-modality; multi-transformer; neonatal
5. Multi-Modal Named Entity Recognition with Auxiliary Visual Knowledge and Word-Level Fusion
Authors: Huansha Wang, Ruiyang Huang, Qinrang Liu, Xinghao Wang. Computers, Materials & Continua, 2025, No. 6, pp. 5747-5760 (14 pages).
Multi-modal Named Entity Recognition (MNER) aims to better identify meaningful textual entities by integrating information from images. Previous work has focused on extracting visual semantics at a fine-grained level, or obtaining entity-related external knowledge from knowledge bases or Large Language Models (LLMs). However, these approaches ignore the poor semantic correlation between visual and textual modalities in MNER datasets and do not explore different multi-modal fusion approaches. In this paper, we present MMAVK, a multi-modal named entity recognition model with auxiliary visual knowledge and word-level fusion, which aims to leverage the Multi-modal Large Language Model (MLLM) as an implicit knowledge base. It also extracts vision-based auxiliary knowledge from the image for more accurate and effective recognition. Specifically, we propose vision-based auxiliary knowledge generation, which guides the MLLM to extract external knowledge exclusively derived from images to aid entity recognition by designing target-specific prompts, thus avoiding redundant recognition and cognitive confusion caused by the simultaneous processing of image-text pairs. Furthermore, we employ a word-level multi-modal fusion mechanism to fuse the extracted external knowledge with each word embedding produced by the transformer-based encoder. Extensive experimental results demonstrate that MMAVK outperforms or equals the state-of-the-art methods on the two classical MNER datasets, even when the large models employed have significantly fewer parameters than other baselines.
Keywords: multi-modal named entity recognition; large language model; multi-modal fusion
6. MMCSD: Multi-Modal Knowledge Graph Completion Based on Super-Resolution and Detailed Description Generation
Authors: Huansha Wang, Ruiyang Huang, Qinrang Liu, Shaomei Li, Jianpeng Zhang. Computers, Materials & Continua, 2025, No. 4, pp. 761-783 (23 pages).
Multi-modal knowledge graph completion (MMKGC) aims to complete missing entities or relations in multi-modal knowledge graphs, thereby discovering more previously unknown triples. Due to the continuous growth of data and knowledge and the limitations of data sources, the visual knowledge within the knowledge graphs is generally of low quality, and some entities suffer from the issue of missing visual modality. Nevertheless, previous studies of MMKGC have primarily focused on how to facilitate modality interaction and fusion while neglecting the problems of low modality quality and modality missing. In this case, mainstream MMKGC models only use pre-trained visual encoders to extract features and transfer the semantic information to the joint embeddings through modal fusion, which inevitably suffers from problems such as error propagation and increased uncertainty. To address these problems, we propose a Multi-modal knowledge graph Completion model based on Super-resolution and Detailed Description Generation (MMCSD). Specifically, we leverage a pre-trained residual network to enhance the resolution and improve the quality of the visual modality. Moreover, we design multi-level visual semantic extraction and entity description generation, thereby further extracting entity semantics from structural triples and visual images. Meanwhile, we train a variational multi-modal auto-encoder and utilize a pre-trained multi-modal language model to complement the missing visual features. We conducted experiments on FB15K-237 and DB13K, and the results showed that MMCSD can effectively perform MMKGC and achieve state-of-the-art performance.
Keywords: multi-modal knowledge graph; knowledge graph completion; multi-modal fusion
7. Transformers for Multi-Modal Image Analysis in Healthcare
Authors: Sameera V Mohd Sagheer, Meghana K H, P M Ameer, Muneer Parayangat, Mohamed Abbas. Computers, Materials & Continua, 2025, No. 9, pp. 4259-4297 (39 pages).
Integrating multiple medical imaging techniques, including Magnetic Resonance Imaging (MRI), Computed Tomography, Positron Emission Tomography (PET), and ultrasound, provides a comprehensive view of the patient's health status. Each of these methods contributes unique diagnostic insights, enhancing the overall assessment of the patient's condition. Nevertheless, the amalgamation of data from multiple modalities presents difficulties due to disparities in resolution, data collection methods, and noise levels. While traditional models like Convolutional Neural Networks (CNNs) excel in single-modality tasks, they struggle to handle multi-modal complexities, lacking the capacity to model global relationships. This research presents a novel approach for examining multi-modal medical imagery using a transformer-based system. The framework employs self-attention and cross-attention mechanisms to synchronize and integrate features across various modalities. Additionally, it shows resilience to variations in noise and image quality, making it adaptable for real-time clinical use. To address the computational hurdles linked to transformer models, particularly in real-time clinical applications in resource-constrained environments, several optimization techniques have been integrated to boost scalability and efficiency. Initially, a streamlined transformer architecture was adopted to minimize the computational load while maintaining model effectiveness. Methods such as model pruning, quantization, and knowledge distillation have been applied to reduce the parameter count and enhance the inference speed. Furthermore, efficient attention mechanisms such as linear or sparse attention were employed to alleviate the substantial memory and processing requirements of traditional self-attention operations. For further deployment optimization, researchers have implemented hardware-aware acceleration strategies, including the use of TensorRT and ONNX-based model compression, to ensure efficient execution on edge devices. These optimizations allow the approach to function effectively in real-time clinical settings, ensuring viability even in environments with limited resources. Future research directions include integrating non-imaging data to facilitate personalized treatment and enhancing computational efficiency for implementation in resource-limited environments. This study highlights the transformative potential of transformer models in multi-modal medical imaging, offering improvements in diagnostic accuracy and patient care outcomes.
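The cross-attention mechanism such frameworks rely on can be sketched in a few lines: queries from one modality attend over keys and values from another, producing a fused representation. The 2-dimensional token features below are toy values, not the paper's architecture.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    # Scaled dot-product attention: each query attends over all keys,
    # and the attention weights mix the corresponding value vectors.
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Hypothetical 2-d token features: an MRI token attends over PET tokens.
mri_tokens = [[1.0, 0.0]]
pet_tokens = [[1.0, 0.0], [0.0, 1.0]]
fused = cross_attention(mri_tokens, pet_tokens, pet_tokens)
print([round(x, 3) for x in fused[0]])
```

The query aligned with the first PET token receives the larger attention weight, so the fused vector leans toward that token.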
Keywords: multi-modal image analysis; medical imaging; deep learning; image segmentation; disease detection; multi-modal fusion; Vision Transformers (ViTs); precision medicine; clinical decision support
8. Multi-Modal Pre-Synergistic Fusion Entity Alignment Based on Mutual Information Strategy Optimization
Authors: Huayu Li, Xinxin Chen, Lizhuang Tan, Konstantin I. Kostromitin, Athanasios V. Vasilakos, Peiying Zhang. Computers, Materials & Continua, 2025, No. 11, pp. 4133-4153 (21 pages).
To address the challenge of missing modal information in entity alignment and to mitigate information loss or bias arising from modal heterogeneity during fusion, while also capturing shared information across modalities, this paper proposes a Multi-modal Pre-synergistic Entity Alignment model based on Cross-modal Mutual Information Strategy Optimization (MPSEA). The model first employs independent encoders to process multi-modal features, including text, images, and numerical values. Next, a multi-modal pre-synergistic fusion mechanism integrates graph structural and visual modal features into the textual modality as preparatory information. This pre-fusion strategy enables unified perception of heterogeneous modalities at the model's initial stage, reducing discrepancies during the fusion process. Finally, using cross-modal deep perception reinforcement learning, the model achieves adaptive multilevel feature fusion between modalities, supporting the learning of more effective alignment strategies. Extensive experiments on multiple public datasets show that the MPSEA method achieves gains of up to 7% in Hits@1 and 8.2% in MRR on the FBDB15K dataset, and up to 9.1% in Hits@1 and 7.7% in MRR on the FBYG15K dataset, compared to existing state-of-the-art methods. These results confirm the effectiveness of the proposed model.
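The Hits@1 and MRR figures quoted above are computed from the rank of the true counterpart entity in each query's candidate list. The ranks below are hypothetical, chosen only to show the arithmetic.

```python
def hits_at_k(ranks, k=1):
    # Fraction of queries whose correct entity is ranked within the top k.
    return sum(r <= k for r in ranks) / len(ranks)

def mrr(ranks):
    # Mean reciprocal rank of the correct entity (rank 1 is best).
    return sum(1.0 / r for r in ranks) / len(ranks)

# Hypothetical ranks of the true counterpart entity for five test pairs.
ranks = [1, 1, 2, 4, 10]
print(hits_at_k(ranks, 1), round(mrr(ranks), 3))
```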
Keywords: knowledge graph; multi-modal entity alignment; feature fusion; pre-synergistic fusion
9. Research Progress on Multi-Modal Fusion Object Detection Algorithms for Autonomous Driving: A Review
Authors: Peicheng Shi, Li Yang, Xinlong Dong, Heng Qi, Aixi Yang. Computers, Materials & Continua, 2025, No. 6, pp. 3877-3917 (41 pages).
As the number and complexity of sensors in autonomous vehicles continue to rise, multimodal fusion-based object detection algorithms are increasingly being used to detect 3D environmental information, significantly advancing the development of perception technology in autonomous driving. To further promote the development of fusion algorithms and improve detection performance, this paper discusses the advantages and recent advancements of multimodal fusion-based object detection algorithms. Starting from single-modal sensor detection, the paper provides a detailed overview of typical sensors used in autonomous driving and introduces object detection methods based on images and point clouds. Image-based detection methods are categorized into monocular detection and binocular detection based on different input types. Point cloud-based detection methods are classified into projection-based, voxel-based, point cluster-based, pillar-based, and graph structure-based approaches based on the technical pathways for processing point cloud features. Additionally, multimodal fusion algorithms are divided into Camera-LiDAR fusion, Camera-Radar fusion, Camera-LiDAR-Radar fusion, and other sensor fusion methods based on the types of sensors involved. Furthermore, the paper identifies five key future research directions in this field, aiming to provide insights for researchers engaged in multimodal fusion-based object detection algorithms and to encourage broader attention to the research and application of multimodal fusion-based object detection.
Keywords: multi-modal fusion; 3D object detection; deep learning; autonomous driving
10. Effectiveness of a multi-modal intervention protocol for preventing stress ulcers in critically ill older patients after gastrointestinal surgery
Authors: Hai-Ming Xi, Man-Li Tian, Ya-Li Tian, Hui Liu, Yun Wang, Min-Juan Chu. World Journal of Gastrointestinal Surgery, 2025, No. 4, pp. 316-323 (8 pages).
BACKGROUND: Stress ulcers are common complications in critically ill patients, with a higher incidence observed in older patients following gastrointestinal surgery. This study aimed to develop and evaluate the effectiveness of a multi-modal intervention protocol to prevent stress ulcers in this high-risk population. AIM: To assess the impact of a multi-modal intervention on preventing stress ulcers in older intensive care unit (ICU) patients postoperatively. METHODS: A randomized controlled trial involving critically ill patients (aged ≥65 years) admitted to the ICU after gastrointestinal surgery was conducted. Patients were randomly assigned to either the intervention group, which received a multi-modal stress ulcer prevention protocol, or the control group, which received standard care. The primary outcome measure was the incidence of stress ulcers. The secondary outcomes included ulcer healing time, complication rates, and length of hospital stay. RESULTS: A total of 200 patients (100 in each group) were included in this study. The intervention group exhibited a significantly lower incidence of stress ulcers than the control group (15% vs 30%, P < 0.01). Additionally, the intervention group demonstrated shorter ulcer healing times (mean 5.2 vs 7.8 days, P < 0.05), lower complication rates (10% vs 22%, P < 0.05), and reduced length of hospital stay (mean 12.3 vs 15.7 days, P < 0.05). CONCLUSION: This multi-modal intervention protocol significantly reduced the incidence of stress ulcers and improved clinical outcomes in critically ill older patients after gastrointestinal surgery. This comprehensive approach may provide a valuable strategy for managing high-risk populations in intensive care settings.
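As a rough check of the headline comparison (15/100 vs 30/100 ulcers), a pooled two-proportion z-test can be computed. The abstract does not state which test the authors used, so this choice of test is an assumption made purely for illustration.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    # Two-proportion z-test with a pooled standard error, the usual test
    # behind comparisons of two incidence rates.
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 15/100 ulcers with the protocol vs 30/100 with standard care.
z = two_proportion_z(15, 100, 30, 100)
print(round(z, 3))
```

The large negative z statistic reflects the sizable incidence reduction reported for the intervention group.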
Keywords: stress ulcers; older patients; gastrointestinal surgery; critical care; multi-modal intervention
11. Multi-modal intelligent situation awareness in real-time air traffic control: Control intent understanding and flight trajectory prediction
Authors: Dongyue Guo, Jianwei Zhang, Bo Yang, Yi Lin. Chinese Journal of Aeronautics, 2025, No. 6, pp. 41-57 (17 pages).
With the advent of the next-generation Air Traffic Control (ATC) system, there is growing interest in using Artificial Intelligence (AI) techniques to enhance Situation Awareness (SA) for ATC Controllers (ATCOs), i.e., Intelligent SA (ISA). However, the existing AI-based SA approaches often rely on unimodal data and lack a comprehensive description and benchmark of the ISA tasks utilizing multi-modal data for real-time ATC environments. To address this gap, by analyzing the situation awareness procedure of the ATCOs, the ISA task is refined to the processing of the two primary elements, i.e., spoken instructions and flight trajectories. Subsequently, the ISA is further formulated into Controlling Intent Understanding (CIU) and Flight Trajectory Prediction (FTP) tasks. For the CIU task, an innovative automatic speech recognition and understanding framework is designed to extract the controlling intent from unstructured and continuous ATC communications. For the FTP task, single- and multi-horizon FTP approaches are investigated to support the high-precision prediction of the situation evolution. A total of 32 unimodal/multi-modal advanced methods with extensive evaluation metrics are introduced to conduct the benchmarks on the real-world multi-modal ATC situation dataset. Experimental results demonstrate the effectiveness of AI-based techniques in enhancing ISA for the ATC environment.
Keywords: air traffic control; automatic speech recognition and understanding; flight trajectory prediction; multi-modal; situation awareness
12. MMGC-Net: Deep neural network for classification of mineral grains using multi-modal polarization images
Authors: Jun Shu, Xiaohai He, Qizhi Teng, Pengcheng Yan, Haibo He, Honggang Chen. Journal of Rock Mechanics and Geotechnical Engineering, 2025, No. 6, pp. 3894-3909 (16 pages).
The multi-modal characteristics of mineral particles play a pivotal role in enhancing the classification accuracy, which is critical for obtaining a profound understanding of the Earth's composition and ensuring effective exploitation and utilization of its resources. However, the existing methods for classifying mineral particles do not fully utilize these multi-modal features, thereby limiting the classification accuracy. Furthermore, when conventional multi-modal image classification methods are applied to plane-polarized and cross-polarized sequence images of mineral particles, they encounter issues such as information loss, misaligned features, and challenges in spatiotemporal feature extraction. To address these challenges, we propose a multi-modal mineral particle polarization image classification network (MMGC-Net) for precise mineral particle classification. Initially, MMGC-Net employs a two-dimensional (2D) backbone network with shared parameters to extract features from the two types of polarized images to ensure feature alignment. Subsequently, a cross-polarized intra-modal feature fusion module is designed to refine the spatiotemporal features from the extracted features of the cross-polarized sequence images. Ultimately, the inter-modal feature fusion module integrates the two types of modal features to enhance the classification precision. Quantitative and qualitative experimental results indicate that when compared with the current state-of-the-art multi-modal image classification methods, MMGC-Net demonstrates marked superiority in terms of mineral particle multi-modal feature learning and four classification evaluation metrics. It also demonstrates better stability than the existing models.
Keywords: mineral particles; multi-modal image classification; shared parameters; feature fusion; spatiotemporal features
13. Effectiveness of Multi-Modal Teaching Based on Online Case Libraries in the Education of Gene Methylation Combined with Spiral CT Screening for Pulmonary Ground-Glass Opacity Nodules
Authors: Yong Zhou, Xi Zhang, Shuyi Liu, Zhuoyi He, Weili Tian, Shuping You. Proceedings of Anticancer Research, 2025, No. 1, pp. 21-26 (6 pages).
Objective: To explore the effectiveness of multi-modal teaching based on an online case library in the education of gene methylation combined with spiral computed tomography (CT) screening for pulmonary ground-glass opacity (GGO) nodules. Methods: From October 2023 to April 2024, 66 medical imaging students were selected and randomly divided into a control group and an observation group, each with 33 students. The control group received traditional lecture-based teaching, while the observation group was taught using a multi-modal teaching approach based on an online case library. Performance on assessments and teaching quality were analyzed between the two groups. Results: The observation group achieved higher scores in theoretical and practical knowledge compared to the control group (P < 0.05). Additionally, the teaching quality scores were significantly higher in the observation group (P < 0.05). Conclusion: Implementing multi-modal teaching based on an online case library for pulmonary GGO nodule screening with gene methylation combined with spiral CT can enhance students' knowledge acquisition, improve teaching quality, and has significant clinical application value.
Keywords: multi-modal teaching based on an online case library; pulmonary nodules; gene methylation; computed tomography
14. Personal Style Guided Outfit Recommendation with Multi-Modal Fashion Compatibility Modeling
Authors: Wang Kexin, Zhang Jie, Zhang Peng, Sun Kexin, Zhan Jiamei, Wei Meng. Journal of Donghua University (English Edition), 2025, No. 2, pp. 156-167 (12 pages).
Personalized outfit recommendation has emerged as a hot research topic in the fashion domain. However, existing recommendations do not fully exploit user style preferences. Typically, users prefer particular styles, such as casual and athletic styles, and consider attributes like color and texture when selecting outfits. To achieve personalized outfit recommendations in line with user style preferences, this paper proposes a personal style guided outfit recommendation with multi-modal fashion compatibility modeling, termed PSGNet. Firstly, a style classifier is designed to categorize fashion images of various clothing types and attributes into distinct style categories. Secondly, a personal style prediction module extracts user style preferences by analyzing historical data. Then, to address the limitations of single-modal representations and enhance fashion compatibility, both fashion images and text data are leveraged to extract multi-modal features. Finally, PSGNet integrates these components through Bayesian personalized ranking (BPR) to unify personal style and fashion compatibility, where the former is used as personal style features and guides the output of the personalized outfit recommendation tailored to the target user. Extensive experiments on large-scale datasets demonstrate that the proposed model is efficient for personalized outfit recommendation.
Keywords: personalized outfit recommendation; fashion compatibility modeling; style preference; multi-modal representation; Bayesian personalized ranking (BPR); style classifier
15. Tomato Growth Height Prediction Method by Phenotypic Feature Extraction Using Multi-modal Data
Authors: Gong Yu, Wang Ling, Zhao Rongqiang, You Haibo, Zhou Mo, Liu Jie. Smart Agriculture, 2025, No. 1, pp. 97-110 (14 pages).
[Objective] Accurate prediction of tomato growth height is crucial for optimizing production environments in smart farming. However, current prediction methods predominantly rely on empirical, mechanistic, or learning-based models that utilize either image data or environmental data. These methods fail to fully leverage multi-modal data to capture the diverse aspects of plant growth comprehensively. [Methods] To address this limitation, a two-stage phenotypic feature extraction (PFE) model based on the deep learning algorithms of recurrent neural networks (RNN) and long short-term memory (LSTM) was developed. The model integrated environment and plant information to provide a holistic understanding of the growth process, employed phenotypic and temporal feature extractors to comprehensively capture both types of features, and enabled a deeper understanding of the interaction between tomato plants and their environment, ultimately leading to highly accurate predictions of growth height. [Results and Discussions] The experimental results showed the model's effectiveness: when predicting the next two days based on the past five days, the PFE-based RNN and LSTM models achieved mean absolute percentage errors (MAPE) of 0.81% and 0.40%, respectively, significantly lower than the 8.00% MAPE of the large language model (LLM) and the 6.72% MAPE of the Transformer-based model. In longer-term predictions, the 10-day prediction for 4 days ahead and the 30-day prediction for 12 days ahead, the PFE-RNN model continued to outperform the other two baseline models, with MAPEs of 2.66% and 14.05%, respectively. [Conclusions] The proposed method, which leverages phenotypic-temporal collaboration, shows great potential for intelligent, data-driven management of tomato cultivation, making it a promising approach for enhancing the efficiency and precision of smart tomato planting management.
Keywords: tomato growth prediction; deep learning; phenotypic feature extraction; multi-modal data; recurrent neural network; long short-term memory; large language model
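The abstract compares models by mean absolute percentage error (MAPE). As an illustration of how that metric is computed, here is a minimal stdlib-only sketch; the height values below are hypothetical examples, not data from the paper:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    if len(actual) != len(predicted) or not actual:
        raise ValueError("inputs must be non-empty and of equal length")
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical 2-day-ahead height predictions (cm) against measured values.
measured = [52.0, 54.5]
predicted = [51.8, 54.3]
print(round(mape(measured, predicted), 2))  # → 0.38
```

A lower MAPE means predictions deviate from the measured heights by a smaller fraction on average, which is why the PFE-LSTM's 0.40% substantially outperforms the 8.00% of the LLM baseline.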
SwinHCAD: A Robust Multi-Modality Segmentation Model for Brain Tumors Using Transformer and Channel-Wise Attention
16
Authors: Seyong Jin, Muhammad Fayaz, L. Minh Dang, Hyoung-Kyu Song, Hyeonjoon Moon 《Computers, Materials & Continua》, 2026, No. 1, pp. 511-533 (23 pages)
Brain tumors require precise segmentation for diagnosis and treatment planning due to their complex morphology and heterogeneous characteristics. While MRI-based automatic brain tumor segmentation technology reduces the burden on medical staff and provides quantitative information, existing methodologies and recent models still struggle to accurately capture and classify the fine boundaries and diverse morphologies of tumors. To address these challenges and maximize the performance of brain tumor segmentation, this research introduces a novel SwinUNETR-based model by integrating a new decoder block, the Hierarchical Channel-wise Attention Decoder (HCAD), into a powerful SwinUNETR encoder. The HCAD decoder block utilizes hierarchical features and channel-specific attention mechanisms to fuse information at the different scales transmitted from the encoder and to preserve spatial details throughout the reconstruction phase. Rigorous evaluations on the recent BraTS GLI datasets demonstrate that the proposed SwinHCAD model achieved superior segmentation accuracy on both the Dice score and HD95 metrics across all tumor subregions (WT, TC, and ET) compared to baseline models. In particular, ablation studies clarified the rationale and contribution of the model design and verified the effectiveness of the proposed HCAD decoder block. The results of this study are expected to contribute greatly to enhancing the efficiency of clinical diagnosis and treatment planning by increasing the precision of automated brain tumor segmentation.
Keywords: attention mechanism; brain tumor segmentation; channel-wise attention decoder; deep learning; medical imaging; MRI; Transformer; U-Net
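The abstract does not spell out the HCAD block's internals, but channel-wise attention is commonly implemented squeeze-and-excitation style: pool each channel to a scalar, pass the descriptors through two small dense layers, and gate the channels with the resulting sigmoid weights. A minimal stdlib-only sketch of that pattern, where `w1`/`w2` and the tiny 2-channel feature map are illustrative stand-ins for the learned layers, not the paper's actual design:

```python
import math

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation-style channel gating on a [C][H][W] feature map.
    w1 and w2 (C x C matrices) stand in for the learned fully connected layers."""
    C = len(feature_map)
    # Squeeze: global average pool each channel to a scalar descriptor.
    desc = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feature_map]
    # Excitation: two tiny dense layers, ReLU then sigmoid, yielding one gate per channel.
    hidden = [max(0.0, sum(w1[i][j] * desc[j] for j in range(C))) for i in range(C)]
    gates = [1.0 / (1.0 + math.exp(-sum(w2[i][j] * hidden[j] for j in range(C))))
             for i in range(C)]
    # Reweight: scale every value in each channel by that channel's gate.
    return [[[v * gates[c] for v in row] for row in feature_map[c]] for c in range(C)]

# Toy 2-channel, 1x2 feature map with identity weights: gates are sigmoid(1), sigmoid(2).
fm = [[[1.0, 1.0]], [[2.0, 2.0]]]
I = [[1.0, 0.0], [0.0, 1.0]]
print(channel_attention(fm, I, I))
```

In a decoder such as HCAD, gating of this kind lets the network emphasize the channels most informative for a given tumor subregion before upsampling continues.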
Value of DCE-MRI in the Differential Diagnosis of Granulomatous Mastitis and Non-Mass-Like Enhancement Breast Cancer (cited: 2)
17
Authors: Duan Xiaoling, Chen Shuming, Zhang Pan 《中国CT和MRI杂志》 (Chinese Journal of CT and MRI), 2025, No. 1, pp. 103-105 (3 pages)
Objective: To analyze the dynamic contrast-enhanced MRI (DCE-MRI) features of granulomatous mastitis (GM) and non-mass-like enhancement (NMLE) breast cancer, in order to improve the differential diagnosis of GM and breast cancer. Methods: Sixty lesions presenting as NMLE on DCE-MRI were retrospectively analyzed for diffusion-weighted imaging (DWI) signal characteristics, enhancement distribution, and enhancement pattern; the Pearson chi-square test was used to compare the distribution of ring enhancement and the degree of ring-wall enhancement in different phases. Results: The GM group comprised 25 NMLE lesions, with enhancement distribution of focal (3), regional (8), multi-regional (5), linear or ductal (0), segmental (6), and diffuse (3); enhancement pattern of heterogeneous (2) and rings of varying size (23); and time-signal intensity curves (TIC) in the solid lesion area of type I in 9 cases, type II in 16, and type III in 0. The breast cancer group comprised 35 NMLE lesions, with enhancement distribution of focal (5), regional (15), multi-regional (5), linear or ductal (3), segmental (4), and diffuse (4); enhancement pattern of homogeneous (3), heterogeneous (6), and clustered/cluster-ring (26); and TIC curves of type I in 7 cases, type II in 12, and type III in 16. Between the GM and breast cancer groups, the differences in DWI hyperintensity of the ring wall and ring contents (P=0.00, P=0.00), ring enhancement of varying sizes (P=0.02), clustered/cluster-ring enhancement (P=0.01), and marked late-phase ring-wall enhancement (P=0.01) were statistically significant, whereas marked early-phase ring-wall enhancement (P=0.07) was not. Conclusion: NMLE lesions with linear or segmental enhancement distribution and small cluster-ring enhancement mostly suggest malignancy; NMLE lesions with regional or diffuse distribution, ring enhancement of varying sizes, and progressively homogeneous enhancement over time mostly suggest GM.
Keywords: dynamic contrast-enhanced MRI; non-mass-like enhancement; granulomatous mastitis; breast cancer
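The group comparisons above rely on the Pearson chi-square test. For a 2x2 contingency table the statistic has a closed form, sketched below with stdlib-only Python; the counts are a hypothetical dichotomy loosely modeled on the reported ring-enhancement frequencies, not the paper's actual table:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Hypothetical counts: feature present/absent in the GM group (row 1)
# versus the breast cancer group (row 2).
stat = chi_square_2x2(23, 2, 9, 26)
print(round(stat, 2))  # → 25.74
```

The statistic is then compared against the chi-square distribution with one degree of freedom to obtain the P values reported in the abstract (values this large correspond to P well below 0.05).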
Application of King Goal Attainment Theory-Based Nursing in MRI Examination of Osteoporotic Thoracolumbar Fractures
18
Authors: Zhang Yuanyuan, Han Qicai, Zhang Wei, Wang Liping 《中国矫形外科杂志》 (Orthopedic Journal of China) (北大核心), 2025, No. 9, pp. 861-864 (4 pages)
[Objective] To explore the value of a clinical nursing pathway based on King's goal attainment theory in magnetic resonance imaging (MRI) examination of patients with osteoporotic thoracolumbar fractures (OTLF). [Methods] Using a random number table, 201 OTLF patients who underwent MRI examination from March 2021 to January 2023 were divided into a King group (n=101) and a conventional group (n=100), which received the King-theory-based clinical nursing pathway and the conventional clinical nursing pathway, respectively. Psychological stress, perceived control, and other indicators were compared between the groups. [Results] After the intervention, the CPSS score and negative-coping score of the King group were significantly lower than those of the conventional group (P<0.05), while the CAS-R score, positive-coping score, and MRI examination compliance were all higher (P<0.05). The overall incidence of nursing adverse events in the King group was significantly lower than in the conventional group (0% vs 5.0%, P<0.05), and nursing satisfaction was higher [(87.9±9.6) vs (71.4±5.7), P<0.05]. [Conclusion] The King-theory-based clinical nursing pathway effectively relieves psychological stress during MRI examination in OTLF patients and improves perceived control and positive coping, which helps increase examination compliance and nursing satisfaction while reducing nursing adverse events.
Keywords: osteoporotic thoracolumbar fracture; MRI examination; King goal attainment theory; clinical nursing pathway
Clinical, Muscle Pathology, MRI, and Molecular Biology Features of Riboflavin-Responsive Lipid Storage Myopathy
19
Authors: Wu Shitao, Zhang Min, Shi Weiwei, Zhou Lidan, Liu Hengfang 《郑州大学学报(医学版)》 (Journal of Zhengzhou University, Medical Sciences) (北大核心), 2025, No. 3, pp. 358-362 (5 pages)
Objective: To investigate the clinical, muscle pathology, MRI, and molecular biology features of riboflavin-responsive lipid storage myopathy (RR-MADD). Methods: Data from 20 patients with RR-MADD caused by electron transfer flavoprotein dehydrogenase (ETFDH) gene mutations, diagnosed and treated at the Fifth Affiliated Hospital of Zhengzhou University from January 2013 to December 2023, were collected, and their clinical, muscle pathology, muscle MRI, and molecular biology features were retrospectively analyzed. Results: (1) The main clinical features were fluctuating proximal limb weakness, fatigue intolerance, neck muscle weakness, and masticatory muscle weakness; serum creatine kinase was mildly to moderately elevated in most patients, with rhabdomyolysis in a few; electromyography mainly showed myogenic damage; all 20 patients responded well to riboflavin treatment. (2) HE staining showed numerous small vacuoles within muscle fibers, some fusing into slits; oil red O staining showed the vacuoles or slits filled with abundant lipid droplets; ATPase staining showed that the affected fibers were mainly type I. (3) All 20 patients showed symmetrical fatty infiltration of the long head of the biceps femoris, the semimembranosus, and the semitendinosus bilaterally, with high signal on T1WI and T2WI and low signal on STIR; 12 patients had muscle edema in the subacute phase, with isointense T1WI, high T2WI, and high STIR signal. (4) ETFDH gene mutations were found in all 20 patients: 15 compound heterozygous, 3 homozygous, and 2 single heterozygous; the c.1781T>C, c.1327T>C, c.1411A>G, and c.1277_1278insA mutations had not been previously reported. Conclusion: The clinical, muscle pathology, and MRI findings of RR-MADD are characteristic to a degree; the ETFDH c.1781T>C, c.1327T>C, c.1411A>G, and c.1277_1278insA mutations are reported here for the first time, enriching the RR-MADD mutation spectrum in China.
Keywords: riboflavin-responsive lipid storage myopathy; clinical manifestations; muscle pathology; muscle MRI
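The mutations above are written in HGVS-style coding notation (e.g., c.1781T>C is a T-to-C substitution at coding position 1781, and c.1277_1278insA is an A inserted between positions 1277 and 1278). A minimal sketch of parsing the two forms that appear in this abstract; this is an illustration only, not a full HGVS parser:

```python
import re

# Patterns for simple coding substitutions ("c.1781T>C") and
# small insertions ("c.1277_1278insA") only.
SUB = re.compile(r"^c\.(\d+)([ACGT])>([ACGT])$")
INS = re.compile(r"^c\.(\d+)_(\d+)ins([ACGT]+)$")

def parse_variant(v):
    """Return a dict describing a substitution or insertion variant."""
    m = SUB.match(v)
    if m:
        return {"kind": "substitution", "pos": int(m.group(1)),
                "ref": m.group(2), "alt": m.group(3)}
    m = INS.match(v)
    if m:
        return {"kind": "insertion", "start": int(m.group(1)),
                "end": int(m.group(2)), "inserted": m.group(3)}
    raise ValueError(f"unrecognized variant: {v}")

print(parse_variant("c.1781T>C"))
print(parse_variant("c.1277_1278insA"))
```

Structured parsing of this kind is how variant lists from sequencing reports are typically matched against mutation databases when building a spectrum like the one described here.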