Abstract: Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics, advancing precision medicine by enabling integration of and learning from diverse data sources. The exponential growth of high-dimensional healthcare data, encompassing genomic, transcriptomic, and other omics profiles as well as radiological imaging and histopathological slides, makes this approach increasingly important because, when examined separately, these data sources offer only a fragmented picture of intricate disease processes. Multimodal deep learning leverages the complementary properties of multiple data modalities to enable more accurate prognostic modeling, more robust disease characterization, and improved treatment decision-making. This review provides a comprehensive overview of the current state of multimodal deep learning approaches in medical diagnosis. We classify and examine important application domains, including (1) radiology, where image-text integration facilitates automated report generation and lesion detection; (2) histopathology, where fusion models improve tumor classification and grading; and (3) multi-omics, where cross-modal learning reveals molecular subtypes and latent biomarkers. For each domain, we provide an overview of representative research, methodological advancements, and clinical implications. Additionally, we critically analyze the fundamental issues preventing wider adoption, including computational complexity (particularly in training scalable, multi-branch networks), data heterogeneity (resulting from modality-specific noise, resolution variations, and inconsistent annotations), and the challenge of preserving meaningful cross-modal correlations during fusion. These problems impede not only performance and generalizability but also interpretability, which is crucial for clinical trust and use. Lastly, we outline important areas for future research, including the development of standardized protocols for harmonizing data, the creation of lightweight and interpretable fusion architectures, the integration of real-time clinical decision support systems, and the promotion of collaboration through federated multimodal learning. Through this review, we aim to provide researchers and clinicians with a concise overview of the field's present state, enduring constraints, and promising directions for further research.
Abstract: Cross-lingual image description, the task of generating image captions in a target language from images and descriptions in a source language, is addressed in this study through a novel approach that combines neural network models and semantic matching techniques. Experiments conducted on the Flickr8k and AraImg2k benchmark datasets, featuring images and descriptions in English and Arabic, show substantial performance improvements over state-of-the-art methods. Our model, equipped with the Image & Cross-Language Semantic Matching module and the Target Language Domain Evaluation module, significantly enhances the semantic relevance of generated image descriptions. For English-to-Arabic and Arabic-to-English cross-lingual image description, our approach achieves CIDEr scores of 87.9% and 81.7% for English and Arabic, respectively, underscoring the substantial contributions of our methodology. Comparative analyses with previous works further affirm the superior performance of our approach, and visual results show that our model generates image captions that are both semantically accurate and stylistically consistent with the target language. In summary, this study advances the field of cross-lingual image description, offering an effective solution for generating image captions across languages, with the potential to improve multilingual communication and accessibility. Future research directions include expanding to more languages and incorporating diverse visual and textual data sources.
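For readers who want to reproduce CIDEr comparisons like the ones reported above, the sketch below scores candidate captions against reference captions using the pycocoevalcap package. The image IDs and captions are invented for illustration, and this is a generic evaluation recipe under the assumption that pycocoevalcap's Cider API is available, not the authors' exact pipeline.

```python
# A minimal sketch of corpus-level CIDEr scoring (pip install pycocoevalcap).
from pycocoevalcap.cider.cider import Cider

# Keys are image IDs; references may hold several captions per image,
# candidates must hold exactly one. All strings here are illustrative.
references = {
    "img1": ["a man rides a horse on the beach", "a person riding a horse"],
    "img2": ["two dogs play in the snow"],
}
candidates = {
    "img1": ["a man riding a horse on a beach"],
    "img2": ["two dogs playing in snow"],
}

scorer = Cider()
corpus_score, per_image_scores = scorer.compute_score(references, candidates)
print(corpus_score, per_image_scores)
```

Note that CIDEr weights n-grams by corpus-level TF-IDF, so scores are only meaningful over a reasonably large evaluation set, not the two-image toy corpus shown here.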
Funding: Supported by the National Key Research and Development Program of China under Grant No. 2018YFC1707704.
Abstract: Objective: To develop a multimodal deep-learning model for classifying Chinese medicine constitution, i.e., the balanced and unbalanced constitutions, based on inspection of tongue and face images, pulse waves from palpation, and health information from a total of 540 subjects. Methods: The study data consisted of tongue and face images, pulse waves obtained by palpation, and health information, including personal information, life habits, medical history, and current symptoms, from 540 subjects (202 males and 338 females). Convolutional neural networks, recurrent neural networks, and fully connected neural networks were used to extract deep features from the data. Feature-fusion and decision-fusion models were constructed for the multimodal data. Results: The optimal models for tongue and face images, pulse waves, and health information were ResNet18, the gated recurrent unit (GRU), and entity embedding, respectively. Feature fusion was superior to decision fusion. The multimodal analysis revealed that multimodal data compensated for the loss of information from any single modality, resulting in improved classification performance. Conclusions: Multimodal data fusion can supplement single-modality information and improve classification performance. Our research underscores the effectiveness of multimodal deep learning for identifying body constitution and for modernizing and improving the intelligent application of Chinese medicine.
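As an illustration of the feature-fusion design this abstract describes, here is a minimal PyTorch sketch that concatenates features from a ResNet18 image branch, a GRU pulse-wave branch, and entity embeddings for categorical health fields before a shared classifier. All layer sizes, the number and cardinality of the categorical fields, and the input shapes are illustrative assumptions rather than the paper's actual configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class ConstitutionFusionNet(nn.Module):
    """Feature-level fusion of images, pulse waves, and health records.

    Branch choices mirror the abstract (ResNet18, GRU, entity embedding);
    hidden sizes and the category vocabulary are illustrative assumptions.
    """
    def __init__(self, n_categories=(2, 5, 10), n_classes=2):
        super().__init__()
        # Image branch: ResNet18 backbone with the classification head removed.
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                      # yields a 512-d feature
        self.image_branch = backbone
        # Pulse-wave branch: GRU over a 1-D waveform.
        self.gru = nn.GRU(input_size=1, hidden_size=64, batch_first=True)
        # Health-information branch: one entity embedding per categorical field.
        self.embeddings = nn.ModuleList([nn.Embedding(n, 8) for n in n_categories])
        fused_dim = 512 + 64 + 8 * len(n_categories)
        self.classifier = nn.Sequential(
            nn.Linear(fused_dim, 128), nn.ReLU(), nn.Linear(128, n_classes)
        )

    def forward(self, image, pulse, health):
        img_feat = self.image_branch(image)              # (B, 512)
        _, h_n = self.gru(pulse)                         # h_n: (1, B, 64)
        pulse_feat = h_n.squeeze(0)                      # (B, 64)
        health_feat = torch.cat(
            [emb(health[:, i]) for i, emb in enumerate(self.embeddings)], dim=1
        )
        fused = torch.cat([img_feat, pulse_feat, health_feat], dim=1)
        return self.classifier(fused)

# Smoke test with random inputs (shapes are assumptions).
model = ConstitutionFusionNet()
logits = model(
    torch.randn(4, 3, 224, 224),       # tongue/face images
    torch.randn(4, 300, 1),            # pulse waveforms, 300 samples each
    torch.randint(0, 2, (4, 3)),       # three categorical health fields
)
print(logits.shape)                    # torch.Size([4, 2])
```

A decision-fusion variant, by contrast, would attach a separate classifier to each branch and combine the three sets of logits (e.g., by averaging), which is the alternative the study found inferior.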
Funding: Supported by grants from the National Natural Science Foundation of China (Key Program) (No. 82230124); the Traditional Chinese Medicine Inheritance and Innovation "Ten Million" Talent Project, Qihuang Project Chief Scientist Project (No. 0201000401); the State Administration of Traditional Chinese Medicine 2nd National Traditional Chinese Medicine Inheritance Studio Construction Project (Official Letter of the State Office of Traditional Chinese Medicine [2022] No. 245); and the National Natural Science Foundation of China (General Program) (No. 81974556).
Abstract: Artificial intelligence (AI) serves as a key technology in global industrial transformation and technological restructuring and as the core driver of the fourth industrial revolution. Currently, deep learning techniques such as convolutional neural networks enable intelligent information collection in fields such as tongue and pulse diagnosis owing to their robust feature-processing capabilities. Natural language processing models, including long short-term memory networks and transformers, have been applied to traditional Chinese medicine (TCM) for diagnosis, syndrome differentiation, and prescription generation. Traditional machine learning algorithms, such as neural networks, support vector machines, and random forests, are also widely used in TCM diagnosis and treatment because of their strong regression and classification performance on small structured datasets. Future research on AI in TCM diagnosis and treatment may emphasize building large-scale, high-quality TCM datasets with unified criteria based on syndrome elements; identifying algorithms suited to the distributions of TCM theoretical data; and leveraging multimodal fusion and ensemble learning techniques for diverse raw features, such as images, text, and manually processed structured data, to increase the clinical efficacy of TCM diagnosis and treatment.
Funding: National Natural Science Foundation of China, Grant/Award Number: 62102158; Fundamental Research Funds for the Central Universities, Grant/Award Number: 2662022JC004; 2021 Foshan Support Project for Promoting the Development of University Scientific and Technological Achievements Service Industry, Grant/Award Number: 2021DZXX05; Huazhong Agricultural University Scientific and Technological Self-Innovation Foundation.
Abstract: The prediction of drug-drug interactions (DDIs) is a crucial task in drug safety research, and identifying potential DDIs helps us to explore the mechanisms behind combinatorial therapy. Traditional wet-lab experiments for DDI detection are cumbersome, time-consuming, and small in scale, limiting the efficiency of DDI prediction. It is therefore particularly important to develop improved computational methods for detecting drug interactions. With the development of deep learning, several computational models based on deep learning have been proposed for DDI prediction. In this review, we summarize the high-quality deep learning-based DDI prediction methods of recent years and divide them into four categories: neural network-based methods, graph neural network-based methods, knowledge graph-based methods, and multimodal methods. Furthermore, we discuss the challenges of existing methods and potential future perspectives. This review shows that deep learning can significantly improve DDI prediction performance compared with traditional machine learning. Deep learning models can scale to large datasets and accept multiple data types as input, making DDI prediction more efficient and accurate.
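To make the first of the four categories concrete, here is a minimal sketch of a neural network-based DDI predictor that encodes each drug from a precomputed descriptor vector with a shared encoder and classifies the pair. The descriptor dimension, hidden sizes, and the number of interaction types are illustrative assumptions, not drawn from any particular surveyed model.

```python
import torch
import torch.nn as nn

class PairwiseDDINet(nn.Module):
    """Minimal neural network-based DDI predictor.

    Each drug is assumed to be represented by a precomputed descriptor
    vector (e.g., a 1024-bit molecular fingerprint); sizes are illustrative.
    """
    def __init__(self, in_dim=1024, hidden=256, n_interaction_types=86):
        super().__init__()
        # Shared encoder so both drugs live in the same embedding space.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        self.classifier = nn.Linear(2 * hidden, n_interaction_types)

    def forward(self, drug_a, drug_b):
        h_a, h_b = self.encoder(drug_a), self.encoder(drug_b)
        # Symmetric combination: the prediction should not depend on pair order.
        pair = torch.cat([h_a + h_b, (h_a - h_b).abs()], dim=1)
        return self.classifier(pair)

model = PairwiseDDINet()
logits = model(torch.rand(8, 1024), torch.rand(8, 1024))
print(logits.shape)  # torch.Size([8, 86])
```

The graph neural network-based category would replace the fixed descriptor vectors with embeddings learned directly from molecular graphs, and the knowledge graph-based category would draw the drug embeddings from a biomedical knowledge graph instead.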
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 51875507 and 52005439) and the Key Research and Development Program of Zhejiang Province (Grant No. 2021C01018).
Abstract: Automated waste sorting can dramatically increase waste sorting efficiency and reduce regulation costs. Most current methods use only a single modality, such as image data or acoustic data, for waste classification, which makes it difficult to classify mixed and easily confused wastes. In these complex situations, using multiple modalities becomes necessary to achieve high classification accuracy. Traditionally, the fusion of multiple modalities has been limited by fixed handcrafted features. In this study, a deep-learning approach was applied to multimodal fusion at the feature level for municipal solid-waste sorting. More specifically, a pre-trained VGG16 and one-dimensional convolutional neural networks (1D CNNs) were used to extract features from visual data and acoustic data, respectively. These deeply learned features were then fused in the fully connected layers for classification. The results of comparative experiments showed that the proposed method was superior to single-modality methods. Additionally, the feature-based fusion strategy performed better than the decision-based strategy with deeply learned features.
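The sketch below illustrates the feature-level fusion pipeline this abstract describes: VGG16 convolutional features from the image and 1D-CNN features from the acoustic signal, concatenated and classified in fully connected layers. The acoustic branch layout, feature dimensions, waveform length, and the number of waste classes are illustrative assumptions, not the study's reported configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class WasteFusionNet(nn.Module):
    """Feature-level fusion of VGG16 image features and 1D-CNN acoustic features.

    Mirrors the pipeline described in the abstract; layer sizes and the
    six waste classes are illustrative assumptions.
    """
    def __init__(self, n_classes=6):
        super().__init__()
        # Visual branch: VGG16 convolutional backbone (pretrained weights optional).
        vgg = models.vgg16(weights=None)
        self.visual = nn.Sequential(vgg.features, vgg.avgpool, nn.Flatten())  # 25088-d
        # Acoustic branch: small 1D CNN over the raw impact-sound waveform.
        self.acoustic = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),                            # 32-d
        )
        # Fusion in the fully connected layers, as in the abstract.
        self.classifier = nn.Sequential(
            nn.Linear(25088 + 32, 256), nn.ReLU(), nn.Linear(256, n_classes)
        )

    def forward(self, image, audio):
        fused = torch.cat([self.visual(image), self.acoustic(audio)], dim=1)
        return self.classifier(fused)

# Smoke test with random inputs (shapes are assumptions).
model = WasteFusionNet()
out = model(torch.randn(2, 3, 224, 224), torch.randn(2, 1, 8000))
print(out.shape)  # torch.Size([2, 6])
```

The decision-based alternative the study compared against would give each branch its own classifier and merge the per-modality predictions afterwards; the abstract reports that fusing at the feature level, as above, worked better with deeply learned features.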