Journal Literature
970 articles found
1. Multimodal clinical parameters-based immune status associated with the prognosis in patients with hepatocellular carcinoma
Authors: Yu-Zhou Zhang, Yuan-Ze Tang, Yun-Xuan He, Shu-Tong Pan, Hao-Cheng Dai, Yu Liu, Hai-Feng Zhou. World Journal of Gastrointestinal Oncology, 2026, No. 1: 75-91 (17 pages).
Hepatocellular carcinoma presents with three distinct immune phenotypes, including immune-desert, immune-excluded, and immune-inflamed, indicating various treatment responses and prognostic outcomes. The clinical application of multi-omics parameters is still restricted by expensive and less accessible assays, although they accurately reflect immune status. A comprehensive evaluation framework based on “easy-to-obtain” multimodal clinical parameters is urgently required, incorporating clinical features to establish baseline patient profiles and disease staging; routine blood tests assessing systemic metabolic and functional status; immune cell subsets quantifying subcluster dynamics; imaging features delineating tumor morphology, spatial configuration, and perilesional anatomical relationships; and immunohistochemical markers providing qualitative and quantitative detection of tumor antigens at the cellular and molecular levels. This integrated phenomic approach aims to improve prognostic stratification and clinical decision-making in hepatocellular carcinoma management conveniently and practically.
Keywords: hepatocellular carcinoma; immune status; phenotype; multimodal parameters; prognosis
2. Multimodal artificial intelligence integrates imaging, endoscopic, and omics data for intelligent decision-making in individualized gastrointestinal tumor treatment
Authors: Hui Nian, Yi-Bin Wu, Yu Bai, Zhi-Long Zhang, Xiao-Huang Tu, Qi-Zhi Liu, De-Hua Zhou, Qian-Cheng Du. Artificial Intelligence in Gastroenterology, 2026, No. 1: 1-19 (19 pages).
Gastrointestinal tumors require personalized treatment strategies due to their heterogeneity and complexity. Multimodal artificial intelligence (AI) addresses this challenge by integrating diverse data sources, including computed tomography (CT), magnetic resonance imaging (MRI), endoscopic imaging, and genomic profiles, to enable intelligent decision-making for individualized therapy. This approach leverages AI algorithms to fuse imaging, endoscopic, and omics data, facilitating comprehensive characterization of tumor biology, prediction of treatment response, and optimization of therapeutic strategies. By combining CT and MRI for structural assessment, endoscopic data for real-time visual inspection, and genomic information for molecular profiling, multimodal AI enhances the accuracy of patient stratification and treatment personalization. The clinical implementation of this technology demonstrates potential for improving patient outcomes, advancing precision oncology, and supporting individualized care in gastrointestinal cancers. Ultimately, multimodal AI serves as a transformative tool in oncology, bridging data integration with clinical application to effectively tailor therapies.
Keywords: multimodal artificial intelligence; gastrointestinal tumors; individualized therapy; intelligent diagnosis; treatment optimization; prognostic prediction; data fusion; deep learning; precision medicine
3. AI-driven integration of multi-omics and multimodal data for precision medicine
Author: Heng-Rui Liu. Medical Data Mining, 2026, No. 1: 1-2 (2 pages).
High-throughput transcriptomics has evolved from bulk RNA-seq to single-cell and spatial profiling, yet its clinical translation still depends on effective integration across diverse omics and data modalities. Emerging foundation models and multimodal learning frameworks are enabling scalable and transferable representations of cellular states, while advances in interpretability and real-world data integration are bridging the gap between discovery and clinical application. This paper outlines a concise roadmap for AI-driven, transcriptome-centered multi-omics integration in precision medicine (Figure 1).
Keywords: high-throughput transcriptomics; multi-omics; single-cell; multimodal learning frameworks; foundation models; omics data modalities; emerging AI-driven precision medicine
4. A Comprehensive Review of Multimodal Deep Learning for Enhanced Medical Diagnostics (Cited: 1)
Authors: Aya M. Al-Zoghby, Ahmed Ismail Ebada, Aya S. Saleh, Mohammed Abdelhay, Wael A. Awad. Computers, Materials & Continua, 2025, No. 9: 4155-4193 (39 pages).
Multimodal deep learning has emerged as a key paradigm in contemporary medical diagnostics, advancing precision medicine by enabling integration and learning from diverse data sources. The exponential growth of high-dimensional healthcare data, encompassing genomic, transcriptomic, and other omics profiles, as well as radiological imaging and histopathological slides, makes this approach increasingly important because, when examined separately, these data sources only offer a fragmented picture of intricate disease processes. Multimodal deep learning leverages the complementary properties of multiple data modalities to enable more accurate prognostic modeling, more robust disease characterization, and improved treatment decision-making. This review provides a comprehensive overview of the current state of multimodal deep learning approaches in medical diagnosis. We classify and examine important application domains, such as (1) radiology, where automated report generation and lesion detection are facilitated by image-text integration; (2) histopathology, where fusion models improve tumor classification and grading; and (3) multi-omics, where molecular subtypes and latent biomarkers are revealed through cross-modal learning. We provide an overview of representative research, methodological advancements, and clinical consequences for each domain. Additionally, we critically analyze the fundamental issues preventing wider adoption, including computational complexity (particularly in training scalable, multi-branch networks), data heterogeneity (resulting from modality-specific noise, resolution variations, and inconsistent annotations), and the challenge of maintaining significant cross-modal correlations during fusion. These problems impede interpretability, which is crucial for clinical trust and use, in addition to performance and generalizability. Lastly, we outline important areas for future research, including the development of standardized protocols for harmonizing data, the creation of lightweight and interpretable fusion architectures, the integration of real-time clinical decision support systems, and the promotion of cooperation for federated multimodal learning. Our goal is to provide researchers and clinicians with a concise overview of the field's present state, enduring constraints, and exciting directions for further research through this review.
Keywords: multimodal deep learning; medical diagnostics; multimodal healthcare fusion; healthcare data integration
5. Performance vs. Complexity Comparative Analysis of Multimodal Bilinear Pooling Fusion Approaches for Deep Learning-Based Visual Arabic-Question Answering Systems
Authors: Sarah M. Kamel, Mai A. Fadel, Lamiaa Elrefaei, Shimaa I. Hassan. Computer Modeling in Engineering & Sciences, 2025, No. 4: 373-411 (39 pages).
Visual question answering (VQA) is a multimodal task, involving a deep understanding of the image scene and the question's meaning and capturing the relevant correlations between both modalities to infer the appropriate answer. In this paper, we propose a VQA system intended to answer yes/no questions about real-world images, in Arabic. To support a robust VQA system, we work in two directions: (1) using deep neural networks to semantically represent the given image and question in a fine-grained manner, namely ResNet-152 and Gated Recurrent Units (GRU); (2) studying the role of the utilized multimodal bilinear pooling fusion technique in the trade-off between the model complexity and the overall model performance. Some fusion techniques could significantly increase the model complexity, which seriously limits their applicability for VQA models. So far, there is no evidence of how efficient these multimodal bilinear pooling fusion techniques are for VQA systems dedicated to yes/no questions. Hence, a comparative analysis is conducted between eight bilinear pooling fusion techniques, in terms of their ability to reduce the model complexity and improve the model performance in this case of VQA systems. Experiments indicate that these multimodal bilinear pooling fusion techniques have improved the VQA model's performance, reaching a best performance of 89.25%. Further, experiments have proven that the number of answers in the developed VQA system is a critical factor that affects the effectiveness of these multimodal bilinear pooling techniques in achieving their main objective of reducing the model complexity. The Multimodal Local Perception Bilinear Pooling (MLPB) technique has shown the best balance between the model complexity and its performance, for VQA systems designed to answer yes/no questions.
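As context for the fusion family compared in this abstract: Multimodal Factorized Bilinear (MFB) pooling is one representative low-rank approximation of the full bilinear interaction between image and question features. The sketch below is illustrative only; the feature dimensions and factor count are assumptions, not the paper's configuration.

```python
# Hedged sketch of MFB-style bilinear pooling fusion (dims are assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MFBFusion(nn.Module):
    def __init__(self, img_dim=2048, q_dim=512, factor_dim=1024, k=5):
        super().__init__()
        self.k = k  # number of low-rank factors pooled per output unit
        # Project each modality into a shared (factor_dim * k) space.
        self.img_proj = nn.Linear(img_dim, factor_dim * k)
        self.q_proj = nn.Linear(q_dim, factor_dim * k)

    def forward(self, img_feat, q_feat):
        # Element-wise product approximates the bilinear interaction.
        joint = self.img_proj(img_feat) * self.q_proj(q_feat)
        # Sum-pool over the k factors -> (batch, factor_dim).
        joint = joint.view(-1, joint.size(1) // self.k, self.k).sum(dim=2)
        # Signed square-root and L2 normalization stabilize training.
        joint = torch.sign(joint) * torch.sqrt(torch.abs(joint) + 1e-8)
        return F.normalize(joint, dim=1)

fusion = MFBFusion()
out = fusion(torch.randn(4, 2048), torch.randn(4, 512))  # -> (4, 1024)
```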
Keywords: Arabic-VQA; deep learning-based VQA; deep multimodal information fusion; multimodal representation learning; VQA of yes/no questions; VQA model complexity; VQA model performance; performance-complexity trade-off
6. GLAMSNet: A Gated-Linear Aspect-Aware Multimodal Sentiment Network with Alignment Supervision and External Knowledge Guidance
Authors: Dan Wang, Zhoubin Li, Yuze Xia, Zhenhua Yu. Computers, Materials & Continua, 2025, No. 12: 5823-5845 (23 pages).
Multimodal Aspect-Based Sentiment Analysis (MABSA) aims to detect sentiment polarity toward specific aspects by leveraging both textual and visual inputs. However, existing models suffer from weak aspect-image alignment, modality imbalance dominated by textual signals, and limited reasoning for implicit or ambiguous sentiments requiring external knowledge. To address these issues, we propose a unified framework named Gated-Linear Aspect-Aware Multimodal Sentiment Network (GLAMSNet). First of all, an input encoding module is employed to construct modality-specific and aspect-aware representations. Subsequently, we introduce an image-aspect correlation matching module to provide hierarchical supervision for visual-textual alignment. Building upon these components, we further design a Gated-Linear Aspect-Aware Fusion (GLAF) module to enhance aspect-aware representation learning by adaptively filtering irrelevant textual information and refining semantic alignment under aspect guidance. Additionally, an external language model knowledge-guided mechanism is integrated to incorporate sentiment-aware prior knowledge from GPT-4o, enabling robust semantic reasoning, especially under noisy or ambiguous inputs. Experimental studies conducted on the Twitter-15 and Twitter-17 datasets demonstrate that the proposed model outperforms most state-of-the-art methods, achieving 79.36% accuracy and 74.72% F1-score, and 74.31% accuracy and 72.01% F1-score, respectively.
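The GLAF module is described only at a high level; one common way to realize aspect-guided gated-linear filtering is a sigmoid gate conditioned on the aspect embedding, as in this minimal sketch. The gating form and dimensions are assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class GatedLinearAspectFusion(nn.Module):
    """Gate token features with an aspect-conditioned sigmoid mask (sketch)."""
    def __init__(self, dim=768):
        super().__init__()
        self.value = nn.Linear(dim * 2, dim)
        self.gate = nn.Linear(dim * 2, dim)

    def forward(self, modality_feat, aspect_feat):
        # Condition on both the token features and the pooled aspect embedding.
        x = torch.cat([modality_feat, aspect_feat.expand_as(modality_feat)], dim=-1)
        # Sigmoid gate suppresses dimensions irrelevant to the aspect.
        return self.value(x) * torch.sigmoid(self.gate(x))

glaf = GatedLinearAspectFusion()
text = torch.randn(2, 20, 768)    # token-level text features
aspect = torch.randn(2, 1, 768)   # pooled aspect embedding
fused = glaf(text, aspect)        # -> (2, 20, 768)
```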
Keywords: sentiment analysis; multimodal aspect-based sentiment analysis; cross-modal alignment; multimodal sentiment classification; large language model
7. Low-Rank Adapter Layers and Bidirectional Gated Feature Fusion for Multimodal Hateful Memes Classification
Authors: Youwei Huang, Han Zhong, Cheng Cheng, Yijie Peng. Computers, Materials & Continua, 2025, No. 7: 1863-1882 (20 pages).
A hateful meme is a multimodal medium that combines images and text. The potential hate content of hateful memes has caused serious problems for social media security. The current hateful memes classification task faces significant data scarcity challenges, and direct fine-tuning of large-scale pre-trained models often leads to severe overfitting. In addition, it is a challenge to understand the underlying relationship between text and images in hateful memes. To address these issues, we propose a multimodal hateful memes classification model named LABF, which is based on low-rank adapter layers and bidirectional gated feature fusion. Firstly, low-rank adapter layers are adopted to learn the feature representation of the new dataset. This is achieved by introducing a small number of additional parameters while retaining the prior knowledge of the CLIP model, which effectively alleviates the overfitting phenomenon. Secondly, a bidirectional gated feature fusion mechanism is designed to dynamically adjust the interaction weights of text and image features to achieve finer cross-modal fusion. Experimental results show that the method significantly outperforms existing methods on two public datasets, verifying its effectiveness and robustness.
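The low-rank adapter idea the abstract describes follows the LoRA pattern: freeze the pre-trained weight and learn a small rank-r update alongside it. A minimal sketch, with the rank and scaling chosen for illustration rather than taken from the paper:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (W + scale * B A)."""
    def __init__(self, base: nn.Linear, rank=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False         # keep pre-trained (e.g., CLIP) knowledge frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # update starts at zero: no change at init
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

layer = LoRALinear(nn.Linear(512, 512), rank=8)
out = layer(torch.randn(4, 512))  # only ~8 * (512 + 512) new trainable weights
```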
Keywords: hateful meme; multimodal fusion; multimodal data; deep learning
8. TGNet: Intelligent Identification of Thunderstorm Wind Gusts Using Multimodal Fusion (Cited: 3)
Authors: Xiaowen ZHANG, Yongguang ZHENG, Hengde ZHANG, Jie SHENG, Bingjian LU, Shuo FENG. Advances in Atmospheric Sciences, 2025, No. 1: 146-164 (19 pages).
Thunderstorm wind gusts are small in scale, typically occurring within a range of a few kilometers. It is extremely challenging to monitor and forecast thunderstorm wind gusts using only automatic weather stations. Therefore, it is necessary to establish thunderstorm wind gust identification techniques based on multisource high-resolution observations. This paper introduces a new algorithm, called the thunderstorm wind gust identification network (TGNet). It leverages multimodal feature fusion to fuse the temporal and spatial features of thunderstorm wind gust events. The shapelet transform is first used to extract the temporal features of wind speeds from automatic weather stations, aimed at distinguishing thunderstorm wind gusts from those caused by synoptic-scale systems or typhoons. Then, an encoder, structured upon the U-shaped network (U-Net) and incorporating recurrent residual convolutional blocks (R2U-Net), is employed to extract the corresponding spatial convective characteristics of satellite, radar, and lightning observations. Finally, using a multimodal deep fusion module based on multi-head cross-attention, the temporal features of wind speed at each automatic weather station are incorporated into the spatial features to obtain 10-minute classifications of thunderstorm wind gusts. TGNet products have high accuracy, with a critical success index reaching 0.77. Compared with those of U-Net and R2U-Net, the false alarm rate of TGNet products decreases by 31.28% and 24.15%, respectively. The new algorithm provides grid products of thunderstorm wind gusts with a spatial resolution of 0.01°, updated every 10 minutes. The results are finer and more accurate, thereby helping to improve the accuracy of operational warnings for thunderstorm wind gusts.
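The fusion step the abstract describes, incorporating per-station temporal features into spatial convective features via multi-head cross-attention, can be sketched with PyTorch's stock attention module. The shapes, dimensions, and query/key assignment below are assumptions for illustration, not TGNet's actual configuration.

```python
import torch
import torch.nn as nn

# Per-station temporal features (queries) attend over flattened spatial
# convective features from the satellite/radar/lightning encoder (keys/values).
temporal = torch.randn(2, 100, 256)      # (batch, n_stations, d): shapelet features
spatial = torch.randn(2, 64 * 64, 256)   # flattened spatial feature map

cross_attn = nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)
fused, _ = cross_attn(query=temporal, key=spatial, value=spatial)
print(fused.shape)  # torch.Size([2, 100, 256]): spatial context per station
```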
Keywords: thunderstorm wind gusts; shapelet transform; multimodal deep feature fusion
9. Recent progress on artificial intelligence-enhanced multimodal sensors integrated devices and systems (Cited: 2)
Authors: Haihua Wang, Mingjian Zhou, Xiaolong Jia, Hualong Wei, Zhenjie Hu, Wei Li, Qiumeng Chen, Lei Wang. Journal of Semiconductors, 2025, No. 1: 179-192 (14 pages).
Multimodal sensor fusion can make full use of the advantages of various sensors, make up for the shortcomings of a single sensor, achieve information verification or information security through information redundancy, and improve the reliability and safety of the system. Artificial intelligence (AI), referring to the simulation of human intelligence in machines that are programmed to think and learn like humans, represents a pivotal frontier in modern scientific research. With the continuous development and promotion of AI technology in the Sensor 4.0 age, multimodal sensor fusion is becoming more and more intelligent and automated, and is expected to go further in the future. In this context, this review article takes a comprehensive look at recent progress on AI-enhanced multimodal sensors and their integrated devices and systems. Based on the concepts and principles of sensor technologies and AI algorithms, the theoretical underpinnings, technological breakthroughs, and pragmatic applications of AI-enhanced multimodal sensors in various fields such as robotics, healthcare, and environmental monitoring are highlighted. Through a comparative study of dual/tri-modal sensors with and without AI technologies (especially machine learning and deep learning), AI-enhanced multimodal sensors highlight the potential of AI to improve sensor performance, data processing, and decision-making capabilities. Furthermore, the review analyzes the challenges and opportunities afforded by AI-enhanced multimodal sensors and offers a prospective outlook on forthcoming advancements.
Keywords: sensor; multimodal sensors; machine learning; deep learning; intelligent system
10. A Flexible-Integrated Multimodal Hydrogel-Based Sensing Patch (Cited: 1)
Authors: Peng Wang, Guoqing Wang, Guifen Sun, Chenchen Bao, Yang Li, Chuizhou Meng, Zhao Yao. Nano-Micro Letters, 2025, No. 7: 107-125 (19 pages).
Sleep monitoring is an important part of health management because sleep quality is crucial for the restoration of human health. However, current commercial polysomnography products are cumbersome, with connecting wires, and state-of-the-art flexible sensors are still interferential when attached to the body. Herein, we develop a flexible-integrated multimodal sensing patch based on hydrogel and demonstrate its application in unconstrained sleep monitoring. The patch comprises a bottom hydrogel-based dual-mode pressure-temperature sensing layer and a top electrospun nanofiber-based non-contact detection layer as one integrated device. The hydrogel as core substrate exhibits strong toughness and water retention, and the multimodal sensing of temperature, pressure, and non-contact proximity is realized based on different sensing mechanisms with no crosstalk interference. The multimodal sensing function is verified in a simulated real-world scenario by a robotic hand grasping objects to validate its practicability. Multiple multimodal sensing patches integrated on different locations of a pillow are assembled for intelligent sleep monitoring. Versatile human-pillow interaction information, as well as its evolution over time, is acquired and analyzed by a one-dimensional convolutional neural network. Tracking of head movement and recognition of bad patterns that may lead to poor sleep are achieved, which provides a promising approach for sleep monitoring.
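The one-dimensional convolutional network used to analyze the human-pillow interaction signals is not specified in detail; a minimal 1-D CNN classifier over multichannel patch time series might look like the following. The channel count, window length, and class count are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Hedged sketch: classify multichannel patch signals (e.g., pressure,
# temperature, proximity per patch) into sleep-pattern classes.
model = nn.Sequential(
    nn.Conv1d(in_channels=9, out_channels=32, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(32, 64, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),  # collapse the time axis
    nn.Flatten(),
    nn.Linear(64, 4),         # e.g., 4 head-position / pattern classes (assumed)
)
logits = model(torch.randn(8, 9, 512))  # (batch, channels, time) -> (8, 4)
```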
Keywords: multimodal sensing; proximity sensor; pressure sensor; temperature sensor; electrospun nanofibers
11. A multimodal contrastive learning framework for predicting P-glycoprotein substrates and inhibitors (Cited: 1)
Authors: Yixue Zhang, Jialu Wu, Yu Kang, Tingjun Hou. Journal of Pharmaceutical Analysis, 2025, No. 8: 1810-1824 (15 pages).
P-glycoprotein (P-gp) is a transmembrane protein widely involved in the absorption, distribution, metabolism, excretion, and toxicity (ADMET) of drugs within the human body. Accurate prediction of P-gp inhibitors and substrates is crucial for drug discovery and toxicological assessment. However, existing models rely on limited molecular information, leading to suboptimal performance for predicting P-gp inhibitors and substrates. To overcome this challenge, we compiled an extensive dataset from public databases and literature, consisting of 5,943 P-gp inhibitors and 4,018 substrates, notable for their high quantity, quality, and structural uniqueness. In addition, we curated two external test sets to validate the model's generalization capability. Subsequently, we developed a multimodal graph contrastive learning (GCL) model for the prediction of P-gp inhibitors and substrates (MC-PGP). This framework integrates three types of features, from Simplified Molecular Input Line Entry System (SMILES) sequences, molecular fingerprints, and molecular graphs, using an attention-based fusion strategy to generate a unified molecular representation. Furthermore, we employed a GCL approach to enhance structural representations by aligning local and global structures. Extensive experimental results highlight the superior performance of MC-PGP, which achieves improvements in the area under the receiver operating characteristic curve (AUC-ROC) of 9.82% and 10.62% on the external P-gp inhibitor and external P-gp substrate datasets, respectively, compared with 12 state-of-the-art methods. Furthermore, the interpretability analysis of all three molecular feature types offers comprehensive and complementary insights, demonstrating that MC-PGP effectively identifies key functional groups involved in P-gp interactions. These chemically intuitive insights provide valuable guidance for the design and optimization of drug candidates.
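The attention-based fusion of the three feature types (SMILES-sequence, fingerprint, and graph embeddings) can be illustrated with a learned softmax weighting over the three views. The sketch below is a generic pattern under assumed dimensions, not MC-PGP's exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Weight three molecular views with learned softmax attention scores."""
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, smiles_feat, fp_feat, graph_feat):
        views = torch.stack([smiles_feat, fp_feat, graph_feat], dim=1)  # (B, 3, d)
        weights = F.softmax(self.score(views), dim=1)                   # (B, 3, 1)
        return (weights * views).sum(dim=1)                             # (B, d)

fuse = AttentionFusion()
z = fuse(torch.randn(4, 256), torch.randn(4, 256), torch.randn(4, 256))
```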
Keywords: P-glycoprotein; deep learning; multimodal fusion; graph contrastive learning
12. Text-Image Feature Fine-Grained Learning for Joint Multimodal Aspect-Based Sentiment Analysis
Authors: Tianzhi Zhang, Gang Zhou, Shuang Zhang, Shunhang Li, Yepeng Sun, Qiankun Pi, Shuo Liu. Computers, Materials & Continua (SCIE, EI), 2025, No. 1: 279-305 (27 pages).
Joint Multimodal Aspect-based Sentiment Analysis (JMASA) is a significant task in the research of multimodal fine-grained sentiment analysis, which combines two subtasks: Multimodal Aspect Term Extraction (MATE) and Multimodal Aspect-oriented Sentiment Classification (MASC). Currently, most existing models for JMASA only perform text and image feature encoding at a basic level, but often neglect the in-depth analysis of unimodal intrinsic features, which may lead to low accuracy of aspect term extraction and poor sentiment prediction due to insufficient learning of intra-modal features. Given this problem, we propose a Text-Image Feature Fine-grained Learning (TIFFL) model for JMASA. First, we construct an enhanced adjacency matrix of word dependencies and adopt a graph convolutional network to learn the syntactic structure features for text, which addresses the context interference problem of identifying different aspect terms. Then, the adjective-noun pairs extracted from images are introduced to make the semantic representation of visual features more intuitive, which addresses the ambiguous semantic extraction problem during image feature learning. Thereby, the model performance of aspect term extraction and sentiment polarity prediction can be further optimized and enhanced. Experiments on two Twitter benchmark datasets demonstrate that TIFFL achieves competitive results for JMASA, MATE and MASC, thus validating the effectiveness of our proposed methods.
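The text branch runs a graph convolutional network over an enhanced word-dependency adjacency matrix; a single GCN step over such a matrix looks roughly like the sketch below. Feature sizes and the row normalization are illustrative assumptions, not TIFFL's exact layer.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution step over a word-dependency adjacency matrix."""
    def __init__(self, in_dim=768, out_dim=768):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        # Row-normalize so each word averages its syntactic neighbors.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        return torch.relu(self.linear((adj / deg) @ h))

words = torch.randn(2, 20, 768)        # BERT-style token features
adj = torch.eye(20).expand(2, 20, 20)  # dependency edges + self-loops (toy)
out = GCNLayer()(words, adj)           # -> (2, 20, 768)
```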
Keywords: multimodal sentiment analysis; aspect-based sentiment analysis; feature fine-grained learning; graph convolutional network; adjective-noun pairs
13. A Multimodal Learning Framework to Reduce Misclassification in GI Tract Disease Diagnosis
Authors: Sadia Fatima, Fadl Dahan, Jamal Hussain Shah, Refan Almohamedh, Mohammed Aloqaily, Samia Riaz. Computer Modeling in Engineering & Sciences, 2025, No. 10: 971-994 (24 pages).
The human gastrointestinal (GI) tract is influenced by numerous disorders. If not detected in the early stages, they may result in severe consequences such as organ failure or the development of cancer, and in extreme cases, become life-threatening. Endoscopy is a specialised imaging technique used to examine the GI tract. However, physicians might neglect certain irregular morphologies during the examination due to continuous monitoring of the video recording. Recent advancements in artificial intelligence have led to the development of high-performance AI-based systems, which are optimal for computer-assisted diagnosis. Due to numerous limitations in endoscopic image analysis, including visual similarities between infected and healthy areas, retrieval of irrelevant features, and imbalanced testing and training datasets, performance accuracy is reduced. To address these challenges, we proposed a framework for analysing gastrointestinal tract images that provides a more robust and secure model, thereby reducing the chances of misclassification. Compared to single-model solutions, the proposed methodology improves performance by integrating diverse models and optimizing feature fusion using a dual-branch CNN-transformer architecture. The proposed approach employs a dual-branch feature extraction mechanism, where features are extracted using Extended BEiT in the first branch and EfficientNet-B5 in the second branch. Additionally, cross-entropy loss is used to measure the prediction error at both branches, followed by model stacking. This multimodal framework outperforms existing approaches across multiple metrics, achieving 94.12% accuracy, recall, and F1-score, as well as 94.15% precision on the Kvasir dataset. Furthermore, the model successfully reduced the false negative rate to 5.88%, enhancing its ability to minimize misdiagnosis. These results highlight the adaptability of the proposed work in clinical practice, where it can provide fast and accurate diagnostic assistance crucial for improving the early diagnosis of diseases in the gastrointestinal tract.
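The dual-branch design, two backbones each trained with its own cross-entropy loss and then combined by stacking, can be sketched generically. The tiny convolutional stubs below stand in for Extended BEiT and EfficientNet-B5, and the linear stacking head is an assumption about how the branch logits are combined.

```python
import torch
import torch.nn as nn

class DualBranch(nn.Module):
    def __init__(self, n_classes=8):
        super().__init__()
        def stub():  # placeholder backbone (stands in for a real encoder)
            return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(16, n_classes))
        self.branch_a, self.branch_b = stub(), stub()
        self.stacker = nn.Linear(2 * n_classes, n_classes)  # stacking head

    def forward(self, x):
        la, lb = self.branch_a(x), self.branch_b(x)
        return la, lb, self.stacker(torch.cat([la, lb], dim=1))

model = DualBranch()
la, lb, stacked = model(torch.randn(4, 3, 224, 224))
ce = nn.CrossEntropyLoss()
y = torch.randint(0, 8, (4,))
loss = ce(la, y) + ce(lb, y) + ce(stacked, y)  # per-branch + stacked losses
```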
Keywords: multimodal; gastrointestinal (GI) disease diagnosis; misclassification; transformer; deep learning
14. An Analytical Review of Large Language Models Leveraging KDGI Fine-Tuning, Quantum Embedding's, and Multimodal Architectures
Authors: Uddagiri Sirisha, Chanumolu Kiran Kumar, Revathi Durgam, Poluru Eswaraiah, G Muni Nagamani. Computers, Materials & Continua, 2025, No. 6: 4031-4059 (29 pages).
A complete examination of Large Language Models' strengths, problems, and applications is needed due to their rising use across disciplines. Current studies frequently focus on single-use situations and lack a comprehensive understanding of LLM architectural performance, strengths, and weaknesses. This gap precludes finding the appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies. In this research, 50 studies on 25+ LLMs, including GPT-3, GPT-4, Claude 3.5, DeepKet, and hybrid multimodal frameworks like ContextDET and GeoRSCLIP, are thoroughly reviewed. We propose an LLM application taxonomy by grouping techniques by task focus: healthcare, chemistry, sentiment analysis, agent-based simulations, and multimodal integration. Advanced methods like parameter-efficient tuning (LoRA), quantum-enhanced embeddings (DeepKet), retrieval-augmented generation (RAG), and safety-focused models (GalaxyGPT) are evaluated for dataset requirements, computational efficiency, and performance measures. Frameworks for ethical issues, data-limited hallucinations, and KDGI-enhanced fine-tuning like Woodpecker's post-remedy corrections are highlighted. The investigation's scope and methods are described, but the primary results are not. The work reveals that domain-specialized fine-tuned LLMs employing RAG and quantum-enhanced embeddings perform better for context-heavy applications. In medical text normalization, ChatGPT-4 outperforms previous models, while multimodal frameworks such as GeoRSCLIP improve remote sensing. Parameter-efficient tuning technologies like LoRA have minimal computing cost and similar performance, demonstrating the necessity for adaptive models in multiple domains. The aims are to discover the optimal domain-specific models, explain domain-specific fine-tuning, and present quantum and multimodal LLMs to address scalability and cross-domain issues. The framework helps academics and practitioners identify, adapt, and innovate LLMs for different purposes. This work advances the field of efficient, interpretable, and ethical LLM application research.
Keywords: large language models; quantum embeddings; fine-tuning techniques; multimodal architectures; ethical AI scenarios
15. The Design and Implementation of an Intelligent Guide Dog Robot Based on Multimodal Perception
Author: Yanxuan Zhu. Journal of Electronic Research and Application, 2025, No. 5: 281-290 (10 pages).
Aiming at the problems of traditional guide devices, such as limited environmental perception and poor terrain adaptability, this paper proposes an intelligent guide system based on a quadruped robot platform. Data fusion between millimeter-wave radar (with an accuracy of ±0.1°) and an RGB-D camera is achieved through multisensor spatiotemporal registration technology, and a dataset suitable for guide dog robots is constructed. For the application scenario of edge-deployed guide dog robots, a lightweight CA-YOLOv11 target detection model integrating an attention mechanism is adopted, achieving a comprehensive recognition accuracy of 95.8% in complex scenarios, 2.2% higher than that of the baseline YOLOv11 network. The system supports navigation on complex terrains such as stairs (25 cm steps) and slopes (35° gradient), and the response time to sudden disturbances is shortened to 100 ms. Field tests show that the navigation success rate reaches 95% across eight types of scenarios, the user satisfaction score is 4.8/5.0, and the cost is 50% lower than that of traditional guide dogs.
Keywords: quadruped robot; guide system; multimodal perception; target detection; human-robot interaction; path planning
16. A Lightweight Multimodal Deep Fusion Network for Face Antispoofing with Cross-Axial Attention and Deep Reinforcement Learning Technique
Authors: Diyar Wirya Omar Ameenulhakeem, Osman Nuri Uçan. Computers, Materials & Continua, 2025, No. 12: 5671-5702 (32 pages).
Face antispoofing has received a lot of attention because it plays a role in strengthening the security of face recognition systems. Face recognition is commonly used for authentication in surveillance applications. However, attackers try to compromise these systems by using spoofing techniques, such as photos or videos of users, to gain access to services or information. Many existing methods for face spoofing detection face difficulties when dealing with new scenarios, especially when there are variations in background, lighting, and other environmental factors. Recent advancements in deep learning with multimodal methods have shown their effectiveness in face antispoofing, surpassing single-modal methods. However, these approaches often generate many features, which can lead to issues with data dimensionality. In this study, we introduce a multimodal deep fusion network for face anti-spoofing that incorporates cross-axial attention and deep reinforcement learning techniques. This network operates at three patch levels and analyzes images from three modalities (RGB, IR, and depth). Initially, our design includes an axial attention network (XANet) model that extracts deeply hidden features from multimodal images. Further, we use a bidirectional fusion technique that pays attention to both directions to combine features from each modality effectively. We further improve feature optimization by using the Enhanced Pity Beetle Optimization (EPBO) algorithm, which selects the features to address data dimensionality problems. Moreover, our proposed model employs a hybrid federated reinforcement learning (FDDRL) approach to detect and classify face anti-spoofing attacks, achieving a more optimal tradeoff between detection rates and false positive rates. We evaluated the proposed approach on publicly available datasets, including CASIA-SURF and GREATFASD-S, and realized 98.985% and 97.956% classification accuracy, respectively. In addition, the current method outperforms other state-of-the-art methods in terms of precision, recall, and F-measures. Overall, the developed methodology boosts the effectiveness of our model in detecting various types of spoofing attempts.
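The bidirectional fusion step, where each modality's features are weighted by the other direction, can be approximated by a two-way gating pattern. The sketch below folds RGB, IR, and depth features pairwise; it is an illustration under assumed dimensions, not the paper's XANet/EPBO pipeline.

```python
import torch
import torch.nn as nn

class BiGatedFusion(nn.Module):
    """Gate each stream by the other, then merge (a two-way gating sketch)."""
    def __init__(self, dim=256):
        super().__init__()
        self.gate_ab = nn.Linear(dim, dim)  # stream b gates stream a
        self.gate_ba = nn.Linear(dim, dim)  # stream a gates stream b

    def forward(self, a, b):
        a2 = a * torch.sigmoid(self.gate_ab(b))
        b2 = b * torch.sigmoid(self.gate_ba(a))
        return a2 + b2

rgb, ir, depth = (torch.randn(4, 256) for _ in range(3))
fuse = BiGatedFusion()
fused = fuse(fuse(rgb, ir), depth)  # fold the three modalities pairwise
```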
Keywords: face antispoofing; lightweight; multimodal deep feature fusion; feature extraction; feature optimization
17. The Multimodal Bionic Robot Integrating Kangaroo-Like Jumping and Tortoise-Like Crawling
Authors: Bin Liu, Yifei Ren, Zhuo Wang, Shikai Jin, Wenjie Ge. Journal of Bionic Engineering, 2025, No. 4: 1637-1654 (18 pages).
In this study, we present a small, integrated jumping-crawling robot capable of intermittent jumping and self-resetting. Compared to robots with a single mode of locomotion, this multimodal robot exhibits enhanced obstacle-surmounting capabilities. To achieve this, the robot employs a novel combination of a jumping module and a crawling module. The jumping module features improved energy storage capacity and an active clutch. Within the constraints of structural robustness, the jumping module maximizes the explosive power of the linear spring by utilizing the mechanical advantage of a closed-loop mechanism, and controls the energy flow of the jumping module through an active clutch mechanism. Furthermore, inspired by the limb movements of tortoises during crawling and self-righting, a single-degree-of-freedom spatial four-bar crawling mechanism was designed to enable crawling, steering, and resetting functions. To demonstrate its practicality, the integrated jumping-crawling robot was tested in a laboratory environment for functions such as jumping, crawling, self-resetting, and steering. Experimental results confirmed the feasibility of the proposed integrated jumping-crawling robot.
Keywords: bioinspired robot; jumping robot; crawling robot; multimodal robot; self-righting
18. Study on the Pragmatic Functions of Stickers from the Perspective of Multimodal Metaphor
Authors: CUI Ruo-lin, DUAN Rong-juan. Journal of Literature and Art Studies, 2025, No. 5: 432-438 (7 pages).
With the popularization of social media, stickers have become an important tool for young students to express themselves and resist mainstream culture due to their unique visual and emotional expressiveness. Most existing studies focus on the negative impacts of spoof stickers, while paying insufficient attention to their positive functions. From the perspective of multimodal metaphor, this paper uses methods such as virtual ethnography and image-text analysis to clarify the connotation of stickers, understand the evolution of their digital dissemination forms, and explore the multiple functions of subcultural stickers in the social interactions between teachers and students. Young students use stickers to convey emotions and information. Their expressive function, social function, and cultural metaphor function build on one another progressively. This not only shapes students' values but also promotes self-expression and teacher-student interaction. It also reminds teachers to correct students' negative thoughts by using stickers, achieving the effect of “cultivating and influencing people through culture.”
Keywords: stickers; pragmatic functions; multimodal metaphor; teacher-student social interactions; subculture
19. Multimodal artificial intelligence technology in the precision diagnosis and treatment of gastroenterology and hepatology: Innovative applications and challenges
Authors: Yi-Mao Wu, Fei-Yang Tang, Zi-Xin Qi. World Journal of Gastroenterology, 2025, No. 38: 26-43 (18 pages).
With the rapid development of artificial intelligence (AI) technology, multimodal data integration has become an important means to improve the accuracy of diagnosis and treatment in gastroenterology and hepatology. This article systematically reviews the latest progress of multimodal AI technology in diagnosis, treatment, and decision-making for gastrointestinal tumors, functional gastrointestinal diseases, and liver diseases, focusing on the innovative applications of endoscopic image AI, pathological section AI, multi-omics data fusion models, and wearable devices combined with natural language processing. Multimodal AI can significantly improve the accuracy of early diagnosis and the efficiency of individualized treatment planning by integrating imaging, pathological, molecular, and clinical phenotypic data. However, current AI technologies still face challenges such as insufficient data standardization, limited model generalization, and ethical compliance. This paper proposes solutions, such as establishing cross-center data sharing platforms, developing federated learning frameworks, and formulating ethical norms, and looks forward to the application prospects of multimodal large-scale models in the disease management process. This review provides a theoretical basis and practical guidance for promoting the clinical translation of AI technology in the field of gastroenterology and hepatology.
Keywords: artificial intelligence; multimodal data; gastroenterology; hepatology; precision medicine; challenges and countermeasures