Journal article (1 result found)
MKGViLT:visual-and-language transformer based on medical knowledge graph embedding
Authors: CUI Wencheng, SHI Wentao, SHAO Hong. High Technology Letters, 2025, No. 1, pp. 73-85 (13 pages).
Abstract: Medical visual question answering (MedVQA) aims to enhance diagnostic confidence and deepen patients' understanding of their health conditions. While the Transformer architecture is widely used in multimodal fields, its application in MedVQA requires further enhancement. A critical limitation of contemporary MedVQA systems lies in their inability to integrate lifelong knowledge with specific patient data to generate human-like responses. Existing Transformer-based MedVQA models need stronger capabilities for interpreting answers through the application of medical image knowledge. The medical knowledge graph visual-and-language transformer (MKGViLT), designed to work jointly with medical knowledge graphs (KGs), addresses this challenge. MKGViLT incorporates an enhanced Transformer structure to effectively extract features and combine modalities for MedVQA tasks, delivering answers grounded in richer background knowledge and thereby improving performance. The efficacy of MKGViLT is evaluated on the SLAKE and P-VQA datasets; experimental results show that MKGViLT surpasses state-of-the-art methods on the SLAKE dataset.
Keywords: knowledge graph (KG); medical visual question answering (MedVQA); vision-and-language transformer