Abstract
Deep learning models have become a core technological tool in medical image analysis. However, their decision-making processes are often opaque, raising challenges of trust and interpretability in clinical applications. To address this issue, explainable artificial intelligence (XAI) techniques have been introduced into medical image analysis. While showing promising potential, XAI also poses significant ethical risks in practice, most notably the problem of spurious explanations, which in turn raises concerns about patient privacy, data security, and the attribution of medical decision-making authority. Through an analysis of the application of XAI methods, particularly saliency maps, in medical image interpretation, this paper identifies the root causes of spurious explanations and proposes possible mitigation strategies, with the aim of promoting the responsible and sustainable integration of explainable AI into clinical practice.
Authors
JIA Weihan; ZENG Zhu (School of Marxism, University of Science and Technology Beijing, Beijing 100083, China; School of Marxism, Kunming University of Science and Technology, Kunming 650500, China)
Source
Medicine and Philosophy (《医学与哲学》)
Peking University Core Journal (北大核心)
2025, No. 13, pp. 7-11 (5 pages)
Funding
General Project of the National Social Science Fund of China, 2023 (23BZX103).
Keywords
medical image analysis
explainable artificial intelligence
spurious explanation