Funding: the Young Scholars Program of the National Social Science Fund of China (Grant No. 22CZX019).
Abstract: One concern about the application of medical artificial intelligence (AI) regards its "black box" feature: the system can only be viewed in terms of its inputs and outputs, with no way to understand the AI's algorithm. This is problematic because patients, physicians, and even designers do not understand why or how a treatment recommendation is produced by AI technologies. One view claims that the worry about black-box medicine is unreasonable because AI systems outperform human doctors in identifying diseases. Furthermore, under the medical AI-physician-patient model, the physician can undertake the responsibility of interpreting the medical AI's diagnosis. In this study, we focus on the potential harm caused by the unexplainability of medical AI and try to show that such possible harm is underestimated. We seek to contribute to the literature in three respects. First, we appeal to a thought experiment to show that although medical AI systems perform better on accuracy, the harm caused by a medical AI's misdiagnoses may in some cases be more serious than that caused by human doctors' misdiagnoses. Second, in patient-centered medicine, physicians are obligated to provide adequate information to their patients in medical decision-making; however, the unexplainability of medical AI systems would limit patient autonomy. Last, we illustrate the psychological and financial burdens that may be caused by the unexplainability of medical AI systems, which seem to have been ignored in previous ethical discussions.