AI-related research is conducted in many ways, but the reliability of AI prediction results is currently insufficient, so expert judgment remains indispensable for tasks that require critical decision-making. XAI (eXplainable AI) is studied to improve the reliability of AI. However, different XAI methodologies produce different results on the same data set and the same model. This means that XAI results must be given meaning, and considerable noise emerges. This paper proposes an HFD (Hybrid Feature Dropout)-based XAI and evaluation methodology. The proposed XAI methodology can mitigate shortcomings such as incorrect feature weights and impractical feature selection. Because few XAI evaluation methods exist, this paper also proposes four evaluation criteria that give practical meaning to XAI results. Verification on a malware data set (Data Challenge 2019) confirmed better results than other XAI methodologies on all four evaluation criteria. Since the efficiency of interpretation is verified against a reasonable XAI evaluation standard, the practicality of the XAI methodology is improved. In addition, demonstrating the usefulness of the XAI methodology enhances the reliability of AI and helps apply AI results to critical tasks that require expert decision-making.
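The abstract does not specify the HFD algorithm itself, but the general feature-dropout idea behind such XAI methods can be sketched as follows: drop (zero out) each feature in turn and measure how much the model's predicted probabilities change. This is a minimal illustration, not the paper's method; the data set, model, and the choice of zero as the dropout value (reasonable only for centered/standardized features) are all assumptions.

```python
# Minimal sketch of feature-dropout attribution (not the paper's HFD algorithm).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a feature-based data set such as a malware feature table.
X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def dropout_importance(model, X):
    """Score each feature by the mean change in class-1 probability when dropped."""
    base = model.predict_proba(X)[:, 1]  # baseline predictions
    scores = []
    for j in range(X.shape[1]):
        Xd = X.copy()
        Xd[:, j] = 0.0  # "drop" feature j by zeroing it
        scores.append(np.mean(np.abs(base - model.predict_proba(Xd)[:, 1])))
    return np.array(scores)

imp = dropout_importance(model, X)
print(imp.argsort()[::-1])  # features ranked by dropout impact, strongest first
```

A ranking like this is exactly the kind of output the paper's evaluation criteria would need to assess, since different attribution methods can yield different rankings on the same model and data.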
Funding: This work was supported by an Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2022-0-00089, Development of clustering and analysis technology to identify cyber-attack groups based on life-cycle) and by the Institute of Civil Military Technology Cooperation, funded by the Defense Acquisition Program Administration and the Ministry of Trade, Industry and Energy of the Korean government under grant No. 21-CM-EC-07.