Abstract: Explainable recommendation systems deal with the problem of 'Why'. Besides providing the user with a recommendation, they also explain why such an object is being recommended. This helps to improve trustworthiness, effectiveness, efficiency, persuasiveness, and user satisfaction with the system. Recommending relevant information to the user together with an explanation is therefore required. Existing systems provide top-k recommendation options to the user based on ratings and reviews about the required object, but they are unable to explain the matched-attribute-based recommendation to the user. A framework based on Formal Concept Analysis (FCA) is proposed to fetch the most specific information that matches the user's requirements. The ranking quality of the recommendation list produced by the proposed system is evaluated quantitatively with Normalized Discounted Cumulative Gain (NDCG)@k and is better than that of existing systems. The explanation is evaluated qualitatively against the trustworthiness criterion, one of the seven explainability evaluation criteria, and its metric supports the results of the proposed method. The framework can be further enhanced for greater effectiveness and trustworthiness.
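The abstract does not reproduce the NDCG@k formula used for evaluation; for reference, the following is a minimal sketch of the standard definition. The graded relevance values in the usage line are purely hypothetical.

```python
import numpy as np

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked items: sum of rel_i / log2(i + 1)."""
    rel = np.asarray(relevances, dtype=float)[:k]
    if rel.size == 0:
        return 0.0
    discounts = np.log2(np.arange(2, rel.size + 2))  # positions 1..k -> log2(i + 1)
    return float(np.sum(rel / discounts))

def ndcg_at_k(relevances, k):
    """NDCG@k: DCG of the produced ranking divided by the DCG of the ideal (sorted) ranking."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    if ideal_dcg == 0.0:
        return 0.0
    return dcg_at_k(relevances, k) / ideal_dcg

# Hypothetical graded relevances of a recommendation list, in ranked order.
print(ndcg_at_k([3, 2, 3, 0, 1, 2], k=5))
```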
Funding: Supported in part by the National Natural Science Foundation of China under Grant Nos. 62403360, 72171172, and 92367101; the Aeronautical Science Foundation of China under Grant No. 2023Z066038001; the National Natural Science Foundation of China Basic Science Research Center Program under Grant No. 62088101; the Shanghai Municipal Science and Technology Major Project under Grant No. 2021SHZDZX0100; and iF open-funds from Xinghuo Eco and the China Institute of Communications.
Abstract: The explainability of recommendation systems refers to the ability to explain the logic that guides the system's decision to endorse or exclude an item. In industrial-grade recommendation systems, the high complexity of features, the presence of embedding layers, the existence of adversarial samples, and the requirements for explanation accuracy and efficiency pose significant challenges to current explanation methods. This paper proposes a novel framework, AdvLIME (Adversarial Local Interpretable Model-agnostic Explanation), that leverages Generative Adversarial Networks (GANs) with embedding constraints to enhance explainability. The method uses adversarial samples as references to explain recommendation decisions, generating these samples in accordance with realistic distributions and ensuring they meet the structural constraints of the embedding module. AdvLIME requires no modifications to the existing model architecture and needs only a single training session for global explanation, making it well suited to industrial applications. This work contributes two significant advancements. First, it develops a model-independent global explanation method via adversarial generation. Second, it introduces a model discrimination method to guarantee that the generated samples adhere to the embedding constraints. We evaluate the AdvLIME framework on the Behavior Sequence Transformer (BST) model using the MovieLens 20M dataset. The experimental results show that AdvLIME outperforms traditional methods such as LIME and DLIME, reducing the approximation error of real samples by 50% and demonstrating improved stability and accuracy.
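The abstract does not detail AdvLIME's GAN-based generator or its embedding-constraint discriminator; the sketch below only illustrates the LIME-style local-surrogate step that such methods build on, with a simple Gaussian perturbation sampler standing in for the constrained generator. The function names, toy scoring model, and kernel width are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate_explanation(predict_fn, x, sample_fn, n_samples=500, kernel_width=0.75):
    """LIME-style local explanation: fit a proximity-weighted linear surrogate around x.

    predict_fn : black-box recommender score, maps (n, d) -> (n,)
    x          : the instance (1-D feature vector) being explained
    sample_fn  : produces reference samples near x; in AdvLIME this role is played by a
                 GAN constrained to the embedding manifold (placeholder sampler here).
    """
    samples = sample_fn(x, n_samples)                        # (n_samples, d)
    preds = predict_fn(samples)                              # black-box scores
    dists = np.linalg.norm(samples - x, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))    # proximity kernel
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(samples, preds, sample_weight=weights)
    return surrogate.coef_                                   # per-feature attribution

# Hypothetical usage with a toy linear scorer and a Gaussian perturbation sampler.
rng = np.random.default_rng(0)
toy_model = lambda X: X @ np.array([0.5, -1.2, 2.0])
gauss_sampler = lambda x, n: x + 0.1 * rng.standard_normal((n, x.shape[0]))
print(local_surrogate_explanation(toy_model, np.array([1.0, 0.0, 0.5]), gauss_sampler))
```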
Funding: This work was supported by the University Science and Technology Research Plan Project of Jilin Province of China under Grant No. JJKH20190156KJ; the National Natural Science Foundation of China under Grant Nos. 61572226 and 61876069; the Jilin Province Key Scientific and Technological Research and Development Project under Grant Nos. 20180201067GX and 20180201044GX; and the Jilin Province Natural Science Foundation under Grant No. 20200201036JC.
Abstract: Explainable recommendation, which can provide reasonable explanations for recommendations, is increasingly important in many fields. Although traditional embedding-based models can learn many implicit features, resulting in good performance, they cannot provide the reason for their recommendations. Existing explainable recommender methods can be divided into two main types. The first type highlights reviews written by users to provide an explanation. The second type takes attribute information into consideration. These approaches each consider only one aspect and do not make the best use of the available information. In this paper, we propose a novel neural explainable recommender model based on attributes and reviews (NERAR) that combines the processing of attribute features and review features. We employ a tree-based model to extract and learn attribute features from auxiliary information, and we use a time-aware gated recurrent unit (T-GRU) to model user review features while processing item review features with a convolutional neural network (CNN). Extensive experiments on Amazon datasets demonstrate that our model outperforms state-of-the-art recommendation models in recommendation accuracy. The presented examples also show that our model can offer more reasonable explanations. Crowd-sourcing-based evaluations are conducted to verify our model's superiority in explainability.
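The NERAR architecture (tree-based attribute encoder, T-GRU, CNN, and their fusion) is only described at a high level in this abstract; the PyTorch sketch below is a rough, hypothetical illustration of how GRU-encoded user reviews, CNN-encoded item reviews, and attribute features could be fused for rating prediction. A plain nn.GRU stands in for the paper's time-aware GRU, and all layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class ReviewAttributeRecommender(nn.Module):
    """Sketch: fuse GRU-encoded user reviews, CNN-encoded item reviews, and attribute
    features into a single rating score (all dimensions are illustrative)."""
    def __init__(self, vocab_size=10000, emb_dim=64, attr_dim=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.user_gru = nn.GRU(emb_dim, hidden, batch_first=True)   # stand-in for the T-GRU
        self.item_cnn = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        self.attr_mlp = nn.Linear(attr_dim, hidden)
        self.out = nn.Linear(3 * hidden, 1)

    def forward(self, user_reviews, item_reviews, attributes):
        # user_reviews, item_reviews: (batch, seq_len) token ids; attributes: (batch, attr_dim)
        _, u = self.user_gru(self.embed(user_reviews))                # final hidden state
        u = u.squeeze(0)                                              # (batch, hidden)
        i = self.item_cnn(self.embed(item_reviews).transpose(1, 2))   # (batch, hidden, seq_len)
        i = i.max(dim=2).values                                       # max-pool over time
        a = torch.relu(self.attr_mlp(attributes))                     # attribute features
        return self.out(torch.cat([u, i, a], dim=1)).squeeze(-1)      # predicted rating

# Hypothetical usage with random token ids and attribute vectors.
model = ReviewAttributeRecommender()
scores = model(torch.randint(1, 10000, (4, 20)),
               torch.randint(1, 10000, (4, 20)),
               torch.rand(4, 32))
print(scores.shape)  # torch.Size([4])
```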