Journal Articles
3 articles found
1. Trustworthy Explainable Recommendation Framework for Relevancy
Authors: Saba Sana, Mohammad Shoaib. Computers, Materials & Continua (SCIE, EI), 2022, Issue 12, pp. 5887-5909 (23 pages)
Explainable recommendation systems address the question of 'Why': besides providing a recommendation, the system also explains why the item is being recommended. This improves trustworthiness, effectiveness, efficiency, persuasiveness, and user satisfaction with the system. Recommending relevant information together with an explanation is therefore required. Existing systems provide top-k recommendations based on ratings and reviews of the requested object but cannot explain a recommendation in terms of the matched attributes. A framework is proposed that fetches the most specific information matching the user's requirements based on Formal Concept Analysis (FCA). The ranking quality of the recommendation list is evaluated quantitatively with Normalized Discounted Cumulative Gain (NDCG)@k and is better than that of existing systems. The explanation is evaluated qualitatively against the trustworthiness criterion, one of the seven explainability evaluation criteria, and its metric supports the results of the proposed method. The framework can be extended for greater effectiveness and trustworthiness.
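The NDCG@k metric used above can be computed in a few lines: the gain of each relevant item is discounted by its rank position, and the sum is normalized by the ideal (perfectly sorted) ranking. A minimal sketch, with illustrative graded relevance values:

```python
import math

def dcg_at_k(relevances, k):
    """Discounted Cumulative Gain over the top-k ranked items."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """NDCG@k: DCG of the ranking divided by the DCG of the ideal ranking."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# A recommendation list with graded relevance judgments (3 = highly relevant).
ranking = [3, 2, 3, 0, 1]
print(round(ndcg_at_k(ranking, 5), 4))  # -> 0.9724
```

A perfectly ordered list scores exactly 1.0, so values near 1 indicate the recommender ranks the most relevant items first.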
Keywords: explainable recommendation; data analysis; formal concept analysis (FCA) approach
2. Explanation framework for industrial recommendation systems based on the generative adversarial network with embedding constraints (Cited: 1)
Authors: Binchuan Qi, Wei Gong, Li Li. Autonomous Intelligent Systems, 2025, Issue 1, pp. 325-338 (14 pages)
The explainability of a recommendation system refers to its ability to explain the logic that guides its decision to endorse or exclude an item. In industrial-grade recommendation systems, the high complexity of features, the presence of embedding layers, the existence of adversarial samples, and the requirements for explanation accuracy and efficiency pose significant challenges to current explanation methods. This paper proposes AdvLIME (Adversarial Local Interpretable Model-agnostic Explanation), a novel framework that leverages Generative Adversarial Networks (GANs) with embedding constraints to enhance explainability. The method uses adversarial samples as references to explain recommendation decisions, generating them in accordance with realistic distributions and ensuring they meet the structural constraints of the embedding module. AdvLIME requires no modification to the existing model architecture and needs only a single training session for global explanation, making it well suited to industrial applications. The work contributes two advancements: first, a model-independent global explanation method via adversarial generation; second, a model discrimination method that guarantees the generated samples adhere to the embedding constraints. AdvLIME is evaluated on the Behavior Sequence Transformer (BST) model using the MovieLens 20M dataset. The experimental results show that AdvLIME outperforms traditional methods such as LIME and DLIME, reducing the approximation error on real samples by 50% and demonstrating improved stability and accuracy.
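The abstract does not specify AdvLIME's internals, but the LIME baseline it extends works by perturbing samples around an instance and fitting a proximity-weighted linear surrogate to the black-box model. A minimal sketch of that core idea, where `black_box` is a hypothetical recommender scoring function (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box recommender score over a 4-dimensional feature vector.
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 1.5 * X[:, 2])))

def lime_explain(x, model, n_samples=500, width=0.75):
    """Fit a locally weighted linear surrogate around instance x (LIME's core idea)."""
    X = x + rng.normal(scale=0.3, size=(n_samples, x.size))  # perturbed neighborhood
    y = model(X)
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / width ** 2)   # proximity weights
    A = np.hstack([X, np.ones((n_samples, 1))])              # add intercept column
    Aw = A * np.sqrt(w)[:, None]                             # weighted least squares
    coef, *_ = np.linalg.lstsq(Aw, y * np.sqrt(w), rcond=None)
    return coef[:-1]  # per-feature local importance

x0 = np.array([0.5, 0.1, 0.2, 0.9])
importance = lime_explain(x0, black_box)
print(importance)  # feature 0 should dominate positively, feature 2 negatively
```

AdvLIME's contribution, per the abstract, is replacing the random perturbations with GAN-generated adversarial samples that respect the embedding module's structural constraints, which this sketch does not model.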
Keywords: explainable recommendation system; feature embedding; generative adversarial networks; deep learning
3. Neural Explainable Recommender Model Based on Attributes and Reviews
Authors: Yu-Yao Liu, Bo Yang, Hong-Bin Pei, Jing Huang. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2020, Issue 6, pp. 1446-1460 (15 pages)
Explainable recommendation, which provides reasonable explanations for recommendations, is increasingly important in many fields. Although traditional embedding-based models can learn many implicit features and thus perform well, they cannot provide the reason for their recommendations. Existing explainable recommender methods fall mainly into two types: the first highlights reviews written by users to provide an explanation; the second takes attribute information into consideration. Each approach considers only one aspect and does not make the best use of the available information. This paper proposes a novel neural explainable recommender model based on attributes and reviews (NERAR) that combines the processing of attribute features and review features. A tree-based model extracts and learns attribute features from auxiliary information; a time-aware gated recurrent unit (T-GRU) models user review features; and item review features are processed with a convolutional neural network (CNN). Extensive experiments on Amazon datasets demonstrate that the model outperforms state-of-the-art recommendation models in recommendation accuracy, and the presented examples show that it offers more reasonable explanations. Crowd-sourcing-based evaluations verify the model's superiority in explainability.
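The abstract names a time-aware GRU (T-GRU) for user reviews but does not give its equations. One common way to make a GRU time-aware is to shrink the update gate for inputs that arrived long ago, so stale reviews move the hidden state less; the exponential-decay form below is an assumption for illustration, not the paper's definition, and the parameters are random rather than learned:

```python
import numpy as np

rng = np.random.default_rng(1)
H, D = 8, 16  # hidden size, review-embedding size (illustrative)

# Randomly initialized GRU parameters (a real model would learn these).
Wz, Uz = rng.normal(size=(H, D)) * 0.1, rng.normal(size=(H, H)) * 0.1
Wr, Ur = rng.normal(size=(H, D)) * 0.1, rng.normal(size=(H, H)) * 0.1
Wh, Uh = rng.normal(size=(H, D)) * 0.1, rng.normal(size=(H, H)) * 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def t_gru_step(h, x, dt, lam=0.5):
    """One time-aware GRU step: the elapsed time dt damps the update gate,
    so older review embeddings change the hidden state less."""
    decay = np.exp(-lam * dt)
    z = sigmoid(Wz @ x + Uz @ h) * decay      # time-modulated update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde

h = np.zeros(H)
for x, dt in [(rng.normal(size=D), 0.1), (rng.normal(size=D), 2.0)]:
    h = t_gru_step(h, x, dt)
print(h.shape)  # (8,)
```

With a very large dt the gate collapses toward zero and the state is essentially unchanged, which is the intended recency bias.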
Keywords: recommender system; explainable recommendation; review usefulness; attribute usefulness