Journal Articles
2 articles found
1. Mitigating Spurious Correlations for Self-supervised Recommendation
Authors: Xin-Yu Lin, Yi-Yan Xu, Wen-Jie Wang, Yang Zhang, Fu-Li Feng. Machine Intelligence Research (EI, CSCD), 2023, No. 2, pp. 263-275 (13 pages)
Recent years have witnessed the great success of self-supervised learning (SSL) in recommendation systems. However, SSL recommender models are likely to suffer from spurious correlations, leading to poor generalization. To mitigate spurious correlations, existing work usually pursues ID-based SSL recommendation or utilizes feature engineering to identify spurious features. Nevertheless, ID-based SSL approaches sacrifice the positive impact of invariant features, while feature engineering methods require high-cost human labeling. To address these problems, we aim to automatically mitigate the effect of spurious correlations. This objective requires us to 1) automatically mask spurious features without supervision, and 2) block the negative effect transmission from spurious features to other features during SSL. To handle these two challenges, we propose an invariant feature learning framework, which first divides user-item interactions into multiple environments with distribution shifts and then learns a feature mask mechanism to capture invariant features across environments. Based on the mask mechanism, we can remove the spurious features for robust predictions and block the negative effect transmission via mask-guided feature augmentation. Extensive experiments on two datasets demonstrate the effectiveness of the proposed framework in mitigating spurious correlations and improving the generalization abilities of SSL models.
Keywords: self-supervised recommendation, spurious correlations, spurious features, invariant feature learning, contrastive learning
2. Causal Inference Meets Deep Learning: A Comprehensive Survey
Authors: Licheng Jiao, Yuhan Wang, Xu Liu, Lingling Li, Fang Liu, Wenping Ma, Yuwei Guo, Puhua Chen, Shuyuan Yang, Biao Hou. Research, 2025, No. 2, pp. 586-626 (41 pages)
Deep learning relies on learning from extensive data to generate prediction results. This approach may inadvertently capture spurious correlations within the data, leading to models that lack interpretability and robustness. Researchers have developed more profound and stable causal inference methods based on cognitive neuroscience. By replacing the correlation model with a stable and interpretable causal model, it is possible to mitigate the misleading nature of spurious correlations and overcome the limitations of model calculations. In this survey, we provide a comprehensive and structured review of causal inference methods in deep learning. Brain-like inference ideas are discussed from a brain-inspired perspective, and the basic concepts of causal learning are introduced. The article describes the integration of causal inference with traditional deep learning algorithms and illustrates its application to large model tasks as well as specific modalities in deep learning. The current limitations of causal inference and future research directions are discussed. Moreover, the commonly used benchmark datasets and the corresponding download links are summarized.
Keywords: correlation model, causal inference methods, deep learning, spurious correlations, extensive data, causal inference, cognitive neuroscience
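The survey's central claim, that replacing a correlational model with a causal one removes the misleading effect of spurious correlations, can be illustrated with a standard toy example that is not taken from the survey itself: a confounder Z drives both X and Y, so X and Y correlate even though X has no causal effect on Y, and adjusting for Z (the backdoor adjustment) recovers the true null effect. All variables and probabilities below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Confounder Z drives both X and Y; X has NO direct effect on Y.
Z = rng.binomial(1, 0.5, n)
X = rng.binomial(1, 0.2 + 0.6 * Z)  # P(X=1 | Z) is 0.2 or 0.8
Y = rng.binomial(1, 0.1 + 0.5 * Z)  # P(Y=1 | Z) is 0.1 or 0.6

# Naive correlational estimate: E[Y|X=1] - E[Y|X=0] -> spuriously large.
naive = Y[X == 1].mean() - Y[X == 0].mean()

# Backdoor adjustment: average the within-stratum contrasts over P(Z=z),
# i.e. sum_z (E[Y|X=1,Z=z] - E[Y|X=0,Z=z]) P(Z=z) -> close to zero.
adjusted = sum(
    (Y[(X == 1) & (Z == z)].mean() - Y[(X == 0) & (Z == z)].mean()) * (Z == z).mean()
    for z in (0, 1)
)
print(round(naive, 3), round(adjusted, 3))
```

The naive contrast is large only because X = 1 is evidence that Z = 1; once Z is held fixed, the association vanishes, which is the sense in which a causal model is more stable than a correlational one.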