Abstract
Learning domain-invariant feature representations is critical for alleviating the distribution differences between training and testing domains. Existing mainstream domain generalization approaches primarily seek to align cross-domain distributions in order to extract transferable feature representations. However, these representations may be insufficient and unstable. Moreover, such networks may also suffer catastrophic forgetting, because previously learned knowledge is overwritten by newly learned knowledge. To cope with these issues, we propose a novel causality-based contrastive incremental learning model for domain generalization, which mainly includes three components: (1) intra-domain causal factorization, (2) an inter-domain Mahalanobis similarity metric, and (3) contrastive knowledge distillation. The model extracts both intra-domain and inter-domain invariant knowledge to improve generalization. Specifically, we first introduce a causal factorization to extract intra-domain invariant knowledge. Then, we design a Mahalanobis similarity metric to extract common inter-domain invariant knowledge. Finally, we propose contrastive knowledge distillation with an exponential moving average to distill model parameters in a smooth way, preserving previously learned knowledge and mitigating model forgetting. Extensive experiments on several domain generalization benchmarks demonstrate that our model achieves state-of-the-art results, which sufficiently shows its effectiveness.
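As a rough illustration of the last two components, the sketch below shows one way an inter-domain Mahalanobis similarity and an exponential-moving-average parameter update for distillation might be computed. This is a minimal sketch under stated assumptions, not the authors' implementation: the function names, the momentum value, and the toy covariance estimate are all illustrative.

```python
# Minimal sketch (not the paper's code): Mahalanobis similarity between
# feature vectors from two domains, and an EMA update of a teacher model
# used for smooth knowledge distillation. Names and hyperparameters
# (mahalanobis_similarity, ema_update, momentum) are assumptions.
import torch


def mahalanobis_similarity(x, y, cov):
    """Negative squared Mahalanobis distance between features x and y,
    given a (regularized) feature covariance matrix `cov`."""
    diff = (x - y).unsqueeze(-1)                      # (d, 1)
    inv_cov = torch.linalg.inv(cov)                   # (d, d)
    dist_sq = (diff.transpose(-1, -2) @ inv_cov @ diff).squeeze()
    return -dist_sq                                   # larger = more similar


@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Blend student parameters into the teacher via an exponential
    moving average, so previously learned knowledge decays slowly."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)


if __name__ == "__main__":
    d = 8
    feats_a = torch.randn(d)                          # feature from domain A
    feats_b = torch.randn(d)                          # feature from domain B
    cov = 0.5 * torch.eye(d) + 1e-3 * torch.eye(d)    # toy covariance estimate
    print(mahalanobis_similarity(feats_a, feats_b, cov))

    student = torch.nn.Linear(d, 4)
    teacher = torch.nn.Linear(d, 4)
    teacher.load_state_dict(student.state_dict())     # start from same weights
    ema_update(teacher, student, momentum=0.99)
```

In practice the covariance would be estimated from source-domain features rather than fixed, and the EMA teacher would supply the targets for the contrastive distillation loss.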
Funding
Supported by the Pre-research Project on Civil Aerospace Technologies of the China National Space Administration (No. D010301).