Funding: supported by the Science and Technology Project of Qinghai Province (No. 2023-QY-208).
Abstract: In real-world scenarios, few-shot unsupervised domain adaptation (FUDA) faces the dual challenges of limited source supervision and poor target generalization caused by extremely scarce annotated source samples. Existing methods often overlook the restricted learning capacity that results from sparse source labels, or fail to effectively exploit the structural information within the target domain to enhance discriminative performance. To address these issues, we propose a novel method, Collaborative Pseudo-label Transfer (CPLT), which jointly improves cross-domain adaptation under few-shot UDA settings. CPLT comprises two key components: a Pseudo-label Guided Source Augmentation (PGSA) mechanism that iteratively selects high-confidence target samples to augment the source domain and strengthen initial representation learning, and a Target-aware Discriminative Modeling (TADM) module that leverages pseudo-labeled target data to construct auxiliary classifiers for enhanced inter-class discrimination and reduced misclassification under domain shift. Experiments on three widely used FUDA benchmarks validate the superior performance of CPLT, which achieves average accuracy gains of +3.5% on Office-31, +1.4% on Office-Home, and +1.0% on DomainNet over competitive existing methods.
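The core idea behind pseudo-label guided source augmentation can be illustrated with a minimal sketch: run the current model on unlabeled target data, keep only samples whose predicted class probability exceeds a confidence threshold, and add them (with their pseudo-labels) to the few-shot source training set. This is a hypothetical illustration of the selection step only, not the paper's PGSA implementation; the model, data loader, and threshold value are assumptions.

```python
# Sketch: select high-confidence pseudo-labeled target samples to augment the
# source set. Assumes a standard PyTorch classifier and target DataLoader.
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_confident_targets(model, target_loader, threshold=0.95, device="cpu"):
    """Return (samples, pseudo_labels) for target samples whose top class
    probability is at least `threshold`."""
    model.eval()
    selected_x, selected_y = [], []
    for x, _ in target_loader:                 # target labels are never used
        x = x.to(device)
        probs = F.softmax(model(x), dim=1)     # predicted class probabilities
        conf, pseudo = probs.max(dim=1)        # confidence and pseudo-label
        mask = conf >= threshold
        if mask.any():
            selected_x.append(x[mask].cpu())
            selected_y.append(pseudo[mask].cpu())
    if not selected_x:
        return None, None
    return torch.cat(selected_x), torch.cat(selected_y)

# Usage idea (hypothetical names): repeat each round, growing the source set.
# x_aug, y_aug = select_confident_targets(model, target_loader, threshold=0.95)
# if x_aug is not None:
#     source_dataset = ConcatDataset([source_dataset, TensorDataset(x_aug, y_aug)])
```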
Funding: supported by the National Key R&D Program of China (No. 2016YFB1200203) and the National Natural Science Foundation of China (Nos. 41427806 and 61273233).
Abstract: Deep neural networks have been successfully applied to numerous machine learning tasks because of their impressive feature abstraction capabilities. However, conventional deep networks assume that the training and test data are sampled from the same distribution, an assumption that is often violated in real-world scenarios. To address the domain shift or data bias problem, we introduce layer-wise domain correction (LDC), a new unsupervised domain adaptation algorithm that adapts an existing deep network through additive correction layers spaced throughout the network. Through these additive layers, the representations of the source and target domains can be perfectly aligned. The corrections, trained via maximum mean discrepancy (MMD), adapt the network to the target domain while increasing its representational capacity. LDC requires no target labels, achieves state-of-the-art performance across several adaptation benchmarks, and requires significantly less training time than existing adaptation methods.
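The two ingredients the abstract names, an additive correction layer and an MMD criterion, can be made concrete with a short sketch. This is an assumption-laden illustration rather than the LDC implementation: where the layers are inserted, the kernel bandwidths, and the zero initialization are illustrative choices, not details taken from the paper.

```python
# Sketch: an additive (residual) correction layer plus a Gaussian-kernel MMD
# loss between source and target features, in PyTorch.
import torch
import torch.nn as nn

class CorrectionLayer(nn.Module):
    """Adds a learned correction to the output of an existing (frozen) layer."""
    def __init__(self, dim):
        super().__init__()
        self.delta = nn.Linear(dim, dim)
        nn.init.zeros_(self.delta.weight)   # start as identity: no correction yet
        nn.init.zeros_(self.delta.bias)

    def forward(self, h):
        return h + self.delta(h)            # additive correction

def mmd_loss(source_feat, target_feat, bandwidths=(1.0, 2.0, 4.0)):
    """Squared MMD between source and target features using a sum of RBF kernels."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return sum(torch.exp(-d2 / (2 * s ** 2)) for s in bandwidths)
    return (kernel(source_feat, source_feat).mean()
            + kernel(target_feat, target_feat).mean()
            - 2 * kernel(source_feat, target_feat).mean())

# Training idea: keep the pretrained network fixed, insert CorrectionLayer
# modules after selected layers, and minimize mmd_loss between the corrected
# source and target representations; no target labels are required.
```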