Journal Articles
2 articles found
1. Collaborative pseudo-label transfer for few-shot unsupervised domain adaptation
Authors: Song Shi, Jinfang Jia, Wandong Xue, Jianqiang Huang. CCF Transactions on High Performance Computing, 2025, Issue 6, pp. 574-588 (15 pages)
In real-world scenarios, few-shot unsupervised domain adaptation (FUDA) faces the dual challenges of limited source supervision and poor target generalization due to the extremely scarce annotated source samples. Existing methods often overlook the restricted learning capacity caused by sparse source labels or fail to effectively utilize the structural information within the target domain to enhance discriminative performance. To address these issues, we propose a novel method, Collaborative Pseudo-label Transfer (CPLT), which jointly improves cross-domain adaptation under few-shot UDA settings. CPLT comprises two key components: a Pseudo-label Guided Source Augmentation (PGSA) mechanism that iteratively selects high-confidence target samples to augment the source domain and strengthen initial representation learning, and a Target-aware Discriminative Modeling (TADM) module that leverages pseudo-labeled target data to construct auxiliary classifiers for enhanced inter-class discrimination and reduced misclassification under domain shift. Experiments on three widely used FUDA benchmarks validate the superior performance of CPLT, achieving average accuracy gains of +3.5% on Office-31, +1.4% on Office-Home, and +1.0% on DomainNet over competitive existing methods.
Keywords: Few-shot learning · Unsupervised domain adaptation · Pseudo-labeling · Source domain expansion · Cross-domain knowledge transfer
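The PGSA mechanism described in the abstract hinges on picking out high-confidence target predictions to serve as extra pseudo-labeled source data. A minimal NumPy sketch of that selection step follows; the function name, threshold value, and toy data are our own illustration, not taken from the paper.

```python
import numpy as np

def select_confident_pseudo_labels(target_probs, threshold=0.9):
    """Return indices and pseudo-labels of target samples whose maximum
    softmax probability reaches `threshold`.

    target_probs: (N, C) array of per-class probabilities for N samples.
    """
    confidence = target_probs.max(axis=1)      # peak probability per sample
    pseudo_labels = target_probs.argmax(axis=1)  # predicted class per sample
    mask = confidence >= threshold             # keep only confident samples
    return np.nonzero(mask)[0], pseudo_labels[mask]

# Toy example: 3 target samples, 2 classes.
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.08, 0.92]])
idx, labels = select_confident_pseudo_labels(probs, threshold=0.9)
# idx -> [0, 2], labels -> [0, 1]
```

In an iterative scheme like the one the abstract sketches, the selected samples would be appended to the source set with their pseudo-labels and the model retrained, with the threshold controlling the precision/recall trade-off of the augmentation.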
2. Layer-wise domain correction for unsupervised domain adaptation (cited by 1)
Authors: Shuang LI, Shi-ji SONG, Cheng WU. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2018, Issue 1, pp. 91-103 (13 pages)
Deep neural networks have been successfully applied to numerous machine learning tasks because of their impressive feature abstraction capabilities. However, conventional deep networks assume that the training and test data are sampled from the same distribution, and this assumption is often violated in real-world scenarios. To address the domain shift or data bias problems, we introduce layer-wise domain correction (LDC), a new unsupervised domain adaptation algorithm which adapts an existing deep network through additive correction layers spaced throughout the network. Through the additive layers, the representations of source and target domains can be perfectly aligned. The corrections, which are trained via maximum mean discrepancy, adapt to the target domain while increasing the representational capacity of the network. LDC requires no target labels, achieves state-of-the-art performance across several adaptation benchmarks, and requires significantly less training time than existing adaptation methods.
Keywords: Unsupervised domain adaptation · Maximum mean discrepancy · Residual network · Deep learning
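The LDC correction layers are trained with maximum mean discrepancy (MMD), a kernel-based distance between the source and target feature distributions. A self-contained NumPy sketch of the standard biased squared-MMD estimate with an RBF kernel is below; the function name and `gamma` default are our own choices for illustration, and the paper's actual training objective may differ in kernel and weighting.

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between samples X (n, d) and Y (m, d)
    under the RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def kernel(A, B):
        # Pairwise squared Euclidean distances via broadcasting, then RBF.
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * sq_dists)

    # MMD^2 = E[k(x, x')] - 2 E[k(x, y)] + E[k(y, y')]
    return kernel(X, X).mean() - 2.0 * kernel(X, Y).mean() + kernel(Y, Y).mean()
```

In an LDC-style setup, this quantity would be computed between source and target activations at each corrected layer and minimized with respect to the correction-layer parameters; it is zero when both batches are identical and grows as the two feature distributions separate.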