Funding: This work was supported by the Research Projects of the Natural Science Foundation of Hebei Province (F2021402005).
Abstract: Semi-supervised new intent discovery is a significant research focus in natural language understanding. To address the limitations of current semi-supervised training data and the underutilization of implicit information, a Semi-supervised New Intent Discovery model with Elastic Neighborhood Syntactic Elimination and Fusion (SNID-ENSEF) is proposed. Syntactic elimination contrastive learning leverages verb-dominant syntactic features, systematically replacing specific words to enhance data diversity. The radius of the positive-sample neighborhood is elastically adjusted to eliminate invalid samples and improve training efficiency. A neighborhood sample fusion strategy, based on sample distribution patterns, dynamically adjusts the neighborhood size and fuses sample vectors to reduce noise and improve both the utilization of implicit information and discovery accuracy. Experimental results show that SNID-ENSEF achieves average improvements of 0.88%, 1.27%, and 1.30% in Normalized Mutual Information (NMI), Accuracy (ACC), and Adjusted Rand Index (ARI), respectively, outperforming the PTJN, DPN, MTP-CLNN, and DWG models on the Banking77, StackOverflow, and Clinc150 datasets. The code is available at https://github.com/qsdesz/SNID-ENSEF, accessed on 16 January 2025.
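To make the neighborhood mechanism described above more concrete, the sketch below shows one plausible reading of the elastic-radius and fusion steps: the neighborhood radius is set from the local distance distribution, out-of-radius samples are discarded as invalid, and the surviving neighbors are fused with the query embedding. It is a minimal illustration under assumptions of my own (cosine distance, a mean-plus-standard-deviation radius, simple mean fusion, and the names `elastic_neighborhood_fuse` and `alpha`), not the authors' released implementation, which is available at the repository linked in the abstract.

```python
# Hypothetical sketch of an elastic-neighborhood fusion step; not the SNID-ENSEF code.
import numpy as np

def elastic_neighborhood_fuse(query, embeddings, k=10, alpha=1.0):
    """Fuse a query embedding with its elastic neighborhood.

    query      : (d,) L2-normalized utterance embedding.
    embeddings : (n, d) L2-normalized embeddings of the unlabeled pool.
    k          : maximum neighborhood size considered.
    alpha      : hypothetical scale factor controlling how tightly the radius
                 follows the local distance distribution.
    """
    # Cosine distance from the query to every candidate sample.
    dists = 1.0 - embeddings @ query                      # (n,)
    nn_idx = np.argsort(dists)[:k]                        # k nearest candidates
    nn_dists = dists[nn_idx]

    # "Elastic" radius: adapts to the local distance distribution, so dense
    # regions keep tight neighborhoods while sparse regions stay permissive.
    radius = nn_dists.mean() + alpha * nn_dists.std()

    # Eliminate invalid (out-of-radius) samples.
    keep = nn_idx[nn_dists <= radius]
    if keep.size == 0:
        return query                                      # fall back to the query itself

    # Fuse the surviving neighbors with the query (simple mean here; the paper's
    # fusion rule may differ) and re-normalize.
    fused = np.vstack([query[None, :], embeddings[keep]]).mean(axis=0)
    return fused / np.linalg.norm(fused)

# Example: denoise one utterance embedding against a pool of 1,000 samples.
rng = np.random.default_rng(0)
pool = rng.normal(size=(1000, 768))
pool /= np.linalg.norm(pool, axis=1, keepdims=True)
print(elastic_neighborhood_fuse(pool[0], pool).shape)     # (768,)
```

The design choice being illustrated is that the radius is per-sample rather than global, which is what lets the model drop noisy positives in dense regions without starving sparse regions of neighbors.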