Funding: This work was supported by the Deanship of Scientific Research at King Khalid University through Small Groups funding (Project Grant No. RGP1/243/45), awarded to Dr. Mohammed Abker, and by the Natural Science Foundation of China under Grant 61901388.
Abstract: Arabic Dialect Identification (DID) is a task in Natural Language Processing (NLP) that involves determining the dialect of a given piece of Arabic text. State-of-the-art solutions for DID are built on various deep neural networks that commonly learn sentence representations with respect to a given dialect. Despite the effectiveness of these solutions, their performance relies heavily on the amount of labeled examples, which are labor-intensive to attain and may not be readily available in real-world scenarios. To alleviate the burden of labeling data, this paper introduces a novel solution that leverages unlabeled corpora to boost performance on the DID task. Specifically, we design an architecture that learns the information shared between labeled and unlabeled texts through a gradient reversal layer. The key idea is to penalize the model for learning source-dataset-specific features, thus enabling it to capture common knowledge regardless of the label. Finally, we evaluate the proposed solution on benchmark datasets for DID. Our extensive experiments show that it performs significantly better, especially with sparse labeled data. Compared with existing Pre-trained Language Models (PLMs), our approach achieves new state-of-the-art performance in the DID field. The code will be available on GitHub upon the paper's acceptance.
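The abstract describes, but does not show, the gradient reversal mechanism. The PyTorch sketch below is illustrative only: it implements the standard gradient-reversal-layer technique (identity on the forward pass, negated and scaled gradient on the backward pass) wired between a shared encoder and a domain head that tries to tell labeled text from unlabeled text. All names (GradReverse, DialectModel, domain_head, lambd) and all dimensions are assumptions for the sketch, not the authors' released code.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity forward; flips the sign of the gradient on the way back."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed gradient penalizes the encoder for features that let
        # the domain head separate labeled from unlabeled sentences.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class DialectModel(nn.Module):
    def __init__(self, in_dim=768, hidden=256, n_dialects=5, lambd=1.0):
        super().__init__()
        # Placeholder encoder; a real DID system would use a sentence
        # encoder such as a PLM here.
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.dialect_head = nn.Linear(hidden, n_dialects)  # supervised DID head
        self.domain_head = nn.Linear(hidden, 2)            # labeled vs. unlabeled corpus
        self.lambd = lambd

    def forward(self, x):
        h = self.encoder(x)
        dialect_logits = self.dialect_head(h)
        # The gradient reversal layer sits between the shared encoder
        # and the domain discriminator.
        domain_logits = self.domain_head(grad_reverse(h, self.lambd))
        return dialect_logits, domain_logits

# Minimal smoke test with random features standing in for encoded sentences.
model = DialectModel()
batch = torch.randn(8, 768)
dialect_logits, domain_logits = model(batch)

In a training loop of this kind, the dialect head would be optimized on labeled batches only, while the domain head sees both labeled and unlabeled batches; the reversed gradient pushes the encoder toward features the domain head cannot exploit, i.e., knowledge shared across both corpora, which matches the abstract's stated goal.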