When performing English-to-Tamil Neural Machine Translation (NMT), end users face several challenges due to Tamil's rich morphology, free word order, and limited annotated corpora. Although available transformer-based models offer strong baselines, they compromise syntactic awareness and the detection and management of offensive content in cluttered, noisy, and informal text. In this paper, we present POSDEP-Offense-Trans, a multi-task NMT framework that combines Part-of-Speech (POS) tagging and Dependency Parsing (DEP) with a robust offensive language classification module. Our architecture enriches the Transformer encoder with syntax-aware embeddings and syntax-guided attention mechanisms. It incorporates a structure-aware contrastive loss that reinforces syntactic consistency and deploys auxiliary classification heads for POS tagging, dependency parsing, and multi-class offensive detection. The offensive language classifier operates at both sentence and token levels, guided by syntactic features and formal finite-automata rules that model offensive language structures: hate speech, profanity, sarcasm, and threats. Using this architecture, we construct a syntactically enriched, socially annotated corpus. Experimental results show improvements in translation quality, with a BLEU score of 33.5, UAS/LAS parsing accuracies of 92.4% and 90%, and a 4.5% F1-score gain in offensive content detection compared with baseline POS+DEP+Offense models. The proposed model also achieved 92.3% in offensive content neutralization, as confirmed by ablation studies. This comprehensive English-Tamil NMT model unifies syntactic modelling and ethical filtering, laying the groundwork for applications in social media moderation, hate speech mitigation, and policy-compliant multilingual content generation.
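The finite-automata rules mentioned above can be illustrated with a minimal sketch: a DFA over coarse token categories flags candidate offensive spans for the token-level classifier. The lexicon, categories, and transition table here are hypothetical stand-ins for illustration only, not the paper's actual rule set.

```python
# Hypothetical rule set: the real system's lexicon and automata differ.
OFFENSIVE_LEXICON = {"idiot", "trash"}
TARGET_PRONOUNS = {"you", "they", "he", "she"}

def categorize(token):
    """Map a token to a coarse category consumed by the automaton."""
    t = token.lower()
    if t in OFFENSIVE_LEXICON:
        return "PROFANE"
    if t in TARGET_PRONOUNS:
        return "TARGET"
    return "OTHER"

# DFA transitions: a targeted insult is TARGET ... PROFANE; bare profanity
# is flagged immediately. Missing (state, category) pairs reset to START.
TRANSITIONS = {
    ("START", "PROFANE"): "FLAG",
    ("START", "TARGET"): "TARGETED",
    ("TARGETED", "OTHER"): "TARGETED",
    ("TARGETED", "TARGET"): "TARGETED",
    ("TARGETED", "PROFANE"): "FLAG",
}

def flag_offensive(tokens):
    """Return indices of tokens on which the automaton accepts."""
    state, flagged = "START", []
    for i, tok in enumerate(tokens):
        state = TRANSITIONS.get((state, categorize(tok)), "START")
        if state == "FLAG":
            flagged.append(i)
            state = "START"  # reset and keep scanning the sentence
    return flagged
```

In a full system, such automaton hits would be fed to the token-level classification head as additional features rather than used as a standalone filter.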
In multi-domain neural machine translation tasks, the disparity in data distribution between domains poses significant challenges in distinguishing domain features and sharing parameters across domains. This paper proposes a Transformer-based multi-domain-aware mixture-of-experts model. To address domain feature differentiation, a mixture of experts (MoE) is introduced into the attention mechanism to enhance the model's domain perception ability and thereby improve domain feature differentiation. To address the trade-off between domain feature distinction and cross-domain parameter sharing, we propose a domain-aware mixture of experts (DMoE): a domain-aware gating mechanism within the MoE module simultaneously activates all domain experts, effectively blending domain feature distinction with cross-domain parameter sharing. A loss-balancing function is then added to dynamically adjust the impact of the loss function on the expert distribution, enabling fine-tuning of the expert activation distribution to achieve balance between domains. Experimental results on multiple Chinese-to-English and English-to-French datasets demonstrate that our method significantly outperforms baseline models on BLEU, chrF, and COMET metrics, validating its effectiveness in multi-domain neural machine translation. Further analysis of the probability distribution of expert activations shows that our method achieves strong results in both domain differentiation and cross-domain parameter sharing.
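The DMoE idea, all experts active and blended by a gate that also sees a domain signal, plus a balancing term on expert usage, can be sketched as a toy NumPy forward pass. Shapes and the balancing-loss form are assumptions (one common variant), not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dmoe_forward(h, expert_weights, gate_weight, domain_emb):
    """One DMoE layer: ALL experts are activated and blended by a gate
    that sees both the hidden state and a domain embedding.
    h: (d,)  expert_weights: (E, d, d)  gate_weight: (E, 2d)  domain_emb: (d,)"""
    gate_in = np.concatenate([h, domain_emb])         # domain-aware gate input
    gate = softmax(gate_weight @ gate_in)             # (E,), all entries > 0
    outs = np.stack([W @ h for W in expert_weights])  # (E, d) per-expert outputs
    return gate @ outs, gate

def load_balance_loss(gate_probs):
    """Penalize skewed expert usage; minimal when usage is uniform.
    gate_probs: (batch, E) gate distributions."""
    mean = gate_probs.mean(axis=0)                    # average load per expert
    return float(len(mean) * np.sum(mean * mean))     # == 1.0 when uniform
```

Because the softmax gate keeps every expert's weight strictly positive, gradients flow to all experts on every example, which is what allows cross-domain parameter sharing alongside domain specialization.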
Natural language to Structured Query Language conversion (NL2SQL) lowers the technical barrier for non-specialists to work with databases, improving user experience and productivity. In addition, retrieval-augmented generation (RAG) can improve NL2SQL performance by introducing external knowledge bases. To address the high miss rate of current retrieval strategies and the weak relevance of recalled context in RAG-based NL2SQL, this paper proposes a sequenced-retrieval and reranking RAG method (RAG-SRR) that optimizes knowledge-base construction, the retrieval and recall strategy, and prompt design. First, a domain knowledge base is built from three sources: question-answer pairs, constructed from high-frequency processing and query questions in the regulation of cultural-relic and artwork auctions; domain terminology, constructed from auction-industry standards; and database schemas, constructed from data of the 雅昌 (Artron) art auction website. Second, the retrieval stage adopts a sequenced retrieval strategy with different priorities assigned to the three knowledge bases, and the recalled information is reranked at the recall stage. Finally, for prompt design, principles for prompt optimization and prompt templates are given. Experimental results show that, compared with methods based on BERT (Bidirectional Encoder Representations from Transformers) and RESDSQL (Ranking-enhanced Encoding plus a Skeleton-aware Decoding framework for text-to-SQL), RAG-SRR improves execution accuracy by at least 19.50 and 24.20 percentage points on the domain dataset and by at least 12.17 and 8.90 percentage points on the Spider dataset. Under the same large language model, RAG-SRR improves execution accuracy over unoptimized RAG by at least 12.83 and 15.60 percentage points on the two datasets, and over C3SQL by at least 1.50 and 3.10 percentage points. With Llama3.1-8B, compared with DIN-SQL, execution accuracy improves by 0.30 percentage points on the Chinese corpus and trails by at most 3.90 percentage points on the English corpus; with Qwen2.5-7B, it improves by 1.60 and 4.10 percentage points respectively. These results indicate that RAG-SRR offers strong practicality and portability.
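The two-stage idea, retrieving from prioritized knowledge bases and then reranking the pooled candidates, can be sketched as follows. The word-overlap scorer and the tie-breaking rule are illustrative assumptions; a real system would use embedding retrieval and a learned reranker.

```python
def score(query, doc):
    """Toy relevance score via word overlap; a real system would use
    dense embeddings or a cross-encoder reranker."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def tiered_retrieve_rerank(query, kbs, per_kb=2):
    """kbs: list of (priority, docs) for the three knowledge bases
    (QA pairs, domain terms, DB schema). Stage 1 retrieves top hits
    from each KB in priority order; stage 2 reranks the pooled pool."""
    recalled = []
    for priority, docs in sorted(kbs):  # lower number = higher priority
        top = sorted(docs, key=lambda d: score(query, d), reverse=True)[:per_kb]
        recalled.extend((priority, d) for d in top)
    # Rerank: relevance first; KB priority breaks ties.
    recalled.sort(key=lambda pd: (-score(query, pd[1]), pd[0]))
    return [doc for _, doc in recalled]
```

The reranked list would then be injected into the prompt template ahead of SQL generation, so the most relevant context appears closest to the question.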
In low-resource Chinese-Vietnamese translation, accurately translating the entity words in a sentence is a major difficulty. Because entity words occur infrequently in the training corpus, the model fails to build mappings between bilingual entity pairs. To address this, we construct a Chinese-Vietnamese neural machine translation model that incorporates entity translation. First, translations of the entity words in the source sentence are obtained in advance from a Chinese-Vietnamese bilingual entity dictionary; next, these translations are appended to the end of the source sentence as the model input, while "constraint prompt information" is introduced on the encoder side to strengthen the representation; finally, a pointer-network mechanism is incorporated on the decoder side to ensure the model can copy words from the source sentence into the output. Experimental results show that, compared with the cross-lingual model XLM-R (Cross-lingual Language Model-RoBERTa), the model improves the bilingual evaluation understudy (BLEU) score by 1.37 in the Chinese-to-Vietnamese direction and 0.21 in the Vietnamese-to-Chinese direction, and reduces runtime relative to Transformer by 3.19% and 3.50% in the two directions respectively, effectively improving the overall quality of entity-word translation.
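The input-augmentation and copy steps described above can be sketched as two small functions: one appends dictionary translations of source entities to the input, and one mixes a pointer network's copy distribution with the decoder's generation distribution. The separator token and exact mixing form are assumptions for illustration.

```python
import numpy as np

def build_entity_augmented_input(src_tokens, entity_dict, sep="<ent>"):
    """Append dictionary translations of any source entities to the end of
    the source sentence; the encoder then sees both forms at once."""
    aug = list(src_tokens)
    for tok in src_tokens:
        if tok in entity_dict:
            aug += [sep, tok, "=", entity_dict[tok]]
    return aug

def pointer_mix(p_vocab, attn, src_ids, p_gen):
    """Pointer-network output distribution: mix the decoder's generation
    distribution with a copy distribution over source positions.
    p_vocab: (V,) softmax over vocab  attn: (S,) attention over source
    src_ids: (S,) vocab ids of source tokens  p_gen: generation probability"""
    out = p_gen * p_vocab
    np.add.at(out, src_ids, (1.0 - p_gen) * attn)  # scatter-add copy mass
    return out
```

`np.add.at` is used rather than fancy indexing so that repeated source tokens accumulate their copy probability correctly; the result remains a valid distribution whenever `p_vocab` and `attn` each sum to one.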
Funding: Supported by the National Natural Science Foundation of China (U2004163) and the Key Research and Development Program of Henan Province (No. 251111211200).