Journal Articles: 26 articles found
1. Improvements of Google Neural Machine Translation
Authors: Li Rui, Jiang Meijia. Overseas English, 2017, Issue 15, pp. 132-134 (3 pages).
Abstract: Machine Translation has been playing an important role in modern society due to its effectiveness and efficiency, but the great demand for corpora makes it difficult for users to use traditional Machine Translation systems. To solve this problem and improve translation quality, in November 2016 Google introduced the Google Neural Machine Translation (GNMT) system, which implements the latest techniques to achieve better outcomes. This conspicuous achievement has been demonstrated by experiments that use the BLEU score to measure the performance of different systems. With GNMT, the gap between human and machine translation is narrowing.
Keywords: machine translation; machine translation improvement; translation; google neural machine translation; neural machine translation
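Since this entry (and most below) ranks systems by BLEU score, a minimal sketch of sentence-level BLEU may help: clipped n-gram precisions combined by a geometric mean and scaled by a brevity penalty. The example strings are hypothetical, and production work would use a tested implementation such as sacreBLEU.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram
    precisions, scaled by a brevity penalty."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped counts
        total = max(sum(hyp_ngrams.values()), 1)
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))  # brevity penalty
    return bp * math.exp(sum(log_precisions) / max_n)

# Hypothetical example: comparing two system outputs against one reference.
print(bleu("the cat sat on the mat", "the cat sat on the mat"))  # 1.0
print(bleu("a cat was on a mat", "the cat sat on the mat"))      # much lower
```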
2. Dependency-Based Local Attention Approach to Neural Machine Translation (cited 3 times)
Authors: Jing Qiu, Yan Liu, Yuhan Chai, Yaqi Si, Shen Su, Le Wang, Yue Wu. Computers, Materials & Continua (SCIE, EI), 2019, Issue 5, pp. 547-562 (16 pages).
Abstract: Recently, dependency information has been used in different ways to improve neural machine translation, for example by adding dependency labels to the hidden states of source words, or by finding the contiguous information of a source word according to the dependency tree and then learning it independently and adding it into the Neural Machine Translation (NMT) model as a unit in various ways. However, these works are all limited to using dependency information to enrich the hidden states of source words. Since many works in Statistical Machine Translation (SMT) and NMT have proven the validity and potential of using dependency information, we believe that there are still many ways to apply dependency information in the NMT structure. In this paper, we explore a new way to use dependency information to improve NMT. Based on the theory of the local attention mechanism, we present the Dependency-based Local Attention Approach (DLAA), a new attention mechanism that allows the NMT model to trace the dependency words related to the word currently being translated. Our work also indicates that dependency information can help to supervise the attention mechanism. Experiment results on the WMT 17 Chinese-to-English translation task's shared training datasets show that our model is effective and performs distinctly well on long sentence translation.
Keywords: neural machine translation; attention mechanism; dependency parsing
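The DLAA idea above (letting attention trace only dependency-related source words) can be illustrated with a toy mask over attention scores. This is our reading of the abstract, not the authors' code; restricting each position to itself, its head, and its children is an assumption.

```python
import numpy as np

def dependency_local_attention(scores, heads):
    """Illustrative DLAA-style mask: each position may only attend to
    the word itself, its dependency head, and its children."""
    src_len = len(heads)
    mask = np.full((src_len, src_len), -np.inf)
    for i, h in enumerate(heads):
        mask[i, i] = 0.0                      # the word itself
        if h >= 0:
            mask[i, h] = 0.0                  # its head
        for j, hj in enumerate(heads):
            if hj == i:
                mask[i, j] = 0.0              # its children
    masked = scores + mask
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return weights / weights.sum(axis=-1, keepdims=True)

# Toy example: 4 source words, heads[i] = index of word i's head (-1 = root).
heads = [1, -1, 1, 2]
scores = np.random.randn(4, 4)
print(dependency_local_attention(scores, heads).round(2))
```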
3. Corpus Augmentation for Improving Neural Machine Translation (cited 2 times)
Authors: Zijian Li, Chengying Chi, Yunyun Zhan. Computers, Materials & Continua (SCIE, EI), 2020, Issue 7, pp. 637-650 (14 pages).
Abstract: The translation quality of neural machine translation (NMT) systems depends largely on the quality of the large-scale bilingual parallel corpora available. Research shows that under the condition of limited resources, the performance of NMT is greatly reduced, and a large amount of high-quality bilingual parallel data is needed to train a competitive translation model. However, not all languages have large-scale, high-quality bilingual corpus resources available. In these cases, improving the quality of the corpora has become the main focus for increasing the accuracy of NMT results. This paper proposes a new method to improve data quality by using data cleaning, data expansion, and other measures to expand the data at the word and sentence level, thus improving the richness of the bilingual data. A long short-term memory (LSTM) language model is also used to ensure the smoothness of sentence construction, and a variety of processing methods are applied to improve the quality of the bilingual data. Experiments using three standard test sets are conducted to validate the proposed method; the state-of-the-art fairseq-Transformer NMT system is used in the training. The results show that the proposed method works well at improving the translation results: its BLEU score is 2.34 higher than that of the baseline.
Keywords: neural machine translation; corpus augmentation; model improvement; deep learning; data cleaning
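The word-level expansion plus LSTM-LM fluency gate described above can be sketched as follows. Here `synonyms` and `lm_score` are assumed stand-in interfaces for the paper's substitution resource and trained language model, not its actual components.

```python
import random

def augment_sentence(tokens, synonyms, lm_score, threshold, k=3):
    """Hedged sketch of word-level corpus expansion: substitute words
    with candidate replacements, then keep only variants the language
    model judges fluent (higher lm_score = more fluent)."""
    candidates = []
    for _ in range(k):
        new = list(tokens)
        i = random.randrange(len(new))
        if new[i] in synonyms:
            new[i] = random.choice(synonyms[new[i]])
            candidates.append(new)
    # LSTM-LM fluency gate: discard clumsy constructions.
    return [c for c in candidates if lm_score(c) >= threshold]

synonyms = {"big": ["large", "huge"], "road": ["street", "path"]}
fake_lm = lambda toks: 0.9  # stand-in for a trained LSTM language model
print(augment_sentence("the big road is busy".split(), synonyms, fake_lm, 0.5))
```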
4. A Novel Beam Search to Improve Neural Machine Translation for English-Chinese (cited 2 times)
Authors: Xinyue Lin, Jin Liu, Jianming Zhang, Se-Jung Lim. Computers, Materials & Continua (SCIE, EI), 2020, Issue 10, pp. 387-404 (18 pages).
Abstract: Neural Machine Translation (NMT) is an end-to-end learning approach to automated translation that overcomes the weaknesses of conventional phrase-based translation systems. Although NMT-based systems have gained popularity in commercial translation applications, there is still plenty of room for improvement. As the most popular search algorithm in NMT, beam search is vital to the translation result. However, traditional beam search can produce duplicate or missing translations due to its target-sequence selection strategy. Aiming to alleviate this problem, this paper proposes improvements to neural machine translation based on a novel beam-search evaluation function, and uses reinforcement learning to train a translation evaluation system that selects better candidate words for generating translations. We conducted extensive experiments to evaluate our methods, using the CASIA corpus and the 1,000,000 bilingual sentence pairs from NiuTrans. The experiment results prove that the proposed methods can effectively improve English-to-Chinese translation quality.
Keywords: neural machine translation; beam search; reinforcement learning
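A beam search whose ranking combines model log-probability with a learned evaluation score, as the abstract proposes, can be sketched like this. The `evaluator` callable stands in for the RL-trained scorer and is an assumption; the toy version below simply penalizes immediate repetition, one of the failure modes the paper targets.

```python
import math

def beam_search(step_probs, evaluator, beam_size=3):
    """Minimal beam search with a pluggable evaluation function:
    candidates are ranked by log-probability plus a learned score."""
    beams = [([], 0.0)]
    for probs in step_probs:            # probs: {token: p} at each step
        expanded = []
        for seq, logp in beams:
            for tok, p in probs.items():
                cand = seq + [tok]
                score = logp + math.log(p) + evaluator(cand)
                expanded.append((cand, score))
        beams = sorted(expanded, key=lambda x: x[1], reverse=True)[:beam_size]
    return beams[0][0]

# Toy evaluator: discourage repeating the previous token.
no_repeat = lambda seq: -1.0 if len(seq) > 1 and seq[-1] == seq[-2] else 0.0
steps = [{"我": 0.6, "你": 0.4}, {"我": 0.5, "们": 0.5}]
print(beam_search(steps, no_repeat))   # ['我', '们']
```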
5. Neural Machine Translation Models with Attention-Based Dropout Layer (cited 1 time)
Authors: Huma Israr, Safdar Abbas Khan, Muhammad Ali Tahir, Muhammad Khuram Shahzad, Muneer Ahmad, Jasni Mohamad Zain. Computers, Materials & Continua (SCIE, EI), 2023, Issue 5, pp. 2981-3009 (29 pages).
Abstract: In bilingual translation, attention-based Neural Machine Translation (NMT) models are used to achieve synchrony between input and output sequences and the notion of alignment, and have obtained state-of-the-art performance for several language pairs. However, there has been little work exploring useful architectures for Urdu-to-English machine translation. We conducted extensive Urdu-to-English translation experiments using Long Short-Term Memory (LSTM), Bidirectional Recurrent Neural Networks (Bi-RNN), the Statistical Recurrent Unit (SRU), the Gated Recurrent Unit (GRU), Convolutional Neural Networks (CNN), and the Transformer. Experimental results show that Bi-RNN and LSTM with an attention mechanism, trained iteratively with a scalable dataset, make precise predictions on unseen data. The trained models yielded competitive results, achieving 62.6% and 61% accuracy and BLEU scores of 49.67 and 47.14, respectively. From a qualitative perspective, the translations of the test sets were examined manually, and it was observed that the trained models tend to produce repetitive output. The attention scores produced by Bi-RNN and LSTM gave clear alignments, while GRU showed incorrect word translations, poor alignment, and a lack of clear structure. We therefore refined the attention-based models by defining an additional attention-based dropout layer. Attention dropout fixes alignment errors and minimizes translation errors at the word level. After empirical demonstration and comparison with their counterparts, we found an improvement in the quality of the resulting translation system and a decrease in perplexity and the over-translation score. The ability of the proposed model was also evaluated using Arabic-English and Persian-English datasets. We empirically conclude that adding an attention-based dropout layer helps improve GRU, SRU, and Transformer translation and is considerably more efficient in translation quality and speed.
Keywords: natural language processing; neural machine translation; word embedding; attention; perplexity; selective dropout; regularization; Urdu; Persian; Arabic; BLEU
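The attention-based dropout layer is described only at a high level; one plausible reading is dropout applied to the alignment weights themselves, with renormalization, as sketched below. This interpretation is an assumption, not the authors' implementation.

```python
import numpy as np

def attention_dropout(weights, drop_rate=0.1, rng=None):
    """Sketch of selective dropout over attention weights: randomly
    zero some alignment weights during training and renormalize,
    discouraging over-reliance on a single alignment and the
    repetitive outputs the authors observed."""
    rng = rng or np.random.default_rng(0)
    keep = rng.random(weights.shape) >= drop_rate
    dropped = weights * keep
    norm = dropped.sum(axis=-1, keepdims=True)
    norm[norm == 0] = 1.0   # guard: keep rows valid if everything dropped
    return dropped / norm

attn = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
print(attention_dropout(attn, drop_rate=0.3))
```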
6. LKMT: Linguistics Knowledge-Driven Multi-Task Neural Machine Translation for Urdu and English
Authors: Muhammad Naeem Ul Hassan, Zhengtao Yu, Jian Wang, Ying Li, Shengxiang Gao, Shuwan Yang, Cunli Mao. Computers, Materials & Continua (SCIE, EI), 2024, Issue 10, pp. 951-969 (19 pages).
Abstract: Thanks to the strong representation capability of pre-trained language models, supervised machine translation models have achieved outstanding performance. However, the performance of these models drops sharply when the scale of the parallel training corpus is limited. Considering that a pre-trained language model has a strong ability for monolingual representation, the key challenge for machine translation is to construct an in-depth relationship between the source and target languages by injecting lexical and syntactic information into pre-trained language models. To alleviate the dependence on the parallel corpus, we propose a Linguistics Knowledge-Driven Multi-Task (LKMT) approach that injects part-of-speech and syntactic knowledge into pre-trained models, thus enhancing machine translation performance. On the one hand, we integrate part-of-speech and dependency labels into the embedding layer and exploit a large-scale monolingual corpus to update all parameters of the pre-trained language model, thus ensuring that the updated language model contains potential lexical and syntactic information. On the other hand, we leverage an extra self-attention layer to explicitly inject linguistic knowledge into the machine translation model enhanced by the pre-trained language model. Experiments on the benchmark dataset show that our proposed LKMT approach improves Urdu-English translation accuracy by 1.97 points and English-Urdu translation accuracy by 2.42 points, highlighting the effectiveness of our LKMT framework. Detailed ablation experiments confirm the positive impact of part-of-speech tags and dependency parsing on machine translation.
Keywords: Urdu NMT (neural machine translation); Urdu natural language processing; Urdu linguistic features; low-resource language; linguistic features; pre-trained model
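The embedding-layer injection described above (token, part-of-speech, and dependency-label embeddings combined before the encoder) can be sketched in PyTorch. The vocabulary sizes and the additive combination are illustrative assumptions, not the LKMT specification.

```python
import torch
import torch.nn as nn

class LinguisticEmbedding(nn.Module):
    """Sketch of an embedding layer that fuses lexical and syntactic
    information: token + POS tag + dependency relation embeddings."""
    def __init__(self, vocab, n_pos, n_dep, d_model=512):
        super().__init__()
        self.tok = nn.Embedding(vocab, d_model)
        self.pos = nn.Embedding(n_pos, d_model)
        self.dep = nn.Embedding(n_dep, d_model)

    def forward(self, tok_ids, pos_ids, dep_ids):
        # Summing fuses all three signals into one representation.
        return self.tok(tok_ids) + self.pos(pos_ids) + self.dep(dep_ids)

emb = LinguisticEmbedding(vocab=32000, n_pos=17, n_dep=40)
tok = torch.tensor([[5, 77, 123]])   # token ids
pos = torch.tensor([[3, 1, 7]])      # POS tag ids (e.g., UD tags)
dep = torch.tensor([[2, 0, 9]])      # dependency relation ids
print(emb(tok, pos, dep).shape)      # torch.Size([1, 3, 512])
```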
7. Neural Machine Translation by Fusing Key Information of Text
Authors: Shijie Hu, Xiaoyu Li, Jiayu Bai, Hang Lei, Weizhong Qian, Sunqiang Hu, Cong Zhang, Akpatsa Samuel Kofi, Qian Qiu, Yong Zhou, Shan Yang. Computers, Materials & Continua (SCIE, EI), 2023, Issue 2, pp. 2803-2815 (13 pages).
Abstract: When the Transformer was proposed by Google in 2017, it was first used for machine translation tasks and achieved the state of the art at that time. Although current neural machine translation models can generate high-quality translation results, there are still mistranslations and omissions in the translation of the key information of long sentences. On the other hand, the most important part of traditional translation tasks is the translation of key information: as long as the key information is translated accurately and completely, even if other parts of the result are translated incorrectly, the quality of the final translation can still be guaranteed. In order to solve the problems of mistranslation and missed translation effectively, and to improve the accuracy and completeness of long-sentence translation, this paper proposes a key-information-fused neural machine translation model based on the Transformer. The proposed model extracts the keywords of the source-language text separately as input to the encoder. After the same encoding as the source-language text, this is fused with the encoder output for the source-language text; the key information is then processed and input into the decoder. By incorporating keyword information from the source-language sentence, the model performs very reliably on the task of translating long sentences. To verify the effectiveness of the proposed key-information fusion method, a series of experiments was carried out on the validation set. The experimental results show that the Bilingual Evaluation Understudy (BLEU) score of the proposed model on the Workshop on Machine Translation (WMT) 2017 test dataset is higher than that of Google's Transformer on the same dataset, demonstrating the advantages of the proposed model.
Keywords: key information; Transformer; fusion; neural machine translation
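One way to picture the keyword-fusion step is a gated cross-attention between the encoded source and the encoded keywords, as below. The gated-sum choice is assumed for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn

class KeyInfoFusion(nn.Module):
    """Sketch of fusing an encoded source sentence with separately
    encoded keywords before decoding."""
    def __init__(self, d_model=512):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, src_enc, key_enc):
        # Let each source state attend to the keyword states...
        key_ctx, _ = self.attn(src_enc, key_enc, key_enc)
        # ...then gate how much keyword context flows to the decoder.
        g = torch.sigmoid(self.gate(torch.cat([src_enc, key_ctx], dim=-1)))
        return g * src_enc + (1 - g) * key_ctx

fusion = KeyInfoFusion()
src = torch.randn(1, 20, 512)   # encoded source sentence
keys = torch.randn(1, 4, 512)   # encoded keywords
print(fusion(src, keys).shape)  # torch.Size([1, 20, 512])
```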
8. A Novel Chinese-English Neural Machine Translation Model Based on BERT
Author: Linlin Zhang. IJLAI Transactions on Science and Engineering, 2025, Issue 1, pp. 59-65 (7 pages).
Abstract: In recent years, neural machine translation has developed rapidly and replaced traditional machine translation, becoming the mainstream paradigm in the field. Machine translation can reduce translation costs and improve translation efficiency, benefiting cultural exchange, international cooperation, and national development. However, neural machine translation is highly dependent on large-scale, high-quality parallel corpora and suffers from problems such as uneven quality and sparse data, so studying and exploring it is imperative. The purpose of this paper is to construct a pseudo-parallel corpus using data-enhancement technology, improve the diversity of Chinese and English materials, and then optimize the translation model to improve its translation results. Based on BERT pre-training technology, this paper first analyzes the limitations of the traditional Transformer model and then puts forward two directions for model optimization. On the one hand, in the data pre-processing stage, multi-granularity word segmentation is used to help the Chinese-English neural machine translation model better understand the text. On the other hand, in the pre-training stage, this paper adopts a strategy of deep integration of BERT dynamic word embeddings with the original word embeddings. On the basis of the original Transformer, a fusion module is added through which the original word embeddings and the BERT dynamic word embeddings are combined by simple linear splicing and then fed into the encoder. The attention mechanism is used for deep integration and a better word-vector representation, enabling the Transformer model to take full advantage of the external semantic information introduced by BERT. Finally, the feasibility and effectiveness of the adopted Transformer architecture are verified by a comparison experiment between RNN and Transformer models. Ablation experiments with different word-vector representations and with BERT pre-training applied at different stages verify the effectiveness of BERT dynamic word embedding and deep fusion of word embeddings, as well as the rationality of using pre-training technology only in the encoder stage.
Keywords: Transformer; Chinese-English neural machine translation; BERT; multi-granularity word segmentation
9. Neural machine translation: Challenges, progress and future (cited 13 times)
Authors: ZHANG JiaJun, ZONG ChengQing. Science China (Technological Sciences) (SCIE, EI, CAS, CSCD), 2020, Issue 10, pp. 2028-2050 (23 pages).
Abstract: Machine translation (MT) is a technique that leverages computers to translate human languages automatically. Nowadays, neural machine translation (NMT), which models the direct mapping between source and target languages with deep neural networks, has achieved a big breakthrough in translation performance and become the de facto paradigm of MT. This article reviews the NMT framework, discusses the challenges in NMT, introduces some exciting recent progress, and finally looks forward to some potential future research trends.
Keywords: neural machine translation; Transformer; multimodal translation; low-resource translation; document translation
10. Document-Level Neural Machine Translation with Hierarchical Modeling of Global Context (cited 2 times)
Authors: Xin Tan, Long-Yin Zhang, Guo-Dong Zhou. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2022, Issue 2, pp. 295-308 (14 pages).
Abstract: Document-level machine translation (MT) remains challenging due to the difficulty of efficiently using document-level global context for translation. In this paper, we propose a hierarchical model to learn the global context for document-level neural machine translation (NMT). This is done through a sentence encoder that captures intra-sentence dependencies and a document encoder that models document-level inter-sentence consistency and coherence. With this hierarchical architecture, we feed the extracted document-level global context back to each word in a top-down fashion to distinguish different translations of a word according to its specific surrounding context. Notably, we explore the effect of three popular attention functions during the information backward-distribution phase to take a deep look into how our model distributes global context information. In addition, since large-scale in-domain document-level parallel corpora are usually unavailable, we use a two-step training strategy that takes advantage of a large-scale corpus with out-of-domain parallel sentence pairs and a small-scale corpus with in-domain parallel document pairs to achieve domain adaptability. Experimental results on Chinese-English and English-German corpora show that our model improves over the Transformer baseline by 4.5 BLEU points on average, which demonstrates the effectiveness of the proposed hierarchical model for document-level NMT.
Keywords: neural machine translation; document-level translation; global context; hierarchical model
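A compact sketch of the hierarchical design above: encode words within each sentence, pool to sentence vectors, encode those with a document encoder, and feed the global context back to every word. Layer sizes and mean-pooling are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class HierarchicalContext(nn.Module):
    """Sketch of a sentence encoder (intra-sentence dependencies) plus
    a document encoder (inter-sentence context), with the global
    context fed back to each word top-down."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        layer = lambda: nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.sent_enc = nn.TransformerEncoder(layer(), num_layers=2)
        self.doc_enc = nn.TransformerEncoder(layer(), num_layers=2)

    def forward(self, sents):                    # sents: (n_sents, n_words, d)
        word_states = self.sent_enc(sents)
        sent_summaries = word_states.mean(dim=1)                # pool words
        global_ctx = self.doc_enc(sent_summaries.unsqueeze(0))  # (1, n_sents, d)
        # Distribute the document-level context back to every word.
        return word_states + global_ctx.squeeze(0).unsqueeze(1)

model = HierarchicalContext()
doc = torch.randn(3, 15, 512)    # 3 sentences, 15 words each
print(model(doc).shape)          # torch.Size([3, 15, 512])
```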
11. Floating-gate based PN blending optoelectronic synaptic transistor for neural machine translation (cited 1 time)
Authors: Xianghong Zhang, Enlong Li, Rengjian Yu, Lihua He, Weijie Yu, Huipeng Chen, Tailiang Guo. Science China Materials (SCIE, EI, CAS, CSCD), 2022, Issue 5, pp. 1383-1390 (8 pages).
Abstract: Neural machine translation, which has an encoder-decoder framework, is considered a feasible way forward for machine translation. Nevertheless, with the fusion of multiple languages and the continuous emergence of new words, most current neural machine translation systems based on the von Neumann architecture have seen a substantial increase in the number of devices needed for the decoder, resulting in a high energy consumption rate. Here, a multilevel photosensitive blending semiconductor optoelectronic synaptic transistor (MOST) with two different trapping mechanisms is demonstrated for the first time. It exhibits 8 stable and well-distinguishable states, and synaptic behaviors such as excitatory postsynaptic current, short-term memory, and long-term memory are successfully mimicked under illumination in the 480-800 nm wavelength range. More importantly, an optical decoder model based on the MOST is successfully fabricated; this is the first application of a neuromorphic device in the field of neural machine translation, and it significantly simplifies the structure of a traditional neural machine translation system. Moreover, as a multi-level synaptic device, the MOST can further reduce the number of components and simplify the structure of the codec model under light illumination. This work is the first to apply a neuromorphic device to neural machine translation and proposes a multi-level synaptic transistor as the base cell of the decoding module, laying a foundation for breaking through the bottleneck of machine translation.
Keywords: optoelectronic transistor; synaptic transistor; synaptic plasticity modulation; neural machine translation; decoder
12. Improving Machine Translation Formality with Large Language Models
Authors: Murun Yang, Fuxue Li. Computers, Materials & Continua, 2025, Issue 2, pp. 2061-2075 (15 pages).
Abstract: Preserving formal style in neural machine translation (NMT) is essential, yet it is often overlooked as an optimization objective of the training process. This oversight can lead to translations that, though accurate, lack formality. In this paper, we propose a method to improve NMT formality with large language models (LLMs), which combines the style-transfer and evaluation capabilities of an LLM with the high-quality translation generation ability of NMT models. The proposed method (namely INMTF) encompasses two approaches. The first is a revision approach that uses an LLM to revise the NMT-generated translation, ensuring a formal translation style. The second employs an LLM as a reward model for scoring translation formality, and then uses reinforcement learning algorithms to fine-tune the NMT model to maximize the reward score, thereby enhancing the formality of the generated translations. Considering the substantial parameter size of LLMs, we also explore methods to reduce the computational cost of INMTF. Experimental results demonstrate that INMTF significantly outperforms baselines in terms of translation formality and translation quality, with an improvement of +9.19 style-accuracy points in the German-to-English task and +2.16 COMET score in the Russian-to-English task. Furthermore, our work demonstrates the potential of integrating LLMs within NMT frameworks to bridge the gap between NMT outputs and the formality required in various real-world translation scenarios.
Keywords: neural machine translation; formality; large language model; text style transfer; style evaluation; reinforcement learning
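The second INMTF approach (an LLM as a formality reward model) can be sketched as below. The prompt wording, the 0-10 scale, and the `llm` callable are all illustrative assumptions, not INMTF's actual interface.

```python
def formality_reward(llm, src, hyp):
    """Hedged sketch: an LLM scores translation formality, and the
    normalized score serves as an RL reward. `llm` is any assumed
    text-in/text-out callable wrapping a chat model."""
    prompt = (
        "Rate the formality of this translation from 0 (casual) to 10 "
        f"(formal).\nSource: {src}\nTranslation: {hyp}\n"
        "Reply with the number only."
    )
    try:
        return float(llm(prompt).strip()) / 10.0   # normalize to [0, 1]
    except ValueError:
        return 0.0                                  # unparsable reply

# Stand-in LLM for demonstration; a real run would call an actual model.
fake_llm = lambda prompt: "8"
r = formality_reward(fake_llm, "Wie geht's?", "How do you do?")
print(r)   # 0.8 -- this reward would then weight policy-gradient updates
```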
13. PNMT: Zero-Resource Machine Translation with Pivot-Based Feature Converter
Authors: Lingfang Li, Weijian Hu, Mingxing Luo. Computers, Materials & Continua, 2025, Issue 9, pp. 5915-5935 (21 pages).
Abstract: Neural machine translation (NMT) has been widely applied to high-resource language pairs, but its dependence on large-scale data results in poor performance in low-resource scenarios. In this paper, we propose a transfer-learning-based approach called shared space transfer for zero-resource NMT. Our method leverages a pivot pre-trained language model (PLM) to create a shared representation space, which is used in both the auxiliary source→pivot (Ms2p) and pivot→target (Mp2t) translation models. Specifically, we exploit the pivot PLM to initialize the Ms2p decoder and the Mp2t encoder, while adopting a freezing strategy during the training process. We further propose a feature converter that mitigates representation-space deviations by converting the features from the source encoder into the shared representation space. The converter is trained using a synthetic parallel corpus. The final source→target model (Ms2t) combines the Ms2p encoder, the feature converter, and the Mp2t decoder. We conduct simulation experiments using English as the pivot language for German→French, German→Czech, and Turkish→Hindi translations. We finally test our method on a real zero-resource language pair, Mongolian→Vietnamese, with Chinese as the pivot language. Experiment results show that our method achieves high translation quality, with better Translation Error Rate (TER) and BLEU scores compared with other pivot-based methods. The step-wise pre-training with our feature converter outperforms baseline models in terms of COMET scores.
Keywords: zero-resource machine translation; pivot pre-trained language model; transfer learning; neural machine translation
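The feature converter above maps source-encoder states into the shared pivot space so the frozen pivot→target decoder can consume them. A minimal sketch follows, assuming a two-layer MLP and a feature-matching objective on the synthetic parallel corpus; both choices are our assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class FeatureConverter(nn.Module):
    """Sketch: map source-encoder states into the shared
    (pivot-PLM) representation space."""
    def __init__(self, d_src=512, d_shared=768):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_src, d_shared), nn.GELU(),
            nn.Linear(d_shared, d_shared),
        )

    def forward(self, src_states):
        return self.net(src_states)

# One plausible training signal: converted source features should match
# the shared-space features of the corresponding pivot sentence from
# the synthetic parallel corpus.
conv = FeatureConverter()
src_feats = torch.randn(1, 12, 512)     # from the source encoder
pivot_feats = torch.randn(1, 12, 768)   # from the frozen pivot PLM
loss = nn.functional.mse_loss(conv(src_feats), pivot_feats)
print(loss.item())
```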
14. Progress in Machine Translation (cited 4 times)
Authors: Haifeng Wang, Hua Wu, Zhongjun He, Liang Huang, Kenneth Ward Church. Engineering (SCIE, EI, CAS), 2022, Issue 11, pp. 143-153 (11 pages).
Abstract: After more than 70 years of evolution, great achievements have been made in machine translation. Especially in recent years, translation quality has been greatly improved with the emergence of neural machine translation (NMT). In this article, we first review the history of machine translation, from rule-based machine translation through example-based machine translation to statistical machine translation. We then introduce NMT in more detail, including the basic framework and the current dominant framework, the Transformer, as well as multilingual translation models that deal with the data-sparseness problem. In addition, we introduce cutting-edge simultaneous translation methods that achieve a balance between translation quality and latency. We then describe various products and applications of machine translation. At the end of this article, we briefly discuss challenges and future research directions in this field.
Keywords: machine translation; neural machine translation; simultaneous translation
15. Research on system combination of machine translation based on Transformer
Authors: LIU Wenbin, HE Yanqing, LAN Tian, WU Zhenfeng. High Technology Letters (EI, CAS), 2023, Issue 3, pp. 310-317 (8 pages).
Abstract: Influenced by its training corpus, the performance of different machine translation systems varies greatly. Aiming at achieving higher-quality translations, system combination methods combine the translation results of multiple systems through statistical combination or neural network combination. This paper proposes a new multi-system translation combination method based on the Transformer architecture, which uses a multi-encoder to encode the source sentences and the translation results of each system in order to realize encoder combination and decoder combination. Experimental verification on a Chinese-English translation task shows that this method gains 1.2-2.35 bilingual evaluation understudy (BLEU) points over the best single-system results, 0.71-3.12 BLEU points over the statistical combination method, and 0.14-0.62 BLEU points over the state-of-the-art neural network combination method. The experimental results demonstrate the effectiveness of the proposed Transformer-based system combination method.
Keywords: Transformer; system combination; neural machine translation (NMT); attention mechanism; multi-encoder
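The multi-encoder combination can be pictured as one encoder per input (the source plus each system's hypothesis) whose cross-attention contexts are merged for the decoder. Averaging the contexts, used below, is one simple merge assumed for illustration rather than the paper's exact scheme.

```python
import torch
import torch.nn as nn

class MultiEncoderCombiner(nn.Module):
    """Sketch: separate encoders for the source sentence and each
    system's translation; the decoder attends to all memories."""
    def __init__(self, n_systems, d_model=512, n_heads=8):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_systems + 1)          # +1 for the source
        )
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, inputs, dec_states):
        # inputs: list of (batch, len_i, d) tensors; source first.
        memories = [enc(x) for enc, x in zip(self.encoders, inputs)]
        contexts = [self.cross_attn(dec_states, m, m)[0] for m in memories]
        return torch.stack(contexts).mean(dim=0)   # encoder combination

comb = MultiEncoderCombiner(n_systems=2)
src, hyp1, hyp2 = (torch.randn(1, n, 512) for n in (10, 9, 11))
dec = torch.randn(1, 7, 512)
print(comb([src, hyp1, hyp2], dec).shape)   # torch.Size([1, 7, 512])
```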
16. Improving Low-Resource Machine Translation Using Reinforcement Learning from Human Feedback
Authors: Liqing Wang, Yiheng Xiao. Intelligent Automation & Soft Computing, 2024, Issue 4, pp. 619-631 (13 pages).
Abstract: Neural Machine Translation is one of the key research directions in Natural Language Processing. However, limited by the scale and quality of parallel corpora, the translation quality of low-resource Neural Machine Translation has always been unsatisfactory. When Reinforcement Learning from Human Feedback (RLHF) is applied to low-resource machine translation, it commonly encounters the issues of substandard preference-data quality and the high cost associated with manual feedback data. Therefore, a more cost-effective method for obtaining feedback data is proposed: first, the quality of the preference data is optimized through prompt engineering of a Large Language Model (LLM), and then human feedback is combined to complete the evaluation. In this way, the reward model can acquire more semantic information and human preferences during the training phase, thereby improving feedback efficiency and the quality of the results. Experimental results demonstrate that, compared with the traditional RLHF method, our method has proven effective on multiple datasets and exhibits a notable improvement of 1.07 in BLEU. Meanwhile, it is also more favorably received in assessments conducted by human evaluators and GPT-4o.
Keywords: low-resource neural machine translation; RLHF; prompt engineering; LLM
17. Applications and Challenges of ChatGPT in Translation: A Multidimensional Exploration of Theory and Practice
Author: ZHAO Shujie. Sino-US English Teaching, 2025, Issue 2, pp. 35-41 (7 pages).
Abstract: This study explores the potential and limitations of ChatGPT in translation, focusing on its application in Neural Machine Translation (NMT). By combining theoretical analysis with empirical research, the study evaluates ChatGPT's strengths and weaknesses. It reveals ChatGPT's superior performance in handling technical documents, with high translation quality and efficiency. However, its limitations become evident in addressing cultural nuances and emotional expressions, where semantic deviation or cultural loss often occurs. Moreover, ChatGPT struggles with creative translation, failing to convey the artistic style and emotional depth of original texts such as literary works and advertisements. The study proposes optimized paths for human-machine collaboration, emphasizing the crucial role of human translators in cultural adaptation and quality assurance. It suggests incorporating multimodal data, dynamic feedback mechanisms, and pragmatic-reasoning techniques to enhance machine translation capabilities. The findings conclude that while ChatGPT serves as an efficient translation tool, complex tasks require human-machine synergy to achieve high-quality cross-cultural communication.
Keywords: ChatGPT; neural machine translation (NMT); cultural adaptation; translation ethics
18. A Survey of Multilingual Neural Machine Translation Based on Sparse Models
Authors: Shaolin Zhu, Dong Jian, Deyi Xiong. Tsinghua Science and Technology, 2025, Issue 6, pp. 2399-2418 (20 pages).
Abstract: Recent research has shown burgeoning interest in exploring sparse models for massively Multilingual Neural Machine Translation (MNMT). In this paper, we present a comprehensive survey of this emerging topic. Massively MNMT, when based on sparse models, offers significant improvements in parameter efficiency and reduces interference compared with its dense-model counterparts. Various methods have been proposed to leverage sparse models for enhancing translation quality. However, the lack of a thorough survey has hindered the identification and further investigation of the most promising approaches. To address this gap, we provide an exhaustive examination of the current research landscape in massively MNMT, with a special emphasis on sparse models. Initially, we categorize the various sparse-model-based approaches into distinct classifications. We then delve into each category in detail, elucidating their fundamental modeling principles, core issues, and the challenges they face. Wherever possible, we conduct comparative analyses to assess the strengths and weaknesses of different methodologies. Moreover, we explore potential future research avenues for MNMT based on sparse models. This survey serves as a valuable resource for both newcomers and established experts in the field of MNMT, particularly those interested in sparse-model applications.
Keywords: neural machine translation; sparse models; multilingual; dense model
19. When machines meet gavel: a case study of the English-Arabic machine translation of the Egyptian arguments before the International Court of Justice (2024)
Author: Aya Sayed Omran Elsayed. Language and Semiotic Studies, 2025, Issue 4, pp. 661-690 (30 pages).
Abstract: The legal field relies heavily on audio-visual content such as witness testimonies and trials, making accurate transcription and translation crucial, especially in cross-border cases. This study examines the performance of neural machine translation (NMT) in handling such material, using the DQF-MQM harmonized error typology to categorize errors by type, including terminology, accuracy, and fluency. Legal translation demands precision, as minor errors can affect legal outcomes. This analysis therefore focuses on English-to-Arabic translations of the Egyptian oral arguments before the International Court of Justice, sourced from DawnNews (Feb 21, 2024). It investigates whether errors stem from the ASR-generated transcript or from the Google NMT system. The findings aim to guide machine translation post-editors (MTPEs) in identifying the lexical and syntactic patterns that typically result in errors, ultimately supporting more accurate and legally sound translations.
Keywords: audio-visual translation (AVT); English to Arabic translation; legal discourse; International Court of Justice (ICJ); neural machine translation (NMT); the DQF-MQM harmonized error typology
20. Optimizing Non-Decomposable Evaluation Metrics for Neural Machine Translation
Authors: Shi-Qi Shen, Yang Liu, Mao-Song Sun. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2017, Issue 4, pp. 796-804 (9 pages).
Abstract: While optimizing model parameters with respect to evaluation metrics has recently proven to benefit end-to-end neural machine translation (NMT), the evaluation metrics used in training are restricted to being defined at the sentence level, to facilitate online learning algorithms. This is undesirable because the final evaluation metrics used in the testing phase are usually non-decomposable (i.e., they are defined at the corpus level and cannot be expressed as the sum of sentence-level metrics). To minimize the discrepancy between training and testing, we propose to extend the minimum risk training (MRT) algorithm to take non-decomposable corpus-level evaluation metrics into consideration while still keeping the advantages of online training. This can be done by calculating corpus-level evaluation metrics on a subset of the training data at each step of online training. Experiments on Chinese-English and English-French translation show that our approach improves the correlation between training and testing and significantly outperforms the MRT algorithm that uses decomposable evaluation metrics.
Keywords: neural machine translation; training criterion; non-decomposable evaluation metric
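The core extension here (scoring each sampled hypothesis with a corpus-level metric computed over a training subset, rather than a per-sentence metric) can be sketched as follows. The data layout and the toy corpus metric are illustrative assumptions.

```python
import math

def mrt_risk(batch, corpus_metric):
    """Hedged sketch of minimum risk training with a non-decomposable
    metric: each candidate is swapped into the subset and scored by a
    corpus-level metric. Layout: batch[i] = (candidates, reference),
    candidates = [(hypothesis, logprob)]."""
    base = [(cands[0][0], ref) for cands, ref in batch]  # top-1 defaults
    risk = 0.0
    for i, (cands, ref) in enumerate(batch):
        z = sum(math.exp(lp) for _, lp in cands)         # renormalizer
        for hyp, lp in cands:
            pairs = list(base)
            pairs[i] = (hyp, ref)                        # swap candidate in
            risk += (math.exp(lp) / z) * (1.0 - corpus_metric(pairs))
    return risk

# Toy corpus-level metric: average unigram overlap across the subset.
def overlap(pairs):
    scores = [len(set(h.split()) & set(r.split())) / max(len(r.split()), 1)
              for h, r in pairs]
    return sum(scores) / len(scores)

batch = [([("the cat sat", -0.1), ("a cat sat", -1.2)], "the cat sat")]
print(mrt_risk(batch, overlap))   # small risk: the top candidate is exact
```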