Neural machine translation (NMT) has been widely applied to high-resource language pairs, but its dependence on large-scale data results in poor performance in low-resource scenarios. In this paper, we propose a transfer-learning-based approach called shared space transfer for zero-resource NMT. Our method leverages a pivot pre-trained language model (PLM) to create a shared representation space, which is used in both the auxiliary source→pivot (Ms2p) and pivot→target (Mp2t) translation models. Specifically, we exploit the pivot PLM to initialize the Ms2p decoder and the Mp2t encoder, while adopting a freezing strategy during training. We further propose a feature converter that mitigates representation-space deviations by converting features from the source encoder into the shared representation space; the converter is trained on a synthetic parallel corpus. The final source→target (Ms2t) model combines the Ms2p encoder, the feature converter, and the Mp2t decoder. We conduct simulation experiments using English as the pivot language for German→French, German→Czech, and Turkish→Hindi translation. We finally test our method on a real zero-resource language pair, Mongolian→Vietnamese, with Chinese as the pivot language. Experimental results show that our method achieves high translation quality, with better Translation Error Rate (TER) and BLEU scores than other pivot-based methods, and the step-wise pre-training with our feature converter outperforms baseline models in terms of COMET scores.
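The feature converter described in this abstract maps features from the source encoder into the shared (pivot) representation space. As a rough sketch of that idea only — the paper's converter is a trained network component, while here a plain linear map fitted by least squares stands in for it, with all names and dimensions invented:

```python
import numpy as np

def fit_feature_converter(src_feats, pivot_feats):
    """Fit a linear converter W mapping source-encoder features into the
    shared (pivot) space by least squares: argmin_W ||src_feats @ W - pivot_feats||^2."""
    W, *_ = np.linalg.lstsq(src_feats, pivot_feats, rcond=None)
    return W

def convert(src_feats, W):
    """Project source features into the shared space."""
    return src_feats @ W

# Toy demo: the "true" mapping is known, and the converter recovers it.
rng = np.random.default_rng(0)
true_map = rng.standard_normal((8, 8))
src = rng.standard_normal((100, 8))   # stand-in for Ms2p encoder features
pivot = src @ true_map                # aligned features in the shared space
W = fit_feature_converter(src, pivot)
print(np.allclose(convert(src, W), pivot, atol=1e-6))  # → True
```

In the paper the converter is trained on a synthetic parallel corpus; the toy `pivot` features above play that role.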
Machine translation builds a bridge for cross-language communication by converting text between different languages. However, many challenges remain in achieving context-accurate translations, chiefly: accurately capturing subtle contextual information, effectively resolving the ambiguity of polysemous words, properly translating idiomatic expressions, accurately reflecting cultural differences, and correctly using domain-specific terminology. This article reviews existing platforms and the latest research results in machine translation, examines these key difficulties in depth, and explores the introduction of artificial-intelligence technology. The aim is to improve the overall performance of machine-translation systems, facilitate smoother communication and understanding among people from different cultural backgrounds, further eliminate language barriers, and promote the in-depth integration and development of global multiculturalism.
With the development of machine translation technology, automatic pre-editing has attracted increasing research attention for its important role in improving translation quality and efficiency. This study uses UAM Corpus Tool 3.0 to annotate and categorize 99 key publications from 1992 to 2024, tracing the research paths and technological evolution of automatic pre-translation editing. It finds that current approaches fall into four categories: controlled-language-based approaches, text-simplification approaches, interlingua-based approaches, and large-language-model-driven approaches. By critically examining their technical features and applicability in various contexts, this review aims to provide insights to guide the future optimization and expansion of pre-translation editing systems.
In machine translation, pre-editing is a crucial step that can reduce the cost of post-editing and improve translation quality. By sorting out and reviewing the relevant literature on pre-editing for machine translation, this paper summarizes previous research from three aspects: the theoretical framework, automated and semi-automated pre-translation editing, and evaluation of pre-translation editing effects. Possible directions for the development of pre-translation editing are also put forward.
Preserving formal style in neural machine translation (NMT) is essential, yet it is often overlooked as an optimization objective during training. This oversight can lead to translations that, though accurate, lack formality. In this paper, we propose a method to improve NMT formality with large language models (LLMs), combining the style-transfer and evaluation capabilities of an LLM with the high-quality generation ability of NMT models. The proposed method (INMTF) encompasses two approaches. The first uses an LLM to revise the NMT-generated translation, ensuring a formal translation style. The second employs an LLM as a reward model that scores translation formality, then uses reinforcement learning to fine-tune the NMT model to maximize the reward score, thereby enhancing the formality of the generated translations. Considering the substantial parameter size of LLMs, we also explore methods to reduce the computational cost of INMTF. Experimental results demonstrate that INMTF significantly outperforms baselines in both translation formality and translation quality, with an improvement of +9.19 style-accuracy points on the German-to-English task and +2.16 COMET points on the Russian-to-English task. Furthermore, our work demonstrates the potential of integrating LLMs within NMT frameworks to bridge the gap between NMT outputs and the formality required in real-world translation scenarios.
Machine translation of low-resource languages (LRLs) has long been hindered by limited corpora and linguistic complexity. This review summarizes key developments, from traditional methods to recent progress with large language models (LLMs), while highlighting ongoing challenges such as data bottlenecks, bias, fairness, and computational cost. Finally, it discusses future directions, including parameter-efficient fine-tuning, multimodal translation, and community-driven corpus construction, providing insights for advancing LRL translation research.
One use of machine translation (MT) is helping readers grasp the gist of a foreign text through a draft translation produced by MT engines. Rapid post-editing, which Jeffrey Allen defines as "strictly minimal editing on texts in order to remove blatant and significant errors without considering stylistic issues", can present the reader with a roughly comprehensible translation as quickly as possible. This article proposes a set of rapid post-editing guidelines for Biblical Chinese-English MT and applies them to editing the English MT version of Chapter One of Mark (马尔谷福音) in the Chinese Catholic Bible (天主教思高本圣经) as an example.
As an ancillary translation tool, Machine Translation has long received increasing attention and extensive study from researchers and scholars. Understanding the definition of Machine Translation and analysing its benefits and problems is important for translators who want to make good use of it, and helpful for developing and perfecting Machine Translation systems in the future.
Machine Translation has been increasingly welcomed and used in recent years with the widespread application of the Internet and the accelerating integration of the world economy. Knowing the history and development of Machine Translation from the 1930s to the 1970s can help researchers gain new insights by restudying old material.
After more than 70 years of evolution, great achievements have been made in machine translation. In recent years especially, translation quality has improved greatly with the emergence of neural machine translation (NMT). In this article, we first review the history of machine translation, from rule-based through example-based to statistical machine translation. We then introduce NMT in more detail, including the basic framework and the currently dominant framework, the Transformer, as well as multilingual translation models that deal with the data-sparseness problem. In addition, we introduce cutting-edge simultaneous translation methods that balance translation quality and latency. We then describe various products and applications of machine translation. Finally, we briefly discuss challenges and future research directions in this field.
Recently, dependency information has been used in different ways to improve neural machine translation (NMT). For example, dependency labels can be added to the hidden states of source words, or the contiguous information of a source word can be found from the dependency tree, learned independently, and added into the NMT model as a unit in various ways. However, these works are all limited to using dependency information to enrich the hidden states of source words. Since many works in Statistical Machine Translation (SMT) and NMT have proven the validity and potential of dependency information, we believe there are still many ways to apply it within the NMT structure. In this paper, we explore a new way to use dependency information to improve NMT. Building on the local attention mechanism, we present the Dependency-based Local Attention Approach (DLAA), a new attention mechanism that allows the NMT model to trace the dependency words related to the word currently being translated. Our work also indicates that dependency information can help supervise the attention mechanism. Experimental results on the WMT 17 Chinese-to-English shared-task training data show that our model is effective and performs distinctively well on long-sentence translation.
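DLAA restricts attention to source words related to the current word in the dependency tree. A minimal sketch of attention masked in that spirit (the mask construction and all names here are assumptions, not the paper's formulation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dependency_local_attention(scores, dep_positions):
    """For each decoding step t, allow attention only over the source
    positions dep_positions[t] (those related in the dependency tree);
    all other positions are masked out before the softmax."""
    mask = np.full_like(scores, -np.inf)
    for t, positions in enumerate(dep_positions):
        mask[t, positions] = 0.0
    return softmax(scores + mask)

scores = np.zeros((2, 4))  # uniform raw scores: 2 steps x 4 source words
weights = dependency_local_attention(scores, [[0, 2], [1, 2, 3]])
# step 0 attends only to positions 0 and 2; step 1 only to 1, 2, 3
print(weights.round(2))
```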
Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation that overcomes the weaknesses of conventional phrase-based translation systems. Although NMT-based systems have gained popularity in commercial translation applications, there is still plenty of room for improvement. As the most popular search algorithm in NMT, beam search is vital to the translation result; however, traditional beam search can produce duplicate or missing translations because of its target-sequence selection strategy. To alleviate this problem, this paper proposes improvements to neural machine translation based on a novel beam-search evaluation function, and uses reinforcement learning to train a translation evaluation system that selects better candidate words when generating translations. We conducted extensive experiments to evaluate our methods, using the CASIA corpus and the 1,000,000 bilingual sentence pairs of NiuTrans. The results prove that the proposed methods can effectively improve English-to-Chinese translation quality.
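The idea of plugging an evaluation function into beam search can be illustrated with a toy sketch (this is not the paper's system: the scoring table, the `eval_fn` hook, and all names are invented for illustration; the paper's evaluation function is a learned model):

```python
import math

def beam_search(step_scores, beam_size, eval_fn=None):
    """Minimal beam search over a fixed table of per-step log-probabilities.
    step_scores[t][w] is the log-probability of word w at step t (a toy
    stand-in for NMT decoder output).  eval_fn, if given, rescores candidate
    sequences before pruning -- the slot the paper's learned evaluation
    function would occupy."""
    beams = [([], 0.0)]
    for scores in step_scores:
        candidates = [(seq + [w], lp + s)
                      for seq, lp in beams for w, s in enumerate(scores)]
        key = (lambda c: eval_fn(c[0], c[1])) if eval_fn else (lambda c: c[1])
        beams = sorted(candidates, key=key, reverse=True)[:beam_size]
    return beams

# Two steps over a 3-word vocabulary.
table = [[math.log(0.5), math.log(0.3), math.log(0.2)],
         [math.log(0.1), math.log(0.6), math.log(0.3)]]
best_seq, best_lp = beam_search(table, beam_size=2)[0]
print(best_seq)  # → [0, 1]  (0.5 * 0.6 is the highest-probability path)
```

A repetition-penalizing `eval_fn` (e.g. subtracting a cost per duplicated word) is one simple way such a function could discourage the duplicate translations the abstract mentions.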
The translation quality of neural machine translation (NMT) systems depends largely on the quality of the large-scale bilingual parallel corpora available. Research shows that under limited-resource conditions the performance of NMT drops greatly, and a large amount of high-quality bilingual parallel data is needed to train a competitive translation model. However, not all languages have large-scale, high-quality bilingual corpus resources. In these cases, improving corpus quality becomes the main way to increase the accuracy of NMT results. This paper proposes a new method that improves data quality through data cleaning, data expansion, and other measures, expanding the data at the word and sentence levels and thus improving the richness of the bilingual data. A long short-term memory (LSTM) language model is used to ensure the fluency of the constructed sentences, and a variety of processing methods are applied to improve the quality of the bilingual data. Experiments on three standard test sets validate the proposed method, with the fairseq Transformer NMT system used for training. The results show that the proposed method improves translation results: its BLEU score is 2.34 points higher than that of the baseline, compared with state-of-the-art methods.
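The abstract reports its gain as a BLEU difference. For readers unfamiliar with the metric, a minimal single-reference, sentence-level BLEU (no smoothing, so a simplification of the corpus-level variant actually used in evaluations) looks like:

```python
import math
from collections import Counter

def sentence_bleu(candidate, reference, max_n=4):
    """Minimal BLEU: geometric mean of clipped n-gram precisions,
    multiplied by a brevity penalty (single reference, no smoothing)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    bp = 1.0 if len(candidate) > len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

hyp = "the cat sat on the mat".split()
ref = "the cat sat on the mat".split()
print(round(sentence_bleu(hyp, ref), 2))  # → 1.0
```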
In bilingual translation, attention-based Neural Machine Translation (NMT) models are used to achieve synchrony between input and output sequences and a notion of alignment, and they have obtained state-of-the-art performance for several language pairs. However, there has been little work exploring useful architectures for Urdu-to-English machine translation. We conducted extensive Urdu-to-English translation experiments using Long Short-Term Memory (LSTM), Bidirectional Recurrent Neural Networks (Bi-RNN), the Statistical Recurrent Unit (SRU), the Gated Recurrent Unit (GRU), Convolutional Neural Networks (CNN), and the Transformer. Experimental results show that Bi-RNN and LSTM with an attention mechanism, trained iteratively with a scalable data set, make precise predictions on unseen data; the trained models yielded competitive results, achieving 62.6% and 61% accuracy and 49.67 and 47.14 BLEU scores, respectively. From a qualitative perspective, the translations of the test sets were examined manually, and the trained models were observed to produce repetitive output rather frequently. The attention scores produced by Bi-RNN and LSTM showed clear alignment, while GRU showed incorrect word translations, poor alignment, and a lack of clear structure. We therefore refined the attention-based models by defining an additional attention-based dropout layer. Attention dropout fixes alignment errors and minimizes translation errors at the word level. After empirical demonstration and comparison with their counterparts, we found an improvement in the quality of the resulting translation system and a decrease in perplexity and over-translation score. The ability of the proposed model was also evaluated on Arabic-English and Persian-English datasets. We empirically concluded that adding an attention-based dropout layer helps improve GRU, SRU, and Transformer translation and is considerably more efficient in translation quality and speed.
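The attention-based dropout layer above applies dropout to the attention weights themselves. A single-query sketch under that reading (the paper's exact layer placement and scaling are assumptions; inverted-dropout scaling is used here as the standard convention):

```python
import numpy as np

def attention_with_dropout(q, K, V, drop_rate, rng):
    """Scaled dot-product attention with dropout on the attention weights,
    sketched for a single query vector q over keys K and values V."""
    scores = K @ q / np.sqrt(q.size)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    keep = (rng.random(weights.shape) >= drop_rate).astype(float)
    dropped = weights * keep / (1.0 - drop_rate)  # inverted-dropout scaling
    return dropped @ V, dropped

rng = np.random.default_rng(0)
q = rng.standard_normal(4)
K = rng.standard_normal((5, 4))
V = rng.standard_normal((5, 3))
out, w = attention_with_dropout(q, K, V, drop_rate=0.2, rng=rng)
print(out.shape)  # → (3,)
```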
Neural Machine Translation (NMT) is an important technology for translation applications, but there is still plenty of room for improvement. In the NMT process, traditional word vectors cannot distinguish the same word under different parts of speech (POS). To alleviate this problem, this paper proposes a new word-vector training method based on POS features, which efficiently improves translation quality by adding the POS feature to the word-vector training process. We conducted extensive experiments to evaluate our method, and the results show that it is beneficial to English-to-Chinese translation quality.
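The POS-feature idea can be realized very simply at the input level: merge each word with its POS tag so that homographs under different parts of speech become distinct vocabulary items before word-vector training. A sketch (the `word_POS` token format is an illustrative choice, not necessarily the paper's):

```python
def pos_tokens(tagged_sentence):
    """Merge each (word, POS) pair into one token, so that e.g. 'book' as
    a verb and 'book' as a noun receive distinct word vectors when an
    embedding model is trained on these tokens."""
    return [f"{word}_{pos}" for word, pos in tagged_sentence]

sent = [("I", "PRP"), ("book", "VB"), ("a", "DT"), ("book", "NN")]
print(pos_tokens(sent))
# → ['I_PRP', 'book_VB', 'a_DT', 'book_NN']
```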
This paper compares several approaches to machine translation (MT) design, drawing lessons from phrase-structure grammar, GPSG, HPSG, and corpus methods. Taking words as the core, it builds a set of word rules and develops an English-Chinese machine translation system based on them. The paper also discusses some technical problems in building an MT system and provides an estimation principle for applying rules, with which syntactic ambiguities in the MT system are better resolved.
How to select appropriate words in a translation is a significant problem in current machine-translation research, because it directly determines translation quality. This paper uses an unsupervised corpus-based statistical method to select target words. Based on co-occurrence probabilities, all ambiguous words in a sentence are disambiguated at the same time. Because a corpus of limited size cannot cover all word collocations, we use an effective smoothing method to increase the coverage of the corpus. To address this problem in our English-Chinese MT system, we applied the algorithm to disambiguate the senses of verbs, nouns, and adjectives in the target language, and the results show that the approach is very promising.
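Selecting a target word by co-occurrence probabilities can be sketched as follows (the product-of-probabilities scoring, the smoothing floor, and all data are invented for illustration — the paper's statistical model and smoothing method are not specified here):

```python
def choose_translation(candidates, context, cooc, floor=1e-6):
    """Pick the candidate target word whose product of co-occurrence
    probabilities with the context words is highest.  `cooc` maps
    (word, context_word) pairs to probabilities; `floor` is a crude
    smoothing stand-in for pairs the corpus never covers."""
    def score(w):
        p = 1.0
        for c in context:
            p *= cooc.get((w, c), floor)
        return p
    return max(candidates, key=score)

# Choosing between two translations of an ambiguous source word,
# given an already-translated context word (all numbers are made up).
cooc = {("bank", "river"): 0.02, ("bank", "money"): 0.30,
        ("shore", "river"): 0.25, ("shore", "money"): 0.001}
print(choose_translation(["bank", "shore"], ["river"], cooc))  # → shore
```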
In this paper, we propose to enhance machine translation system combination (MTSC) with a sentence-level paraphrasing model trained by a neural network. This work extends the number of candidates in MTSC by paraphrasing the whole original MT output sentences. First we train a neural encoder-decoder paraphrasing model and use it to paraphrase the MT system outputs, generating synonymous candidates in the semantic space. Then we merge all of them into a single improved translation with a state-of-the-art system combination approach (MEMT), adding some new paraphrasing features. Our experimental results show a significant improvement of 0.28 BLEU points on the WMT2011 test data, and 0.41 BLEU points when out-of-vocabulary (OOV) words are excluded for the sentence-level paraphrasing model.
This paper describes experiments with Korean-to-Vietnamese statistical machine translation (SMT). Korean is a morphologically complex language without clear optimal word boundaries, which is a major obstacle to translating into or from it. To solve this problem, we present a method for Korean morphological analysis using a pre-analyzed partial word-phrase dictionary (PWD). We also build a Korean-Vietnamese parallel corpus for training SMT models by collecting text from multilingual magazines, and apply the morphological analysis to the Korean sentences in the collected corpus as a preprocessing step. The experimental results demonstrate a remarkable improvement in Korean-to-Vietnamese translation quality in terms of bilingual evaluation understudy (BLEU) score.
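The PWD-based analysis can be illustrated with a greedy longest-match segmenter over a pre-analyzed dictionary (a simplification only; the dictionary entries, the analyses, and the single-character fallback here are invented, not the paper's actual resources):

```python
def segment(text, pwd):
    """Greedy longest-match segmentation against a pre-analyzed
    word-phrase dictionary (PWD).  Dictionary hits are replaced by their
    stored morphological analysis; unknown spans fall back to single
    characters."""
    out, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):       # try the longest span first
            if text[i:j] in pwd or j == i + 1:  # fall back to one character
                out.append(pwd.get(text[i:j], text[i:j]))
                i = j
                break
    return out

# Toy PWD with made-up analyses for "to school" + "went".
pwd = {"학교에": "학교/NNG+에/JKB", "갔다": "가/VV+았/EP+다/EF"}
print(segment("학교에갔다", pwd))
# → ['학교/NNG+에/JKB', '가/VV+았/EP+다/EF']
```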
This paper presents the representation of rules, the rule-control strategy, and the existing problems in the English-Chinese machine translation (MT) system BT863-I. It then puts forward a method for processing these rules based on a decision tree. With this method, problems such as rule conflicts and rule redundancy in BT863-I have been solved, and the efficiency of the MT system has been greatly improved. The method is also of general use in rule-based expert systems.
Funding: National Natural Science Foundation of China (Grants 62172341 and 12204386); Sichuan Natural Science Foundation (Grant 2024NSFSC1375); Youth Foundation of the Inner Mongolia Natural Science Foundation (Grant 2024QN06017); Basic Scientific Research Business Fee Project for Universities in Inner Mongolia (Grant 0406082215).
Funding: Chunhui Collaborative Research Project of the Ministry of Education of China (Grant 202200490); Humanities and Social Sciences Research Project of the Ministry of Education of China (Grant 23YJAZH139).
Funding: China Undergraduate Innovation Training Program (Grant 202410699184); Humanities and Social Sciences Research Project of the Ministry of Education of China (Grant 23YJAZH139).
Funding: funded in part by the National Natural Science Foundation of China (61871140, 61872100, 61572153, U1636215, 61572492, 61672020), the National Key Research and Development Plan (Grant 2018YFB0803504), and the Open Fund of the Beijing Key Laboratory of IOT Information Security Technology (J6V0011104).
Abstract: Recently, dependency information has been used in various ways to improve neural machine translation (NMT): for example, by adding dependency labels to the hidden states of source words, or by finding the contiguous information of a source word from the dependency tree, learning it independently, and adding it to the NMT model as a unit. However, these works are all limited to using dependency information to enrich the hidden states of source words. Since many works in statistical machine translation (SMT) and NMT have proven the validity and potential of dependency information, we believe there are still many other ways to apply it within the NMT architecture. In this paper, we explore a new way to use dependency information to improve NMT. Based on the local attention mechanism, we present the Dependency-based Local Attention Approach (DLAA), a new attention mechanism that allows the NMT model to trace the dependency words related to the word currently being translated. Our work also indicates that dependency information can help supervise the attention mechanism. Experiment results on the WMT 17 Chinese-to-English translation shared-task training data show that our model is effective and performs distinctively well on long-sentence translation.
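To make the idea concrete, here is a toy sketch of dependency-restricted attention: given raw attention scores and a dependency tree encoded as head indices, the softmax is restricted to the dependency neighborhood (head and children) of a chosen source word. This is our illustrative simplification of the DLAA idea, not the authors' implementation, and the function names are invented for the example:

```python
import math

def dependency_neighbors(heads, i):
    """Indices dependency-adjacent to position i: itself, its head, its children.
    heads[j] is the head index of word j (-1 for the root)."""
    nbrs = {i}
    if heads[i] >= 0:
        nbrs.add(heads[i])
    nbrs.update(j for j, h in enumerate(heads) if h == i)
    return nbrs

def dependency_local_attention(scores, heads, center):
    """Softmax over raw attention scores, restricted to the dependency
    neighborhood of `center`; all other positions get zero weight."""
    keep = dependency_neighbors(heads, center)
    exps = [math.exp(s) if j in keep else 0.0 for j, s in enumerate(scores)]
    z = sum(exps)
    return [e / z for e in exps]
```

The effect is that attention mass is concentrated on source words that are syntactically related to the currently aligned word, which is the kind of supervision of attention the abstract describes.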
Funding: This work is supported by the National Natural Science Foundation of China (61872231, 61701297).
Abstract: Neural machine translation (NMT) is an end-to-end learning approach for automated translation that overcomes the weaknesses of conventional phrase-based translation systems. Although NMT-based systems have gained popularity in commercial translation applications, there is still plenty of room for improvement. As the most popular search algorithm in NMT, beam search is vital to the translation result, yet traditional beam search can produce duplicate or missing translations because of its target-sequence selection strategy. To alleviate this problem, this paper proposes NMT improvements based on a novel beam-search evaluation function, using reinforcement learning to train a translation evaluation system that selects better candidate words when generating translations. We conducted extensive experiments on the CASIA corpus and the 1,000,000 bilingual sentence pairs from NiuTrans. The results show that the proposed methods effectively improve English-to-Chinese translation quality.
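The idea of augmenting beam search with an extra evaluation function can be sketched with a toy, context-free vocabulary model. Here a hand-written repetition penalty stands in for the reinforcement-learned scorer described in the abstract; everything below is illustrative, not the paper's code:

```python
def beam_search(vocab_logprobs, steps, beam_size, eval_fn):
    """Toy beam search over a fixed per-step vocabulary distribution.
    vocab_logprobs: dict token -> log-probability (context-free toy model).
    eval_fn: extra score added to each candidate sequence, standing in
    for a learned evaluation function."""
    beams = [([], 0.0)]
    for _ in range(steps):
        cands = []
        for seq, score in beams:
            for tok, lp in vocab_logprobs.items():
                new = seq + [tok]
                cands.append((new, score + lp + eval_fn(new)))
        cands.sort(key=lambda x: x[1], reverse=True)
        beams = cands[:beam_size]            # keep only the best beam_size
    return beams[0][0]
```

With a plain log-probability objective the search happily repeats the most likely token; adding a repetition penalty through `eval_fn` steers it away from exactly the duplicate-output failure mode the abstract describes.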
Funding: This research was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 61672138.
Abstract: The translation quality of neural machine translation (NMT) systems depends largely on the quality of the large-scale bilingual parallel corpora available. Research shows that with limited resources the performance of NMT drops sharply, and a large amount of high-quality bilingual parallel data is needed to train a competitive translation model. However, not all languages have large-scale, high-quality bilingual corpus resources available. In these cases, improving the quality of the corpora becomes the main route to more accurate NMT results. This paper proposes a new method that improves data quality through data cleaning, data expansion, and other measures, expanding the data at the word and sentence levels and thus enriching the bilingual data. A long short-term memory (LSTM) language model is used to ensure the fluency of constructed sentences, and a variety of processing methods further improve the quality of the bilingual data. Experiments on three standard test sets validate the proposed method, with the state-of-the-art fairseq Transformer NMT system used for training. The results show that the proposed method improves translation quality: the BLEU score increases by 2.34 over the baseline.
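The role the language model plays in such a pipeline, scoring synthetic sentences for fluency before they enter the training data, can be sketched as follows. A smoothed bigram model stands in for the LSTM here, and all names and thresholds are illustrative:

```python
import math
from collections import Counter

def train_bigram_lm(corpus):
    """Count bigrams and unigram contexts over a list of sentences."""
    bigrams, contexts = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent.split() + ["</s>"]
        contexts.update(toks[:-1])
        bigrams.update(zip(toks, toks[1:]))
    return bigrams, contexts

def perplexity(lm, sent, alpha=1.0, vocab=1000):
    """Add-alpha-smoothed bigram perplexity; lower means more fluent."""
    bigrams, contexts = lm
    toks = ["<s>"] + sent.split() + ["</s>"]
    lp = 0.0
    for a, b in zip(toks, toks[1:]):
        lp += math.log((bigrams[(a, b)] + alpha) / (contexts[a] + alpha * vocab))
    return math.exp(-lp / (len(toks) - 1))

def filter_synthetic(lm, sents, threshold):
    """Keep only synthetic sentences the language model deems fluent."""
    return [s for s in sents if perplexity(lm, s) <= threshold]
```

Sentences assembled from expanded word- and sentence-level data can be run through such a filter so that only fluent constructions are added to the bilingual corpus.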
Funding: This work was supported by the Institute for Big Data Analytics and Artificial Intelligence (IBDAAI), Universiti Teknologi MARA, Shah Alam, Selangor, Malaysia.
Abstract: In bilingual translation, attention-based neural machine translation (NMT) models are used to achieve synchrony between input and output sequences and to capture the notion of alignment. NMT models have obtained state-of-the-art performance for several language pairs, but little work has explored useful architectures for Urdu-to-English machine translation. We conducted extensive Urdu-to-English translation experiments using long short-term memory (LSTM), bidirectional recurrent neural networks (Bi-RNN), the statistical recurrent unit (SRU), the gated recurrent unit (GRU), convolutional neural networks (CNN), and the Transformer. Experimental results show that Bi-RNN and LSTM with an attention mechanism, trained iteratively on a scalable data set, make precise predictions on unseen data. The trained models yielded competitive results, achieving 62.6% and 61% accuracy and BLEU scores of 49.67 and 47.14, respectively. From a qualitative perspective, the translations of the test sets were examined manually, and we observed that the trained models tend to produce repetitive output. The attention scores produced by Bi-RNN and LSTM showed clear alignment, while the GRU produced incorrect word translations, poor alignment, and no clear structure. We therefore refined the attention-based models by defining an additional attention-based dropout layer. Attention dropout fixes alignment errors and minimizes translation errors at the word level. After empirical demonstration and comparison with their counterparts, we found an improvement in the quality of the resulting translation system and a decrease in perplexity and in the over-translation score. The proposed model was also evaluated on Arabic-English and Persian-English datasets. We empirically conclude that adding an attention-based dropout layer improves GRU, SRU, and Transformer translation and is considerably more efficient in translation quality and speed.
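An attention-based dropout layer of the kind described can be understood as randomly zeroing attention weights and renormalizing the survivors, so the decoder cannot over-commit to a single source position. This minimal sketch is our reading of the idea, not the paper's implementation:

```python
import random

def attention_dropout(weights, p, rng=random):
    """Zero each attention weight with probability p, then renormalize.
    Discourages over-reliance on one alignment, which is a source of
    repetitive (over-translated) output."""
    kept = [0.0 if rng.random() < p else w for w in weights]
    z = sum(kept)
    if z == 0.0:          # every weight was dropped: keep the original distribution
        return list(weights)
    return [w / z for w in kept]
```

At training time the mask changes every step, forcing the model to spread alignment evidence across several source positions; at inference time the layer is simply disabled (p = 0).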
Funding: This work is supported by the National Natural Science Foundation of China (61872231, 61701297).
Abstract: Neural machine translation (NMT) is an important technology for translation applications, but there is still plenty of room for improvement. In NMT, traditional word vectors cannot distinguish between occurrences of the same word under different parts of speech (POS). To alleviate this problem, this paper proposes a new word-vector training method based on POS features: adding the POS feature to the word-vector training process efficiently improves translation quality. We conducted extensive experiments to evaluate our method, and the results show that it is beneficial to the quality of English-to-Chinese translation.
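One simple way to realize POS-aware word vectors, offered here as a plausible reading of the method rather than the authors' exact procedure, is to fuse the POS tag into each token before embedding training, so every word/POS combination receives its own vector. The function name and tag format are invented for the example:

```python
def pos_tagged_tokens(tagged_sentence):
    """Fuse each word with its POS tag so that, e.g., 'book' as a noun
    and 'book' as a verb become distinct vocabulary items for the
    embedding trainer.
    tagged_sentence: list of (word, pos) pairs."""
    return [f"{word}|{pos}" for word, pos in tagged_sentence]
```

Running a POS tagger over the training corpus and feeding these fused tokens to any standard embedding trainer yields separate vectors for the two senses, which is exactly the distinction the abstract says plain word vectors miss.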
Abstract: This paper compares several approaches to machine translation (MT) design, draws lessons from the ideas of phrase structure, GPSG, HPSG, and corpora, takes words as the core, builds a set of word rules, and develops an English-Chinese machine translation system on that basis. The paper also discusses some technical problems in building an MT system and provides an estimation principle for applying rules, with which syntactic ambiguities in the MT system are better resolved.
Abstract: How to select appropriate words in a translation is a significant problem in current machine translation research, because it directly determines translation quality. This paper uses an unsupervised, corpus-based statistical method to select target words. Based on co-occurrence probabilities, all ambiguous words in a sentence are disambiguated at the same time. Because a corpus of limited size cannot cover all word collocations, we use an effective smoothing method to increase the coverage of the corpus. To solve this problem in our English-Chinese MT system, we applied the algorithm to disambiguate the senses of verbs, nouns, and adjectives in the target language, and the results show that the approach is very promising.
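The selection criterion described, choosing among candidate target words by smoothed co-occurrence probability with surrounding words, might be sketched like this. All names, counts, and the smoothing scheme are illustrative:

```python
import math

def select_target(candidates, context, cooc, total, alpha=0.5):
    """Pick the candidate translation that best co-occurs with the
    surrounding target words. Add-alpha smoothing keeps unseen pairs
    from zeroing out a candidate's score.
    cooc: dict (candidate, context_word) -> count; total: corpus size proxy."""
    def score(cand):
        return sum(math.log((cooc.get((cand, w), 0) + alpha) / (total + alpha))
                   for w in context)
    return max(candidates, key=score)
```

For an ambiguous source word with translations "bank" and "shore", a context containing "money" pushes the smoothed score toward "bank", while the smoothing term keeps a rare but valid collocation from being ruled out entirely.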
Funding: This work is supported by the Natural Science Foundation of China (Grant Nos. 61272384 and 61370170).
Abstract: In this paper, we propose to enhance machine translation system combination (MTSC) with a sentence-level paraphrasing model trained by a neural network. This work extends the number of candidates in MTSC by paraphrasing the whole original MT output sentences. We first train a neural encoder-decoder paraphrasing model and use it to paraphrase the MT system outputs, generating synonymous candidates in the semantic space. We then merge all of them into a single improved translation with a state-of-the-art system combination approach (MEMT), adding new paraphrasing features. Our experimental results show a significant improvement of 0.28 BLEU points on the WMT2011 test data, and 0.41 BLEU points when out-of-vocabulary (OOV) words are excluded for the sentence-level paraphrasing model.
基金supported by the Institute for Information&communications Technology Promotion under Grant No.R0101-16-0176the Project of Core Technology Development for Human-Like Self-Taught Learning Based on Symbolic Approach
Abstract: This paper describes experiments with Korean-to-Vietnamese statistical machine translation (SMT). Korean is a morphologically complex language without clear optimal word boundaries, which poses a major problem when translating into or from it. To solve this problem, we present a method for Korean morphological analysis that uses a pre-analyzed partial word-phrase dictionary (PWD). In addition, we build a Korean-Vietnamese parallel corpus for training SMT models by collecting text from multilingual magazines, and apply the morphological analysis to the Korean sentences in the collected corpus as a preprocessing step. The experimental results demonstrate a remarkable improvement in Korean-to-Vietnamese translation quality in terms of the bilingual evaluation understudy (BLEU) metric.
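Dictionary-driven morphological analysis of the kind described can be sketched as greedy longest-match segmentation against the pre-analyzed dictionary, with unmatched spans passed through untouched. The dictionary entries below are invented placeholders, not real PWD content:

```python
def segment(word, pwd):
    """Greedy longest-match segmentation against a pre-analyzed
    partial word-phrase dictionary (PWD). pwd maps a surface form to
    its list of morphemes; unmatched spans are kept whole."""
    if word in pwd:
        return pwd[word]
    # try progressively shorter prefixes, then recurse on the remainder
    for i in range(len(word) - 1, 0, -1):
        head = word[:i]
        if head in pwd:
            return pwd[head] + segment(word[i:], pwd)
    return [word]
```

Applied to every Korean sentence in the collected corpus before training, such segmentation gives the SMT aligner consistent sub-word units in place of unsegmented surface forms.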
Abstract: This paper describes the representation of rules, the rule-control strategy, and the existing problems in the English-Chinese machine translation (MT) system BT863-I. It then puts forward a method for processing these rules based on a decision tree. With this method, problems such as rule conflicts and rule redundancy occurring in BT863-I have been solved, and the efficiency of the MT system has been greatly improved. The method is also generally applicable to rule-based expert systems.