Journal Articles
73 articles found
1. PNMT: Zero-Resource Machine Translation with Pivot-Based Feature Converter
Authors: Lingfang Li, Weijian Hu, Mingxing Luo. Computers, Materials & Continua, 2025, Issue 9, pp. 5915-5935.
Neural machine translation (NMT) has been widely applied to high-resource language pairs, but its dependence on large-scale data results in poor performance in low-resource scenarios. In this paper, we propose a transfer-learning-based approach called shared space transfer for zero-resource NMT. Our method leverages a pivot pre-trained language model (PLM) to create a shared representation space, which is used in both the auxiliary source→pivot (Ms2p) and pivot→target (Mp2t) translation models. Specifically, we exploit the pivot PLM to initialize the Ms2p decoder and the Mp2t encoder, while adopting a freezing strategy during the training process. We further propose a feature converter to mitigate representation space deviations by converting the features from the source encoder into the shared representation space. The converter is trained using a synthetic parallel corpus. The final source→target (Ms2t) model combines the Ms2p encoder, the feature converter, and the Mp2t decoder. We conduct simulation experiments using English as the pivot language for German→French, German→Czech, and Turkish→Hindi translations. We finally test our method on a real zero-resource language pair, Mongolian→Vietnamese, with Chinese as the pivot language. Experiment results show that our method achieves high translation quality, with better Translation Error Rate (TER) and BLEU scores compared with other pivot-based methods. The step-wise pre-training with our feature converter outperforms baseline models in terms of COMET scores.
Keywords: zero-resource machine translation; pivot pre-trained language model; transfer learning; neural machine translation
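The final model this abstract describes is essentially a composition of three parts: the Ms2p encoder, a feature converter into the shared space, and a frozen Mp2t decoder. A minimal sketch of that composition, with every component an invented toy stand-in (the names encode_src, convert, decode_tgt and their arithmetic are not from the paper):

```python
# Toy sketch of assembling a zero-resource source->target (Ms2t) model from
# pieces trained via a pivot language. Each callable is a hypothetical stand-in.

def encode_src(tokens):
    # Source encoder taken from the auxiliary source->pivot (Ms2p) model.
    return [sum(map(ord, t)) % 97 for t in tokens]   # toy "features"

def convert(features):
    # Feature converter: maps source-encoder features into the shared
    # representation space defined by the pivot PLM.
    return [(f + 1) % 97 for f in features]

def decode_tgt(features):
    # Decoder taken from the pivot->target (Mp2t) model, kept frozen.
    return ["tok%d" % f for f in features]

def translate(tokens):
    # Final Ms2t model: Ms2p encoder -> converter -> Mp2t decoder,
    # trained with no direct source-target parallel data.
    return decode_tgt(convert(encode_src(tokens)))

print(translate(["guten", "tag"]))
```

The point of the sketch is the wiring: only the converter needs training on synthetic data, while the encoder and decoder are reused as-is.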
2. Improving Machine Translation Formality with Large Language Models
Authors: Murun Yang, Fuxue Li. Computers, Materials & Continua, 2025, Issue 2, pp. 2061-2075.
Preserving formal style in neural machine translation (NMT) is essential, yet often overlooked as an optimization objective of the training process. This oversight can lead to translations that, though accurate, lack formality. In this paper, we propose a method to improve NMT formality with large language models (LLMs), which combines the style transfer and evaluation capabilities of an LLM with the high-quality translation generation ability of NMT models. The proposed method (namely INMTF) encompasses two approaches. The first involves a revision approach using an LLM to revise the NMT-generated translation, ensuring a formal translation style. The second employs an LLM as a reward model for scoring translation formality, and then uses reinforcement learning algorithms to fine-tune the NMT model to maximize the reward score, thereby enhancing the formality of the generated translations. Considering the substantial parameter size of LLMs, we also explore methods to reduce the computational cost of INMTF. Experimental results demonstrate that INMTF significantly outperforms baselines in terms of translation formality and translation quality, with an improvement of +9.19 style accuracy points in the German-to-English task and +2.16 COMET score in the Russian-to-English task. Furthermore, our work demonstrates the potential of integrating LLMs within NMT frameworks to bridge the gap between NMT outputs and the formality required in various real-world translation scenarios.
Keywords: neural machine translation; formality; large language model; text style transfer; style evaluation; reinforcement learning
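The two INMTF approaches, LLM-as-reviser and LLM-as-reward-model, can be sketched as follows. Every function here (nmt_translate, llm_revise, llm_formality_score) is a hypothetical stand-in for a real model, and the reinforcement-learning step is reduced to computing a reward:

```python
# Illustrative sketch of the two approaches the abstract describes; all
# "models" are invented rule-based stand-ins, not the paper's systems.

def nmt_translate(src):
    # Stand-in NMT system: returns an informal draft translation.
    return "hey, what's up"

def llm_revise(draft):
    # Approach 1: an LLM rewrites the draft into a formal register.
    return draft.replace("hey, what's up", "Hello, how are you?")

def llm_formality_score(text):
    # Approach 2: an LLM acts as a reward model scoring formality in [0, 1].
    informal_markers = ("hey", "what's up", "gonna")
    return 0.0 if any(m in text.lower() for m in informal_markers) else 1.0

draft = nmt_translate("Hallo, wie geht es dir?")
formal = llm_revise(draft)
reward = llm_formality_score(formal)  # in RL fine-tuning, this score would be maximized
```

In the revision approach the LLM output is the final translation; in the reward approach only the scalar score feeds back into NMT training, so the LLM is not needed at inference time.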
3. A Review of Automatic Pre-editing Approaches in Machine Translation
Authors: WANG Jun-song, MENG Ya-qi, WANG Ai-qing. Journal of Literature and Art Studies, 2025, Issue 6, pp. 483-489.
With the development of machine translation technology, automatic pre-editing has attracted increasing research attention for its important role in improving translation quality and efficiency. This study uses UAM Corpus Tool 3.0 to annotate and categorize 99 key publications between 1992 and 2024, tracing the research paths and technological evolution of automatic pre-translation editing. The study finds that current approaches can be classified into four categories: controlled language-based approaches, text simplification approaches, interlingua-based approaches, and large language model-driven approaches. By critically examining their technical features and applicability in various contexts, this review aims to provide valuable insights to guide the future optimization and expansion of pre-translation editing systems.
Keywords: automatic pre-editing; machine translation; controlled language; text simplification; large language models
4. A Review of Research on Pre-editing of Machine Translation
Authors: ZHANG Xue-yao. Journal of Literature and Art Studies, 2025, Issue 1, pp. 36-43.
In the process of machine translation, pre-editing is a crucial step that can help reduce the cost of post-translation editing and improve the quality of machine translation. By sorting through and reviewing the relevant literature on pre-editing of machine translation, this paper summarizes previous research from three aspects: the theoretical framework, automated and semi-automated pre-translation editing, and evaluation of pre-translation editing effects. Possible development directions for pre-translation editing are also put forward.
Keywords: machine translation (MT); pre-editing; controlled language; automatic pre-translation editing
5. Research on the Development Status and Difficulties of Machine Translation in the Era of Artificial Intelligence
Authors: Ruijing Xu, Qi Bai. Journal of Electronic Research and Application, 2025, Issue 3, pp. 39-43.
Machine translation builds a bridge for cross-language communication by realizing text conversion between different languages. However, there are still many challenges in achieving context-accurate translations. These mainly include how to accurately capture subtle information in context, effectively resolve the ambiguity of polysemous words, properly translate idiomatic expressions, accurately reflect cultural differences, and correctly use terms in specific fields. This article reviews the existing platforms and the latest research results in the field of machine translation, explores the above-mentioned key difficulties in depth, and examines the introduction of artificial intelligence technology. The aim is to improve the overall performance of machine translation systems, facilitate smoother communication and understanding among people from different cultural backgrounds, further eliminate language barriers, and promote the in-depth integration and development of global multiculturalism.
Keywords: machine translation; artificial intelligence; context analysis
6. A Review of Machine Translation Techniques for Low-Resource Languages
Authors: PENG Cheng-xi, MA Zi-han. Journal of Literature and Art Studies, 2025, Issue 9, pp. 725-731.
Machine translation of low-resource languages (LRLs) has long been hindered by limited corpora and linguistic complexity. This review summarizes key developments, from traditional methods to recent progress with large language models (LLMs), while highlighting ongoing challenges such as data bottlenecks, bias, fairness, and computational costs. Finally, it discusses future directions, including efficient parameter fine-tuning, multimodal translation, and community-driven corpus construction, providing insights for advancing LRL translation research.
Keywords: low-resource languages (LRLs); machine translation; large language models (LLMs)
7. LKMT: Linguistics Knowledge-Driven Multi-Task Neural Machine Translation for Urdu and English
Authors: Muhammad Naeem Ul Hassan, Zhengtao Yu, Jian Wang, Ying Li, Shengxiang Gao, Shuwan Yang, Cunli Mao. Computers, Materials & Continua (SCIE, EI), 2024, Issue 10, pp. 951-969.
Thanks to the strong representation capability of pre-trained language models, supervised machine translation models have achieved outstanding performance. However, the performance of these models drops sharply when the scale of the parallel training corpus is limited. Considering that pre-trained language models have a strong ability for monolingual representation, the key challenge for machine translation is to construct an in-depth relationship between the source and target languages by injecting lexical and syntactic information into pre-trained language models. To alleviate the dependence on the parallel corpus, we propose a Linguistics Knowledge-Driven Multi-Task (LKMT) approach to inject part-of-speech and syntactic knowledge into pre-trained models, thus enhancing machine translation performance. On the one hand, we integrate part-of-speech and dependency labels into the embedding layer and exploit a large-scale monolingual corpus to update all parameters of the pre-trained language model, ensuring that the updated language model contains potential lexical and syntactic information. On the other hand, we leverage an extra self-attention layer to explicitly inject linguistic knowledge into the pre-trained-language-model-enhanced machine translation model. Experiments on the benchmark dataset show that our proposed LKMT approach improves Urdu-English translation accuracy by 1.97 points and English-Urdu translation accuracy by 2.42 points, highlighting the effectiveness of our LKMT framework. Detailed ablation experiments confirm the positive impact of part-of-speech and dependency parsing on machine translation.
Keywords: Urdu NMT (neural machine translation); Urdu natural language processing; Urdu linguistic features; low-resource languages; pre-trained model
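The embedding-layer idea in this abstract, combining word, part-of-speech, and dependency-label information per token, can be reduced to summing three lookup tables. The vocabularies, the 4-dimensional random vectors, and the example sentence below are all invented for illustration; the paper works with a full pre-trained language model:

```python
# Toy version of injecting POS and dependency labels into the embedding layer:
# each token's input representation is the element-wise sum of a word
# embedding, a POS embedding and a dependency-label embedding.
import random

random.seed(0)
DIM = 4

def make_table(symbols):
    return {s: [random.uniform(-1, 1) for _ in range(DIM)] for s in symbols}

word_emb = make_table(["the", "cat", "sleeps"])
pos_emb = make_table(["DET", "NOUN", "VERB"])
dep_emb = make_table(["det", "nsubj", "root"])

def embed(word, pos, dep):
    # Sum the three embeddings for one token.
    return [w + p + d for w, p, d in zip(word_emb[word], pos_emb[pos], dep_emb[dep])]

sentence = [("the", "DET", "det"), ("cat", "NOUN", "nsubj"), ("sleeps", "VERB", "root")]
encoded = [embed(*tok) for tok in sentence]
```

Because the linguistic signal enters at the embedding layer, the rest of the model architecture is unchanged, which is what makes the approach compatible with an existing pre-trained model.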
8. Improving Low-Resource Machine Translation Using Reinforcement Learning from Human Feedback
Authors: Liqing Wang, Yiheng Xiao. Intelligent Automation & Soft Computing, 2024, Issue 4, pp. 619-631.
Neural machine translation is one of the key research directions in natural language processing. However, limited by the scale and quality of parallel corpora, the translation quality of low-resource neural machine translation has always been unsatisfactory. When Reinforcement Learning from Human Feedback (RLHF) is applied to low-resource machine translation, commonly encountered issues include substandard preference data quality and the high cost of manual feedback data. Therefore, a more cost-effective method for obtaining feedback data is proposed: first, the quality of preference data is optimized through prompt engineering of a Large Language Model (LLM), and then human feedback is combined to complete the evaluation. In this way, the reward model can acquire more semantic information and human preferences during the training phase, thereby enhancing feedback efficiency and the quality of the results. Experimental results demonstrate that, compared with the traditional RLHF method, our method has proven effective on multiple datasets and exhibits a notable improvement of 1.07 in BLEU. Meanwhile, it is also more favorably received in assessments conducted by human evaluators and GPT-4o.
Keywords: low-resource neural machine translation; RLHF; prompt engineering; LLM
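The reward-model stage of RLHF that this abstract builds on can be sketched with a Bradley-Terry objective: given (chosen, rejected) pairs, the model learns to score the chosen candidate higher. The single-weight model, the feature values, and the learning rate below are toy assumptions, not the paper's setup:

```python
# Toy reward-model fit on preference pairs via gradient ascent on the
# Bradley-Terry log-likelihood. Each pair is (chosen_feature, rejected_feature),
# e.g. produced by LLM ranking followed by human verification.
import math

pairs = [(0.9, 0.2), (0.8, 0.4), (0.7, 0.1)]

w = 0.0   # the reward model's only parameter (a stand-in for a full network)
lr = 0.5
for _ in range(200):
    grad = 0.0
    for good, bad in pairs:
        # P(chosen preferred) = sigmoid(w*good - w*bad);
        # accumulate the gradient of the log-likelihood.
        p = 1.0 / (1.0 + math.exp(-(w * good - w * bad)))
        grad += (1.0 - p) * (good - bad)
    w += lr * grad

def reward(feature):
    return w * feature
```

In full RLHF the fitted reward then drives policy-gradient fine-tuning of the NMT model; here it is just a scoring function.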
9. Artificial Intelligence Regulation and Machine Translation Ethics
Authors: Xiaojun ZHANG. 译苑新谭, 2024, Issue 2, pp. 1-14.
The technological breakthroughs in generative artificial intelligence, represented by ChatGPT, have brought about significant social changes as well as new problems and challenges. Generative artificial intelligence has inherent flaws such as language imbalance, algorithmic black boxes, and algorithmic bias, and at the same time it faces external risks such as algorithmic comfort zones, data pollution, algorithmic infringement, and inaccurate output. These problems make legislation for the governance of generative artificial intelligence difficult. Taking the data contamination incident in Google Translate as an example, this article proposes that in constructing machine translation ethics, the responsibility mechanism of generative artificial intelligence should be built around three elements: data processing, algorithmic optimisation, and ethical alignment.
Keywords: artificial intelligence regulation; machine translation ethics; data processing; algorithmic optimisation; ethical alignment
10. Improvements of Google Neural Machine Translation
Authors: 李瑞, 蒋美佳. 海外英语 (Overseas English), 2017, Issue 15, pp. 132-134.
Machine translation has been playing an important role in modern society due to its effectiveness and efficiency, but the great demand for corpora makes it difficult for users to use traditional machine translation systems. To solve this problem and improve translation quality, in November 2016 Google introduced the Google Neural Machine Translation (GNMT) system, which implements the latest techniques to achieve better outcomes. The conspicuous achievement has been proved by experiments using the BLEU score to measure the performance of different systems. With GNMT, the gap between human and machine translation is narrowing.
Keywords: machine translation; machine translation improvement; Google Neural Machine Translation; neural machine translation
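BLEU, the metric this abstract cites, is the geometric mean of clipped n-gram precisions times a brevity penalty. A simplified sentence-level version using only 1- and 2-grams (real evaluations use corpus-level BLEU up to 4-grams, usually with smoothing):

```python
# Simplified sentence-level BLEU: clipped 1- and 2-gram precisions,
# geometric mean, brevity penalty for short candidates.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu2(candidate, reference):
    precisions = []
    for n in (1, 2):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
        total = sum(cand.values())
        precisions.append(overlap / total if total else 0.0)
    if min(precisions) == 0.0:
        return 0.0
    # Brevity penalty: candidates shorter than the reference are penalized.
    bp = 1.0 if len(candidate) > len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 2)

ref = "the cat sat on the mat".split()
assert bleu2(ref, ref) == 1.0   # a perfect match scores 1
```

The clipping step is what prevents a degenerate candidate like "the the the" from getting full unigram credit: each n-gram is counted at most as often as it appears in the reference.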
11. Progress in Machine Translation (cited 2 times)
Authors: Haifeng Wang, Hua Wu, Zhongjun He, Liang Huang, Kenneth Ward Church. Engineering (SCIE, EI, CAS), 2022, Issue 11, pp. 143-153.
After more than 70 years of evolution, great achievements have been made in machine translation. Especially in recent years, translation quality has been greatly improved with the emergence of neural machine translation (NMT). In this article, we first review the history of machine translation from rule-based machine translation to example-based machine translation and statistical machine translation. We then introduce NMT in more detail, including the basic framework and the current dominant framework, Transformer, as well as multilingual translation models that deal with the data sparseness problem. In addition, we introduce cutting-edge simultaneous translation methods that achieve a balance between translation quality and latency. We then describe various products and applications of machine translation. At the end of this article, we briefly discuss challenges and future research directions in this field.
Keywords: machine translation; neural machine translation; simultaneous translation
12. Dependency-Based Local Attention Approach to Neural Machine Translation (cited 3 times)
Authors: Jing Qiu, Yan Liu, Yuhan Chai, Yaqi Si, Shen Su, Le Wang, Yue Wu. Computers, Materials & Continua (SCIE, EI), 2019, Issue 5, pp. 547-562.
Recently, dependency information has been used in different ways to improve neural machine translation. For example, dependency labels can be added to the hidden states of source words, or the contiguous information of a source word can be found according to the dependency tree, learned independently, and added into the Neural Machine Translation (NMT) model as a unit in various ways. However, these works are all limited to using dependency information to enrich the hidden states of source words. Since many works in Statistical Machine Translation (SMT) and NMT have proven the validity and potential of using dependency information, we believe that there are still many ways to apply dependency information in the NMT structure. In this paper, we explore a new way to use dependency information to improve NMT. Based on the theory of the local attention mechanism, we present the Dependency-based Local Attention Approach (DLAA), a new attention mechanism that allows the NMT model to trace the dependency words related to the words currently being translated. Our work also indicates that dependency information can help to supervise the attention mechanism. Experiment results on the WMT 17 Chinese-to-English translation task's shared training datasets show that our model is effective and performs distinctively well on long sentence translation.
Keywords: neural machine translation; attention mechanism; dependency parsing
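The core mechanism here, attention restricted to positions related in the dependency tree, can be illustrated with a masked softmax. The scores, the four-token tree, and the neighborhood definition (self, head, children) are invented for illustration and simpler than DLAA itself:

```python
# Toy dependency-based local attention: a query position only attends to
# itself, its dependency head, and its children; everything else is masked out.
import math

scores = [1.0, 2.0, 0.5, 1.5]   # raw attention scores for 4 source positions
heads = [1, 1, 3, 1]            # heads[i] = head of token i (token 1 is the root)

def dep_mask(i, n):
    # Positions token i may attend to: itself, its head, and its children.
    return {i, heads[i]} | {j for j in range(n) if heads[j] == i}

def local_attention(i):
    allowed = dep_mask(i, len(scores))
    exps = [math.exp(s) if j in allowed else 0.0 for j, s in enumerate(scores)]
    z = sum(exps)
    return [e / z for e in exps]

weights = local_attention(2)    # token 2 attends only within {2, 3}
```

Masking before the softmax means the remaining weights still sum to one, so the mechanism drops into a standard attention layer without other changes.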
13. Corpus Augmentation for Improving Neural Machine Translation (cited 2 times)
Authors: Zijian Li, Chengying Chi, Yunyun Zhan. Computers, Materials & Continua (SCIE, EI), 2020, Issue 7, pp. 637-650.
The translation quality of neural machine translation (NMT) systems depends largely on the quality of the large-scale bilingual parallel corpora available. Research shows that under conditions of limited resources, the performance of NMT is greatly reduced, and a large amount of high-quality bilingual parallel data is needed to train a competitive translation model. However, not all languages have large-scale and high-quality bilingual corpus resources available. In these cases, improving the quality of the corpora has become the main focus for increasing the accuracy of NMT results. This paper proposes a new method to improve data quality by using data cleaning, data expansion, and other measures to expand the data at the word and sentence level, thus improving the richness of the bilingual data. A long short-term memory (LSTM) language model is also used to ensure the smoothness of sentence construction. At the same time, a variety of processing methods are used to improve the quality of the bilingual data. Experiments using three standard test sets are conducted to validate the proposed method; the state-of-the-art fairseq-transformer NMT system is used in training. The results show that the proposed method works well in improving the translation results: compared with the baseline, the BLEU value of our method is increased by 2.34.
Keywords: neural machine translation; corpus augmentation; model improvement; deep learning; data cleaning
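Word-level augmentation with a fluency check, as the abstract outlines, can be sketched as substitution plus a language-model filter. A toy bigram model stands in for the paper's LSTM language model, and the tiny corpus, synonym table, and threshold are all invented:

```python
# Sketch of word-level corpus augmentation: create a candidate sentence by
# synonym substitution, keep it only if a (toy) language model rates it fluent.
from collections import Counter

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
]
bigrams = Counter(b for s in corpus for b in zip(s, s[1:]))

def fluency(sentence):
    # Fraction of the sentence's bigrams seen in the training corpus
    # (a crude stand-in for an LSTM LM's probability).
    pairs = list(zip(sentence, sentence[1:]))
    return sum(1 for b in pairs if b in bigrams) / len(pairs)

synonyms = {"cat": "dog", "mat": "rug"}

def augment(sentence, threshold=0.8):
    new = [synonyms.get(w, w) for w in sentence]
    return new if new != sentence and fluency(new) >= threshold else None

aug = augment("the cat sat on the mat".split())
```

The filter is the important part: substitution alone can produce disfluent sentences, and the language model rejects those before they pollute the training corpus.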
14. A Novel Beam Search to Improve Neural Machine Translation for English-Chinese (cited 2 times)
Authors: Xinyue Lin, Jin Liu, Jianming Zhang, Se-Jung Lim. Computers, Materials & Continua (SCIE, EI), 2020, Issue 10, pp. 387-404.
Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation that overcomes the weaknesses of conventional phrase-based translation systems. Although NMT-based systems have gained popularity in commercial translation applications, there is still plenty of room for improvement. As the most popular search algorithm in NMT, beam search is vital to the translation result. However, traditional beam search can produce duplicate or missing translations due to its target sequence selection strategy. Aiming to alleviate this problem, this paper proposes neural machine translation improvements based on a novel beam search evaluation function, and uses reinforcement learning to train a translation evaluation system to select better candidate words for generating translations. In the experiments, we conducted extensive evaluations of our methods using the CASIA corpus and the 1,000,000 pairs of bilingual corpora from NiuTrans. The experiment results prove that the proposed methods can effectively improve English-to-Chinese translation quality.
Keywords: neural machine translation; beam search; reinforcement learning
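Plain beam search, the baseline this paper sets out to improve, keeps only the top-k partial hypotheses at each decoding step. The transition table below is a made-up stand-in for an NMT decoder's log-probabilities:

```python
# Plain beam search over a toy "decoder": STEP maps the previous token to
# log-probabilities of the next token. Greedy search would commit to "a"
# after the first step; the beam recovers the globally better path via "b".
import math

STEP = {
    "<s>": {"a": math.log(0.6), "b": math.log(0.4)},
    "a": {"c": math.log(0.5), "d": math.log(0.5)},
    "b": {"c": math.log(0.9), "d": math.log(0.1)},
    "c": {"</s>": 0.0},
    "d": {"</s>": 0.0},
}

def beam_search(beam=2, steps=3):
    hyps = [(["<s>"], 0.0)]                  # (tokens, total log-prob)
    for _ in range(steps):
        expanded = []
        for tokens, score in hyps:
            for nxt, lp in STEP[tokens[-1]].items():
                expanded.append((tokens + [nxt], score + lp))
        # prune: keep only the `beam` best partial hypotheses
        hyps = sorted(expanded, key=lambda h: h[1], reverse=True)[:beam]
    return hyps[0][0]

best = beam_search()
```

The paper's contribution replaces the plain log-probability ranking above with a learned evaluation function; the pruning skeleton stays the same.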
15. Improve Neural Machine Translation by Building Word Vector with Part of Speech (cited 3 times)
Authors: Jinyingming Zhang, Jin Liu, Xinyue Lin. Journal on Artificial Intelligence, 2020, Issue 2, pp. 79-88.
Neural Machine Translation (NMT) based systems are an important technology for translation applications. However, there is plenty of room for improvement in NMT. In the process of NMT, traditional word vectors cannot distinguish the same word under different parts of speech (POS). Aiming to alleviate this problem, this paper proposes a new word vector training method based on POS features. It can efficiently improve the quality of translation by adding POS features to the training process of word vectors. In the experiments, we conducted extensive evaluations of our methods. The experimental results show that the proposed method is beneficial for improving the quality of translation from English into Chinese.
Keywords: machine translation; parts of speech; word vector
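The problem the abstract names, one vector for "book" whether it is a noun or a verb, disappears if vectors are keyed by (word, POS) instead of by word alone. A toy sketch with random stand-in vectors (the lookup scheme, not the paper's training procedure):

```python
# Index word vectors by (word, POS) so the same surface form gets a different
# vector under each part of speech. Vectors are random stand-ins for trained
# embeddings.
import random

random.seed(1)

def new_vec():
    return [random.uniform(-1, 1) for _ in range(4)]

vectors = {}

def lookup(word, pos):
    # One vector per (word, POS) pair, created on first use.
    return vectors.setdefault((word, pos), new_vec())

v_noun = lookup("book", "NOUN")   # "read a book"
v_verb = lookup("book", "VERB")   # "book a flight"
```

The same keying idea applies at training time: the corpus is POS-tagged first, so "book/NOUN" and "book/VERB" accumulate separate co-occurrence statistics.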
16. Evaluating Classification Research for Machine Translation Course Teaching (cited 1 time)
Authors: Honglin Wu, Ke Wang. Journal of Contemporary Educational Research, 2022, Issue 10, pp. 1-5.
Teaching evaluation can be divided into different types, and their functions and applicable conditions differ. According to different standards, teaching evaluation can be divided into different types: (1) according to evaluation function, it can be divided into pre-evaluation, intermediate evaluation, and post-evaluation; (2) according to the evaluation reference standard, it can be divided into relative evaluation, absolute evaluation, and individual difference evaluation; (3) according to the evaluation and analysis method, it can be divided into qualitative and quantitative evaluation; (4) according to the evaluation subject, it can be divided into self-evaluation and evaluation by others. This paper introduces research work using different types of teaching evaluation in a machine translation course according to different situations. The research results show that the rational selection of different types of teaching evaluation methods, and the combination of these methods, can greatly promote teaching.
Keywords: evaluating classification; teaching; machine translation
17. Neural Machine Translation Models with Attention-Based Dropout Layer
Authors: Huma Israr, Safdar Abbas Khan, Muhammad Ali Tahir, Muhammad Khuram Shahzad, Muneer Ahmad, Jasni Mohamad Zain. Computers, Materials & Continua (SCIE, EI), 2023, Issue 5, pp. 2981-3009.
In bilingual translation, attention-based Neural Machine Translation (NMT) models are used to achieve synchrony between input and output sequences and the notion of alignment. NMT models have obtained state-of-the-art performance for several language pairs. However, there has been little work exploring useful architectures for Urdu-to-English machine translation. We conducted extensive Urdu-to-English translation experiments using long short-term memory (LSTM), bidirectional recurrent neural networks (Bi-RNN), statistical recurrent units (SRU), gated recurrent units (GRU), convolutional neural networks (CNN), and the Transformer. Experimental results show that Bi-RNN and LSTM with an attention mechanism, trained iteratively with a scalable dataset, make precise predictions on unseen data. The trained models yielded competitive results, achieving 62.6% and 61% accuracy and 49.67 and 47.14 BLEU scores, respectively. From a qualitative perspective, the translations of the test sets were examined manually, and it was observed that the trained models tend to produce repetitive output. The attention scores produced by Bi-RNN and LSTM showed clear alignment, while GRU showed incorrect translations of words, poor alignment, and a lack of clear structure. We therefore refined the attention-based models by defining an additional attention-based dropout layer. Attention dropout fixes alignment errors and minimizes translation errors at the word level. After empirical demonstration and comparison with their counterparts, we found an improvement in the quality of the resulting translation system and a decrease in perplexity and over-translation score. The ability of the proposed model was also evaluated using Arabic-English and Persian-English datasets. We empirically concluded that adding an attention-based dropout layer helps improve GRU, SRU, and Transformer translation and is considerably more efficient in translation quality and speed.
Keywords: natural language processing; neural machine translation; word embedding; attention; perplexity; selective dropout; regularization; Urdu; Persian; Arabic; BLEU
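A generic form of the attention-dropout idea: randomly zero some attention weights during training and renormalize the rest, so no single alignment can dominate. The weight vector, drop rate, and fallback rule are illustrative assumptions, not the paper's exact layer:

```python
# Toy attention-dropout: zero each attention weight with probability p,
# then renormalize the survivors so they still sum to one.
import random

def attention_dropout(weights, p=0.3, rng=random.Random(42)):
    kept = [0.0 if rng.random() < p else w for w in weights]
    z = sum(kept)
    if z == 0.0:                 # everything dropped: fall back to the original
        return list(weights)
    return [w / z for w in kept]

attn = [0.1, 0.6, 0.2, 0.1]      # softmax-normalized attention weights
dropped = attention_dropout(attn)
```

Like ordinary dropout, this would be active only at training time; at inference the unmodified attention weights are used.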
18. Integrating Deep Learning and Machine Translation for Understanding Unrefined Languages
Authors: Hong Geun Ji, Soyoung Oh, Jina Kim, Seong Choi, Eunil Park. Computers, Materials & Continua (SCIE, EI), 2022, Issue 1, pp. 669-678.
In the field of natural language processing (NLP), the advancement of neural machine translation has paved the way for cross-lingual research. Yet most studies in NLP have evaluated the proposed language models on well-refined datasets. We investigate whether a machine translation approach is suitable for multilingual analysis of unrefined datasets, particularly chat messages on Twitch. To address this, we collected a dataset of 7,066,854 and 3,365,569 chat messages from English and Korean streams, respectively. We employed several machine learning classifiers and neural networks with two different types of embedding: word-sequence embedding and the final layer of a pre-trained language model. The results of the employed models indicate that the accuracy difference between English, and English translated to Korean, was relatively high, ranging from 3% to 12%. For Korean data (Korean, and Korean translated to English), it ranged from 0% to 2%. The results therefore imply that translation from a low-resource language (e.g., Korean) into a high-resource language (e.g., English) shows higher performance than vice versa. Several implications and limitations of the presented results are also discussed. For instance, we suggest the feasibility of translating from resource-poor languages in order to use the tools of resource-rich languages in further analysis.
Keywords: Twitch; multilingual; machine translation; machine learning
19. Korean Morphological Analysis for Korean-Vietnamese Statistical Machine Translation
Authors: Quang-Phuoc Nguyen, Joon-Choul Shin, Cheol-Young Ock. Journal of Electronic Science and Technology (CAS, CSCD), 2017, Issue 4, pp. 413-419.
This paper describes experiments with Korean-to-Vietnamese statistical machine translation (SMT). The fact that Korean is a morphologically complex language without clear optimal word boundaries causes a major problem when translating into or from Korean. To solve this problem, we present a method to conduct Korean morphological analysis using a pre-analyzed partial word-phrase dictionary (PWD). In addition, we build a Korean-Vietnamese parallel corpus for training SMT models by collecting text from multilingual magazines. We then apply the morphological analysis to the Korean sentences in the collected parallel corpus as a preprocessing step. The experiment results demonstrate a remarkable improvement in Korean-to-Vietnamese translation quality in terms of bilingual evaluation understudy (BLEU).
Keywords: factored translation models; Korean-Vietnamese parallel corpus; morphological analysis; statistical machine translation (SMT)
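Dictionary-driven morphological analysis of the kind the abstract describes can be sketched as longest-match segmentation against pre-analyzed entries. The romanized entries and tag strings below are invented placeholders standing in for the paper's PWD, not real Korean analyses:

```python
# Greedy longest-match segmentation of a word-phrase against a pre-analyzed
# dictionary: each matched span expands to its stored morphological analysis.

pwd = {"hakgyo": "hakgyo/NNG", "e": "e/JKB", "ganda": "ga/VV+nda/EF"}

def analyze(eojeol):
    out, i = [], 0
    while i < len(eojeol):
        # Try the longest possible match starting at position i.
        for j in range(len(eojeol), i, -1):
            if eojeol[i:j] in pwd:
                out.append(pwd[eojeol[i:j]])
                i = j
                break
        else:
            out.append(eojeol[i] + "/UNK")   # no dictionary entry covers this
            i += 1
    return "+".join(out)

morphs = analyze("hakgyoe")                  # hypothetical "to school"
```

Storing whole pre-analyzed spans (rather than single morphemes) is what lets the dictionary resolve ambiguous boundaries in one lookup.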
20. Neural Machine Translation by Fusing Key Information of Text
Authors: Shijie Hu, Xiaoyu Li, Jiayu Bai, Hang Lei, Weizhong Qian, Sunqiang Hu, Cong Zhang, Akpatsa Samuel Kofi, Qian Qiu, Yong Zhou, Shan Yang. Computers, Materials & Continua (SCIE, EI), 2023, Issue 2, pp. 2803-2815.
When the Transformer was proposed by Google in 2017, it was first used for machine translation tasks and achieved the state of the art at that time. Although current neural machine translation models can generate high-quality translation results, there are still mistranslations and omissions in the translation of key information in long sentences. On the other hand, the most important part of traditional translation tasks is the translation of key information: as long as the key information is translated accurately and completely, the quality of the final translation can still be guaranteed even if other parts are translated incorrectly. To solve the problems of mistranslation and missed translation effectively, and to improve the accuracy and completeness of long-sentence translation, this paper proposes a key-information-fused neural machine translation model based on the Transformer. The proposed model extracts the keywords of the source language text separately as input to the encoder. After the same encoding as the source language text, the keyword encoding is fused with the encoder output of the source language text; the key information is then processed and input into the decoder. By incorporating keyword information from the source language sentence, the model performs reliably on long-sentence translation. To verify the effectiveness of the proposed key information fusion method, a series of experiments was carried out on the validation set. The experimental results show that the Bilingual Evaluation Understudy (BLEU) score of the proposed model on the Workshop on Machine Translation (WMT) 2017 test dataset is higher than that of the Transformer proposed by Google on the same dataset, demonstrating the advantages of the proposed model.
Keywords: key information; Transformer; fusion; neural machine translation
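The fusion pipeline in this abstract, encode the sentence, encode its extracted keywords, combine the two before decoding, can be sketched end to end. The stop-word keyword extractor, the bag-of-characters "encoder", and the weighted-sum fusion are all invented stand-ins for the paper's Transformer components:

```python
# Toy key-information fusion: encode sentence and keywords separately,
# then fuse the two encodings before they would reach the decoder.

STOP = {"the", "a", "of", "in", "on", "and", "to"}

def extract_keywords(tokens):
    # Crude keyword extraction: drop stop words.
    return [t for t in tokens if t not in STOP]

def encode(tokens, dim=8):
    # Bag-of-characters vector standing in for a Transformer encoder.
    vec = [0.0] * dim
    for t in tokens:
        for ch in t:
            vec[ord(ch) % dim] += 1.0
    return vec

def fuse(sent_vec, key_vec, alpha=0.5):
    # Weighted sum of sentence encoding and keyword encoding.
    return [s + alpha * k for s, k in zip(sent_vec, key_vec)]

tokens = "the treaty was signed in the capital".split()
fused = fuse(encode(tokens), encode(extract_keywords(tokens)))
```

The effect is that content words contribute twice, once through the sentence and once through the keyword channel, which mirrors the abstract's goal of protecting key information in long inputs.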