Journal Articles
3 articles found
1. Joint Generation of Distractors for Multiple-Choice Questions: A Text-to-Text Approach
Authors: Ricardo Rodriguez-Torrealba, Eva Garcia-Lopez, Antonio Garcia-Cabot. Computers, Materials & Continua, 2025, No. 5, pp. 1683-1705 (23 pages)
Generating good-quality distractors is a key and time-consuming task associated with multiple-choice questions (MCQs), one of the assessment items that have dominated the educational field for years. Recent advances in language models and architectures present an opportunity to help teachers generate and update these elements at the speed and scale demanded by the widespread growth of online education. This study focuses on a text-to-text approach for the joint generation of distractors for MCQs, where the context, question, and correct answer are used as input, while the set of distractors corresponds to the output, allowing three distractors to be generated in a single model inference. By fine-tuning FlanT5 models and LongT5 with TGlobal attention on a RACE-based dataset, the potential of this approach is explored, demonstrating an improvement in the BLEU and ROUGE-L metrics compared to previous works and a GPT-3.5 baseline. Additionally, BERTScore is introduced in the evaluation, showing that the fine-tuned models generate distractors semantically close to the reference, although the GPT-3.5 baseline still outperforms them in this area. A tendency toward duplicating distractors is noted, although models fine-tuned with Low-Rank Adaptation (LoRA) and 4-bit quantization showed a significant reduction in duplicated distractors.
Keywords: text-to-text; distractor generation; fine-tuning; FlanT5; LongT5; multiple-choice; questionnaire
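The abstract outlines a single-inference, text-to-text setup: context, question, and correct answer in; three distractors out. Below is a minimal sketch of that pattern using Hugging Face Transformers and PEFT; the prompt template, the "<sep>" delimiter between distractors, the google/flan-t5-base checkpoint, and the LoRA hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of joint text-to-text distractor generation, assuming
# Hugging Face Transformers and PEFT. Prompt format and the "<sep>"
# delimiter are placeholders, not the paper's exact serialization.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# LoRA adapter in the spirit of the paper's LoRA + quantization runs
# (4-bit loading would go through BitsAndBytesConfig in from_pretrained).
lora_config = LoraConfig(r=16, lora_alpha=32, target_modules=["q", "v"],
                         task_type="SEQ_2_SEQ_LM")
model = get_peft_model(model, lora_config)

def generate_distractors(context: str, question: str, answer: str) -> list[str]:
    # Context, question, and correct answer go in as one sequence; the model
    # emits all three distractors in a single inference, split on "<sep>".
    prompt = (f"generate distractors: question: {question} "
              f"answer: {answer} context: {context}")
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    output = model.generate(**inputs, max_new_tokens=64, num_beams=4)
    return [d.strip() for d in
            tokenizer.decode(output[0], skip_special_tokens=True).split("<sep>")]
```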
2. Research on Improving Multilingual English Translation Quality Using T5 and MAML
Authors: Mei Ling, Sun Hongping. Journal of Ezhou University (鄂州大学学报), 2025, No. 2, pp. 98-102, 105 (6 pages)
To improve translation performance and quality, this study combines T5 (Text-To-Text Transfer Transformer) and MAML (Model-Agnostic Meta-Learning) and investigates their application to the sustained quality improvement of multilingual English translation. An autoregressive learning approach is used to fine-tune the pretrained parameters of the T5 model, building a generative multilingual English translation model. Combined with the MAML framework, the model is trained across multiple tasks so that it can adapt rapidly with only a small amount of data from new tasks. A multilingual parallel corpus is built with a web crawler, and the T5-and-MAML-based translation model is evaluated using BLEU (Bilingual Evaluation Understudy) and TER (Translation Error Rate) as metrics. Experimental results show that, compared with the OpenNMT (Open Neural Machine Translation), Transformer, and Opus-MT (Open Parallel Corpus-Machine Translation) baseline models, the proposed model's mean BLEU score is 6.05%, 2.59%, and 2.05% higher, respectively. The conclusion is that the T5-MAML model can effectively improve multilingual English translation quality and produce more natural and fluent translation output.
Keywords: multilingual English translation; Text-to-Text Transfer Transformer; Model-Agnostic Meta-Learning; English translation model
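The abstract combines T5 fine-tuning with MAML-style fast adaptation across translation tasks. The following is a hedged, first-order-MAML sketch of such a training loop; the t5-small checkpoint, the "translate English to German:" prefix, learning rates, and batch construction are all assumptions for illustration, not the paper's configuration.

```python
# First-order MAML-style meta-training sketch around a T5 translation model.
# Each "task" is a (support, query) batch pair for one language direction.
import copy
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
meta_opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

def make_batch(sources, targets, prefix="translate English to German: "):
    # Toy batch builder; real pipelines would mask pad tokens in labels (-100).
    enc = tokenizer([prefix + s for s in sources], return_tensors="pt", padding=True)
    labels = tokenizer(targets, return_tensors="pt", padding=True).input_ids
    return {"input_ids": enc.input_ids,
            "attention_mask": enc.attention_mask, "labels": labels}

def seq2seq_loss(m, batch):
    return m(**batch).loss

def meta_step(tasks, inner_lr=1e-4, inner_steps=1):
    meta_opt.zero_grad()
    for support, query in tasks:
        learner = copy.deepcopy(model)            # per-task fast weights
        inner = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):              # fast adaptation on support set
            inner.zero_grad()
            seq2seq_loss(learner, support).backward()
            inner.step()
        inner.zero_grad()
        seq2seq_loss(learner, query).backward()   # evaluate adapted weights
        # First-order approximation: copy query gradients back to the shared init.
        for p, lp in zip(model.parameters(), learner.parameters()):
            if lp.grad is not None:
                p.grad = lp.grad.clone() if p.grad is None else p.grad + lp.grad
    meta_opt.step()
```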
3. A Transformer-Based Deep Learning Framework with Semantic Encoding and Syntax-Aware LSTM for Fake Electronic News Detection
Authors: Hamza Murad Khan, Shakila Basheer, Mohammad Tabrez Quasim, Raja'a Al-Naimi, Vijaykumar Varadarajan, Anwar Khan. Computers, Materials & Continua, 2026, No. 1, pp. 1024-1048 (25 pages)
With the increasing growth of online news, fake electronic news detection has become one of the most important paradigms of modern research. Traditional electronic news detection techniques are generally constrained by contextual understanding, sequential dependencies, and/or data imbalance, which makes distinguishing between genuine and fabricated news a challenging task. To address this problem, we propose a novel hybrid architecture, T5-SA-LSTM, which synergistically integrates the T5 Transformer, for semantically rich contextual embeddings, with a Self-Attention-enhanced (SA) Long Short-Term Memory (LSTM) network. The LSTM is trained using the Adam optimizer, which provides faster and more stable convergence than Stochastic Gradient Descent (SGD) and Root Mean Square Propagation (RMSProp). The WELFake and FakeNewsPrediction datasets are used, which consist of labeled news articles containing fake and real news samples. Tokenization and the Synthetic Minority Over-sampling Technique (SMOTE) are used for data preprocessing to ensure linguistic normalization and address class imbalance. The incorporation of the Self-Attention (SA) mechanism enables the model to highlight critical words and phrases, thereby enhancing predictive accuracy. The proposed model is evaluated using accuracy, precision, recall (sensitivity), and F1-score as performance metrics. It achieved 99% accuracy on the WELFake dataset and 96.5% accuracy on the FakeNewsPrediction dataset, outperforming competitive schemes such as T5-SA-LSTM (RMSProp), T5-SA-LSTM (SGD), and other models.
Keywords: fake news detection; tokenization; SMOTE; Text-to-Text Transfer Transformer (T5); Long Short-Term Memory (LSTM); self-attention mechanism (SA); T5-SA-LSTM; WELFake dataset; FakeNewsPrediction dataset
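The T5-SA-LSTM hybrid is described at the component level: T5 embeddings feed a self-attention-enhanced LSTM and a binary classifier trained with Adam, after SMOTE-balanced preprocessing. A minimal PyTorch sketch of that wiring follows; the layer sizes, mean pooling, and freezing of the T5 encoder are assumptions, not the paper's reported configuration.

```python
# Hedged sketch of a T5-SA-LSTM-style classifier: a T5 encoder provides
# semantic embeddings, a BiLSTM models sequential structure, and
# self-attention over the LSTM states highlights critical words
# before binary (fake vs. real) classification.
import torch
import torch.nn as nn
from transformers import T5EncoderModel

class T5SALSTM(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.encoder = T5EncoderModel.from_pretrained("t5-small")
        self.lstm = nn.LSTM(self.encoder.config.d_model, hidden,
                            batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, 2)   # fake vs. real

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():                        # encoder frozen by assumption
            emb = self.encoder(input_ids=input_ids,
                               attention_mask=attention_mask).last_hidden_state
        seq, _ = self.lstm(emb)
        attended, _ = self.attn(seq, seq, seq)       # self-attention over LSTM states
        return self.classifier(attended.mean(dim=1))

# Per the abstract, training uses Adam (reported as converging faster than
# SGD/RMSProp); class balance would be handled beforehand, e.g., with
# imblearn.over_sampling.SMOTE on the training features.
# optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)
```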