Journal Articles
2 articles found
1. Text Difficulty, Working Memory Capacity and Mind Wandering During Chinese EFL Learners’ Reading
Authors: Xianli GAO, Li LI. Chinese Journal of Applied Linguistics, 2024, No. 3, pp. 433-449, 525 (18 pages)
This experimental study investigated how text difficulty and working memory capacity (WMC) affected Chinese EFL learners’ reading comprehension and their tendency to engage in task-unrelated thoughts, that is, mind wandering (MW), in the course of reading. Sixty first-year university non-English majors participated in the study. A two-factor mixed experimental design of 2 (text difficulty: difficult and simple) × 2 (WMC: high/large and low/small) was employed. Results revealed that 1) the main and interaction effects of WMC and text difficulty on voluntary MW were significant, whereas those on involuntary MW were not; 2) while reading the easy texts, the involuntary MW of high-WMC individuals was less frequent than that of low-WMC ones, whereas while reading the difficult ones, no direct relationship between WMC and involuntary MW was found; and 3) high-WMC individuals had a lower overall rate of MW and better reading performance than low-WMC individuals did, but as text difficulty increased, their rates of overall MW and voluntary MW rose and their reading performance declined. These results lend support to WM theory and have pedagogical implications for the instruction of L2 reading.
Keywords: text difficulty, working memory capacity, reading, mind wandering, voluntary mind wandering, involuntary mind wandering, reading comprehension
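The 2 × 2 mixed design above crosses text difficulty with WMC and compares mind-wandering rates across the four cells. A minimal sketch of how the main effects and the interaction contrast fall out of such a design, using purely illustrative numbers (not the study's data):

```python
from statistics import mean

# Hypothetical mind-wandering probe rates per participant for the
# 2 (text difficulty) x 2 (WMC) design; values are invented for illustration.
mw_rates = {
    ("easy", "high_wmc"): [0.10, 0.12, 0.08],
    ("easy", "low_wmc"):  [0.18, 0.22, 0.20],
    ("hard", "high_wmc"): [0.25, 0.28, 0.24],
    ("hard", "low_wmc"):  [0.30, 0.35, 0.33],
}

# Mean MW rate in each cell of the factorial design.
cell_means = {cell: mean(vals) for cell, vals in mw_rates.items()}

def marginal(level, pos):
    """Marginal mean for one level of one factor (pos 0 = difficulty, 1 = WMC)."""
    vals = [v for cell, vs in mw_rates.items() if cell[pos] == level for v in vs]
    return mean(vals)

# Main-effect estimates: difference between marginal means.
difficulty_effect = marginal("hard", 0) - marginal("easy", 0)
wmc_effect = marginal("low_wmc", 1) - marginal("high_wmc", 1)

# Interaction contrast: does the WMC gap change with text difficulty?
interaction = ((cell_means[("hard", "low_wmc")] - cell_means[("hard", "high_wmc")])
               - (cell_means[("easy", "low_wmc")] - cell_means[("easy", "high_wmc")]))

print(round(difficulty_effect, 3), round(wmc_effect, 3), round(interaction, 2))
```

This only computes descriptive contrasts; the study itself would test them inferentially (e.g., with a mixed-design ANOVA).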
2. Text Difficulty Study: Do Machines Behave the Same as Humans Regarding Text Difficulty?
Authors: Bowen Chen, Xiao Ding, Yi Zhao, Bo Fu, Tingmao Lin, Bing Qin, Ting Liu. Machine Intelligence Research (EI, CSCD), 2024, No. 2, pp. 283-293 (11 pages)
With the emergence of pre-trained models, current neural networks are able to give task performance that is comparable to humans. However, we know little about the fundamental working mechanism of pre-trained models: we do not know how they approach such performance or how the task is solved by the model. For example, given a task, humans learn from easy to hard, whereas the model learns in a random order. Undeniably, difficulty-insensitive learning has led to great success in natural language processing (NLP), but little attention has been paid to the effect of text difficulty in NLP. We propose a human learning matching index (HLM Index) to investigate the effect of text difficulty. Experimental results show: 1) LSTM exhibits more human-like learning behavior than BERT, and UID-SuperLinear gives the best evaluation of text difficulty among four text difficulty criteria; among nine tasks, some tasks’ performance is related to text difficulty, whereas others’ is not. 2) A model trained on easy data performs best on both easy and medium test data, whereas one trained on hard data performs well only on hard test data. 3) Training the model from easy to hard leads to quicker convergence.
Keywords: cognition-inspired natural language processing, psycholinguistics, explainability, text difficulty, curriculum learning
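The abstract's third finding (training from easy to hard converges faster) is the core idea of curriculum learning. A minimal sketch of that ordering step, where the difficulty score (sentence length as a crude proxy) is an illustrative assumption, not the paper's UID-SuperLinear criterion:

```python
# Curriculum learning sketch: score each training example for difficulty,
# sort easy -> hard, then batch in that order before training.

def difficulty(example: str) -> int:
    # Illustrative proxy: longer sentences are treated as harder.
    return len(example.split())

def curriculum_batches(examples, batch_size):
    ordered = sorted(examples, key=difficulty)  # easy first, hard last
    return [ordered[i:i + batch_size]
            for i in range(0, len(ordered), batch_size)]

corpus = [
    "Pre-trained models can match human task performance .",
    "Cats sleep .",
    "Difficulty-insensitive learning still succeeds in NLP tasks overall .",
    "Models learn randomly .",
]
batches = curriculum_batches(corpus, batch_size=2)
# Difficulty scores across batches are non-decreasing.
print([difficulty(s) for b in batches for s in b])
```

A real training loop would then feed these batches to the optimizer in order, optionally mixing in harder examples gradually rather than strictly sorting.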