Abstract: This experimental study investigated how text difficulty and working memory capacity (WMC) affected Chinese EFL learners' reading comprehension and their tendency to engage in task-unrelated thoughts, that is, mind wandering (MW), in the course of reading. Sixty first-year non-English-major university students participated in the study. A two-factor mixed experimental design of 2 (text difficulty: difficult vs. simple) × 2 (WMC: high vs. low) was employed. Results revealed that 1) the main and interaction effects of WMC and text difficulty on voluntary MW were significant, whereas those on involuntary MW were not; 2) while reading the easy texts, high-WMC individuals mind-wandered involuntarily less often than low-WMC individuals, whereas while reading the difficult texts, no direct relationship between WMC and involuntary MW was found; and 3) high-WMC individuals had a lower overall rate of MW and better reading performance than low-WMC individuals, but as text difficulty increased, their rates of overall and voluntary MW rose and their reading performance declined. These results lend support to working memory theory and have pedagogical implications for the instruction of L2 reading.
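The 2 × 2 mixed design above crosses a between-subjects factor (WMC) with a within-subjects factor (text difficulty). A minimal sketch of how the main effect of difficulty and the WMC × difficulty interaction are read off the four cell means follows; all numbers are invented for illustration and are not the study's data.

```python
# Hypothetical mean voluntary-MW rates per cell of the 2x2 mixed design.
# WMC (high vs. low) is between-subjects; text difficulty is within-subjects.
# Values are illustrative stand-ins, not results from the study.
cell_means = {
    ("high_WMC", "simple"):    0.10,
    ("high_WMC", "difficult"): 0.30,
    ("low_WMC",  "simple"):    0.25,
    ("low_WMC",  "difficult"): 0.35,
}

# Main effect of text difficulty: difficult-minus-simple difference,
# averaged over the two WMC groups.
diff_effect = (
    (cell_means[("high_WMC", "difficult")] - cell_means[("high_WMC", "simple")])
    + (cell_means[("low_WMC", "difficult")] - cell_means[("low_WMC", "simple")])
) / 2

# Interaction: does difficulty raise MW more in one WMC group than the other?
# (difference of the two within-group differences)
interaction = (
    (cell_means[("high_WMC", "difficult")] - cell_means[("high_WMC", "simple")])
    - (cell_means[("low_WMC", "difficult")] - cell_means[("low_WMC", "simple")])
)

print(f"difficulty main effect: {diff_effect:.2f}")    # 0.15
print(f"interaction (high - low): {interaction:.2f}")  # 0.10
```

A nonzero interaction contrast of this kind is what a significant WMC × difficulty interaction in the ANOVA reflects; the study's actual inference, of course, also requires the per-participant variability that cell means alone do not capture.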
Funding: This work was supported by the National Natural Science Foundation of China (Nos. U22B2059 and 62176079), the National Natural Science Foundation of Heilongjiang Province, China (No. YQ 2022F005), and the Industry-University-Research Innovation Foundation of China University (No. 2021ITA05009).
Abstract: With the emergence of pre-trained models, current neural networks achieve task performance comparable to humans. However, we know little about the fundamental working mechanism of pre-trained models: we do not know how they reach such performance or how they solve the task. For example, given a task, humans learn from easy to hard, whereas a model learns in random order. Undeniably, difficulty-insensitive learning has led to great success in natural language processing (NLP), but little attention has been paid to the effect of text difficulty in NLP. We propose a human learning matching index (HLM Index) to investigate the effect of text difficulty. Experimental results show that 1) LSTM exhibits more human-like learning behavior than BERT; additionally, UID-SuperLinear gives the best evaluation of text difficulty among four text-difficulty criteria, and among nine tasks, some tasks' performance is related to text difficulty whereas others' is not; 2) a model trained on easy data performs best on both easy and medium test data, whereas one trained on hard data performs well only on hard test data; and 3) training the model from easy to hard leads to quicker convergence.
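Result 3 describes an easy-to-hard training schedule (curriculum learning): training examples are presented in order of increasing difficulty rather than in random order. A minimal sketch of such ordering follows; the `difficulty_score` below is a stand-in proxy (token count), not the paper's criteria such as UID-SuperLinear, which are not reproduced here.

```python
# Minimal sketch of easy-to-hard (curriculum) ordering of training data.
# difficulty_score is an assumed proxy: more tokens ~ harder. The paper's
# actual text-difficulty criteria (e.g. UID-SuperLinear) are not used here.

def difficulty_score(text: str) -> int:
    """Proxy for text difficulty: number of whitespace-separated tokens."""
    return len(text.split())

def curriculum_order(corpus):
    """Return training examples sorted from easy to hard by the proxy score."""
    return sorted(corpus, key=difficulty_score)

corpus = [
    "The cat sat.",
    "Pre-trained models can match human task performance.",
    "Go!",
]

# The training loop would consume examples in this order instead of shuffling.
for sentence in curriculum_order(corpus):
    print(difficulty_score(sentence), sentence)
```

In a real training pipeline the same idea is typically applied at the batch level (e.g. sorting or staging batches by difficulty), so that the model sees easy batches first and the hardest batches last.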