Abstract: Unlike International Phonetic Alphabet learning, phonics, as an effective way of teaching spelling and reading, is receiving increasing attention in China, but it faces many problems in the implementation process. This paper presents a case study of technology-based phonics teaching and learning. Results from two classes in an elementary school revealed that pupils overcame the difficulties of learning phonics through technology-based learning.
Funding: Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project, Grant/Award Number: 2020AAA0108703.
Abstract: Sentiment analysis is a fine-grained analysis task that aims to identify the sentiment polarity of a specified sentence. Existing methods for Chinese sentiment analysis consider sentiment features from only a single pole and scale, and thus cannot fully exploit and utilise sentiment feature information, making their performance less than ideal. To resolve this problem, the authors propose a new method, GP-FMLNet, which integrates both glyph and phonetic information, and design a novel feature matrix learning process for phonetic features with which to model words that have the same pinyin information but different glyph information. The method addresses the problem of misspelled words influencing sentiment polarity prediction results. Specifically, the authors iteratively mine character, glyph, and pinyin features from the input comment sentences. They then use soft attention and matrix compound modules to model the phonetic features, which enables the model to keep focusing on context-dependent words in various positions and to suppress the influence of misleading ones. Experiments on six public datasets show that the proposed model fully utilises the glyph and phonetic information and improves on the performance of existing Chinese sentiment analysis algorithms.
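The soft-attention step described above can be illustrated with a minimal NumPy sketch. This is not the authors' exact GP-FMLNet architecture (the dimensions, projection matrices, and concatenation-based fusion are illustrative assumptions): glyph features provide the queries, pinyin features provide the keys and values, and the attention-weighted pinyin context is fused with the glyph representation for each character.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def soft_attention_fusion(glyph, pinyin, W_q, W_k):
    """Attend over pinyin features using glyph-derived queries.

    glyph, pinyin: (n_chars, d) feature matrices for one sentence.
    W_q, W_k: (d, d_k) projection matrices (randomly initialised here;
              in a real model these would be learned).
    Returns a fused (n_chars, 2*d) representation and the attention map.
    """
    q = glyph @ W_q                        # queries from glyph features
    k = pinyin @ W_k                       # keys from pinyin features
    scores = q @ k.T / np.sqrt(W_q.shape[1])
    attn = softmax(scores, axis=-1)        # each row sums to 1
    context = attn @ pinyin                # attention-weighted pinyin features
    return np.concatenate([glyph, context], axis=-1), attn

rng = np.random.default_rng(0)
n, d, dk = 5, 8, 4                         # 5 characters, toy feature sizes
glyph = rng.normal(size=(n, d))
pinyin = rng.normal(size=(n, d))
fused, attn = soft_attention_fusion(glyph, pinyin,
                                    rng.normal(size=(d, dk)),
                                    rng.normal(size=(d, dk)))
print(fused.shape)        # (5, 16)
print(attn.sum(axis=-1))  # each row of attention weights sums to ~1
```

The attention rows let characters sharing the same pinyin but different glyphs receive different weighted contexts, which is the intuition behind modelling phonetic ambiguity.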
Abstract: This study explored the nature and use of technology-based self-regulated learning (SRL) strategies among Chinese university students. A total of 20 undergraduate students in China's Mainland were invited to participate in a focus group interview. The students reported using four types of technology-based SRL strategies: cognitive, meta-cognitive, social behavioral, and motivational regulation strategies. Among these, technology-based vocabulary learning was reported as a dominant strategy. This study opens a new window to understanding how English as a foreign language (EFL) students use different strategies to learn English in a technology-based learning context.
Abstract: Speech is a highly coordinated process that requires precise control over vocal tract morphology/motion to produce intelligible sounds while simultaneously generating unique exhaled flow patterns. The schlieren imaging technique visualizes airflows with subtle density variations. It is hypothesized that speech flows captured by schlieren, when analyzed using a hybrid of a convolutional neural network (CNN) and a long short-term memory (LSTM) network, can recognize letter pronunciations, thus facilitating automatic speech recognition and speech disorder therapy. This study evaluates the feasibility of using a CNN-based video classification network to differentiate speech flows corresponding to the first four letters of the alphabet: /A/, /B/, /C/, and /D/. A schlieren optical system was developed, and the speech flows of letter pronunciations were recorded for two participants at an acquisition rate of 60 frames per second. A total of 640 video clips, each lasting 1 s, were used to train and test a hybrid CNN-LSTM network. Acoustic analyses of the recorded sounds were conducted to understand the phonetic differences among the four letters. The hybrid CNN-LSTM network was trained separately on four datasets of varying sizes (i.e., 20, 30, 40, and 50 videos per letter), all achieving over 95% accuracy in classifying videos of the same participant. However, the network's performance declined when tested on speech flows from a different participant, with accuracy dropping to around 44%, indicating significant inter-participant variability in letter pronunciation. Retraining the network with videos from both participants improved accuracy to 93% on the second participant. Analysis of misclassified videos indicated that factors such as low video quality and disproportionate head size affected accuracy. These results highlight the potential of CNN-assisted speech recognition and speech therapy using articulation flows, although challenges remain in expanding the letter set and participant cohort.
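The hybrid CNN-LSTM pipeline can be sketched in miniature with NumPy. The filter count, hidden size, pooling scheme, and gate layout below are illustrative assumptions, not the study's actual network: each frame passes through a small convolution-and-pooling stage to yield a feature vector, an LSTM cell consumes the per-frame features in order, and the final hidden state is projected to logits over the four letters.

```python
import numpy as np

def conv_pool_frame(frame, kernel):
    """Valid 2-D convolution of one grayscale frame, ReLU, global average pool."""
    kh, kw = kernel.shape
    h, w = frame.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i+kh, j:j+kw] * kernel)
    return np.maximum(out, 0).mean()      # one scalar feature per kernel

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; W, U, b stack the input/forget/cell/output gates."""
    hid = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[:hid])                  # input gate
    f = sigmoid(z[hid:2*hid])             # forget gate
    g = np.tanh(z[2*hid:3*hid])           # candidate cell state
    o = sigmoid(z[3*hid:])                # output gate
    c = f * c + i * g
    return o * np.tanh(c), c

def classify_clip(frames, kernels, W, U, b, W_out):
    """frames: (T, H, W) clip -> logits over the 4 letters /A/ /B/ /C/ /D/."""
    hid = W.shape[0] // 4
    h, c = np.zeros(hid), np.zeros(hid)
    for frame in frames:
        x = np.array([conv_pool_frame(frame, k) for k in kernels])  # CNN stage
        h, c = lstm_step(x, h, c, W, U, b)                          # temporal stage
    return W_out @ h                                                # class logits

rng = np.random.default_rng(1)
T, H, Wd, n_k, hid = 6, 12, 12, 3, 5      # toy clip: 6 frames of 12x12 pixels
frames = rng.normal(size=(T, H, Wd))
kernels = rng.normal(size=(n_k, 3, 3))
logits = classify_clip(frames, kernels,
                       rng.normal(size=(4*hid, n_k)),
                       rng.normal(size=(4*hid, hid)),
                       rng.normal(size=4*hid),
                       rng.normal(size=(4, hid)))
print(logits.shape)  # (4,)
```

In the study's setting, each 1 s clip at 60 fps would supply 60 such frames; the CNN stage would of course be far deeper, but the control flow, per-frame features feeding a recurrent cell, is the same.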