We use many devices in our daily life to communicate with others. In this modern world, people use email, Facebook, Twitter, and many other social network sites to exchange information. People lose valuable time to misspelling and retyping, and some are reluctant to type long sentences because they run into unnecessary words or grammatical issues. Word-predictive systems therefore help all users exchange textual information more quickly, easily, and comfortably. These systems predict the next most probable words and let users choose the needed word from the suggestions. Word prediction can help a writer by predicting the next word and helping complete the sentence correctly. This research aims to forecast the most suitable next word to complete a sentence for any given context. In this research, we have worked on the Bangla language. We present a process that can predict the next most probable and proper words and suggest a complete sentence using the predicted words. A GRU-based RNN has been applied to an N-gram dataset to develop the proposed model. We collected a large Bangla dataset from multiple sources and compared the approach with others that have been used, such as LSTM and Naive Bayes; the suggested approach provides better accuracy than the others. Here, the Unigram model achieves 88.22%, the Bi-gram model 99.24%, the Tri-gram model 97.69%, and the 4-gram and 5-gram models 99.43% and 99.78% average accuracy, respectively. We believe that our proposed method will have a profound impact on Bangla search engines.
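The abstract above describes next-word prediction over an N-gram dataset. As a minimal illustration of the n-gram idea only (a count-based bigram predictor on a hypothetical toy corpus, not the authors' GRU-based RNN), the suggestion step can be sketched as:

```python
from collections import Counter, defaultdict

# Toy English corpus standing in for the Bangla n-gram dataset (hypothetical).
corpus = [
    "i like to read books",
    "i like to write code",
    "i like to read news",
]

# Build bigram counts: for each word, count which words follow it.
follower_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follower_counts[prev][nxt] += 1

def predict_next(word, k=2):
    """Return up to k most frequent next-word suggestions after `word`."""
    return [w for w, _ in follower_counts[word].most_common(k)]

print(predict_next("to"))  # suggestions ranked by follower frequency
```

A GRU replaces these raw counts with a learned hidden state over the whole context, which is what lets the paper's model rank candidates for longer n-gram windows.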
This paper aims to explore the relationship between boundedness and sentence completion. We found that it is the boundedness of eventualities, rather than the telicity of verbs, that actually affects and restricts sentence completion. Unboundedness and boundedness of eventualities are the underlying factors that influence and restrict sentence completion, while sentence-completing elements are explicit markers of boundedness and unboundedness.
Sign language is a primary mode of communication for individuals with hearing impairments, conveying meaning through hand shapes and hand movements. Unlike spoken or written languages, sign language relies on the recognition and interpretation of hand gestures captured in video data. However, sign language datasets remain relatively limited compared to those of other languages, which hinders the training and performance of deep learning models. Additionally, the distinct word order of sign language, unlike that of spoken language, requires context-aware and natural sentence generation. To address these challenges, this study applies data augmentation techniques to build a Korean Sign Language dataset and train recognition models. Recognized words are then reconstructed into complete sentences. The sign recognition process uses OpenCV and MediaPipe to extract hand landmarks from sign language videos and analyzes hand position, orientation, and motion. The extracted features are converted into time-series data and fed into a Long Short-Term Memory (LSTM) model. The proposed recognition framework achieved an accuracy of up to 81.25%, while the sentence generation achieved an accuracy of up to 95%. The proposed approach is expected to be applicable not only to Korean Sign Language but also to other low-resource sign languages for recognition and translation tasks.
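The pipeline above converts per-frame hand landmarks into fixed-length time-series before the LSTM. A minimal sketch of that preparation step follows; the shapes (21 MediaPipe hand landmarks with x, y, z each, a 30-frame window) are assumptions for illustration, not the paper's exact configuration:

```python
SEQ_LEN = 30       # assumed fixed number of frames per sign clip
FEATURES = 21 * 3  # 21 hand landmarks, (x, y, z) coordinates each

def to_sequence(frames):
    """Pad or truncate a list of per-frame feature vectors to SEQ_LEN,
    zero-filling short feature vectors and short clips."""
    frames = [list(f[:FEATURES]) + [0.0] * max(0, FEATURES - len(f))
              for f in frames]
    if len(frames) >= SEQ_LEN:
        return frames[:SEQ_LEN]
    padding = [[0.0] * FEATURES for _ in range(SEQ_LEN - len(frames))]
    return frames + padding

# A hypothetical 10-frame clip becomes a SEQ_LEN x FEATURES array,
# ready to be fed to an LSTM expecting (timesteps, features) input.
clip = [[0.5] * FEATURES for _ in range(10)]
seq = to_sequence(clip)
```

The zero-padding convention is one common choice; masking or interpolation are alternatives when clip lengths vary widely.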
Funding: supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) Innovative Human Resource Development for Local Intellectualization Program grant funded by the Korea government (MSIT) (IITP-2026-RS-2022-00156334, 50%) and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1C1C2011105, 50%).