Sentiment analysis, a subfield of Natural Language Processing (NLP), attempts to analyze and identify the sentiments in opinionated text data. People share their judgments, reactions, and feedback on the internet in various languages. Urdu is one of them, and it is used widely worldwide. Urdu speakers prefer to communicate on social media in Roman Urdu (RU), the Urdu language written in English (Latin) script. Researchers have developed versatile lexical resources for feature-rich, well-resourced languages, but only limited linguistic resources are available to facilitate the sentiment classification of Roman Urdu. This effort encompasses extracting subjective expressions in Roman Urdu and determining the implied polarity of the opinionated text. The primary sources of the dataset are Daraz (an e-commerce platform), Google Maps, and manual collection. The contributions of this study include a Bilingual Roman Urdu Language Detector (BRULD) and a Roman Urdu Spelling Checker (RUSC). These integrated modules accept user input, detect the text language, correct the spelling, categorize the sentiments, and return the input sentence's orientation with a sentiment intensity score. The developed system gains strength gradually with each input experience. The results show that the language detector achieves an accuracy of 97.1% on a closed-domain dataset, with an overall sentiment classification accuracy of 94.3%.
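The abstract does not specify how BRULD decides whether an input is English or Roman Urdu; one common, minimal approach is character n-gram profile matching. The sketch below is illustrative only: the toy training sentences, function names, and overlap scoring are assumptions, not the paper's design.

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-grams of a lowercase, space-padded sentence."""
    text = " " + text.lower() + " "
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def build_profile(sentences, n=3):
    """Aggregate n-gram counts over a small training sample."""
    profile = Counter()
    for s in sentences:
        profile.update(char_ngrams(s, n))
    return profile

def detect_language(sentence, profiles, n=3):
    """Return the language whose profile shares the most n-gram mass."""
    grams = char_ngrams(sentence, n)
    def overlap(profile):
        return sum(min(c, profile[g]) for g, c in grams.items())
    return max(profiles, key=lambda lang: overlap(profiles[lang]))

# Toy training sentences (illustrative only, not the paper's corpus).
profiles = {
    "english": build_profile([
        "this product is really good",
        "the delivery was very late",
    ]),
    "roman_urdu": build_profile([
        "yeh product bohat acha hai",
        "delivery bohat der se hui",
    ]),
}

print(detect_language("yeh bohat acha hai", profiles))
```

In a real detector the profiles would be trained on a large labeled corpus, but the overlap principle is the same.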
Thanks to the strong representation capability of pre-trained language models, supervised machine translation models have achieved outstanding performance. However, the performance of these models drops sharply when the scale of the parallel training corpus is limited. Since pre-trained language models have a strong ability for monolingual representation, the key challenge for machine translation is to construct an in-depth relationship between the source and target languages by injecting lexical and syntactic information into pre-trained language models. To alleviate the dependence on the parallel corpus, we propose a Linguistics Knowledge-Driven Multi-Task (LKMT) approach to inject part-of-speech and syntactic knowledge into pre-trained models, thus enhancing machine translation performance. On the one hand, we integrate part-of-speech and dependency labels into the embedding layer and exploit a large-scale monolingual corpus to update all parameters of the pre-trained language model, thus ensuring that the updated language model contains latent lexical and syntactic information. On the other hand, we leverage an extra self-attention layer to explicitly inject linguistic knowledge into the pre-trained-language-model-enhanced machine translation model. Experiments on the benchmark dataset show that our proposed LKMT approach improves Urdu-English translation accuracy by 1.97 points and English-Urdu translation accuracy by 2.42 points, highlighting the effectiveness of the LKMT framework. Detailed ablation experiments confirm the positive impact of part-of-speech and dependency parsing on machine translation.
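The embedding-layer injection described above can be pictured as a position-wise sum of token, part-of-speech, and dependency-label embeddings. This is a minimal stand-in sketch: random vectors replace learned weights, and the dimensions, vocabularies, and names are assumptions, not the LKMT implementation.

```python
import random

EMB_DIM = 8

def make_embedding_table(symbols, dim=EMB_DIM, seed=0):
    """Map each symbol to a small random vector (stands in for learned weights)."""
    rng = random.Random(seed)
    return {s: [rng.uniform(-1, 1) for _ in range(dim)] for s in symbols}

# Hypothetical vocabularies; the paper's actual tag sets differ.
tok_emb = make_embedding_table(["the", "cat", "sleeps"], seed=1)
pos_emb = make_embedding_table(["DET", "NOUN", "VERB"], seed=2)
dep_emb = make_embedding_table(["det", "nsubj", "root"], seed=3)

def embed(tokens, pos_tags, dep_labels):
    """Input representation = token + POS + dependency-label embeddings,
    summed position-wise, echoing the idea of injecting lexical and
    syntactic information at the embedding layer."""
    out = []
    for tok, pos, dep in zip(tokens, pos_tags, dep_labels):
        vec = [t + p + d for t, p, d in
               zip(tok_emb[tok], pos_emb[pos], dep_emb[dep])]
        out.append(vec)
    return out

seq = embed(["the", "cat", "sleeps"], ["DET", "NOUN", "VERB"], ["det", "nsubj", "root"])
print(len(seq), len(seq[0]))  # 3 positions, each an 8-dim vector
```

In a trained model all three tables would be updated by backpropagation on the monolingual corpus, as the abstract describes.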
Offline Urdu Nastaleeq text recognition has long been a serious problem due to its very cursive nature. To avoid character segmentation problems, many researchers are shifting focus towards segmentation-free, ligature-based recognition approaches. The majority of prevalent ligature-based recognition systems rely heavily on hand-engineered feature extraction techniques. However, such techniques are error prone and may lead to a loss of useful information that can hardly be captured later by any manual features. Moreover, most prevalent Urdu Nastaleeq text recognition systems were trained and tested on small data sets. This paper proposes the use of stacked denoising autoencoders for automatic feature extraction directly from the raw pixel values of ligature images. Such deep learning networks have not been applied to the recognition of Urdu text thus far. Different stacked denoising autoencoders were trained on 178,573 ligatures from 3,732 classes drawn from the un-degraded (noise-free) UPTI (Urdu Printed Text Image) data set. Subsequently, the trained networks were validated and tested on degraded versions of the UPTI data set. The experimental results demonstrate accuracies in the range of 93% to 96%, which are better than the existing Urdu OCR systems on such a large data set of ligatures.
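The core of a denoising autoencoder is training a network to reconstruct clean inputs from deliberately corrupted copies. The fragment below sketches only that corruption step (masking noise) and the reconstruction loss on a toy pixel row; the noise level and data are illustrative, not taken from the UPTI experiments.

```python
import random

def corrupt(pixels, mask_prob=0.3, rng=None):
    """Masking noise: zero out a random fraction of inputs; the autoencoder
    is trained to reconstruct the clean pixels from this corrupted copy."""
    rng = rng or random.Random(0)
    return [0.0 if rng.random() < mask_prob else p for p in pixels]

def mse(clean, reconstructed):
    """Mean squared reconstruction error, the usual training objective."""
    return sum((c - r) ** 2 for c, r in zip(clean, reconstructed)) / len(clean)

clean = [0.0, 0.2, 0.9, 1.0, 0.4, 0.0, 0.7, 0.3]   # toy ligature-image row
noisy = corrupt(clean, mask_prob=0.3)

# A perfect decoder would map `noisy` back to `clean` (loss 0); an untrained
# identity "decoder" leaves the corruption in place:
print(mse(clean, clean), mse(clean, noisy))
```

Stacking several such layers, each trained on the hidden codes of the previous one, yields the automatic feature extractor the paper applies to raw ligature pixels.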
Roman Urdu has been used for text messaging over the Internet for years, especially in the Indo-Pak Subcontinent. People from the subcontinent may speak the same Urdu language but use different scripts for writing it. Communication in Roman characters on social media is now the most common standard of communication in the subcontinent, which makes it an extensive information source. English text classification is a solved problem, but there have been only a few efforts to examine the rich information supply of Roman Urdu in the past, owing to the numerous complexities involved in processing Roman Urdu data. These complexities include the non-availability of a tagged corpus, the lack of a set of rules, and the lack of standardized spellings. A large amount of Roman Urdu news data is available on mainstream news websites and on social media websites like Facebook and Twitter, but meaningful information can only be extracted if the data is in a structured format. We have developed a Roman Urdu news headline classifier that classifies news into five categories (health, business, technology, sports, international), on which further analysis and modeling can be done. First, we develop the news dataset using scraping tools and, after preprocessing, compare the results of different machine learning algorithms: Logistic Regression (LR), Multinomial Naïve Bayes (MNB), Long Short-Term Memory (LSTM), and Convolutional Neural Network (CNN). We then use a phonetic algorithm to control lexical variation and test news from different websites. The preliminary results suggest that a more accurate classification can be accomplished by monitoring noise inside the data. After applying the machine learning algorithms mentioned above, the results show that the Multinomial Naïve Bayes classifier gives the best accuracy, 90.17%, in the presence of noise and lexical variation.
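The abstract does not name the phonetic algorithm used to control lexical variation; a Soundex-style key is one plausible choice. The sketch below, with a hypothetical substitution table, shows how such a key can map different Roman Urdu spellings of the same word to one canonical form.

```python
def phonetic_key(word):
    """Crude phonetic key: lowercase, map spelling variants to one symbol,
    drop non-initial vowels, collapse repeated letters. Illustrative only;
    the substitution table is a hypothetical stand-in."""
    word = word.lower()
    substitutions = [("kh", "k"), ("gh", "g"), ("ph", "f"), ("sh", "s"),
                     ("ck", "k"), ("z", "j")]  # hypothetical variant map
    for old, new in substitutions:
        word = word.replace(old, new)
    head, tail = word[:1], word[1:]
    tail = "".join(ch for ch in tail if ch not in "aeiou")
    key, prev = head, head
    for ch in tail:
        if ch != prev:
            key += ch
        prev = ch
    return key

# Spelling variants of the same Roman Urdu word map to one key:
print(phonetic_key("zabardast"), phonetic_key("zbrdst"))  # same key twice
```

Bucketing tokens by such a key before feature extraction collapses spelling variants, which is exactly the lexical-variation noise the abstract blames for classification errors.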
Sentiment analysis has been widely studied for languages such as English and French. However, Roman Urdu sentiment analysis still requires more attention from peer researchers due to the lack of off-the-shelf Natural Language Processing (NLP) solutions. The primary objective of this study is to investigate diverse machine learning methods for sentiment analysis of Roman Urdu data, which is very informal in nature and needs to be lexically normalized. To mitigate this challenge, we propose a fine-tuned Support Vector Machine (SVM) powered by a Roman Urdu stemmer. In our proposed scheme, the corpus data is initially cleaned to remove anomalies from the text. After initial pre-processing, each user review is stemmed. The input text is transformed into a feature vector using the bag-of-words model. Subsequently, the SVM is used to classify and detect user sentiment. Our proposed scheme is based on a dictionary-based Roman Urdu stemmer, whose creation is aimed at standardizing the text so as to minimize its complexity. The efficacy of our proposed model is empirically evaluated with diverse experimental configurations, so as to fine-tune the hyper-parameters and achieve superior performance. Moreover, a series of experiments is conducted on diverse machine learning and deep learning models to compare their performance with our proposed model. We also introduce the Roman Urdu e-commerce dataset (RUECD), which contains 26K+ user reviews annotated by a group of experts and is, to our knowledge, the largest Roman Urdu dataset available. The experiments show that the newly generated dataset is quite challenging and requires more attention from peer researchers for Roman Urdu sentiment analysis.
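The stemming plus bag-of-words front end described above can be sketched as follows; the stem dictionary and vocabulary are tiny hypothetical stand-ins for the paper's actual dictionary-based stemmer and corpus.

```python
# Hypothetical stem dictionary; the paper's dictionary-based stemmer is far larger.
STEM_DICT = {"achi": "acha", "achay": "acha",
             "kharaab": "kharab",
             "chalti": "chal", "chalta": "chal"}

def stem(token):
    """Dictionary lookup; unknown tokens pass through unchanged."""
    return STEM_DICT.get(token, token)

def bag_of_words(review, vocabulary):
    """Stem each token, then count occurrences over a fixed vocabulary;
    the resulting vector is what the SVM would consume."""
    counts = {}
    for tok in review.lower().split():
        counts[stem(tok)] = counts.get(stem(tok), 0) + 1
    return [counts.get(term, 0) for term in vocabulary]

vocab = ["acha", "kharab", "chal", "service"]
print(bag_of_words("achi service kharaab service", vocab))  # [1, 1, 0, 2]
```

Stemming before counting is what performs the lexical normalization the abstract calls for: "achi" and "kharaab" fold into their canonical stems rather than inflating the vocabulary.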
Cursive text recognition of Arabic script-based languages like Urdu is extremely complicated due to their diverse and complex characteristics. Evolutionary approaches like genetic algorithms have been used in the past for various optimization and pattern recognition tasks, reporting exceptional results. The proposed Urdu ligature recognition system uses a genetic algorithm for optimization and recognition. Overall, the proposed recognition system comprises pre-processing, segmentation, feature extraction, hierarchical clustering, classification-rule generation, and genetic-algorithm optimization and recognition. The pre-processing stage removes noise from the sentence images, whereas in segmentation the sentences are segmented into ligature components. Fifteen features are extracted from each of the segmented ligature images. Intra-feature hierarchical clustering is then applied, yielding clustered data. Next, classification rules are used to represent the clustered data. The genetic algorithm performs optimization by multi-level sorting of the clustered data, improving the classification rules used for recognition of Urdu ligatures. Experiments conducted on the benchmark UPTI dataset for the proposed Urdu ligature recognition system yield promising results, achieving a recognition rate of 96.72%.
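A generic genetic-algorithm loop of the kind described (selection, crossover, mutation, with the fittest candidates retained) can be sketched on a toy one-max problem; the bit-string encoding and fitness here are illustrative stand-ins for the paper's clustered-rule representation.

```python
import random

def genetic_search(fitness, length=12, pop_size=20, generations=60,
                   mutation_rate=0.05, rng=None):
    """Minimal GA loop over bit strings: truncation selection, one-point
    crossover, bit-flip mutation. Parents are carried over, so the best
    fitness never decreases."""
    rng = rng or random.Random(42)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                  # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)              # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]                  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_search(fitness=sum)   # one-max: all-ones is optimal
print(best, sum(best))
```

In the paper's system the chromosome would encode an ordering of classification rules and the fitness would be recognition accuracy on the clustered ligature data, but the loop structure is the same.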
Handwritten character recognition systems are used in every field of life nowadays, including shopping malls, banks, educational institutes, etc. Urdu is the national language of Pakistan and the fourth most spoken language in the world. However, it is still challenging to recognize Urdu handwritten characters owing to their cursive nature. Our paper presents a Convolutional Neural Network (CNN) model for Urdu handwritten alphabet recognition (UHAR) of both offline and online characters. Our research contributes an Urdu handwritten dataset (aka UHDS) to empower future work in this field. For offline systems, optical readers are used for extracting the alphabets, while diagonal-based extraction methods are implemented in online systems. Moreover, our research tackles the lack of comprehensive, standard Urdu alphabet datasets available to support research activities in the area of Urdu text recognition. To this end, we collected 1,000 handwritten samples for each alphabet, and a total of 38,000 samples from participants in the 12-to-25 age group, to train our CNN model using online and offline mediums. Subsequently, we carried out detailed character recognition experiments, as detailed in the results. The proposed CNN model outperformed previously published approaches.
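The convolution and pooling operations at the heart of such a CNN can be shown on a toy 4x4 "stroke" image; the kernel and sizes below are illustrative, not the paper's architecture.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling over size x size windows."""
    return [[max(feature_map[i + a][j + b]
                 for a in range(size) for b in range(size))
             for j in range(0, len(feature_map[0]) - size + 1, size)]
            for i in range(0, len(feature_map) - size + 1, size)]

# A 4x4 toy vertical-stroke image and a vertical-edge kernel:
img = [[0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0]]
edge = [[1, -1],
        [1, -1]]
fmap = conv2d(img, edge)
print(fmap)          # strong responses at the stroke's left and right edges
print(max_pool(fmap))
```

Stacking several such convolution/pooling pairs, then flattening into fully connected layers, gives the feature-extraction-plus-classification pipeline the abstract describes.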
English-to-Urdu machine translation is still in its infancy and lacks simple translation methods that provide motivating and adequate English-to-Urdu translation. In order to make knowledge available to the masses, there should be mechanisms and tools in place to make things understandable by translating from the source language to the target language in an automated fashion. Machine translation has achieved this goal with encouraging results. When decoding the source text into the target language, the translator checks all the characteristics of the text. To achieve machine translation, rule-based, statistical, hybrid, and neural machine translation approaches have been proposed to automate the work. In this research work, a neural machine translation approach is employed to translate English text into Urdu: a Long Short-Term Memory (LSTM) encoder-decoder model. The steps required to perform the translation task include preprocessing, tokenization, grammar and sentence structure analysis, word embeddings, training data preparation, encoder-decoder modeling, and output text generation. The results show that the model used in this research work performs well in translation. The results were evaluated using the bilingual evaluation understudy (BLEU) metric and showed that the test and training data yielded the highest-scoring sequences at an effective length of ten (10).
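For an encoder-decoder model, the preprocessing and tokenization steps listed above typically reduce to building a vocabulary and encoding each sentence as a fixed-length id sequence with special tokens. A minimal sketch follows; the special-token names and `max_len` value are assumptions, not the paper's configuration.

```python
def build_vocab(sentences, specials=("<pad>", "<sos>", "<eos>", "<unk>")):
    """Assign an integer id to every special token and corpus word."""
    vocab = {tok: i for i, tok in enumerate(specials)}
    for sent in sentences:
        for word in sent.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def encode(sentence, vocab, max_len=10):
    """Wrap in <sos>/<eos>, map words to ids (<unk> for unseen ones),
    then pad or truncate to a fixed effective length."""
    ids = [vocab["<sos>"]]
    ids += [vocab.get(w, vocab["<unk>"]) for w in sentence.lower().split()]
    ids.append(vocab["<eos>"])
    ids += [vocab["<pad>"]] * (max_len - len(ids))
    return ids[:max_len]

corpus = ["knowledge should reach the masses"]
vocab = build_vocab(corpus)
print(encode("knowledge should reach everyone", vocab))
```

The encoder consumes such id sequences for English, and the decoder emits the corresponding Urdu ids, which are mapped back to words at generation time.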
This paper discusses sociolinguistic variation in the Urdu language. It draws on interviews conducted with various senior citizens who are Urdu speakers in Pakistan; each interview was based on four questions designed to identify Urdu language variation. The questions concerned the value of the Urdu language in today's world and the lexical and dialectal changes that have taken place in Urdu over time. The interviews were semi-structured and conducted with a range of modern and simple tools. This study will be very helpful for the young generation of Pakistan, because past studies have not focused solely on the sociolinguistics of the Urdu language. It guides them toward preserving their native or national language with its original accents and vocabulary, which reflect their actual culture and the purity of the Urdu language.
Urdu, a prominent subcontinental language, serves as a versatile means of communication. However, its handwritten expressions present challenges for optical character recognition (OCR). While various OCR techniques have been proposed, most of them focus on recognizing printed Urdu characters and digits. To the best of our knowledge, very little research has focused solely on the recognition of pure Urdu handwriting, and the results of such proposed methods are often inadequate. In this study, we introduce a novel approach to recognizing Urdu handwritten digits and characters using Convolutional Neural Networks (CNN). Our proposed method utilizes convolutional layers to extract important features from input images and classifies them using fully connected layers, enabling efficient and accurate detection of Urdu handwritten digits and characters. We implemented the proposed technique on a large publicly available dataset of Urdu handwritten digits and characters. The findings demonstrate that the CNN model achieves an accuracy of 98.30% and an F1 score of 88.6%, indicating its effectiveness in detecting and classifying Urdu handwritten digits and characters. These results have far-reaching implications for various applications, including document analysis, text recognition, and language understanding, which have previously been unexplored in the context of Urdu handwriting data. This work lays a solid foundation for future research and development in Urdu language detection and processing, opening up new opportunities for advancement in this field.
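The gap between the reported accuracy (98.30%) and F1 score (88.6%) is the kind of divergence that class imbalance produces, since F1 is computed per class from true/false positives and false negatives. A quick sketch with toy counts (not the paper's confusion matrix):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy counts for one character class (illustrative, not the paper's data):
print(round(f1_score(tp=80, fp=10, fn=10), 3))   # 0.889
```

With 80 true positives and 10 each of false positives and false negatives, precision and recall are both about 0.889, so F1 is too, even while overall accuracy on a larger, imbalanced test set can be far higher.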
In bilingual translation, attention-based Neural Machine Translation (NMT) models are used to achieve synchrony between input and output sequences and the notion of alignment. NMT models have obtained state-of-the-art performance for several language pairs. However, there has been little work exploring useful architectures for Urdu-to-English machine translation. We conducted extensive Urdu-to-English translation experiments using Long Short-Term Memory (LSTM), Bidirectional Recurrent Neural Network (Bi-RNN), Statistical Recurrent Unit (SRU), Gated Recurrent Unit (GRU), Convolutional Neural Network (CNN), and Transformer architectures. Experimental results show that Bi-RNN and LSTM with an attention mechanism, trained iteratively with a scalable data set, make precise predictions on unseen data. The trained models yielded competitive results, achieving 62.6% and 61% accuracy and 49.67 and 47.14 BLEU scores, respectively. From a qualitative perspective, the translations of the test sets were examined manually, and it was observed that the trained models tend to produce repetitive output rather frequently. The attention scores produced by Bi-RNN and LSTM showed clear alignment, while GRU showed incorrect word translations, poor alignment, and a lack of clear structure. Therefore, we considered refining the attention-based models by defining an additional attention-based dropout layer. Attention dropout fixes alignment errors and minimizes translation errors at the word level. After empirical demonstration and comparison with their counterparts, we found an improvement in the quality of the resulting translation system and a decrease in perplexity and the over-translation score. The ability of the proposed model was also evaluated using Arabic-English and Persian-English datasets. We empirically concluded that adding an attention-based dropout layer helps improve GRU, SRU, and Transformer translation and is considerably more efficient in translation quality and speed.
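Attention dropout of the kind described randomly zeroes some attention weights during training (rescaling the survivors) so the model cannot over-commit to a single alignment. Below is a pure-Python sketch of dot-product attention with such a dropout mask; the dimensions and values are toy examples, not the paper's model.

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_with_dropout(query, keys, values, drop_prob=0.2, rng=None):
    """Scaled dot-product attention whose weights are randomly dropped
    (and the survivors rescaled by 1/(1-p)), sketching attention dropout."""
    rng = rng or random.Random(0)
    scale = math.sqrt(len(query))
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    weights = softmax(scores)
    weights = [0.0 if rng.random() < drop_prob else w / (1 - drop_prob)
               for w in weights]
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

keys = values = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
# drop_prob=0.0 reproduces plain attention; raise it during training only.
ctx = attention_with_dropout([1.0, 0.0], keys, values, drop_prob=0.0)
print([round(c, 3) for c in ctx])
```

At inference time dropout is disabled, so the layer reduces to ordinary attention; during training the random masking is what discourages the degenerate, repetitive alignments the abstract reports for GRU.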
Sentiment analysis, a crucial task in discerning emotional tones within text, plays a pivotal role in understanding public opinion and user sentiment across diverse languages. While numerous scholars conduct sentiment analysis in widely spoken languages such as English, Chinese, Arabic, Roman Arabic, and more, grappling with resource-poor languages like Urdu remains a challenge. Urdu is a uniquely crafted language, characterized by a script that amalgamates elements from diverse languages, including Arabic, Persian, Pashto, Turkish, Punjabi, Saraiki, and more. Urdu text, characterized by distinct character sets and linguistic features, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis a formidable undertaking. The limited availability of resources has fueled increased interest among researchers, prompting a deeper exploration into Urdu sentiment analysis. This research is dedicated to Urdu-language sentiment analysis, employing sophisticated deep learning models on an extensive dataset categorized into five labels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments and emotions within the Urdu language, despite the absence of well-curated datasets. To tackle this challenge, the initial step involves the creation of a comprehensive Urdu dataset by aggregating data from various sources such as newspapers, articles, and social media comments. Subsequent to this data collection, a thorough process of cleaning and preprocessing is implemented to ensure the quality of the data. The study leverages two well-known deep learning models, namely Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for both training and evaluating sentiment analysis performance. Additionally, the study explores hyperparameter tuning to optimize the models' efficacy. Evaluation metrics such as precision, recall, and the F1-score are employed to assess the effectiveness of the models. The research findings reveal that RNN surpasses CNN in Urdu sentiment analysis, achieving a significantly higher accuracy rate of 91%. This result accentuates the exceptional performance of RNN, solidifying its status as a compelling option for sentiment analysis tasks in the Urdu language.
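The hyperparameter tuning mentioned above is often a plain grid search over settings such as learning rate and hidden size. A minimal sketch follows, with a stand-in scoring function in place of actually training a CNN or RNN; the grid values and the 0.91 peak are illustrative, not the study's search space.

```python
from itertools import product

def grid_search(train_eval, grid):
    """Exhaustive hyperparameter sweep; returns the best config and its score."""
    best_cfg, best_score = None, float("-inf")
    for values in product(*grid.values()):
        cfg = dict(zip(grid.keys(), values))
        score = train_eval(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Stand-in for "train a model with this config and return validation accuracy":
def fake_train_eval(cfg):
    return 0.91 - abs(cfg["lr"] - 0.001) * 10 - (cfg["hidden"] != 128) * 0.02

grid = {"lr": [0.01, 0.001, 0.0001], "hidden": [64, 128]}
best_cfg, best_score = grid_search(fake_train_eval, grid)
print(best_cfg, round(best_score, 2))
```

In practice `train_eval` would train the network and report precision, recall, or F1 on a validation split; the sweep logic is unchanged.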
In the current era of the internet, people use online media for conversation, discussion, chatting, and other similar purposes. Analysis of material where more than one person is involved poses a separate challenge compared to other text analysis tasks. Several approaches exist to identify users' emotions from conversational text in the English language; however, regional and low-resource languages have been neglected. The Urdu language is one of them and, despite being used by millions of users across the globe, to the best of our knowledge there exists no work on dialogue analysis in the Urdu language. Therefore, in this paper, we propose a model which utilizes deep learning and machine learning approaches for the classification of users' emotions from text. To accomplish this task, we first created a dataset for the Urdu language with the help of existing English-language datasets for dialogue analysis. After that, we preprocessed the data and selected dialogues with common emotions. Once the dataset was prepared, we used different deep learning and machine learning techniques for emotion classification, tuning the algorithms to the Urdu-language datasets. The experimental evaluation has shown encouraging results, with 67% accuracy on the Urdu dialogue dataset; more than 10,000 dialogues are classified into five emotions, i.e., joy, fear, anger, sadness, and neutral. We believe that this is the first effort at emotion detection from conversational text in the Urdu language domain.
Activities of Indians in the Gulf Cooperation Council (GCC) States continue to be a major area of attention for the governments and citizens of the sub-region. The GCC States have had to contend with varied numbers of foreign nationals who continue to troop into their territories in search of economic opportunities. The United Arab Emirates (UAE), a federation of seven emirates, has over the years been a major destination for foreigners. This article identifies various categories of Indian immigrants in the UAE, their areas of activity, and how they have impacted the vernacular of the citizens of the federal monarchical state. It begins by highlighting India-UAE relations prior to the independence of what is now the United Arab Emirates in 1971. Second, it characterizes the relations between Indians and Emiratis after 1971. With selected examples, the article reveals the impact the Hindi/Urdu languages have made on the vernacular of Emiratis.
Funding (LKMT machine translation study): supported by the National Natural Science Foundation of China under Grants 61732005 and 61972186, and the Yunnan Provincial Major Science and Technology Special Plan Projects (Nos. 202103AA080015 and 202203AA080004).
Funding (Urdu Nastaleeq ligature recognition study): the National Natural Science Foundation of China (Project No. 61273365) and the 111 Project (No. B08004) are gratefully acknowledged.
Funding (Roman Urdu news headline classification study): this work is supported by the KIAS (Research Number: CG076601) and in part by the Sejong University Faculty Research Fund.
Funding (Roman Urdu SVM sentiment analysis study): the authors acknowledge the Deputy for Study and Innovation, Ministry of Education, Kingdom of Saudi Arabia, for funding this research through a grant (NU/IFC/INT/01/008) from the Najran University Institutional Funding Committee.
Abstract: Cursive text recognition of Arabic script-based languages like Urdu is extremely complicated due to their diverse and complex characteristics. Evolutionary approaches such as genetic algorithms have been used in the past for various optimization and pattern recognition tasks, reporting exceptional results. The proposed Urdu ligature recognition system uses a genetic algorithm for optimization and recognition. Overall, the proposed system comprises pre-processing, segmentation, feature extraction, hierarchical clustering, classification-rule generation, and genetic-algorithm optimization and recognition. The pre-processing stage removes noise from the sentence images, whereas in segmentation the sentences are segmented into ligature components. Fifteen features are extracted from each of the segmented ligature images. Intra-feature hierarchical clustering is then applied, and classification rules are used to represent the resulting clustered data. The genetic algorithm performs optimization using multi-level sorting of the clustered data to improve the classification rules used for recognizing Urdu ligatures. Experiments conducted on the benchmark UPTI dataset yield promising results, achieving a recognition rate of 96.72%.
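The genetic-algorithm optimization stage can be illustrated with a minimal sketch; the bitstring "rules", the toy fitness function, and all parameter values are hypothetical stand-ins for the paper's clustered classification rules, showing only the select-crossover-mutate loop:

```python
import random

def fitness(rule):
    """Toy objective: number of set bits in a candidate rule mask."""
    return sum(rule)

def evolve(pop_size=30, length=10, generations=60, p_mut=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection: keep the fitter of two random individuals.
        parents = [max(rng.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        nxt = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            cut = rng.randrange(1, length)  # single-point crossover
            nxt += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
        # Bit-flip mutation with probability p_mut per bit.
        pop = [[bit ^ (rng.random() < p_mut) for bit in ind] for ind in nxt]
    return max(pop, key=fitness)

best = evolve()
```

In the paper's setting, the fitness function would instead score how well a candidate rule set recognizes the clustered ligature features.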
Fund: This project was funded by the Deanship of Scientific Research (DSR), King Abdul-Aziz University, Jeddah, Saudi Arabia under Grant No. (RG-11-611-43).
Abstract: Handwritten character recognition systems are used in every field of life nowadays, including shopping malls, banks, educational institutes, etc. Urdu is the national language of Pakistan and the fourth most spoken language in the world. However, it is still challenging to recognize Urdu handwritten characters owing to their cursive nature. Our paper presents a Convolutional Neural Network (CNN) model for offline and online Urdu handwritten alphabet recognition (UHAR). Our research contributes an Urdu handwritten dataset (UHDS) to empower future work in this field. For offline systems, optical readers are used for extracting the alphabets, while diagonal-based extraction methods are implemented in online systems. Moreover, our research tackles the lack of comprehensive and standard Urdu alphabet datasets available to support research in Urdu text recognition. To this end, we collected 1000 handwritten samples for each alphabet, a total of 38,000 samples from participants aged 12 to 25, to train our CNN model using online and offline mediums. We then carried out detailed character-recognition experiments, as detailed in the results. The proposed CNN model outperformed previously published approaches.
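The convolutional feature extraction at the heart of such a CNN can be sketched as a plain valid 2-D convolution; the tiny binary "stroke" patch and the vertical-edge kernel below are illustrative inventions, not taken from the UHDS data:

```python
def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image, summing products."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            s = sum(image[r + i][c + j] * kernel[i][j]
                    for i in range(kh) for j in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge kernel responds strongly where a stroke begins.
patch = [[0, 1, 1],
         [0, 1, 1],
         [0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
fmap = conv2d(patch, kernel)  # -> [[2, 0], [2, 0]]
```

Stacks of such learned kernels produce the feature maps that the fully connected layers then classify into alphabet labels.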
Fund: King Saud University through Researchers Supporting Project number (RSP-2021/387), King Saud University, Riyadh, Saudi Arabia.
Abstract: English-to-Urdu machine translation is still in its infancy and lacks simple translation methods that provide adequate English-to-Urdu translation. In order to make knowledge available to the masses, there should be mechanisms and tools in place to translate from a source language to a target language in an automated fashion, and machine translation has achieved this goal with encouraging results. When decoding the source text into the target language, the translator checks all the characteristics of the text. To achieve machine translation, rule-based, computational, hybrid, and neural machine translation approaches have been proposed to automate the work. In this research work, a neural machine translation approach, a Long Short-Term Memory (LSTM) encoder-decoder, is employed to translate English text into Urdu. The steps required to perform the translation task include preprocessing, tokenization, grammar and sentence-structure analysis, word embeddings, training-data preparation, encoder-decoder modeling, and output text generation. The results show that the model performs well in translation. The results were evaluated using bilingual evaluation metrics and showed that the test and training data yielded the highest-scoring sequences at an effective length of ten (10).
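The tokenization and training-data-preparation steps for an encoder-decoder can be sketched as follows; the token indices, special symbols, and the toy English-Urdu pair are illustrative assumptions, and a real system would feed these id sequences to the LSTM encoder and decoder:

```python
def build_index(sentences):
    """Assign each token an integer id; ids 0..2 are reserved for specials."""
    specials = ["<pad>", "<sos>", "<eos>"]
    tokens = sorted({tok for s in sentences for tok in s.split()})
    return {tok: i for i, tok in enumerate(specials + tokens)}

def encode(sentence, index, add_eos=True):
    """Turn a sentence into id form, optionally closing with <eos>."""
    ids = [index[tok] for tok in sentence.split()]
    return ids + [index["<eos>"]] if add_eos else ids

src_index = build_index(["good morning", "good night"])
tgt_index = build_index(["subah bakhair", "shab bakhair"])

# Teacher forcing: decoder input starts with <sos>; decoder target ends with <eos>.
src = encode("good morning", src_index)
dec_in = [tgt_index["<sos>"]] + encode("subah bakhair", tgt_index, add_eos=False)
dec_out = encode("subah bakhair", tgt_index)
```

The encoder consumes `src`, while the decoder is trained to emit `dec_out` one step ahead of `dec_in`.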
Abstract: This paper discusses sociolinguistic variation in the Urdu language. It draws on interviews conducted with senior citizens among Urdu speakers in Pakistan; each interview was based on four questions designed to uncover variation in Urdu. The questions concerned the value of the Urdu language in today's world and the lexical and dialectal changes that have taken place in Urdu over time. The interviews were semi-structured and conducted with a mix of modern and simple tools. This study will be very helpful for the young generation of Pakistan, because past studies have not dealt solely with sociolinguistics in the Urdu language. It guides them in preserving their native or national language with its original accents and vocabulary, which reflect their actual culture and the purity of the Urdu language.
Abstract: Urdu, a prominent subcontinental language, serves as a versatile means of communication. However, its handwritten expressions present challenges for optical character recognition (OCR). While various OCR techniques have been proposed, most focus on recognizing printed Urdu characters and digits. To the best of our knowledge, very little research has focused solely on pure Urdu handwriting recognition, and the results of the methods proposed so far are often inadequate. In this study, we introduce a novel approach to recognizing Urdu handwritten digits and characters using Convolutional Neural Networks (CNN). Our proposed method utilizes convolutional layers to extract important features from input images and classifies them using fully connected layers, enabling efficient and accurate detection of Urdu handwritten digits and characters. We implemented the proposed technique on a large publicly available dataset of Urdu handwritten digits and characters. The findings demonstrate that the CNN model achieves an accuracy of 98.30% and an F1 score of 88.6%, indicating its effectiveness in detecting and classifying Urdu handwritten digits and characters. These results have far-reaching implications for applications including document analysis, text recognition, and language understanding, which have previously been unexplored in the context of Urdu handwriting data. This work lays a solid foundation for future research and development in Urdu language detection and processing, opening up new opportunities for advancement in this field.
Fund: This work was supported by the Institute for Big Data Analytics and Artificial Intelligence (IBDAAI), Universiti Teknologi MARA, Shah Alam, Selangor, Malaysia.
Abstract: In bilingual translation, attention-based Neural Machine Translation (NMT) models are used to achieve synchrony between input and output sequences and a notion of alignment. NMT models have obtained state-of-the-art performance for several language pairs. However, there has been little work exploring useful architectures for Urdu-to-English machine translation. We conducted extensive Urdu-to-English translation experiments using Long Short-Term Memory (LSTM), Bidirectional Recurrent Neural Networks (Bi-RNN), Statistical Recurrent Units (SRU), Gated Recurrent Units (GRU), Convolutional Neural Networks (CNN), and the Transformer. Experimental results show that Bi-RNN and LSTM with an attention mechanism, trained iteratively on a scalable dataset, make precise predictions on unseen data. The trained models yielded competitive results, achieving 62.6% and 61% accuracy and 49.67 and 47.14 BLEU scores, respectively. From a qualitative perspective, the translations of the test sets were examined manually, and it was observed that the trained models tend to produce repetitive output. The attention scores produced by Bi-RNN and LSTM showed clear alignment, while GRU showed incorrect word translations, poor alignment, and a lack of clear structure. Therefore, we refined the attention-based models by defining an additional attention-based dropout layer. Attention dropout fixes alignment errors and minimizes translation errors at the word level. After empirical demonstration and comparison with their counterparts, we found an improvement in the quality of the resulting translation system and a decrease in perplexity and over-translation score. The ability of the proposed model was also evaluated using Arabic-English and Persian-English datasets. We empirically conclude that adding an attention-based dropout layer helps improve GRU, SRU, and Transformer translation and is considerably more efficient in translation quality and speed.
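An attention-based dropout layer of the kind described above can be sketched as inverted dropout applied to softmax attention weights; the scores, dropout rate, and seed below are illustrative assumptions, not the paper's settings:

```python
import math
import random

def softmax(scores):
    """Numerically stable softmax over a list of attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_dropout(weights, p=0.2, seed=1, training=True):
    """Zero each attention weight with probability p, rescaling survivors
    by 1/(1-p) so the expected total mass is preserved (inverted dropout)."""
    if not training:
        return weights
    rng = random.Random(seed)
    keep = 1.0 - p
    return [w / keep if rng.random() < keep else 0.0 for w in weights]

weights = softmax([2.0, 1.0, 0.1])   # one attention row over three source tokens
dropped = attention_dropout(weights)
```

Randomly silencing attention weights during training discourages the model from over-committing to a single (possibly wrong) alignment, which is the intuition behind the reported reduction in over-translation.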
Abstract: Sentiment analysis, a crucial task in discerning emotional tones within text, plays a pivotal role in understanding public opinion and user sentiment across diverse languages. While numerous scholars conduct sentiment analysis in widely spoken languages such as English, Chinese, Arabic, and Roman Arabic, grappling with resource-poor languages like Urdu becomes a challenge. Urdu is a uniquely crafted language, characterized by a script that amalgamates elements from diverse languages, including Arabic, Persian, Pashto, Turkish, Punjabi, Saraiki, and more. Urdu literature, characterized by distinct character sets and linguistic features, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis a formidable undertaking. The limited availability of resources has fueled increased interest among researchers, prompting a deeper exploration into Urdu sentiment analysis. This research is dedicated to Urdu-language sentiment analysis, employing sophisticated deep learning models on an extensive dataset categorized into five labels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments and emotions within the Urdu language, despite the absence of well-curated datasets. To tackle this challenge, the initial step involves the creation of a comprehensive Urdu dataset by aggregating data from various sources such as newspapers, articles, and social media comments. Following this data collection, thorough cleaning and preprocessing are implemented to ensure data quality. The study leverages two well-known deep learning models, Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for training and evaluating sentiment analysis performance, and additionally explores hyperparameter tuning to optimize the models' efficacy. Evaluation metrics such as precision, recall, and the F1-score are employed to assess the effectiveness of the models. The research findings reveal that RNN surpasses CNN in Urdu sentiment analysis, attaining a significantly higher accuracy rate of 91%. This result accentuates the exceptional performance of RNN, solidifying its status as a compelling option for sentiment analysis tasks in the Urdu language.
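The precision, recall, and F1-score metrics mentioned above can be computed directly from predicted and true labels; the toy label lists below are hypothetical, reusing three of the five classes:

```python
def prf1(y_true, y_pred, positive):
    """Per-class precision, recall, and F1 for one target label."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = ["Positive", "Negative", "Positive", "Neutral"]
y_pred = ["Positive", "Positive", "Positive", "Neutral"]
p, r, f = prf1(y_true, y_pred, "Positive")  # -> (0.666..., 1.0, 0.8)
```

For a five-class task, the per-class scores are typically macro-averaged to summarize overall model quality.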
Abstract: In the current era of the internet, people use online media for conversation, discussion, chatting, and other similar purposes. Analyzing such material, where more than one person is involved, poses a separate challenge compared to other text analysis tasks. There are several approaches for identifying users' emotions from conversational text in English; however, regional and low-resource languages have been neglected. The Urdu language is one of them and, despite being used by millions of users across the globe, to the best of our knowledge no work exists on dialogue analysis in Urdu. Therefore, in this paper, we propose a model which utilizes deep learning and machine learning approaches for classifying users' emotions from text. To accomplish this task, we first created a dataset for the Urdu language with the help of existing English-language dialogue-analysis datasets. After that, we preprocessed the data and selected dialogues with common emotions. Once the dataset was prepared, we used different deep learning and machine learning techniques for emotion classification, tuning the algorithms to the Urdu datasets. The experimental evaluation showed encouraging results, with 67% accuracy on the Urdu dialogue dataset; more than 10,000 dialogues are classified into five emotions: joy, fear, anger, sadness, and neutral. We believe this is the first effort at emotion detection from conversational text in the Urdu language domain.
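The step of selecting dialogues with common emotions can be sketched as a simple filter over labeled dialogues; the Roman-script utterances and the extra "surprise" label below are invented for illustration:

```python
from collections import Counter

# The five target emotion classes named in the abstract.
EMOTIONS = {"joy", "fear", "anger", "sadness", "neutral"}

def filter_dialogues(dialogues):
    """Keep only dialogues whose label is one of the five target emotions,
    then report the class distribution of what remains."""
    kept = [(text, label) for text, label in dialogues if label in EMOTIONS]
    return kept, Counter(label for _, label in kept)

data = [
    ("kya haal hai", "joy"),
    ("mujhe dar lag raha hai", "fear"),
    ("yeh kya hai", "surprise"),   # not in the five classes, dropped
    ("bohat afsos hua", "sadness"),
]
kept, dist = filter_dialogues(data)
```

Inspecting `dist` also exposes class imbalance early, before the classifiers are tuned.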
Abstract: Activities of Indians in the Gulf Cooperation Council (GCC) States continue to be a major area of attention for the governments and citizens of the sub-region. The GCC States have had to contend with varied numbers of foreign nationals who continue to troop into their territories in search of economic opportunities. The United Arab Emirates (UAE), a federation of seven emirates, has over the years been a major destination for foreigners. This article identifies various categories of Indian immigrants in the UAE, their areas of activity, and how they have affected the vernacular of the citizens of the federal monarchical state. It begins by highlighting India-UAE relations prior to the independence of what is now the United Arab Emirates in 1971. Second, it describes the relationship between Indians and Emiratis after 1971. With selected examples, the article reveals the impact the Hindi/Urdu languages have made on the vernacular of Emiratis.