Journal Articles
340 articles found
Chinese DeepSeek: Performance of Various Oversampling Techniques on Public Perceptions Using Natural Language Processing
1
Authors: Anees Ara, Muhammad Mujahid, Amal Al-Rasheed, Shaha Al-Otaibi, Tanzila Saba 《Computers, Materials & Continua》 2025, No. 8, pp. 2717-2731 (15 pages)
DeepSeek, a Chinese open-source artificial intelligence (AI) model, has gained a lot of attention due to its economical training and efficient inference. Trained with large-scale reinforcement learning without supervised fine-tuning as a preliminary step, DeepSeek demonstrates remarkable reasoning capabilities across a wide range of tasks. It is a prominent AI-driven chatbot that assists individuals in learning and enhances responses by generating insightful solutions to inquiries. Users hold divergent viewpoints regarding advanced models like DeepSeek, posting both their merits and shortcomings across several social media platforms. This research presents a new framework for predicting public sentiment to evaluate perceptions of DeepSeek. To transform the unstructured data into a suitable form, we first collect DeepSeek-related tweets from Twitter and apply various preprocessing methods. We then annotate the tweets using the Valence Aware Dictionary and sEntiment Reasoner (VADER) methodology and the lexicon-driven TextBlob. Next, we classify the attitudes obtained from the cleaned data using the proposed hybrid model, which combines long short-term memory (LSTM) and bidirectional gated recurrent units (BiGRU) and is strengthened with multi-head attention, regularization, and dropout units to enhance performance. Topic modeling employing KMeans clustering and Latent Dirichlet Allocation (LDA) was used to analyze public behavior concerning DeepSeek. The perceptions show that 82.5% of the people are positive, 15.2% negative, and 2.3% neutral using TextBlob, and 82.8% positive, 16.1% negative, and 1.2% neutral using the VADER analysis. The slight difference in results confirms that both analyses concur in their overall perceptions while capturing distinct language peculiarities. The results indicate that the proposed model surpasses previous state-of-the-art approaches.
Keywords: DeepSeek; prediction; natural language processing; deep learning analysis; TextBlob; imbalanced data
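The lexicon-based annotation step described above can be sketched in a few lines. The tiny `LEXICON` below is an illustrative stand-in for the real VADER dictionary, and the ±0.05 thresholds follow VADER's usual convention for mapping a compound score to positive/negative/neutral labels; this is a minimal sketch, not the paper's pipeline.

```python
# Minimal lexicon-based sentiment labelling sketch in the spirit of the
# VADER/TextBlob annotation step. LEXICON is a toy stand-in (assumption)
# for the real VADER dictionary; the +/-0.05 compound thresholds follow
# VADER's usual labelling convention.
LEXICON = {"good": 0.5, "great": 0.8, "insightful": 0.6,
           "bad": -0.5, "slow": -0.3, "wrong": -0.6}

def compound_score(text):
    """Average the lexicon scores of the words found in the text."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def label(text, pos=0.05, neg=-0.05):
    """Map a compound score onto positive/negative/neutral labels."""
    score = compound_score(text)
    if score >= pos:
        return "positive"
    if score <= neg:
        return "negative"
    return "neutral"

print(label("deepseek gives great insightful answers"))  # positive
print(label("responses were slow and wrong"))            # negative
```

A real annotation run would substitute the full VADER lexicon (with its handling of negation, intensifiers, and punctuation) for the toy dictionary.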
Deep Learning-Based Natural Language Processing Model and Optical Character Recognition for Detection of Online Grooming on Social Networking Services
2
Authors: Sangmin Kim, Byeongcheon Lee, Muazzam Maqsood, Jihoon Moon, Seungmin Rho 《Computer Modeling in Engineering & Sciences》 2025, No. 5, pp. 2079-2108 (30 pages)
The increased accessibility of social networking services (SNSs) has facilitated communication and information sharing among users. However, it has also heightened concerns about digital safety, particularly for children and adolescents, who are increasingly exposed to online grooming crimes. Early and accurate identification of grooming conversations is crucial in preventing long-term harm to victims. However, research on grooming detection in South Korea remains limited, as existing models are trained primarily on English text and fail to reflect the unique linguistic features of SNS conversations, leading to inaccurate classifications. To address these issues, this study proposes a novel framework that integrates optical character recognition (OCR) technology with KcELECTRA, a deep learning-based natural language processing (NLP) model that shows excellent performance in processing colloquial Korean. In the proposed framework, the KcELECTRA model is fine-tuned on an extensive dataset, including Korean social media conversations, Korean ethical verification data from AI-Hub, and Korean hate speech data from HuggingFace, to enable more accurate classification of text extracted from social media conversation images. Experimental results show that the proposed framework achieves an accuracy of 0.953, outperforming existing transformer-based models. Furthermore, OCR technology shows high accuracy in extracting text from images, demonstrating that the proposed framework is effective for online grooming detection. The proposed framework is expected to contribute to more accurate detection of grooming text and the prevention of grooming-related crimes.
Keywords: online grooming; KcELECTRA; natural language processing; optical character recognition; social networking service; text classification
Unlocking the Potential: A Comprehensive Systematic Review of ChatGPT in Natural Language Processing Tasks
3
Authors: Ebtesam Ahmad Alomari 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024, No. 10, pp. 43-85 (43 pages)
As Natural Language Processing (NLP) continues to advance, driven by the emergence of sophisticated large language models such as ChatGPT, there has been a notable growth in research activity. This rapid uptake reflects increasing interest in the field and prompts critical inquiries into ChatGPT's applicability in the NLP domain. This review paper systematically investigates the role of ChatGPT in diverse NLP tasks, including information extraction, Named Entity Recognition (NER), event extraction, relation extraction, Part of Speech (PoS) tagging, text classification, sentiment analysis, emotion recognition, and text annotation. The novelty of this work lies in its comprehensive analysis of the existing literature, addressing a critical gap in understanding ChatGPT's adaptability, limitations, and optimal application. We employed a systematic stepwise approach following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework to direct our search process and identify relevant studies. Our review reveals ChatGPT's significant potential in enhancing various NLP tasks. Its adaptability in information extraction tasks, sentiment analysis, and text classification showcases its ability to comprehend diverse contexts and extract meaningful details. Additionally, ChatGPT's flexibility in annotation tasks reduces manual effort and accelerates the annotation process, making it a valuable asset in NLP development and research. Furthermore, GPT-4 and prompt engineering emerge as a complementary mechanism, empowering users to guide the model and enhance overall accuracy. Despite its promising potential, challenges persist. The performance of ChatGPT needs to be tested using more extensive datasets and diverse data structures. Moreover, its limitations in handling domain-specific language and the need for fine-tuning in specific applications highlight the importance of further investigation to address these issues.
Keywords: generative AI; large language model (LLM); natural language processing (NLP); ChatGPT; GPT (generative pre-training transformer); GPT-4; sentiment analysis; NER; information extraction; annotation; text classification
Literature classification and its applications in condensed matter physics and materials science by natural language processing
4
Authors: 吴思远, 朱天念, 涂思佳, 肖睿娟, 袁洁, 吴泉生, 李泓, 翁红明 《Chinese Physics B》 SCIE EI CAS CSCD 2024, No. 5, pp. 117-123 (7 pages)
The exponential growth of literature is constraining researchers' access to comprehensive information in related fields. While natural language processing (NLP) may offer an effective solution to literature classification, it remains hindered by the lack of labelled datasets. In this article, we introduce a novel method for generating literature classification models through semi-supervised learning, which can generate a labelled dataset iteratively with limited human input. We apply this method to train NLP models for classifying literature related to several research directions, i.e., battery, superconductor, topological material, and artificial intelligence (AI) in materials science. The trained NLP 'battery' model, applied to a larger dataset distinct from the training and testing datasets, achieves an F1 score of 0.738, which indicates the accuracy and reliability of this scheme. Furthermore, our approach demonstrates that even with insufficient data, the not-yet-well-trained model in the first few cycles can identify the relationships among different research fields and facilitate the discovery and understanding of interdisciplinary directions.
Keywords: natural language processing; text mining; materials science
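The iterative labelling loop described in this abstract can be sketched as self-training: a model fit on a small labelled seed set labels only the unlabelled items it is confident about, which are then folded back into the training set. The nearest-centroid "model", the one-dimensional feature, and the class names below are illustrative assumptions, not the paper's actual classifier or features.

```python
# Hedged sketch of iterative semi-supervised (self-training) labelling.
# The nearest-centroid model and 1-D "relevance" feature are assumptions
# for illustration only.

def centroid(points):
    return sum(points) / len(points)

def self_train(labelled, unlabelled, rounds=3, margin=0.5):
    """labelled: dict feature -> class; unlabelled: list of features.
    Each round, confidently classified items join the labelled set."""
    labelled = dict(labelled)
    for _ in range(rounds):
        pos = [x for x, y in labelled.items() if y == "battery"]
        neg = [x for x, y in labelled.items() if y == "other"]
        c_pos, c_neg = centroid(pos), centroid(neg)
        newly = {}
        for x in unlabelled:
            d = abs(x - c_pos) - abs(x - c_neg)
            if d < -margin:      # clearly closer to the positive centroid
                newly[x] = "battery"
            elif d > margin:     # clearly closer to the negative centroid
                newly[x] = "other"
        labelled.update(newly)
        unlabelled = [x for x in unlabelled if x not in newly]
    return labelled, unlabelled

seeds = {0.9: "battery", 0.1: "other"}
labels, still_unlabelled = self_train(seeds, [0.85, 0.15, 0.5])
print(labels[0.85], labels[0.15], still_unlabelled)  # battery other [0.5]
```

Note how the ambiguous item (0.5) is never force-labelled; in the paper's workflow such items are where the limited human input would be spent.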
Herding and investor sentiment after the cryptocurrency crash: evidence from Twitter and natural language processing
5
Authors: Michael Cary 《Financial Innovation》 2024, No. 1, pp. 425-447 (23 pages)
Although the 2022 cryptocurrency market crash prompted despair among investors, the rallying cry "wagmi" ("We're all gonna make it.") emerged among cryptocurrency enthusiasts in the aftermath. Did cryptocurrency enthusiasts respond to this crash differently compared to traditional investors? Using natural language processing techniques applied to Twitter data, this study employed a difference-in-differences method to determine whether the cryptocurrency market crash had a differential effect on investor sentiment among cryptocurrency enthusiasts relative to more traditional investors. The results indicate that the crash affected investor sentiment among cryptocurrency-enthusiastic investors differently from traditional investors. In particular, cryptocurrency enthusiasts' tweets became more neutral and, surprisingly, less negative. This result appears to be primarily driven by a deliberate, collectivist effort to promote positivity within the cryptocurrency community ("wagmi"). Considering the more nuanced emotional content of tweets, it appears that cryptocurrency enthusiasts expressed less joy and surprise in the aftermath of the crash than traditional investors. Moreover, cryptocurrency enthusiasts tweeted more frequently after the crash, with a relative increase in tweet frequency of approximately one tweet per day. An analysis of the specific textual content of tweets provides evidence of herding behavior among cryptocurrency enthusiasts.
Keywords: Bitcoin; cryptocurrency; herding; investor sentiment; natural language processing; sentiment analysis; Twitter
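The difference-in-differences design used here compares the change in mean sentiment among enthusiasts (treated group) with the change among traditional investors (control group) across the crash. A minimal sketch of the estimator, with made-up sentiment means (the numbers are illustrative assumptions, not the study's estimates):

```python
# Difference-in-differences sketch: the effect is the treated group's
# pre-to-post change minus the control group's pre-to-post change.

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD effect = (treated change) - (control change)."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical mean tweet sentiment (-1 = negative .. +1 = positive):
effect = did_estimate(treat_pre=-0.10, treat_post=0.05,
                      ctrl_pre=0.00, ctrl_post=-0.05)
print(round(effect, 2))  # 0.2
```

A positive estimate under these toy numbers would mirror the paper's finding that enthusiasts' tweets became relatively less negative after the crash; the actual study estimates this in a regression with controls rather than raw group means.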
Application of Natural Language Processing in Virtual Experience AI Interaction Design
6
Authors: Ziqian Rong 《Journal of Intelligent Learning Systems and Applications》 2024, No. 4, pp. 403-417 (15 pages)
This paper investigates the application of Natural Language Processing (NLP) in AI interaction design for virtual experiences. It analyzes the impact of various interaction methods on user experience, integrating Virtual Reality (VR) and Augmented Reality (AR) technologies to achieve more natural and intuitive interaction models through NLP techniques. Through experiments and data analysis across multiple technical models, this study proposes an innovative design solution based on natural language interaction and summarizes its advantages and limitations in immersive experiences.
Keywords: natural language processing; virtual reality; augmented reality; interaction design; user experience
Deep Learning with Natural Language Processing Enabled Sentimental Analysis on Sarcasm Classification (cited 2 times)
7
Authors: Abdul Rahaman Wahab Sait, Mohamad Khairi Ishak 《Computer Systems Science & Engineering》 SCIE EI 2023, No. 3, pp. 2553-2567 (15 pages)
Sentiment analysis (SA) is the procedure of recognizing the emotions related to data that exist in social networking. The existence of sarcasm in textual data is a major challenge to the efficiency of SA. Earlier works on sarcasm detection in text utilize lexical as well as pragmatic cues, namely interjections, punctuation, and sentiment shift, that are vital indicators of sarcasm. With the advent of deep learning, recent works leverage neural networks to learn lexical and contextual features, removing the need for handcrafted features. In this respect, this study designs a deep learning with natural language processing enabled SA (DLNLP-SA) technique for sarcasm classification. The proposed DLNLP-SA technique aims to detect and classify the occurrence of sarcasm in the input data. The DLNLP-SA technique comprises various sub-processes, namely preprocessing, feature vector conversion, and classification. Initially, preprocessing is performed in diverse ways such as single character removal, multi-space removal, URL removal, stopword removal, and tokenization. Secondly, the transformation into feature vectors takes place using the N-gram feature vector technique. Finally, a mayfly optimization (MFO) with multi-head self-attention based gated recurrent unit (MHSA-GRU) model is employed for the detection and classification of sarcasm. To verify the enhanced outcomes of the DLNLP-SA model, a comprehensive experimental investigation was performed on the News Headlines Dataset from the Kaggle repository, and the results signified its supremacy over existing approaches.
Keywords: sentiment analysis; sarcasm detection; deep learning; natural language processing; N-grams; hyperparameter tuning
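The N-gram feature-vector step named above can be sketched generically: after tokenization, each text is mapped to counts of its word n-grams. This is an illustration of the technique, not the paper's exact pipeline or parameters.

```python
# Generic word n-gram featurization sketch (counts of unigrams+bigrams).
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-token sequences, joined as strings."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def featurize(text, n_range=(1, 2)):
    tokens = text.lower().split()          # tokenization (an earlier step)
    feats = Counter()
    for n in range(n_range[0], n_range[1] + 1):
        feats.update(ngrams(tokens, n))
    return feats

v = featurize("oh great another monday")
print(v["great another"])  # 1  (a bigram that sarcasm cues can hinge on)
```

Bigrams and trigrams are what let a downstream classifier see short sarcastic collocations ("oh great") that unigram counts alone would miss.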
Sentence, Phrase, and Triple Annotations to Build a Knowledge Graph of Natural Language Processing Contributions—A Trial Dataset (cited 1 time)
8
Authors: Jennifer D'Souza, Sören Auer 《Journal of Data and Information Science》 CSCD 2021, No. 3, pp. 6-34 (29 pages)
Purpose: This work aims to normalize the NLPCONTRIBUTIONS scheme (henceforward, NLPCONTRIBUTIONGRAPH) to structure, directly from article sentences, the contributions information in Natural Language Processing (NLP) scholarly articles via a two-stage annotation methodology: 1) pilot stage, to define the scheme (described in prior work); and 2) adjudication stage, to normalize the graphing model (the focus of this paper). Design/methodology/approach: We re-annotate, a second time, the contributions-pertinent information across 50 previously annotated NLP scholarly articles in terms of a data pipeline comprising contribution-centered sentences, phrases, and triple statements. Specifically, care was taken in the adjudication annotation stage to reduce annotation noise while formulating the guidelines for our proposed novel NLP contributions structuring and graphing scheme. Findings: The application of NLPCONTRIBUTIONGRAPH to the 50 articles resulted in a dataset of 900 contribution-focused sentences, 4,702 contribution-information-centered phrases, and 2,980 surface-structured triples. The intra-annotation agreement between the first and second stages, in terms of F1 score, was 67.92% for sentences, 41.82% for phrases, and 22.31% for triple statements, indicating that with increased granularity of the information, the annotation decision variance is greater. Research limitations: NLPCONTRIBUTIONGRAPH has limited scope for structuring scholarly contributions compared with STEM (Science, Technology, Engineering, and Medicine) scholarly knowledge at large. Further, the annotation scheme in this work is designed by only an intra-annotator consensus: a single annotator first annotated the data to propose the initial scheme, following which the same annotator re-annotated the data to normalize the annotations in an adjudication stage. However, the expected goal of this work is to achieve a standardized retrospective model of capturing NLP contributions from scholarly articles. This would entail a larger initiative of enlisting multiple annotators to accommodate different worldviews into a "single" set of structures and relationships as the final scheme. Given that the initial scheme is being proposed for the first time, and given the complexity of the annotation task within a realistic timeframe, our intra-annotation procedure is well suited. Nevertheless, the model proposed in this work is presently limited, since it does not incorporate multiple annotator worldviews; this is planned as future work to produce a robust model. Practical implications: We demonstrate NLPCONTRIBUTIONGRAPH data integrated into the Open Research Knowledge Graph (ORKG), a next-generation KG-based digital library with intelligent computations enabled over structured scholarly knowledge, as a viable aid to assist researchers in their day-to-day tasks. Originality/value: NLPCONTRIBUTIONGRAPH is a novel scheme to annotate research contributions from NLP articles and integrate them in a knowledge graph, which to the best of our knowledge does not exist in the community. Furthermore, our quantitative evaluations over the two-stage annotation tasks offer insights into task difficulty.
Keywords: scholarly knowledge graphs; open science graphs; knowledge representation; natural language processing; semantic publishing
Natural Language Processing with Optimal Deep Learning-Enabled Intelligent Image Captioning System (cited 1 time)
9
Authors: Radwa Marzouk, Eatedal Alabdulkreem, Mohamed K. Nour, Mesfer Al Duhayyim, Mahmoud Othman, Abu Sarwar Zamani, Ishfaq Yaseen, Abdelwahed Motwakel 《Computers, Materials & Continua》 SCIE EI 2023, No. 2, pp. 4435-4451 (17 pages)
The recent developments in Multimedia Internet of Things (MIoT) devices, empowered with Natural Language Processing (NLP) models, seem to be a promising future for smart devices. NLP plays an important role in industrial models such as speech understanding, emotion detection, home automation, and so on. If an image needs to be captioned, then the objects in that image, their actions and connections, and any salient feature that remains under-projected or missing from the image should be identified. The aim of the image captioning process is to generate a caption for the image. In the next step, the image should be provided with one of the most significant and detailed descriptions that is syntactically as well as semantically correct. In this scenario, a computer vision model is used to identify the objects, and NLP approaches are followed to describe the image. The current study develops a Natural Language Processing with Optimal Deep Learning Enabled Intelligent Image Captioning System (NLPODL-IICS). The aim of the presented NLPODL-IICS model is to produce a proper description for an input image. To attain this, the proposed NLPODL-IICS follows two stages, namely encoding and decoding. Initially, at the encoding side, the proposed NLPODL-IICS model makes use of Hunger Games Search (HGS) with a Neural Search Architecture Network (NASNet) model, which represents the input data appropriately by inserting it into a vector of predefined length. During the decoding phase, a Chimp Optimization Algorithm (COA) with a deeper Long Short Term Memory (LSTM) approach is followed to concatenate the description sentences produced by the method. The application of the HGS and COA algorithms helps accomplish proper parameter tuning for the NASNet and LSTM models, respectively. The proposed NLPODL-IICS model was experimentally validated with the help of two benchmark datasets. A widespread comparative analysis confirmed the superior performance of the NLPODL-IICS model over other models.
Keywords: natural language processing; information retrieval; image captioning; deep learning; metaheuristics
Research on Text Mining of Syndrome Element Syndrome Differentiation by Natural Language Processing (cited 5 times)
10
Authors: DENG Wen-Xiang, ZHU Jian-Ping, LI Jing, YUAN Zhi-Ying, WU Hua-Ying, YAO Zhong-Hua, ZHANG Yi-Ge, ZHANG Wen-An, HUANG Hui-Yong 《Digital Chinese Medicine》 2019, No. 2, pp. 61-71 (11 pages)
Objective: Natural language processing (NLP) was used to excavate and visualize the core content of syndrome element syndrome differentiation (SESD). Methods: The first step was to build a text mining and analysis environment based on the Python language and to build a corpus from the core chapters of SESD. The second step was to digitalize the corpus; the main steps included word segmentation, information cleaning and merging, construction of a document-entry matrix, dictionary compilation, and information conversion. The third step was to mine and display the internal information of the SESD corpus by means of word clouds, keyword extraction, and visualization. Results: NLP played a positive role in computer recognition and comprehension of SESD. Different chapters had different keywords and weights. Deficiency syndrome elements were an important component of SESD, such as "Qi deficiency", "Yang deficiency", and "Yin deficiency". The important syndrome elements of substantiality included "Blood stasis", "Qi stagnation", etc. Core syndrome elements were closely related. Conclusions: Syndrome differentiation and treatment was the core of SESD. Using NLP to excavate syndrome differentiation could help reveal the internal relationships between syndrome differentiations and provide a basis for artificial intelligence to learn syndrome differentiation.
Keywords: syndrome element syndrome differentiation (SESD); natural language processing (NLP); diagnostics of TCM; artificial intelligence; text mining
Numerical-discrete-scheme-incorporated recurrent neural network for tasks in natural language processing (cited 1 time)
11
Authors: Mei Liu, Wendi Luo, Zangtai Cai, Xiujuan Du, Jiliang Zhang, Shuai Li 《CAAI Transactions on Intelligence Technology》 SCIE EI 2023, No. 4, pp. 1415-1424 (10 pages)
A variety of neural networks have been presented to deal with issues in deep learning in the last decades. Despite the prominent success achieved by neural networks, there is still a lack of theoretical guidance for designing an efficient neural network model, and verifying the performance of a model requires excessive resources. Previous research studies have demonstrated that many existing models can be regarded as different numerical discretizations of differential equations. This connection sheds light on designing an effective recurrent neural network (RNN) by resorting to numerical analysis. A simple RNN can be regarded as a discretization of the forward Euler scheme. Considering the limited solution accuracy of forward Euler methods, a Taylor-type discrete scheme is presented with lower truncation error, and a Taylor-type RNN (T-RNN) is designed with its guidance. Extensive experiments are conducted to evaluate its performance on statistical language modeling and emotion analysis tasks. The noticeable gains obtained by T-RNN demonstrate its superiority and the feasibility of designing neural network models using numerical methods.
Keywords: deep learning; natural language processing; neural network; text analysis
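The forward Euler connection mentioned in the abstract can be made concrete. For a hidden state governed by an ordinary differential equation $\dot{h}(t) = f(h(t))$, the two discretizations compare as follows; this is a standard numerical-analysis sketch, not the paper's exact T-RNN update rule.

```latex
% Forward Euler step: local truncation error O(\Delta t^2)
h_{k+1} = h_k + \Delta t\, f(h_k)
% Second-order Taylor-type step: one more Taylor term, error O(\Delta t^3)
h_{k+1} = h_k + \Delta t\, f(h_k) + \tfrac{\Delta t^2}{2}\, f'(h_k)\, f(h_k)
```

Keeping the extra term lowers the truncation error per step, which is the motivation the abstract gives for the Taylor-type recurrence.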
Word Embeddings and Semantic Spaces in Natural Language Processing (cited 2 times)
12
Authors: Peter J. Worth 《International Journal of Intelligence Science》 2023, No. 1, pp. 1-21 (21 pages)
One of the critical hurdles, and breakthroughs, in the field of Natural Language Processing (NLP) in the last two decades has been the development of techniques for text representation that solve the so-called curse of dimensionality, a problem which plagues NLP in general given that the feature set for learning starts as a function of the size of the language in question, upwards of hundreds of thousands of terms typically. As such, much of the research and development in NLP in the last two decades has been in finding and optimizing solutions to this problem, effectively addressing feature selection in NLP. This paper looks at the development of these various techniques, leveraging a variety of statistical methods which rest on linguistic theories advanced in the middle of the last century, namely the distributional hypothesis, which suggests that words found in similar contexts generally have similar meanings. In this survey paper we look at the development of some of the most popular of these techniques from a mathematical as well as data structure perspective, from Latent Semantic Analysis to Vector Space Models to their more modern variants, which are typically referred to as word embeddings. In this review of algorithms such as Word2Vec, GloVe, ELMo, and BERT, we explore the idea of semantic spaces more generally, beyond applicability to NLP.
Keywords: natural language processing; vector space models; semantic spaces; word embeddings; representation learning; text vectorization; machine learning; deep learning
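The distributional hypothesis discussed above is typically operationalized by representing each word as a dense vector and comparing words with cosine similarity in that semantic space. The three-dimensional toy vectors below are illustrative assumptions, not trained Word2Vec or GloVe embeddings.

```python
# Cosine similarity in a toy semantic space: words with similar contexts
# should end up with similar vectors. The 3-d vectors are made up for
# illustration; real embeddings have hundreds of dimensions.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

vec = {"king":   [0.9, 0.8, 0.1],
       "queen":  [0.85, 0.75, 0.2],
       "banana": [0.1, 0.05, 0.9]}

# Semantically related words score higher than unrelated ones:
print(round(cosine(vec["king"], vec["queen"]), 3),
      round(cosine(vec["king"], vec["banana"]), 3))
```

Cosine similarity, rather than Euclidean distance, is the standard comparison because it ignores vector magnitude and measures only direction in the semantic space.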
Research on the Automatic Pattern Abstraction and Recognition Methodology for Large-scale Database Systems Based on Natural Language Processing (cited 1 time)
13
Authors: Rong Wang, Cuizhen Jiao, Wenhua Dai 《International Journal of Technology Management》 2015, No. 9, pp. 125-127 (3 pages)
In this research paper, we study an automatic pattern abstraction and recognition method for large-scale database systems based on natural language processing. In a distributed database, data spread across different nodes, and even across regions, can be recognized through the network connections between nodes. Because database model design usually involves a large number of forms, and in order to reduce data redundancy, we combine NLP theory to optimize the traditional method. The experimental analysis and simulation prove the correctness of our method.
Keywords: pattern abstraction and recognition; database system; natural language processing
Automated labelling of radiology reports using natural language processing: Comparison of traditional and newer methods (cited 1 time)
14
Authors: Seo Yi Chng, Paul J. W. Tern, Matthew R. X. Kan, Lionel T. E. Cheng 《Health Care Science》 2023, No. 2, pp. 120-128 (9 pages)
Automated labelling of radiology reports using natural language processing allows for the labelling of ground truth for large datasets of radiological studies that are required for training computer vision models. This paper explains the necessary data preprocessing steps, reviews the main methods for automated labelling, and compares their performance. There are four main methods of automated labelling, namely: (1) rules-based text-matching algorithms, (2) conventional machine learning models, (3) neural network models, and (4) Bidirectional Encoder Representations from Transformers (BERT) models. Rules-based labellers perform a brute-force search against manually curated keywords and are able to achieve high F1 scores; however, they require proper handling of negation words. Machine learning models require preprocessing that involves tokenization and vectorization of text into numerical vectors. Multilabel classification approaches are required when labelling radiology reports, and conventional models can achieve good performance if they have large enough training sets. Deep learning models make use of connected neural networks, often a long short-term memory network, and are similarly able to achieve good performance if trained on a large dataset. BERT is a transformer-based model that utilizes attention. Pretrained BERT models only require fine-tuning with small datasets. In particular, domain-specific BERT models can achieve superior performance compared with the other methods for automated labelling.
Keywords: automated labelling; machine learning; natural language processing; neural network; radiology
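Method (1) above, a rules-based text matcher with the negation handling the paper flags as essential, can be sketched in a few lines. The finding and negation word lists are illustrative assumptions, not a curated clinical vocabulary such as those used by published labellers.

```python
# Rules-based report labeller sketch: keyword matching per sentence, with
# a simple negation check. FINDINGS and NEGATIONS are toy lists, not a
# curated clinical vocabulary.
import re

FINDINGS = {"pneumothorax", "effusion", "consolidation"}
NEGATIONS = ("no", "without", "negative for", "free of")

def label_report(report):
    """Return the findings asserted (i.e., not negated) in the report."""
    found = set()
    for sentence in re.split(r"[.;]", report.lower()):
        negated = any(re.search(r"\b" + re.escape(neg) + r"\b", sentence)
                      for neg in NEGATIONS)
        for term in FINDINGS:
            if term in sentence and not negated:
                found.add(term)
    return found

print(label_report("Small right effusion. No pneumothorax."))
# {'effusion'}
```

Splitting on sentence boundaries before checking negation is the crude trick that keeps "No pneumothorax" from suppressing the genuine effusion finding; production labellers use more precise negation scoping (e.g., NegEx-style windows).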
Natural Language Processing with Optimal Deep Learning Based Fake News Classification
15
Authors: Sara A. Althubiti, Fayadh Alenezi, Romany F. Mansour 《Computers, Materials & Continua》 SCIE EI 2022, No. 11, pp. 3529-3544 (16 pages)
The recent advancements made in the World Wide Web and social networking have eased the spread of fake news among people at a faster rate. Most of the time, the intention of fake news is to misinform people and create manipulated societal insights. The spread of low-quality news on social networking sites has a negative influence upon people as well as society. In order to overcome the ever-increasing dissemination of fake news, automated detection models are developed using Artificial Intelligence (AI) and Machine Learning (ML) methods. The latest advancements in Deep Learning (DL) models and complex Natural Language Processing (NLP) tasks make the former a significant solution for achieving Fake News Detection (FND). Against this background, the current study focuses on the design and development of a Natural Language Processing with Sea Turtle Foraging Optimization-based Deep Learning Technique for Fake News Detection and Classification (STODL-FNDC) model. The aim of the proposed STODL-FNDC model is to discriminate fake news from legitimate news in an effectual manner. In the proposed STODL-FNDC model, the input data primarily undergoes preprocessing and GloVe-based word embedding. Besides, the STODL-FNDC model employs a Deep Belief Network (DBN) approach for the detection as well as classification of fake news. Finally, the STO algorithm is utilized to adjust the hyperparameters of the DBN model in an optimal manner. The novelty of the study lies in the design of the STO algorithm with the DBN model for FND. In order to assess the detection performance of the STODL-FNDC technique, a series of simulations was carried out on benchmark datasets. The experimental outcomes established the better performance of the STODL-FNDC approach over other methods, with a maximum accuracy of 95.50%.
Keywords: natural language processing; text mining; fake news detection; deep belief network; machine learning; evolutionary algorithm
Spontaneous Language Analysis in Alzheimer's Disease: Evaluation of Natural Language Processing Technique for Analyzing Lexical Performance
16
Authors: Liu Ning, Yuan Zhenming. Journal of Shanghai Jiaotong University (Science), EI, 2022, No. 2, pp. 160-167 (8 pages)
Language disorder, a common manifestation of Alzheimer's disease (AD), has attracted widespread attention in recent years. This paper uses a novel natural language processing (NLP) method, compared with the latest deep learning technology, to detect AD and explore lexical performance. The proposed approach is based on two stages. First, the dialogue contents are summarized within each of the two categories. Second, the term frequency-inverse document frequency (TF-IDF) algorithm is used to extract the keywords of transcripts, and the similarity of keywords between the groups is calculated separately by cosine distance. Several deep learning methods are used to compare the performance. Meanwhile, the keywords with the best performance are used to analyze AD patients' lexical performance. In the Predictive Challenge of Alzheimer's Disease held by iFlytek in 2019, the proposed AD diagnosis model achieves a better performance in binary classification by adjusting the number of keywords. The F1 score of the model improves considerably over the baseline of 75.4%, and its training process is simple and efficient. We analyze the keywords of the model and find that AD patients use fewer nouns and verbs than normal controls. A computer-assisted AD diagnosis model on a small Chinese dataset is proposed in this paper, which provides a potential way of assisting the diagnosis of AD and analyzing lexical performance in clinical settings.
Keywords: natural language processing (NLP); Alzheimer's disease (AD); mild cognitive impairment; term frequency-inverse document frequency (TF-IDF); bag of words
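The second stage described above, TF-IDF keyword extraction followed by cosine similarity between keyword sets, can be sketched with standard formulas. This is a minimal from-scratch illustration, not the paper's implementation; the toy documents and the smoothed IDF variant are assumptions.

```python
import math
from collections import Counter

def tfidf_keywords(docs, top_k=3):
    """Score every term in each whitespace-tokenized document by TF-IDF
    (with a smoothed IDF) and keep the top_k terms per document."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc.split()))
    keywords = []
    for doc in docs:
        tf = Counter(doc.split())
        total = sum(tf.values())
        scores = {t: (c / total) * math.log((1 + n) / (1 + df[t]) + 1)
                  for t, c in tf.items()}
        keywords.append([t for t, _ in sorted(scores.items(),
                                              key=lambda kv: -kv[1])[:top_k]])
    return keywords

def cosine_similarity(words_a, words_b):
    """Cosine similarity between two keyword lists, treated as count vectors."""
    a, b = Counter(words_a), Counter(words_b)
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Illustrative transcripts standing in for the two diagnostic groups.
kw = tfidf_keywords(["he he went shop shop shop", "she read book book book"])
sim = cosine_similarity(kw[0], kw[1])
```

The group-level comparison in the paper would replace the toy strings with pooled AD and control transcripts.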
Automating Transfer Credit Assessment-A Natural Language Processing-Based Approach
17
Authors: Dhivya Chandrasekaran, Vijay Mago. Computers, Materials & Continua, SCIE EI, 2022, No. 11, pp. 2257-2274 (18 pages)
Student mobility or academic mobility involves students moving between institutions during their post-secondary education, and one of the challenging tasks in this process is to assess the transfer credits to be offered to the incoming student. In general, this process involves domain experts comparing the learning outcomes of the courses to decide on offering transfer credits to the incoming students. This manual implementation is not only labor-intensive but also influenced by undue bias and administrative complexity. The proposed research article focuses on identifying a model that exploits the advancements in the field of Natural Language Processing (NLP) to effectively automate this process. Given the unique structure, domain specificity, and complexity of learning outcomes (LOs), a need arises for designing a tailor-made model. The proposed model uses a clustering-inspired methodology based on knowledge-based semantic similarity measures to assess the taxonomic similarity of LOs, and a transformer-based semantic similarity model to assess the semantic similarity of the LOs. The similarity between LOs is further aggregated to form course-to-course similarity. Due to the lack of quality benchmark datasets, a new benchmark dataset containing seven course-to-course similarity measures is proposed. Understanding the inherent need for flexibility in the decision-making process, the aggregation part of the model offers tunable parameters to accommodate different levels of leniency. While providing an efficient model to assess the similarity between courses with existing resources, this research work also steers future research attempts to apply NLP in the field of articulation in an ideal direction by highlighting the persisting research gaps.
Keywords: articulation agreements; higher education; natural language processing
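The aggregation step above, rolling LO-level similarities up into one course-to-course score with a tunable leniency parameter, can be sketched as follows. The specific aggregation rule (interpolating between a strict mean of best matches and a lenient max) is an assumption for illustration, not the paper's actual formula.

```python
def course_similarity(lo_sim_matrix, leniency=0.5):
    """Aggregate a learning-outcome similarity matrix (rows: incoming
    course LOs, columns: receiving course LOs, entries in [0, 1]) into a
    single course-to-course score. 'leniency' in [0, 1] interpolates
    between a strict policy (every LO must find a good match) and a
    lenient one (a single strong match suffices)."""
    best_per_lo = [max(row) for row in lo_sim_matrix]  # best match per incoming LO
    strict = sum(best_per_lo) / len(best_per_lo)
    lenient = max(best_per_lo)
    return (1 - leniency) * strict + leniency * lenient

# Two incoming LOs, each compared against two receiving LOs.
matrix = [[0.9, 0.2],
          [0.1, 0.8]]
score = course_similarity(matrix, leniency=0.0)  # strict: mean of best matches
```

A transfer-credit office could then threshold this score, with the threshold and leniency exposed as the tunable parameters the abstract mentions.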
Eliciting Requirements from Stakeholders’ Responses Using Natural Language Processing
18
Authors: Mohammed Lafi, Bilal Hawashin, Shadi AlZu'bi. Computer Modeling in Engineering & Sciences, SCIE EI, 2021, No. 4, pp. 99-116 (18 pages)
Most software systems have different stakeholders with a variety of concerns. The process of collecting requirements from a large number of stakeholders is vital but challenging. We propose an efficient, automatic approach to collecting requirements from different stakeholders' responses to a specific question. We use natural language processing techniques to find the stakeholder response that best represents most other stakeholders' responses. This study improves existing practices in three ways: firstly, it reduces the human effort needed to collect the requirements; secondly, it reduces the time required to carry out this task with a large number of stakeholders; thirdly, it underlines the importance of using data mining techniques in various software engineering steps. Our approach uses tokenization, stop word removal, and word lemmatization to create a list of frequently occurring words. It then creates a similarity matrix to calculate the score value for each response and selects the answer with the highest score. Our experiments show that using this approach significantly reduces the time and effort needed to collect requirements and does so with a sufficient degree of accuracy.
Keywords: software requirements; requirements elicitation; natural language processing
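The pipeline described above, preprocess each response, build a pairwise similarity matrix, and pick the highest-scoring response, can be sketched as below. The stop-word list, the word-overlap similarity, and the sample responses are illustrative assumptions; the paper additionally lemmatizes, which is omitted here for brevity.

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "to", "of", "and", "is", "we", "it"}  # assumed subset

def preprocess(text):
    """Lowercase, tokenize, and drop stop words."""
    return [t for t in re.findall(r"[a-z]+", text.lower()) if t not in STOP_WORDS]

def overlap(a, b):
    """Word-overlap similarity between two token lists, in [0, 1]."""
    ca, cb = Counter(a), Counter(b)
    shared = sum((ca & cb).values())
    return shared / max(len(a), len(b), 1)

def most_representative(responses):
    """Score each response by its summed similarity to all other
    responses (one row of the similarity matrix) and return the one
    that best represents the group."""
    tokens = [preprocess(r) for r in responses]
    scores = [sum(overlap(tokens[i], tokens[j])
                  for j in range(len(tokens)) if j != i)
              for i in range(len(tokens))]
    return responses[scores.index(max(scores))]

responses = [
    "The system must export reports",
    "We need to export reports quickly",
    "Please add a dark mode",
]
best = most_representative(responses)
```

The outlier request ("dark mode") scores lowest, so one of the two export-related responses is selected as representative.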
Performance Analysis of Cross-Site Scripting Based on Natural Language Processing
19
Authors: Mengda Xu, Luqun Li. Journal of Harbin Institute of Technology (New Series), CAS, 2022, No. 4, pp. 19-25 (7 pages)
With the acceleration of network communication in the 5G era, the volume of data communication in cyberspace has increased unprecedentedly, and the speed of data transmission will continue to accelerate. Consequently, the security of network communication data becomes an increasingly serious concern. Among these threats, malicious cross-site scripting leading to the leakage of user information is particularly serious. This article uses a URL attribute analysis method and YARA rules to process cross-site scripting data, and classifies it with a long short-term memory (LSTM) model. The results show that the LSTM classification model adopted in this paper has a higher recall rate and F1-score than other machine learning methods, which proves that the method adopted in this paper is feasible.
Keywords: cross-site scripting; network communication; web security; natural language processing
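The evaluation metrics the abstract relies on, recall and F1-score for a binary XSS classifier, can be computed from scratch as below. The labels and predictions are invented for illustration; the paper's LSTM model itself is not reproduced.

```python
def recall_f1(y_true, y_pred, positive=1):
    """Recall and F1-score for binary predictions
    (1 = malicious cross-site script, 0 = benign)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return recall, f1

# Hypothetical labels: three malicious URLs, two benign ones.
y_true = [1, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1]
recall, f1 = recall_f1(y_true, y_pred)
```

Recall is emphasized in this setting because a missed malicious script (a false negative) is typically costlier than a false alarm.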
Difficulty Discrimination of Interpretation Teaching Materials Based on Analytic Hierarchy Process and Natural Language Processing
20
Authors: Yingfei Xiong. International Journal of Technology Management, 2016, No. 9, pp. 41-43 (3 pages)
Difficulty discrimination is an important step in the autonomous design and development of interpreting teaching materials, which is related to the scientific nature of the materials, teaching effectiveness, and sequential teaching progress. In this paper, we focus on the difficulty discrimination of interpretation teaching materials on the basis of the analytic hierarchy process and natural language processing. We analyze several factors which affect interpretation teaching materials, and we introduce the theories of the analytic hierarchy process and natural language processing, which provide an intuitive and credible operational basis.
Keywords: analytic hierarchy process; interpretation teaching materials; natural language processing; difficulty discrimination
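The core computation of the analytic hierarchy process named above is deriving priority weights for the difficulty factors from a pairwise-comparison matrix. A minimal sketch using the standard geometric-mean approximation follows; the two factors compared below (e.g. speech rate vs. vocabulary difficulty) are illustrative, not taken from the paper.

```python
import math

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix using the
    geometric-mean (approximate principal eigenvector) method. Entry
    pairwise[i][j] says how much more important factor i is than j on
    Saaty's 1-9 scale, with pairwise[j][i] = 1 / pairwise[i][j]."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]  # row geometric means
    total = sum(gm)
    return [g / total for g in gm]  # normalize so the weights sum to 1

# Factor 0 judged 3x as important as factor 1 (hypothetical judgment).
weights = ahp_weights([[1, 3],
                       [1 / 3, 1]])
```

For larger matrices one would also compute Saaty's consistency ratio to check that the expert judgments are not self-contradictory before trusting the weights. (`math.prod` requires Python 3.8+.)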