Abstract: As Natural Language Processing (NLP) continues to advance, driven by the emergence of sophisticated large language models such as ChatGPT, there has been a notable growth in research activity. This rapid uptake reflects increasing interest in the field and prompts critical inquiries into ChatGPT's applicability in the NLP domain. This review paper systematically investigates the role of ChatGPT in diverse NLP tasks, including information extraction, Named Entity Recognition (NER), event extraction, relation extraction, Part-of-Speech (PoS) tagging, text classification, sentiment analysis, emotion recognition, and text annotation. The novelty of this work lies in its comprehensive analysis of the existing literature, addressing a critical gap in understanding ChatGPT's adaptability, limitations, and optimal applications. In this paper, we employed a systematic stepwise approach following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework to direct our search process and identify relevant studies. Our review reveals ChatGPT's significant potential in enhancing various NLP tasks. Its adaptability in information extraction, sentiment analysis, and text classification showcases its ability to comprehend diverse contexts and extract meaningful details. Additionally, ChatGPT's flexibility in annotation tasks reduces manual effort and accelerates the annotation process, making it a valuable asset in NLP development and research. Furthermore, GPT-4 and prompt engineering emerge as complementary mechanisms, empowering users to guide the model and enhance overall accuracy. Despite its promising potential, challenges persist. The performance of ChatGPT needs to be tested on more extensive datasets and diverse data structures. Moreover, its limitations in handling domain-specific language and the need for fine-tuning in specific applications highlight the importance of further investigation to address these issues.
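The review's observation that prompt engineering helps users guide the model toward a specific NLP task lends itself to a small illustration. The Python sketch below is not drawn from any of the reviewed studies: the model id, the prompt wording, and the use of the openai>=1.0 client are assumptions made purely for demonstration of zero-shot prompting on a sentiment-analysis task.

```python
# Minimal sketch (illustrative assumptions, not a reviewed study's code):
# zero-shot prompting of a chat model for sentiment classification.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_sentiment(text: str) -> str:
    """Ask the model to label a sentence as positive, negative, or neutral."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model id; any chat-capable model could be used
        messages=[
            {"role": "system",
             "content": "You are a sentiment classifier. "
                        "Answer with exactly one word: positive, negative, or neutral."},
            {"role": "user", "content": text},
        ],
        temperature=0,  # deterministic output for evaluation
    )
    return response.choices[0].message.content.strip().lower()

print(classify_sentiment("The new annotation workflow saved us hours of manual effort."))
```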
Funding: Supported by the National Natural Science Foundation of China (No. 81874429), the Digital and Applied Research Platform for Diagnosis of Traditional Chinese Medicine (No. 49021003005), the 2018 Hunan Provincial Postgraduate Research Innovation Project (No. CX2018B465), and the Excellent Youth Project of Hunan Education Department in 2018 (No. 18B241).
Abstract: Objective Natural language processing (NLP) was used to excavate and visualize the core content of syndrome element syndrome differentiation (SESD). Methods The first step was to build a text mining and analysis environment based on the Python language and to build a corpus from the core chapters of SESD. The second step was to digitize the corpus; the main steps included word segmentation, information cleaning and merging, construction of a document-term matrix, dictionary compilation, and information conversion. The third step was to mine and display the internal information of the SESD corpus by means of word clouds, keyword extraction, and visualization. Results NLP played a positive role in computer recognition and comprehension of SESD. Different chapters had different keywords and weights. Deficiency syndrome elements were an important component of SESD, such as "Qi deficiency", "Yang deficiency", and "Yin deficiency". The important excess syndrome elements included "Blood stasis", "Qi stagnation", etc. Core syndrome elements were closely related. Conclusions Syndrome differentiation and treatment was the core of SESD. Using NLP to excavate syndrome differentiation could help reveal the internal relationships within syndrome differentiation and provide a basis for artificial intelligence to learn it.
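The processing steps named in the Methods (word segmentation, a document-term matrix, keyword extraction, and word-cloud visualization) can be illustrated with a short Python sketch. The snippet below shows those generic steps using jieba, scikit-learn, and wordcloud as assumed libraries; the two chapter texts are placeholders, not the SESD corpus, and the font path must point to an installed Chinese font.

```python
# A minimal sketch of the corpus-processing pipeline described above,
# under assumed libraries and placeholder texts (not the SESD corpus).
import jieba
from sklearn.feature_extraction.text import TfidfVectorizer
from wordcloud import WordCloud

chapters = {
    "chapter_qi": "气虚则乏力，气短，神疲。",        # placeholder text
    "chapter_yang": "阳虚则畏寒，肢冷，面色苍白。",  # placeholder text
}

# Word segmentation: turn each chapter into space-separated tokens.
segmented = {name: " ".join(jieba.lcut(text)) for name, text in chapters.items()}

# Document-term matrix with TF-IDF weights; per-chapter keywords are the
# highest-weighted terms in each row.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(segmented.values())
terms = vectorizer.get_feature_names_out()
for name, row in zip(segmented, matrix.toarray()):
    top_keywords = sorted(zip(terms, row), key=lambda t: -t[1])[:3]
    print(name, top_keywords)

# Word-cloud visualization (a Chinese font path must be supplied on most systems).
wc = WordCloud(font_path="simhei.ttf").generate(" ".join(segmented.values()))
wc.to_file("sesd_wordcloud.png")
```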
Funding: Supported by the Natural Science Foundation of Zhejiang Province (No. GF20F020063) and the Fujian Province Young and Middle-Aged Teacher Education Research Project (No. JAT170480).
Abstract: Language disorder, a common manifestation of Alzheimer's disease (AD), has attracted widespread attention in recent years. This paper uses a novel natural language processing (NLP) method, compared with the latest deep learning technology, to detect AD and explore lexical performance. Our proposed approach is based on two stages. First, the dialogue contents are summarized into two categories. Second, the term frequency-inverse document frequency (TF-IDF) algorithm is used to extract the keywords of the transcripts, and the similarity of keywords between the groups is calculated by cosine distance. Several deep learning methods are used to compare performance. Meanwhile, the keywords with the best performance are used to analyze AD patients' lexical performance. In the Predictive Challenge of Alzheimer's Disease held by iFlytek in 2019, the proposed AD diagnosis model achieves better performance in binary classification by adjusting the number of keywords. The F1 score of the model shows a considerable improvement over the baseline of 75.4%, and its training process is simple and efficient. We analyze the keywords of the model and find that AD patients use fewer nouns and verbs than normal controls. A computer-assisted AD diagnosis model on a small Chinese dataset is proposed in this paper, which provides a potential way to assist the diagnosis of AD and analyze lexical performance in clinical settings.
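The keyword stage of this approach (TF-IDF keyword extraction followed by cosine similarity between the two groups) can be sketched in a few lines. The Python snippet below is an illustrative simplification, not the authors' code: the transcripts are placeholders and the keyword count is a tunable assumption.

```python
# Minimal sketch of the keyword stage: per-group TF-IDF keywords, then the
# cosine similarity between the two groups' keyword vectors (placeholder data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ad_transcripts = ["um the boy is uh taking the the cookie"]        # placeholder
control_transcripts = ["the boy is taking a cookie from the jar"]  # placeholder

# Treat each group as one document so TF-IDF weights contrast the two groups.
vectorizer = TfidfVectorizer(max_features=20)  # number of keywords is tunable
tfidf = vectorizer.fit_transform([" ".join(ad_transcripts),
                                  " ".join(control_transcripts)])

keywords = vectorizer.get_feature_names_out()
similarity = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
print("shared keyword vocabulary:", list(keywords))
print("cosine similarity between groups:", round(similarity, 3))
```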
Abstract: With the improvement of computing power and the popularization of intelligent devices, society is gradually entering an era of intelligence. As the literature and information center of a university, the university library needs an intelligent transformation to improve its service quality. Therefore, drawing on intelligent question-answering technology, this article designs a library intelligent question-answering system based on Natural Language Processing (NLP), innovating the library's reference-consultation service model and improving the library's service level and efficiency.
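For orientation, one common way to realize such a reference-consultation QA component is semantic retrieval over an FAQ bank. The sketch below is an assumption, not the system described in the article; the sentence-transformer model id and the FAQ entries are placeholders.

```python
# Minimal sketch (illustrative, not the article's system): retrieve the answer
# of the FAQ question most similar to the patron's query.
from sentence_transformers import SentenceTransformer, util

faq = {
    "图书怎么续借？": "登录图书馆门户，在“我的借阅”中点击续借。",
    "图书馆的开放时间是什么？": "工作日 8:00-22:00 开放。",
}

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # assumed model id
questions = list(faq)
question_emb = model.encode(questions, convert_to_tensor=True)

def answer(query: str) -> str:
    """Return the stored answer whose question is semantically closest to the query."""
    query_emb = model.encode(query, convert_to_tensor=True)
    best = util.cos_sim(query_emb, question_emb).argmax().item()
    return faq[questions[best]]

print(answer("还书日期到了还能延长吗？"))
```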
Abstract: With the rapid development of artificial intelligence, Natural Language Processing (NLP) technology has been widely applied across many fields. This article proposes a method for associating knowledge points with a question bank in an intelligent training system based on NLP technology. The method uses NLP to perform text analysis on training materials, automatically extracts knowledge points, builds an association model between the knowledge points and the question bank, and thereby automates the assignment of exam questions. The method can effectively improve the intelligence level of the training system and improve training efficiency and quality.
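The extraction-and-association idea can be illustrated with a short sketch. The Python snippet below is an illustrative assumption rather than the paper's implementation: TF-IDF keywords extracted with jieba stand in for knowledge points, and each question is linked to the knowledge point whose keywords it overlaps most; the material and questions are placeholders.

```python
# Minimal sketch: keyword-based knowledge points and a simple overlap-based
# association between questions and knowledge points (placeholder data).
import jieba
import jieba.analyse

material = {
    "梯度下降": "梯度下降通过沿负梯度方向更新参数来最小化损失函数。",      # placeholder
    "注意力机制": "注意力机制让每个词元根据相关性加权其他词元的表示。",    # placeholder
}
question_bank = ["参数为什么要沿负梯度方向更新？", "词元之间如何加权彼此的表示？"]

# Knowledge points: top TF-IDF keywords of each material section.
knowledge_points = {name: set(jieba.analyse.extract_tags(text, topK=5))
                    for name, text in material.items()}

# Associate each question with the knowledge point sharing the most keywords.
for question in question_bank:
    tokens = set(jieba.lcut(question))
    best = max(knowledge_points, key=lambda k: len(knowledge_points[k] & tokens))
    print(question, "->", best)
```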
Abstract: In the digital era, intelligent voice quality inspection has become an important tool for enterprises to improve work efficiency, and Natural Language Processing (NLP) technology provides its technical support. Through sentiment analysis, semantic analysis, and other techniques, NLP makes the quality-inspection process more efficient and accurate while reducing its cost. On this basis, this article discusses the advantages of applying NLP technology to intelligent voice quality inspection and the specific ways it can be implemented.
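As a small illustration of the sentiment-analysis step mentioned above, the sketch below scores transcribed utterances with the SnowNLP library and flags likely negative ones for manual review. It is an assumption for illustration, not the article's system; the threshold and sample sentences are invented.

```python
# Minimal sketch: flag transcribed utterances with a low positive-sentiment
# score for manual quality review (placeholder transcript, assumed threshold).
from snownlp import SnowNLP

transcript = [
    "您好，很高兴为您服务。",                        # greeting
    "这个问题我们没办法处理，你自己想办法吧。",      # likely negative tone
]

for sentence in transcript:
    score = SnowNLP(sentence).sentiments  # probability that the sentence is positive
    flag = "REVIEW" if score < 0.4 else "OK"
    print(f"{flag}  {score:.2f}  {sentence}")
```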
Abstract: The Chinese Named Entity Recognition (NER) task aims to extract the entities contained in unstructured text and assign them predefined entity categories. To address the insufficient semantic learning of most Chinese NER methods when contextual information is lacking, a NER framework that hierarchically fuses multiple kinds of knowledge is proposed: HTLR (Chinese NER method based on Hierarchical Transformer fusing Lexicon and Radical). It helps the model learn richer and more comprehensive contextual and semantic information through hierarchically fused multi-source knowledge. First, potential words in the corpus are identified and vectorized using a published Chinese lexicon and word-vector table, and the semantic relations between words and their related characters are modeled through optimized positional encoding, so that the model learns Chinese lexical knowledge. Second, the corpus is converted into code sequences representing glyph information using the glyph-based character encoding published by the Handian website, and an RFE-CNN (Radical Feature Extraction-Convolutional Neural Network) model is proposed to extract glyph knowledge. Finally, a Hierarchical Transformer model is proposed, in which lower-level modules separately learn the semantic relations between characters and words and between characters and glyphs, and a higher-level module further fuses the character, word, and glyph knowledge, helping the model learn semantically richer character representations. Experiments were conducted on the public Weibo, Resume, MSRA, and OntoNotes 4.0 datasets. Compared with the mainstream method NFLAT (Non-Flat-LAttice Transformer for Chinese named entity recognition), the F1 score of the proposed method improves by 9.43, 0.75, 1.76, and 6.45 percentage points on the four datasets, respectively, reaching state-of-the-art performance. This shows that multi-source semantic knowledge, hierarchical fusion, the RFE-CNN structure, and the Hierarchical Transformer structure are effective for learning rich semantic knowledge and improving model performance.
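The hierarchical-fusion idea (low-level modules relating characters to lexicon and to radical/glyph features, and a high-level module fusing both views) can be sketched in PyTorch. The snippet below is a highly simplified illustration under assumed tensor shapes, not the authors' HTLR implementation.

```python
# Simplified sketch of hierarchical fusion of character, lexicon, and radical
# features (illustrative assumption, not the HTLR code).
import torch
import torch.nn as nn

class HierarchicalFusion(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        # Low-level modules: characters attend to lexicon / radical knowledge.
        self.char_lex_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.char_rad_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # High-level module: fuse the two knowledge-enriched character views.
        self.fuse = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, char_emb, lex_emb, rad_emb):
        # Characters query matched lexicon-word embeddings (word knowledge) ...
        char_lex, _ = self.char_lex_attn(char_emb, lex_emb, lex_emb)
        # ... and glyph embeddings (e.g. from an RFE-CNN-style extractor).
        char_rad, _ = self.char_rad_attn(char_emb, rad_emb, rad_emb)
        # Concatenate both views, project back, and fuse with a higher layer.
        fused = self.proj(torch.cat([char_lex, char_rad], dim=-1))
        return self.fuse(fused)

chars = torch.randn(2, 10, 128)    # batch of 2 sentences, 10 characters each
lexicon = torch.randn(2, 6, 128)   # matched lexicon-word embeddings
radicals = torch.randn(2, 10, 128) # per-character glyph features
print(HierarchicalFusion()(chars, lexicon, radicals).shape)  # torch.Size([2, 10, 128])
```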
Abstract: ReLM (Rephrasing Language Model) is currently a leading model for Chinese Spelling Correction (CSC). To address its insufficient feature representation in complex semantic scenarios, a deep-semantic-feature-enhanced ReLM, FeReLM (Feature-enhanced Rephrasing Language Model), is proposed. The model uses Depthwise Separable Convolution (DSC) to fuse the deep semantic features generated by the feature-extraction model BGE (BAAI General Embeddings) with the overall features generated by ReLM, effectively improving the model's ability to parse complex contexts and its accuracy in detecting and correcting spelling errors. First, FeReLM is trained on the Wang271K dataset so that the model continuously learns the deep semantics and complex expressions in sentences; then the trained weights are transferred, applying the learned knowledge to new datasets with fine-tuning. Experimental results show that, compared with models such as ReLM, MCRSpell (Metric learning of Correct Representation for Chinese Spelling Correction), and RSpell (Retrieval-augmented Framework for Domain Adaptive Chinese Spelling Check) on the ECSpell and MCSC datasets, FeReLM improves key metrics such as precision, recall, and F1 score by 0.6 to 28.7 percentage points. In addition, ablation experiments verify the effectiveness of the proposed method.
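The fusion mechanism described above (a depthwise separable convolution combining BGE-style sentence features with the base model's token features) can be sketched briefly. The PyTorch snippet below is a simplified assumption, not the released FeReLM code; the dimensions and inputs are placeholders.

```python
# Simplified sketch of depthwise-separable-convolution feature fusion
# (illustrative assumption, not the FeReLM implementation).
import torch
import torch.nn as nn

class DepthwiseSeparableFusion(nn.Module):
    def __init__(self, dim: int = 768, kernel: int = 3):
        super().__init__()
        channels = 2 * dim  # concatenation of token features and sentence features
        # Depthwise convolution: one filter per channel (groups=channels).
        self.depthwise = nn.Conv1d(channels, channels, kernel,
                                   padding=kernel // 2, groups=channels)
        # Pointwise 1x1 convolution mixes channels and projects back to dim.
        self.pointwise = nn.Conv1d(channels, dim, kernel_size=1)

    def forward(self, token_feats, sent_feat):
        # token_feats: (batch, seq_len, dim); sent_feat: (batch, dim)
        sent = sent_feat.unsqueeze(1).expand_as(token_feats)
        x = torch.cat([token_feats, sent], dim=-1).transpose(1, 2)  # (B, 2*dim, L)
        return self.pointwise(self.depthwise(x)).transpose(1, 2)    # (B, L, dim)

relm_feats = torch.randn(4, 32, 768)  # placeholder base-model token features
bge_feats = torch.randn(4, 768)       # placeholder BGE-style sentence embedding
print(DepthwiseSeparableFusion()(relm_feats, bge_feats).shape)  # torch.Size([4, 32, 768])
```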