Abstract: In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the Large Language Model (LLM) LLaMA and the NLP model BERT, in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and the application of few-shot learning, the study addresses the subtleties of sentiment expression in airline-related text data. Employing predictive modeling and comparative analysis, the research evaluates the effectiveness of Large Language Model Meta AI (LLaMA) and Bidirectional Encoder Representations from Transformers (BERT) in capturing sentiment intricacies. Fine-tuning, including domain adaptation, enhances the models' performance on sentiment classification tasks. Additionally, the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis. By conducting experiments on a diverse airline review dataset, the research quantifies the impact of fine-tuning, domain adaptation, and few-shot learning on model performance, providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content (UGC). This research contributes to refining sentiment analysis models, ultimately fostering improved customer satisfaction in the airline industry.
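As one plausible illustration of the fine-tuning step this abstract describes, the sketch below fine-tunes a BERT classifier on airline review sentiment with the Hugging Face transformers library; the toy reviews, three-way label scheme, and hyperparameters are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch: fine-tuning BERT for 3-class airline review sentiment.
# The tiny inline dataset and hyperparameters are illustrative only.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

texts = ["The flight was delayed and the crew was rude.",
         "Average service, nothing special.",
         "Fantastic crew and a smooth landing!"]
labels = [0, 1, 2]  # 0 = negative, 1 = neutral, 2 = positive (illustrative)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)

def tokenize(batch):
    # Pad/truncate reviews to a fixed length for batching.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

ds = Dataset.from_dict({"text": texts, "label": labels}).map(tokenize, batched=True)

args = TrainingArguments(output_dir="bert-airline-sentiment",
                         num_train_epochs=3,
                         per_device_train_batch_size=16,
                         learning_rate=2e-5)

Trainer(model=model, args=args, train_dataset=ds).train()
```

Domain adaptation, as mentioned in the abstract, would typically amount to continuing pretraining or fine-tuning on airline-specific text before this supervised step.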
Funding: Financially supported by the National Key R&D Program of China (No. 2022YFF0711601), the Natural Science Foundation of Hubei Province of China (No. 2022CFB640), the Open Fund of the Key Laboratory of Urban Land Resources Monitoring and Simulation, Ministry of Natural Resources (No. KF-2022-07-014), the Opening Fund of the Hubei Key Laboratory of Intelligent Vision-Based Monitoring for Hydroelectric Engineering (No. 2022SDSJ04), and the Beijing Key Laboratory of Urban Spatial Information Engineering (No. 20220108).
Abstract: Geological knowledge can provide support for knowledge discovery, knowledge inference, and mineralization prediction from geological big data. Entity identification and relationship extraction from geological description text are key links in constructing knowledge graphs. Given the lack of publicly annotated datasets in the geology domain, this paper illustrates the construction process of geological entity datasets, defines the types of entities and inter-concept relationships using a geological entity concept system, and completes the construction of a geological corpus. To address the shortcomings of existing language models (such as Word2vec and GloVe), which cannot handle polysemous words and fuse context poorly, we propose a joint geological named entity recognition and relationship extraction model built on the Bidirectional Encoder Representations from Transformers (BERT) pretrained language model. To represent the text features effectively, we construct a BERT-bidirectional gated recurrent unit (BiGRU)-conditional random field (CRF) architecture to extract named entities and a BERT-BiGRU-Attention architecture to extract entity relations. The results show that the F1-score of the BERT-BiGRU-CRF named entity recognition model is 0.91 and the F1-score of the BERT-BiGRU-Attention relationship extraction model is 0.84, significant performance improvements over classic language models (e.g., Word2vec and Embeddings from Language Models (ELMo)).
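The BERT-BiGRU-CRF tagger described above could be wired together roughly as in the sketch below, written in PyTorch and assuming the pytorch-crf package for the CRF layer; the hidden size, tag count, and base checkpoint are illustrative rather than the authors' exact configuration.

```python
# Rough sketch of a BERT-BiGRU-CRF model for geological named entity recognition.
# Assumes Hugging Face transformers and the pytorch-crf package.
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF

class BertBiGRUCRF(nn.Module):
    def __init__(self, num_tags: int, bert_name: str = "bert-base-chinese",
                 gru_hidden: int = 256):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.bigru = nn.GRU(self.bert.config.hidden_size, gru_hidden,
                            batch_first=True, bidirectional=True)
        self.emissions = nn.Linear(2 * gru_hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        # Contextual token embeddings from BERT, refined by a BiGRU,
        # then projected to per-tag emission scores for the CRF.
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        hidden, _ = self.bigru(hidden)
        emissions = self.emissions(hidden)
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence.
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        # Inference: Viterbi-decoded tag sequences.
        return self.crf.decode(emissions, mask=mask)
```

The relation-extraction branch described in the abstract would replace the CRF head with an attention-pooled classification layer over entity pairs.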
Abstract: In this paper, we explore the multi-class classification of acupuncture acupoints based on the BERT model, i.e., we try to recommend the best main acupuncture point for treating a disease by classifying and predicting the main acupuncture point for that disease, and we further explore its acupoint grouping to provide medical practitioners with an optimal treatment plan and improve clinical decision-making. The Bert-Chinese-Acupoint model was constructed by retraining on the basis of the BERT model; semantic features related to acupuncture points were added to the acupoint corpus during fine-tuning, and the model was compared with machine learning methods. The results show that the Bert-Chinese-Acupoint model proposed in this paper improves accuracy by 3% over the best-performing machine learning model.
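One common way to inject domain terms such as acupoint names into a pretrained Chinese BERT before fine-tuning, loosely in the spirit of the retraining described above, is to extend the tokenizer vocabulary and resize the embedding matrix. The sketch below uses Hugging Face transformers; the example acupoint tokens and class count are assumptions for illustration, not the paper's actual vocabulary or label set.

```python
# Minimal sketch: adding domain-specific acupoint terms to a Chinese BERT
# tokenizer so they are not split into sub-tokens during fine-tuning.
# The token list and num_labels are illustrative only.
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained("bert-base-chinese",
                                                      num_labels=10)

acupoint_terms = ["足三里", "合谷", "内关"]           # example acupoint names
num_added = tokenizer.add_tokens(acupoint_terms)      # extend the vocabulary
model.resize_token_embeddings(len(tokenizer))         # grow embeddings to match

print(f"Added {num_added} domain tokens; new vocab size: {len(tokenizer)}")
# Fine-tuning then proceeds as usual on the acupoint corpus, e.g. via the Trainer API.
```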
Funding: Funded by the Scientific Research Deanship at the University of Hail, Saudi Arabia, through Project Number RG-23092.
Abstract: Cyberbullying on social media poses significant psychological risks, yet most detection systems oversimplify the task by focusing on binary classification, ignoring nuanced categories like passive-aggressive remarks or indirect slurs. To address this gap, we propose a hybrid framework combining Term Frequency-Inverse Document Frequency (TF-IDF), word-to-vector (Word2Vec), and Bidirectional Encoder Representations from Transformers (BERT)-based models for multi-class cyberbullying detection. Our approach integrates TF-IDF for lexical specificity and Word2Vec for semantic relationships, fused with BERT's contextual embeddings to capture syntactic and semantic complexities. We evaluate the framework on a publicly available dataset of 47,000 annotated social media posts across five cyberbullying categories: age, ethnicity, gender, religion, and indirect aggression. Among the BERT variants tested, BERT Base Uncased achieved the highest performance, with 93% accuracy (±1% standard deviation across 5-fold cross-validation) and an average AUC of 0.96, outperforming standalone TF-IDF (78%) and Word2Vec (82%) models. Notably, it achieved near-perfect AUC scores (0.99) for age- and ethnicity-based bullying. A comparative analysis with state-of-the-art benchmarks, including Generative Pre-trained Transformer 2 (GPT-2) and Text-to-Text Transfer Transformer (T5) models, highlights BERT's superiority in handling ambiguous language. This work advances cyberbullying detection by demonstrating how hybrid feature extraction and transformer models improve multi-class classification, offering a scalable solution for moderating nuanced harmful content.
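The feature fusion sketched below is one plausible reading of the hybrid framework above: TF-IDF vectors, averaged Word2Vec embeddings, and the BERT [CLS] embedding are concatenated per post and fed to a simple classifier. It assumes scikit-learn, gensim, and Hugging Face transformers; the toy posts, label scheme, and choice of logistic regression are illustrative assumptions, not the authors' setup.

```python
# Rough sketch of hybrid TF-IDF + Word2Vec + BERT feature fusion for
# multi-class cyberbullying detection. Corpus and labels are toy examples.
import numpy as np
import torch
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from gensim.models import Word2Vec
from transformers import AutoTokenizer, AutoModel

posts = ["you are too old to understand this",
         "people from that country are all the same",
         "have a great day everyone"]
labels = [0, 1, 2]  # e.g. 0 = age, 1 = ethnicity, 2 = not bullying (illustrative)

# 1) Lexical features: TF-IDF.
tfidf = TfidfVectorizer()
x_tfidf = tfidf.fit_transform(posts).toarray()

# 2) Semantic features: averaged Word2Vec vectors per post.
tokenized = [p.split() for p in posts]
w2v = Word2Vec(sentences=tokenized, vector_size=50, min_count=1, epochs=20)
x_w2v = np.array([np.mean([w2v.wv[w] for w in toks], axis=0) for toks in tokenized])

# 3) Contextual features: BERT [CLS] embeddings.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    enc = tok(posts, padding=True, truncation=True, return_tensors="pt")
    x_bert = bert(**enc).last_hidden_state[:, 0, :].numpy()  # [CLS] token

# Fuse by concatenation and train a downstream classifier.
x_fused = np.hstack([x_tfidf, x_w2v, x_bert])
clf = LogisticRegression(max_iter=1000).fit(x_fused, labels)
print(clf.predict(x_fused))
```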
Abstract: In public opinion discovery and prediction on social media such as Weibo, "fake hotspots" manufactured by online water armies (paid posters) degrade analysis accuracy. To reflect the true heat of Weibo public opinion, a Weibo public opinion heat analysis and prediction model, BXpre, is proposed that combines BERT (Bidirectional Encoder Representations from Transformers) with the X-means algorithm, fusing the attribute features of participating users with the temporal features of heat change to improve prediction accuracy. First, the original Weibo posts and interacting users' data are preprocessed, and a fine-tuned StructBERT model classifies these data to determine the relevance between interacting users and the original post, which serves as a reference value when computing each user's contribution weight to the post's heat growth. Second, the X-means algorithm clusters interacting users by their features, water-army accounts are filtered out based on the homogeneity of the resulting clusters, and a weight penalty mechanism for water-army samples is introduced; combined with label relevance, a Weibo heat index model is then constructed. Finally, future heat changes are predicted by computing the cosine similarity between the second derivative of the prior heat value over time and the real data. Experimental results show that the heat rankings output by BXpre are closer to the real data across different user scales. Under mixed-scale test conditions, BXpre achieves a prediction correlation of 90.88%, which is 12.71, 14.80, and 11.30 percentage points higher than three traditional methods based on the Long Short-Term Memory (LSTM) network, the eXtreme Gradient Boosting (XGBoost) algorithm, and time-series difference ranking (TDR), respectively, and 9.76 and 11.95 percentage points higher than ChatGPT and ERNIE Bot (文心一言), respectively.
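The final prediction step described above, comparing the second derivative of a prior heat curve with observed data via cosine similarity, might be sketched roughly as follows with NumPy; the heat series values are made-up illustrations, not data from the paper.

```python
# Minimal sketch: compare the second-order change (acceleration) of a prior heat
# curve with observed heat data using cosine similarity. Values are illustrative.
import numpy as np

def second_derivative(series: np.ndarray) -> np.ndarray:
    """Discrete second derivative of a time series (constant unit time step)."""
    return np.diff(series, n=2)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

prior_heat = np.array([10.0, 18.0, 30.0, 55.0, 90.0, 120.0])    # modeled prior heat
observed_heat = np.array([12.0, 20.0, 33.0, 50.0, 85.0, 118.0])  # observed heat

sim = cosine_similarity(second_derivative(prior_heat),
                        second_derivative(observed_heat))
print(f"Cosine similarity of heat accelerations: {sim:.3f}")
# A similarity close to 1 suggests the prior curve's trend can be used to
# extrapolate upcoming heat values; a low value suggests re-estimating the prior.
```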