Topic modeling is a fundamental technique of content analysis in natural language processing, widely applied in domains such as the social sciences and finance. In the era of digital communication, social scientists increasingly rely on large-scale social media data to explore public discourse, collective behavior, and emerging social concerns. However, traditional models like Latent Dirichlet Allocation (LDA) and neural topic models like BERTopic struggle to capture deep semantic structures in short-text datasets, especially in complex non-English languages like Chinese. This paper presents Generative Language Model Topic (GLMTopic), a novel hybrid topic modeling framework that leverages the capabilities of large language models and is designed to support social science research by uncovering coherent and interpretable themes from Chinese social media platforms. GLMTopic integrates Adaptive Community-enhanced Graph Embedding for advanced semantic representation, Uniform Manifold Approximation and Projection (UMAP)-based dimensionality reduction, Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) clustering, and large language model (LLM)-powered representation tuning to generate more contextually relevant and interpretable topics. By reducing dependence on extensive text preprocessing and on human expert intervention for post-analysis topic labeling, GLMTopic enables a fully automated and user-friendly topic extraction process. Experimental evaluations on a social media dataset sourced from Weibo demonstrate that GLMTopic outperforms LDA and BERTopic in coherence score and in usability through automated interpretation, providing a more scalable and semantically accurate solution for Chinese topic modeling. Future research will explore optimizing computational efficiency, integrating knowledge graphs and sentiment analysis into more sophisticated workflows, and extending the framework to real-time and multilingual topic modeling.
Modeling topics in short texts presents significant challenges due to feature sparsity, particularly when analyzing content generated by large-scale online users. This sparsity can substantially impair the accuracy of semantic capture. We propose a novel approach that incorporates pre-clustered knowledge into the BERTopic model while reducing the ℓ2 norm of low-frequency words. Our method effectively mitigates feature sparsity during cluster mapping. Empirical evaluation on the StackOverflow dataset demonstrates that our approach outperforms baseline models, achieving superior Macro-F1 scores. These results validate the effectiveness of our proposed feature sparsity reduction technique for short-text topic modeling.
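The abstract does not give the exact weighting scheme, but the general idea of shrinking the ℓ2 norm of low-frequency word vectors so that rare, sparse features dominate document representations less can be sketched as follows; the vocabulary, counts, and `min_count` threshold are invented for illustration.

```python
# Hypothetical sketch of norm-based down-weighting for rare words.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["python", "pandas", "dataframe", "frobnicate"]  # toy vocabulary
counts = np.array([500, 120, 80, 2])                     # corpus frequencies
vectors = rng.normal(size=(4, 8))                        # toy word embeddings

# Scale factor in (0, 1]: frequent words keep their norm, rare ones shrink.
min_count = 5
scale = np.minimum(1.0, counts / min_count)
scaled = vectors * scale[:, None]

norm = np.linalg.norm
assert norm(scaled[3]) < norm(vectors[3])   # rare word shrunk
assert np.allclose(scaled[0], vectors[0])   # frequent word untouched
```

A document embedding built by averaging the scaled vectors is then pulled less toward one-off rare terms, which is the mitigation the abstract describes.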
Traditionally, exam preparation involves manually analyzing past question papers to identify and prioritize key topics. This research proposes a data-driven solution to automate that process using techniques such as Document Layout Segmentation, Optical Character Recognition (OCR), and Latent Dirichlet Allocation (LDA) for topic modelling. The study aims to develop a system that uses machine learning and topic modelling to identify and rank key topics from historical exam papers, aiding students in efficient exam preparation. The research addresses the difficulty of exam preparation caused by the manual, labour-intensive process of analyzing past papers to identify and prioritize key topics. The approach is designed to streamline and optimize exam preparation, making it easier for students to focus on the most relevant topics and thereby use their effort more effectively. The process involves three stages: (i) Document Layout Segmentation and Data Preparation, using deep learning techniques to separate text from non-textual content in past exam papers; (ii) Text Extraction and Processing, using OCR to convert images into machine-readable text; and (iii) Topic Modelling with LDA to identify the key topics covered in the exams. The research demonstrates the effectiveness of the proposed method in identifying and prioritizing key topics from exam papers: the LDA model successfully extracts relevant themes, helping students focus their study efforts. By leveraging machine learning and topic modelling, the system offers a data-driven and efficient way for students to prioritize their studying. Future work includes expanding the dataset size to further improve model accuracy. Additionally, integration with educational platforms holds potential for personalized recommendations and adaptive learning experiences.
This article applies the BERTopic model to mine topics from medical popular-science articles on the Haodf Online (好大夫在线) platform, aiming to make it more efficient for patients to retrieve medical information and to help medical practitioners accurately track how medical topics evolve, thereby advancing healthcare. To handle the large volume and strong domain specificity of medical texts, the study combines data preprocessing, the pre-trained embedding model ERNIE-Health, and careful tuning of model parameters, effectively addressing the limitations of the traditional LDA (Latent Dirichlet Allocation) model on medical text processing tasks. Experimental results show that the BERTopic model identified 220 research topics; evaluated with the OCTIS (Optimizing and Comparing Topic models Is Simple) framework, it achieved a topic diversity score of 0.662 and a coherence score of 0.991, substantially improving the accuracy and reliability of topic mining. This research is of significant value for deep knowledge mining in medical big data.
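The topic diversity metric reported for this study is commonly computed as the proportion of unique words among the top-k words across all topics; a score near 1 means topics share few words. The topic word lists below are invented for illustration, not taken from the paper.

```python
# Topic diversity, OCTIS-style: unique top words / total top words.
topics = [
    ["diabetes", "insulin", "glucose", "diet"],
    ["hypertension", "blood", "pressure", "diet"],
    ["vaccine", "immunity", "influenza", "dose"],
]

all_words = [w for topic in topics for w in topic]
diversity = len(set(all_words)) / len(all_words)
print(round(diversity, 3))  # 11 unique of 12 total -> 0.917
```

Here only "diet" repeats across topics, so diversity is high; heavily overlapping topic word lists would drive the score toward zero.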
Funding: supported by the Natural Science Foundation of Fujian Province, China, grant No. 2022J05291.