Background:In mental health,recovery is emphasized,and qualitative analyses of service users’narratives have accumulated;however,while qualitative approaches excel at capturing rich context and generating new concept...Background:In mental health,recovery is emphasized,and qualitative analyses of service users’narratives have accumulated;however,while qualitative approaches excel at capturing rich context and generating new concepts,they are limited in generalizability and feasible data volume.This study aimed to quantify the subjective life history narratives of users of psychiatric home-visit nursing using natural language processing(NLP)and to clarify the relationships between linguistic features and recovery-related indicators.Methods:We conducted audio-recorded and transcribed semi-structured interviews on daily life verbatim and collected self-report questionnaires(Recovery Assessment Scale[RAS])and clinician ratings(Global Assessment of Functioning[GAF])from Japanese users of psychiatric home-visit nursing.Using the artificial intelligence-based topic-modeling method BERTopic,we extracted topics from the interview texts and calculated each participant’s topic proportions,and then examined associations between topic proportions and recovery-related indicators using Pearson correlation analyses.Results:“School”showed a significant positive correlation with RAS(r=0.39,p=0.05),whereas“Family”showed a significant negative correlation(r=–0.46,p=0.02).GAF was positively correlated with word count(r=0.44,p=0.02)and“Hospital”(r=0.42,p=0.03),and negatively correlated with“Backchannels”(aizuchi)(r=–0.41,p=0.03).Conclusion:The present results suggest that the quantity,quality,and content of narratives can serve as useful indicators of mental health and recovery,and that objective NLP-based analysis of service users’narratives can complement traditional self-report scales and clinician ratings to inform the design of recovery-oriented care in psychiatric home-visit nursing.展开更多
Recently,diffusion models have emerged as a promising paradigm for molecular design and optimization.However,most diffusion-based molecular generative models focus on modeling 2D graphs or 3D geom-etries,with limited ...Recently,diffusion models have emerged as a promising paradigm for molecular design and optimization.However,most diffusion-based molecular generative models focus on modeling 2D graphs or 3D geom-etries,with limited research on molecular sequence diffusion models.The International Union of Pure and Applied Chemistry(IUPAC)names are more akin to chemical natural language than the simplified molecular input line entry system(SMILES)for organic compounds.In this work,we apply an IUPAC-guided conditional diffusion model to facilitate molecular editing from chemical natural language to chemical language(SMILES)and explore whether the pre-trained generative performance of diffusion models can be transferred to chemical natural language.We propose DiffIUPAC,a controllable molecular editing diffusion model that converts IUPAC names to SMILES strings.Evaluation results demonstrate that our model out-performs existing methods and successfully captures the semantic rules of both chemical languages.Chemical space and scaffold analysis show that the model can generate similar compounds with diverse scaffolds within the specified constraints.Additionally,to illustrate the model’s applicability in drug design,we conducted case studies in functional group editing,analogue design and linker design.展开更多
DeepSeek Chinese artificial intelligence(AI)open-source model,has gained a lot of attention due to its economical training and efficient inference.DeepSeek,a model trained on large-scale reinforcement learning without...DeepSeek Chinese artificial intelligence(AI)open-source model,has gained a lot of attention due to its economical training and efficient inference.DeepSeek,a model trained on large-scale reinforcement learning without supervised fine-tuning as a preliminary step,demonstrates remarkable reasoning capabilities of performing a wide range of tasks.DeepSeek is a prominent AI-driven chatbot that assists individuals in learning and enhances responses by generating insightful solutions to inquiries.Users possess divergent viewpoints regarding advanced models like DeepSeek,posting both their merits and shortcomings across several social media platforms.This research presents a new framework for predicting public sentiment to evaluate perceptions of DeepSeek.To transform the unstructured data into a suitable manner,we initially collect DeepSeek-related tweets from Twitter and subsequently implement various preprocessing methods.Subsequently,we annotated the tweets utilizing the Valence Aware Dictionary and sentiment Reasoning(VADER)methodology and the lexicon-driven TextBlob.Next,we classified the attitudes obtained from the purified data utilizing the proposed hybrid model.The proposed hybrid model consists of long-term,shortterm memory(LSTM)and bidirectional gated recurrent units(BiGRU).To strengthen it,we include multi-head attention,regularizer activation,and dropout units to enhance performance.Topic modeling employing KMeans clustering and Latent Dirichlet Allocation(LDA),was utilized to analyze public behavior concerning DeepSeek.The perceptions demonstrate that 82.5%of the people are positive,15.2%negative,and 2.3%neutral using TextBlob,and 82.8%positive,16.1%negative,and 1.2%neutral using the VADER analysis.The slight difference in results ensures that both analyses concur with their overall perceptions and may have distinct views of language peculiarities.The results indicate that the proposed model surpassed previous state-of-the-art approaches.展开更多
The increasing frequency and severity of natural disasters,exacerbated by global warming,necessitate novel solutions to strengthen the resilience of Critical Infrastructure Systems(CISs).Recent research reveals the si...The increasing frequency and severity of natural disasters,exacerbated by global warming,necessitate novel solutions to strengthen the resilience of Critical Infrastructure Systems(CISs).Recent research reveals the sig-nificant potential of natural language processing(NLP)to analyze unstructured human language during disasters,thereby facilitating the uncovering of disruptions and providing situational awareness supporting various aspects of resilience regarding CISs.Despite this potential,few studies have systematically mapped the global research on NLP applications with respect to supporting various aspects of resilience of CISs.This paper contributes to the body of knowledge by presenting a review of current knowledge using the scientometric review technique.Using 231 bibliographic records from the Scopus and Web of Science core collections,we identify five key research areas where researchers have used NLP to support the resilience of CISs during natural disasters,including sentiment analysis,crisis informatics,data and knowledge visualization,disaster impacts,and content analysis.Furthermore,we map the utility of NLP in the identified research focus with respect to four aspects of resilience(i.e.,preparedness,absorption,recovery,and adaptability)and present various common techniques used and potential future research directions.This review highlights that NLP has the potential to become a supplementary data source to support the resilience of CISs.The results of this study serve as an introductory-level guide designed to help scholars and practitioners unlock the potential of NLP for strengthening the resilience of CISs against natural disasters.展开更多
The increased accessibility of social networking services(SNSs)has facilitated communication and information sharing among users.However,it has also heightened concerns about digital safety,particularly for children a...The increased accessibility of social networking services(SNSs)has facilitated communication and information sharing among users.However,it has also heightened concerns about digital safety,particularly for children and adolescents who are increasingly exposed to online grooming crimes.Early and accurate identification of grooming conversations is crucial in preventing long-term harm to victims.However,research on grooming detection in South Korea remains limited,as existing models trained primarily on English text and fail to reflect the unique linguistic features of SNS conversations,leading to inaccurate classifications.To address these issues,this study proposes a novel framework that integrates optical character recognition(OCR)technology with KcELECTRA,a deep learning-based natural language processing(NLP)model that shows excellent performance in processing the colloquial Korean language.In the proposed framework,the KcELECTRA model is fine-tuned by an extensive dataset,including Korean social media conversations,Korean ethical verification data from AI-Hub,and Korean hate speech data from Hug-gingFace,to enable more accurate classification of text extracted from social media conversation images.Experimental results show that the proposed framework achieves an accuracy of 0.953,outperforming existing transformer-based models.Furthermore,OCR technology shows high accuracy in extracting text from images,demonstrating that the proposed framework is effective for online grooming detection.The proposed framework is expected to contribute to the more accurate detection of grooming text and the prevention of grooming-related crimes.展开更多
Artificial intelligence technologies are rapidly evolving,with generative AI advancements—particularly those driven by large models—drawing significant attention.Large model technologies will play a pivotal role in ...Artificial intelligence technologies are rapidly evolving,with generative AI advancements—particularly those driven by large models—drawing significant attention.Large model technologies will play a pivotal role in railway intelligent operation and maintenance(O&M)by leveraging natural language as the medium.Based on the multi-source and heterogeneous data characteristics of railway infrastructure,this study investigates data analysis methods and application scenarios for railway infrastructure O&M leveraging large natural language models.An overall architecture is proposed for intelligent O&M of railway infrastructure,centered on railway large natural language models and featuring multi-source model synergy.This architecture is developed through a detailed analysis of O&M knowledge sources and structures,as well as data analysis requirements spanning the entire life cycle of railway infrastructure.These railwayspecific models are employed to derive railway intelligent O&M scenario models,which are driven by intelligent agent technologies and integrate traditional models,knowledge graphs,and other technologies to empower railway intelligent O&M.Further research focuses on key technologies,including the fine-tuning of railway large natural language models,retrievalaugmented generation,and AI agent technologies.These technologies are combined with the capabilities inherent in large natural language models—such as logical reasoning,content generation,and intelligent decision-making—to explore applications of large natural language models in inspection,repair,and maintenance of railway infrastructure,management of equipment maintenance information,equipment condition inspection,fault handling and emergency response in accidents,and intelligent O&M decision-making.展开更多
The natural language processing(NLP)domain has witnessed significant advancements with the emergence of transformer-based models,which have reshaped the text understanding and generation landscape.While their capabili...The natural language processing(NLP)domain has witnessed significant advancements with the emergence of transformer-based models,which have reshaped the text understanding and generation landscape.While their capabilities are well recognized,there remains a limited systematic synthesis of how these models perform across tasks,scale efficiently,adapt to domains,and address ethical challenges.Therefore,the aim of this paper was to analyze the performance of transformer-based models across various NLP tasks,their scalability,domain adaptation,and the ethical implications of such models.This meta-analysis paper synthesizes findings from 25 peer-reviewed studies on NLP transformer-based models,adhering to the PRISMA framework.Relevant papers were sourced from electronic databases,including IEEE Xplore,Springer,ACM Digital Library,Elsevier,PubMed,and Google Scholar.The findings highlight the superior performance of transformers over conventional approaches,attributed to selfattention mechanisms and pre-trained language representations.Despite these advantages,challenges such as high computational costs,data bias,and hallucination persist.The study provides new perspectives by underscoring the necessity for future research to optimize transformer architectures for efficiency,address ethical AI concerns,and enhance generalization across languages.This paper contributes valuable insights into the current trends,limitations,and potential improvements in transformer-based models for NLP.展开更多
Organizations often use sentiment analysis-based systems or even resort to simple manual analysis to try to extract useful meaning from their customers’general digital“chatter”.Driven by the need for a more accurat...Organizations often use sentiment analysis-based systems or even resort to simple manual analysis to try to extract useful meaning from their customers’general digital“chatter”.Driven by the need for a more accurate way to qualitatively extract valuable product and brand-oriented consumer-generated texts,this paper experimentally tests the ability of an NLP-based analytics approach to extract information from highly unstructured texts.The results show that natural language processing outperforms sentiment analysis for detecting issues from social media data.Surprisingly,the experiment shows that sentiment analysis is not only better than manual analysis of social media data for the goal of supporting organizational decision-making,but may also be disadvantageous for such efforts.展开更多
The malicious dissemination of hate speech via compromised accounts,automated bot networks and malware-driven social media campaigns has become a growing cybersecurity concern.Automatically detecting such content in S...The malicious dissemination of hate speech via compromised accounts,automated bot networks and malware-driven social media campaigns has become a growing cybersecurity concern.Automatically detecting such content in Spanish is challenging due to linguistic complexity and the scarcity of annotated resources.In this paper,we compare two predominant AI-based approaches for the forensic detection of malicious hate speech:(1)finetuning encoder-only models that have been trained in Spanish and(2)In-Context Learning techniques(Zero-and Few-Shot Learning)with large-scale language models.Our approach goes beyond binary classification,proposing a comprehensive,multidimensional evaluation that labels each text by:(1)type of speech,(2)recipient,(3)level of intensity(ordinal)and(4)targeted group(multi-label).Performance is evaluated using an annotated Spanish corpus,standard metrics such as precision,recall and F1-score and stability-oriented metrics to evaluate the stability of the transition from zero-shot to few-shot prompting(Zero-to-Few Shot Retention and Zero-to-Few Shot Gain)are applied.The results indicate that fine-tuned encoder-only models(notably MarIA and BETO variants)consistently deliver the strongest and most reliable performance:in our experiments their macro F1-scores lie roughly in the range of approximately 46%–66%depending on the task.Zero-shot approaches are much less stable and typically yield substantially lower performance(observed F1-scores range approximately 0%–39%),often producing invalid outputs in practice.Few-shot prompting(e.g.,Qwen 38B,Mistral 7B)generally improves stability and recall relative to pure zero-shot,bringing F1-scores into a moderate range of approximately 20%–51%but still falling short of fully fine-tuned models.These findings highlight the importance of supervised adaptation and discuss the potential of both paradigms as components in AI-powered cybersecurity and malware forensics systems designed to identify and mitigate coordinated online hate campaigns.展开更多
The outstanding growth in the applications of large language models(LLMs)demonstrates the significance of adaptive and efficient prompt engineering tactics.The existing methods may not be variable,vigorous and streaml...The outstanding growth in the applications of large language models(LLMs)demonstrates the significance of adaptive and efficient prompt engineering tactics.The existing methods may not be variable,vigorous and streamlined in different domains.The offered study introduces an immediate optimization outline,named PROMPTx-PE,that is going to yield a greater level of precision and strength when it comes to the assignments that are premised on LLM.The proposed systemfeatures a timely selection schemewhich is informed by reinforcement learning,a contextual layer and a dynamic weighting module which is regulated by Lyapunov-based stability guidelines.The PROMPTx-PE dynamically varies the exploration and exploitation of the prompt space,depending on real-time feedback and multi-objective reward development.Extensive testing on both benchmark(GLUE,SuperGLUE)and domain-specific data(Healthcare-QA and Industrial-NER)demonstrates a large best performance to be 89.4%and a strong robustness disconnect with under 3%computation expense.The results confirm the effectiveness,consistency,and scalability of PROMPTx-PE as a platform of adaptive prompt engineering based on recent uses of LLMs.展开更多
The rapid proliferation of multimodal misinformation on social media demands detection frameworks that are not only accurate but also robust to noise,adversarial manipulation,and semantic inconsistency between modalit...The rapid proliferation of multimodal misinformation on social media demands detection frameworks that are not only accurate but also robust to noise,adversarial manipulation,and semantic inconsistency between modalities.Existing multimodal fake news detection approaches often rely on deterministic fusion strategies,which limits their ability to model uncertainty and complex cross-modal dependencies.To address these challenges,we propose Q-ALIGNer,a quantum-inspired multimodal framework that integrates classical feature extraction with quantumstate encoding,learnable cross-modal entanglement,and robustness-aware training objectives.The proposed framework adopts quantumformalism as a representational abstraction,enabling probabilisticmodeling ofmultimodal alignment while remaining fully executable on classical hardware.Q-ALIGNer is evaluated on four widely used benchmark datasets—FakeNewsNet,Fakeddit,Weibo,and MediaEval VMU—covering diverse platforms,languages,and content characteristics.Experimental results demonstrate consistent performance improvements over strong text-only,vision-only,multimodal,and quantum-inspired baselines,including BERT,RoBERTa,XLNet,ResNet,EfficientNet,ViT,Multimodal-BERT,ViLBERT,and QEMF.Q-ALIGNer achieves accuracies of 91.2%,92.9%,91.7%,and 92.1%on FakeNewsNet,Fakeddit,Weibo,and MediaEval VMU,respectively,with F1-score gains of 3–4 percentage points over QEMF.Robustness evaluation shows a reduced adversarial accuracy gap of 2.6%,compared to 7%–9%for baseline models,while calibration analysis indicates improved reliability with an expected calibration error of 0.031.In addition,computational analysis shows that Q-ALIGNer reduces training time to 19.6 h compared to 48.2 h for QEMF at a comparable parameter scale.These results indicate that quantum-inspired alignment and entanglement can enhance robustness,uncertainty awareness,and efficiency in multimodal fake news detection,positioning Q-ALIGNer as a principled and practical content-centric framework for misinformation analysis.展开更多
Building reliable intent-based,task-oriented dialog systems typically requires substantial manual effort:designers must derive intents,entities,responses,and control logic from raw conversational data,then iterate unt...Building reliable intent-based,task-oriented dialog systems typically requires substantial manual effort:designers must derive intents,entities,responses,and control logic from raw conversational data,then iterate until the assistant behaves consistently.This paper investigates how far large language models(LLMs)can automate this development.In this paper,we use two reference corpora,Let’s Go(English,public transport)and MEDIA(French,hotel booking),to prompt four LLM families(GPT-4o,Claude,Gemini,Mistral Small)and generate the core specifications required by the rasa platform.These include intent sets with example utterances,entity definitions with slot mappings,response templates,and basic dialog flows.To structure this process,we introduce a model-and platform-agnostic pipelinewith two phases.The first normalizes and validates LLM-generated artifacts,enforcing crossfile consistency andmaking slot usage explicit.The second uses a lightweight dialog harness that runs scripted tests and incrementally patches failure points until conversations complete reliably.Across eight projects,all models required some targeted repairs before training.After applying our pipeline,all reached≥70%task completion(many above 84%),while NLU performance ranged from mid-0.6 to 1.0 macro-F1 depending on domain breadth.These results show that,with modest guidance,current LLMs can produce workable end-to-end dialog prototypes directly fromraw transcripts.Our main contributions are:(i)a reusable bootstrap method aligned with industry domain-specific languages(DSLs),(ii)a small set of high-impact corrective patterns,and(iii)a simple but effective harness for closed-loop refinement across conversational platforms.展开更多
Objective expertise evaluation of individuals,as a prerequisite stage for team formation,has been a long-term desideratum in large software development companies.With the rapid advancements in machine learning methods...Objective expertise evaluation of individuals,as a prerequisite stage for team formation,has been a long-term desideratum in large software development companies.With the rapid advancements in machine learning methods,based on reliable existing data stored in project management tools’datasets,automating this evaluation process becomes a natural step forward.In this context,our approach focuses on quantifying software developer expertise by using metadata from the task-tracking systems.For this,we mathematically formalize two categories of expertise:technology-specific expertise,which denotes the skills required for a particular technology,and general expertise,which encapsulates overall knowledge in the software industry.Afterward,we automatically classify the zones of expertise associated with each task a developer has worked on using Bidirectional Encoder Representations from Transformers(BERT)-like transformers to handle the unique characteristics of project tool datasets effectively.Finally,our method evaluates the proficiency of each software specialist across already completed projects from both technology-specific and general perspectives.The method was experimentally validated,yielding promising results.展开更多
With the rapid development of digital culture,a large number of cultural texts are presented in the form of digital and network.These texts have significant characteristics such as sparsity,real-time and non-standard ...With the rapid development of digital culture,a large number of cultural texts are presented in the form of digital and network.These texts have significant characteristics such as sparsity,real-time and non-standard expression,which bring serious challenges to traditional classification methods.In order to cope with the above problems,this paper proposes a new ASSC(ALBERT,SVD,Self-Attention and Cross-Entropy)-TextRCNN digital cultural text classification model.Based on the framework of TextRCNN,the Albert pre-training language model is introduced to improve the depth and accuracy of semantic embedding.Combined with the dual attention mechanism,the model’s ability to capture and model potential key information in short texts is strengthened.The Singular Value Decomposition(SVD)was used to replace the traditional Max pooling operation,which effectively reduced the feature loss rate and retained more key semantic information.The cross-entropy loss function was used to optimize the prediction results,making the model more robust in class distribution learning.The experimental results indicate that,in the digital cultural text classification task,as compared to the baseline model,the proposed ASSC-TextRCNN method achieves an 11.85%relative improvement in accuracy and an 11.97%relative increase in the F1 score.Meanwhile,the relative error rate decreases by 53.18%.This achievement not only validates the effectiveness and advanced nature of the proposed approach but also offers a novel technical route and methodological underpinnings for the intelligent analysis and dissemination of digital cultural texts.It holds great significance for promoting the in-depth exploration and value realization of digital culture.展开更多
The increasing significance of text data in power system intelligence has highlighted the out-of-distribution(OOD)problem as a critical challenge,hindering the deployment of artificial intelligence(AI)models.In a clos...The increasing significance of text data in power system intelligence has highlighted the out-of-distribution(OOD)problem as a critical challenge,hindering the deployment of artificial intelligence(AI)models.In a closed-world setting,most AI models cannot detect and reject unexpected data,which exacerbates the harmful impact of the OOD problem.The high similarity between OOD and indistribution(IND)samples in the power system presents challenges for existing OOD detection methods in achieving effective results.This study aims to elucidate and address the OOD problem in power systems through a text classification task.First,the underlying causes of OOD sample generation are analyzed,highlighting the inherent nature of the OOD problem in the power system.Second,a novel method integrating the enhanced Mahalanobis distance with calibration strategies is introduced to improve OOD detection for text data in power system applications.Finally,the case study utilizing the actual text data from power system field operation(PSFO)is conducted,demonstrating the effectiveness of the proposed OOD detection method.Experimental results indicate that the proposed method outperformed existing methods in text OOD detection tasks within the power system,achieving a remarkable 21.03%enhancement of metric in the false positive rate at 95%true positive recall(FPR95)and a 12.97%enhancement in classi-fication accuracy for the mixed IND-OOD scenarios.展开更多
Objective To develop a clinical decision and prescription generation system(CDPGS)specifically for diarrhea in traditional Chinese medicine(TCM),utilizing a specialized large language model(LLM),Qwen-TCM-Dia,to standa...Objective To develop a clinical decision and prescription generation system(CDPGS)specifically for diarrhea in traditional Chinese medicine(TCM),utilizing a specialized large language model(LLM),Qwen-TCM-Dia,to standardize diagnostic processes and prescription generation.Methods Two primary datasets were constructed:an evaluation benchmark and a fine-tuning dataset consisting of fundamental diarrhea knowledge,medical records,and chain-ofthought(CoT)reasoning datasets.After an initial evaluation of 16 open-source LLMs across inference time,accuracy,and output quality,Qwen2.5 was selected as the base model due to its superior overall performance.We then employed a two-stage low-rank adaptation(LoRA)fine-tuning strategy,integrating continued pre-training on domain-specific knowledge with instruction fine-tuning using CoT-enriched medical records.This approach was designed to embed the clinical logic(symptoms→pathogenesis→therapeutic principles→prescriptions)into the model’s reasoning capabilities.The resulting fine-tuned model,specialized for TCM diarrhea,was designated as Qwen-TCM-Dia.Model performance was evaluated for disease diagnosis and syndrome type differentiation using accuracy,precision,recall,and F1-score.Furthermore,the quality of the generated prescriptions was compared with that of established open-source TCM LLMs.Results Qwen-TCM-Dia achieved peak performance compared to both the base Qwen2.5 model and five other open-source TCM LLMs.It achieved 97.05%accuracy and 91.48%F1-score in disease diagnosis,and 74.54%accuracy and 74.21%F1-score in syndrome type differentiation.Compared with existing open-source TCM LLMs(BianCang,HuangDi,LingDan,TCMLLM-PR,and ZhongJing),Qwen-TCM-Dia exhibited higher fidelity in reconstructing the“symptoms→pathogenesis→therapeutic principles→prescriptions”logic chain.It provided complete prescriptions,whereas other models often omitted dosages or generated mismatched prescriptions.Conclusion By integrating continued pre-training,CoT reasoning,and a two-stage fine-tuning strategy,this study establishes a CDPGS for diarrhea in TCM.The results demonstrate the synergistic effect of strengthening domain representation through pre-training and activating logical reasoning via CoT.This research not only provides critical technical support for the standardized diagnosis and treatment of diarrhea but also offers a scalable paradigm for the digital inheritance of expert TCM experience and the intelligent transformation of TCM.展开更多
Large language model-based(LLM-based)text-to-SQL methods have achieved important progress in generating SQL queries for real-world applications.When confronted with table content-aware questions in real-world scenario...Large language model-based(LLM-based)text-to-SQL methods have achieved important progress in generating SQL queries for real-world applications.When confronted with table content-aware questions in real-world scenarios,ambiguous data content keywords and nonexistent database schema column names within the question lead to the poor performance of existing methods.To solve this problem,we propose a novel approach towards table content-aware text-to-SQL with self-retrieval(TCSR-SQL).It leverages LLM's in-context learning capability to extract data content keywords within the question and infer possible related database schema,which is used to generate Seed SQL to fuzz search databases.The search results are further used to confirm the encoding knowledge with the designed encoding knowledge table,including column names and exact stored content values used in the SQL.The encoding knowledge is sent to obtain the final Precise SQL following multirounds of generation-execution-revision process.To validate our approach,we introduce a table-content-aware,questionrelated benchmark dataset,containing 2115 question-SQL pairs.Comprehensive experiments conducted on this benchmark demonstrate the remarkable performance of TCSR-SQL,achieving an improvement of at least 27.8%in execution accuracy compared to other state-of-the-art methods.展开更多
Accurate,up to date,and quick information related to any disaster supports disaster management team/authorities to perform quick,easy,and cost-effective response to enhance rescue operations to alleviate the possible ...Accurate,up to date,and quick information related to any disaster supports disaster management team/authorities to perform quick,easy,and cost-effective response to enhance rescue operations to alleviate the possible loss of lives,financial risks,and properties.Due to damaged infrastructure in disaster-affected areas,social media is the only way to share/exchange real time information.Therefore,‘X’(formerly Twitter)has become a major platform for disseminating real-time information during disaster events or emergencies,i.e.,floods and earthquake.Rapid identification of actionable content is critical for effective humanitarian response;however,the brief and noisy nature of tweets makes automated classification challenging.To tackle this problem,this study proposes a hybrid classification framework that integrates term frequency–inverse document frequency(TF-IDF)features with graph convolutional networks(GCNs)to enhance disaster-related tweet analysis.The proposed model performs three classification tasks:identifying disaster-related tweets(achieving 94.47%accuracy),categorizing disaster types(earthquake,flood,and non-disaster)with 91.78%accuracy,and detecting aid requests such as food,donations,and medical assistance(94.64%accuracy).By combining the statistical strengths of TF-IDF with the relational learning capabilities of GCNs,the model attains high accuracy while maintaining computational efficiency and interpretability.The results demonstrate the framework’s strong potential for real-time disaster response,offering valuable insights to support emergency management systems and humanitarian decision-making.展开更多
Since Google introduced the concept of Knowledge Graphs(KGs)in 2012,their construction technologies have evolved into a comprehensive methodological framework encompassing knowledge acquisition,extraction,representati...Since Google introduced the concept of Knowledge Graphs(KGs)in 2012,their construction technologies have evolved into a comprehensive methodological framework encompassing knowledge acquisition,extraction,representation,modeling,fusion,computation,and storage.Within this framework,knowledge extraction,as the core component,directly determines KG quality.In military domains,traditional manual curation models face efficiency constraints due to data fragmentation,complex knowledge architectures,and confidentiality protocols.Meanwhile,crowdsourced ontology construction approaches from general domains prove non-transferable,while human-crafted ontologies struggle with generalization deficiencies.To address these challenges,this study proposes an OntologyAware LLM Methodology for Military Domain Knowledge Extraction(LLM-KE).This approach leverages the deep semantic comprehension capabilities of Large Language Models(LLMs)to simulate human experts’cognitive processes in crowdsourced ontology construction,enabling automated extraction of military textual knowledge.It concurrently enhances knowledge processing efficiency and improves KG completeness.Empirical analysis demonstrates that this method effectively resolves scalability and dynamic adaptation challenges in military KG construction,establishing a novel technological pathway for advancing military intelligence development.展开更多
In multi-domain neural machine translation tasks,the disparity in data distribution between domains poses significant challenges in distinguishing domain features and sharing parameters across domains.This paper propo...In multi-domain neural machine translation tasks,the disparity in data distribution between domains poses significant challenges in distinguishing domain features and sharing parameters across domains.This paper proposes a Transformer-based multi-domain-aware mixture of experts model.To address the problem of domain feature differentiation,a mixture of experts(MoE)is introduced into attention to enhance the domain perception ability of the model,thereby improving the domain feature differentiation.To address the trade-off between domain feature distinction and cross-domain parameter sharing,we propose a domain-aware mixture of experts(DMoE).A domain-aware gating mechanism is introduced within the MoE module,simultaneously activating all domain experts to effectively blend domain feature distinction and cross-domain parameter sharing.A loss balancing function is then added to dynamically adjust the impact of the loss function on the expert distribution,enabling fine-tuning of the expert activation distribution to achieve a balance between domains.Experimental results on multiple Chinese-to-English and English-to-French datasets demonstrate that our proposed method significantly outperforms baseline models in both BLEU,chrF,and COMET metrics,validating its effectiveness in multi-domain neural machine translation.Further analysis of the probability distribution of expert activations shows that our method achieves remarkable results in both domain differentiation and cross-domain parameter sharing.展开更多
文摘Background:In mental health,recovery is emphasized,and qualitative analyses of service users’narratives have accumulated;however,while qualitative approaches excel at capturing rich context and generating new concepts,they are limited in generalizability and feasible data volume.This study aimed to quantify the subjective life history narratives of users of psychiatric home-visit nursing using natural language processing(NLP)and to clarify the relationships between linguistic features and recovery-related indicators.Methods:We conducted audio-recorded and transcribed semi-structured interviews on daily life verbatim and collected self-report questionnaires(Recovery Assessment Scale[RAS])and clinician ratings(Global Assessment of Functioning[GAF])from Japanese users of psychiatric home-visit nursing.Using the artificial intelligence-based topic-modeling method BERTopic,we extracted topics from the interview texts and calculated each participant’s topic proportions,and then examined associations between topic proportions and recovery-related indicators using Pearson correlation analyses.Results:“School”showed a significant positive correlation with RAS(r=0.39,p=0.05),whereas“Family”showed a significant negative correlation(r=–0.46,p=0.02).GAF was positively correlated with word count(r=0.44,p=0.02)and“Hospital”(r=0.42,p=0.03),and negatively correlated with“Backchannels”(aizuchi)(r=–0.41,p=0.03).Conclusion:The present results suggest that the quantity,quality,and content of narratives can serve as useful indicators of mental health and recovery,and that objective NLP-based analysis of service users’narratives can complement traditional self-report scales and clinician ratings to inform the design of recovery-oriented care in psychiatric home-visit nursing.
基金supported by the Yonsei University graduate school Department of Integrative Biotechnology.
文摘Recently,diffusion models have emerged as a promising paradigm for molecular design and optimization.However,most diffusion-based molecular generative models focus on modeling 2D graphs or 3D geom-etries,with limited research on molecular sequence diffusion models.The International Union of Pure and Applied Chemistry(IUPAC)names are more akin to chemical natural language than the simplified molecular input line entry system(SMILES)for organic compounds.In this work,we apply an IUPAC-guided conditional diffusion model to facilitate molecular editing from chemical natural language to chemical language(SMILES)and explore whether the pre-trained generative performance of diffusion models can be transferred to chemical natural language.We propose DiffIUPAC,a controllable molecular editing diffusion model that converts IUPAC names to SMILES strings.Evaluation results demonstrate that our model out-performs existing methods and successfully captures the semantic rules of both chemical languages.Chemical space and scaffold analysis show that the model can generate similar compounds with diverse scaffolds within the specified constraints.Additionally,to illustrate the model’s applicability in drug design,we conducted case studies in functional group editing,analogue design and linker design.
文摘DeepSeek Chinese artificial intelligence(AI)open-source model,has gained a lot of attention due to its economical training and efficient inference.DeepSeek,a model trained on large-scale reinforcement learning without supervised fine-tuning as a preliminary step,demonstrates remarkable reasoning capabilities of performing a wide range of tasks.DeepSeek is a prominent AI-driven chatbot that assists individuals in learning and enhances responses by generating insightful solutions to inquiries.Users possess divergent viewpoints regarding advanced models like DeepSeek,posting both their merits and shortcomings across several social media platforms.This research presents a new framework for predicting public sentiment to evaluate perceptions of DeepSeek.To transform the unstructured data into a suitable manner,we initially collect DeepSeek-related tweets from Twitter and subsequently implement various preprocessing methods.Subsequently,we annotated the tweets utilizing the Valence Aware Dictionary and sentiment Reasoning(VADER)methodology and the lexicon-driven TextBlob.Next,we classified the attitudes obtained from the purified data utilizing the proposed hybrid model.The proposed hybrid model consists of long-term,shortterm memory(LSTM)and bidirectional gated recurrent units(BiGRU).To strengthen it,we include multi-head attention,regularizer activation,and dropout units to enhance performance.Topic modeling employing KMeans clustering and Latent Dirichlet Allocation(LDA),was utilized to analyze public behavior concerning DeepSeek.The perceptions demonstrate that 82.5%of the people are positive,15.2%negative,and 2.3%neutral using TextBlob,and 82.8%positive,16.1%negative,and 1.2%neutral using the VADER analysis.The slight difference in results ensures that both analyses concur with their overall perceptions and may have distinct views of language peculiarities.The results indicate that the proposed model surpassed previous state-of-the-art approaches.
基金financial support from the National Science Foundation(NSF)EPSCoR R.I.I.Track-2 Program,awarded under the NSF grant number 2119691.
文摘The increasing frequency and severity of natural disasters,exacerbated by global warming,necessitate novel solutions to strengthen the resilience of Critical Infrastructure Systems(CISs).Recent research reveals the sig-nificant potential of natural language processing(NLP)to analyze unstructured human language during disasters,thereby facilitating the uncovering of disruptions and providing situational awareness supporting various aspects of resilience regarding CISs.Despite this potential,few studies have systematically mapped the global research on NLP applications with respect to supporting various aspects of resilience of CISs.This paper contributes to the body of knowledge by presenting a review of current knowledge using the scientometric review technique.Using 231 bibliographic records from the Scopus and Web of Science core collections,we identify five key research areas where researchers have used NLP to support the resilience of CISs during natural disasters,including sentiment analysis,crisis informatics,data and knowledge visualization,disaster impacts,and content analysis.Furthermore,we map the utility of NLP in the identified research focus with respect to four aspects of resilience(i.e.,preparedness,absorption,recovery,and adaptability)and present various common techniques used and potential future research directions.This review highlights that NLP has the potential to become a supplementary data source to support the resilience of CISs.The results of this study serve as an introductory-level guide designed to help scholars and practitioners unlock the potential of NLP for strengthening the resilience of CISs against natural disasters.
基金supported by the IITP(Institute of Information&Communications Technology Planning&Evaluation)-ITRC(Information Technology Research Center)grant funded by the Korean government(Ministry of Science and ICT)(IITP-2025-RS-2024-00438056).
文摘The increased accessibility of social networking services(SNSs)has facilitated communication and information sharing among users.However,it has also heightened concerns about digital safety,particularly for children and adolescents who are increasingly exposed to online grooming crimes.Early and accurate identification of grooming conversations is crucial in preventing long-term harm to victims.However,research on grooming detection in South Korea remains limited,as existing models trained primarily on English text and fail to reflect the unique linguistic features of SNS conversations,leading to inaccurate classifications.To address these issues,this study proposes a novel framework that integrates optical character recognition(OCR)technology with KcELECTRA,a deep learning-based natural language processing(NLP)model that shows excellent performance in processing the colloquial Korean language.In the proposed framework,the KcELECTRA model is fine-tuned by an extensive dataset,including Korean social media conversations,Korean ethical verification data from AI-Hub,and Korean hate speech data from Hug-gingFace,to enable more accurate classification of text extracted from social media conversation images.Experimental results show that the proposed framework achieves an accuracy of 0.953,outperforming existing transformer-based models.Furthermore,OCR technology shows high accuracy in extracting text from images,demonstrating that the proposed framework is effective for online grooming detection.The proposed framework is expected to contribute to the more accurate detection of grooming text and the prevention of grooming-related crimes.
文摘Artificial intelligence technologies are rapidly evolving,with generative AI advancements—particularly those driven by large models—drawing significant attention.Large model technologies will play a pivotal role in railway intelligent operation and maintenance(O&M)by leveraging natural language as the medium.Based on the multi-source and heterogeneous data characteristics of railway infrastructure,this study investigates data analysis methods and application scenarios for railway infrastructure O&M leveraging large natural language models.An overall architecture is proposed for intelligent O&M of railway infrastructure,centered on railway large natural language models and featuring multi-source model synergy.This architecture is developed through a detailed analysis of O&M knowledge sources and structures,as well as data analysis requirements spanning the entire life cycle of railway infrastructure.These railwayspecific models are employed to derive railway intelligent O&M scenario models,which are driven by intelligent agent technologies and integrate traditional models,knowledge graphs,and other technologies to empower railway intelligent O&M.Further research focuses on key technologies,including the fine-tuning of railway large natural language models,retrievalaugmented generation,and AI agent technologies.These technologies are combined with the capabilities inherent in large natural language models—such as logical reasoning,content generation,and intelligent decision-making—to explore applications of large natural language models in inspection,repair,and maintenance of railway infrastructure,management of equipment maintenance information,equipment condition inspection,fault handling and emergency response in accidents,and intelligent O&M decision-making.
文摘The natural language processing(NLP)domain has witnessed significant advancements with the emergence of transformer-based models,which have reshaped the text understanding and generation landscape.While their capabilities are well recognized,there remains a limited systematic synthesis of how these models perform across tasks,scale efficiently,adapt to domains,and address ethical challenges.Therefore,the aim of this paper was to analyze the performance of transformer-based models across various NLP tasks,their scalability,domain adaptation,and the ethical implications of such models.This meta-analysis paper synthesizes findings from 25 peer-reviewed studies on NLP transformer-based models,adhering to the PRISMA framework.Relevant papers were sourced from electronic databases,including IEEE Xplore,Springer,ACM Digital Library,Elsevier,PubMed,and Google Scholar.The findings highlight the superior performance of transformers over conventional approaches,attributed to selfattention mechanisms and pre-trained language representations.Despite these advantages,challenges such as high computational costs,data bias,and hallucination persist.The study provides new perspectives by underscoring the necessity for future research to optimize transformer architectures for efficiency,address ethical AI concerns,and enhance generalization across languages.This paper contributes valuable insights into the current trends,limitations,and potential improvements in transformer-based models for NLP.
文摘Organizations often use sentiment analysis-based systems or even resort to simple manual analysis to try to extract useful meaning from their customers’general digital“chatter”.Driven by the need for a more accurate way to qualitatively extract valuable product and brand-oriented consumer-generated texts,this paper experimentally tests the ability of an NLP-based analytics approach to extract information from highly unstructured texts.The results show that natural language processing outperforms sentiment analysis for detecting issues from social media data.Surprisingly,the experiment shows that sentiment analysis is not only better than manual analysis of social media data for the goal of supporting organizational decision-making,but may also be disadvantageous for such efforts.
基金the research project LaTe4PoliticES(PID2022-138099OB-I00)funded by MCIN/AEI/10.13039/501100011033 and the European Fund for Regional Development(ERDF)-a way to make Europe.Tomás Bernal-Beltrán is supported by University of Murcia through the predoctoral programme.
文摘The malicious dissemination of hate speech via compromised accounts,automated bot networks and malware-driven social media campaigns has become a growing cybersecurity concern.Automatically detecting such content in Spanish is challenging due to linguistic complexity and the scarcity of annotated resources.In this paper,we compare two predominant AI-based approaches for the forensic detection of malicious hate speech:(1)finetuning encoder-only models that have been trained in Spanish and(2)In-Context Learning techniques(Zero-and Few-Shot Learning)with large-scale language models.Our approach goes beyond binary classification,proposing a comprehensive,multidimensional evaluation that labels each text by:(1)type of speech,(2)recipient,(3)level of intensity(ordinal)and(4)targeted group(multi-label).Performance is evaluated using an annotated Spanish corpus,standard metrics such as precision,recall and F1-score and stability-oriented metrics to evaluate the stability of the transition from zero-shot to few-shot prompting(Zero-to-Few Shot Retention and Zero-to-Few Shot Gain)are applied.The results indicate that fine-tuned encoder-only models(notably MarIA and BETO variants)consistently deliver the strongest and most reliable performance:in our experiments their macro F1-scores lie roughly in the range of approximately 46%–66%depending on the task.Zero-shot approaches are much less stable and typically yield substantially lower performance(observed F1-scores range approximately 0%–39%),often producing invalid outputs in practice.Few-shot prompting(e.g.,Qwen 38B,Mistral 7B)generally improves stability and recall relative to pure zero-shot,bringing F1-scores into a moderate range of approximately 20%–51%but still falling short of fully fine-tuned models.These findings highlight the importance of supervised adaptation and discuss the potential of both paradigms as components in AI-powered cybersecurity and malware forensics systems designed to identify and mitigate coordinated online hate campaigns.
基金supported by the National Science and Technology Council(NSTC),Taiwan,under grant number 114-2221-E-182-041-MY3by Chang Gung University and Chang Gung Memorial Hospital under project number NERPD4Q0021.
文摘The outstanding growth in the applications of large language models(LLMs)demonstrates the significance of adaptive and efficient prompt engineering tactics.The existing methods may not be variable,vigorous and streamlined in different domains.The offered study introduces an immediate optimization outline,named PROMPTx-PE,that is going to yield a greater level of precision and strength when it comes to the assignments that are premised on LLM.The proposed systemfeatures a timely selection schemewhich is informed by reinforcement learning,a contextual layer and a dynamic weighting module which is regulated by Lyapunov-based stability guidelines.The PROMPTx-PE dynamically varies the exploration and exploitation of the prompt space,depending on real-time feedback and multi-objective reward development.Extensive testing on both benchmark(GLUE,SuperGLUE)and domain-specific data(Healthcare-QA and Industrial-NER)demonstrates a large best performance to be 89.4%and a strong robustness disconnect with under 3%computation expense.The results confirm the effectiveness,consistency,and scalability of PROMPTx-PE as a platform of adaptive prompt engineering based on recent uses of LLMs.
基金Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2026R77)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia,the Deanship of Scientific Research at Northern Border University,Arar,Saudi Arabia,through the project number NBU-FFR-2026-2248-02.
文摘The rapid proliferation of multimodal misinformation on social media demands detection frameworks that are not only accurate but also robust to noise,adversarial manipulation,and semantic inconsistency between modalities.Existing multimodal fake news detection approaches often rely on deterministic fusion strategies,which limits their ability to model uncertainty and complex cross-modal dependencies.To address these challenges,we propose Q-ALIGNer,a quantum-inspired multimodal framework that integrates classical feature extraction with quantumstate encoding,learnable cross-modal entanglement,and robustness-aware training objectives.The proposed framework adopts quantumformalism as a representational abstraction,enabling probabilisticmodeling ofmultimodal alignment while remaining fully executable on classical hardware.Q-ALIGNer is evaluated on four widely used benchmark datasets—FakeNewsNet,Fakeddit,Weibo,and MediaEval VMU—covering diverse platforms,languages,and content characteristics.Experimental results demonstrate consistent performance improvements over strong text-only,vision-only,multimodal,and quantum-inspired baselines,including BERT,RoBERTa,XLNet,ResNet,EfficientNet,ViT,Multimodal-BERT,ViLBERT,and QEMF.Q-ALIGNer achieves accuracies of 91.2%,92.9%,91.7%,and 92.1%on FakeNewsNet,Fakeddit,Weibo,and MediaEval VMU,respectively,with F1-score gains of 3–4 percentage points over QEMF.Robustness evaluation shows a reduced adversarial accuracy gap of 2.6%,compared to 7%–9%for baseline models,while calibration analysis indicates improved reliability with an expected calibration error of 0.031.In addition,computational analysis shows that Q-ALIGNer reduces training time to 19.6 h compared to 48.2 h for QEMF at a comparable parameter scale.These results indicate that quantum-inspired alignment and entanglement can enhance robustness,uncertainty awareness,and efficiency in multimodal fake news detection,positioning Q-ALIGNer as a principled and practical content-centric framework for misinformation analysis.
基金This publication is part of the TrustBoost project,that has received funding from MICIU/AEI/10.13039/501100011033,from FEDER,UEIt is a coordinated project by a multidisciplinary team from the Universidad Politécnica de Madrid(UPM)and University of Granada(UGR),with two subprojects that address TrustBoost’s objectives:“Enhancing Trustworthiness in Conversational AI through Multimodal Affective Awareness”(Trust Boost-UPM,ref.PID2023-150584OB-C21)“Breaking the Duality of Conversational AI:Going beyond Guided Conversations While Ensuring Compliance with Domain Rules and Constraints”(Trust Boost-UGR,ref.PID2023-150584OB-C22).
文摘Building reliable intent-based,task-oriented dialog systems typically requires substantial manual effort:designers must derive intents,entities,responses,and control logic from raw conversational data,then iterate until the assistant behaves consistently.This paper investigates how far large language models(LLMs)can automate this development.In this paper,we use two reference corpora,Let’s Go(English,public transport)and MEDIA(French,hotel booking),to prompt four LLM families(GPT-4o,Claude,Gemini,Mistral Small)and generate the core specifications required by the rasa platform.These include intent sets with example utterances,entity definitions with slot mappings,response templates,and basic dialog flows.To structure this process,we introduce a model-and platform-agnostic pipelinewith two phases.The first normalizes and validates LLM-generated artifacts,enforcing crossfile consistency andmaking slot usage explicit.The second uses a lightweight dialog harness that runs scripted tests and incrementally patches failure points until conversations complete reliably.Across eight projects,all models required some targeted repairs before training.After applying our pipeline,all reached≥70%task completion(many above 84%),while NLU performance ranged from mid-0.6 to 1.0 macro-F1 depending on domain breadth.These results show that,with modest guidance,current LLMs can produce workable end-to-end dialog prototypes directly fromraw transcripts.Our main contributions are:(i)a reusable bootstrap method aligned with industry domain-specific languages(DSLs),(ii)a small set of high-impact corrective patterns,and(iii)a simple but effective harness for closed-loop refinement across conversational platforms.
基金supported by the project“Romanian Hub for Artificial Intelligence-HRIA”,Smart Growth,Digitization and Financial Instruments Program,2021–2027,MySMIS No.334906.
文摘Objective expertise evaluation of individuals,as a prerequisite stage for team formation,has been a long-term desideratum in large software development companies.With the rapid advancements in machine learning methods,based on reliable existing data stored in project management tools’datasets,automating this evaluation process becomes a natural step forward.In this context,our approach focuses on quantifying software developer expertise by using metadata from the task-tracking systems.For this,we mathematically formalize two categories of expertise:technology-specific expertise,which denotes the skills required for a particular technology,and general expertise,which encapsulates overall knowledge in the software industry.Afterward,we automatically classify the zones of expertise associated with each task a developer has worked on using Bidirectional Encoder Representations from Transformers(BERT)-like transformers to handle the unique characteristics of project tool datasets effectively.Finally,our method evaluates the proficiency of each software specialist across already completed projects from both technology-specific and general perspectives.The method was experimentally validated,yielding promising results.
基金funded by China National Innovation and Entrepreneurship Project Fund Innovation Training Program(202410451009).
文摘With the rapid development of digital culture,a large number of cultural texts are presented in the form of digital and network.These texts have significant characteristics such as sparsity,real-time and non-standard expression,which bring serious challenges to traditional classification methods.In order to cope with the above problems,this paper proposes a new ASSC(ALBERT,SVD,Self-Attention and Cross-Entropy)-TextRCNN digital cultural text classification model.Based on the framework of TextRCNN,the Albert pre-training language model is introduced to improve the depth and accuracy of semantic embedding.Combined with the dual attention mechanism,the model’s ability to capture and model potential key information in short texts is strengthened.The Singular Value Decomposition(SVD)was used to replace the traditional Max pooling operation,which effectively reduced the feature loss rate and retained more key semantic information.The cross-entropy loss function was used to optimize the prediction results,making the model more robust in class distribution learning.The experimental results indicate that,in the digital cultural text classification task,as compared to the baseline model,the proposed ASSC-TextRCNN method achieves an 11.85%relative improvement in accuracy and an 11.97%relative increase in the F1 score.Meanwhile,the relative error rate decreases by 53.18%.This achievement not only validates the effectiveness and advanced nature of the proposed approach but also offers a novel technical route and methodological underpinnings for the intelligent analysis and dissemination of digital cultural texts.It holds great significance for promoting the in-depth exploration and value realization of digital culture.
基金supported in part by the Science and Technology Project of the State Grid East China Branch(No.520800230008).
文摘The increasing significance of text data in power system intelligence has highlighted the out-of-distribution(OOD)problem as a critical challenge,hindering the deployment of artificial intelligence(AI)models.In a closed-world setting,most AI models cannot detect and reject unexpected data,which exacerbates the harmful impact of the OOD problem.The high similarity between OOD and indistribution(IND)samples in the power system presents challenges for existing OOD detection methods in achieving effective results.This study aims to elucidate and address the OOD problem in power systems through a text classification task.First,the underlying causes of OOD sample generation are analyzed,highlighting the inherent nature of the OOD problem in the power system.Second,a novel method integrating the enhanced Mahalanobis distance with calibration strategies is introduced to improve OOD detection for text data in power system applications.Finally,the case study utilizing the actual text data from power system field operation(PSFO)is conducted,demonstrating the effectiveness of the proposed OOD detection method.Experimental results indicate that the proposed method outperformed existing methods in text OOD detection tasks within the power system,achieving a remarkable 21.03%enhancement of metric in the false positive rate at 95%true positive recall(FPR95)and a 12.97%enhancement in classi-fication accuracy for the mixed IND-OOD scenarios.
基金National Key Research and Development Program of China(2024YFC3505400)Capital Clinical Project of Beijing Municipal Science&Technology Commission(Z221100007422092)Capital’s Funds for Health Improvement and Research(2024-1-2231).
文摘Objective To develop a clinical decision and prescription generation system(CDPGS)specifically for diarrhea in traditional Chinese medicine(TCM),utilizing a specialized large language model(LLM),Qwen-TCM-Dia,to standardize diagnostic processes and prescription generation.Methods Two primary datasets were constructed:an evaluation benchmark and a fine-tuning dataset consisting of fundamental diarrhea knowledge,medical records,and chain-ofthought(CoT)reasoning datasets.After an initial evaluation of 16 open-source LLMs across inference time,accuracy,and output quality,Qwen2.5 was selected as the base model due to its superior overall performance.We then employed a two-stage low-rank adaptation(LoRA)fine-tuning strategy,integrating continued pre-training on domain-specific knowledge with instruction fine-tuning using CoT-enriched medical records.This approach was designed to embed the clinical logic(symptoms→pathogenesis→therapeutic principles→prescriptions)into the model’s reasoning capabilities.The resulting fine-tuned model,specialized for TCM diarrhea,was designated as Qwen-TCM-Dia.Model performance was evaluated for disease diagnosis and syndrome type differentiation using accuracy,precision,recall,and F1-score.Furthermore,the quality of the generated prescriptions was compared with that of established open-source TCM LLMs.Results Qwen-TCM-Dia achieved peak performance compared to both the base Qwen2.5 model and five other open-source TCM LLMs.It achieved 97.05%accuracy and 91.48%F1-score in disease diagnosis,and 74.54%accuracy and 74.21%F1-score in syndrome type differentiation.Compared with existing open-source TCM LLMs(BianCang,HuangDi,LingDan,TCMLLM-PR,and ZhongJing),Qwen-TCM-Dia exhibited higher fidelity in reconstructing the“symptoms→pathogenesis→therapeutic principles→prescriptions”logic chain.It provided complete prescriptions,whereas other models often omitted dosages or generated mismatched prescriptions.Conclusion By integrating continued pre-training,CoT reasoning,and a two-stage fine-tuning strategy,this study establishes a CDPGS for diarrhea in TCM.The results demonstrate the synergistic effect of strengthening domain representation through pre-training and activating logical reasoning via CoT.This research not only provides critical technical support for the standardized diagnosis and treatment of diarrhea but also offers a scalable paradigm for the digital inheritance of expert TCM experience and the intelligent transformation of TCM.
基金supported by the National Key Research and Development Program of China under(Grant 2023YFB3106504)Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies under(Grant 2022B1212010005)+2 种基金the Major Key Project of PCL under(Grant PCL2023A09)Shenzhen Science and Technology Program under(Grants ZDSYS20210623091809029 and RCBS20221008093131089)the project of Guangdong Power Grid Co.Ltd.under(Grants 037800KC23090005 and GD-KJXM20231042).
文摘Large language model-based(LLM-based)text-to-SQL methods have achieved important progress in generating SQL queries for real-world applications.When confronted with table content-aware questions in real-world scenarios,ambiguous data content keywords and nonexistent database schema column names within the question lead to the poor performance of existing methods.To solve this problem,we propose a novel approach towards table content-aware text-to-SQL with self-retrieval(TCSR-SQL).It leverages LLM's in-context learning capability to extract data content keywords within the question and infer possible related database schema,which is used to generate Seed SQL to fuzz search databases.The search results are further used to confirm the encoding knowledge with the designed encoding knowledge table,including column names and exact stored content values used in the SQL.The encoding knowledge is sent to obtain the final Precise SQL following multirounds of generation-execution-revision process.To validate our approach,we introduce a table-content-aware,questionrelated benchmark dataset,containing 2115 question-SQL pairs.Comprehensive experiments conducted on this benchmark demonstrate the remarkable performance of TCSR-SQL,achieving an improvement of at least 27.8%in execution accuracy compared to other state-of-the-art methods.
文摘Accurate,up to date,and quick information related to any disaster supports disaster management team/authorities to perform quick,easy,and cost-effective response to enhance rescue operations to alleviate the possible loss of lives,financial risks,and properties.Due to damaged infrastructure in disaster-affected areas,social media is the only way to share/exchange real time information.Therefore,‘X’(formerly Twitter)has become a major platform for disseminating real-time information during disaster events or emergencies,i.e.,floods and earthquake.Rapid identification of actionable content is critical for effective humanitarian response;however,the brief and noisy nature of tweets makes automated classification challenging.To tackle this problem,this study proposes a hybrid classification framework that integrates term frequency–inverse document frequency(TF-IDF)features with graph convolutional networks(GCNs)to enhance disaster-related tweet analysis.The proposed model performs three classification tasks:identifying disaster-related tweets(achieving 94.47%accuracy),categorizing disaster types(earthquake,flood,and non-disaster)with 91.78%accuracy,and detecting aid requests such as food,donations,and medical assistance(94.64%accuracy).By combining the statistical strengths of TF-IDF with the relational learning capabilities of GCNs,the model attains high accuracy while maintaining computational efficiency and interpretability.The results demonstrate the framework’s strong potential for real-time disaster response,offering valuable insights to support emergency management systems and humanitarian decision-making.
文摘Since Google introduced the concept of Knowledge Graphs(KGs)in 2012,their construction technologies have evolved into a comprehensive methodological framework encompassing knowledge acquisition,extraction,representation,modeling,fusion,computation,and storage.Within this framework,knowledge extraction,as the core component,directly determines KG quality.In military domains,traditional manual curation models face efficiency constraints due to data fragmentation,complex knowledge architectures,and confidentiality protocols.Meanwhile,crowdsourced ontology construction approaches from general domains prove non-transferable,while human-crafted ontologies struggle with generalization deficiencies.To address these challenges,this study proposes an OntologyAware LLM Methodology for Military Domain Knowledge Extraction(LLM-KE).This approach leverages the deep semantic comprehension capabilities of Large Language Models(LLMs)to simulate human experts’cognitive processes in crowdsourced ontology construction,enabling automated extraction of military textual knowledge.It concurrently enhances knowledge processing efficiency and improves KG completeness.Empirical analysis demonstrates that this method effectively resolves scalability and dynamic adaptation challenges in military KG construction,establishing a novel technological pathway for advancing military intelligence development.
基金supported by the NationalNatural Science Foundation of China(U2004163)Key Research and Development Program of Henan Province(No.251111211200).
文摘In multi-domain neural machine translation tasks,the disparity in data distribution between domains poses significant challenges in distinguishing domain features and sharing parameters across domains.This paper proposes a Transformer-based multi-domain-aware mixture of experts model.To address the problem of domain feature differentiation,a mixture of experts(MoE)is introduced into attention to enhance the domain perception ability of the model,thereby improving the domain feature differentiation.To address the trade-off between domain feature distinction and cross-domain parameter sharing,we propose a domain-aware mixture of experts(DMoE).A domain-aware gating mechanism is introduced within the MoE module,simultaneously activating all domain experts to effectively blend domain feature distinction and cross-domain parameter sharing.A loss balancing function is then added to dynamically adjust the impact of the loss function on the expert distribution,enabling fine-tuning of the expert activation distribution to achieve a balance between domains.Experimental results on multiple Chinese-to-English and English-to-French datasets demonstrate that our proposed method significantly outperforms baseline models in both BLEU,chrF,and COMET metrics,validating its effectiveness in multi-domain neural machine translation.Further analysis of the probability distribution of expert activations shows that our method achieves remarkable results in both domain differentiation and cross-domain parameter sharing.