Journal Articles
1,127 articles found
Channel Knowledge Maps for 6G Wireless Networks: Construction, Applications, and Future Challenges
1
Authors: LIU Xingchen, SUN Shu, TAO Meixia, Aryan KAUSHIK, YAN Hangsong. ZTE Communications, 2025, Issue 2, pp. 46-59.
The advent of 6G wireless networks promises unprecedented connectivity, supporting ultra-high data rates, low latency, and massive device connectivity. However, these ambitious goals introduce significant challenges, particularly in channel estimation due to complex and dynamic propagation environments. This paper explores the concept of channel knowledge maps (CKMs) as a solution to these challenges. CKMs enable environment-aware communications by providing location-specific channel information, reducing reliance on real-time pilot measurements. We categorize CKM construction techniques into measurement-based, model-based, and hybrid methods, and examine their key applications in integrated sensing and communication (ISAC) systems, beamforming, trajectory optimization of unmanned aerial vehicles (UAVs), base station (BS) placement, and resource allocation. Furthermore, we discuss open challenges and propose future research directions to enhance the robustness, accuracy, and scalability of CKM-based systems in the evolving 6G landscape.
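The abstract's core idea, serving location-specific channel information instead of real-time pilot measurements, can be sketched as a toy grid-based lookup. The binning scheme and gain values below are purely illustrative, not from the paper:

```python
class ChannelKnowledgeMap:
    """Toy grid-based CKM: store the average channel gain (dB) per location bin."""

    def __init__(self, cell_size=10.0):
        self.cell_size = cell_size
        self.cells = {}  # (ix, iy) -> (sum_of_gains, sample_count)

    def _key(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def record(self, x, y, gain_db):
        """Add one measured sample at location (x, y)."""
        s, n = self.cells.get(self._key(x, y), (0.0, 0))
        self.cells[self._key(x, y)] = (s + gain_db, n + 1)

    def query(self, x, y):
        """Return the learned mean gain for the bin containing (x, y), or None."""
        entry = self.cells.get(self._key(x, y))
        return None if entry is None else entry[0] / entry[1]

ckm = ChannelKnowledgeMap()
ckm.record(12.0, 3.0, -80.0)
ckm.record(15.0, 7.0, -84.0)   # falls in the same 10 m x 10 m bin
print(ckm.query(11.0, 9.0))    # -> -82.0 (bin average, no pilot needed)
```

A production CKM would interpolate between bins and store richer statistics (path loss, delay spread), but the query-by-location pattern is the same.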
Keywords: channel knowledge map; channel modeling; wireless communication; 6G
Task-Structured Curriculum Learning for Multi-Task Distillation: Enhancing Step-by-Step Knowledge Transfer in Language Models
2
Authors: Ahmet Ezgi, Aytug Onan. Computers, Materials & Continua, 2026, Issue 3, pp. 1647-1673.
Knowledge distillation has become a standard technique for compressing large language models into efficient student models, but existing methods often struggle to balance prediction accuracy with explanation quality. Recent approaches such as Distilling Step-by-Step (DSbS) introduce explanation supervision, yet they apply it in a uniform manner that may not fully exploit the different learning dynamics of prediction and explanation. In this work, we propose a task-structured curriculum learning (TSCL) framework that structures training into three sequential phases: (i) prediction-only, to establish stable feature representations; (ii) joint prediction-explanation, to align task outputs with rationale generation; and (iii) explanation-only, to refine the quality of rationales. This design provides a simple but effective modification to DSbS, requiring no architectural changes and adding negligible training cost. We justify the phase scheduling with ablation studies and convergence analysis, showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment. Extensive experiments on five datasets (e-SNLI, ANLI, CommonsenseQA, SVAMP, and MedNLI) demonstrate that TSCL consistently outperforms strong baselines, achieving gains of +1.7 to +2.6 points in accuracy and +0.8 to +1.2 in ROUGE-L, corresponding to relative error reductions of up to 21%. Beyond lexical metrics, human evaluation and ERASER-style faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations. Comparative training curves further reveal faster convergence and lower variance across seeds. Efficiency analysis shows less than 3% overhead in wall-clock training time and no additional inference cost, making the approach practical for real-world deployment. This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation. By separating and sequencing objectives, TSCL achieves a better balance between accuracy, stability, and explanation quality. The framework generalizes across domains, including medical NLI, and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.
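The three-phase curriculum described in the abstract can be sketched as a loss-weight scheduler. The phase boundaries and weights below are placeholders, not the paper's tuned values:

```python
def tscl_weights(step, total_steps, p1=0.3, p2=0.8):
    """Task-structured curriculum: return (w_pred, w_expl) loss weights.

    p1/p2 are illustrative phase-boundary fractions:
      phase 1 (prediction-only), phase 2 (joint), phase 3 (explanation-only).
    """
    frac = step / total_steps
    if frac < p1:
        return 1.0, 0.0   # prediction-only: stabilize feature representations
    if frac < p2:
        return 0.5, 0.5   # joint: align task outputs with rationale generation
    return 0.0, 1.0       # explanation-only: refine rationale quality

# The distillation loss at any step would be w_pred * L_pred + w_expl * L_expl.
print(tscl_weights(10, 100))   # -> (1.0, 0.0)
print(tscl_weights(50, 100))   # -> (0.5, 0.5)
print(tscl_weights(90, 100))   # -> (0.0, 1.0)
```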
Keywords: knowledge distillation; curriculum learning; language models; multi-task learning; step-by-step learning
LLMKB: Large Language Models with Knowledge Base Augmentation for Conversational Recommendation
3
Authors: FANG Xiu, QIU Sijia, SUN Guohao, LU Jinhu. Journal of Donghua University (English Edition), 2026, Issue 1, pp. 91-103.
Conversational recommender systems (CRSs) focus on refining preferences and providing personalized recommendations through natural language interactions and dialogue history. Large language models (LLMs) have shown outstanding performance across various domains, thereby prompting researchers to investigate their applicability in recommendation systems. However, due to the lack of task-specific knowledge and an inefficient feature extraction process, LLMs still have suboptimal performance in recommendation tasks. Therefore, external knowledge sources, such as knowledge graphs (KGs) and knowledge bases (KBs), are often introduced to address the issue of data sparsity. Compared to KGs, KBs possess higher retrieval efficiency, making them more suitable for scenarios where LLMs serve as recommenders. To this end, we introduce LLMKB, a novel framework integrating LLMs with KBs for enhanced retrieval generation. LLMKB initially leverages structured knowledge to create mapping dictionaries, extracting entity-relation information from heterogeneous knowledge to construct KBs. Then, LLMKB achieves embedding calibration between user information representations and documents in KBs through retrieval model fine-tuning. Finally, LLMKB employs retrieval-augmented generation to produce recommendations based on fused text inputs, followed by post-processing. Experimental results on two public CRS datasets demonstrate the effectiveness of our framework. Our code is publicly available at https://anonymous.4open.science/r/LLMKB-6FD0.
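The retrieve-then-generate flow the abstract describes can be sketched with a toy vector KB. The embeddings and catalogue entries are made up, and a real system would pass the assembled prompt to an LLM:

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, kb, k=2):
    """Rank KB documents by similarity to a (pre-embedded) user query."""
    ranked = sorted(kb, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return ranked[:k]

kb = [
    {"text": "Inception (2010), sci-fi thriller", "vec": [0.9, 0.1, 0.0]},
    {"text": "The Notebook (2004), romance",      "vec": [0.1, 0.9, 0.0]},
    {"text": "Interstellar (2014), sci-fi",       "vec": [0.8, 0.2, 0.1]},
]
top = retrieve([1.0, 0.0, 0.0], kb, k=2)   # query embedding leans "sci-fi"
prompt = "Recommend a movie using these facts:\n" + "\n".join(d["text"] for d in top)
print(top[0]["text"])   # -> Inception (2010), sci-fi thriller
```

LLMKB additionally fine-tunes the retriever so user representations and KB documents share an embedding space; here the alignment is simply assumed.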
Keywords: recommender system; large language model (LLM); knowledge base (KB)
Research on the Construction of an Accounting Knowledge Graph Based on Large Language Model
4
Authors: Yunfeng Wang. Journal of Electronic Research and Application, 2025, Issue 4, pp. 248-253.
This article uses a large language model, combined with prompt engineering and an agent-based approach, to perform automatic knowledge extraction, supported by China's accounting standards, and to construct the corresponding knowledge graph. By extracting accounting entities and their relations at the schema layer, a data layer is provided for fine-tuning and optimizing the large model. The study finds that, with reasonable application of the language model, knowledge tuples can be effectively extracted from massive financial data and the construction of an accounting knowledge graph can be completed.
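The extraction step can be sketched as parsing a (stubbed) LLM response into triples. The JSON output shape, the example sentence, and the model response are assumptions for illustration, not the paper's actual prompt format:

```python
import json

def parse_triples(llm_output):
    """Parse an LLM's JSON answer into (head, relation, tail) triples.

    Assumes the model was prompted to answer with a JSON list of objects
    having "head", "relation", and "tail" keys.
    """
    triples = []
    for item in json.loads(llm_output):
        triples.append((item["head"], item["relation"], item["tail"]))
    return triples

# Stubbed model response for the sentence
# "Accounts receivable is a current asset."
fake_llm_output = (
    '[{"head": "accounts receivable", "relation": "is_a", '
    '"tail": "current asset"}]'
)
print(parse_triples(fake_llm_output))
# -> [('accounts receivable', 'is_a', 'current asset')]
```

The parsed triples would then populate the data layer used for fine-tuning.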
Keywords: accounting; large language model; knowledge graph; knowledge extraction; knowledge optimization
LLM-KE: An Ontology-Aware LLM Methodology for Military Domain Knowledge Extraction
5
Authors: Yu Tao, Ruopeng Yang, Yongqi Wen, Yihao Zhong, Kaige Jiao, Xiaolei Gu. Computers, Materials & Continua, 2026, Issue 1, pp. 2045-2061.
Since Google introduced the concept of Knowledge Graphs (KGs) in 2012, their construction technologies have evolved into a comprehensive methodological framework encompassing knowledge acquisition, extraction, representation, modeling, fusion, computation, and storage. Within this framework, knowledge extraction, as the core component, directly determines KG quality. In military domains, traditional manual curation models face efficiency constraints due to data fragmentation, complex knowledge architectures, and confidentiality protocols. Meanwhile, crowdsourced ontology construction approaches from general domains prove non-transferable, while human-crafted ontologies struggle with generalization deficiencies. To address these challenges, this study proposes an Ontology-Aware LLM Methodology for Military Domain Knowledge Extraction (LLM-KE). This approach leverages the deep semantic comprehension capabilities of Large Language Models (LLMs) to simulate human experts' cognitive processes in crowdsourced ontology construction, enabling automated extraction of military textual knowledge. It concurrently enhances knowledge processing efficiency and improves KG completeness. Empirical analysis demonstrates that this method effectively resolves scalability and dynamic adaptation challenges in military KG construction, establishing a novel technological pathway for advancing military intelligence development.
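One way to read "ontology-aware" is as type-checking extracted triples against an ontology schema before they enter the KG. The mini-ontology and entity types below are invented for illustration:

```python
# Hypothetical mini-ontology: allowed (head_type, relation, tail_type) patterns.
ONTOLOGY = {
    ("Unit", "commands", "Unit"),
    ("Unit", "equipped_with", "Equipment"),
}

def filter_by_ontology(candidates, entity_types):
    """Keep only extracted triples whose typed pattern exists in the ontology."""
    kept = []
    for head, rel, tail in candidates:
        pattern = (entity_types.get(head), rel, entity_types.get(tail))
        if pattern in ONTOLOGY:
            kept.append((head, rel, tail))
    return kept

types = {"1st Division": "Unit", "2nd Brigade": "Unit", "radar-X": "Equipment"}
raw = [
    ("1st Division", "commands", "2nd Brigade"),
    ("radar-X", "commands", "2nd Brigade"),   # type-invalid, dropped
]
print(filter_by_ontology(raw, types))
# -> [('1st Division', 'commands', '2nd Brigade')]
```

In LLM-KE the ontology itself is produced by simulating expert construction with an LLM; here it is hard-coded.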
Keywords: knowledge extraction; natural language processing; knowledge graph; large language model
Intelligent Fault Diagnosis for CNC Through the Integration of Large Language Models and Domain Knowledge Graphs
6
Authors: Yuhan Liu, Yuan Zhou, Yufei Liu, Zhen Xu, Yixin He. Engineering, 2025, Issue 10, pp. 311-322.
As large language models (LLMs) continue to demonstrate their potential in handling complex tasks, their value in knowledge-intensive industrial scenarios is becoming increasingly evident. Fault diagnosis, a critical domain in the industrial sector, has long faced the dual challenges of managing vast amounts of experiential knowledge and improving human-machine collaboration efficiency. Traditional fault diagnosis systems, which are primarily based on expert systems, suffer from three major limitations: (1) ineffective organization of fault diagnosis knowledge, (2) lack of adaptability between static knowledge frameworks and dynamic engineering environments, and (3) difficulties in integrating expert knowledge with real-time data streams. These systemic shortcomings restrict the ability of conventional approaches to handle uncertainty. In this study, we propose an intelligent computer numerical control (CNC) fault diagnosis system integrating LLMs with a knowledge graph (KG). First, we constructed a comprehensive KG that consolidates multi-source data for structured representation. Second, we designed a retrieval-augmented generation (RAG) framework leveraging the KG to support multi-turn interactive fault diagnosis while incorporating real-time engineering data into the decision-making process. Finally, we introduced a learning mechanism to facilitate dynamic knowledge updates. The experimental results demonstrate that our system significantly improves fault diagnosis accuracy, outperforming engineers with two years of professional experience on our constructed benchmark datasets. By integrating LLMs and KG, our framework surpasses the limitations of traditional expert systems rooted in symbolic reasoning, offering a novel approach to addressing the cognitive paradox of unstructured knowledge modeling and dynamic environment adaptation in industrial settings.
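The KG-backed RAG loop can be sketched as: look up the symptom in a fault KG, fold in live sensor data, and hand the assembled context to an LLM. The miniature KG and sensor reading are hypothetical:

```python
# Hypothetical miniature fault KG: fault -> {"causes": [...], "checks": [...]}.
FAULT_KG = {
    "spindle vibration": {
        "causes": ["bearing wear", "tool imbalance"],
        "checks": ["measure bearing temperature", "rebalance tool holder"],
    },
}

def build_diagnosis_prompt(symptom, live_reading):
    """Retrieve KG facts for a symptom and fold in a real-time reading."""
    entry = FAULT_KG.get(symptom)
    if entry is None:
        return f"No KG entry for '{symptom}'; ask the user for more detail."
    return (
        f"Symptom: {symptom}\n"
        f"Live data: {live_reading}\n"
        f"Known causes: {', '.join(entry['causes'])}\n"
        f"Suggested checks: {', '.join(entry['checks'])}\n"
        "Explain the most likely cause."
    )

prompt = build_diagnosis_prompt("spindle vibration", "vibration 4.2 mm/s RMS")
print("bearing wear" in prompt)   # -> True
```

The paper's learning mechanism would additionally write confirmed diagnoses back into the KG; that update step is omitted here.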
Keywords: large language model; domain knowledge graph; knowledge graph-based retrieval-augmented generation; learning mechanism; decision support system
Knowledge-Empowered, Collaborative, and Co-Evolving AI Models: The Post-LLM Roadmap (Cited by 1)
7
Authors: Fei Wu, Tao Shen, Thomas Back, Jingyuan Chen, Gang Huang, Yaochu Jin, Kun Kuang, Mengze Li, Cewu Lu, Jiaxu Miao, Yongwei Wang, Ying Wei, Fan Wu, Junchi Yan, Hongxia Yang, Yi Yang, Shengyu Zhang, Zhou Zhao, Yueting Zhuang, Yunhe Pan. Engineering, 2025, Issue 1, pp. 87-100.
Large language models (LLMs) have significantly advanced artificial intelligence (AI) by excelling in tasks such as understanding, generation, and reasoning across multiple modalities. Despite these achievements, LLMs have inherent limitations, including outdated information, hallucinations, inefficiency, lack of interpretability, and challenges in domain-specific accuracy. To address these issues, this survey explores three promising directions in the post-LLM era: knowledge empowerment, model collaboration, and model co-evolution. First, we examine methods of integrating external knowledge into LLMs to enhance factual accuracy, reasoning capabilities, and interpretability, including incorporating knowledge into training objectives, instruction tuning, retrieval-augmented inference, and knowledge prompting. Second, we discuss model collaboration strategies that leverage the complementary strengths of LLMs and smaller models to improve efficiency and domain-specific performance through techniques such as model merging, functional model collaboration, and knowledge injection. Third, we delve into model co-evolution, in which multiple models collaboratively evolve by sharing knowledge, parameters, and learning strategies to adapt to dynamic environments and tasks, thereby enhancing their adaptability and continual learning. We illustrate how the integration of these techniques advances AI capabilities in science, engineering, and society, particularly in hypothesis development, problem formulation, problem-solving, and interpretability across various domains. We conclude by outlining future pathways for further advancement and applications.
Keywords: artificial intelligence; large language models; knowledge empowerment; model collaboration; model co-evolution
Adaptive multi-view learning method for enhanced drug repurposing using chemical-induced transcriptional profiles, knowledge graphs, and large language models
8
Authors: Yudong Yan, Yinqi Yang, Zhuohao Tong, Yu Wang, Fan Yang, Zupeng Pan, Chuan Liu, Mingze Bai, Yongfang Xie, Yuefei Li, Kunxian Shu, Yinghong Li. Journal of Pharmaceutical Analysis, 2025, Issue 6, pp. 1354-1369.
Drug repurposing offers a promising alternative to traditional drug development and significantly reduces costs and timelines by identifying new therapeutic uses for existing drugs. However, current approaches often rely on limited data sources and simplistic hypotheses, which restrict their ability to capture the multi-faceted nature of biological systems. This study introduces adaptive multi-view learning (AMVL), a novel methodology that integrates chemical-induced transcriptional profiles (CTPs), knowledge graph (KG) embeddings, and large language model (LLM) representations to enhance drug repurposing predictions. AMVL incorporates an innovative similarity matrix expansion strategy and leverages multi-view learning (MVL), matrix factorization, and ensemble optimization techniques to integrate heterogeneous multi-source data. Comprehensive evaluations on benchmark datasets (Fdataset, Cdataset, and Ydataset) and the large-scale iDrug dataset demonstrate that AMVL outperforms state-of-the-art (SOTA) methods, achieving superior accuracy in predicting drug-disease associations across multiple metrics. Literature-based validation further confirmed the model's predictive capabilities, with seven of the top ten predictions corroborated by post-2011 evidence. To promote transparency and reproducibility, all data and code used in this study were open-sourced, providing resources for processing CTPs, KG, and LLM-based similarity calculations, along with the complete AMVL algorithm and benchmarking procedures. By unifying diverse data modalities, AMVL offers a robust and scalable solution for accelerating drug discovery, fostering advancements in translational medicine and the integration of multi-omics data. We aim to inspire further innovations in multi-source data integration and support the development of more precise and efficient strategies for advancing drug discovery and translational medicine.
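Multi-view integration of the kind described can be sketched as a weighted fusion of per-view drug-drug similarity matrices. AMVL learns the combination via ensemble optimization; the weights and matrices below are fixed by hand for illustration:

```python
def fuse_views(sim_matrices, weights):
    """Weighted fusion of several same-shaped similarity matrices."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    rows, cols = len(sim_matrices[0]), len(sim_matrices[0][0])
    fused = [[0.0] * cols for _ in range(rows)]
    for w, m in zip(weights, sim_matrices):
        for i in range(rows):
            for j in range(cols):
                fused[i][j] += w * m[i][j]
    return fused

ctp_view = [[1.0, 0.2], [0.2, 1.0]]   # chemical-induced transcriptional profiles
kg_view  = [[1.0, 0.6], [0.6, 1.0]]   # knowledge-graph embeddings
llm_view = [[1.0, 0.4], [0.4, 1.0]]   # LLM text representations
fused = fuse_views([ctp_view, kg_view, llm_view], [0.5, 0.3, 0.2])
# fused[0][1] is approximately 0.36 (= 0.5*0.2 + 0.3*0.6 + 0.2*0.4)
```

The fused matrix would then feed the matrix-factorization step that scores candidate drug-disease associations.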
Keywords: drug repurposing; multi-view learning; chemical-induced transcriptional profile; knowledge graph; large language model; heterogeneous network
Toward a Large Language Model-Driven Medical Knowledge Retrieval and QA System: Framework Design and Evaluation
9
Authors: Yuyang Liu, Xiaoying Li, Yan Luo, Jinhua Du, Ying Zhang, Tingyu Lv, Hao Yin, Xiaoli Tang, Hui Liu. Engineering, 2025, Issue 7, pp. 270-282.
Recent advancements in large language models (LLMs) have driven remarkable progress in text processing, opening new avenues for medical knowledge discovery. In this study, we present ERQA, a mEdical knowledge Retrieval and Question-Answering framework powered by an enhanced LLM that integrates a semantic vector database and a curated literature repository. The ERQA framework leverages domain-specific incremental pretraining and conducts supervised fine-tuning on medical literature, enabling retrieval and question-answering (QA) tasks to be completed with high precision. Performance evaluations on the coronavirus disease 2019 (COVID-19) and TripClick datasets demonstrate the robust capabilities of ERQA across multiple tasks. On the COVID-19 dataset, ERQA-13B achieves state-of-the-art retrieval metrics, with a normalized discounted cumulative gain at top 10 (NDCG@10) of 0.297, recall at top 10 (Recall@10) of 0.347, and a mean reciprocal rank (MRR) of 0.370; it also attains strong abstract summarization performance, with a recall-oriented understudy for gisting evaluation (ROUGE)-1 score of 0.434, and QA performance, with a bilingual evaluation understudy (BLEU)-1 score of 7.851. The comparable performance achieved on the TripClick dataset further underscores the adaptability of ERQA across diverse medical topics. These findings suggest that ERQA represents a significant step toward efficient biomedical knowledge retrieval and QA.
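The retrieval metrics quoted above (NDCG@10, MRR) have standard definitions; the sketch below computes them on toy rankings and is not code from ERQA:

```python
import math

def ndcg_at_k(relevances, k):
    """NDCG@k for a ranked list of graded relevances (ideal = sorted desc)."""
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

def mrr(ranked_hits):
    """Mean reciprocal rank over queries; each entry is a 0/1 relevance list."""
    total = 0.0
    for hits in ranked_hits:
        for i, h in enumerate(hits):
            if h:
                total += 1.0 / (i + 1)
                break
    return total / len(ranked_hits)

print(ndcg_at_k([1, 0, 0], 3))       # perfect ranking -> 1.0
print(mrr([[0, 1, 0], [1, 0, 0]]))   # (1/2 + 1/1) / 2 -> 0.75
```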
Keywords: large language models; medical knowledge; information retrieval; vector database
Development of a large language model–based knowledge graph for chemotherapy-induced nausea and vomiting in breast cancer and its implications for nursing
10
Authors: Yu Liu, Jingjing Chen, Xianhui Lin, Jihong Song, Shaohua Chen. International Journal of Nursing Sciences, 2025, Issue 6, pp. 524-531.
Objectives: Chemotherapy-induced nausea and vomiting (CINV) is a common adverse effect among breast cancer patients, significantly affecting quality of life. Existing evidence on the prevention, assessment, and management of this condition is fragmented and inconsistent. This study constructed a CINV knowledge graph using a large language model (LLM) to integrate nursing and medical evidence, thereby supporting systematic clinical decision-making. Methods: A top-down approach was adopted. 1) Knowledge base preparation: Nine databases and eight guideline repositories were searched up to October 2024 to include guidelines, evidence summaries, expert consensuses, and systematic reviews screened by two researchers. 2) Schema design: Referring to the Unified Medical Language System, Systematized Nomenclature of Medicine-Clinical Terms, and the Nursing Intervention Classification, entity and relation types were defined to build the ontology schema. 3) LLM-based extraction and integration: Using the Qwen model under the CRISPE framework, named entity recognition, relation extraction, disambiguation, and fusion were conducted to generate triples and visualize them in Neo4j. Four expert rounds ensured semantic and logical consistency. Model performance was evaluated using precision, recall, F1-score, and 95% confidence interval (95% CI) in Python 3.11. Results: A total of 47 studies were included (18 guidelines, two expert consensuses, two evidence summaries, and 25 systematic reviews). The Qwen model extracted 273 entities and 289 relations; after expert validation, 238 entities and 242 relations were retained, forming 244 triples. The ontology comprised nine entity types and eight relation types. The F1-scores for named entity recognition and relation extraction were 0.830 (95% CI: 0.820, 0.839) and 0.855 (95% CI: 0.844, 0.867), respectively. The average node degree was 2.03, with no isolated nodes. Conclusion: The LLM-based CINV knowledge graph achieved structured integration of nursing and medical evidence, offering a novel, data-driven tool to support clinical nursing decision-making and advance intelligent healthcare.
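The precision/recall/F1 evaluation reported above works as sketched below. The true-positive count of 238 validated entities out of 273 extracted comes from the abstract, but the false-negative count is invented for illustration:

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from extraction counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# 238 entities kept of 273 extracted -> 35 spurious; assume 30 missed (made up).
p, r, f1 = prf1(tp=238, fp=35, fn=30)
print(round(p, 3), round(r, 3), round(f1, 3))
```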
Keywords: breast cancer; chemotherapy-induced nausea and vomiting; knowledge graph; large language model; symptom management
A Knowledge Push Method of Complex Product Assembly Process Design Based on Distillation Model-Based Dynamically Enhanced Graph and Bayesian Network
11
Authors: Fengque Pei, Yaojie Lin, Jianhua Liu, Cunbo Zhuang, Sikuan Zhai. Chinese Journal of Mechanical Engineering, 2025, Issue 6, pp. 117-134.
Under the paradigm of Industry 5.0, intelligent manufacturing transcends mere efficiency enhancement by emphasizing human-machine collaboration, where human expertise plays a central role in assembly processes. Despite advancements in intelligent and digital technologies, assembly process design still heavily relies on manual knowledge reuse, causing inefficiencies and inconsistent quality in process documentation. To address these issues, this paper proposes a knowledge push method for complex product assembly process design based on a distillation model-based dynamically enhanced graph and a Bayesian network. First, an initial knowledge graph is constructed using a BERT-BiLSTM-CRF model trained with integrated human expertise and a fine-tuned large language model. Then, a confidence-based dynamic weighted fusion strategy is employed to achieve dynamic incremental construction of the knowledge graph with low resource consumption. Subsequently, a Bayesian network model is constructed based on the relationships between assembly components, assembly features, and operations. Bayesian network reasoning is used to push assembly process knowledge under different design requirements. Finally, the feasibility of the Bayesian network construction method and the effectiveness of Bayesian network reasoning are verified through a specific example, significantly improving the utilization of assembly process knowledge and the efficiency of assembly process design.
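Bayesian-network-driven knowledge push can be sketched as posterior inference over candidate operations given an observed component. The network structure, priors, and conditional probabilities below are toy values, not the paper's model:

```python
def posterior_operation(component, cpt, prior):
    """P(operation | component) via Bayes' rule on a tiny two-node network.

    cpt[op][component] = P(component | operation); all numbers are made up.
    """
    joint = {op: prior[op] * cpt[op].get(component, 0.0) for op in prior}
    z = sum(joint.values())
    return {op: p / z for op, p in joint.items()}

prior = {"press_fit": 0.5, "bolt_fasten": 0.5}
cpt = {
    "press_fit":   {"bearing": 0.8, "bracket": 0.2},
    "bolt_fasten": {"bearing": 0.1, "bracket": 0.9},
}
post = posterior_operation("bearing", cpt, prior)
print(round(post["press_fit"], 3))   # 0.4 / 0.45 -> 0.889
```

Given a new design requirement ("this assembly contains a bearing"), the highest-posterior operation is what gets pushed to the process designer.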
Keywords: complex product assembly process; large language model; dynamic incremental construction of knowledge graph; Bayesian network; knowledge push
A Dynamic Knowledge Base Updating Mechanism-Based Retrieval-Augmented Generation Framework for Intelligent Question-and-Answer Systems (Cited by 1)
12
Authors: Yu Li. Journal of Computer and Communications, 2025, Issue 1, pp. 41-58.
In the context of power generation companies, vast amounts of specialized data and expert knowledge have been accumulated. However, challenges such as data silos and fragmented knowledge hinder the effective utilization of this information. This study proposes a novel framework for intelligent Question-and-Answer (Q&A) systems based on Retrieval-Augmented Generation (RAG) to address these issues. The system efficiently acquires domain-specific knowledge by leveraging external databases, including Relational Databases (RDBs) and graph databases, without additional fine-tuning for Large Language Models (LLMs). Crucially, the framework integrates a Dynamic Knowledge Base Updating Mechanism (DKBUM) and a Weighted Context-Aware Similarity (WCAS) method to enhance retrieval accuracy and mitigate inherent limitations of LLMs, such as hallucinations and lack of specialization. Additionally, the proposed DKBUM dynamically adjusts knowledge weights within the database, ensuring that the most recent and relevant information is utilized, while WCAS refines the alignment between queries and knowledge items through enhanced context understanding. Experimental validation demonstrates that the system can generate timely, accurate, and context-sensitive responses, making it a robust solution for managing complex business logic in specialized industries.
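A minimal sketch of how a dynamic weight update (DKBUM-like) could interact with a weighted similarity score (WCAS-like). Both formulas are stand-ins, since the abstract does not specify them:

```python
def wcas_score(query_terms, item_terms, weights):
    """Toy weighted similarity: Jaccard-style overlap where each shared term
    contributes its current knowledge-base weight instead of a flat 1."""
    shared = set(query_terms) & set(item_terms)
    union = set(query_terms) | set(item_terms)
    return sum(weights.get(t, 1.0) for t in shared) / len(union)

def dkbum_update(weights, used_terms, boost=1.1, decay=0.99):
    """Toy dynamic update: boost weights of terms that appeared in a
    retrieved answer, gently decay everything else."""
    return {t: w * (boost if t in used_terms else decay)
            for t, w in weights.items()}

weights = {"turbine": 1.0, "valve": 1.0}
s1 = wcas_score(["turbine", "trip"], ["turbine", "alarm"], weights)
weights = dkbum_update(weights, {"turbine"})      # "turbine" knowledge was used
s2 = wcas_score(["turbine", "trip"], ["turbine", "alarm"], weights)
print(s2 > s1)   # -> True: recently exercised knowledge ranks higher
```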
Keywords: retrieval-augmented generation; question-and-answer; large language models; dynamic knowledge base updating mechanism; weighted context-aware similarity
LKPNR: Large Language Models and Knowledge Graph for Personalized News Recommendation Framework (Cited by 1)
13
Authors: Hao Chen, Runfeng Xie, Xiangyang Cui, Zhou Yan, Xin Wang, Zhanwei Xuan, Kai Zhang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 4283-4296.
Accurately recommending candidate news to users is a basic challenge for personalized news recommendation systems. Traditional methods usually find it difficult to learn and acquire the complex semantic information in news texts, resulting in unsatisfactory recommendation results. Besides, these traditional methods are friendlier to active users with rich historical behaviors and cannot effectively solve the long-tail problem of inactive users. To address these issues, this research presents a novel general framework that combines Large Language Models (LLMs) and Knowledge Graphs (KGs) with traditional methods. To learn the contextual information of news text, we use LLMs' powerful text understanding ability to generate news representations with rich semantic information; the generated news representations are then used to enhance the news encoding in traditional methods. In addition, multi-hop relationships of news entities are mined and the structural information of news is encoded using the KG, thus alleviating the challenge of long-tail distribution. Experimental results demonstrate that, compared with various traditional models, the framework significantly improves recommendation performance on evaluation indicators such as AUC, MRR, nDCG@5, and nDCG@10. The successful integration of LLM and KG in our framework establishes a feasible way to achieve more accurate personalized news recommendation. Our code is available at https://github.com/Xuan-ZW/LKPNR.
Keywords: large language models; news recommendation; knowledge graphs (KG)
The Design and Practice of an Enhanced Search for Maritime Transportation Knowledge Graph Based on Semi-Schema Constraints
14
Authors: Yiwen Gao, Shaohan Wang, Feiyang Ren, Xinbo Wang. Journal of Computer and Communications, 2025, Issue 2, pp. 94-125.
With the continuous development of artificial intelligence and natural language processing technologies, traditional retrieval-augmented generation (RAG) techniques face numerous challenges in document answer precision and similarity measurement. This study, set against the backdrop of the shipping industry, combines top-down and bottom-up schema design strategies to achieve precise and flexible knowledge representation. The research adopts a semi-structured approach, innovatively constructing an adaptive schema generation mechanism based on reinforcement learning, which models the knowledge graph construction process as a Markov decision process. This method begins with general concepts, defining foundational industry concepts, and then delves into abstracting core concepts specific to the maritime domain through an adaptive pattern generation mechanism that dynamically adjusts the knowledge structure. Specifically, the study designs a four-layer knowledge construction framework, including the data layer, modeling layer, technology layer, and application layer. It draws on a mutual indexing strategy, integrating large language models and traditional information extraction techniques. By leveraging self-attention mechanisms and graph attention networks, it efficiently extracts semantic relationships. The introduction of logic-form-driven solvers and symbolic decomposition techniques for reasoning significantly enhances the model's ability to understand complex semantic relationships. Additionally, the use of open information extraction and knowledge alignment techniques further improves the efficiency and accuracy of information retrieval. Experimental results demonstrate that the proposed method not only achieves significant performance improvements in knowledge graph retrieval within the shipping domain but also holds important theoretical innovation and practical application value.
Keywords: large language models; knowledge graphs; graph attention networks; maritime transportation
A Maritime Document Knowledge Graph Construction Method Based on Conceptual Proximity Relations
15
Authors: Yiwen Lin, Tao Yang, Yuqi Shao, Meng Yuan, Pinghua Hu, Chen Li. Journal of Computer and Communications, 2025, Issue 2, pp. 51-67.
The cost and strict input format requirements of GraphRAG make it less efficient for processing large documents. This paper proposes an alternative approach for constructing a knowledge graph (KG) from a PDF document with a focus on simplicity and cost-effectiveness. The process involves splitting the document into chunks, extracting concepts within each chunk using a large language model (LLM), and building relationships based on the proximity of concepts in the same chunk. Unlike traditional named entity recognition (NER), which identifies entities like "Shanghai", the proposed method identifies concepts, such as "Convenient transportation in Shanghai", which are found to be more meaningful for KG construction. Each edge in the KG represents a relationship between concepts occurring in the same text chunk. The process is computationally inexpensive, leveraging locally set up tools like Mistral 7B openorca instruct and Ollama for model inference, ensuring the entire graph generation process is cost-free. A method of assigning weights to relationships, grouping similar pairs, and summarizing multiple relationships into a single edge with associated weight and relation details is introduced. Additionally, node degrees and communities are calculated for node sizing and coloring. This approach offers a scalable, cost-effective solution for generating meaningful knowledge graphs from large documents, achieving results comparable to GraphRAG while maintaining accessibility for personal machines.
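The chunk-to-graph pipeline described above reduces to counting concept co-occurrence per chunk. Concept extraction itself is stubbed here (the paper delegates it to a local LLM), and the sample chunks are invented:

```python
from collections import defaultdict
from itertools import combinations

def build_concept_graph(chunks):
    """Edge weight = number of chunks in which a pair of concepts co-occurs.

    Each chunk is assumed to be a list of already-extracted concepts; in the
    paper, an LLM performs that extraction step per text chunk.
    """
    edges = defaultdict(int)
    for concepts in chunks:
        # sorted() gives a canonical key so (a, b) and (b, a) merge.
        for a, b in combinations(sorted(set(concepts)), 2):
            edges[(a, b)] += 1
    return dict(edges)

chunks = [
    ["convenient transportation in Shanghai", "port throughput"],
    ["port throughput", "container shipping routes"],
    ["convenient transportation in Shanghai", "port throughput"],
]
g = build_concept_graph(chunks)
print(g[("convenient transportation in Shanghai", "port throughput")])  # -> 2
```

Node degree and community detection for sizing/coloring could then run over `g` with any graph library.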
Keywords: knowledge graph; large language model; concept extraction; cost-effective graph construction
16. Corpus and Knowledge Graph-Assisted Integrated English Teaching
Author: YU Weiwei. Sino-US English Teaching, 2025, No. 3, pp. 113-117 (5 pages)
With the continuous advancement of information technology, corpora and knowledge graphs (KGs) have become indispensable tools in modern language learning. This study explores how the integration of corpora and KGs in integrated English teaching can enhance students' abilities in vocabulary acquisition, grammar understanding, and discourse analysis. Through a comprehensive literature review, it elaborates on the theoretical foundations and practical values of these two technological tools in English instruction. The study designs a teaching model based on corpora and KGs and analyzes its specific applications in vocabulary, grammar, and discourse teaching within the Integrated English course. Additionally, the article discusses the challenges that may arise during implementation and proposes corresponding solutions. Finally, it envisions future research directions and application prospects.
Keywords: corpus; knowledge graph; integrated English teaching; teaching model; language proficiency; educational innovation
17. Knowledge graphs in heterogeneous catalysis: Recent advances and future opportunities
Authors: Raúl Díaz, Hongliang Xin. Chinese Journal of Chemical Engineering, 2025, No. 8, pp. 179-189 (11 pages)
Knowledge graphs (KGs) offer a structured, machine-readable format for organizing complex information. In heterogeneous catalysis, where data on catalytic materials, reaction conditions, mechanisms, and synthesis routes are dispersed across diverse sources, KGs provide a semantic framework that supports data integration under the FAIR (Findable, Accessible, Interoperable, and Reusable) principles. This review aims to survey recent developments in catalysis KGs, describe the main techniques for graph construction, and highlight how artificial intelligence, particularly large language models (LLMs), enhances graph generation and query. We conducted a systematic analysis of the literature, focusing on ontology-guided text mining pipelines, graph population methods, and maintenance strategies. Our review identifies key trends: ontology-based approaches enable the automated extraction of domain knowledge, LLM-driven retrieval-augmented generation supports natural-language queries, and scalable graph architectures range from a few thousand to over a million triples. We discuss state-of-the-art applications, such as catalyst recommendation systems and reaction mechanism discovery tools, and examine the major challenges, including data heterogeneity, ontology alignment, and long-term graph curation. We conclude that KGs, when combined with AI methods, hold significant promise for accelerating catalyst discovery and knowledge management, but progress depends on establishing community standards for ontology development and maintenance. This review provides a roadmap for researchers seeking to leverage KGs to advance heterogeneous catalysis research.
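The retrieval step of the LLM-driven, KG-backed query pattern mentioned in this abstract can be illustrated minimally. This is a hedged sketch under stated assumptions, not any system from the review: real pipelines would use embedding similarity or SPARQL over an ontology, and the triples and scoring here are invented for illustration.

```python
def retrieve_triples(kg: list[tuple[str, str, str]], query: str, top_n: int = 3):
    """Naive retrieval step of a KG-backed RAG loop: score each
    (subject, relation, object) triple by word overlap with the query
    and return the best matches to feed into an LLM prompt."""
    q = set(query.lower().split())

    def score(triple: tuple[str, str, str]) -> int:
        words = set(" ".join(triple).lower().replace("_", " ").split())
        return len(q & words)

    return sorted(kg, key=score, reverse=True)[:top_n]
```

The returned triples would then be serialized into the prompt context, letting the LLM answer natural-language questions grounded in the graph.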
Keywords: heterogeneous catalysis; knowledge graph; ontology; large language models; deep learning
18. Artificial Intelligence for Spleen-Stomach Disorders in Traditional Chinese Medicine: Integrating Knowledge Graphs with Intelligent Diagnosis and Treatment
Authors: Yu-yu Duan, Si-feng Jia, Song Ye, Lekhang Cheang, Wahou Tai, Li-zhi Xiang, Zhe-wei Ye. Current Medical Science, 2025, No. 6, pp. 1348-1357 (10 pages)
Spleen-Stomach disorders are prevalent clinical conditions in Traditional Chinese Medicine (TCM). The complex diagnostic and treatment model used in TCM is based on a "symptom-pattern-disease-formula" framework that heavily relies on practitioners' experience. However, this model faces several challenges, including ambiguous knowledge representation, unstructured data, and difficulties with knowledge sharing. Recent advancements in artificial intelligence, natural language processing, and medical knowledge engineering have significantly improved research on knowledge graphs (KGs) and intelligent diagnosis and treatment systems for these disorders, making these technologies crucial for modernizing TCM. This article systematically reviews two core research pathways related to Spleen-Stomach disorders. The first pathway focuses on constructing knowledge graphs for "structured knowledge representation". This includes ontology modeling, entity recognition, relation extraction, graph fusion, semantic reasoning, visualization services, and an ensemble model to predict treatment efficacy. The second pathway involves the development of intelligent diagnosis and treatment systems, with a focus on "clinical applications". This pathway includes key technologies such as quantitative modeling of TCM, the four diagnostic methods (inspection, auscultation-olfaction, interrogation, and palpation), semantic analysis of classical texts, pattern differentiation algorithms, and multimodal consultation recommenders. Through the synthesis and analysis of current research, several ongoing challenges have been identified. These include inconsistent models and annotation of TCM clinical knowledge, limited semantic reasoning capabilities, insufficient integration between KGs and intelligent diagnostic models, and limited clinical adaptability of existing intelligent diagnostic systems. To address these challenges, this review suggests future research directions that include enhancing heterogeneous multisource knowledge integration techniques, deepening semantic reasoning through collaborative reasoning frameworks that incorporate large language models, and developing effective cross-disease transfer learning strategies. These directions aim to improve interpretability, reasoning accuracy, and clinical applicability of intelligent diagnosis and treatment systems for Spleen-Stomach disorders in TCM.
Keywords: knowledge graphs; intelligent diagnosis and treatment; Spleen-Stomach disorders; natural language processing; large language models; syndrome differentiation; Traditional Chinese medicine informatics
19. Dual-Perspective Evaluation of Knowledge Graphs for Graph-to-Text Generation
Authors: Haotong Wang, Liyan Wang, Yves Lepage. Computers, Materials & Continua, 2025, No. 7, pp. 305-324 (20 pages)
Data curation is vital for selecting effective demonstration examples in graph-to-text generation. However, evaluating the quality of Knowledge Graphs (KGs) remains challenging. Prior research exhibits a narrow focus on structural statistics, such as the shortest path length, while the correctness of graphs in representing the associated text is rarely explored. To address this gap, we introduce a dual-perspective evaluation framework for KG-text data, based on the computation of structural adequacy and semantic alignment. From a structural perspective, we propose the Weighted Incremental Edge Method (WIEM) to quantify graph completeness by leveraging agreement between relation models to predict possible edges between entities. WIEM aims to find increments from models on "unseen links", whose presence is inversely proportional to the structural adequacy of the original KG in representing the text. From a semantic perspective, we evaluate how well a KG aligns with the text in capturing the intended meaning. To do so, we instruct a large language model to convert KGs into natural language and measure the similarity between generated and reference texts. Based on these computations, we apply a Top-K union method, integrating the structural and semantic modules, to rank and select high-quality KGs. We evaluate our framework against various approaches for selecting few-shot examples in graph-to-text generation. Experiments on the Association for Computational Linguistics Abstract Graph Dataset (ACL-AGD) and Automatic Content Extraction 05 (ACE05) dataset demonstrate the effectiveness of our approach in distinguishing KG-text data of different qualities, evidenced by the largest performance gap between top- and bottom-ranked examples. We also find that the top examples selected through our dual-perspective framework consistently yield better performance than those selected by traditional measures. These results highlight the importance of data curation in improving graph-to-text generation.
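The Top-K union selection mentioned in this abstract has a simple shape: rank candidates by each score and keep anything in the top K of either ranking. The sketch below is an assumption about that shape only; the actual structural (WIEM) and semantic (LLM-similarity) scoring functions from the paper are treated as opaque inputs.

```python
def top_k_union(items, structural, semantic, k=2):
    """Select items appearing in the top-k of either ranking.
    `structural` and `semantic` map item -> score (higher is better);
    computing those scores is outside the scope of this sketch."""
    by_struct = sorted(items, key=lambda x: structural[x], reverse=True)[:k]
    by_sem = sorted(items, key=lambda x: semantic[x], reverse=True)[:k]
    return sorted(set(by_struct) | set(by_sem))
```

For example, with k=1 the method keeps the structurally best graph and the semantically best graph, even when those are different candidates.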
Keywords: knowledge graph evaluation; graph-to-text generation; scientific abstract; large language model