Large Knowledge Model: Perspectives and Challenges (Cited: 2)
Author: Huajun Chen. Data Intelligence (EI), 2024, Issue 3, pp. 587-620 (34 pages)
Abstract: Humankind's understanding of the world is fundamentally linked to our perception and cognition, with human languages serving as one of the major carriers of world knowledge. In this vein, Large Language Models (LLMs) like ChatGPT epitomize the pre-training of extensive, sequence-based world knowledge into neural networks, facilitating the processing and manipulation of this knowledge in a parametric space. This article explores large models through the lens of "knowledge". We initially investigate the role of symbolic knowledge such as Knowledge Graphs (KGs) in enhancing LLMs, covering aspects like knowledge-augmented language models, structure-inducing pre-training, knowledgeable prompts, structured CoT, knowledge editing, semantic tools for LLMs, and knowledgeable AI agents. Subsequently, we examine how LLMs can boost traditional symbolic knowledge bases, encompassing aspects like using LLMs as KG builders and controllers, structured knowledge pre-training, and LLM-enhanced symbolic reasoning. Considering the intricate nature of human knowledge, we advocate for the creation of Large Knowledge Models (LKM), specifically engineered to manage a diversified spectrum of knowledge structures. This promising undertaking would entail several key challenges, such as disentangling the knowledge base from the language model, cognitive alignment with human knowledge, integration of perception and cognition, and building large commonsense models for interacting with the physical world, among others. We finally propose a five-"A" principle to distinguish the concept of LKM.
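As a rough illustration of the "knowledgeable prompts" idea the abstract surveys — retrieving symbolic KG triples and injecting them into an LLM prompt as explicit context — the following minimal Python sketch uses a toy in-memory triple store. The triple data, the `retrieve_triples` helper, and the prompt format are illustrative assumptions for this listing, not the paper's implementation.

```python
# Minimal sketch of knowledge-augmented prompting: retrieve triples
# about entities mentioned in a question and prepend them to the prompt.

# Toy in-memory knowledge graph: (subject, relation, object) triples.
KG = [
    ("ChatGPT", "developed_by", "OpenAI"),
    ("ChatGPT", "instance_of", "Large Language Model"),
    ("Knowledge Graph", "represents", "symbolic knowledge"),
]

def retrieve_triples(question, kg):
    """Return triples whose subject appears in the question (naive match)."""
    return [t for t in kg if t[0].lower() in question.lower()]

def build_prompt(question, kg):
    """Serialize retrieved triples as fact lines placed before the question."""
    facts = retrieve_triples(question, kg)
    context = "\n".join(f"- {s} {r} {o}" for s, r, o in facts)
    return f"Known facts:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Who developed ChatGPT?", KG)
print(prompt)
```

In a real system the naive substring match would be replaced by entity linking and a KG query (e.g. SPARQL), and the assembled prompt would be sent to an LLM; the sketch only shows the prompt-construction step.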
Keywords: Large Language Model, Knowledge Graph, Large Knowledge Model, Knowledge Representation, Knowledge Augmentation