Journal Literature: 3 articles found
1. Text-augmented long-term relation dependency learning for knowledge graph representation
Authors: Quntao Zhu, Mengfan Li, Yuanjun Gao, Yao Wan, Xuanhua Shi, Hai Jin. High-Confidence Computing, 2025, Issue 4, pp. 43-56 (14 pages).
Abstract: Knowledge graph (KG) representation learning aims to map entities and relations into a low-dimensional representation space, showing significant potential in many tasks. Existing approaches fall into two categories: (1) graph-based approaches encode KG elements into vectors using structural score functions; (2) text-based approaches embed text descriptions of entities and relations via pre-trained language models (PLMs), further fine-tuned with triples. We argue that graph-based approaches struggle with sparse data, while text-based approaches face challenges with complex relations. To address these limitations, we propose a unified Text-Augmented Attention-based Recurrent Network, bridging the gap between graph and natural language. Specifically, we employ a graph attention network based on local influence weights to model local structural information and utilize PLM-based prompt learning to learn textual information, enhanced by a mask-reconstruction strategy based on global influence weights and textual contrastive learning for improved robustness and generalizability. In addition, to effectively model multi-hop relations, we propose a novel semantic-depth-guided path extraction algorithm and integrate cross-attention layers into recurrent neural networks to facilitate learning long-term relation dependencies and to offer an adaptive attention mechanism for varied-length information. Extensive experiments demonstrate that our model outperforms existing models on KG completion and question-answering tasks.
Keywords: knowledge graph representation; graph attention network; pre-trained language model; attention-based recurrent network; masked autoencoder; contrastive learning
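The abstract describes integrating cross-attention layers into a recurrent network so a query entity can weight the hops of a varied-length relation path adaptively. Below is a minimal sketch of that idea; the GRU choice, module names, and dimensions are assumptions for illustration, not the paper's actual architecture.

```python
# Hypothetical sketch: cross-attention over per-hop RNN states of a relation path.
import torch
import torch.nn as nn

class PathEncoder(nn.Module):
    """Encodes a multi-hop relation path with a GRU, then lets a query entity
    cross-attend over the per-hop hidden states to weight hops adaptively."""
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, path_rel_emb: torch.Tensor, query_ent_emb: torch.Tensor):
        # path_rel_emb: (batch, hops, dim); query_ent_emb: (batch, dim)
        hidden, _ = self.gru(path_rel_emb)             # per-hop states
        query = query_ent_emb.unsqueeze(1)             # (batch, 1, dim)
        fused, attn_w = self.cross_attn(query, hidden, hidden)
        return fused.squeeze(1), attn_w                # path representation

encoder = PathEncoder()
paths = torch.randn(8, 5, 128)   # a batch of 5-hop relation paths
queries = torch.randn(8, 128)    # head-entity embeddings
rep, weights = encoder(paths, queries)
print(rep.shape, weights.shape)  # torch.Size([8, 128]) torch.Size([8, 1, 5])
```

The attention weights expose which hops dominate each path representation, which is one plausible reading of the "adaptive attention mechanism for varied-length information" in the abstract.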
2. Knowledge Graph Embedding for Hyper-Relational Data (Cited by: 8)
Authors: Chunhong Zhang, Miao Zhou, Xiao Han, Zheng Hu, Yang Ji. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2017, Issue 2, pp. 185-197 (13 pages).
Abstract: Knowledge graph representation has been a long-standing goal of artificial intelligence. In this paper, we consider a method for knowledge graph embedding of hyper-relational data, which are commonly found in knowledge graphs. Previous models such as Trans(E, H, R) and CTransR are either insufficient for embedding hyper-relational data or focus on projecting an entity into multiple embeddings, which might not be effective for generalization or accurately reflect real knowledge. To overcome these issues, we propose the novel model TransHR, which transforms the hyper-relations between a pair of entities into an individual vector that serves as a translation between them. We experimentally evaluate our model on two typical tasks: link prediction and triple classification. The results demonstrate that TransHR significantly outperforms Trans(E, H, R) and CTransR, especially on hyper-relational data.
Keywords: distributed representation; transfer matrix; knowledge graph embedding
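The translation idea the abstract describes can be sketched as follows: the several relations linking an entity pair are aggregated into a single vector r, and the triple is then scored TransE-style by ||h + r - t||. The aggregation below (a learned linear map, echoing the "transfer matrix" keyword, applied to the mean of relation embeddings) is an assumption for illustration; TransHR's exact transform may differ.

```python
# Hypothetical sketch of a TransHR-style scorer; aggregation is assumed, not the paper's.
import torch
import torch.nn as nn

class TransHRSketch(nn.Module):
    def __init__(self, n_entities: int, n_relations: int, dim: int = 100):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.transform = nn.Linear(dim, dim)  # assumed stand-in for a transfer matrix

    def score(self, head, rel_ids, tail):
        # rel_ids: (batch, k) indices of the k relations linking head to tail
        h, t = self.ent(head), self.ent(tail)
        r = self.transform(self.rel(rel_ids).mean(dim=1))  # one translation vector
        return torch.norm(h + r - t, p=2, dim=-1)  # lower score = more plausible

model = TransHRSketch(n_entities=1000, n_relations=50)
heads, tails = torch.tensor([0, 1]), torch.tensor([2, 3])
rels = torch.tensor([[4, 7], [5, 5]])  # two relations per entity pair
print(model.score(heads, rels, tails))
```

Collapsing the relation set into one vector is what distinguishes this from projecting an entity into multiple embeddings, which the abstract argues can hurt generalization.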
3. Visual Entity Linking via Multi-modal Learning (Cited by: 4)
Authors: Qiushuo Zheng, Hao Wen, Meng Wang, Guilin Qi. Data Intelligence (EI), 2022, Issue 1, pp. 1-19 (19 pages).
Abstract: Existing visual scene understanding methods mainly focus on identifying coarse-grained concepts about visual objects and their relationships, largely neglecting fine-grained scene understanding. In fact, many data-driven applications on the Web (e.g., news reading and e-shopping) require accurate recognition of finer-grained concepts as entities and proper linking of them to a knowledge graph (KG), which can take their performance to the next level. In light of this, we identify a new research task in this paper: visual entity linking for fine-grained scene understanding. To accomplish the task, we first extract features of candidate entities from different modalities, i.e., visual features, textual features, and KG features. Then, we design a deep modal-attention neural-network-based learning-to-rank method that aggregates all features and maps visual objects to entities in the KG. Extensive experimental results on the newly constructed dataset show that our proposed method is effective, significantly improving accuracy from 66.46% to 83.16% compared with baselines.
Keywords: knowledge graph; multi-modal learning; entity linking; learning to rank; knowledge graph representation
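To make the ranking pipeline concrete, here is a hedged sketch of a modal-attention scorer in the spirit of this abstract: each candidate entity's visual, textual, and KG features are fused with learned attention weights and scored against the detected object, trained with a margin ranking loss. The bilinear scorer, dimensions, and fusion form are assumptions; the abstract does not specify them.

```python
# Hypothetical sketch: attention-weighted fusion of 3 modalities + learning to rank.
import torch
import torch.nn as nn

class ModalAttentionRanker(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.modal_attn = nn.Linear(dim, 1)    # scores each modality's feature
        self.scorer = nn.Bilinear(dim, dim, 1)

    def forward(self, obj_feat, cand_feats):
        # obj_feat: (batch, dim); cand_feats: (batch, n_cand, 3, dim)
        # Fuse the 3 modality features per candidate with softmax attention weights.
        w = torch.softmax(self.modal_attn(cand_feats), dim=2)
        fused = (w * cand_feats).sum(dim=2)               # (batch, n_cand, dim)
        obj = obj_feat.unsqueeze(1).expand_as(fused).contiguous()
        return self.scorer(obj, fused).squeeze(-1)        # (batch, n_cand) scores

ranker = ModalAttentionRanker()
scores = ranker(torch.randn(4, 256), torch.randn(4, 10, 3, 256))
# Pairwise ranking loss: the gold candidate (assumed at index 0) should outscore others.
loss = nn.MarginRankingLoss(margin=1.0)(scores[:, 0], scores[:, 1], torch.ones(4))
print(scores.shape, loss.item())
```

At inference, the candidate with the highest score is the predicted KG entity for the visual object.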