Funding: Science and Technology Innovation 2030 Major Project of "New Generation Artificial Intelligence", granted by the Ministry of Science and Technology, Grant Number 2020AAA0109300.
Abstract: In the process of constructing domain-specific knowledge graphs, the task of relational triple extraction plays a critical role in transforming unstructured text into structured information. Existing relational triple extraction models face multiple challenges when processing domain-specific data, including insufficient utilization of the semantic interaction information between entities and relations, difficulties in handling challenging samples, and the scarcity of domain-specific datasets. To address these issues, our study introduces three innovative components: relation semantic enhancement, data augmentation, and a voting strategy, all designed to significantly improve the model's performance on domain-specific relational triple extraction tasks. We first propose an innovative attention interaction module, which significantly enhances the semantic interaction between entities and relations by integrating semantic information from relation labels. Second, we propose a voting strategy that effectively combines the strengths of large language models (LLMs) and fine-tuned small pre-trained language models (SLMs) to reevaluate challenging samples, thereby improving the model's adaptability in specific domains. Additionally, we explore the use of LLMs for data augmentation, aiming to generate domain-specific datasets that alleviate the scarcity of domain data. Experiments conducted on three domain-specific datasets demonstrate that our model outperforms existing comparative models in several aspects, with F1 scores exceeding those of state-of-the-art models by 2%, 1.6%, and 0.6%, respectively, validating the effectiveness and generalizability of our approach.
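The voting strategy described above can be illustrated with a minimal sketch. All function names, the confidence threshold, and the accept/reject interface are illustrative assumptions, not the authors' implementation: the fine-tuned SLM extracts triples with confidence scores, low-confidence ("challenging") samples are re-evaluated by an LLM, and the combined decision determines which triples are kept.

```python
# Hypothetical sketch of the SLM/LLM voting strategy (names and the
# threshold are assumptions for illustration, not the paper's code).

def vote_on_triples(slm_predictions, llm_reevaluate, threshold=0.8):
    """slm_predictions: list of (triple, confidence) pairs from the SLM.
    llm_reevaluate: callable returning True/False for a single triple."""
    accepted = []
    for triple, confidence in slm_predictions:
        if confidence >= threshold:
            accepted.append(triple)       # easy sample: trust the SLM
        elif llm_reevaluate(triple):      # challenging sample: ask the LLM
            accepted.append(triple)
    return accepted

# Toy usage with a stub "LLM" that accepts triples whose relation is "founded":
preds = [(("Alice", "founded", "Acme"), 0.95),
         (("Acme", "located_in", "Paris"), 0.40),
         (("Bob", "founded", "Beta"), 0.55)]
kept = vote_on_triples(preds, lambda t: t[1] == "founded")
```

In this toy run, the high-confidence first triple is kept directly, while the two low-confidence triples are deferred to the stub re-evaluator, which accepts only the third.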
Funding: The authors of this work would like to thank the Center for Artificial Intelligence (C4AI-USP) and the support from the São Paulo Research Foundation (FAPESP grant #2019/07665-4) and from the IBM Corporation. Fabio G. Cozman acknowledges partial support by CNPq grant Pq 305753/2022-3. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brazil (CAPES) - Finance Code 001.
Abstract: Knowledge graphs are employed in several tasks, such as question answering and recommendation systems, due to their ability to represent relationships between concepts. Automatically constructing such graphs, however, remains an unresolved challenge within knowledge representation. To tackle this challenge, we propose CtxKG, a method specifically aimed at extracting knowledge graphs in a limited-resource context in which the only input is a set of unstructured text documents. CtxKG is based on OpenIE (a relationship triple extraction method) and BERT (a language model) and consists of four stages: the extraction of relationship triples directly from text; the identification of synonyms across triples; the merging of similar entities; and the building of bridges between the knowledge graphs of different documents. Our method distinguishes itself from those in the current literature (i) through its use of the parse tree to avoid the overlapping entities produced by base implementations of OpenIE, and (ii) through its bridges, which create a connected network of graphs, overcoming a limitation of similar methods: one isolated graph per document. We compare our method to two others by generating graphs for movie articles from Wikipedia and contrasting them with benchmark graphs built from the OMDb movie database. Our results suggest that our method improves multiple aspects of knowledge graph construction. They also highlight the critical role that triple identification and named-entity recognition play in improving the quality of automatically generated graphs, suggesting future paths for investigation. Finally, we apply CtxKG to build BlabKG, a knowledge graph for the Blue Amazon, and discuss possible improvements.
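The four stages listed in the abstract can be sketched as a pipeline skeleton. Every function body here is a stub invented for illustration; in the actual method, stage 1 relies on OpenIE and stages 2-3 on BERT-based similarity.

```python
# Skeleton of CtxKG's four stages as described in the abstract.
# All function bodies are illustrative stubs, not the real implementation.

def extract_triples(documents):
    # Stage 1: relationship triples from each document (OpenIE in the paper).
    return {doc_id: [("EntityA", "relates_to", "EntityB")] for doc_id in documents}

def identify_synonyms(triples_by_doc):
    # Stage 2: map surface forms to canonical names
    # (stubbed here; BERT similarity in the paper).
    return {"EntityA": "entity_a", "EntityB": "entity_b"}

def merge_entities(triples_by_doc, synonyms):
    # Stage 3: merge similar entities within each document's graph.
    return {doc_id: [(synonyms.get(h, h), r, synonyms.get(t, t))
                     for h, r, t in triples]
            for doc_id, triples in triples_by_doc.items()}

def build_bridges(graphs):
    # Stage 4: link entities shared across documents so the per-document
    # graphs form one connected network instead of isolated graphs.
    entity_to_docs = {}
    for doc_id, triples in graphs.items():
        for h, _, t in triples:
            for entity in (h, t):
                entity_to_docs.setdefault(entity, set()).add(doc_id)
    return {e: docs for e, docs in entity_to_docs.items() if len(docs) > 1}

docs = ["doc1", "doc2"]
graphs = merge_entities(extract_triples(docs), identify_synonyms(extract_triples(docs)))
bridges = build_bridges(graphs)
```

The bridge stage is what distinguishes the output from one-graph-per-document methods: any canonical entity appearing in more than one document becomes a connection point between the per-document graphs.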