Abstract: Question answering over knowledge base (KBQA) is an important component of question answering systems. In recent years, with the successful application of representation learning techniques, exemplified by deep learning, in many fields, many researchers have begun to study KBQA techniques based on representation learning. The basic assumption is to treat KBQA as a semantic matching process: through representation learning, the entities and relations in the knowledge base, as well as the question text, are mapped to numerical vectors in a low-dimensional semantic space; on this basis, the answer most semantically similar to the user's question can be matched directly by numerical computation. Current results show that KBQA systems based on representation learning have already surpassed traditional KBQA methods in performance. This paper surveys the research progress of representation-learning-based KBQA, including representative work on knowledge base representation learning and question (text) representation learning, and analyzes and discusses the difficulties involved and the research problems that remain open.
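The semantic-matching step described above can be sketched in a few lines: embed the question and the candidate answers as vectors, then return the candidate closest to the question by cosine similarity. The embeddings below are hand-crafted toy vectors purely for illustration; a real system would learn them from data.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def match_answer(question_vec, candidate_vecs):
    # Pick the candidate answer whose embedding is closest to the
    # question embedding -- the core semantic-matching step.
    return max(candidate_vecs,
               key=lambda name: cosine(question_vec, candidate_vecs[name]))

# Hand-crafted toy embeddings (a trained system would learn these).
q = np.array([0.9, 0.1, 0.0, 0.2])
candidates = {"Paris":  np.array([0.8, 0.2, 0.1, 0.1]),
              "Berlin": np.array([0.1, 0.9, 0.0, 0.3])}
print(match_answer(q, candidates))  # Paris
```

In a trained system the vectors come from the learned knowledge base and question encoders, and the similarity function may itself be parameterized, but the answer-selection logic reduces to this nearest-vector lookup.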
Fund: The National Natural Science Foundation of China (No. 61502095).
Abstract: Aiming at the relation linking task for question answering over knowledge base, especially the multi-relation linking task for complex questions, a relation linking approach based on a multi-attention recurrent neural network (RNN) model is proposed, which works for both simple and complex questions. First, vector representations of questions are learned by a bidirectional long short-term memory (Bi-LSTM) model at the word and character levels, and named entities in questions are labeled by a conditional random field (CRF) model. Candidate entities are generated from a dictionary, disambiguated by predefined rules, and the named entities mentioned in questions are linked to entities in the knowledge base. Next, questions are classified as simple or complex by a machine learning method. Starting from the identified entities, one-hop relations in the knowledge base are collected as candidate relations for simple questions, and two-hop relations are collected as candidates for complex questions. Finally, the multi-attention Bi-LSTM model is used to encode questions and candidate relations, compare their similarity, and return the candidate relation with the highest similarity as the result of relation linking. Notably, a Bi-LSTM model with one attention mechanism is adopted for simple questions, and a Bi-LSTM model with two attention mechanisms is adopted for complex questions. The experimental results show that, given an effective entity linking method, the Bi-LSTM model with the attention mechanism improves the relation linking effectiveness for both simple and complex questions, outperforming existing relation linking methods based on graph algorithms or linguistic understanding.
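The final ranking step above can be illustrated with a simplified numerical sketch. Here, attention pooling with the candidate relation as the query stands in for the learned attention layer of the Bi-LSTM encoder, and the toy token and relation vectors are hand-crafted assumptions, not the paper's trained representations.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(token_vecs, query_vec):
    # Weight each question-token vector by its affinity with the
    # candidate relation (a stand-in for the learned attention
    # layer in the Bi-LSTM model), then take the weighted sum.
    weights = softmax(token_vecs @ query_vec)
    return weights @ token_vecs

def rank_relations(question_tokens, relation_vecs):
    # Score each candidate relation by cosine similarity between
    # the attention-pooled question encoding and the relation vector,
    # returning the highest-scoring relation.
    def score(rel_vec):
        q = attention_pool(question_tokens, rel_vec)
        return float(q @ rel_vec / (np.linalg.norm(q) * np.linalg.norm(rel_vec)))
    return max(relation_vecs, key=lambda name: score(relation_vecs[name]))

# Toy question-token vectors leaning toward a "birthplace" relation.
tokens = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
relations = {"birthplace": np.array([1.0, 0.0]),
             "spouse":     np.array([0.0, 1.0])}
print(rank_relations(tokens, relations))  # birthplace
```

The complex-question case differs mainly in that the candidates are two-hop relation paths and two attention layers are applied, but the compare-and-argmax structure is the same.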
Fund: Supported by the National Science Foundation (No. CNS-1842407), the National Institutes of Health (No. R01GM110240), and Industry Members of the NSF Center for Big Learning (http://nsfcbl.org/index.php/partners/).
Abstract: A long-term goal of Artificial Intelligence (AI) is to provide machines with the capability of understanding natural language. Understanding natural language means that the system must produce a correct response to the received input; this response can be a robot move, an answer to a question, etc. One way to achieve this goal is semantic parsing, which parses utterances into semantic representations called logical forms, representations of many important linguistic phenomena that can be understood by machines. Semantic parsing is a fundamental problem in the area of natural language understanding, and in recent years researchers have made tremendous progress in this field. In this paper, we review recent algorithms for semantic parsing, covering both conventional machine learning approaches and deep learning approaches. We first give an overview of a semantic parsing system, then summarize a general way to do semantic parsing with statistical learning. Given the rise of deep learning, we pay particular attention to deep-learning-based semantic parsing, especially its application to Knowledge Base Question Answering (KBQA). Finally, we survey several benchmarks for KBQA.
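To make the utterance-to-logical-form mapping concrete, the toy parser below maps two question shapes to lambda-calculus-style logical forms. The two rules and the predicate names (`author_of`, `birthplace`) are purely illustrative assumptions; a real semantic parser learns this mapping rather than enumerating patterns.

```python
import re

# Illustrative pattern rules mapping question shapes to logical
# forms; a learned parser generalizes far beyond fixed patterns.
PATTERNS = [
    (re.compile(r"who wrote (.+)\?", re.I),
     lambda m: f"λx.author_of(x, '{m.group(1)}')"),
    (re.compile(r"where was (.+) born\?", re.I),
     lambda m: f"λx.birthplace('{m.group(1)}', x)"),
]

def parse(utterance):
    # Return the logical form built by the first matching rule,
    # or None if no rule applies.
    for pattern, build in PATTERNS:
        m = pattern.fullmatch(utterance)
        if m:
            return build(m)
    return None

print(parse("Who wrote Hamlet?"))  # λx.author_of(x, 'Hamlet')
```

In a KBQA setting, the resulting logical form is then executed against the knowledge base to retrieve the answer set, which is what makes the representation machine-understandable.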