Funding: National Natural Science Foundation of China (No. 71331008)
Abstract: Sentiment analysis is increasingly important in modern natural language processing, and sentiment classification is one of its most popular applications. The crucial part of sentiment classification is feature extraction. In this paper, two approaches to feature extraction, feature selection and feature embedding, are compared, and Word2Vec is used as the embedding method. In the experiment, a corpus of Chinese documents is used, and three methods are applied to obtain the features of a document: average word vectors, Doc2Vec, and weighted average word vectors. These feature samples are then fed to three machine learning algorithms for classification, among which the support vector machine (SVM) achieves the best result. Finally, the parameters of the random forest are analyzed.
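The two averaging schemes mentioned above can be sketched as follows. This is a minimal illustration with toy 2-dimensional vectors standing in for trained Word2Vec embeddings; the vocabulary, vector values, and weights are all hypothetical, not taken from the paper.

```python
import numpy as np

# Toy embeddings standing in for trained Word2Vec vectors (hypothetical values).
EMB = {
    "good":  np.array([0.9, 0.1]),
    "movie": np.array([0.1, 0.5]),
    "bad":   np.array([-0.8, 0.2]),
}

def average_doc_vector(tokens, emb, dim=2):
    """Plain average of the word vectors of in-vocabulary tokens."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def weighted_doc_vector(tokens, emb, weights, dim=2):
    """Weighted average of word vectors, e.g. with TF-IDF weights per token."""
    pairs = [(emb[t], weights.get(t, 1.0)) for t in tokens if t in emb]
    if not pairs:
        return np.zeros(dim)
    total = sum(w for _, w in pairs)
    return sum(v * w for v, w in pairs) / total

doc = ["good", "movie"]
plain = average_doc_vector(doc, EMB)                       # [0.5, 0.3]
weighted = weighted_doc_vector(doc, EMB, {"good": 2.0})    # weights "good" twice
```

Either fixed-length vector can then be passed to a classifier such as an SVM; the weighted variant lets informative words dominate the document representation.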
Funding: Funded by the National Natural Science Foundation of China (No. 61503116), the Special Project of Provincial Scientific Research Platform of Hefei Normal University (No. 2020PT15), and the Natural Science Foundation of the Anhui Higher Education Institutions of China (No. KJ2021A0902, No. 2022AH052140)
Abstract: Keyphrase extraction aims to extract important phrases that reflect the main topics of a document. Recently, deep learning methods have been used to model semantic information and rank candidates based on the similarities between the n-grams and the document. However, existing keyphrase extraction methods largely treat the extraction task as independent of the embedding process. Based on the observation that phrases semantically closer to the document are more likely to be keyphrases, we propose a novel contrastive learning strategy for supervised keyphrase extraction that integrates local and global information of the document. A pre-trained RoBERTa model is used to model the contextual information of sub-words in the document; the embedding vectors of the n-grams and of the document are then computed by convolutional neural layers. Finally, we propose a novel loss function that efficiently ranks candidate phrases by combining n-gram features and document embeddings during training.
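The core idea of ranking candidates by similarity to the document, and of a contrastive training signal, can be sketched with NumPy. This is not the paper's model: the vectors are placeholders for the RoBERTa/CNN embeddings, and the loss is a generic InfoNCE-style formulation assumed here for illustration.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(doc_vec, cand_vecs):
    """Rank candidate phrases by cosine similarity to the document vector."""
    sims = {name: cosine(doc_vec, v) for name, v in cand_vecs.items()}
    return sorted(sims, key=sims.get, reverse=True)

def contrastive_loss(doc_vec, pos_vecs, neg_vecs, tau=0.1):
    """InfoNCE-style loss (hypothetical stand-in for the paper's loss):
    pull keyphrase embeddings toward the document, push other n-grams away."""
    sims = np.array([cosine(doc_vec, v) for v in pos_vecs + neg_vecs]) / tau
    log_z = np.log(np.sum(np.exp(sims)))          # log partition over all candidates
    return float(np.mean([log_z - sims[i] for i in range(len(pos_vecs))]))

doc = np.array([1.0, 0.0])
cands = {"neural network": np.array([1.0, 0.1]), "last week": np.array([0.0, 1.0])}
ranking = rank_candidates(doc, cands)             # semantically closer phrase first
```

Minimizing such a loss makes true keyphrases score higher than distractor n-grams at ranking time, which is the behavior the similarity-based ranking step relies on.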
Funding: Supported by the Foundation of the State Key Laboratory of Software Development Environment (No. SKLSDE-2015ZX-04)
Abstract: Long-document semantic measurement has great significance in many applications, such as semantic search, plagiarism detection, and automatic technical surveys. However, research efforts have mainly focused on the semantic similarity of short texts, and document-level semantic measurement remains an open issue due to problems such as the omission of background knowledge and topic transitions. In this paper, we propose a novel semantic matching method for long documents in the academic domain. To accurately represent the general meaning of an academic article, we construct a semantic profile in which key semantic elements, such as the research purpose, methodology, and domain, are included and enriched. We can then obtain the overall semantic similarity of two papers by computing the distance between their profiles, where the distances between concepts from the two semantic profiles are measured with word vectors. To improve the semantic representation quality of the word vectors, we propose a joint word-embedding model that incorporates a domain-specific semantic relation constraint into the traditional context constraint. Our experimental results demonstrate that, in measuring document semantic similarity, our approach achieves substantial improvement over state-of-the-art methods, and that our joint word-embedding model produces significantly better word representations than traditional word-embedding models.