Journal Articles
3 articles found
1. Chinese Sentiment Classification Using Extended Word2Vec
Authors: 张胜, 张鑫, 程佳军, 王晖. Journal of Donghua University (English Edition), EI CAS, 2016, No. 5, pp. 823-826 (4 pages)
Sentiment analysis is increasingly important in modern natural language processing, and sentiment classification is one of its most popular applications. The crucial part of sentiment classification is feature extraction. In this paper, two approaches to feature extraction, feature selection and feature embedding, are compared, and Word2Vec is used as the embedding method. In the experiment, a Chinese document corpus is used, and three methods are applied to obtain document features: average word vectors, Doc2Vec, and weighted average word vectors. These samples are then fed to three machine learning algorithms for classification, and the support vector machine (SVM) achieves the best result. Finally, the parameters of random forest are analyzed.
Keywords: embedding, document, segmentation, dimensionality, suffers, projection, latter, classify, preprocessing, probabilistic
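The first feature-extraction strategy the abstract names, averaging word vectors to represent a document and feeding the result to an SVM, can be sketched as follows. This is a minimal illustration with tiny hand-made vectors standing in for a trained Word2Vec model; the vocabulary, vector values, and labels are all hypothetical, not the paper's data:

```python
import numpy as np
from sklearn.svm import SVC

# Toy 2-D word vectors standing in for a trained Word2Vec model (hypothetical values).
word_vectors = {
    "good":  np.array([0.9, 0.1]),
    "great": np.array([0.8, 0.2]),
    "bad":   np.array([0.1, 0.9]),
    "awful": np.array([0.2, 0.8]),
}

def doc_vector(tokens):
    """Represent a document as the average of its word vectors (unknown words skipped)."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0)

# Tiny labeled corpus: 1 = positive sentiment, 0 = negative.
docs = [["good", "great"], ["bad", "awful"], ["great", "good"], ["awful", "bad"]]
labels = [1, 0, 1, 0]

# Average word vectors as features, then a linear SVM as the classifier.
X = np.stack([doc_vector(d) for d in docs])
clf = SVC(kernel="linear").fit(X, labels)
pred = clf.predict([doc_vector(["good", "awful", "great"])])
```

The weighted-average variant the abstract mentions would replace the plain mean with, e.g., TF-IDF-weighted coefficients; Doc2Vec learns the document vector directly instead of composing it from word vectors.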
2. A Contrastive Learning Framework for Keyphrase Extraction
Authors: Jing Song, Xian Zu, Fei Xie. Data Intelligence, 2024, No. 4, pp. 1032-1056 (25 pages)
Keyphrase extraction aims to extract important phrases that reflect the main topics of a document. Recently, deep learning methods have been used to model semantic information and rank candidates based on the similarities between the n-grams and the document. However, existing keyphrase extraction methods mainly treat the extraction task as independent of the embedding. Based on the observation that phrases semantically closer to the document are more likely to be keyphrases, we propose a novel contrastive learning strategy for supervised keyphrase extraction that integrates local and global information of the document. A pre-trained RoBERTa model is used to model the contextual information of sub-words in the document. Then, the embedding vectors of n-grams and the document are computed by convolutional neural layers. Finally, we propose a novel loss function that efficiently ranks candidate phrases by combining n-gram features and document embeddings during model training.
Keywords: keyphrase extraction, contrastive learning, supervised, n-gram features, document embedding
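The ranking step this abstract describes, scoring candidate n-grams by embedding similarity to the document and training with a contrastive objective, can be sketched with toy vectors. The embeddings below are hand-made stand-ins for the RoBERTa-plus-convolution outputs, and the InfoNCE-style loss is a generic contrastive formulation, not the paper's exact loss function:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings standing in for learned document / n-gram vectors.
doc_embedding = np.array([1.0, 0.0, 0.5])
candidate_embeddings = {
    "keyphrase extraction": np.array([0.9, 0.1, 0.4]),
    "the weather":          np.array([0.0, 1.0, 0.0]),
    "contrastive learning": np.array([0.8, 0.0, 0.6]),
}

# Rank candidate phrases by similarity to the document embedding.
ranked = sorted(candidate_embeddings,
                key=lambda p: cosine(candidate_embeddings[p], doc_embedding),
                reverse=True)

def contrastive_loss(pos_sim, neg_sims, tau=0.1):
    """Generic InfoNCE-style loss: push the positive phrase's similarity to
    the document above the negatives' (illustrative formulation only)."""
    logits = np.array([pos_sim] + list(neg_sims)) / tau
    return float(np.log(np.exp(logits).sum()) - logits[0])
```

During training, the loss decreases as the gold keyphrase's similarity rises relative to the non-keyphrase candidates, which is what ties the embedding space to the extraction task.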
3. Measuring Similarity of Academic Articles with Semantic Profile and Joint Word Embedding (cited by 11)
Authors: Ming Liu, Bo Lang, Zepeng Gu, Ahmed Zeeshan. Tsinghua Science and Technology, SCIE EI CAS CSCD, 2017, No. 6, pp. 619-632 (14 pages)
Long-document semantic measurement has great significance in many applications, such as semantic search, plagiarism detection, and automatic technical surveys. However, research efforts have mainly focused on the semantic similarity of short texts. Document-level semantic measurement remains an open issue due to problems such as the omission of background knowledge and topic transition. In this paper, we propose a novel semantic matching method for long documents in the academic domain. To accurately represent the general meaning of an academic article, we construct a semantic profile in which key semantic elements, such as the research purpose, methodology, and domain, are included and enriched. We can then obtain the overall semantic similarity of two papers by computing the distance between their profiles. The distances between the concepts of two different semantic profiles are measured by word vectors. To improve the semantic representation quality of word vectors, we propose a joint word-embedding model that incorporates a domain-specific semantic relation constraint into the traditional context constraint. Our experimental results demonstrate that, in the measurement of document semantic similarity, our approach achieves substantial improvement over state-of-the-art methods, and our joint word-embedding model produces significantly better word representations than traditional word-embedding models.
Keywords: document semantic similarity, text understanding, semantic enrichment, word embedding, scientific literature analysis
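The profile-matching idea in this abstract, representing each paper by a few semantic elements (purpose, methodology, domain) and comparing papers element-by-element with word vectors, can be sketched as below. The concept vectors, slot names, and the simple averaged-cosine aggregation are illustrative assumptions; the paper's actual profiles are enriched and its distance computation may differ:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two concept vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical word vectors for profile concepts.
vec = {
    "classification":  np.array([0.9, 0.1, 0.2]),
    "categorization":  np.array([0.85, 0.15, 0.25]),
    "svm":             np.array([0.3, 0.9, 0.1]),
    "neural-network":  np.array([0.2, 0.8, 0.3]),
    "nlp":             np.array([0.1, 0.2, 0.9]),
}

# A semantic profile maps each element (purpose, methodology, domain) to a vector.
profile_a = {"purpose": vec["classification"], "methodology": vec["svm"],
             "domain": vec["nlp"]}
profile_b = {"purpose": vec["categorization"], "methodology": vec["neural-network"],
             "domain": vec["nlp"]}

def profile_similarity(p, q):
    """Overall similarity of two papers: average cosine over shared profile slots."""
    keys = p.keys() & q.keys()
    return sum(cosine(p[k], q[k]) for k in keys) / len(keys)

sim = profile_similarity(profile_a, profile_b)
```

The joint word-embedding contribution would improve the `vec` table itself, by training word vectors under a domain-specific relation constraint in addition to the usual context constraint, so that related concepts like "classification" and "categorization" land closer together.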