Journal Articles
92 articles found
1. Generating Abstractive Summaries from Social Media Discussions Using Transformers
Authors: Afrodite Papagiannopoulou, Chrissanthi Angeli, Mazida Ahmad. 《Open Journal of Applied Sciences》, 2025, No. 1, pp. 239-258 (20 pages).
The rise of social media platforms has revolutionized communication, enabling the exchange of vast amounts of data through text, audio, images, and videos. These platforms have become critical for sharing opinions and insights, influencing daily habits, and driving business, political, and economic decisions. Text posts are particularly significant, and natural language processing (NLP) has emerged as a powerful tool for analyzing such data. While traditional NLP methods have been effective for structured media, social media content poses unique challenges due to its informal and diverse nature. This has spurred the development of new techniques tailored for processing and extracting insights from unstructured user-generated text. One key application of NLP is the summarization of user comments to manage overwhelming content volumes. Abstractive summarization has proven highly effective in generating concise, human-like summaries, offering clear overviews of key themes and sentiments. This enhances understanding and engagement while reducing cognitive effort for users. For businesses, summarization provides actionable insights into customer preferences and feedback, enabling faster trend analysis, improved responsiveness, and strategic adaptability. By distilling complex data into manageable insights, summarization plays a vital role in improving user experiences and empowering informed decision-making in a data-driven landscape. This paper proposes a new implementation framework by fine-tuning and parameterizing Transformer Large Language Models to manage and maintain linguistic and semantic components in abstractive summary generation. The system excels in transforming large volumes of data into meaningful summaries, as evidenced by its strong performance across metrics like fluency, consistency, readability, and semantic coherence.
Keywords: abstractive summarization; Transformers; social media summarization; Transformer language models
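As a hedged illustration of the Transformer-based abstractive summarization this entry builds on (the authors fine-tune and parameterize their own models, so the model name and decoding settings below are demo assumptions, not their setup):

```python
# A minimal sketch of abstractive summarization with a pretrained
# Transformer; model choice and decoding parameters are illustrative.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

comments = (
    "The battery life on this phone is excellent, easily lasting two days. "
    "Several users also praise the camera, though a few complain that the "
    "device heats up during long gaming sessions."
)

# max_length / min_length bound the summary token count; do_sample=False
# keeps decoding deterministic (BART defaults to beam search here).
result = summarizer(comments, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```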
2. Weakly Supervised Abstractive Summarization with Enhancing Factual Consistency for Chinese Complaint Reports
Authors: Ren Tao, Chen Shuang. 《Computers, Materials & Continua》 (SCIE, EI), 2023, No. 6, pp. 6201-6217 (17 pages).
A large variety of complaint reports reflect subjective information expressed by citizens. A key challenge of text summarization for complaint reports is ensuring the factual consistency of the generated summary. Therefore, in this paper, a simple, weakly supervised framework that accounts for factual consistency is proposed to generate summaries of city-based complaint reports without pre-labeled sentences/words. Furthermore, it considers the importance of entities in complaint reports to ensure the factual consistency of summaries. Experimental results on customer review datasets (Yelp and Amazon) and a complaint report dataset (complaint reports of Shenyang in China) show that the proposed framework outperforms state-of-the-art approaches in ROUGE scores and human evaluation, demonstrating the effectiveness of our approach in handling complaint reports.
Keywords: automatic summarization; abstractive summarization; weakly supervised training; entity recognition
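A hedged sketch of the entity-grounding intuition behind the paper's factual-consistency focus (not the authors' weakly supervised framework): entity precision, the fraction of summary entities that also occur in the source. It assumes spaCy's small English model is installed (python -m spacy download en_core_web_sm).

```python
# Crude factual-consistency proxy: how many named entities in the
# summary are grounded in the source text? Illustrative only.
import spacy

nlp = spacy.load("en_core_web_sm")

def entity_precision(source: str, summary: str) -> float:
    src_ents = {ent.text.lower() for ent in nlp(source).ents}
    sum_ents = {ent.text.lower() for ent in nlp(summary).ents}
    if not sum_ents:  # no entities to verify
        return 1.0
    return len(sum_ents & src_ents) / len(sum_ents)

src = "Resident Li Wei reported broken streetlights on Nanjing Road since March."
good = "Li Wei complained about broken streetlights on Nanjing Road."
bad = "Zhang San complained about broken streetlights on Beijing Road."
print(entity_precision(src, good))  # high -- entities grounded in the source
print(entity_precision(src, bad))   # low -- hallucinated entities
```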
3. A Method of Integrating Length Constraints into Encoder-Decoder Transformer for Abstractive Text Summarization
Authors: Ngoc-Khuong Nguyen, Dac-Nhuong Le, Viet-Ha Nguyen, Anh-Cuong Le. 《Intelligent Automation & Soft Computing》, 2023, No. 10, pp. 1-18 (18 pages).
Text summarization aims to generate a concise version of the original text. The longer the summary text is, the more detail it retains from the original text, and this depends on the intended use. Therefore, generating summary texts of desired lengths is a vital task for putting the research into practice. To solve this problem, in this paper we propose a new method to integrate the desired length of the summarized text into the encoder-decoder model for abstractive text summarization. This length parameter is integrated into the encoding phase at each self-attention step, and into the decoding process by preserving the remaining length for calculating head attention during generation and using it as length embeddings added to the word embeddings. We conducted experiments with the proposed model on two datasets, Cable News Network (CNN)/Daily Mail and NEWSROOM, with different desired output lengths. The obtained results show the proposed model's effectiveness compared with related studies.
Keywords: length-controllable abstractive text summarization; length embedding
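A minimal sketch of the remaining-length-embedding idea the abstract describes: at each step, embed the number of tokens still allowed and add it to the word embedding. Dimensions and the per-step budget scheme are assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class LengthAwareEmbedding(nn.Module):
    def __init__(self, vocab_size: int, d_model: int, max_len: int = 512):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.len = nn.Embedding(max_len + 1, d_model)  # one slot per remaining length

    def forward(self, token_ids: torch.Tensor, remaining: torch.Tensor):
        # token_ids: (batch, seq); remaining: (batch, seq) tokens left in budget
        return self.tok(token_ids) + self.len(remaining.clamp(min=0))

emb = LengthAwareEmbedding(vocab_size=32000, d_model=512)
tokens = torch.randint(0, 32000, (2, 6))
# desired length 40: the budget shrinks by one per generated token
remaining = torch.tensor([[40, 39, 38, 37, 36, 35]] * 2)
print(emb(tokens, remaining).shape)  # torch.Size([2, 6, 512])
```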
4. An Intelligent Tree Extractive Text Summarization Deep Learning
Authors: Abeer Abdulaziz AlArfaj, Hanan Ahmed Hosni Mahmoud. 《Computers, Materials & Continua》 (SCIE, EI), 2022, No. 11, pp. 4231-4244 (14 pages).
In recent research, deep learning algorithms have provided effective representation learning models for natural languages. Deep learning-based models create better data representations than classical models and are capable of automatically extracting distributed representations of texts. In this research, we introduce a new tree extractive text summarization model characterized by fitting the text structure representation into the knowledge-base training module; it also addresses memory issues that were not addressed before. The proposed model employs a tree-structured mechanism to generate the phrase and text embeddings. The architecture mimics the tree configuration of the texts and provides better feature representation. It also incorporates an attention mechanism that offers an additional information source for better summary extraction. The model treats text summarization as a classification process, calculating the probabilities of phrase and text-summary association. Classification draws on multiple features such as information entropy, significance, redundancy, and position. The model was assessed on two datasets, the Multi-Doc Composition Query (MCQ) dataset and the Dual Attention Composition (DAC) dataset. Experimental results show that our proposed model achieves better summarization precision than other models by a considerable margin.
Keywords: neural network architecture; text structure; abstractive summarization
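A toy illustration of the per-sentence features the abstract names (entropy, redundancy, position); the paper's tree-structured neural model is not reproduced here, and the feature definitions below are illustrative assumptions.

```python
import math
from collections import Counter

def sentence_features(sent: str, doc_sents: list[str], idx: int):
    words = sent.lower().split()
    counts = Counter(words)
    total = len(words)
    # information entropy of the sentence's word distribution
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    # redundancy: max word-overlap ratio with any other sentence
    overlaps = [
        len(set(words) & set(o.lower().split())) / max(len(words), 1)
        for j, o in enumerate(doc_sents) if j != idx
    ]
    redundancy = max(overlaps, default=0.0)
    position = 1.0 - idx / max(len(doc_sents) - 1, 1)  # earlier = higher
    return entropy, redundancy, position

doc = ["Deep models learn text representations.",
       "They learn representations of text well.",
       "Attention adds an extra information source."]
for i, s in enumerate(doc):
    print(s, sentence_features(s, doc, i))
```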
5. Ext-ICAS: A Novel Self-Normalized Extractive Intra Cosine Attention Similarity Summarization
Authors: P. Sharmila, C. Deisy, S. Parthasarathy. 《Computer Systems Science & Engineering》 (SCIE, EI), 2023, No. 4, pp. 377-393 (17 pages).
With the continuous growth of online news articles, an efficient abstractive summarization technique is needed to counter information overload. Abstractive summarization is highly complex and requires deeper understanding and proper reasoning to produce its own summary outline. The task is framed as seq2seq modeling. Existing seq2seq methods perform better on short sequences, but for long sequences performance degrades due to high computation; hence a two-phase self-normalized deep neural document summarization model, consisting of an improvised extractive cosine-normalization phase and a seq2seq abstractive phase, is proposed in this paper. The novelty is to parallelize sequence-computation training by incorporating a feed-forward, self-normalized neural network in the extractive phase using Intra Cosine Attention Similarity (Ext-ICAS) with sentence dependency position; no explicit normalization technique is required. Our proposed abstractive Bidirectional Long Short-Term Memory (Bi-LSTM) encoder sequence model performs better than the Bidirectional Gated Recurrent Unit (Bi-GRU) encoder, with minimum training loss and fast convergence. The proposed model was evaluated on the Cable News Network (CNN)/Daily Mail dataset; an average ROUGE score of 0.435 was achieved, and computation in the extractive phase was reduced by 59% in the average number of similarity computations.
Keywords: abstractive summarization; natural language processing; sequence-to-sequence learning (seq2seq); self-normalization; intra (self) attention
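A sketch of intra (self) cosine attention over sentence vectors in the spirit of Ext-ICAS: each sentence attends to every other via cosine similarity, and row sums give centrality-style salience scores. The real model's feed-forward self-normalized layers and sentence dependency position are omitted.

```python
import torch
import torch.nn.functional as F

def intra_cosine_attention(sent_vecs: torch.Tensor) -> torch.Tensor:
    # sent_vecs: (num_sentences, dim) sentence embeddings
    normed = F.normalize(sent_vecs, dim=-1)
    sim = normed @ normed.T        # pairwise cosine similarity
    sim.fill_diagonal_(0.0)        # ignore self-similarity
    return sim.sum(dim=-1)         # per-sentence salience score

vecs = torch.randn(5, 128)
scores = intra_cosine_attention(vecs)
top = scores.topk(2).indices       # pick the 2 most central sentences
print(scores, top)
```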
6. RETRACTED: Recent Approaches for Text Summarization Using Machine Learning & LSTM
Authors: Neeraj Kumar Sirohi, Mamta Bansal, S.N. Rajan. 《Journal on Big Data》, 2021, No. 1, pp. 35-47 (13 pages).
Nowadays, data is increasing rapidly in every domain such as social media, news, education, and banking. Most of this data and information is in the form of text, which typically contains a small amount of valuable information and knowledge amid a lot of unwanted content. To fetch this valuable information out of huge text documents, we need a summarizer that can extract data automatically and, at the same time, summarize the document, particularly the text of a new document, without losing any of its vital information. Summarization can be extractive or abstractive. Extractive summarization picks high-ranked sentences from the text, scored using sentence and word features, and puts them together to produce a summary. Abstractive summarization is based on understanding the key ideas in the given text and then expressing those ideas in natural language; it is the latest problem area for NLP (natural language processing), ML (machine learning), and NN (neural networks). In this paper, the foremost techniques for automatic text summarization are defined; different existing methods are reviewed, and their effectiveness and limitations are described. Further, a novel approach based on neural networks and LSTM is discussed. In the machine learning approach, the underlying architecture is called encoder-decoder.
Keywords: text summarization; extractive summary; abstractive summary; NLP; LSTM
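A bare-bones encoder-decoder with LSTMs, the architecture the review names; vocabulary size and dimensions are placeholder assumptions, and attention and beam search are omitted for brevity.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab: int = 8000, emb: int = 128, hid: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        self.decoder = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, src: torch.Tensor, tgt: torch.Tensor):
        _, state = self.encoder(self.embed(src))            # compress source text
        dec_out, _ = self.decoder(self.embed(tgt), state)   # condition on it
        return self.out(dec_out)                            # per-step vocab logits

model = Seq2Seq()
logits = model(torch.randint(0, 8000, (2, 30)), torch.randint(0, 8000, (2, 10)))
print(logits.shape)  # torch.Size([2, 10, 8000])
```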
7. Enhancing N-Gram Based Metrics with Semantics for Better Evaluation of Abstractive Text Summarization
Authors: Jia-Wei He, Wen-Jun Jiang, Guo-Bang Chen, Yu-Quan Le, Xiao-Fei Ding. 《Journal of Computer Science & Technology》 (SCIE, EI, CSCD), 2022, No. 5, pp. 1118-1133 (16 pages).
Text summarization is an important task in natural language processing and has been applied in many applications. Recently, abstractive summarization has attracted much attention. However, traditional evaluation metrics, which consider little semantic information, are unsuitable for evaluating the quality of deep learning based abstractive summarization models, since these models may generate new words that do not exist in the original text. Moreover, the out-of-vocabulary (OOV) problem, which affects evaluation results, has not been well solved yet. To address these issues, we propose a novel model called ENMS, to enhance existing N-gram based evaluation metrics with semantics. Specifically, we present two types of methods: N-gram based Semantic Matching (NSM for short) and N-gram based Semantic Similarity (NSS for short), to improve several widely used evaluation metrics including ROUGE (Recall-Oriented Understudy for Gisting Evaluation), BLEU (Bilingual Evaluation Understudy), etc. NSM and NSS work in different ways: the former calculates the matching degree directly, while the latter mainly improves the similarity measurement. Moreover, we propose an N-gram representation mechanism to explore the vector representation of N-grams (including skip-grams). It serves as the basis of our ENMS model, in which we exploit some simple but effective integration methods to solve the OOV problem efficiently. Experimental results over the TAC AESOP dataset show that the metrics improved by our methods correlate well with human judgments and can be used to better evaluate abstractive summarization methods.
Keywords: summarization evaluation; abstractive summarization; hard matching; semantic information
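A toy illustration of embedding-based "soft" n-gram matching, the idea ENMS builds on: a candidate unigram counts as matched if its vector is close enough to some reference unigram, so synonyms are no longer scored as misses. The tiny embedding table is fabricated for the demo; ENMS derives n-gram (and skip-gram) vectors from learned representations.

```python
import numpy as np

emb = {
    "car":  np.array([0.9, 0.1, 0.0]),
    "auto": np.array([0.88, 0.15, 0.02]),
    "red":  np.array([0.0, 1.0, 0.1]),
    "fast": np.array([0.1, 0.0, 1.0]),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def soft_recall(candidate, reference, threshold=0.9):
    # a reference word counts as recalled if any candidate word is close
    matched = sum(
        any(cos(emb[c], emb[r]) >= threshold for c in candidate if c in emb)
        for r in reference if r in emb
    )
    return matched / len(reference)

# Exact ROUGE-1 recall would be 1/2 here; soft matching credits "auto".
print(soft_recall(["auto", "fast"], ["car", "fast"]))  # 1.0
```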
8. Exploiting comments information to improve legal public opinion news abstractive summarization
Authors: Yuxin HUANG, Zhengtao YU, Yan XIANG, Zhiqiang YU, Junjun GUO. 《Frontiers of Computer Science》 (SCIE, EI, CSCD), 2022, No. 6, pp. 31-40 (10 pages).
Automatically generating a brief summary for legal-related public opinion news (LPO-news, which contains legal words or phrases) plays an important role in rapid and effective public opinion disposal. For LPO-news, the critical case elements, which are significant parts of the summary, may be mentioned several times in the reader comments. Consequently, we investigate the task of comment-aware abstractive text summarization for LPO-news, which can generate a salient summary by learning pivotal case elements from reader comments. In this paper, we present a hierarchical comment-aware encoder (HCAE), which contains four components: 1) a traditional sequence-to-sequence framework as our baseline; 2) a selective denoising module to filter the noise in comments and distinguish the case elements; 3) a merge module coupling the source article and comments to yield a comment-aware context representation; and 4) a recoding module to capture the interaction among source article words conditioned on the comments. Extensive experiments are conducted on a large dataset of legal public opinion news collected from micro-blogs, and results show that the proposed model outperforms several existing state-of-the-art baseline models under ROUGE metrics.
Keywords: legal public opinion news; abstractive summarization; comment; comment-aware context; case elements; bidirectional attention
9. A Dialogue Summarization Model Combining Topic Mining and Utterance Centrality
Authors: 刘漳辉, 张文涛, 陈羽中. 《小型微型计算机系统》 (北大核心), 2025, No. 11, pp. 2610-2616 (7 pages).
The dialogue summarization task aims to identify the key information in a dialogue and generate a short piece of text. Because dialogue is informal and dynamically interactive, dialogue text is information-sparse and its key information is scattered. However, existing models fail to effectively mine topic feature information in dialogues, lack identification of core utterances, and ignore the noise introduced when fusing auxiliary features. To address these problems, this paper proposes DS-TMUC (Dialogue Summarization model combining Topic Mining and Utterance Centrality). First, a topic feature extraction module is proposed, which introduces an embedded topic model to effectively mine interpretable latent topic information in dialogues, providing richer semantic information for abstractive dialogue summarization. Second, a dynamic feature fusion module is proposed: a feature-aware network removes noise from the fused features to strengthen their representational power, multi-head attention captures the semantic correlations among features, and a gating mechanism performs filtered fusion, enhancing the effective fusion of features. Third, an utterance weighting module is proposed: an unsupervised clustering method computes utterance-centrality weights to weight utterances, guiding the model to select core utterances and improving the effectiveness of dialogue context modeling. Experimental results on the SAMSum and DialogSum datasets show that DS-TMUC outperforms the comparison models overall.
Keywords: abstractive dialogue summarization; embedded topic model; topic mining; utterance centrality
10. A Pre-trained Summarization Model Combining Keywords and a Gating Mechanism
Authors: 任淑霞, 赵宗现, 张靖, 饶冬章. 《计算机与数字工程》, 2025, No. 5, pp. 1349-1355 (7 pages).
Because the encoder output of summarization models contains redundant information, the generated content suffers from semantic irrelevance and deviation from the main idea. To address this, a pre-trained text summarization model combining keyword information and gated units, BGUK (BERT with Gated Unit and Keywords), is proposed. First, the model uses BERT to encode the source text and introduces gated units for semantic extraction and filtering of redundant information. Second, topic-keyword information is incorporated into the model to keep the generated summary from deviating from the main idea. Finally, a coverage mechanism is added to reduce repetition in the generated summary. Experimental results show that BGUK generates higher-quality summaries that better fit the topic, and its ROUGE scores exceed the baseline models.
Keywords: abstractive summarization; topic keywords; gated unit; coverage mechanism; pre-trained model
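A sketch of the gated filtering idea BGUK describes: a sigmoid gate computed from each encoder state decides how much of it passes to the decoder, suppressing redundant information. Dimensions are assumptions, and the keyword and coverage components are omitted.

```python
import torch
import torch.nn as nn

class GatedFilter(nn.Module):
    def __init__(self, d_model: int = 768):
        super().__init__()
        self.gate = nn.Linear(d_model, d_model)

    def forward(self, enc_states: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(enc_states))  # per-dimension gate in [0, 1]
        return g * enc_states                     # keep salient content only

enc = torch.randn(2, 50, 768)  # e.g., BERT outputs for 50 tokens
print(GatedFilter()(enc).shape)  # torch.Size([2, 50, 768])
```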
11. Balancing Factual Consistency and Text Quality in Abstractive Summarization (Cited: 2)
Authors: 杨昱睿, 何禹瞳, 琚生根. 《四川大学学报(自然科学版)》 (北大核心), 2025, No. 2, pp. 347-358 (12 pages).
Improving factual consistency has become a research focus in abstractive summarization; current mainstream methods fall into two categories, post-editing and model-mechanism optimization. Although existing methods effectively improve factual consistency, they largely sacrifice text quality and reduce readability. To address this problem, this paper proposes SumRCL, an abstractive summarization model combining reinforcement learning with ranking-based contrastive learning. On the one hand, contrastive learning based on ranking candidate summaries strengthens the correlation between the probability the model assigns to a summary and that summary's factual consistency; on the other hand, reinforcement learning based on text-quality evaluation metrics preserves high text quality, with a Monte Carlo search method used to evaluate intermediate summaries. Experiments on the CNN/DM and XSUM datasets show that SumRCL indeed helps generate summaries with both high factual consistency and high text quality; the paper also analyzes how the number of candidate summaries and the ranking metric in contrastive learning affect the final results. Finally, human evaluation shows that SumRCL exhibits better factuality than today's popular large language models.
Keywords: factual consistency; text quality; reinforcement learning; contrastive learning; SumRCL model; CNN/DM dataset; XSUM dataset; summary generation
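A hedged sketch of a ranking-based contrastive loss of the kind SumRCL combines with reinforcement learning; the exact loss is an assumption, written in the style of margin-based candidate ranking: candidates sorted by a quality score should receive model probabilities in the same order.

```python
import torch

def ranking_contrastive_loss(log_probs: torch.Tensor, margin: float = 0.01):
    # log_probs: (num_candidates,) model log-probabilities of candidate
    # summaries, already sorted from best to worst by the quality metric
    loss = log_probs.new_zeros(())
    n = log_probs.size(0)
    for i in range(n):
        for j in range(i + 1, n):
            # better candidate i should outscore worse candidate j by a
            # margin that grows with their rank distance
            gap = margin * (j - i)
            loss = loss + torch.clamp(gap - (log_probs[i] - log_probs[j]), min=0)
    return loss

lp = torch.tensor([-1.0, -1.5, -3.0], requires_grad=True)
print(ranking_contrastive_loss(lp))  # 0 here: the ranking is already satisfied
```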
12. Aspect-Based Abstractive Text Summarization with a Complementary Attentional Memory Mechanism
Authors: 李祖超, 张石头, 艾浩军, 李奇伟, 王平. 《中文信息学报》 (北大核心), 2025, No. 8, pp. 158-169 (12 pages).
Aspect-based Abstract Summarization (ABAS) is a challenging new task that aims to generate summaries focused on specific aspects for particular users. This paper proposes the Complementary Attentional Memory (CoAM) method, which strengthens aspect-context interaction modeling in the ABAS task through a memory mechanism. CoAM is integrated with the summarization model BART to better aggregate aspect-specific and contextual features and generate higher-quality summaries. Experimental results on multiple existing datasets show that CoAM outperforms existing baseline models, including large models, and generalizes robustly across domains. To examine CoAM's effectiveness in a different language setting, the paper constructs CABAS, a Chinese aspect-based abstractive summarization dataset, and performs human annotation and model evaluation on it, to advance fine-grained aspect-based text summarization in Chinese.
Keywords: complementary attention; memory mechanism; aspect-based abstractive text summarization
13. A Survey of Automatic Text Summarization Based on Deep Learning (Cited: 1)
Authors: 其其日力格, 斯琴图, 王斯日古楞. 《计算机工程与应用》 (北大核心), 2025, No. 18, pp. 24-40 (17 pages).
Automatic text summarization is an important research direction in natural language processing, aiming at efficient information compression while preserving core semantics. With the rapid development of deep learning, deep learning-based summarization methods have become mainstream. Starting from the two technical routes, extractive and abstractive, this survey systematically reviews the application of sequence labeling, graph neural networks, pre-trained language models, sequence-to-sequence models, and reinforcement learning to automatic summarization, and analyzes the strengths and weaknesses of each class of model. It introduces the public datasets commonly used in the field, domestic low-resource-language datasets, and evaluation metrics. Through multi-dimensional experimental comparison, it summarizes the problems facing existing techniques and proposes corresponding improvements. Finally, it discusses future research directions in automatic text summarization as a reference for subsequent work.
Keywords: automatic text summarization; deep learning; abstractive summarization; extractive summarization; natural language processing
14. An Abstractive Text Summarization Model Based on Keyphrases and Topics
Authors: 郭常江, 赵铁军. 《中文信息学报》 (北大核心), 2025, No. 8, pp. 149-157 (9 pages).
Sequence-to-sequence abstractive summarization has long suffered from noise interference, causing generated summaries to miss key information or even lose information; meanwhile, the training regime introduces the "exposure bias" problem. Research shows that introducing the article's keyphrases and topic information during training effectively helps the model capture the article's important information when generating summaries. On this basis, this paper proposes an abstractive summarization model based on keyphrases and topics: a keyphrase gating network is introduced on the encoder side and a topic-aware network on the decoder side, and a reinforcement learning method is added to mitigate the shortcomings of conventional supervised training. The model outperforms previous results in ROUGE on the Chinese LCSTS dataset and the English CNN/Daily Mail dataset. Ablation experiments further verify the positive contribution of each component.
Keywords: abstractive text summarization; keyphrase gating; topic awareness; reinforcement learning
15. An Abstractive Text Summarization Model Based on an Enhanced Encoder
Authors: 华庚兴, 朱欣鑫, 陶林娟, 李波. 《计算机与数字工程》, 2025, No. 8, pp. 2127-2132 (6 pages).
Mainstream abstractive text summarization models are built on the encoder-decoder architecture, where the encoder often uses information from a single source. Taking a recurrent neural network as the basis, this paper strengthens the single RNN encoder by fusing in local semantic information extracted by a convolutional neural network and topic information extracted by a neural topic model, thereby improving summarization performance. Experimental results show that the summarization model with the enhanced encoder significantly outperforms a range of baseline models.
Keywords: abstractive summarization; local semantic information; neural topic model
16. Computer Applications of Discourse Analysis of Scientific Literature in Text Summarization
Authors: 孙璧凡, 辜丽川. 《淮南师范学院学报》, 2025, No. 2, pp. 131-135 (5 pages).
Text summarization is typically used to distill the core content of large bodies of text, but few summarization models are specialized for scientific literature rather than general text. This paper proposes RTsum (Rhetorical Topic summarization model), an abstractive summarization model oriented toward the discourse structure of scientific literature. It incorporates a rhetorical-move classification module and uses the discourse-structure information of scientific literature to guide a neural topic model within a deep learning framework, obtaining more factually consistent global semantics and thereby producing high-quality summaries. Specifically, RTsum first classifies the sentences of the original document according to discourse information, then combines a hierarchical Transformer encoder and a neural topic model; this not only couples the text's global semantics with rhetorical-move information but also reduces the redundancy of suboptimal topic sentences, and the move-classification-optimized topic distribution is folded into abstractive summarization to improve the quality of scientific-literature summaries. Experimental results show that on the CORD-19 and XSUM datasets, summaries generated by RTsum achieve improvements of up to 7.68% in accuracy and 9.09% in factual-consistency metrics, improving the factuality and accuracy of abstractive summarization for scientific literature.
Keywords: abstractive text summarization; domain text analysis; deep learning; rhetorical move classification; natural language processing
17. Semantically Enhanced Automatic Chinese Text Summarization Based on BERT
Authors: 盖泽超, 池越, 周亚同. 《中文信息学报》 (北大核心), 2025, No. 5, pp. 110-119 (10 pages).
BERT-based pre-trained text summarization models currently perform well. However, the self-attention mechanism inside the pre-trained model tends to attend to character-to-character correlations in the text and pays little attention to word-level information, and semantic understanding during decoding is insufficient. To address these problems, this paper proposes CBSUM-Aux (Convolution and BERT Based Summarization Model with Auxiliary Information), a BERT-based semantically enhanced summarization model. First, convolutional neural network modules with different window sizes extract word-level feature information from the source text and fuse it with the input character embeddings, after which the pre-trained model performs deep feature mining on the fused features. Then, in the decoding stage, the convolved word features are fed into the decoder as auxiliary information to guide decoding. Finally, the beam search algorithm is optimized to counter its tendency to output short sentences. The model was validated on the LCSTS and CSTSD datasets; experimental results show clear improvements in ROUGE, and the generated summaries align better with the source semantics.
Keywords: abstractive text summarization; pre-trained model; self-attention mechanism; convolutional neural network; auxiliary information
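A sketch of the multi-window convolutional word-feature extractor the abstract describes being fused with BERT character embeddings; channel counts and window sizes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiWindowCNN(nn.Module):
    def __init__(self, d_model: int = 768, windows=(2, 3, 4)):
        super().__init__()
        # one Conv1d per window size to capture word-like n-gram features
        self.convs = nn.ModuleList(
            nn.Conv1d(d_model, d_model, k, padding=k // 2) for k in windows
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) character embeddings; Conv1d wants (B, C, L)
        h = x.transpose(1, 2)
        seq_len = x.size(1)
        # even kernels with padding k//2 emit one extra position -- trim it
        feats = [conv(h)[:, :, :seq_len] for conv in self.convs]
        return torch.stack(feats).sum(0).transpose(1, 2)

x = torch.randn(2, 40, 768)
print(MultiWindowCNN()(x).shape)  # torch.Size([2, 40, 768])
```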
18. Automatic Summarization Techniques for Text on the Internet (Cited: 13)
Authors: 尹存燕, 戴新宇, 陈家骏. 《计算机工程》 (EI, CAS, CSCD, 北大核心), 2006, No. 3, pp. 88-90 (3 pages).
This paper studies automatic summarization of text on the Internet and introduces the mainstream summarization techniques; it discusses the new requirements of summarizing Internet text and the information on web pages relevant to automatic summarization, describes the summarization process and the main current evaluation methods, and closes with a summary and outlook for automatic summarization of Internet text.
Keywords: automatic summarization; extractive summarization; abstractive summarization; Internet
19. Research and Implementation of a Knowledge-Based Text Summarization System (Cited: 19)
Authors: 孙春葵, 李蕾, 杨晓兰, 钟义信. 《计算机研究与发展》 (EI, CSCD, 北大核心), 2000, No. 7, pp. 874-881 (8 pages).
This paper proposes a knowledge-based summarization system model and, based on it, implements a text summarization system, LADIES; it also proposes an evaluation method for summarization systems.
Keywords: Chinese information processing; knowledge; text summarization system
20. A Survey of Abstractive Text Summarization Based on Sequence-to-Sequence Models (Cited: 16)
Authors: 石磊, 阮选敏, 魏瑞斌, 成颖. 《情报学报》 (CSSCI, CSCD, 北大核心), 2019, No. 10, pp. 1102-1116 (15 pages).
Compared with earlier abstractive methods, text summarization based on sequence-to-sequence models comes closer to how humans write summaries, and the quality of generated summaries has improved markedly, attracting growing academic attention. This paper reviews recent research on sequence-to-sequence abstractive text summarization; organized by model structure, it surveys work on encoding, decoding, and training, and compares and discusses these works. On this basis, it outlines several technical routes and development directions for future research in the field.
Keywords: abstractive summarization; sequence-to-sequence model; encoder-decoder model; attention mechanism; neural network