Journal Articles
29 articles found
1. Multilingual Text Summarization in Healthcare Using Pre-Trained Transformer-Based Language Models
Authors: Josua Käser, Thomas Nagy, Patrick Stirnemann, Thomas Hanne. Computers, Materials & Continua, 2025, No. 4, pp. 201-217
We analyze the suitability of existing pre-trained transformer-based language models (PLMs) for abstractive text summarization on German technical healthcare texts. The study focuses on the multilingual capabilities of these models and their ability to perform abstractive text summarization in the healthcare field. The research hypothesis was that large language models could perform high-quality abstractive text summarization on German technical healthcare texts, even if the model is not specifically trained in that language. Through experiments, the research questions explore the performance of transformer language models in dealing with complex syntax constructs, the difference in performance between models trained in English and German, and the impact of translating the source text to English before conducting the summarization. We conducted an evaluation of four PLM approaches (GPT-3, a translation-based approach also utilizing GPT-3, a German language model, and a domain-specific bio-medical model). The evaluation considered informativeness, using three types of metrics based on Recall-Oriented Understudy for Gisting Evaluation (ROUGE), and the quality of results, which was manually evaluated on five aspects. The results show that text summarization models can be used in the German healthcare domain and that domain-independent language models achieved the best results. The study shows that text summarization models can simplify the search for pre-existing German knowledge in various domains.
Keywords: text summarization; pre-trained transformer-based language models; large language models; technical healthcare texts; natural language processing
2. Drug and Vaccine Extractive Text Summarization Insights Using Fine-Tuned Transformers
Authors: Rajesh Bandaru, Y. Radhika. Journal of Artificial Intelligence and Technology, 2024, No. 4, pp. 351-362
Text representation is a key aspect in determining the success of various text summarization techniques. Summarization using pretrained transformer models has produced encouraging results, yet the scope of applying these models in medicine and drug discovery has not been examined to a proper extent. To address this issue, this article performs extractive summarization based on transformers fine-tuned for the drug and medical domains. This research also aims to enhance sentence representation. Exploring the extractive text summarization aspects of medical and drug discovery is a challenging task as the datasets are limited. Hence, this research concentrates on a collection of abstracts gathered from PubMed for various areas of medical and drug discovery, such as drugs and COVID, with a total of 1,370 abstracts. A detailed experimentation using BART (Bidirectional Autoregressive Transformer), T5 (Text-to-Text Transfer Transformer), LexRank, and TexRank is carried out on this dataset to perform extractive text summarization.
Keywords: BART; BERT; extractive text summarization; LexRank; TexRank
3. A Hybrid Query-Based Extractive Text Summarization Based on K-Means and Latent Dirichlet Allocation Techniques
Authors: Sohail Muhammad, Muzammil Khan, Sarwar Shah Khan. Journal on Artificial Intelligence, 2024, No. 1, pp. 193-209
Retrieving information from an evolving digital data collection using a user's query is essential and needs efficient retrieval mechanisms that reduce the time required to search such massive collections. Scanning and analyzing every document to retrieve the most relevant textual item for a query is highly time-consuming and requires a sophisticated technique, and achieving accurate, fast retrieval from a large collection is always challenging. Text summarization is a dominant research field in information retrieval and text processing for locating the most appropriate data objects, as single or multiple documents, in a collection. Machine learning and knowledge-based techniques are the two query-based extractive text summarization approaches in Natural Language Processing (NLP) that can be used for precise retrieval and are considered the best options; NLP uses both supervised and unsupervised machine learning for calculating probabilistic features. This study proposes a hybrid approach for query-based extractive text summarization, with the TextRank algorithm as the core of the implementation. Query-based text summarization of multiple documents using the hybrid approach, combining the K-Means clustering technique with Latent Dirichlet Allocation (LDA) as the topic modeling technique, produces 0.288, 0.631, and 0.328 for precision, recall, and F-score, respectively. The results show that the proposed hybrid approach performs better than the graph-based independent approach and the sentence- and word-frequency-based approach.
Keywords: extractive text summarization; machine learning; natural language processing; K-Means; Latent Dirichlet Allocation
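The clustering half of the hybrid pipeline above can be sketched in a few lines. The snippet below is an illustrative, simplified sketch only: it builds bag-of-words vectors, runs a tiny k-means with deterministic seeding, and picks the sentence closest to the query from each cluster. The LDA topic-modeling stage and the TextRank core described in the abstract are omitted, and all function names are hypothetical, not the authors' implementation.

```python
import math
from collections import Counter

def vec(text):
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(c * b.get(w, 0) for w, c in a.items())
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def kmeans(vecs, k, iters=10):
    """Tiny k-means with cosine similarity; seeds on the first k vectors."""
    centroids = [dict(v) for v in vecs[:k]]
    assign = [0] * len(vecs)
    for _ in range(iters):
        assign = [max(range(k), key=lambda c: cosine(v, centroids[c]))
                  for v in vecs]
        for c in range(k):
            members = [vecs[i] for i, a in enumerate(assign) if a == c]
            if members:                       # recompute centroid as mean vector
                merged = Counter()
                for m in members:
                    merged.update(m)
                centroids[c] = {w: n / len(members) for w, n in merged.items()}
    return assign

def query_summary(sentences, query, k=2):
    """One representative sentence per cluster, chosen by query similarity."""
    vecs = [vec(s) for s in sentences]
    assign = kmeans(vecs, k)
    q = vec(query)
    picks = []
    for c in range(k):
        members = [i for i, a in enumerate(assign) if a == c]
        if members:
            picks.append(max(members, key=lambda i: cosine(vecs[i], q)))
    return [sentences[i] for i in sorted(picks)]

print(query_summary(
    ["cats purr", "stocks rose", "cats chase mice", "stocks fell sharply"],
    "cats mice", k=2))
```

A real system would replace the bag-of-words vectors with topic distributions from LDA before clustering, which is the combination the paper evaluates.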
4. Automatic Text Summarization Based on Textual Cohesion (cited 6 times)
Authors: Chen Yanmin, Liu Bingquan, Wang Xiaolong. Journal of Electronics (China), 2007, No. 3, pp. 338-346
This paper presents two different algorithms that derive the cohesion structure, in the form of lexical chains, from two kinds of language resources, HowNet and TongYiCiCiLin. Research connecting the cohesion structure of a text to the derivation of its summary is presented. A novel model of automatic text summarization is devised, based on the data provided by lexical chains from original texts. Moreover, the construction rules of lexical chains are modified according to characteristics of the knowledge database in order to be more suitable for Chinese summarization. Evaluation results show that high-quality indicative summaries are produced from Chinese texts.
Keywords: text summarization; textual cohesion; lexical chain; HowNet; TongYiCiCiLin
5. Using LSA and Text Segmentation to Improve Automatic Chinese Dialogue Text Summarization (cited 3 times)
Authors: LIU Chuan-han, WANG Yong-cheng, ZHENG Fei, LIU De-rong. Journal of Zhejiang University-Science A (Applied Physics & Engineering) (SCIE, EI, CAS, CSCD), 2007, No. 1, pp. 79-87
Automatic Chinese text summarization for dialogue style is a relatively new research area. In this paper, Latent Semantic Analysis (LSA) is first used to extract semantic knowledge from a given document and all question paragraphs are identified; an automatic text segmentation approach analogous to TextTiling is exploited to improve the precision of correlating question paragraphs and answer paragraphs; and finally, some "important" sentences are extracted from the generic content and the question-answer pairs to generate a complete summary. Experimental results showed that our approach is highly efficient and significantly improves the coherence of the summary while not compromising informativeness.
Keywords: automatic text summarization; Latent Semantic Analysis (LSA); text segmentation; dialogue style; coherence; question-answer pairs
6. Automated Multi-Document Biomedical Text Summarization Using Deep Learning Model (cited 3 times)
Authors: Ahmed S. Almasoud, Siwar Ben Haj Hassine, Fahd N. Al-Wesabi, Mohamed K. Nour, Anwer Mustafa Hilal, Mesfer Al Duhayyim, Manar Ahmed Hamza, Abdelwahed Motwakel. Computers, Materials & Continua (SCIE, EI), 2022, No. 6, pp. 5799-5815
Due to the advanced development of the Internet and information technologies, the quantity of electronic data in the biomedical sector has increased exponentially. To handle this huge amount of biomedical data, automated multi-document biomedical text summarization becomes an effective and robust approach for accessing the growing technical and medical literature, summarizing multiple source documents while retaining the most informative content. Multi-document biomedical text summarization thus plays a vital role in alleviating the issue of accessing precise and updated information. This paper presents a Deep Learning based Attention Long Short Term Memory (DL-ALSTM) model for multi-document biomedical text summarization. The proposed DL-ALSTM model initially performs data preprocessing to convert the available medical data into a compatible format for further processing. Then, the DL-ALSTM model is executed to summarize the contents of the multiple biomedical documents. To tune the summarization performance of the DL-ALSTM model, the chaotic glowworm swarm optimization (CGSO) algorithm is employed. Extensive experimentation is performed on the PubMed dataset, and a comprehensive comparative analysis showcases the efficiency of the proposed DL-ALSTM model against recently presented models.
Keywords: biomedical; text summarization; healthcare; deep learning; LSTM; parameter tuning
7. A Hybrid Method of Extractive Text Summarization Based on Deep Learning and Graph Ranking Algorithms (cited 1 time)
Authors: SHI Hui, WANG Tiexin. Transactions of Nanjing University of Aeronautics and Astronautics (EI, CSCD), 2022, No. S01, pp. 158-165
In the era of Big Data, we are faced with the inevitable and challenging problem of information overload. To alleviate this problem, it is important to use effective automatic text summarization techniques to obtain key information quickly and efficiently from huge amounts of text. In this paper, we propose a hybrid method of extractive text summarization based on deep learning and graph ranking algorithms (ETSDG). In this method, a pre-trained deep learning model is designed to yield useful sentence embeddings. Given the associations between sentences in raw documents, a traditional LexRank algorithm with fine-tuning is adopted in ETSDG. In order to improve the performance of the extractive text summarization method, we further integrate the traditional LexRank algorithm with deep learning. Testing results on the DUC2004 dataset show that ETSDG performs better in ROUGE metrics than certain benchmark methods.
Keywords: extractive text summarization; deep learning; sentence embeddings; LexRank
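The LexRank stage that ETSDG fine-tunes can be sketched as a plain power iteration over a cosine-similarity sentence graph. In this hedged sketch the paper's deep sentence embeddings are replaced by simple bag-of-words vectors, and the damping and iteration values are illustrative defaults, not the authors' settings.

```python
import math
from collections import Counter

def cosine(a, b):
    num = sum(c * b.get(w, 0) for w, c in a.items())
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def lexrank(sentences, damping=0.85, iters=50):
    """PageRank-style centrality over a sentence similarity graph."""
    vecs = [Counter(s.lower().split()) for s in sentences]
    n = len(vecs)
    sim = [[0.0 if i == j else cosine(vecs[i], vecs[j]) for j in range(n)]
           for i in range(n)]
    for row in sim:                      # row-normalize to a stochastic matrix
        total = sum(row)
        for j in range(n):
            row[j] = row[j] / total if total else 1.0 / n
    ranks = [1.0 / n] * n
    for _ in range(iters):
        ranks = [(1 - damping) / n +
                 damping * sum(sim[j][i] * ranks[j] for j in range(n))
                 for i in range(n)]
    return ranks

ranks = lexrank(["the cat sat", "the cat ran", "bonds fell"])
# the two lexically similar sentences end up with higher centrality
```

Swapping the `Counter` vectors for embedding vectors from a pre-trained encoder gives the deep-learning variant the abstract describes.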
8. TG-SMR: A Text Summarization Algorithm Based on Topic and Graph Models (cited 1 time)
Authors: Mohamed Ali Rakrouki, Nawaf Alharbe, Mashael Khayyat, Abeer Aljohani. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 4, pp. 395-408
Recently, automation has been considered vital in most fields, since computing methods play a significant role in facilitating work such as automatic text summarization. Most of the computing methods used in real systems are based on graph models, which are characterized by their simplicity and stability. This paper proposes an improved extractive text summarization algorithm based on both topic and graph models. The methodology consists of two stages. First, the well-known TextRank algorithm is analyzed and its shortcomings are investigated. Then, an improved method is proposed with a new computational model of sentence weights. Through experiments on the standard DUC2004 and DUC2006 datasets, the proposed improved graph model algorithm TG-SMR (Topic Graph-Summarizer) is compared to four other text summarization systems. The experimental results show that the proposed TG-SMR algorithm achieves higher ROUGE scores, and it is foreseen that TG-SMR will open new horizons for improving performance on ROUGE evaluation indicators.
Keywords: natural language processing; text summarization; graph model; topic model
9. Graph Ranked Clustering Based Biomedical Text Summarization Using Top k Similarity
Authors: Supriya Gupta, Aakanksha Sharaff, Naresh Kumar Nagwani. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 6, pp. 2333-2349
Text summarization models facilitate biomedical clinicians and researchers in acquiring informative data from enormous domain-specific literature with less time and effort. Evaluating and selecting the most informative sentences from biomedical articles is always challenging. This study aims to develop a dual-mode biomedical text summarization model that achieves enhanced coverage and information, and it also checks the fitment of appropriate graph ranking techniques for improved summarization performance. The input biomedical text is mapped as a graph in which meaningful sentences are evaluated as central nodes along with the critical associations between them. The proposed framework utilizes the top k similarity technique in combination with UMLS and a sampled probability-based clustering method, which aids in unearthing relevant meanings of biomedical domain-specific word vectors and finding the best possible associations between crucial sentences. The quality of the framework is assessed via different parameters, such as information retention, coverage, readability, cohesion, and ROUGE scores, in both clustering and non-clustering modes. The significant benefits of the suggested technique are capturing crucial biomedical information with increased coverage and reasonable memory consumption. The configurable settings of the combined parameters reduce execution time and enhance memory utilization while extracting relevant information, outperforming other biomedical baseline models. An improvement of 17% is achieved when the proposed model is checked against similar biomedical text summarizers.
Keywords: biomedical text summarization; UMLS; BioBERT; SDPMM clustering; top k similarity; PPF; HITS; PageRank; graph ranking
10. A Dual Attention Encoder-Decoder Text Summarization Model
Authors: Nada Ali Hakami, Hanan Ahmed Hosni Mahmoud. Computers, Materials & Continua (SCIE, EI), 2023, No. 2, pp. 3697-3710
A worthy text summarization should represent the fundamental content of the document. Recent studies on computerized text summarization have tried to present solutions to this challenging problem. Attention models are employed extensively in the text summarization process; classical attention techniques are utilized to acquire context data in the decoding phase. Nevertheless, without real and efficient feature extraction, the produced summary may diverge from the core topic. In this article, we present an encoder-decoder attention system employing a dual attention mechanism, in which the attention algorithm gathers main data from the encoder side so that the system can capture and produce more rational main content. The merging of the two attention phases produces precise and rational text summaries. The enhanced attention mechanism gives a high score to text repetition to increase phrase score, and it captures the relationship between phrases and the title, giving them a higher score. We assessed our proposed model with and without significance optimization using an ablation procedure. Our model with significance optimization achieved the highest performance, 96.7% precision, and the least CPU time among the compared models in both training and sentence extraction.
Keywords: text summarization; attention model; phrase significance
11. Applied Linguistics with Mixed Leader Optimizer Based English Text Summarization Model
Authors: Hala J. Alshahrani, Khaled Tarmissi, Ayman Yafoz, Abdullah Mohamed, Manar Ahmed Hamza, Ishfaq Yaseen, Abu Sarwar Zamani, Mohammad Mahzari. Intelligent Automation & Soft Computing (SCIE), 2023, No. 6, pp. 3203-3219
The term "applied linguistics" corresponds to an interdisciplinary domain in which solutions are identified and provided for real-time language-related problems. The exponential generation of text data on the Internet must be leveraged to gain knowledgeable insights. The extraction of meaningful insights from text data is crucial since it can provide value-added solutions for business organizations and end-users. The Automatic Text Summarization (ATS) process reduces the primary size of the text without losing any basic components of the data. The current study introduces an Applied Linguistics-based English Text Summarization using a Mixed Leader-Based Optimizer with Deep Learning (ALTS-MLODL) model. The presented ALTS-MLODL technique aims to summarize text documents in the English language. To accomplish this objective, the proposed technique pre-processes the input documents and extracts a set of features. Next, the MLO algorithm is used for the effectual selection of the extracted features. For the text summarization process, the Cascaded Recurrent Neural Network (CRNN) model is exploited, with the Whale Optimization Algorithm (WOA) as a hyperparameter optimizer. The MLO-based feature selection and the WOA-based hyperparameter tuning enhanced the summarization results. Numerous simulation analyses were conducted to validate the performance of the ALTS-MLODL technique, and the experimental results signify the superiority of the proposed technique over other approaches.
Keywords: text summarization; deep learning; hyperparameter tuning; applied linguistics; multi-leader optimizer
12. A Novel Optimized Language-Independent Text Summarization Technique
Authors: Hanan A. Hosni Mahmoud, Alaaeldin M. Hafez. Computers, Materials & Continua (SCIE, EI), 2022, No. 12, pp. 5121-5136
A substantial amount of textual data is present electronically in several languages, and these texts have led to information redundancy. It is essential to remove this redundancy and decrease the reading time of these data. Therefore, we need a computerized text summarization technique to extract relevant information from a group of text documents with correlated subjects. This paper proposes a language-independent extractive summarization technique based on clustering and optimization: the clustering technique determines the main subjects of the text, while the proposed optimization technique minimizes redundancy and maximizes significance. Experiments are devised and evaluated using the BillSum dataset for English, MLSUM for German and Russian, and Mawdoo3 for Arabic, with results evaluated using ROUGE metrics. The results showed the effectiveness of the proposed technique compared to other language-dependent and language-independent summarization techniques; our technique achieved better ROUGE metrics on all the utilized datasets. The technique accomplished an average F-measure of 41.9% for ROUGE-1, 18.7% for ROUGE-2, 39.4% for ROUGE-3, and 16.8% for ROUGE-4 over all the datasets using all three objectives. Our system also exhibited improvements of 26.6%, 35.5%, 34.65%, and 31.54% with respect to a recent model for summarization of BillSum in terms of ROUGE metric evaluation. Our model's performance is higher than that of the compared models, especially in ROUGE-2, which is bi-gram matching.
Keywords: text summarization; language-independent summarization; ROUGE
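Since the abstract reports ROUGE-1 through ROUGE-4 F-measures, a minimal sketch of how ROUGE-N is computed may be useful. This is a simplified illustration with hypothetical helper names, not the official ROUGE toolkit: it omits stemming, stopword handling, and multi-reference support.

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=1):
    """Clipped n-gram overlap as precision, recall, and F1."""
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    if not cand or not ref:
        return {"p": 0.0, "r": 0.0, "f": 0.0}
    overlap = sum((cand & ref).values())      # clipped counts
    p = overlap / sum(cand.values())
    r = overlap / sum(ref.values())
    f = 2 * p * r / (p + r) if p + r else 0.0
    return {"p": p, "r": r, "f": f}

scores = rouge_n("the cat sat on the mat", "the cat lay on the mat", n=1)
print(round(scores["r"], 3))  # → 0.833 (5 of 6 reference unigrams matched)
```

ROUGE-2 through ROUGE-4 are obtained by passing `n=2` through `n=4`.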
13. A Deep Look into Extractive Text Summarization
Authors: Jhonathan Quillo-Espino, Rosa María Romero-González, Ana-Marcela Herrera-Navarro. Journal of Computer and Communications, 2021, No. 6, pp. 24-37
This investigation presents an approach to Extractive Automatic Text Summarization (EATS). A framework focused on the summary of a single document has been developed, using the Tf-Idf (Term Frequency, Inverse Document Frequency) method as a reference: the document is divided into a subset of documents, a value is generated for each of the words contained in each document, and the documents whose Tf-Idf is equal to or higher than the threshold are taken to represent greater importance; they can therefore be weighted to generate a text summary according to the user's request. This document represents a model derived from applying text mining in today's world. We demonstrate how the summarization is performed, and random values were used to check its performance. The experimental results show a satisfactory and understandable summary, and the summaries were found to run efficiently and quickly, showing the most important text sentences according to the threshold selected by the user.
Keywords: text mining; preprocessing; text summarization; extractive text summarization
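The Tf-Idf thresholding idea described above can be sketched as follows: score each sentence by the average tf-idf of its distinct words and keep those at or above a user-chosen threshold. The scoring formula and names here are illustrative assumptions, not the authors' exact formulation.

```python
import math
from collections import Counter

def sentence_scores(sentences):
    """Average tf-idf over each sentence's distinct words."""
    docs = [s.lower().split() for s in sentences]
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))       # document frequency
    scores = []
    for d in docs:
        tf = Counter(d)
        total = sum((c / len(d)) * math.log(n / df[w]) for w, c in tf.items())
        scores.append(total / len(tf))
    return scores

def tfidf_summary(sentences, threshold):
    """Keep sentences whose score meets the user-selected threshold."""
    return [s for s, sc in zip(sentences, sentence_scores(sentences))
            if sc >= threshold]

print(tfidf_summary(["apples apples", "apples oranges", "apples apples"], 0.1))
# → ['apples oranges']  (only the sentence with a distinctive word survives)
```

Raising the threshold shortens the summary, which is the user-controlled trade-off the abstract describes.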
14. RETRACTED: Recent Approaches for Text Summarization Using Machine Learning & LSTM
Authors: Neeraj Kumar Sirohi, Mamta Bansal, S.N. Rajan. Journal on Big Data, 2021, No. 1, pp. 35-47
Nowadays, data is increasing very rapidly in every domain, such as social media, news, education, and banking, and most of this data and information is in the form of text. Most text contains only a little valuable information and knowledge among lots of unwanted content. To fetch this valuable information out of huge text documents, we need a summarizer that is capable of extracting data automatically and, at the same time, summarizing the document, particularly novel textual documents, without losing any vital information. The summarization can be extractive or abstractive. Extractive summarization consists of picking high-ranked sentences from the text, scored using sentence and word features, and putting them together to produce a summary. Abstractive summarization is based on understanding the key ideas in the given text and then expressing those ideas in pure natural language; it is a current problem area for NLP (natural language processing), ML (machine learning), and NN (neural networks). In this paper, the foremost techniques for automatic text summarization are defined, the different existing methods are reviewed, and their effectiveness and limitations are described. Further, a novel approach based on neural networks and LSTM is discussed; in the machine-learning approach, the underlying architecture is called the encoder-decoder.
Keywords: text summarization; extractive summary; abstractive summary; NLP; LSTM
15. Abstractive Arabic Text Summarization Using Hyperparameter Tuned Denoising Deep Neural Network
Authors: Ibrahim M. Alwayle, Hala J. Alshahrani, Saud S. Alotaibi, Khaled M. Alalayah, Amira Sayed A. Aziz, Khadija M. Alaidarous, Ibrahim Abdulrab Ahmed, Manar Ahmed Hamza. Intelligent Automation & Soft Computing, 2023, No. 11, pp. 153-168
This study presents an Abstractive Arabic Text Summarization using Hyperparameter Tuned Denoising Deep Neural Network (AATS-HTDDNN) technique, which aims to generate summaries of Arabic text. In the presented AATS-HTDDNN technique, the DDNN model is utilized to generate the summary. The study exploits the Chameleon Swarm Optimization (CSO) algorithm to fine-tune the hyperparameters relevant to the DDNN model, since they considerably affect summarization efficiency; this phase shows the novelty of the current study. To validate the enhanced summarization performance of the proposed AATS-HTDDNN model, a comprehensive experimental analysis was conducted. The comparison study outcomes confirmed the better performance of the AATS-HTDDNN model over other approaches.
Keywords: text summarization; deep learning; denoising deep neural networks; hyperparameter tuning; Arabic language
16. A Method of Integrating Length Constraints into Encoder-Decoder Transformer for Abstractive Text Summarization
Authors: Ngoc-Khuong Nguyen, Dac-Nhuong Le, Viet-Ha Nguyen, Anh-Cuong Le. Intelligent Automation & Soft Computing, 2023, No. 10, pp. 1-18
Text summarization aims to generate a concise version of the original text. The longer the summary is, the more detail from the original text it retains, and the appropriate length depends on the intended use. Therefore, generating summary texts with desired lengths is a vital task for putting the research into practice. To solve this problem, in this paper we propose a new method to integrate the desired length of the summarized text into the encoder-decoder model for abstractive text summarization. This length parameter is integrated into the encoding phase at each self-attention step, and into the decoding process by preserving the remaining length for calculating head attention in the generation process and using it as length embeddings added to the word embeddings. We conducted experiments for the proposed model on two datasets, Cable News Network (CNN) Daily and NEWSROOM, with different desired output lengths. The obtained results show the proposed model's effectiveness compared with related studies.
Keywords: length-controllable abstractive text summarization; length embedding
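The length-embedding mechanism described in the abstract (adding an embedding of the remaining target length to the word embeddings at each decoding step) can be illustrated with a toy sketch. The dimensions, vocabulary, and clamping below are invented for illustration, and the paper's other mechanism (injecting length into self-attention) is not shown.

```python
import random

random.seed(0)
DIM, MAX_LEN = 4, 32

def rand_vec():
    return [random.uniform(-0.1, 0.1) for _ in range(DIM)]

# toy lookup tables: one embedding per token, one per remaining-length value
word_emb = {w: rand_vec() for w in ["<s>", "summary", "text", "</s>"]}
len_emb = [rand_vec() for _ in range(MAX_LEN + 1)]

def decoder_input(token, remaining_len):
    """Word embedding plus the embedding of the (clamped) remaining length."""
    bucket = max(0, min(MAX_LEN, remaining_len))
    return [w + l for w, l in zip(word_emb[token], len_emb[bucket])]

# the same token gets a different input vector as the length budget shrinks,
# which is how the decoder is told how much room it has left
early = decoder_input("summary", remaining_len=10)
late = decoder_input("summary", remaining_len=2)
```

In a trained model both tables would be learned parameters; decrementing `remaining_len` by one per generated token realizes the "preserving the remaining length" idea.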
17. A Survey of Text Summarization Approaches Based on Deep Learning (cited 2 times)
Authors: Sheng-Luan Hou, Xi-Kun Huang, Chao-Qun Fei, Shu-Han Zhang, Yang-Yang Li, Qi-Lin Sun, Chuan-Qing Wang. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2021, No. 3, pp. 633-663
Automatic text summarization (ATS) has achieved impressive performance thanks to recent advances in deep learning (DL) and the availability of large-scale corpora. The key points in ATS are to estimate the salience of information and to generate coherent results. Recently, a variety of DL-based approaches have been developed to better address these two aspects, yet there is still a lack of a comprehensive literature review of DL-based ATS approaches. The aim of this paper is to comprehensively review the significant DL-based approaches that have been proposed in the literature for generic ATS tasks and to provide a walk-through of their evolution. We first give an overview of ATS and DL, along with comparisons of the datasets commonly used for model training, validation, and evaluation. We then summarize single-document summarization approaches, give an overview of multi-document summarization approaches, and further analyze the performance of the popular ATS models on common datasets. Various popular approaches can be employed for different ATS tasks. Finally, we propose potential research directions in this fast-growing field. We hope this exploration can provide new insights into future research on DL-based ATS.
Keywords: automatic text summarization; artificial intelligence; deep learning; attentional encoder-decoder; natural language processing
18. Reflective thinking meets artificial intelligence: Synthesizing sustainability transition knowledge in left-behind mountain regions
Authors: Andrej Ficko, Simo Sarkki, Yasar Selman Gultekin, Antonia Egli, Juha Hiedanpää. Geography and Sustainability, 2025, No. 1, pp. 159-169
We demonstrate a multi-method approach towards discovering and structuring sustainability transition knowledge in marginalized mountain regions. By employing reflective thinking, artificial intelligence (AI)-powered text summarization, and text mining, we synthesize experts' narratives on sustainable development challenges and solutions in Kardüz Upland, Türkiye. We then analyze their alignment with the UN Sustainable Development Goals (SDGs) using document embedding. Investment in infrastructure, education, and resilient socio-ecological systems emerged as priority sectors to combat poor infrastructure, geographic isolation, climate change, poverty, depopulation, unemployment, low education levels, and inadequate social services. The narratives were closest in substance to SDGs 1, 3, and 11, and social dimensions of sustainability were more pronounced than environmental dimensions. The presented approach supports policymakers in organizing loosely structured sustainability transition knowledge and fragmented data corpora, while also advancing AI applications for designing and planning sustainable development policies at the regional level.
Keywords: artificial intelligence; innovation; reflective thinking; scientific imagination; text mining; text summarization
19. A Semantic Supervision Method for Abstractive Summarization (cited 1 time)
Authors: Sunqiang Hu, Xiaoyu Li, Yu Deng, Yu Peng, Bin Lin, Shan Yang. Computers, Materials & Continua (SCIE, EI), 2021, No. 10, pp. 145-158
In recent years, many text summarization models based on pre-training methods have achieved very good results. However, in these models, semantic deviations easily occur between the original input representation and the representation that has passed through the multi-layer encoder, which may result in inconsistencies between the generated summary and the source text. Bidirectional Encoder Representations from Transformers (BERT) improves the performance of many tasks in Natural Language Processing (NLP); although BERT has a strong capability to encode context, it lacks fine-grained semantic representation. To solve these two problems, we propose a semantic supervision method based on Capsule Networks. First, we extract the fine-grained semantic representations of the input and of the encoded result in BERT with a Capsule Network. Second, we use the fine-grained semantic representation of the input to supervise that of the encoded result. We evaluated our model on a popular Chinese social media dataset (LCSTS), and the results show that our model achieved higher ROUGE scores (including R-1 and R-2) and outperformed baseline systems. Finally, we conducted a comparative study of model stability, and the experimental results showed that our model is more stable.
Keywords: text summarization; semantic supervision; capsule network
20. Constructing a taxonomy to support multi-document summarization of dissertation abstracts
Authors: KHOO Christopher S.G., GOH Dion H. Journal of Zhejiang University-Science A (Applied Physics & Engineering) (SCIE, EI, CAS, CSCD), 2005, No. 11, pp. 1258-1267
This paper reports part of a study to develop a method for automatic multi-document summarization. The current focus is on dissertation abstracts in the field of sociology. The summarization method uses macro-level and micro-level discourse structure to identify important information that can be extracted from dissertation abstracts, and then uses a variable-based framework to integrate and organize the extracted information across abstracts. This framework focuses on research concepts and the research relationships found in sociology dissertation abstracts and has a hierarchical structure. A taxonomy is constructed to support the summarization process in two ways: (1) helping to identify important concepts and relations expressed in the text, and (2) providing a structure for linking similar concepts in different abstracts. This paper describes the variable-based framework and the summarization process, and then reports the construction of the taxonomy supporting that process. An example is provided to show how to use the constructed taxonomy to identify important concepts and integrate the concepts extracted from different abstracts.
Keywords: text summarization; automatic multi-document summarization; variable-based framework; digital library