Journal Articles
7 articles found
1. Multilingual Text Summarization in Healthcare Using Pre-Trained Transformer-Based Language Models
Authors: Josua Käser, Thomas Nagy, Patrick Stirnemann, Thomas Hanne. Computers, Materials & Continua, 2025, No. 4, pp. 201-217 (17 pages).
Abstract: We analyze the suitability of existing pre-trained transformer-based language models (PLMs) for abstractive text summarization on German technical healthcare texts. The study focuses on the multilingual capabilities of these models and their ability to perform abstractive text summarization in the healthcare field. The research hypothesis was that large language models could perform high-quality abstractive text summarization on German technical healthcare texts, even if the model is not specifically trained in that language. Through experiments, the research questions explore the performance of transformer language models in dealing with complex syntax constructs, the difference in performance between models trained in English and German, and the impact of translating the source text to English before conducting the summarization. We conducted an evaluation of four PLMs (GPT-3, a translation-based approach also utilizing GPT-3, a German language model, and a domain-specific bio-medical model approach). The evaluation considered informativeness, using three types of metrics based on Recall-Oriented Understudy for Gisting Evaluation (ROUGE), and the quality of the results, which was manually evaluated on five aspects. The results show that text summarization models can be used in the German healthcare domain and that domain-independent language models achieved the best results. The study demonstrates that text summarization models can simplify the search for pre-existing German knowledge in various domains.
Keywords: text summarization, pre-trained transformer-based language models, large language models, technical healthcare texts, natural language processing
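To make the ROUGE-based informativeness evaluation concrete, here is a minimal sketch of ROUGE-N recall (overlapping n-grams between candidate and reference, divided by the n-grams in the reference), using a plain whitespace tokenizer and a made-up German pair; it illustrates the metric family only, not the authors' evaluation pipeline.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return the multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(candidate, reference, n=1):
    """ROUGE-N recall: overlapping n-grams / n-grams in the reference."""
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum((cand & ref).values())  # multiset intersection
    total = sum(ref.values())
    return overlap / total if total else 0.0

# Toy German texts for illustration (not from the paper's corpus).
reference = "der patient erhielt eine neue therapie gegen diabetes"
candidate = "der patient bekam eine neue diabetes therapie"
print(rouge_n_recall(candidate, reference, n=1))  # unigram recall
print(rouge_n_recall(candidate, reference, n=2))  # bigram recall
```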
2. U-Net-Based Medical Image Segmentation: A Comprehensive Analysis and Performance Review
Authors: Aliyu Abdulfatah, Zhang Sheng, Yirga Eyasu Tenawerk. Journal of Electronic Research and Application, 2025, No. 1, pp. 202-208 (7 pages).
Abstract: Medical image segmentation has become a cornerstone for many healthcare applications, allowing for the automated extraction of critical information from images such as Computed Tomography (CT) scans, Magnetic Resonance Imaging (MRI), and X-rays. The introduction of U-Net in 2015 significantly advanced segmentation capabilities, especially for the small datasets commonly found in medical imaging. Since then, various modifications to the original U-Net architecture have been proposed to enhance segmentation accuracy and tackle challenges like class imbalance, data scarcity, and multi-modal image processing. This paper provides a detailed review and comparison of several U-Net-based architectures, focusing on their effectiveness in medical image segmentation tasks. We evaluate performance metrics such as Dice Similarity Coefficient (DSC) and Intersection over Union (IoU) across different U-Net variants, including HmsU-Net, CrossU-Net, mResU-Net, and others. Our results indicate that architectural enhancements such as transformers, attention mechanisms, and residual connections improve segmentation performance across diverse medical imaging applications, including tumor detection, organ segmentation, and lesion identification. The study also identifies current challenges in the field, including data variability, limited dataset sizes, and issues with class imbalance. Based on these findings, the paper suggests potential future directions for improving the robustness and clinical applicability of U-Net-based models in medical image segmentation.
Keywords: U-Net architecture, medical image segmentation, DSC, IoU, transformer-based segmentation
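For reference, the two metrics compared across the U-Net variants reduce to simple overlap ratios on binary masks: DSC = 2|A∩B|/(|A|+|B|) and IoU = |A∩B|/|A∪B|, related by DSC = 2·IoU/(1+IoU). A minimal NumPy sketch with made-up masks:

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-8):
    """Dice Similarity Coefficient and Intersection over Union for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dsc = 2.0 * inter / (pred.sum() + target.sum() + eps)
    iou = inter / (union + eps)
    return dsc, iou

# Made-up 4x4 masks for illustration only.
pred = np.array([[0,1,1,0],[0,1,1,0],[0,0,1,0],[0,0,0,0]])
target = np.array([[0,1,1,0],[0,1,0,0],[0,0,1,1],[0,0,0,0]])
dsc, iou = dice_and_iou(pred, target)
print(f"DSC={dsc:.3f}  IoU={iou:.3f}")  # DSC = 2*IoU / (1 + IoU)
```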
3. Analyzing Arabic Twitter-Based Patient Experience Sentiments Using Multi-Dialect Arabic Bidirectional Encoder Representations from Transformers
Authors: Sarab AlMuhaideb, Yasmeen AlNegheimish, Taif AlOmar, Reem AlSabti, Maha AlKathery, Ghala AlOlyyan. Computers, Materials & Continua (SCIE, EI), 2023, No. 7, pp. 195-220 (26 pages).
Abstract: Healthcare organizations rely on patients' feedback and experiences to evaluate their performance and services, thereby allowing such organizations to improve inadequate services and address any shortcomings. According to the literature, social networks, and particularly Twitter, are effective platforms for gathering public opinions. Moreover, recent studies have used natural language processing to measure sentiments in text segments collected from Twitter to capture public opinions about various sectors, including healthcare. The present study aimed to analyze Arabic Twitter-based patient experience sentiments and to introduce an Arabic patient experience corpus. The authors collected 12,400 tweets from Arabic patients discussing patient experiences related to healthcare organizations in Saudi Arabia from 1 January 2008 to 29 January 2022. The tweets were labeled according to sentiment (positive or negative) and sector (public or private), thereby producing the Hospital Patient Experiences in Saudi Arabia (HoPE-SA) dataset. A simple statistical analysis was conducted to examine differences in patient views of healthcare sectors. The authors trained five models to distinguish sentiments in tweets automatically, under the following schemes: a transformer-based model fine-tuned with a deep learning architecture and a transformer-based model fine-tuned with a simple architecture, each using two different transformer-based embeddings based on Bidirectional Encoder Representations from Transformers (BERT), namely Multi-dialect Arabic BERT (MARBERT) and multilingual BERT (mBERT), as well as a pretrained word2vec model with a support vector machine classifier. This is the first study to investigate the use of a bidirectional long short-term memory layer followed by a feedforward neural network for the fine-tuning of MARBERT. The deep-learning fine-tuned MARBERT-based model, the authors' best-performing model, achieved accuracy, micro-F1, and macro-F1 scores of 98.71%, 98.73%, and 98.63%, respectively.
Keywords: sentiment analysis, patient experience, healthcare, Twitter, MARBERT, bidirectional long short-term memory, support vector machine, transformer-based learning, deep learning
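A rough sketch of the fine-tuning architecture the abstract highlights: a BERT-style encoder whose token embeddings feed a bidirectional LSTM, followed by a feed-forward classification head. The UBC-NLP/MARBERT checkpoint name, hidden size, and two-label head below are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBiLSTMClassifier(nn.Module):
    """Transformer encoder -> BiLSTM -> feed-forward head (illustrative sizes)."""
    def __init__(self, encoder_name="UBC-NLP/MARBERT", hidden=256, num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.lstm = nn.LSTM(self.encoder.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, num_labels))

    def forward(self, input_ids, attention_mask):
        tokens = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        _, (h, _) = self.lstm(tokens)           # final forward/backward states
        pooled = torch.cat([h[0], h[1]], dim=-1)
        return self.head(pooled)

# Hypothetical usage on one Arabic tweet (placeholder text).
tok = AutoTokenizer.from_pretrained("UBC-NLP/MARBERT")
model = BertBiLSTMClassifier()
batch = tok(["الخدمة ممتازة"], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
```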
4. Improve Code Summarization via Prompt-Tuning CodeT5
Author: LI Huanzhen. Wuhan University Journal of Natural Sciences (CAS, CSCD), 2023, No. 6, pp. 474-482 (9 pages).
Abstract: Code comments are crucial in software engineering, aiding in program maintenance and code reuse. The process of generating clear and descriptive code comments, outlining code functionality, is called code summarization. Existing code summarization methods are typically trained using transformer-based models. However, these trained models often possess limited parameters and lack specific training tasks, hindering their ability to capture code semantics effectively. This paper uses a high-capacity pre-trained model, CodeT5, for code summarization. CodeT5 is designed with an encoder-decoder architecture that excels in code summarization tasks. Furthermore, we adopt a novel paradigm, "pre-train, prompt, predict", to unlock the knowledge embedded within CodeT5. We devise a prompt template to convert input code into code prompts and fine-tune CodeT5 with these prompts, a process we term prompt tuning. Our effectiveness experiments demonstrate that prompt tuning CodeT5 with only 40% of the dataset achieves performance comparable to fine-tuning CodeT5 with 100% of the dataset, which makes our approach applicable in few-shot learning scenarios. Additionally, our prompt learning method is not sensitive to the size of the tuning dataset. Our practicality experiments show that the performance of prompt-tuned CodeT5 far surpasses that of transformer-based models trained on code-comment datasets collected from Stack Overflow.
Keywords: code summarization, transformer-based model, prompt learning, CodeT5, few-shot learning
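A sketch of the "pre-train, prompt, predict" idea: wrap the input code in a natural-language template and let a CodeT5 checkpoint generate the summary. The template wording and the Salesforce/codet5-base checkpoint are illustrative; the paper's exact template and tuning setup may differ.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tok = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

def summarize(code: str) -> str:
    # Hypothetical prompt template converting raw code into a code prompt.
    prompt = f"Summarize the following code: {code} Summary:"
    ids = tok(prompt, return_tensors="pt", truncation=True).input_ids
    out = model.generate(ids, max_length=48, num_beams=4)
    return tok.decode(out[0], skip_special_tokens=True)

print(summarize("def add(a, b):\n    return a + b"))
```

Prompt tuning would then fine-tune the model on (prompt, reference comment) pairs rather than on the raw code alone.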
5. PhishNet: A Real-Time, Scalable Ensemble Framework for Smishing Attack Detection Using Transformers and LLMs
Authors: Abeer Alhuzali, Qamar Al-Qahtani, Asmaa Niyazi, Lama Alshehri, Fatemah Alharbi. Computers, Materials & Continua, 2026, No. 1, pp. 2194-2212 (19 pages).
Abstract: The surge in smishing attacks underscores the urgent need for robust, real-time detection systems powered by advanced deep learning models. This paper introduces PhishNet, a novel ensemble learning framework that integrates a transformer-based model (RoBERTa) and large language models (LLMs) (GPT-OSS 120B, LLaMA 3.3 70B, and Qwen3 32B) to significantly enhance smishing detection performance. To mitigate class imbalance, we apply synthetic data augmentation using T5 and leverage various text preprocessing techniques. Our system employs a dual-layer voting mechanism: weighted majority voting among the LLMs and a final ensemble vote to classify messages as ham, spam, or smishing. Experimental results show an average accuracy improvement from 96% to 98.5% compared to the best standalone transformer, and from 93% to 98.5% compared to the LLMs across datasets. Furthermore, we present a real-time, user-friendly application that operationalizes our detection model for practical use. PhishNet demonstrates superior scalability, usability, and detection accuracy, filling critical gaps in current smishing detection methodologies.
Keywords: smishing attack detection, phishing attacks, ensemble learning, cybersecurity, deep learning, transformer-based models, large language models
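The dual-layer voting can be sketched in a few lines: a weighted majority vote over the LLM predictions, whose outcome is then combined with the transformer's prediction in a final ensemble vote. The per-model weights and the final combination rule below are assumptions for illustration, not the paper's values.

```python
from collections import defaultdict

LABELS = ("ham", "spam", "smishing")

def weighted_vote(predictions, weights):
    """Weighted majority vote over {model_name: predicted_label}."""
    scores = defaultdict(float)
    for model, label in predictions.items():
        scores[label] += weights.get(model, 1.0)
    return max(LABELS, key=lambda l: scores[l])

# Hypothetical per-model predictions and weights.
llm_preds = {"gpt-oss-120b": "smishing", "llama-3.3-70b": "smishing",
             "qwen3-32b": "spam"}
llm_weights = {"gpt-oss-120b": 1.2, "llama-3.3-70b": 1.0, "qwen3-32b": 0.8}

llm_label = weighted_vote(llm_preds, llm_weights)   # first layer: LLM consensus
roberta_label = "smishing"                          # fine-tuned transformer's prediction

# Second layer: final ensemble vote between the LLM consensus and RoBERTa.
final = weighted_vote({"llms": llm_label, "roberta": roberta_label},
                      {"llms": 1.0, "roberta": 1.0})
print(final)  # -> "smishing"
```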
6. Individual Software Expertise Formalization and Assessment from Project Management Tool Databases
Authors: Traian-Radu Plosca, Alexandru-Mihai Pescaru, Bianca-Valeria Rus, Daniel-Ioan Curiac. Computers, Materials & Continua, 2026, No. 1, pp. 389-411 (23 pages).
Abstract: Objective expertise evaluation of individuals, as a prerequisite stage for team formation, has been a long-term desideratum in large software development companies. With the rapid advancements in machine learning methods, and based on reliable existing data stored in project management tools' datasets, automating this evaluation process becomes a natural step forward. In this context, our approach focuses on quantifying software developer expertise by using metadata from task-tracking systems. For this, we mathematically formalize two categories of expertise: technology-specific expertise, which denotes the skills required for a particular technology, and general expertise, which encapsulates overall knowledge in the software industry. Afterward, we automatically classify the zones of expertise associated with each task a developer has worked on, using Bidirectional Encoder Representations from Transformers (BERT)-like transformers to handle the unique characteristics of project tool datasets effectively. Finally, our method evaluates the proficiency of each software specialist across already completed projects from both technology-specific and general perspectives. The method was experimentally validated, yielding promising results.
Keywords: expertise formalization, transformer-based models, natural language processing, augmented data, project management tool, skill classification
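To make the two expertise notions concrete, the sketch below aggregates per-task technology labels (as a BERT-like classifier might assign them) into a technology-specific score per developer and a general score across all technologies. The scoring rule (counting completed tasks) is a simplifying assumption, not the paper's mathematical formalization.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Task:
    developer: str
    technology: str   # zone of expertise assigned by a BERT-like classifier
    completed: bool

def expertise(tasks, developer):
    """Technology-specific task counts plus an overall (general) total."""
    done = [t for t in tasks if t.developer == developer and t.completed]
    specific = Counter(t.technology for t in done)
    return specific, sum(specific.values())

# Hypothetical task-tracking metadata.
tasks = [Task("alice", "java", True), Task("alice", "java", True),
         Task("alice", "sql", True), Task("alice", "sql", False)]
specific, general = expertise(tasks, "alice")
print(specific, general)  # Counter({'java': 2, 'sql': 1}) 3
```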
7. How Robust Are Language Models against Backdoors in Federated Learning?
Authors: Seunghan Kim, Changhoon Lim, Gwonsang Ryu, Hyunil Kim. Computer Modeling in Engineering & Sciences, 2025, No. 11, pp. 2617-2630 (14 pages).
Abstract: Federated learning enables privacy-preserving training of transformer-based language models, but remains vulnerable to backdoor attacks that compromise model reliability. This paper presents a comparative analysis of defense strategies against both classical and advanced backdoor attacks, evaluated across autoencoding and autoregressive models. Unlike prior studies, this work provides the first systematic comparison of perturbation-based, screening-based, and hybrid defenses in transformer-based FL environments. Our results show that screening-based defenses consistently outperform perturbation-based ones, effectively neutralizing most attacks across architectures. However, this robustness comes with significant computational overhead, revealing a clear trade-off between security and efficiency. By explicitly identifying this trade-off, our study advances the understanding of defense strategies in federated learning and highlights the need for lightweight yet effective screening methods for trustworthy deployment in diverse application domains.
Keywords: backdoor attack, federated learning, transformer-based language model, system robustness
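As a flavor of the screening-based defenses the study finds most robust, the sketch below drops client updates whose distance from the coordinate-wise median update is anomalously large before averaging. The threshold rule is a simplified stand-in for the actual screening methods the paper evaluates.

```python
import numpy as np

def screened_average(updates, z_thresh=2.0):
    """Screen out anomalous client updates, then average the survivors.

    updates: array of shape (num_clients, num_params).
    A client is kept if its distance to the median update is within
    z_thresh standard deviations of the mean distance (simplified rule).
    """
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)
    keep = dists <= dists.mean() + z_thresh * dists.std()
    return updates[keep].mean(axis=0), keep

# Hypothetical round: 9 benign updates plus 1 amplified backdoor update.
rng = np.random.default_rng(0)
benign = rng.normal(0, 0.1, size=(9, 4))
backdoor = np.full((1, 4), 5.0)   # attacker's scaled malicious update
aggregated, kept = screened_average(np.vstack([benign, backdoor]))
print(kept)        # the backdoor client is screened out
print(aggregated)
```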