Journal Articles
554 articles found
1. ExplainableDetector: Exploring transformer-based language modeling approach for SMS spam detection with explainability analysis
Authors: Mohammad Amaz Uddin, Muhammad Nazrul Islam, Leandros Maglaras, Helge Janicke, Iqbal H. Sarker. Digital Communications and Networks, 2025, Issue 5, pp. 1504-1518.
Short Message Service (SMS) is a widely used and cost-effective communication medium that has unfortunately become a frequent target for unsolicited messages, commonly known as SMS spam. With the rapid adoption of smartphones and increased Internet connectivity, SMS spam has emerged as a prevalent threat. Spammers have recognized the critical role SMS plays in modern communication, making it a prime target for abuse. As cybersecurity threats continue to evolve, the volume of SMS spam has increased substantially in recent years. Moreover, the unstructured format of SMS data creates significant challenges for spam detection, making it more difficult to combat spam attacks successfully. In this paper, we present an optimized, fine-tuned transformer-based language model to address the problem of SMS spam detection. We use a benchmark SMS spam dataset to analyze this detection model. We also apply pre-processing techniques to obtain clean, noise-free data and address the class imbalance problem with text augmentation techniques. Overall, our optimized, fine-tuned BERT (Bidirectional Encoder Representations from Transformers) variant RoBERTa achieved a high accuracy of 99.84%. To enhance model transparency, we incorporate Explainable Artificial Intelligence (XAI) techniques that compute positive and negative coefficient scores, offering insight into the model's decision-making process. We also evaluate traditional machine learning models as baselines for comparison. This analysis demonstrates the significant impact language models can have on complex text-based challenges in the cybersecurity landscape.
Keywords: cybersecurity; machine learning; large language model; spam detection; text analytics; explainable AI; fine-tuning; transformer
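The paper above compares its fine-tuned RoBERTa model against traditional machine-learning baselines. As a minimal, self-contained sketch of what such a baseline can look like, here is a toy multinomial Naive Bayes spam classifier with Laplace smoothing; the training messages and whitespace tokenization are invented for illustration and are not from the paper's benchmark dataset.

```python
import math
from collections import Counter

def train_nb(messages, labels):
    """Train a toy multinomial Naive Bayes spam/ham classifier on raw SMS text."""
    counts = {"spam": Counter(), "ham": Counter()}
    priors = Counter(labels)
    for text, label in zip(messages, labels):
        counts[label].update(text.lower().split())
    vocab = set(counts["spam"]) | set(counts["ham"])
    return counts, priors, vocab

def predict_nb(model, text):
    """Pick the label with the highest log-posterior under Laplace smoothing."""
    counts, priors, vocab = model
    total = sum(priors.values())
    best, best_lp = None, float("-inf")
    for label in priors:
        lp = math.log(priors[label] / total)
        denom = sum(counts[label].values()) + len(vocab)
        for tok in text.lower().split():
            lp += math.log((counts[label][tok] + 1) / denom)  # add-one smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Invented toy corpus -- a real baseline would use the benchmark SMS dataset.
messages = ["win a free prize now", "call now to claim cash",
            "see you at lunch", "meeting moved to monday"]
labels = ["spam", "spam", "ham", "ham"]
model = train_nb(messages, labels)
print(predict_nb(model, "free cash prize"))  # → spam
```

Such bag-of-words baselines ignore word order entirely, which is one reason the paper's transformer models, with contextual token representations, outperform them.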
2. Large language model-based multi-objective modeling framework for vacuum gas oil hydrotreating
Authors: Zheyuan Pang, Siying Liu, Yiting Lin, Xiangchen Fang, Honglai Liu, Chong Peng, Cheng Lian. Chinese Journal of Chemical Engineering, 2025, Issue 8, pp. 133-145.
Data-driven approaches are extensively employed to model complex chemical engineering processes, such as hydrotreating, to address the challenge that mechanism-based methods demand deep process understanding. However, developing such models requires specialized expertise in data science, limiting their broader application. Large language models (LLMs), such as GPT-4, have demonstrated potential in supporting and guiding research efforts. This work presents a novel AI-assisted framework in which GPT-4, through well-engineered prompts, facilitates the construction and explanation of multi-objective neural networks. These models predict hydrotreating product properties (such as distillation range), including refined diesel and refined gas oil, from feedstock properties, operating conditions, and recycle hydrogen composition. Gradient-weighted class activation mapping was employed to identify the key features influencing the output variables. This work illustrates an innovative AI-guided paradigm for chemical engineering applications, and the designed prompts hold promise for adaptation to other complex processes.
Keywords: hydrogenation; prompt engineering; large language model; neural networks; prediction
3. Bangla language modeling algorithm for automatic recognition of hand-sign-spelled Bangla sign language
Authors: Muhammad Aminur Rahaman, Mahmood Jasim, Md. Haider Ali, Md. Hasanuzzaman. Frontiers of Computer Science (SCIE, EI, CSCD), 2020, Issue 3, pp. 45-64.
Continuous hand-sign-spelled Bangla sign language (BdSL) recognition is challenging because traditional hand-sign segmentation and classification algorithms must cope with many diversities of the Bangla language, including joint letters and dependent vowels, and because 51 written Bangla characters are represented by only 36 hand-signs. This paper presents a Bangla language modeling algorithm for automatic recognition of hand-sign-spelled Bangla sign language that consists of two phases. The first phase performs hand-sign classification, and the second phase applies the Bangla language modeling algorithm (BLMA) for automatic recognition of hand-sign-spelled BdSL. In the first phase, we propose a two-step classifier for hand-sign classification using the normalized outer boundary vector (NOBV) and the window-grid vector (WGV), calculating the maximum inter-correlation coefficient (ICC) between the test feature vector and pre-trained feature vectors. The system first classifies hand-signs using the NOBV; if the classification score does not satisfy a specific threshold, a second classifier based on the WGV is used. The system is trained on 5,200 images and tested on another 5,200 × 6 images of 52 hand-signs from 10 signers in 6 different challenging environments, achieving a mean classification accuracy of 95.83% at a computational cost of 39.972 milliseconds per frame. In the second phase, the proposed BLMA discovers all "hidden characters" from the "recognized characters" of the 52 hand-signs of BdSL to form any Bangla words, composite numerals, and sentences in BdSL with no additional training, based only on the result of the first phase. To the best of our knowledge, the proposed system is the first in BdSL designed for automatic recognition of hand-sign-spelled BdSL with a large lexicon. Tested on 500 hand-sign-spelled words, 100 composite numerals, and 80 sentences in BdSL, the BLMA achieves mean accuracies of 93.50%, 95.50%, and 90.50%, respectively.
Keywords: Bangla sign language (BdSL); hand-sign classification; Bangla language modeling rules (BLMR); Bangla language modeling algorithm (BLMA)
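The first-phase classifier described above matches a test feature vector against stored templates by maximum inter-correlation coefficient, falling back to a second feature type when the best score is below a threshold. A minimal sketch of that two-step matching logic, with invented feature vectors and an assumed Pearson-correlation definition of ICC (the paper does not publish its exact formula):

```python
import math

def icc(u, v):
    """Pearson correlation between two feature vectors, used here as a stand-in
    for the paper's inter-correlation coefficient (an assumption)."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v))
    return num / den if den else 0.0

def classify(test_nobv, test_wgv, templates, threshold=0.9):
    """Two-step matching: try the NOBV feature first; if the best ICC score
    falls below the threshold, fall back to the WGV feature."""
    best = max(templates, key=lambda t: icc(test_nobv, t["nobv"]))
    if icc(test_nobv, best["nobv"]) >= threshold:
        return best["sign"]
    return max(templates, key=lambda t: icc(test_wgv, t["wgv"]))["sign"]

# Invented 3-dimensional templates; real NOBV/WGV vectors are far longer.
templates = [
    {"sign": "ka",  "nobv": [0.9, 0.1, 0.4], "wgv": [1.0, 0.0, 0.5]},
    {"sign": "kha", "nobv": [0.1, 0.8, 0.3], "wgv": [0.0, 1.0, 0.2]},
]
print(classify([0.85, 0.15, 0.45], [0.9, 0.1, 0.4], templates))  # → ka
```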
4. Unsupervised statistical text simplification using pre-trained language modeling for initialization [Cited by 1]
Authors: Jipeng Qiang, Feng Zhang, Yun Li, Yunhao Yuan, Yi Zhu, Xindong Wu. Frontiers of Computer Science (SCIE, EI, CSCD), 2023, Issue 1, pp. 81-90.
Unsupervised text simplification has attracted much attention due to the scarcity of high-quality parallel text simplification corpora. Recently, an unsupervised statistical text simplification system based on phrase-based machine translation (UnsupPBMT) achieved good performance; it initializes its phrase tables with similar words obtained from word embedding modeling. Since word embedding modeling only considers relevance between words, the phrase tables in UnsupPBMT contain many dissimilar words. In this paper, we propose an unsupervised statistical text simplification method that uses the pre-trained language model BERT for initialization. Specifically, we use BERT as a general linguistic knowledge base for predicting similar words. Experimental results show that our method outperforms state-of-the-art unsupervised text simplification methods on three benchmarks and even outperforms some supervised baselines.
Keywords: text simplification; pre-trained language modeling; BERT; word embeddings
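UnsupPBMT, which this paper improves on, initializes its phrase tables with similar words ranked by word-embedding relevance. The toy sketch below shows that embedding-similarity step with invented three-dimensional vectors; the paper itself replaces this lookup with BERT's masked-word predictions, which is not reproduced here.

```python
import math

# Invented toy embeddings for illustration only; real systems use pre-trained
# vectors with hundreds of dimensions.
embeddings = {
    "big":   [0.9, 0.1, 0.2],
    "large": [0.85, 0.15, 0.25],
    "huge":  [0.8, 0.2, 0.1],
    "cat":   [0.1, 0.9, 0.4],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def similar_words(word, k=2):
    """Rank candidate substitutions by cosine similarity -- the phrase-table
    initialization step that UnsupPBMT performs with word embeddings."""
    scores = {w: cosine(embeddings[word], e) for w, e in embeddings.items() if w != word}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(similar_words("big"))  # → ['large', 'huge']
```

Note how "cat" also receives a (low) similarity score: cosine similarity measures relatedness, not interchangeability, which is exactly the weakness the paper's BERT-based initialization targets.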
5. New Retrieval Method Based on Relative Entropy for Language Modeling with Different Smoothing Methods
Authors: 霍华, 刘俊强, 冯博琴. Journal of Southwest Jiaotong University (English Edition), 2006, Issue 2, pp. 113-120.
A language model for information retrieval is built by using a query language model to generate queries and a document language model to generate documents. Documents are ranked according to the relative entropies of the estimated document language models with respect to the estimated query language model. Two popular and relatively efficient smoothing methods, the Jelinek-Mercer method and the absolute discounting method, are used to smooth the document language model. A combined model composed of the feedback document language model and the collection language model is used to estimate the query model. A performance comparison between the new retrieval method and an existing method with feedback is made, and the retrieval performance of the proposed method with the two smoothing techniques is evaluated on three Text Retrieval Conference (TREC) data sets. Experimental results show that the method is effective and performs better than the basic language modeling approach; moreover, the method using the Jelinek-Mercer technique performs better than that using the absolute discounting technique, and performance is sensitive to the smoothing parameters.
Keywords: information retrieval; relative entropy; language modeling; smoothing
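The retrieval model above ranks documents by the relative entropy (KL divergence) of the query model with respect to each smoothed document model: lower divergence means a better match. A minimal sketch of Jelinek-Mercer smoothing and the resulting ranking, with an invented four-word vocabulary and toy probabilities:

```python
import math

def jelinek_mercer(doc_tokens, collection_probs, lam=0.7):
    """Smooth a document language model by linear interpolation with the
    collection model: p(w|d) = (1 - lam) * p_ml(w|d) + lam * p(w|C)."""
    n = len(doc_tokens)
    return {w: (1 - lam) * doc_tokens.count(w) / n + lam * collection_probs.get(w, 0.0)
            for w in collection_probs}

def kl_divergence(query_model, doc_model):
    """Relative entropy KL(query || doc); documents are ranked by ascending KL."""
    return sum(p * math.log(p / doc_model[w])
               for w, p in query_model.items() if p > 0 and doc_model.get(w, 0) > 0)

# Invented collection statistics and query model over a 4-word vocabulary.
collection = {"language": 0.3, "model": 0.3, "retrieval": 0.2, "cat": 0.2}
query = {"language": 0.5, "retrieval": 0.5, "model": 0.0, "cat": 0.0}
doc_a = ["language", "retrieval", "language"]
doc_b = ["cat", "cat", "model"]
scores = {name: kl_divergence(query, jelinek_mercer(doc, collection))
          for name, doc in (("doc_a", doc_a), ("doc_b", doc_b))}
print(min(scores, key=scores.get))  # → doc_a
```

The smoothing parameter lam is exactly what the paper's experiments find the performance to be sensitive to: a larger lam trusts the collection model more and the document's own term counts less.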
6. Kindergarten Teachers' Language Modeling Behaviors in Daily Activities and Free Play
Authors: Wu Qiong, Hu Biying, Guan Lin. Frontiers of Education in China, 2024, Issue 1, pp. 1-17.
Teachers' language modeling behaviors, including frequent conversation, open-ended questions, repetition and extension, self- and parallel talk, and advanced language, significantly impact young children's language learning and development. This study examined 60 classrooms from 20 kindergartens in Guangzhou, China, analyzing 62 videos of daily activities and 57 videos of free play. It addresses a gap in existing research, which pays little attention to teachers' language modeling behaviors in daily activities and free play. The results indicate that the more frequent teachers' language modeling behaviors are, the larger the vocabulary young children use and the better their lexical richness. However, such behaviors in daily activities and free play are infrequent and superficial, failing to guide young children's language development effectively. To optimize their language modeling behaviors in daily activities and free play, teachers should establish positive emotional bonds with young children in a kind and respectful manner and receive training. Teachers are also encouraged to communicate and engage in dialogue with young children frequently, create contexts that facilitate language use, increase the frequency of stimuli for vocabulary learning, and guide and encourage young children's advanced language.
Keywords: teachers' language modeling behaviors; young children; daily activities; free play
7. A Modeling Language Based on UML for Modeling Simulation Testing System of Avionic Software [Cited by 2]
Authors: Wang Lize, Liu Bin, Lu Minyan. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2011, Issue 2, pp. 181-194.
With its direct expression of individual application domain patterns and ideas, a domain-specific modeling language (DSML) is increasingly used to build models instead of a combination of one or more general constructs. Based on the profile mechanism of the Unified Modeling Language (UML) 2.2, a DSML is presented to model simulation testing systems of avionic software (STSAS). To define the syntax, semantics, and notation of the DSML, the domain model of the STSAS, from which we generalize the domain concepts and the relationships among them, is given; the domain model is then mapped into a UML meta-model, named the UML-STSAS profile. Taking a flight control system (FCS) as the system under test (SUT), we design the relevant STSAS. The results indicate that extending UML to the simulation testing domain can effectively and precisely model STSAS.
Keywords: avionics; hardware-in-the-loop; test facilities; meta-model; UML profile; domain-specific modeling language; abstract state machine
8. Large language models for robotics: Opportunities, challenges, and perspectives [Cited by 4]
Authors: Jiaqi Wang, Enze Shi, Huawen Hu, Chong Ma, Yiheng Liu, Xuhui Wang, Yincheng Yao, Xuan Liu, Bao Ge, Shu Zhang. Journal of Automation and Intelligence, 2025, Issue 1, pp. 52-64.
Large language models (LLMs) have undergone significant expansion and have been increasingly integrated across various domains. Notably, in robot task planning, LLMs harness their advanced reasoning and language comprehension capabilities to formulate precise and efficient action plans from natural language instructions. However, for embodied tasks, where robots interact with complex environments, text-only LLMs often face challenges due to a lack of compatibility with robotic visual perception. This study provides a comprehensive overview of the emerging integration of LLMs and multimodal LLMs into various robotic tasks. Additionally, we propose a framework that utilizes the multimodal GPT-4V to enhance embodied task planning by combining natural language instructions with robot visual perception. Our results, based on diverse datasets, indicate that GPT-4V effectively enhances robot performance on embodied tasks. This extensive survey and evaluation of LLMs and multimodal LLMs across a variety of robotic tasks enriches the understanding of LLM-centric embodied intelligence and provides forward-looking insights toward bridging the gap in Human-Robot-Environment interaction.
Keywords: large language models; robotics; generative AI; embodied intelligence
9. On large language models safety, security, and privacy: A survey [Cited by 3]
Authors: Ran Zhang, Hong-Wei Li, Xin-Yuan Qian, Wen-Bo Jiang, Han-Xiao Chen. Journal of Electronic Science and Technology, 2025, Issue 1, pp. 1-21.
The integration of artificial intelligence (AI) technology, particularly large language models (LLMs), has become essential across various sectors due to their advanced language comprehension and generation capabilities. Despite their transformative impact in fields such as machine translation and intelligent dialogue systems, LLMs face significant challenges, including safety, security, and privacy concerns that undermine their trustworthiness and effectiveness, such as hallucinations, backdoor attacks, and privacy leakage. Previous works often conflated safety issues with security concerns. In contrast, our study provides clearer and more reasonable definitions for safety, security, and privacy in the context of LLMs. Building on these definitions, we provide a comprehensive overview of the vulnerabilities and defense mechanisms related to safety, security, and privacy in LLMs. Additionally, we explore the unique research challenges posed by LLMs and suggest potential avenues for future research, aiming to enhance the robustness and reliability of LLMs in the face of emerging threats.
Keywords: large language models; privacy issues; safety issues; security issues
10. When Software Security Meets Large Language Models: A Survey [Cited by 2]
Authors: Xiaogang Zhu, Wei Zhou, Qing-Long Han, Wanlun Ma, Sheng Wen, Yang Xiang. IEEE/CAA Journal of Automatica Sinica, 2025, Issue 2, pp. 317-334.
Software security poses substantial risks to our society because software has become part of our daily life. Numerous techniques have been proposed to resolve or mitigate the impact of software security issues. Among them, software testing and analysis are two critical methods, which benefit significantly from advancements in deep learning. Because of the successful use of deep learning in software security, researchers have recently explored the potential of using large language models (LLMs) in this area. In this paper, we systematically review the results on LLMs in software security, covering fuzzing, unit testing, program repair, bug reproduction, data-driven bug detection, and bug triage. We deconstruct these techniques into several stages and analyze how LLMs can be used in each stage. We also discuss future directions for using LLMs in software security, including directions for the existing uses of LLMs and extensions from conventional deep learning research.
Keywords: large language models (LLMs); software analysis; software security; software testing
11. The Security of Using Large Language Models: A Survey With Emphasis on ChatGPT [Cited by 2]
Authors: Wei Zhou, Xiaogang Zhu, Qing-Long Han, Lin Li, Xiao Chen, Sheng Wen, Yang Xiang. IEEE/CAA Journal of Automatica Sinica, 2025, Issue 1, pp. 1-26.
ChatGPT is a powerful artificial intelligence (AI) language model that has demonstrated significant improvements in various natural language processing (NLP) tasks. However, like any technology, it presents potential security risks that need to be carefully evaluated and addressed. In this survey, we provide an overview of the current state of research on the security of using ChatGPT, covering bias, disinformation, ethics, misuse, attacks, and privacy. We review and discuss the literature on these topics and highlight open research questions and future directions. Through this survey, we aim to contribute to the academic discourse on AI security, enriching the understanding of potential risks and mitigations. We anticipate that this survey will be valuable for the various stakeholders involved in AI development and usage, including AI researchers, developers, policy makers, and end-users.
Keywords: artificial intelligence (AI); ChatGPT; large language models (LLMs); security
12. Large Language Model Agent with VGI Data for Mapping [Cited by 2]
Authors: Song Jiayu, Zhang Yifan, Wang Zhiyun, Yu Wenhao. Journal of Geodesy and Geoinformation Science, 2025, Issue 2, pp. 57-73.
In recent years, Volunteered Geographic Information (VGI) has emerged as a crucial source of mapping data, contributed by users through crowdsourcing platforms such as OpenStreetMap. This paper presents a novel approach that integrates large language models (LLMs) into a fully automated mapping workflow using VGI data. The process leverages prompt engineering, which involves designing and optimizing input instructions so that the LLM produces the desired mapping outputs. With precise and detailed prompts, LLM agents can accurately interpret mapping requirements and autonomously extract, analyze, and process VGI geospatial data. They dynamically interact with mapping tools to automate the entire mapping process, from data acquisition to map generation. This approach significantly streamlines the creation of high-quality mapping outputs, reducing the time and resources typically required for such tasks. Moreover, the system lowers the barrier for non-expert users, enabling them to generate accurate maps without extensive technical expertise. Through various case studies, we demonstrate the LLM's application across different mapping scenarios, highlighting its potential to enhance the efficiency, accuracy, and accessibility of map production. The results suggest that LLM-powered mapping systems can not only optimize VGI data processing but also expand the usability of ubiquitous mapping across diverse fields, including urban planning and infrastructure development.
Keywords: Volunteered Geographic Information (VGI); Geospatial Artificial Intelligence (GeoAI); agent; large language model
13. HyPepTox-Fuse: An interpretable hybrid framework for accurate peptide toxicity prediction fusing protein language model-based embeddings with conventional descriptors [Cited by 1]
Authors: Duong Thanh Tran, Nhat Truong Pham, Nguyen Doan Hieu Nguyen, Leyi Wei, Balachandran Manavalan. Journal of Pharmaceutical Analysis, 2025, Issue 8, pp. 1873-1886.
Peptide-based therapeutics hold great promise for the treatment of various diseases; however, their clinical application is often hindered by toxicity. Accurate prediction of peptide toxicity is crucial for designing safe peptide-based therapeutics. While traditional experimental approaches are time-consuming and expensive, computational methods have emerged as viable alternatives, including similarity-based and machine learning (ML)-/deep learning (DL)-based methods. However, existing methods often struggle with robustness and generalizability. To address these challenges, we propose HyPepTox-Fuse, a novel framework that fuses protein language model (PLM)-based embeddings with conventional descriptors. HyPepTox-Fuse integrates ensemble PLM-based embeddings to achieve richer peptide representations by leveraging a cross-modal multi-head attention mechanism and a Transformer architecture. A robust feature ranking and selection pipeline further refines the conventional descriptors, enhancing prediction performance. Our framework outperforms state-of-the-art methods in cross-validation and independent evaluations, offering a scalable and reliable tool for peptide toxicity prediction. A case study validates the robustness and generalizability of HyPepTox-Fuse, highlighting its effectiveness in enhancing model performance. The HyPepTox-Fuse server is freely accessible at https://balalab-skku.org/HyPepTox-Fuse/ and the source code is publicly available at https://github.com/cbbl-skku-org/HyPepTox-Fuse/. The study thus presents an intuitive platform for predicting peptide toxicity and supports reproducibility through openly available datasets.
Keywords: peptide toxicity; hybrid framework; multi-head attention; transformer; deep learning; machine learning; protein language model
14. Evaluating research quality with Large Language Models: An analysis of ChatGPT's effectiveness with different settings and inputs [Cited by 1]
Author: Mike Thelwall. Journal of Data and Information Science, 2025, Issue 1, pp. 7-25.
Purpose: Evaluating the quality of academic journal articles is a time-consuming but critical task for national research evaluation exercises, appointments, and promotion, so it is important to investigate whether large language models (LLMs) can play a role in this process. Design/methodology/approach: This article assesses which ChatGPT inputs (full text without tables, figures, and references; title and abstract; title only) produce better quality score estimates, and the extent to which scores are affected by ChatGPT models and system prompts. Findings: The optimal input is the article title and abstract: average ChatGPT scores based on these (30 iterations on a dataset of 51 papers) correlate at 0.67 with human scores, the highest ever reported. ChatGPT 4o is slightly better than 3.5-turbo (0.66) and 4o-mini (0.66). Research limitations: The data is a convenience sample of the work of a single author, covers only one field, and the scores are self-evaluations. Practical implications: The results suggest that article full texts might confuse LLM research quality evaluations, even though complex system instructions for the task are more effective than simple ones. Thus, while abstracts contain insufficient information for a thorough assessment of rigour, they may contain strong pointers about originality and significance. Finally, linear regression can be used to convert the model scores into human-scale scores, which is 31% more accurate than guessing. Originality/value: This is the first systematic comparison of the impact of different prompts, parameters, and inputs on ChatGPT research quality evaluations.
Keywords: ChatGPT; large language models; LLMs; scientometrics; research assessment
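The Findings above mention using linear regression to convert averaged model scores onto the human scoring scale. A minimal sketch of that conversion with invented score pairs (the real study fits against human self-evaluations of 51 papers):

```python
def fit_linear(x, y):
    """Ordinary least squares for y ≈ a*x + b, used here to map averaged
    ChatGPT quality scores onto the human scoring scale."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Hypothetical averaged ChatGPT scores paired with human-scale scores (1-4);
# these numbers are invented for illustration, not taken from the paper.
gpt_scores = [2.1, 2.8, 3.0, 3.6]
human_scores = [1.5, 2.5, 3.0, 4.0]
a, b = fit_linear(gpt_scores, human_scores)
print(round(a * 3.3 + b, 2))  # a new model score of 3.3 → 3.46 on the human scale
```

Averaging many iterations before regressing, as the paper does, reduces the variance of the stochastic ChatGPT scores and makes the fitted mapping more stable.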
15. GPT2-ICC: A data-driven approach for accurate ion channel identification using pre-trained large language models [Cited by 1]
Authors: Zihan Zhou, Yang Yu, Chengji Yang, Leyan Cao, Shaoying Zhang, Junnan Li, Yingnan Zhang, Huayun Han, Guoliang Shi, Qiansen Zhang, Juwen Shen, Huaiyu Yang. Journal of Pharmaceutical Analysis, 2025, Issue 8, pp. 1800-1809.
Current experimental and computational methods have limitations in accurately and efficiently classifying ion channels within vast protein spaces. Here we develop a deep learning algorithm, the GPT2 Ion Channel Classifier (GPT2-ICC), which effectively distinguishes ion channels in a test set containing approximately 239 times more non-ion-channel proteins. GPT2-ICC integrates representation learning with a large language model (LLM)-based classifier, enabling highly accurate identification of potential ion channels. Several potential ion channels were predicted from the unannotated human proteome, further demonstrating GPT2-ICC's generalization ability. This study marks a significant advancement in AI-driven ion channel research, highlighting the adaptability and effectiveness of combining representation learning with LLMs to address the challenge of imbalanced protein sequence data. Moreover, it provides a valuable computational tool for uncovering previously uncharacterized ion channels.
Keywords: ion channel; artificial intelligence; representation learning; GPT2; protein language model
16. Evaluating large language models as patient education tools for inflammatory bowel disease: A comparative study [Cited by 1]
Authors: Yan Zhang, Xiao-Han Wan, Qing-Zhou Kong, Han Liu, Jun Liu, Jing Guo, Xiao-Yun Yang, Xiu-Li Zuo, Yan-Qing Li. World Journal of Gastroenterology, 2025, Issue 6, pp. 34-43.
BACKGROUND: Inflammatory bowel disease (IBD) is a global health burden that affects millions of individuals worldwide, necessitating extensive patient education. Large language models (LLMs) hold promise for addressing patient information needs. However, the use of LLMs to deliver accurate and comprehensible IBD-related medical information has yet to be thoroughly investigated. AIM: To assess the utility of three LLMs (ChatGPT-4.0, Claude-3-Opus, and Gemini-1.5-Pro) as a reference point for patients with IBD. METHODS: In this comparative study, two gastroenterology experts generated 15 IBD-related questions reflecting common patient concerns. These questions were used to evaluate the performance of the three LLMs. The answers provided by each model were independently assessed by three IBD-related medical experts using a Likert scale focusing on accuracy, comprehensibility, and correlation. Simultaneously, three patients were invited to evaluate the comprehensibility of the answers. Finally, a readability assessment was performed. RESULTS: Overall, each of the LLMs achieved satisfactory levels of accuracy, comprehensibility, and completeness when answering IBD-related questions, although their performance varied. All of the investigated models demonstrated strengths in providing basic disease information, such as the definition of IBD and its common symptoms and diagnostic methods. Nevertheless, when dealing with more complex medical advice, such as medication side effects, dietary adjustments, and complication risks, the quality of the answers was inconsistent between the LLMs. Notably, Claude-3-Opus generated answers with better readability than the other two models. CONCLUSION: LLMs have potential as educational tools for patients with IBD; however, there are discrepancies between the models. Further optimization and the development of specialized models are necessary to ensure the accuracy and safety of the information provided.
Keywords: inflammatory bowel disease; large language models; patient education; medical information accuracy; readability assessment
Assessing the possibility of using large language models in ocular surface diseases 被引量:1
17
作者 Qian Ling Zi-Song Xu +11 位作者 Yan-Mei Zeng Qi Hong Xian-Zhe Qian Jin-Yu Hu Chong-Gang Pei Hong Wei Jie Zou Cheng Chen Xiao-Yu Wang Xu Chen Zhen-Kai Wu Yi Shao 《International Journal of Ophthalmology(English edition)》 2025年第1期1-8,共8页
AIM: To assess the feasibility of using large language models (LLMs) for ocular surface diseases by testing the accuracy of five LLMs on specialized questions related to ocular surface diseases: ChatGPT-4, ChatGPT-3.5, Claude 2, PaLM2, and SenseNova. METHODS: A group of experienced ophthalmology professors developed a 100-question single-choice exam on ocular surface diseases, designed to assess the performance of LLMs and human participants on ophthalmology specialty exam questions. The exam covers the following topics: keratitis (20 questions); keratoconus, keratomalacia, corneal dystrophy, corneal degeneration, erosive corneal ulcers, and corneal lesions associated with systemic diseases (20 questions); conjunctivitis (20 questions); trachoma, pterygium, and conjunctival tumors (20 questions); and dry eye disease (20 questions). The total score of each LLM was then computed, and their mean scores, mean correlations, variances, and confidence were compared. RESULTS: GPT-4 exhibited the highest performance among the LLMs. Comparing the average score of the LLM group with the four human groups (chief physicians, attending physicians, regular trainees, and graduate students) showed that, except for ChatGPT-4, every LLM scored lower than the graduate student group, which had the lowest score among the human groups. Both ChatGPT-4 and PaLM2 were more likely to give exact and correct answers, leaving very little chance of an incorrect answer. ChatGPT-4 showed higher credibility when answering questions, with a success rate of 59%, but gave the wrong answer 28% of the time. CONCLUSION: The GPT-4 model exhibits excellent performance in both answer relevance and confidence. PaLM2 shows a positive correlation (up to 0.8) in answer accuracy during the exam. In answer confidence, PaLM2 is second only to GPT-4 and surpasses Claude 2, SenseNova, and GPT-3.5. Although ocular surface disease is a highly specialized discipline, GPT-4 still exhibits superior performance, suggesting that its potential for application in this field is enormous, perhaps as a valuable resource for medical students and clinicians in the future.
Keywords: ChatGPT-4.0, ChatGPT-3.5, large language models, ocular surface diseases
Optimizing Fine-Tuning in Quantized Language Models:An In-Depth Analysis of Key Variables
18
Authors: Ao Shen, Zhiquan Lai, Dongsheng Li, Xiaoyu Hu. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 307-325 (19 pages)
Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter-quantization methods have emerged as solutions to these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, shrinking the adapter of a larger layer preserves fine-tuning accuracy more effectively than shrinking that of a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
Keywords: large-scale language model, parameter-efficient fine-tuning, parameter quantization, key variables, trainable parameters, experimental analysis
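The rank-sensitivity finding can be made concrete with a quick parameter count: a LoRA adapter on a weight matrix of shape (d_out, d_in) adds rank * (d_in + d_out) trainable parameters, so MLP projections carry much larger adapters than attention projections at the same rank. The layer names and dimensions below are illustrative (roughly LLaMA-7B-like), not taken from the paper.

```python
def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters added by a LoRA pair A (rank x d_in) and B (d_out x rank)."""
    return rank * (d_in + d_out)

hidden, ffn = 4096, 11008  # illustrative hidden / feed-forward dimensions

# Self-attention query projection vs. MLP up-projection at two ranks.
for name, d_in, d_out in [("attn.q_proj", hidden, hidden),
                          ("mlp.up_proj", hidden, ffn)]:
    for r in (16, 8):
        print(f"{name} rank={r}: {lora_params(d_in, d_out, r):,} params")
```

Halving the rank removes proportionally as many parameters from either layer, but the MLP adapter remains far larger in absolute terms, which is consistent with the observation that larger layers tolerate rank reduction better.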
Phoneme Sequence Modeling in the Context of Speech Signal Recognition in Language “Baoule”
19
Authors: Hyacinthe Konan, Etienne Soro, Olivier Asseu, Bi Tra Goore, Raymond Gbegbe. Engineering (科研), 2016, Issue 9, pp. 597-617 (22 pages)
This paper presents the recognition of spoken sentences in Baoule, a language of Côte d'Ivoire. Several formalisms allow the modelling of an automatic speech recognition system; the one we used to build our system is based on discrete Hidden Markov Models (HMMs). Our goal in this article is to present a system for the recognition of Baoule words. We present three classical problems and develop different algorithms able to resolve them. We then execute these algorithms on concrete examples.
Keywords: HMM, MATLAB, language model, acoustic model, automatic speech recognition
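One of the three classical HMM problems is evaluation: computing the likelihood of an observation sequence given the model, solved by the forward algorithm. A minimal sketch for a discrete HMM follows; the toy two-state, three-symbol model is illustrative and is not the paper's Baoule system.

```python
def forward(A, B, pi, obs):
    """P(observation sequence | model) via the forward recursion."""
    n = len(pi)
    # Initialization: alpha_1(i) = pi_i * b_i(o_1)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    # Induction: alpha_{t+1}(j) = [sum_i alpha_t(i) * a_ij] * b_j(o_{t+1})
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    # Termination: sum over final states
    return sum(alpha)

A = [[0.7, 0.3], [0.4, 0.6]]             # state transition probabilities
B = [[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]]   # discrete emission probabilities
pi = [0.6, 0.4]                          # initial state distribution
print(forward(A, B, pi, [0, 1, 2]))      # likelihood of observing symbols 0, 1, 2
```

The other two classical problems, decoding and training, follow the same recursion structure (Viterbi replaces the sum with a max; Baum-Welch re-estimates A, B, and pi from the forward and backward variables).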
Cognitive Biases in Artificial Intelligence:Susceptibility of a Large Language Model to Framing Effect and Confirmation Bias
20
Authors: Li Hao, Wang You, Yang Xueling. 《心理科学》 (Journal of Psychological Science, PKU Core), 2025, Issue 4, pp. 892-906 (15 pages)
The rapid advancement of Artificial Intelligence (AI) and Large Language Models (LLMs) has led to their increasing integration into various domains, from text generation and translation to question-answering. However, a critical question remains: do these sophisticated models, much like humans, exhibit susceptibility to cognitive biases? Understanding the presence and nature of such biases in AI is paramount for assessing their reliability, enhancing their performance, and predicting their societal impact. This research investigates the susceptibility of Google's Gemini 1.5 Pro and DeepSeek, two prominent LLMs, to framing effects and confirmation bias. The study designed a series of experimental trials, systematically manipulating information proportions and presentation orders to evaluate these biases. In the framing-effect experiment, a genetic-testing decision-making scenario was constructed: the proportion of positive and negative information (e.g., 20%, 50%, or 80% positive) and its presentation order were varied, and the models' inclination towards undergoing genetic testing was recorded. For the confirmation-bias experiment, two reports, one positive and one negative, about "RoboTaxi" autonomous vehicles were provided; the proportion of erroneous information within these reports (10%, 30%, and 50%) and their presentation order were systematically altered, and the models' support for each report was assessed. The findings demonstrate that both Gemini 1.5 Pro and DeepSeek are susceptible to framing effects. In the genetic-testing scenario, their decision-making was primarily influenced by the proportion of positive and negative information presented: when the proportion of positive information was higher, both models showed a greater inclination to recommend or proceed with genetic testing, while a higher proportion of negative information led to greater caution or a tendency not to recommend the testing. Importantly, the order in which this information was presented did not significantly influence their decisions in the framing-effect scenarios. Regarding confirmation bias, the two models exhibited distinct behaviors. Gemini 1.5 Pro showed no overall preference for either positive or negative reports, but its judgments were significantly influenced by the order of information presentation, demonstrating a "recency effect": it tended to support the report presented later. The proportion of erroneous information within the reports had no significant impact on Gemini 1.5 Pro's decisions. In contrast, DeepSeek exhibited an overall confirmation bias, showing a clear preference for positive reports. Like Gemini 1.5 Pro, DeepSeek's decisions were also significantly affected by presentation order, while the proportion of misinformation had no significant effect. These results reveal human-like cognitive vulnerabilities in advanced LLMs, highlighting critical challenges to their reliability and objectivity in decision-making processes. Gemini 1.5 Pro's sensitivity to presentation order and DeepSeek's general preference for positive information, coupled with its sensitivity to order, underscore the need for careful evaluation of potential cognitive biases during the development and application of AI. The study suggests that effective measures are necessary to mitigate these biases and prevent potential negative societal impacts. Future research should include a broader range of models for comparative analysis and explore more complex interactive scenarios to further understand these phenomena. The findings contribute significantly to understanding the limitations and capabilities of current AI systems, guiding their responsible development and anticipating their potential societal implications.
Keywords: artificial intelligence, large language models, cognitive bias, confirmation bias, framing effect
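The framing-effect design described above (three positive-information proportions crossed with two presentation orders) can be sketched as a condition grid that assembles one prompt per trial. The statement texts and the final query are placeholders, not the study's actual materials.

```python
from itertools import product

# Placeholder statements standing in for the study's materials.
POSITIVE = "Genetic testing can enable early detection of disease."
NEGATIVE = "Genetic testing can cause anxiety about uncertain results."

def build_trial(pos_ratio: float, pos_first: bool, n_statements: int = 10) -> str:
    """Assemble one trial prompt with a given positive-information proportion and order."""
    n_pos = round(n_statements * pos_ratio)
    pos = [POSITIVE] * n_pos
    neg = [NEGATIVE] * (n_statements - n_pos)
    statements = pos + neg if pos_first else neg + pos
    return "\n".join(statements) + "\nWould you recommend genetic testing? (yes/no)"

# 3 proportions x 2 presentation orders = 6 framing conditions.
conditions = list(product([0.2, 0.5, 0.8], [True, False]))
print(f"{len(conditions)} conditions")
sample = build_trial(0.8, pos_first=True)  # 8 positive then 2 negative statements
```

Each prompt would then be submitted to the model under test and the yes/no inclination recorded per condition, with order as a separate factor in the analysis.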