Journal Articles
554 articles found
A Modeling Language Based on UML for Modeling Simulation Testing System of Avionic Software (Cited by 2)
1
Authors: WANG Lize, LIU Bin, LU Minyan. Chinese Journal of Aeronautics (SCIE EI CAS CSCD), 2011, No. 2, pp. 181-194 (14 pages)
With direct expression of individual application domain patterns and ideas, domain-specific modeling language (DSML) is increasingly used to build models instead of a combination of one or more general constructs. Based on the profile mechanism of the unified modeling language (UML) 2.2, a DSML is presented to model simulation testing systems of avionic software (STSAS). To define the syntax, semantics and notation of the DSML, the domain model of the STSAS, from which we generalize the domain concepts and the relationships among these concepts, is given, and then the domain model is mapped into a UML meta-model, named the UML-STSAS profile. Assuming a flight control system (FCS) as the system under test (SUT), we design the relevant STSAS. The results indicate that extending UML to the simulation testing domain can effectively and precisely model STSAS.
Keywords: AVIONICS; HARDWARE-IN-THE-LOOP test facilities; META-MODEL; UML profile; domain-specific modeling language; abstract state machine
Large language model-based multi-objective modeling framework for vacuum gas oil hydrotreating
2
Authors: Zheyuan Pang, Siying Liu, Yiting Lin, Xiangchen Fang, Honglai Liu, Chong Peng, Cheng Lian. Chinese Journal of Chemical Engineering, 2025, No. 8, pp. 133-145 (13 pages)
Data-driven approaches are extensively employed to model complex chemical engineering processes, such as hydrotreating, to address the challenges of mechanism-based methods demanding deep process understanding. However, the development of such models requires specialized expertise in data science, limiting their broader application. Large language models (LLMs), such as GPT-4, have demonstrated potential in supporting and guiding research efforts. This work presents a novel AI-assisted framework where GPT-4, through well-engineered prompts, facilitates the construction and explanation of multi-objective neural networks. These models predict the properties (such as distillation range) of hydrotreating products, including refined diesel and refined gas oil, using feedstock properties, operating conditions, and recycle hydrogen composition. Gradient-weighted class activation mapping was employed to identify key features influencing the output variables. This work illustrates an innovative AI-guided paradigm for chemical engineering applications, and the designed prompts hold promise for adaptation to other complex processes.
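The abstract describes a multi-objective network mapping process inputs to several product properties. Below is a minimal sketch of that kind of model, not the paper's GPT-4-generated code; the feature and target counts, layer sizes, and shared-trunk/per-head design are illustrative assumptions.

```python
# A hedged sketch of a multi-objective regressor: several product properties are
# predicted jointly from feedstock properties, operating conditions, and recycle
# hydrogen composition. All sizes below are assumptions for illustration.
import torch
import torch.nn as nn

class MultiObjectiveNet(nn.Module):
    def __init__(self, n_features=20, n_targets=5):
        super().__init__()
        self.shared = nn.Sequential(            # shared trunk over all inputs
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        # One small head per objective (e.g., each distillation-range point).
        self.heads = nn.ModuleList([nn.Linear(64, 1) for _ in range(n_targets)])

    def forward(self, x):
        h = self.shared(x)
        return torch.cat([head(h) for head in self.heads], dim=1)

model = MultiObjectiveNet()
x = torch.randn(8, 20)          # a batch of hypothetical operating conditions
y_pred = model(x)               # five predicted product properties per sample
print(y_pred.shape)             # torch.Size([8, 5])
```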
Keywords: HYDROGENATION; Prompt engineering; Large language model; Neural networks; Prediction
ExplainableDetector: Exploring transformer-based language modeling approach for SMS spam detection with explainability analysis
3
Authors: Mohammad Amaz Uddin, Muhammad Nazrul Islam, Leandros Maglaras, Helge Janicke, Iqbal H. Sarker. Digital Communications and Networks, 2025, No. 5, pp. 1504-1518 (15 pages)
Short Message Service (SMS) is a widely used and cost-effective communication medium that has unfortunately become a frequent target for unsolicited messages, commonly known as SMS spam. With the rapid adoption of smartphones and increased Internet connectivity, SMS spam has emerged as a prevalent threat. Spammers have recognized the critical role SMS plays in modern communication, making it a prime target for abuse. As cybersecurity threats continue to evolve, the volume of SMS spam has increased substantially in recent years. Moreover, the unstructured format of SMS data creates significant challenges for SMS spam detection, making it more difficult to combat spam attacks successfully. In this paper, we present an optimized and fine-tuned transformer-based language model to address the problem of SMS spam detection. We use a benchmark SMS spam dataset to analyze this spam detection model. Additionally, we utilize pre-processing techniques to obtain clean and noise-free data and address the class imbalance problem by leveraging text augmentation techniques. The overall experiment showed that our optimized, fine-tuned BERT (Bidirectional Encoder Representations from Transformers) variant, RoBERTa, obtained a high accuracy of 99.84%. To further enhance model transparency, we incorporate Explainable Artificial Intelligence (XAI) techniques that compute positive and negative coefficient scores, offering insight into the model's decision-making process. Additionally, we evaluate the performance of traditional machine learning models as a baseline for comparison. This comprehensive analysis demonstrates the significant impact language models can have on addressing complex text-based challenges within the cybersecurity landscape.
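For orientation, the following is a minimal sketch (not the authors' released code) of fine-tuning a RoBERTa sequence classifier for binary SMS spam detection with the Hugging Face stack; the CSV file names, column names, and hyperparameters are assumptions.

```python
# Hedged sketch: fine-tune roberta-base for spam/ham classification.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Assumed CSV files with "text" and "label" columns (0 = ham, 1 = spam).
dataset = load_dataset("csv", data_files={"train": "sms_train.csv", "test": "sms_test.csv"})

def tokenize(batch):
    # SMS messages are short; 128 tokens is a reasonable cap.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="spam-roberta",
                         per_device_train_batch_size=32,
                         num_train_epochs=3,
                         learning_rate=2e-5)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["test"])
trainer.train()
print(trainer.evaluate())
```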
Keywords: CYBERSECURITY; Machine learning; Large language model; Spam detection; Text analytics; Explainable AI; Fine-tuning; TRANSFORMER
A framework for an integrated unified modeling language (Cited by 3)
4
Authors: Mohammad ALSHAYEB, Nasser KHASHAN, Sajjad MAHMOOD. Frontiers of Information Technology & Electronic Engineering (SCIE EI CSCD), 2016, No. 2, pp. 143-159 (17 pages)
The unified modeling language (UML) is one of the most commonly used modeling languages in the software industry. It simplifies the complex process of design by providing a set of graphical notations, which helps express the object-oriented analysis and design of software projects. Although UML is applicable to different types of systems, domains, methods, and processes, it cannot express certain problem domain needs. Therefore, many extensions to UML have been proposed. In this paper, we propose a framework for integrating the UML extensions and then use the framework to propose an integrated unified modeling language-graphical (iUML-g) form. iUML-g integrates the existing UML extensions into one integrated form. This includes an integrated diagram for UML class, sequence, and use case diagrams. The proposed approach is evaluated using a case study. The proposed iUML-g is capable of modeling systems that use different domains.
Keywords: Unified modeling language (UML); INTEGRATION; modeling; System analysis and design
Validation of static properties in unified modeling language models for cyber physical systems (Cited by 2)
5
Authors: Gabriela MAGUREANU, Madalin GAVRILESCU, Dan PESCARU. Journal of Zhejiang University-Science C (Computers and Electronics) (SCIE EI), 2013, No. 5, pp. 332-346 (15 pages)
Cyber physical systems (CPSs) can be found nowadays in various fields of activity. The increased interest in these systems, as evidenced by the large number of applications, has led to complex research regarding the most suitable methods for design and development. A promising solution for specification, visualization, and documentation of CPSs uses the Object Management Group (OMG) unified modeling language (UML). UML models allow an intuitive approach for embedded systems design, helping end-users to specify the requirements. However, UML models are represented in an informal language. Therefore, it is difficult to verify the correctness and completeness of a system design. The object constraint language (OCL) was defined to add constraints to UML, but it lacks the strict notations of mathematics and logic that permit rigorous analysis and reasoning about the specifications. In this paper, we investigated how CPS applications modeled using UML deployment diagrams can be formally expressed and verified. We used Z language constructs and the prototype verification system (PVS) as formal verification tools. Considering some relevant case studies presented in the literature, we investigated the opportunity of using this approach for validation of static properties in CPS UML models.
Keywords: Cyber physical system (CPS); Unified modeling language (UML) design; Formal verification; Prototype verification system (PVS); Z language
Intelligent modeling method for OV models in DoDAF2.0 based on knowledge graph
6
Authors: ZHANG Yue, JIANG Jiang, YANG Kewei, WANG Xingliang, XU Chi, LI Minghao. Journal of Systems Engineering and Electronics, 2025, No. 1, pp. 139-154 (16 pages)
Architecture frameworks have recently become an effective method to describe system of systems (SoS) architectures, such as the United States (US) Department of Defense Architecture Framework Version 2.0 (DoDAF2.0). As a viewpoint in DoDAF2.0, the operational viewpoint (OV) describes operational activities, nodes, and resource flows. The OV models are important for SoS architecture development. However, as SoS complexity increases, constructing OV models with traditional methods exposes shortcomings, such as inefficient data collection and low modeling standards. Therefore, we propose an intelligent modeling method for five OV models: the operational resource flow (OV-2), organizational relationships (OV-4), operational activity hierarchy (OV-5a), operational activities model (OV-5b), and operational activity sequences (OV-6c). The main idea of the method is to extract OV architecture data from text and generate interoperable OV models. First, we construct the OV meta model based on the DoDAF2.0 meta model (DM2). Second, OV architecture named entities are recognized from text based on the bidirectional long short-term memory and conditional random field (BiLSTM-CRF) model, and OV architecture relationships are collected with relationship extraction rules. Finally, we define the generation rules for OV models and develop an OV modeling tool. We use an unmanned surface vehicle (USV) swarm target defense SoS architecture as a case to verify the feasibility and effectiveness of the intelligent modeling method.
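A BiLSTM-CRF tagger of the kind mentioned above can be sketched as follows; this is not the authors' code, and the tag set, sizes, and the third-party pytorch-crf package are assumptions made for illustration.

```python
# Hedged sketch of a BiLSTM-CRF sequence tagger for OV architecture entities.
import torch
import torch.nn as nn
from torchcrf import CRF  # assumed third-party package: pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden // 2, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(hidden, num_tags)        # per-token emission scores
        self.crf = CRF(num_tags, batch_first=True)   # learned tag-transition scores

    def loss(self, tokens, tags, mask):
        emissions = self.fc(self.lstm(self.emb(tokens))[0])
        return -self.crf(emissions, tags, mask=mask)  # negative log-likelihood

    def predict(self, tokens, mask):
        emissions = self.fc(self.lstm(self.emb(tokens))[0])
        return self.crf.decode(emissions, mask=mask)  # Viterbi-best tag sequences

# Toy usage with an assumed BIO scheme such as {O, B-ACTIVITY, I-ACTIVITY, B-NODE, I-NODE}.
model = BiLSTMCRF(vocab_size=5000, num_tags=5)
tokens = torch.randint(1, 5000, (2, 12))
mask = torch.ones(2, 12, dtype=torch.bool)
print(model.predict(tokens, mask))
```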
Keywords: system of systems (SoS) architecture; operational viewpoint (OV) model; meta model; bidirectional long short-term memory and conditional random field (BiLSTM-CRF); model generation; systems modeling language
A Review of Process Modeling Language Paradigms (Cited by 1)
7
Authors: MA Qin-hai, GUAN Zhi-min, LI Ying, ZHAO Xi-nan (School of Business Administration, Northeastern University, Shenyang 110004, China). Systems Science and Systems Engineering (CSCD), 2002, No. 4, pp. 439-454 (16 pages)
Process representation or modeling plays an important role in business process engineering. Process modeling languages can be evaluated by the extent to which they provide constructs useful for representing and reasoning about the aspects of a process, and are subsequently chosen for a certain purpose. This paper reviews process modeling language paradigms and points out their advantages and disadvantages.
Keywords: business process reengineering; process representation; process modelling language
New Retrieval Method Based on Relative Entropy for Language Modeling with Different Smoothing Methods
8
Authors: 霍华, 刘俊强, 冯博琴. Journal of Southwest Jiaotong University (English Edition), 2006, No. 2, pp. 113-120 (8 pages)
A language model for information retrieval is built by using a query language model to generate queries and a document language model to generate documents. The documents are ranked according to the relative entropies of the estimated document language models with respect to the estimated query language model. Two popular and relatively efficient smoothing methods, the Jelinek-Mercer method and the absolute discounting method, are used to smooth the document language model when estimating it. A combined model composed of the feedback document language model and the collection language model is used to estimate the query model. A performance comparison between the new retrieval method and the existing method with feedback is made, and the retrieval performances of the proposed method with the two different smoothing techniques are evaluated on three Text Retrieval Conference (TREC) data sets. Experimental results show that the method is effective and performs better than the basic language modeling approach; moreover, the method using the Jelinek-Mercer technique performs better than that using the absolute discounting technique, and the performance is sensitive to the smoothing parameters.
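For reference, the commonly used forms of these components in language-modeling retrieval are sketched below; the paper's exact parameterization may differ.

```latex
% Ranking by negative KL divergence between query model \hat\theta_Q and document model \hat\theta_D:
\mathrm{score}(Q,D) = -D\!\left(\hat\theta_Q \,\|\, \hat\theta_D\right)
  = \sum_{w} p(w\mid\hat\theta_Q)\,\log p(w\mid\hat\theta_D) + \mathrm{const}(Q)

% Jelinek--Mercer smoothing of the document model with collection model p(w\mid C):
p_{\lambda}(w\mid d) = (1-\lambda)\,\frac{c(w;d)}{|d|} + \lambda\, p(w\mid C)

% Absolute discounting with discount \delta and |d|_u distinct terms in d:
p_{\delta}(w\mid d) = \frac{\max\bigl(c(w;d)-\delta,\,0\bigr)}{|d|} + \frac{\delta\,|d|_u}{|d|}\, p(w\mid C)
```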
Keywords: Information retrieval; Relative entropy; language modeling; SMOOTHING
Large language models for robotics: Opportunities, challenges, and perspectives (Cited by 4)
9
Authors: Jiaqi Wang, Enze Shi, Huawen Hu, Chong Ma, Yiheng Liu, Xuhui Wang, Yincheng Yao, Xuan Liu, Bao Ge, Shu Zhang. Journal of Automation and Intelligence, 2025, No. 1, pp. 52-64 (13 pages)
Large language models (LLMs) have undergone significant expansion and have been increasingly integrated across various domains. Notably, in the realm of robot task planning, LLMs harness their advanced reasoning and language comprehension capabilities to formulate precise and efficient action plans based on natural language instructions. However, for embodied tasks, where robots interact with complex environments, text-only LLMs often face challenges due to a lack of compatibility with robotic visual perception. This study provides a comprehensive overview of the emerging integration of LLMs and multimodal LLMs into various robotic tasks. Additionally, we propose a framework that utilizes multimodal GPT-4V to enhance embodied task planning through the combination of natural language instructions and robot visual perceptions. Our results, based on diverse datasets, indicate that GPT-4V effectively enhances robot performance in embodied tasks. This extensive survey and evaluation of LLMs and multimodal LLMs across a variety of robotic tasks enriches the understanding of LLM-centric embodied intelligence and provides forward-looking insights towards bridging the gap in Human-Robot-Environment interaction.
Keywords: Large language models; ROBOTICS; Generative AI; Embodied intelligence
On large language models safety, security, and privacy: A survey (Cited by 3)
10
Authors: Ran Zhang, Hong-Wei Li, Xin-Yuan Qian, Wen-Bo Jiang, Han-Xiao Chen. Journal of Electronic Science and Technology, 2025, No. 1, pp. 1-21 (21 pages)
The integration of artificial intelligence (AI) technology, particularly large language models (LLMs), has become essential across various sectors due to their advanced language comprehension and generation capabilities. Despite their transformative impact in fields such as machine translation and intelligent dialogue systems, LLMs face significant challenges. These challenges include safety, security, and privacy concerns that undermine their trustworthiness and effectiveness, such as hallucinations, backdoor attacks, and privacy leakage. Previous works often conflated safety issues with security concerns. In contrast, our study provides clearer and more reasonable definitions for safety, security, and privacy within the context of LLMs. Building on these definitions, we provide a comprehensive overview of the vulnerabilities and defense mechanisms related to safety, security, and privacy in LLMs. Additionally, we explore the unique research challenges posed by LLMs and suggest potential avenues for future research, aiming to enhance the robustness and reliability of LLMs in the face of emerging threats.
Keywords: Large language models; Privacy issues; Safety issues; Security issues
When Software Security Meets Large Language Models: A Survey (Cited by 2)
11
Authors: Xiaogang Zhu, Wei Zhou, Qing-Long Han, Wanlun Ma, Sheng Wen, Yang Xiang. IEEE/CAA Journal of Automatica Sinica, 2025, No. 2, pp. 317-334 (18 pages)
Software security poses substantial risks to our society because software has become part of our life. Numerous techniques have been proposed to resolve or mitigate the impact of software security issues. Among them, software testing and analysis are two of the critical methods, which benefit significantly from advancements in deep learning technologies. Due to the successful use of deep learning in software security, researchers have recently explored the potential of using large language models (LLMs) in this area. In this paper, we systematically review the results focusing on LLMs in software security. We analyze the topics of fuzzing, unit testing, program repair, bug reproduction, data-driven bug detection, and bug triage. We deconstruct these techniques into several stages and analyze how LLMs can be used in each stage. We also discuss the future directions of using LLMs in software security, including the future directions for the existing uses of LLMs and extensions from conventional deep learning research.
Keywords: Large language models (LLMs); software analysis; software security; software testing
The Security of Using Large Language Models: A Survey With Emphasis on ChatGPT (Cited by 2)
12
Authors: Wei Zhou, Xiaogang Zhu, Qing-Long Han, Lin Li, Xiao Chen, Sheng Wen, Yang Xiang. IEEE/CAA Journal of Automatica Sinica, 2025, No. 1, pp. 1-26 (26 pages)
ChatGPT is a powerful artificial intelligence (AI) language model that has demonstrated significant improvements in various natural language processing (NLP) tasks. However, like any technology, it presents potential security risks that need to be carefully evaluated and addressed. In this survey, we provide an overview of the current state of research on the security of using ChatGPT, covering bias, disinformation, ethics, misuse, attacks, and privacy. We review and discuss the literature on these topics and highlight open research questions and future directions. Through this survey, we aim to contribute to the academic discourse on AI security, enriching the understanding of potential risks and mitigations. We anticipate that this survey will be valuable for various stakeholders involved in AI development and usage, including AI researchers, developers, policy makers, and end-users.
Keywords: Artificial intelligence (AI); ChatGPT; large language models (LLMs); SECURITY
Large Language Model Agent with VGI Data for Mapping (Cited by 2)
13
Authors: SONG Jiayu, ZHANG Yifan, WANG Zhiyun, YU Wenhao. Journal of Geodesy and Geoinformation Science, 2025, No. 2, pp. 57-73 (17 pages)
In recent years, Volunteered Geographic Information (VGI) has emerged as a crucial source of mapping data, contributed by users through crowdsourcing platforms such as OpenStreetMap. This paper presents a novel approach that integrates Large Language Models (LLMs) into a fully automated mapping workflow utilizing VGI data. The process leverages prompt engineering, which involves designing and optimizing input instructions to ensure the LLM produces the desired mapping outputs. By constructing precise and detailed prompts, LLM agents are able to accurately interpret mapping requirements and autonomously extract, analyze, and process VGI geospatial data. They dynamically interact with mapping tools to automate the entire mapping process, from data acquisition to map generation. This approach significantly streamlines the creation of high-quality mapping outputs, reducing the time and resources typically required for such tasks. Moreover, the system lowers the barrier for non-expert users, enabling them to generate accurate maps without extensive technical expertise. Through various case studies, we demonstrate the LLM application across different mapping scenarios, highlighting its potential to enhance the efficiency, accuracy, and accessibility of map production. The results suggest that LLM-powered mapping systems can not only optimize VGI data processing but also expand the usability of ubiquitous mapping across diverse fields, including urban planning and infrastructure development.
Keywords: Volunteered Geographic Information (VGI); Geospatial Artificial Intelligence (GeoAI); AGENT; large language model
Evaluating research quality with Large Language Models: An analysis of ChatGPT's effectiveness with different settings and inputs (Cited by 1)
14
Author: Mike Thelwall. Journal of Data and Information Science, 2025, No. 1, pp. 7-25 (19 pages)
Purpose: Evaluating the quality of academic journal articles is a time-consuming but critical task for national research evaluation exercises, appointments, and promotion. It is therefore important to investigate whether Large Language Models (LLMs) can play a role in this process. Design/methodology/approach: This article assesses which ChatGPT inputs (full text without tables, figures, and references; title and abstract; title only) produce better quality score estimates, and the extent to which scores are affected by ChatGPT models and system prompts. Findings: The optimal input is the article title and abstract, with average ChatGPT scores based on these (30 iterations on a dataset of 51 papers) correlating at 0.67 with human scores, the highest ever reported. ChatGPT 4o is slightly better than 3.5-turbo (0.66) and 4o-mini (0.66). Research limitations: The data is a convenience sample of the work of a single author, it only includes one field, and the scores are self-evaluations. Practical implications: The results suggest that article full texts might confuse LLM research quality evaluations, even though complex system instructions for the task are more effective than simple ones. Thus, whilst abstracts contain insufficient information for a thorough assessment of rigour, they may contain strong pointers about originality and significance. Finally, linear regression can be used to convert the model scores into human scale scores, which is 31% more accurate than guessing. Originality/value: This is the first systematic comparison of the impact of different prompts, parameters and inputs for ChatGPT research quality evaluations.
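The averaging-plus-regression calibration step described above can be illustrated as follows; this is a sketch rather than the paper's code, and the arrays, score ranges, and column layout are placeholders.

```python
# Hedged sketch: average repeated ChatGPT quality scores per paper, then fit a
# linear regression that maps them onto the human rating scale.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_papers, n_iterations = 51, 30

# Hypothetical per-iteration ChatGPT scores and placeholder human ratings.
chatgpt_scores = rng.uniform(1, 4, size=(n_papers, n_iterations))
human_scores = rng.integers(1, 5, size=n_papers)

mean_scores = chatgpt_scores.mean(axis=1)                 # average over 30 iterations
reg = LinearRegression().fit(mean_scores.reshape(-1, 1), human_scores)
predicted = reg.predict(mean_scores.reshape(-1, 1))       # scores on the human scale
print(reg.coef_, reg.intercept_)
```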
Keywords: ChatGPT; Large Language Models; LLMs; SCIENTOMETRICS; Research Assessment
HyPepTox-Fuse: An interpretable hybrid framework for accurate peptide toxicity prediction fusing protein language model-based embeddings with conventional descriptors (Cited by 1)
15
Authors: Duong Thanh Tran, Nhat Truong Pham, Nguyen Doan Hieu Nguyen, Leyi Wei, Balachandran Manavalan. Journal of Pharmaceutical Analysis, 2025, No. 8, pp. 1873-1886 (14 pages)
Peptide-based therapeutics hold great promise for the treatment of various diseases; however, their clinical application is often hindered by toxicity challenges. The accurate prediction of peptide toxicity is crucial for designing safe peptide-based therapeutics. While traditional experimental approaches are time-consuming and expensive, computational methods have emerged as viable alternatives, including similarity-based and machine learning (ML)-/deep learning (DL)-based methods. However, existing methods often struggle with robustness and generalizability. To address these challenges, we propose HyPepTox-Fuse, a novel framework that fuses protein language model (PLM)-based embeddings with conventional descriptors. HyPepTox-Fuse integrates ensemble PLM-based embeddings to achieve richer peptide representations by leveraging a cross-modal multi-head attention mechanism and a Transformer architecture. A robust feature ranking and selection pipeline further refines the conventional descriptors, enhancing prediction performance. Our framework outperforms state-of-the-art methods in cross-validation and independent evaluations, offering a scalable and reliable tool for peptide toxicity prediction. Moreover, we conducted a case study to validate the robustness and generalizability of HyPepTox-Fuse, highlighting its effectiveness in enhancing model performance. Furthermore, the HyPepTox-Fuse server is freely accessible at https://balalab-skku.org/HyPepTox-Fuse/ and the source code is publicly available at https://github.com/cbbl-skku-org/HyPepTox-Fuse/. The study thus presents an intuitive platform for predicting peptide toxicity and supports reproducibility through openly available datasets.
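The cross-modal attention fusion idea can be sketched in the following minimal form; this is not the released HyPepTox-Fuse code, and the embedding dimensions, projection sizes, and single-query pooling are assumptions.

```python
# Hedged sketch: conventional descriptors attend over per-residue PLM embeddings.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, plm_dim=1280, desc_dim=200, d_model=256, heads=8):
        super().__init__()
        self.plm_proj = nn.Linear(plm_dim, d_model)    # project PLM residue embeddings
        self.desc_proj = nn.Linear(desc_dim, d_model)  # project descriptor vector
        self.attn = nn.MultiheadAttention(d_model, heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(d_model, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, plm_tokens, descriptors):
        # plm_tokens: (batch, seq_len, plm_dim); descriptors: (batch, desc_dim)
        kv = self.plm_proj(plm_tokens)
        q = self.desc_proj(descriptors).unsqueeze(1)        # one query token per peptide
        fused, _ = self.attn(q, kv, kv)                     # descriptor attends over residues
        return torch.sigmoid(self.head(fused.squeeze(1)))   # toxicity probability

model = CrossModalFusion()
out = model(torch.randn(4, 50, 1280), torch.randn(4, 200))
print(out.shape)  # torch.Size([4, 1])
```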
Keywords: Peptide toxicity; Hybrid framework; Multi-head attention; Transformer; Deep learning; Machine learning; Protein language model
GPT2-ICC: A data-driven approach for accurate ion channel identification using pre-trained large language models (Cited by 1)
16
Authors: Zihan Zhou, Yang Yu, Chengji Yang, Leyan Cao, Shaoying Zhang, Junnan Li, Yingnan Zhang, Huayun Han, Guoliang Shi, Qiansen Zhang, Juwen Shen, Huaiyu Yang. Journal of Pharmaceutical Analysis, 2025, No. 8, pp. 1800-1809 (10 pages)
Current experimental and computational methods have limitations in accurately and efficiently classifying ion channels within vast protein spaces. Here we have developed a deep learning algorithm, the GPT2 Ion Channel Classifier (GPT2-ICC), which effectively distinguishes ion channels in a test set containing approximately 239 times more non-ion-channel proteins. GPT2-ICC integrates representation learning with a large language model (LLM)-based classifier, enabling highly accurate identification of potential ion channels. Several potential ion channels were predicted from the unannotated human proteome, further demonstrating GPT2-ICC's generalization ability. This study marks a significant advancement in artificial-intelligence-driven ion channel research, highlighting the adaptability and effectiveness of combining representation learning with LLMs to address the challenges of imbalanced protein sequence data. Moreover, it provides a valuable computational tool for uncovering previously uncharacterized ion channels.
Keywords: Ion channel; Artificial intelligence; Representation learning; GPT2; Protein language model
Assessing the possibility of using large language models in ocular surface diseases (Cited by 1)
17
Authors: Qian Ling, Zi-Song Xu, Yan-Mei Zeng, Qi Hong, Xian-Zhe Qian, Jin-Yu Hu, Chong-Gang Pei, Hong Wei, Jie Zou, Cheng Chen, Xiao-Yu Wang, Xu Chen, Zhen-Kai Wu, Yi Shao. International Journal of Ophthalmology (English edition), 2025, No. 1, pp. 1-8 (8 pages)
AIM: To assess the possibility of using different large language models (LLMs) in ocular surface diseases by selecting five LLMs and testing their accuracy in answering specialized questions related to ocular surface diseases: ChatGPT-4, ChatGPT-3.5, Claude 2, PaLM2, and SenseNova. METHODS: A group of experienced ophthalmology professors was asked to develop a 100-question single-choice examination on ocular surface diseases, designed to assess the performance of LLMs and human participants in answering ophthalmology specialty exam questions. The exam includes questions on the following topics: keratitis (20 questions); keratoconus, keratomalacia, corneal dystrophy, corneal degeneration, erosive corneal ulcers, and corneal lesions associated with systemic diseases (20 questions); conjunctivitis (20 questions); trachoma, pterygium and conjunctival tumor diseases (20 questions); and dry eye disease (20 questions). The total score of each LLM was then calculated, and their mean scores, mean correlations, variances, and confidence were compared. RESULTS: GPT-4 exhibited the highest performance among the LLMs. Comparing the average scores of the LLM group with the four human groups (chief physician, attending physician, regular trainee, and graduate student), it was found that, except for ChatGPT-4, the total scores of the other LLMs were lower than that of the graduate student group, which had the lowest score among the human groups. Both ChatGPT-4 and PaLM2 were more likely to give exact and correct answers, giving very little chance of an incorrect answer. ChatGPT-4 showed higher credibility when answering questions, with a success rate of 59%, but gave the wrong answer 28% of the time. CONCLUSION: The GPT-4 model exhibits excellent performance in both answer relevance and confidence. PaLM2 shows a positive correlation (up to 0.8) in answer accuracy during the exam. In terms of answer confidence, PaLM2 is second only to GPT-4 and surpasses Claude 2, SenseNova, and GPT-3.5. Although ocular surface disease is a highly specialized discipline, GPT-4 still exhibits superior performance, suggesting that its potential for application in this field is enormous and that it may become a valuable resource for medical students and clinicians in the future.
Keywords: ChatGPT-4.0; ChatGPT-3.5; large language models; ocular surface diseases
Evaluating large language models as patient education tools for inflammatory bowel disease: A comparative study (Cited by 1)
18
Authors: Yan Zhang, Xiao-Han Wan, Qing-Zhou Kong, Han Liu, Jun Liu, Jing Guo, Xiao-Yun Yang, Xiu-Li Zuo, Yan-Qing Li. World Journal of Gastroenterology, 2025, No. 6, pp. 34-43 (10 pages)
BACKGROUND: Inflammatory bowel disease (IBD) is a global health burden that affects millions of individuals worldwide, necessitating extensive patient education. Large language models (LLMs) hold promise for addressing patient information needs. However, the use of LLMs to deliver accurate and comprehensible IBD-related medical information has yet to be thoroughly investigated. AIM: To assess the utility of three LLMs (ChatGPT-4.0, Claude-3-Opus, and Gemini-1.5-Pro) as a reference point for patients with IBD. METHODS: In this comparative study, two gastroenterology experts generated 15 IBD-related questions that reflected common patient concerns. These questions were used to evaluate the performance of the three LLMs. The answers provided by each model were independently assessed by three IBD-related medical experts using a Likert scale focusing on accuracy, comprehensibility, and correlation. Simultaneously, three patients were invited to evaluate the comprehensibility of the answers. Finally, a readability assessment was performed. RESULTS: Overall, each of the LLMs achieved satisfactory levels of accuracy, comprehensibility, and completeness when answering IBD-related questions, although their performance varied. All of the investigated models demonstrated strengths in providing basic disease information, such as the definition of IBD as well as its common symptoms and diagnostic methods. Nevertheless, when dealing with more complex medical advice, such as medication side effects, dietary adjustments, and complication risks, the quality of answers was inconsistent between the LLMs. Notably, Claude-3-Opus generated answers with better readability than the other two models. CONCLUSION: LLMs have potential as educational tools for patients with IBD; however, there are discrepancies between the models. Further optimization and the development of specialized models are necessary to ensure the accuracy and safety of the information provided.
Keywords: Inflammatory bowel disease; Large language models; Patient education; Medical information accuracy; Readability assessment
Phoneme Sequence Modeling in the Context of Speech Signal Recognition in Language “Baoule”
19
Authors: Hyacinthe Konan, Etienne Soro, Olivier Asseu, Bi Tra Goore, Raymond Gbegbe. Engineering (Scientific Research), 2016, No. 9, pp. 597-617 (22 pages)
This paper presents the recognition of spoken sentences in “Baoule”, a language of Côte d'Ivoire. Several formalisms allow the modelling of an automatic speech recognition system. The one we used to build our system is based on discrete Hidden Markov Models (HMMs). Our goal in this article is to present a system for the recognition of Baoule words. We present the three classical HMM problems and develop different algorithms able to resolve them. We then execute these algorithms with concrete examples.
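To make the first of the three classical HMM problems concrete (evaluation: computing the likelihood of an observation sequence), here is a minimal forward-algorithm sketch with a toy discrete HMM; the transition, emission, and initial probabilities below are illustrative, not the paper's model.

```python
# Hedged sketch of the forward algorithm for a discrete HMM.
import numpy as np

A = np.array([[0.7, 0.3],        # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],   # discrete emission probabilities per state
              [0.1, 0.3, 0.6]])
pi = np.array([0.6, 0.4])        # initial state distribution
obs = [0, 2, 1]                  # an observed symbol sequence

def forward(obs, A, B, pi):
    """Return P(observations | model) via the forward recursion."""
    alpha = pi * B[:, obs[0]]            # initialization
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # induction step
    return alpha.sum()                   # termination

print(forward(obs, A, B, pi))
```

The decoding and learning problems are handled analogously by the Viterbi and Baum-Welch algorithms.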
Keywords: HMM; MATLAB; Language Model; Acoustic Model; Automatic Speech Recognition
Optimizing Fine-Tuning in Quantized Language Models: An In-Depth Analysis of Key Variables
20
Authors: Ao Shen, Zhiquan Lai, Dongsheng Li, Xiaoyu Hu. Computers, Materials & Continua (SCIE EI), 2025, No. 1, pp. 307-325 (19 pages)
Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to address these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, reducing the trainable parameters in a larger layer is more effective in preserving fine-tuning accuracy than in a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
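A QLoRA-style setup of the kind analyzed above can be sketched as follows, assuming the Hugging Face transformers/peft/bitsandbytes stack rather than the paper's experimental code; the model name and target module names are illustrative and vary by architecture.

```python
# Hedged sketch: 4-bit quantized base model with LoRA adapters attached to
# selected layer types (attention projections vs. MLP layers).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit base weights
    bnb_4bit_quant_type="nf4",              # NF4 quantization
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)

# The "key variables" studied above map onto the adapter rank r and which layer
# types receive adapters; varying r and target_modules reproduces that comparison.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # fraction of weights actually trained
```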
Keywords: Large-scale Language Model; Parameter-Efficient Fine-Tuning; parameter quantization; key variable; trainable parameters; experimental analysis