Journal Articles
7 articles found
1. Large Language Model-Driven Knowledge Discovery for Designing Advanced Micro/Nano Electrocatalyst Materials
Authors: Ying Shen, Shichao Zhao, Yanfei Lv, Fei Chen, Li Fu, Hassan Karimi-Maleh. Computers, Materials & Continua, 2025, Issue 8, pp. 1921-1950 (30 pages)
Abstract: This review presents a comprehensive and forward-looking analysis of how Large Language Models (LLMs) are transforming knowledge discovery in the rational design of advanced micro/nano electrocatalyst materials. Electrocatalysis is central to sustainable energy and environmental technologies, but traditional catalyst discovery is often hindered by high complexity, fragmented knowledge, and inefficiencies. LLMs, particularly those based on Transformer architectures, offer unprecedented capabilities in extracting, synthesizing, and generating scientific knowledge from vast unstructured textual corpora. This work provides the first structured synthesis of how LLMs have been leveraged across various electrocatalysis tasks, including automated information extraction from literature, text-based property prediction, hypothesis generation, synthesis planning, and knowledge graph construction. We comparatively analyze leading LLMs and domain-specific frameworks (e.g., CatBERTa, CataLM, CatGPT) in terms of methodology, application scope, performance metrics, and limitations. Through curated case studies across key electrocatalytic reactions (HER, OER, ORR, and CO₂RR), we highlight emerging trends such as the growing use of embedding-based prediction, retrieval-augmented generation, and fine-tuned scientific LLMs. The review also identifies persistent challenges, including data heterogeneity, hallucination risks, lack of standard benchmarks, and limited multimodal integration. Importantly, we articulate future research directions, such as the development of multimodal and physics-informed MatSci-LLMs, enhanced interpretability tools, and the integration of LLMs with self-driving laboratories for autonomous discovery. By consolidating fragmented advances and outlining a unified research roadmap, this review provides valuable guidance for both materials scientists and AI practitioners seeking to accelerate catalyst innovation through large language model technologies.
Keywords: large language models; electrocatalysis; nanomaterials; knowledge discovery; materials design; artificial intelligence; natural language processing
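The knowledge-graph construction this review surveys can be illustrated with a toy sketch: an LLM extraction pipeline emits (subject, relation, object) triples from the literature, which are then assembled into a queryable graph. The records below are hypothetical illustrations, not data from the paper.

```python
# Toy knowledge-graph construction from (subject, relation, object) triples,
# the kind of structured records an LLM literature-extraction pipeline might emit.
# All entries here are hypothetical illustrations.
records = [
    ("Pt/C", "catalyzes", "HER"),
    ("NiFe-LDH", "catalyzes", "OER"),
    ("Pt/C", "overpotential_mV", 30),
]

graph = {}
for subj, rel, obj in records:
    # Adjacency-list representation: subject -> list of (relation, object) edges
    graph.setdefault(subj, []).append((rel, obj))

# Querying the graph for everything extracted about one material:
print(graph["Pt/C"])  # [('catalyzes', 'HER'), ('overpotential_mV', 30)]
```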
2. Envisioning the blueprint: Aeronautics in large models era
Authors: Weiwei ZHANG, Shule ZHAO. Chinese Journal of Aeronautics, 2025, Issue 8, pp. 139-141 (3 pages)
Abstract: Following the groundbreaking introduction of the Transformer architecture in 2017, the development of Large Language Models (LLMs) formally commenced. In May 2020, GPT-3, with over one hundred billion parameters, entered the public eye, marking a significant milestone in LLM advancement.
Keywords: aeronautics; large language models; transformer architecture; LLMs; ChatGPT
3. An Analytical Review of Large Language Models Leveraging KDGI Fine-Tuning, Quantum Embeddings, and Multimodal Architectures
Authors: Uddagiri Sirisha, Chanumolu Kiran Kumar, Revathi Durgam, Poluru Eswaraiah, G Muni Nagamani. Computers, Materials & Continua, 2025, Issue 6, pp. 4031-4059 (29 pages)
Abstract: A complete examination of Large Language Models' strengths, problems, and applications is needed due to their rising use across disciplines. Current studies frequently focus on single-use situations and lack a comprehensive understanding of LLM architectural performance, strengths, and weaknesses. This gap precludes finding the appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies. In this research, 50 studies on 25+ LLMs, including GPT-3, GPT-4, Claude 3.5, DeepKet, and hybrid multimodal frameworks like ContextDET and GeoRSCLIP, are thoroughly reviewed. We propose an LLM application taxonomy by grouping techniques by task focus: healthcare, chemistry, sentiment analysis, agent-based simulations, and multimodal integration. Advanced methods like parameter-efficient tuning (LoRA), quantum-enhanced embeddings (DeepKet), retrieval-augmented generation (RAG), and safety-focused models (GalaxyGPT) are evaluated for dataset requirements, computational efficiency, and performance measures. Frameworks for ethical issues, data-limited hallucinations, and KDGI-enhanced fine-tuning, such as Woodpecker's post-remedy corrections, are highlighted. The investigation's scope, aims, and methods are described, but the primary results are not. The work reveals that domain-specialized fine-tuned LLMs employing RAG and quantum-enhanced embeddings perform better for context-heavy applications. In medical text normalization, ChatGPT-4 outperforms previous models, while multimodal frameworks such as GeoRSCLIP improve remote sensing performance. Parameter-efficient tuning technologies like LoRA have minimal computing cost and similar performance, demonstrating the necessity for adaptive models in multiple domains. The goal is to discover the optimum domain-specific models, explain domain-specific fine-tuning, and present quantum and multimodal LLMs to address scalability and cross-domain issues. The framework helps academics and practitioners identify, adapt, and innovate LLMs for different purposes. This work advances the field of efficient, interpretable, and ethical LLM application research.
Keywords: large language models; quantum embeddings; fine-tuning techniques; multimodal architectures; ethical AI scenarios
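The low-rank adaptation (LoRA) idea this abstract credits with minimal computing cost can be sketched in a few lines: freeze the pretrained weight W and train only a rank-r update B·A. This is an illustrative NumPy sketch of the mechanism, not code from any of the reviewed systems.

```python
import numpy as np

class LoRALinear:
    """Frozen weight W plus a trainable low-rank update (B @ A), rank r << min(d_out, d_in)."""
    def __init__(self, W, r=2, alpha=4, seed=0):
        self.W = W                                   # frozen pretrained weight (d_out, d_in)
        d_out, d_in = W.shape
        rng = np.random.default_rng(seed)
        self.A = rng.normal(0.0, 0.01, (r, d_in))    # trainable, small random init
        self.B = np.zeros((d_out, r))                # trainable, zero init
        self.scale = alpha / r

    def forward(self, x):
        # Only A and B would receive gradients during fine-tuning;
        # W @ x is the frozen base-model computation.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

W = np.eye(3)
layer = LoRALinear(W)
x = np.array([1.0, 2.0, 3.0])
# Because B starts at zero, the adapted layer initially equals the base layer:
print(np.allclose(layer.forward(x), W @ x))  # True
```

Training then touches only r·(d_in + d_out) adapter parameters instead of d_in·d_out, which is where the computational savings come from.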
4. A Keyword-Guided Training Approach to Large Language Models for Judicial Document Generation
Authors: Yi-Ting Peng, Chin-Laung Lei. Computer Modeling in Engineering & Sciences, 2025, Issue 12, pp. 3969-3992 (24 pages)
Abstract: The rapid advancement of Large Language Models (LLMs) has enabled their application in diverse professional domains, including law. However, research on automatic judicial document generation remains limited, particularly for courts in the Taiwan region of China. This study proposes a keyword-guided training framework that enhances LLMs' ability to generate structured and semantically coherent judicial decisions in Chinese. The proposed method first employs LLMs to extract representative legal keywords from actual court judgments. It then integrates these keywords into Supervised Fine-Tuning (SFT) and Reinforcement Learning with Human Feedback using Proximal Policy Optimization (RLHF-PPO). Experimental evaluations using models such as Chinese Alpaca 7B and TAIDE-LX-7B demonstrate that keyword-guided training significantly improves generation quality, achieving ROUGE-1, ROUGE-2, and ROUGE-L score gains of up to 17%, 16%, and 20%, respectively. The results confirm that the proposed framework effectively aligns generated judgments with human-written legal logic and structural conventions. This research advances domain-adaptive LLM fine-tuning strategies and establishes a technical foundation for AI-assisted judicial document generation in the Taiwan region of China legal context, providing empirical evidence that domain-adaptive fine-tuning can significantly improve performance in complex, structured legal text generation.
Keywords: legal AI; large language models; natural language processing; generative AI; legal document generation
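The ROUGE-N gains this abstract reports measure n-gram overlap between a generated judgment and a human-written reference. A simplified, recall-only sketch of ROUGE-N follows; real evaluations typically use a full ROUGE package that also reports precision and F-scores.

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int = 1) -> float:
    """Simplified ROUGE-N: clipped n-gram overlap divided by reference n-gram count (recall)."""
    def ngrams(text, n):
        toks = text.split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((cand & ref).values())   # Counter & clips each n-gram to its min count
    return overlap / max(sum(ref.values()), 1)

reference = "the court finds the defendant liable"
candidate = "the court finds defendant liable"
# 5 of the reference's 6 unigram occurrences are matched:
print(round(rouge_n(candidate, reference, n=1), 3))  # 0.833
```

ROUGE-2 and ROUGE-L differ only in the unit matched (bigrams, longest common subsequence), which is why the paper reports all three.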
5. A NAS-Based Risk Prediction Model and Interpretable System for Amyloidosis
Authors: Chen Wang, Tiezheng Guo, Qingwen Yang, Yanyi Liu, Jiawei Tang, Yingyou Wen. Computers, Materials & Continua, 2025, Issue 6, pp. 5561-5574 (14 pages)
Abstract: Primary light chain amyloidosis is a rare hematologic disease with multi-organ involvement. Nearly one-third of patients with amyloidosis experience five or more consultations before diagnosis, which may lead to a poor prognosis due to delayed diagnosis. Early risk prediction based on artificial intelligence is valuable for clinical diagnosis and treatment of amyloidosis. For this disease, we propose an Evolutionary Neural Architecture Searching (ENAS) based risk prediction model, which achieves high-precision early risk prediction using physical examination data as a reference factor. To further enhance the value of clinical application, we designed a natural language-based interpretable system around the NAS-assisted risk prediction model for amyloidosis, which utilizes a large language model and Retrieval-Augmented Generation (RAG) to achieve further interpretation of the predicted conclusions. We also propose a document-based global semantic slicing approach in RAG to achieve more accurate slicing and improve the professionalism of the generated interpretations. Tests and implementation show that the proposed risk prediction model can be effectively used for early screening of amyloidosis and that the interpretation method based on the large language model and RAG can effectively provide professional interpretation of predicted results, which provides an effective method and means for the clinical application of AI.
Keywords: medical AI; evolutionary neural architecture searching (ENAS); large language model (LLM); retrieval-augmented generation (RAG); amyloidosis
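The RAG pattern this abstract describes (retrieve relevant document slices, then ground the LLM's interpretation in them) can be sketched with a toy lexical retriever. The documents and query below are hypothetical; a production system would use the paper's semantic slicing and dense embeddings rather than word overlap.

```python
import re

def tokens(text):
    """Lowercased word set; a crude stand-in for the semantic matching a real retriever uses."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, documents, k=2):
    """Rank document slices by word overlap with the query and keep the top k."""
    return sorted(documents,
                  key=lambda d: len(tokens(query) & tokens(d)),
                  reverse=True)[:k]

def build_prompt(query, documents, k=2):
    """Core RAG step: prepend retrieved context so the model answers from it, not from memory."""
    context = "\n".join(retrieve(query, documents, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context above."

docs = [  # hypothetical document slices
    "Amyloidosis risk increases with abnormal free light chain ratios.",
    "ROUGE measures n-gram overlap between generated and reference text.",
    "Multi-organ involvement is common in primary light chain amyloidosis.",
]
top = retrieve("What raises amyloidosis risk?", docs)
print(top[0])  # the slice mentioning both 'amyloidosis' and 'risk' ranks first
```

Grounding the prompt this way is what lets the system interpret a model's prediction with cited domain knowledge instead of unsupported generation.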
6. Unlocking the Potential: A Comprehensive Systematic Review of ChatGPT in Natural Language Processing Tasks
Authors: Ebtesam Ahmad Alomari. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 10, pp. 43-85 (43 pages)
Abstract: As Natural Language Processing (NLP) continues to advance, driven by the emergence of sophisticated large language models such as ChatGPT, there has been a notable growth in research activity. This rapid uptake reflects increasing interest in the field and induces critical inquiries into ChatGPT's applicability in the NLP domain. This review paper systematically investigates the role of ChatGPT in diverse NLP tasks, including information extraction, Named Entity Recognition (NER), event extraction, relation extraction, Part of Speech (PoS) tagging, text classification, sentiment analysis, emotion recognition, and text annotation. The novelty of this work lies in its comprehensive analysis of the existing literature, addressing a critical gap in understanding ChatGPT's adaptability, limitations, and optimal application. In this paper, we employed a systematic stepwise approach following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework to direct our search process and seek relevant studies. Our review reveals ChatGPT's significant potential in enhancing various NLP tasks. Its adaptability in information extraction tasks, sentiment analysis, and text classification showcases its ability to comprehend diverse contexts and extract meaningful details. Additionally, ChatGPT's flexibility in annotation tasks reduces manual effort and accelerates the annotation process, making it a valuable asset in NLP development and research. Furthermore, GPT-4 and prompt engineering emerge as a complementary mechanism, empowering users to guide the model and enhance overall accuracy. Despite its promising potential, challenges persist. The performance of ChatGPT needs to be tested using more extensive datasets and diverse data structures. Subsequently, its limitations in handling domain-specific language and the need for fine-tuning in specific applications highlight the importance of further investigations to address these issues.
Keywords: generative AI; large language model (LLM); natural language processing (NLP); ChatGPT; GPT (generative pretraining transformer); GPT-4; sentiment analysis; NER; information extraction; annotation; text classification
7. Mindstorms in natural language-based societies of mind
Authors: Mingchen Zhuge, Haozhe Liu, Francesco Faccio, Dylan R. Ashley, Róbert Csordás, Anand Gopalakrishnan, Abdullah Hamdi, Hasan Abed Al Kader Hammoud, Vincent Herrmann, Kazuki Irie, Louis Kirsch, Bing Li, Guohao Li, Shuming Liu, Jinjie Mai, Piotr Piękos, Aditya A. Ramesh, Imanol Schlag, Weimin Shi, Aleksandar Stanić, Wenyi Wang, Yuhui Wang, Mengmeng Xu, Deng-Ping Fan, Bernard Ghanem, Jürgen Schmidhuber. Computational Visual Media, 2025, Issue 1, pp. 29-81 (53 pages)
Abstract: Inspired by Minsky's Society of Mind, Schmidhuber's Learning to Think, and other more recent works, this paper proposes and advocates for the concept of natural language-based societies of mind (NLSOMs). We imagine these societies as consisting of a collection of multimodal neural networks, including large language models, which engage in a "mindstorm" to solve problems using a shared natural language interface. Here, we work to identify and discuss key questions about the social structure, governance, and economic principles for NLSOMs, emphasizing their impact on the future of AI. Our demonstrations with NLSOMs, which feature up to 129 agents, show their effectiveness in various tasks, including visual question answering, image captioning, and prompt generation for text-to-image synthesis.
Keywords: mindstorm; society of mind (SOM); large language models (LLMs); multimodal learning; learning to think