In materials science and engineering design, high-fidelity and high-efficiency numerical simulation has become a driving force for innovation and practical implementation. To address longstanding bottlenecks in the development of conventional material constitutive models, such as lengthy modeling cycles and difficulties in numerical implementation, this study proposes an intelligent modeling and code generation approach powered by large language models. A structured knowledge base integrating constitutive theory, numerical algorithms, and UMAT (User Material) interface specifications is constructed, and a retrieval-augmented generation strategy is employed to establish an end-to-end workflow spanning experimental data parsing, constitutive model formulation, and automatic UMAT subroutine generation. Experimental results show that the method achieves high accuracy for both a classical Johnson–Cook model and a physics-informed neural network (PINN) model, with key parameter identification errors below 5%. Moreover, the automatically generated UMAT subroutines yield finite element simulation results in Abaqus that are highly consistent with theoretical predictions (coefficient of determination R² > 0.98) while maintaining good numerical stability. The framework currently focuses on the automatic construction of rate-dependent elastoplastic material models, and its core method also provides a clear path for extension to other constitutive categories such as hyperelasticity and viscoelasticity. This work provides an effective technical route for the rapid development and reliable numerical implementation of material constitutive models, significantly advancing the intelligence level of computational mechanics research and improving engineering application efficiency.
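The Johnson–Cook model named above is a standard rate-dependent elastoplastic constitutive law. As a minimal sketch of what the generated code must evaluate, the flow-stress equation can be written directly; the parameter values used below are illustrative placeholders, not the parameters identified in the study.

```python
import math

def johnson_cook_stress(eps_p, eps_dot, T, *, A, B, n, C, m,
                        eps_dot_ref=1.0, T_ref=293.0, T_melt=1793.0):
    """Johnson-Cook flow stress:
    sigma = (A + B*eps_p^n) * (1 + C*ln(eps_dot/eps_dot_ref)) * (1 - T*^m),
    with homologous temperature T* = (T - T_ref) / (T_melt - T_ref).
    All parameter values passed by the caller are material-specific."""
    rate_term = 1.0 + C * math.log(max(eps_dot / eps_dot_ref, 1e-12))
    T_star = (T - T_ref) / (T_melt - T_ref)
    thermal_term = 1.0 - max(T_star, 0.0) ** m
    return (A + B * eps_p ** n) * rate_term * thermal_term
```

At zero plastic strain, reference strain rate, and reference temperature, the expression reduces to the yield stress A, which is a convenient sanity check for any generated UMAT.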
Architecture frameworks have recently become an effective means of describing system of systems (SoS) architectures; a prominent example is the United States (US) Department of Defense Architecture Framework Version 2.0 (DoDAF 2.0). As a viewpoint in DoDAF 2.0, the operational viewpoint (OV) describes operational activities, nodes, and resource flows, and OV models are important for SoS architecture development. However, as SoS complexity increases, constructing OV models with traditional methods exposes shortcomings such as inefficient data collection and low modeling standards. Therefore, we propose an intelligent modeling method for five OV models: operational resource flow (OV-2), organizational relationships (OV-4), operational activity hierarchy (OV-5a), operational activities model (OV-5b), and operational activity sequences (OV-6c). The main idea of the method is to extract OV architecture data from text and generate interoperable OV models. First, we construct the OV meta model based on the DoDAF 2.0 meta model (DM2). Second, OV architecture named entities are recognized from text with a bidirectional long short-term memory and conditional random field (BiLSTM-CRF) model, and OV architecture relationships are collected with relationship extraction rules. Finally, we define the generation rules for OV models and develop an OV modeling tool. We use an unmanned surface vehicle (USV) swarm target defense SoS architecture as a case study to verify the feasibility and effectiveness of the intelligent modeling method.
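The relationship-extraction-rule step above can be illustrated with a single rule that maps a sentence to an OV-2 resource flow triple. The sentence pattern and node names are hypothetical; the paper's actual rule set is not specified in the abstract.

```python
import re

# Hypothetical OV-2 rule: "<source node> sends <resource> to <target node>"
FLOW_RULE = re.compile(r"^(?P<src>.+?) sends (?P<res>.+?) to (?P<dst>.+?)\.?$")

def extract_resource_flow(sentence):
    """Apply one extraction rule; return (source, resource, target) or None."""
    m = FLOW_RULE.match(sentence.strip())
    if not m:
        return None
    return (m.group("src"), m.group("res"), m.group("dst"))
```

In a full pipeline, the node and resource spans would first be confirmed by the BiLSTM-CRF entity recognizer before the rule fires.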
Data-driven approaches are extensively employed to model complex chemical engineering processes, such as hydrotreating, because mechanism-based methods demand deep process understanding. However, developing such models requires specialized expertise in data science, limiting their broader application. Large language models (LLMs), such as GPT-4, have demonstrated potential in supporting and guiding research efforts. This work presents a novel AI-assisted framework in which GPT-4, through well-engineered prompts, facilitates the construction and explanation of multi-objective neural networks. These networks predict the properties (such as distillation range) of hydrotreating products, including refined diesel and refined gas oil, from feedstock properties, operating conditions, and recycle hydrogen composition. Gradient-weighted class activation mapping was employed to identify the key features influencing the output variables. This work illustrates an innovative AI-guided paradigm for chemical engineering applications, and the designed prompts hold promise for adaptation to other complex processes.
Dear Editor, This letter deals with automatically constructing an OPC UA information model (IM) aimed at enhancing data interoperability among heterogeneous system components within manufacturing automation systems. Empowered by a large language model (LLM), we propose a novel multi-agent collaborative framework to streamline the end-to-end OPC UA IM modeling process. Each agent is equipped with meticulously engineered prompt templates, augmenting its capacity to execute specific tasks. We conduct modeling experiments on real textual data to demonstrate the effectiveness of the proposed method, which improves modeling efficiency and reduces the labor workload.
Machine translation of low-resource languages (LRLs) has long been hindered by limited corpora and linguistic complexity. This review summarizes key developments, from traditional methods to recent progress with large language models (LLMs), while highlighting ongoing challenges such as data bottlenecks, biases, fairness, and computational costs. Finally, it discusses future directions, including efficient parameter fine-tuning, multimodal translation, and community-driven corpus construction, providing insights for advancing LRL translation research.
Short Message Service (SMS) is a widely used and cost-effective communication medium that has unfortunately become a frequent target for unsolicited messages, commonly known as SMS spam. With the rapid adoption of smartphones and increased Internet connectivity, SMS spam has emerged as a prevalent threat. Spammers have recognized the critical role SMS plays in modern communication, making it a prime target for abuse. As cybersecurity threats continue to evolve, the volume of SMS spam has increased substantially in recent years. Moreover, the unstructured format of SMS data creates significant challenges for spam detection, making spam attacks more difficult to combat. In this paper, we present an optimized, fine-tuned transformer-based language model to address the problem of SMS spam detection, and we analyze it on a benchmark SMS spam dataset. We apply pre-processing techniques to obtain clean, noise-free data and address the class imbalance problem with text augmentation techniques. Our optimized, fine-tuned BERT (Bidirectional Encoder Representations from Transformers) variant, RoBERTa, achieved a high accuracy of 99.84%. To further enhance model transparency, we incorporate Explainable Artificial Intelligence (XAI) techniques that compute positive and negative coefficient scores, offering insight into the model's decision-making process. We also evaluate traditional machine learning models as a baseline for comparison. This comprehensive analysis demonstrates the significant impact language models can have on complex text-based challenges within the cybersecurity landscape.
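The class-imbalance step above relies on text augmentation. The abstract does not specify the operators used, so the following is one simple, commonly used option, a random word-swap augmenter paired with minority-class oversampling; names and defaults are illustrative.

```python
import random

def swap_augment(text, n_swaps=1, seed=0):
    """Create an augmented copy of a message by swapping random word pairs."""
    rng = random.Random(seed)
    words = text.split()
    if len(words) < 2:
        return text
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(words)), 2)  # two distinct positions
        words[i], words[j] = words[j], words[i]
    return " ".join(words)

def oversample_minority(minority, target_size, seed=0):
    """Pad the minority class with augmented variants up to target_size."""
    out = list(minority)
    k = 0
    while len(out) < target_size:
        out.append(swap_augment(minority[k % len(minority)], seed=seed + k))
        k += 1
    return out
```

Swapping preserves the message's vocabulary while perturbing word order, which yields cheap extra minority-class examples for fine-tuning.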
Commercial phosphor-converted white LEDs (pc-WLEDs) face two inherent limitations, namely blue-light hazard and a low color rendering index, due to the use of blue LEDs as the excitation source. To address these challenges, violet LEDs have been proposed as an alternative. However, phosphors that can be efficiently excited by violet light (wavelengths from 400 to 420 nm) are still under development. In this study, we use large language models to construct a comprehensive database of Eu²⁺- and Ce³⁺-doped phosphors for discovering novel violet-excited phosphors. A total of 822 phosphor data entries, including elemental compositions, crystal structures, and excitation/emission wavelengths, have been extracted and validated from 9551 research papers. Compared with Ce³⁺-doped phosphors, Eu²⁺-doped phosphors are in general better suited as violet-excited phosphors, as well as red-emitting phosphors. In particular, Eu²⁺-doped nitrides and sulfides are worth exploring as violet-excited phosphors. This database is expected to be useful for the future development of phosphors for pc-WLEDs based on artificial-intelligence methods. The datasets in this article are listed in Science Data Bank at http://doi.org/10.57760/sciencedb.34314.
Model evaluation on benchmark datasets is an important way to measure the capability of large language models (LLMs) in specific domains, primarily their knowledge and reasoning abilities. To better assess LLM capability in the agricultural domain, Agri-Eval is proposed as a benchmark for evaluating the knowledge and reasoning ability of LLMs in agriculture. The Agri-Eval assessment dataset covers seven major disciplines in the agricultural domain: crop science, horticulture, plant protection, animal husbandry, forest science, aquaculture science, and grass science, and contains a total of 2283 questions. Among domestic general-purpose LLMs, DeepSeek R1 performed best, with an accuracy of 75.49%. Among international general-purpose LLMs, Gemini 2.0 pro exp 0205 stood out as the top performer, achieving an accuracy of 74.28%. As a vertical-domain agricultural LLM, Shennong V2.0 outperformed all LLMs in China, and its accuracy on agricultural knowledge exceeded that of all existing general-purpose LLMs. The launch of Agri-Eval helps LLM developers comprehensively evaluate model capability in agriculture through a variety of tasks and tests, promoting the development of LLMs in the field.
This study evaluated the accuracy, completeness, and comprehensibility of responses from mainstream large language models (LLMs) to hepatitis C virus (HCV)-related questions, aiming to assess their performance in addressing patient queries about the disease and lifestyle behaviors. The models selected were ChatGPT-4o, Gemini 2.0 Pro, Claude 3.5 Sonnet, and DeepSeek V3, with 12 questions chosen by two HCV experts from the domains of prevention, diagnosis, and treatment.
Recommendation systems are key to boosting user engagement, satisfaction, and retention, particularly on media platforms where personalized content is vital. Sequential recommendation systems learn from user-item interactions to predict future items of interest. However, many current methods rely on unique user and item IDs, limiting their ability to represent users and items effectively, especially in zero-shot learning scenarios where training data is scarce. With the rapid development of Large Language Models (LLMs), researchers are exploring their potential to enhance recommendation systems. However, there is a semantic gap between the linguistic semantics of LLMs and the collaborative semantics of recommendation systems, where items are typically indexed by IDs. Moreover, most research focuses on item representations, neglecting personalized user modeling. To address these issues, we propose a sequential recommendation framework using LLMs, called CIT-Rec, which integrates Collaborative semantics for user representation and Image and Text information for item representation to enhance Recommendations. Specifically, by aligning intuitive image information with text containing semantic features, we represent items more accurately, improving item representation quality. We focus not only on item representations but also on user representations. To capture users' personalized preferences more precisely, we train traditional sequential recommendation models on users' historical interaction data, effectively capturing behavioral patterns. Finally, by combining LLMs with traditional sequential recommendation models, we allow the LLM to understand linguistic semantics while capturing collaborative semantics. Extensive evaluations on real-world datasets show that our model outperforms baseline methods, effectively combining user interaction history with item visual and textual modalities to provide personalized recommendations.
The emergence of large language models (LLMs) has brought about revolutionary social value. However, concerns have arisen regarding the generation of deceptive content by LLMs and their potential for misuse. Consequently, a crucial research question arises: how can we differentiate between AI-generated and human-authored text? Existing detectors face challenges such as operating as black boxes, relying on supervised training, and being vulnerable to manipulation and misinformation. To tackle these challenges, we propose an innovative unsupervised white-box detection method that utilizes a “dual-driven verification mechanism” to achieve high-performance detection even in the presence of obfuscation attacks on the text content. More specifically, we initially employ the SpaceInfi strategy to increase the difficulty of detecting the text content. Subsequently, we randomly select vulnerable spots in the text and perturb them using another pre-trained language model (e.g., T5). Finally, we apply a dual-driven defense mechanism (D3M) that validates perturbed text content, whether generated by a model or authored by a human, along the dimensions of Information Transmission Quality and Information Transmission Density. Through experimental validation, our proposed method demonstrates state-of-the-art (SOTA) performance under equivalent levels of perturbation intensity across multiple benchmarks, showcasing the effectiveness of our strategies.
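The perturb-and-validate step described above is in the family of perturbation-based detectors. A simplified decision rule from that family is sketched below, not the paper's D3M itself: model-generated text tends to sit near a local likelihood maximum of the scoring model, so perturbations lower its log-likelihood more than they lower a human text's. The log-probabilities would come from a scoring LM (e.g., T5); the threshold here is purely illustrative.

```python
def perturbation_gap(logp_original, logp_perturbed):
    """Drop in log-likelihood from the original text to the mean of its
    perturbed variants. Larger gaps suggest model-generated text."""
    mean_perturbed = sum(logp_perturbed) / len(logp_perturbed)
    return logp_original - mean_perturbed

def looks_model_generated(logp_original, logp_perturbed, threshold=0.5):
    """Flag a text whose likelihood falls sharply under perturbation."""
    return perturbation_gap(logp_original, logp_perturbed) > threshold
```
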
Classifying job offers into occupational categories is a fundamental task in human resource information systems, as it improves and streamlines indexing, search, and matching between openings and job seekers. Comprehensive occupational databases such as O*NET or ESCO provide detailed taxonomies of interrelated positions that can be leveraged to align the textual content of postings with occupational categories, thereby facilitating standardization, cross-system interoperability, and access to metadata for each occupation (e.g., tasks, knowledge, skills, and abilities). In this work, we explore the effectiveness of fine-tuning existing language models (LMs) to classify job offers with occupational descriptors from O*NET. This enables a more precise assessment of candidate suitability by identifying the specific knowledge and skills required for each position, and helps automate recruitment processes by mitigating human bias and subjectivity in candidate selection. We evaluate three representative BERT-like models: BERT, RoBERTa, and DeBERTa. BERT serves as the baseline encoder-only architecture; RoBERTa incorporates advances in pretraining objectives and data scale; and DeBERTa introduces architectural improvements through disentangled attention mechanisms. The best performance was achieved with the DeBERTa model, although the other models also produced strong results, and no statistically significant differences were observed across models. We also find that these models typically reach optimal performance after only a few training epochs, and that training with smaller, balanced datasets is effective. Consequently, comparable results can be obtained with models that require fewer computational resources and less training time, facilitating deployment and practical use.
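The finding that smaller, balanced training sets suffice can be made concrete with a per-class downsampling helper. This is a generic sketch of that preprocessing idea, not the authors' exact procedure, and the label names are hypothetical.

```python
import random

def balance_downsample(examples_by_label, seed=0):
    """Downsample every class to the size of the smallest class,
    producing a balanced training set."""
    rng = random.Random(seed)
    n_min = min(len(v) for v in examples_by_label.values())
    return {label: rng.sample(examples, n_min)
            for label, examples in examples_by_label.items()}
```
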
Liver transplantation (LT) remains the optimal life-saving intervention for patients with end-stage liver disease. Despite recent advances in LT, several barriers persist, including organ allocation, donor-recipient matching, and patient education. With the growing progress of artificial intelligence, particularly large language models (LLMs) like ChatGPT, new applications have emerged in the field of LT. Current studies demonstrate ChatGPT usage in LT across various areas, from clinical settings to research and education. ChatGPT can benefit both healthcare professionals, by decreasing the time spent on non-clinical work, and LT recipients, by providing accurate information. Future potential applications include expanded use of ChatGPT and other LLMs in LT pathology and radiology, as well as automated creation of discharge summaries and related paperwork. Additionally, future versions of ChatGPT may be able to provide more accurate patient education material with improved readability. Although ChatGPT presents promising applications, there are ethical and practical limitations. Key concerns include patient data privacy, information accuracy, the possibility of misinformation, and the lack of a legal framework. Healthcare providers and policymakers should collaborate to establish a controlled framework for the safe use of ChatGPT. The aim of this minireview is to summarize the current literature on ChatGPT in LT, highlighting both opportunities and limitations, and to outline possible future applications.
It is known that correlation does not imply causality. Some relationships identified in data analysis are coincidental or of unknown origin, while others reflect real-world causality, and it is necessary to differentiate between these two scenarios. Until recently, the proper (semantic) causality of a relationship could be determined only by human experts in the domain of the studied data. This has changed with the advance of large language models, which are often utilized as surrogates for such human experts, making the process automated and readily available to all data analysts. This motivates the main objective of this work: to introduce the design and implementation of an LLM-based semantic causality evaluator built on correlation analysis, together with its visual analysis model, the Causal heatmap. After the implementation itself, the model is evaluated in terms of the quality of the visual model, the quality of LLM-based causal evaluation, and a comparative analysis. The results of the study highlight the usability of large language models for this task and the potential of the proposed approach for the analysis of unknown datasets. The experimental evaluation demonstrates the usefulness of the Causal heatmap method, which clearly highlights interesting relationships while suppressing irrelevant ones.
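The correlation-analysis front end of such an evaluator can be sketched as follows: compute pairwise Pearson correlations and keep only the strongly correlated column pairs, which are the candidates that would then be forwarded to the LLM for a semantic-causality judgment. The threshold and column names below are illustrative, not the paper's settings.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def candidate_pairs(columns, threshold=0.7):
    """Column pairs with |r| >= threshold -- the relationships worth
    asking the LLM about."""
    names = list(columns)
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            r = pearson(columns[names[i]], columns[names[j]])
            if abs(r) >= threshold:
                pairs.append((names[i], names[j], r))
    return pairs
```
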
Background: To assess the effectiveness of ChatGPT and Bard in the initial identification of articles for Otolaryngology-Head and Neck Surgery systematic literature reviews. Methods: Three PRISMA-based systematic reviews (Jabbour et al. 2017, Wong et al. 2018, and Wu et al. 2021) were replicated using ChatGPT v3.5 and Bard. Outputs (author, title, publication year, and journal) were compared to the original references and cross-referenced with medical databases for authenticity and recall. Results: Several themes emerged when comparing Bard and ChatGPT across the three reviews. Bard generated more outputs and had greater recall in Wong et al.'s review, with a broader date range in Jabbour et al.'s review. In Wu et al.'s review, ChatGPT-2 had higher recall and identified more authentic outputs than Bard-2. Conclusion: Large language models (LLMs) failed to fully replicate peer-reviewed methodologies, producing outputs with inaccuracies, but identified relevant, especially recent, articles missed by the references. While human-led PRISMA-based reviews remain the gold standard, refining LLMs for literature reviews shows potential.
This study demonstrates a novel integration of large language models, machine learning, and multicriteria decision-making to investigate self-moderation in small online communities, a topic under-explored compared to user behavior and platform-driven moderation on social media. The proposed methodological framework (1) utilizes large language models for social media post analysis and categorization, (2) employs k-means clustering for content characterization, and (3) incorporates the TODIM (Tomada de Decisão Interativa Multicritério) method to determine moderation strategies based on expert judgments. The fully integrated framework leverages the strengths of these intelligent systems for a more systematic evaluation of large-scale decision problems. Applied to social media moderation, this approach promotes nuanced and context-sensitive self-moderation by taking into account factors such as cultural background and geographic location. The application of the framework is demonstrated within Facebook groups. Eight distinct content clusters, encompassing safety, harassment, diversity, and misinformation, are identified. The analysis revealed a preference for content removal across all clusters, suggesting a cautious approach towards potentially harmful content. However, the framework also highlights other moderation actions, such as account suspension, depending on the content category. These findings contribute to the growing body of research on self-moderation and offer valuable insights for creating safer and more inclusive online spaces within smaller communities.
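The k-means content-characterization step in the framework can be illustrated on toy 2-D embeddings. This is a plain Lloyd's-algorithm sketch with deterministic initialization for clarity; real post embeddings would be high-dimensional vectors produced by the LLM stage.

```python
def kmeans(points, k, n_iter=10):
    """Plain k-means on 2-D points; centroids start at the first k points
    so the run is deterministic."""
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(n_iter):
        # Assignment step: attach each point to its nearest centroid.
        for idx, (x, y) in enumerate(points):
            labels[idx] = min(
                range(k),
                key=lambda c: (x - centroids[c][0]) ** 2
                              + (y - centroids[c][1]) ** 2,
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(vals) / len(members)
                                for vals in zip(*members)]
    return labels, centroids
```
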
Although Named Entity Recognition (NER) in cybersecurity has historically concentrated on threat intelligence, vital security data can be found in a variety of sources, such as open-source intelligence and unprocessed tool outputs. When dealing with technical language, the coexistence of structured and unstructured data poses serious issues for traditional BERT-based techniques. We introduce a three-phase approach for improved NER in multi-source cybersecurity data that makes use of large language models (LLMs). To ensure thorough entity coverage, our method starts with an identification module that uses dynamic prompting techniques. To reduce hallucinations, the extraction module uses confidence-based self-assessment and cross-checking with regex validation. The tagging module links to knowledge bases for contextual validation and uses SecureBERT in conjunction with conditional random fields to detect entity boundaries precisely. Our framework creates efficient natural-language segments by utilizing decoder-based LLMs with 10B parameters. Compared to baseline SecureBERT implementations, evaluation across four cybersecurity data sources shows notable gains, with 9.4%-25.21% greater recall and a 6.38%-17.3% better F1-score. Our refined model matches larger models and achieves a 2.6%-4.9% better F1-score for technical phrase recognition than the state-of-the-art alternatives Claude 3.5 Sonnet, Llama3-8B, and Mixtral-7B. The three-stage identification-extraction-tagging pipeline tackles important cybersecurity NER issues. Through efficient architectures, these developments preserve deployability while setting a new standard for entity extraction in challenging security scenarios. The findings show how targeted enhancements in hybrid recognition, validation procedures, and prompt engineering raise NER performance above monolithic LLM approaches in cybersecurity applications, especially for technical entity extraction from heterogeneous sources where conventional techniques fall short. Because of its modular nature, the framework can be upgraded at the component level as new methods are developed.
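The regex cross-check in the extraction module can be sketched with a few validators for common cybersecurity entity types. The patterns below are illustrative; a deployed system would cover many more types and edge cases than shown here.

```python
import re

# Illustrative validators for a few well-formed entity types.
VALIDATORS = {
    "CVE": re.compile(r"^CVE-\d{4}-\d{4,}$"),
    "IPV4": re.compile(r"^(\d{1,3}\.){3}\d{1,3}$"),
    "MD5": re.compile(r"^[0-9a-fA-F]{32}$"),
}

def validate_entity(text, entity_type):
    """Cross-check an LLM-extracted entity against a regex before accepting
    it, filtering out hallucinated or malformed values."""
    pattern = VALIDATORS.get(entity_type)
    return bool(pattern and pattern.match(text))
```

Rejecting entities that fail their type's pattern is a cheap, deterministic guard against the hallucination failure mode the abstract describes.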
Knowledge distillation has become a standard technique for compressing large language models into efficient student models, but existing methods often struggle to balance prediction accuracy with explanation quality. Recent approaches such as Distilling Step-by-Step (DSbS) introduce explanation supervision, yet they apply it uniformly, which may not fully exploit the different learning dynamics of prediction and explanation. In this work, we propose a task-structured curriculum learning (TSCL) framework that structures training into three sequential phases: (i) prediction-only, to establish stable feature representations; (ii) joint prediction-explanation, to align task outputs with rationale generation; and (iii) explanation-only, to refine the quality of rationales. This design provides a simple but effective modification to DSbS, requiring no architectural changes and adding negligible training cost. We justify the phase scheduling with ablation studies and convergence analysis, showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment. Extensive experiments on five datasets (e-SNLI, ANLI, CommonsenseQA, SVAMP, and MedNLI) demonstrate that TSCL consistently outperforms strong baselines, achieving gains of +1.7 to +2.6 points in accuracy and 0.8 to 1.2 in ROUGE-L, corresponding to relative error reductions of up to 21%. Beyond lexical metrics, human evaluation and ERASER-style faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations. Comparative training curves further reveal faster convergence and lower variance across seeds. Efficiency analysis shows less than 3% overhead in wall-clock training time and no additional inference cost, making the approach practical for real-world deployment. This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation. By separating and sequencing objectives, TSCL achieves a better balance between accuracy, stability, and explanation quality. The framework generalizes across domains, including medical NLI, and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.
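The three-phase schedule above amounts to switching the loss weights on the prediction and explanation objectives as training progresses. A minimal sketch of such a scheduler follows; the phase fractions and the joint-phase 50/50 split are illustrative assumptions, not the paper's tuned values.

```python
def tscl_weights(epoch, total_epochs, phase_fracs=(0.3, 0.5, 0.2)):
    """Return (w_prediction, w_explanation) for the three TSCL phases:
    prediction-only, joint prediction-explanation, explanation-only.
    Phase boundaries and the joint split are illustrative."""
    p1 = round(phase_fracs[0] * total_epochs)          # end of phase (i)
    p2 = p1 + round(phase_fracs[1] * total_epochs)     # end of phase (ii)
    if epoch < p1:
        return 1.0, 0.0
    if epoch < p2:
        return 0.5, 0.5
    return 0.0, 1.0
```

At each step the training loss would then be `w_pred * prediction_loss + w_expl * explanation_loss`, so the curriculum needs no architectural change, matching the abstract's claim of negligible added cost.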
BACKGROUND: To provide a comprehensive analysis of the landscape of artificial intelligence (AI) applications in cardiac arrest (CA). METHODS: Comprehensive searches were conducted in PubMed, the Cochrane Library, Web of Science, and EMBASE from database inception through 10 June 2025. Studies that applied AI in both in-hospital cardiac arrest (IHCA) and out-of-hospital cardiac arrest (OHCA) populations across the following domains were included: prediction of cardiac arrest occurrence, prognostication of CA outcomes, applications of large language models (LLMs), and evaluation of cardiopulmonary resuscitation (CPR) and other AI-driven interventions related to CA. RESULTS: The scoping review included 114 studies, encompassing data from 9,574,462 patients in total. AI was most commonly applied to the prediction of CA (overall, n=40; IHCA, n=30; OHCA, n=4; both, n=6), CPR-related decision support during CA (n=16), and post-arrest prognosis and rehabilitation outcomes (overall, n=38; OHCA, n=21; IHCA, n=3; both, n=14). Additional application areas included LLM-based applications (n=8), emergency call handling (n=4), wearable-device-based detection (n=3), heart rhythm identification (n=2), education (n=2), and extracorporeal cardiopulmonary resuscitation (ECPR) candidate identification (n=1). Across all application scenarios, the highest area under the receiver operating characteristic curve (AUROC) for pre-arrest CA prediction in IHCA patients was 0.998, using a multilayer perceptron (MLP) model, whereas the optimal AUROC for pre-arrest CA prediction in OHCA patients was 0.950, using extreme gradient boosting (XGBoost) or random forest (RF) models. For CPR-related decision support during CA, the highest AUROC achieved was 0.990, with a convolutional neural network (CNN) model. In prognostic prediction, the optimal AUROC for IHCA patients was 0.960 using XGBoost, while for OHCA patients it reached 0.976 using an MLP model. CONCLUSION: This review shows that AI is most commonly used for the prediction of CA, CPR-related support, and post-arrest and rehabilitation outcomes. Future research directions include drug discovery, post-resuscitation management, neurorehabilitation, and clinical trial innovation. Further studies should prioritize multicenter clinical trials to evaluate AI models in real-world settings and validate their effectiveness across diverse patient populations. Overall, AI has significant potential to improve clinical practice, and its role in CA applications is increasingly important.
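The AUROC values reported throughout this review have a simple probabilistic reading: the chance that a randomly chosen positive case receives a higher model score than a randomly chosen negative case, with ties counting one half. A small sketch of that definition, included only to make the metric concrete, is:

```python
def auroc(scores, labels):
    """AUROC as P(random positive outscores random negative), ties = 1/2.
    Labels are 1 for positive (e.g., cardiac arrest) and 0 for negative."""
    pos = [s for s, lab in zip(scores, labels) if lab == 1]
    neg = [s for s, lab in zip(scores, labels) if lab == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

Under this reading, an AUROC of 0.998 means the model almost always ranks a true pre-arrest patient above a non-arrest patient.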
Background: Despite the promise shown by large language models (LLMs) for standardized tasks, their multidimensional performance in real-world oncology decision-making remains unevaluated. This study aims to introduce a framework for evaluating LLM and physician decisions in challenging lung cancer cases. Methods: We curated 50 challenging lung cancer cases (25 local and 25 published) classified as complex, rare, or refractory. Blinded three-dimensional, five-point Likert evaluations (1–5 for comprehensiveness, specificity, and readability) compared standalone LLMs (DeepSeek R1, Claude 3.5, Gemini 1.5, and GPT-4o), physicians by experience level (junior, intermediate, and senior), and AI-assisted juniors; intergroup differences and augmentation effects were analyzed statistically. Results: Of 50 challenging cases (18 complex, 17 rare, and 15 refractory) rated by three experts, DeepSeek R1 achieved scores of 3.95±0.33, 3.71±0.53, and 4.26±0.18 for comprehensiveness, specificity, and readability, respectively, positioning it between intermediate (3.68, 3.68, 3.75) and senior (4.50, 4.64, 4.53) physicians. GPT-4o and Claude 3.5 reached intermediate physician–level comprehensiveness (3.76±0.39, 3.60±0.39) but junior-to-intermediate physician–level specificity (3.39±0.39, 3.39±0.49). All LLMs scored higher on rare cases than intermediate physicians but fell below junior physicians in refractory-case specificity. AI-assisted junior physicians showed marked gains in rare cases, with comprehensiveness rising from 2.32 to 4.29 (84.8%), specificity from 2.24 to 4.26 (90.8%), and readability from 2.76 to 4.59 (66.0%), while specificity declined by 3.2% (3.17 to 3.07) in refractory cases. Error analysis showed complementary strengths, with physicians demonstrating reasoning stability and LLMs excelling in knowledge updating and risk management. Conclusions: LLMs performed variably in clinical decision-making tasks depending on case type, performing better in rare cases and worse in refractory cases requiring longitudinal reasoning. Complementary strengths between LLMs and physicians support case- and task-tailored human–AI collaboration.
Funding: funded by the National Natural Science Foundation of China, grant number 52405341; the Foundation of National Key Laboratory of Computational Physics, grant number 6142A05QN24012; the Chongqing Science and Technology Committee, grant number CSTB2023NSCQ-MSX0363; and the Science and Technology Research Program of Chongqing Municipal Education Commission, grant number KJQN202301117.
Abstract: In materials science and engineering design, high-fidelity and high-efficiency numerical simulation has become a driving force for innovation and practical implementation. To address longstanding bottlenecks in the development of conventional material constitutive models, such as lengthy modeling cycles and difficulties in numerical implementation, this study proposes an intelligent modeling and code generation approach powered by large language models. A structured knowledge base integrating constitutive theory, numerical algorithms, and UMAT (User Material) interface specifications is constructed, and a retrieval-augmented generation strategy is employed to establish an end-to-end workflow spanning experimental data parsing, constitutive model formulation, and automatic UMAT subroutine generation. Experimental results show that the method achieves high accuracy for both a classical Johnson–Cook model and a physics-informed neural network (PINN) model, with key parameter identification errors below 5%. Moreover, the automatically generated UMAT subroutines yield finite element simulation results in Abaqus that are highly consistent with theoretical predictions (coefficient of determination R² > 0.98) while maintaining good numerical stability. The framework is currently focused on the automatic construction of rate-dependent elastoplastic material models, and its core method also provides a clear path for extension to other constitutive categories such as hyperelasticity and viscoelasticity. This work provides an effective technical route for the rapid development and reliable numerical implementation of material constitutive models, significantly advancing the intelligence level of computational mechanics research and improving engineering application efficiency.
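The Johnson–Cook model referenced above has a standard closed form, sigma = (A + B*eps_p^n) * (1 + C*ln(eps_dot/eps_dot_ref)) * (1 - T_star^m), which is what a generated UMAT would evaluate at each integration point. A small Python sketch of the flow-stress evaluation follows; the parameter values in the usage note are the commonly cited 4340-steel constants, used purely for illustration and not taken from the paper:

```python
import math

def johnson_cook_stress(eps_p, eps_dot, T,
                        A, B, n, C, m,
                        eps_dot_ref=1.0, T_room=293.0, T_melt=1793.0):
    """Johnson-Cook flow stress:

    sigma = (A + B*eps_p**n) * (1 + C*ln(eps_dot/eps_dot_ref)) * (1 - T_star**m)

    with homologous temperature T_star = (T - T_room) / (T_melt - T_room).
    """
    T_star = (T - T_room) / (T_melt - T_room)
    strain_term = A + B * eps_p ** n          # strain hardening
    rate_term = 1.0 + C * math.log(eps_dot / eps_dot_ref)  # rate sensitivity
    temp_term = 1.0 - T_star ** m             # thermal softening
    return strain_term * rate_term * temp_term
```

At zero plastic strain, reference strain rate, and room temperature, the expression reduces to the yield stress A, which is a convenient sanity check for any generated implementation.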
Funding: National Natural Science Foundation of China (71690233, 71971213, 71901214).
Abstract: The architecture framework has recently become an effective method to describe the system of systems (SoS) architecture, such as the United States (US) Department of Defense Architecture Framework Version 2.0 (DoDAF 2.0). As a viewpoint in DoDAF 2.0, the operational viewpoint (OV) describes operational activities, nodes, and resource flows. The OV models are important for SoS architecture development. However, as SoS complexity increases, constructing OV models with traditional methods exposes shortcomings, such as inefficient data collection and low modeling standards. Therefore, we propose an intelligent modeling method for five OV models: operational resource flow (OV-2), organizational relationships (OV-4), operational activity hierarchy (OV-5a), operational activities model (OV-5b), and operational activity sequences (OV-6c). The main idea of the method is to extract OV architecture data from text and generate interoperable OV models. First, we construct the OV meta model based on the DoDAF 2.0 meta model (DM2). Second, OV architecture named entities are recognized from text based on the bidirectional long short-term memory and conditional random field (BiLSTM-CRF) model, and OV architecture relationships are collected with relationship extraction rules. Finally, we define the generation rules for OV models and develop an OV modeling tool. We use an unmanned surface vehicle (USV) swarm target defense SoS architecture as a case study to verify the feasibility and effectiveness of the intelligent modeling method.
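Rule-based relationship extraction of the kind described above (pattern rules mapping sentences to architecture-data triples) can be sketched as follows. The two verb patterns and the relation names are hypothetical illustrations, not the paper's actual rule set:

```python
import re

# Illustrative extraction rules: each pattern maps a simple verb phrase in
# requirement text to a (source, relation, target) triple for OV modeling.
RULES = [
    (re.compile(r"(?P<src>[\w ]+?) sends (?P<obj>[\w ]+?) to (?P<dst>[\w ]+)"),
     "resourceFlow"),
    (re.compile(r"(?P<src>[\w ]+?) commands (?P<dst>[\w ]+)"),
     "commands"),
]

def extract_relationships(sentence):
    """Apply every rule to the sentence and collect matched triples."""
    triples = []
    for pattern, relation in RULES:
        for m in pattern.finditer(sentence):
            g = m.groupdict()
            triples.append((g["src"].strip(), relation, g["dst"].strip()))
    return triples
```

In the paper's pipeline the entity spans would come from the BiLSTM-CRF tagger first; here the regexes stand in for both stages to keep the sketch self-contained.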
Funding: supported by the National Key Research and Development Program of China (2023YFA1507601), the National Natural Science Foundation of China (22278127, 22378038), the Fundamental Research Funds for the Central Universities (2022ZFJH004), the Shanghai Pilot Program for Basic Research (22T01400100-18), and the Natural Science Foundation of Liaoning Province, China (2024-MSBA-15).
Abstract: Data-driven approaches are extensively employed to model complex chemical engineering processes, such as hydrotreating, to address the challenges of mechanism-based methods, which demand deep process understanding. However, the development of such models requires specialized expertise in data science, limiting their broader application. Large language models (LLMs), such as GPT-4, have demonstrated potential in supporting and guiding research efforts. This work presents a novel AI-assisted framework in which GPT-4, through well-engineered prompts, facilitates the construction and explanation of multi-objective neural networks. These models predict the properties of hydrotreating products (such as distillation range), including refined diesel and refined gas oil, using feedstock properties, operating conditions, and recycle hydrogen composition. Gradient-weighted class activation mapping was employed to identify key features influencing the output variables. This work illustrates an innovative AI-guided paradigm for chemical engineering applications, and the designed prompts hold promise for adaptation to other complex processes.
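The well-engineered prompts are the core artifact of the framework above, but the abstract does not reproduce them; the template below is therefore a purely hypothetical sketch of the kind of structured prompt such a workflow might feed to GPT-4 (the field names and wording are assumptions):

```python
# Hypothetical prompt template; the paper's actual prompts are not public
# in the abstract, so this only illustrates the pattern of structured input.
PROMPT_TEMPLATE = """You are assisting with a multi-objective neural network for
a hydrotreating unit.
Inputs: {inputs}
Targets: {targets}
Propose a network architecture (layers, activations, loss weighting) and
explain each choice in one sentence."""

def build_prompt(inputs, targets):
    """Fill the template with comma-separated feature and target names."""
    return PROMPT_TEMPLATE.format(inputs=", ".join(inputs),
                                  targets=", ".join(targets))
```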
Funding: supported by the Fundamental Research Funds for the Central Universities (226-2024-00004), the National Natural Science Foundation of China (U23A20326), and the Key Research and Development Program of Zhejiang Province (2025C01061).
Abstract: Dear Editor, this letter deals with automatically constructing an OPC UA information model (IM) aimed at enhancing data interoperability among heterogeneous system components within manufacturing automation systems. Empowered by a large language model (LLM), we propose a novel multi-agent collaborative framework to streamline the end-to-end OPC UA IM modeling process. Each agent is equipped with meticulously engineered prompt templates, augmenting its capacity to execute specific tasks. We conduct modeling experiments using real textual data to demonstrate the effectiveness of the proposed method, improving modeling efficiency and reducing the labor workload.
Funding: supported by the China Undergraduate Innovation Training Program [Grant No. 202410699184] and the Humanities and Social Sciences Research Project funded by the Ministry of Education of China [Grant No. 23YJAZH139].
Abstract: Machine translation of low-resource languages (LRLs) has long been hindered by limited corpora and linguistic complexity. This review summarizes key developments, from traditional methods to recent progress with large language models (LLMs), while highlighting ongoing challenges such as data bottlenecks, biases, fairness, and computational costs. Finally, it discusses future directions, including efficient parameter fine-tuning, multimodal translation, and community-driven corpus construction, providing insights for advancing LRL translation research.
Abstract: Short Message Service (SMS) is a widely used and cost-effective communication medium that has unfortunately become a frequent target for unsolicited messages, commonly known as SMS spam. With the rapid adoption of smartphones and increased Internet connectivity, SMS spam has emerged as a prevalent threat. Spammers have recognized the critical role SMS plays in modern communication, making it a prime target for abuse. As cybersecurity threats continue to evolve, the volume of SMS spam has increased substantially in recent years. Moreover, the unstructured format of SMS data creates significant challenges for SMS spam detection, making it more difficult to successfully combat spam attacks. In this paper, we present an optimized and fine-tuned transformer-based language model to address the problem of SMS spam detection. We use a benchmark SMS spam dataset to analyze this spam detection model. Additionally, we utilize pre-processing techniques to obtain clean and noise-free data and address the class imbalance problem by leveraging text augmentation techniques. The overall experiment showed that our optimized, fine-tuned BERT (Bidirectional Encoder Representations from Transformers) variant model, RoBERTa, obtained a high accuracy of 99.84%. To further enhance model transparency, we incorporate Explainable Artificial Intelligence (XAI) techniques that compute positive and negative coefficient scores, offering insight into the model's decision-making process. Additionally, we evaluate the performance of traditional machine learning models as a baseline for comparison. This comprehensive analysis demonstrates the significant impact language models can have on addressing complex text-based challenges within the cybersecurity landscape.
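The abstract above handles class imbalance with text augmentation. As a simpler stand-in, the rebalancing step can be sketched with duplication-based oversampling (note this duplicates minority messages rather than rewriting them as true augmentation would; function and variable names are illustrative):

```python
import random

def oversample_minority(texts, labels, seed=0):
    """Duplicate minority-class examples until all classes are the same size.

    A simplified stand-in for augmentation-based rebalancing: real text
    augmentation would generate paraphrases instead of exact copies.
    """
    rng = random.Random(seed)
    by_class = {}
    for t, y in zip(texts, labels):
        by_class.setdefault(y, []).append(t)
    target = max(len(v) for v in by_class.values())
    out_texts, out_labels = list(texts), list(labels)
    for y, items in by_class.items():
        for _ in range(target - len(items)):
            out_texts.append(rng.choice(items))  # resample with replacement
            out_labels.append(y)
    return out_texts, out_labels
```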
Funding: National Key Research and Development Program of China (2021YFB3500501).
Abstract: Commercial phosphor-converted white LEDs (pc-WLEDs) face two inherent limitations, namely blue light hazard and a low color rendering index, due to the use of blue LEDs as the excitation source. To address these challenges, violet LEDs are proposed as an alternative solution. Currently, phosphors that can be efficiently excited by violet light (with wavelengths from 400 to 420 nm) still remain under development. In this study, we utilize large language models to construct a comprehensive database of Eu^(2+)- and Ce^(3+)-doped phosphors for discovering novel violet-excited phosphors. A total of 822 phosphor data entries, including elemental compositions, crystal structures, and excitation/emission wavelengths, have been extracted and validated from 9551 research papers. Compared with Ce^(3+)-doped phosphors, Eu^(2+)-doped phosphors are in general better suited as violet-excited phosphors, as well as red-emitting phosphors. In particular, Eu^(2+)-doped nitrides and sulfides are worth exploring as violet-excited phosphors. This database is expected to be useful in the future development of phosphors for pc-WLEDs based on artificial intelligence methods. The datasets in this article are listed in Science Data Bank at http://doi.org/10.57760/sciencedb.34314.
Abstract: Model evaluation using benchmark datasets is an important method to measure the capability of large language models (LLMs) in specific domains, and it is mainly used to assess the knowledge and reasoning abilities of LLMs. Therefore, to better assess the capability of LLMs in the agricultural domain, Agri-Eval was proposed as a benchmark for assessing the knowledge and reasoning ability of LLMs in agriculture. The assessment dataset used in Agri-Eval covered seven major disciplines in the agricultural domain: crop science, horticulture, plant protection, animal husbandry, forest science, aquaculture science, and grass science, and contained a total of 2283 questions. Among domestic general-purpose LLMs, DeepSeek R1 performed best with an accuracy rate of 75.49%. Among international general-purpose LLMs, Gemini 2.0 Pro Exp 0205 stood out as the top performer, achieving an accuracy rate of 74.28%. As an agriculture-specific vertical LLM, Shennong V2.0 outperformed all other LLMs in China, and its answer accuracy on agricultural knowledge exceeded that of all existing general-purpose LLMs. The launch of Agri-Eval helps LLM developers comprehensively evaluate a model's capability in the field of agriculture through a variety of tasks and tests, promoting the development of LLMs in the agricultural domain.
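Benchmark scores like the 75.49% accuracy reported above reduce to exact-match accuracy over the question set, optionally broken down by discipline. A minimal sketch (the record layout is an assumption, not specified by the abstract):

```python
from collections import defaultdict

def accuracy_by_discipline(records):
    """Exact-match accuracy per discipline.

    records: iterable of (discipline, model_answer, gold_answer) tuples.
    Returns a dict mapping discipline -> fraction of correct answers.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for discipline, pred, gold in records:
        total[discipline] += 1
        if pred == gold:
            correct[discipline] += 1
    return {d: correct[d] / total[d] for d in total}
```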
Funding: funded by the National Key Research and Development Program of China (No. 2021YFA1100500), the National Natural Science Foundation of China (No. 82370662), and the Key Research & Development Plan of Zhejiang Province (No. 2024C03051).
Abstract: This study evaluated the accuracy, completeness, and comprehensibility of responses from mainstream large language models (LLMs) to hepatitis C virus (HCV)-related questions, aiming to assess their performance in addressing patient queries about the disease and lifestyle behaviors. The models selected were ChatGPT-4o, Gemini 2.0 Pro, Claude 3.5 Sonnet, and DeepSeek V3, with 12 questions chosen by two HCV experts from the domains of prevention, diagnosis, and treatment.
Funding: supported by the National Key R&D Program of China [2022YFF0902703] and the State Administration for Market Regulation Science and Technology Plan Project (2024MK033).
Abstract: Recommendation systems are key to boosting user engagement, satisfaction, and retention, particularly on media platforms where personalized content is vital. Sequential recommendation systems learn from user-item interactions to predict future items of interest. However, many current methods rely on unique user and item IDs, limiting their ability to represent users and items effectively, especially in zero-shot learning scenarios where training data is scarce. With the rapid development of Large Language Models (LLMs), researchers are exploring their potential to enhance recommendation systems. However, there is a semantic gap between the linguistic semantics of LLMs and the collaborative semantics of recommendation systems, where items are typically indexed by IDs. Moreover, most research focuses on item representations, neglecting personalized user modeling. To address these issues, we propose a sequential recommendation framework using LLMs, called CIT-Rec, a model that integrates Collaborative semantics for user representation and Image and Text information for item representation to enhance Recommendations. Specifically, by aligning intuitive image information with text containing semantic features, we can more accurately represent items, improving item representation quality. We focus not only on item representations but also on user representations. To more precisely capture users' personalized preferences, we use traditional sequential recommendation models to train on users' historical interaction data, effectively capturing behavioral patterns. Finally, by combining LLMs and traditional sequential recommendation models, we allow the LLM to understand linguistic semantics while capturing collaborative semantics. Extensive evaluations on real-world datasets show that our model outperforms baseline methods, effectively combining user interaction history with item visual and textual modalities to provide personalized recommendations.
Abstract: The emergence of large language models (LLMs) has brought about revolutionary social value. However, concerns have arisen regarding the generation of deceptive content by LLMs and their potential for misuse. Consequently, a crucial research question arises: how can we differentiate between AI-generated and human-authored text? Existing detectors face several challenges, such as operating as black boxes, relying on supervised training, and being vulnerable to manipulation and misinformation. To tackle these challenges, we propose an innovative unsupervised white-box detection method that utilizes a "dual-driven verification mechanism" to achieve high-performance detection, even in the presence of obfuscated attacks on the text content. More specifically, we initially employ the SpaceInfi strategy to increase the difficulty of detecting the text content. Subsequently, we randomly select vulnerable spots from the text and perturb them using another pre-trained language model (e.g., T5). Finally, we apply a dual-driven defense mechanism (D3M) that validates whether perturbed text content was generated by a model or authored by a human, based on the dimensions of Information Transmission Quality and Information Transmission Density. Through experimental validation, our proposed novel method demonstrates state-of-the-art (SOTA) performance when exposed to equivalent levels of perturbation intensity across multiple benchmarks, thereby showcasing the effectiveness of our strategies.
Abstract: Classifying job offers into occupational categories is a fundamental task in human resource information systems, as it improves and streamlines indexing, search, and matching between openings and job seekers. Comprehensive occupational databases such as O*NET or ESCO provide detailed taxonomies of interrelated positions that can be leveraged to align the textual content of postings with occupational categories, thereby facilitating standardization, cross-system interoperability, and access to metadata for each occupation (e.g., tasks, knowledge, skills, and abilities). In this work, we explore the effectiveness of fine-tuning existing language models (LMs) to classify job offers with occupational descriptors from O*NET. This enables a more precise assessment of candidate suitability by identifying the specific knowledge and skills required for each position, and helps automate recruitment processes by mitigating human bias and subjectivity in candidate selection. We evaluate three representative BERT-like models: BERT, RoBERTa, and DeBERTa. BERT serves as the baseline encoder-only architecture; RoBERTa incorporates advances in pretraining objectives and data scale; and DeBERTa introduces architectural improvements through disentangled attention mechanisms. The best performance was achieved with the DeBERTa model, although the other models also produced strong results, and no statistically significant differences were observed across models. We also find that these models typically reach optimal performance after only a few training epochs, and that training with smaller, balanced datasets is effective. Consequently, comparable results can be obtained with models that require fewer computational resources and less training time, facilitating deployment and practical use.
Abstract: Liver transplantation (LT) remains the optimal life-saving intervention for patients with end-stage liver disease. Despite recent advances in LT, several barriers, including organ allocation, donor-recipient matching, and patient education, persist. With the growing progress of artificial intelligence, particularly large language models (LLMs) like ChatGPT, new applications have emerged in the field of LT. Current studies demonstrating the usage of ChatGPT in LT span various areas of application, from clinical settings to research and education. ChatGPT usage can benefit both healthcare professionals, by decreasing the time spent on non-clinical work, and LT recipients, by providing accurate information. Future potential applications include the expanding usage of ChatGPT and other LLMs in the fields of LT pathology and radiology, as well as the automated creation of discharge summaries and other related paperwork. Additionally, future versions of ChatGPT might have the potential to provide more accurate patient education material with increased readability. Although ChatGPT usage presents promising applications, there are certain ethical and practical limitations. Key concerns include patient data privacy, information accuracy, the possibility of misinformation, and the lack of a legal framework. Healthcare providers and policymakers should collaborate on the establishment of a controlled framework for the safe use of ChatGPT. The aim of this minireview is to summarize the current literature on ChatGPT in LT, highlighting both opportunities and limitations, while also outlining possible future applications.
Funding: supported by the University Grant Agency of Matej Bel University in Banská Bystrica, project number UGA-14-PDS-2025.
Abstract: It is known that correlation does not imply causality. Some relationships identified in the analysis of data are coincidental or unknown, while others are produced by real-world causality in the studied situation, and there is a need to differentiate between these two scenarios. Until recently, the proper, semantic, causality of a relationship could be determined only by human experts from the area of expertise of the studied data. This has changed with the advance of large language models, which are often utilized as surrogates for such human experts, making the process automated and readily available to all data analysts. This motivates the main objective of this work, which is to introduce the design and implementation of a large language model-based semantic causality evaluator built on correlation analysis, together with its visual analysis model, called the Causal heatmap. Following the implementation itself, the model is evaluated in terms of the quality of the visual model, the quality of the causal evaluation based on large language models, and a comparative analysis; the results reached in the study highlight the usability of large language models for this task and the potential of the proposed approach in the analysis of unknown datasets. The results of the experimental evaluation demonstrate the usefulness of the Causal heatmap method, supported by its evident highlighting of interesting relationships while suppressing irrelevant ones.
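The pipeline above starts from correlation analysis and forwards only strong relationships to the LLM for a semantic-causality verdict. A dependency-free sketch of that pre-filtering step (the threshold value and function names are illustrative assumptions):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def candidate_pairs(columns, threshold=0.7):
    """columns: dict name -> list of values.

    Returns strongly correlated column pairs (name_a, name_b, r), the
    candidates that would be forwarded to the LLM for causal assessment.
    """
    names = sorted(columns)
    out = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = pearson(columns[a], columns[b])
            if abs(r) >= threshold:
                out.append((a, b, r))
    return out
```

In the Causal heatmap, each surviving pair would then be colored by the LLM's causal/coincidental judgment rather than by the raw correlation alone.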
Abstract: Background: To assess ChatGPT's and Bard's effectiveness in the initial identification of articles for Otolaryngology—Head and Neck Surgery systematic literature reviews. Methods: Three PRISMA-based systematic reviews (Jabbour et al. 2017, Wong et al. 2018, and Wu et al. 2021) were replicated using ChatGPT v3.5 and Bard. Outputs (author, title, publication year, and journal) were compared to the original references and cross-referenced with medical databases for authenticity and recall. Results: Several themes emerged when comparing Bard and ChatGPT across the three reviews. Bard generated more outputs and had greater recall in Wong et al.'s review, with a broader date range in Jabbour et al.'s review. In Wu et al.'s review, ChatGPT-2 had higher recall and identified more authentic outputs than Bard-2. Conclusion: Large language models (LLMs) failed to fully replicate peer-reviewed methodologies, producing outputs with inaccuracies but identifying relevant, especially recent, articles missed by the references. While human-led PRISMA-based reviews remain the gold standard, refining LLMs for literature reviews shows potential.
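The recall comparison above boils down to checking how many of each review's original references appear among the model's generated citations. A toy sketch with naive title normalization (real matching would need fuzzier logic for punctuation and subtitle variants; the function name is illustrative):

```python
def citation_recall(generated_titles, reference_titles):
    """Fraction of the original review's references the model reproduced.

    Titles are normalized (lowercased, whitespace-collapsed) before exact
    comparison; a production matcher would tolerate more variation.
    """
    norm = lambda t: " ".join(t.lower().split())
    generated = {norm(t) for t in generated_titles}
    hits = sum(1 for t in reference_titles if norm(t) in generated)
    return hits / len(reference_titles)
```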
基金funded by the Office of the Vice-President for Research and Development of Cebu Technological University.
Abstract: This study demonstrates a novel integration of large language models, machine learning, and multicriteria decision-making to investigate self-moderation in small online communities, a topic under-explored compared to user behavior and platform-driven moderation on social media. The proposed methodological framework (1) utilizes large language models for social media post analysis and categorization, (2) employs k-means clustering for content characterization, and (3) incorporates the TODIM (Tomada de Decisão Interativa Multicritério) method to determine moderation strategies based on expert judgments. In general, the fully integrated framework leverages the strengths of these intelligent systems for a more systematic evaluation of large-scale decision problems. When applied to social media moderation, this approach promotes nuanced and context-sensitive self-moderation by taking into account factors such as cultural background and geographic location. The application of this framework is demonstrated within Facebook groups. Eight distinct content clusters encompassing safety, harassment, diversity, and misinformation are identified. The analysis revealed a preference for content removal across all clusters, suggesting a cautious approach towards potentially harmful content. However, the framework also highlights the use of other moderation actions, like account suspension, depending on the content category. These findings contribute to the growing body of research on self-moderation and offer valuable insights for creating safer and more inclusive online spaces within smaller communities.
Abstract: Although Named Entity Recognition (NER) in cybersecurity has historically concentrated on threat intelligence, vital security data can be found in a variety of sources, such as open-source intelligence and unprocessed tool outputs. When dealing with technical language, the coexistence of structured and unstructured data poses serious issues for traditional BERT-based techniques. We introduce a three-phase approach for improved NER in multi-source cybersecurity data that makes use of large language models (LLMs). To ensure thorough entity coverage, our method starts with an identification module that uses dynamic prompting techniques. To lessen hallucinations, the extraction module uses confidence-based self-assessment and cross-checking with regex validation. The tagging module links to knowledge bases for contextual validation and uses SecureBERT in conjunction with conditional random fields to detect entity boundaries precisely. Our framework creates efficient natural language segments by utilizing decoder-based LLMs with 10B parameters. When compared to baseline SecureBERT implementations, evaluation across four cybersecurity data sources shows notable gains, with 9.4%–25.21% greater recall and a 6.38%–17.3% better F1-score. Our refined model matches larger models and achieves a 2.6%–4.9% better F1-score for technical phrase recognition than the state-of-the-art alternatives Claude 3.5 Sonnet, Llama3-8B, and Mixtral-7B. The three-stage identification-extraction-tagging pipeline tackles important cybersecurity NER issues. Through efficient architectures, these developments preserve deployability while setting a new standard for entity extraction in challenging security scenarios. The findings show how targeted enhancements in hybrid recognition, validation procedures, and prompt engineering raise NER performance above monolithic LLM approaches in cybersecurity applications, especially for technical entity extraction from heterogeneous sources where conventional techniques fall short. Because of its modular nature, the framework can be upgraded at the component level as new methods are developed.
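Regex cross-checking, as used in the extraction module above to reject hallucinated entities, works for entity types with rigid formats. A sketch for two such types (the pattern set is illustrative, not the paper's actual validators):

```python
import re

# Validation patterns for entity types with rigid formats; entity types
# without such formats (malware names, threat actors) cannot be regex-checked.
PATTERNS = {
    "CVE": re.compile(r"^CVE-\d{4}-\d{4,}$"),
    "IPv4": re.compile(
        r"^(?:(?:25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)\.){3}"
        r"(?:25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)$"),
}

def validate_entity(entity_type, text):
    """Cross-check an LLM-extracted entity against its expected format,
    rejecting malformed (likely hallucinated) strings before tagging.
    Types without a pattern pass through unchecked."""
    pattern = PATTERNS.get(entity_type)
    return bool(pattern.match(text)) if pattern else True
```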
Abstract: Knowledge distillation has become a standard technique for compressing large language models into efficient student models, but existing methods often struggle to balance prediction accuracy with explanation quality. Recent approaches such as Distilling Step-by-Step (DSbS) introduce explanation supervision, yet they apply it in a uniform manner that may not fully exploit the different learning dynamics of prediction and explanation. In this work, we propose a task-structured curriculum learning (TSCL) framework that structures training into three sequential phases: (i) prediction-only, to establish stable feature representations; (ii) joint prediction-explanation, to align task outputs with rationale generation; and (iii) explanation-only, to refine the quality of rationales. This design provides a simple but effective modification of DSbS, requiring no architectural changes and adding negligible training cost. We justify the phase scheduling with ablation studies and convergence analysis, showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment. Extensive experiments on five datasets (e-SNLI, ANLI, CommonsenseQA, SVAMP, and MedNLI) demonstrate that TSCL consistently outperforms strong baselines, achieving gains of +1.7–2.6 points in accuracy and 0.8–1.2 in ROUGE-L, corresponding to relative error reductions of up to 21%. Beyond lexical metrics, human evaluation and ERASER-style faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations. Comparative training curves further reveal faster convergence and lower variance across seeds. Efficiency analysis shows less than 3% overhead in wall-clock training time and no additional inference cost, making the approach practical for real-world deployment. This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation. By separating and sequencing objectives, TSCL achieves a better balance between accuracy, stability, and explanation quality. The framework generalizes across domains, including medical NLI, and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.
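The three-phase schedule above can be expressed as a step-dependent pair of loss weights. The sketch below assumes illustrative phase boundaries (30%/80% of training) and a 50/50 joint split, which the abstract does not specify:

```python
def tscl_weights(step, total_steps, boundaries=(0.3, 0.8)):
    """Loss weights (w_pred, w_expl) for the three TSCL phases.

    Phase boundaries and the joint-phase split are illustrative
    assumptions; the paper's actual schedule may differ.
    """
    frac = step / total_steps
    if frac < boundaries[0]:        # phase (i): prediction-only
        return 1.0, 0.0
    if frac < boundaries[1]:        # phase (ii): joint prediction-explanation
        return 0.5, 0.5
    return 0.0, 1.0                 # phase (iii): explanation-only
```

The total loss at each step would then be w_pred * prediction_loss + w_expl * explanation_loss, which reduces to uniform DSbS-style supervision if both weights are held constant.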
Funding: supported by a grant from the National Natural Science Foundation of China (82372207).
Abstract: BACKGROUND: To provide a comprehensive analysis of the landscape of artificial intelligence (AI) applications in cardiac arrest (CA). METHODS: Comprehensive searches were conducted in PubMed, the Cochrane Library, Web of Science, and EMBASE from database inception through 10 June 2025. Studies that applied AI in both in-hospital cardiac arrest (IHCA) and out-of-hospital cardiac arrest (OHCA) populations across the following domains were included: prediction of cardiac arrest occurrence, prognostication of CA outcomes, applications of large language models (LLMs), and evaluation of cardiopulmonary resuscitation (CPR) and other AI-driven interventions related to CA. RESULTS: The scoping review included 114 studies, encompassing data from 9,574,462 patients in total. AI was most commonly applied to the prediction of CA (overall, n=40; IHCA, n=30; OHCA, n=4; and both, n=6), CPR-related decision support during CA (n=16), and post-arrest prognosis and rehabilitation outcomes (overall, n=38; OHCA, n=21; IHCA, n=3; and both, n=14). Additional application areas included LLM-based applications (n=8), emergency call handling (n=4), wearable device-based detection (n=3), heart rhythm identification (n=2), education (n=2), and extracorporeal cardiopulmonary resuscitation (ECPR) candidate identification (n=1). Across all application scenarios, the highest area under the receiver operating characteristic curve (AUROC) for pre-arrest CA prediction in IHCA patients was 0.998, achieved with a multilayer perceptron (MLP) model, whereas the best AUROC for pre-arrest CA prediction in OHCA patients was 0.950, achieved with extreme gradient boosting (XGBoost) or random forest (RF) models. For CPR-related decision support during CA, the highest AUROC was 0.990, achieved with a convolutional neural network (CNN) model. In prognostic prediction, the best AUROC for IHCA patients was 0.960 using XGBoost, while for OHCA patients it reached 0.976 using an MLP model. CONCLUSION: This review shows that AI is most commonly used for the prediction of CA and CPR-related support, as well as post-arrest prognosis and rehabilitation outcomes. Future research directions include drug discovery, post-resuscitation management, neurorehabilitation, and clinical trial innovation. Further studies should prioritize multicenter clinical trials to evaluate AI models in real-world settings and validate their effectiveness across diverse patient populations. Overall, AI has significant potential to improve clinical practice, and its role in CA applications is increasingly important.
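The AUROC values reported in the review above measure how well a model's risk scores rank positive cases above negative ones. As a minimal illustration (not code from any of the reviewed studies; the `auroc` helper below is a hypothetical name), the metric can be estimated in pure Python via the pairwise Mann-Whitney formulation, which is equivalent to integrating the ROC curve:

```python
def auroc(labels, scores):
    """Rank-based AUROC estimate (Mann-Whitney U formulation).

    labels: iterable of 0/1 ground-truth outcomes (e.g. arrest vs. no arrest)
    scores: iterable of model risk scores, higher = more likely positive
    Returns the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half-correct.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative example")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two negatives and two positives.
print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # -> 0.75
```

An AUROC of 0.5 corresponds to random ranking and 1.0 to perfect separation, so the 0.95-0.998 figures cited above indicate near-perfect discrimination on the evaluated cohorts (subject, as the review notes, to external validation).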
Abstract: Background: Despite the promise shown by large language models (LLMs) for standardized tasks, their multidimensional performance in real-world oncology decision-making remains unevaluated. This study aims to introduce a framework for evaluating LLM and physician decisions in challenging lung cancer cases. Methods: We curated 50 challenging lung cancer cases (25 local and 25 published) classified as complex, rare, or refractory. Blinded three-dimensional, five-point Likert evaluations (1–5 for comprehensiveness, specificity, and readability) compared standalone LLMs (DeepSeek R1, Claude 3.5, Gemini 1.5, and GPT-4o), physicians by experience level (junior, intermediate, and senior), and AI-assisted junior physicians; intergroup differences and augmentation effects were analyzed statistically. Results: Of the 50 challenging cases (18 complex, 17 rare, and 15 refractory) rated by three experts, DeepSeek R1 achieved scores of 3.95±0.33, 3.71±0.53, and 4.26±0.18 for comprehensiveness, specificity, and readability, respectively, positioning it between intermediate (3.68, 3.68, 3.75) and senior (4.50, 4.64, 4.53) physicians. GPT-4o and Claude 3.5 reached intermediate-physician-level comprehensiveness (3.76±0.39, 3.60±0.39) but only junior-to-intermediate-physician-level specificity (3.39±0.39, 3.39±0.49). All LLMs scored higher on rare cases than intermediate physicians but fell below junior physicians in refractory-case specificity. AI-assisted junior physicians showed marked gains in rare cases, with comprehensiveness rising from 2.32 to 4.29 (84.8%), specificity from 2.24 to 4.26 (90.8%), and readability from 2.76 to 4.59 (66.0%), while specificity declined by 3.2% (3.17 to 3.07) in refractory cases. Error analysis showed complementary strengths, with physicians demonstrating reasoning stability and LLMs excelling in knowledge updating and risk management. Conclusions: LLMs performed variably in clinical decision-making tasks depending on case type, performing better in rare cases and worse in refractory cases requiring longitudinal reasoning. Complementary strengths between LLMs and physicians support case- and task-tailored human–AI collaboration.