Journal Articles
928,859 articles found
1. QingNangTCM: a parameter-efficient fine-tuning large language model for traditional Chinese medicine
Authors: Xuming Tong, Liyan Liu, Yanhong Yuan, Xiaozheng Ding, Huiru Jia, Xu Yang, Sio Kei Im, Mini Han Wang, Zhang Xiong, Yapeng Wang. Digital Chinese Medicine, 2026, No. 1, pp. 1-12.
Objective: To develop QingNangTCM, a specialized large language model (LLM) tailored for expert-level traditional Chinese medicine (TCM) question-answering and clinical reasoning, addressing the scarcity of domain-specific corpora and specialized alignment. Methods: We constructed QnTCM_Dataset, a corpus of 100,000 entries, by integrating data from ShenNong_TCM_Dataset and SymMap v2.0 and synthesizing additional samples via retrieval-augmented generation (RAG) and persona-driven generation. The dataset comprehensively covers diagnostic inquiries, prescriptions, and herbal knowledge. Utilizing P-Tuning v2, we fine-tuned the GLM-4-9B-Chat backbone to develop QingNangTCM. A multidimensional evaluation framework, assessing accuracy, coverage, consistency, safety, professionalism, and fluency, was established using metrics such as bilingual evaluation understudy (BLEU), recall-oriented understudy for gisting evaluation (ROUGE), and metric for evaluation of translation with explicit ordering (METEOR), together with LLM-as-a-Judge and expert review. Qualitative analysis was conducted across four simulated clinical scenarios: symptom analysis, disease treatment, herb inquiry, and failure cases. Baseline models included GLM-4-9B-Chat, DeepSeek-V2, HuatuoGPT-II (7B), and GLM-4-9B-Chat (freeze-tuning). Results: QingNangTCM achieved the highest scores in BLEU-1/2/3/4 (0.425/0.298/0.137/0.064), ROUGE-1/2 (0.368/0.157), and METEOR (0.218), demonstrating a balanced and superior normalized performance profile of 0.900 across the dimensions of accuracy, coverage, and consistency. Although its ROUGE-L score (0.299) was lower than that of HuatuoGPT-II (7B) (0.351), it significantly outperformed domain-specific models in expert-validated win rates for professionalism (86%) and safety (73%). Qualitative analysis confirmed that the model strictly adheres to the "symptom-syndrome-pathogenesis-treatment" reasoning chain, though occasional misclassifications and hallucinations persisted when dealing with rare medicinal materials and uncommon syndromes.
Keywords: Large language model (LLM); traditional Chinese medicine (TCM); fine-tuning; P-Tuning v2; clinical decision support
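The n-gram overlap metrics cited in the evaluation above (BLEU-1 and friends) can be illustrated with a minimal sketch of modified unigram precision, the core of BLEU-1 (brevity penalty omitted here). The candidate and reference strings are invented for illustration, not taken from the paper.

```python
from collections import Counter

def bleu1(candidate, reference):
    # Modified unigram precision: each candidate token's count is clipped
    # by its count in the reference; the brevity penalty is omitted.
    cand = candidate.split()
    ref_counts = Counter(reference.split())
    clipped = sum(min(n, ref_counts[tok]) for tok, n in Counter(cand).items())
    return clipped / len(cand)

# Invented example strings:
print(bleu1("the liver governs free flow",
            "the liver governs the free flow of qi"))  # 1.0
```

Higher-order BLEU-n scores repeat the same clipped-precision computation over n-grams, which is why BLEU-4 values in the table above are much smaller than BLEU-1.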
2. Detection of Maliciously Disseminated Hate Speech in Spanish Using Fine-Tuning and In-Context Learning Techniques with Large Language Models
Authors: Tomás Bernal-Beltrán, Ronghao Pan, José Antonio García-Díaz, María del Pilar Salas-Zárate, Mario Andrés Paredes-Valverde, Rafael Valencia-García. Computers, Materials & Continua, 2026, No. 4, pp. 353-390.
The malicious dissemination of hate speech via compromised accounts, automated bot networks and malware-driven social media campaigns has become a growing cybersecurity concern. Automatically detecting such content in Spanish is challenging due to linguistic complexity and the scarcity of annotated resources. In this paper, we compare two predominant AI-based approaches for the forensic detection of malicious hate speech: (1) fine-tuning encoder-only models that have been trained in Spanish and (2) in-context learning techniques (zero- and few-shot learning) with large-scale language models. Our approach goes beyond binary classification, proposing a comprehensive, multidimensional evaluation that labels each text by: (1) type of speech, (2) recipient, (3) level of intensity (ordinal) and (4) targeted group (multi-label). Performance is evaluated on an annotated Spanish corpus using standard metrics such as precision, recall and F1-score, and stability-oriented metrics (Zero-to-Few-Shot Retention and Zero-to-Few-Shot Gain) are applied to evaluate the stability of the transition from zero-shot to few-shot prompting. The results indicate that fine-tuned encoder-only models (notably MarIA and BETO variants) consistently deliver the strongest and most reliable performance: in our experiments their macro F1-scores lie roughly in the range of 46%-66% depending on the task. Zero-shot approaches are much less stable and typically yield substantially lower performance (observed F1-scores range approximately 0%-39%), often producing invalid outputs in practice. Few-shot prompting (e.g., Qwen 38B, Mistral 7B) generally improves stability and recall relative to pure zero-shot, bringing F1-scores into a moderate range of approximately 20%-51%, but still falls short of fully fine-tuned models. These findings highlight the importance of supervised adaptation; we also discuss the potential of both paradigms as components in AI-powered cybersecurity and malware forensics systems designed to identify and mitigate coordinated online hate campaigns.
Keywords: Hate speech detection; malicious communication campaigns; AI-driven cybersecurity; social media analytics; large language models; prompt-tuning; fine-tuning; in-context learning; natural language processing
3. Fine-tuning Atmospheric Parameters for Improving ENSO Simulation in the Zebiak–Cane Model
Authors: Xiaojun WEI, Lin CHEN, Ming SUN, Ruihuang XIE, Rong-Hua ZHANG. Advances in Atmospheric Sciences, 2026, No. 2, pp. 420-435, I0022-I0026.
The Zebiak–Cane (ZC) model, renowned as a coupled ocean-atmosphere model specifically designed to simulate and predict El Niño-Southern Oscillation (ENSO), is an indispensable tool for ENSO studies. However, the original ZC model exhibits certain biases in reproducing the ENSO-related sea surface temperature anomalies and heating anomalies, limiting its broader applicability. To improve the accuracy of ENSO simulation, we propose a modified ZC model based on Xie et al. (2015), named the MZC_XJH model, by refining the heating parameterization scheme. The performance in simulating the nonlinear SST–precipitation relationship in the MZC_XJH model is first elaborated. Then, we investigate the impacts of three key atmospheric parameters on ENSO simulation by conducting experiments with the MZC_XJH model. By assessing the performance in simulating five fundamental ENSO metrics (amplitude, periodicity, seasonality, diversity, and skewness), we uncover that the sensitivities of simulated ENSO behaviors to different parameters are distinct. Moreover, we explain why a particular parameter greatly affects some simulated ENSO behaviors while others exert minor influence. We also reveal that the nonlinear effect due to the covariation of multiple parameters on ENSO simulation warrants careful consideration when tuning multiple parameters synchronously. Lastly, we present an updated version of the MZC_XJH model, in which some biases have been mitigated but some remain obvious. Although there are no universally optimal parameters that would ensure flawless performance in simulating every aspect of ENSO, this study provides a valuable reference for tuning atmospheric parameters in the MZC_XJH model, rendering it applicable to some research objectives.
Keywords: ENSO; Zebiak–Cane model; SST–precipitation relationship; parameterization schemes
4. Unlocking Edge Fine-Tuning: A Sample-Efficient Language-Empowered Split Fine-Tuning Framework
Authors: Zuyi Huang, Yue Wang, Jia Liu, Haodong Yi, Lejun Ai, Min Chen, Salman A. AlQahtani. Computers, Materials & Continua, 2026, No. 4, pp. 1584-1606.
The personalized fine-tuning of large language models (LLMs) on edge devices is severely constrained by limited computation resources. Although split federated learning alleviates on-device burdens, its effectiveness diminishes in few-shot reasoning scenarios due to the low data efficiency of conventional supervised fine-tuning, which leads to excessive communication overhead. To address this, we propose Language-Empowered Split Fine-Tuning (LESFT), a framework that integrates split architectures with a contrastive-inspired fine-tuning paradigm. LESFT simultaneously learns from multiple logically equivalent but linguistically diverse reasoning chains, providing richer supervisory signals and improving data efficiency. This process-oriented training allows more effective reasoning adaptation with fewer samples. Extensive experiments demonstrate that LESFT consistently outperforms strong baselines such as SplitLoRA in task accuracy on GSM8K, CommonsenseQA, and AQUA_RAT, with the largest gains observed on Qwen2.5-3B. These results indicate that LESFT can effectively adapt large language models for reasoning tasks under the computational and communication constraints of edge environments.
Keywords: Large language models; edge computing; efficient fine-tuning; few-shot fine-tuning; split federated learning
5. Optimizing Fine-Tuning in Quantized Language Models: An In-Depth Analysis of Key Variables
Authors: Ao Shen, Zhiquan Lai, Dongsheng Li, Xiaoyu Hu. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, pp. 307-325.
Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to address these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, shrinking the adapter of a larger layer preserves fine-tuning accuracy better than shrinking that of a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
Keywords: Large-scale language model; parameter-efficient fine-tuning; parameter quantization; key variables; trainable parameters; experimental analysis
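The adapter-rank trade-off discussed above follows from the LoRA parameterization, in which a frozen weight W is augmented by a trainable low-rank update (alpha/r)·BA, so the trainable parameter count grows linearly with the rank r. A minimal numpy sketch; the layer sizes are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8.0

W = rng.normal(size=(d_out, d_in))   # frozen pretrained weight
A = rng.normal(size=(r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))             # trainable up-projection, zero-initialized

def forward(x):
    # LoRA forward pass: y = x (W + (alpha/r) B A)^T
    return x @ (W + (alpha / r) * B @ A).T

x = rng.normal(size=d_in)
# Because B starts at zero, the adapter is initially a no-op:
assert np.allclose(forward(x), x @ W.T)

def lora_params(d_out, d_in, r):
    # Trainable parameters of one adapter: r * (d_out + d_in), linear in rank.
    return r * (d_out + d_in)

# Illustrative sizes: a large MLP projection with a small rank vs.
# a smaller attention projection with a larger rank.
print(lora_params(11008, 4096, 4), lora_params(4096, 4096, 16))  # 60416 131072
```

The parameter count makes the paper's finding concrete: cutting the rank of a large MLP projection saves far more memory per unit of rank than cutting it on a smaller attention projection.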
6. Fine-tuning a large language model for automating computational fluid dynamics simulations
Authors: Zhehao Dong, Zhen Lu, Yue Yang. Theoretical & Applied Mechanics Letters, 2025, No. 3, pp. 219-225.
Configuring computational fluid dynamics (CFD) simulations typically demands extensive domain expertise, limiting broader access. Although large language models (LLMs) have advanced scientific computing, their use in automating CFD workflows is underdeveloped. We introduce a novel approach centered on domain-specific LLM adaptation. By fine-tuning Qwen2.5-7B-Instruct on NL2FOAM, our custom dataset of 28,716 natural language-to-OpenFOAM configuration pairs with chain-of-thought (CoT) annotations, we enable direct translation from natural language descriptions to executable CFD setups. A multi-agent system orchestrates the process, autonomously verifying inputs, generating configurations, running simulations, and correcting errors. Evaluation on a benchmark of 21 diverse flow cases demonstrates state-of-the-art performance, achieving 88.7% solution accuracy and an 82.6% first-attempt success rate. This significantly outperforms larger general-purpose models such as Qwen2.5-72B-Instruct, DeepSeek-R1, and Llama3.3-70B-Instruct, while also requiring fewer correction iterations and maintaining high computational efficiency. The results highlight the critical role of domain-specific adaptation in deploying LLM assistants for complex engineering workflows. Our code and fine-tuned model have been deposited at https://github.com/YYgroup/AutoCFD.
Keywords: Large language models; fine-tuning; computational fluid dynamics; automated CFD; multi-agent system
7. Optimizing Airline Review Sentiment Analysis: A Comparative Analysis of LLaMA and BERT Models through Fine-Tuning and Few-Shot Learning
Authors: Konstantinos I. Roumeliotis, Nikolaos D. Tselikas, Dimitrios K. Nasiopoulos. Computers, Materials & Continua, 2025, No. 2, pp. 2769-2792.
In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the Large Language Model (LLM) LLaMA and the NLP model BERT, in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and the application of few-shot learning, the study addresses the subtleties of sentiment expressions in airline-related text data. Employing predictive modeling and comparative analysis, the research evaluates the effectiveness of Large Language Model Meta AI (LLaMA) and Bidirectional Encoder Representations from Transformers (BERT) in capturing sentiment intricacies. Fine-tuning, including domain adaptation, enhances the models' performance in sentiment classification tasks. Additionally, the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis. By conducting experiments on a diverse airline review dataset, the research quantifies the impact of fine-tuning, domain adaptation, and few-shot learning on model performance, providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content (UGC). This research contributes to refining sentiment analysis models, ultimately fostering improved customer satisfaction in the airline industry.
Keywords: Sentiment classification; review sentiment analysis; user-generated content; domain adaptation; customer satisfaction; LLaMA model; BERT model; airline reviews; LLM classification; fine-tuning
8. An Analytical Review of Large Language Models Leveraging KDGI Fine-Tuning, Quantum Embeddings, and Multimodal Architectures
Authors: Uddagiri Sirisha, Chanumolu Kiran Kumar, Revathi Durgam, Poluru Eswaraiah, G Muni Nagamani. Computers, Materials & Continua, 2025, No. 6, pp. 4031-4059.
A complete examination of Large Language Models' strengths, problems, and applications is needed due to their rising use across disciplines. Current studies frequently focus on single-use situations and lack a comprehensive understanding of LLM architectural performance, strengths, and weaknesses. This gap precludes finding the appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies. In this research, 50 studies on more than 25 LLMs, including GPT-3, GPT-4, Claude 3.5, DeepKet, and hybrid multimodal frameworks like ContextDET and GeoRSCLIP, are thoroughly reviewed. We propose an LLM application taxonomy by grouping techniques by task focus: healthcare, chemistry, sentiment analysis, agent-based simulations, and multimodal integration. Advanced methods like parameter-efficient tuning (LoRA), quantum-enhanced embeddings (DeepKet), retrieval-augmented generation (RAG), and safety-focused models (GalaxyGPT) are evaluated for dataset requirements, computational efficiency, and performance measures. Frameworks for ethical issues, hallucinations under data limitations, and KDGI-enhanced fine-tuning like Woodpecker's post-remedy corrections are highlighted. The scope and methods of each investigation are described. The work reveals that domain-specialized fine-tuned LLMs employing RAG and quantum-enhanced embeddings perform better for context-heavy applications. In medical text normalization, ChatGPT-4 outperforms previous models, while multimodal frameworks such as GeoRSCLIP improve remote sensing. Parameter-efficient tuning technologies like LoRA carry minimal computing cost with similar performance, demonstrating the necessity for adaptive models in multiple domains. The review aims to identify optimal domain-specific models, explain domain-specific fine-tuning, and present quantum and multimodal LLMs to address scalability and cross-domain issues. The framework helps academics and practitioners identify, adapt, and innovate LLMs for different purposes. This work advances the field of efficient, interpretable, and ethical LLM application research.
Keywords: Large language models; quantum embeddings; fine-tuning techniques; multimodal architectures; ethical AI scenarios
9. New approach to assess sperm DNA fragmentation dynamics: Fine-tuning mathematical models
Authors: Isabel Ortiz, Jesus Dorado, Jane Morrell, Jaime Gosalvez, Francisco Crespo, Juan M. Jimenez, Manuel Hidalgo. Journal of Animal Science and Biotechnology (SCIE, CAS, CSCD), 2017, No. 3, pp. 592-600.
Background: Sperm DNA fragmentation (sDF) has been proved to be an important parameter for predicting in vitro the potential fertility of a semen sample. Colloid centrifugation could be a suitable technique to select those donkey sperm more resistant to DNA fragmentation after thawing. Previous studies have shown that to elucidate the latent damage of the DNA molecule, sDF should be assessed dynamically, where the rate of fragmentation between treatments indicates how resistant the DNA is to iatrogenic damage. The rate of fragmentation is calculated using the slope of a linear regression equation. However, whether sDF dynamics fit this model has not been studied. The objectives of this study were to evaluate the effect of different after-thawing centrifugation protocols on sperm DNA fragmentation and to elucidate the most accurate mathematical model (linear regression, exponential or polynomial) for DNA fragmentation over time in frozen-thawed donkey semen. Results: After submitting post-thaw semen samples to no centrifugation (UDC), sperm washing (SW) or single layer centrifugation (SLC) protocols, sDF values after 6 h of incubation were significantly lower in SLC samples than in SW or UDC. Coefficient of determination (R²) values were significantly higher for a second-order polynomial model than for linear or exponential models. The highest values for acceleration of fragmentation (aSDF) were obtained for SW, followed by SLC and UDC. Conclusion: SLC after thawing seems to preserve DNA longevity longer in comparison to UDC and SW. Moreover, the fine-tuning of models has shown that sDF dynamics in frozen-thawed donkey semen fit a second-order polynomial model, which implies that the fragmentation rate is not constant and fragmentation acceleration must be taken into account to elucidate hidden damage in the DNA molecule.
Keywords: Colloid centrifugation; dynamics; fine-tuning; mathematical models; sperm DNA fragmentation
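The model comparison described above (linear vs. second-order polynomial, ranked by the coefficient of determination R²) can be sketched with numpy's polynomial fitting. The sDF time series below is synthetic, invented purely to illustrate the procedure.

```python
import numpy as np

# Synthetic sDF-style dynamics with accelerating fragmentation (invented data):
t = np.arange(0.0, 7.0)                 # incubation time, h
sdf = 5.0 + 1.2 * t + 0.8 * t ** 2      # fragmentation index, %

def r_squared(y, y_hat):
    # R^2 = 1 - SS_res / SS_tot, the goodness-of-fit measure used above.
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

lin = np.polyval(np.polyfit(t, sdf, 1), t)    # linear (constant-rate) model
quad = np.polyval(np.polyfit(t, sdf, 2), t)   # second-order polynomial model

# The quadratic fit captures the accelerating trend that the linear slope
# misses, so its R^2 is higher:
print(r_squared(sdf, lin) < r_squared(sdf, quad))  # True
```

In this framing, the fitted quadratic coefficient plays the role of the fragmentation acceleration (aSDF) that the paper argues a constant-rate (linear) model cannot capture.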
10. Comparing Fine-Tuning, Zero and Few-Shot Strategies with Large Language Models in Hate Speech Detection in English
Authors: Ronghao Pan, José Antonio García-Díaz, Rafael Valencia-García. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 9, pp. 2849-2868.
Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One of the relevant capabilities is contextual learning, which involves the ability to receive instructions in natural language or task demonstrations to generate expected outputs for test instances without the need for additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users can engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs to detect such content, ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, it is found that the encoder-decoder model called Zephyr achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, it is confirmed that the evaluated models perform well in hate text detection, as they beat the best result on the HatEval task leaderboard. The error analysis shows that contextual learning had difficulty distinguishing between types of hate speech and figurative language. However, the fine-tuned approach tends to produce many false positives.
Keywords: Hate speech detection; zero-shot; few-shot; fine-tuning; natural language processing
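The few-shot setting evaluated above amounts to assembling labelled demonstrations into the prompt ahead of the test instance, with no gradient updates. A minimal sketch; the "Text:/Label:" template and the demonstration strings are invented, not taken from the paper.

```python
def build_prompt(demos, query):
    # Few-shot prompt assembly: k labelled demonstrations followed by the
    # test instance, whose label the model is asked to complete.
    blocks = [f"Text: {text}\nLabel: {label}" for text, label in demos]
    blocks.append(f"Text: {query}\nLabel:")
    return "\n\n".join(blocks)

demos = [
    ("You are all wonderful people", "not-hate"),   # invented demonstrations
    ("I despise people like you", "hate"),
]
prompt = build_prompt(demos, "an unseen post to classify")
print(prompt.endswith("Label:"))  # True: the model completes the final label
```

The zero-shot variant is the same prompt with an empty demonstration list, which is one reason its outputs are less constrained and, as the paper reports, less stable.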
11. A lightweight physics-conditioned diffusion multi-model for medical image reconstruction
Authors: Raja Vavekanand, Ganesh Kumar, Shakhlokhon Kurbanova. Biomedical Engineering Communications, 2026, No. 2, pp. 50-59.
Background: Medical imaging advancements are constrained by fundamental trade-offs between acquisition speed, radiation dose, and image quality, forcing clinicians to work with noisy, incomplete data. Existing reconstruction methods either compromise on accuracy with iterative algorithms or suffer from limited generalizability with task-specific deep learning approaches. Methods: We present LDM-PIR, a lightweight physics-conditioned diffusion multi-model for medical image reconstruction that addresses key challenges in magnetic resonance imaging (MRI), CT, and low-photon imaging. Unlike traditional iterative methods, which are computationally expensive, and task-specific deep learning approaches lacking generalizability, LDM-PIR integrates three innovations: a physics-conditioned diffusion framework that embeds acquisition operators (Fourier/Radon transforms) and noise models directly into the reconstruction process; a multi-model architecture that unifies denoising, inpainting, and super-resolution via shared weight conditioning; and a lightweight design (2.1M parameters) enabling rapid inference (0.8 s/image on GPU). Through self-supervised fine-tuning with measurement-consistency losses, the model adapts to new imaging modalities using fewer annotated samples. Results: LDM-PIR achieves state-of-the-art performance on fastMRI (peak signal-to-noise ratio (PSNR): 34.04 for single-coil / 31.50 for multi-coil) and the Lung Image Database Consortium and Image Database Resource Initiative dataset (28.83 PSNR under Poisson noise). Clinical evaluations demonstrate superior preservation of anatomical structures, with SSIM improvements of 8.8% for single-coil and 4.36% for multi-coil MRI over uDPIR. Conclusion: LDM-PIR offers a flexible, efficient, and scalable solution for medical image reconstruction, addressing the challenges of noise, undersampling, and modality generalization. The model's lightweight design allows for rapid inference, while its self-supervised fine-tuning capability minimizes reliance on large annotated datasets, making it suitable for real-world clinical applications.
Keywords: Medical image reconstruction; physics-conditioned diffusion; multi-task learning; self-supervised fine-tuning; multimodal fusion; lightweight neural networks
12. Agri-Eval: A Multi-level Large Language Model Evaluation Benchmark for Agriculture
Authors: WANG Yaojun, GE Mingliang, XU Guowei, ZHANG Qiyu, BIE Yuhui. Transactions of the Chinese Society for Agricultural Machinery (Peking University Core), 2026, No. 1, pp. 290-299.
Model evaluation using benchmark datasets is an important method for measuring the capability of large language models (LLMs) in specific domains, and is mainly used to assess the knowledge and reasoning abilities of LLMs. To better assess the capability of LLMs in the agricultural domain, Agri-Eval is proposed as a benchmark for assessing the knowledge and reasoning ability of LLMs in agriculture. The assessment dataset used in Agri-Eval covers seven major disciplines in the agricultural domain: crop science, horticulture, plant protection, animal husbandry, forest science, aquaculture science, and grass science, and contains a total of 2,283 questions. Among domestic general-purpose LLMs, DeepSeek R1 performed best, with an accuracy rate of 75.49%. Among international general-purpose LLMs, Gemini 2.0 Pro Exp 0205 stood out as the top performer, achieving an accuracy rate of 74.28%. As an agriculture-domain LLM, Shennong V2.0 outperformed all domestic LLMs, and its accuracy on agricultural knowledge exceeded that of all existing general-purpose LLMs. The launch of Agri-Eval helps LLM developers comprehensively evaluate model capability in the field of agriculture through a variety of tasks and tests, promoting the development of LLMs in agriculture.
Keywords: Large language models; assessment systems; agricultural knowledge; agricultural datasets
13. Ecological Dynamics of a Logistic Population Model with Impulsive Age-selective Harvesting
Authors: DAI Xiangjun, JIAO Jianjun. 应用数学 (Mathematica Applicata; Peking University Core), 2026, No. 1, pp. 72-79.
In this paper, we establish and study a single-species logistic model with impulsive age-selective harvesting. First, we prove the ultimate boundedness of the solutions of the system. Then, we obtain conditions for the asymptotic stability of the trivial solution and the positive periodic solution. Finally, numerical simulations are presented to validate our results. Our results show that age-selective harvesting is more conducive to sustainable population survival than non-age-selective harvesting.
Keywords: Logistic population model; selective harvesting; asymptotic stability; extinction
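The impulsive-harvesting dynamics studied above can be sketched numerically: logistic growth between harvest pulses, with a proportional reduction applied at each pulse. The parameters below are invented, and the non-selective proportional pulse is a crude stand-in for the paper's age-selective harvesting, shown only to illustrate the dichotomy between a positive periodic solution and extinction.

```python
def simulate(h, r=0.8, K=100.0, period=1.0, dt=0.001, periods=60, x0=10.0):
    # Logistic growth dx/dt = r x (1 - x/K) integrated with forward Euler,
    # with an impulsive harvest x -> (1 - h) x applied once per period.
    x = x0
    steps = round(period / dt)
    for _ in range(periods):
        for _ in range(steps):
            x += dt * r * x * (1.0 - x / K)
        x *= (1.0 - h)
    return x

# Mild harvesting settles onto a positive periodic orbit; harvesting that
# outpaces the per-period growth drives the population toward extinction:
print(simulate(h=0.3) > 1.0, simulate(h=0.9) < 1.0)  # True True
```

The threshold mirrors the paper's stability conditions: near zero the population grows by roughly e^r per period, so the trivial (extinction) solution becomes stable once (1-h)·e^r drops below 1.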
14. Ecosystem service models are indeed being validated: A response to Pereira et al. (2025)
Authors: James M. Bullock, Danny A.P. Hooftman, John W. Redhead, Simon Willcock. Geography and Sustainability, 2026, No. 1, pp. 247-248.
In their recent paper, Pereira et al. (2025) claim that validation is overlooked in the mapping and modelling of ecosystem services (ES). They state that "many studies lack critical evaluation of the results and no validation is provided" and that "the validation step is largely overlooked". This assertion may have been true several years ago, for example when Ochoa and Urbina-Cardona (2017) made a similar observation. However, there has been much work on ES model validation over the last decade.
Keywords: Evaluation; mapping; modeling; ES model; ecosystem services; validation
15. Modeling of Precipitation over Africa: Progress, Challenges, and Prospects
Authors: A. A. AKINSANOLA, C. N. WENHAJI, R. BARIMALALA, P.-A. MONERIE, R. D. DIXON, A. T. TAMOFFO, M. O. ADENIYI, V. ONGOMA, I. DIALLO, M. GUDOSHAVA, C. M. WAINWRIGHT, R. JAMES, K. C. SILVERIO, A. FAYE, S. S. NANGOMBE, M. W. POKAM, D. A. VONDOU, N. C. G. HART, I. PINTO, M. KILAVI, S. HAGOS, E. N. RAJAGOPAL, R. K. KOLLI, S. JOSEPH. Advances in Atmospheric Sciences, 2026, No. 1, pp. 59-86.
In recent years, there has been an increasing need for climate information across diverse sectors of society. This demand has arisen from the necessity to adapt to and mitigate the impacts of climate variability and change. Likewise, this period has seen a significant increase in our understanding of the physical processes and mechanisms that drive precipitation and its variability across different regions of Africa. By leveraging a large volume of climate model outputs, numerous studies have investigated the model representation of African precipitation as well as the underlying physical processes. These studies have assessed whether the physical processes are well depicted and whether the models are fit for informing mitigation and adaptation strategies. This paper provides a review of the progress in precipitation simulation over Africa in state-of-the-science climate models and discusses the major issues and challenges that remain.
Keywords: Rainfall; monsoon; climate modeling; CORDEX; CMIP6; convection-permitting models
Preferences of Chinese Dermatologists for Large Language Model Responses in Clinical Psoriasis Scenarios:A Nationwide Cross-Sectional Survey in China
16
作者 Jungang Yang Jingkai Xu +6 位作者 Xuejiao Song Chengxu Li Lili Chen Lingbo Bi Tingting Jiang Xianbo Zuo Yong Cui 《Health Care Science》 2026年第1期40-48,共9页
Background: Large language models (LLMs) have shown considerable promise in supporting clinical decision-making. However, their adoption and evaluation in dermatology remain limited. This study aimed to explore the preferences of Chinese dermatologists regarding LLM-generated responses in clinical psoriasis scenarios and to assess how they prioritize key quality dimensions, including accuracy, traceability, and logicality. Methods: A cross-sectional, web-based survey was conducted between December 25, 2024, and January 22, 2025, following the Checklist for Reporting Results of Internet E-Surveys guidelines. A total of 1247 valid responses were collected from practicing dermatologists across 33 of China's provincial-level administrative divisions. Participants evaluated responses to five categories of clinical questions (etiology, clinical presentation, differential diagnosis, treatment, and case study) generated by five LLMs: ChatGPT-4o, Kimi.ai, Doubao, ZuoYiGPT, and Lingyi-agent. Statistical associations between participant characteristics and model preferences were examined using chi-square tests. Results: ChatGPT-4o (Model 1) emerged as the most preferred model across all clinical tasks, consistently receiving the highest number of votes in case study (n=740), clinical presentation (n=666), differential diagnosis (n=707), etiology (n=602), and treatment (n=656). Significant variation in model preference by professional title was observed only for the differential diagnosis task (χ²=21.13, df=12, p=0.0485), while no significant differences were found across hospital tiers (p>0.05). In terms of evaluation dimensions, accuracy was most frequently rated as "very important" (n=635). A significant association existed between hospital tier and the most valued dimension (χ²=27.667, df=9, p=0.0011), with dermatologists in primary hospitals prioritizing traceability more than their peers in higher-tier hospitals. No significant associations were found across professional titles (p=0.127). Conclusions: Chinese dermatologists show a strong preference for ChatGPT-4o over domestic LLMs in psoriasis-related clinical tasks. While accuracy remains the primary criterion, traceability and logicality are also critical, particularly for clinicians in lower-tier hospitals. These findings suggest that future clinical LLMs should prioritize not only content accuracy but also source transparency and structural clarity to meet the diverse needs of different clinical settings.
Keywords: dermatology; large language model; model evaluation
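The chi-square associations reported in the abstract can be reproduced in form with `scipy.stats`. The sketch below recovers the reported p-value from the stated statistic and degrees of freedom, and runs the same test on a hypothetical 4×4 contingency table of vote counts (the actual survey counts are not given in the abstract):

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Recover the p-value reported for hospital tier vs. most-valued dimension:
# chi-square statistic 27.667 with df = 9, i.e. (4 - 1) * (4 - 1) for a
# 4-tier x 4-dimension table.
p = chi2.sf(27.667, df=9)
print(f"p = {p:.4f}")  # ~0.0011, matching the reported value

# The same test applied to a hypothetical 4x4 table of raw vote counts
# (rows: hospital tiers; columns: most-valued dimension).
table = np.array([[120, 40, 30, 25],
                  [150, 35, 28, 22],
                  [160, 30, 25, 20],
                  [205, 45, 32, 20]])
stat, pval, dof, expected = chi2_contingency(table)
print(dof)  # 9
```

Note that `chi2.sf` is the survival function (upper-tail probability); it is the standard way to turn a published χ² statistic back into its p-value.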
Stability of k-ε model in Kolmogorov flow
17
Authors: Jiashuo GUO, Le FANG. Applied Mathematics and Mechanics (English Edition), 2026, No. 1, pp. 165-184 (20 pages)
The Reynolds-averaged Navier-Stokes (RANS) technique enables critical engineering predictions and is widely adopted. However, since this iterative computation relies on fixed-point iteration, it may converge to unexpected non-physical phase points in practice. We conduct an analysis of the phase-space characteristics and the fixed-point theory underlying the k-ε turbulence model, and employ the classical Kolmogorov flow as a framework, leveraging its direct numerical simulation (DNS) data to construct a one-dimensional (1D) system under periodic/fixed boundary conditions. The RANS results demonstrate that under periodic boundary conditions, the k-ε model exhibits only a unique trivial fixed point, with asymptotes capturing the phase portraits. The stability of this trivial fixed point is determined by a mathematically derived stability phase diagram, indicating that the k-ε model will never converge to correct values under periodic conditions. In contrast, under fixed boundary conditions, the model can yield a stable non-trivial fixed point. The evolutionary mechanisms and their relationship with boundary condition settings systematically explain the inherent limitations of the k-ε model, i.e., its deficiency in computing the flow field under periodic boundary conditions and its sensitivity to boundary-value specifications under fixed boundary conditions. These conclusions are finally validated with the open-source code OpenFOAM.
Keywords: k-ε model; Kolmogorov flow; instability; turbulence model
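For reference, the model whose fixed points the paper analyzes is the standard high-Reynolds-number k-ε model, whose transport equations take the usual form:

```latex
\begin{aligned}
\frac{\partial k}{\partial t} + U_j \frac{\partial k}{\partial x_j}
  &= \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)
     \frac{\partial k}{\partial x_j}\right] + P_k - \varepsilon, \\
\frac{\partial \varepsilon}{\partial t} + U_j \frac{\partial \varepsilon}{\partial x_j}
  &= \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)
     \frac{\partial \varepsilon}{\partial x_j}\right]
     + C_{\varepsilon 1}\,\frac{\varepsilon}{k}\,P_k
     - C_{\varepsilon 2}\,\frac{\varepsilon^2}{k}, \\
\nu_t &= C_\mu \frac{k^2}{\varepsilon},
\end{aligned}
```

with the standard constants $C_\mu = 0.09$, $C_{\varepsilon 1} = 1.44$, $C_{\varepsilon 2} = 1.92$, $\sigma_k = 1.0$, $\sigma_\varepsilon = 1.3$, and $P_k$ the production of turbulent kinetic energy. A steady RANS solve drives these equations to a fixed point of the coupled (k, ε) system, which is why the fixed-point analysis in the abstract applies directly.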
Design optimization and FEA of B-6 and B-7 levels ballistics armor: A modelling approach
18
Authors: Muhammad Naveed, CHU Jinkui, Atif Ur Rehman, Arsalan Hyder. 《大连理工大学学报》 (Journal of Dalian University of Technology), PKU Core, 2026, No. 1, pp. 66-77 (12 pages)
Utilizing finite element analysis, the ballistic protection provided by a combination of perforated D-shaped and base armor plates, collectively referred to as radiator armor, is evaluated. ANSYS Explicit Dynamics is employed to simulate the ballistic impact of 7.62 mm armor-piercing projectiles on Aluminum AA5083-H116 and Steel Secure 500 armors, focusing on the evaluation of material deformation and penetration resistance at varying impact points. While the D-shaped armor plate alone is penetrated by the armor-piercing projectiles, the combination of the perforated D-shaped and base armor plates successfully halts penetration. A numerical model based on the finite element method is developed using software such as SolidWorks and ANSYS to analyze the interaction between the radiator armor and the bullet. The perforated design of the radiator armor is intended to maintain airflow for radiator function, with hole sizes smaller than the bullet core diameter to protect the radiator assemblies. The model predicts brittle fracture resulting from the projectile core's bending under asymmetric impact, with the resulting fragments failing to penetrate the perforated base armor plate. Craters are formed on the surface of the perforated D-shaped armor plate by the impact of projectile fragments. The numerical model accurately predicts hole growth and projectile penetration upon impact with the armor, demonstrating effective protection of the radiator assemblies by the radiator armor.
Keywords: radiator armor; ballistics simulation; Johnson-Cook model; armor-piercing projectile; perforated D-shaped armor plate
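The Johnson-Cook constitutive model named in the keywords, in its standard flow-stress form, relates stress to strain, strain rate, and temperature (the material constants $A$, $B$, $n$, $C$, $m$ are calibrated per material and are not given in the abstract):

```latex
\sigma = \left(A + B\,\varepsilon_p^{\,n}\right)
         \left(1 + C \ln \frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)
         \left(1 - T^{*m}\right),
\qquad
T^{*} = \frac{T - T_{\text{room}}}{T_{\text{melt}} - T_{\text{room}}},
```

where $\varepsilon_p$ is the equivalent plastic strain, $\dot{\varepsilon}/\dot{\varepsilon}_0$ the dimensionless plastic strain rate, and $T^{*}$ the homologous temperature. The three bracketed factors capture strain hardening, strain-rate hardening, and thermal softening respectively, which is why the model is the common choice for high-rate impact simulations such as this one in ANSYS Explicit Dynamics.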
CIT-Rec:Enhancing Sequential Recommendation System with Large Language Models
19
Authors: Ziyu Li, Zhen Chen, Xuejing Fu, Tong Mo, Weiping Li. Computers, Materials & Continua, 2026, No. 3, pp. 2328-2343 (16 pages)
Recommendation systems are key to boosting user engagement, satisfaction, and retention, particularly on media platforms where personalized content is vital. Sequential recommendation systems learn from user-item interactions to predict future items of interest. However, many current methods rely on unique user and item IDs, limiting their ability to represent users and items effectively, especially in zero-shot learning scenarios where training data is scarce. With the rapid development of Large Language Models (LLMs), researchers are exploring their potential to enhance recommendation systems. However, there is a semantic gap between the linguistic semantics of LLMs and the collaborative semantics of recommendation systems, where items are typically indexed by IDs. Moreover, most research focuses on item representations, neglecting personalized user modeling. To address these issues, we propose a sequential recommendation framework using LLMs, called CIT-Rec, a model that integrates Collaborative semantics for user representation and Image and Text information for item representation to enhance Recommendations. Specifically, by aligning intuitive image information with text containing semantic features, we can more accurately represent items, improving item representation quality. We focus not only on item representations but also on user representations. To more precisely capture users' personalized preferences, we use traditional sequential recommendation models to train on users' historical interaction data, effectively capturing behavioral patterns. Finally, by combining LLMs and traditional sequential recommendation models, we allow the LLM to understand linguistic semantics while capturing collaborative semantics. Extensive evaluations on real-world datasets show that our model outperforms baseline methods, effectively combining user interaction history with item visual and textual modalities to provide personalized recommendations.
Keywords: large language models; vision language models; sequential recommendation; instruction tuning
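The core idea of fusing ID-based collaborative semantics with text and image semantics can be illustrated with a minimal numerical sketch. This is not CIT-Rec's actual architecture (the paper's fusion and LLM components are not specified in the abstract); the embeddings, dimensions, and the mean-pooling "sequence model" here are all illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_items = 8, 5  # illustrative embedding dimension and catalog size

# Hypothetical per-item representations: an ID embedding (collaborative
# semantics) plus text and image embeddings (linguistic/visual semantics).
id_emb = rng.normal(size=(n_items, d))
text_emb = rng.normal(size=(n_items, d))
image_emb = rng.normal(size=(n_items, d))

# Simple multimodal fusion: averaging the three modalities keeps the sketch
# short; concatenate-and-project is a common alternative.
item_repr = (id_emb + text_emb + image_emb) / 3.0

# User representation from the interaction history: mean-pool the fused
# representations of previously consumed items, standing in for a trained
# sequential model over the user's behavior sequence.
history = [0, 2, 3]
user_repr = item_repr[history].mean(axis=0)

# Score every catalog item by inner product and rank candidates.
scores = item_repr @ user_repr
ranking = np.argsort(-scores)
print(ranking)
```

The point of the sketch is the shape of the pipeline: items get a fused multimodal vector, the user gets a vector derived from their interaction sequence, and recommendation reduces to ranking by similarity.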
Lithospheric magnetic variations on the Tibetan Plateau based on a 3D surface spline model, compared with strong earthquake occurrences
20
Authors: PengTao Zhang, Jun Yang, LiLi Feng, Xia Li, YuHong Zhao, YingFeng Ji. Earth and Planetary Physics, 2026, No. 1, pp. 30-43 (14 pages)
The National Geophysical Data Center (NGDC) of the United States has collected aeromagnetic data for input into a series of geomagnetic models to improve model resolution; however, in the Tibetan Plateau region, ground-based observations remain insufficient to clearly reflect the characteristics of the region's lithospheric magnetism. In this study, we evaluate the lithospheric magnetism of the Tibetan Plateau by using a 3D surface spline model based on observations from >200 newly constructed repeat stations (portable stations) to determine the spatial distribution of plateau geomagnetism, as well as its correlation with the tectonic features of the region. We analyze the relationships between M≥5 earthquakes and lithospheric magnetic field variations on the Tibetan Plateau and identify regions susceptible to strong earthquakes. We compare the geomagnetic results with those from an enhanced magnetic model (EMM2015) developed by the NGDC and provide insights into improving lithospheric magnetic field calculations in the Tibetan Plateau region. Further research reveals that these magnetic anomalies exhibit distinct differences from the magnetic-seismic correlation mechanisms observed in other tectonic settings; here, they are governed primarily by the combined effects of compressional magnetism, thermal magnetism, and deep thermal stress. This study provides new evidence of geomagnetic anomalies on the Tibetan Plateau, interprets them physically, and demonstrates their potential for identifying seismic hazard zones on the Plateau.
Keywords: Tibetan Plateau; magnetic variation; seismicity; surface spline model; enhanced magnetic model
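The general workflow of fitting a smooth surface through scattered station observations can be sketched with a thin-plate spline, a generic stand-in for the paper's own 3D surface spline formulation. The station positions and anomaly values below are synthetic; only the shape of the procedure (scattered points in, gridded field out) reflects the study:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

# Hypothetical repeat-station observations: (lon, lat) positions roughly
# spanning the Tibetan Plateau, with a synthetic magnetic anomaly (nT).
stations = rng.uniform(low=[78.0, 27.0], high=[99.0, 37.0], size=(200, 2))
anomaly = np.sin(stations[:, 0] / 5.0) * np.cos(stations[:, 1] / 3.0) * 100.0

# Thin-plate-spline interpolation of the scattered values onto a regular
# lon/lat grid; with zero smoothing the spline passes through the data.
spline = RBFInterpolator(stations, anomaly, kernel="thin_plate_spline")
lon, lat = np.meshgrid(np.linspace(78, 99, 50), np.linspace(27, 37, 30))
grid = spline(np.column_stack([lon.ravel(), lat.ravel()])).reshape(lat.shape)
print(grid.shape)  # (30, 50)
```

The gridded field is what would then be compared against a global model such as EMM2015 or overlaid with earthquake epicenters.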