Journal articles
115,810 articles found
Optimizing Fine-Tuning in Quantized Language Models: An In-Depth Analysis of Key Variables
1
Authors: Ao Shen, Zhiquan Lai, Dongsheng Li, Xiaoyu Hu. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, pp. 307-325 (19 pages)
Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to address these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) Larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) The effectiveness of balancing factors depends more on specific values rather than layer type or depth; (3) In quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same discount of trainable parameters, reducing the trainable parameters in a larger layer is more effective in preserving fine-tuning accuracy than in a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
Keywords: Large-scale Language Model; Parameter-Efficient fine-tuning; parameter quantization; key variable; trainable parameters; experimental analysis
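The abstract's central variables (adapter rank per layer type and the balancing factor) can be made concrete with a minimal LoRA sketch. The PyTorch code below is an illustrative from-scratch adapter, not the paper's QLoRA setup (no quantization is shown), and the layer sizes and ranks are hypothetical.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank adapter: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # freeze the pre-trained weight
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r                         # the "balancing factor" in the abstract's terms

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

# Hypothetical per-layer ranks reflecting the reported finding: a large MLP projection
# tolerates a small adapter rank, while a smaller self-attention projection may not.
mlp_proj = LoRALinear(nn.Linear(4096, 11008), r=4)       # large MLP layer, small adapter
attn_q   = LoRALinear(nn.Linear(4096, 4096), r=16)       # smaller attention layer, larger adapter
```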
Optimizing Airline Review Sentiment Analysis: A Comparative Analysis of LLaMA and BERT Models through Fine-Tuning and Few-Shot Learning
2
Authors: Konstantinos I. Roumeliotis, Nikolaos D. Tselikas, Dimitrios K. Nasiopoulos. Computers, Materials & Continua, 2025, No. 2, pp. 2769-2792 (24 pages)
In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the LLaMA Large Language Model (LLM) and the BERT NLP model, in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and the application of few-shot learning, the study addresses the subtleties of sentiment expressions in airline-related text data. Employing predictive modeling and comparative analysis, the research evaluates the effectiveness of Large Language Model Meta AI (LLaMA) and Bidirectional Encoder Representations from Transformers (BERT) in capturing sentiment intricacies. Fine-tuning, including domain adaptation, enhances the models' performance in sentiment classification tasks. Additionally, the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis. By conducting experiments on a diverse airline review dataset, the research quantifies the impact of fine-tuning, domain adaptation, and few-shot learning on model performance, providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content (UGC). This research contributes to refining sentiment analysis models, ultimately fostering improved customer satisfaction in the airline industry.
Keywords: Sentiment classification; review sentiment analysis; user-generated content; domain adaptation; customer satisfaction; LLaMA model; BERT model; airline reviews; LLM classification; fine-tuning
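As a reference point for the fine-tuning setup described above, here is a minimal Hugging Face sketch for binary sentiment classification on review text. The file names and the 'text'/'label' column names are assumptions, and the hyperparameters are illustrative rather than those used in the paper.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical CSV files with "text" and "label" columns; swap in the airline-review data at hand.
dataset = load_dataset("csv", data_files={"train": "airline_reviews_train.csv",
                                          "test": "airline_reviews_test.csv"})
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

args = TrainingArguments(output_dir="bert-airline-sentiment",
                         num_train_epochs=3,
                         per_device_train_batch_size=16,
                         learning_rate=2e-5)
trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"], eval_dataset=dataset["test"])
trainer.train()
```

Domain adaptation, LLaMA fine-tuning, and the few-shot experiments compared in the paper would each modify this baseline rather than replace it.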
Fine-tuning a large language model for automating computational fluid dynamics simulations
3
Authors: Zhehao Dong, Zhen Lu, Yue Yang. Theoretical & Applied Mechanics Letters, 2025, No. 3, pp. 219-225 (7 pages)
Configuring computational fluid dynamics (CFD) simulations typically demands extensive domain expertise, limiting broader access. Although large language models (LLMs) have advanced scientific computing, their use in automating CFD workflows is underdeveloped. We introduce a novel approach centered on domain-specific LLM adaptation. By fine-tuning Qwen2.5-7B-Instruct on NL2FOAM, our custom dataset of 28,716 natural language-to-OpenFOAM configuration pairs with chain-of-thought (CoT) annotations, we enable direct translation from natural language descriptions to executable CFD setups. A multi-agent system orchestrates the process, autonomously verifying inputs, generating configurations, running simulations, and correcting errors. Evaluation on a benchmark of 21 diverse flow cases demonstrates state-of-the-art performance, achieving 88.7% solution accuracy and an 82.6% first-attempt success rate. This significantly outperforms larger general-purpose models such as Qwen2.5-72B-Instruct, DeepSeek-R1, and Llama3.3-70B-Instruct, while also requiring fewer correction iterations and maintaining high computational efficiency. The results highlight the critical role of domain-specific adaptation in deploying LLM assistants for complex engineering workflows. Our code and fine-tuned model have been deposited at https://github.com/YYgroup/AutoCFD.
Keywords: Large language models; fine-tuning; computational fluid dynamics; automated CFD; multi-agent system
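To make the natural-language-to-OpenFOAM idea concrete, the sketch below shows what one supervised training record with a chain-of-thought field might look like. The field names and the file contents are assumptions for illustration only, not the released NL2FOAM schema.

```python
import json

# One hypothetical NL2FOAM-style record (schema invented for illustration).
record = {
    "instruction": "Simulate incompressible laminar flow in a 2D lid-driven cavity at Re = 100.",
    "chain_of_thought": ("Incompressible and laminar -> icoFoam; with U = 1 m/s and L = 0.1 m, "
                         "Re = U*L/nu = 100 gives nu = 1e-3; uniform mesh; moving lid, no-slip walls."),
    "output_files": {
        "system/controlDict": "application icoFoam;\nendTime 10;\ndeltaT 0.005;\n...",
        "constant/transportProperties": "nu [0 2 -1 0 0 0 0] 1e-3;\n",
    },
}
print(json.dumps(record, indent=2))
```

A fine-tuned model trained on such pairs would emit the `output_files` dictionary, which the multi-agent loop described above could then write to disk, run, and verify.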
An Analytical Review of Large Language Models Leveraging KDGI Fine-Tuning, Quantum Embedding's, and Multimodal Architectures
4
Authors: Uddagiri Sirisha, Chanumolu Kiran Kumar, Revathi Durgam, Poluru Eswaraiah, G Muni Nagamani. Computers, Materials & Continua, 2025, No. 6, pp. 4031-4059 (29 pages)
A complete examination of Large Language Models' strengths, problems, and applications is needed due to their rising use across disciplines. Current studies frequently focus on single-use situations and lack a comprehensive understanding of LLM architectural performance, strengths, and weaknesses. This gap precludes finding the appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies. In this research, 50 studies on 25+ LLMs, including GPT-3, GPT-4, Claude 3.5, DeepKet, and hybrid multimodal frameworks like ContextDET and GeoRSCLIP, are thoroughly reviewed. We propose an LLM application taxonomy by grouping techniques by task focus: healthcare, chemistry, sentiment analysis, agent-based simulations, and multimodal integration. Advanced methods like parameter-efficient tuning (LoRA), quantum-enhanced embeddings (DeepKet), retrieval-augmented generation (RAG), and safety-focused models (GalaxyGPT) are evaluated for dataset requirements, computational efficiency, and performance measures. Frameworks for ethical issues, data limitations, hallucinations, and KDGI-enhanced fine-tuning like Woodpecker's post-remedy corrections are highlighted. The investigation's scope, aims, and methods are described, but the primary results are not. The work reveals that domain-specialized fine-tuned LLMs employing RAG and quantum-enhanced embeddings perform better for context-heavy applications. In medical text normalization, ChatGPT-4 outperforms previous models, while multimodal frameworks such as GeoRSCLIP improve remote sensing. Parameter-efficient tuning technologies like LoRA have minimal computing cost and similar performance, demonstrating the necessity for adaptive models in multiple domains. The aims are to discover the optimal domain-specific models, explain domain-specific fine-tuning, and present quantum and multimodal LLMs that address scalability and cross-domain issues. The framework helps academics and practitioners identify, adapt, and innovate LLMs for different purposes. This work advances the field of efficient, interpretable, and ethical LLM application research.
Keywords: Large language models; quantum embeddings; fine-tuning techniques; multimodal architectures; ethical AI scenarios
An Adaptive Cubic Regularisation Algorithm Based on Affine Scaling Methods for Constrained Optimization
5
Authors: PEI Yonggang, WANG Jingyi. 《应用数学》 (PKU Core Journal), 2026, No. 1, pp. 258-277 (20 pages)
In this paper, an adaptive cubic regularisation algorithm based on affine scaling methods (ARCBASM) is proposed for solving nonlinear equality constrained programming with nonnegative constraints on variables. From the optimality conditions of the problem, we introduce an appropriate affine matrix and construct an affine scaling ARC subproblem with linearized constraints. Composite step methods and reduced Hessian methods are applied to tackle the linearized constraints. As a result, a standard unconstrained ARC subproblem is deduced and its solution can supply sufficient decrease. The fraction-to-the-boundary rule maintains the strict feasibility (for nonnegative constraints on variables) of every iteration point. Reflection techniques are employed to prevent the iterations from approaching zero too early. Under mild assumptions, global convergence of the algorithm is analysed. Preliminary numerical results are reported.
Keywords: Constrained optimization; adaptive cubic regularisation; affine scaling; global convergence
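For readers unfamiliar with ARC, the generic cubic-regularised model minimised at each iteration is sketched below in LaTeX; this is the standard unconstrained form, not the paper's affine-scaled subproblem with linearized constraints.

```latex
% Generic ARC model at iterate $x_k$: the step $s_k$ approximately minimises
\[
  m_k(s) \;=\; f(x_k) \;+\; \nabla f(x_k)^{\mathsf T} s
          \;+\; \tfrac{1}{2}\, s^{\mathsf T} B_k\, s
          \;+\; \tfrac{\sigma_k}{3}\, \lVert s \rVert^{3},
\]
% where $B_k$ approximates the Hessian and the regularisation weight $\sigma_k$
% is adapted from the ratio of actual to predicted reduction, analogously to
% trust-region radius updates.
```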
Evaluation of Reinforcement Learning-Based Adaptive Modulation in Shallow Sea Acoustic Communication
6
Authors: Yifan Qiu, Xiaoyu Yang, Feng Tong, Dongsheng Chen. 《哈尔滨工程大学学报(英文版)》, 2026, No. 1, pp. 292-299 (8 pages)
While reinforcement learning-based underwater acoustic adaptive modulation shows promise for enabling environment-adaptive communication, as supported by extensive simulation-based research, its practical performance remains underexplored in field investigations. To evaluate the practical applicability of this emerging technique in adverse shallow sea channels, a field experiment was conducted using three communication modes: orthogonal frequency division multiplexing (OFDM), M-ary frequency-shift keying (MFSK), and direct sequence spread spectrum (DSSS) for reinforcement learning-driven adaptive modulation. Specifically, a Q-learning method is used to select the optimal modulation mode according to the channel quality quantified by signal-to-noise ratio, multipath spread length, and Doppler frequency offset. Experimental results demonstrate that the reinforcement learning-based adaptive modulation scheme outperformed fixed threshold detection in terms of total throughput and average bit error rate, surpassing conventional adaptive modulation strategies.
Keywords: adaptive modulation; shallow sea; underwater acoustic modulation; reinforcement learning
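A minimal Q-learning sketch of the mode-selection loop described above; the state discretisation (three bins per channel metric) and the reward shaping are assumptions, not the authors' parameterisation.

```python
import numpy as np

# Actions: the three modulation modes from the abstract.
ACTIONS = ["OFDM", "MFSK", "DSSS"]
# States: discretised channel quality (SNR band, multipath spread band, Doppler band), 3 bins each.
N_STATES = 3 * 3 * 3

def state_index(snr_bin, multipath_bin, doppler_bin):
    return snr_bin * 9 + multipath_bin * 3 + doppler_bin

Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.1          # learning rate, discount factor, exploration rate

def choose_action(s, rng):
    """Epsilon-greedy selection of a modulation mode for channel state s."""
    return int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(Q[s]))

def update(s, a, reward, s_next):
    """One Q-learning step; the reward could combine throughput and bit error rate."""
    Q[s, a] += alpha * (reward + gamma * np.max(Q[s_next]) - Q[s, a])

# Usage (with a hypothetical modem and channel estimator in the loop):
# s = state_index(snr_bin, mp_bin, dop_bin)
# a = choose_action(s, np.random.default_rng())
# transmit with ACTIONS[a], measure the reward, observe the next channel state, then update(...)
```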
Dynamic psychological vulnerability and adaptation in rheumatoid arthritis: Trajectories, predictors, and interventions
7
Authors: Xue-Meng Chen, Xian Cheng, Wei Wu. World Journal of Psychiatry, 2026, No. 1, pp. 32-46 (15 pages)
Rheumatoid arthritis (RA) patients face significant psychological challenges alongside physical symptoms, necessitating a comprehensive understanding of how psychological vulnerability and adaptation patterns evolve throughout the disease course. This review examined 95 studies (2000-2025) from the PubMed, Web of Science, and CNKI databases, including longitudinal cohorts, randomized controlled trials, and mixed-methods research, to characterize the complex interplay between biological, psychological, and social factors affecting RA patients' mental health. Findings revealed three distinct vulnerability trajectories (45% persistently low, 30% fluctuating improvement, 25% persistently high) and four adaptation stages, with critical intervention periods occurring 3-6 months post-diagnosis and during disease flares. Multiple factors significantly influence psychological outcomes, including gender (females showing 1.8-fold increased risk), age (younger patients experiencing 42% higher vulnerability), pain intensity, inflammatory markers, and neuroendocrine dysregulation (48% showing cortisol rhythm disruption). Early psychological intervention (within 3 months of diagnosis) demonstrated robust benefits, reducing depression incidence by 42% with effects persisting 24-36 months, while different modalities showed complementary advantages: cognitive behavioral therapy for depression (Cohen's d = 0.68), mindfulness for pain acceptance (38% improvement), and peer support for meaning reconstruction (25.6% increase). These findings underscore the importance of integrating routine psychological assessment into standard RA care, developing stage-appropriate interventions, and advancing research toward personalized biopsychosocial approaches that address the dynamic psychological dimensions of the disease.
Keywords: Rheumatoid arthritis; psychological vulnerability; disease adaptation ability; dynamic changes; mental health
Adapter-Based Software Bus Architecture (Cited: 9)
8
Authors: Xu Zhengquan, Pan Xiaobo. 《华中科技大学学报(自然科学版)》 (EI, CAS, CSCD, PKU Core), 2005, No. 5, pp. 10-12 (3 pages)
An Adapter-based software bus architecture is proposed. The adapter is introduced as an intermediary between reusable components and the software bus; it manages and stores the assembly information of the system-related components and, during message exchange and data sharing, is responsible for message dispatching and data format conversion. The adapter also participates in composing reusable components and in extending and shielding the existing functions of reusable components.
Keywords: software bus; adapter; software architecture; component
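The role the abstract assigns to the adapter (mediating between reusable components and the bus, dispatching messages, and converting data formats) corresponds to the classic Adapter pattern. A minimal Python sketch with invented message formats follows; it is an illustration of the pattern, not the paper's design.

```python
class ReusableComponent:
    """Existing reusable component with its own payload format (illustrative)."""
    def handle(self, payload: dict) -> dict:
        return {"status": "ok", "echo": payload}

class BusMessage:
    """Flat message format assumed here for the software bus (illustrative)."""
    def __init__(self, body: str):
        self.body = body

class ComponentAdapter:
    """Adapter: mediates between the software bus and a reusable component,
    dispatching messages and converting data formats in both directions."""
    def __init__(self, component: ReusableComponent):
        self.component = component

    def dispatch(self, msg: BusMessage) -> BusMessage:
        payload = {"text": msg.body}                 # bus format -> component format
        result = self.component.handle(payload)      # delegate to the reusable component
        return BusMessage(body=str(result))          # component format -> bus format

bus_side = ComponentAdapter(ReusableComponent())
reply = bus_side.dispatch(BusMessage("hello bus"))
print(reply.body)
```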
Adapter-Pattern-Based Integration of Distance Education Resources
9
Authors: Li Juan, Li Jie. 《计算机与数字工程》, 2008, No. 10, pp. 192-195 (4 pages)
This paper first analyzes the multi-platform problem of distance education resource information across universities and then examines the key technical difficulties in integrating the existing systems. On the basis of preserving overall functionality and the original frameworks, a resource system integration is constructed and implemented based on the Adapter pattern, thereby reducing the sharing and security problems brought by multiple systems; it is an exploration of and attempt at effectively integrating and sharing various resources.
Keywords: distance education; system integration; Adapter pattern
Reusing Serial Communication Protocols in Configuration Software with the Adapter Pattern
10
Authors: Shen Yuliang, You Dahai. 《微计算机信息》, 2004, No. 12, pp. 24, 54 (2 pages)
This paper uses the Adapter pattern to achieve reuse of the serial communication protocol part of configuration software. Applying this method can greatly reduce the workload of serial communication between the configuration software and third-party devices.
Keywords: Adapter pattern; serial communication; software reuse
Design and Implementation of the Adapter in TTCN-3-Based Mobile IPv6 Protocol Conformance Testing
11
Author: Liu Jing. 《内蒙古科技与经济》, 2010, No. 6, pp. 105-107 (3 pages)
This article presents the design and implementation of the Adapter for a TTCN-3 test system for the Mobile IPv6 protocol. First, based on the functional division of the TTCN-3 Adapter, the test requirements of the three Adapter modules for Mobile IPv6 conformance testing, namely the System Adapter (SA), the Platform Adapter (PA), and the Codec (CD), are set out. Then, building on the Adapter interface functions already provided in Tau (a test tool), extended functionality of certain functions specific to Mobile IPv6 conformance is designed and implemented, and the difficult points in these functions are identified.
Keywords: Adapter; TCI; TRI; codec; test control
An Adapter-Pattern-Based Application Integration Scheme for Collaborative Office Systems (Cited: 1)
12
Authors: Yu Benhao, Wang Lifang. 《科学技术与工程》, 2007, No. 7, pp. 1490-1494, 1501 (6 pages)
Enterprise application integration is a core problem in current enterprise informatization, and data sharing during business integration and system integration is one of the most important problems it must solve. Based on the characteristics of the Adapter pattern, this paper analyzes the characteristics of message-transport components in a collaborative office environment and, from the perspective of data integration, proposes an ABI (adapter-based integration) architectural framework for sharing information resources in that environment; the model, message-transport components, and framework of the integration scheme are discussed in detail.
Keywords: enterprise application integration; Adapter pattern; collaborative office; adapter-based integration (ABI)
Fusing Nonprobability and Probability Samples via Adaptive LASSO Model-Assisted Calibration
13
Authors: Wang Xiaoning, Sun Min, Zou Mengwen. 《调研世界》, 2025, No. 9, pp. 84-96 (13 pages)
In past survey research, most statisticians have relied on probability samples for estimation, but with the development of data technology and the rising cost of probability sampling, the timeliness and convenience of nonprobability sampling have made it increasingly common. Against this background, and considering high-dimensional auxiliary variables, this paper introduces the adaptive LASSO into model-assisted calibration estimation, selecting strongly correlated auxiliary variables to calibrate the weights of the nonprobability sample. This addresses the difficulty of statistical inference caused by the unknown inclusion probabilities of nonprobability samples and fuses the nonprobability sample with a probability sample to estimate the population. Simulation analysis and empirical analyses on two datasets, a netizen social consciousness survey and the Chinese Social Survey, verify that the proposed adaptive-LASSO-based model-assisted calibration data fusion method can effectively improve estimation accuracy.
Keywords: data fusion; model-assisted calibration; adaptive LASSO
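A compact sketch of the adaptive LASSO variable-selection step described above, implemented via the usual column-rescaling trick; the penalty level and weight exponent are illustrative, and the subsequent calibration of the nonprobability-sample weights (not shown) would use the selected auxiliary variables.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

def adaptive_lasso_select(X, y, gamma=1.0, alpha=0.1):
    """Select auxiliary variables with the adaptive LASSO.

    The adaptive LASSO penalises |beta_j| by 1/|beta_init_j|^gamma; here it is
    solved by rescaling column j by |beta_init_j|^gamma and running an ordinary
    Lasso on the rescaled design, then mapping coefficients back.
    """
    beta_init = LinearRegression().fit(X, y).coef_           # pilot (unpenalised) estimate
    w = np.abs(beta_init) ** gamma + 1e-8                    # adaptive rescaling factors
    lasso = Lasso(alpha=alpha, max_iter=10_000).fit(X * w, y)
    beta = lasso.coef_ * w                                    # back to the original scale
    selected = np.flatnonzero(beta != 0)                      # auxiliary variables kept for calibration
    return selected, beta
```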
Rotary-scaling fine-tuning (RSFT) method for optimizing railway wheel profiles and its application to a locomotive (Cited: 13)
14
Authors: Yunguang Ye, Yayun Qi, Dachuan Shi, Yu Sun, Yichang Zhou, Markus Hecht. Railway Engineering Science, 2020, No. 2, pp. 160-183 (24 pages)
The existing multi-objective wheel profile optimization methods mainly consist of three sub-modules: (1) wheel profile generation, (2) multi-body dynamics simulation, and (3) an optimization algorithm. For the first module, a comparably conservative rotary-scaling fine-tuning (RSFT) method, which introduces two design variables and an empirical formula, is proposed to fine-tune the traditional wheel profiles for improving their engineering applicability. For the second module, for the TRAXX locomotives serving on the Blankenburg–Rubeland line, an optimization function representing the relationship between the wheel profile and the wheel–rail wear number is established based on a Kriging surrogate model (KSM). For the third module, a method combining the regression capability of KSM with the iterative computing power of particle swarm optimization (PSO) is proposed to quickly and reliably implement the task of optimizing wheel profiles. Finally, with the RSFT–KSM–PSO method, we propose two wear-resistant wheel profiles for the TRAXX locomotives serving on the Blankenburg–Rubeland line, namely S1002-S and S1002-M. The S1002-S profile minimizes the total wear number by 30%, while the S1002-M profile makes the wear distribution more uniform through a proper sacrifice of the tread wear number, and the total wear number is reduced by 21%. The quasi-static and hunting stability tests further demonstrate that the profile designed by the RSFT–KSM–PSO method is promising for practical engineering applications.
Keywords: Wheel profile optimization; wear reduction; rotary-scaling fine-tuning; particle swarm optimization; Kriging surrogate model
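The third module couples a surrogate model with particle swarm optimization. The sketch below is a generic PSO over box-bounded design variables driven by any surrogate predictor (for example, a Kriging estimate of the wear number); it is not the authors' implementation, and the example surrogate is hypothetical.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over box bounds.

    f      : objective to minimise (e.g., a Kriging/RBF surrogate of the wear number)
    bounds : array of shape (dim, 2) holding [low, high] per design variable
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                      # particle velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])  # personal bests
    g = pbest[np.argmin(pbest_val)].copy()                    # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                            # keep particles inside the box
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())

# Hypothetical two-variable surrogate standing in for the RSFT design space:
surrogate = lambda z: (z[0] - 0.3) ** 2 + (z[1] + 0.1) ** 2
best, val = pso_minimize(surrogate, np.array([[-1.0, 1.0], [-1.0, 1.0]]))
```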
Fine-tuning electronic structure of N-doped graphitic carbon-supported Co- and Fe-incorporated Mo_(2)C to achieve ultrahigh electrochemical water oxidation activity (Cited: 2)
15
Authors: Md. Selim Arif Sher Shah, Hyeonjung Jung, Vinod K. Paidi, Kug-Seung Lee, Jeong Woo Han, Jong Hyeok Park. Carbon Energy (SCIE, EI, CAS, CSCD), 2024, No. 7, pp. 134-149 (16 pages)
Mo_(2)C is an excellent electrocatalyst for the hydrogen evolution reaction (HER). However, Mo_(2)C is a poor electrocatalyst for the oxygen evolution reaction (OER). Herein, two different elements, namely Co and Fe, are incorporated in Mo_(2)C, which therefore has a finely tuned electronic structure that is not achievable by incorporation of either metal alone. Consequently, the resulting electrocatalyst Co_(0.8)Fe_(0.2)-Mo_(2)C-80 displayed excellent OER catalytic performance, evidenced by a low overpotential of 214.0 (and 246.5) mV to attain a current density of 10 (and 50) mA cm^(-2), an ultralow Tafel slope of 38.4 mV dec^(-1), and long-term stability in alkaline medium. Theoretical data demonstrate that Co_(0.8)Fe_(0.2)-Mo_(2)C-80 requires the lowest overpotential (1.00 V) for OER and that the Co centers are the active sites. The ultrahigh catalytic performance of the electrocatalyst is attributed to its excellent intrinsic catalytic activity arising from the high Brunauer-Emmett-Teller specific surface area, large electrochemically active surface area, small Tafel slope, and low charge-transfer resistance.
Keywords: fine-tuning electronic structures; heteronanostructures; Mo_(2)C; multimetal (Co/Fe); oxygen evolution reaction
3D heterogeneous integration of wideband RF chips using silicon-based adapter board technology (Cited: 4)
16
Authors: Wang Yong, Wei Wei, Yang Dong, Sun Biao, Zhang Xingwen, Zhang Youming, Huang Fengyi. Journal of Southeast University (English Edition) (EI, CAS), 2021, No. 1, pp. 8-13 (6 pages)
An ultra-wideband mixing component, cascaded from a mixing multi-function chip and a frequency multiplier multi-function chip, was demonstrated and implemented using 3D heterogeneous integration based on silicon adapter board technology. Four layers of high-resistance silicon substrate stack packaging are implemented based on the wafer-level gold-gold bonding process. Each layer adopts through-silicon via (TSV) technology to realize signal interconnection. A core monolithic microwave integrated circuit (MMIC) is embedded in the silicon cavity, and the silicon-based filter is integrated with the high-resistance silicon substrate. The interconnect line, cavity, and filter of the silicon-based adapter board are designed with AutoCAD, and HFSS is adopted for 3D electromagnetic field simulation. According to the measured results, the radio frequency (RF) of the mixing multi-function chip is 40-44 GHz and its intermediate frequency (IF) can cover the Ku band, with a chip size of 10 mm × 11 mm × 1 mm. The multiplier multi-function chip operates at 16-20 GHz. The fundamental suppression is greater than 50 dB and the second harmonic suppression is better than 40 dB, with a chip size of 8 mm × 8 mm × 1 mm. The cascaded, fully assembled mixing component achieves a spur of better than -50 dBc and a gain of better than 15 dB.
Keywords: silicon-based adapter board; frequency mixing; frequency multiplier; multi-function chip
Railway wheel profile fine-tuning system for profile recommendation (Cited: 3)
17
Authors: Yunguang Ye, Jonas Vuitton, Yu Sun, Markus Hecht. Railway Engineering Science, 2021, No. 1, pp. 74-93 (20 pages)
This paper develops a wheel profile fine-tuning system (WPFTS) that comprehensively considers the influence of wheel profile on wheel damage, vehicle stability, vehicle safety, and passenger comfort. WPFTS can recommend one or more optimized wheel profiles according to train operators' needs, e.g., reducing wheel wear, mitigating the development of wheel out-of-roundness (OOR), or improving the shape stability of the wheel profile. Specifically, WPFTS includes four modules: (I) a wheel profile generation module based on the rotary-scaling fine-tuning (RSFT) method; (II) a multi-objective generation module consisting of a rigid multi-body dynamics simulation (MBS) model, an analytical model, and a rigid–flexible MBS model, for generating 11 objectives related to wheel damage, vehicle stability, vehicle safety, and passenger comfort; (III) a weight assignment module consisting of an adaptive weight assignment strategy and a manual weight assignment strategy; and (IV) an optimization module based on radial basis function (RBF) and particle swarm optimization (PSO). Finally, three cases are introduced to show how WPFTS recommends a wheel profile according to train operators' needs. Among them, a wheel profile with high shape stability, a wheel profile for mitigating the development of wheel OOR, and a wheel profile considering hunting stability and derailment safety are developed, respectively.
Keywords: Wheel profile fine-tuning system; optimization; recommendation; wear; contact concentration index; multi-body dynamics simulation (MBS); railway wheel
Study on influencing factors of adapters separating with the underwater missile
18
Authors: Fu Debin, Niu Qinglin, Liu Xiaojun, Li Xia. Journal of Beijing Institute of Technology (EI, CAS), 2015, No. 2, pp. 158-163 (6 pages)
To analyze the main factors affecting the separation reliability between a missile and adapters during the launching process, a six-DOF underwater dynamic model of the missile and adapters is utilized to simulate the separation process, considering the elastic forces of the separating springs, hydrodynamic forces, gravity, and buoyancy. Moreover, a criterion based on the maximum separating distance is put forward to determine whether the adapters separate from the missile reliably. The results show that the magnitude and position of the elastic force, as well as the wedge angle and mass of the adapter, significantly affect the separating process. The local sensitivity analysis for the reference status of the design parameters demonstrates that the wedge angle of the adapters has the maximum influence, about 70.4%, on the separating distance.
Keywords: adapter; hydrodynamic force; separating spring; reliability criterion
Comparing Fine-Tuning, Zero and Few-Shot Strategies with Large Language Models in Hate Speech Detection in English
19
Authors: Ronghao Pan, José Antonio García-Díaz, Rafael Valencia-García. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 9, pp. 2849-2868 (20 pages)
Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One of the relevant capabilities is contextual learning, which involves the ability to receive instructions in natural language or task demonstrations to generate expected outputs for test instances without the need for additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users can engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs, ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, it is found that the encoder-decoder model called Zephyr achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, it is confirmed that the evaluated models perform well in hate text detection, as they beat the best result in the HatEval task leaderboard. The error analysis shows that contextual learning had difficulty distinguishing between types of hate speech and figurative language. However, the fine-tuned approach tends to produce many false positives.
Keywords: Hate speech detection; zero-shot; few-shot; fine-tuning; natural language processing
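The zero-shot and few-shot settings compared in the paper differ only in whether labelled demonstrations are prepended to the prompt. The sketch below builds both prompt variants; `llm_generate` is a hypothetical stand-in for whatever completion API is used, and the demonstration texts are invented.

```python
def build_prompt(text, examples=None):
    """Zero-shot when examples is None; few-shot when labelled demonstrations are prepended."""
    header = ("Classify the following message as HATEFUL or NOT_HATEFUL. "
              "Answer with a single label.\n\n")
    shots = ""
    if examples:
        shots = "".join(f"Message: {m}\nLabel: {lab}\n\n" for m, lab in examples)
    return header + shots + f"Message: {text}\nLabel:"

# Invented demonstrations for illustration only.
few_shot_examples = [
    ("You are all wonderful people.", "NOT_HATEFUL"),
    ("Go back to where you came from.", "HATEFUL"),
]
zero_shot_prompt = build_prompt("Example tweet to classify")
few_shot_prompt = build_prompt("Example tweet to classify", few_shot_examples)
# label = llm_generate(few_shot_prompt).strip()   # hypothetical API call
```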
Design and Implementation of a Universal Adapter for the Android Platform
20
Author: Wu Wei. 《电脑知识与技术》, 2016, No. 4, pp. 99-101 (3 pages)
In Android development, the ListView is the most frequently used UI component. In the traditional development model, writing ListView code is tedious and projects become cluttered with large numbers of Adapter classes. Based on an analysis of how ListView works, this paper designs and implements a universal Adapter that can be used in all kinds of ListView scenarios, so that no matter how many ListViews a project has, only one Adapter is needed, thereby solving practical problems in project development such as highly redundant and complex code and difficult maintenance.
Keywords: Android; ListView; Adapter