1 article found
Unlocking Edge Fine-Tuning: A Sample-Efficient Language-Empowered Split Fine-Tuning Framework
Authors: Zuyi Huang, Yue Wang, Jia Liu, Haodong Yi, Lejun Ai, Min Chen, Salman A. AlQahtani
Journal: Computers, Materials & Continua, 2026, Issue 4, pp. 1584-1606 (23 pages)
Abstract: The personalized fine-tuning of large language models (LLMs) on edge devices is severely constrained by limited computation resources. Although split federated learning alleviates on-device burdens, its effectiveness diminishes in few-shot reasoning scenarios due to the low data efficiency of conventional supervised fine-tuning, which leads to excessive communication overhead. To address this, we propose Language-Empowered Split Fine-Tuning (LESFT), a framework that integrates split architectures with a contrastive-inspired fine-tuning paradigm. LESFT simultaneously learns from multiple logically equivalent but linguistically diverse reasoning chains, providing richer supervisory signals and improving data efficiency. This process-oriented training allows more effective reasoning adaptation with fewer samples. Extensive experiments demonstrate that LESFT consistently outperforms strong baselines such as SplitLoRA in task accuracy on GSM8K, CommonsenseQA, and AQUA_RAT, with the largest gains observed on Qwen2.5-3B. These results indicate that LESFT can effectively adapt large language models for reasoning tasks under the computational and communication constraints of edge environments.
Keywords: large language models; edge computing; efficient fine-tuning; few-shot fine-tuning; split federated learning
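The abstract's core idea is to supervise fine-tuning with several logically equivalent but linguistically diverse reasoning chains per question, rather than a single reference answer. The sketch below is a minimal, hypothetical illustration of such a multi-chain objective, not the paper's actual loss: the name multi_chain_loss, the plain per-chain cross-entropy, and the uniform averaging are all our assumptions, and the split client/server partitioning of the model is abstracted away.

```python
# Hypothetical sketch of multi-chain supervision in the spirit of LESFT.
# The abstract does not specify the exact objective; this only illustrates
# how averaging losses over K equivalent reasoning chains yields a richer
# supervisory signal than a single target sequence.
import torch
import torch.nn.functional as F

def multi_chain_loss(logits_per_chain, targets_per_chain):
    """Average token-level cross-entropy over K logically equivalent but
    linguistically diverse reasoning chains for the same question.

    logits_per_chain:  list of K tensors, each of shape (seq_len, vocab_size)
    targets_per_chain: list of K tensors, each of shape (seq_len,)
    """
    chain_losses = [
        F.cross_entropy(logits, targets)
        for logits, targets in zip(logits_per_chain, targets_per_chain)
    ]
    # Uniform averaging (an assumption) exposes the model to several valid
    # phrasings of the same reasoning process in one update step.
    return torch.stack(chain_losses).mean()

# Toy usage: random tensors stand in for the model's output logits, which in
# a split setup would be produced jointly by the client- and server-side parts.
vocab_size, num_chains, seq_len = 100, 3, 12
logits = [torch.randn(seq_len, vocab_size, requires_grad=True)
          for _ in range(num_chains)]
targets = [torch.randint(0, vocab_size, (seq_len,))
           for _ in range(num_chains)]
loss = multi_chain_loss(logits, targets)
loss.backward()
print(f"multi-chain loss: {loss.item():.4f}")
```

In a split federated deployment, only the gradient of this loss with respect to the cut-layer activations would travel back to the edge device, so the per-sample data efficiency of the objective directly determines the communication overhead the abstract refers to.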