4 articles found
1. DRL-based federated self-supervised learning for task offloading and resource allocation in ISAC-enabled vehicle edge computing
Authors: Xueying Gu, Qiong Wu, Pingyi Fan, Nan Cheng, Wen Chen, Khaled B. Letaief. Digital Communications and Networks, 2025, Issue 5, pp. 1614-1627 (14 pages)
Intelligent Transportation Systems (ITS) leverage Integrated Sensing and Communications (ISAC) to enhance data exchange between vehicles and infrastructure in the Internet of Vehicles (IoV). This integration inevitably increases computing demands, risking real-time system stability. Vehicle Edge Computing (VEC) addresses this by offloading tasks to Road Side Units (RSUs), ensuring timely services. Our previous work, the FLSimCo algorithm, which uses local resources for federated Self-Supervised Learning (SSL), has a limitation: vehicles often cannot complete all iteration tasks. Our improved algorithm offloads partial tasks to RSUs and optimizes energy consumption by adjusting transmission power, CPU frequency, and task assignment ratios, balancing local and RSU-based training. Meanwhile, setting an offloading threshold further prevents inefficiencies. Simulation results show that the enhanced algorithm reduces energy consumption and improves offloading efficiency and accuracy of federated SSL.
Keywords: Integrated Sensing and Communications (ISAC); federated self-supervised learning; resource allocation and offloading; Deep Reinforcement Learning (DRL); Vehicle Edge Computing (VEC)
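The abstract above describes splitting each vehicle's SSL workload between local execution and an RSU while minimizing energy, with a threshold below which offloading is skipped. The sketch below illustrates that offloading-ratio idea only; the energy model, parameter names, and grid search are illustrative assumptions, not the paper's DRL formulation.

```python
# Illustrative energy model for a (1-ratio) local / ratio offloaded task split.
# kappa * f^2 * cycles is the common switched-capacitance CPU energy model;
# upload energy is transmission power times airtime. All values are assumptions.

def task_energy(ratio, total_cycles, f_local, kappa, tx_power, rate, task_bits):
    """Total energy when `ratio` of the task is offloaded to the RSU.

    ratio        -- fraction of the task offloaded (0..1)
    total_cycles -- CPU cycles the whole task needs
    f_local      -- local CPU frequency (Hz)
    kappa        -- effective switched-capacitance coefficient
    tx_power     -- uplink transmission power (W)
    rate         -- uplink rate (bit/s)
    task_bits    -- bits uploaded if the whole task were offloaded
    """
    local_energy = kappa * (f_local ** 2) * total_cycles * (1 - ratio)
    upload_energy = tx_power * (ratio * task_bits) / rate
    return local_energy + upload_energy

def best_ratio(threshold=0.1, steps=100, **params):
    """Grid-search the offloading ratio; ratios below `threshold` fall back
    to all-local execution, mirroring the abstract's offloading threshold."""
    candidates = [i / steps for i in range(steps + 1)]
    best = min(candidates, key=lambda r: task_energy(r, **params))
    return 0.0 if best < threshold else best
```

In the paper a DRL agent jointly tunes transmission power, CPU frequency, and the assignment ratio; the grid search here only stands in for that decision step over a single variable.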
2. Mobility-aware federated self-supervised learning in vehicular network (cited by 2)
Authors: Xueying Gu, Qiong Wu, Qiang Fan, Pingyi Fan. Urban Lifeline, 2024, Issue 1, pp. 122-131 (10 pages)
The development of the Internet of Things has led to a significant increase in the number of devices, consequently generating a vast amount of data and resulting in an influx of unlabeled data. Collecting these data enables the training of robust models to support a broader range of applications. However, labeling these data can be costly, and models dependent on labeled data are often unsuitable for rapidly evolving fields like vehicular networks and the mobile Internet of Things, where new data continuously emerge. To address this challenge, Self-Supervised Learning (SSL) offers a way to train models without labels. Nevertheless, the data stored locally in vehicles are considered private, and vehicles are reluctant to share them with others. Federated Learning (FL) is an advanced distributed machine learning approach that protects each vehicle's privacy by allowing models to be trained locally while model parameters are exchanged across multiple devices. Additionally, vehicles capture images while driving through cameras mounted on their rooftops. If a vehicle's velocity is too high, the captured images, denoted as local data, may be blurred. Simple aggregation of such data can reduce the accuracy of the aggregated model and slow the convergence of FL. This paper proposes an FL algorithm that aggregates based on image blur levels, called FLSimCo. The algorithm does not require labels and serves as a pre-training stage for SSL in vehicular networks. Simulation results demonstrate that the proposed algorithm achieves fast and stable convergence.
Keywords: federated learning; self-supervised learning; vehicular network; mobility
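FLSimCo aggregates client models according to image blur levels, so clients with blurrier data (e.g. high-velocity vehicles) contribute less. A minimal sketch of that weighting idea, assuming an inverse-blur rule over flattened parameter vectors; the paper's exact weighting scheme and blur metric are not reproduced here.

```python
import numpy as np

def blur_weighted_aggregate(client_params, blur_scores, eps=1e-8):
    """Average client parameter vectors with weights ~ 1 / blur level.

    client_params -- list of 1-D parameter arrays, one per client
    blur_scores   -- one non-negative blur level per client (higher = blurrier)
    """
    weights = np.array([1.0 / (b + eps) for b in blur_scores])
    weights /= weights.sum()               # normalize to a convex combination
    stacked = np.stack(client_params)      # shape: (n_clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)
```

A common practical blur score is the variance of the image's Laplacian (sharp images have high-frequency content, so higher variance), but any monotone blur measure would slot into `blur_scores` the same way.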
3. A knowledge-unlearning-based backdoor defense method for federated self-supervised learning
Authors: Zhu Wanquan, Zhu Chengcheng, Zhang Jiale, Sun Xiaobing. Journal of Yangzhou University (Natural Science Edition), 2025, Issue 4, pp. 43-50, 59 (9 pages)
To address the vulnerability of federated self-supervised learning to backdoor attacks, this paper proposes a backdoor defense method based on knowledge unlearning. Building on the mechanism by which backdoor attacks succeed, the method reverse-engineers the trigger by computing the embedding similarity of sample pairs; the server then distributes the reconstructed trigger so that local models can perform unlearning with it, and bidirectional optimization training erases the backdoor features implanted by malicious participants. As a result, when the global model of federated self-supervised learning is transferred to downstream tasks, it avoids the misclassification caused by the backdoor trigger while preserving accuracy on clean inputs. Experimental results show that the proposed method effectively defends against several typical backdoor attacks in federated learning, achieves superior performance, eliminates the negative impact of backdoor attacks during model training, and does not rely on a trusted central server, providing an efficient and robust solution for training on sensitive data under backdoor threats in self-supervised learning.
Keywords: federated self-supervised learning; trigger reconstruction; unlearning; backdoor attack; backdoor defense
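The defense described above hinges on one step that can be sketched compactly: given a reconstructed trigger, push the model's embedding of a triggered input away from the embedding of the matching clean input, so the trigger no longer collapses poisoned samples onto the attacker's target representation. The linear "encoder", cosine objective, and numerical gradient below are all illustrative assumptions; the paper's trigger-reconstruction procedure and bidirectional optimization are not reproduced.

```python
import numpy as np

def cosine(u, v, eps=1e-8):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def unlearn_step(W, x_clean, trigger, lr=0.1):
    """One gradient step that DEcreases cos(W x_clean, W (x_clean + trigger)).

    W is a toy linear encoder standing in for the SSL backbone. The gradient
    of the similarity is estimated numerically (fine for a sketch this size).
    """
    x_trig = x_clean + trigger
    base = cosine(W @ x_clean, W @ x_trig)
    grad = np.zeros_like(W)
    h = 1e-5
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy()
            Wp[i, j] += h
            grad[i, j] = (cosine(Wp @ x_clean, Wp @ x_trig) - base) / h
    return W - lr * grad   # descend on similarity => "forget" the trigger
```

Repeating this step drives the triggered and clean embeddings apart, which is the unlearning direction; the full method additionally preserves clean-input behavior via its bidirectional optimization.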
4. Research on data sharing for federated semi-supervised learning under non-IID data (cited by 2)
Authors: Gu Yonggen, Gao Lingxuan, Wu Xiaohong, Tao Jie. Computer Engineering, 2024, Issue 6, pp. 188-196 (9 pages)
Federated learning is a distributed machine learning approach that protects the privacy of local data by having dispersed devices jointly train a shared model. Federated learning is usually trained under the assumption that all data are labeled; in practice, fully labeled data cannot be guaranteed, which motivates federated semi-supervised learning. Its two main challenges are how to exploit unlabeled data to improve system performance and how to mitigate the negative impact of data heterogeneity. For the scenario where labeled data exist only on the server, this paper designs Share&Mark, a sharing-based method for federated semi-supervised learning systems in which data shared by clients are labeled by experts and then participate in federated training. To make full use of the shared data, the proportion of each client model in federated aggregation is dynamically adjusted according to that model's loss on the server dataset, an aggregation algorithm called ServerLoss. Taking privacy sacrifice, communication overhead, and manual annotation cost into account, experiments at different sharing rates show that a sharing ratio of about 3% balances these factors. At this ratio, the FedMatch federated semi-supervised learning system using Share&Mark improves model accuracy by more than 8% on both the CIFAR-10 and Fashion-MNIST datasets, with good robustness.
Keywords: federated semi-supervised learning; federated learning; non-IID data; robustness; aggregation algorithm; data sharing
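The ServerLoss idea above, giving lower-loss client models a larger share in aggregation based on their loss over the server's labeled dataset, can be sketched as follows. The softmax-of-negative-loss weighting and temperature are illustrative assumptions; the paper's exact weighting rule may differ.

```python
import numpy as np

def serverloss_weights(server_losses, temperature=1.0):
    """Aggregation weights that decrease with a client's loss on server data."""
    losses = np.asarray(server_losses, dtype=float)
    logits = -losses / temperature
    logits -= logits.max()          # shift for numerical stability
    w = np.exp(logits)
    return w / w.sum()              # sums to 1: a convex combination

def aggregate(client_params, server_losses):
    """Weighted federated averaging of flattened client parameter vectors."""
    w = serverloss_weights(server_losses)
    return (w[:, None] * np.stack(client_params)).sum(axis=0)
```

Lowering `temperature` sharpens the weighting toward the best-performing client; raising it approaches plain FedAvg, which is one way to trade off heterogeneity mitigation against using every client's data.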