To address key problems in the development of computing power network security services, namely the inefficient orchestration and scheduling of resource pools and the insufficient integration of cloud, network, and security technologies, and against the background of the cloud-network-security collaborative innovation and operating pressure that operators face in the security-service market, this paper proposes an orchestration and scheduling strategy for computing power network security based on segment routing over IPv6 (SRv6) combined with an application responsive network (ARN). The strategy deeply integrates the SRv6 protocol to achieve end-to-end connectivity between the network and services, uses ARN to simplify data-plane identifiers and strengthen dynamic scheduling, and builds a network data plane with efficient orchestration and scheduling capabilities that supports the flexible composition and rapid deployment of security services. The main results comprise the SRv6+ARN orchestration and scheduling architecture, its key technologies, and its reliability guarantees, providing technical support for operators to build cloud-network-security collaborative security resource pools for computing power networks.
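The SRv6-based steering of a security service chain can be illustrated with a minimal sketch. All SIDs and service roles below are illustrative assumptions, not the paper's configuration: an ordered chain of security functions is expressed as a segment (SID) list, which the SRv6 Segment Routing Header (per RFC 8754) stores in reverse order, with Segments Left pointing at the next segment to visit.

```python
import ipaddress

def build_srh_segment_list(service_chain):
    """Map an ordered service chain (first hop -> egress) to the SRH
    segment list, which stores segments in reverse order (the last
    segment of the path sits at index 0)."""
    sids = [ipaddress.IPv6Address(s) for s in service_chain]
    return list(reversed(sids))

# Assumed SIDs for a firewall, an intrusion-prevention function, and the egress.
chain = ["fc00:1::1", "fc00:2::1", "fc00:3::1"]
segment_list = build_srh_segment_list(chain)
segments_left = len(segment_list) - 1  # index of the first segment to visit
```

The reversal mirrors how SRv6 routers consume the list: each segment endpoint decrements Segments Left and copies the segment at that index into the IPv6 destination address.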
Funding: partly funded by the MOST Major Research and Development Project (Grant No. 2021YFB2900204), the Natural Science Foundation of China (Grant No. 62132004), the Sichuan Major R&D Project (Grant No. 22QYCX0168), and the Key Research and Development Program of Zhejiang Province (Grant No. 2022C01093).
Abstract: Computing Power Network (CPN) is emerging as an important research interest in beyond-5G (B5G) and 6G. This paper constructs a CPN based on Federated Learning (FL), in which all Multi-access Edge Computing (MEC) servers are linked to a computing power center via wireless links. Through this FL procedure, each MEC server in the CPN can independently train learning models on its local data, thus preserving data privacy. However, it is challenging to motivate MEC servers to participate efficiently in the FL process and to ensure their energy efficiency. To address these issues, we first introduce an incentive mechanism based on the Stackelberg game framework to motivate MEC servers. We then formulate a joint optimization algorithm that allocates communication resources (wireless bandwidth and transmission power) and computation resources (computation capacity of the MEC servers) while guaranteeing the local training accuracy of each MEC server. Numerical results validate that the proposed incentive mechanism and joint optimization algorithm improve the energy efficiency and performance of the considered CPN.
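The Stackelberg incentive idea can be sketched minimally (the utility functions and all parameters below are illustrative assumptions, not the paper's model): the computing power center (leader) posts a unit reward r, each MEC server i (follower) chooses its computation effort f_i to maximize r*f_i - c_i*f_i**2, and the leader anticipates the resulting best responses f_i* = r/(2*c_i) when choosing r.

```python
def follower_best_response(r, c):
    """Closed-form maximizer of r*f - c*f**2 over f >= 0."""
    return r / (2.0 * c)

def leader_utility(r, lam, costs):
    """Leader values total computation at lam per unit and pays reward r."""
    total = sum(follower_best_response(r, c) for c in costs)
    return (lam - r) * total

def optimal_reward(lam, costs, grid=10_000):
    """Grid search over r in [0, lam]; the utility is concave in r,
    so a fine grid lands near the unique optimum."""
    best_r, best_u = 0.0, float("-inf")
    for k in range(grid + 1):
        r = lam * k / grid
        u = leader_utility(r, lam, costs)
        if u > best_u:
            best_r, best_u = r, u
    return best_r

costs = [0.5, 1.0, 2.0]   # assumed per-server energy cost coefficients c_i
lam = 4.0                 # assumed leader valuation per unit of computation
r_star = optimal_reward(lam, costs)
```

Under these assumptions the leader utility is (lam - r) * r * S with S = sum(1/(2*c_i)), so the search should recover r* = lam/2 regardless of the individual cost coefficients.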
Funding: supported in part by the Chongqing Postgraduate Research and Innovation Project (CYB22250), the National Natural Science Foundation of China (62271096, U20A20157), the Natural Science Foundation of Chongqing, China (CSTB2023NSCQ-LZX0134, CSTB2024NSCQ-LZX0124), the University Innovation Research Group of Chongqing (CXQT20017), and the Youth Innovation Group Support Program of the ICE Discipline of CQUPT (SCIE-QN-2022-04).
Abstract: Computing Power Network (CPN) is a new paradigm that integrates communication, computing, and storage resources to provide services for tasks. However, tasks composed of non-independent subtasks have preferences for the resources required at each stage, which increases the difficulty of heterogeneous resource allocation and degrades the latency performance of CPN services. Motivated by this, this paper jointly optimizes the full service cycle of tasks, including transmission, task partitioning, and offloading. First, the transmission bandwidth is dynamically configured based on the delay sensitivity of tasks. Second, using real-time information from the edge resource clusters and state resource clusters in the network, the optimal partitioning of a computation task is derived. Third, personalized resource allocation schemes are customized for computation and storage tasks, respectively. Finally, the impact of resource parameter configuration on the latency violation probability of the CPN is revealed. Compared with the benchmark schemes, the proposed scheme reduces the network latency violation probability by a factor of up to 1.17 in the same network setting.
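The latency-violation metric can be illustrated with a small Monte Carlo sketch (the exponential task-size model, the deadline, and all rates below are assumptions for illustration, not the paper's system model): end-to-end latency is modeled as transmission time plus computation time, and the violation probability is the fraction of tasks whose latency exceeds the deadline.

```python
import random

def simulate_violation_prob(n_tasks=100_000, deadline=1.0,
                            bandwidth=10.0, cpu_rate=8.0, seed=0):
    """Estimate P(latency > deadline) for latency = size/bandwidth +
    cycles/cpu_rate, with assumed Exp(1) task sizes and workloads."""
    rng = random.Random(seed)
    violations = 0
    for _ in range(n_tasks):
        size = rng.expovariate(1.0)    # assumed task size distribution
        cycles = rng.expovariate(1.0)  # assumed workload distribution
        latency = size / bandwidth + cycles / cpu_rate
        if latency > deadline:
            violations += 1
    return violations / n_tasks

p = simulate_violation_prob()
p_fast = simulate_violation_prob(bandwidth=20.0)  # more bandwidth, fewer violations
```

Because the seed fixes the sampled tasks, doubling the bandwidth can only shrink each task's latency, so the estimated violation probability is monotonically non-increasing in bandwidth, the qualitative behavior the paper's parameter study examines.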