Abstract: This paper considers the collaborative resource allocation problem over a hybrid cloud center and edge server network, an emerging infrastructure for efficient Internet services. The cloud center acts as a pool of virtually inexhaustible computation and storage resources. The edge servers often have limited computation and storage capacity but can respond quickly to service requests from end users. Upon receiving service requests, edge servers assign them to themselves, to neighboring edge servers, or to the cloud center, aiming to minimize the overall network cost. This paper first establishes an optimization model for this problem. Then, exploiting the separable structure of the model, we utilize the alternating direction method of multipliers (ADMM) to develop a fully collaborative resource allocation algorithm: the edge servers and the cloud center autonomously collaborate to compute their local optimization variables and the prices of network resources, and jointly reach an optimal solution. Numerical experiments demonstrate the effectiveness of both the hybrid network infrastructure and the proposed algorithm.
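To make the ADMM-based collaboration concrete, the minimal Python sketch below solves a toy sharing-type problem, not the paper's model: each edge server decides how much load to offload to the cloud, local costs are quadratic, and the cloud charges a congestion cost on the aggregate offloaded load. The cost coefficients, the penalty parameter rho, and the closed-form updates are illustrative assumptions; the scaled dual variable plays the role of the network resource price that the nodes coordinate on.

```python
# A minimal sharing-ADMM sketch (toy model, not the paper's formulation).
# Edge server i splits demand d[i] between local processing and cloud
# offloading x[i]; local cost a[i]*(d[i]-x[i])^2, cloud congestion cost
# beta*(sum_i x[i])^2.  rho*u acts as the price of the shared cloud resource.
import numpy as np

rng = np.random.default_rng(0)
N = 5                              # number of edge servers (assumed toy size)
d = rng.uniform(1.0, 4.0, N)       # service demand at each edge server
a = rng.uniform(0.5, 2.0, N)       # local processing cost coefficients
beta, rho = 0.3, 1.0               # cloud congestion cost weight, ADMM penalty

x = np.zeros(N)                    # offloaded load decided by each server
z_bar = 0.0                        # cloud-side copy of the average offload
u = 0.0                            # scaled dual variable (price / rho)

for k in range(200):
    x_bar = x.mean()
    # Local x-update: each edge server solves its own 1-D quadratic
    # subproblem in closed form and projects onto [0, d_i]; fully parallel.
    v = x - x_bar + z_bar - u
    x = np.clip((2 * a * d + rho * v) / (2 * a + rho), 0.0, d)
    x_bar = x.mean()
    # Cloud z-update: accounts for the congestion cost on the aggregate load.
    z_bar = rho * (u + x_bar) / (2 * beta * N + rho)
    # Price (dual) update shared by all nodes.
    u += x_bar - z_bar

print("offloaded load per edge server:", np.round(x, 3))
print("implied cloud resource price  :", round(rho * u, 3))
```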
Funding: Supported by the Shanghai Sailing Program (No. 19YF1423700), the National Key Research and Development Program of China (No. 2016YFB0900100), and the Key Project of the Shanghai Science and Technology Committee (No. 18DZ1100303).
Abstract: This paper investigates the network partition and edge server placement problem to exploit the benefits of edge computing for distributed state estimation. A constrained many-objective optimization problem is formulated to minimize the cost of edge server deployment, operation, and maintenance, reduce the disparity in partition sizes, reduce the level of coupling between connected partitions, and maximize the inner cohesion of each partition. Edge server capacities are constrained against both underload and overload. To solve the problem efficiently, an improved non-dominated sorting genetic algorithm III (NSGA-III) is developed, with a specially designed directed mutation operator based on the topological characteristics of the partitions to accelerate convergence. A case study validates that the proposed formulations effectively capture the practical concerns and reveal their trade-offs, and that the improved algorithm outperforms existing representative algorithms on large-scale networks in converging to a near-optimal solution. The optimized result contributes significantly to real-time distributed state estimation.
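As a rough illustration of the partition-quality criteria (not the paper's exact formulation), the sketch below evaluates size imbalance, inter-partition coupling, and intra-partition cohesion for a candidate node-to-partition assignment on a toy graph; an evaluation function of this kind is what an NSGA-III style many-objective search would repeatedly call. The adjacency-matrix representation and the specific objective definitions are illustrative assumptions.

```python
# Illustrative partition-quality objectives (assumed definitions, toy graph).
import numpy as np

def partition_objectives(adj: np.ndarray, assign: np.ndarray, n_parts: int):
    """adj: symmetric 0/1 adjacency matrix; assign[i]: partition index of node i."""
    sizes = np.array([(assign == p).sum() for p in range(n_parts)])
    size_imbalance = sizes.std()                     # to be minimized

    same = assign[:, None] == assign[None, :]        # endpoints share a partition?
    internal = (adj * same).sum() / 2.0              # edges inside partitions
    boundary = (adj * ~same).sum() / 2.0             # edges crossing partitions
    coupling = boundary                              # to be minimized
    cohesion = internal / max(adj.sum() / 2.0, 1.0)  # to be maximized (negated below)
    return size_imbalance, coupling, -cohesion

# Toy 6-node ring network split into partitions {0,1,2} and {3,4,5}.
adj = np.zeros((6, 6), dtype=int)
for i in range(6):
    adj[i, (i + 1) % 6] = adj[(i + 1) % 6, i] = 1
assign = np.array([0, 0, 0, 1, 1, 1])
print(partition_objectives(adj, assign, n_parts=2))
```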
Funding: Supported in part by the National Natural Science Foundation of China (No. 61701197), in part by the Open Research Fund of the State Key Laboratory of Integrated Services Networks (No. ISN23-11), in part by the National Key Research and Development Program of China (No. 2021YFA1000500(4)), in part by the 111 Project (No. B23008), in part by the Future Network Scientific Research Fund Project (No. FNSRFP2021-YB-11), and in part by the project of the Changzhou Key Laboratory of 5G+Industrial Internet Fusion Application (No. CM20223015).
Abstract: Federated edge learning (FEEL) for vehicular networks is considered a promising technology for reducing the computation workload while preserving user privacy. In a FEEL system, vehicles upload data to the edge servers, which use the vehicles' data to update local models and then return the results to the vehicles, avoiding the sharing of the original data. However, the cache queue at the edge server is limited, and the channel between the edge server and each vehicle is time-varying. Thus, it is challenging to select a suitable number of vehicles so that the uploaded data keeps the edge server's cache queue stable while maximizing the learning accuracy. Moreover, selecting vehicles with different resource statuses to upload data affects the total amount of data involved in training, which in turn affects the model accuracy. In this paper, we propose a vehicle selection scheme that maximizes the learning accuracy while ensuring the stability of the cache queue, taking into account the statuses of all vehicles within the coverage of the edge server. The performance of the scheme is evaluated through simulation experiments, which indicate that it outperforms the known benchmark scheme.
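As a hedged sketch of how learning benefit and queue stability could be traded off (the abstract does not specify the paper's selection rule), the toy loop below uses a Lyapunov drift-plus-penalty style test: a vehicle is selected in a round only when its weighted utility V*u_i exceeds Q*s_i, where Q is the edge server's cache backlog and s_i the data the vehicle would upload over its current channel. The weight V, the service rate, and the channel and data models are all illustrative assumptions.

```python
# Toy drift-plus-penalty vehicle selection (assumed rule, not the paper's scheme).
import numpy as np

rng = np.random.default_rng(1)
n_vehicles, rounds = 20, 50
V = 5.0            # accuracy-vs-queue trade-off weight (assumed)
service = 40.0     # data the edge server can drain from the queue per round (assumed)
slot = 5.0         # upload slot length scaling the channel rate (assumed)

Q = 0.0            # cache-queue backlog at the edge server
for t in range(rounds):
    rate = rng.uniform(0.5, 2.0, n_vehicles)     # time-varying channel quality
    data = rng.uniform(1.0, 8.0, n_vehicles)     # local data each vehicle holds
    upload = np.minimum(data, rate * slot)       # data a vehicle could upload this round
    utility = np.log1p(upload)                   # diminishing returns on accuracy

    selected = V * utility - Q * upload > 0      # drift-plus-penalty selection test
    arrivals = upload[selected].sum()
    Q = max(Q + arrivals - service, 0.0)         # cache-queue update

    if t % 10 == 0:
        print(f"round {t:2d}: selected {int(selected.sum()):2d} vehicles, "
              f"queue backlog {Q:7.2f}")
```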