As an emerging memory device, the memristor shows great potential in neuromorphic computing applications due to its low power consumption. This review focuses on the applications of low-power memristors in various aspects. The concept and structure of memristor devices are introduced. The selection of functional materials for low-power memristors is discussed, including ion-transport materials, phase-change materials, magnetoresistive materials, and ferroelectric materials. Two common types of memristor arrays, 1T1R and 1S1R crossbar arrays, are introduced, and physical diagrams of edge-computing memristor chips are discussed in detail. Potential applications of low-power memristors in advanced multi-value storage, digital logic gates, and analogue neuromorphic computing are summarized. Finally, the future challenges and outlook of memristor-based neuromorphic computing are discussed in depth.
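The analogue neuromorphic computing mentioned above rests on the crossbar's ability to compute a vector-matrix product in one step: each memristor cell stores a conductance, and by Ohm's and Kirchhoff's laws each column current sums the products of input voltages and cell conductances. A minimal sketch of this ideal behaviour follows; the function name and the 2x3 conductance values are illustrative, not from the review, and wire resistance and sneak-path effects are ignored:

```python
# Sketch of the analog multiply-accumulate a memristor crossbar performs:
# each cell stores a conductance G[i][j]; applying voltages V[i] to the rows
# yields column currents I[j] = sum_i V[i] * G[i][j] (Ohm's + Kirchhoff's laws).

def crossbar_mac(voltages, conductances):
    """Ideal crossbar: column currents = V^T G (no wire resistance, no sneak paths)."""
    n_rows = len(conductances)
    n_cols = len(conductances[0])
    currents = [0.0] * n_cols
    for i in range(n_rows):
        for j in range(n_cols):
            currents[j] += voltages[i] * conductances[i][j]
    return currents

# 2x3 crossbar: conductances in siemens, inputs in volts (illustrative values)
G = [[1e-3, 2e-3, 0.5e-3],
     [2e-3, 1e-3, 1.5e-3]]
V = [0.2, 0.1]
print(crossbar_mac(V, G))  # column currents in amperes
```

A 1T1R cell adds a series transistor to select the cell and suppress sneak currents; the ideal dot-product model above is unchanged by that addition.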
A novel dynamic software allocation algorithm suitable for pervasive computing environments is proposed to minimize the power consumption of mobile devices. Considering the power cost incurred by the computation, communication, and migration of software components, a power consumption model of component assignments between a mobile device and a server is set up. The mobility of components and the mobility relationships between components are also taken into account in software allocation. Using network flow theory, the power-conservation optimization problem is transformed into the optimal bipartition problem of a flow network, which can be partitioned by the max-flow min-cut algorithm. Simulation results show that the proposed algorithm can save significantly more energy than existing algorithms.
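The min-cut construction described above can be sketched with a stdlib-only Edmonds-Karp max-flow. The graph, capacities, and cost semantics below are illustrative stand-ins for the paper's model: cutting the edge from the device source to a component places that component on the server, cutting its edge to the server sink keeps it on the device, and the inter-component edges charge a communication/migration cost when two components end up on different sides:

```python
from collections import deque

def max_flow_min_cut(capacity, s, t):
    """Edmonds-Karp max-flow; returns (flow value, source side of a min cut)."""
    flow = {u: {v: 0 for v in capacity[u]} for u in capacity}
    def bfs():
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v, cap in capacity[u].items():
                if v not in parent and cap - flow[u][v] > 0:
                    parent[v] = u
                    if v == t:
                        return parent
                    q.append(v)
        return None
    total = 0
    while True:
        parent = bfs()
        if parent is None:
            break
        v, bottleneck = t, float('inf')
        while parent[v] is not None:          # find bottleneck on the path
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = t
        while parent[v] is not None:          # push flow along the path
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
    side, q = {s}, deque([s])                 # residual-reachable = source side
    while q:
        u = q.popleft()
        for v, cap in capacity[u].items():
            if v not in side and cap - flow[u][v] > 0:
                side.add(v)
                q.append(v)
    return total, side

def build(edges):
    g = {}
    for u, v, cap in edges:
        g.setdefault(u, {})[v] = cap
        g.setdefault(v, {}).setdefault(u, 0)  # reverse edge for residuals
    return g

# Hypothetical 2-component instance: cap(dev->c) = cost if c runs on the server,
# cap(c->srv) = cost if c stays on the device, (a,b) edges = cost of splitting.
g = build([('dev', 'a', 3), ('dev', 'b', 1),
           ('a', 'srv', 2), ('b', 'srv', 4),
           ('a', 'b', 1), ('b', 'a', 1)])
cut_value, device_side = max_flow_min_cut(g, 'dev', 'srv')
print(cut_value, sorted(device_side - {'dev'}))  # → 4 []
```

By the max-flow min-cut theorem the returned cut value equals the max-flow value, and the residual-reachable node set gives one minimum-cost assignment of components to the two sides.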
With the rapid development of cloud computing, edge computing, and smart devices, computing power resources show a trend of ubiquitous deployment. The traditional network architecture cannot efficiently leverage these distributed computing power resources due to the computing power island effect. To overcome these problems and improve network efficiency, a new network computing paradigm, the Computing Power Network (CPN), is proposed. A computing power network can connect ubiquitous and heterogeneous computing power resources through networking to realize flexible computing power scheduling. In this survey, we make an exhaustive review of state-of-the-art research efforts on computing power networks. We first give an overview of the computing power network, including its definition, architecture, and advantages. Next, a comprehensive elaboration of issues on computing power modeling, information awareness and announcement, resource allocation, network forwarding, the computing power transaction platform, and the resource orchestration platform is presented. A computing power network testbed is built and evaluated. The applications and use cases of computing power networks are discussed. Then, the key enabling technologies for computing power networks are introduced. Finally, open challenges and future research directions are presented.
In the 6G era, service forms in which computing power acts as the core will be ubiquitous in the network. At the same time, collaboration among edge computing, cloud computing, and the network is needed to support edge computing services with strong demand for computing power, so as to optimize resource utilization. On this basis, the article discusses the research background, key techniques, and main application scenarios of the computing power network. The demonstration shows that the technical solution of the computing power network can effectively meet the multi-level deployment and flexible scheduling needs of future 6G business for computing, storage, and networking, and adapt to the integration needs of computing power and network in various scenarios, such as user-oriented, government- and enterprise-oriented, and open computing power scenarios.
The unmanned aerial vehicle (UAV)-enabled mobile edge computing (MEC) architecture is expected to be a powerful technique to facilitate 5G-and-beyond ubiquitous wireless connectivity and diverse vertical applications and services, anytime and anywhere. Wireless power transfer (WPT) is another promising technology to prolong the operation time of low-power wireless devices in the era of the Internet of Things (IoT). However, the integration of WPT and UAV-enabled MEC systems is far from well studied, especially in dynamic environments. To tackle this issue, this paper investigates stochastic computation offloading and trajectory scheduling for the UAV-enabled wireless powered MEC system. A UAV offers both RF wireless power transmission and computation services for IoT devices. Considering stochastic task arrivals and random channel conditions, a long-term average energy-efficiency (EE) minimization problem is formulated. Due to the non-convexity and the time-domain coupling of the variables in the formulated problem, a low-complexity online computation offloading and trajectory scheduling algorithm (OCOTSA) is proposed by exploiting Lyapunov optimization. Simulation results verify that there exists a balance between EE and service delay, and demonstrate that the system EE performance obtained by the proposed scheme outperforms other benchmark schemes.
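Lyapunov (drift-plus-penalty) optimization of the kind OCOTSA exploits can be illustrated with a toy queue: each slot, the controller picks the served bits b minimizing V·energy(b) − Q·b, where Q is a virtual backlog queue and V weights energy against delay. All numbers below are illustrative, not the paper's system model:

```python
import random

def drift_plus_penalty_offload(T=200, V=10.0, seed=0):
    """Toy Lyapunov drift-plus-penalty controller: per slot, choose the service
    amount that minimizes V * energy(bits) - Q * bits."""
    rng = random.Random(seed)
    Q = 0.0                            # virtual queue of unserved task bits
    energy_total = 0.0
    actions = [0, 1, 2, 3]             # candidate bits served per slot (illustrative)
    for _ in range(T):
        a = rng.uniform(0.0, 2.0)      # stochastic task arrival
        h = rng.uniform(0.5, 1.5)      # random channel gain
        energy = lambda bits: (bits ** 2) / h   # convex in bits, cheaper on good channels
        b = min(actions, key=lambda bits: V * energy(bits) - Q * bits)
        energy_total += energy(b)
        Q = max(Q + a - b, 0.0)        # virtual queue update
    return energy_total / T, Q

avg_energy, backlog = drift_plus_penalty_offload()
print(avg_energy, backlog)
```

Raising V pushes the controller toward lower energy at the price of a longer queue, which mirrors the EE-versus-delay balance the paper's simulations report.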
In the era of the Internet of Things (IoT), mobile edge computing (MEC) and wireless power transfer (WPT) provide a prominent solution for computation-intensive applications to enhance computation capability and achieve a sustainable energy supply. A wireless-powered mobile edge computing (WPMEC) system consisting of a hybrid access point (HAP) combined with MEC servers and many users is considered in this paper. In particular, a novel multi-user cooperation scheme based on orthogonal frequency division multiple access (OFDMA) is provided to improve computation performance, where, with the aid of a helper, users can split their computation tasks into parts for local computing, offloading to the corresponding helper, and offloading to the HAP for remote execution. Specifically, we aim to maximize the weighted sum computation rate (WSCR) by jointly optimizing the time assignment, computation-task allocation, and transmission power while maintaining energy neutrality. We transform the original non-convex optimization problem into a convex one and then obtain a semi-closed-form expression of the optimal solution using convex optimization techniques. Simulation results demonstrate that the proposed multi-user cooperation-assisted WPMEC scheme greatly improves the WSCR of all users compared with existing schemes. In addition, the OFDMA protocol increases fairness and decreases delay among the users compared to the TDMA protocol.
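The flavor of time-allocation subproblem solved in semi-closed form there can be sketched numerically: the bits offloaded in time t with a fixed harvested-energy budget, t·log2(1 + e·h/t), is concave in t, so a weighted sum over a single time split can be maximized by ternary search. The weights, energy budgets, and channel gains below are made-up illustrative values, not the paper's parameters:

```python
import math

def rate(t, energy, gain):
    """Bits offloaded in time t with an energy budget spread over t (concave in t)."""
    if t <= 0:
        return 0.0
    return t * math.log2(1.0 + energy * gain / t)

def wscr(tau, w=(1.0, 2.0), e=(0.5, 0.8), h=(1.2, 0.9)):
    """Weighted sum computation rate for two users splitting a unit time frame."""
    return w[0] * rate(tau, e[0], h[0]) + w[1] * rate(1.0 - tau, e[1], h[1])

# Ternary search over the single time-split variable (valid since wscr is concave)
lo, hi = 0.0, 1.0
for _ in range(100):
    m1 = lo + (hi - lo) / 3
    m2 = hi - (hi - lo) / 3
    if wscr(m1) < wscr(m2):
        lo = m1
    else:
        hi = m2
tau_star = (lo + hi) / 2
print(round(tau_star, 3), round(wscr(tau_star), 3))
```

Concavity of each term in its own time share is what makes the full problem convex after the paper's transformation; here it guarantees the ternary search converges to the global optimum of the one-dimensional split.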
Driven by diverse intelligent applications, computing capability is moving from the central cloud to the edge of the network in the form of small cloud nodes, forming a distributed computing power network. Tasked with both packet transmission and data processing, such a network requires joint optimization of communications and computing. Considering the diverse requirements of applications, we develop a dynamic routing control policy to determine both paths and computing nodes in a distributed computing power network. Different from traditional routing protocols, additional computing-related metrics are taken into consideration in the proposed policy. Based on multi-attribute decision theory and fuzzy logic theory, we propose two routing selection algorithms: the Fuzzy Logic-Based Routing (FLBR) algorithm and the low-complexity Pairwise Multi-Attribute Decision-Making (lPMADM) algorithm. Simulation results show that the proposed policy achieves better performance in average processing delay, user satisfaction, and load balancing compared with existing works.
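A multi-attribute route score of the general flavor used by FLBR/lPMADM can be sketched with simple additive weighting: min-max-normalize each metric, invert the cost-type metrics (delay, load) so that lower is better, and rank candidate paths by the weighted sum. The paths, metric values, and weights below are hypothetical, and this is plain additive weighting rather than the paper's fuzzy-logic or pairwise schemes:

```python
def rank_paths(paths, weights, cost_attrs=frozenset({'delay', 'load'})):
    """Simple additive weighting: normalize each attribute to [0, 1], invert
    cost attributes, and score each path by the weighted sum."""
    attrs = weights.keys()
    lo = {a: min(p[a] for p in paths.values()) for a in attrs}
    hi = {a: max(p[a] for p in paths.values()) for a in attrs}
    def norm(a, v):
        if hi[a] == lo[a]:
            return 1.0
        x = (v - lo[a]) / (hi[a] - lo[a])
        return 1.0 - x if a in cost_attrs else x
    scores = {name: sum(weights[a] * norm(a, p[a]) for a in attrs)
              for name, p in paths.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical candidate paths to computing nodes (values illustrative)
paths = {
    'path_A': {'delay': 20.0, 'load': 0.7, 'compute': 8.0},
    'path_B': {'delay': 35.0, 'load': 0.3, 'compute': 16.0},
    'path_C': {'delay': 25.0, 'load': 0.5, 'compute': 4.0},
}
weights = {'delay': 0.5, 'load': 0.2, 'compute': 0.3}
print(rank_paths(paths, weights))  # path_A ranks first under these weights
```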
Electric power systems provide the backbone of modern industrial societies. Enabling scalable grid analytics is the keystone of successfully operating large transmission and distribution systems. However, today's power systems suffer from ever-increasing computational burdens in sustaining expanding communities and the deep integration of renewable energy resources, as well as in managing the resulting huge volumes of data. These unprecedented challenges call for transformative analytics to support the resilient operation of power systems. Recently, the explosive growth of quantum computing techniques has ignited new hopes of revolutionizing power system computations. Quantum computing harnesses quantum mechanisms to solve traditionally intractable computational problems, which may lead to ultra-scalable and efficient power grid analytics. This paper reviews the newly emerging application of quantum computing techniques to power systems. We present a comprehensive overview of existing quantum-engineered power analytics from different operational perspectives, including static analysis, transient analysis, stochastic analysis, optimization, stability, and control. We thoroughly discuss the related quantum algorithms, their benefits and limitations, hardware implementations, and recommended practices. We also review quantum networking techniques for ensuring secure communication in power systems in the quantum era. Finally, we discuss challenges and future research directions. This paper will hopefully stimulate increasing attention to the development of quantum-engineered smart grids.
1. Introduction. The rapid expansion of satellite constellations in recent years has resulted in the generation of massive amounts of data. This surge in data, coupled with diverse application scenarios, underscores the escalating demand for high-performance computing over space. Computing over space entails the deployment of computational resources on platforms such as satellites to process large-scale data under constraints such as high radiation exposure, restricted power consumption, and minimized weight.
To address the sensitivity and uncertainty limitations of single-energy computed tomography (CT) calibration methods in computing the proton stopping power ratio during treatment planning, different methods have been proposed using a dual-energy CT approach. This paper reviews the most recent dual-energy CT approaches for computing the proton stopping power ratio, including image-domain and projection-domain methods. The advantages and uncertainties of these methods are analyzed based on existing studies. This paper highlights recent advances in dual-energy CT, discussing their implementation, advantages, limitations, and potential for clinical adoption.
With the development of the smart grid, the electric power supervisory control and data acquisition (SCADA) system is limited by the traditional IT infrastructure, leading to low resource utilization and poor scalability. Information islands are formed due to poor system interoperability. The development of innovative applications is limited, and the launch period of new businesses is long. Management costs and risks increase, and equipment utilization declines. To address these issues, a professional private cloud solution is introduced to integrate the electric power SCADA system, and an experimental study of its applicability, reliability, security, and real-time performance is conducted. The experimental results show that the professional private cloud solution is technically and commercially feasible, meeting the requirements of the electric power SCADA system.
As an open network architecture, Wireless Computing Power Networks (WCPN) pose new challenges for achieving efficient and secure resource management, because of issues such as insecure communication channels and untrusted device terminals. Blockchain, as a shared, immutable distributed ledger, provides a secure resource management solution for WCPN. However, integrating blockchain into WCPN faces challenges such as device heterogeneity, monitoring of communication states, and the dynamic nature of the network. Digital Twins (DT), in contrast, can accurately maintain digital models of physical entities through real-time data updates and self-learning, enabling continuous optimization of WCPN, improving synchronization performance, ensuring real-time accuracy, and supporting the smooth operation of WCPN services. In this paper, we propose a DT for blockchain-empowered WCPN architecture that guarantees real-time data transmission between physical entities and digital models. We adopt an enumeration-based optimal placement algorithm (EOPA) and an improved simulated-annealing-based near-optimal placement algorithm (ISAPA) to achieve the minimum average DT synchronization latency under the constraint of DT error. Numerical results show that the proposed solution outperforms benchmarks in terms of average synchronization latency.
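The simulated-annealing placement idea behind ISAPA can be sketched generically: assign each digital twin to an edge server, propose single-twin moves, and accept worse placements with a probability that decays with temperature. The latency matrix, the cooling schedule, and the absence of capacity constraints are all simplifications of this sketch, not the paper's formulation:

```python
import math
import random

def simulated_annealing_placement(latency, iters=5000, t0=1.0, alpha=0.999, seed=42):
    """Near-optimal DT placement by simulated annealing (a generic stand-in for
    ISAPA).  latency[d][s] = sync latency of twin d hosted on server s."""
    rng = random.Random(seed)
    n, m = len(latency), len(latency[0])
    place = [rng.randrange(m) for _ in range(n)]          # random initial placement
    cost = lambda p: sum(latency[d][p[d]] for d in range(n)) / n
    cur = cost(place)
    best, best_cost = place[:], cur
    temp = t0
    for _ in range(iters):
        d = rng.randrange(n)                              # move one twin
        old = place[d]
        place[d] = rng.randrange(m)
        new = cost(place)
        # accept improvements always, worse moves with Boltzmann probability
        if new <= cur or rng.random() < math.exp((cur - new) / temp):
            cur = new
            if new < best_cost:
                best, best_cost = place[:], new
        else:
            place[d] = old                                # revert the move
        temp *= alpha                                     # geometric cooling
    return best, best_cost

# Illustrative 5-twin x 3-server latency matrix
lat = [[1.2, 3.4, 2.2],
       [2.8, 1.1, 3.0],
       [0.9, 2.5, 1.7],
       [3.1, 1.4, 0.8],
       [2.0, 2.0, 1.0]]
placement, avg_latency = simulated_annealing_placement(lat)
print(placement, round(avg_latency, 3))
```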
Benefiting from wireless power transfer (WPT) and mobile edge computing (MEC), wireless powered MEC systems have attracted widespread attention. Specifically, we design an online offloading scheme based on deep reinforcement learning that maximizes the computation rate and minimizes the energy consumption of all wireless devices (WDs). Extensive results validate that the proposed scheme achieves a better tradeoff between energy consumption and computation delay.
The separation issue is one of the most important problems in cloud computing security. Tenants should be separated from each other on the cloud infrastructure, and different users of one tenant should be separated from each other under the constraints of security policies. Drawing on the notions of trusted cloud computing and trustworthiness in the cloud, this paper formally describes a multi-level authorization separation model and proposes a series of rules to summarize the separation property of the model. The correctness of the rules is proved. Furthermore, based on this model, a tenant separation mechanism is deployed in a real-world mixed-criticality information system. Performance benchmarks have shown the availability and efficiency of this mechanism.
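A minimal executable reading of such a separation policy (illustrative only, not the paper's formal rules): deny all cross-tenant access outright, and within a tenant apply Bell-LaPadula-style level checks between subjects and objects:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Subject:
    tenant: str
    level: int   # clearance

@dataclass(frozen=True)
class Obj:
    tenant: str
    level: int   # classification

def can_read(s: Subject, o: Obj) -> bool:
    """Tenant separation plus 'no read up': cross-tenant access is always
    denied; within a tenant, read only objects at or below your own level."""
    return s.tenant == o.tenant and s.level >= o.level

def can_write(s: Subject, o: Obj) -> bool:
    """'No write down': within the tenant, write only to objects at or above."""
    return s.tenant == o.tenant and s.level <= o.level

alice = Subject('tenantA', 2)
print(can_read(alice, Obj('tenantA', 1)))   # True: same tenant, lower level
print(can_read(alice, Obj('tenantB', 1)))   # False: tenant separation
print(can_write(alice, Obj('tenantA', 1)))  # False: no write down
```

The two predicates together capture the multi-level flavor of the model: tenant equality enforces infrastructure-level separation, while the level comparisons enforce intra-tenant policy.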
The Computing Power Network (CPN) is emerging as one of the important research interests in beyond-5G (B5G) and 6G. This paper constructs a CPN based on Federated Learning (FL), where all Multi-access Edge Computing (MEC) servers are linked to a computing power center via wireless links. Through this FL procedure, each MEC server in the CPN can independently train the learning models using localized data, thus preserving data privacy. However, it is challenging to motivate MEC servers to participate in the FL process efficiently and to ensure energy efficiency for MEC servers. To address these issues, we first introduce an incentive mechanism using the Stackelberg game framework to motivate MEC servers. Afterwards, we formulate a comprehensive algorithm to jointly optimize the communication resource (wireless bandwidth and transmission power) allocations and the computation resource (computation capacity of MEC servers) allocations while ensuring the local training accuracy of each MEC server. The numerical results validate that the proposed incentive mechanism and joint optimization algorithm improve the energy efficiency and performance of the considered CPN.
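The Stackelberg structure can be sketched numerically: the leader (computing power center) announces a reward R, the followers (MEC servers) settle into an equilibrium of contributions via best-response iteration, and the leader grid-searches R against a concave valuation of the total contribution. The proportional sharing rule, unit costs, and square-root valuation below are illustrative assumptions, not the paper's utility functions:

```python
import math

def follower_equilibrium(R, costs, iters=200):
    """Followers share reward R in proportion to contributions x_i, paying unit
    cost c_i; best response to the others' total X is sqrt(R*X/c) - X."""
    x = [0.1] * len(costs)
    for _ in range(iters):
        for i, c in enumerate(costs):
            others = sum(x) - x[i]
            if others > 0:
                x[i] = max(math.sqrt(R * others / c) - others, 0.0)
            else:
                x[i] = 0.1            # restart a degenerate all-zero state
    return x

def leader_best_reward(costs, value=3.0):
    """Leader grid-searches the reward R maximizing concave valuation of the
    induced total contribution minus the reward paid out."""
    def utility(R):
        x = follower_equilibrium(R, costs)
        return value * math.sqrt(sum(x)) - R
    return max((0.5 * k for k in range(1, 41)), key=utility)

costs = [1.0, 1.5, 2.0]               # illustrative unit costs for three MEC servers
R_star = leader_best_reward(costs)
x_star = follower_equilibrium(R_star, costs)
print(R_star, [round(v, 3) for v in x_star])
```

Backward induction is what makes this a Stackelberg rather than a simultaneous game: the leader optimizes while anticipating the followers' equilibrium response to each candidate reward.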
In this paper, we study an Intelligent Reflecting Surface (IRS)-assisted Mobile Edge Computing (MEC) system under eavesdropping threats, where the IRS is used to enhance energy signal transmission and offloading performance between Wireless Devices (WDs) and the Access Point (AP). Specifically, in the proposed scheme, the AP first powers all WDs via wireless power transfer through both direct and IRS-assisted links. Then, powered by the harvested energy, all WDs securely offload their computation tasks through the two links in time-division multiple access mode. To determine the local and offloaded computation bits, we formulate an optimization problem that jointly designs the IRS's phase shifts and allocates the time slots under security and energy constraints. To cope with this non-convex optimization problem, we adopt semidefinite relaxation, singular value decomposition techniques, and the Lagrange dual method. Moreover, we propose a dichotomous particle swarm algorithm based on the bisection method to solve the overall optimization problem and improve the convergence speed. The numerical results illustrate that the proposed scheme can boost the MEC performance and secure computation rates compared with other IRS-assisted MEC benchmark schemes.
Traditional digital processing approaches are based on semiconductor transistors, which suffer from high power consumption that aggravates with technology node scaling. To solve this problem definitively, a number of emerging non-volatile nanodevices are under intense investigation. Meanwhile, novel computing circuits are being invented to exploit the full potential of these nanodevices. The combination of non-volatile nanodevices with suitable computing paradigms has many merits compared with structures based on complementary metal-oxide-semiconductor (CMOS) transistor technology, such as zero standby power, ultra-high density, non-volatility, and acceptable access speed. In this paper, we overview and compare the computing paradigms based on emerging nanodevices aimed at ultra-low dissipation.
This paper investigates a multi-Unmanned Aerial Vehicle (UAV)-assisted wireless-powered Mobile Edge Computing (MEC) system, where UAVs provide computation and powering services to mobile terminals. We aim to maximize the number of completed computation tasks by jointly optimizing the offloading decisions of all terminals and the trajectory planning of all UAVs. The action space of the system is extremely large and grows exponentially with the number of UAVs. In this case, single-agent learning would require an overlarge neural network, resulting in insufficient exploration. However, the offloading decisions and trajectory planning are two subproblems performed by different executants, providing an opportunity for problem decomposition. We thus adopt the idea of decomposition and propose a 2-Tiered Multi-agent Soft Actor-Critic (2T-MSAC) algorithm, decomposing a single neural network into multiple small-scale networks. In the first tier, a single agent is used for offloading decisions, and an online pretrained model based on imitation learning is specially designed to accelerate the training of this agent. In the second tier, the UAVs utilize multiple agents to plan their trajectories. Each agent influences the parameter updates of other agents through actions and rewards, thereby achieving joint optimization. Simulation results demonstrate that the proposed algorithm can be applied to scenarios with various location distributions of terminals, outperforming existing benchmarks that perform well only in specific scenarios. In particular, 2T-MSAC increases the number of completed tasks by 45.5% in the scenario with uneven terminal distributions. Moreover, the pretrained model based on imitation learning reduces the convergence time of 2T-MSAC by 58.2%.
Federated Learning (FL) is a novel distributed machine learning methodology that addresses large-scale parallel computing challenges while safeguarding data security. However, the traditional FL model in communication scenarios, whether for uplink or downlink communications, may give rise to several network problems, such as bandwidth occupation, additional network latency, and bandwidth fragmentation. In this paper, we propose an adaptive chained training approach (FedACT) for FL in computing power networks. First, a Computation-driven Clustering Strategy (CCS) is designed: the server clusters clients by task processing delays to minimize waiting delays at the central server. Second, we propose a Genetic-Algorithm-based Sorting (GAS) method to optimize the order in which clients participate in training. Finally, based on the table lookup and forwarding rules of the Segment Routing over IPv6 (SRv6) protocol, the sorting results of GAS are written into the SRv6 packet header to control the order in which clients participate in model training. We conduct extensive experiments on the CIFAR-10 and MNIST datasets, and the results demonstrate that the proposed algorithm offers improved accuracy, diminished communication costs, and reduced network delays.
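A GAS-style ordering search can be sketched with a small permutation genetic algorithm: the fitness is the total waiting delay of one chained round (the sum of cumulative completion times when clients train one after another), evolved with order crossover and swap mutation. The population sizes, rates, and per-client delays below are illustrative, not the paper's configuration:

```python
import random

def chained_delay(order, delays):
    """Total waiting in a chained round: sum of cumulative completion times."""
    t, total = 0.0, 0.0
    for i in order:
        t += delays[i]
        total += t
    return total

def ga_sort(delays, pop_size=40, gens=150, mut=0.2, seed=7):
    """Evolve a client ordering minimizing chained_delay (a GAS-style sketch)."""
    rng = random.Random(seed)
    n = len(delays)
    def crossover(a, b):
        # order crossover: copy a slice of parent a, fill the rest in b's order
        i, j = sorted(rng.sample(range(n), 2))
        child = [None] * n
        child[i:j] = a[i:j]
        rest = [g for g in b if g not in child[i:j]]
        k = 0
        for idx in range(n):
            if child[idx] is None:
                child[idx] = rest[k]; k += 1
        return child
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda o: chained_delay(o, delays))
        next_pop = pop[:2]                         # elitism: keep the two best
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:10], 2)         # parents drawn from the top 10
            child = crossover(a, b)
            if rng.random() < mut:                 # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=lambda o: chained_delay(o, delays))

delays = [5.0, 2.0, 8.0, 1.0, 9.0, 4.0]   # illustrative per-client processing delays
best = ga_sort(delays)
print(best, chained_delay(best, delays))
```

For this separable fitness, sorting clients by ascending delay (the shortest-processing-time rule) is provably optimal, which gives a handy sanity check on the GA's output.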
With the global trend of pursuing clean energy and decarbonization, power systems have been evolving at a pace never seen before in the history of electrification. This evolution makes the power system more dynamic and more distributed, with higher uncertainty. These new power system behaviors bring significant challenges to power system modeling and simulation, as more data need to be analyzed for larger systems and more complex models must be solved in a shorter time period. Conventional computing approaches will not be sufficient for future power systems. This paper provides a historical review of computing for power system operation and planning, discusses technology advancements in high-performance computing (HPC), and describes the drivers for employing HPC techniques. Several HPC application examples with different HPC techniques, including the latest quantum computing, are also presented to show how HPC techniques can help us be well prepared to meet the requirements of power system computing in a clean energy future.
Funding (memristor review): supported by the NSFC (12474071); the Natural Science Foundation of Shandong Province (ZR2024YQ051); the Open Research Fund of the State Key Laboratory of Materials for Integrated Circuits (SKLJC-K2024-12); the Shanghai Sailing Program (23YF1402200, 23YF1402400); the Basic Research Program of Jiangsu (BK20240424); the Taishan Scholar Foundation of Shandong Province (tsqn202408006); the Young Talent Lifting Engineering for Science and Technology in Shandong, China (SDAST2024QTB002); and the Qilu Young Scholar Program of Shandong University.
Funding (dynamic software allocation): the National Natural Science Foundation of China (No. 60503041); the Science and Technology Commission of Shanghai International Cooperation Project (No. 05SN07114).
Funding (computing power network survey): supported by the National Natural Science Foundation of China under Grants 62271062 and 62071063, and by the Zhijiang Laboratory Open Project Fund 2020LCOAB01.
Funding (6G computing power network): this work was supported by the National Key R&D Program of China, No. 2019YFB1802800.
Funding (UAV-enabled wireless powered MEC): supported in part by the U.S. National Science Foundation under Grant CNS-2007995; in part by the National Natural Science Foundation of China under Grants 92067201 and 62171231; and in part by the Jiangsu Provincial Key Research and Development Program under Grant BE2020084-1.
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) under Grant No. 62071306, and in part by the Shenzhen Science and Technology Program under Grants JCYJ20200109113601723, JSGG20210802154203011 and JSGG20210420091805014.
Abstract: In the era of the Internet of Things (IoT), mobile edge computing (MEC) and wireless power transfer (WPT) provide a prominent solution for computation-intensive applications to enhance computation capability and achieve sustainable energy supply. A wireless-powered mobile edge computing (WPMEC) system consisting of a hybrid access point (HAP) combined with MEC servers and many users is considered in this paper. In particular, a novel multi-user cooperation scheme based on orthogonal frequency division multiple access (OFDMA) is provided to improve the computation performance, where, with the aid of helpers, users can split their computation tasks into parts for local computing, offloading to the corresponding helper, and offloading to the HAP for remote execution. Specifically, we aim to maximize the weighted sum computation rate (WSCR) by jointly optimizing time assignment, computation-task allocation, and transmission power while maintaining energy neutrality. We transform the original non-convex optimization problem into a convex one and then obtain a semi-closed-form expression of the optimal solution using convex optimization techniques. Simulation results demonstrate that the proposed multi-user cooperation-assisted WPMEC scheme greatly improves the WSCR of all users compared with existing schemes. In addition, the OFDMA protocol increases fairness and decreases delay among users compared with the TDMA protocol.
Abstract: Driven by diverse intelligent applications, computing capability is moving from the central cloud to the edge of the network in the form of small cloud nodes, forming a distributed computing power network. Tasked with both packet transmission and data processing, such a network requires joint optimization of communications and computing. Considering the diverse requirements of applications, we develop a dynamic routing control policy that determines both paths and computing nodes in a distributed computing power network. Different from traditional routing protocols, the proposed policy takes additional computing-related metrics into consideration. Based on multi-attribute decision theory and fuzzy logic theory, we propose two routing selection algorithms: the Fuzzy Logic-Based Routing (FLBR) algorithm and the low-complexity Pairwise Multi-Attribute Decision-Making (lPMADM) algorithm. Simulation results show that the proposed policy achieves better performance in average processing delay, user satisfaction, and load balancing compared with existing works.
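The core idea above, ranking candidate (path, computing node) pairs by both network and computing metrics, can be sketched as a simple weighted multi-attribute score. This is an illustrative toy, not the paper's FLBR or lPMADM algorithms; the metrics, weights, and numbers are all assumptions.

```python
# Illustrative multi-attribute route selection (not the paper's FLBR/lPMADM):
# each candidate (path, compute node) pair gets a weighted sum of min-max
# normalized metrics; the lowest score wins. Metrics and weights are assumed.

def normalize(values):
    """Min-max normalize a column of metric values to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def select_route(candidates, weights):
    """candidates: list of (name, link_delay, node_load, hop_count)."""
    names = [c[0] for c in candidates]
    cols = list(zip(*[c[1:] for c in candidates]))   # one column per metric
    norm_cols = [normalize(col) for col in cols]
    scores = [sum(w * norm_cols[j][i] for j, w in enumerate(weights))
              for i in range(len(candidates))]
    return names[min(range(len(scores)), key=scores.__getitem__)]

if __name__ == "__main__":
    # (name, link delay in ms, node CPU load %, hops) -- numbers illustrative
    candidates = [
        ("path_A", 12.0, 85.0, 3),   # fast link, but overloaded compute node
        ("path_B", 18.0, 30.0, 4),   # moderate on every metric
        ("path_C", 40.0, 10.0, 2),   # idle node, but slow link
    ]
    weights = (0.5, 0.3, 0.2)        # delay weighted highest
    print(select_route(candidates, weights))  # -> "path_B"
```

Note how the jointly balanced candidate wins even though it is best on no single metric; a delay-only routing protocol would have picked `path_A` and overloaded its node.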
Funding: Supported in part by the Advanced Grid Modeling Program of the U.S. Department of Energy's Office of Electricity under Agreement No. 37533 (P.Z.); in part by Stony Brook University's Office of the Vice President for Research through a Quantum Information Science and Technology Seed Grant (P.Z.); and in part by the National Science Foundation under Grant No. PHY 1915165 (T.-C.W.).
Abstract: Electric power systems provide the backbone of modern industrial societies. Enabling scalable grid analytics is the keystone to successfully operating large transmission and distribution systems. However, today's power systems suffer from ever-increasing computational burdens in sustaining expanding communities and deep integration of renewable energy resources, as well as in managing huge volumes of data accordingly. These unprecedented challenges call for transformative analytics to support the resilient operation of power systems. Recently, the explosive growth of quantum computing techniques has ignited new hope of revolutionizing power system computation. Quantum computing harnesses quantum mechanics to solve traditionally intractable computational problems, which may lead to ultra-scalable and efficient power grid analytics. This paper reviews the newly emerging application of quantum computing techniques in power systems. We present a comprehensive overview of existing quantum-engineered power analytics from different operational perspectives, including static analysis, transient analysis, stochastic analysis, optimization, stability, and control. We thoroughly discuss the related quantum algorithms, their benefits and limitations, hardware implementations, and recommended practices. We also review quantum networking techniques for ensuring secure communication in power systems in the quantum era. Finally, we discuss challenges and future research directions. We hope this paper stimulates increasing attention to the development of quantum-engineered smart grids.
Funding: Supported in part by the National Natural Science Foundation of China (62025404); in part by the National Key Research and Development Program of China (2022YFB3902802); in part by the Beijing Natural Science Foundation (L241013); and in part by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA000000).
Abstract: The rapid expansion of satellite constellations in recent years has resulted in the generation of massive amounts of data. This surge in data, coupled with diverse application scenarios, underscores the escalating demand for high-performance computing over space. Computing over space entails deploying computational resources on platforms such as satellites to process large-scale data under constraints such as high radiation exposure, restricted power consumption, and minimized weight.
Abstract: To address the sensitivity and uncertainty limitations of single-energy computed tomography (CT) calibration methods for computing the proton stopping power ratio during treatment planning, different methods based on a dual-energy CT approach have been proposed. This paper reviews the most recent dual-energy CT approaches for computing the proton stopping power ratio, including image-domain and projection-domain methods. The advantages and uncertainties of these methods are analyzed based on existing studies. This paper highlights recent advances in dual-energy CT, discussing their implementation, advantages, limitations, and potential for clinical adoption.
Abstract: With the development of the smart grid, the electric power supervisory control and data acquisition (SCADA) system is limited by traditional IT infrastructure, leading to low resource utilization and poor scalability. Information islands form due to poor system interoperability, the development of innovative applications is limited, the launch period of new businesses is long, management costs and risks increase, and equipment utilization declines. To address these issues, a professional private cloud solution is introduced to integrate the electric power SCADA system, and an experimental study of its applicability, reliability, security, and real-time performance is conducted. The experimental results show that the professional private cloud solution is technically and commercially feasible, meeting the requirements of the electric power SCADA system.
Funding: Supported by the National Natural Science Foundation of China under Grant 62272391, and in part by the Key Industry Innovation Chain of Shaanxi under Grant 2021ZDLGY05-08.
Abstract: As an open network architecture, Wireless Computing Power Networks (WCPN) pose new challenges for achieving efficient and secure resource management, because of issues such as insecure communication channels and untrusted device terminals. Blockchain, as a shared, immutable distributed ledger, provides a secure resource management solution for WCPN. However, integrating blockchain into WCPN faces challenges such as device heterogeneity, monitoring of communication states, and the dynamic nature of the network. In contrast, Digital Twins (DT) can accurately maintain digital models of physical entities through real-time data updates and self-learning, enabling continuous optimization of WCPN, improving synchronization performance, ensuring real-time accuracy, and supporting the smooth operation of WCPN services. In this paper, we propose a DT architecture for blockchain-empowered WCPN that guarantees real-time data transmission between physical entities and digital models. We adopt an enumeration-based optimal placement algorithm (EOPA) and an improved simulated-annealing-based near-optimal placement algorithm (ISAPA) to achieve the minimum average DT synchronization latency under a DT error constraint. Numerical results show that the proposed solution outperforms benchmarks in terms of average synchronization latency.
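The simulated-annealing placement idea named above can be sketched in a few lines. This is a generic, illustrative annealer for assigning digital twins to edge nodes so that total synchronization latency is minimized; the latency table, cooling schedule, and parameter values are all assumptions, not the paper's ISAPA.

```python
import math
import random

# Illustrative simulated-annealing placement (not the paper's ISAPA): assign
# each digital twin to an edge node minimizing total sync latency, where the
# latency of each (twin, node) pairing is given by a made-up lookup table.

def total_latency(placement, latency):
    return sum(latency[twin][node] for twin, node in enumerate(placement))

def anneal(latency, n_nodes, steps=2000, t0=5.0, cooling=0.995, seed=7):
    rng = random.Random(seed)
    placement = [rng.randrange(n_nodes) for _ in range(len(latency))]
    best = placement[:]
    t = t0
    for _ in range(steps):
        cand = placement[:]
        cand[rng.randrange(len(cand))] = rng.randrange(n_nodes)  # move one twin
        delta = total_latency(cand, latency) - total_latency(placement, latency)
        # accept improvements always; accept worse moves with prob e^(-delta/t)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            placement = cand
        if total_latency(placement, latency) < total_latency(best, latency):
            best = placement[:]
        t *= cooling  # cool down
    return best, total_latency(best, latency)

if __name__ == "__main__":
    # latency[twin][node]: sync latency of hosting that twin on that node
    latency = [[5, 2, 9], [4, 8, 1], [7, 3, 6], [2, 6, 4]]
    best, cost = anneal(latency, n_nodes=3)
    print(best, cost)
```

On this tiny unconstrained instance the optimum is simply each twin's cheapest node (total cost 2 + 1 + 3 + 2 = 8), which the annealer reliably reaches; the value of annealing shows up once capacity or DT-error constraints make greedy per-twin choices infeasible.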
Funding: Supported by the National Natural Science Foundation of China (No. 61902060), the Fundamental Research Funds for the Central Universities, China (No. 2232019D3-51), and the Shanghai Sailing Program, China (No. 19YF1402100).
Abstract: Benefiting from wireless power transfer (WPT) and mobile edge computing (MEC), wireless powered MEC systems have attracted widespread attention. We design an online offloading scheme based on deep reinforcement learning that maximizes the computation rate and minimizes the energy consumption of all wireless devices (WDs). Extensive results validate that the proposed scheme achieves a better tradeoff between energy consumption and computation delay.
Funding: Supported by the Fundamental Research Funds for the Central Universities of China (No. K15JB00190), the Ph.D. Programs Foundation of the Ministry of Education of China (No. 20120009120010), and the Program for Innovative Research Team in University of the Ministry of Education of China (IRT201206).
Abstract: Separation is one of the most important issues in cloud computing security. Tenants should be separated from each other on the cloud infrastructure, and different users from one tenant should be separated from each other under the constraints of security policies. Drawing on the notions of trusted cloud computing and trustworthiness in the cloud, this paper formally describes a multi-level authorization separation model and proposes a series of rules to summarize the separation property of this model. The correctness of the rules is proved. Furthermore, based on this model, a tenant separation mechanism is deployed in a real-world mixed-criticality information system. Performance benchmarks have shown the availability and efficiency of this mechanism.
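The two-layer separation property described above (tenants isolated from each other, and users within a tenant constrained by policy) can be sketched as a simple access check. This is a toy illustration, not the paper's formal model: the policy shape, tenant names, roles, and sensitivity levels are all assumptions.

```python
from dataclasses import dataclass

# Toy sketch of the two-layer separation property (not the paper's formal
# model): a request is allowed only if the resource belongs to the requester's
# own tenant AND that tenant's policy grants the user's role the resource's
# sensitivity level.

@dataclass(frozen=True)
class Resource:
    tenant: str
    level: int  # sensitivity level; higher = more critical

# role -> maximum level that role may access, per tenant (assumed policy shape)
POLICIES = {
    "tenant_a": {"admin": 3, "operator": 2, "viewer": 1},
    "tenant_b": {"admin": 3, "viewer": 1},
}

def allowed(user_tenant, user_role, resource):
    if resource.tenant != user_tenant:        # inter-tenant separation
        return False
    policy = POLICIES.get(user_tenant, {})
    return policy.get(user_role, 0) >= resource.level  # intra-tenant separation

if __name__ == "__main__":
    r = Resource(tenant="tenant_a", level=2)
    print(allowed("tenant_a", "operator", r))  # same tenant, level OK -> True
    print(allowed("tenant_a", "viewer", r))    # level too high -> False
    print(allowed("tenant_b", "admin", r))     # cross-tenant -> False
```

The key design point mirrored from the model is that the inter-tenant check runs first and unconditionally: no intra-tenant policy, however permissive, can ever authorize cross-tenant access.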
Funding: Partly funded by the MOST Major Research and Development Project (Grant No. 2021YFB2900204), the Natural Science Foundation of China (Grant No. 62132004), the Sichuan Major R&D Project (Grant No. 22QYCX0168), and the Key Research and Development Program of Zhejiang Province (Grant No. 2022C01093).
Abstract: The Computing Power Network (CPN) is emerging as an important research interest in beyond-5G (B5G) and 6G networks. This paper constructs a CPN based on Federated Learning (FL), where all Multi-access Edge Computing (MEC) servers are linked to a computing power center via wireless links. Through this FL procedure, each MEC server in the CPN can independently train learning models using localized data, thus preserving data privacy. However, it is challenging to motivate MEC servers to participate in the FL process efficiently and to ensure their energy efficiency. To address these issues, we first introduce an incentive mechanism based on the Stackelberg game framework to motivate MEC servers. We then formulate a comprehensive algorithm that jointly optimizes the communication resource (wireless bandwidth and transmission power) allocation and the computation resource (computation capacity of MEC servers) allocation while ensuring the local training accuracy of each MEC server. Numerical results validate that the proposed incentive mechanism and joint optimization algorithm improve the energy efficiency and performance of the considered CPN.
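The Stackelberg structure of the incentive mechanism above can be sketched with a toy leader-follower model. All of it is an assumption for illustration, not the paper's mechanism: the computing power center (leader) posts a unit reward p, each MEC server (follower) with cost coefficient c contributes the effort x that maximizes p·x − c·x², i.e. x*(p) = p/(2c), and the leader then searches for the p that maximizes its own utility given those best responses.

```python
# Toy Stackelberg incentive sketch (illustrative; not the paper's mechanism).
# Leader posts unit reward p; follower with cost coefficient c best-responds
# with x*(p) = p / (2c); leader grid-searches p to maximize a*X - p*X, where
# X is total contributed effort and a is the leader's (assumed) marginal gain.

def follower_response(p, c):
    """Effort maximizing p*x - c*x**2 (first-order condition)."""
    return p / (2.0 * c)

def leader_utility(p, costs, a):
    total = sum(follower_response(p, c) for c in costs)
    return a * total - p * total

def best_price(costs, a, grid=None):
    grid = grid or [i / 100.0 for i in range(1, 501)]  # p in (0, 5]
    return max(grid, key=lambda p: leader_utility(p, costs, a))

if __name__ == "__main__":
    costs = [1.0, 2.0, 4.0]  # heterogeneous follower cost coefficients
    a = 4.0                  # leader's marginal value of contributed compute
    print(best_price(costs, a))
```

Because total effort is proportional to p here, the leader's utility is (a − p)·p·S for a constant S, so the grid search lands on p = a/2 = 2.0; in the paper's richer setting this backward-induction step is what the Stackelberg framework formalizes.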
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62271399 and 62206221; in part by the Key Research and Development Program of Shaanxi Province under Grant 2022KW-07; in part by the National Key Research and Development Program of China under Grant 2020YFB1807003; and in part by the Shanghai Sailing Program under Grant 20YF1416700.
Abstract: In this paper, we study an Intelligent Reflecting Surface (IRS)-assisted Mobile Edge Computing (MEC) system under eavesdropping threats, where the IRS is used to enhance energy signal transmission and offloading performance between Wireless Devices (WDs) and the Access Point (AP). In the proposed scheme, the AP first powers all WDs through wireless power transfer over both the direct and IRS-assisted links. Then, powered by the harvested energy, all WDs securely offload their computation tasks through the two links in time division multiple access mode. To determine the local and offloaded computational bits, we formulate an optimization problem that jointly designs the IRS's phase shifts and allocates the time slots, constrained by the security and energy requirements. To cope with this non-convex optimization problem, we adopt semidefinite relaxation, singular value decomposition techniques, and the Lagrange dual method. Moreover, we propose a dichotomy particle swarm algorithm based on the bisection method to solve the overall optimization problem and improve the convergence speed. The numerical results illustrate that the proposed scheme boosts MEC performance and secure computation rates compared with other IRS-assisted MEC benchmark schemes.
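The "dichotomy" (bisection) primitive underlying the algorithm above is worth making concrete. The sketch below is generic bisection on a monotone function; the energy model used as the example is entirely made up and is not the paper's formulation. The idea is that when a constraint (e.g. an energy budget) is monotone in a scalar variable, the tight operating point can be found by halving the search interval each step.

```python
# Generic bisection sketch (the "dichotomy" primitive behind the paper's
# dichotomy particle swarm algorithm; the energy model below is an assumption).
# Bisection finds the root of a monotone function by halving the bracket.

def bisect(f, lo, hi, tol=1e-9):
    assert f(lo) < 0 < f(hi), "root must be bracketed"
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) < 0:
            lo = mid       # root is to the right
        else:
            hi = mid       # root is to the left
    return (lo + hi) / 2.0

if __name__ == "__main__":
    # toy monotone constraint: energy used minus budget, as power p varies
    budget = 5.0
    used = lambda p: p ** 2 + p            # made-up increasing energy model
    root = bisect(lambda p: used(p) - budget, 0.0, 10.0)
    print(round(root, 6))                  # solves p^2 + p = 5
```

Each iteration halves the uncertainty, so reaching tolerance 1e-9 on a width-10 bracket takes only about 34 evaluations, which is why embedding bisection inside a particle swarm search speeds up convergence.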
Abstract: Traditional digital processing approaches are based on semiconductor transistors, which suffer from high power consumption that worsens with technology node scaling. To solve this problem definitively, a number of emerging non-volatile nanodevices are under intense investigation. Meanwhile, novel computing circuits are being invented to tap the full potential of these nanodevices. The combination of non-volatile nanodevices with suitable computing paradigms has many merits compared with structures based on complementary metal-oxide-semiconductor (CMOS) transistor technology, such as zero standby power, ultra-high density, non-volatility, and acceptable access speed. In this paper, we overview and compare computing paradigms based on emerging nanodevices aimed at ultra-low dissipation.
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62271306, 62072410, and 62331017, and in part by the Fundamental Research Funds for the Provincial Universities of Zhejiang under Grant RF-B2022002.
Abstract: This paper investigates a multi-Unmanned Aerial Vehicle (UAV)-assisted wireless-powered Mobile Edge Computing (MEC) system, where UAVs provide computation and powering services to mobile terminals. We aim to maximize the number of completed computation tasks by jointly optimizing the offloading decisions of all terminals and the trajectory planning of all UAVs. The action space of the system is extremely large and grows exponentially with the number of UAVs. In this case, single-agent learning would require an overly large neural network, resulting in insufficient exploration. However, the offloading decisions and trajectory planning are two subproblems performed by different executants, which offers an opportunity for decomposition. We thus propose a 2-Tiered Multi-agent Soft Actor-Critic (2T-MSAC) algorithm that decomposes a single neural network into multiple small-scale networks. In the first tier, a single agent is used for offloading decisions, and an online pretrained model based on imitation learning is specially designed to accelerate the training of this agent. In the second tier, UAVs utilize multiple agents to plan their trajectories. Each agent influences the parameter updates of the other agents through its actions and rewards, thereby achieving joint optimization. Simulation results demonstrate that the proposed algorithm can be applied to scenarios with various terminal location distributions, outperforming existing benchmarks that perform well only in specific scenarios. In particular, 2T-MSAC increases the number of completed tasks by 45.5% in the scenario with uneven terminal distributions. Moreover, the pretrained model based on imitation learning reduces the convergence time of 2T-MSAC by 58.2%.
Funding: Supported by the National Key R&D Program of China (No. 2021YFB2900200).
Abstract: Federated Learning (FL) is a distributed machine learning methodology that addresses large-scale parallel computing challenges while safeguarding data security. However, the traditional FL model in communication scenarios, whether for uplink or downlink communications, may give rise to several network problems, such as bandwidth occupation, additional network latency, and bandwidth fragmentation. In this paper, we propose an adaptive chained training approach (FedACT) for FL in computing power networks. First, a Computation-driven Clustering Strategy (CCS) is designed: the server clusters clients by task processing delay to minimize waiting delays at the central server. Second, we propose a Genetic-Algorithm-based Sorting (GAS) method to optimize the order in which clients participate in training. Finally, based on the table lookup and forwarding rules of the Segment Routing over IPv6 (SRv6) protocol, the sorting results of GAS are written into the SRv6 packet header to control the order in which clients participate in model training. We conduct extensive experiments on the CIFAR-10 and MNIST datasets, and the results demonstrate that the proposed algorithm offers improved accuracy, diminished communication costs, and reduced network delays.
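The genetic-algorithm sorting step above can be illustrated with a simplified evolutionary search over client orderings. This sketch is an assumption-laden toy, not the paper's GAS: the fitness function (sum of prefix delays in a training chain, so each client waits for all earlier clients), the population scheme, and the delays are all made up.

```python
import random

# Simplified genetic-algorithm sketch for ordering clients in a training chain
# (illustrative; not the paper's GAS). Fitness: client i's turn starts after
# all earlier clients finish, so total completion time is the sum of prefix
# sums of per-client processing delays -- shorter jobs should come first.

def chain_cost(order, delays):
    total, elapsed = 0, 0
    for i in order:
        elapsed += delays[i]  # this client finishes at the running prefix sum
        total += elapsed
    return total

def evolve(delays, pop_size=30, generations=200, seed=1):
    rng = random.Random(seed)
    n = len(delays)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]  # random orders
    for _ in range(generations):
        pop.sort(key=lambda o: chain_cost(o, delays))
        survivors = pop[: pop_size // 2]                # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.randrange(n), rng.randrange(n)
            child[i], child[j] = child[j], child[i]     # swap mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda o: chain_cost(o, delays))

if __name__ == "__main__":
    delays = [7, 2, 9, 4, 1]  # illustrative per-client processing delays
    print(evolve(delays))     # shortest-delay-first ordering is optimal here
```

For this sum-of-completion-times fitness the shortest-processing-time-first order is provably optimal, which gives a handy sanity check for the evolved result; the GA form still pays off once the objective includes terms (e.g. cluster or routing constraints) with no closed-form ordering rule.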
Funding: The authors acknowledge support from the U.S. Department of Energy through its Advanced Grid Modeling program, the Exascale Computing Program (ECP), the Grid Modernization Laboratory Consortium (GMLC), the Advanced Research Projects Agency-Energy (ARPA-E), the National Quantum Information Science Research Centers' Co-design Center for Quantum Advantage (C2QA), and the Office of Advanced Scientific Computing Research (ASCR).
Abstract: With the global trend of pursuing clean energy and decarbonization, power systems have been evolving at a pace never seen before in the history of electrification. This evolution makes the power system more dynamic and more distributed, with higher uncertainty. These new power system behaviors bring significant challenges to power system modeling and simulation, as more data must be analyzed for larger systems, and more complex models must be solved in shorter time periods. Conventional computing approaches will not be sufficient for future power systems. This paper provides a historical review of computing for power system operation and planning, discusses technology advancements in high performance computing (HPC), and describes the drivers for employing HPC techniques. Several HPC application examples with different techniques, including the latest quantum computing, are also presented to show how HPC can help us prepare to meet the requirements of power system computing in a clean energy future.